Combining artificial intelligence with their passions

Tyler

Combining artificial intelligence with their passions
« on: March 22, 2019, 12:00:09 pm »
7 March 2019, 10:30 pm

Computational thinking will be the mark of an MIT education when the MIT Stephen A. Schwarzman College of Computing opens this fall, and glimpses of what's to come were on display during the final reception of a three-day celebration of the college Feb. 26-28.

In a tent filled with electronic screens, students and postdocs took turns explaining how they had created something new by combining computing with topics they felt passionate about, including predicting panic selling on Wall Street, analyzing the filler ingredients in common drugs, and developing more energy-efficient software and hardware. The poster session featured undergraduates, graduate students, and postdocs from each of MIT’s five schools. Eight projects are highlighted here.

Low-cost screening tool for genetic mutations linked to autism

Autism is thought to have a strong genetic basis, but few of the genetic mutations responsible have been found. In collaboration with Boston Children’s Hospital and Harvard Medical School, MIT researchers are using AI to explore autism’s hidden origins.

Working with his advisors, Bonnie Berger and Po-Ru Loh, professors of math and medicine at MIT and Harvard, respectively, graduate student Maxwell Sherman has helped develop an algorithm to detect previously unidentified mutations in people with autism that cause some cells to carry too much or too little DNA.

The team has found that up to 1 percent of people with autism carry the mutations, and that inexpensive consumer genetic tests can detect them with a mere saliva sample. Hundreds of U.S. children who carry the mutations and are at risk for autism could be identified this way each year, researchers say.  

“Early detection of autism gives kids earlier access to supportive services,” says Sherman, “and that can have lasting benefits.”
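The core idea can be sketched in a few lines. In this toy illustration (my simplification, not the team's published algorithm), a mosaic gain or loss of DNA shifts the fraction of reads supporting each allele at heterozygous sites away from the expected 0.5:

```python
import numpy as np

def detect_mosaic_imbalance(b_allele_freq, threshold=0.02):
    # At heterozygous sites, normal cells give a B-allele frequency
    # (BAF) near 0.5; if a fraction f of cells gains or loses a copy,
    # the BAF drifts away from 0.5 by roughly f/4 for small f.
    baf = np.asarray(b_allele_freq)
    deviation = np.mean(np.abs(baf - 0.5))   # mean shift from 0.5
    cell_fraction = 4 * deviation            # rough estimate of f
    return deviation > threshold, cell_fraction

# Example: 30 percent of cells carry a duplication across 200 sites.
rng = np.random.default_rng(0)
shift = 0.3 / (4 + 2 * 0.3)                  # exact BAF shift for a gain
baf = 0.5 + shift + rng.normal(0, 0.01, 200)
flagged, f_est = detect_mosaic_imbalance(baf)
print(flagged, round(f_est, 2))
```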

Can deep learning models be trusted?

As AI systems automate more tasks, the need to evaluate their decisions and alert the public to possible failures has taken on new urgency. In a project with the MIT-IBM Watson AI Lab, graduate student Lily Weng is helping to build an efficient, general framework for quantifying how easily deep neural networks can be tricked or misled into making mistakes.

Working with a team led by Pin-Yu Chen, a researcher at IBM, and Luca Daniel, a professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), Weng developed a method that reports how much each individual input can be altered before the neural network makes a mistake. The team is now expanding the framework to larger and more general neural networks, and developing tools to quantify their vulnerability under different ways of measuring input alteration. The work has spawned a series of papers, summarized in a recent MIT-IBM blog post.
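For intuition, this kind of guarantee is exact in the linear case. The sketch below (an illustration of the general idea, not the team's framework) computes, for a linear classifier, the largest input perturbation that provably cannot change the prediction:

```python
import numpy as np

def linf_robustness_radius(w, b, x):
    # For a linear classifier sign(w.x + b), the smallest L-infinity
    # perturbation that can flip the prediction has exact size
    # |w.x + b| / ||w||_1; certified-robustness methods extend this
    # kind of per-input guarantee to multi-layer networks.
    margin = abs(np.dot(w, x) + b)
    return margin / np.sum(np.abs(w))

w = np.array([0.8, -1.5, 0.3])
x = np.array([1.0, 0.2, -0.5])
print(linf_robustness_radius(w, b=0.1, x=x))  # per-input radius
```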

Mapping the spread of Ebola virus

By the time the Ebola virus spread from Guinea and Liberia to Sierra Leone in 2014, Sierra Leone's government was prepared. It quickly closed its schools and shut its borders with the two countries. Still, relative to its population, Sierra Leone fared worse than its neighbors, with 14,000 suspected infections and 4,000 deaths.

Marie Charpignon, a graduate student in the MIT Institute for Data, Systems, and Society (IDSS), wanted to know why. Her search became a final project for Network Science and Models, a class taught by Patrick Jaillet, the Dugald C. Jackson Professor in EECS.

In a network analysis of trade, migration, and World Health Organization data, Charpignon discovered that a severe shortage of medical resources seemed to explain why Ebola had caused relatively more devastation in Sierra Leone, despite the country’s precautions.

“Sierra Leone had one doctor for every 30,000 residents, and the doctors were the first to be infected,” she says. “That further reduced the availability of medical help.”

If Sierra Leone had not acted as decisively, she says, the outbreak could have been far worse. Her results suggest that epidemiology models should factor in where hospitals and medical staff are clustered to better predict how an epidemic will unfold.
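A toy compartmental model makes the point. In this hypothetical sketch (not Charpignon's analysis, and with invented coefficients), the recovery rate in a standard SIR model is tied to doctor density, so a region with scarce medical staff sees a larger outbreak:

```python
def simulate_sir(doctors_per_capita, days=300, beta=0.25):
    # Recovery rate improves with care capacity, saturating at 0.2/day;
    # the coefficients here are invented for illustration.
    gamma = min(0.2, 0.02 + 500 * doctors_per_capita)
    s, i, r = 0.999, 0.001, 0.0
    peak = 0.0
    for _ in range(days):
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, r   # peak prevalence and total share ever infected

# One doctor per 30,000 residents versus one per 3,000.
print(simulate_sir(1 / 30000))   # scarce care -> far larger epidemic
print(simulate_sir(1 / 3000))
```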

An AI for sustainable, economical buildings

When labor is cheap, buildings are designed to use less material, but as labor costs rise, design choices shift toward material-inefficient yet easily constructed buildings. That’s why much of the world today favors buildings made of standardized steel-reinforced concrete, says graduate student Mohamed Ismail.

AI is now changing the design equation. In collaboration with TARA, a New Delhi-based nonprofit, Ismail and his advisor, Caitlin Mueller, an associate professor in the Department of Architecture and the Department of Civil and Environmental Engineering, are using computational tools to reduce the amount of reinforced concrete in India’s buildings.

“We can, once again, make structural performance part of the architectural design process, and build exciting, elegant buildings that are also efficient and economical,” says Ismail.

The work involves calculating how much load a building can bear as the shape of its design shifts. Ismail and Mueller developed an optimization algorithm to compute a shape that would maximize efficiency and provide a sculptural element. The hybrid nature of reinforced concrete, which is cast as a liquid and cures into a solid, and is both brittle and ductile, was one challenge they had to overcome. Making sure the models would translate on the ground, by staying in close contact with the client, was another.
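The flavor of that optimization can be sketched with a standard constrained solver. In this minimal example (my illustration, with made-up dimensions and loads, not Ismail and Mueller's tool), the depth of a concrete beam is varied along its span to minimize material while keeping bending stress admissible:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up span, width, load, and stress limit, for illustration only.
span, width, load = 6.0, 0.3, 20e3    # m, m, N/m
sigma_allow = 10e6                     # allowable bending stress, Pa
n = 20
xs = np.linspace(0, span, n)
M = load * xs * (span - xs) / 2        # moment, simply supported beam

def volume(d):                         # objective: concrete volume
    return float(np.sum(width * d) * span / n)

def stress_margin(d):                  # sigma = 6M / (width d^2)
    return sigma_allow - 6 * M / (width * d**2)

res = minimize(volume, x0=np.full(n, 0.5),
               bounds=[(0.08, 1.0)] * n,
               constraints={"type": "ineq", "fun": stress_margin})
print(f"optimized volume: {volume(res.x):.2f} m^3 (uniform 0.5 m: 0.90)")
```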

“If something didn’t work, I could remotely connect to my computer at MIT, adjust the code, and have a new design ready for TARA within an hour,” says Ismail.

Robots that understand language

The more that robots can engage with humans, the more useful they become. That means asking for feedback when they get confused and seamlessly absorbing new information as they interact with us and their environment. Ideally, this means moving to a world in which we talk to robots instead of programming them.

In a project led by Boris Katz, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and Nicholas Roy, a professor in MIT’s Department of Aeronautics and Astronautics, graduate student Yen-Ling Kuo has designed a set of experiments to understand how humans and robots can cooperate and what robots must learn to follow commands.

In one video game experiment, volunteers are asked to drive a car full of bunnies through an obstacle course of walls and pits of flames. It sounds like “absurdist comedy,” Kuo admits, but the goal is straightforward: to understand how humans plot a course through hazardous conditions while interpreting the actions of others around them. Data from the experiments will be used to design algorithms that help robots to plan and explain their understanding of what others are doing.
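The planning half of the problem resembles classic graph search. This toy sketch (my illustration, not Kuo's system) runs Dijkstra's algorithm on a small grid where fire cells carry a heavy traversal cost, so the planner detours around hazards when the detour pays off:

```python
import heapq

# 'S' start, 'G' goal, '#' wall, 'F' fire pit (high traversal cost).
GRID = ["S.F.G",
        "....."]
COST = {".": 1, "F": 25, "S": 1, "G": 1}

def plan(grid):
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if grid[r][c] == "S")
    frontier, seen = [(0, start, [start])], set()
    while frontier:
        cost, (r, c), path = heapq.heappop(frontier)
        if grid[r][c] == "G":
            return cost, path
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                heapq.heappush(frontier, (cost + COST[grid[nr][nc]],
                                          (nr, nc), path + [(nr, nc)]))

print(plan(GRID))  # the cheapest route detours around the fire
```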

A deep learning tool to unlock your inner artist

Creativity is thought to play an important role in healthy aging, with research showing that creative people are better at adapting to the challenges of old age. The trouble is, not everyone is in touch with their inner artist.

“Maybe they were accountants, or worked in business and don’t see themselves as creative types,” says Guillermo Bernal, a graduate student at the MIT Media Lab. “I started to think, what if we could leverage deep learning models to help people explore their creative side?”

With Media Lab professor Pattie Maes, Bernal developed Paper Dreams, an interactive storytelling tool that uses generative models to give the user a shot of inspiration. As a sketch unfolds, Paper Dreams imagines how the scene could develop further and suggests colors, textures, and new objects for the artist to add. A “serendipity dial” lets the artist decide how off-beat they want the suggestions to be.
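Paper Dreams' internals aren't described in detail, but one plausible reading of the serendipity dial is as a sampling temperature: higher settings flatten the model's distribution over candidate objects, yielding more off-beat suggestions. A hypothetical sketch:

```python
import numpy as np

def sample_suggestion(logits, serendipity=0.5):
    # Hypothetical mapping (Paper Dreams' internals aren't public):
    # the dial, in [0, 1], sets a softmax temperature; higher values
    # flatten the distribution and surface more unusual objects.
    temperature = 0.3 + 2.0 * serendipity
    z = np.asarray(logits, dtype=float) / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return np.random.default_rng().choice(len(p), p=p)

objects = ["tree", "house", "dragon", "submarine"]
logits = [3.0, 2.5, 0.5, 0.1]   # invented scores for the current scene
print(objects[sample_suggestion(logits, serendipity=0.9)])
```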

“Seeing the drawing and colors evolve in real-time as you manipulate them is a magical experience,” says Bernal, who is exploring ways to make the platform more accessible.

Preventing maternal deaths in Rwanda

The top cause of death for new mothers in Rwanda is infection following a caesarean section. To identify at-risk mothers sooner, researchers at MIT, Harvard Medical School, Brigham and Women’s Hospital, and Partners in Health, Rwanda, are developing a computational tool to predict whether a mother’s post-surgical wound is likely to become infected.

Researchers gathered C-section wound photos from 527 women; health workers captured the pictures with smartphones 10 to 12 days after surgery. Working with his advisor, Richard Fletcher, a researcher in MIT’s D-Lab, graduate student Subby Olubeko helped train a pair of models to pick out the wounds that developed into infections. When they tested the logistic regression model on the full dataset, it gave almost perfect predictions.

The color of the wound’s drainage, and how bright the wound appears at its center, are two of the features the model picks up on, says Olubeko. The team plans to run a field experiment this spring to collect wound photos from a more diverse group of women and to shoot infrared images to see if they reveal additional information.
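The modeling step might look like the sketch below; the features and data here are invented stand-ins based on the description, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: the real study used photo-derived features;
# these two (drainage redness, center brightness) are my guesses from
# the description, and the labels below are invented to run the demo.
rng = np.random.default_rng(1)
n = 527
X = np.column_stack([rng.uniform(0, 1, n),    # drainage redness score
                     rng.uniform(0, 1, n)])   # brightness at center
y = (0.8 * X[:, 0] + 0.5 * X[:, 1]
     + rng.normal(0, 0.2, n) > 0.8).astype(int)

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:3])[:, 1])       # per-wound infection risk
```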

Do native ads shape our perception of the news?

The migration of news to the web has given advertisers the ability to place ever more personalized, engaging ads amid high-quality news stories. Often masquerading as legitimate news, so-called “native” ads, pushed by content recommendation networks, have brought badly needed revenue to the struggling U.S. news industry. But at what cost?

“Native ads were supposed to help the news industry cope with the financial crisis, but what if they’re reinforcing the public’s mistrust of the media and driving readers away from quality news?” says graduate student Manon Revel.

Claims of fake news dominated the 2016 U.S. presidential elections, but politicized native ads were also common. Curious to measure their reach, Revel joined a project led by Adam Berinsky, a professor in MIT’s Department of Political Science; Munther Dahleh, a professor in EECS and director of IDSS; Dean Eckles, a professor at MIT’s Sloan School of Management; and Ali Jadbabaie, a professor of civil and environmental engineering and associate director of IDSS.

Analyzing a sample of native ads that popped up on readers’ screens before the election, they found that 25 percent could be considered highly political, and that 75 percent fit the description of clickbait. A similar trend emerged when they looked at coverage of the 2018 midterm elections. The team is now running experiments to see how exposure to native ads influences how readers rate the credibility of real news.
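Flagging an ad as political clickbait is, at heart, a text-classification problem. This sketch (an illustration only; the study's coding scheme isn't described here) trains a bag-of-words classifier on a few hand-labeled headlines:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hand-labeled toy examples (invented); 1 = politically charged.
headlines = [
    "You won't believe what this senator just said",
    "Doctors hate this one weird trick",
    "Ten photos of celebrity homes",
    "Shocking truth about the candidate's past",
]
labels = [1, 0, 0, 1]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(headlines), labels)

test = ["The candidate's secret donors revealed"]
print(clf.predict_proba(vec.transform(test))[0, 1])  # P(political)
```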

Source: MIT News - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robotics - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage




