MIT art installation aims to empower a more discerning public


Tyler

MIT art installation aims to empower a more discerning public
« on: November 27, 2019, 12:00:26 pm »
25 November 2019, 4:30 pm

Videos doctored by artificial intelligence, colloquially known as “deepfakes,” are being created and shared by the public at an alarming rate. Using advanced computer graphics and audio processing to realistically emulate speech and mannerisms, deepfakes have the power to distort reality, erode truth, and spread misinformation. Troublingly, researchers around the world have warned that deepfakes carry significant potential to influence American voters in the 2020 elections.

While technology companies race to develop ways to detect and control deepfakes on social media platforms, and lawmakers search for ways to regulate them, a team of artists and computer scientists led by the MIT Center for Advanced Virtuality has designed an art installation to empower and educate the public on how to discern reality from deepfakes on their own.

“Computer-based misinformation is a global challenge,” says Fox Harrell, professor of digital media and of artificial intelligence at MIT and director of the MIT Center for Advanced Virtuality. “We are galvanized to make a broad impact on the literacy of the public, and we are committed to using AI not for misinformation, but for truth. We are pleased to bring onboard people such as our new XR Creative Director Francesca Panetta to help further this mission.”

Panetta is the director of “In Event of Moon Disaster,” along with co-director Halsey Burgund, a fellow in the MIT Open Documentary Lab. She says, “We hope that our work will spark critical awareness among the public. We want them to be alert to what is possible with today’s technology, to explore their own susceptibility, and to be ready to question what they see and hear as we enter a future fraught with challenges over the question of truth.”

With “In Event of Moon Disaster,” which opened Friday at the International Documentary Festival Amsterdam, the team has reimagined the story of the moon landing. The installation recreates a 1960s-era living room, where audiences are invited to sit on vintage furniture surrounded by three screens, including a vintage television set. The screens play an edited array of vintage NASA footage, taking the audience on a journey from takeoff into space and to the moon. Then, on the center television, Richard Nixon reads a contingency speech written for him by his speechwriter, Bill Safire, “in event of moon disaster,” which he was to deliver if the Apollo 11 astronauts had been unable to return to Earth. In this installation, Nixon reads that speech from the Oval Office.

To recreate this moving elegy that never happened, the team used deep learning techniques and the contributions of a voice actor to build the voice of Richard Nixon, producing the synthetic speech with the Ukraine-based company Respeecher. They also worked with the Israeli company Canny AI, using video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips, making it look as though he is reading this very speech from the Oval Office. The resulting video is highly believable, highlighting the possibilities of deepfake technology today.

The researchers chose to create a deepfake of this historical moment for a number of reasons: space is a widely loved topic, and so potentially engaging to a broad audience; the piece is apolitical and, unlike much misinformation, less likely to alienate; and, since the 1969 moon landing is an event widely accepted by the general public to have taken place, the deepfake elements will be starkly obvious.

Rounding out the educational experience, “In Event of Moon Disaster” transparently provides information on what is possible with today’s technology, with the goal of increasing public awareness of, and ability to identify, misinformation in the form of deepfakes. This information takes the form of newspapers written especially for the exhibit, which detail the making of the installation, how to spot a deepfake, and the most current work in algorithmic detection. Audience members will be encouraged to take these away.
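To give a flavor of what “algorithmic detection” can mean at the signal level, here is a toy sketch. It is not the project’s or any vendor’s actual detector; the function name `high_freq_energy_ratio`, the 4 kHz cutoff, and the heuristic itself are hypothetical illustrations of one weak cue detectors have examined: some synthetic-speech pipelines attenuate high frequencies relative to natural recordings.

```python
import numpy as np

def high_freq_energy_ratio(signal, sample_rate, cutoff_hz=4000):
    """Fraction of spectral energy at or above cutoff_hz.

    An unusually low ratio can be one (weak) hint that audio was
    generated rather than recorded; real detectors combine many
    such cues with learned models.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return spectrum[freqs >= cutoff_hz].sum() / total

# Toy comparison at a 16 kHz sample rate: broadband noise (stand-in
# for a natural recording) vs. a band-limited 440 Hz tone (stand-in
# for over-smoothed synthetic audio).
rate = 16000
t = np.arange(rate) / rate
rng = np.random.default_rng(0)
broadband = rng.standard_normal(rate)       # energy spread across all bands
lowpass_tone = np.sin(2 * np.pi * 440 * t)  # energy concentrated at 440 Hz

print(high_freq_energy_ratio(broadband, rate))     # roughly 0.5
print(high_freq_energy_ratio(lowpass_tone, rate))  # near 0.0
```

A single statistic like this is easy to fool, which is exactly the installation’s point: viewers need both human literacy and layered algorithmic tools, not one magic test.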

"Our goal was to use the most advanced artificial intelligence techniques available today to create the most believable result possible — and then point to it and say, ‘This is fake; here’s how we did it; and here’s why we did it,’” says Burgund.

While the physical installation opens in November 2019 in Amsterdam, the team is building a web-based version that is expected to go live in spring 2020.

Source: MIT News - Computer Science and Artificial Intelligence Laboratory (CSAIL)

Reprinted with permission of MIT News.




 

