Hey Alexa! Sorry I fooled you ...
« on: February 08, 2020, 12:01:39 pm »
7 February 2020, 4:20 pm

A human can likely tell the difference between a turtle and a rifle. Two years ago, Google’s AI wasn’t so sure. For quite some time, a subset of computer science research has been dedicated to better understanding how machine-learning models handle these “adversarial” attacks, which are inputs deliberately created to trick or fool machine-learning algorithms.

While much of this work has focused on speech and images, recently, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) tested the boundaries of text. They came up with “TextFooler,” a general framework that can successfully attack natural language processing (NLP) systems — the types of systems that let us interact with our Siri and Alexa voice assistants — and “fool” them into making the wrong predictions.

One could imagine using TextFooler for many applications related to internet safety, such as email spam filtering, hate speech flagging, or “sensitive” political speech text detection — which are all based on text classification models.

“If those tools are vulnerable to purposeful adversarial attacking, then the consequences may be disastrous,” says Di Jin, MIT PhD student and lead author on a new paper about TextFooler. “These tools need to have effective defense approaches to protect themselves, and in order to make such a safe defense system, we need to first examine the adversarial methods.”

TextFooler works in two parts: altering a given text, and then using that altered text to test whether it can trick machine-learning models on two different language tasks.

The system first identifies the words that most influence the target model’s prediction, and then selects contextually appropriate synonyms to replace them. It preserves grammar and the original meaning throughout, so the text still looks “human” enough, and keeps substituting words until the prediction is altered.
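
To make that procedure concrete, here is a minimal sketch of the greedy word-importance-and-substitution loop described above. It is not the authors’ released code: `model.predict`, `model.confidence`, and `get_synonyms` are assumed placeholder interfaces standing in for a target text classifier and an embedding-based synonym lookup.

```python
def importance_scores(model, words, label):
    """Score each word by how much deleting it lowers the model's confidence."""
    base = model.confidence(" ".join(words), label)
    scores = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        scores.append(base - model.confidence(" ".join(reduced), label))
    return scores


def textfooler_style_attack(model, text, get_synonyms):
    """Greedily swap influential words for synonyms until the prediction flips."""
    words = text.split()
    label = model.predict(text)

    # Rank words by how much they matter to the current prediction.
    scores = importance_scores(model, words, label)
    order = sorted(range(len(words)), key=lambda i: scores[i], reverse=True)

    for i in order:
        best = model.confidence(" ".join(words), label)
        for candidate in get_synonyms(words[i]):
            trial = words[:i] + [candidate] + words[i + 1:]
            new_text = " ".join(trial)
            if model.predict(new_text) != label:
                return new_text          # prediction flipped: attack succeeded
            conf = model.confidence(new_text, label)
            if conf < best:              # keep the swap that weakens the model most
                best = conf
                words = trial
    return None                          # no successful adversarial example found
```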

The framework is then applied to two different tasks: text classification, and entailment (the relationship between text fragments in a sentence), with the goal of changing the classification or invalidating the entailment judgment of the original models.
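
The same idea carries over to entailment. Below is a hedged sketch in which the premise is held fixed and only the hypothesis is perturbed until the model’s judgment changes; `nli_model.predict(premise, hypothesis)` is an assumed interface, and the loop is simplified (no importance ranking) for brevity.

```python
def attack_entailment(nli_model, premise, hypothesis, get_synonyms):
    """Perturb the hypothesis until the model's entailment judgment changes."""
    original = nli_model.predict(premise, hypothesis)   # e.g. "entailment"
    words = hypothesis.split()
    for i, word in enumerate(words):
        for candidate in get_synonyms(word):
            trial = " ".join(words[:i] + [candidate] + words[i + 1:])
            if nli_model.predict(premise, trial) != original:
                return trial             # entailment judgment invalidated
    return None
```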

In one example, TextFooler’s input and output were:

“The characters, cast in impossibly contrived situations, are totally estranged from reality.”

“The characters, cast in impossibly engineered circumstances, are fully estranged from reality.”

In this case, when tested on an NLP model, the model classifies the original input correctly but gets the modified input wrong.

In total, TextFooler successfully attacked three target models, including “BERT,” the popular open-source NLP model. By changing only 10 percent of the words in a given text, it drove the target models’ accuracy from over 90 percent down to under 20 percent. The team evaluated success on three criteria: whether the attack changed the model’s prediction for classification or entailment; whether the altered text preserved the original meaning for a human reader; and whether the text looked natural enough.
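
As an illustration of the “similar in meaning” criterion, the sketch below scores the article’s example pair with cosine similarity between sentence embeddings. The sentence-transformers model name and the 0.8 threshold are placeholder choices for illustration, not values taken from the paper.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

def semantically_close(original, adversarial, threshold=0.8):
    """Return True if the adversarial text stays close in meaning to the original."""
    emb = encoder.encode([original, adversarial], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item() >= threshold

print(semantically_close(
    "The characters, cast in impossibly contrived situations, are totally estranged from reality.",
    "The characters, cast in impossibly engineered circumstances, are fully estranged from reality."))
```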

The researchers note that while attacking existing models is not the end goal, they hope that this work will help more abstract models generalize to new, unseen data.

“The system can be used or extended to attack any classification-based NLP models to test their robustness,” says Jin. “On the other hand, the generated adversaries can be used to improve the robustness and generalization of deep-learning models via adversarial training, which is a critical direction of this work.”
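
A hedged sketch of the adversarial-training idea Jin describes: adversarial examples generated against the current model are folded back into the training set with their original labels. `train_one_epoch`, `texts`, and `labels` are placeholders, and `textfooler_style_attack` refers to the earlier sketch.

```python
def adversarial_training(model, texts, labels, get_synonyms, epochs=3):
    """Augment each epoch's training data with freshly generated adversarial examples."""
    data = list(zip(texts, labels))
    for _ in range(epochs):
        augmented = list(data)
        for text, label in data:
            adv = textfooler_style_attack(model, text, get_synonyms)
            if adv is not None:
                # Keep the original (correct) label for the perturbed text.
                augmented.append((adv, label))
        model = train_one_epoch(model, augmented)  # placeholder training step
    return model
```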

Jin wrote the paper alongside MIT Professor Peter Szolovits, Zhijing Jin of the University of Hong Kong, and Joey Tianyi Zhou of A*STAR, Singapore. They will present the paper at the AAAI Conference on Artificial Intelligence in New York.

Source: MIT News - Computer Science and Artificial Intelligence Laboratory (CSAIL)

Reprinted with permission of MIT News: MIT News homepage



Use the link at the top of the story to get to the original article.

 


OpenAI Speech-to-Speech Reasoning Demo
by MikeB (AI News )
March 15, 2024, 08:14:02 am
Google Bard report
by ivan.moony (AI News )
February 14, 2024, 04:42:23 pm
Elon Musk's xAI Grok Chatbot
by MikeB (AI News )
December 11, 2023, 06:26:33 am
Nvidia Hype
by 8pla.net (AI News )
December 06, 2023, 10:04:52 pm
How will the OpenAI CEO being Fired affect ChatGPT?
by 8pla.net (AI News )
December 06, 2023, 09:54:25 pm
Independent AI sovereignties
by WriterOfMinds (AI News )
November 08, 2023, 04:51:21 am
LLaMA2 Meta's chatbot released
by 8pla.net (AI News )
October 18, 2023, 11:41:21 pm
AI-Generated Art Cannot Receive Copyrights
by frankinstien (AI News )
August 24, 2023, 08:49:45 am

Users Online

185 Guests, 0 Users

Most Online Today: 248. Most Online Ever: 2369 (November 21, 2020, 04:08:13 pm)

Articles