This robot helps you lift objects — by looking at your biceps

Tyler

22 May 2019, 3:00 pm

We humans are very good at collaboration. For instance, when two people work together to carry a heavy object like a table or a sofa, they tend to instinctively coordinate their motions, constantly recalibrating to make sure their hands are at the same height as the other person’s. Our natural ability to make these types of adjustments allows us to collaborate on tasks big and small.

But a computer or a robot still can’t follow a human’s lead with ease. We usually either explicitly program them using machine-speak, or train them to understand our words, à la virtual assistants like Siri or Alexa.

In contrast, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently showed that a smoother robot-human collaboration is possible through a new system they developed, where machines help people lift objects by monitoring their muscle movements.

Dubbed RoboRaise, the system involves putting electromyography (EMG) sensors on a user’s biceps and triceps to monitor muscle activity. Its algorithms then continuously detect changes to the person’s arm level, as well as discrete up-and-down hand gestures the user might make for finer motor control.
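
To make the monitoring step concrete, here is a minimal sketch of how raw EMG is commonly turned into a continuous activation signal: rectify the stream and smooth it with a moving-average window. This is an illustrative assumption rather than the authors' actual pipeline; the sampling rate, window length, and simulated data below are all hypothetical.

```python
import numpy as np

def emg_envelope(raw_emg, fs=1000, window_ms=200):
    """Rectify and smooth raw EMG into an activation envelope.

    Illustrative sketch only: RoboRaise's exact filtering is described in the
    paper, but rectification followed by a moving-average filter is a standard
    way to turn noisy EMG into a usable muscle-activation signal.
    """
    rectified = np.abs(raw_emg - np.mean(raw_emg))   # remove DC offset, rectify
    win = max(1, int(fs * window_ms / 1000))         # samples per smoothing window
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

# Hypothetical usage: one second of simulated biceps EMG sampled at 1 kHz.
fs = 1000
t = np.arange(fs) / fs
raw = 0.5 * np.random.randn(fs) * (1 + np.sin(2 * np.pi * t))  # noisy, modulated signal
activation = emg_envelope(raw, fs)
print("mean activation:", activation.mean())
```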

The team used the system for a series of tasks involving picking up and assembling mock airplane components. In experiments, users working on these tasks with the robot were able to control it to within a few inches of the desired heights by lifting and then tensing their arm. Control was even more accurate when the discrete gestures were used, and the robot responded correctly to roughly 70 percent of all gestures.

Graduate student Joseph DelPreto says he could imagine people using RoboRaise to help in manufacturing and construction settings, or even as an assistant around the house.

“Our approach to lifting objects with a robot aims to be intuitive and similar to how you might lift something with another person — roughly copying each other's motions while inferring helpful adjustments,” says DelPreto, lead author on a new paper about the project with MIT Professor and CSAIL Director Daniela Rus. “The key insight is to use nonverbal cues that encode instructions for how to coordinate, for example to lift a little higher or lower. Using muscle signals to communicate almost makes the robot an extension of yourself that you can fluidly control.”

The project builds off the team’s existing system that allows users to instantly correct robot mistakes with brainwaves and hand gestures, now enabling continuous motion in a more collaborative way. “We aim to develop human-robot interaction where the robot adapts to the human, rather than the other way around. This way the robot becomes an intelligent tool for physical work,” says Rus.

EMG signals can be tricky to work with: They’re often very noisy, and it can be difficult to predict exactly how a limb is moving based on muscle activity. Even if you can estimate how a person is moving, how you want the robot itself to respond may be unclear.

RoboRaise gets around this by putting the human in control. The team’s system uses noninvasive, on-body sensors that detect the firing of neurons as you tense or relax muscles. Using wearables also gets around problems of occlusions or ambient noise, which can complicate tasks involving vision or speech.

RoboRaise’s algorithm then processes biceps activity to estimate how the person’s arm is moving so the robot can roughly mimic it, and the person can slightly tense or relax their arm to move the robot up or down. If a user needs the robot to move farther away from their own position or hold a pose for a while, they can just gesture up or down for finer control; a neural network detects these gestures at any time based on biceps and triceps activity.
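
The paragraph above describes two channels: a continuous one driven by how tense the biceps is, and a discrete one driven by up-and-down gestures. The toy controller below sketches how they might combine; the linear activation-to-height mapping, the thresholds inside classify_gesture(), and the step size are hypothetical stand-ins, not the published method.

```python
import numpy as np

class RoboRaiseSketch:
    """Toy two-channel controller (assumed parameters, not the paper's)."""

    def __init__(self, min_z=0.2, max_z=0.8, gesture_step=0.05):
        self.min_z, self.max_z = min_z, max_z   # workspace height limits (m)
        self.gesture_step = gesture_step        # offset added per up/down gesture
        self.offset = 0.0                       # accumulated gesture offset

    def classify_gesture(self, biceps, triceps):
        # Placeholder for the team's neural-network gesture classifier.
        if biceps > 0.9 and triceps > 0.9:
            return "up"
        if biceps < 0.1 and triceps > 0.9:
            return "down"
        return None

    def target_height(self, biceps, triceps):
        gesture = self.classify_gesture(biceps, triceps)
        if gesture == "up":
            self.offset += self.gesture_step
        elif gesture == "down":
            self.offset -= self.gesture_step
        # Continuous channel: a tenser arm maps to a higher target height.
        z = self.min_z + biceps * (self.max_z - self.min_z) + self.offset
        return float(np.clip(z, self.min_z, self.max_z))

controller = RoboRaiseSketch()
print(controller.target_height(0.6, 0.2))    # continuous mimicry of arm level
print(controller.target_height(0.95, 0.95))  # discrete "up" gesture nudges the target
```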

A new user can start using the system quickly, with minimal calibration. After putting on the sensors, they just need to tense and relax their arm a few times, then lift a light weight to a few heights. The neural network that detects gestures is trained only on data from previous users.
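
As a rough illustration of that quick calibration, the sketch below normalizes a user's activation range from a few relax-and-tense repetitions and fits a simple linear activation-to-height map from the light-weight lifts. The function name, data values, and linear model are assumptions for illustration, not details from the paper.

```python
import numpy as np

def calibrate(relaxed_levels, tensed_levels, lift_levels, lift_heights):
    """Fit a per-user activation-to-height map (illustrative assumption).

    relaxed_levels / tensed_levels: envelope values from a few relax/tense reps.
    lift_levels / lift_heights: envelope values while holding a light weight at
    a few known heights, used to fit a simple linear map.
    """
    lo, hi = np.mean(relaxed_levels), np.mean(tensed_levels)
    normalize = lambda x: (np.asarray(x) - lo) / (hi - lo + 1e-9)
    a, b = np.polyfit(normalize(lift_levels), lift_heights, deg=1)  # least-squares line
    return lambda level: a * float(normalize(level)) + b

# Hypothetical calibration data for one new user (heights in metres).
to_height = calibrate(
    relaxed_levels=[0.05, 0.07, 0.06],
    tensed_levels=[0.90, 0.95, 0.88],
    lift_levels=[0.2, 0.5, 0.8],
    lift_heights=[0.3, 0.5, 0.7],
)
print(round(to_height(0.5), 3))
```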

The team tested the system with 10 users through a series of three lifting experiments: one where the robot didn’t move at all, another where the robot moved in response to their muscles but didn’t help lift the object, and a third where the robot and person lifted an object together.

When the person had feedback from the robot — when they could see it moving or when they were lifting something together — they matched the target heights significantly more accurately than when they had no feedback.

The team also tested RoboRaise on assembly tasks, such as lifting a rubber sheet onto a base structure. The system successfully lifted both rigid and flexible objects onto the bases. RoboRaise was implemented on the team’s Baxter humanoid robot, but the team says it could be adapted for any robotic platform.

In the future, the team hopes that adding more muscles or different types of sensors to the system will increase the degrees of freedom, with the ultimate goal of doing even more complex tasks. Cues like exertion or fatigue from muscle activity could also help robots provide more intuitive assistance. The team tested one version of the system that uses biceps and triceps levels to tell the robot how stiffly the person is holding their end of the object; together, the human and machine could fluidly drag an object around or rigidly pull it taut.
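
One way to picture that stiffness behavior is to treat biceps-triceps co-contraction as a knob on the robot's impedance: tensing both muscles makes the robot hold its end rigidly, while relaxing lets it comply. The mapping and gain values below are assumptions for illustration, not the team's implementation.

```python
import numpy as np

def cocontraction_stiffness(biceps, triceps, k_min=50.0, k_max=500.0):
    """Map co-contraction to a robot stiffness gain (illustrative assumption).

    The smaller of the two normalized activations is taken as the
    co-contraction level; k_min/k_max are hypothetical impedance gains (N/m).
    """
    level = float(np.clip(min(biceps, triceps), 0.0, 1.0))
    return k_min + level * (k_max - k_min)

print(cocontraction_stiffness(0.1, 0.2))  # relaxed arm -> compliant robot
print(cocontraction_stiffness(0.9, 0.8))  # co-contracted -> stiff, pulls the object taut
```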

The team will present their work at the International Conference on Robotics and Automation this week in Montreal, Canada. The project was funded in part by The Boeing Company.

Source: MIT News, Computer Science and Artificial Intelligence Laboratory (CSAIL)

Reprinted with permission of MIT News.



Use the link at the top of the story to get to the original article.

 

