Deep learning with point clouds in Robotics News

Deep learning with point clouds
21 October 2019, 5:10 pm

If you’ve ever seen a self-driving car in the wild, you might wonder about that spinning cylinder on top of it.

It’s a “lidar sensor,” and it’s what allows the car to navigate the world. By sending out pulses of infrared light and measuring the time it takes for them to bounce off objects and return, the sensor creates a “point cloud” that builds a 3D snapshot of the car’s surroundings.
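To make the time-of-flight idea concrete, here is a tiny illustrative sketch (not any sensor's actual firmware; the function name is made up for this example): the distance to an object is the speed of light times the measured round-trip time, divided by two.

```python
# Illustrative only: converting a lidar time-of-flight reading to a distance.
C = 299_792_458.0  # speed of light in m/s

def tof_to_distance(round_trip_seconds):
    """Distance to the reflecting object: the pulse travels out and back,
    so divide the round-trip path length by two."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after ~66.7 nanoseconds hit something ~10 m away.
print(round(tof_to_distance(66.7e-9), 2))  # → 10.0
```

A real lidar repeats this for millions of pulses per second across many angles, and each (angle, distance) pair becomes one 3D point in the cloud.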

Making sense of raw point-cloud data is difficult, and before the age of machine learning it required highly trained engineers to tediously specify, by hand, which qualities they wanted to capture. But in a new series of papers out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), researchers show that they can use deep learning to automatically process point clouds for a wide range of 3D-imaging applications.

“In computer vision and machine learning today, 90 percent of the advances deal only with two-dimensional images,” says MIT Professor Justin Solomon, who was senior author of the new series of papers spearheaded by PhD student Yue Wang. “Our work aims to address a fundamental need to better represent the 3D world, with application not just in autonomous driving, but any field that requires understanding 3D shapes.”

Most previous approaches haven’t been especially successful at capturing the patterns from data that are needed to get meaningful information out of a bunch of 3D points in space. But in one of the team’s papers, they showed that their “EdgeConv” method of analyzing point clouds using a type of neural network called a dynamic graph convolutional neural network allowed them to classify and segment individual objects.

“By building ‘graphs’ of neighboring points, the algorithm can capture hierarchical patterns and therefore infer multiple types of generic information that can be used by a myriad of downstream tasks,” says Wadim Kehl, a machine learning scientist at Toyota Research Institute who was not involved in the work.
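As an illustration of the graph-building idea Kehl describes, here is a minimal NumPy sketch of the neighborhood step: for each point, find its k nearest neighbors and pair the point with the offsets to them. This is only the geometric front end; the learned network that EdgeConv applies on top of such edge features is omitted, and the function name is invented for this example.

```python
# A hedged sketch of the k-nearest-neighbor graph step behind EdgeConv-style
# methods. Each point is paired with the offsets to its k nearest neighbors,
# giving per-edge features of the form (x_i, x_j - x_i).
import numpy as np

def knn_edge_features(points, k=3):
    """points: (N, 3) array. Returns an (N, k, 6) array pairing each point
    with the offsets to its k nearest neighbors."""
    diffs = points[:, None, :] - points[None, :, :]      # (N, N, 3) pairwise offsets
    dists = np.linalg.norm(diffs, axis=-1)               # (N, N) pairwise distances
    np.fill_diagonal(dists, np.inf)                      # exclude self-loops
    idx = np.argsort(dists, axis=1)[:, :k]               # (N, k) neighbor indices
    neighbors = points[idx]                              # (N, k, 3)
    centers = np.repeat(points[:, None, :], k, axis=1)   # (N, k, 3)
    return np.concatenate([centers, neighbors - centers], axis=-1)

cloud = np.random.rand(32, 3)
feats = knn_edge_features(cloud, k=4)
print(feats.shape)  # → (32, 4, 6)
```

In a dynamic graph CNN, this graph is rebuilt in feature space at every layer, which is what lets the network pick up hierarchical patterns rather than a single fixed neighborhood.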

In addition to developing EdgeConv, the team also explored other specific aspects of point-cloud processing. For example, one challenge is that most sensors change perspectives as they move around the 3D world; every time we take a new scan of the same object, its position may be different than the last time we saw it. To merge multiple point clouds together into a single detailed view of the world, you need to align multiple 3D points in a process called “registration.”

Registration is vital for many forms of imaging, from satellite data to medical procedures. For example, when a doctor has to take multiple magnetic resonance imaging scans of a patient over time, registration is what makes it possible to align the scans to see what’s changed.

“Registration is what allows us to integrate 3D data from different sources into a common coordinate system,” says Wang. “Without it, we wouldn’t actually be able to get as meaningful information from all these methods that have been developed.”
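To make "registration" concrete, here is a sketch of the classical least-squares rigid alignment step (the Kabsch/Procrustes solution), assuming point-to-point correspondences between the two clouds are already known. Finding good correspondences in the first place is the hard part that learned methods address; that is not shown here.

```python
# Classical rigid registration, assuming known correspondences: recover the
# rotation R and translation t that best map cloud `src` onto cloud `dst`.
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform mapping src onto dst (both (N, 3))."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Build a synthetic pair: rotate and translate a random cloud, then recover.
rng = np.random.default_rng(0)
src = rng.random((50, 3))
angle = 0.5
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true))  # → True
```

On noiseless data with perfect correspondences this recovers the transform exactly; real pipelines iterate or learn the correspondences because they are never given.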

Solomon and Wang’s second paper demonstrates a new registration algorithm called “Deep Closest Point” (DCP) that was shown to better find a point cloud’s distinguishing patterns, points, and edges (known as “local features”) in order to align it with other point clouds. This is especially important for such tasks as enabling self-driving cars to situate themselves in a scene (“localization”), as well as for robotic hands to locate and grasp individual objects.

One limitation of DCP is that it assumes we can see an entire shape instead of just one side. This means it can’t handle the more difficult task of aligning partial views of shapes (known as “partial-to-partial registration”). As a result, in a third paper the researchers presented an improved algorithm for this task that they call the Partial Registration Network (PRNet).

Solomon says that existing 3D data tends to be “quite messy and unstructured compared to 2D images and photographs.” His team sought to figure out how to get meaningful information out of all that disorganized 3D data without the controlled environment that a lot of machine learning technologies now require.

A key observation behind the success of DCP and PRNet is the idea that a critical aspect of point-cloud processing is context. The geometric features on point cloud A that suggest the best ways to align it to point cloud B may be different from the features needed to align it to point cloud C. For example, in partial registration, an interesting part of a shape in one point cloud may not be visible in the other — making it useless for registration.

Wang says that the team’s tools have already been deployed by many researchers in the computer vision community and beyond. Even physicists are using them for an application the CSAIL team had never considered: particle physics.

Moving forward, the researchers hope to use the algorithms on real-world data, including data gathered from self-driving cars. Wang says they also plan to explore the potential of training their systems using self-supervised learning, to minimize the amount of human annotation needed.

Solomon and Wang were the two sole authors of the DCP and PRNet papers. Their co-authors on the EdgeConv paper were research assistant Yongbin Sun and Professor Sanjay Sarma of MIT, alongside postdoc Ziwei Liu of the University of California at Berkeley and Professor Michael M. Bronstein of Imperial College London.

The projects were supported, in part, by the U.S. Air Force, the U.S. Army Research Office, Amazon, Google Research, IBM, the National Science Foundation, the Skoltech-MIT Next Generation Program, and the Toyota Research Institute.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Use the link at the top of the story to get to the original article.

2 Comments | Started Today at 12:04:32 PM


Now you see it... in General Chat

Now you don't.


5 Comments | Started October 21, 2019, 03:38:29 PM


Can your PC run it? in Gaming

That's always been a conundrum... whether your PC would be able to run a specific game (or not).
Don't buy a game if you're not sure it will play on your computer.

Well, wonder no longer.
A friend pointed me to a really nice site today and it works surprisingly well.

Can You Run It? A Free, Quick and Private evaluation/diagnosis of your computer in seconds.


1 Comment | Started October 22, 2019, 05:13:27 PM


Advancements on the spider pulley leg design in Home Made Robots

(mostly for HopefullySomething…)

YAW-YAW-YAW …  still works.

It may make the bot a little weird to look at, but I've made it easier to put together.
I've redone my little pulley leg design, and I found the soft-body bag isn't really what's making it work; it's actually the bracing supporting the rotation that's the important member. So it's hollow bones with the tendons going up the bone. And I worked out something really cool: a yaw rotation could be a pitch or a roll, it just depends on the orientation. So in between these yaw rotations there's actually a twist that changes the orientation of the rotation (just 90-degree changes will turn a yaw into a pitch or roll), making a ROLL-YAW-PITCH action for the leg, and hopefully the bot, given a proper hip-like rotation set, may be able to jump off the ground better, like a grasshopper!

The other cool thing about it is that I probably have to make the tendons a little fatter and the tunnel a bit bigger… and then you can cut this one like a gingerbread man, just flat, and the twists and bends will change the yaws into pitch and roll. So one laser cutting and you've got the bot, as long as you can cut out the tendons with it.
It might be a bit kinky, but with the right plastic it could still pull OK. Then to assemble it, I imagine I just have to brace it with some triangle frames to get the right 90-degree bends it needs to transform the yaws into pitch and roll.
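The 90-degree claim above checks out numerically: reorienting a joint by a quarter-turn pitch turns its yaw into a roll. A quick NumPy sketch, purely illustrative:

```python
# Conjugating a yaw by a 90-degree twist reorients its rotation axis,
# turning the yaw into a roll (or a pitch, for the other twist direction).
import numpy as np

def yaw(a):    # rotation about z
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def pitch(a):  # rotation about y
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def roll(a):   # rotation about x
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# Reorient by a 90-degree pitch, apply the yaw, reorient back:
T = pitch(np.pi / 2)
conjugated = T @ yaw(0.7) @ T.T
print(np.allclose(conjugated, roll(0.7)))  # → True
```

So a flat-cut leg really can get its pitch and roll for free from twists between yaw joints.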

So if you need to make manufacturing easier for yourself, it's just a puzzle away.

To power it, one solenoid per pulley wire. Do the motor driver however you want, but I've got something quite zany in mind for that as well. It might not come to fruition, but we'll have to wait and see.

14 Comments | Started October 17, 2019, 07:17:41 AM


Who are you, what do you do & what do you want to do ? in New Users Please Post Here

New fun topic for us all to get to know each other a little better? :hugs:

As I'm the admin and I've started this silly thread, I'll set the ball rolling.

I'm a bus driver from the UK, drive a 1990 BMW 525 and spend far too much time here :D and my bot's name is Sal

And if you don't know my name by now, there's something wrong hehe


315 Comments | Started November 21, 2005, 10:54:42 PM


Computer science in service of medicine in Robotics News

Computer science in service of medicine
18 October 2019, 5:00 am

MIT’s Ray and Maria Stata Center (Building 32), known for its striking outward appearance, is also designed to foster collaboration among the people inside. Sitting in the famous building’s amphitheater on a brisk fall day, Kristy Carpenter smiles as she speaks enthusiastically about how interdisciplinary efforts between the fields of computer science and molecular biology are helping accelerate the process of drug discovery and design.

Carpenter, an MIT senior with a joint major in both subjects, said she didn’t want to specialize in only one or the other — it’s the intersection between both disciplines, and the application of that work to improving human health, that she finds compelling.

“For me, to be really fulfilled in my work as a scientist, I want to have some tangible impact,” she says.

Carpenter explains that artificial intelligence, which can help identify which combinations of compounds would work best for a particular drug, can reduce trial-and-error time and ideally quicken the process of designing new medicines.

“I feel like helping make drugs in a more efficient manner, or coming up with some new medicine or way to tackle cancer or Alzheimer’s or something, would really make me feel fulfilled,” she says.

In the future, Carpenter hopes to get a PhD and pursue computational approaches to biomedicine, perhaps at one of the national laboratories or the National Institutes of Health. She also plans to continue advocating for diversity and inclusion in science, technology, engineering, and mathematics (STEM), throughout her career, drawing in part from her experiences as part of the leadership of the MIT chapter of the American Indian Science and Engineering Society (AISES) and the MIT Women’s Independent Living Group.

Finding her niche in STEM

Carpenter was first drawn to computer science and coding in middle school. She recalls becoming engrossed in a program called Scratch, spending hours in the computer lab playing with the block-based visual programming language, which, as it happens, was developed at MIT’s Media Lab.

As an MIT student, Carpenter found her way into the computational biology major after a summer internship at Lawrence Livermore National Lab, where researchers were using computer simulations and physics to look at a particular protein implicated in tumors.

Next, she got hooked on using computational biology for drug discovery and design during her sophomore year, as an intern at Massachusetts General Hospital. There, she learned that developing a new drug can be a very long, tedious, and complicated process that can take years, but that using machine learning and screening drugs virtually can help hasten this process. She followed that internship with an Undergraduate Research Opportunities Program (UROP) project in the lab of Professor Collin Stultz, within the MIT Research Laboratory of Electronics.

Building community

For Carpenter, who is part Japanese-American and part Alaskan Native and grew up outside of Seattle, the fact that there were Native American students at MIT, albeit just about a dozen of them, was an important factor in deciding where to attend college.

Soon after Carpenter was admitted, a senior from MIT’s AISES chapter called her and told her about the organization.

“They sort of recruited me before I even came here,” she recalls.

Carpenter is now the vice president of the chapter. The people in the organization, which Carpenter describes as a cultural group at MIT, have become her close friends.

“AISES has been a really important part of my time here,” Carpenter says. “At MIT, it’s mostly about having a community of Native students since it’s very easy for us to get isolated here. It’s hard to find people of a similar background, and so AISES is a place where we can all gather just to hang out, socialize, check in with each other.”

The organization also puts on movie screenings and other events to “show that we exist and that there are Native people at MIT because a lot of people forget that.”

Carpenter first became a member of the national AISES organization as a high school student, when she and her father made serious efforts to reconnect with their Alutiiq heritage. She began educating herself more about the history of Alaska Natives on Kodiak Island, and learning the Alutiiq language, which is severely endangered — just about a couple hundred people still speak it and even fewer speak it fluently.

Carpenter started to teach herself the language and then took an online class in high school through Kodiak College. She says she learned the very basics and knows simple sentences and personal introductions.

“I feel like learning the language was one of the best ways to connect to my culture and sort of legitimize myself in a way. Also, I knew it was important to keep the culture around,” she says. “I would always be telling my friends about it and trying to teach them what I was learning.”

Carpenter has also built her MIT community through the Women’s Independent Living Group, one of the few all-women housing options at the Institute. She joined the group of about 40 women the spring semester of her sophomore year.

“I really appreciate the group because there’s a lot of diversity in major and diversity in [graduation] year,” she says. “The living group is meant to be a strong community of women at MIT.”

Carpenter is now the president of the living group, which has been a significant source of support for her. When she was trying to increase her iron intake so she could donate blood, her friends in the living group helped cook meals and cheered her on.

Carpenter also hopes to rise in the ranks at the organizations where she ends up working after MIT, taking a leadership role in advocating for diversity, equity, and inclusion.

“I don’t want to lose sight of where I came from or my heritage or being a woman in STEM,” Carpenter says. “Wherever I end up working, I hopefully will move up and keep my Native and Asian identity visible, to be an example for others.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage


Started October 22, 2019, 12:00:29 PM


XKCD Comic : Wardrobe in XKCD Comic

21 October 2019, 5:00 am

If you'd just agree to hold your meetings in here, you'd have PLENTY of time to figure things out before the deadline.

Source: xkcd.com

Started October 22, 2019, 12:00:23 PM


Consciousness & Self-awareness in General AI Discussion

That one caught my eye:

Quote from: Wikipedia
While consciousness is being aware of one's environment and body and lifestyle, self-awareness is the recognition of that awareness.

That's a very exciting challenge. First, you have consciousness, with external consciousness (being aware of where I end and where my environment begins) and internal consciousness (being aware of what's happening inside of my mind). Then you have self-awareness: being aware that I'm aware of...

Algorithmically, it implies having internal sensors to feel what's happening inside the program (internal sensors send data to program input), including sensors to feel this internal sensor loop. Trying to figure out a minimal algorithm drives me mad.
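As a toy illustration of that internal-sensor loop (no claim that this is conscious, or a minimal algorithm; all the names are invented for the example), here is a tiny program whose own internal state, including a reading of the sensing channel itself, is fed back to it as input on the next step:

```python
# Toy "internal sensor" loop: each step, the agent receives the external
# signal AND a report about its own previous state, including a record of
# what it was sensing — a crude second-order loop. Illustrative only.
def step(state, external_input):
    """One update: the agent sees the world and a report about itself."""
    internal_report = {"last_mood": state["mood"],
                       "was_sensing": state["sensing"]}
    new_mood = external_input + state["mood"] * 0.5  # arbitrary toy dynamics
    return {"mood": new_mood, "sensing": internal_report}

state = {"mood": 0.0, "sensing": None}
for signal in [1.0, 0.0, 2.0]:
    state = step(state, signal)
print(state["sensing"]["last_mood"])  # → 0.5
```

Note that `state["sensing"]["was_sensing"]` nests the previous report inside the current one, so the loop "feels" its own feeling; a genuinely minimal version of this is exactly the open puzzle.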

9 Comments | Started July 04, 2019, 09:57:28 AM


Open access task force releases final recommendations in Robotics News

Open access task force releases final recommendations
17 October 2019, 8:00 pm

The Ad Hoc Task Force on Open Access to MIT’s Research has released its final recommendations, which aim to support and increase the open sharing of MIT publications, data, software, and educational materials.

The Institute-wide open access (OA) task force, convened by Provost Martin Schmidt in July 2017, was charged with exploring how MIT should update and revise MIT’s current OA policies to “further the Institute’s mission of disseminating the fruits of its research and scholarship as widely as possible.” A draft set of recommendations was released in March 2019 for public comment, and this valuable input provided by the community was incorporated into the final recommendations.

“In 2009, MIT made a bold statement when it passed one of the country’s first faculty open access policies and the first to be university-wide,” says MIT Libraries Director Chris Bourg, co-chair of the task force with Hal Abelson, Class of 1922 Professor of Electrical Engineering and Computer Science. “Ten years later, we remain convinced that openly sharing research and educational materials is key to the MIT mission of advancing knowledge and bringing that knowledge to bear on the world’s greatest challenges. Through the course of our work, the task force heard from MIT community members who are passionate about extending the reach of their work, and we feel our recommendations provide policies and infrastructure to support that.”

The recommendations include ratifying an Institute-wide set of principles for open science and open scholarship, which affirm MIT’s larger commitment to the idea that scholarship and its dissemination should remain in the hands of researchers and their institutions. The MIT Libraries are working with the task force and the Committee on the Library System to develop a framework for negotiations with publishers based on these principles.

Recommendations to broaden the MIT Faculty Open Access Policy to cover all MIT authors and to adopt an OA policy for monographs received widespread support across the Institute and in the broader community. The task force also calls for heads of departments, labs, and centers to develop discipline-specific plans to encourage and support open sharing. The libraries have already begun working with the departments of Linguistics and Philosophy and Brain and Cognitive Sciences to develop sample plans.

“Scholarship serves humanity best when it is available to everyone,” says Abelson. “These recommendations reinforce MIT's leadership in open access to scholarship.”

In an email to the MIT community, Provost Martin Schmidt announced that he would appoint an implementation team this fall to prioritize and enact the task force’s recommendations. He has asked Chris Bourg to convene and lead this team.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage


Started October 21, 2019, 12:00:09 PM


beyond omega level coding in General AI Discussion

NOW ! I summon to the field my monster NecroBump !
its special ability activates : revives D thread !!!

but wait there is more !

flip ! I activate my trap card : delayed hero signal :


I activate my spell card : swords of revealing light.

kek, it is on the way

33 Comments | Started May 17, 2019, 04:35:47 PM
I Am Mother

I Am Mother in Robots in Movies

I Am Mother is a 2019 Australian science fiction thriller film directed by Grant Sputore, from a screenplay by Michael Lloyd Green. Starring Clara Rugaard, Luke Hawker, Rose Byrne, and Hilary Swank, the film follows Daughter, a girl in a post-apocalyptic bunker, being raised by Mother, an android designed to aid in the repopulation of Earth.

Sep 30, 2019, 21:39:16 pm
Mitsuku wins 2019 Loebner Prize

Mitsuku wins 2019 Loebner Prize in Articles

For the fourth consecutive year, Steve Worswick’s Mitsuku has won the Loebner Prize for the most humanlike chatbot entry to the contest. This is the fifth time that Steve has won the Loebner Prize. The Loebner Prize is the world’s longest running Turing-Test competition and has been organised by AISB, the world’s oldest AI society, since 2014.

Sep 30, 2019, 21:18:50 pm
Metal Gear Series - Metal Gear RAY

Metal Gear Series - Metal Gear RAY in Robots in Games

Metal Gear RAY is an anti-Metal Gear introduced in Metal Gear Solid 2: Sons of Liberty. This Metal Gear model comes in two variants: a manned prototype version developed to combat Metal Gear derivatives and an unmanned, computer-controlled version.

Metal Gear RAY differs from previous Metal Gear models in that it is not a nuclear launch platform, but instead a weapon of conventional warfare, originally designed by the U.S. Marines to hunt down and destroy the many Metal Gear derivatives that became common after Metal Gear REX's plans leaked following the events of Shadow Moses.

Apr 08, 2019, 17:35:36 pm
Fallout 3 - Liberty Prime

Fallout 3 - Liberty Prime in Robots in Games

Liberty Prime is a giant military robot that appears in the Fallout games. Liberty Prime fires dual, head-mounted energy beams, which are similar to shots fired from a Tesla cannon.

He first appears in Fallout 3 and its add-on Broken Steel, then again in Fallout 4, and later, in 2017, in Fallout: The Board Game.

Apr 07, 2019, 15:20:23 pm
Building Chatbots with Python

Building Chatbots with Python in Books

Build your own chatbot using Python and open source tools. This book begins with an introduction to chatbots where you will gain vital information on their architecture. You will then dive straight into natural language processing with the natural language toolkit (NLTK) for building a custom language processing platform for your chatbot. With this foundation, you will take a look at different natural language processing techniques so that you can choose the right one for you.

Apr 06, 2019, 20:34:29 pm
Voicebot and Chatbot Design

Voicebot and Chatbot Design in Books

Flexible conversational interfaces with Amazon Alexa, Google Home, and Facebook Messenger.

We are entering the age of conversational interfaces, where we will interact with AI bots using chat and voice. But how do we create a good conversation? How do we design and build voicebots and chatbots that can carry successful conversations in the real world?

In this book, Rachel Batish introduces us to the world of conversational applications, bots and AI. You’ll discover how, with little technical knowledge, you can build successful and meaningful conversational UIs. You’ll find detailed guidance on how to build and deploy bots on the leading conversational platforms, including Amazon Alexa, Google Home, and Facebook Messenger.

Apr 05, 2019, 15:43:30 pm
Build Better Chatbots

Build Better Chatbots in Books

A Complete Guide to Getting Started with Chatbots.

Learn best practices for building bots by focusing on the technological implementation and UX in this practical book. You will cover key topics such as setting up a development environment for creating chatbots for multiple channels (Facebook Messenger, Skype, and KiK); building a chatbot (design to implementation); integrating to IFTT (If This Then That) and IoT (Internet of Things); carrying out analytics and metrics for chatbots; and most importantly monetizing models and business sense for chatbots.

Build Better Chatbots is easy to follow with code snippets provided in the book and complete code open sourced and available to download.

Apr 04, 2019, 15:21:57 pm
Chatbots and Conversational UI Development

Chatbots and Conversational UI Development in Books

Conversation as an interface is the best way for machines to interact with us using the universally accepted human tool that is language. Chatbots and voice user interfaces are two flavors of conversational UIs. Chatbots are real-time, data-driven answer engines that talk in natural language and are context-aware. Voice user interfaces are driven by voice and can understand and respond to users using speech. This book covers both types of conversational UIs by leveraging APIs from multiple platforms. We'll take a project-based approach to understand how these UIs are built and the best use cases for deploying them.

Build over 8 chatbots and conversational user interfaces with leading tools such as Chatfuel, Dialogflow, Microsoft Bot Framework, Twilio, Alexa Skills, and Google Actions, and deploy them on channels like Facebook Messenger, Amazon Alexa and Google Home.

Apr 03, 2019, 22:30:30 pm
Human + Machine: Reimagining Work in the Age of AI

Human + Machine: Reimagining Work in the Age of AI in Books

Look around you. Artificial intelligence is no longer just a futuristic notion. It's here right now--in software that senses what we need, supply chains that "think" in real time, and robots that respond to changes in their environment. Twenty-first-century pioneer companies are already using AI to innovate and grow fast. The bottom line is this: Businesses that understand how to harness AI can surge ahead. Those that neglect it will fall behind. Which side are you on?

Apr 02, 2019, 17:19:14 pm
Metal Arms: Glitch In The System - Glitch

Metal Arms: Glitch In The System - Glitch in Robots in Games

Metal Arms: Glitch in the System is a third-person shooter action-adventure video game, developed by American team Swingin' Ape Studios and released in 2003. The game follows a robot named Glitch as he joins forces with the Droids in their fight against General Corrosive and his Milbots.

Apr 01, 2019, 21:17:33 pm
10 of the Most Innovative Chatbots on the Web

10 of the Most Innovative Chatbots on the Web in Articles

Love them or hate them, chatbots are here to stay. Chatbots have become extraordinarily popular in recent years largely due to dramatic advancements in machine learning and other underlying technologies such as natural language processing. Today’s chatbots are smarter, more responsive, and more useful – and we’re likely to see even more of them in the coming years.

Mar 31, 2019, 00:32:28 am