avatar

Tyler

Paul McEuen delivers inaugural Dresselhaus Lecture on cell-sized robots in Robotics News

Paul McEuen delivers inaugural Dresselhaus Lecture on cell-sized robots
4 December 2019, 8:30 pm

Functional, intelligent robots the size of a single cell are within reach, said Cornell University Professor Paul McEuen at the inaugural Mildred S. Dresselhaus Lecture at MIT on Nov. 13.

“To build a robot that is on the scale of 100 microns in size, and have it work, that’s a big dream,” said McEuen, the John A. Newman Professor of Physical Science at Cornell University and director of the Kavli Institute at Cornell for Nanoscale Science. “One hundred microns is a very special size. It is the border between the visible and the invisible, or microscopic, world.”

In a talk entitled “Cell-sized Sensors and Robots” in front of a large audience in MIT’s 10-250 lecture hall, McEuen introduced his concept for a new generation of machines that work at the microscale by combining microelectronics, solar cells, and light. The microbots, as he calls them, operate using optical wireless integrated circuits and surface electrochemical actuators.

Kicking off the Dresselhaus Lectures

Inaugurated this year to honor MIT professor and physicist Mildred "Millie" Dresselhaus, the Dresselhaus Lecture recognizes a significant figure in science and engineering whose leadership and impact echo the late Institute Professor's life, accomplishments, and values. The lecture will be presented annually in November, the month of her birth.

Dresselhaus spent over 50 years at MIT, where she was a professor in the Department of Electrical Engineering and Computer Science (originally the Department of Electrical Engineering) as well as in the Department of Physics. She was MIT’s first female Institute Professor, co-organizer of the first MIT Women’s Forum, the first solo recipient of a Kavli Prize, and the first woman to win the National Medal of Science in the engineering category.

Her research into the fundamental properties of carbon earned her the nickname the “Queen of Carbon Science.” She was also nationally known for her work to develop wider opportunities for women in science and engineering.

“Millie was a physicist, a materials scientist, and an electrical engineer; an MIT professor, researcher, and doctoral supervisor; a prolific author; and a longtime leader in the scientific community,” said Asu Ozdaglar, current EECS department head, in her opening remarks. “Even in her final years, she was active in her field at MIT and in the department, attending EECS faculty meetings and playing an important role in developing the MIT.nano facility.”

Pushing the boundaries of physics

McEuen, who first met Dresselhaus when he attended graduate school at Yale University with her son, expressed what a privilege it was to celebrate Millie as the inaugural speaker. “When I think of my scientific heroes, it’s a very, very short list. And I think at the top of it would be Millie Dresselhaus. To be able to give this lecture in her honor means the world to me.”

After earning his bachelor’s degree in engineering physics from the University of Oklahoma, McEuen continued his research at Yale University, where he completed his PhD in 1990 in applied physics. McEuen spent two years at MIT as a postdoc studying condensed matter physics, and then became a principal investigator at the Lawrence Berkeley National Laboratory. He spent eight years teaching at the University of California at Berkeley before joining the faculty at Cornell as a professor in the physics department in 2001.

“Paul is a pioneer for our generation, exploring the domain of atoms and molecules to push the frontier even further. It is no exaggeration to say that his discoveries and innovations will help define the Nano Age,” said Vladimir Bulović, the founding faculty director of MIT.nano and the Fariborz Maseeh (1990) Professor in Emerging Technology.

“The world is our oyster”

McEuen joked at the beginning of his talk that speaking of technology measured in microns sounds “so 1950s” in today’s world, in which researchers can manipulate matter at the scale of nanometers. One micron, another name for a micrometer, is one millionth of a meter; a nanometer is one billionth of a meter.

“[But] if you want a micro robot, you need nanoscale parts. Just as the birth of the transistor gave rise to all the computational systems we have now,” he said, “the birth of simple, nanoscale mechanical and electronic elements is going to give birth to a robotics technology at the microscopic scale of less than 100 microns.”

The motto of McEuen and his research group at Cornell is “anything, as long as it’s small.” This focus includes fundamentals of nanostructures, atomically thin origami for metamaterials and micromachines, and microscale smartphones and optobots. McEuen emphasized the importance of borrowing from other fields, such as microelectronics technology, to build something new. Cornell researchers have used this technology to build an optical wireless integrated circuit (OWIC): essentially a microscopic cellphone made of solar cells that power it and receive external information, a simple transistor circuit to serve as its brain, and a light-emitting diode to blink out data.
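
As a rough illustration of the "blink out data" idea, here is a toy encoder/decoder that turns bytes into a sequence of LED on/off states (simple on-off keying). The framing and bit order are invented for illustration; they are not the OWIC's actual signaling scheme.

Code:
# Toy on-off keying: 1 = LED on, 0 = LED off, most significant bit first.
# Illustrative sketch only, not the OWIC's real protocol.
def to_blinks(data: bytes) -> list[int]:
    """Flatten each byte into eight LED states, MSB first."""
    return [(byte >> bit) & 1 for byte in data for bit in range(7, -1, -1)]

def from_blinks(blinks: list[int]) -> bytes:
    """Pack each group of eight LED states back into a byte."""
    out = bytearray()
    for i in range(0, len(blinks), 8):
        byte = 0
        for b in blinks[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

msg = b"OWIC"
assert from_blinks(to_blinks(msg)) == msg
print(to_blinks(b"A"))  # [0, 1, 0, 0, 0, 0, 0, 1]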

Why make something so small? The first reason is cost; the second is the wide array of possible applications. Such tiny devices could measure voltage or temperature, making them useful for microfluidic experiments. In the future, they could be deployed as smart, secure anti-counterfeiting tags, as invisible sensors for the internet of things, or as neural interfaces that measure electrical activity in the brain.

Adding a surface electrochemical actuator to these OWICs brings mechanical movement to McEuen’s microbots. By capping a very thin piece of platinum on one side and applying a voltage, “we could make all kinds of cool things,” he said.

At the end of his talk, McEuen answered audience questions moderated by Bulović, such as how the microbots communicate with one another and what their functional lifespan is. He closed with a final quote from Millie Dresselhaus: “Follow your interests, get the best available education and training, set your sights high, be persistent, be flexible, keep your options open, accept help when offered, and be prepared to help others.”

Nominations for the 2020 Dresselhaus lecture can be submitted on MIT.nano’s website. Any significant figure in science and engineering from anywhere in the world may be considered.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage




Started December 05, 2019, 12:00:43 PM
avatar

Tyler

XKCD Comic : AI Hiring Algorithm in XKCD Comic

AI Hiring Algorithm
4 December 2019, 5:00 am

So glad Kate over in R&D pushed for using the AlgoMaxAnalyzer to look into this. Hiring her was a great decisio- waaaait.

Source: xkcd.com

Started December 05, 2019, 12:00:43 PM
avatar

WriterOfMinds

On the prevalence of AI hype, and how to watch out for it in AI News

https://thegradient.pub/an-epidemic-of-ai-misinformation/

Quote
In Rebooting AI, Ernie Davis and I made six recommendations, each geared toward how readers, journalists, and researchers alike might assess each new result, asking the same set of questions in a limits section in the discussion of their papers:

Stripping away the rhetoric, what does the AI system actually do? Does a “reading system” really read?
How general is the result? (Could a driving system that works in Phoenix work as well in Mumbai? Would a Rubik’s cube system work in opening bottles? How much retraining would be required?)
Is there a demo where interested readers can probe for themselves?
If an AI system is allegedly better than humans, then which humans, and how much better? (A comparison with low-wage workers who have little incentive to do well may not truly probe the limits of human ability.)
How far does succeeding at the particular task actually take us toward building genuine AI?
How robust is the system? Could it work just as well with other data sets, without massive amounts of retraining? AlphaGo works fine on a 19x19 board, but would need to be retrained to play on a rectangular board; the lack of transfer is telling.

3 Comments | Started December 02, 2019, 08:31:46 PM
avatar

Maviarab

Who are you, what do you do & what do you want to do ? in New Users Please Post Here

New fun topic for us all to get to know each other a little better? :hugs:

As I'm the admin and I've started this silly thread, I'll set the ball rolling.

I'm a bus driver from the UK, drive a 1990 BMW 525, and spend far too much time here :D and my bot's name is Sal

And if you don't know my name by now, there's something wrong hehe

Admin

322 Comments | Started November 21, 2005, 10:54:42 PM
avatar

Tyler

XKCD Comic : Is it Christmas? in XKCD Comic

Is it Christmas?
2 December 2019, 5:00 am

We've tested it on 30 different days and it hasn't gotten one wrong yet.

Source: xkcd.com

14 Comments | Started December 03, 2019, 12:01:28 PM
avatar

Tyler

Helping machines perceive some laws of physics in Robotics News

Helping machines perceive some laws of physics
2 December 2019, 5:00 am

Humans have an early understanding of the laws of physical reality. Infants, for instance, hold expectations for how objects should move and interact with each other, and will show surprise when objects do something unexpected, such as disappearing in a sleight-of-hand magic trick.

Now MIT researchers have designed a model that demonstrates an understanding of some basic “intuitive physics” about how objects should behave. The model could be used to help build smarter artificial intelligence and, in turn, provide information to help scientists understand infant cognition.

The model, called ADEPT, observes objects moving around a scene and makes predictions about how the objects should behave, based on their underlying physics. While tracking the objects, the model outputs a signal at each video frame that correlates to a level of “surprise” — the bigger the signal, the greater the surprise. If an object ever dramatically mismatches the model’s predictions — by, say, vanishing or teleporting across a scene — its surprise levels will spike.

In response to videos showing objects moving in physically plausible and implausible ways, the model registered levels of surprise that matched levels reported by humans who had watched the same videos.  

“By the time infants are 3 months old, they have some notion that objects don’t wink in and out of existence, and can’t move through each other or teleport,” says first author Kevin A. Smith, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and a member of the Center for Brains, Minds, and Machines (CBMM). “We wanted to capture and formalize that knowledge to build infant cognition into artificial-intelligence agents. We’re now getting near human-like in the way models can pick apart basic implausible or plausible scenes.”

Joining Smith on the paper are co-first authors Lingjie Mei, an undergraduate in the Department of Electrical Engineering and Computer Science, and BCS research scientist Shunyu Yao; Jiajun Wu PhD ’19; CBMM investigator Elizabeth Spelke; Joshua B. Tenenbaum, a professor of computational cognitive science, and researcher in CBMM, BCS, and the Computer Science and Artificial Intelligence Laboratory (CSAIL); and CBMM investigator Tomer D. Ullman PhD ’15.

Mismatched realities

ADEPT relies on two modules: an “inverse graphics” module that captures object representations from raw images, and a “physics engine” that predicts the objects’ future representations from a distribution of possibilities.

Inverse graphics basically extracts information about objects, such as shape, pose, and velocity, from pixel inputs. This module captures frames of video as images and uses inverse graphics to extract this information from objects in the scene. But it doesn't get bogged down in the details: ADEPT requires only some approximate geometry of each shape to function. In part, this helps the model generalize predictions to new objects, not just those it's trained on.

“It doesn’t matter if an object is a rectangle or a circle, or if it’s a truck or a duck. ADEPT just sees there’s an object with some position, moving in a certain way, to make predictions,” Smith says. “Similarly, young infants also don’t seem to care much about some properties like shape when making physical predictions.”

These coarse object descriptions are fed into a physics engine — software that simulates behavior of physical systems, such as rigid or fluidic bodies, and is commonly used for films, video games, and computer graphics. The researchers’ physics engine “pushes the objects forward in time,” Ullman says. This creates a range of predictions, or a “belief distribution,” for what will happen to those objects in the next frame.

Next, the model observes the actual next frame. Once again, it captures the object representations, which it then aligns to one of the predicted object representations from its belief distribution. If the object obeyed the laws of physics, there won’t be much mismatch between the two representations. On the other hand, if the object did something implausible — say, it vanished from behind a wall — there will be a major mismatch.

ADEPT then resamples from its belief distribution and notes a very low probability that the object had simply vanished. If there’s a low enough probability, the model registers great “surprise” as a signal spike. Basically, surprise is inversely proportional to the probability of an event occurring. If the probability is very low, the signal spike is very high.  

“If an object goes behind a wall, your physics engine maintains a belief that the object is still behind the wall. If the wall goes down, and nothing is there, there’s a mismatch,” Ullman says. “Then, the model says, ‘There’s an object in my prediction, but I see nothing. The only explanation is that it disappeared, so that’s surprising.’”
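
The loop described above (extract coarse object states, simulate them forward into a belief distribution, compare against the next observation, and spike when the observation is improbable) can be sketched in a few lines. Everything below, from the constant-velocity motion model to the noise level and matching threshold, is an illustrative assumption, not the authors' implementation.

Code:
import numpy as np

# Each object is a row [x, y, vx, vy]: a stand-in for the coarse
# representations the inverse-graphics module would produce.
def predict(states, n_samples=100, noise=0.05):
    """Toy physics engine: push objects forward one step many times,
    with noise, to form a 'belief distribution' over next states."""
    samples = []
    for _ in range(n_samples):
        nxt = states.copy()
        nxt[:, :2] += nxt[:, 2:]                      # move by velocity
        nxt[:, :2] += np.random.normal(0, noise, nxt[:, :2].shape)
        samples.append(nxt)
    return np.stack(samples)                          # (n_samples, n_obj, 4)

def surprise(belief, observed, thresh=0.2):
    """Surprise as -log(p): the fraction of belief samples that land
    near the observed positions gives p; rare observations spike."""
    dists = np.linalg.norm(belief[:, :, :2] - observed[None, :, :2], axis=-1)
    p = np.mean(np.all(dists < thresh, axis=1))       # how probable was this?
    return -np.log(max(p, 1e-9))                      # avoid log(0)

states = np.array([[0.0, 0.0, 1.0, 0.0]])             # one object moving right
belief = predict(states)
print(surprise(belief, np.array([[1.0, 0.0, 1.0, 0.0]])))  # low: as predicted
print(surprise(belief, np.array([[5.0, 0.0, 1.0, 0.0]])))  # high: "teleported"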

Violation of expectations

In developmental psychology, researchers run “violation of expectations” tests in which infants are shown pairs of videos. One video shows a plausible event, with objects adhering to their expected notions of how the world works. The other video is the same in every way, except objects behave in a way that violates expectations. Researchers often use these tests to measure how long the infant looks at a scene after an implausible action has occurred. The longer they stare, researchers hypothesize, the more they may be surprised or interested in what just happened.

For their experiments, the researchers created several scenarios based on classical developmental research to examine the model’s core object knowledge. They employed 60 adults to watch 64 videos of known physically plausible and physically implausible scenarios. Objects, for instance, will move behind a wall and, when the wall drops, they’ll still be there or they’ll be gone. The participants rated their surprise at various moments on an increasing scale of 0 to 100. Then, the researchers showed the same videos to the model. Specifically, the scenarios examined the model’s ability to capture notions of permanence (objects do not appear or disappear for no reason), continuity (objects move along connected trajectories), and solidity (objects cannot move through one another).
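
As a rough sketch of how such model-versus-human agreement might be scored, one can correlate per-video human surprise ratings with the model's surprise signal. The numbers below are invented for illustration; they are not data from the study.

Code:
import numpy as np

# Hypothetical per-video scores: human ratings on the study's 0-100 scale,
# model surprise as -log(p) values. Both arrays are made up.
human = np.array([85.0, 10.0, 90.0, 15.0, 70.0, 20.0])
model = np.array([18.4, 0.3, 20.1, 0.9, 12.7, 1.5])
r = np.corrcoef(human, model)[0, 1]                   # Pearson correlation
print(f"Pearson r = {r:.2f}")                         # near 1.0 = close match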

ADEPT matched humans particularly well on videos where objects moved behind walls and disappeared when the wall was removed. Interestingly, the model also matched surprise levels on videos that humans weren’t surprised by but maybe should have been. For example, in a video where an object moving at a certain speed disappears behind a wall and immediately comes out the other side, the object might have sped up dramatically when it went behind the wall or it might have teleported to the other side. In general, humans and ADEPT were both less certain about whether that event was or wasn’t surprising. The researchers also found traditional neural networks that learn physics from observations — but don’t explicitly represent objects — are far less accurate at differentiating surprising from unsurprising scenes, and their picks for surprising scenes don’t often align with humans.

Next, the researchers plan to delve further into how infants observe and learn about the world, with aims of incorporating any new findings into their model. Studies, for example, show that infants up until a certain age actually aren’t very surprised when objects completely change in some ways — such as if a truck disappears behind a wall, but reemerges as a duck.

“We want to see what else needs to be built in to understand the world more like infants, and formalize what we know about psychology to build better AI agents,” Smith says.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage




3 Comments | Started December 02, 2019, 12:06:09 PM
avatar

Freddy

Beginners topics and resources for AI Projects and Chat Bots in General Project Discussion

Making this thread to bring things together for beginners.

If anyone wants to drop in links and things I will add them to this top post.

Just a few to get started :

7 Comments | Started July 31, 2015, 10:39:08 PM
avatar

Zero

Consciousness & Self-awareness in General AI Discussion

That one caught my eye:

Quote from: Wikipedia
While consciousness is being aware of one's environment and body and lifestyle, self-awareness is the recognition of that awareness.

That's a very exciting challenge. First, you have consciousness, with external consciousness (being aware of where I end and where my environment begins) and internal consciousness (being aware of what's happening inside of my mind). Then you have self-awareness: being aware that I'm aware of...

Algorithmically, it implies having internal sensors to feel what's happening inside the program (internal sensors send data to program input), including sensors to feel this internal sensor loop. Trying to figure out a minimal algorithm drives me mad.
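
One way to picture that loop, as a toy sketch only: feed the program's report about its own previous cycle back in as input alongside the external data, so each cycle "senses" both the world and its own last round of processing. This illustrates the feedback structure, nothing more.

Code:
# Toy self-monitoring loop: the report about cycle N becomes part of the
# input to cycle N+1, including the fact that monitoring was happening.
def step(external_input, self_report):
    """One cycle: perceive the world plus a report on the last cycle."""
    state = {"heard": external_input, "about_me": self_report}
    report = f"processed {external_input!r} while aware of: {self_report!r}"
    return state, report

report = "nothing yet"
for inp in ["light", "sound", "light"]:
    state, report = step(inp, report)
    print(state["about_me"])  # the program 'feeling' its previous cycle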

21 Comments | Started July 04, 2019, 09:57:28 AM
avatar

Tyler

XKCD Comic : Group Chat Rules in XKCD Comic

Group Chat Rules
29 November 2019, 5:00 am

There's no group chat member more enigmatic than the cool person who you all assume has the chat on mute, but who then instantly chimes in with no delay the moment something relevant to them is mentioned.

Source: xkcd.com

Started November 30, 2019, 12:01:01 PM
avatar

LOCKSUIT

Re: How does the brain represent images? in General AI Discussion

"we humans don't suddenly reach a point where the smallest objects visualized appear to our brains as pixels, which implies our brains aren't storing images in pixelized formats like JPEG or GIF or BMP."

But I do see pixels when I look at a cat down the end of the road. My vision is pixels, but what I see is features. I may see a little yellow blob with a spike on its back, which comes from the cat but maybe looks like a horn shape; these are the lowest-level features, like lines and balls.

11 Comments | Started November 28, 2019, 03:26:08 AM
Star Wars: Episode IX – The Rise of Skywalker

Star Wars: Episode IX – The Rise of Skywalker in Robots in Movies

Star Wars: The Rise of Skywalker (also known as Star Wars: Episode IX – The Rise of Skywalker) is an American epic space opera film produced, co-written, and directed by J. J. Abrams.

A year after the events of The Last Jedi, the remnants of the Resistance face the First Order once again—while reckoning with the past and their own inner turmoil. Meanwhile, the ancient conflict between the Jedi and the Sith reaches its climax, altogether bringing the Skywalker saga to a definitive end.

Nov 15, 2019, 22:31:39 pm
Terminator: Dark Fate

Terminator: Dark Fate in Robots in Movies

Terminator: Dark Fate is a 2019 American science fiction action film directed by Tim Miller, from a story by James Cameron. Cameron considers the film a direct sequel to his films The Terminator (1984) and Terminator 2: Judgment Day. The film stars Linda Hamilton and Arnold Schwarzenegger returning in their roles of Sarah Connor and the T-800 "Terminator", respectively, reuniting after 28 years.

SPOILERS:

In 1998, three years after defeating the T-1000 and averting the rise of the malevolent artificial intelligence (AI) Skynet, Sarah Connor and her teenage son John are relaxing on a beach in Guatemala. A T-800 Terminator, sent from the future before Skynet's erasure, arrives and shoots John, killing him.

Mackenzie Davis stars as Grace, a soldier from the year 2042 who was adopted by Resistance leader Daniella Ramos, converted into a cyborg, and sent back by her adoptive mother to protect Ramos's younger self from a new, advanced Terminator prototype.

Oct 29, 2019, 21:27:46 pm
Life Like

Life Like in Robots in Movies

A couple, James and Sophie, buy an android called Henry to help around the house.

In the beginning, this is perfect for both James and Sophie, as Henry does housework and makes a good companion for Sophie. But when Henry's childlike brain adapts by developing emotions, complications begin to arise.

Oct 29, 2019, 21:14:49 pm
I Am Mother

I Am Mother in Robots in Movies

I Am Mother is a 2019 Australian science fiction thriller film directed by Grant Sputore, from a screenplay by Michael Lloyd Green. Starring Clara Rugaard, Luke Hawker, Rose Byrne, and Hilary Swank, the film follows Daughter, a girl in a post-apocalyptic bunker, being raised by Mother, an android supposed to aid in the repopulation of Earth.

Sep 30, 2019, 21:39:16 pm
Mitsuku wins 2019 Loebner Prize

Mitsuku wins 2019 Loebner Prize in Articles

For the fourth consecutive year, Steve Worswick’s Mitsuku has won the Loebner Prize for the most humanlike chatbot entry to the contest. This is the fifth time that Steve has won the Loebner Prize. The Loebner Prize is the world’s longest running Turing-Test competition and has been organised by AISB, the world’s oldest AI society, since 2014.

Sep 30, 2019, 21:18:50 pm
Metal Gear Series - Metal Gear RAY

Metal Gear Series - Metal Gear RAY in Robots in Games

Metal Gear RAY is an anti-Metal Gear introduced in Metal Gear Solid 2: Sons of Liberty. This Metal Gear model comes in two variants: a manned prototype version developed to combat Metal Gear derivatives and an unmanned, computer-controlled version.

Metal Gear RAY differs from previous Metal Gear models in that it is not a nuclear launch platform, but instead a weapon of conventional warfare, originally designed by the U.S. Marines to hunt down and destroy the many Metal Gear derivatives that became common after Metal Gear REX's plans leaked following the events of Shadow Moses.

Apr 08, 2019, 17:35:36 pm
Fallout 3 - Liberty Prime

Fallout 3 - Liberty Prime in Robots in Games

Liberty Prime is a giant military robot that appears in the Fallout games. Liberty Prime fires dual, head-mounted energy beams, which are similar to shots fired from a Tesla cannon.

It first appears in Fallout 3 and its add-on Broken Steel, then again in Fallout 4, and later, in 2017, in Fallout: The Board Game.

Apr 07, 2019, 15:20:23 pm
Building Chatbots with Python

Building Chatbots with Python in Books

Build your own chatbot using Python and open source tools. This book begins with an introduction to chatbots where you will gain vital information on their architecture. You will then dive straight into natural language processing with the natural language toolkit (NLTK) for building a custom language processing platform for your chatbot. With this foundation, you will take a look at different natural language processing techniques so that you can choose the right one for you.

Apr 06, 2019, 20:34:29 pm
Voicebot and Chatbot Design

Voicebot and Chatbot Design in Books

Flexible conversational interfaces with Amazon Alexa, Google Home, and Facebook Messenger.

We are entering the age of conversational interfaces, where we will interact with AI bots using chat and voice. But how do we create a good conversation? How do we design and build voicebots and chatbots that can carry successful conversations in the real world?

In this book, Rachel Batish introduces us to the world of conversational applications, bots, and AI. You'll discover how, with little technical knowledge, you can build successful and meaningful conversational UIs. You'll find detailed guidance on how to build and deploy bots on the leading conversational platforms, including Amazon Alexa, Google Home, and Facebook Messenger.

Apr 05, 2019, 15:43:30 pm
Build Better Chatbots

Build Better Chatbots in Books

A Complete Guide to Getting Started with Chatbots.

Learn best practices for building bots by focusing on the technological implementation and UX in this practical book. You will cover key topics such as setting up a development environment for creating chatbots for multiple channels (Facebook Messenger, Skype, and KiK); building a chatbot (design to implementation); integrating with IFTTT (If This Then That) and IoT (Internet of Things); carrying out analytics and metrics for chatbots; and, most importantly, monetizing models and business sense for chatbots.

Build Better Chatbots is easy to follow with code snippets provided in the book and complete code open sourced and available to download.

Apr 04, 2019, 15:21:57 pm
Chatbots and Conversational UI Development

Chatbots and Conversational UI Development in Books

Conversation as an interface is the best way for machines to interact with us using the universally accepted human tool that is language. Chatbots and voice user interfaces are two flavors of conversational UIs. Chatbots are real-time, data-driven answer engines that talk in natural language and are context-aware. Voice user interfaces are driven by voice and can understand and respond to users using speech. This book covers both types of conversational UIs by leveraging APIs from multiple platforms. We'll take a project-based approach to understand how these UIs are built and the best use cases for deploying them.

Build over 8 chatbots and conversational user interfaces with leading tools such as Chatfuel, Dialogflow, Microsoft Bot Framework, Twilio, Alexa Skills, and Google Actions, and deploy them on channels like Facebook Messenger, Amazon Alexa, and Google Home.

Apr 03, 2019, 22:30:30 pm