Recent Posts

Pages: [1] 2 3 ... 10
AI News / Re: Noam Chomsky Has Weighed In On A.I. Where Do You stand?
« Last post by keghn on Today at 05:11:18 pm »
In this cognitive machine there is a system for consciousness: simply a pointer that indexes through video, and then a pointer that indexes through a single image frame.
All video is then rebuilt up from smaller pieces so that objects can be identified, and so that an object can be tracked from one frame
to another with the index pointer. The focus. A consciousness pointer.

The consciousness index focus pointer has an x and a y, and also a z, to move through time, through video, or through newly built video.
The pointer can select an atomic piece from an image, like a cat or a cup, and use x, y, and z to track its flight and movement from
one frame to another; or use different x, y, z, and more dimensions, to morph from one object to another; or, third, use it as
an atomic-piece selector and build new video in empty frames, the way an image editor works.
At the same time it makes its own internal language that it will learn to match to external spoken language.

To find atomic pieces of things in video and images I will use something like a detector NN.
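The focus-pointer idea above can be sketched in a few lines. This is only an illustrative toy, assuming video stored as a NumPy array; every name here is hypothetical, not part of any real system.

```python
import numpy as np

def focus_patch(video, x, y, z, size=32):
    """Extract the patch the 'consciousness pointer' is focused on.

    video has shape (frames, height, width): z indexes the frame (time),
    and (x, y) index a location inside that frame.
    """
    frame = video[z]
    return frame[y:y + size, x:x + size]

# Track a focus point across frames by stepping z while shifting (x, y),
# e.g. following an object that drifts right by 2 pixels per frame.
video = np.zeros((10, 240, 320))
track = [focus_patch(video, x=100 + 2 * t, y=50, z=t) for t in range(10)]
```

The same (x, y, z) indexing could just as well write patches into empty frames to "build new video," as described above.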

YoloV2, Yolo 9000, SSD Mobilenet, Faster RCNN NasNet comparison: 

Segmentation with PSPNet-50 ADE20K - 4K dashcam #2:
Robotics News / Robo-picker grasps and packs
« Last post by Tyler on Today at 12:04:04 pm »
Robo-picker grasps and packs
20 February 2018, 4:59 am

Unpacking groceries is a straightforward albeit tedious task: You reach into a bag, feel around for an item, and pull it out. A quick glance will tell you what the item is and where it should be stored.

Now engineers from MIT and Princeton University have developed a robotic system that may one day lend a hand with this household chore, as well as assist in other picking and sorting tasks, from organizing products in a warehouse to clearing debris from a disaster zone.

The team’s “pick-and-place” system consists of a standard industrial robotic arm that the researchers outfitted with a custom gripper and suction cup. They developed an “object-agnostic” grasping algorithm that enables the robot to assess a bin of random objects and determine the best way to grip or suction onto an item amid the clutter, without having to know anything about the object before picking it up.

Once it has successfully grasped an item, the robot lifts it out from the bin. A set of cameras then takes images of the object from various angles, and with the help of a new image-matching algorithm the robot can compare the images of the picked object with a library of other images to find the closest match. In this way, the robot identifies the object, then stows it away in a separate bin.

In general, the robot follows a “grasp-first-then-recognize” workflow, which turns out to be an effective sequence compared to other pick-and-place technologies.
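The "grasp-first-then-recognize" sequence can be mimicked in a toy sketch. Everything below is invented for illustration (the object dicts, the scores, the function names); the team's real system scores grasps with a deep neural network over camera images of the bin, not a lookup table.

```python
# Toy sketch of the grasp-first-then-recognize loop described above.
GRASP_PRIMITIVES = ("suction-down", "suction-side", "grasp-down", "flush-grasp")

def candidate_grasps(bin_state):
    # One candidate per (object, primitive) pair still in the bin.
    return [(obj, p) for obj in bin_state for p in GRASP_PRIMITIVES]

def grasp_score(candidate):
    obj, primitive = candidate
    # Stand-in for the learned grasp-success predictor.
    return obj["graspability"].get(primitive, 0.0)

def pick_and_place(bin_state, classify):
    stowed = []
    while bin_state:
        # Grasp first: take the highest-scoring candidate out of the clutter...
        obj, primitive = max(candidate_grasps(bin_state), key=grasp_score)
        bin_state.remove(obj)
        # ...then recognize the now-isolated object and stow it.
        stowed.append((classify(obj), primitive))
    return stowed

bin_state = [{"name": "duct tape", "graspability": {"suction-down": 0.9}},
             {"name": "spatula", "graspability": {"grasp-down": 0.7}}]
result = pick_and_place(bin_state, classify=lambda obj: obj["name"])
```

Note how classification happens only after the object leaves the bin, which is what makes the recognition step easier than identifying objects amid clutter.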

“This can be applied to warehouse sorting, but also may be used to pick things from your kitchen cabinet or clear debris after an accident. There are many situations where picking technologies could have an impact,” says Alberto Rodriguez, the Walter Henry Gale Career Development Professor in Mechanical Engineering at MIT.

Rodriguez and his colleagues at MIT and Princeton will present a paper detailing their system at the IEEE International Conference on Robotics and Automation, in May.

Building a library of successes and failures

While pick-and-place technologies may have many uses, existing systems are typically designed to function only in tightly controlled environments.

Today, most industrial picking robots are designed for one specific, repetitive task, such as gripping a car part off an assembly line, always in the same, carefully calibrated orientation. However, Rodriguez is working to design robots as more flexible, adaptable, and intelligent pickers, for unstructured settings such as retail warehouses, where a picker may consistently encounter and have to sort hundreds, if not thousands of novel objects each day, often amid dense clutter.

The team’s design is based on two general operations: picking — the act of successfully grasping an object, and perceiving — the ability to recognize and classify an object, once grasped.  

The researchers trained the robotic arm to pick novel objects out from a cluttered bin, using any one of four main grasping behaviors: suctioning onto an object, either vertically, or from the side; gripping the object vertically like the claw in an arcade game; or, for objects that lie flush against a wall, gripping vertically, then using a flexible spatula to slide between the object and the wall.

Rodriguez and his team showed the robot images of bins cluttered with objects, captured from the robot’s vantage point. They then showed the robot which objects were graspable, with which of the four main grasping behaviors, and which were not, marking each example as a success or failure. They did this for hundreds of examples, and over time, the researchers built up a library of picking successes and failures. They then incorporated this library into a “deep neural network” — a class of learning algorithms that enables the robot to match the current problem it faces with a successful outcome from the past, based on its library of successes and failures.

“We developed a system where, just by looking at a tote filled with objects, the robot knew how to predict which ones were graspable or suctionable, and which configuration of these picking behaviors was likely to be successful,” Rodriguez says. “Once it was in the gripper, the object was much easier to recognize, without all the clutter.”

From pixels to labels

The researchers developed a perception system in a similar manner, enabling the robot to recognize and classify an object once it’s been successfully grasped.

To do so, they first assembled a library of product images taken from online sources such as retailer websites. They labeled each image with the correct identification — for instance, duct tape versus masking tape — and then developed another learning algorithm to relate the pixels in a given image to the correct label for a given object.

“We’re comparing things that, for humans, may be very easy to identify as the same, but in reality, as pixels, they could look significantly different,” Rodriguez says. “We make sure that this algorithm gets it right for these training examples. Then the hope is that we’ve given it enough training examples that, when we give it a new object, it will also predict the correct label.”
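A bare-bones sketch of the library-matching step: a plain vector stands in for the learned pixel representation, and the vectors and labels are made up for illustration.

```python
import numpy as np

def closest_label(query_feat, library):
    """Return the label of the library entry nearest to the query feature."""
    dists = [np.linalg.norm(query_feat - feat) for feat, _ in library]
    return library[int(np.argmin(dists))][1]

# Made-up 2-D "features" standing in for learned image representations.
library = [(np.array([1.0, 0.0]), "duct tape"),
           (np.array([0.0, 1.0]), "masking tape")]
print(closest_label(np.array([0.9, 0.2]), library))  # prints "duct tape"
```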

Last July, the team packed up the 2-ton robot and shipped it to Japan, where, a month later, they reassembled it to participate in the Amazon Robotics Challenge, a yearly competition sponsored by the online megaretailer to encourage innovations in warehouse technology. Rodriguez’s team was one of 16 taking part in a competition to pick and stow objects from a cluttered bin.

In the end, the team’s robot had a 54 percent success rate in picking objects up using suction and a 75 percent success rate using grasping, and was able to recognize novel objects with 100 percent accuracy. The robot also stowed all 20 objects within the allotted time.

For his work, Rodriguez was recently granted an Amazon Research Award and will be working with the company to further improve pick-and-place technology — foremost, its speed and reactivity.

“Picking in unstructured environments is not reliable unless you add some level of reactiveness,” Rodriguez says. “When humans pick, we sort of do small adjustments as we are picking. Figuring out how to do this more responsive picking, I think, is one of the key technologies we’re interested in.”

The team has already taken some steps toward this goal by adding tactile sensors to the robot’s gripper and running the system through a new training regime.

“The gripper now has tactile sensors, and we’ve enabled a system where the robot spends all day continuously picking things from one place to another. It’s capturing information about when it succeeds and fails, and how it feels to pick up, or fails to pick up objects,” Rodriguez says. “Hopefully it will use that information to start bringing that reactiveness to grasping.”

This research was sponsored in part by ABB Inc., Mathworks, and Amazon.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Use the link at the top of the story to get to the original article.
Bot Conversations / Young Alpha
« Last post by on Today at 01:56:11 am »
Codename: Young Alpha

This is a young alpha prototype; it's not fully worked out yet. Before this, I created a more complex version, but it runs in a shell.

On my first try I failed to port the shell version to the web. So, to rethink my strategy, I started from scratch, creating a basic design with the goal of making it compatible with the complex design running in the shell.

If you think you can see where I am going with this, please comment. Nothing is getting logged this early in alpha testing, so I would appreciate hearing your impressions directly from you. Thank you.

Live Demo:
Future of AI / Re: Emergence of the universe's PURPOSE !!!!
« Last post by keghn on February 19, 2018, 04:54:20 pm »
@Locksuit, looks like you're feeling pretty good at this moment. So reward yourself with more calories and food, and save calories by relaxing, or by being very humble if you are too manic.

AI News / Re: Noam Chomsky Has Weighed In On A.I. Where Do You stand?
« Last post by ranch vermin on February 19, 2018, 04:43:24 pm »
For GAs, I don't think you have to worry about chromosomes or anything natural; just reinforcement learning should work.

The big hurdle is that after you've mastered the basic physics of things, you have to find things worthwhile for the robot to do, because otherwise he's stuck in a limbo of not understanding the "game" we are in.
Future of AI / Re: Emergence of the universe's PURPOSE !!!!
« Last post by ranch vermin on February 19, 2018, 04:38:22 pm »
I'm glad you think you've got it, 'cause it's still a mystery to me.
XKCD Comic / XKCD Comic : 2018 CVE List
« Last post by Tyler on February 19, 2018, 12:01:26 pm »
2018 CVE List
19 February 2018, 5:00 am

CVE-2018-?????: It turns out Bruce Schneier is just two mischievous kids in a trenchcoat.


Future of AI / Emergence of the universe's PURPOSE !!!!
« Last post by LOCKSUIT on February 19, 2018, 06:05:55 am »
The universe had a big bang, so why will any good, non-suffering state EMERGE out of it and stay forever at the end state, never again another tear? HOW? It must have a destined purpose OVER the determined physics. Paradigm patterns: The galaxy first became a host. Then a cell self-organized in the water. Then the best organism, the organism that populates the most. Then human brains learn discoveries/findings by reward, which is change to Earth, ahem, the RIGHT change ("problem solving"!). One day soon humans will create an AGI that discovers by, say, language alone. Then the singularity as I wrote it. Then heaven. The purpose of the universe is the brain algorithm, which creates a consciousness that senses what the machine thinks/says it does. Intelligence is the later-EMERGED part of the universe's evolution that starts the beginning of consciousnesses finally being animated, and also what thereafter takes us to the universe's end state (brain discoveries = the right change). This heaven may keep growing outward and search for others in need of help.

Diagram from "The Singularity is Near". Also other diagrams!

Link - click here....

The universe has it so that each alien utopia sphere, which would become an identical arrangement and immortally stay as is, is the system that makes consciousnesses, gives good senses, and understands this!! As exponential S curves: the existence of the universe; particle-physics information for atoms, which holds information; atoms for DNA, which stores evolved or learned information; DNA for cell division, humans, and brains (which just so happen to have sensors etc.), which learn and manipulate information; this exact algorithm; consciousnesses; the universe purpose's bio-evolution that wanted to wipe out the dinosaurs etc. and wanted the neocortex to populate, and it plus the brain plus small rodents grow bigger in later generations, ex. the frontal cortex; brains for the 21st-century technological hardware/software/sensor/motor recognize-evaluate-act storage nervous-system culture world (we just so happen to make, ex., sensors), which holds manipulated information; technology for AI, which holds information; geography, asteroids, collapse, fusion, and bursts for stars and to spread unit formation; the discoverer; the Singularity; and AI for universal particle transformation, which holds information. All of it was destined particle-Destiny. Intelligence/purpose is the most powerful thing in the universe, and because the universe wants it to come from evolution and saturate the matter around itself (or should I say the universe), it's more powerful than physics; it's part of the laws of physics. From randomness came the highest perfect technology. First the universe evolved, then the cell, then the universe again. Luck is on our side.
Future of AI / Re: the emergence of AI
« Last post by LOCKSUIT on February 19, 2018, 05:04:07 am »
Hence my profile motto !
AI News / Re: AIRIS unsupervised one shot learning
« Last post by korrelan on February 18, 2018, 10:33:29 pm »
I spent an hour and reproduced a simple version of the game editor AIRIS is using; I also implemented the same type/method of AI.

Teaching the AI to understand a simple maze... to grab food and to wait at doors...

But it’s still driven by a simple list.
