Recent Posts

1
General Robotics Talk / They Are Coming For US!
« Last post by danlovy on Today at 01:01:05 pm »

2
XKCD Comic / XKCD Comic : Self-Driving Issues
« Last post by Tyler on Today at 12:00:06 pm »
Self-Driving Issues
21 February 2018, 5:00 am

If most people turn into murderers all of a sudden, we'll need to push out a firmware update or something.

Source: xkcd.com

3
Bot Conversations / Re: Young Alpha
« Last post by ranch vermin on Today at 07:50:59 am »
Just keep saying "depends" to it.
4
Bot Conversations / Re: Young Alpha
« Last post by LOCKSUIT on Today at 05:39:36 am »
Ohhhhhhh ohhh oh, I really like this!!! Tell us the reasoning behind it! I myself am working very hard on developing a real NLP AGI and would greatly benefit from a short, comprehensive explanation of what this is and how it works.

It seems to be aimed at yes/no/maybe questions, and the comment below tells you whether you were right or wrong. It has some randomness included too, I see; e.g. the answer sent back to us is always one of a few types if you say "maybe": could be / fuzzy / intermediate.

I see that "ok" is a win-win line, heh heh. Always correct.

If you talked about your wife named "Bugs", and THEN asked "Do you love bugs?", it would say yes!

If someone says "Do you love to eat cats?" it says "No." (hopefully...), but if they say "Do you hate to eat cats?" it says "Yes." The RNN highlights yes and no by "do you", and uses hate/love and cat/apple to do some weighting. It then gives its answer, yes OR no, depending on the weights.
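
A toy sketch of the kind of weighting I am guessing at (my own guess, not 8pla.net's actual code; the word lists, weights, and canned replies below are made up for illustration):

Code:
import random

POSITIVE = {"love", "like", "enjoy"}          # verbs treated as positive sentiment
NEGATIVE = {"hate", "dislike"}                # verbs treated as negative sentiment
NICE_THINGS = {"apples", "bugs"}              # "bugs" counted as nice, as in the wife example
NASTY_THINGS = {"cats"}                       # as in "eat cats" above
MAYBE_REPLIES = ["Could be.", "Fuzzy.", "Intermediate."]

def answer(question: str) -> str:
    q = question.lower().strip("?!. ")
    words = set(q.split())
    if not q.startswith("do you"):
        return "OK."                          # the "win-win" line noted above
    verb = 1 if words & POSITIVE else -1 if words & NEGATIVE else 0
    thing = 1 if words & NICE_THINGS else -1 if words & NASTY_THINGS else 0
    score = verb * thing                      # yes when sentiment and object agree
    if score > 0:
        return "Yes."
    if score < 0:
        return "No."
    return random.choice(MAYBE_REPLIES)       # randomness when nothing tips the scale

print(answer("Do you love bugs?"))            # Yes.
print(answer("Do you love to eat cats?"))     # No.
print(answer("Do you hate to eat cats?"))     # Yes.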
5
Sounds like a magic machine. Where does the Coca-Cola flavour go in its database? :)
6
General AI Discussion / Re: Why we laugh. (or cry) - very funny videos included
« Last post by keghn on February 20, 2018, 10:27:51 pm »
People tend to be clones of their parents, or of whoever beat the lessons into them.

When a person acts in a way that is lesser than their parent, by dropping things, fumbling around, and forgetting a lot, then a bad temper comes out to counter it and burn precious calories.

When people feel greater sorrow, or feel unlucky, then they choke up to counter it.

When a person is too happy, they jump up and run around to counter it.

When a person makes discoveries or does things better than their parent could, they burn precious calories by laughing to counter it.
If laughing is so good, let's see someone do it all day long.

For the most part, in the past, most humans lived lives where calories were hard to find. And in war-torn countries to this day, every calorie is precious.



7
General AI Discussion / Why we laugh. (or cry) - very funny videos included
« Last post by LOCKSUIT on February 20, 2018, 09:10:52 pm »
Warning - You may laugh so hard that you hemorrhage with internal bleeding.

[Embedded videos]

Have you ever eaten really yummy food, or lain down in bed after working the fields, and made a big smile on your face? Or even laughed for a bit? This is your body's way of saying "whoa, that food or rest is helpful to my survival!"

Now, have you ever smiled when someone says, "OK prisoner, you can finally get some food today"? Or when told that a breakthrough to keep your life extended is here and you'll forever get to eat? Or at some sort of interesting knowledge? This is because it's "linked" to real rewards like food. Obviously, going after or avoiding things "related to" food or pain is important too, not just direct reward, and so laughing happens there as well, not just for food. Otherwise we'd never become scientific or play video games (which we improve at by reward ranking).

Lastly, why do we laugh at poo being flung around? Because we made it something to enjoy, just like video games; we laugh because we linked reward to those representations. I've smiled myself when opening video games on Christmas, or on some non-Christmas day.

It comes down to this: certain sensory inputs make us laugh more or less (or cry more or less). Other inputs don't, but we can make them triggers. You can make a game objective rewarding, or make the opposite of the objective the new objective, and you then smile when, for example, you visually see the goal come to fruition.
8
AI News / Re: Noam Chomsky Has Weighed In On A.I. Where Do You stand?
« Last post by keghn on February 20, 2018, 05:11:18 pm »
In this cognitive machine there is a system for consciousness: simply, a pointer that indexes through video, and then a pointer that indexes through a single image frame.
All video is then rebuilt up from smaller pieces so that objects can be identified, and so that an object can be tracked from one frame to another with the index pointer. The focus. A consciousness pointer.

The consciousness index focus pointer has an x and a y, and also a z to move through time, or through video, or through newly built video.
The pointer can select an atomic piece from an image, like a cat or a cup, and use x, y, and z to track its flight and movement from one frame to another; or use different x, y, z, and more dimensions, to morph from one object to another. Or, third, it can be used as an atomic-piece selector to build new video in empty frames, somewhat like the way an image editor works.
At the same time, it makes its own internal language that it will learn to match to external spoken language.

To find atomic pieces of things in video and images I will use something like a detector NN.

YOLOv2, YOLO9000, SSD MobileNet, Faster R-CNN NASNet comparison (embedded video)

Segmentation with PSPNet-50 ADE20K - 4K dashcam #2 (embedded video)
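
Here is a minimal sketch of how I read the focus-pointer idea (my own reading, not keghn's code; FocusPointer, Detection, and the detect callback are names I made up): an (x, y) position inside a frame plus a z index through time, snapped to whatever a detector NN reports for the tracked object in each frame.

Code:
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

Detection = Tuple[str, int, int]              # (label, x, y) as a detector NN might report it

@dataclass
class FocusPointer:
    x: int = 0
    y: int = 0
    z: int = 0                                # frame index: the pointer's position in time
    target: Optional[str] = None              # the "atomic piece" being tracked, e.g. "cat"

    def track(self, video: List[object],
              detect: Callable[[object], List[Detection]]) -> List[Tuple[int, int, int]]:
        """Step through the video, snapping (x, y) to the target in each frame;
        returns the (z, x, y) trajectory of the focus."""
        path = []
        for self.z, frame in enumerate(video):
            for label, dx, dy in detect(frame):
                if label == self.target:
                    self.x, self.y = dx, dy   # focus follows the object
                    break
            path.append((self.z, self.x, self.y))
        return path

# Example with a fake two-frame video and a fake detector:
fake_video = ["frame0", "frame1"]
fake_detect = lambda frame: [("cat", 10, 20)] if frame == "frame0" else [("cat", 12, 21)]
print(FocusPointer(target="cat").track(fake_video, fake_detect))
# [(0, 10, 20), (1, 12, 21)]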
9
Robotics News / Robo-picker grasps and packs
« Last post by Tyler on February 20, 2018, 12:04:04 pm »
Robo-picker grasps and packs
20 February 2018, 4:59 am

Unpacking groceries is a straightforward albeit tedious task: You reach into a bag, feel around for an item, and pull it out. A quick glance will tell you what the item is and where it should be stored.

Now engineers from MIT and Princeton University have developed a robotic system that may one day lend a hand with this household chore, as well as assist in other picking and sorting tasks, from organizing products in a warehouse to clearing debris from a disaster zone.

The team’s “pick-and-place” system consists of a standard industrial robotic arm that the researchers outfitted with a custom gripper and suction cup. They developed an “object-agnostic” grasping algorithm that enables the robot to assess a bin of random objects and determine the best way to grip or suction onto an item amid the clutter, without having to know anything about the object before picking it up.

Once it has successfully grasped an item, the robot lifts it out from the bin. A set of cameras then takes images of the object from various angles, and with the help of a new image-matching algorithm the robot can compare the images of the picked object with a library of other images to find the closest match. In this way, the robot identifies the object, then stows it away in a separate bin.

In general, the robot follows a “grasp-first-then-recognize” workflow, which turns out to be an effective sequence compared to other pick-and-place technologies.
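
The article doesn't include any code, but as a rough illustration of that grasp-first-then-recognize sequence (my own sketch, not the team's software; pick, photograph, match_against_library, and stow are placeholder stand-ins), the loop might look like:

Code:
from typing import Any, Dict, List, Optional

def pick(bin_image: Any) -> Optional[str]:
    """Stand-in: grasp something from the bin and return a handle to it, or None."""
    return "held-object-0"

def photograph(grasped: str) -> List[str]:
    """Stand-in: image the held object from several angles."""
    return [f"{grasped}-view-{i}" for i in range(4)]

def match_against_library(views: List[str], library: Dict[str, str]) -> str:
    """Stand-in: compare the views to a labeled product-image library."""
    return next(iter(library))                # pretend the first label is the best match

def stow(grasped: str, output_bin: str) -> None:
    print(f"stowed {grasped} into {output_bin}")

def sort_one_item(bin_image: Any, library: Dict[str, str], output_bins: Dict[str, str]) -> None:
    grasped = pick(bin_image)                 # 1. grasp first, without knowing what it is
    if grasped is None:
        return                                # nothing graspable; re-image the bin and retry
    views = photograph(grasped)               # 2. photograph it from various angles
    label = match_against_library(views, library)   # 3. only now recognize it
    stow(grasped, output_bins[label])         # 4. place it in the matching output bin

sort_one_item("bin.png", {"duct tape": "ref.png"}, {"duct tape": "bin-A"})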

“This can be applied to warehouse sorting, but also may be used to pick things from your kitchen cabinet or clear debris after an accident. There are many situations where picking technologies could have an impact,” says Alberto Rodriguez, the Walter Henry Gale Career Development Professor in Mechanical Engineering at MIT.

Rodriguez and his colleagues at MIT and Princeton will present a paper detailing their system at the IEEE International Conference on Robotics and Automation, in May.

Building a library of successes and failures

While pick-and-place technologies may have many uses, existing systems are typically designed to function only in tightly controlled environments.

Today, most industrial picking robots are designed for one specific, repetitive task, such as gripping a car part off an assembly line, always in the same, carefully calibrated orientation. However, Rodriguez is working to design robots as more flexible, adaptable, and intelligent pickers, for unstructured settings such as retail warehouses, where a picker may consistently encounter and have to sort hundreds, if not thousands of novel objects each day, often amid dense clutter.

The team’s design is based on two general operations: picking — the act of successfully grasping an object, and perceiving — the ability to recognize and classify an object, once grasped.  

The researchers trained the robotic arm to pick novel objects out from a cluttered bin, using any one of four main grasping behaviors: suctioning onto an object, either vertically, or from the side; gripping the object vertically like the claw in an arcade game; or, for objects that lie flush against a wall, gripping vertically, then using a flexible spatula to slide between the object and the wall.

Rodriguez and his team showed the robot images of bins cluttered with objects, captured from the robot’s vantage point. They then showed the robot which objects were graspable, with which of the four main grasping behaviors, and which were not, marking each example as a success or failure. They did this for hundreds of examples, and over time, the researchers built up a library of picking successes and failures. They then incorporated this library into a “deep neural network” — a class of learning algorithms that enables the robot to match the current problem it faces with a successful outcome from the past, based on its library of successes and failures.
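
As a rough sketch of what predicting "which picking behavior is likely to succeed" could look like (again my own illustration, not the team's network; PRIMITIVES, predict_affordances, and min_score are invented names, and the random scores stand in for a trained model):

Code:
import numpy as np

PRIMITIVES = ["suction-down", "suction-side", "grasp-down", "flush-grasp"]

def predict_affordances(bin_image: np.ndarray) -> dict:
    """Stand-in for a learned model: one score map per primitive, where each
    pixel estimates how likely that primitive is to succeed at that location."""
    h, w, _ = bin_image.shape
    rng = np.random.default_rng(0)            # placeholder for a trained network
    return {p: rng.random((h, w)) for p in PRIMITIVES}

def choose_pick(bin_image: np.ndarray, min_score: float = 0.5):
    """Return (primitive, x, y, score) with the highest predicted success, or None."""
    best = None
    for primitive, score_map in predict_affordances(bin_image).items():
        y, x = np.unravel_index(np.argmax(score_map), score_map.shape)
        score = float(score_map[y, x])
        if best is None or score > best[3]:
            best = (primitive, x, y, score)
    if best is None or best[3] < min_score:
        return None                           # nothing looks graspable; re-image the bin
    return best

print(choose_pick(np.zeros((48, 64, 3), dtype=np.float32)))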

“We developed a system where, just by looking at a tote filled with objects, the robot knew how to predict which ones were graspable or suctionable, and which configuration of these picking behaviors was likely to be successful,” Rodriguez says. “Once it was in the gripper, the object was much easier to recognize, without all the clutter.”

From pixels to labels

The researchers developed a perception system in a similar manner, enabling the robot to recognize and classify an object once it’s been successfully grasped.

To do so, they first assembled a library of product images taken from online sources such as retailer websites. They labeled each image with the correct identification — for instance, duct tape versus masking tape — and then developed another learning algorithm to relate the pixels in a given image to the correct label for a given object.

“We’re comparing things that, for humans, may be very easy to identify as the same, but in reality, as pixels, they could look significantly different,” Rodriguez says. “We make sure that this algorithm gets it right for these training examples. Then the hope is that we’ve given it enough training examples that, when we give it a new object, it will also predict the correct label.”
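
A loose sketch of that pixels-to-labels matching step (my own illustration, not the actual system; embed is an assumed stand-in for a learned feature extractor, and all images are assumed resized to a common resolution):

Code:
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor; a real system would use a trained network."""
    v = image.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def build_library(labelled_images):
    """labelled_images: list of (label, image) pairs, e.g. scraped product photos."""
    return [(label, embed(img)) for label, img in labelled_images]

def identify(picked_views, library):
    """Compare every camera view of the picked object to the library and return
    the label with the highest cosine similarity across all views."""
    best_label, best_sim = None, -1.0
    for view in picked_views:
        q = embed(view)
        for label, ref in library:
            sim = float(np.dot(q, ref))
            if sim > best_sim:
                best_label, best_sim = label, sim
    return best_label, best_sim

# Tiny usage example with random arrays standing in for product photos:
rng = np.random.default_rng(1)
duct, masking = rng.random((8, 8, 3)), rng.random((8, 8, 3))
lib = build_library([("duct tape", duct), ("masking tape", masking)])
print(identify([duct + 0.01 * rng.random((8, 8, 3))], lib))   # ('duct tape', ~1.0)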

Last July, the team packed up the 2-ton robot and shipped it to Japan, where, a month later, they reassembled it to participate in the Amazon Robotics Challenge, a yearly competition sponsored by the online megaretailer to encourage innovations in warehouse technology. Rodriguez’s team was one of 16 taking part in a competition to pick and stow objects from a cluttered bin.

In the end, the team’s robot had a 54 percent success rate in picking objects up using suction and a 75 percent success rate using grasping, and was able to recognize novel objects with 100 percent accuracy. The robot also stowed all 20 objects within the allotted time.

For his work, Rodriguez was recently granted an Amazon Research Award and will be working with the company to further improve pick-and-place technology — foremost, its speed and reactivity.

“Picking in unstructured environments is not reliable unless you add some level of reactiveness,” Rodriguez says. “When humans pick, we sort of do small adjustments as we are picking. Figuring out how to do this more responsive picking, I think, is one of the key technologies we’re interested in.”

The team has already taken some steps toward this goal by adding tactile sensors to the robot’s gripper and running the system through a new training regime.

“The gripper now has tactile sensors, and we’ve enabled a system where the robot spends all day continuously picking things from one place to another. It’s capturing information about when it succeeds and fails, and how it feels to pick up, or fails to pick up objects,” Rodriguez says. “Hopefully it will use that information to start bringing that reactiveness to grasping.”

This research was sponsored in part by ABB Inc., Mathworks, and Amazon.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.
10
Bot Conversations / Young Alpha
« Last post by 8pla.net on February 20, 2018, 01:56:11 am »
Codename: Young Alpha

This is a young alpha prototype. Not fully  worked out yet.  Before this, I created a more complex version, but it runs in a shell.

On the first try I failed to port the shell version to the web.  So, to rethink my strategy, I started from scratch creating a basic design with the goal of making it compatible with the complex design running in the shell.

If you think you can see where I am going with this, then please comment.  Nothing is getting logged this early in alpha testing.  So, I would appreciate hearing your impressions, directly from you.   Thank you.

Live Demo: http://aihax.com/codename