Recent Posts

21
Robotics News / Intel to acquire Mobileye for $15.3 billion
« Last post by Tyler on March 26, 2017, 10:48:12 AM »
Intel to acquire Mobileye for $15.3 billion
14 March 2017, 1:35 pm

Source: Intel

Intel announced plans to acquire Israel-based Mobileye, a developer of vision technology used in autonomous driving applications, for $15.3 billion. Mobileye share prices jumped from $47 to $61 on the news, a roughly 30% premium (the tender offer price is $63.54). The purchase marks the largest acquisition of an Israeli hi-tech company ever.

Source: Frost & Sullivan; VDS Automotive SYS Konferenz 2014

This transaction jumpstarts Intel’s efforts to enter the emerging autonomous driving marketplace, an arena quite different from Intel’s present business model. The process of designing a chip and bringing it to market involves multiple levels of safety checks and approvals as well as incorporation into car companies’ design plans, a process that often takes 4 to 5 years, which is why it makes sense to acquire a company already versed in those activities. As the Frost & Sullivan chart on the right shows, cars with Level 2 and Level 3 automated systems are already in production. Intel wants to be a strategic partner on the way to fully automated, driverless Level 4 and Level 5 cars.

Mobileye is a pioneer in the development of vision systems for on-board driver assistance systems, providing data for decision-making applications such as Mobileye’s Adaptive Cruise Control, Lane Departure Warning, Forward Collision Warning, Headway Monitoring, High Beam Assist and more. Mobileye technology is already included in BMW 5-Series, 6-Series and 7-Series, Volvo S80, XC70 and V70, and Buick Lucerne, Cadillac DTS and STS models.

Last year, Intel reorganized and created a new Autonomous Driving Division, which included strategic partnerships with, and investments in, Delphi, Mobileye and a number of smaller companies involved in the chipmaking and sensor process. With this acquisition, Intel gains the ability to offer automakers a larger package of the components they will need as vehicles become autonomous, and perhaps also gains ground on its competitors in the field: NXP Semiconductors, Freescale Semiconductor, Cypress Semiconductor, and STMicroelectronics, the company that makes Mobileye’s chips.

Mobileye’s newest chip, the EyeQ4, designed for computer-vision processing in ADAS applications, is a low-power supercomputer on a chip. Its design features are described in this article by Imagination Technologies.

Bottom line:
“They’re paying a huge premium in order to catch up, to get into the front of the line, rather than attempt to build from scratch,” said Mike Ramsey, an analyst with technology researcher Gartner, in a Bloomberg Technology article.


Source: Robohub

To visit any links mentioned, please view the original article; the link is at the top of this post.
22
Robotics News / Developing ROS programs for the Sphero robot
« Last post by Tyler on March 26, 2017, 04:48:29 AM »
Developing ROS programs for the Sphero robot
13 March 2017, 3:30 pm



You probably know the Sphero robot: a small, ball-shaped robot. If you have one, you may know that it can be controlled with ROS by installing on your computer the Sphero ROS packages developed by Melonee Wise and connecting to the robot over your computer's Bluetooth.

Now, you can use the ROS Development Studio to create ROS control programs for that robot, testing as you go by using the integrated simulation.

The ROS Development Studio (RDS) provides an off-the-shelf simulation of Sphero in a maze environment. The simulation provides the same interface as the ROS module created by Melonee, so you can develop and test your programs in the simulated environment and, once they work properly, transfer them to the real robot.

We created the simulation to teach ROS to the students of the Robot Ignite Academy. They have to learn enough ROS to make the Sphero find its way out of the maze using odometry and the IMU.
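As a concrete starting point, here is a minimal rospy sketch of the kind of node a student might write. It is an illustration, not the course solution, and it assumes the simulation exposes /odom, /imu and /cmd_vel topics like the standard Sphero ROS packages, so treat the topic names as assumptions to verify.

Code:

#!/usr/bin/env python
# Minimal sketch (not the course solution): read odometry and IMU, publish velocity.
# The /odom, /imu and /cmd_vel topic names are assumptions; verify them with
# `rostopic list` in the RDS shell before relying on them.
import rospy
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry
from sensor_msgs.msg import Imu

class SpheroMazeDriver(object):
    def __init__(self):
        self.odom = None
        self.imu = None
        rospy.Subscriber('/odom', Odometry, self.odom_cb)
        rospy.Subscriber('/imu', Imu, self.imu_cb)
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

    def odom_cb(self, msg):
        self.odom = msg

    def imu_cb(self, msg):
        self.imu = msg

    def run(self):
        rate = rospy.Rate(10)  # 10 Hz control loop
        while not rospy.is_shutdown():
            cmd = Twist()
            cmd.linear.x = 0.2  # creep forward slowly
            # A real maze-escape strategy would use self.odom and self.imu here
            # to decide when and how much to turn.
            self.cmd_pub.publish(cmd)
            rate.sleep()

if __name__ == '__main__':
    rospy.init_node('sphero_maze_driver')
    SpheroMazeDriver().run()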



Using the simulation

To use the Sphero simulation on RDS, go to rds.theconstructsim.com and sign in. If you select the Public simulations list, you will quickly spot the Sphero simulation.

Press the red Play button. A new screen will appear giving you details about the simulation and asking which launch file you want to run. The main.launch file selected by default is the correct one, so just press Run.

After a few seconds the simulation will appear together with the development environment for creating the programs for Sphero and testing them.

On the left-hand side you have a notebook containing information about the robot and how to program it with ROS. The notebook contains only a few examples, but you can extend or modify it as you wish. It is an IPython notebook and follows that standard, so it is up to you to edit it or add new information. Remember that any change you make to the notebook is saved with the simulation in your private area of RDS, so you can come back later and launch it with your modifications.



Note that the code included in the notebook can be executed directly by selecting a code cell (a single click on it) and pressing the small play button at the top of the notebook. Once you press that button, the code runs and controls the simulated Sphero for a few time-steps (make sure the simulation is active, i.e. its Play button is pressed, so you can see the robot move).
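For example, a notebook cell along these lines would push the robot forward for a few seconds. This is a hypothetical cell, and the /cmd_vel topic name is an assumption to adapt to the notebook's own examples.

Code:

# Hypothetical notebook cell: drive the simulated Sphero forward briefly.
# The /cmd_vel topic name is an assumption; match it to the notebook's examples.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('sphero_notebook_demo', anonymous=True)
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(5)

cmd = Twist()
cmd.linear.x = 0.3           # modest forward speed
for _ in range(25):          # roughly five seconds of motion
    pub.publish(cmd)
    rate.sleep()
pub.publish(Twist())         # stop the robot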

In the center area you can see the IDE, the development environment for writing your code. There you can browse all the packages related to the simulation, as well as any other packages you create.

On the right-hand side you can see the simulation and, beneath it, the shell. The simulation shows the Sphero robot as well as the maze environment. In the shell you can issue commands on the computer that runs the simulation. For instance, you can use the shell to launch the keyboard controller and move the Sphero around. Try typing the following:

  • $ roslaunch sphero_gazebo keyboard_teleop.launch
You should now be able to move the robot around the maze by pressing keys on the keyboard (instructions are shown on the screen).



You can also launch RViz from there and view the robot, its frames, and any other information you may want about the robot. Type the following:

  • $ rosrun rviz rviz
Then press the red screen icon located at the bottom-left of the screen (the graphical tools). A new tab should appear, showing RViz loading. After a while, you can configure RViz to show the information you want.

There are many ways you can configure the screen to provide more focus to what interests you the most.

To end this post, I would like to point out that you can download the simulation to your computer at any time by right-clicking on the directories and selecting Download. You can also clone The Construct's simulations repository to download it (among other available simulations).

If you liked this tutorial, you may also enjoy these:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Source: Robohub

To visit any links mentioned, please view the original article; the link is at the top of this post.
23
Split-second decisions: Navigating the fine line between man and machine
13 March 2017, 1:45 pm

Level 3 automation, where the car handles all aspects of driving with the driver on standby, is being tested in Sweden. Image courtesy of Volvo Cars.

Today’s self-driving car isn’t exactly autonomous – the driver has to be able to take over in a pinch, and therein lies the roadblock researchers are trying to overcome. Automated cars are hurtling towards us at breakneck speed, with all-electric Teslas already running limited autopilot systems on roads worldwide and Google trialling its own autonomous pod cars.

However, before we can reply to emails while being driven to work, we have to have a foolproof way to determine when drivers can safely take control and when it should be left to the car.

‘Even in a limited number of tests, we have found that humans are not always performing as required,’ explained Dr Riender Happee, from Delft University of Technology in the Netherlands, who is coordinating the EU-funded HFAuto project to examine the problem and potential solutions.

‘We are close to concluding that the technology always has to be ready to resolve the situation if the driver doesn’t take back control.’

But in these car-to-human transitions, how can a computer decide whether it should hand back control?

‘Eye tracking can indicate driver state and attention,’ said Dr Happee. ‘We’re still to prove the practical usability, but if the car detects the driver is not in an adequate state, the car can stop in the safety lane instead of giving back control.’

Next level

It’s all a question of the level of automation. According to the scale of US-based standards organisation SAE International, Level 1 automation already exists in the form of automated braking and self-parking.

Level 4 & 5 automation, where you punch in the destination and sit back for a nap, is still on the horizon.

But we’ll soon reach Level 3 automation, where drivers can hand over control in situations like motorway driving and let their attention wander, as long as they can safely intervene when the car asks them to.

HFAuto’s 13 PhD students have been researching this human-machine transition challenge since 2013.

Backed by Marie Skłodowska-Curie action funding, the students have travelled across Europe for secondments, to examine carmakers’ latest prototypes, and to carry out simulator and on-road tests of transition takeovers.

Alongside further trials of their transition interface, HFAuto partner Volvo has already started testing 100 highly automated Level 3 cars on Swedish public roads.

Another European research group is approaching the problem with a self-driving system that uses external sensors together with cameras inside the cab to monitor the driver’s attentiveness and actions.

Blink

‘Looking at what’s happening in the scene outside of the cars is nothing without the perspective of what’s happening inside the car,’ explained Dr Oihana Otaegui, head of the Vicomtech-IK4 applied research centre in San Sebastián, Spain.

She coordinates the work as part of the EU-funded VI-DAS project. The idea is to avoid high-risk transitions by monitoring factors like a driver’s gaze, blinking frequency and head pose — and combining this with real-time on-road factors to calculate how much time a driver needs to take the wheel.
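As a toy illustration of that idea (not the VI-DAS model), one could combine such driver-state signals with a traffic measure into a takeover-time estimate; every threshold and weight below is an invented placeholder.

Code:

def estimate_takeover_time(gaze_on_road, blink_rate_hz, head_yaw_deg,
                           traffic_density, base_seconds=4.0):
    """Toy heuristic, not the VI-DAS model: start from a base takeover budget
    and lengthen it when the driver looks distracted or the road is busy."""
    t = base_seconds
    if not gaze_on_road:
        t += 2.0                              # eyes off the road
    if blink_rate_hz > 0.5:
        t += 1.0                              # frequent blinking as a drowsiness proxy
    t += min(abs(head_yaw_deg), 90) / 45.0    # head turned away from the road
    t *= 1.0 + 0.5 * traffic_density          # denser traffic, more margin
    return t

# Example: a distracted driver in moderate traffic needs a longer warning.
print(estimate_takeover_time(gaze_on_road=False, blink_rate_hz=0.7,
                             head_yaw_deg=30, traffic_density=0.5))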

Its self-driving system uses external cameras as affordable sensors, collecting data for the underlying artificial intelligence system, which tries to understand road situations like a human would.

VI-DAS is also studying real accidents to discern challenging situations where humans fail and using this to help train the system to detect and avoid such situations.

The group aims to have its first interface prototype working by September, with iterated prototypes appearing at the end of 2018 and 2019.

Dr Otaegui says the system could have potential security sector uses given its focus on creating artificial intelligence perception in any given environment, and hopes it could lead to fully automated driving.

‘It could even go down the path of Levels 4 and 5, depending on how well we can teach our system to react — and it will indeed be improving all the time we are working on this automation.’

The question of transitions is so important because it has an impact on liability – who is responsible in the case of an accident.

It’s clear that Level 2 drivers can be held liable if they cause a fender bender, while carmakers will take the rap once Level 4 is deployed. However, with Level 3 transitions, liability remains a burning question.

HFAuto’s Dr Happee believes the solution lies in specialist insurance options that will emerge.

‘Insurance solutions are expected (to emerge) where a car can be bought with risk insurance covering your own errors, and those which can be blamed on carmakers,’ he said.

Yet it goes further than that. Should a car choose to hit pedestrians in the road, or swerve into the path of an oncoming lorry, killing its occupants?

‘One thing coming out of our discussions is that no one would buy a car which will sacrifice its owner for the lives of others,’ said Dr Happee. ‘So it comes down to making these as safe as possible.’

The five levels of automation:

  • Level 1 – Driver Assistance: the car can either steer or regulate speed on its own.
  • Level 2 – Partial Automation: the vehicle can handle both steering and speed selection on its own in specific controlled situations, such as on a motorway.
  • Level 3 – Conditional Automation: the vehicle can be instructed to handle all aspects of driving, but the driver needs to be on standby to intervene if needed.
  • Level 4 – High Automation: the vehicle can be instructed to handle all aspects of driving, even if the driver is not available to intervene.
  • Level 5 – Full Automation: the vehicle handles all aspects of driving, all the time.

Source: Robohub

To visit any links mentioned, please view the original article; the link is at the top of this post.
24
AI News / Re: Most Human Like Android Built to Date
« Last post by 8pla.net on March 25, 2017, 10:24:26 PM »
Best Video at the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017).



This video is entertaining, but it also presents a new algorithm for topic modelling that uses co-occurrence frequency matrices, with no need to understand what is being said in the conversation.
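As a rough sketch of that general idea (not the algorithm from the video), one can count how often words appear near each other and then factor the resulting matrix so that words which co-occur fall into topic-like groups; the sentences and window size below are made-up examples.

Code:

# Illustrative only: topic-like groupings from a word co-occurrence matrix.
import numpy as np

def cooccurrence_matrix(sentences, window=3):
    vocab = sorted({w for s in sentences for w in s})
    index = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for s in sentences:
        for i, w in enumerate(s):
            for v in s[max(0, i - window): i + window + 1]:
                if v != w:
                    counts[index[w], index[v]] += 1
    return vocab, counts

sentences = [
    "the robot speaks with a human".split(),
    "the human asks the robot a question".split(),
    "bees pollinate flowers in the field".split(),
    "flowers need bees for pollination".split(),
]
vocab, counts = cooccurrence_matrix(sentences)

# A low-rank factorisation groups words that tend to co-occur.
u, s, vt = np.linalg.svd(counts)
for k in range(2):
    top = np.argsort(-np.abs(u[:, k]))[:4]
    print("topic", k, [vocab[i] for i in top])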
25
AI News / Most Human Like Android Built to Date
« Last post by 8pla.net on March 25, 2017, 09:18:00 PM »
ERICA: The ERATO Intelligent Conversational Android
developed by Hiroshi Ishiguro Laboratories at ATR, Kyoto, Japan

Quote, "an autonomous android system capable of conversational interaction, featuring advanced sensing and speech synthesis technologies, and arguably the most human like android built to date."

The Institute of Electrical and Electronics Engineers (IEEE)
Citation: http://ieeexplore.ieee.org/document/7745086/?reload=true


Demo in Japanese:



Demo in English:

26
New Users Please Post Here / Re: Hello All
« Last post by Art on March 25, 2017, 07:50:49 PM »
Right on! Nice vid. Perhaps that 20-25 year estimate might be reaching or wishful thinking a bit.  O0
27
New Users Please Post Here / Re: Hello All
« Last post by korrelan on March 25, 2017, 06:22:17 PM »
Haha... I enjoyed the vid... I suppose, after watching that, the bar is not set too high for an AGI to supersede us.

Summer Glau...  :P

 :)
28
New Users Please Post Here / Re: Hello All
« Last post by 8pla.net on March 25, 2017, 05:51:08 PM »
"The question is when is a machine human?"

They already are...




My intention in sharing this video, AI Advancement Now Smart as Humans!! New 2016 Breakthrough! Artificial Robots and Humans, on your new thread was a good one, Agent Smith. It may add humor, but the comparison it makes between machines and humans is still fully on topic here, I think.


Wow, is that a movie prop of actress Summer Glau from the TV show Terminator: The Sarah Connor Chronicles?
29
Robotic science may (or may not) help us keep up with the death of bees
10 March 2017, 12:10 pm

Credit: SCAD

Beginning in 2006, beekeepers became aware that their honeybee populations were dying off at increasingly rapid rates. Scientists are also concerned about the dwindling populations of monarch butterflies. Researchers have been scrambling to come up with explanations and an effective strategy to save both insects or to replicate their pollination functions in agriculture.

Photo: SCAD

Although the Plan Bee drones pictured above are just one SCAD (Savannah College of Art and Design) student’s concept for how a swarm of drones could pollinate an indoor crop, scientists are considering different options for dealing with the crisis, using modern technology to replace living bees with robotic ones. Researchers from the Wyss Institute and the School of Engineering and Applied Sciences at Harvard introduced the first RoboBees in 2013, and other scientists around the world have been researching and designing their own solutions ever since.



Honeybees pollinate almost a third of all the food we consume and, in the U.S., account for more than $15 billion worth of crops every year. Apples, berries, cucumbers and almonds rely on bees for their pollination. Butterflies also pollinate, but less efficiently than bees and mostly they pollinate wildflowers.

The National Academy of Sciences said:

“Honey bees enable the production of no fewer than 90 commercially grown crops as part of the large, commercial, beekeeping industry that leases honey bee colonies for pollination services in the United States.

Although overall honey bee colony numbers in recent years have remained relatively stable and sufficient to meet commercial pollination needs, this has come at a cost to beekeepers who must work harder to counter increasing colony mortality rates.”

Florida and California have been hit especially hard by decreasing bee colony populations. In 2006, California produced nearly twice as much honey as the next state, but in 2011 its honey production fell by nearly half. The recent severe drought in California has become an additional factor driving both its honey yield and bee numbers down, as less rain means fewer flowers available to pollinate.

In the U.S., the Obama Administration created a task force which developed The National Pollinator Health Strategy plan to:

  • Restore honey bee colony health to sustainable levels by 2025.
  • Increase Eastern monarch butterfly populations to 225 million butterflies by year 2020.
  • Restore or enhance seven million acres of land for pollinators over the next five years.
For this story, I wrote to the EPA specialist for bee pollination, asking whether the funding, or the program itself, would continue under the Trump Administration. No answer.

Japan’s National Institute of Advanced Industrial Science and Technology scientists have invented a drone that transports pollen between flowers using horsehair coated in a special sticky gel. And scientists at the Universities of Sheffield and Sussex (UK) are attempting to produce the first accurate model of a honeybee brain, particularly those portions of the brain that enable vision and smell. Then they intend to create a flying robot able to sense and act as autonomously as a bee.

Bottom Line: As novel and technologically interesting as these inventions may be, their costs will need to come close to the present cost of pollination by bees. Or, as biologist Dave Goulson told a Popular Science reporter, “Even if bee bots are really cool, there are lots of things we can do to protect bees instead of replacing them with robots.”

Saul Cunningham, of the Australian National University, confirmed that sentiment by showing that today’s concepts are far from being economically feasible:

“If you think about the almond industry, for example, you have orchards that stretch for kilometres and each individual tree can support 50,000 flowers,” he says. “So the scale on which you would have to operate your robotic pollinators is mind-boggling.”

“Several more financially viable strategies for tackling the bee decline are currently being pursued including better management of bees through the use of fewer pesticides, breeding crop varieties that can self-pollinate instead of relying on cross-pollination, and the use of machines to spray pollen over crops.”


Source: Robohub

To visit any links mentioned, please view the original article; the link is at the top of this post.
30
My AI is really really efficient.

I'm not talking about that thread called How To Generate A Brain which explains why simulation won't be so big.

I'm talking about this - images are the cancer, saving them all every moment and trying to search them is the nightmare.

That causes torture to the storage device's capacity, computation speed, and storage access time.

Guess what my AI does though.

Most of the images it saves are forgotten quickly. Only a few strengthen enough to last. Any new image that is similar to a saved one just updates that saved image toward their common center.
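As a minimal sketch of how that kind of memory could work (my own illustration with made-up thresholds and decay values, not the poster's actual code): each stored image carries a strength that decays over time, and each new frame either merges into a sufficiently similar stored image, nudging it toward the center of everything it has absorbed, or gets stored fresh.

Code:

import numpy as np

class ImageMemory:
    """Sketch of a decaying image memory: similar frames merge toward a running
    center instead of being stored separately. All constants are made up."""

    def __init__(self, similarity_threshold=0.9, decay=0.99, forget_below=0.1):
        self.images = []  # list of (center_vector, strength) pairs
        self.similarity_threshold = similarity_threshold
        self.decay = decay
        self.forget_below = forget_below

    def _similarity(self, a, b):
        # Cosine similarity between flattened frames.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def observe(self, frame):
        frame = frame.astype(np.float32).ravel()
        # Decay every stored image, then forget the ones that got too weak.
        self.images = [(c, s * self.decay) for c, s in self.images]
        self.images = [(c, s) for c, s in self.images if s >= self.forget_below]
        # Merge into the first sufficiently similar stored image, or store a new one.
        for i, (center, strength) in enumerate(self.images):
            if self._similarity(center, frame) >= self.similarity_threshold:
                new_center = (center * strength + frame) / (strength + 1.0)
                self.images[i] = (new_center, strength + 1.0)
                return
        self.images.append((frame, 1.0))

# Demo: one minute of frames at 4 per second, all slight variations of one scene,
# collapses into a handful of stored images instead of 240 separate ones.
base = np.random.rand(32, 32)
memory = ImageMemory()
for _ in range(240):
    memory.observe(base + 0.05 * np.random.rand(32, 32))
print(len(memory.images), "stored images")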

On top of that, my AI only needs to run for a short time to show it off, so that's only a few images saved in that short time.

It only saves 4 sensory images per second not 60. Cameras already capture motion blur.

On top of that, the first parts of my AI don't even require images; in fact, I can skip them a bit. And these first parts will be amazing.

I'm even going to run an evolutionary algorithm on them.

I'm thinking the engine concerns are storage use (e.g. block usage), computation/storage access speed, crashing, missing pieces (e.g. somatosensory receptors), and parallelism. I only remember Blender slowing down/crashing when I had an older computer.

I would think that any program with large storage (e.g. 500 TB) should run without crashing, even if it were slow as a turtle, by saving its memory to storage; it's got the storage, right? That covers ALL of the above except for, e.g., missing somatosensory receptors.

I can check whether Blender has tools that could act as somatosensory receptors.
