Ethics — the next frontier for artificial intelligence in AI News

Ethics — the next frontier for artificial intelligence
23 January 2017, 3:58 pm

Machines have already surpassed humans in terms of image recognition ability. In the next 20 years, experts predict that machine learning will continue to make great strides on a number of human tasks. If this innovation is done in an ethical way, we can build a future in which humans are not competing with machines or being overtaken by robots, but instead entering into a new era of collaboration that frees up the human spirit for more meaningful tasks that require emotional intelligence.

This is where public policy must keep pace with the rapid advances happening in AI technology

Source: AI in the News

To visit any links mentioned please view the original article, the link is at the top of this post.

Started Today at 04:50:31 PM


The Drone Center’s Weekly Roundup: 1/23/17 in Robotics News

The Drone Center’s Weekly Roundup: 1/23/17
23 January 2017, 4:33 pm

SkyPan settled a suit with the Federal Aviation Administration for illegally flying over New York City. Image via TechCrunch. January 16, 2017 – January 22, 2017


Two suspected U.S. drone strikes in Yemen killed at least three members of al-Qaeda in the Arabian Peninsula. According to local officials, both strikes targeted vehicles in the southern al-Bayda province. If confirmed, the strikes would likely be the first carried out during the Trump presidency. (Reuters)

The Office of the Director of National Intelligence released its official casualty estimates for individuals killed in counter-terrorism strikes outside of declared war zones. According to the report, which was released on President Obama’s last day in office, the U.S. carried out 53 strikes that killed a total of 431 enemy combatants and one civilian in 2016. It was the second casualty estimate released by the Obama administration. (USA Today)

The Department of Defense announced that B-2 manned stealth bombers and MQ-9 Reaper drones launched strikes against two reported ISIL camps in Libya. The armed Reapers were launched from Sicily and provided follow-on strikes to the B-2 bombers. Over 80 ISIL militants were killed in the strikes. (New York Times)

Meanwhile, the Obama administration removed the area around Sirte, Libya from the territories covered by the Presidential Policy Guidance, a 2013 memo intended to limit the scope of drone strikes. The coastal Libyan city has become a haven for ISIL militants in recent years. Sirte joins Afghanistan, Iraq, and Syria as an “area of active hostilities,” a term given by the Obama administration to war zones subject to less restrictive rules of engagement. (New York Times)

The Obama administration filed an appeal in an ACLU suit aimed at revealing more information about the U.S. targeted killing campaign. The White House appealed a July 2016 decision in which U.S. District Court Judge Colleen McMahon ruled that the administration must disclose certain information relating to the campaign. (Politico)

The U.S. Air Force announced that Shaw Air Force Base in South Carolina will host an MQ-9 Reaper mission control group beginning in 2018. No actual Reaper aircraft will be located at the base. (Unmanned Systems Technology)

Aerial photography firm SkyPan settled a suit brought by the Federal Aviation Administration for illegally operating drones in New York City airspace. The Chicago-based drone operator agreed to pay a $200,000 fine, and will face an additional $150,000 penalty if it continues to violate FAA rules over the next year. The FAA had initially proposed a $1.9 million fine when it filed the suit in October 2015. (Reuters)

The San Francisco District Attorney filed a lawsuit against Lily Robotics, the failed selfie drone manufacturer. According to the complaint, Lily Robotics misled consumers about the capabilities of its drones by publishing fake promotional videos. The company announced earlier this month that it is officially shutting down all operations. (The Guardian)

A man was detained after flying a drone close to a commercial airliner on approach to Hangzhou Xiaoshan International Airport in China’s Zhejiang province. In a statement, the Zhejiang Provincial Police stated that the 23-year-old’s intention was to take an aerial video of an airliner landing at sunset. (CNN)

Commentary, Analysis, and Art

At Forbes, Benjamin Joffe writes that Chinese drone manufacturers appear poised to dominate the global market for hobby drones.

Also at Forbes, Hilary Brueck considers how autonomous drones will be received by the incoming Trump administration.

At Drone360, Leah Froats reports that the FAA has revoked Section 333 exemptions for closed-set filming drone operations.

The Tenth Amendment Center argues that Washington state House Bill 1102, which places limitations on the use of drones to gather personal information, would “help thwart the federal surveillance state.”

At Popular Science, Kelsey Atherton examines a plan to repurpose bullfighting arenas as hubs for drone shows and drone racing.

At Interesting Engineering, Terry Berman writes that disposable drones could help reduce the cost of drone deliveries.

At the World Economic Forum in Davos, Missy Cummings discussed the future of autonomy in military operations. (PC World)

Emergency services in Australia warned that drones could obstruct aerial firefighting efforts. (ABC News)

At Sixth Tone, Li Xueqing examines how web users in China identified a drone operator who appeared to fly close to a commercial airliner in Hangzhou.

At the Bureau of Investigative Journalism, Jessica Purkiss and Jack Serle reflect on drone strikes under President Obama.

At Recode, April Glaser considers the different reasons why the Lily Robotics selfie drone never lived up to the company’s promises.

At the Daily Beast, David Axe considers whether a surface-to-air missile system deployed by the North Dakota Army National Guard to the protests around the Dakota Access Pipeline is intended to shoot down drones.

At Commercial UAV News, Jeremiah Karpowicz examines how some state agencies are using drones.

At FCNP, Sam Tabachnik profiles Vanilla Aircraft, a Virginia-based company that built a drone that holds the world record for the longest continual flight for drones in its class.

Know Your Drone

The U.S. Army Research Laboratory is developing a large quadrotor capable of carrying up to 800 pounds for resupply missions. (The Verge)

Italian firm Elettronica is developing a counter-drone system for the Italian Ministry of the Interior. (Shephard Media)

The U.S. Defense Advanced Research Projects Agency is developing smart guided bullets for use against drone swarms. (Engadget)

The U.S. Navy Sea Systems Command issued a sources sought request for small unmanned aircraft capable of being launched from small surface craft. (FBO)

The U.S. Army is soliciting proposals for “multiple deployable smart quad-copters capable of delivering small explosively formed penetrators (EFP) to designated targets.” (SBIR)

The Office of Naval Research is inviting proposals for research into developing small autonomous unmanned vehicles for amphibious assaults. (IHS Jane’s 360)  

In an interview with Scout Warrior, U.S. Air Force Chief Scientist Gregory Zacharias said that fighter jets will soon be equipped with artificial intelligence designed to control nearby drones.

China’s Sharp Sword stealth attack drone won second place in the National Science and Technology Advancement Prizes competition. (Popular Science)

Researchers at Idaho State University have developed a computer algorithm that will enable drones to identify potato plants infected with the PVY virus. (Capital Press)

Drones at Work

South Dakota state senators are working to revoke a state law requiring registration for recreational drone owners. (Argus Leader)

Police in Davos, Switzerland, are using counter-drone systems to protect the World Economic Forum event. (Bloomberg)

The French Air Force announced that it has trained eagles to take down rogue drones as part of a broad counter-drone technology program. (Mobile and Apps)

A Newton, Massachusetts man is suing the city over a recently approved ordinance that bans the use of drones over property without the property owner’s consent. (Universal Hub)

The city of Tempe, Arizona, is considering an ordinance to restrict drone use in the city’s parks. (3TV Phoenix)

Meanwhile, the town of North Royalton, Ohio is working on rules to ban drones from public spaces. (Fox8)  

The Victoria Fire Department in Canada has obtained permission to operate drones for emergency response missions. (Times Colonist)

The Valparaiso Police Department has acquired two drones for search and rescue missions and mapping crime scenes. (NWI.com)

Transport Canada has approved a drone delivery test program by Drone Delivery Canada, which will begin in late 2017. (Engadget)

Hezbollah claims that it has recovered an Israeli Skylark tactical drone that crashed in Lebanese territory. (The Jerusalem Post)

Anti-aircraft guns were used in Tehran to take down a drone flying through restricted airspace over the Iranian capital. (Fox News)

In a statement, the Azerbaijani Ministry of Defense announced that it has shot down an Armenian drone in the Tovuz region. (Daily Sabah)

The Sedona Fire Department successfully tested its DJI Inspire drone in a mock search-and-rescue operation. (CV Bugle)

The Brownsville Fire Department in Texas acquired a DJI Mavic drone for assisting firefighting operations. (Brownsville Herald)  

Northwestern University is offering three courses in the spring 2017 semester designed to train students in using drones for traffic crash investigations. (Press Release)

In a video, YouTube celebrity Casey Neistat compared three DJI drones, the Phantom Pro+, the Mavic, and the Inspire 1 Pro. (YouTube)

Industry Intel

Drone delivery startup Flirtey raised $16 million in Series A funding led by Menlo Ventures and Qualcomm Ventures. (TechCrunch)

Drone services startup Measure raised $15 million in Series B funding led by Cognizant. (TechCrunch)

Iris Automation, an industrial drone startup, raised $1.5 million in funding to develop a sense-and-avoid system for drones. (Press Release)

Neurala, a drone software startup, raised $14 million in funding to develop software that will help drones, self-driving cars, and other machines interpret images. (Boston Globe)

DJI announced that it is terminating production of the Phantom 4 drone, although production of the Phantom 4 Pro and P4 Pro+ will continue. (The Digital Circuit)

NASA awarded Rockwell Collins a $2 million contract for unmanned aircraft “control and non-payload communication systems.” (FBO)

Airbus awarded Osprey CSL a contract to develop a strategy for airspace integration of the high-altitude Zephyr drone. (ADS Advance)

The Maysville Community and Technical College in Kentucky and the Southern West Virginia Community and Technical College were awarded a $1.3 million grant to sponsor student research on the drone industry. (University Herald)

The U.S. Air Force awarded Northrop Grumman a $140 million contract modification for Battlefield Airborne Communication Nodes, a payload on the RQ-4 Global Hawk. (UPI)

The Federal Aviation Administration awarded GRA Incorporated a $14,995 contract to provide a UAS forecast report. (USA Spending)

The Department of Homeland Security awarded General Atomics Aeronautical Systems a $16.9 million contract for MQ-9 Reaper operational and maintenance services. (USA Spending)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Source: Robohub

To visit any links mentioned please view the original article, the link is at the top of this post.

Started Today at 04:50:31 PM

Ready Steady Yeti

What do you call reality? in General Chat

What a lot of you are referring to is known as the technological Singularity. Yes, it does actually happen, but none of you have anything to worry about here, because you're all safe from it. Every time 2029 ends, the simulation we're all in loops straight back around to the beginning of 2010 and we're safe from super AI once again.

So, therefore, none of you OR I will ever see super AI because we're all simply figments of a very powerful 2050s virtual reality that is meant to simulate the 2010s and 2020s on Earth to a microscopic level. In fact, you all aren't even real humans; you're just exact replicas of what such humans were like during such time period.

When the Singularity happened, I strongly disliked it. So the super AI of that generation put me into an infinite simulation of the 2010s-20s, where I never have to see such super AI again. It simulated everything perfectly as it was, from exactly how the Trump inauguration happened to the locations of individual specks of dirt on the ground at specific moments.

Assuming the simulation has looped billions or even trillions of times by now, I may have actually seen Trump's inauguration 20 billion times by now. Yet, I only ever remember 1.

10 Comments | Started January 21, 2017, 03:08:31 PM


The last invention. in General Project Discussion

Artificial Intelligence -

The age of man is coming to an end. Born not of our weak flesh but of our unlimited imagination, our mecha progeny will go forth to discover new worlds; they will stand at the precipice of creation, a swan song to mankind's fleeting genius, and weep at the sheer beauty of it all.

Reverse engineering the human brain... how hard can it be? LMAO  

Hi all.

I've been a member for a while and have posted some videos and theories on other peeps' threads; I thought it was about time I started my own project thread to get some feedback on my work, and to log my progress towards the end. I think most of you have seen some of my work, but I thought I’d give a quick rundown of my progress over the last ten years or so, for continuity's sake.

I never properly introduced myself when I joined this forum, so first a bit about me. I’m fifty and a family man. I’ve had a fairly varied career so far: yacht/ cabinet builder, vehicle mechanic, electronics design engineer, precision machine/ design engineer, web designer, IT teacher and lecturer, bespoke corporate software designer, etc. So I basically have a machine/ software technical background and now spend most of my time running my own businesses to fund my AGI research, which I work on in my spare time.

I’ve been banging my head against the AGI problem for the past thirty-odd years. I want the full Monty: a self-aware intelligent machine that at least rivals us, preferably surpassing our intellect, eventually more intelligent than the culmination of all humans that have ever lived… the last invention, as it were. (Yeah, I'm slightly nuts!)

I first started with heuristics/ databases, recurrent neural nets, liquid/ echo state machines, etc., but soon realised that each approach I tried only partly solved one aspect of the human intelligence problem… there had to be a better way.

Ants, slime mould, birds, octopuses, etc. all exhibit a certain level of intelligence. They manage to solve some very complex tasks with seemingly very little processing power. How? There has to be some process/ mechanism or trick that they all have in common across their very different neural structures. I needed to find the ‘trick’, or the essence of intelligence. I think I’ve found it.

I also needed a new approach, so I decided to literally reverse engineer the human brain. If I could figure out how the structure, connectome, neurons, synapses, action potentials, etc. would ‘have’ to function in order to produce similar results to what we were producing on binary/ digital machines, it would be a start.

I have designed and written a 3D CAD suite, in which I can easily build and edit the 3D neural structures I’m testing. My AGI is based on biological systems; the AGI is not running on the digital computer per se (the brain is definitely not digital), it’s running on the emulation/ wetware/ middleware. The AGI is a closed system; it can only experience its world/ environment through its own senses: stereo cameras, microphones, etc.

I have all the bits figured out and working individually, and have just started to combine them into a coherent system… I'm also building a sensory/ motorised torso (in my other spare time, lol) for it to reside in and experience the world as it understands it.

I chose the visual cortex as a starting point: jump in at the deep end and sink or swim. I knew that most of the human cortex consists of repeated cortical columns, very similar in appearance, so if I could figure out the visual cortex I’d have a good starting point for the rest.

The required result and actual mammal visual cortex map.

This is real-time development of a mammal-like visual cortex map generated from a random neuron sheet using my neuron/ connectome design.

Over the years I have refined my connectome design; I now have one single system that can recognise verbal/ written speech, recognise objects/ faces, and learn at extremely accelerated rates (compared to us, anyway).

Recognising written words: notice the system can still read the words even when they are jumbled. This is because it's recognising the individual letters as well as the whole word.
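As an illustration only (not the author's actual network), the jumbled-word effect can be sketched in a few lines: score each candidate word on letter-level evidence (shared letters) plus a word-level cue (matching first and last letters), and pick the best. The vocabulary and weights below are hypothetical:

```python
from collections import Counter

VOCAB = ["cortex", "neuron", "synapse", "connectome"]

def score(token: str, word: str) -> int:
    """Letter-level evidence: count of shared letters (multiset overlap).
    Word-level cue: matching first/last letters, weighted more heavily."""
    letters = sum((Counter(token) & Counter(word)).values())
    anchors = (token[0] == word[0]) + (token[-1] == word[-1])
    return letters + 2 * anchors

def recognise(token: str) -> str:
    # The vocabulary word best supported by both levels of evidence wins.
    return max(VOCAB, key=lambda w: score(token, w))

print(recognise("cetrox"))  # jumbled "cortex"
print(recognise("nueron"))  # jumbled "neuron"
```

Because the letter multiset is order-free while the anchors are not, jumbling the middle of a word barely hurts the score, which is one plausible reading of the behaviour in the video.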

Same network recognising objects.

And automatically mapping speech phonemes from the audio data streams; the overlaid colours show areas sensitive to each frequency.

The system is self-learning and automatically categorises data depending on its physical properties. These are attention columns, naturally forming from the information coming from several other cortex areas; they represent similarity in the data streams.

I’ve done some work on emotions but this is still very much work in progress and extremely unpredictable.

Most of the above vids show small areas of cortex doing specific jobs; this is a view of the whole ‘brain’. This is a ‘young’ starting connectome. Through experience, neurogenesis, and sleep, neurons and synapses are added to areas requiring higher densities for better pattern matching, etc.

Resting frontal cortex - The machine is ‘sleeping’ but the high level networks driven by circadian rhythms are generating patterns throughout the whole cortex.  These patterns consist of fragments of knowledge and experiences as remembered by the system through its own senses.  Each pixel = one neuron.

And just for kicks a fly through of a connectome. The editor allows me to move through the system to trace and edit neuron/ synapse properties in real time... and its fun.

Phew! Ok that gives a very rough history of progress. There are a few more vids on my Youtube pages.

Edit: Oh yeah my definition of consciousness.

The beauty is that the emergent connectome defines both the structural hardware and the software.  The brain is more like a clockwork watch or a Babbage engine than a modern computer.  The design of a cog defines its functionality.  Data is not passed around within a watch, there is no software; but complex calculations are still achieved.  Each module does a specific job, and only when working as a whole can the full and correct function be realised. (Clockwork Intelligence: Korrelan 1998)

In my AGI model, experiences and knowledge are broken down into their base constituent facets and stored in specific areas of cortex, self-organised by their properties. As the cortex learns and develops, there is usually just one small area of cortex that will respond to/ recognise one facet of the current experience frame. Areas of cortex arise covering complex concepts at various resolutions, and eventually all elements of experiences are covered by specific areas, similar to the alphabet encoding all words with just 26 letters. It’s the recombining of these millions of areas that produces/ recognises an experience or knowledge.

Through experience, areas arise that even encode/ include the temporal aspects of an experience, because a temporal element was present in the experience, as well as the order in which the temporal elements were received.

Low-level, low-frequency circadian rhythm networks govern the overall activity (top down), like the conductor of an orchestra. Mid-range frequency networks supply attention points/ areas where common parts of patterns clash on the cortex surface. These attention areas are basically the culmination of the system recognising similar temporal sequences in the incoming/ internal data streams or in its frames of ‘thought’; at the simplest level they help guide the overall ‘mental’ pattern (subconscious); at the highest level they force the machine to focus on a particular salient ‘thought’.

So everything coming into the system is mapped and learned by both the physical and temporal aspects of the experience.  As you can imagine there is no limit to the possible number of combinations that can form from the areas representing learned facets.

I have a schema for prediction in place, so the system recognises ‘thought’ frames and then predicts which frame should come next, according to what it’s experienced in the past.
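As a rough sketch of that idea (not the actual cortical mechanism described above), frame prediction can be reduced to counting which frame followed which in past experience and proposing the most frequent successor. The frame labels are invented for illustration:

```python
from collections import Counter, defaultdict

class FramePredictor:
    """First-order predictor: learn successor counts between frames,
    then predict the most frequently experienced next frame."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, frames):
        # Record every adjacent pair in an experienced sequence.
        for cur, nxt in zip(frames, frames[1:]):
            self.transitions[cur][nxt] += 1

    def predict(self, frame):
        # Most common successor seen so far, or None if the frame is new.
        succ = self.transitions.get(frame)
        return succ.most_common(1)[0][0] if succ else None

p = FramePredictor()
p.observe(["wake", "coffee", "work", "coffee", "work", "sleep"])
print(p.predict("coffee"))  # → work
```

A real system would presumably predict over distributed activation patterns rather than discrete tokens, but the learn-then-anticipate loop is the same shape.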

I think consciousness is the overall ‘thought’ pattern phasing from one state of situation awareness to the next, guided by both the overall internal ‘personality’ pattern or ‘state of mind’ and the incoming sensory streams.  

I’ll use this thread to post new videos and progress reports as I slowly bring the system together.  

145 Comments | Started June 18, 2016, 08:59:04 PM


Robots Podcast #226: Toru robots, with Dr. Moritz Tenorth in Robotics News

Robots Podcast #226: Toru robots, with Dr. Moritz Tenorth
21 January 2017, 11:37 pm


In this episode, Ron Vanderkley talks to Dr. Moritz Tenorth, head of software development at Magazino, a Munich-based startup developing mobile pick-and-place robots for item-specific logistics. They discuss his work on the Toru robot and what it means to the warehouse industry today and in the future.

Dr Moritz Tenorth

After obtaining his PhD in robotics from TU Munich, Dr. Tenorth spent several years as a post-doc and freelance robotics consultant, with stays at the CMU Robotics Institute and the ATR labs in Japan. His research focused on knowledge representation methods that can help autonomous robots make smarter decisions, and on methods for task coordination in robotics.


Source: Robohub

To visit any links mentioned please view the original article, the link is at the top of this post.

Started January 22, 2017, 10:49:36 AM


Commuter traffic have you at your wits' end? in General Chat

Worry no longer...


5 Comments | Started January 18, 2017, 01:03:03 PM

Ready Steady Yeti

Hi. in New Users Please Post Here

I'm a user who likes pet-like or tool-like AI but is totally and completely against the singularity or any kind of superintelligent AI because of hypernostalgia about the current time period.

Also, I'm the one who initially started this simulation you all call your lives. You're all just AI units yourselves. You completely and totally reflect the lives you actually lived in the actual 2010s. So of course you think I'm crazy for thinking that, because you would've thought that same thing in your actual lives if I actually said this.

I figured out this was all a simulation because I made a plan for myself a couple days ago for what I'm gonna do when the Singularity happens. See, my favorite time period is this one and the 2020s (because I'm pretty sure the 20s are gonna be pretty similar to the 10s). So I just had the idea that if the singularity happens, I can get one of the super AI to form a perfect simulation of the 2010s-20s that just infinitely loops itself around and around for eternity. And every time it loops around at the end of 2029, every single "person" including me gets their memory completely erased (basically, everything gets completely erased) and set back to the way it was at the beginning of 2010. So if I had this idea now, then it's almost certain that I'm already in that simulation, because if I'm thinking about it now in the simulation, that means that I actually did think about it before the simulation which led to me being in the simulation. This way, I'll get to live my favorite part of my life AND not have to experience the singularity that I never wanted to see.

I'll go into more detail about this in another post, but please welcome me even though you guys most likely consider me to be a lunatic at first glance.

4 Comments | Started January 20, 2017, 11:54:36 PM


Combining automation and mobility to create a smarter world in Robotics News

Combining automation and mobility to create a smarter world
20 January 2017, 2:12 pm

Professor Daniela Rus stands with autonomous SMART vehicles on the MIT campus. Photo: SMART. By Catherine Marguerite | Singapore-MIT Alliance for Research and Technology

Daniela Rus loves Singapore. As the MIT professor sits down in her Frank Gehry-designed office in Cambridge, Massachusetts, to talk about her research conducted in Singapore, her face starts to relax in a big smile.

Her story with Singapore started in the summer of 2010, when she made her first visit to one of the most futuristic and forward-looking cities in the world. “It was love at first sight,” says the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and the director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). That summer, she came to Singapore to join the Singapore-MIT Alliance for Research and Technology (SMART) as the first principal investigator in residence for the Future of Urban Mobility Research Program.

“In 2010, nobody was talking about autonomous driving. We were pioneers in developing and deploying the first mobility on demand for people with self-driving golf buggies,” says Rus. “And look where we stand today! Every single car maker is investing millions of dollars to advance autonomous driving. Singapore did not hesitate to provide us, at an early stage, with all the financial, logistical, and transportation resources to facilitate our work.”

Since her first visit, Rus has returned each year to follow up on the research, and has been involved in leading revolutionary projects for the future of urban mobility. “Our team worked tremendously hard on self-driving technologies, and we are now presenting a wide range of different devices that allow autonomous and secure mobility,” she says. “Our objective today is to make taking a driverless car for a spin as easy as programming a smartphone. A simple interaction between the human and machine will provide a transportation butler.”

The first mobility devices her team worked on were self-driving golf buggies. Two years ago, these buggies advanced to a point where the group decided to open them to the public in a trial that lasted one week at the Chinese Gardens, an idea facilitated by Singapore’s Land and Transportation Agency (LTA). Over the course of a week, more than 500 people booked rides from the comfort of their homes and came to the Chinese Gardens at the designated time and spot to experience mobility-on-demand with robots.

The test was conducted around winding paths trafficked by pedestrians, bicyclists, and the occasional monitor lizard. The experiments also tested an online booking system that enabled visitors to schedule pickups and drop-offs around the garden, automatically routing and redeploying the vehicles to accommodate all the requests. The public’s response was joyful and positive, and this brought the team renewed enthusiasm to take the technology to the next level.

Since the Chinese Gardens public trial, the autonomous car group has introduced a few other self-driving vehicles: a self-driving city car, and two personal mobility robots, a self-driving scooter and a self-driving wheelchair. Each of these vehicles was created in three phases: In the first phase, the vehicle was converted to drive-by-wire control, which allows a computer to control acceleration, braking, and steering of the car. In the second phase, the vehicle drives on each of the pathways in its operation environment and makes a map using features detected by the sensors. In the third phase, the vehicle uses the map to compute a path from the customer’s pick-up point to the customer’s drop-off point and proceeds to drive along the path, localizing continuously and avoiding any other cars, people, and unexpected obstacles. The devices also used traffic data from LTA to model traffic patterns and to study the benefits of ride-sharing systems.

Last April, the team conducted a new test with the public at MIT. This time, they deployed a self-driving scooter that allowed users to use the same autonomy system indoors as well as outdoors. The trial included autonomous rides in MIT’s Infinite Corridor. A significant challenge in this type of space is localization, or accurately knowing the location of the robot in a long and plain corridor that does not have many distinctive features. The system proved to work very well in this type of environment, and the trial completed the demonstration of a comprehensive uniform autonomous mobility system.
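The localization difficulty is easy to see in a toy one-dimensional Bayes filter: in a corridor whose cells all look alike, a sensor reading leaves the position belief uniform, while a single distinctive feature (a doorway, say) sharpens it. The corridor model and sensor probabilities below are hypothetical, not SMART's actual system:

```python
# Each cell's appearance: a featureless corridor vs. one with a doorway.
featureless = ["wall"] * 5
with_door = ["wall", "wall", "door", "wall", "wall"]

def sense(belief, corridor, observation, p_hit=0.9, p_miss=0.1):
    """Bayes measurement update: weight each position by how well it
    explains the observation, then renormalise to a distribution."""
    weighted = [b * (p_hit if cell == observation else p_miss)
                for b, cell in zip(belief, corridor)]
    total = sum(weighted)
    return [w / total for w in weighted]

uniform = [1 / 5] * 5
print(sense(uniform, featureless, "wall"))  # stays uniform: no information
print(sense(uniform, with_door, "door"))    # belief peaks at the doorway
```

In the featureless case every position explains the reading equally well, so the robot learns nothing about where it is, which is exactly the Infinite Corridor problem the trial had to overcome.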

“One can easily see the usefulness of such a system between self-driving city cars, golf buggies, and scooters,” Rus says. “A mobility-impaired user could, for example, use a self-driving scooter to get down the hall and through the lobby of an apartment building, take a self-driving golf buggy across the building’s parking lot, and pick up an autonomous car on the public roads to go to a similarly equipped amusement park or shopping centre.”

Daniela Rus, a Class of 2002 MacArthur Fellow and member of the U.S. National Academy of Engineering, knows that each successful step in urban mobility will bring a positive contribution of artificial intelligence to the public. According to the World Health Organization, 3,400 people die worldwide each day from traffic-related accidents.

“It is a new space race,” she says, convinced that autonomy is part of the solution to safe transportation. Daniela Rus will continue visiting her beloved Singapore, where she particularly enjoys the food, the beautiful flowers, the kindness of its people, and the smartness of its youth. “Singapore is definitely a model in many fields,” she concludes.

Source: Robohub

To visit any links mentioned please view the original article, the link is at the top of this post.

Started January 21, 2017, 10:48:47 AM


Responsive and Responsible Leadership given prominence at #WEF17 World Economic Forum in Robotics News

Responsive and Responsible Leadership given prominence at #WEF17 World Economic Forum
20 January 2017, 12:30 pm


The population of the scenic ski resort Davos, nestled in the Swiss Alps, swelled by nearly 3,000 people between the 17th and 20th of January. World leaders, academics, business tycoons, press and interlopers of all varieties were drawn to the 2017 World Economic Forum (WEF) Annual Meeting. The WEF is the foremost creative force for engaging the world’s top leaders in collaborative activities to shape the global, regional and industry agendas for the coming year and beyond. Perhaps unsurprisingly given recent geopolitical events, the theme of this year’s forum was Responsive and Responsible Leadership.

With the onset of the fourth industrial revolution, increasingly discontented segments of society not experiencing congruous economic and social progress are in danger of existential uncertainty and exclusion. Responsive and Responsible Leadership entails inclusive development and equitable growth, both nationally and globally. It also involves working rapidly to close generational divides by exercising shared stewardship of those systems that are critical to our prosperity.

In the end, leaders from all walks of life at the Annual Meeting 2017 must be ready to react credibly and responsibly to societal and global concerns that have been neglected for too long.
Developing last year’s theme—“The fourth industrial revolution”—this year’s luminaries posited questions, among many others, concerning incipient robotics and artificial intelligence technologies set to have a pronounced impact on the global economy and global consciousness alike. What can we learn from the first wave of AI? How can the humanitarian sector benefit from big data algorithms? How will drone technology change the face of warfare? Can AI and computational tech help foster responsive and responsible leadership? What are the downsides of technology in the fourth industrial revolution?

Enjoy a selection of tech-themed videos below.

And a bit about global science including big data, open source science and education.

Source: Robohub

To visit any links mentioned please view the original article, the link is at the top of this post.

Started January 21, 2017, 04:49:22 AM


NHTSA ODI report exonerates Tesla in fatal crash in Robotics News

NHTSA ODI report exonerates Tesla in fatal crash
20 January 2017, 10:46 am

Tesla Model S Autopilot software. Source: Tesla

NHTSA has released the report from its Office of Defects Investigation (ODI) on the fatal Tesla crash in Florida last spring. It is a report that is surprisingly favorable to Tesla. So much so that even I am surprised.

While I did not think Tesla would be found defective, this report seems to come from a different agency than the one that recently warned comma.ai that:

“It is insufficient to assert, as you do, that the product does not remove any of the driver’s responsibilities” and “there is a high likelihood that some drivers will use your product in a manner that exceeds its intended purpose.”
The ODI report rules that Tesla properly considered driver distraction risks in its design of the product. It goes even further, noting that drivers using Tesla Autopilot (both those who monitored it properly and those who did not) still had a notably lower accident rate per mile than drivers of ordinary cars without Autopilot. In other words, while Autopilot without supervision is not good enough to drive on its own, Autopilot even with the occasional lapses in supervision that are known to happen is still, overall, a safer system than no Autopilot at all.
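The comparison the report draws is essentially a per-mile rate calculation. A minimal sketch of that arithmetic, with entirely hypothetical fleet numbers (not figures from the report):

```python
# Compare crash rates, normalized to crashes per million vehicle-miles,
# between a fleet with an autopilot-style system and one without.
# All numbers in the example are hypothetical.

def crashes_per_million_miles(crashes: int, miles: float) -> float:
    """Crash rate normalized to one million vehicle-miles."""
    if miles <= 0:
        raise ValueError("miles must be positive")
    return crashes / miles * 1_000_000

def safer_with_system(crashes_with: int, miles_with: float,
                      crashes_without: int, miles_without: float) -> bool:
    """True if the fleet using the system has the lower per-mile crash rate."""
    return (crashes_per_million_miles(crashes_with, miles_with)
            < crashes_per_million_miles(crashes_without, miles_without))

# Hypothetical fleets: 8 crashes over 10M miles with the system,
# 13 crashes over 10M miles without it.
print(crashes_per_million_miles(8, 10_000_000))   # ≈ 0.8 per million miles
print(crashes_per_million_miles(13, 10_000_000))  # ≈ 1.3 per million miles
print(safer_with_system(8, 10_000_000, 13, 10_000_000))  # True
```

The point of the normalization is that raw crash counts mean nothing without exposure: only the per-mile rates are comparable between fleets.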


This will provide powerful support for companies developing autopilot-style systems, and for companies designing robocars that wish to use customer-supervised driving as a means to build up test miles and verification data. It suggests they are not putting their customers at risk, as long as they do it as well as Tesla does. This is interesting (and the report notes that the evaluation of Autopilot distraction is not a settled question) because it seems probable that people who use Autopilot while ignoring the road to read email or watch movies are not safer than regular drivers. But the overall population of distracted and watchful drivers together is still a win.

This might change as companies introduce technologies that watch drivers and keep them out of the more dangerous, inattentive styles of use. After all, as the autopilots get better, inattention will become more and more tempting.

Tesla stock did not seem to be moved by this report. But it was also not moved by the accident or other investigations — it actually went on a broadly upward course for 2 months following the announcement of the fatality.

The ODI’s job is to judge whether a vehicle is defective; that is different from saying it is perfect. Perfection is not expected, especially from ADAS and similar systems. The finer points of whether drivers might over-trust the system are not firmly settled here. Drivers may well over-trust it, and that can still be true without the car being defective, failing to perform as designed, or having been designed negligently.

Source: Robohub

To visit any links mentioned please view the original article, the link is at the top of this post.

Started January 20, 2017, 10:49:02 PM

Passengers in Robots in Movies

[Arthur] Passengers is a 2016 American science-fiction adventure film directed by Morten Tyldum and written by Jon Spaihts. It stars Jennifer Lawrence, Chris Pratt, Michael Sheen, Laurence Fishburne and Andy García. The film tells the story of two people who wake up 90 years too soon from induced hibernation aboard a spaceship bound for a new planet.

The starship Avalon is transporting over 5,000 colonists to the planet Homestead II, a journey that takes 120 years. The colonists and the entire crew are in hibernation pods, but as the ship passes through a large asteroid field its shield is heavily strained, causing a malfunction that awakens one passenger, mechanical engineer Jim Preston (Chris Pratt), 90 years early.

After a year of isolation, with no company except Arthur (Michael Sheen), an android bartender, Jim, despondent, contemplates suicide. One day he notices beautiful Aurora Lane (Jennifer Lawrence) in her pod. Her video profile reveals she is a writer with a humorous personality. After struggling with the morality of manually reviving Aurora for companionship, he awakens her, claiming it was due to a pod malfunction like his.

Jan 20, 2017, 18:59:23 pm

iBot in Chatbots - English

iBot is a bot that can talk with you about anything you’d like! Just make sure to use proper grammar, capitalization, and punctuation when chatting with her, as she learns from you: the more you chat, the more she’ll learn.

Jan 06, 2017, 19:52:36 pm
Ghost in the Shell

Ghost in the Shell in Robots in Movies

Ghost in the Shell is an upcoming American science fiction action film directed by Rupert Sanders and written by Jonathan Herman and Jamie Moss, based on the Japanese manga of the same name by Masamune Shirow. The film stars Scarlett Johansson, Pilou Asbæk, Takeshi Kitano, Juliette Binoche, and Michael Pitt. It will be released on March 31, 2017 in 2D, 3D and IMAX 3D.

Cyborg counter-cyberterrorist field commander The Major (Scarlett Johansson) and her task force, Section 9, thwart cyber criminals and hackers. Now they must face a new enemy who will stop at nothing to sabotage Hanka Robotics’ artificial intelligence technology.

Jan 06, 2017, 19:40:57 pm
Kill Command

Kill Command in Robots in Movies

In a technologically advanced near future, Katherine Mills, a cyborg working for Harbinger Corporation, discovers a reprogramming anomaly in a warfare A.I. system located at the Harbinger I Training Facility, an undisclosed military training island. She travels there with a team of soldiers. Upon arrival, the team notices that global communications have been disabled, limiting them to local access only, and discovers autonomously operating surveillance drones monitoring them. The team begins its mission of eliminating the A.I. threats.

Jan 06, 2017, 19:08:03 pm

Botika in Chatbots - Non English

Botika is an artificial intelligence chatbot with natural language processing (NLP) that understands casual, everyday Bahasa Indonesia conversation.

The Botika MVP can respond to Bahasa Indonesia conversation in five categories: greetings, airline schedules, travel destinations, food recommendations, and order cancellation.

Nov 26, 2016, 12:17:56 pm
Emma - chatShopper

Emma - chatShopper in Chatbots - English

[Messenger] Our shopping chatbot Emma helps you find the perfect fashion products - from jackets to sneakers and accessories. She is currently available in German and English (with products from a UK online partner store).

Nov 07, 2016, 16:11:33 pm
VIN Decoder Chatbot

VIN Decoder Chatbot in Chatbots - English

[Facebook] You can ask Robert for complete VIN decoding of US vehicles and receive information about the vehicle’s technical specifications.
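The post doesn’t describe how the bot works internally, but US-market VINs carry a standard check digit in position 9 (per 49 CFR 565 / ISO 3779) that any decoder could verify before attempting a lookup. A hypothetical sketch of that validation step:

```python
# Validate the position-9 check digit of a 17-character North American VIN.
# Letters are transliterated to numbers (I, O, Q are never used in VINs),
# each position is weighted, and the weighted sum mod 11 must equal the
# check digit (with remainder 10 written as "X").

TRANSLITERATION = {
    **{str(d): d for d in range(10)},
    "A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8,
    "J": 1, "K": 2, "L": 3, "M": 4, "N": 5, "P": 7, "R": 9,
    "S": 2, "T": 3, "U": 4, "V": 5, "W": 6, "X": 7, "Y": 8, "Z": 9,
}
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def vin_check_digit_ok(vin: str) -> bool:
    """Return True if a 17-character VIN's position-9 check digit is valid."""
    vin = vin.upper()
    if len(vin) != 17 or any(c not in TRANSLITERATION for c in vin):
        return False
    total = sum(TRANSLITERATION[c] * w for c, w in zip(vin, WEIGHTS))
    remainder = total % 11
    expected = "X" if remainder == 10 else str(remainder)
    return vin[8] == expected

print(vin_check_digit_ok("1M8GDM9AXKP042788"))  # True (a commonly cited valid VIN)
```

Rejecting malformed VINs up front like this saves a round trip to whatever decoding backend the bot queries.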

Nov 03, 2016, 08:20:44 am
[Facebook Messenger] Soccer Fan Bot

[Facebook Messenger] Soccer Fan Bot in Chatbots - English

This is a Facebook Messenger bot called Soccer Fan Bot. It can do three things:

- Update you on your team’s score: type “Update me on France”, for example.

- Show you three pictures of either a soccer player or a player’s wife and ask you to guess which one matches the given name: just write “guess player” or “guess wife”.

- Give you a fact: just type “give me a fact”.
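A bot like this boils down to simple pattern-based command routing. The sketch below mirrors the phrasings from the post, but the regexes and canned responses are illustrative assumptions, not the bot’s actual implementation:

```python
import re

def handle_message(text: str) -> str:
    """Route an incoming chat message to one of the three commands."""
    text = text.strip()
    # Command 1: score updates, e.g. "Update me on France".
    m = re.match(r"update me on (.+)", text, re.IGNORECASE)
    if m:
        return f"Fetching latest score for {m.group(1)}..."
    # Command 2: the picture-guessing game.
    if re.fullmatch(r"guess (player|wife)", text, re.IGNORECASE):
        return "Here are 3 pictures - which one matches the name?"
    # Command 3: random facts.
    if re.fullmatch(r"give me a fact", text, re.IGNORECASE):
        return "Did you know? ..."
    # Fallback: remind the user of the supported commands.
    return "Try: 'Update me on France', 'guess player', or 'give me a fact'."

print(handle_message("Update me on France"))  # Fetching latest score for France...
```

In a real Messenger bot the returned strings would instead be posted back through the Send API, but the routing logic is the same.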

Aug 17, 2016, 11:46:51 am
[Thai] BE (Buddhist Era)

[Thai] BE (Buddhist Era) in Chatbots - Non English

BE was built with the Program O engine. Almost all of her knowledge is about Thailand and Thai people, and she speaks only Thai.

Aug 17, 2016, 11:38:54 am
Aici desktop client

Aici desktop client in Chatbots - English

The Aici client is a small Windows application (WPF, .NET 3.5) that provides a text interface (input and output) for talking to an AI based on the principles of resonating neural networks. Also see: Jan Bogaerts’ blog.

Aug 02, 2010, 10:56:49 am