
infurl

essay topic in General AI Discussion

"The problem is not that artificial intelligence will get too smart and take over the world, the problem is that it's too stupid and already has."

-- Pedro Domingos (Computer Scientist)

13 Comments | Started August 14, 2017, 08:57:17 am

Eric

Robotis OP2, neural network. An infant approach! in General Project Discussion

Hello AI folks,
My name is Eric and I'm a student in engineering physics in Lund, Sweden. I'm very interested in machine learning and have just started learning how these things work. So far I've understood that when you want to teach a program something, you either have:
1. A data set of inputs and expected results that you run through your program to train it, for example the MNIST data http://yann.lecun.com/exdb/mnist/base
2. A simulation environment in which you generate your own data and results, like the examples we've seen where a program learns to play a video game.

After recognising these two ways to train, say, a neural network, I started to wonder whether it could be done differently. A couple of things I've been thinking about:
1. Bringing the learning process into the real world: could there be factors that help the learning, or would it just be, as in many other experiments, too complex an environment with too much data for the learning program to handle?
2. Could we compensate for this environment with supervised learning? How would that affect the quality of the input data?

My idea:
Have a robot like the Robotis OP2 (http://en.robotis.com/index/product.php?cate_code=111310) as an agent acting in my home. The inputs would be all the sensory data it can detect, and the outputs all its movements: walking, falling, getting up, and so on. As far as I've understood, I would need something like a cost function that rates how well it has performed.
I'm thinking of modelling its neural network to get as close as possible to an infant's mind, entirely impulse-driven (as babies are in their early years). That would mean its objective would change all the time, and therefore so would the cost function(?), which decides the weights of each neuron, right?
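To make the changing-objective idea concrete, here is a minimal Python sketch (all names and numbers are hypothetical, not tied to the OP2's actual sensors or API): the active objective is swapped at runtime, and the cost function scores the same sensor reading differently depending on which objective is in force.

```python
# Hypothetical sketch: a cost function whose objective can change at runtime.
# Sensor state: (torso_height_m, forward_speed_mps) -- invented numbers.

def cost(state, objective):
    height, speed = state
    if objective == "stand":    # penalise deviation from upright height
        return (0.40 - height) ** 2
    elif objective == "walk":   # also penalise being short of a target speed
        return (0.40 - height) ** 2 + (0.30 - speed) ** 2
    raise ValueError(objective)

state = (0.38, 0.05)
print(cost(state, "stand"))   # small: the robot is nearly upright
print(cost(state, "walk"))    # larger: upright, but barely moving
```

Whatever learning rule adjusts the weights would then chase whichever cost is currently active, which is one way to read the "impulse-driven" proposal above.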

I understand this sounds like an extremely difficult task, but do you have any input, given your knowledge of how machine learning actually works?
I'm aiming for a system that is initially simple but flexible, and able to modify itself to a greater extent.

Thanks for all the help!

16 Comments | Started August 15, 2017, 07:50:16 pm

Tyler

XKCD Comic : Eclipse Science in XKCD Comic

Eclipse Science
16 August 2017, 5:00 am

I was thinking of observing stars to verify Einstein's theory of relativity again, but I gotta say, that thing is looking pretty solid at this point.

Source: xkcd.com

Started August 17, 2017, 12:00:36 pm

dimaria

Forward from Yahoo VR group in General Chat

I posted this e-mail to the other groups I am in. I don't think Hal is the best spokesperson, but then I am always impressed with what you people get your Hals to do.
So I am just going to pass this up to you; I have the group's address and his address.

Thanks,
Les

1 Comment | Started August 15, 2017, 11:01:26 am

Tyler

XKCD Comic : Eclipse Searches in XKCD Comic

Eclipse Searches
14 August 2017, 5:00 am

There were traffic jams for the eclipses in 1970 and 1979, and that was *before* we had the potential for overnight viral social media frenzies.

Source: xkcd.com

Started August 15, 2017, 12:02:30 pm

LOCKSUIT

Determined and at the core of AI. in General Project Discussion

Hello machine.

This is my project thread.

The reason no one is more determined to create AI than me is because only I collect information from everywhere and create a precise hierarchy 24/7. After initialization, it only took me 1 year before I discovered the field of AI that is actually well developed. And I instantly noticed it. I instantly noticed the core of AI from my first read. That's how fast my Hierarchy self-corrects me. Now it's been 1.5 years since and I am here to tell you that I have empirical knowledge that I have the core of AI, and ASI! 100% guarantee !

All of my posts on the forum are in separate threads, mine and yours, but this thread is going to try to hold my next posts together so you can quickly and easily find, follow, and understand all of my work. Anything important I've said elsewhere is on my desktop, so you will hear about it again here. You don't currently have access to my desktop, only my website in its place to make up for it, while this thread is an extension of it. But this thread won't be permanently engraved on my desktop/website, since anything new on this thread will be copied to my desktop/website. Currently my website (and this extension thread) is awaiting my recent work, which I really shouldn't show you all of.

- Immortal Discoveries

93 Comments | Started March 12, 2017, 04:12:26 am

Marco

Ideas/opinions for troubleshooting exploding output values (DQN) in AI Programming

Hello folks!

While introducing myself, I mentioned that I'm working on adding a Deep Reinforcement Learning (DQN) implementation to the library ConvNetSharp. I've been facing one particular issue for days now: during training, the output values grow exponentially until they reach negative or positive infinity. For this particular issue I could use some fresh ideas or opinions to help me track down its cause.

So here is some information about the project itself; afterwards (i.e. in the next paragraph) I'll list the troubleshooting steps I've taken. For C#, there are not many promising libraries for neural networks. I've already worked with Encog, which is quite convenient, but it provides neither GPU support nor convolutional neural nets. The alternative I chose is ConvNetSharp. Its drawback is the lack of documentation and in-code comments, but it supports CUDA (using managed CUDA). Another option would be some interface between C# and Python, but I don't have a good approach for that, e.g. TCP would most likely turn out to be a bottleneck.

The DQN implementation is adapted from ConvNetJS's deepqlearn.js and a former ConvNetSharp port. For testing my implementation, I created a slot machine simulation with 3 reels, which are stopped individually by the agent. The agent receives the current 9 slots as input. The available actions are Wait and Stop Reel. A reward is handed out according to the score as soon as all reels are stopped; the best score is 1. If I use the old ConvNetSharp port with its DQN demo, the action values (the neural net's outputs) stay below 1. The same scenario on my implementation, using the most recent version of ConvNetSharp, shows the exponential growth during training.

Here is what I checked so far.

  • Logged inputs, outputs, rewards, and experiences (all look fine, except the growing outputs)
  • Used the former ConvNetSharp DQN demo to test the slot machine simulation (the agent doesn't find a suitable solution, but the outputs don't explode)
  • Varied hyperparameters, such as a very low learning rate and big or small training batches

There are two components that are unclear to me. The regression layer of ConvNetSharp was introduced recently, and I'm not sure if I'm using the Volume (i.e. tensor) object as its author intended. As I'm not familiar with the actual implementation details of neural nets, I cannot figure out whether the issue is caused by ConvNetSharp or not. I've been in touch with the author of ConvNetSharp a few times, but still haven't been able to make progress. Parts of this issue are tracked on GitHub.
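Two stabilisers that are commonly tried against exploding Q-values are clipping the TD error (Huber-style) and bootstrapping from a periodically synced target network. This is a tabular Python sketch of both, not ConvNetSharp code; whether either applies to the regression-layer issue above is an open question.

```python
# Hypothetical tabular sketch of two common DQN stabilisers:
# (1) clip the TD error so one bad sample can't blow up the update,
# (2) bootstrap from a frozen "target" table, synced only periodically.

GAMMA, LR, CLIP = 0.95, 0.1, 1.0

def td_update(q, q_target, s, a, r, s_next, done):
    bootstrap = 0.0 if done else GAMMA * max(q_target[s_next])
    td_error = r + bootstrap - q[s][a]
    td_error = max(-CLIP, min(CLIP, td_error))  # clipped TD error
    q[s][a] += LR * td_error

q = [[0.0, 0.0] for _ in range(3)]   # 3 states x 2 actions
q_target = [row[:] for row in q]     # frozen copy, synced rarely

td_update(q, q_target, s=0, a=1, r=100.0, s_next=1, done=False)
print(q[0][1])   # 0.1, not 10.0: the huge reward cannot blow up the table
```

In a network-based DQN the same ideas show up as a Huber loss (instead of plain squared error) and a separate target network; unbounded squared-error regression on large rewards is a classic source of diverging outputs.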


It would be great if someone has some fresh ideas for getting new insights about the underlying issue.

10 Comments | Started August 08, 2017, 12:25:45 pm

Tyler

Bringing neural networks to cellphones in Robotics News

Bringing neural networks to cellphones
18 July 2017, 6:00 pm

In recent years, the best-performing artificial-intelligence systems — in areas such as autonomous driving, speech recognition, computer vision, and automatic translation — have come courtesy of software systems known as neural networks.

But neural networks take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.

Last year, MIT associate professor of electrical engineering and computer science Vivienne Sze and colleagues unveiled a new, energy-efficient computer chip optimized for neural networks, which could enable powerful artificial-intelligence systems to run locally on mobile devices.

Now, Sze and her colleagues have approached the same problem from the opposite direction, with a battery of techniques for designing more energy-efficient neural networks. First, they developed an analytic method that can determine how much power a neural network will consume when run on a particular type of hardware. Then they used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.

The researchers describe the work in a paper they’re presenting next week at the Computer Vision and Pattern Recognition Conference. In the paper, they report that the methods offered as much as a 73 percent reduction in power consumption over the standard implementation of neural networks, and as much as a 43 percent reduction over the best previous method for paring the networks down.

Energy evaluator

Loosely based on the anatomy of the brain, neural networks consist of thousands or even millions of simple but densely interconnected information-processing nodes, usually organized into layers. Different types of networks vary according to their number of layers, the number of connections between the nodes, and the number of nodes in each layer.

The connections between nodes have “weights” associated with them, which determine how much a given node’s output will contribute to the next node’s computation. During training, in which the network is presented with examples of the computation it’s learning to perform, those weights are continually readjusted, until the output of the network’s last layer consistently corresponds with the result of the computation.
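The readjustment loop described above can be illustrated with a deliberately tiny example (not from the paper): a single node with two inputs learns weights matching a target function by repeated small corrections proportional to the error.

```python
# Minimal illustration of training-time weight adjustment: one "node"
# with two inputs learns weights so its output matches a target function
# (here: out = 2*x1 + 3*x2) via small error-proportional corrections.

data = [((1.0, 0.0), 2.0), ((0.0, 1.0), 3.0), ((1.0, 1.0), 5.0)]
w = [0.0, 0.0]
lr = 0.1

for _ in range(200):                  # training passes over the examples
    for (x1, x2), target in data:
        out = w[0] * x1 + w[1] * x2   # the node's weighted sum
        err = target - out
        w[0] += lr * err * x1         # readjust each weight
        w[1] += lr * err * x2

print([round(v, 3) for v in w])   # converges close to [2.0, 3.0]
```

Real networks have millions of such weights and nonlinear nodes, but the principle, nudging weights until outputs consistently match the examples, is the same one the article describes.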

“The first thing we did was develop an energy-modeling tool that accounts for data movement, transactions, and data flow,” Sze says. “If you give it a network architecture and the value of its weights, it will tell you how much energy this neural network will take. One of the questions that people had is ‘Is it more energy efficient to have a shallow network and more weights or a deeper network with fewer weights?’ This tool gives us better intuition as to where the energy is going, so that an algorithm designer could have a better understanding and use this as feedback. The second thing we did is that, now that we know where the energy is actually going, we started to use this model to drive our design of energy-efficient neural networks.”

In the past, Sze explains, researchers attempting to reduce neural networks’ power consumption used a technique called “pruning.” Low-weight connections between nodes contribute very little to a neural network’s final output, so many of them can be safely eliminated, or pruned.
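Magnitude-based pruning, as described above, can be sketched in a few lines of Python (an illustrative toy, not the researchers' code): weights close to zero are simply removed.

```python
# Hypothetical sketch of magnitude-based pruning: connections whose
# weights are near zero contribute little, so they are cut (set to 0).

def prune(weights, threshold):
    return [w if abs(w) >= threshold else 0.0 for w in weights]

layer = [0.8, -0.03, 0.5, 0.01, -0.9, 0.02]
pruned = prune(layer, threshold=0.05)
print(pruned)                       # [0.8, 0.0, 0.5, 0.0, -0.9, 0.0]
print(sum(w != 0 for w in pruned))  # 3 connections survive
```

The open question the next section addresses is when to stop: cutting too few connections saves little energy, while cutting too many degrades the network's output.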

Principled pruning

With the aid of their energy model, Sze and her colleagues — first author Tien-Ju Yang and Yu-Hsin Chen, both graduate students in electrical engineering and computer science — varied this approach. Although cutting even a large number of low-weight connections can have little effect on a neural net’s output, cutting all of them probably would, so pruning techniques must have some mechanism for deciding when to stop.

The MIT researchers thus begin pruning those layers of the network that consume the most energy. That way, the cuts translate to the greatest possible energy savings. They call this method “energy-aware pruning.”

Weights in a neural network can be either positive or negative, so the researchers’ method also looks for cases in which connections with weights of opposite sign tend to cancel each other out. The inputs to a given node are the outputs of nodes in the layer below, multiplied by the weights of their connections. So the researchers’ method looks not only at the weights but also at the way the associated nodes handle training data. Only if groups of connections with positive and negative weights consistently offset each other can they be safely cut. This leads to more efficient networks with fewer connections than earlier pruning methods did.
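The offsetting-weights idea can be made concrete with a toy check (an illustration of the stated criterion, not the authors' algorithm): a pair of opposite-sign connections is a candidate for joint removal only if their combined contribution stays near zero across the training inputs.

```python
# Hypothetical sketch of the "offsetting weights" test: two connections
# with opposite-sign weights can be cut together only if their combined
# contribution stays near zero over the training data.

def combined_contribution(w_a, w_b, inputs):
    return [w_a * a + w_b * b for a, b in inputs]

inputs = [(1.0, 1.1), (2.0, 1.9), (0.5, 0.6)]  # upstream node outputs
contrib = combined_contribution(0.7, -0.7, inputs)
print(all(abs(c) < 0.1 for c in contrib))  # True: the pair offsets, cut both
```

Checking actual node activations rather than weights alone is what lets this method remove more connections than plain magnitude pruning.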

"Recently, much activity in the deep-learning community has been directed toward development of efficient neural-network architectures for computationally constrained platforms,” says Hartwig Adam, the team lead for mobile vision at Google. “However, most of this research is focused on either reducing model size or computation, while for smartphones and many other devices energy consumption is of utmost importance because of battery usage and heat restrictions. This work is taking an innovative approach to CNN [convolutional neural net] architecture optimization that is directly guided by minimization of power consumption using a sophisticated new energy estimation tool, and it demonstrates large performance gains over computation-focused methods. I hope other researchers in the field will follow suit and adopt this general methodology to neural-network-model architecture design."

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Started August 14, 2017, 12:00:25 pm

keghn

Deep Q Learning for Video Games - The Math of Intelligence #9 in General AI Discussion


Deep Q Learning for Video Games - The Math of Intelligence #9

3 Comments | Started August 13, 2017, 05:48:49 pm

Zero

Who's afraid of C? in AI Programming

A nice book about C11 : http://icube-icps.unistra.fr/img_auth.php/d/db/ModernC.pdf
And a nice DevKit : http://www.smorgasbordet.com/pellesc/

7 Comments | Started August 12, 2017, 03:47:49 pm
The Conversational Interface: Talking to Smart Devices

The Conversational Interface: Talking to Smart Devices in Books

This book provides a comprehensive introduction to the conversational interface, which is becoming the main mode of interaction with virtual personal assistants, smart devices, various types of wearables, and social robots. The book consists of four parts: Part I presents the background to conversational interfaces, examining past and present work on spoken language interaction with computers; Part II covers the various technologies that are required to build a conversational interface along with practical chapters and exercises using open source tools; Part III looks at interactions with smart devices, wearables, and robots, and then goes on to discusses the role of emotion and personality in the conversational interface; Part IV examines methods for evaluating conversational interfaces and discusses future directions. 

Aug 17, 2017, 02:51:19 am
Explained: Neural networks

Explained: Neural networks in Articles

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years.

Jul 26, 2017, 23:42:33 pm
It's Alive

It's Alive in Chatbots - English

[Messenger] Enjoy making your bot with our user-friendly interface. No coding skills necessary. Publish your bot in a click.

Once LIVE on your Facebook Page, it is integrated within the “Messages” of your page. This means your bot is allowed (or not) to interact with and answer people who contact you through the private “Messages” feature of your Facebook Page, or directly through the Messenger App. You can view all the conversations directly in your Facebook account. This also means that no one needs to download an app, and messages are sent directly as notifications to your users.

Jul 11, 2017, 17:18:27 pm
Star Wars: The Last Jedi

Star Wars: The Last Jedi in Robots in Movies

Star Wars: The Last Jedi (also known as Star Wars: Episode VIII – The Last Jedi) is an upcoming American epic space opera film written and directed by Rian Johnson. It is the second film in the Star Wars sequel trilogy, following Star Wars: The Force Awakens (2015).

Having taken her first steps into a larger world, Rey continues her epic journey with Finn, Poe and Luke Skywalker in the next chapter of the saga.

Release date : December 2017

Jul 10, 2017, 10:39:45 am
Alien: Covenant

Alien: Covenant in Robots in Movies

In 2104 the colonization ship Covenant is bound for a remote planet, Origae-6, with two thousand colonists and a thousand human embryos onboard. The ship is monitored by Walter, a newer synthetic physically resembling the earlier David model, albeit with some modifications. A stellar neutrino burst damages the ship, killing some of the colonists. Walter orders the ship's computer to wake the crew from stasis, but the ship's captain, Jake Branson, dies when his stasis pod malfunctions. While repairing the ship, the crew picks up a radio transmission from a nearby unknown planet, dubbed by Ricks as "planet number 4". Against the objections of Daniels, Branson's widow, now-Captain Oram decides to investigate.

Jul 08, 2017, 05:52:25 am
Black Eyed Peas - Imma Be Rocking That Body

Black Eyed Peas - Imma Be Rocking That Body in Video

For the robots of course...

Jul 05, 2017, 22:02:31 pm
Winnie

Winnie in Assistants

[Messenger] The Chatbot That Helps You Launch Your Website.

Jul 04, 2017, 23:56:00 pm
Conversation, Deception and Intelligence

Conversation, Deception and Intelligence in Articles

A blog dedicated to science, technology, and my interests in music, art, film and especially to Alan Turing for his Imitation Game: a measure for machine intelligence through text-based dialogue.

Jul 04, 2017, 22:29:29 pm
Transformers: The Last Knight

Transformers: The Last Knight in Robots in Movies

Transformers: The Last Knight is a 2017 American science fiction action film based on the toy line of the same name created by Hasbro. It is the fifth installment of the live-action Transformers film series and a direct sequel to 2014's Transformers: Age of Extinction. Directed by Michael Bay, the film features Mark Wahlberg returning from Age of Extinction, along with Josh Duhamel and John Turturro reprising their roles from the first three films, with Anthony Hopkins joining the cast.

Humans and Transformers are at war, Optimus Prime is gone. The key to saving our future lies buried in the secrets of the past, in the hidden history of Transformers on Earth.

Jun 26, 2017, 03:20:32 am