
infurl

essay topic in General AI Discussion

"The problem is not that artificial intelligence will get too smart and take over the world, the problem is that it's too stupid and already has."

-- Pedro Domingos (Computer Scientist)

12 Comments | Started August 14, 2017, 08:57:17 am

Eric

Robotis OP2, neural network. An infant approach! in General Project Discussion

Hello AI folks,
My name is Eric and I'm a student in Engineering Physics in Lund, Sweden. I am very interested in machine learning and I've just started learning about how these things work. From what I've understood so far, when you want to teach a program something, you either have
1. A data set with input data and results, which you run through your program to train it, for example the MNIST database: http://yann.lecun.com/exdb/mnist/
2. A simulation environment in which you generate your own data and results, like the examples we've seen where a program learns how to play a video game.

After realising that these are the two usual ways to train, for example, a neural network, I started to wonder whether it could be done differently. I started thinking about a couple of things.
1. Bringing the learning process into the real world: could there be factors that help the learning, or would the environment, as in many other experiments, simply be too complex, with too much data for the learning program to handle?
2. Could we compensate for this environment with supervised learning? Could this affect the quality of the input data?

My idea:
Have a robot like the Robotis OP2 http://en.robotis.com/index/product.php?cate_code=111310
as an agent acting in my home. The inputs would be all the sensor data it can detect and all its movements in the form of walking, falling, getting up and so on. As far as I've understood, I would need something like a cost function that rates how well it has performed.
I'm thinking of trying to model its neural network to get as close as possible to an infant's mind, being entirely impulse-driven (like babies are in their early years). That would mean its objective would change all the time, and therefore so would the cost function(?), which in turn determines the weights of each neuron, right?
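To make the cost-function idea concrete, here is a minimal sketch (hypothetical Python, not tied to any OP2 SDK) of a scalar reward whose meaning changes with the robot's current "impulse"; the sensor names, objectives and penalty values are invented purely for illustration:

```python
# Hypothetical sketch of an impulse-driven reward signal.
# Sensor names, objectives and values are invented for illustration;
# a real OP2 setup would map these onto its actual IMU/joint readings.

def reward(sensors, objective):
    """Score one time step; the active objective can change over time."""
    if objective == "walk_forward":
        r = sensors["forward_velocity"]      # reward forward motion
    elif objective == "stand_up":
        r = sensors["torso_height"]          # reward getting upright
    else:                                    # e.g. "explore"
        r = sensors["novelty"]               # reward unfamiliar states
    if sensors["has_fallen"]:
        r -= 1.0                             # falling is always penalised
    return r
```

Training would then minimise a cost such as the negative expected (discounted) sum of these rewards, so switching the objective effectively swaps the cost function while the network weights carry over.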

I understand this sounds like an extremely difficult task, but do you have any input, given your knowledge of how machine learning actually works?
I'm aiming for a flexible but initially simple system that can modify itself to a greater extent.

Thanks for all the help!

9 Comments | Started August 15, 2017, 07:50:16 pm

dimaria

Forward from Yahoo VR group in General Chat

I posted this e-mail to the other groups I am in. I don't think Hal is the best spokesperson, but then I am always impressed with what you people get your Hals to do.
So I am just going to pass this on to you; I have the group's address and his address.

Thanks,
Les

1 Comment | Started August 15, 2017, 11:01:26 am

Tyler

XKCD Comic : Eclipse Searches in XKCD Comic

Eclipse Searches
14 August 2017, 5:00 am

There were traffic jams for the eclipses in 1970 and 1979, and that was *before* we had the potential for overnight viral social media frenzies.

Source: xkcd.com

Started August 15, 2017, 12:02:30 pm

LOCKSUIT

Determined and at the core of AI. in General Project Discussion

Hello machine.

This is my project thread.

The reason no one is more determined to create AI than me is that only I collect information from everywhere and build a precise hierarchy 24/7. After initialization, it took me only 1 year to discover the field of AI, which is actually well developed. And I noticed it instantly. I instantly recognised the core of AI from my first read. That's how fast my Hierarchy self-corrects me. It has now been 1.5 years since then, and I am here to tell you that I have empirical knowledge that I have the core of AI, and ASI. 100% guarantee!

All of my posts on the forum are in separate threads, mine and yours, but this thread is going to try to hold my next posts together so you can quickly and easily find, follow, and understand all of my work. Anything important I've said elsewhere is on my desktop, so you will hear about it again here. You don't currently have access to my desktop, only my website in its place to make up for it, while this thread is an extension of it. But this thread won't be permanently engraved into my desktop/website, since anything new in this thread will be copied to my desktop/website. Currently my website (and this extension thread) is awaiting my recent work, though I really shouldn't show you all of it.

- Immortal Discoveries

93 Comments | Started March 12, 2017, 04:12:26 am

Marco

Ideas/opinions for troubleshooting exploding output values (DQN) in AI Programming

Hello folks!

While introducing myself, I mentioned that I am working on adding a Deep Q-Network (DQN) implementation to the library ConvNetSharp. I've been facing one particular issue for days now: during training, the output values grow exponentially until they reach negative or positive infinity. For this particular issue I could use some fresh ideas or opinions to help me track down its cause.

So here is some information about the project itself; afterwards (i.e. the next paragraph), I'll list the troubleshooting steps I have taken so far.

For C#, there are not that many promising libraries out there for neural networks. I have already worked with Encog, which is quite convenient, but it provides neither GPU support nor convolutional neural nets. The alternative, which I chose now, is ConvNetSharp. The drawback of that library is the lack of documentation and in-code comments, but it supports CUDA (using managed CUDA). Another option would be to implement some interface between C# and Python, but I don't have a good approach for that; e.g. TCP would most likely turn out to be a bottleneck.

The DQN implementation is adapted from ConvNetJS's deepqlearn.js and a former ConvNetSharp port. For testing my implementation, I created a slot-machine simulation with 3 reels, which are stopped individually by the agent. The agent receives the current 9 slots as input. The available actions are Wait and Stop Reel. A reward is handed out according to the score as soon as all reels are stopped; the best score is 1. If I use the old ConvNetSharp port with its DQN demo, the action values (the output values of the neural net) stay below 1. The same scenario on my implementation, using the most recent version of ConvNetSharp, shows the exponential growth during training.

Here is what I checked so far.

  • Logged inputs, outputs, rewards, experiences (all, except the growing outputs, look fine)
  • Used former DQN ConvNetSharp Demo for testing the slot machine simulation (well, the agent does not come up with a suitable solution, but the outputs do not explode)
  • Varied hyperparameters, such as a very low learning rate or big and small training batches

There are two components that remain vague to me. The regression layer of ConvNetSharp was introduced recently, and I'm not sure whether I'm using the Volume (i.e. tensor) object as intended by its author. As I'm not familiar with the actual implementation details of neural nets, I cannot figure out whether the issue is caused by ConvNetSharp or not. I have been in touch with the author of ConvNetSharp a few times, but still wasn't able to make progress on this issue. Parts of this issue are tracked on GitHub.
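For comparison, here is a minimal sketch (plain Python with NumPy, not ConvNetSharp code; `q_net` and `target_net` are placeholder callables) of the pieces that most often keep DQN action values from diverging: a separate, periodically synchronised target network, termination-aware Bellman targets, and bounded rewards:

```python
import numpy as np

GAMMA = 0.9  # discount factor

def bellman_targets(q_net, target_net, batch):
    """batch: iterable of (state, action, reward, next_state, done) tuples."""
    targets = []
    for state, action, reward, next_state, done in batch:
        reward = float(np.clip(reward, -1.0, 1.0))   # keep the reward scale bounded
        q = np.array(q_net(state), dtype=float)      # current predictions for this state
        if done:
            q[action] = reward                       # no bootstrapping past a terminal step
        else:
            q[action] = reward + GAMMA * np.max(target_net(next_state))
        targets.append(q)
    return np.array(targets)

# Every N training steps, copy q_net's weights into target_net, and clip the
# gradients (or the TD error) to a fixed range during the update.
```

One related thing that may be worth checking in the Volume/regression-layer usage: only the chosen action's output should receive an error signal. If the whole output vector is regressed against bootstrap targets, the values can drift upward quickly.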


It would be great if someone had some fresh ideas for gaining new insight into the underlying issue.

10 Comments | Started August 08, 2017, 12:25:45 pm

Tyler

Bringing neural networks to cellphones in Robotics News

Bringing neural networks to cellphones
18 July 2017, 6:00 pm

In recent years, the best-performing artificial-intelligence systems — in areas such as autonomous driving, speech recognition, computer vision, and automatic translation — have come courtesy of software systems known as neural networks.

But neural networks take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.

Last year, MIT associate professor of electrical engineering and computer science Vivienne Sze and colleagues unveiled a new, energy-efficient computer chip optimized for neural networks, which could enable powerful artificial-intelligence systems to run locally on mobile devices.

Now, Sze and her colleagues have approached the same problem from the opposite direction, with a battery of techniques for designing more energy-efficient neural networks. First, they developed an analytic method that can determine how much power a neural network will consume when run on a particular type of hardware. Then they used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.

The researchers describe the work in a paper they’re presenting next week at the Computer Vision and Pattern Recognition Conference. In the paper, they report that the methods offered as much as a 73 percent reduction in power consumption over the standard implementation of neural networks, and as much as a 43 percent reduction over the best previous method for paring the networks down.

Energy evaluator

Loosely based on the anatomy of the brain, neural networks consist of thousands or even millions of simple but densely interconnected information-processing nodes, usually organized into layers. Different types of networks vary according to their number of layers, the number of connections between the nodes, and the number of nodes in each layer.

The connections between nodes have “weights” associated with them, which determine how much a given node’s output will contribute to the next node’s computation. During training, in which the network is presented with examples of the computation it’s learning to perform, those weights are continually readjusted, until the output of the network’s last layer consistently corresponds with the result of the computation.

“The first thing we did was develop an energy-modeling tool that accounts for data movement, transactions, and data flow,” Sze says. “If you give it a network architecture and the value of its weights, it will tell you how much energy this neural network will take. One of the questions that people had is ‘Is it more energy efficient to have a shallow network and more weights or a deeper network with fewer weights?’ This tool gives us better intuition as to where the energy is going, so that an algorithm designer could have a better understanding and use this as feedback. The second thing we did is that, now that we know where the energy is actually going, we started to use this model to drive our design of energy-efficient neural networks.”

In the past, Sze explains, researchers attempting to reduce neural networks’ power consumption used a technique called “pruning.” Low-weight connections between nodes contribute very little to a neural network’s final output, so many of them can be safely eliminated, or pruned.
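As a rough illustration of that conventional magnitude-based pruning (this is only a sketch of the general idea, not the MIT tool or the energy-aware method described below), removing the lowest-magnitude connections in a layer can look like this:

```python
import numpy as np

def prune_by_magnitude(weights, keep_fraction=0.5):
    """Zero out the smallest-magnitude connections in one layer's weight matrix."""
    cutoff = np.percentile(np.abs(weights), (1.0 - keep_fraction) * 100)  # cut-off magnitude
    mask = np.abs(weights) >= cutoff                                      # connections to keep
    return weights * mask, mask
```

After pruning, the network is typically fine-tuned briefly so the remaining weights compensate for the removed connections.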

Principled pruning

With the aid of their energy model, Sze and her colleagues — first author Tien-Ju Yang and Yu-Hsin Chen, both graduate students in electrical engineering and computer science — varied this approach. Although cutting even a large number of low-weight connections can have little effect on a neural net’s output, cutting all of them probably would, so pruning techniques must have some mechanism for deciding when to stop.

The MIT researchers thus begin pruning those layers of the network that consume the most energy. That way, the cuts translate to the greatest possible energy savings. They call this method “energy-aware pruning.”

Weights in a neural network can be either positive or negative, so the researchers’ method also looks for cases in which connections with weights of opposite sign tend to cancel each other out. The inputs to a given node are the outputs of nodes in the layer below, multiplied by the weights of their connections. So the researchers’ method looks not only at the weights but also at the way the associated nodes handle training data. Only if groups of connections with positive and negative weights consistently offset each other can they be safely cut. This leads to more efficient networks with fewer connections than earlier pruning methods did.

"Recently, much activity in the deep-learning community has been directed toward development of efficient neural-network architectures for computationally constrained platforms,” says Hartwig Adam, the team lead for mobile vision at Google. “However, most of this research is focused on either reducing model size or computation, while for smartphones and many other devices energy consumption is of utmost importance because of battery usage and heat restrictions. This work is taking an innovative approach to CNN [convolutional neural net] architecture optimization that is directly guided by minimization of power consumption using a sophisticated new energy estimation tool, and it demonstrates large performance gains over computation-focused methods. I hope other researchers in the field will follow suit and adopt this general methodology to neural-network-model architecture design."

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Started August 14, 2017, 12:00:25 pm

keghn

Deep Q Learning for Video Games - The Math of Intelligence #9 in General AI Discussion


Deep Q Learning for Video Games - The Math of Intelligence #9

3 Comments | Started August 13, 2017, 05:48:49 pm

Zero

Who's afraid of C? in AI Programming

A nice book about C11: http://icube-icps.unistra.fr/img_auth.php/d/db/ModernC.pdf
And a nice DevKit: http://www.smorgasbordet.com/pellesc/

7 Comments | Started August 12, 2017, 03:47:49 pm

Tyler

Watch 3-D movies at home, sans glasses in Robotics News

Watch 3-D movies at home, sans glasses
12 July 2017, 2:00 pm

While 3-D movies continue to be popular in theaters, they haven’t made the leap to our homes just yet — and the reason rests largely on the ridge of your nose.

Ever wonder why we wear those pesky 3-D glasses? Theaters generally either use special polarized light or project a pair of images that create a simulated sense of depth. To actually get the 3-D effect, though, you have to wear glasses, which have proven too inconvenient to create much of a market for 3-D TVs.

But researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aim to change that with “Home3D,” a new system that allows users to watch 3-D movies at home without having to wear special glasses.

Home3D converts traditional 3-D movies from stereo into a format that’s compatible with so-called “automultiscopic displays.” According to postdoc Petr Kellnhofer, these displays are rapidly improving in resolution and show great potential for home theater systems.

“Automultiscopic displays aren’t as popular as they could be because they can’t actually play the stereo formats that traditional 3-D movies use in theaters,” says Kellnhofer, who was the lead author on a paper about Home3D that he will present at this month’s SIGGRAPH computer graphics conference in Los Angeles. “By converting existing 3-D movies to this format, our system helps open the door to bringing 3-D TVs into people’s homes.”

Home3D can run in real-time on a graphics-processing unit (GPU), meaning it could run on a system such as an Xbox or a PlayStation. The team says that in the future Home3D could take the form of a chip that could be put into TVs or media players such as Google’s Chromecast.

The team’s algorithms for Home3D also let users customize the viewing experience, dialing up or down the desired level of 3-D for any given movie. In a user study involving clips from movies including “The Avengers” and “Big Buck Bunny,” participants rated Home3D videos as higher quality 60 percent of the time, compared to 3-D videos converted with other approaches.

Kellnhofer wrote the paper with MIT professors Fredo Durand, William Freeman, and Wojciech Matusik, as well as postdoc Pitchaya Sitthi-Amorn, former CSAIL postdoc Piotr Didyk, and former master’s student Szu-Po Wang '14 MNG '16. Didyk is now at Saarland University and the Max-Planck Institute in Germany.

How it works

Home3D converts 3-D movies from “stereoscopic” to “multiview” video, which means that, rather than showing just a pair of images, the screen displays three or more images that simulate what the scene looks like from different locations. As a result, each eye perceives what it would see while really being at a given location inside the scene. This allows the brain to naturally compute the depth in the image.

Existing techniques for converting 3-D movies have major limitations. So-called “phase-based rendering” is fast, high-resolution, and largely accurate, but it doesn't perform well when the left-eye and right-eye images are too different from each other. Meanwhile, “depth image-based rendering” is much better at managing those differences, but it has to run at a low-resolution that can sometimes lose small details. (One assumption it makes is that each pixel has only one depth value, which means that it can’t reproduce effects such as transparency and motion blur.)

The CSAIL team's key innovation is a new algorithm that combines elements of these two techniques. Kellnhofer says the algorithm can handle larger left/right differences than phase-based approaches, while also resolving issues such as depth of focus and reflections that can be challenging for depth-image-based approaches.

“The researchers have used several clever algorithmic tricks to reduce a lot of the artifacts that previous algorithms suffered from, and they made it work in real-time,” says Gordon Wetzstein, an assistant professor of electrical engineering at Stanford University, who was not involved in the research. “This is the first paper that produces extremely high-quality multiview content from existing stereoscopic footage.”

Didyk says that modern TVs are so high-resolution that it can be hard to notice much difference for 2-D content.

“But using them for glasses-free 3-D is a compelling application because it makes great use of the additional pixels these TVs can provide,” Didyk says.

One downside to converting traditional 3-D video to multiview TVs is that the limited resolution can lead to images appearing with duplicates near or around them — a phenomenon referred to as “ghosting.” The team hopes to further hone the algorithm to minimize ghosting, but for now, they say that they are excited that their conversion system has demonstrated the potential for bringing existing 3-D movie content beyond the multiplex.

“Glasses-free 3-D TV is often considered a chicken-and-egg problem,” Wetzstein says. “Without the content, who needs good 3-D TV display technology? Without the technology, who would produce high-quality content? This research partly solves the lack-of-content problem, which is really exciting.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Started August 13, 2017, 12:01:16 pm
The Conversational Interface: Talking to Smart Devices

The Conversational Interface: Talking to Smart Devices in Books

This book provides a comprehensive introduction to the conversational interface, which is becoming the main mode of interaction with virtual personal assistants, smart devices, various types of wearables, and social robots. The book consists of four parts: Part I presents the background to conversational interfaces, examining past and present work on spoken language interaction with computers; Part II covers the various technologies required to build a conversational interface, along with practical chapters and exercises using open source tools; Part III looks at interactions with smart devices, wearables, and robots, and then goes on to discuss the role of emotion and personality in the conversational interface; Part IV examines methods for evaluating conversational interfaces and discusses future directions.

Aug 17, 2017, 02:51:19 am
Explained: Neural networks

Explained: Neural networks in Articles

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years.

Jul 26, 2017, 23:42:33 pm
It's Alive

It's Alive in Chatbots - English

[Messenger] Enjoy making your bot with our user-friendly interface. No coding skills necessary. Publish your bot in a click.

Once LIVE on your Facebook Page, it is integrated within the “Messages” of your page. This means your bot is allowed (or not) to interact with and answer people who contact you through the private “Messages” feature of your Facebook Page, or directly through the Messenger app. You can view all the conversations directly in your Facebook account. It also means that no one needs to download an app, and messages are sent directly as notifications to your users.

Jul 11, 2017, 17:18:27 pm
Star Wars: The Last Jedi

Star Wars: The Last Jedi in Robots in Movies

Star Wars: The Last Jedi (also known as Star Wars: Episode VIII – The Last Jedi) is an upcoming American epic space opera film written and directed by Rian Johnson. It is the second film in the Star Wars sequel trilogy, following Star Wars: The Force Awakens (2015).

Having taken her first steps into a larger world, Rey continues her epic journey with Finn, Poe and Luke Skywalker in the next chapter of the saga.

Release date : December 2017

Jul 10, 2017, 10:39:45 am
Alien: Covenant

Alien: Covenant in Robots in Movies

In 2104 the colonization ship Covenant is bound for a remote planet, Origae-6, with two thousand colonists and a thousand human embryos onboard. The ship is monitored by Walter, a newer synthetic physically resembling the earlier David model, albeit with some modifications. A stellar neutrino burst damages the ship, killing some of the colonists. Walter orders the ship's computer to wake the crew from stasis, but the ship's captain, Jake Branson, dies when his stasis pod malfunctions. While repairing the ship, the crew picks up a radio transmission from a nearby unknown planet, dubbed by Ricks as "planet number 4". Against the objections of Daniels, Branson's widow, now-Captain Oram decides to investigate.

Jul 08, 2017, 05:52:25 am
Black Eyed Peas - Imma Be Rocking That Body

Black Eyed Peas - Imma Be Rocking That Body in Video

For the robots of course...

Jul 05, 2017, 22:02:31 pm
Winnie

Winnie in Assistants

[Messenger] The Chatbot That Helps You Launch Your Website.

Jul 04, 2017, 23:56:00 pm
Conversation, Deception and Intelligence

Conversation, Deception and Intelligence in Articles

A blog dedicated to science, technology, and my interests in music, art, film and especially to Alan Turing for his Imitation Game: a measure for machine intelligence through text-based dialogue.

Jul 04, 2017, 22:29:29 pm
Transformers: The Last Knight

Transformers: The Last Knight in Robots in Movies

Transformers: The Last Knight is a 2017 American science fiction action film based on the toy line of the same name created by Hasbro. It is the fifth installment of the live-action Transformers film series and a direct sequel to 2014's Transformers: Age of Extinction. Directed by Michael Bay, the film features Mark Wahlberg returning from Age of Extinction, along with Josh Duhamel and John Turturro reprising their roles from the first three films, with Anthony Hopkins joining the cast.

Humans and Transformers are at war, Optimus Prime is gone. The key to saving our future lies buried in the secrets of the past, in the hidden history of Transformers on Earth.

Jun 26, 2017, 03:20:32 am