avatar

LOCKSUIT

If your mom calls you ... don't believe anything in General AI Discussion

Because this:
https://www.youtube.com/watch?v=0sR1rU3gLzQ

Good technology. AI and data science are the thing these days.

3 Comments | Started November 13, 2019, 10:35:50 PM
avatar

Tyler

XKCD Comic : Machine Learning Captcha in XKCD Comic

Machine Learning Captcha
13 November 2019, 5:00 am

[Comic alt text: "More likely: Click on all the pictures of people who appear disloyal to [name of company or government]"]
https://imgs.xkcd.com/comics/machine_learning_captcha.png

Source: xkcd.com

Started November 14, 2019, 12:01:07 PM
avatar

Art

AGI might still become a reality... in AI News

...if gaming/AI guru John Carmack has his way!

John just left his full-time position as CTO of Oculus.

https://www.reddit.com/r/artificial/comments/dw0yad/john_carmack_on_leaving_oculus_as_fulltime_cto_im/

Started November 14, 2019, 03:01:28 AM
avatar

Hopefully Something

Ideas for Alternatives to Logic in General AI Discussion

Picking things apart into definite bits is a waste of energy. AGI shouldn't have to break all curves into countless lines to make sense of them; it should only choose to do that when it's a sensible thing to do. Focusing on the self-defining properties of things could lead to universal, internally consistent predictions. A bit like geometric extensions, matching right triangles, and the like.

Internal consistency creates belief, or at least negates the necessity for suspension of disbelief, leaving us in an ideal state of undistracted observation. It's the bedrock foundation of existence, allowing us to experientially stand in every environment.

Robots have vision, and we have perception, because we are able to layer the broadly consistent, mutually indicated properties of things over the otherwise meaningless data of vision. Images are passwords for memory. The eyes are combination locks for the brain. And the brain is a picky bank (that's picky, not piggy, just making sure...). It resists cluttering itself with unintegratable or weakly bound data, because such facts appear to offer only a few, or uncertain, points of implication to the desired utilitarian world model.

Absolute logic is very specific; you need lots of it to cover the whole world. I feel it is better as a tool than as an operating system, especially for something as broad as AGI.

Anyone have any ideas besides extending internal consistency?

46 Comments | Started October 29, 2019, 04:51:55 AM
avatar

chung

wavtomid chung polyphonic audio wav to midifile converter using neural network in General Project Discussion

wavtomid_chung: I noticed that the spectrum harmonics correction of my wav-to-mid converter has the same structure as a single-hidden-layer neural network, so I tried to generalize it with a randomized optimization algorithm. It seems to work better (or is it just an illusion?)   ::)

=>  http://chungswebsite.blogspot.com/2019/06/wavtomidchung-high-quality-polyphonic.html
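
A minimal sketch of that idea as described (illustrative Python, not the actual converter code; the layer sizes, note count, and data are invented placeholders): a one-hidden-layer network mapping magnitude-spectrum frames to per-note activations, tuned by random perturbation rather than backpropagation.

```python
# Illustrative sketch only: a one-hidden-layer network mapping a
# magnitude-spectrum frame to per-note activations, tuned by randomized
# hill climbing instead of backpropagation. All sizes, data, and names
# are invented placeholders, not the converter's real internals.
import numpy as np

rng = np.random.default_rng(0)
N_BINS, N_HIDDEN, N_NOTES = 512, 64, 88   # spectrum bins, hidden units, notes

def forward(w1, w2, x):
    """Sigmoid hidden layer, linear output layer."""
    h = 1.0 / (1.0 + np.exp(-x @ w1))
    return h @ w2

def loss(w1, w2, frames, targets):
    """Mean squared error over a batch of spectrum frames."""
    return np.mean((forward(w1, w2, frames) - targets) ** 2)

# Toy data standing in for analysed WAV frames and known note targets.
frames = rng.random((200, N_BINS))
targets = rng.random((200, N_NOTES))

w1 = rng.normal(0, 0.1, (N_BINS, N_HIDDEN))
w2 = rng.normal(0, 0.1, (N_HIDDEN, N_NOTES))
best = loss(w1, w2, frames, targets)

for step in range(1000):
    # Randomized optimization: perturb all weights a little and keep
    # the change only if the error improves.
    d1 = rng.normal(0, 0.01, w1.shape)
    d2 = rng.normal(0, 0.01, w2.shape)
    trial = loss(w1 + d1, w2 + d2, frames, targets)
    if trial < best:
        w1, w2, best = w1 + d1, w2 + d2, trial

print("final MSE:", best)
```

Random hill climbing like this is slower than gradient descent, but it needs no derivatives, which makes it easy to bolt onto an existing hand-tuned correction stage.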
 

3 Comments | Started October 15, 2019, 07:11:04 AM
avatar

Tyler

XKCD Comic : Transit of Mercury in XKCD Comic

Transit of Mercury
11 November 2019, 5:00 am

For some reason the water in my pool is green and there's a weird film on the surface #nofilter

Source: xkcd.com

Started November 12, 2019, 12:02:02 PM
avatar

Korrelan

We're releasing the 1.5 billion parameter GPT-2 model in Robotics News

https://twitter.com/OpenAI/status/1191764001434173440

 :)

38 Comments | Started November 05, 2019, 07:48:09 PM
avatar

Tyler

Visualizing an AI model’s blind spots in Robotics News

Visualizing an AI model’s blind spots
8 November 2019, 6:30 pm

Anyone who has spent time on social media has probably noticed that GANs, or generative adversarial networks, have become remarkably good at drawing faces. They can predict what you’ll look like when you’re old and what you’d look like as a celebrity. But ask a GAN to draw scenes from the larger world and things get weird.

A new demo by the MIT-IBM Watson AI Lab reveals what a model trained on scenes of churches and monuments decides to leave out when it draws its own version of, say, the Pantheon in Paris, or the Piazza di Spagna in Rome. The larger study, Seeing What a GAN Cannot Generate, was presented at the International Conference on Computer Vision last week.

“Researchers typically focus on characterizing and improving what a machine-learning system can do — what it pays attention to, and how particular inputs lead to particular outputs,” says David Bau, a graduate student in MIT’s Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory (CSAIL). “With this work, we hope researchers will pay as much attention to characterizing the data that these systems ignore.”

In a GAN, a pair of neural networks work together to create hyper-realistic images patterned after examples they’ve been given. Bau became interested in GANs as a way of peering inside black-box neural nets to understand the reasoning behind their decisions. An earlier tool developed with his advisor, MIT Professor Antonio Torralba, and IBM researcher Hendrik Strobelt, made it possible to identify the clusters of artificial neurons responsible for organizing the image into real-world categories like doors, trees, and clouds. A related tool, GANPaint, lets amateur artists add and remove those features from photos of their own.
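
For readers new to the setup, a compressed sketch of that two-network game in PyTorch follows. It is not the Lab's model, which operates on full scene photographs at a much larger scale; the layer sizes, the random stand-in "real" data, and the training length are placeholders so the snippet runs anywhere.

```python
# A compressed sketch of the adversarial setup, in PyTorch. G maps noise
# to fake "images", D scores real vs. fake. Real data is simulated with
# random tensors so the sketch runs anywhere; a real model would train
# on actual scene photographs at much larger scale.
import torch
import torch.nn as nn

Z, IMG = 64, 28 * 28
G = nn.Sequential(nn.Linear(Z, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.rand(32, IMG) * 2 - 1        # stand-in for a real batch
    fake = G(torch.randn(32, Z))

    # Discriminator step: label real as 1, generated as 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make D score its fakes as real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```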

One day, while helping an artist use GANPaint, Bau hit on a problem. “As usual, we were chasing the numbers, trying to optimize numerical reconstruction loss to reconstruct the photo,” he says. “But my advisor has always encouraged us to look beyond the numbers and scrutinize the actual images. When we looked, the phenomenon jumped right out: People were getting dropped out selectively.”

Just as GANs and other neural nets find patterns in heaps of data, they ignore patterns, too. Bau and his colleagues trained different types of GANs on indoor and outdoor scenes. But no matter where the pictures were taken, the GANs consistently omitted important details like people, cars, signs, fountains, and pieces of furniture, even when those objects appeared prominently in the image. In one GAN reconstruction, a pair of newlyweds kissing on the steps of a church are ghosted out, leaving an eerie wedding-dress texture on the cathedral door.

“When GANs encounter objects they can’t generate, they seem to imagine what the scene would look like without them,” says Strobelt. “Sometimes people become bushes or disappear entirely into the building behind them.”

The researchers suspect that machine laziness could be to blame; although a GAN is trained to create convincing images, it may learn it's easier to focus on buildings and landscapes and skip harder-to-represent people and cars. Researchers have long known that GANs have a tendency to overlook some statistically meaningful details. But this may be the first study to show that state-of-the-art GANs can systematically omit entire classes of objects within an image.
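
That finding suggests a simple way to quantify such omissions, in the spirit of the study: segment both real photos and GAN outputs, then compare how much area each object class covers on average. The sketch below shows only this bookkeeping, with random placeholder label maps; the actual work derives the label maps from a trained segmentation network.

```python
# Bookkeeping sketch: compare how much area each object class covers in
# real vs. generated images. The label maps below are random placeholders;
# the actual study obtains them from a trained segmentation network.
import numpy as np

CLASSES = ["building", "sky", "person", "car", "sign", "tree"]
rng = np.random.default_rng(1)

def class_fractions(label_maps, n_classes):
    """Mean fraction of pixels assigned to each class across a set."""
    counts = np.zeros(n_classes)
    for m in label_maps:
        counts += np.bincount(m.ravel(), minlength=n_classes)
    return counts / sum(m.size for m in label_maps)

real_maps = [rng.integers(0, len(CLASSES), (64, 64)) for _ in range(50)]
fake_maps = [rng.integers(0, len(CLASSES), (64, 64)) for _ in range(50)]

real_f = class_fractions(real_maps, len(CLASSES))
fake_f = class_fractions(fake_maps, len(CLASSES))
for name, r, f in zip(CLASSES, real_f, fake_f):
    # A class whose generated coverage falls far below its real coverage
    # is one the GAN has learned to "ghost out".
    print(f"{name:10s} real={r:.3f} generated={f:.3f} ratio={f / r:.2f}")
```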

An AI that drops some objects from its representations may achieve its numerical goals while missing the details most important to us humans, says Bau. As engineers turn to GANs to generate synthetic images to train automated systems like self-driving cars, there’s a danger that people, signs, and other critical information could be dropped without humans realizing. It shows why model performance shouldn’t be measured by accuracy alone, says Bau. “We need to understand what the networks are and aren’t doing to make sure they are making the choices we want them to make.”

Joining Bau on the study are Jun-Yan Zhu, Jonas Wulff, William Peebles, and Torralba, of MIT; Strobelt of IBM; and Bolei Zhou of the Chinese University of Hong Kong.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started November 11, 2019, 12:02:38 PM
avatar

LOCKSUIT

3D zoom in on pictures in General AI Discussion

Results:
http://sniklaus.com/papers/kenburns-results

They use a depth camera, I think, and they also use an image fill-in (inpainting) program.
https://www.youtube.com/watch?v=FZZ9rpmVCqE

Maybe one day we can walk down the street and go shop in the store, all from the image.
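
A toy sketch of the geometry trick presumably involved (an assumption from the demo's description, not the authors' code): shift each pixel sideways by a disparity proportional to 1/depth to fake a small camera move; the holes that open up are exactly what the inpainting program has to fill.

```python
# Toy sketch: given an RGB image and a per-pixel depth map, shift each
# pixel by a disparity proportional to 1/depth to fake a small sideways
# camera move. The un-filled pixels are what an inpainting model would
# paint in. Inputs are random; occlusion ordering is ignored for brevity.
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((64, 64, 3))
depth = rng.uniform(1.0, 10.0, (64, 64))   # nearer pixels shift farther

def shift_view(image, depth, baseline=20.0):
    h, w, _ = image.shape
    out = np.zeros_like(image)
    hole = np.ones((h, w), dtype=bool)
    disparity = (baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                hole[y, nx] = False        # this target pixel got filled
    return out, hole

new_view, holes = shift_view(image, depth)
print("fraction of pixels needing inpainting:", holes.mean())
```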

3 Comments | Started November 10, 2019, 12:18:21 AM
avatar

Tyler

Driving toward a healthier planet in Robotics News

Driving toward a healthier planet
7 November 2019, 6:20 pm

With 100 million Toyota vehicles on the planet emitting greenhouse gases at a rate roughly comparable to that of France, the Toyota Motor Corporation has set a goal of reducing all tailpipe emissions by 90 percent by 2050, according to Brian Storey, who directs the Toyota Research Institute (TRI) Accelerated Materials Design and Discovery program from its Kendall Square office in Cambridge, Massachusetts. He gave the keynote address at the MIT Materials Research Laboratory's Materials Day Symposium on Oct. 9.

“A rapid shift from the traditional vehicle to electric vehicles has started,” Storey says. “And we want to enable that to happen at a faster pace.”

“Our role at TRI is to develop tools for accelerating the development of emissions-free vehicles,” Storey said. He added that machine learning is helping to speed up those innovations, but the challenges are very great, so his team has to be a little humble about what it can actually accomplish.

Electrification is just one of four “disrupters” to the automotive industry, which are often abbreviated CASE (connected, autonomous, shared, electric). “It’s a disrupter to the industry because Toyota has decades of experience of optimizing the combustion engine,” Storey said. “We know how to do it; it’s reliable; it’s affordable; it lasts forever. Really, the heart of the Toyota brand is the quality of the combustion engine and transmission.”

Storey stated that as society shifts toward electrification — battery or fuel cell vehicles — new capabilities, technology, and know-how are needed. Storey says, “While Toyota has a lot of experience in these areas, we still need to move faster if we are going to make this kind of transition.”

To help with that acceleration, Toyota Research Institute is providing $10 million a year to support research of approximately 125 professors, postdocs, and graduate students at 10 academic institutions. About $2 million a year of that research is being done at MIT. Storey is also a professor of mechanical engineering at Olin College of Engineering.

For example, the Battery Evaluation and Early Prediction (BEEP) project, which is a TRI collaboration with MIT and Stanford University, aims to expand the value of lithium-based battery systems. In experiments, many batteries are charged and discharged at the same time. “From that data alone, the charge and discharge data, we can extract features. It’s super practical because we get the data. We extract features from the data, and we can correlate those features with lifetime,” Storey explained.

The traditional way of testing whether a battery is going to last for a thousand cycles is to cycle it a thousand times. Storey noted that if each cycle takes one hour, one battery requires 1,000 hours of testing. “What we want to do is bring that time way back, and so our goal is to be able to do it in five — to cycle five times and get a good estimate of what the battery’s lifetime would be at 1,000 cycles, doing it purely from data,” Storey said.

Results published in Nature Energy in March 2019 show just a 4.9 percent test error in classifying lithium-ion batteries using data from only the first five charge/discharge cycles.
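
As a rough illustration of that early-prediction recipe (not the published model, whose features and data are far more carefully engineered), here is a sketch that fits a regularized linear model to toy features extracted from five simulated cycles:

```python
# Hedged sketch of "features from early cycles -> lifetime". The real
# work uses carefully chosen electrochemical features; here we simulate
# toy cycling data and fit a regularized linear model to log cycle life.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_cells, n_early = 120, 5

# Toy data: discharge capacity over the first five cycles per cell, with
# a hidden degradation rate that determines the true cycle life.
rate = rng.uniform(1e-4, 1e-3, n_cells)
cycles = np.arange(1, n_early + 1)
capacity = 1.0 - rate[:, None] * cycles + rng.normal(0, 1e-4, (n_cells, n_early))
life = 0.2 / rate                          # cycles until 80% capacity

# Features: initial capacity, fade slope, and spread over cycles 1-5.
slope = np.polyfit(cycles, capacity.T, 1)[0]
X = np.column_stack([capacity[:, 0], slope, capacity.var(axis=1)])
y = np.log10(life)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
rel_err = np.abs(10 ** model.predict(X_te) - 10 ** y_te) / 10 ** y_te
print(f"mean relative test error: {rel_err.mean():.1%}")
```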

“This is a nice capability because it actually allows acceleration in testing,” Storey noted. “It’s using machine learning, but it’s really using it at the device scale, the ‘as-manufactured’ battery.”

The cloud-based battery evaluation software system allows TRI to collaborate easily with colleagues at MIT, Stanford, and Toyota’s home base in Japan, he said.

Program researchers operate it in a closed-loop, semi-autonomous way, where the computer decides and executes the next-best experiment. The system finds charging policies that are better than ones that have been published in the literature, and it finds them rapidly. “The key to this is the early prediction model, because if we want to predict the lifetime, we don’t have to do the whole test.” Storey added that the closed-loop testing “pulls the scientist up a level in terms of what questions they can ask.”
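
A cartoon of such a closed loop, with an invented set of charging policies and a stubbed-out early predictor standing in for the real short experiment:

```python
# Cartoon closed loop: a bandit chooses which charging policy to test
# next, and a stubbed-out early-prediction model stands in for the real
# five-cycle experiment. Policies, noise, and ground truth are invented.
import numpy as np

rng = np.random.default_rng(4)
policies = np.linspace(1.0, 6.0, 20)           # candidate charge rates (C)
true_life = 1200 - 80 * (policies - 2.5) ** 2  # hidden ground truth

def early_predict(i):
    """Stand-in for the early-prediction model: a noisy lifetime estimate."""
    return true_life[i] + rng.normal(0, 50)

counts = np.zeros(len(policies))
means = np.zeros(len(policies))

for t in range(1, 101):
    # Upper-confidence-bound rule: favor policies that look good or have
    # rarely been tried.
    ucb = means + 100 * np.sqrt(np.log(t + 1) / (counts + 1))
    i = int(np.argmax(ucb))
    obs = early_predict(i)                     # run the short experiment
    counts[i] += 1
    means[i] += (obs - means[i]) / counts[i]   # running average

best = int(np.argmax(np.where(counts > 0, means, -np.inf)))
print(f"best policy found: {policies[best]:.2f}C, "
      f"estimated life {means[best]:.0f} cycles")
```

The property this sketch shares with the real system is that early prediction makes each experiment cheap, so the selection rule can afford to explore.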

TRI would like to use this closed-loop battery evaluation system to optimize the first charge/discharge cycle a battery goes through, which is called formation cycling. “It’s like caring for the battery when it’s a baby,” Storey explained. “How you do those first cycles actually sets it up for the rest of its life. It’s a real black art, and how do you optimize this process?”

TRI’s long-term goal is to improve battery durability so that, from the consumer’s point of view, the battery capacity never goes down. Storey emphasized: “We want the battery in the car to just last forever.”

Storey notes TRI is also conducting two other research projects: AI-Assisted Catalysis Experimentation (ACE), with Caltech, to improve catalysts for fuel cell vehicles such as Toyota’s Mirai; and a materials synthesis project, mostly within TRI, that uses machine learning to identify whether new materials predicted on the computer are likely to be synthesizable.

For the materials synthesis project, TRI began with the phase diagrams of materials. “You build up a network of every material you’ve got in the computational database and look at features of the network. Believing that somehow those materials are connected to other materials through the relationship in this network provides a prediction of synthesizability,” Storey explained. “The way you can train the algorithm is by looking in the historical record of when certain materials were synthesized. You can virtually roll the clock back, pretending to know only what you knew in 1980, and use that to train your algorithm.” A report on the materials synthesis network was published in May in Nature Communications.
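
The "roll the clock back" validation can be sketched in a few lines. Everything below (features, labels, years) is synthetic stand-in data, not the actual materials network:

```python
# Stripped-down "roll the clock back" validation: train a synthesizability
# classifier only on materials known by a cutoff year, then test it on
# materials synthesized afterward. Features, labels, and years below are
# synthetic stand-ins, not the actual materials network data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 500
year = rng.integers(1950, 2019, n)         # year each material was made
feats = rng.normal(0, 1, (n, 8))           # stand-in network features
synthesizable = (feats[:, 0] + 0.5 * feats[:, 1]
                 + rng.normal(0, 0.5, n)) > 0

cutoff = 1980
train = year <= cutoff                     # "what you knew in 1980"
clf = LogisticRegression().fit(feats[train], synthesizable[train])
acc = clf.score(feats[~train], synthesizable[~train])
print(f"accuracy on materials made after {cutoff}: {acc:.2f}")
```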

TRI is collaborating with Lawrence Berkeley National Laboratory (LBNL) and MIT Professor Martin Z. Bazant on a project that couples highly detailed mechanics of battery particles revealed through 4D scanning tunneling electron microscopy with a continuum model that captures larger-scale materials properties. “This program figures out the reaction kinetics and thermodynamics at a continuum scale, which is otherwise unknown,” Storey said.

“We’re putting our software tools online, so over the coming year many of these tools will start becoming available,” Storey explained. Hosted by LBNL, the Propnet materials database is already accessible to internal collaborators. Matscholar is accessible through GitHub. Both projects were funded by TRI.

“Our dream, which is a work in progress, is to have a system architecture that overlies all these projects and can start to tie them together,” Storey said. “We are creating a system that’s built for machine learning from the start, allows for diverse data, allows for systems and atom-scale measurements, and is capable of this idea of AI-driven feedback and autonomy. The idea is that you launch the system and it runs on its own, and everything lives in the cloud to enable collaboration.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started November 10, 2019, 12:01:26 PM
Terminator: Dark Fate

Terminator: Dark Fate in Robots in Movies

Terminator: Dark Fate is a 2019 American science fiction action film directed by Tim Miller, based on a story by James Cameron. Cameron considers the film a direct sequel to his films The Terminator (1984) and Terminator 2: Judgment Day (1991). The film stars Linda Hamilton and Arnold Schwarzenegger returning in their roles of Sarah Connor and the T-800 "Terminator", respectively, reuniting after 28 years.

SPOILERS:

In 1998, three years after defeating the T-1000 and averting the rise of the malevolent artificial intelligence (AI) Skynet, Sarah Connor and her teenage son John are relaxing on a beach in Guatemala. A T-800 Terminator, sent from the future before Skynet's erasure, arrives and shoots John, killing him.

Mackenzie Davis stars as Grace: a soldier from the year 2042, adopted by Resistance leader Daniella Ramos, who was converted into a cyborg and sent back by her adoptive mother to protect Ramos's younger self from a new, advanced Terminator prototype.

Oct 29, 2019, 21:27:46 pm
Life Like

Life Like in Robots in Movies

A couple, James and Sophie, buy an android called Henry to help around the house.

In the beginning, this is perfect for both James and Sophie, as Henry does the housework and makes a good companion for Sophie. But when Henry's childlike brain adapts by developing emotions, complications begin to arise.

Oct 29, 2019, 21:14:49 pm
I Am Mother

I Am Mother in Robots in Movies

I Am Mother is a 2019 Australian science fiction thriller film directed by Grant Sputore, from a screenplay by Michael Lloyd Green. Starring Clara Rugaard, Luke Hawker, Rose Byrne, and Hilary Swank, the film follows Daughter, a girl in a post-apocalyptic bunker being raised by Mother, an android designed to aid in the repopulation of Earth.

Sep 30, 2019, 21:39:16 pm
Mitsuku wins 2019 Loebner Prize

Mitsuku wins 2019 Loebner Prize in Articles

For the fourth consecutive year, Steve Worswick’s Mitsuku has won the Loebner Prize for the most humanlike chatbot entry to the contest. This is the fifth time that Steve has won the Loebner Prize. The Loebner Prize is the world’s longest running Turing-Test competition and has been organised by AISB, the world’s oldest AI society, since 2014.

Sep 30, 2019, 21:18:50 pm
Metal Gear Series - Metal Gear RAY

Metal Gear Series - Metal Gear RAY in Robots in Games

Metal Gear RAY is an anti-Metal Gear introduced in Metal Gear Solid 2: Sons of Liberty. This Metal Gear model comes in two variants: a manned prototype version developed to combat Metal Gear derivatives and an unmanned, computer-controlled version.

Metal Gear RAY differs from previous Metal Gear models in that it is not a nuclear launch platform, but instead a weapon of conventional warfare, originally designed by the U.S. Marines to hunt down and destroy the many Metal Gear derivatives that became common after Metal Gear REX's plans leaked following the events of Shadow Moses.

Apr 08, 2019, 17:35:36 pm
Fallout 3 - Liberty Prime

Fallout 3 - Liberty Prime in Robots in Games

Liberty Prime is a giant military robot that appears in the Fallout games. Liberty Prime fires dual head-mounted energy beams, which are similar to shots fired from a Tesla cannon.

He first appears in Fallout 3 and its add-on Broken Steel, then again in Fallout 4, and later, in 2017, in Fallout: The Board Game.

Apr 07, 2019, 15:20:23 pm
Building Chatbots with Python

Building Chatbots with Python in Books

Build your own chatbot using Python and open source tools. This book begins with an introduction to chatbots where you will gain vital information on their architecture. You will then dive straight into natural language processing with the natural language toolkit (NLTK) for building a custom language processing platform for your chatbot. With this foundation, you will take a look at different natural language processing techniques so that you can choose the right one for you.
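
As a taste of that NLTK groundwork (a generic illustration, not an excerpt from the book):

```python
# Generic illustration (not from the book): tokenize and part-of-speech
# tag a user utterance with NLTK, then do crude keyword intent matching,
# the kind of baseline a custom language-processing platform improves on.
# First run needs: pip install nltk, plus the two downloads below.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

utterance = "Book me a table for two at seven tonight"
tokens = nltk.word_tokenize(utterance)
print(nltk.pos_tag(tokens))        # list of (token, POS tag) pairs

# Crude intent matching on keywords.
if "book" in (t.lower() for t in tokens):
    print("intent: make_reservation")
```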

Apr 06, 2019, 20:34:29 pm
Voicebot and Chatbot Design

Voicebot and Chatbot Design in Books

Flexible conversational interfaces with Amazon Alexa, Google Home, and Facebook Messenger.

We are entering the age of conversational interfaces, where we will interact with AI bots using chat and voice. But how do we create a good conversation? How do we design and build voicebots and chatbots that can carry on successful conversations in the real world?

In this book, Rachel Batish introduces us to the world of conversational applications, bots, and AI. You'll discover how, with little technical knowledge, you can build successful and meaningful conversational UIs. You'll find detailed guidance on how to build and deploy bots on the leading conversational platforms, including Amazon Alexa, Google Home, and Facebook Messenger.

Apr 05, 2019, 15:43:30 pm
Build Better Chatbots

Build Better Chatbots in Books

A Complete Guide to Getting Started with Chatbots.

Learn best practices for building bots by focusing on the technological implementation and UX in this practical book. You will cover key topics such as setting up a development environment for creating chatbots for multiple channels (Facebook Messenger, Skype, and Kik); building a chatbot (design to implementation); integrating with IFTTT (If This Then That) and IoT (Internet of Things); carrying out analytics and metrics for chatbots; and, most importantly, monetizing models and business sense for chatbots.

Build Better Chatbots is easy to follow with code snippets provided in the book and complete code open sourced and available to download.

Apr 04, 2019, 15:21:57 pm
Chatbots and Conversational UI Development

Chatbots and Conversational UI Development in Books

Conversation as an interface is the best way for machines to interact with us using the universally accepted human tool that is language. Chatbots and voice user interfaces are two flavors of conversational UIs. Chatbots are real-time, data-driven answer engines that talk in natural language and are context-aware. Voice user interfaces are driven by voice and can understand and respond to users using speech. This book covers both types of conversational UIs by leveraging APIs from multiple platforms. We'll take a project-based approach to understand how these UIs are built and the best use cases for deploying them.

Build over eight chatbots and conversational user interfaces with leading tools such as Chatfuel, Dialogflow, Microsoft Bot Framework, Twilio, Alexa Skills, and Google Actions, and deploy them on channels like Facebook Messenger, Amazon Alexa, and Google Home.

Apr 03, 2019, 22:30:30 pm
Human + Machine: Reimagining Work in the Age of AI

Human + Machine: Reimagining Work in the Age of AI in Books

Look around you. Artificial intelligence is no longer just a futuristic notion. It's here right now: in software that senses what we need, supply chains that "think" in real time, and robots that respond to changes in their environment. Twenty-first-century pioneer companies are already using AI to innovate and grow fast. The bottom line is this: businesses that understand how to harness AI can surge ahead. Those that neglect it will fall behind. Which side are you on?

Apr 02, 2019, 17:19:14 pm