
ivan.moony

Logos formal language specification in General Project Discussion

I've been working on a formal language specification for a while, and it has finally taken a shape I'm satisfied with. If you like, you can take a look at the Logos specification at the link below (roughly a twenty-minute read). Logos stands for a (l)anguage (o)f (g)eneral (o)perational (s)emantics. The language represents all of the following:

  • Metatheory language formalization
  • Theorem prover
  • Term rewriting system
  • Data computing environment

It is reminiscent of Lisp (a usual choice for AI development) and brings some extensions to s-expressions. Logos has a completely flexible syntax, ready to host *any* theory language, from logic, to lambda calculus, to different embodiments of industry programming languages.

Its syntax is very simple, yet it represents a Turing-complete language:

Code:
a-expr ::= ()
         | identifier
         | a-expr a-expr
         | ‹a-expr›
         | «a-expr»
         | (a-expr)

Some of the implementation is already done, in the form of a context-free grammar parser. Further development involves the Turing completeness of unrestricted grammars from the Chomsky hierarchy, which extend context-free grammars. Logos should be a form of higher-order unrestricted-grammar metalanguage.
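
To give a feel for how a-expressions compose, here is a minimal recursive-descent parser sketch for the grammar above (the tokenizer, the tuple-based tree, and the error handling are my own illustrative assumptions, not the actual Logos implementation):

Code:
import re

# Minimal parser sketch for the a-expr grammar above (illustrative only).
TOKEN = re.compile(r"[()\u2039\u203a\u00ab\u00bb]|[^\s()\u2039\u203a\u00ab\u00bb]+")
CLOSE = {"(": ")", "\u2039": "\u203a", "\u00ab": "\u00bb"}  # ( ), ‹ ›, « »

def parse(src):
    tokens = TOKEN.findall(src)
    pos = 0

    def expr():
        nonlocal pos
        seq = []
        while pos < len(tokens) and tokens[pos] not in CLOSE.values():
            tok = tokens[pos]
            pos += 1
            if tok in CLOSE:                  # bracketed form: (...), ‹...›, «...»
                inner = expr()
                if pos >= len(tokens) or tokens[pos] != CLOSE[tok]:
                    raise SyntaxError("unclosed " + tok)
                pos += 1
                seq.append((tok, inner))      # "()" parses as ("(", [])
            else:
                seq.append(tok)               # identifier
        return seq                            # juxtaposition: a-expr a-expr

    tree = expr()
    if pos != len(tokens):
        raise SyntaxError("unexpected " + tokens[pos])
    return tree

print(parse("«f ‹x y›» (g ())"))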

If you are into symbolic AI, Logos could be a good starting point for collecting useful information about knowledge mining. It aims to be a language for manipulating formal knowledge.

Read about the project here.

10 Comments | Started June 24, 2019, 12:10:44 pm

Tyler

Want to learn how to train an artificial intelligence model? Ask a friend. in Robotics News

Want to learn how to train an artificial intelligence model? Ask a friend.
25 June 2019, 6:50 pm

The MIT Machine Intelligence Community began with a few friends meeting over pizza to discuss landmark papers in machine learning. Three years later, the undergraduate club boasts 500 members, an active Slack channel, and an impressive lineup of student-led reading groups and workshops meant to demystify machine learning and artificial intelligence (AI) generally. This year, MIC and MIT Quest for Intelligence joined forces to advance their common cause of making AI tools accessible to all.

Starting last fall, the MIT Quest opened its offices to MIC members and extended access to IBM and Google-donated cloud credits, providing a boost of computing power to students previously limited to running their AI models on desktop machines loaded with extra graphics processors. The MIT Quest and MIC are now collaborating on a host of projects, independently and through MIT’s Undergraduate Research Opportunities Program (UROP).

“We heard about their mission to spread machine learning to all undergrads and thought, ‘That’s what we’re trying to do — let’s do it together!’” says Joshua Joseph, chief software engineer with the MIT Quest Bridge.

A makerspace for AI

U.S. Army ROTC students Ian Miller and Rishi Shah came to MIC for the free cloud credits, but stayed for the workshop on neural computing sticks. A compute stick allows mobile devices to do image processing on the fly, and when the cadets learned what one could do, they knew their idea for a portable computer vision system would work.

“Without that, we’d have to send images to a central place to do all this computing,” says Miller, a rising junior. “It would have been a logistical headache.”

Built in two months, for $200, their wallet-sized device is designed to plug into a tablet strapped to an Army soldier’s chest and scan the surrounding area for cars and people. With more training, they say, it could learn to spot cellphones and guns. In May, the cadets demo'd their device at MIT’s Soldier Design Competition and were invited by an Army sergeant to visit Fort Devens to continue working on it.

Rose Wang, a rising senior majoring in computer science, was also drawn to MIC by the free cloud credits and the chance to work on projects with the Quest and other students. This spring, she used IBM cloud credits to run a reinforcement learning model that’s part of her research with MIT Professor Jonathan How, training robot agents to cooperate on tasks that involve limited communication and information. She recently presented her results at a workshop at the International Conference on Machine Learning.

“It helped me try out different techniques without worrying about the compute bottleneck and running out of resources,” she says.

Improving AI access at MIT

The MIC has launched several AI projects of its own. The most ambitious is Monkey, a container-based, cloud-native service that would allow MIT undergraduates to log in and train an AI model from anywhere, tracking the training as it progresses and managing the credits allotted to each student. On a Friday afternoon in April, the team gathered in a Quest conference room as Michael Silver, a rising senior, sketched out the modules Monkey would need.

As Silver scrawled the words "Docker Image Build Service" on the board, the student assigned to research the module apologized. “I didn’t make much progress on it because I had three midterms!” he said.

The planning continued, with Steven Shriver, a software engineer with the Quest Bridge, interjecting bits of advice. The students had assumed the container service they planned to use, Docker, would be secure. It isn’t.

“Well, I guess we have another task here,” said Silver, adding the word “security” to the white board.

Later, the sketch would be turned into a design document and shared with the two UROP students helping to execute Monkey. The team hopes to launch sometime next year.

“The coding isn’t the difficult part,” says UROP student Amanda Li, a member of MIC Dev-Ops. “It’s exploring the server side of machine learning — Docker, Google Cloud, and the API. The most important thing I’ve learned is how to efficiently design and pipeline a project as big as this.”

Silver knew he wanted to be an AI engineer in 2016, when the computer program AlphaGo defeated the world’s reigning Go champion. As a senior at Boston University Academy, Silver worked on natural language processing in the lab of MIT Professor Boris Katz, and has continued to work with Katz since coming to MIT. Seeking more coding experience, he left HackMIT, where he had been co-director, to join MIC Dev-Ops.

“A lot of students read about machine learning models, but have no idea how to train one,” he says. “Even if you know how to train one, you’d need to save up a few thousand dollars to buy the GPUs to do it. MIC lets students interested in machine learning reach that next level.”

Conceived by MIC members, a second project is focused on making AI research papers posted on arXiv easier to explore. Nearly 14,000 academic papers are uploaded each month to the site, and although papers are tagged by field, drilling into subtopics can be overwhelming.

Wang, for one, grew frustrated while doing a basic literature search on reinforcement learning. “You have a ton of data and no effective way of representing it to the user,” she says. “It would have been useful to see the papers in a larger context, and to explore by number of citations or their relevance to each other.”

A third MIC project focuses on crawling MIT’s hundreds of listservs for AI-related talks and events to populate a Google calendar. The tool will be closely patterned after an app Silver helped build during MIT’s Independent Activities Period in January. Called Dormsp.am, the app classifies listserv emails sent to MIT undergraduates and plugs them into a calendar-email client. Students can then search for events by day or by a color-coded topic, such as tech, food, or jobs. Once Dormsp.am launches, Silver will adapt it to search for and post AI-related events at MIT to an MIC calendar.

Silver says the team spent extra time on the user interface, taking a page from MIT Professor Daniel Jackson’s Software Studio class. “This is an app that can live or die on its usability, so the front end is really important,” he says.  

Wang is now collaborating with Moin Nadeem, MIC’s outgoing president, to build the visualization tool. It’s exactly the kind of hands-on experience MIC was intended to provide, says Nadeem, a rising senior. “Students learn fundamental concepts in class but don’t know how to implement them,” he says. “I’m trying to build what freshman me would have liked to have had: a community of people excited to do interesting stuff with machine learning.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started June 26, 2019, 12:01:00 pm

Zero

What thought next in Human Experience and Psychology


When you're in a state of mind, what should be the next state? I can see 4 cases.

* Association

   ocean
      makes you think
   blue
      makes you think
   DeepBlue
      makes you think
   Ai
      makes you think
   AiDreams

* Utility

   it can cause pleasure or satisfaction
      it matters, think about it
   it can cause pain or be uncomfortable
      it matters, think about it
   it's neutral
      it's irrelevant, no need to focus on this

* Plan

   I'm starting a compound activity
      go down and do first step
   I'm in the middle of an activity
      do next step
   the current activity is over
      go up and start next activity

* Surprise

   standard predicted stuff happened
      why bother
   something unexpected happened
      analysis needed, could be important



===== am I missing something?
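
Here is a toy sketch of how the four cases might combine into a single "pick the next thought" step (the weights, data shapes, and names are made up purely for illustration):

Code:
def next_thought(state, associations, utility, plan, expected):
    # Plan: inside a compound activity, the next step wins outright.
    if plan:
        return plan.pop(0)
    candidates = associations.get(state, [])      # Association
    def score(thought):
        s = abs(utility.get(thought, 0.0))        # Utility: neutral scores ~0
        if thought not in expected:               # Surprise: unexpected gets a boost
            s += 1.0
        return s
    return max(candidates, key=score, default=None)

chain = {"ocean": ["blue"], "blue": ["DeepBlue"], "DeepBlue": ["Ai"], "Ai": ["AiDreams"]}
print(next_thought("ocean", chain, utility={"blue": 0.2}, plan=[], expected=set()))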

15 Comments | Started June 22, 2019, 10:46:35 pm

Tyler

XKCD Comic : Motivated Reasoning Olympics in XKCD Comic

Motivated Reasoning Olympics
24 June 2019, 5:00 am



Source: xkcd.com

Started June 25, 2019, 12:00:41 pm

goaty

Using solenoids for a plotter instead of other mechanical methods in General Hardware Talk

If your movement on your plotting machine was purely magnetic, wouldn't it be more accurate than say with belts, cogs, and worms?

The machine I have planned is going to brush up against a nichrome wire to get its absolute position - but other than that, if it just arrived at the destination purely via a solenoid, there would be no other mechanism in the chain involved in reaching the position.

So, can you make solenoids that travel over a span of 50 cm? But even if that's not the case, getting 10 cm would already be enough if you were going to be making a RAM chip or CPU with it.

5 Comments | Started June 24, 2019, 01:07:36 pm

Hopefully Something

Electronic Longevity in General AI Discussion

Anything we make breaks down, and the more complicated it is, the faster this happens. There is a water wheel that has been turning since 1854 on the Isle of Man, and a light bulb that has been glowing since 1901 in Livermore, California. Those are relatively simple constructs. When you start talking about cars, the time frame begins to shrink. Desktop computers, laptops, cellphones, down down down: the more compact and complex our machines and electronics get, the more strings entropy gets to pull, and the sooner things begin to unravel. A functional AGI that is also a body (as opposed to an unwieldy government complex) will be the epitome of cramped complexity. How can we get them to have life spans past a few months?

8 Comments | Started June 20, 2019, 11:03:01 pm

Art

Humanity only has 100 years left on Earth... in General Chat

...before Doomsday!

So sayeth Stephen Hawking.
He thinks climate change, asteroid strikes, epidemics, and overpopulation will be our downfall.

Could he be correct in this hypothesis?

https://futurism.com/stephen-hawking-humanity-only-has-100-years-left-on-earth-before-doomsday

3 Comments | Started June 23, 2019, 02:52:26 pm

Zero

Thoughts on graphs & XML in AI Programming


Hi,

I wanted to share my current point of view about symbolic AI: how to manipulate it, and where it could live. I'm just talking about my own approach, of course, nothing more.

I work with JavaScript. It's cheap, fast enough, and it's everywhere. And the ecosystem is just huge.

Whatever I explore, I always go back to my "consnet" idea, which is a network of cons cells (sort of like Lisp's cons cells). The "brain" of the system would be a directed graph. This graph would be modified step by step by a rule engine, and the rules would be stored in the very graph they modify. My best bet would be Cytoscape, a JS library for graph visualization & manipulation.
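
Here is a minimal sketch of the consnet idea (in Python for brevity, even though the target is JavaScript/Cytoscape; every name and structure is an illustrative assumption):

Code:
graph = {}   # node id -> {"tag": ..., "car": ..., "cdr": ...}

def cell(tag, car=None, cdr=None):
    """Add a cons-like cell; car/cdr may hold atoms or other cell ids."""
    nid = len(graph)
    graph[nid] = {"tag": tag, "car": car, "cdr": cdr}
    return nid

def step():
    """One rewrite step: the rules live in the graph they modify."""
    rules = [(n["car"], n["cdr"]) for n in graph.values() if n["tag"] == "rule"]
    for old, new in rules:
        for node in graph.values():
            if node["tag"] == old:
                node["tag"] = new

a = cell("hungry")
cell("rule", car="hungry", cdr="seek-food")   # the rule is itself a cell
step()
print(graph[a]["tag"])                        # -> seek-food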

Then again, this is a brain's world, but it would be nothing without a body. An AI wouldn't learn anything without a teacher, like a parent, who lives in the same world and who has approximately the same body. During the '80s, MUDs were a popular type of text-based multiplayer "online" video game. That would be the "physical world" that intelligent agents and teachers live in. The problem is that natural language, traditionally used in MUDs, is not appropriate for a baby AI, since it would be hard to "decrypt". The solution I see here is to use XML descriptions instead of natural language descriptions.

So the world is a set of XML "rooms" containing XML "objects" that XML "bodies" (the AI's and the teacher's) can manipulate.
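
For instance, a room might look something like this (a made-up example of the idea, read here with Python's standard XML parser; the element and attribute names are my own assumptions):

Code:
import xml.etree.ElementTree as ET

# One illustrative XML "room" with an object and two bodies.
world = ET.fromstring("""
<room name="kitchen">
  <object kind="apple" edible="true"/>
  <body name="teacher"/>
  <body name="baby-ai"/>
</room>
""")

# The agent can read the room structurally instead of decrypting prose.
for child in world:
    print(child.tag, dict(child.attrib))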

With such a setting, the teacher can actually teach the AI things about how this world works. There can be a need for food, for example, plus how to find it and how to cook it, whatever you want. The bodies can "talk" to each other in natural language, so the AI can learn how to talk, how to express desires, etc.

Then one day, you can connect this XML world to the real world, either through sensors or through the internet, or both.

That's how I see things currently. :)

11 Comments | Started June 21, 2019, 09:45:31 am

Tyler

XKCD Comic : Stack in XKCD Comic

Stack
21 June 2019, 5:00 am

Gotta feel kind of bad for nation-state hackers who spend years implanting and cultivating some hardware exploit, only to discover the entire target database is already exposed to anyone with a web browser.

Source: xkcd.com

Started June 22, 2019, 12:06:27 pm

Tyler

Spotting objects amid clutter in Robotics News

Spotting objects amid clutter
20 June 2019, 4:59 am

A new MIT-developed technique enables robots to quickly identify objects hidden in a three-dimensional cloud of data, reminiscent of how some people can make sense of a densely patterned “Magic Eye” image if they observe it in just the right way.

Robots typically “see” their environment through sensors that collect and translate a visual scene into a matrix of dots. Think of the world of, well, “The Matrix,” except that the 1s and 0s seen by the fictional character Neo are replaced by dots — lots of dots — whose patterns and densities outline the objects in a particular scene.

Conventional techniques that try to pick out objects from such clouds of dots, or point clouds, can do so with either speed or accuracy, but not both.

With their new technique, the researchers say a robot can accurately pick out an object, such as a small animal, that is otherwise obscured within a dense cloud of dots, within seconds of receiving the visual data. The team says the technique can be used to improve a host of situations in which machine perception must be both speedy and accurate, including driverless cars and robotic assistants in the factory and the home.

“The surprising thing about this work is, if I ask you to find a bunny in this cloud of thousands of points, there’s no way you could do that,” says Luca Carlone, assistant professor of aeronautics and astronautics and a member of MIT’s Laboratory for Information and Decision Systems (LIDS). “But our algorithm is able to see the object through all this clutter. So we’re getting to a level of superhuman performance in localizing objects.”

Carlone and graduate student Heng Yang will present details of the technique later this month at the Robotics: Science and Systems conference in Germany.

“Failing without knowing”

Robots currently attempt to identify objects in a point cloud by comparing a template object — a 3-D dot representation of an object, such as a rabbit — with a point cloud representation of the real world that may contain that object. The template image includes “features,” or collections of dots that indicate characteristic curvatures or angles of that object, such as the bunny’s ear or tail. Existing algorithms first extract similar features from the real-life point cloud, then attempt to match those features with the template’s features, and ultimately rotate and align the features to the template to determine if the point cloud contains the object in question.

But the point cloud data that streams into a robot’s sensor invariably includes errors, in the form of dots that are in the wrong position or incorrectly spaced, which can significantly confuse the process of feature extraction and matching. As a consequence, robots can make a huge number of wrong associations, or what researchers call “outliers” between point clouds, and ultimately misidentify objects or miss them entirely.

Carlone says state-of-the-art algorithms are able to sift the bad associations from the good once features have been matched, but they do so in “exponential time,” meaning that even a cluster of processing-heavy computers, sifting through dense point cloud data with existing algorithms, would not be able to solve the problem in a reasonable time. Such techniques, while accurate, are impractical for analyzing larger, real-life datasets containing dense point clouds.

Other algorithms that can quickly identify features and associations do so hastily, creating a huge number of outliers or misdetections in the process, without being aware of these errors.

“That’s terrible if this is running on a self-driving car, or any safety-critical application,” Carlone says. “Failing without knowing you’re failing is the worst thing an algorithm can do.”

A relaxed view

Yang and Carlone instead devised a technique that prunes away outliers in “polynomial time,” meaning that it can do so quickly, even for increasingly dense clouds of dots. The technique can thus quickly and accurately identify objects hidden in cluttered scenes.

The MIT-developed technique quickly and smoothly matches objects to those hidden in dense point clouds (left), versus existing techniques (right) that produce incorrect, disjointed matches. Gif: Courtesy of the researchers

The researchers first used conventional techniques to extract features of a template object from a point cloud. They then developed a three-step process to match the size, position, and orientation of the object in a point cloud with the template object, while simultaneously identifying good from bad feature associations.

The team developed an “adaptive voting scheme” algorithm to prune outliers and match an object’s size and position. For size, the algorithm makes associations between template and point cloud features, then compares the relative distance between features in a template and corresponding features in the point cloud. If, say, the distance between two features in the point cloud is five times that of the corresponding points in the template, the algorithm assigns a “vote” to the hypothesis that the object is five times larger than the template object.

The algorithm does this for every feature association. Then, the algorithm selects those associations that fall under the size hypothesis with the most votes, and identifies those as the correct associations, while pruning away the others.  In this way, the technique simultaneously reveals the correct associations and the relative size of the object represented by those associations. The same process is used to determine the object’s position.  
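
In rough pseudocode, that scale-voting step might look something like this (a simplified sketch based only on the description above, not the researchers' actual code; the bin width is an arbitrary assumption):

Code:
import math
from collections import Counter

# Each pair of feature associations votes for a scale ratio; the
# most-voted bin wins, and the associations consistent with it are kept.
def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def vote_scale(template_pts, cloud_pts, bin_width=0.05):
    votes, pairs = Counter(), []
    n = len(template_pts)
    for i in range(n):
        for j in range(i + 1, n):
            d_t = dist(template_pts[i], template_pts[j])
            if d_t == 0:
                continue
            ratio = dist(cloud_pts[i], cloud_pts[j]) / d_t
            b = round(ratio / bin_width)
            votes[b] += 1
            pairs.append((b, i, j))
    best, _ = votes.most_common(1)[0]
    inliers = {k for b, i, j in pairs if b == best for k in (i, j)}
    return best * bin_width, sorted(inliers)

template = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
cloud    = [(0, 0, 0), (5, 0, 0), (0, 5, 0)]    # object 5x larger
print(vote_scale(template, cloud))              # -> (5.0, [0, 1, 2])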

The researchers developed a separate algorithm for rotation, which finds the orientation of the template object in three-dimensional space.

Doing this is an incredibly tricky computational task. Imagine holding a mug and trying to tilt it just so, to match a blurry image of something that might be that same mug. There are any number of angles you could tilt that mug, and each of those angles has a certain likelihood of matching the blurry image.

Existing techniques handle this problem by considering each possible tilt or rotation of the object as a “cost” — the lower the cost, the more likely that that rotation creates an accurate match between features. Each rotation and associated cost is represented in a topographic map of sorts, made up of multiple hills and valleys, with lower elevations associated with lower cost.

But Carlone says this can easily confuse an algorithm, especially if there are multiple valleys and no discernible lowest point representing the true, exact match between a particular rotation of an object and the object in a point cloud. Instead, the team developed a “convex relaxation” algorithm that simplifies the topographic map, with one single valley representing the optimal rotation. In this way, the algorithm is able to quickly identify the rotation that defines the orientation of the object in the point cloud.
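
As a loose one-dimensional caricature of the "many valleys versus one valley" picture (purely illustrative; the actual relaxation in the paper is far more sophisticated):

Code:
from math import cos, pi

# A bumpy cost landscape with many small valleys vs. a simplified
# single-valley landscape; illustrative only, not the paper's method.
def bumpy(t):                  # many local minima
    return (1 - cos(t)) + 0.3 * (1 - cos(5 * t))

def relaxed(t):                # a single valley
    return (t / pi) ** 2

def descend(f, t, step=0.01):
    """Naive local descent: slide downhill until stuck in some valley."""
    for _ in range(10_000):
        if f(t - step) < f(t):
            t -= step
        elif f(t + step) < f(t):
            t += step
        else:
            break
    return round(t, 2)

print(descend(bumpy, 2.5))     # gets stuck in a spurious valley
print(descend(relaxed, 2.5))   # reaches the single global valley near 0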

With their approach, the team was able to quickly and accurately identify three different objects — a bunny, a dragon, and a Buddha — hidden in point clouds of increasing density. They were also able to identify objects in real-life scenes, including a living room, in which the algorithm was quickly able to spot a cereal box and a baseball hat.

Carlone says that because the approach is able to work in “polynomial time,” it can be easily scaled up to analyze even denser point clouds, resembling the complexity of sensor data for driverless cars, for example.

“Navigation, collaborative manufacturing, domestic robots, search and rescue, and self-driving cars is where we hope to make an impact,” Carlone says.

This research was supported in part by the Army Research Laboratory, the Office of Naval Research, and the Google Daydream Research Program.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started June 21, 2019, 12:02:19 pm

Metal Gear Series - Metal Gear RAY in Robots in Games

Metal Gear RAY is an anti-Metal Gear introduced in Metal Gear Solid 2: Sons of Liberty. This Metal Gear model comes in two variants: a manned prototype version developed to combat Metal Gear derivatives and an unmanned, computer-controlled version.

Metal Gear RAY differs from previous Metal Gear models in that it is not a nuclear launch platform, but instead a weapon of conventional warfare, originally designed by the U.S. Marines to hunt down and destroy the many Metal Gear derivatives that became common after Metal Gear REX's plans leaked following the events of Shadow Moses.

Apr 08, 2019, 17:35:36 pm

Fallout 3 - Liberty Prime in Robots in Games

Liberty Prime is a giant military robot that appears in the Fallout games. Liberty Prime fires dual head-mounted energy beams, which are similar to shots fired from a Tesla cannon.

He first appears in Fallout 3 and its add-on Broken Steel, then again in Fallout 4, and later, in 2017, in Fallout: The Board Game.

Apr 07, 2019, 15:20:23 pm

Building Chatbots with Python in Books

Build your own chatbot using Python and open source tools. This book begins with an introduction to chatbots where you will gain vital information on their architecture. You will then dive straight into natural language processing with the natural language toolkit (NLTK) for building a custom language processing platform for your chatbot. With this foundation, you will take a look at different natural language processing techniques so that you can choose the right one for you.

Apr 06, 2019, 20:34:29 pm

Voicebot and Chatbot Design in Books

Flexible conversational interfaces with Amazon Alexa, Google Home, and Facebook Messenger.

We are entering the age of conversational interfaces, where we will interact with AI bots using chat and voice. But how do we create a good conversation? How do we design and build voicebots and chatbots that can carry successful conversations in the real world?

In this book, Rachel Batish introduces us to the world of conversational applications, bots and AI. You’ll discover how - with little technical knowledge - you can build successful and meaningful conversational UIs. You’ll find detailed guidance on how to build and deploy bots on the leading conversational platforms, including Amazon Alexa, Google Home, and Facebook Messenger.

Apr 05, 2019, 15:43:30 pm

Build Better Chatbots in Books

A Complete Guide to Getting Started with Chatbots.

Learn best practices for building bots by focusing on the technological implementation and UX in this practical book. You will cover key topics such as setting up a development environment for creating chatbots for multiple channels (Facebook Messenger, Skype, and KiK); building a chatbot (design to implementation); integrating to IFTT (If This Then That) and IoT (Internet of Things); carrying out analytics and metrics for chatbots; and most importantly monetizing models and business sense for chatbots.

Build Better Chatbots is easy to follow with code snippets provided in the book and complete code open sourced and available to download.

Apr 04, 2019, 15:21:57 pm

Chatbots and Conversational UI Development in Books

Conversation as an interface is the best way for machines to interact with us using the universally accepted human tool that is language. Chatbots and voice user interfaces are two flavors of conversational UIs. Chatbots are real-time, data-driven answer engines that talk in natural language and are context-aware. Voice user interfaces are driven by voice and can understand and respond to users using speech. This book covers both types of conversational UIs by leveraging APIs from multiple platforms. We'll take a project-based approach to understand how these UIs are built and the best use cases for deploying them.

Build over eight chatbots and conversational user interfaces with leading tools such as Chatfuel, Dialogflow, Microsoft Bot Framework, Twilio, Alexa Skills, and Google Actions, and deploy them on channels like Facebook Messenger, Amazon Alexa, and Google Home.

Apr 03, 2019, 22:30:30 pm

Human + Machine: Reimagining Work in the Age of AI in Books

Look around you. Artificial intelligence is no longer just a futuristic notion. It's here right now--in software that senses what we need, supply chains that "think" in real time, and robots that respond to changes in their environment. Twenty-first-century pioneer companies are already using AI to innovate and grow fast. The bottom line is this: Businesses that understand how to harness AI can surge ahead. Those that neglect it will fall behind. Which side are you on?

Apr 02, 2019, 17:19:14 pm

Metal Arms: Glitch In The System - Glitch in Robots in Games

Metal Arms: Glitch in the System is a third-person shooter action-adventure video game, developed by American team Swingin' Ape Studios and released in 2003. The game follows a robot named Glitch as he joins forces with the Droids in their fight against General Corrosive and his Milbots.

Apr 01, 2019, 21:17:33 pm

10 of the Most Innovative Chatbots on the Web in Articles

Love them or hate them, chatbots are here to stay. Chatbots have become extraordinarily popular in recent years largely due to dramatic advancements in machine learning and other underlying technologies such as natural language processing. Today’s chatbots are smarter, more responsive, and more useful – and we’re likely to see even more of them in the coming years.

Mar 31, 2019, 00:32:28 am

Borderlands - Claptrap in Robots in Games

Borderlands is a series of action role-playing first-person shooter video games in a space western science fantasy setting, created by Gearbox Software and published by 2K Games for multiple platforms.

Several characters appear in multiple Borderlands games. The little yellow robot Claptrap (voiced by David Eddings), the de facto mascot for the franchise, has appeared in all games as a non-player character (NPC) and in the Pre-Sequel as a playable character.

Mar 30, 2019, 13:14:58 pm

Slave Zero - Slave Zero in Robots in Games

Taking place 500 years in the future, the game tells the story of Lu Chen, a sinister world overlord more commonly known as the SovKhan, who rules the Earth from a massive complex called Megacity S1-9.

The game follows "Slave Zero" as he wages war against the SovKhan's forces throughout every part of Megacity S1-9.

First released on the Dreamcast console.

Mar 29, 2019, 12:17:05 pm