Young Alpha in Bot Conversations

Codename: Young Alpha

This is a young alpha prototype, not fully worked out yet. Before this, I created a more complex version, but it runs in a shell.

On the first try I failed to port the shell version to the web. So, to rethink my strategy, I started from scratch, creating a basic design with the goal of making it compatible with the complex design running in the shell.

If you think you can see where I am going with this, please comment. Nothing is being logged this early in alpha testing, so I would appreciate hearing your impressions directly from you. Thank you.

Live Demo: http://aihax.com/codename

11 Comments | Started February 20, 2018, 01:56:11 am


Spikes in neural networks in New Users Please Post Here

Hi guys, I'm new here (I literally created this account five minutes ago) and I want to ask a technical question about AI. I'm not sure whether this forum is suitable for technical questions, but here it is:

So I have been doing AI for a long time now, purely out of curiosity; for now I have no intention of applying AI in any field. Lately I have been testing different kinds of neural networks to see which work and which don't. I program all of these in Processing, which is based on Java, and I don't use any libraries like TensorFlow at all. Because of that, sharing the code wouldn't help much (it would take a lot of time to comprehend), so I'll just give the setup and the results I found here:

Network setup:
Goal: recognize the MNIST handwritten digits (really popular dataset in AI).
Architecture: plain feed forward network
Dimensions: 28*28+1 neurons (input layer; the last input is always 1 in every sample, to replicate the bias), 50 neurons (hidden layer), 10 neurons (output layer).
Correct answer format: if the digit is 0, the first neuron in the output layer should be 1 and all the others 0; likewise for every other digit.
Activation function: sigmoid
Learning constant: 0.04 (plain backpropagation).
Learning rule: Delta learning rule
Training samples: 1000
Batch size: 150 (the last batch has size 100)
Testing samples: 100

Evaluation function used:
Error: Sum over all samples(Sum over all output neurons(  abs(prediction-answer)  ))
Confidence: Sum over all samples(Sum over all output neurons(  min(prediction, 1-prediction)  ))/number of samples. The more confident the network is that it is correct, the lower this value is.
Accuracy: Sum over all samples(  kronecker delta(argmax(output neurons), argmax(correct answer neurons))  )/number of samples

Extension evaluation functions:
Confidence: confidence on the first batch of the training samples
Real confidence: confidence on the testing samples
Accuracy: accuracy on the first batch of the training samples
Real accuracy: accuracy on the testing samples
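
For concreteness, the three evaluation functions above can be sketched in Python with NumPy (my own reconstruction from the descriptions, not the original Processing code; `pred` and `ans` are (samples × outputs) arrays):

```python
import numpy as np

def error(pred, ans):
    # Sum over all samples and all output neurons of |prediction - answer|.
    return np.abs(pred - ans).sum()

def confidence(pred):
    # Average distance of the outputs from a hard 0/1 decision;
    # the more confident the network, the lower this value.
    return np.minimum(pred, 1.0 - pred).sum() / len(pred)

def accuracy(pred, ans):
    # Fraction of samples where the most active output neuron
    # matches the neuron that should be 1 in the correct answer.
    return (pred.argmax(axis=1) == ans.argmax(axis=1)).mean()
```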

These are the graphs I collected from running this:

This is a closer look at the beginning:

- Accuracy and real accuracy graphs closely match each other in shape
- Confidence and real confidence graphs closely match each other in shape
- Accuracy keeps improving over time, but most of the time it's really quiet and unchanging and increases only at specific points (let's call them spikes)
- When a spike occurs, confidence increases dramatically, accuracy increases by a bit and error nudges a bit and slopes downward faster, eventually reaching a new, lower equilibrium.

Further experiments:
- The spikes occur very unpredictably; you can't really tell when one is coming
- Spikes stop happening when you reach an accuracy of around 0.96 (96% correct)
- Sometimes the error drops sharply toward 0 and rises back up to its nominal value just before a spike happens.
- When I run it with a learning constant of 0.02, the spikes don't appear at all and the accuracy races towards 0.96 right away. A summary graph of it can be found here: http://157239n.com/frame%203.png
- At first, I suspected the spikes were due to the network experiencing learning slowdown, owing to the nature of the sigmoid activation function combined with the quadratic cost function. That can be dealt with using the cross-entropy cost function, but then why don't the spikes appear with a learning constant of 0.02, when the network is still stuck with the same learning slowdown? I am working on implementing the cross-entropy cost function to see what happens, but in the meantime, that's all the information I've got.
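
The learning-slowdown suspicion above can be illustrated with a quick sketch (my own, following the standard sigmoid/quadratic-cost analysis, not code from my experiments): with the quadratic cost, the output-layer gradient carries a sigmoid'(z) factor that vanishes when the neuron saturates, while the cross-entropy cost cancels that factor.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def quadratic_grad(z, y):
    # dC/dz for C = (a - y)^2 / 2 with a = sigmoid(z): the extra
    # sigmoid'(z) = a(1-a) factor shrinks the gradient to near zero
    # whenever the neuron saturates, even if it is badly wrong.
    a = sigmoid(z)
    return (a - y) * a * (1.0 - a)

def cross_entropy_grad(z, y):
    # dC/dz for C = -[y ln a + (1-y) ln(1-a)]: the sigmoid'(z)
    # factor cancels, so a saturated, wrong neuron still learns fast.
    return sigmoid(z) - y
```

For a saturated neuron that is badly wrong (z = 5, target 0), the quadratic gradient is under 0.01 while the cross-entropy gradient is near 1, which is at least consistent with long quiet periods punctuated by sudden progress.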

My question:
So my question is: how can those spikes be explained? What are those spikes anyway? What causes them? Can I somehow trigger a spike so that the network can continue to learn?

Thanks in advance if anyone knows about this. Let me know if you have any questions.

4 Comments | Started February 21, 2018, 07:49:45 pm


Perkun in New Users Please Post Here


I have invented a new AI algorithm and implemented an experimental language Perkun based on it - https://sourceforge.net/projects/perkun/ . Perkun is also a library usable in your own projects (C++) - here is a small demo how to use it: https://sourceforge.net/projects/perkunwars/ . I also write a blog about Perkun - http://pawel-biernacki.blogspot.fi/ .

I would like to talk about it. The idea is to introduce so-called hidden variables (variables that affect the state of the world but are invisible). AI has been attempted with constructs like IF-THEN, but IF-THEN is stateless and does not remember anything. In contrast, Perkun maintains a belief: a state (a probability distribution) that denotes what the player believes. This belief is continuously adjusted to match the observations.

My algorithm is capable of "asking questions", i.e. sometimes it performs actions that are not immediately beneficial to the player but yield knowledge. This is the most promising feature of my algorithm.
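
A minimal sketch of the belief idea (my own Python illustration, not Perkun code): a discrete probability distribution over hidden states, adjusted to each observation by Bayes' rule.

```python
def update_belief(belief, likelihood, observation):
    """Bayes update of a discrete belief over hidden states.

    belief:     {state: P(state)}
    likelihood: {state: {observation: P(observation | state)}}
    """
    posterior = {s: p * likelihood[s].get(observation, 0.0)
                 for s, p in belief.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# The observation "growl" is far likelier if a monster is present,
# so the belief shifts sharply toward the "monster" state.
belief = {"monster": 0.5, "empty": 0.5}
likelihood = {"monster": {"growl": 0.9, "silence": 0.1},
              "empty":   {"growl": 0.1, "silence": 0.9}}
belief = update_belief(belief, likelihood, "growl")
```

In this framing, a "question" is an action chosen because the observations it provokes are expected to sharpen the belief distribution, even at a short-term cost to the player.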

Pawel Biernacki

2 Comments | Started Today at 02:13:36 pm


Fractal Orbit in Human Computer Interaction

Fractal Orbit is a project about representing structured data on a 2D screen. The data can be anything: textbooks, source code, business databases, knowledge-base representations. Since knowledge bases are directly connected to AI, I thought this was the right place to share the project.

It represents knowledge trees in a way that reminds me of fractals. Child nodes orbit around their parent node, adjusting their magnification to fit between the outer and inner circles. Clicking a child node zooms in on its circle together with its grandchild nodes. Clicking the central parent node zooms out, back to the grandparent's circle. It should be interesting to follow node links in and out from one tree node to another, even across indirectly connected nodes of the tree.
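
As I read the description, the geometry might be sketched like this (my own guess at the layout math, not the project's actual code): n child nodes spaced evenly on an orbit around the parent, each scaled so that neighbours just fit.

```python
import math

def orbit_layout(parent_x, parent_y, orbit_radius, n_children):
    """Place n child nodes evenly on a circle around a parent node.

    Each child's own radius is half the chord between adjacent
    centres, so neighbouring children just touch without overlap.
    """
    child_radius = orbit_radius * math.sin(math.pi / n_children)
    positions = []
    for i in range(n_children):
        angle = 2.0 * math.pi * i / n_children
        x = parent_x + orbit_radius * math.cos(angle)
        y = parent_y + orbit_radius * math.sin(angle)
        positions.append((x, y, child_radius))
    return positions
```

Zooming in would then recurse: the clicked child becomes the new parent, and its children are laid out the same way inside the enlarged circle.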

I think it would look cool on round smart watches :)

28 Comments | Started February 04, 2018, 11:19:12 am


Private browsing gets more private in Robotics News

Private browsing gets more private
23 February 2018, 4:59 am

Today, most web browsers have private-browsing modes, in which they temporarily desist from recording the user’s browsing history.

But data accessed during private browsing sessions can still end up tucked away in a computer’s memory, where a sufficiently motivated attacker could retrieve it.

This week, at the Network and Distributed Systems Security Symposium, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Harvard University presented a paper describing a new system, dubbed Veil, that makes private browsing more private.

Veil would provide added protections to people using shared computers in offices, hotel business centers, or university computing centers, and it can be used in conjunction with existing private-browsing systems and with anonymity networks such as Tor, which was designed to protect the identity of web users living under repressive regimes.

“Veil was motivated by all this research that was done previously in the security community that said, ‘Private-browsing modes are leaky — Here are 10 different ways that they leak,’” says Frank Wang, an MIT graduate student in electrical engineering and computer science and first author on the paper. “We asked, ‘What is the fundamental problem?’ And the fundamental problem is that [the browser] collects this information, and then the browser does its best effort to fix it. But at the end of the day, no matter what the browser’s best effort is, it still collects it. We might as well not collect that information in the first place.”

Wang is joined on the paper by his two thesis advisors: Nickolai Zeldovich, an associate professor of electrical engineering and computer science at MIT, and James Mickens, an associate professor of computer science at Harvard.

Shell game

With existing private-browsing sessions, Wang explains, a browser will retrieve data much as it always does and load it into memory. When the session is over, it attempts to erase whatever it retrieved.

But in today’s computers, memory management is a complex process, with data continuously moving around between different cores (processing units) and caches (local, high-speed memory banks). When memory banks fill up, the operating system might transfer data to the computer’s hard drive, where it could remain for days, even after it’s no longer being used.

Generally, a browser won’t know where the data it downloaded has ended up. Even if it did, it wouldn’t necessarily have authorization from the operating system to delete it.

Veil gets around this problem by ensuring that any data the browser loads into memory remains encrypted until it’s actually displayed on-screen. Rather than typing a URL into the browser’s address bar, the Veil user goes to the Veil website and enters the URL there. A special server — which the researchers call a blinding server — transmits a version of the requested page that’s been translated into the Veil format.

The Veil page looks like an ordinary webpage: Any browser can load it. But embedded in the page is a bit of code — much like the embedded code that would, say, run a video or display a list of recent headlines in an ordinary page — that executes a decryption algorithm. The data associated with the page is unintelligible until it passes through that algorithm.
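
Conceptually, the decrypt-at-display step works like this toy sketch (purely illustrative: a SHA-256-keystream XOR of my own devising, not Veil's actual cipher or page format):

```python
import hashlib

def keystream(key, n):
    # Derive n pseudo-random bytes by hashing the key with a counter.
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_cipher(key, data):
    # XOR against the keystream: the same call encrypts and decrypts,
    # so page content can stay opaque in memory until render time.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

page = b"<p>secret page content</p>"
blob = xor_cipher(b"session-key", page)    # what sits in memory
shown = xor_cipher(b"session-key", blob)   # decrypted only to display
```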


Once the data is decrypted, it will need to be loaded in memory for as long as it’s displayed on-screen. That type of temporarily stored data is less likely to be traceable after the browser session is over. But to further confound would-be attackers, Veil includes a few other security features.

One is that the blinding servers randomly add a bunch of meaningless code to every page they serve. That code doesn’t affect the way a page looks to the user, but it drastically changes the appearance of the underlying source file. No two transmissions of a page served by a blinding server look alike, and an adversary who managed to recover a few stray snippets of decrypted code after a Veil session probably wouldn’t be able to determine what page the user had visited.
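
The "no two servings alike" idea can be illustrated with a hypothetical sketch (my own, not the paper's actual obfuscator): inject random, semantically inert HTML comments so each transmission's source differs while the rendered page stays identical.

```python
import random

def obfuscate(html, n_junk=5, seed=None):
    # Scatter random HTML comments through the page source; comments
    # never render, so the page looks the same to the user while the
    # underlying source file differs on every serving.
    rng = random.Random(seed)
    lines = html.splitlines()
    for _ in range(n_junk):
        junk = "<!-- %016x -->" % rng.getrandbits(64)
        lines.insert(rng.randrange(len(lines) + 1), junk)
    return "\n".join(lines)

page = "<html>\n<body>\n<p>hello</p>\n</body>\n</html>"
a = obfuscate(page, seed=1)
b = obfuscate(page, seed=2)   # same page, different source
```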

If the combination of run-time decryption and code obfuscation doesn’t give the user an adequate sense of security, Veil offers an even harder-to-hack option. With this option, the blinding server opens the requested page itself and takes a picture of it. Only the picture is sent to the Veil user, so no executable code ever ends up in the user’s computer. If the user clicks on some part of the image, the browser records the location of the click and sends it to the blinding server, which processes it and returns an image of the updated page.

The back end

Veil does, of course, require web developers to create Veil versions of their sites. But Wang and his colleagues have designed a compiler that performs this conversion automatically. The prototype of the compiler even uploads the converted site to a blinding server. The developer simply feeds the existing content for his or her site to the compiler.

A slightly more demanding requirement is the maintenance of the blinding servers. These could be hosted by either a network of private volunteers or a for-profit company. But site managers may wish to host Veil-enabled versions of their sites themselves. For web services that already emphasize the privacy protections they afford their customers, the added protections provided by Veil could offer a competitive advantage.

“Veil attempts to provide a private browsing mode without relying on browsers,” says Taesoo Kim, an assistant professor of computer science at Georgia Tech, who was not involved in the research. “Even if end users didn't explicitly enable the private browsing mode, they still can get benefits from Veil-enabled websites. Veil aims to be practical — it doesn't require any modification on the browser side — and to be stronger — taking care of other corner cases that browsers do not have full control of.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Use the link at the top of the story to get to the original article.

Started Today at 12:02:21 pm


XKCD Comic : The Simpsons in XKCD Comic

The Simpsons
23 February 2018, 5:00 am

In-universe, Bart Simpson and Harry Potter were the same age in 1990. Bart is perpetually 10 years old because of a spell put on his town by someone trying to keep him from getting his Hogwarts letter.

Source: xkcd.com

Started Today at 12:02:20 pm


MIT Sandbox’s first acquired startup gives back in Robotics News

MIT Sandbox’s first acquired startup gives back
21 February 2018, 6:00 pm

In spring 2016, the Institute launched MIT Sandbox, a program that provides fledgling entrepreneurs with funding, mentorship, education, and other support to help shape and launch early-stage ventures.

Two years later, the program is celebrating a major entrepreneurial milestone: Escher Reality, an augmented-reality (AR) startup from the first Sandbox cohort, has been acquired by an AR-game-developer giant — and plans to give back to the program that helped it launch.

Escher is developing a back-end platform for AR that allows, for the first time, cross-platform and fully multiplayer games. It also enables “persistent experiences,” as the startup calls them, in which an AR game remembers where a user places digital objects in a physical play area — to return to, or for others to find — and recognizes when other users, connected on other devices, interact with those objects.

On Feb. 1, the startup announced that it sold to Niantic, the major AR-game developer, founded by ex-Google engineers, that’s behind 2016’s massive hit “Pokémon Go.”

Of the roughly 700 startups that so far have gone through Sandbox, several have raised significant rounds and are generating revenue, but Escher is the first that’s penned an exit deal. Attributing part of its early success to Sandbox — especially as a startup operating in a very young industry — Escher co-founders are now giving back to the program.

“The Sandbox is phenomenal,” says Ross Finman, who co-founded Escher with Diana Hu, a graduate of Carnegie Mellon University. “It gives students freedom to explore any idea … and funds and mentorship and a bunch of resources to work with. If someone wants to pursue something, there’s no reason they can’t do it. We are excited to be the first exit coming out of Sandbox and to be the first ones giving back.”

Sandbox aims to open pathways for a range of student innovators, who may have an idea, an experimental technology, or even a specific startup in mind. Sandbox connects students with educational experiences and mentoring, and provides up to $25,000 in funding. Sandbox collaborates with other MIT programs, such as the Martin Trust Center for MIT Entrepreneurship, the Venture Mentoring Service (VMS), and the Gordon Engineering Leadership Program. The program accepts three cohorts per year, in the spring, summer, and fall.

Participating entrepreneurs sign a voluntary pledge to give back if they find success outside the program. Escher plans to give back more than originally pledged. The gift means a lot, emotionally and practically, says Jinane Abounadi, executive director of Sandbox.

“On the emotional side, we’re so proud of the outcome and proud to have been part of the journey,” she says. On the practical side: “Our assumption is that when people are able to give back, they will, so the next generation of students has the same opportunities. That the first company that exits is generously giving back — it validates that assumption.”

Vice Chancellor Ian A. Waitz, now faculty director of Sandbox, who launched the program as dean of the School of Engineering, met Finman at the inception of the program, but a couple years before its launch. “In the interim, he sent me 43 emails asking, ‘When will it happen? We really need this!’” Waitz says. “His passion and enthusiasm are part of what drove the effort forward, the same way they have driven Escher forward. So, it is a wonderful coincidence that his team is the first Sandbox team to have an exit.”

Entrepreneur from the start

At MIT, Finman was deeply involved in the entrepreneurial ecosystem — including as a student advisor during the formation of Sandbox.

In 2014, he was on the founding team of StartMIT (then Start6), founded by Anantha Chandrakasan, the current dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, along with other students, faculty, and administrators. The Independent Activities Period program provides students and postdocs opportunities to participate in venture-building activities and attend events with alumni and Boston innovation leaders. Participants also work with mentors and take field trips to local startups to explore the culture.

StartMIT organizers noted Finman’s enthusiasm. When Abounadi started to put together a student advisory group for Sandbox in 2015, Finman’s name immediately came up from Waitz, who was a key organizer of StartMIT. “I got an email from Ian that told me to contact Ross because he was so passionate about entrepreneurship,” she says.

Finman provided Sandbox organizers with insights into what resources students may need to succeed. For Finman, the opportunity to help was indicative of MIT’s student-focused ecosystem. “MIT is always asking, ‘What do students need?’” he says.

Around the same time Sandbox launched, Finman and Hu were drafting startup ideas. The co-founders had met as undergraduates studying artificial intelligence and computer vision at Carnegie Mellon years before. After graduating, Hu worked as an engineer at Intel, while Finman enrolled at MIT, where he developed advanced computer vision algorithms for robots in the Computer Science and Artificial Intelligence Laboratory’s Marine Robotics Group.

In early 2016, the two decided to bring their combined computer vision experience to a then-emerging field: AR. Most technological development and funding was focused on virtual reality at the time, Finman says, while very few startups were developing AR technologies. “We decided to be contrarian,” he says.

Founding Escher that spring, they were accepted to Sandbox in the first summer cohort, with only a vague idea for advancing AR capabilities. Then, that July, “Pokémon Go” took the world by storm. “That’s when things really took off,” Finman says. “We said, ‘How do you take this to the next level?’”

In and out of Sandbox

Playing “Pokémon Go,” the two noticed a few lacking features: multiplayer and cross-platform capabilities, and the ability to save games.

In Sandbox, the team developed early proof of concepts. To fix the cross-platform issue, they developed a “unifying layer” that connects disparate devices to the same game. To provide multiplayer capabilities, they found a way to synchronize devices with low latency. Then they used algorithmic techniques to help each device recognize where, say, one Pokémon character is in relation to the other in the real world.

The other issue — “a freakishly hard problem,” Finman says — was saving games. When players exit an AR game, they lose all data and need to start over. The startup developed algorithms that let devices always recognize the exact spot where a digital object had been in a physical space, each time a game is restarted.

But perhaps more importantly, Finman says, the co-founders in Sandbox worked on marketing the technology in an industry that was just getting its footing. “In early days, it was putting together demos of user interfaces for developers and gamers,” he says. “For the early markets that many MIT companies go after, you have to ask, ‘How do you get it out to customers?’”

That Sandbox work got Escher that fall into the National Science Foundation’s I-Corps Program, hosted on campus by the VMS. Over the course of the program, they interviewed 100 AR and game developers in about seven weeks, which really helped them understand the market.

In June 2017, Escher joined Y Combinator, a seed accelerator in Silicon Valley. That August, Apple released its ARKit for AR software developers. Then, the many AR limitations — such as lack of multiplayer and cross-platform capabilities — that Escher had been working on for well over a year became apparent to developers. That caught the eye of Niantic, which quickly scooped up the startup.

Now the Escher team of six joins Niantic’s AR development team, as Niantic gears up for the 2018 release of its next title, “Harry Potter: Wizards Unite.”

“[Niantic] has been a trailblazer of AR and getting people excited about AR. That’s why it’s such a good match for us,” Finman says, adding, “Because who doesn’t want to play wizard battles in the real world?”

For Abounadi, Escher’s story is a prime example of what Sandbox stands for. Escher entered the program with only an idea for a brand-new technology that had no definite application in an emerging industry. “We expect that a good portion of the ideas people work on in Sandbox will fit exactly that mold,” she says. “Because it’s MIT, and we have people pushing the boundaries on new technologies.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Use the link at the top of the story to get to the original article.

Started February 22, 2018, 12:00:24 pm


Noam Chomsky Has Weighed In On A.I. Where Do You Stand? in AI News

Noam Chomsky Has Weighed In On A.I. Where Do You Stand?
There is a debate about the path to cognition. Will it be a statistical big data approach or a more biologically directed strategy?

What does this group think?


15 Comments | Started February 17, 2018, 07:20:44 pm


They Are Coming For US! in General Robotics Talk

1 Comment | Started February 21, 2018, 01:01:05 pm


XKCD Comic : Self-Driving Issues in XKCD Comic

Self-Driving Issues
21 February 2018, 5:00 am

If most people turn into murderers all of a sudden, we'll need to push out a firmware update or something.

Source: xkcd.com

Started February 21, 2018, 12:00:06 pm

Sing for Fame in Chatbots - English

Sing for Fame is a bot that hosts a singing competition.

Users can show their skills by singing their favorite songs, and if someone needs inspiration, the bot provides suggestions, including song lyrics and videos.

The bot then plays each recording to other users, who can rate it, and based on the ratings the bot generates a top ten.

Jan 30, 2018, 22:17:57 pm

ConciergeBot in Assistants

A concierge service bot that handles guest requests and FAQs, as well as recommends restaurants and local attractions.

Messenger Link : messenger.com/t/rthhotel

Jan 30, 2018, 22:11:55 pm

What are the main techniques for the development of a good chatbot ? in Articles

Chatbots act as some of the most useful and reliable technological helpers for those who own ecommerce websites and similar resources. However, an important problem is that people might not know which technologies are best suited to achieving their goals. Thus, in today’s article you have an opportunity to become more familiar with the most important principles of chatbot building.

Oct 12, 2017, 01:31:00 am

Kweri in Chatbots - English

Kweri asks you questions of brilliance and stupidity. Provide correct answers to win. Type ‘Y’ for yes and ‘N’ for no!

FB Messenger

Oct 12, 2017, 01:24:37 am

The Conversational Interface: Talking to Smart Devices in Books

This book provides a comprehensive introduction to the conversational interface, which is becoming the main mode of interaction with virtual personal assistants, smart devices, various types of wearables, and social robots. The book consists of four parts: Part I presents the background to conversational interfaces, examining past and present work on spoken language interaction with computers; Part II covers the various technologies that are required to build a conversational interface, along with practical chapters and exercises using open source tools; Part III looks at interactions with smart devices, wearables, and robots, and then goes on to discuss the role of emotion and personality in the conversational interface; Part IV examines methods for evaluating conversational interfaces and discusses future directions.

Aug 17, 2017, 02:51:19 am

Explained: Neural networks in Articles

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years.

Jul 26, 2017, 23:42:33 pm

It's Alive in Chatbots - English

[Messenger] Enjoy making your bot with our user-friendly interface. No coding skills necessary. Publish your bot in a click.

Once LIVE on your Facebook Page, it is integrated within the “Messages” of your page. This means your bot is allowed (or not) to interact with and answer people who contact you through the private “Messages” feature of your Facebook Page, or directly through the Messenger App. You can view all the conversations directly in your Facebook account. This also means that no one needs to download an app, and messages are sent directly as notifications to your users.

Jul 11, 2017, 17:18:27 pm

Star Wars: The Last Jedi in Robots in Movies

Star Wars: The Last Jedi (also known as Star Wars: Episode VIII – The Last Jedi) is an upcoming American epic space opera film written and directed by Rian Johnson. It is the second film in the Star Wars sequel trilogy, following Star Wars: The Force Awakens (2015).

Having taken her first steps into a larger world, Rey continues her epic journey with Finn, Poe and Luke Skywalker in the next chapter of the saga.

Release date : December 2017

Jul 10, 2017, 10:39:45 am

Alien: Covenant in Robots in Movies

In 2104 the colonization ship Covenant is bound for a remote planet, Origae-6, with two thousand colonists and a thousand human embryos onboard. The ship is monitored by Walter, a newer synthetic physically resembling the earlier David model, albeit with some modifications. A stellar neutrino burst damages the ship, killing some of the colonists. Walter orders the ship's computer to wake the crew from stasis, but the ship's captain, Jake Branson, dies when his stasis pod malfunctions. While repairing the ship, the crew picks up a radio transmission from a nearby unknown planet, dubbed by Ricks as "planet number 4". Against the objections of Daniels, Branson's widow, now-Captain Oram decides to investigate.

Jul 08, 2017, 05:52:25 am