Borsuk in AI Programming

I have a new project. My former project was Perkun, an experimental AI language based on my own optimization algorithm supporting hidden variables. My new project is Borsuk; you can download it from:


It is not finished yet, but I wanted to discuss it here. The problem with Perkun is that it assumes that none of the hidden variables are independent (the most general assumption, but also very costly in terms of memory and computational power). Borsuk will assume that most hidden variables are independent, which will allow it to have hundreds, possibly thousands, of hidden variables.
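To see why the independence assumption matters, compare the storage a belief state needs in each case. A back-of-the-envelope sketch in Python (illustrative only, not Borsuk code):

```python
# Illustrative sketch, not Borsuk code: storage needed for a belief
# state over n boolean hidden variables.
def joint_entries(n):
    # Without independence, a full joint distribution needs one
    # probability per combination of values.
    return 2 ** n

def marginal_entries(n):
    # Under independence, one marginal probability per variable suffices.
    return n

print(joint_entries(387))     # astronomically large (~10^116)
print(marginal_entries(387))  # 387
```

With 387 boolean hidden variables, as in the example below, the full joint is hopeless, while the independent factorization costs only 387 numbers.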

Take a look at the file examples/example2_fantasy.borsuk. If you run it with borsuk (which you have to build first), you will obtain the file examples/example2_fantasy.txt. Among other things, it contains 387 hidden variables generated from the following code:

hidden variable has_(A:person)_(X:activity)_(B:person):boolean;
hidden variable does_(A:person)_like_(B:person):boolean;
hidden variable is_(A:person)_afraid_of_(B:person):boolean;
hidden variable (A:person)_is_in_(X:town):boolean;
hidden variable has_(A:person)_told_(B:person)_that_(C:person)_has_(X:activity)_(D:person):boolean;

These are "templates" for the hidden variables to be generated. The last one, for example, enumerates all tuples (A,B,C,X,D) such that A, B, C and D are persons and X is an activity. It then generates variables like:

hidden variable has_pregor_told_dorban_that_pregor_has_attacked_me:{none,false,true};
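That expansion is essentially a Cartesian product over the placeholder domains. A rough sketch of the idea in Python (hypothetical domains and simplified placeholder syntax; not Borsuk's implementation):

```python
from itertools import product

# Hypothetical domains for illustration; the real ones would come from
# the .borsuk file's type declarations.
domains = {"(A)": ["pregor", "dorban"],
           "(B)": ["pregor", "dorban"],
           "(C)": ["pregor", "dorban"],
           "(X)": ["attacked"],
           "(D)": ["pregor", "dorban"]}

def expand(template, domains):
    """Instantiate a template once per tuple in the Cartesian product."""
    for values in product(*domains.values()):
        name = template
        for placeholder, value in zip(domains, values):
            name = name.replace(placeholder, value)
        yield name

# Simplified form of the last template above:
template = "has_(A)_told_(B)_that_(C)_has_(X)_(D)"
for name in expand(template, domains):
    print(f"hidden variable {name}:{{none,false,true}};")  # 16 lines
```

With two persons and one activity this already yields 2×2×2×1×2 = 16 variables, which is how a handful of templates can blow up into hundreds of hidden variables.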

In short - I plan to use the same algorithm but allow many more hidden variables than in Perkun.

Building Borsuk requires SWI-Prolog (the development packages).

2 Comments | Started December 12, 2018, 12:01:26 am


Anyone wants to learn logic? in General AI Discussion

Here is Stanford Introduction to Logic, an online course on symbolic logic. I believe it covers the most interesting parts of logic in general. Enjoy :)

70 Comments | Started December 07, 2018, 01:32:40 pm






ChatbotML on Twitch in Home Made Robots


NOTE: The chatbot now has greater control of the video, including the ability to pause it accurately to allow extra reading time.

ChatbotML is turning out to be a fun way to discover how chatbots work on Twitch. This early prototype can read from and write to the Twitch stream chat, lip-sync text to speech, and render a video response on the fly. It is currently an alpha version, made available for testing.

If you are on Twitch, please have a look: https://www.twitch.tv/chatbotml

Edit notes: This post has been slightly edited; a video in the post has been updated.

17 Comments | Started October 10, 2018, 01:11:36 am


Help Mr. Wiginnie recognize these pictures! in General AI Discussion

Mr. Wiginnie has scrambled his photos once again. He got tipsy one night and went on a rampage in a photo editor. Unfortunately, he didn't name the files properly, so now we need to help him figure out which photos are the originals.

Get to know the originals: stare at them for a few minutes. When viewing the edits, keep your head up straight; don't tip your head, and don't edit the photos yourself.

Below are 8 photos: four originals and four edits. For each of the four edits, give your opinion on which original it comes from. If you have an ANN, you may also use it to help you.

2 Comments | Started December 13, 2018, 04:43:49 pm


XKCD Comic : FDR in XKCD Comic

12 December 2018, 5:00 am

June 21st, 365, the date of the big Mediterranean earthquake and tsunami, lived in infamy for a few centuries before fading. Maybe the trick is a catchy rhyme; the '5th of November' thing is still going strong over 400 years later.

Source: xkcd.com

Started December 13, 2018, 12:00:08 pm


Deep-learning technique reveals “invisible” objects in the dark in Robotics News

Deep-learning technique reveals “invisible” objects in the dark
12 December 2018, 5:00 am

Small imperfections in a wine glass or tiny creases in a contact lens can be tricky to make out, even in good light. In almost total darkness, images of such transparent features or objects are nearly impossible to decipher. But now, engineers at MIT have developed a technique that can reveal these “invisible” objects, in the dark.

In a study published today in Physical Review Letters, the researchers reconstructed transparent objects from images of those objects, taken in almost pitch-black conditions. They did this using a “deep neural network,” a machine-learning technique that involves training a computer to associate certain inputs with specific outputs — in this case, dark, grainy images of transparent objects and the objects themselves.

The team trained a computer to recognize more than 10,000 transparent glass-like etchings, based on extremely grainy images of those patterns. The images were taken in very low lighting conditions, with about one photon per pixel — far less light than a camera would register in a dark, sealed room. They then showed the computer a new grainy image, not included in the training data, and found that it learned to reconstruct the transparent object that the darkness had obscured.

The results demonstrate that deep neural networks may be used to illuminate transparent features such as biological tissues and cells, in images taken with very little light.

“In the lab, if you blast biological cells with light, you burn them, and there is nothing left to image,” says George Barbastathis, professor of mechanical engineering at MIT. “When it comes to X-ray imaging, if you expose a patient to X-rays, you increase the danger they may get cancer. What we’re doing here is, you can get the same image quality, but with a lower exposure to the patient. And in biology, you can reduce the damage to biological specimens when you want to sample them.”

Barbastathis’ co-authors on the paper are lead author Alexandre Goy, Kwabena Arthur, and Shuai Li.

Deep dark learning

Neural networks are computational schemes that are designed to loosely emulate the way the brain’s neurons work together to process complex data inputs. A neural network works by performing successive “layers” of mathematical manipulations. Each computational layer calculates the probability for a given output, based on an initial input. For instance, given an image of a dog, a neural network may identify features reminiscent first of an animal, then more specifically a dog, and ultimately, a beagle. A “deep” neural network encompasses many, much more detailed layers of computation between input and output.

A researcher can “train” such a network to perform computations faster and more accurately, by feeding it hundreds or thousands of images, not just of dogs, but other animals, objects, and people, along with the correct label for each image. Given enough data to learn from, the neural network should be able to correctly classify completely new images.
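As a toy illustration of that training loop, here is a one-parameter logistic classifier in plain Python (nothing like the paper's deep network, just the "show it labeled examples, nudge the weights, then test on unseen input" idea):

```python
# Minimal, illustrative supervised training (a toy logistic classifier
# on 1-D inputs, not the paper's deep network): show the model many
# (input, label) pairs, adjust its weights after each one, and it
# learns to classify inputs it has never seen.
import math
import random

random.seed(0)
# Toy labeled data: inputs near -2 are class 0, inputs near +2 are class 1.
data = [(random.gauss(-2, 0.5), 0) for _ in range(100)] + \
       [(random.gauss(2, 0.5), 1) for _ in range(100)]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):                               # training epochs
    for x, label in data:
        p = 1 / (1 + math.exp(-(w * x + b)))       # predicted probability
        w -= lr * (p - label) * x                  # gradient step on the loss
        b -= lr * (p - label)

# A brand-new input near +2 is classified correctly.
print(1 / (1 + math.exp(-(w * 3.0 + b))) > 0.5)    # True
```

The deep networks in the study follow the same recipe, only with millions of weights and images of transparent etchings instead of 1-D points.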

Deep neural networks have been widely applied in the field of computer vision and image recognition, and recently, Barbastathis and others developed neural networks to reconstruct transparent objects in images taken with plenty of light. Now his team is the first to use deep neural networks in experiments to reveal invisible objects in images taken in the dark.

“Invisible objects can be revealed in different ways, but it usually requires you to use ample light,” Barbastathis says. “What we’re doing now is visualizing the invisible objects, in the dark. So it’s like two difficulties combined. And yet we can still do the same amount of revelation.”

The law of light

The team consulted a database of 10,000 integrated circuits (IC), each of which is etched with a different intricate pattern of horizontal and vertical bars.

“When we look with the naked eye, we don’t see much — they each look like a transparent piece of glass,” Goy says. “But there are actually very fine and shallow structures that still have an effect on light.”

Instead of etching each of the 10,000 patterns onto as many glass slides, the researchers used a “phase spatial light modulator,” an instrument that displays the pattern on a single glass slide in a way that recreates the same optical effect that an actual etched slide would have.

The researchers set up an experiment in which they pointed a camera at a small aluminum frame containing the light modulator. They then used the device to reproduce each of the 10,000 IC patterns from the database. The researchers covered the entire experiment so it was shielded from light, and then used the light modulator to rapidly rotate through each pattern, similarly to a slide carousel. They took images of each transparent pattern, in near total darkness, producing “salt-and-pepper” images that resembled little more than static on a television screen.

The team developed a deep neural network to identify transparent patterns from dark images, then fed the network each of the 10,000 grainy photographs taken by the camera, along with their corresponding patterns, or what the researchers called “ground-truths.”

“You tell the computer, ‘If I put this in, you get this out,’” Goy says. “You do this 10,000 times, and after the training, you hope that if you give it a new input, it can tell you what it sees.”

“It’s a little worse than a baby,” Barbastathis quips. “Usually babies learn a bit faster.”

The researchers set their camera to take images slightly out of focus. As counterintuitive as it seems, this actually works to bring a transparent object into focus. Or, more precisely, defocusing provides some evidence, in the form of ripples in the detected light, that a transparent object may be present. Such ripples are a visual flag that a neural network can detect as a first sign that an object is somewhere in an image’s graininess.

But defocusing also creates blur, which can muddy a neural network’s computations. To deal with this, the researchers incorporated into the neural network a law in physics that describes the behavior of light, and how it creates a blurring effect when a camera is defocused.

“What we know is the physical law of light propagation between the sample and the camera,” Barbastathis says. “It’s better to include this knowledge in the model, so the neural network doesn’t waste time learning something that we already know.”
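A much-simplified sketch of that idea (toy 1-D signals, with a moving-average "blur" standing in for the real light-propagation physics; this is not the authors' code):

```python
# Hedged sketch of "physics-informed" reconstruction: the known physics
# (here a toy moving-average blur standing in for defocused light
# propagation) is built into the objective, so the learned part only
# has to explain what the physics does not already explain.
def blur(signal, width=3):
    """Known forward model: defocus modeled as a moving average."""
    half = width // 2
    return [sum(signal[max(0, i - half): i + half + 1]) /
            len(signal[max(0, i - half): i + half + 1])
            for i in range(len(signal))]

def data_misfit(estimate, measurement):
    """Objective term: compare the *blurred* estimate to what the
    camera measured, instead of asking the network to also learn
    the blur."""
    predicted = blur(estimate)
    return sum((p - m) ** 2 for p, m in zip(predicted, measurement))

# A candidate that matches the true pattern scores zero misfit.
true_pattern = [0, 0, 1, 1, 0, 0, 1, 0]
measurement = blur(true_pattern)
print(data_misfit(true_pattern, measurement))  # 0.0
```

Because the blur operator is supplied rather than learned, the model's capacity goes entirely into recovering the pattern itself.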

Sharper image

After training the neural network on 10,000 images of different IC patterns, the team created a completely new pattern, not included in the original training set. When they took an image of the pattern, again in darkness, and fed this image into the neural network, they compared the patterns that the neural network reconstructed, both with and without the physical law embedded in the network.

They found that both methods reconstructed the original transparent pattern reasonably well, but the “physics-informed reconstruction” produced a sharper, more accurate image. What’s more, this reconstructed pattern, from an image taken in near total darkness, was more defined than a physics-informed reconstruction of the same pattern, imaged in light that was more than 1,000 times brighter.

The team repeated their experiments with a totally new dataset, consisting of more than 10,000 images of more general and varied objects, including people, places, and animals. After training, the researchers fed the neural network a completely new image, taken in the dark, of a transparent etching of a scene with gondolas docked at a pier. Again, they found that the physics-informed reconstruction produced a more accurate image of the original, compared to reproductions without the physical law embedded.

“We have shown that deep learning can reveal invisible objects in the dark,” Goy says. “This result is of practical importance for medical imaging to lower the exposure of the patient to harmful radiation, and for astronomical imaging.”

This research was supported, in part, by the Intelligence Advanced Research Projects Activity and Singapore’s National Research Foundation.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage


Started December 12, 2018, 12:00:11 pm


Nightflyers in AI in Film and Literature.

This is something of a psychological thriller...aboard a large spaceship.

Nightflyers is an American science fiction television series on Syfy that premiered on December 2, 2018, based on the novella and series of short stories of the same name by George R. R. Martin. The series is set to consist of ten episodes.

In 2093, a team of scientists embarks on a journey into space aboard an advanced ship called the Nightflyer to make first contact with alien life-forms. However, when terrifying and violent events occur, the team begins to question each other and to realize there is something on-board the Nightflyer with them.


3 Comments | Started December 11, 2018, 01:35:04 pm


Why AI decentralisation is vital for the future of humanity? in Future of AI

Digital immortality: why AI decentralisation is vital for the future of humanity, and it's not only about open markets

We're waiting for your opinions.

3 Comments | Started December 05, 2018, 03:05:33 pm


XKCD Comic : Laptop Issues in XKCD Comic

Laptop Issues
10 December 2018, 5:00 am

Hang on, we got a call from the feds. They say we can do whatever with him, but the EPA doesn't want that laptop in the ocean. They're sending a team.

Source: xkcd.com

Started December 11, 2018, 12:00:40 pm

Mortal Engines in Robots in Movies

Mortal Engines is a 2018 post-apocalyptic adventure film directed by Christian Rivers, with a screenplay by Fran Walsh, Philippa Boyens and Peter Jackson, based on the novel of the same name by Philip Reeve.

Tom (Robert Sheehan) is a young Londoner who has only ever lived inside his travelling hometown, and his feet have never touched grass, mud or land. His first taste of the outside comes quite abruptly: Tom gets in the way of an attempt by the masked Hester (Hera Hilmar) to kill Thaddeus Valentine (Hugo Weaving), a powerful man she blames for her mother’s murder, and both Hester and Tom end up thrown out of the moving "traction" city, to fend for themselves.

The film also stars Stephen Lang as Shrike, Hester's guardian and the last of an undead battalion of soldiers known as Stalkers: war casualties re-animated with machine parts.

Dec 08, 2018, 18:50:44 pm

Alita: Battle Angel in Robots in Movies

Alita: Battle Angel is an upcoming American cyberpunk action film based on Yukito Kishiro's manga Battle Angel Alita. Produced by James Cameron and Jon Landau, the film is directed by Robert Rodriguez from a screenplay by Cameron and Laeta Kalogridis.

Visionary filmmakers James Cameron (AVATAR) and Robert Rodriguez (SIN CITY) create a groundbreaking new heroine in ALITA: BATTLE ANGEL, an action-packed story of hope, love and empowerment. Set several centuries in the future, the abandoned Alita (Rosa Salazar) is found in the scrapyard of Iron City by Ido (Christoph Waltz), a compassionate cyber-doctor who takes the unconscious cyborg Alita to his clinic. When Alita awakens she has no memory of who she is, nor does she have any recognition of the world she finds herself in. Everything is new to Alita, every experience a first.

As she learns to navigate her new life and the treacherous streets of Iron City, Ido tries to shield Alita from her mysterious past while her street-smart new friend, Hugo (Keean Johnson), offers instead to help trigger her memories. A growing affection develops between the two until deadly forces come after Alita and threaten her newfound relationships. It is then that Alita discovers she has extraordinary fighting abilities that could be used to save the friends and family she’s grown to love.

Determined to uncover the truth behind her origin, Alita sets out on a journey that will lead her to take on the injustices of this dark, corrupt world, and discover that one young woman can change the world in which she lives.

Scheduled to be released on February 14, 2019

Nov 16, 2018, 18:25:25 pm

The Beyond in Robots in Movies

A team of robotically-advanced astronauts travel through a new wormhole, but the mission returns early, sparking questions about what was discovered.

Nov 12, 2018, 22:38:18 pm

Mitsuku wins Loebner Prize 2018! in Articles

The Loebner Prize 2018 was held at Bletchley Park, England on September 8th this year, and Mitsuku won it for the 4th time, equalling the record number of wins. Only 2 other people (Joseph Weintraub and Bruce Wilcox) have achieved this. In this blog, I'll explain more about the event, the day itself, and a few personal thoughts about the future of the contest.

Sep 17, 2018, 19:10:51 pm

Automata (Series) in Robots on TV

In an alternate 1930s Prohibition-era New York City, it's not liquor that is outlawed but the future production of highly sentient robots known as automatons. Automata follows former NYPD detective turned private eye Sam Regal and his incredibly smart automaton partner, Carl Swangee. Together, they work to solve the case and understand each other in this dystopian America.

Sep 08, 2018, 00:16:22 am

Steve Worswick (Mitsuku) on BBC Radio 4 in Other

Steve Worswick: "I appeared on BBC Radio 4 in August in a feature about chatbots. Leeds Beckett University were using one to offer places to students."

Sep 06, 2018, 23:50:39 pm

Extinction in Robots in Movies

Extinction is a 2018 American science fiction thriller film directed by Ben Young and written by Spenser Cohen, Eric Heisserer and Brad Kane. The film stars Lizzy Caplan, Michael Peña, Mike Colter, Lilly Aspell, Emma Booth, Israel Broussard, and Lex Shrapnel. It was released on Netflix on July 27, 2018.

Peter, an engineer, has recurring nightmares in which he and his family suffer through violent, alien invasion-like confrontations with an unknown enemy. As the nightmares become more stressful, they take a toll on his family, too.

Sep 06, 2018, 23:42:51 pm

Tau in Robots in Movies

Tau is a 2018 science fiction thriller film, directed by Federico D'Alessandro, from a screenplay by Noga Landau. It stars Maika Monroe, Ed Skrein and Gary Oldman.

It was released on June 29, 2018, by Netflix.

Julia is a loner who makes money as a thief in seedy nightclubs. One night, she is abducted from her home and wakes up restrained and gagged in a dark prison inside of a home with two other people, each with an implant in the back of their necks. As "subject 3," she endures a series of torturous psychological sessions by a shadowy figure in a lab. One night, she steals a pair of scissors and destroys the lab in an escape attempt, but she is stopped and the other two subjects are killed by a robot in the house, Aries, run by an artificial intelligence, Tau.

Alex, the technology executive who owns the house, reveals the implant is collecting her neural activity as she completes puzzles, and subjects her to more tests, because he is using the data to develop more advanced A.I. with a big project deadline in a few days.

Sep 06, 2018, 23:30:00 pm

Bot Development Frameworks - Getting Started in Articles

What Are Bot Frameworks?

Simply explained, a bot framework is where bots are built and where their behavior is defined. Developing for the many messaging platforms and SDKs involved in chatbot development can be overwhelming; bot development frameworks abstract away much of that manual work. A bot development framework typically consists of a Bot Builder SDK, a Bot Connector, a Developer Portal, and a Bot Directory. There's also an emulator that you can use to test the developed bot.
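The core abstraction is "write the bot's logic once; let connectors adapt it to each platform." A minimal sketch in Python (hypothetical class names, not any real framework's API):

```python
# Illustrative sketch of the bot-framework abstraction (hypothetical
# names, not a real framework's API): bot logic is written once, and
# per-platform connectors translate messages to and from it.
class EchoBot:
    """Bot behavior, defined once, independent of any platform."""
    def on_message(self, text):
        return f"You said: {text}"

class ConsoleConnector:
    """Stand-in for a platform connector (Slack, Teams, web chat...)."""
    def __init__(self, bot):
        self.bot = bot

    def deliver(self, incoming):
        # A real connector would convert the platform's message format
        # to and from the bot's; here we just pass text through.
        return self.bot.on_message(incoming)

bot = EchoBot()
print(ConsoleConnector(bot).deliver("hello"))  # You said: hello
```

Adding a new platform then means writing a new connector, not rewriting the bot.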

Mar 23, 2018, 20:00:23 pm