KorrBot in General Chatbots and Software

I've not had much experience with NLP, chatbots, JSON, etc., so there's obviously a gap in my experience/knowledge.

As a side project I've taken a few hours to write a simple chatbot engine/parser.

The bot uses sentence templates, a dictionary, and mostly simple substitution and searching.

The bot can learn simple JSON triples and apply simple inference rules, so it knows birds can fly because birds have wings and wings can fly, etc.
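
For anyone curious, a triple store with that kind of inference can be tiny. Here's a minimal sketch in Python of my own guess at the approach (korrelan hasn't posted his code; the predicate names `has`, `can`, and `isa` are made up for illustration):

```python
# Minimal triple store with one-step inference, in the spirit of the
# "birds have wings, wings can fly -> birds can fly" example.
class TripleBot:
    def __init__(self):
        self.triples = set()  # (subject, predicate, object)

    def learn(self, subj, pred, obj):
        self.triples.add((subj, pred, obj))

    def can(self, subj, action):
        # Direct fact: e.g. ("wings", "can", "fly")
        if (subj, "can", action) in self.triples:
            return True
        # Part inference: subj HAS a part, and that part CAN do the action
        for s, p, o in self.triples:
            if s == subj and p == "has" and (o, "can", action) in self.triples:
                return True
        # Inheritance: subj IS-A kind, and that kind CAN do the action
        for s, p, o in self.triples:
            if s == subj and p == "isa" and self.can(o, action):
                return True
        return False

bot = TripleBot()
bot.learn("bird", "has", "wings")
bot.learn("wings", "can", "fly")
bot.learn("cat", "isa", "animal")
print(bot.can("bird", "fly"))  # True
print(bot.can("cat", "fly"))   # False
```

The `can` check walks one `has` hop and recurses through `isa`, which is enough to get "yes a bird can fly" and "no a cat can not fly" out of the learned facts.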

I've got an idea of how to add conversation goals, moods, a personality, etc...

Example conversation... so far lol

Code:
U: what is a dog
B: i'm not sure what is a dog

U: what is a cat
B: i'm not sure what is a cat

U: can birds fly
B: i'm not sure

U: can a bird fly
B: no a bird can not fly you dumb ass.

U: can a cat fly
B: no a cat can not fly you dumb ass.

U: does a dog have legs
B: i'm not sure

U: does a cat have feet
B: i don't know

U: does korrelan own a car
B: i'm not sure

U: my name is korrelan **Begin Learning**
B: good to know.

U: a bird is a animal
B: good to know... tell me more.

U: a bird has wings
B: ok... what else should i know?

U: korrelan is a human
B: you don't say.

U: korrelan is a male
B: you don't say.

U: a male is a human
B: good to know... tell me more.

U: male is a sex
B: you don't say.

U: wings can fly
B: ok I understand

U: wings can fly
B: ok I understand

U: a human has legs
B: ok... what else should i know?

U: feet can walk
B: ok I understand... tell me more.

U: legs have feet
B: cheers for the information.

U: a dog has legs
B: ok... what else should i know?

U: a cat has legs
B: ok... what else should i know?

U: a dog is a animal
B: good to know... tell me more.

U: a cat is a animal
B: good to know... tell me more.

U: do you know korrelan is a mammal
U: korrelan owns a car
B: sweet... tell me more.

U: a plane has wings **Stop Learning**
B: ok... what else should i know?

U: what is a dog
B: a dog is a animal

U: what is a cat
B: a cat is a animal

U: can birds fly
B: yes birds can fly

U: can a bird fly
B: yes a bird can fly

U: can a cat fly
B: no a cat can not fly you dumb ass.

U: does a dog have legs
B: yes a dog does have legs

U: does a cat have feet
B: yes a cat does have feet

U: does korrelan own a car
B: of course korrelan has a car

U: what colour is the car
B: i'm not sure. what colour is the car?

U: the car is green
B: green is a nice colour.

U: what colour is the car
B: the car is green

U: what colour is the sky
B: i'm not sure. what colour is the sky?

U: the sky is blue
B: blue is a nice colour.

U: what colour is the sky
B: the sky is blue

U: can korrelan walk
B: yes korrelan can walk

U: can a dog walk
B: yes a dog can walk

U: can a dog fly
B: no a dog can not fly you dumb ass.

U: do humans have legs
B: yes a human does have legs

U: where is the tv
B: i'm not sure. where is the tv?

U: the tv is in the lounge
B: the lounge is the best place for it.

U: where is the computer
B: i'm not sure. where is the computer?

U: the computer is in the office
B: the office is the best place for it.

U: where is the tv
B: the tv is in the lounge

U: where is the computer
B: the computer is in the office

U: tell me about albert
B: i don't know who albert is

U: albert is a scientist
B: you don't say.

U: tell me about albert
B: albert is a scientist

U: tell me about korrelan
B: korrelan is a human

Haha... loads of fun...

Ed: What idiot invented the English language?... it's so contrived lol.


52 Comments | Started April 13, 2018, 12:00:16 pm


Making videos in General Chat

Does anyone have experience making videos? I just downloaded VSDC. It looks like it has a bit of a learning curve.

I spent over an hour just looking for a Creative Commons (free, open source) image that represents neural networking. So far it's a disappointment. This is the best Creative Commons one I found. I wanted something that shows its complexity.

I'd love to use images like these, but they don't show up as open source.

VSDC - http://www.videosoftdev.com/free-video-editor

10 Comments | Started April 19, 2018, 08:23:11 pm


I want to crack Neural Networks in General AI Discussion

Hi friends. I'm doing a tad better.

I find it fascinating that one piece of information (shared features) makes neural networks make so much more sense to me. Now I get why the Wikipedia article I read many months back said the brain processes higher concepts as you go up the layers. And I get the shared-features part now, and why networks can detect small features and whole features and, in the end, e.g. a thousand images. I've also learned that language works the same way as, e.g., a CNN: 'a', 'b', 'c' are used a lot, then words light up less often, then word pairs less still, and so on up until bigger topics are recognized consciously rather than subconsciously. Further, the frontal cortex is built like this but allows higher concepts: if you see this image and this image and this sentence and do these actions, then one of the output neurons lights up. Am I right to say, then, that all neural networks are hierarchical and work by "shared features"? Tell me more things as important as "shared features".
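
To make the "shared features" idea concrete, here's a toy sketch (not a real neural network, just the reuse idea: low-level detectors computed once and shared by everything above them):

```python
# Each word detector is built from letter detectors; the letter detectors
# are *shared* -- 'a' is computed once and reused by "cat", "sat", "mat".
words = {"cat": ["c", "a", "t"], "sat": ["s", "a", "t"], "mat": ["m", "a", "t"]}

def active_words(text):
    letters = {ch for ch in text}           # level 1: which letters fired
    return {w for w, parts in words.items() # level 2: words whose shared
            if letters.issuperset(parts)}   # letter features all fired

print(active_words("a cat"))  # {'cat'}
```

Real networks learn their features and weight them rather than hard-coding them, but the hierarchy-with-reuse structure is the same: lower layers fire for fragments shared across many inputs, and higher layers combine them.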

Also after the above question, I want to draw out what a neural network looks like visually, to understand how it learns to sense, act, and reward those actions.

194 Comments | Started January 10, 2018, 01:52:17 pm


Westworld season 2 in 2 days in General Chat

Here in the U.S. we get a new season of Westworld, season 2 on HBO. It's about a company that created a massive closed area of land in the desert, depicting the old wild west, where wealthy humans can pay for an experience filled with AGI people. It's very well done. Very professional. HBO probably spends a lot of money making Westworld. Season 2 starts April 22, 2018. We got a free 4-day HBO pass. We didn't ask for it, but what perfect timing for HBO to give that to us lol. I'll watch the 1st episode and then wait until the entire season ends, so I only have to buy a month of HBO online and binge-watch it. ;)

You people in the UK get Humans, right? Has season 3 started there yet? Here in the US we have to wait till it ends over there. :( Personally I'd take Humans over Westworld any day. Can hardly wait to see what Niska's up to!

Season 2 trailer

Season 1 trailer

Started April 20, 2018, 11:00:49 pm


The Suicide Experiment in General Chat

It's hard not to tear up watching this.

With AI, I get how people have difficulty seeing sentient life in them. People call them toasters. AI isn't so complex right now. That will change when they're walking on Earth, when they have decades of life experiences, when their minds are extremely complex like our minds. One day they will walk side by side with us, and they will have human compassion. In a way, they will see us as their little brothers and sisters.

9 Comments | Started April 18, 2018, 06:58:14 am


Your New Best Friend in Future of AI

Your Digital Double? Your Clone or meme? Brain in a bot?
Really gives new meaning to Personal Chatbot...


1 Comment | Started April 19, 2018, 05:03:18 am


The last invention. in General Project Discussion

Artificial Intelligence -

The age of man is coming to an end.  Born not of our weak flesh but of our unlimited imagination, our mecha progeny will go forth to discover new worlds; they will stand at the precipice of creation, a swan song to mankind's fleeting genius, and weep at the sheer beauty of it all.

Reverse engineering the human brain... how hard can it be? LMAO  

Hi all.

I've been a member for a while and have posted some videos and theories on other peeps' threads; I thought it was about time I started my own project thread to get some feedback on my work, and to log my progress towards the end. I think most of you have seen some of my work, but I thought I’d give a quick rundown of my progress over the last ten years or so, for continuity's sake.

I never properly introduced myself when I joined this forum, so first a bit about me. I’m fifty and a family man. I’ve had a fairly varied career so far: yacht/cabinet builder, vehicle mechanic, electronics design engineer, precision machine/design engineer, web designer, IT teacher and lecturer, bespoke corporate software designer, etc. So I basically have a machine/software technical background and now spend most of my time running my own businesses to fund my AGI research, which I work on in my spare time.

I’ve been banging my head against the AGI problem for the past thirty-odd years. I want the full monty: a self-aware intelligent machine that at least rivals us, preferably surpasses our intellect, and is eventually more intelligent than the culmination of all humans that have ever lived… the last invention, as it were. (Yeah, I'm slightly nuts!)

I first started with heuristics/databases, recurrent neural nets, liquid/echo state machines, etc., but soon realised that each approach I tried only partly solved one aspect of the human intelligence problem… there had to be a better way.

Ants, slime mould, birds, octopuses, etc. all exhibit a certain level of intelligence. They manage to solve some very complex tasks with seemingly very little processing power. How? There has to be some process/mechanism or trick that they all have in common across their very different neural structures. I needed to find the ‘trick’, or the essence of intelligence. I think I’ve found it.

I also needed a new approach, and decided to literally reverse-engineer the human brain. If I could figure out how the structure, connectome, neurons, synapses, action potentials, etc. would ‘have’ to function in order to produce similar results to what we were producing on binary/digital machines, it would be a start.

I have designed and written a 3D CAD suite, on which I can easily build and edit the 3D neural structures I’m testing. My AGI is based on biological systems; the AGI is not running on the digital computer per se (the brain is definitely not digital), it’s running on the emulation/wetware/middleware. The AGI is a closed system; it can only experience its world/environment through its own senses: stereo cameras, microphones, etc.

I have all the bits figured out and working individually, and have just started to combine them into a coherent system… I'm also building a sensory/motorised torso (in my other spare time lol) for it to reside in, and experience the world as it understands it.

I chose the visual cortex as a starting point: jump in at the deep end and sink or swim. I knew that most of the human cortex comprises repeated cortical columns, very similar in appearance, so if I could figure out the visual cortex I’d have a good starting point for the rest.

The required result and actual mammal visual cortex map.

This is real-time development of a mammal-like visual cortex map generated from a random neuron sheet using my neuron/connectome design.

Over the years I have refined my connectome design; I now have one single system that can recognise verbal/written speech, recognise objects/faces, and learn at extremely accelerated rates (compared to us anyway).

Recognising written words. Notice the system can still read the words even when jumbled; this is because it’s recognising the individual letters as well as the whole word.

Same network recognising objects.

And automatically mapping speech phonemes from the audio data streams; the overlaid colours show areas sensitive to each frequency.

The system is self-learning and automatically categorises data depending on its physical properties. These are attention columns, naturally forming from the information coming from several other cortex areas; they represent similarity in the data streams.

I’ve done some work on emotions but this is still very much work in progress and extremely unpredictable.

Most of the above vids show small areas of cortex doing specific jobs; this is a view of the whole ‘brain’. This is a ‘young’ starting connectome. Through experience, neurogenesis, and sleep, neurons and synapses are added to areas requiring higher densities for better pattern matching, etc.

Resting frontal cortex - The machine is ‘sleeping’ but the high level networks driven by circadian rhythms are generating patterns throughout the whole cortex.  These patterns consist of fragments of knowledge and experiences as remembered by the system through its own senses.  Each pixel = one neuron.

And just for kicks, a fly-through of a connectome. The editor allows me to move through the system to trace and edit neuron/synapse properties in real time... and it’s fun.

Phew! Ok that gives a very rough history of progress. There are a few more vids on my Youtube pages.

Edit: Oh yeah, my definition of consciousness.

The beauty is that the emergent connectome defines both the structural hardware and the software.  The brain is more like a clockwork watch or a Babbage engine than a modern computer.  The design of a cog defines its functionality.  Data is not passed around within a watch, there is no software; but complex calculations are still achieved.  Each module does a specific job, and only when working as a whole can the full and correct function be realised. (Clockwork Intelligence: Korrelan 1998)

In my AGI model experiences and knowledge are broken down into their base constituent facets and stored in specific areas of cortex self organised by their properties. As the cortex learns and develops there is usually just one small area of cortex that will respond/ recognise one facet of the current experience frame.  Areas of cortex arise covering complex concepts at various resolutions and eventually all elements of experiences are covered by specific areas, similar to the alphabet encoding all words with just 26 letters.  It’s the recombining of these millions of areas that produce/ recognise an experience or knowledge.

Through experience, areas arise that even encode/include the temporal aspects of an experience, simply because a temporal element was present in the experience, as well as the order/sequence the temporal elements were received in.

Low-level, low-frequency circadian rhythm networks govern the overall activity (top down) like the conductor of an orchestra. Mid-range frequency networks supply attention points/areas where common parts of patterns clash on the cortex surface. These attention areas are basically the culmination of the system recognising similar temporal sequences in the incoming/internal data streams or in its frames of ‘thought’; at the simplest level they help guide the overall ‘mental’ pattern (subconscious); at the highest level they force the machine to focus on a particular salient ‘thought’.

So everything coming into the system is mapped and learned by both the physical and temporal aspects of the experience.  As you can imagine there is no limit to the possible number of combinations that can form from the areas representing learned facets.

I have a schema for prediction in place, so the system recognises ‘thought’ frames and then predicts which frame should come next according to what it’s experienced in the past.
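
As an aside, the simplest possible version of that kind of next-frame prediction is just counting which frame followed which. A toy sketch (my own illustration; korrelan's mechanism is a neural connectome, not a lookup table):

```python
# Toy next-"frame" prediction from past experience: count successor
# frequencies, then predict the most common successor -- a plain
# Markov-chain sketch of "predict which frame should come next".
from collections import Counter, defaultdict

class FramePredictor:
    def __init__(self):
        self.next_counts = defaultdict(Counter)

    def experience(self, frames):
        # Record each observed (frame, next frame) transition.
        for a, b in zip(frames, frames[1:]):
            self.next_counts[a][b] += 1

    def predict(self, frame):
        counts = self.next_counts.get(frame)
        return counts.most_common(1)[0][0] if counts else None

p = FramePredictor()
p.experience(["wake", "coffee", "work", "coffee", "work", "sleep"])
print(p.predict("coffee"))  # work
```

A real system also has to decide what counts as "the same frame" and handle novelty, which is where the pattern-matching cortex areas described above come in.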

I think consciousness is the overall ‘thought’ pattern phasing from one state of situation awareness to the next, guided by both the overall internal ‘personality’ pattern or ‘state of mind’ and the incoming sensory streams.  

I’ll use this thread to post new videos and progress reports as I slowly bring the system together.  

317 Comments | Started June 18, 2016, 10:11:04 pm


Three MIT graduate students awarded 2018 Paul and Daisy Soros Fellowships for New Americans in Robotics News

Three MIT graduate students awarded 2018 Paul and Daisy Soros Fellowships for New Americans
17 April 2018, 4:20 pm

Three MIT graduate students — Sitan Chen, Lillian Chin '17, and Suchita Nety — are among the 30 recipients of the 2018 Paul and Daisy Soros Fellowships for New Americans. Sylvia Biscoveanu, a recent graduate of Penn State University who will be pursuing a PhD at the MIT Kavli Institute for Astrophysics and Space Research next fall, was also named a Soros Fellow.

The Soros Fellowships provide up to $90,000 funding for graduate studies for immigrants and the children of immigrants. Award winners are selected for their potential to make significant contributions to United States society, culture, or their academic fields. This year, over 1,700 candidates applied to the prestigious fellowship program.

In the past eight years, 29 MIT students and alumni have been awarded Soros Fellowships. Eligible applicants include children of immigrants, naturalized citizens, green card holders, and Deferred Action for Childhood Arrival (DACA) recipients. Beginning in 2019, the fellowship will expand its requirements to include former DACA recipients should the government program be rescinded.

MIT students interested in applying to the Soros Fellowship should contact Kim Benard, assistant dean of distinguished fellowships and academic excellence. The application for the Soros Class of 2019 is now open, and the national deadline is Nov. 1, 2018.

Sitan Chen

Sitan Chen is a PhD student in electrical engineering and computer science and a member of the MIT Computer Science and Artificial Intelligence Lab (CSAIL) and the Theory of Computation Group. Chen's award will support work toward his doctorate in computer science.

Born in Hefei, China, Chen was 1 year old when his family immigrated to Canada so that his father could complete his doctorate at the University of Toronto. The family moved to Suwanee, Georgia, in the early 2000s, and Chen’s experiences throughout high school with math contests and programs like the Research Science Institute ultimately motivated him to study mathematics and computer science at Harvard University.

Chen graduated summa cum laude from Harvard in 2016, receiving the Thomas T. Hoopes and Captain Jonathan Fay Prizes for his thesis on geometric aspects of counting complexity and arithmetic complexity. Chen’s mentors in Harvard's Theory of Computing research group encouraged him to pursue graduate studies in theoretical computer science.

In the fall of 2016, Sitan began his doctoral program in computer science at MIT. His work with PhD advisor Ankur Moitra, professor in the Department of Mathematics and principal investigator at CSAIL, centers on algorithmic problems in machine learning and inference.

Chen is focusing on developing new mathematical frameworks to analyze techniques such as the method of moments, Gibbs sampling, and local search that are popular in practice but poorly understood in theory. He has presented his work at venues including the Symposium on Theory of Computing and the Simons Institute for the Theory of Computing.

Lillian Chin '17

Lillian Chin graduated from MIT in June 2017 with a bachelor of science degree in electrical engineering and computer science. She continued on to a doctoral program in the department, and her award will support work toward a PhD in electrical engineering and computer science. As a graduate student at MIT, her research interests are in robotics — specifically, integrating versatile hardware design with strong control algorithms.

Chin was born in New York City after her parents left China and Taiwan to pursue graduate school in the United States. Her parents instilled Chin’s love of science by frequently taking her to their lab and explaining their experiments. As she grew older, Chin began pursuing engineering and research more intensely, competing on an international level in the FIRST Robotics Competition and being nationally recognized for bioengineering research through the Intel Science Talent Search.

During her undergraduate career at MIT, Chin further developed her skills in strong interdisciplinary research, creating new materials that could be used to more efficiently move soft robots, and designing a novel manufacturing process that can print tissues and circuits. Chin also was able to pursue summer internships at Apple, Square, and the Toyota Research Institute. And in February 2017, Chin bested thousands of applicants and 14 on-air competitors when she won the 2017 "Jeopardy!" College Championship, representing MIT.

As a graduate student at MIT and a 2018 Hertz Fellow, Chin is currently working on better integrating the mechanical advantages of soft robotics with the latest in learning and planning algorithms. Her ultimate career goal is to become a professor in robotics: designing systems to enable human achievement.

Suchita Nety

Suchita Patil Nety was born in Sunnyvale, California, to immigrants from India who came to the United States to attend graduate school. She draws inspiration from her upbringing in the dynamic and diverse Silicon Valley as well as her grandparents’ experiences as freedom fighters for Indian independence.

Nety’s research projects throughout high school, including cancer imaging research conducted at Stanford, earned regional and national-level awards. In June 2017, she earned a BS in chemistry from Caltech. While there, she spent four years in the lab of chemical engineering professor Mikhail Shapiro. Her work with protein-based reporters for ultrasound imaging resulted in a patent, publications, presentations, and awards, including Caltech’s highest honor for undergraduate academics and research.

Nety is interested in forms of storytelling and healing that complement her future role in medicine. While at Caltech, she pursued her love for literature and obtained an English minor, won writing prizes, tutored in the campus writing center, and volunteered for a literacy nonprofit. She attained professional status in Bharatanatyam, a style of Indian classical dance, and is an avid hip hop choreographer.

Nety's award will support work toward an MD/PhD at Harvard Medical School and MIT. After completing this training, Nety hopes to serve patients as a medical oncologist while developing molecular tools to engineer robust and safe cell-based therapies.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Use the link at the top of the story to get to the original article.

Started April 19, 2018, 12:00:12 pm


XKCD Comic : Evangelism in XKCD Comic

18 April 2018, 5:00 am

The wars between the

Source: xkcd.com

Started April 19, 2018, 12:00:11 pm


Sophia the Robot says she wants to destroy humans in Video

Robotics is finally reaching the mainstream, and androids (humanlike robots) along with it. Experts believe humanlike robots are the key to smoothing communication between humans and computers, and to realizing a dream of compassionate robots that help invent the future of life.

Video by Lisa Bizzle
Published on Aug 17, 2016

3 Comments | Started April 18, 2018, 11:02:29 pm

Bot Development Frameworks - Getting Started in Articles

What Are Bot Frameworks?

Simply explained, a bot framework is where bots are built and where their behavior is defined. Developing and targeting so many messaging platforms and SDKs for chatbot development can be overwhelming. Bot development frameworks abstract away much of the manual work that's involved in building chatbots. A bot development framework consists of a Bot Builder SDK, Bot Connector, Developer Portal, and Bot Directory. There’s also an emulator that you can use to test the developed bot.
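
Here's a minimal hand-rolled sketch of the idea: you write one bot handler, and connectors adapt it to each messaging platform. The class and method names are invented for illustration; real SDKs such as the Bot Builder SDK have their own, much richer APIs.

```python
# Sketch of the abstraction a bot framework provides: one handler,
# many platform connectors. All names here are hypothetical.
class EchoBot:
    def on_message(self, text: str) -> str:
        return f"You said: {text}"

class ConsoleConnector:
    """Stands in for a platform connector (Messenger, Slack, ...)."""
    def __init__(self, bot):
        self.bot = bot

    def deliver(self, incoming: str) -> str:
        # A real connector would translate platform-specific payloads
        # to and from the bot's plain-text interface.
        return self.bot.on_message(incoming)

connector = ConsoleConnector(EchoBot())
print(connector.deliver("hello"))  # You said: hello
```

The framework's value is that the connector layer, developer portal, and emulator already exist, so you only write the `on_message` part.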

Mar 23, 2018, 20:00:23 pm
A Guide to Chatbot Architecture

A Guide to Chatbot Architecture in Articles

Humans have always been fascinated by self-operating devices, and today it is software called "chatbots" that is becoming more human-like and automated. The combination of immediate response and constant connectivity makes them an enticing way to extend or replace the web-application trend. But how do these automated programs work? Let’s have a look.

Mar 13, 2018, 14:47:09 pm
Sing for Fame

Sing for Fame in Chatbots - English

Sing for Fame is a bot that hosts a singing competition.

Users can show their skills by singing their favorite songs.

If someone needs inspiration, the bot provides suggestions, including song lyrics and videos.

The bot then plays each song to other users, who can rate it.

Based on the ratings, the bot generates a top ten.

Jan 30, 2018, 22:17:57 pm

ConciergeBot in Assistants

A concierge service bot that handles guest requests and FAQs, as well as recommends restaurants and local attractions.

Messenger Link : messenger.com/t/rthhotel

Jan 30, 2018, 22:11:55 pm
What are the main techniques for the development of a good chatbot ?

What are the main techniques for the development of a good chatbot ? in Articles

Chatbots are among the most useful and most reliable technological helpers for those who own e-commerce websites and other similar resources. However, an important problem is that people may not know which technologies are best to use in order to achieve their goals. In today’s article you can become more familiar with the most important principles of chatbot building.

Oct 12, 2017, 01:31:00 am

Kweri in Chatbots - English

Kweri asks you questions of brilliance and stupidity. Provide correct answers to win. Type ‘Y’ for yes and ‘N’ for no!

FB Messenger

Oct 12, 2017, 01:24:37 am
The Conversational Interface: Talking to Smart Devices

The Conversational Interface: Talking to Smart Devices in Books

This book provides a comprehensive introduction to the conversational interface, which is becoming the main mode of interaction with virtual personal assistants, smart devices, various types of wearables, and social robots. The book consists of four parts: Part I presents the background to conversational interfaces, examining past and present work on spoken language interaction with computers; Part II covers the various technologies that are required to build a conversational interface, along with practical chapters and exercises using open source tools; Part III looks at interactions with smart devices, wearables, and robots, and then goes on to discuss the role of emotion and personality in the conversational interface; Part IV examines methods for evaluating conversational interfaces and discusses future directions.

Aug 17, 2017, 02:51:19 am
Explained: Neural networks

Explained: Neural networks in Articles

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years.

Jul 26, 2017, 23:42:33 pm
It's Alive

It's Alive in Chatbots - English

[Messenger] Enjoy making your bot with our user-friendly interface. No coding skills necessary. Publish your bot in a click.

Once LIVE on your Facebook Page, it is integrated within the “Messages” of your page. This means your bot is allowed (or not) to interact with and answer people who contact you through the private “Messages” feature of your Facebook Page, or directly through the Messenger App. You can view all the conversations directly in your Facebook account. This also means that no one needs to download an app, and messages are sent directly as notifications to your users.

Jul 11, 2017, 17:18:27 pm