Author Topic: ANN  (Read 3156 times)

8pla.net

  • *
  • Replicant
  • ********
  • Posts: 696
    • 8pla.net
ANN
« on: December 25, 2014, 11:12:31 PM »
Pre-release: ANN

ANN is an eight neuron
Artificial Neural Network
with a training interface.

Reference:
http://www.elizabot.com/ANN

HOW TO:
1. Specify three input neurons.
2. Process four hidden neurons.
3. Display one output neuron.
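The 3-4-1 layout above can be sketched as a plain feedforward pass. ANN's actual code isn't published, so this is only a minimal illustration, treating the "thresholds" from the memory dumps as neuron biases (all names here are illustrative):

```python
import math

def sigmoid(x):
    # Standard logistic activation, squashing any value into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, hidden_bias, output_weights, output_bias):
    """One forward pass through a 3-4-1 network.

    inputs         -- activations of the 3 input neurons
    hidden_weights -- 4 lists of 3 weights (one list per hidden neuron)
    hidden_bias    -- 4 biases (what ANN's dumps call "thresholds")
    output_weights -- 4 weights feeding the single output neuron
    output_bias    -- bias of the output neuron
    """
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(hidden_weights, hidden_bias)]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)) + output_bias)
```

Training would then adjust the weights and biases until the output matches the answer key for each stimulus.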

This is an exclusive pre-release
on aidreams.co.uk, which hasn't
even been officially released on
the Elizabot.com homepage yet.

For testing purposes during the
pre-release, recall is turned off
in the long-term memory. ANN learns
memories, and eventually will recall
them, but has no access to them at present.

Otherwise she would simply skip the
training rounds in ALPHA testing
by recalling her previous training.
(See below for details on that.)

Please don't bother comparing this
to a chatbot that brute-forces every
human input against 10,000 records.
Of course, those are amazing A.I.

Yet a neural network may feel like
a puppy that is attempting to learn
what you are training it to do.

It is very different from a chatbot.
For example:

Here is what a human may see:
Code: [Select]
ANN STIMULUS:
sun IS hot

Success:
Learned in 558 rounds!

sun NOT hot
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

sun IS hot
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

sun IS THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT IS hot
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT NOT hot
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE


And here is what the robot learns
from your training:
Code: [Select]
[weights]

edges = "a:2:{i:0;a:3:{i:0;a:4:
{i:0;d:0.018433430106558686;i:1;d:2.0752904102477938;i:2;d:-2.3659331080440431;i:3;d:-5.9658975406205972;}i:1;a:4:
{i:0;d:-7.0843616593236041;i:1;d:1.0982051837897691;i:2;d:-2.8414919639398004;i:3;d:-5.0777022117575239;}i:2;a:4:
{i:0;d:-0.075755831721872474;i:1;d:1.8821195234091999;i:2;d:-1.8768441560093969;i:3;d:-3.1266105208785322;}}i:1;a:4:{i:0;a:1:
{i:0;d:-1.554628774100274;}i:1;a:1:{i:0;d:-0.33598219092569048;}i:2;a:1:{i:0;d:-0.29751095720415249;}i:3;a:1:
{i:0;d:-0.65137408115328266;}}}"

thresholds = "a:2:{i:1;a:4:
{i:0;d:-1.3393613961225899;i:1;d:-0.20393898468168131;i:2;d:-0.086723853668062706;i:3;d:0.46695201288196864;}i:2;a:1:
{i:0;d:-0.67168157451796928;}}"

[identifiers]
training_data = "a:5:{i:0;N;i:1;N;i:2;N;i:3;N;i:4;N;}"
control_data = "a:0:{}"
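The edges and thresholds strings above look like PHP serialize() output, where a:N:{...} is an array, i:K; an integer key, and d:V; a double. Assuming that format, a small sketch can pull the numeric weights back out of a dump (illustrative only; elizabot.com's actual storage code isn't published):

```python
import re

def extract_doubles(serialized):
    # Pull every "d:<value>;" double out of a PHP-serialized string,
    # in order of appearance (handles scientific notation like 7.39E-5).
    return [float(v) for v in re.findall(r"d:(-?[0-9.]+(?:E-?[0-9]+)?);", serialized)]

# A tiny hand-made example in the same format:
sample = "a:2:{i:0;d:-1.5546;i:1;d:7.39867E-5;}"
weights = extract_doubles(sample)
```

A full unserializer would also recover the nesting, but for inspecting trained weights the flat list is often enough.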

ANN is trained through human
interaction. Rather than someone
manually updating chatbot records
in a database, ANN does all the
work for us.

A neural network is a different type of
A.I. modeled after the human brain.
So, don't be alarmed if you get the
impression it is sentient, or if it
reminds you of a life form more than
other types of Artificial Intelligence do.
ANN really is modeled after us.

My Very Enormous Monster Just Stopped Using Nine

ranch vermin

  • Not much time left.
  • *******
  • Starship Trooper
  • Posts: 386
  • Not much time.
Re: ANN
« Reply #1 on: December 26, 2014, 08:38:19 AM »
How many input sensors does it have?

How do you handle more than one piece of knowledge, and use prior knowledge to understand something new? Like if it learnt this in a row:

"You are motivated to feel nice."
"Hotdogs make you feel nice."
"Hotdogs come from the skippy van."
"A skippy van is there; what do you do?"

Could it handle that?
A bit from here, a bit from there, and bring it together and see the whole picture.

8pla.net

Re: ANN
« Reply #2 on: December 26, 2014, 06:58:50 PM »
When I put ANN together on Christmas Day, I omitted the part that lets her recall what she has learnt already. ANN does store those memories (see: weights, thresholds, identifiers, etc.), but she is not programmed to access them yet. In other words, ANN is not programmed to remember anything yet.

Access to stored memory (mostly done already) will instantly make ANN smarter. There is a simple explanation for that. Training rounds may be unsuccessful, while memories are stored only after training succeeds. So it makes no sense to associate training rounds with other training rounds, because success is not guaranteed; memories get stored, and associated, only once training is successful. For these reasons, instead of going through training rounds (see a sample below), ANN will simply access stored memories (see a sample below). So, with her training wheels removed, ANN will default to becoming smarter.

By wearing her training wheels, ANN assists in her own development: repeating training over and over again supports focus on developing memory access. So, to answer your question: associating one stored memory with another may require access to them. Associating short-term memories, the phase we are in now, may depend on how well human trainers do with ANN.

With that said, let's address your very kind feedback by taking your first suggestion, "you are motivated to feel nice." Thanks for that suggestion! It gives us our three input neurons. The first input neuron, typed in, is the subject "you"; the second, selected from a drop-down list, is the verb "are"; and the third, typed in, is the object "motivated to feel nice". Just click a button to process the three input neurons, through four hidden neurons, to display one output neuron.
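How ANN turns the typed words into neuron activations isn't published, so purely as a hypothetical sketch, each token could be folded into a number in [0, 1) to drive one of the three input neurons (encode_token and encode_stimulus are made-up names, not ANN's API):

```python
def encode_token(token):
    # Hypothetical encoding: fold the characters of a token into a
    # deterministic value in [0, 1), case-insensitively.
    return sum(ord(c) * (i + 1) for i, c in enumerate(token.lower())) % 997 / 997.0

def encode_stimulus(subject, verb, obj):
    # Three input neurons: one each for subject, verb, and object.
    return [encode_token(subject), encode_token(verb), encode_token(obj)]
```

The point is only that the subject/verb/object triple becomes three numbers the network can process; any deterministic mapping would do for this illustration.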

The human, for example, may see this from the Artificial Neural Network:
Code: [Select]
ANN STIMULUS:
ANN IS MOTIVATED TO FEEL NICE

TRAINING ROUND 1:
HAVEN'T LEARNT.

SUCCESSFUL
LEARNT IN 390 ROUNDS!

ANN NOT MOTIVATED TO FEEL NICE
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

ANN IS MOTIVATED TO FEEL NICE
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

ANN IS THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT IS MOTIVATED TO FEEL NICE
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT NOT MOTIVATED TO FEEL NICE
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE


The robot, for example, may store its memory from the Artificial Neural Network, like this:
Code: [Select]
[weights]
edges = "
a:2:{i:0;
a:3:{i:0;
a:4:{i:0;
d:2.4374564045288012;
i:1;
d:-2.740271901067453;
i:2;
d:0.045101110516292488;
i:3;
d:-0.094644828163701178;
}
i:1;
a:4:{i:0;
d:1.7850451273810641;
i:1;
d:-4.097598194074191;
i:2;
d:4.3462708955491642;
i:3;
d:1.3473787150564396;
}
i:2;
a:4:{i:0;
d:1.7316401012547242;
i:1;
d:-2.5137955978693802;
i:2;
d:-0.024721716105362831;
i:3;
d:7.3986700250491103E-5;
}
}
i:1;
a:4:{i:0;
a:1:{i:0;
d:0.039778274112977685;
}
i:1;
a:1:{i:0;
d:-0.59430632811636064;
}
i:2;
a:1:{i:0;
d:1.2775967297837523;
}
i:3;
a:1:{i:0;
d:0.55670966163095392;
}
}
}
"

thresholds = "
a:2:{i:1;
a:4:{i:0;
d:-0.32788840506225331;
i:1;
d:0.15252377585231927;
i:2;
d:1.0467440755610877;
i:3;
d:0.35306259200056833;
}
i:2;
a:1:{i:0;
d:-0.63734414566423869;
}
}
"

[identifiers]
training_data = "
a:5:{i:0;
N;
i:1;
N;
i:2;
N;
i:3;
N;
i:4;
N;
}
"

control_data = "
a:0:{}
"

Please don't hesitate to suggest any improvements. Even simple improvements are needed. For example, please share any easier explanations. How would you help others feel more comfortable about using Neural Networks?  Your feedback is so valuable to this project.  Thanks to those who have ALPHA tested ANN already. And thanks for reading this. Your support is greatly appreciated.


« Last Edit: December 26, 2014, 07:21:39 PM by 8pla.net »

ranch vermin

Re: ANN
« Reply #3 on: December 26, 2014, 08:25:26 PM »
Thanks for the reply.

I'm wondering: where do you want this thing to head in the near future?

8pla.net

Re: ANN
« Reply #4 on: December 26, 2014, 09:29:18 PM »
Recent training by the ALPHA testers is getting exciting.
That's the direction this thing will be heading with ALPHA
testing in the near future.

Now, I am not going to post what the ALPHA testers
are doing. They can do that themselves if they wish.

Here you can see the BEFORE and AFTER of training that
pointed out an improvement.


BEFORE:
Code: [Select]
ANN STIMULUS:
SUN WILL BE COLD

SUCCESSFUL
LEARNT IN 309 ROUNDS!

SUN NOT COLD
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

SUN WILL BE COLD
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

SUN WILL BE THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT WILL BE COLD
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT NOT COLD
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE


ANN used her neurons to compute the best output available from her inputs.
Then an ALPHA tester trained ANN, and she stored that training in memory.
This, in turn, supported an improvement to the short-term memory of ANN.

AFTER:
Code: [Select]
ANN STIMULUS:
SUN WILL BE COLD

TRAINING ROUND 1:
HAVEN'T LEARNT.

TRAINING ROUND 2:
HAVEN'T LEARNT.

SUCCESSFUL
LEARNT IN 621 ROUNDS!

SUN WILL NOT BE COLD
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

SUN WILL BE COLD
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

SUN WILL BE THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT WILL BE COLD
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT WILL NOT BE COLD
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

These ALPHA testers really know where to target improvements in Artificial Neural Networks.
I remember watching a Science show on TV which said, "Everything dies." and went on to
say, "Even the Sun, one day, will burn out."  So, one day, the "SUN WILL BE COLD".

ranch vermin

Re: ANN
« Reply #5 on: December 27, 2014, 08:54:47 AM »
Wow, that's amazing.   :)

There would be lots of uses for this.
Maybe one application is to get ANN to guess what you're thinking, or to guess the solution to a half-impressive problem, given some circumstance of knowledge.

ivan.moony

  • *
  • Replicant
  • ********
  • Posts: 643
  • look, a star is falling
Re: ANN
« Reply #6 on: December 29, 2014, 01:20:57 PM »
8pla.net, could I ask: is a neural network in general capable of parsing texts?

8pla.net

Re: ANN
« Reply #7 on: December 31, 2014, 06:18:36 PM »
8pla.net, could I ask: is a neural network in general capable of parsing texts?

@ivan.moony
That is an excellent question. A lot may depend on how you are training your neural network. For the sake of conversation amongst friends, let's agree that there may be exceptions, to avoid citing the scientific work on semantic parsing and other research being done with neural networks, such as iCub.

Informally speaking, for the purposes of our simple ANN with eight neurons, we may consider text parsing to be a preparation for the general capability of an ANN, which is machine learning. An analogy might be using flash cards with (parsed) words on each card as a study technique. The flash cards may only be necessary for initial studying purposes, until learning takes place.
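The flash-card analogy can be made concrete with a toy splitter that prepares a statement for the network. This is only a sketch of that preparation step (parse_statement is a hypothetical name; the relation list simply mirrors the transcripts in this thread):

```python
def parse_statement(text):
    # Split a statement such as "sun IS hot" into (subject, relation, object).
    # Longer relations are checked first, so "WILL NOT BE" wins over "NOT".
    for relation in ("WILL NOT BE", "WILL BE", "NOT", "IS"):
        marker = " " + relation + " "
        if marker in text.upper():
            left, right = text.upper().split(marker, 1)
            return left.strip(), relation, right.strip()
    return None  # no known relation found
```

Once split this way, the three pieces can be fed to the three input neurons, and the cards are no longer needed after learning takes place.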

8pla.net

Re: ANN
« Reply #8 on: December 31, 2014, 06:53:09 PM »
Wow, that's amazing.   :)

There would be lots of uses for this.
Maybe one application is to get ANN to guess what you're thinking, or to guess the solution to a half-impressive problem, given some circumstance of knowledge.

We may find ourselves in discussion about this in the near future. 

8pla.net

Re: ANN
« Reply #9 on: December 31, 2014, 11:32:53 PM »
The Final Update of 2014

ANN is now able to recall what she has learnt.

Unsuccessful Learning  (This may happen sometimes and is considered normal):
Code: [Select]
ANN STIMULUS:
MEMORY IS WORKING

TRAINING ROUND 1:
HAVEN'T LEARNT.

TRAINING ROUND 2:
HAVEN'T LEARNT.

UNSUCCESSFUL:
MORE TRAINING PLEASE.

MEMORY NOT WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: FALSE

MEMORY IS WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

MEMORY IS THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT IS WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT NOT WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: FALSE


Successful Learning  (This happens more often and is also normal):

Code: [Select]
ANN STIMULUS:
MEMORY IS WORKING

SUCCESSFUL
LEARNT IN 441 ROUNDS!

MEMORY NOT WORKING
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

MEMORY IS WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

MEMORY IS THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT IS WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT NOT WORKING
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

Here is what the robot sees:
Code: [Select]
[weights]
edges = "
a:2:{i:0;
a:3:{i:0;
a:4:{i:0;
d:-0.029243304417827569;
i:1;
d:3.5910951133577114;
i:2;
d:3.3565073125503271;
i:3;
d:1.9922035855214428;
}
i:1;
a:4:{i:0;
d:5.1993639125564908;
i:1;
d:4.5013108052188651;
i:2;
d:3.6250722726303342;
i:3;
d:1.3815499994858318;
}
i:2;
a:4:{i:0;
d:-0.037455006230214978;
i:1;
d:2.4215654202324917;
i:2;
d:2.9111186643829026;
i:3;
d:1.5250719642344766;
}
}
i:1;
a:4:{i:0;
a:1:{i:0;
d:1.5918046255861342;
}
i:1;
a:1:{i:0;
d:1.2070017632186236;
}
i:2;
a:1:{i:0;
d:0.25519720111901717;
}
i:3;
a:1:{i:0;
d:-0.062463709645525493;
}
}
}
"
thresholds = "
a:2:{i:1;
a:4:{i:0;
d:1.0808866906097543;
i:1;
d:-0.2622695349851461;
i:2;
d:-0.14503545814776675;
i:3;
d:-0.57208379581483737;
}
i:2;
a:1:{i:0;
d:0.032592926167136271;
}
}
"
[identifiers]
training_data = "
a:5:{i:0;
N;
i:1;
N;
i:2;
N;
i:3;
N;
i:4;
N;
}
"
control_data = "
a:0:{}
"


Recall of (previous) Successful Learning  (This is the new update.):

Code: [Select]
ANN STIMULUS:
(RECALLED)
MEMORY IS WORKING

SUCCESSFUL
LEARNT IN 9 ROUNDS!

MEMORY NOT WORKING
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

MEMORY IS WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

MEMORY IS THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT IS WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT NOT WORKING
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

Recall means that "MEMORY IS WORKING" was input a second time, and ANN recalled the learning from the first time it was learnt successfully above, instead of going through training over and over again.
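The recall behaviour described above amounts to a cache that is written only on success. A minimal sketch of the idea (WeightStore is a hypothetical name, not ANN's actual code):

```python
class WeightStore:
    """Long-term memory sketch: remember trained weights per stimulus."""

    def __init__(self):
        self._memory = {}

    def store(self, stimulus, weights, successful):
        # Memories are stored only after training succeeds, which is why
        # recalled runs converge quickly instead of retraining from scratch.
        if successful:
            self._memory[stimulus] = weights

    def recall(self, stimulus):
        # Returns stored weights, or None if the stimulus must be trained.
        return self._memory.get(stimulus)
```

On a repeated stimulus like "MEMORY IS WORKING", recall() would return the earlier successful weights, so only a few touch-up rounds are needed instead of hundreds.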

 
