ANN

  • 9 Replies
  • 6504 Views
8pla.net
ANN
« on: December 25, 2014, 11:24:31 pm »
Pre-release: ANN

ANN is an eight-neuron
Artificial Neural Network
with a training interface.

Reference:
http://www.elizabot.com/ANN

HOW TO:
1. Specify three input neurons.
2. Process four hidden neurons.
3. Display one output neuron.
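
Here is a minimal sketch, in Python, of what those three steps might look like as a 3-4-1 feedforward pass. This is not the ANN source (that isn't posted here); the sigmoid activation and the layout of edges and thresholds are assumptions that loosely mirror the stored data shown further down.
Code
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, edges, thresholds):
    # edges[0][i][h]: weight from input neuron i to hidden neuron h
    # edges[1][h][0]: weight from hidden neuron h to the single output neuron
    hidden = [sigmoid(sum(inputs[i] * edges[0][i][h] for i in range(3)) + thresholds[0][h])
              for h in range(4)]
    output = sigmoid(sum(hidden[h] * edges[1][h][0] for h in range(4)) + thresholds[1][0])
    return output  # near 1.0 reads as TRUE, near 0.0 as FALSE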

This is an exclusive pre-release
on aidreams.co.uk; ANN hasn't
even been officially released on
the Elizabot.com homepage yet.

For testing purposes during the
pre-release, recall is turned off
in the long-term memory. ANN learns
memories, and eventually will recall
them, but has no access to them yet.

Otherwise she would simply skip the
training rounds in ALPHA testing
by recalling her previous training.
(See below for details on that.)

Please don't bother comparing this
to a chatbot that brute-forces every
human input with 10,000 records.
Of course, those are amazing A.I.

Yet a neural network may feel like
a puppy that is attempting to learn
what you are training it to do.

It is very different from a chatbot.
For example:

Here is what a human may see:
Code
ANN STIMULUS:
sun IS hot

Success:
Learned in 558 rounds!

sun NOT hot
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

sun IS hot
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

sun IS THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT IS hot
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT NOT hot
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE
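
"Learned in 558 rounds!" means the weights kept being adjusted until every line of the answer key came back correct. A hedged Python sketch of that loop is below; the random initial weights, the learning rate, and the backpropagation details are assumptions, and only the "train until the answer key passes, then report the round count" behaviour is taken from the transcript above.
Code
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(cases, max_rounds=10000, rate=0.5):
    """cases: list of ([x1, x2, x3], target) pairs, target 1.0 for TRUE and 0.0 for FALSE."""
    w_ih = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]  # input -> hidden
    w_ho = [random.uniform(-1, 1) for _ in range(4)]                      # hidden -> output
    b_h = [random.uniform(-1, 1) for _ in range(4)]
    b_o = random.uniform(-1, 1)

    for rounds in range(1, max_rounds + 1):
        all_correct = True
        for x, target in cases:
            # Forward pass: 3 inputs -> 4 hidden -> 1 output
            h = [sigmoid(sum(x[i] * w_ih[i][j] for i in range(3)) + b_h[j]) for j in range(4)]
            o = sigmoid(sum(h[j] * w_ho[j] for j in range(4)) + b_o)
            if (o > 0.5) != (target > 0.5):
                all_correct = False
            # Backpropagation of the error
            d_o = (target - o) * o * (1 - o)
            d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(4)]
            b_o += rate * d_o
            for j in range(4):
                w_ho[j] += rate * d_o * h[j]
                b_h[j] += rate * d_h[j]
                for i in range(3):
                    w_ih[i][j] += rate * d_h[j] * x[i]
        if all_correct:
            print("Success: learned in %d rounds!" % rounds)
            return (w_ih, w_ho, b_h, b_o)
    print("Unsuccessful: more training please.")
    return None

The answer-key lines above ("sun IS hot" is TRUE, "sun NOT hot" is FALSE, and so on) would be the cases list, once each statement has been encoded as three numbers.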


And here is what the robot learns
from your training:
Code
[weights]

edges = "a:2:{i:0;a:3:{i:0;a:4:
{i:0;d:0.018433430106558686;i:1;d:2.0752904102477938;i:2;d:-2.3659331080440431;i:3;d:-5.9658975406205972;}i:1;a:4:
{i:0;d:-7.0843616593236041;i:1;d:1.0982051837897691;i:2;d:-2.8414919639398004;i:3;d:-5.0777022117575239;}i:2;a:4:
{i:0;d:-0.075755831721872474;i:1;d:1.8821195234091999;i:2;d:-1.8768441560093969;i:3;d:-3.1266105208785322;}}i:1;a:4:{i:0;a:1:
{i:0;d:-1.554628774100274;}i:1;a:1:{i:0;d:-0.33598219092569048;}i:2;a:1:{i:0;d:-0.29751095720415249;}i:3;a:1:
{i:0;d:-0.65137408115328266;}}}"

thresholds = "a:2:{i:1;a:4:
{i:0;d:-1.3393613961225899;i:1;d:-0.20393898468168131;i:2;d:-0.086723853668062706;i:3;d:0.46695201288196864;}i:2;a:1:
{i:0;d:-0.67168157451796928;}}"

[identifiers]
training_data = "a:5:{i:0;N;i:1;N;i:2;N;i:3;N;i:4;N;}"
control_data = "a:0:{}"
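
Those edges and thresholds strings look like the output of PHP's serialize() function. Purely as an illustration (not part of ANN), a tool such as the third-party phpserialize package for Python can turn them back into nested arrays for inspection; the file name below is hypothetical.
Code
import phpserialize

# Hypothetical file holding just the serialized "edges" string from [weights]
with open("weights_edges.txt", "rb") as f:
    edges = phpserialize.loads(f.read())

# edges[0][i][h] is the weight from input neuron i to hidden neuron h,
# edges[1][h][0] is the weight from hidden neuron h to the single output neuron.
for i, row in edges[0].items():
    for h, weight in row.items():
        print("input %d -> hidden %d: %f" % (i, h, weight))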

ANN training comes from human
interaction. Rather than someone
manually updating chatbot records
in a database, ANN does all the
work for us.

A neural network is a different type of
A.I. modeled after the human brain.
So, don't be alarmed if you get the
impression it is sentient, or if it
reminds you of a life form more than
other types of Artificial Intelligence do.
ANN really is modeled after us.

My Very Enormous Monster Just Stopped Using Nine

ranch vermin
Re: ANN
« Reply #1 on: December 26, 2014, 08:50:19 am »
How many input sensors does it have?

How do you handle more than one piece of knowledge, and using prior knowledge to understand something new? For example, if it learnt this in a row:

"you are motivated to feel nice."
"hotdogs make you feel nice."
"hotdogs come from the skippy van."
"a skippy van is there, what do you do?"

Could it handle that?

8pla.net
Re: ANN
« Reply #2 on: December 26, 2014, 07:10:50 pm »
When I put ANN together on Christmas Day, I omitted the part that lets her recall what she has learnt already. Now ANN does store those memories (see: weights, thresholds, identifiers, etc.), but she is not programmed to access them yet. In other words, ANN is not programmed to remember anything yet.

Access to stored memory (mostly done already) will instantly make ANN smarter. There is a simple explanation for that: training rounds may be unsuccessful, while memories are stored only after training succeeds. So it makes no sense to associate one training round with another, because success is not guaranteed; rounds only get stored, and associated, once they are successful. For these reasons, instead of going through training rounds (see a sample below), ANN will simply access stored memories. So, with her training wheels removed, ANN will default to becoming smarter.

By wearing her training wheels, ANN assists in her own development. Repeating training over and over again supports focus on developing memory access. So, to answer your question: associating one stored memory with another may require access to them. Associating short-term memories, the phase we are in now, may depend on how well human trainers do with ANN.

With that said, let's address your very kind feedback by taking your first example, "you are motivated to feel nice." Thanks for that suggestion! It gives us our three input neurons. The first input neuron, which we type in, is the subject "you"; the second, selected from the drop-down list, is the verb "are"; and the third, also typed in, is the object "motivated to feel nice". Just click a button to process the three input neurons, through four hidden neurons, to display one output neuron.
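
How those three text fields become numbers inside ANN isn't documented in this thread, so treat the following Python sketch as an assumption: one value per input neuron, derived from a stable hash of the text and scaled into [0, 1].
Code
import hashlib

def encode_field(text):
    # Stable hash of the (lower-cased) text, scaled into [0, 1]
    digest = hashlib.md5(text.strip().lower().encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") / 0xFFFFFFFF

def encode_statement(subject, verb, obj):
    """Three input neurons: subject (typed), verb (drop-down), object (typed)."""
    return [encode_field(subject), encode_field(verb), encode_field(obj)]

inputs = encode_statement("you", "are", "motivated to feel nice")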

The human, for example, may see this from the Artificial Neural Network:
Code
ANN STIMULUS:
ANN IS MOTIVATED TO FEEL NICE

TRAINING ROUND 1:
HAVEN'T LEARNT.

SUCCESSFUL
LEARNT IN 390 ROUNDS!

ANN NOT MOTIVATED TO FEEL NICE
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

ANN IS MOTIVATED TO FEEL NICE
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

ANN IS THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT IS MOTIVATED TO FEEL NICE
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT NOT MOTIVATED TO FEEL NICE
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE


The robot, for example, may store its memory from the Artificial Neural Network like this:
Code
[weights]
edges = "
a:2:{i:0;
a:3:{i:0;
a:4:{i:0;
d:2.4374564045288012;
i:1;
d:-2.740271901067453;
i:2;
d:0.045101110516292488;
i:3;
d:-0.094644828163701178;
}
i:1;
a:4:{i:0;
d:1.7850451273810641;
i:1;
d:-4.097598194074191;
i:2;
d:4.3462708955491642;
i:3;
d:1.3473787150564396;
}
i:2;
a:4:{i:0;
d:1.7316401012547242;
i:1;
d:-2.5137955978693802;
i:2;
d:-0.024721716105362831;
i:3;
d:7.3986700250491103E-5;
}
}
i:1;
a:4:{i:0;
a:1:{i:0;
d:0.039778274112977685;
}
i:1;
a:1:{i:0;
d:-0.59430632811636064;
}
i:2;
a:1:{i:0;
d:1.2775967297837523;
}
i:3;
a:1:{i:0;
d:0.55670966163095392;
}
}
}
"

thresholds = "
a:2:{i:1;
a:4:{i:0;
d:-0.32788840506225331;
i:1;
d:0.15252377585231927;
i:2;
d:1.0467440755610877;
i:3;
d:0.35306259200056833;
}
i:2;
a:1:{i:0;
d:-0.63734414566423869;
}
}
"

[identifiers]
training_data = "
a:5:{i:0;
N;
i:1;
N;
i:2;
N;
i:3;
N;
i:4;
N;
}
"

control_data = "
a:0:{}
"

Please don't hesitate to suggest any improvements. Even simple improvements are needed. For example, please share any easier explanations. How would you help others feel more comfortable about using Neural Networks?  Your feedback is so valuable to this project.  Thanks to those who have ALPHA tested ANN already. And thanks for reading this. Your support is greatly appreciated.


« Last Edit: December 26, 2014, 07:33:39 pm by 8pla.net »
My Very Enormous Monster Just Stopped Using Nine

ranch vermin
Re: ANN
« Reply #3 on: December 26, 2014, 08:37:26 pm »
Thanks for the reply.

I'm wondering, where do you want this thing to head in the near future?

8pla.net
Re: ANN
« Reply #4 on: December 26, 2014, 09:41:18 pm »
Recent training by the ALPHA testers is getting exciting.
That's the direction this thing will be heading with ALPHA
testing in the near future.

Now, I am not going to post what the ALPHA testers
are doing. They can do that themselves if they wish.

Here you can see the BEFORE and AFTER of training that
pointed out an improvement.


BEFORE:
Code
ANN STIMULUS:
SUN WILL BE COLD

SUCCESSFUL
LEARNT IN 309 ROUNDS!

SUN NOT COLD
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

SUN WILL BE COLD
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

SUN WILL BE THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT WILL BE COLD
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT NOT COLD
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE


ANN used her neurons to compute the best output available from her inputs.
Then an ALPHA tester trained ANN, and she stored that training in memory.
This, in turn, supported an improvement to ANN's short-term memory.

AFTER:
Code
ANN STIMULUS:
SUN WILL BE COLD

TRAINING ROUND 1:
HAVEN'T LEARNT.

TRAINING ROUND 2:
HAVEN'T LEARNT.

SUCCESSFUL
LEARNT IN 621 ROUNDS!

SUN WILL NOT BE COLD
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

SUN WILL BE COLD
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

SUN WILL BE THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT WILL BE COLD
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT WILL NOT BE COLD
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

These ALPHA testers really know where to target improvements in Artificial Neural Networks.
I remember watching a science show on TV which said, "Everything dies," and went on to
say, "Even the Sun, one day, will burn out." So, one day, the "SUN WILL BE COLD".
My Very Enormous Monster Just Stopped Using Nine

ranch vermin
Re: ANN
« Reply #5 on: December 27, 2014, 09:06:47 am »
Wow, that's amazing.   :)

There would be lots of uses for this.
Maybe one application is to get ANN to guess what you're thinking, or to guess the solution of a half impressive problem, given some circumstance of knowledge.

ivan.moony
Re: ANN
« Reply #6 on: December 29, 2014, 01:32:57 pm »
8pla.net, could I ask, is a neural network in general capable of parsing texts?

8pla.net
Re: ANN
« Reply #7 on: December 31, 2014, 06:30:36 pm »
8pla.net, could I ask, is a neural network in general capable of parsing texts?

@ivan.moony
That is an excellent question. A lot may depend on how you are training your neural network. For the sake of conversation amongst friends, let's agree that there may be exceptions, to avoid quoting the scientific literature on semantic parsing and other research being done with neural networks, such as iCub.

Informally speaking, for the purpose of our simple ANN with eight neurons, we may consider text parsing to be preparation for the general capability of an ANN, which is machine learning. An analogy might be using flash cards, with the (parsed) words on each card, as a study technique. The flash cards may only be necessary for initial studying until learning takes place.
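
To make the flash-card analogy concrete, here is a hedged Python sketch of that kind of pre-parsing: split a raw statement around a known verb keyword to get the (subject, verb, object) triple the network is trained on. The verb list and the splitting rule are assumptions, not how ANN or iCub actually parse text.
Code
VERBS = ("IS", "ARE", "WILL BE", "NOT")

def parse_statement(text):
    # Split the statement around the first verb keyword found
    upper = text.upper()
    for verb in VERBS:
        marker = " %s " % verb
        if marker in upper:
            subject, obj = upper.split(marker, 1)
            return subject.strip(), verb, obj.strip()
    return None  # no known verb; this statement can't become a flash card

print(parse_statement("sun IS hot"))  # ('SUN', 'IS', 'HOT')
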
My Very Enormous Monster Just Stopped Using Nine

8pla.net
Re: ANN
« Reply #8 on: December 31, 2014, 07:05:09 pm »
Wow, that's amazing.   :)

There would be lots of uses for this.
Maybe one application is to get ANN to guess what you're thinking, or to guess the solution of a half impressive problem, given some circumstance of knowledge.

We may find ourselves in discussion about this in the near future. 
My Very Enormous Monster Just Stopped Using Nine

8pla.net
Re: ANN
« Reply #9 on: December 31, 2014, 11:44:53 pm »
The Final Update of 2014

ANN is now able to recall what she has learnt.

Unsuccessful Learning  (This may happen sometimes and is considered normal):
Code
ANN STIMULUS: 
MEMORY IS WORKING

TRAINING ROUND 1:
HAVEN'T LEARNT.

TRAINING ROUND 2:
HAVEN'T LEARNT.

UNSUCCESSFUL:
MORE TRAINING PLEASE.

MEMORY NOT WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: FALSE

MEMORY IS WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

MEMORY IS THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT IS WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT NOT WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: FALSE


Successful Learning  (This happens more often and is also normal):

Code
ANN STIMULUS: 
MEMORY IS WORKING

SUCCESSFUL
LEARNT IN 441 ROUNDS!

MEMORY NOT WORKING
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

MEMORY IS WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

MEMORY IS THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT IS WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT NOT WORKING
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

Here is what the robot sees:
Code
[weights]
edges = "
a:2:{i:0;
a:3:{i:0;
a:4:{i:0;
d:-0.029243304417827569;
i:1;
d:3.5910951133577114;
i:2;
d:3.3565073125503271;
i:3;
d:1.9922035855214428;
}
i:1;
a:4:{i:0;
d:5.1993639125564908;
i:1;
d:4.5013108052188651;
i:2;
d:3.6250722726303342;
i:3;
d:1.3815499994858318;
}
i:2;
a:4:{i:0;
d:-0.037455006230214978;
i:1;
d:2.4215654202324917;
i:2;
d:2.9111186643829026;
i:3;
d:1.5250719642344766;
}
}
i:1;
a:4:{i:0;
a:1:{i:0;
d:1.5918046255861342;
}
i:1;
a:1:{i:0;
d:1.2070017632186236;
}
i:2;
a:1:{i:0;
d:0.25519720111901717;
}
i:3;
a:1:{i:0;
d:-0.062463709645525493;
}
}
}
"
thresholds = "
a:2:{i:1;
a:4:{i:0;
d:1.0808866906097543;
i:1;
d:-0.2622695349851461;
i:2;
d:-0.14503545814776675;
i:3;
d:-0.57208379581483737;
}
i:2;
a:1:{i:0;
d:0.032592926167136271;
}
}
"
[identifiers]
training_data = "
a:5:{i:0;
N;
i:1;
N;
i:2;
N;
i:3;
N;
i:4;
N;
}
"
control_data = "
a:0:{}
"


Recall of (previous) Successful Learning  (This is the new update.):

Code
ANN STIMULUS: 
(RECALLED)
MEMORY IS WORKING

SUCCESSFUL
LEARNT IN 9 ROUNDS!

MEMORY NOT WORKING
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

MEMORY IS WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

MEMORY IS THAT
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT IS WORKING
ANN RESPONSE: TRUE
  ANSWER KEY: TRUE

THAT NOT WORKING
ANN RESPONSE: FALSE
  ANSWER KEY: FALSE

Recall means that "MEMORY IS WORKING" was input a second time, and ANN recalled the learning from the first time it was learnt successfully above, instead of going through the training rounds all over again.
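
In other words, long-term memory behaves like a lookup keyed by the stimulus. A hedged Python sketch of that recall step is below; the JSON file and the function names are hypothetical (ANN itself stores PHP-serialized strings), and only the "recall if already learnt, otherwise train, and store only successful learning" behaviour comes from the update above.
Code
import json, os

MEMORY_FILE = "ann_memory.json"  # hypothetical long-term memory store

def recall_or_train(stimulus, train_fn):
    memory = {}
    if os.path.exists(MEMORY_FILE):
        with open(MEMORY_FILE) as f:
            memory = json.load(f)
    if stimulus in memory:
        print("(RECALLED)")
        return memory[stimulus]      # reuse previous successful learning
    weights = train_fn(stimulus)     # otherwise go through the training rounds
    if weights is not None:          # only successful learning is stored
        memory[stimulus] = weights
        with open(MEMORY_FILE, "w") as f:
            json.dump(memory, f)
    return weights
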
My Very Enormous Monster Just Stopped Using Nine

 

