Ai Dreams Forum
Artificial Intelligence => General AI Discussion => Topic started by: keghn on October 08, 2017, 05:18:36 pm
-
Explaining AI:
https://www.youtube.com/watch?v=W2ey_4_DHuc
-
This is a cool website :)
-
Those neural network circles are really annoying. I try and try and I can't figure out how they work.
How does a column of circles understand that 'a' and 'A' are the same?!
-
Well, you have an input of, say, 16 pixels x 16 pixels. And then you have two outputs on the other side of the NN:
one for upper case "A" and the other for lower case "a".
You then input an upper case "A" into the NN and read the big-A output on the other side. You want that output to
be a max value of one, to indicate that a big "A" is being input. But this NN is untrained, so it outputs garbage. To fix this, you stare at
the output and work your way back to the beginning. On the way you find which neurons are not up-voting, and turn them up. Then take the big "A" away;
the output should now be at its minimum of zero. If it is not, you follow backward through the network again, find which neurons are over-voting, and
turn them down.
Little "a" gets its very own output on the other side of the NN. Do the same thing for it, in a way that does not knock big "A" out of
whack.
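The "find which neurons are over- or under-voting and nudge them" idea above can be sketched as a tiny perceptron in Python. The 4x4 bitmaps and all names here are made up for illustration; a real system would use backpropagation over a larger net, not this single-layer rule.

```python
import random

# Toy 4x4 "images" (hypothetical patterns, not real font bitmaps):
# 1 = ink pixel, 0 = background.
BIG_A = [0,1,1,0,
         1,0,0,1,
         1,1,1,1,
         1,0,0,1]
SMALL_A = [0,0,0,0,
           0,1,1,0,
           0,1,1,1,
           0,1,1,1]

def forward(weights, bias, pixels):
    """One output neuron: weighted sum of the pixels, thresholded at 0."""
    s = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.5):
    """Perceptron rule: whenever the output votes the wrong way,
    nudge the weights of the active pixels toward the target."""
    random.seed(0)                     # start from random (garbage) weights
    weights = [random.uniform(-1, 1) for _ in range(16)]
    bias = 0.0
    for _ in range(epochs):
        for pixels, target in samples:
            out = forward(weights, bias, pixels)
            err = target - out         # +1: under-voting, -1: over-voting
            for i, p in enumerate(pixels):
                weights[i] += lr * err * p
            bias += lr * err
    return weights, bias

# Output neuron for "big A": should fire on BIG_A and stay at zero on SMALL_A.
samples = [(BIG_A, 1), (SMALL_A, 0)]
w, b = train(samples)
```

After training, `forward(w, b, BIG_A)` fires and `forward(w, b, SMALL_A)` does not, which is the "max one for big A, max low of zero otherwise" behavior described above.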
-
Just hardwire it.
I would customize the semantic rule for it, so it reads 'a' and 'A' as a common bit, and has a capital flag as the semantic that's different.
If it were collecting at random, unsupervised, I guess it would see similarities in the patterns they appear around, and then maybe under some instances it could treat them as the same.
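The hand-wired "common bit plus capital flag" rule is trivial to sketch; this is just an illustration of the encoding idea, not a trained model, and `encode` is a made-up name.

```python
# A hand-wired "semantic rule": both cases map to the same letter code,
# with a separate capital flag carrying the difference.
def encode(ch):
    return (ch.lower(), ch.isupper())

# 'a' and 'A' now share a common bit (the letter code) and differ
# only in the capital flag.
print(encode('a'))  # ('a', False)
print(encode('A'))  # ('a', True)
```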
-
Maybe each circle in the hidden layer represents a combination of several input pixels?
In other words, hidden layer circle = cluster of input pixels?
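That intuition is close: a hidden "circle" is a weighted sum of ALL the inputs, but if its weights are near zero everywhere except one patch, it effectively responds only to that patch. A minimal sketch (the 4x4 grid, weights, and bias are illustrative values I picked, not learned ones):

```python
import math

def hidden_unit(pixels, weights, bias=0.0):
    """One hidden neuron: weighted sum of inputs through a sigmoid."""
    s = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1.0 / (1.0 + math.exp(-s))

# 16 inputs (a 4x4 grid); weights are zero except on the
# top-left 2x2 patch, so this unit acts like a patch detector.
weights = [0.0] * 16
for idx in (0, 1, 4, 5):
    weights[idx] = 2.0

patch_on  = [1 if i in (0, 1, 4, 5) else 0 for i in range(16)]
patch_off = [0] * 16
print(hidden_unit(patch_on, weights, bias=-4.0))   # well above 0.5
print(hidden_unit(patch_off, weights, bias=-4.0))  # well below 0.5
```

So "cluster of input pixels" is a fair mental model for a unit whose weights happen to concentrate on a patch, but in general every hidden unit sees every pixel.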
-
Input pixels have a value of one for ink and zero for the background.
For a small NN you can hardwire it: pick the weight values in a hand-crafted way. Or you can randomly select the weights.
Then there are many ways the NN can detect an "A". By random chance the NN may 1) use just the high pixels, 2) use just the low pixels and invert the
value, 3) use just the edges, or 4) use a combination of all of the above.
In deep learning NNs they start from random weights; the training math works like a black box that we do not fully understand yet.
Look at the 12:37 mark in this video for an example:
https://www.youtube.com/watch?v=elPTCt45ZGo
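The different hand-wired detection strategies above (high pixels, inverted low pixels, a combination) can be sketched with toy 4x4 bitmaps. These patterns and function names are mine, for illustration only; edge detection (option 3) is omitted to keep the sketch short.

```python
# Toy 4x4 bitmaps: 1 = ink, 0 = background (made-up patterns).
TARGET = [0,1,1,0,
          1,0,0,1,
          1,1,1,1,
          1,0,0,1]   # a crude "A" template
OTHER  = [0,0,0,0,
          0,1,1,0,
          0,1,1,1,
          0,1,1,1]   # a crude "a", for comparison

def score_ink(img):
    # 1) vote using just the high (ink) pixels of the template
    return sum(p for p, t in zip(img, TARGET) if t == 1)

def score_background(img):
    # 2) vote using just the low pixels, with the input value inverted
    return sum(1 - p for p, t in zip(img, TARGET) if t == 0)

def score_combined(img):
    # 4) use a combination of both sources of evidence
    return score_ink(img) + score_background(img)

print(score_ink(TARGET), score_ink(OTHER))              # template scores higher
print(score_background(TARGET), score_background(OTHER))
print(score_combined(TARGET), score_combined(OTHER))
```

All three strategies rank the template above the other pattern, which is the point: there is more than one set of weights that detects the same letter.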
-
whateva, the circles are clusters until they explain it better
-
tensor schmensor, its clusters :tickedoff:
-
Backpropagation neural network software (3 layer):
https://rimstar.org/science_electronics_projects/backpropagation_neural_network_software_3_layer.htm