Ai Dreams Forum
Artificial Intelligence => General AI Discussion => Topic started by: keghn on April 16, 2016, 04:55:59 pm
-
Backpropagation Neural Network - How it Works e.g. Counting:
https://www.youtube.com/watch?v=WZDMNM36PsM
-
An image ANN can be of two main types: 1) a detector, or 2) an autoencoder.
The bigger a detector NN is, the more images it can detect.
The bigger an autoencoder is, the more images it can hold.
A detector NN can generate one high bit on its output when it sees a cat.
Or the detector can be trained to give a bigger output, like a binary hash-tag value.
Or a detector can give a very large output for the detection of a cat: a complete replication
of the image of the cat it is viewing. Yes, at this stage the NN detector has become a NN autoencoder.
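To make the detector/autoencoder split concrete, here is a rough PyTorch sketch of both (the layer sizes and image size are my own illustrative picks, not from anywhere in particular): the detector squeezes the image down to one high bit, while the autoencoder squeezes it into a small code and then rebuilds the whole picture.

import torch
import torch.nn as nn

class Detector(nn.Module):
    # Maps an image to a single cat / not-cat score (the "one high bit" on the output).
    def __init__(self, img_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(img_pixels, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),   # one output unit goes high when it sees a cat
        )
    def forward(self, x):
        return self.net(x)

class Autoencoder(nn.Module):
    # Compresses the image to a small code and reproduces it; a bigger net holds more images.
    def __init__(self, img_pixels=64 * 64, code_size=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(img_pixels, code_size), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(code_size, img_pixels), nn.Sigmoid())
    def forward(self, x):
        return self.decoder(self.encoder(x)).view_as(x)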
There is a third type, a symbolic NN, where you can give the NN any type of data, say the number .000000000000000009, and it can be trained to generate an image of a cat when this value is fed in.
With a binary counter of 64 bits, many images can be trained into a symbolic NN. Just feed in the
right number and get the image you want.
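Here is a little sketch of what I mean by a symbolic NN driven by a 64-bit counter (again an assumed PyTorch toy with made-up layer sizes): the 64 counter bits go in, and whatever image was trained at that counter value comes out.

import torch
import torch.nn as nn

class SymbolicNet(nn.Module):
    # 64 counter bits in, one generated image out.
    def __init__(self, code_bits=64, img_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_bits, 512), nn.ReLU(),
            nn.Linear(512, img_pixels), nn.Sigmoid(),
        )
    def forward(self, code):
        return self.net(code)

def counter_to_bits(n, bits=64):
    # Turn an integer counter value into a 64-bit input vector.
    return torch.tensor([(n >> i) & 1 for i in range(bits)], dtype=torch.float32)

net = SymbolicNet()
image = net(counter_to_bits(9))   # feed in the right number, get the image trained to it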
A movie is a bunch of images in a row :-)
Let's say you have two copies of a symbolic NN; the brain is very redundant. You could cycle the
counter through these NNs, moving forward and backward in time, to find
matching images at different time positions. A toy version of that loop is sketched below.
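This is roughly what I picture (reusing SymbolicNet and counter_to_bits from the sketch above; the step count and tolerance are arbitrary guesses): one copy steps the counter forward, the other steps it backward, and we note counter positions where the two generated frames nearly match.

import torch

def find_matches(net_a, net_b, steps=1000, tol=1.0):
    # net_a and net_b are the two redundant copies of the symbolic NN.
    matches = []
    for t in range(steps):
        frame_a = net_a(counter_to_bits(t))               # moving forward in time
        frame_b = net_b(counter_to_bits(steps - 1 - t))   # moving backward in time
        if torch.norm(frame_a - frame_b) < tol:           # similar images at different time positions
            matches.append((t, steps - 1 - t))
    return matches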
-
The real power of the symbolic NN is that you can show it a picture of the letters "cat" and it can be trained to
generate an image of a real cat on its output, or it can be trained to generate eight sub-images of the most
extreme states a cat can be in, and every other cat is just a distance function between them.
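The "distance function between the eight extreme states" could look roughly like this (one guessed way to do it, reusing the SymbolicNet sketch above): keep eight prototype codes, and describe any other cat as a weighted blend of them.

import torch

prototype_codes = torch.rand(8, 64)   # eight codes, one per extreme cat pose (illustrative values)

def blend_code(weights):
    # weights: eight numbers saying how close the cat is to each extreme state.
    w = torch.tensor(weights, dtype=torch.float32)
    w = w / w.sum()
    return w @ prototype_codes         # a weighted mix of the eight extreme codes

intermediate_cat = net(blend_code([0.5, 0.5, 0, 0, 0, 0, 0, 0]))   # halfway between poses 1 and 2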
AIXI AGI theory states that for every image an AGI sees, it will try to clean it up and make it simpler,
more compressed in size. Well, sure it can. There is the input data, the cat. And then there is the
target it wants to transform it into. What is the target? It is another image or sound that is
a lot fewer bytes than the original and is much simpler, clearer and sharper to
view. Like a few scratches of pencil on a clean piece of white paper: " c A t ".
Just give the symbolic NN the original and the simpler target for the output, and it will train itself to it. And the
grounding of a symbol is done.
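The grounding step itself is just supervised training on (original, simpler target) pairs. A minimal loop, assuming a PyTorch image-to-image net (like the Autoencoder sketch earlier) and placeholder tensors for the cat photo and the " c A t " scribble, with the target kept the same size as the input only for simplicity:

import torch
import torch.nn as nn

def ground_symbol(net, original, simple_target, steps=500, lr=1e-3):
    # original: the big cat photo; simple_target: the tiny, clear " c A t " image.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(net(original), simple_target)   # original in, simpler symbol out
        loss.backward()
        opt.step()
    return net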
Like in the algebra of mathematics: A is A, but a number can be grounded to it with a symbolic NN,
like A = 5.
If the data compression and clarity do not improve, it will be forgotten, or not remembered for long.
And now back to the autoencoder NN. In recreating what it is viewing, it can recreate fantasy, or factor
features out, or produce a dithered reality, an LSD fantasy, an enhanced version of reality, or images of god.
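One guess at how that could work in code (a speculative sketch using the Autoencoder from earlier): distort or amplify the compressed code before decoding, so the reconstruction drifts away from plain reality.

import torch

def dream(autoencoder, image, noise=0.5, gain=1.0):
    # Encode reality, bend the code, then decode the bent version.
    code = autoencoder.encoder(image)
    code = gain * code + noise * torch.randn_like(code)   # more noise = LSD fantasy, more gain = enhanced reality
    return autoencoder.decoder(code)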