Explaining AI

  • 9 Replies
  • 604 Views
*

keghn

  • Trusty Member
  • ********
  • Replicant
  • *
  • 689
Explaining AI
« on: October 08, 2017, 05:18:36 pm »

Explaining AI: 




*

zoraster

  • Roomba
  • *
  • 5
Re: Explaining AI
« Reply #1 on: October 08, 2017, 07:03:47 pm »
This is a cool website :)

*

yotamarker

  • Trusty Member
  • ********
  • Replicant
  • *
  • 596
    • battle programming
Re: Explaining AI
« Reply #2 on: October 09, 2017, 07:28:27 pm »
those neural network circles are really annoying. I try and try and I can't figure out how they work.
how does a column of circles understand that 'a' and 'A' are the same?!

*

keghn

  • Trusty Member
  • ********
  • Replicant
  • *
  • 689
Re: Explaining AI
« Reply #3 on: October 09, 2017, 09:31:44 pm »
 Well, you have an input of, say, 16 x 16 pixels, and then you have two outputs on the other side of the NN:
 one for upper-case "A" and the other for lower-case "a".
 You then show (input) an upper-case "A" to the NN, and read the big-"A" output on the other side. You want that output to be
the max value of one, to indicate that big "A" is being inputted. But this NN is untrained, so it is outputting garbage. To fix this, you stare at
the output and work your way back to the beginning. On your way you find which neurons are not up-voting. Then take the big "A" away, and
the output should drop to the minimum of zero. If it does not, you follow backward through the network again, find which neurons are over-voting, and
shut them down.

 Little "a" will have its very own output on the other side of the NN. Do the same thing for it, in a way that does not knock big "A" out of
whack.
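
 Here is a minimal sketch of that procedure in code, assuming a single-layer net (flattened 16x16 input, two outputs) and compressing the "walk backward and adjust the voting neurons" step into a simple perceptron-style weight nudge; a real deep net would do this with backpropagation. The names and the stand-in bitmap are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.01, size=(2, 256))  # row 0 votes for "A", row 1 for "a"

    def outputs(W, pixels):
        # pixels: flat array of 256 values, 1 = ink, 0 = background
        return 1.0 / (1.0 + np.exp(-(W @ pixels)))  # squash each vote into 0..1

    def nudge(W, pixels, target, lr=0.1):
        # target = [1, 0] when showing big "A", [0, 1] for little "a".
        # Weights feeding an over-voting output get turned down,
        # weights feeding an under-voting output get turned up.
        error = np.asarray(target) - outputs(W, pixels)
        return W + lr * np.outer(error, pixels)

    big_A = (rng.random(256) > 0.5).astype(float)  # stand-in flattened bitmap
    for _ in range(100):
        W = nudge(W, big_A, [1, 0])  # push the "A" output toward 1, "a" toward 0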

*

ranch vermin

  • Not much time left.
  • Starship Trooper
  • *******
  • 484
  • Its nearly time!
Re: Explaining AI
« Reply #4 on: October 10, 2017, 10:54:40 am »
Just hardwire it.
I would customize the semantic rule for it, so it reads 'a' and 'A' as a common bit, and has the capital as the semantic that's different.

If it were collecting at random, unsupervised, I guess it would see similarities in the patterns the letters appear around, and then maybe in some instances it could treat them as the same.
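
A rough sketch of that hardwired rule, assuming we simply encode each character as (letter identity, capital flag) instead of asking a network to discover the link (the helper name is made up):

    def encode(ch):
        # shared letter bit first, the "capital" semantic second
        return (ch.lower(), ch.isupper())

    print(encode('a'))                       # ('a', False)
    print(encode('A'))                       # ('a', True) - same letter, different case bit
    print(encode('a')[0] == encode('A')[0])  # True: read as the same symbol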

*

yotamarker

  • Trusty Member
  • ********
  • Replicant
  • *
  • 596
    • battle programming
Re: Explaining AI
« Reply #5 on: October 10, 2017, 03:28:14 pm »
maybe each circle in the hidden layer represents a combination of several input pixels?
in other words, hidden layer circle = cluster of input pixels?
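
That intuition is close. A minimal sketch of what one hidden "circle" computes in a standard fully-connected layer: a weighted sum over all input pixels, so the "cluster" is soft, picked out by whichever weights end up large (values here are random placeholders):

    import numpy as np

    rng = np.random.default_rng(1)
    pixels = rng.random(256)                      # flattened 16x16 input
    w = rng.normal(0.0, 1.0, size=256)            # this one circle's incoming weights
    b = 0.0                                       # its bias
    activation = max(0.0, float(w @ pixels) + b)  # ReLU: the number the circle outputs

    # Pixels with large positive weights are, in effect, the "cluster"
    # this circle listens to; pixels with near-zero weights are ignored.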

*

keghn

  • Trusty Member
  • ********
  • Replicant
  • *
  • 689
Re: Explaining AI
« Reply #6 on: October 10, 2017, 03:30:38 pm »
 Input pixels get a value of one for a high (ink) pixel, and zero for the background.

 A small NN you can hard-wire: pick the weight values in a hand-crafted way. Or you can randomly select the weights.
 Then there are many ways the NN can detect an "A". By random chance it might: 1) use just the high pixels; 2) use just the low pixels and invert the
value; 3) use just the edges; or 4) use a combination of all of the above.
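
 For the hand-crafted option, here is a minimal sketch of hard-wired weights, assuming a template-matching view: the weight "image" is literally a crude "A", so high input pixels that land on the strokes up-vote the output (the shape and numbers are made up for illustration):

    import numpy as np

    template_A = np.zeros((16, 16))
    template_A[3:13, 4] = 1.0    # left stroke (very rough stand-in shape)
    template_A[3:13, 11] = 1.0   # right stroke
    template_A[8, 5:11] = 1.0    # crossbar

    def score(image):
        # image: 16x16 array, 1 = ink, 0 = background
        return float((template_A * image).sum())

    print(score(template_A))          # a perfect match scores highest
    print(score(np.zeros((16, 16))))  # a blank image scores 0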
 In deep-learning NNs they start from random weights, and training turns them into black-box math that we do not fully understand yet.
 Look at 12:37 in this video for an example:


 

*

yotamarker

  • Trusty Member
  • ********
  • Replicant
  • *
  • 596
    • battle programming
Re: Explaining AI
« Reply #7 on: October 10, 2017, 06:57:36 pm »
whateva, the circles are clusters until they explain it better

*

yotamarker

  • Trusty Member
  • ********
  • Replicant
  • *
  • 596
    • battle programming
Re: Explaining AI
« Reply #8 on: October 10, 2017, 06:59:18 pm »
tensor schmensor, it's clusters :tickedoff:

*

keghn

  • Trusty Member
  • ********
  • Replicant
  • *
  • 689
Re: Explaining AI
« Reply #9 on: October 12, 2017, 03:50:17 pm »

 

