I want to crack Neural Networks

  • 37 Replies
  • 838 Views
*

LOCKSUIT

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1177
  • First it wiggles, then it is rewarded.
    • Enter Lair
Re: I want to crack Neural Networks
« Reply #15 on: January 12, 2018, 11:13:17 pm »
I find it strange how neural networks are basically what the whole machine learning field is about, yet... it's just a simple drawing of some lines, man. Yes, I know they have a use, and they have neurons and weights and biases and sigmoid functions, but... they are too simple... something's weird here. And I also find it a bit disturbing how, when reading an AI book (I have some knowledge of the field, and I'm not hating), it's still all like "Then these class representation distributions are fed into the linear regression classifier for backprop and made into vectors with biases, and each function of the network is trained to discriminate the key features of the class features into small units, quantized after iterations of the feikstein mannboltz algorithm, before sending it through an autodecoder for optimization of its sequences."

*

korrelan

  • Trusty Member
  • *********
  • Terminator
  • *
  • 813
  • Look into my eyes! WOAH!
    • Google +
Re: I want to crack Neural Networks
« Reply #16 on: January 13, 2018, 11:19:23 am »
Hi Lock… I’m glad you are feeling better.

Whilst you do seem to be getting a better grasp of how NNs work, there still seems to be some confusion.  The NN in the diagram is a standard feed-forward NN with an input layer, five hidden layers and an output layer… the labels/arrows are incorrect.  This type of NN would usually be used as a feed-forward classifier and would not use a reward-based learning schema; it would probably use back propagation.
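The feed-forward-classifier-plus-backprop setup korrelan describes can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the network from the diagram: the layer sizes, learning rate, and XOR toy data are all my own assumptions.

```python
# Minimal feed-forward NN with one hidden layer, trained by backpropagation.
# Layer sizes and the XOR toy data are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: XOR, a classic task a linear model cannot classify.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input  -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

lr = 1.0
for _ in range(5000):
    # Forward pass: the input vector is "funnelled" towards the output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(out.ravel())  # after training, typically close to [0, 1, 1, 0]
```

The same forward/backward shape applies however many hidden layers you stack; only the number of weight matrices changes.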

Quote
But do all neural nets use "Features"?

If you class a pattern of weight vectors as a feature then yes, they do, though normally the term ‘feature’ is used with convolutional neural networks (CNNs) and image processing.

The overall schema of a classical NN can be thought of as a ‘bottleneck’ function.  Input vectors enter on one side of the NN and are ‘diverted/focused/funnelled’ towards one or more of the outputs.

The input data has to be converted into a format that is compatible with the NN. With a CNN, for example, a set of convolution filters is applied to the input image, each filter highlighting a different aspect of the image.  A set of four filters could highlight the horizontal, vertical and two diagonal line directions in an image, for example.  Each filter converts the RGB image to a monochrome version that can be fed into the CNN; this is done for all four filters… the results are then usually fed into a pooling layer. Usually several layers of convolution are applied, each layer detecting larger, more complex ‘features’ in the original image.
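The filter-then-pool step above can be sketched directly. This is an illustrative sketch only: the 3x3 edge kernels, the 8x8 toy image, and the naive convolution loop are my own assumptions, not korrelan's code (real CNNs also *learn* their kernels rather than hand-coding them).

```python
# Sketch of the filter-then-pool step: apply small edge filters to a grey
# image, then 2x2 max-pool each filtered map. Kernels are illustrative.
import numpy as np

def convolve2d(img, kernel):
    """Naive 'valid' sliding-window filter (cross-correlation, as in most CNN libs)."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Keep the strongest response in each size x size block."""
    H, W = fmap.shape
    H, W = H - H % size, W - W % size              # crop to a multiple of the pool size
    return fmap[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))

# Four 3x3 filters: horizontal, vertical, and the two diagonal edge directions.
filters = {
    "horizontal": np.array([[-1, -1, -1], [2, 2, 2], [-1, -1, -1]], float),
    "vertical":   np.array([[-1, 2, -1]] * 3, float),
    "diag_right": np.array([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]], float),
    "diag_left":  np.array([[-1, -1, 2], [-1, 2, -1], [2, -1, -1]], float),
}

img = np.zeros((8, 8)); img[4, :] = 1.0            # toy image: one horizontal line
pooled = {name: max_pool(convolve2d(img, k)) for name, k in filters.items()}
# The horizontal filter responds strongly (max 6.0); the vertical one not at all.
```

Stacking another convolution on top of the pooled maps is what lets later layers react to larger, more complex features.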

The classical NN is a basic function that can learn to classify weight vectors; the difficult bit is finding ways to leverage the power of the NN against your data set.

 :)
It thunk... therefore it is!

*

keghn

  • Trusty Member
  • ********
  • Replicant
  • *
  • 730
Re: I want to crack Neural Networks
« Reply #17 on: January 13, 2018, 02:11:37 pm »
 The Feynman explanation of an NN:

 Input data is transformed into output data.
 Each transform has to be trained into the NN.


*

LOCKSUIT

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1177
  • First it wiggles, then it is rewarded.
    • Enter Lair
Re: I want to crack Neural Networks
« Reply #18 on: January 13, 2018, 05:20:32 pm »
I studied ANNs all over a while back, and now I basically get how they all work.

I wonder why none of them have a sequential history bar / timeline. And what about self-ignitions? A sudden thought pops into your head.

*

keghn

  • Trusty Member
  • ********
  • Replicant
  • *
  • 730
Re: I want to crack Neural Networks
« Reply #19 on: January 13, 2018, 06:54:10 pm »
 To really get a feel for the dynamics of an NN, one should look at the types of input-to-output mappings an NN can do.
 You could have an NN trained to do these things:
 Input a 4 and output a 3, and from the same NN input a 5 and it outputs a 7. Or input a 9 and get out a 9.

 Or in a different NN, input a picture of a cat and get out the same cat.
 Or input the letters "cat" and get out a picture of a cat. Or even input a number and get out a cat.
 Or input a picture of the door to your house and get out a picture of the night stand where the keys to the door are.
 Or, like in algebra, you can think of "x" and out comes the number six. x is equal to six.
 Or a cat walks behind a big bush. So bush is equal to cat?
 Or a country on a globe rotates behind the globe, so the globe is also equal to that country?
 NNs work like a one-directional linked-list algorithm.

 For sequential stuff, the output of the NN is thrown back into the input of the NN. Also known as an RNN.
 Or you could have a bunch of NNs in a row and have feedback somewhere within?

 For me, doing video recording with an NN means using a frame number of a video, feeding it into an NN, and getting out that frame :)
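The feedback idea keghn describes (an RNN is just an NN whose state loops back in as extra input) fits in a few lines. A minimal sketch, assuming illustrative sizes and random weights; a real RNN would also train Wx and Wh by backprop through time.

```python
# An RNN step: the new hidden state depends on the current input AND the
# previous state, which is "thrown back in". Sizes/weights are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden = 3, 5

Wx = rng.normal(size=(n_in, n_hidden))      # input  -> hidden
Wh = rng.normal(size=(n_hidden, n_hidden))  # hidden -> hidden (the feedback loop)
b = np.zeros(n_hidden)

def rnn_step(x, h_prev):
    """One recurrent step: mixes the new input with the fed-back state."""
    return np.tanh(x @ Wx + h_prev @ Wh + b)

sequence = rng.normal(size=(4, n_in))       # a toy sequence of 4 input vectors
h = np.zeros(n_hidden)
states = []
for x in sequence:
    h = rnn_step(x, h)                      # feedback: h loops back in next step
    states.append(h)
states = np.array(states)                   # shape (4, 5): one state per time step
```

Because each state mixes in all earlier states, the network can carry sequential context — the "history" that a plain feed-forward NN lacks.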


*

LOCKSUIT

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1177
  • First it wiggles, then it is rewarded.
    • Enter Lair
Re: I want to crack Neural Networks
« Reply #20 on: January 13, 2018, 07:13:28 pm »
Or output actions from sentences or somatosensory/visual sensors.
Or DNA/RNA data sequences.
Or creature generations.

*

ranch vermin

  • Not much time left.
  • Replicant
  • ********
  • 542
  • Its nearly time!
Re: I want to crack Neural Networks
« Reply #21 on: January 13, 2018, 08:29:09 pm »
what the future's made of ->

 A   Input rgb, output corner class. (=)
 B   input corner class, output corner descriptor (=)
 C   Input line density, output 3d depth.  (=)   <-this has to be trained with a triangulator.
 D   input corner collection key, output entity descriptor (=)
(then in extreme parallel and looped many times.)
  E  input entity descriptors, output motor posture (=)
  F  input motor posture, output entity descriptors (=)
   -> run rule reason why goal target motivation scoring program based upon geometry.

That's what I have planned for mine. But it ain't easy getting the first NN with a huge capacity!
And there aren't many assignments from eye to robo-arm! It's virtually all PARALLEL, except for the final motor simulation.

So I'd like to say I've gotten to the point where symbolic AI is one and the same as pattern-matching AI. It's all based upon assignment, or equals; it's just more of a bulldozer - and more people do it.

*

LOCKSUIT

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1177
  • First it wiggles, then it is rewarded.
    • Enter Lair
Re: I want to crack Neural Networks
« Reply #22 on: January 13, 2018, 08:37:09 pm »
Now for actual cracking of ANN.

Ahhh, I think some of my AGI instructions were wrong. What should be happening is: when a sense gets "saved" into the network, and it senses it again later and strengthens it, each time it selects the actions linked to this sense it tweaks those actions for improvement, checking for higher reward. If the reward is not higher, it should not save the mutated action tweaks. It should also delete the ones that have lower ranks, so that there is only one motor task in the motor cortex linked to the sense in the sensory cortex, and not duplicates of the same motor task (duplicates with slight variation from tweaks~).

Also, senses link to actions. But what about how we link an image to another image and recall one after recalling the first? Once an image is chosen at the end of the network, it must be linking straight to another end neuron in the last layer, or to some timeline bar made of sequential sets.

And what about how we link an image to a sound (two different sensory types)? Again, the end neuron in the last layer must either link directly to the other neuron in the last layer of the other sensory cortex, or link to it indirectly through the history bar made of sequential sets. (Each set contains the five senses sensed and the actions done for each motor.)

?

*

ranch vermin

  • Not much time left.
  • Replicant
  • ********
  • 542
  • Its nearly time!
Re: I want to crack Neural Networks
« Reply #23 on: January 13, 2018, 10:02:34 pm »
You're asking the right questions, as far as I can see.    It's pointless talking about it; you have to put it together to learn more. It helps a lot to actually do it and see it in front of your eyes.  (A painting is easy to paint in hindsight, or rather, when you're looking at the finished thing.)

Also -> I've got many theories speculating around, but it's pointless to continue thinking about them unless I'm resting from actual coding.  A big one of mine is: "can I collect entities of equal meaning, like words on a page, even though it isn't words?"

Another one that might help you specifically, because it's like what you said: you can get a slight variation off old motor records by finding the halfway point of two successful ones; it may be in the centre, doing the action the correct way.

So just trust that your thinking is good, and implement the bastard!

*

LOCKSUIT

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1177
  • First it wiggles, then it is rewarded.
    • Enter Lair
Re: I want to crack Neural Networks
« Reply #24 on: January 13, 2018, 10:13:45 pm »
But ranch, take a moment here, settle down. I can tell my hired programmer friend to code stuff. And I have a lot of notes. But I want to know precisely the responses from you all, as a community as a whole. Pin the answer down. How does the AI field (or you) think an image links to a sound, or an image links to an image? Is it how I said? It is so simple to get this right. Doing anything without knowing is much harder, too. Tell me your answers. In my case, knowledge is all I use at the moment, and I make sure I do a good job despite no tests.

And besides, I feel an unlocking feeling about ANNs, if I can just know a few simple answers to questions like this one.

I just need to know a few things.

*

LOCKSUIT

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1177
  • First it wiggles, then it is rewarded.
    • Enter Lair
Re: I want to crack Neural Networks
« Reply #25 on: January 13, 2018, 11:45:35 pm »
Wahahaha, wahahahahaha!! Now that I have a better understanding of ANNs, I am extracting more data about how they work! This is EASY!

Stay tuned, I will probably have my diagram up in a few hours.

*

LOCKSUIT

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1177
  • First it wiggles, then it is rewarded.
    • Enter Lair
Re: I want to crack Neural Networks
« Reply #26 on: January 14, 2018, 11:56:50 pm »
See attachment in this post.

Alright, so here is how I propose the human brain links an image of an eye (layer 3) to an image of a crowd (layer 5), an image of X to a word of sounds, and senses controlling actions.

As you can see, skip-layer connections are needed, and e.g. the crowd neuron must feed backwards to the eye-neuron concept a few layers downwards.

The multi-sensory representation concepts are formed as sums in the frontal cortex. Master sub-policies.

Then this controls high-level action tasks that feed out of the brain to the body system.



For anyone who wants to understand CNNs, LSTMs/RNNs, and SVMs, these are the best resources I've seen.





« Last Edit: January 15, 2018, 02:36:18 am by LOCKSUIT »

*

ranch vermin

  • Not much time left.
  • Replicant
  • ********
  • 542
  • Its nearly time!
Re: I want to crack Neural Networks
« Reply #27 on: January 15, 2018, 05:18:53 am »
Go on then, where's the big implementation?

*

LOCKSUIT

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1177
  • First it wiggles, then it is rewarded.
    • Enter Lair
Re: I want to crack Neural Networks
« Reply #28 on: January 15, 2018, 05:14:23 pm »
No implementation yet. Just analysing the situation right now. Prep stage.

*

LOCKSUIT

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1177
  • First it wiggles, then it is rewarded.
    • Enter Lair
Re: I want to crack Neural Networks
« Reply #29 on: January 15, 2018, 08:20:46 pm »
I have two new questions. See the image attachment in this post. The stick man is made into convolved feature maps: one for vertical lines, one for diagonally-left lines, and one for diagonally-right lines. It's extracting features and looking for matches at a small level.

Now for my questions. Question 1, at the bottom left of the attachment: do those feature maps ever get layered on top of each other to, you know, use lines and curves to make noses and eyes, then use eyes and noses to make faces, creating higher-level features as you go up the six layers in the mammalian brain? Or does that just give us the same picture we started with? I.e., in my attachment I show an OR at the bottom left, asking whether it instead uses a fully connected layer over the curve/line features to sum up probability weights for what it is: out of three options, it is an eye or nose or mouth, or say a man or box or fox? If it does that, then it never actually builds higher-level features (lines/curves; eyes/nose/mouth; face/body/limbs; human/chair/pillow; dining table with chairs with humans with pillows; that scene with music playing; and even higher). What that would mean is that it just uses the lines and curves, and any higher-level features, like eyes in the next layer and then faces in the layer after that, are really just fully connected layers where the image does not look like a face but is instead just a number / representation, which is then used in the next layer for e.g. faces.
Question 2, at the bottom right of attachment 1: how does it detect just a nose (or ears, as I drew) by lines and curves, if those line and curve features will light up some of the stick man at the same time?

Another issue I'm having: see attachment 2 explaining CNNs? In this image you can draw a number and the thing detects whether it's a 1 or 8 or 0. 784 pixels get fed into the first layer, and it clearly stated it uses multiple features, yet if that were so, then on the far left at the first layer of 784 pixels there would be a divergence into 3 separate layers, each having 784 pixels of input. It seems as if it is just 1 feature map summing its own self, rofl! Helppp!
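One possible resolution of this puzzle, sketched in code: in a fully connected layer, no copy of the 784 inputs is needed per feature, because each hidden unit simply owns its own 784 weights over the same single input. This is an illustrative sketch with assumed sizes, not the network from the attachment.

```python
# Why the 784 inputs don't diverge into 3 copies: each "feature" in a fully
# connected layer is just its own column of weights over the SAME input.
# Sizes here (784 pixels, 3 features) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
pixels = rng.random(784)              # one flattened 28x28 image

n_features = 3                        # three "feature detectors"
W = rng.normal(size=(784, n_features))
b = np.zeros(n_features)

features = pixels @ W + b             # shape (3,): one activation per feature
# All three columns of W read the same 784 numbers -- three detectors
# share one input, rather than needing three copies of the input layer.
```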

 

