Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: LOCKSUIT on January 10, 2018, 01:52:17 pm

Title: I want to crack Neural Networks
Post by: LOCKSUIT on January 10, 2018, 01:52:17 pm
Hi friends. I'm doing a tad better.

I find it fascinating that one piece of information (shared features) makes neural networks make so much more sense to me. Now I get why the wiki article I read many months back said the brain processes higher concepts as you go higher up the layers. And I get the shared features part now, and why they can detect small features and whole features and, in the end, e.g. a thousand images. And I've learned before that language works the same way as e.g. a CNN, where a, b, c are used a lot, then words light up less so, then word pairs less-less so, and so on higher up until bigger topics are recognized consciously, not subconsciously. FURTHER, the frontal cortex is built like this but allows higher concepts: if you see this image and this image and this sentence and do these actions, then one of the output neurons lights up. So am I right to say that all neural networks are hierarchical and work by "shared features"? Tell me more things as important as "shared features".

Also after the above question, I want to draw out what a neural network looks like visually, to understand how it learns to sense, act, and reward those actions.
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 10, 2018, 05:10:19 pm
I haven't got mine working yet, but you have to build your way up to the big concepts; you can't detect it all at once, considering you're starting from an RGB map (the camera).
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 10, 2018, 05:23:57 pm
Lol are you talking about my studying or the AI's studying?

To be clearer, you only recognize a face if you recognize a nose, mouth, etc. first. It's a hierarchy, using Shared Features. Are most Artificial Neural Networks based on a Shared Features connectome? Vision, Language, and the Frontal Cortex seem to all be one.
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on January 10, 2018, 05:30:48 pm
They say that they are not certain how a neural network really does the big picture. They glued together small parts, combined a lot of them, and when they ran the algorithms, it worked, somehow. Maybe I missed something...
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 10, 2018, 07:05:38 pm
Oh, you mean how hierarchical backprop nets work. They make groups of pixels, then groups of those groups; they can overlap or not overlap. It's a brain teaser to think about.

Just imagine a really advanced one that groups pictures of the actual thing with the word for the thing, or sign language for the thing as well, in different sub-categories, all unsupervised. It gets pretty complex.

After you've done this job, you then use it as a base for the robot to "see the world through", and then you generate movements off it to its motors.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 10, 2018, 07:47:28 pm
Finally I feel like I'm on topic with you guys....................

Yes yes! So Deep Learning NNs...great.....they recognize. But their outputs must now go to a motor DL NN. There's a lot more of the bigger picture missing. Like an attention module. A history bar over-lording all the hierarchies as a huge sequential parallel hierarchy and frontal cortex all at once. A reward module. Artificial reward creation. My mind is exploding full of ideas.

Now I want to show you an image, and I want to know how it is rewarding its actions. Image attached.
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 11, 2018, 09:51:18 am
That's the idea. Korrelan has a neural network that could possibly produce a blob map of identifications, so you could be working with something like that for the input into the motor generation system. My model reduces it to so many points of interest.

Also, I have never seen a robot that actually saw its environment in this detail, so it's *new* if you get it to work!
Title: Re: I want to crack Neural Networks
Post by: keghn on January 11, 2018, 02:05:40 pm

The 8 Neural Network Architectures Machine Learning Researchers Need to Learn: 
https://towardsdatascience.com/the-8-neural-network-architectures-machine-learning-researchers-need-to-learn-11a0c96d6073 

Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 11, 2018, 06:14:05 pm
Thanks keghn that clarified some things.

So LSTMs + GANs + Hopfield NNs are the furthest things we currently have, since they combine FNNs, RNNs, and HNNs. Right?



"To be clearer, you only recognize a face if you recognize a nose, mouth, etc. first. It's a hierarchy, using Shared Features. Are most Artificial Neural Networks based on a Shared Features connectome? Vision, Language, and the Frontal Cortex seem to all be one."

I still don't have an actual answer for this question.

Do all Artificial Neural Networks use hierarchies and build higher concepts made of smaller ones? Or do some NNs revolve around other things and not even use concept building? If so, please tell me what that is. Because I find "using shared features to form higher concepts out of smaller concepts" really important. So I want to know more things this important. E.g. pooling, but I don't think pooling is THAT important for getting the WHOLE picture, if you get what I mean.
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 11, 2018, 07:10:28 pm
You're building bigger groups from smaller groups.

If you have object A and object B, they can become a single object due to a semantic measurement. A face would be eyes and nose, as in "contains".

But it gets complicated.

A hamburger would be the same thing as a chainsaw, if the semantic were "they both destroy the environment", as in cows farting into the atmosphere, which is what some stupid hippies think. My point is, there are a lot of relations you can draw to make a container, due to a context or measurement.


So if the computer needed something that destroyed the environment (although a lot more would be taken into account), it has the knowledge in its database to do it, even if it's just score sharing around containers in its GA-type randomization system.
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on January 11, 2018, 07:46:47 pm
An object is a set of properties by which the object is recognized. Like in "object oriented programming" (OOP (https://en.wikipedia.org/wiki/Object-oriented_programming)), property membership can be inherited over different classes. Classes, besides inherited properties, can add their own specific properties. "Cow" inherits all the properties from "living being", which inherits all the properties from "entity", and so on. Two classes (and so objects that are members of those classes) can share the same property inherited from a super-class (like "living being"), although they are different classes (like "cow" or "giraffe"). "Living being" may have a property "farts", while "cow" and "giraffe", which inherit "farts", may add their own properties like "produces tasty milk" or "has problems drinking water". And the pyramid branches downwards by growing the membership of (inherited) properties. Going up through the pyramid is called generalization, and going down is called specialization.

Object oriented programming can form nice knowledge structures, especially when paired with declarative programming (which is still not often the case, but OCaml (https://ocaml.org/) is an exception).

[Edit] Building pyramidal knowledge bases is still in scientific discovery stage, IMHO (see semantic web (https://www.w3.org/2001/sw/wiki/Main_Page) and OWL (https://www.w3.org/2001/sw/wiki/OWL)).
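ivan.moony's inheritance pyramid can be sketched in a few lines of Python. The class names and properties below are taken from his example; the set-based representation is just one illustrative way to model inherited property membership, not a claim about how he would implement it:

```python
# A minimal sketch of the generalization/specialization pyramid described
# above. Each class inherits its parent's properties and adds its own.

class Entity:
    properties = {"exists"}

class LivingBeing(Entity):
    # inherits "exists", adds its own property
    properties = Entity.properties | {"farts"}

class Cow(LivingBeing):
    properties = LivingBeing.properties | {"produces tasty milk"}

class Giraffe(LivingBeing):
    properties = LivingBeing.properties | {"has problems drinking water"}

# Two different classes share the property inherited from the super-class.
shared = Cow.properties & Giraffe.properties
print(sorted(shared))  # ['exists', 'farts']
```

Going up the pyramid (Cow -> LivingBeing -> Entity) drops specific properties (generalization); going down adds them (specialization).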
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 11, 2018, 08:22:02 pm
But do all neural nets use "Features"? I.e. nose and eye Features create a face Feature. And do connections all have different weight probabilities?
Title: Re: I want to crack Neural Networks
Post by: keghn on January 11, 2018, 08:42:20 pm
But what *is* a Neural Network?:

https://www.youtube.com/watch?time_continue=1&v=aircAruvnKk
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 12, 2018, 12:18:03 am
That didn't really help, keghn, lol... I already knew a ton of that actually. Oh, I see it has more parts. OK, so it's quite visually useful and good quality stuff.

But I mean, the question below is still unanswered:

"But do all neural nets use "Features"? I.e. nose and eye Features create a face Feature. And do connections all have different weight probabilities?"

I get that there's more to NNs... but... I want to know if they ALL use feature sharing to build deeper, higher concepts. I would think not all NNs use it. But maybe they all do.
Title: Re: I want to crack Neural Networks
Post by: keghn on January 12, 2018, 06:55:25 pm

Spiking Neural Networks, the Next Generation of Machine Learning: 

https://towardsdatascience.com/spiking-neural-networks-the-next-generation-of-machine-learning-84e167f4eb2b 

Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 12, 2018, 11:13:17 pm
I find it strange how neural networks are what the machine learning field is basically all about, yet......it's just a simple drawing of some lines, man......yes I know they have a use, and they have neurons and weights and biases and sigmoid functions, but...they are too simple....something's weird here........................and I also find it a bit disturbing how, when reading an AI book, I have some knowledge of the field, and I'm not hating....but...it's still all like, lol: "Then these class representation distributions are fed into the linear regression classifier for backprop and made into vectors with biases and train each function of the network to discriminate the key features of the class features into small units quantinized after iterations of the feikstein mannboltz algorithm before sending it through an autodecoder for optimization of its sequences."
Title: Re: I want to crack Neural Networks
Post by: korrelan on January 13, 2018, 11:19:23 am
Hi Lock… I’m glad you are feeling better.

Whilst you do seem to be getting a better grasp of how NNs work, there still seems to be some confusion.  The NN in the diagram is a standard feed forward NN with an input layer, five hidden layers and an output layer… the labels/ arrows are incorrect.  This type of NN would usually be used as a feed forward classifier and would not use a reward based learning schema; it would probably use back propagation.

Quote
But do all neural nets use "Features"?

If you class a pattern of weight vectors as a feature then yes, they do, though normally the term ‘feature’ is used with convolutional neural networks (CNNs) and image processing.

The overall schema of a classical NN can be thought of as a ‘bottle neck’ function.  Vector weights enter on one side of the NN and are ‘diverted/ focused/ funnelled’ towards one or more of the outputs. 

The input data has to be converted into a format that is compatible with the NN, so with a CNN for example a set of convolution filters is applied to the input image; each filter highlights a different aspect of the image.  A set of four filters could highlight horizontal, vertical and diagonal lines in an image, for example.  Each filter converts the RGB image to a monochrome version that can be fed into a CNN; this is done for all four filters… the results are then usually fed into a pooling NN. Usually several layers of CNN are applied, each layer detecting larger, more complex ‘features’ in the original image.

The classical NN is a basic function that can learn to classify weight vectors; the difficult bit is finding ways to leverage the power of the NN against your data set.

 :)
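The filtering stage korrelan describes can be sketched in NumPy. The image, the kernel values, and the hand-rolled convolution below are purely illustrative (a real CNN would learn its filters and use a library routine); they just show how one filter highlights one aspect of an image, here vertical edges:

```python
import numpy as np

# Slide a small kernel over a monochrome image, producing a feature map.
def convolve2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# 5x5 image with a bright vertical stripe in the middle column.
img = np.zeros((5, 5))
img[:, 2] = 1.0

# Sobel-style vertical-edge filter (one of the "set of filters").
vertical = np.array([[-1, 0, 1],
                     [-2, 0, 2],
                     [-1, 0, 1]], dtype=float)

fmap = convolve2d(img, vertical)
print(fmap)  # strong responses on either side of the stripe, zero on it
```

Applying three more kernels (horizontal and the two diagonals) to the same image would give the four feature maps in korrelan's example, each then fed to the pooling stage.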
Title: Re: I want to crack Neural Networks
Post by: keghn on January 13, 2018, 02:11:37 pm
 The Feynman explanation of NNs:

 Input data is transformed into output data.
 Each transform has to be trained into the NN.

Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 13, 2018, 05:20:32 pm
I had studied ANNs all over a while back, and now I get how they all basically work.

I wonder why none of them have a sequential history bar / timeline. And what about self-ignitions? A sudden thought popping into your head.
Title: Re: I want to crack Neural Networks
Post by: keghn on January 13, 2018, 06:54:10 pm
 To really get a feel for the dynamics of a NN, one should look at the types of input-data-to-output-data mappings a NN can do.
 You could have a NN trained to do these things:
 Input a 4 and output a 3, and from the same NN input a 5 and it outputs a 7. Or input a 9 and get out a 9.

 Or in a different NN, input a picture of a cat and get out the same cat.
 Or input the letters "cat" and get out a picture of a cat. Or even input a number and get out a cat.
 Or input a picture of the door to your house and get out a picture of the night stand where the keys to the door are.
 Or, like in algebra, you can put in "x" and out comes the number six. x is equal to six.
 Or a cat walks behind a big bush. So bush is equal to cat?
 Or a country on a globe rotates behind the globe, so the globe is also equal to that country?
 NNs work like a one-directional linked-list algorithm.

 For sequential stuff, the output of the NN is thrown back into the input of the NN. Also known as an RNN.
 Or you could have a bunch of NNs in a row and have feedback somewhere within?

 For me, doing video recording with a NN means using a frame number of a video, feeding it into the NN, and getting out that frame :)
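The feedback idea keghn mentions (output thrown back into the input) is the core of an RNN cell. Here is a toy sketch; the sizes and random weights are arbitrary placeholders, and nothing is trained, it just shows the state carrying information forward:

```python
import numpy as np

# Minimal RNN cell: the new hidden state depends on the current input
# AND the previous hidden state (the feedback loop).
rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))   # input -> hidden weights
W_rec = rng.normal(size=(4, 4))  # previous hidden -> hidden (feedback)
h = np.zeros(4)                  # hidden state starts empty

sequence = [np.array([1.0, 0.0, 0.0]),
            np.array([0.0, 1.0, 0.0]),
            np.array([0.0, 0.0, 1.0])]

for x in sequence:
    h = np.tanh(W_in @ x + W_rec @ h)

print(h.shape)  # (4,) -- this state now summarizes the whole sequence
```

Feeding the same inputs in a different order leaves a different final state, which is exactly what makes the loop useful for "sequential stuff".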

Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 13, 2018, 07:13:28 pm
Or output actions from sentences or somatosensory/visual sensors.
Or DNA/RNA data sequences.
Or creature generations.
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 13, 2018, 08:29:09 pm
what the future's made of ->

 A   Input RGB, output corner class. (=)
 B   Input corner class, output corner descriptor. (=)
 C   Input line density, output 3D depth. (=)   <- this has to be trained with a triangulator.
 D   Input corner collection key, output entity descriptor. (=)
(then in extreme parallel and looped many times:)
  E  Input entity descriptors, output motor posture. (=)
  F  Input motor posture, output entity descriptors. (=)
   -> run rule/reason/why/goal/target/motivation scoring program based upon geometry.

That's what I have planned for mine. But it ain't easy getting the first NN with a huge capacity!!!
And there aren't many assignments from eye to robo-arm! It's virtually all PARALLEL, except for the final motor simulation.

So I'd like to say I've gotten to the point where symbolic AI is one and the same with pattern-matching AI. It's all based upon assignment, or equals, but it's just more of a bulldozer, and more people do it.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 13, 2018, 08:37:09 pm
Now for actual cracking of ANN.

Ahhh, I think some of my AGI instructions were wrong. What should be happening is: when a sense gets "saved" into the network, and it senses it again later and strengthens it, each time it selects the actions linked to this sense it tweaks those actions for improvement, checking for higher reward. If the reward is not higher, then it should not save the mutated action tweaks. But also, it should delete the ones that have lower ranks, so that there is only 1 motor task in the motor cortex linked to the sense in the sensory cortex, and not duplicates of the same motor task (duplicates with slight variation from tweaks~).

Also, senses link to actions. But what about how we link an image to another image and recall one after recalling the first? Once an image is chosen at the end of the network, it must be linking straight to another end neuron in the last layer, or to some timeline bar made of sequential sets.

And what about how we link an image to a sound (2 different sensory types)? Again, the end neuron in the last layer must either be linking right to the other neuron in the last layer of the other sensory cortex, or linking to it indirectly through the history bar made of sequential sets. (Each set contains the 5 senses sensed and the actions done for each motor.)

?
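The "save the mutated tweak only if reward is higher" loop described above amounts to simple hill climbing. Here is a hedged sketch: the reward function is a made-up stand-in (peaking at an arbitrary target of 0.7), not anything from the post, and the single scalar "action" stands in for a whole motor task:

```python
import random

# Hypothetical reward: best when the action parameter is near 0.7.
def reward(action):
    return -abs(action - 0.7)

random.seed(0)
action = 0.0                                     # the stored motor task
for _ in range(200):
    tweak = action + random.uniform(-0.1, 0.1)   # mutate the action slightly
    if reward(tweak) > reward(action):           # check for higher reward...
        action = tweak                           # ...save it, else discard

print(round(action, 2))  # ends up near 0.7
```

Keeping only the single best variant, as the loop does, also matches the "no duplicates of the same motor task" requirement: lower-ranked mutations are simply never stored.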
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 13, 2018, 10:02:34 pm
You're asking the right questions, as far as I can see. It's pointless talking about it; you have to put it together to learn more. It helps a lot to actually do it and see it in front of your eyes. (A painting is easy to paint in hindsight, or rather, with what you're looking at at the end.)

Also -> I've got many theories speculating around, but it's pointless to continue thinking about them unless I'm resting from actual coding. A big one of mine is "can I collect entities of equal meaning, like words on a page, even though it isn't words".

Another one that might help you specifically, because it's like what you said: you can get a slight variation off old motor records by finding the halfway point of two successful ones; it may be in the centre, doing the action the correct way.

So just trust that your thinking is good, and implement the bastard!
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 13, 2018, 10:13:45 pm
But ranch, take a moment here, settle down. I can tell my hired programmer friend to code stuff. And I have a lot of notes. But I want to know precisely the responses from you as a community as a whole. Pin the answer down. How does the AI field, or how do you, think an image links to a sound, or an image links to an image? Is it how I said? It is so simple to get this right. Doing anything without knowing is ++ harder too. Tell me your answers. In my case, knowledge is all I use atm, and I make sure I do a good job despite no tests.

And besides, I feel an unlocking feeling about ANNs, if I can just know a few simple answers to questions like this one.

I just need to know a few things.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 13, 2018, 11:45:35 pm
Wahahaha, wahahahahaha!! Now that I have a better understanding of ANNs, I am extracting more data about how they work! This is EASY!

Stay tuned, I will probably have my diagram up in a few hours.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 14, 2018, 11:56:50 pm
See attachment in this post.

Alright, so here is how I propose the human brain links an image of an eye (layer 3) to an image of a crowd (layer 5), an image of X to a word of sounds, and senses controlling actions.

As you can see, skip-layer connections are needed, and e.g. the crowd neuron must feed backwards to the eye-neuron concept a few layers down.

The multi-sensory representation concepts are formed as sums in the Frontal Cortex. Master sub-policies.

Then this controls high-level action tasks that feed out of the brain to the body system.



Anyone who wants to understand CNNs, LSTMs/RNNs, and SVMs, these are the best I've seen.

https://www.youtube.com/watch?v=FmpDIaiMIeA

https://www.youtube.com/watch?v=WCUNPb-5EYI

https://www.youtube.com/watch?v=-Z4aojJ-pdg
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 15, 2018, 05:18:53 am
go on then, wheres the big implementation?
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 15, 2018, 05:14:23 pm
No implementation yet. Just analysing the situation right now. Prep stage.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 15, 2018, 08:20:46 pm
I have 2 new questions. See the image attachment in this post. The stick man is made into convolved feature maps: one for vertical lines, one for diagonally-left lines, and one for diagonally-right lines. It's extracting features and looking for matches at a small level.

Now for my questions. As you can see at the bottom left, in question 1: do those feature maps ever get layered on top of each other to, you know, use lines and curves to make noses and eyes, then use eyes and noses to make faces, creating higher-level features as you go up the 6 layers in the mammalian brain? Or does that just give us the same picture that we started with? I.e. in my attachment I show an OR at the bottom left, asking whether it instead uses a fully connected layer with the curve/line features to sum up probability weights for what it is, like out of the 3 options it is an eye or nose or mouth, or say a man or box or fox? If it does that, then that means it never actually builds higher-level features (lines/curves; eyes/nose/mouth; face/body/limbs; human/chair/pillow; dining table with chairs with humans with pillows; that scene with music playing; and even higher). What that would mean is that it just uses the lines and curves, and any higher-level features, like eyes in the next layer and then faces in the layer after that, are really just fully connected layers where the image does not look like a face but is just a number / representation, which is then used in the next layer for e.g. faces.

Question 2 now is, at the bottom right of attachment 1: how does it detect just a nose (or ears, as I drew) by lines and curves, if those line and curve features will light up some of the stickman at the same time?

Another issue I'm having: see attachment 2, explaining CNNs? In this image you can draw a number and the thing detects whether it's a 1 or 8 or 0. 784 pixels get fed into the first layer, and it clearly stated it uses multiple features, yet if that were so then on the far left, at the first layer of 784 pixels, there would be a divergence into 3 separate layers, each having 784 pixels of input. It seems as if it is just 1 feature map summing its own self, rofl! Helppp!
Title: Re: I want to crack Neural Networks
Post by: korrelan on January 15, 2018, 10:15:57 pm
Your first picture shows a convolutional network.  The detected line features are fed through a max pooling network/ layer.  The pooling layer is just a way to keep the values/ vectors within reasonable bounds before being fed into the next layer.  The pooling layer also usually shrinks the detected image features, with no loss of detail.

There are many variations on the CNN schema, some pool features into larger features (eyes, nose, etc). Usually only at the end of the CNN chain of networks/ processes are the detected features fed into a fully connected network (FCN).  The full set of detected line features are never actually recombined into the original image.  The FCN (last stage) is where the machine learns to name an object from the features present in the image; all the stages prior to the FCN are just used to extract recognisable features from the image.

Your second picture is of a standard feed forward classifier, very similar to the FCN mentioned earlier.  This is a totally different method of using a NN to detect features/ numbers: the image matrix of pixels is fed in and the NN learns to classify an output.  This method is not usually as versatile/ accurate as the CNN approach but requires less processing.

Keep in mind that there is no silver bullet solution, there are thousands of different variations on the NN, CNN, RNN, architectures.

ED: OMG close some of those tabs on the browser... my OCD is twitching lol

 :)
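The max-pooling step korrelan mentions can be shown in a few lines. The feature-map values below are invented for illustration; a real network would pool the outputs of its convolution filters:

```python
import numpy as np

# 2x2 max pooling: shrink a feature map, keeping the strongest
# response in each 2x2 block.
def max_pool_2x2(fmap):
    h, w = fmap.shape
    out = np.zeros((h // 2, w // 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = fmap[2*i:2*i+2, 2*j:2*j+2].max()
    return out

fmap = np.array([[1, 3, 0, 2],
                 [4, 2, 1, 0],
                 [0, 1, 5, 6],
                 [2, 3, 7, 1]], dtype=float)

pooled = max_pool_2x2(fmap)
print(pooled)  # [[4. 2.]
               #  [3. 7.]]
```

The pooled map is a quarter the size, which is what keeps the values and the computation "within reasonable bounds" as the features flow into the next layer; strictly speaking some detail is discarded, but the strongest activations survive.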
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 15, 2018, 11:34:23 pm
I actually got my amount of open work-tabs down recently. However my knowledgebase is a little scarier to look at. Good thing I know where everything is.

So IF the filtered feature images DID get layered back as one, it would sort of ruin the whole point of the filters, right? However, it would mean there are actual faces higher up in the network, lol. While if the filtered feature images DON'T get layered back as one, then the purpose of the filters is kept alive, right? However, there are no actual faces in higher layers; rather, they are there but encryption-like, and are detected by the fully-connected layer of weights, with the lines voting in according to how they were trained, right? Also, after the first layer of filters, the lines/curves/etc. features filtered are the only things that, um, have an appearance. I mean, when a nose etc. is detected and then a face is detected, these detected things will never have an appearance, right? They are only an encryption and weight votings, right? Cus I was thinking that the detected lines/curves would become shapes, and then higher, hehe...

If I stare at a human's face, I will detect "face" at the end of the network. But why and how am I able to concentrate on just the nose and see the nose? If the encryption for "nose" ("I see a nose, I see a nose") is in a layer behind the "face" layer, then that means it would need to stop there and output a layer early, right? Also, my concentration gives more score to that area, I guess.

Is korrelan working hard on deep sensory cortices because that will pave the way for the rest? Like the deep motor cortex?

Btw, it's bugging me, so I wanted to make it clear: korrelan, I know you are the father of wisdom with ANNs / machine learning.

In the brain there are sensory cortices and the motor cortex. Why does machine learning have no motor cortex ANN algorithms????? For example, take CNNs, or logistic regression, or HHMMs; they are not motor cortices in the sense of the human brain's motor cortex. Why does it seem machine learning is focusing on "senses" but not "motor actions"?!?!? I know I've seen machine learning projects have spiders learn to crawl, BUT they never mention the motor cortices, only the e.g. CNN. Half the story is missing. Omg guys.
Title: Re: I want to crack Neural Networks
Post by: keghn on January 19, 2018, 04:29:45 pm

Neural Networking: Robots Learning From Video: 
https://hackaday.com/2018/01/18/neural-networking-robots-learning-from-video/ 

Title: Re: I want to crack Neural Networks
Post by: WriterOfMinds on January 19, 2018, 06:45:56 pm
I wonder if we don't have separate detection networks for faces and for individual parts of faces. I'll illustrate why I think so.

Let's say there are two dots on a piece of paper. When you look at them, all your brain probably registers is "two dots." But if you add a curved line underneath the dots, suddenly you see a (highly simplified) face. Nothing about the dots themselves tells you that they are eyes, but once you see the face, you can infer that the dots are eyes. So the detection of eyes, as such, cannot be a prerequisite for detection of a face. It's possible to detect an eye that isn't part of a face, but in that case you need more detail -- an eye-shaped outline, an iris, a pupil. So I think that "I see an eye" is not necessarily a previous layer of "I see a face." It could be its own separate thing.
Title: Re: I want to crack Neural Networks
Post by: keghn on January 19, 2018, 07:10:20 pm
 There is a law of pattern recognition that I have not found, or that I will have to make, to explain such things. Like having
an object such as a dot that is only 10 percent an eye, and a line that has only 10 percent of the features of a mouth. But when placed
in the right orientation to one another, it is 99 percent a face.

Title: Re: I want to crack Neural Networks
Post by: ivan.moony on January 19, 2018, 07:12:33 pm
Of course, correlation between recognizable objects is also important for the overall recognition process. Recognizing a whole face takes n specific parameters, while each of those parameters, taken as a single unit, grows the possible recognition set. For example, when you see the whole face, a dot can stand only for an eye, while taken in isolation, a dot can stand for an eye, a teardrop, a mole, or a star. The bigger the possible matching set is, the harder it is, I think, to actually tell what the thing is. And the sets shrink in correlation with other parameters.
Title: Re: I want to crack Neural Networks
Post by: keghn on January 19, 2018, 11:09:05 pm
 I'm going with an object-outline regression algorithm to change the face objects to a lower dimension. Then, with the lower-dimension
objects, pair them with a physical-distance weight and a morph weight. Then this will work. NNs do this well. And
also with a secret analog "OR" operation.

 
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 20, 2018, 01:44:22 am
WriterOfMinds....um.............If you take a blank sheet of paper and a pencil, and draw 2 small circles and a wide curve, you can see a face, and, if you look at it differently (without moving your head or the paper), you can see a wide curve.

SO...
This means our layers must be like this!:

input layer
hidden layer
hidden layer
curve detecting layer --- I can be an output layer too
face detecting layer --- I am the last/output layer, face detected
output layer

See attachment too.
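The layer list above, with an intermediate layer that "can be an output layer too", resembles a network with an auxiliary readout. Here is a rough sketch; all weights are random placeholders and the layer names ("curve", "face") are just labels for the idea, not trained detectors:

```python
import numpy as np

# One forward pass where activations are read out from BOTH an
# intermediate ("curve") layer and the final ("face") layer.
rng = np.random.default_rng(1)
x = rng.normal(size=8)               # input activations

W1 = rng.normal(size=(6, 8))         # input -> hidden
W2 = rng.normal(size=(4, 6))         # hidden -> "curve detecting" layer
W3 = rng.normal(size=(2, 4))         # "curve" -> "face detecting" layer

h1 = np.tanh(W1 @ x)
curves = np.tanh(W2 @ h1)            # readable directly: "I see a curve"
face = np.tanh(W3 @ curves)          # or passed on: "I see a face"

print(curves.shape, face.shape)  # (4,) (2,)
```

Nothing stops the same activations from serving both roles: the "curve" vector is simultaneously an output in its own right and the input to the "face" layer, which matches the draw-a-face-or-see-a-curve observation.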

Hey wait, aren't there columns in the brain like in my picture? Maybe bi-directional?
https://www.google.ca/search?q=neural+columns&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiGvZPGt-XYAhUW82MKHcv3DNcQ_AUICigB&biw=1280&bih=879#imgrc=8PGBppipVcy9MM: