Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: LOCKSUIT on January 10, 2018, 01:52:17 pm

Title: I want to crack Neural Networks
Post by: LOCKSUIT on January 10, 2018, 01:52:17 pm
Hi friends. I'm doing a tad better.

I find it fascinating that one piece of information (shared features) makes neural networks make so much more sense to me. Now I get why the Wikipedia article I read many months back said the brain processes higher concepts as you go up the layers. I get the shared-features part now, and why networks can detect small features and whole features and, in the end, e.g. a thousand images. I've also learned that language works the same way as, say, a CNN: a, b, c are used a lot, then words light up less often, then word pairs less still, and so on higher up, until bigger topics are recognized consciously rather than subconsciously. FURTHER, the frontal cortex is built like this but allows higher concepts: if you see this image and this image and this sentence and do these actions, then one of the output neurons lights up. So am I right to say that all neural networks are hierarchical and work by "shared features"? Tell me more things as important as "shared features".

Also after the above question, I want to draw out what a neural network looks like visually, to understand how it learns to sense, act, and reward those actions.
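As a minimal sketch of the "shared features" idea (the weights and thresholds below are made up for illustration, not from any real trained network), lower-level detectors can be reused by several higher concepts:

Code:
def detect(features, weights, threshold=1.0):
    # a unit "fires" when the weighted sum of its inputs crosses a threshold
    return sum(f * w for f, w in zip(features, weights)) >= threshold

edges = [1.0, 0.8, 0.2]                    # layer 1: edge/curve detectors
eye = detect(edges, [0.9, 0.6, 0.0])       # layer 2: parts reuse (share) the edges
nose = detect(edges, [0.3, 0.8, 0.9])
face = detect([eye, nose], [0.7, 0.7])     # layer 3: the whole is built from shared parts
print(eye, nose, face)                     # True True True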
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 10, 2018, 05:10:19 pm
I haven't got mine working yet, but you have to build your way up to the big concepts. You can't detect it all at once, considering you're starting from an RGB map (the camera).
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 10, 2018, 05:23:57 pm
Lol are you talking about my studying or the AI's studying?

To be clearer, you only recognize a face if you recognize a nose, mouth, etc., first. It's a hierarchy, using Shared Features. Are most Artificial Neural Networks based on a Shared Features connectome? Vision, Language, and the Frontal Cortex seem to all be one.
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on January 10, 2018, 05:30:48 pm
They say that they are not certain how a neural network really does the big picture. They glued together small parts, combined a lot of them, and when they ran the algorithms, it worked, somehow. Maybe I missed something...
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 10, 2018, 07:05:38 pm
Oh, you mean how hierarchical backprop nets work. They make groups of pixels, then groups of those groups; the groups can overlap or not overlap. It's a brain teaser to think about.

Just imagine a really advanced one that groups pictures of the actual thing with the word for the thing, or the sign language for the thing, in different sub-categories, all unsupervised. It gets pretty complex.

After you've done this job, you then use it as a base for the robot to "see the world through", and then you generate movements off it for its motors.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 10, 2018, 07:47:28 pm
Finally I feel like I'm on topic with you guys....................

Yes yes! So Deep Learning NN...great.....it recognizes. But its outputs must now go to a motor DL NN. There's a lot more of the bigger picture missing. Like an attention module. A history bar over-lording all the hierarchies as a huge sequential parallel hierarchy and frontal cortex all at once. Reward module. Artificial Reward creation. My mind is exploding full of ideas.

Now I want to show yous an image and I want to know how it is rewarding its actions. Image attached.
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 11, 2018, 09:51:18 am
That's the idea. Korrelan has a neural network that could possibly produce a blob map of identifications, so you could be working with something like that as the input into the motor generation system. My model reduces it to so many points of interest.

Also, I have never seen a robot that actually saw its environment in this detail, so it's *new* if you get it to work!
Title: Re: I want to crack Neural Networks
Post by: keghn on January 11, 2018, 02:05:40 pm

The 8 Neural Network Architectures Machine Learning Researchers Need to Learn: 
https://towardsdatascience.com/the-8-neural-network-architectures-machine-learning-researchers-need-to-learn-11a0c96d6073 

Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 11, 2018, 06:14:05 pm
Thanks keghn that clarified some things.

So LSTMs + GANs + Hopfield NNs are the current furthest thing we have, since they combine FNNs, RNNs, and HNNs. Right?



"To be clearer, you only recognize a face if you recognize a nose, mouth, etc, first. It's a hierarchy, using Shared Features. Are most Artificial Neural Networks based on a Shared Features Connectom? Vision, Language, and the Frontal Cortex seem to all be one."

Still don't have an actual answer for this question.

Do all Artificial Neural Networks use hierarchies and build higher concepts made of smaller ones? Or do some NNs revolve around other things and don't even use concept building? If so, please tell me what that is. Because I find "using shared features to let smaller concepts form higher concepts" really important. So, I want to know more things this important. E.g. pooling - but I don't think pooling is THAT important for getting the WHOLE picture, if you get what I mean.
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 11, 2018, 07:10:28 pm
You're building bigger groups from smaller groups.

If you have object A and object B, they can become a single object due to a semantic measurement. A face would be eyes and a nose, as in "contains".

But it gets complicated.

A hamburger would be the same thing as a chainsaw if the semantic were "they both destroy the environment" - as in cows farting into the atmosphere, which is what some stupid hippies think. My point is, there are a lot of relations you can draw to make a container, due to a context or measurement.


So if the computer needed something that destroyed the environment (although a lot more would be taken into account), it has the knowledge in its database to do it, even if it's just a score being shared around containers in its GA-type randomization system.
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on January 11, 2018, 07:46:47 pm
An object is a set of properties by which the object is recognized. Like in object-oriented programming (OOP (https://en.wikipedia.org/wiki/Object-oriented_programming)), property membership can be inherited across different classes. Classes, besides inherited properties, can add their own specific properties. "Cow" inherits all the properties from "living being", which inherits all the properties from "entity", and so on. Two classes (and so the objects that are members of those classes) can share the same property inherited from a super-class (like "living being"), although they are different classes (like "cow" or "giraffe"). "Living being" may have a property "farts", while "cow" and "giraffe", which inherit "farts", may add their own properties like "produces tasty milk" or "has problems drinking water". And the pyramid branches downwards by growing the membership of (inherited) properties. Going up through the pyramid is called generalization, and going down is called specialization.
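A tiny illustration of that pyramid (in Python rather than OCaml; the property names are just the ones from the paragraph above):

Code:
class Entity:
    properties = {"exists"}

class LivingBeing(Entity):
    properties = Entity.properties | {"farts"}                 # inherits and adds

class Cow(LivingBeing):
    properties = LivingBeing.properties | {"produces tasty milk"}

class Giraffe(LivingBeing):
    properties = LivingBeing.properties | {"has problems drinking water"}

# the shared property comes from the common super-class (generalization)
print(Cow.properties & Giraffe.properties)    # {'exists', 'farts'}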

Object-oriented programming can form nice knowledge structures, especially when paired with declarative programming (which is still not a common case, but OCaml (https://ocaml.org/) is an exception).

[Edit] Building pyramidal knowledge bases is still in the scientific discovery stage, IMHO (see the semantic web (https://www.w3.org/2001/sw/wiki/Main_Page) and OWL (https://www.w3.org/2001/sw/wiki/OWL)).
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 11, 2018, 08:22:02 pm
But do all neural nets use "Features"? I.e. nose and eye Features create a face Feature. And the connections all have different weight probabilities, right?
Title: Re: I want to crack Neural Networks
Post by: keghn on January 11, 2018, 08:42:20 pm
But what *is* a Neural Network?:

https://www.youtube.com/watch?time_continue=1&v=aircAruvnKk
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 12, 2018, 12:18:03 am
That didn't really help keghn lol......I already got a ton of that actually. Oh I see it has more parts. Ok so it's quite visually useful and good quality stuff.

But I mean, the question below is still unanswered:

"But do all neural nets use "Features"? I.e. nose and eye Features create face Feature. And connections all have different weight probabilities. ?"

I get that there's more to NNs....but....I want to know if they ALL use feature sharing to build deeper higher concepts. I would think not all NNs use it. But, maybe they all do.
Title: Re: I want to crack Neural Networks
Post by: keghn on January 12, 2018, 06:55:25 pm

Spiking Neural Networks, the Next Generation of Machine Learning: 

https://towardsdatascience.com/spiking-neural-networks-the-next-generation-of-machine-learning-84e167f4eb2b 

Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 12, 2018, 11:13:17 pm
I find it strange how neural networks are what the machine learning field is basically all about, yet......it's just a simple drawing of some lines man......yes I know it has a use and they have neurons and weights and biases and sigmoid functions but...they are too simple....something's weird here........................and I also find it a bit disturbing how, when reading an AI book, I have some knowledge of the field, and I'm not hating....but...it's still all like lol "Then these class representation distributions are fed into the linear regression classifier for backprop and made into vectors with biases and train each function of the network to discriminate the key features of the class features into small units quantinized after iterations of the feikstein mannboltz algorithm before sending it through an autodecoder for optimization of its sequences."
Title: Re: I want to crack Neural Networks
Post by: korrelan on January 13, 2018, 11:19:23 am
Hi Lock… I’m glad you are feeling better.

Whilst you do seem to be getting a better grasp of how NNs work, there still seems to be some confusion.  The NN in the diagram is a standard feed-forward NN with an input layer, five hidden layers and an output layer… the labels/arrows are incorrect.  This type of NN would usually be used as a feed-forward classifier and would not use a reward-based learning schema; it would probably use back propagation.

Quote
But do all neural nets use "Features"?

If you class a pattern of weight vectors as a feature then yes, they do, though normally the term 'feature' is used with convolutional networks (CNNs) and image processing.

The overall schema of a classical NN can be thought of as a 'bottleneck' function.  Vector weights enter on one side of the NN and are 'diverted/focused/funnelled' towards one or more of the outputs.

The input data has to be converted into a format that is compatible with the NN, so with a CNN for example a set of convolution filters is applied to the input image, each filter highlighting a different aspect of the image.  A set of four filters could highlight horizontal, vertical and diagonal lines in an image, for example.  Each filter converts the RGB image to a monochrome version that can be fed into a CNN; this is done for all four filters… the results are then usually fed into a pooling NN. Usually several layers of CNN are applied, each layer detecting larger, more complex 'features' in the original image.
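A minimal sketch of that filtering step (the kernel values are illustrative only, not from any real trained CNN):

Code:
import numpy as np

# each 3x3 kernel highlights one line orientation
kernels = {
    "vertical": np.array([[-1, 2, -1]] * 3),
    "horizontal": np.array([[-1, 2, -1]] * 3).T,
}

def convolve(image, kernel):
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(image[y:y + 3, x:x + 3] * kernel)  # slide the filter
    return out

image = np.zeros((8, 8))
image[:, 4] = 1.0                                    # a monochrome vertical line
maps = {name: convolve(image, k) for name, k in kernels.items()}
print(maps["vertical"].max(), maps["horizontal"].max())  # the vertical filter responds strongest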

The classical NN is a basic function that can learn to classify weight vectors; the difficult bit is finding ways to leverage the power of the NN against your data set.

 :)
Title: Re: I want to crack Neural Networks
Post by: keghn on January 13, 2018, 02:11:37 pm
 The Feynman explanation of NN:

 Input data is transformed into output data.
 Each transform has to be trained into the NN.

Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 13, 2018, 05:20:32 pm
I studied ANNs all over a while back, a lot, and now I basically get how they all work.

I wonder why none of them have a sequential history bar / timeline. And what about self-ignitions - a sudden thought popping into your head?
Title: Re: I want to crack Neural Networks
Post by: keghn on January 13, 2018, 06:54:10 pm
 To really get a feel for the dynamics of a NN, one should look at the types of input-to-output mappings a NN can do.
 You could have a NN trained to do these things:
 Input a 4 and output a 3, and from the same NN input a 5 and it outputs a 7. Or input a 9 and get out a 9.

 Or, in a different NN, input a picture of a cat and get out the same cat.
 Or input the letters "cat" and get out a picture of a cat. Or even input a number and get out a cat.
 Or input a picture of the door to your house and get out a picture of the nightstand where the keys to the door are.
 Or, like in algebra, you can put in "x" and out comes the number six; x is equal to six.
 Or a cat walks behind a big bush. So the bush is equal to the cat?
 Or a country on a globe rotates behind the globe, so the globe is also equal to that country?
 NNs work like a one-directional linked-list algorithm.

 For sequential stuff, the output of the NN is fed back into the input of the NN. Also known as an RNN.
 Or you could have a bunch of NNs in a row and have feedback somewhere within?

 For me, doing video recording with a NN means using a frame number of a video, feeding it into a NN, and getting out that frame :)
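A minimal sketch of that feedback loop - the output state is fed back in with the next input (the weights here are made up for illustration):

Code:
import math

w_in, w_back = 0.8, 0.5          # made-up weights
state = 0.0
for x in [1.0, 0.0, 1.0, 1.0]:   # a short input sequence
    # the new state depends on the current input AND the previous state
    state = math.tanh(w_in * x + w_back * state)
    print(round(state, 3))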

Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 13, 2018, 07:13:28 pm
Or output actions from sentences or somatosensory/visual sensors.
Or DNA/RNA data sequences.
Or creature generations.
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 13, 2018, 08:29:09 pm
what the future's made of ->

 A   input RGB, output corner class. (=)
 B   input corner class, output corner descriptor. (=)
 C   input line density, output 3D depth. (=)   <- this has to be trained with a triangulator.
 D   input corner collection key, output entity descriptor. (=)
(then in extreme parallel and looped many times:)
  E  input entity descriptors, output motor posture. (=)
  F  input motor posture, output entity descriptors. (=)
   -> run a rule/reason/why/goal/target/motivation scoring program based upon geometry.

That's what I have planned for mine. But it ain't easy getting the first NN with a huge capacity!!!
And there aren't many assignments from eye to robo-arm! It's virtually all PARALLEL, except for the final motor simulation.

So I'd like to say I've gotten to the point where symbolic AI is one and the same with pattern-matching AI; it's all based upon assignment or equals, but it's just more of a bulldozer - and more people do it.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 13, 2018, 08:37:09 pm
Now for the actual cracking of ANNs.

Ahhh, I think some of my AGI instructions were wrong. What should be happening is: when a sense gets "saved" into the network, and it senses it again later and strengthens it, each time it selects the actions linked to this sense it tweaks those actions for improvement, checking for higher reward. If the reward is not higher, then it should not save the mutated action tweaks. It should also delete the ones that have lower ranks, so that there is only 1 motor task in the motor cortex linked to the sense in the sensory cortex, and not duplicates of the same motor task (duplicates with slight variation from tweaks~).
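A minimal sketch of that accept-the-tweak-only-if-reward-improves rule (the reward function and numbers are made up for illustration):

Code:
import random

def reward(action):
    # made-up reward: pretend 0.5 on every joint is the ideal action
    return -sum((a - 0.5) ** 2 for a in action)

best_action = [0.1, 0.9, 0.3]          # the action currently linked to the sense
best_reward = reward(best_action)
for _ in range(100):
    tweaked = [a + random.gauss(0, 0.05) for a in best_action]
    if reward(tweaked) > best_reward:  # otherwise the mutated tweak is discarded
        best_action, best_reward = tweaked, reward(tweaked)
print(best_action, best_reward)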

Also, senses link to actions. But what about how we link an image to another image and recall one after recalling the first? Once an image is chosen at the end of the network, it must be linking straight to another end neuron in the last layer, or to some timeline bar made of sequential sets.

And what about how we link an image to a sound (2 different sensory types)? Again, the end neuron in the last layer must be either linking right to the other neuron in the last layer of the other sensory cortex, or linking to it indirectly through the history bar made of sequential sets. (Each set contains the 5 senses sensed and the actions done for each motor.)

?
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 13, 2018, 10:02:34 pm
You're asking the right questions, as far as I can see. It's pointless talking about it; you have to put it together to learn more. It helps a lot to actually do it and see it in front of your eyes. (A painting is easy to paint in hindsight, or rather, from what you're looking at at the end.)

Too -> I've got many theories speculating around, but it's pointless to continue thinking about them unless I'm resting from actual coding. A big one of mine is "can I collect entities of equal meaning like words on a page, even though it isn't words".

Another one that might help you specifically, because it's like what you said - you can get a slight variation off old motor records by finding the halfway point of two successful ones; the centre between them may be doing the action the correct way.
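A minimal sketch of that halfway-point idea (the joint values are hypothetical):

Code:
record_a = [0.20, 0.90, 0.40]    # joint values from one successful attempt
record_b = [0.30, 0.70, 0.50]    # joint values from another successful attempt
midpoint = [(a + b) / 2 for a, b in zip(record_a, record_b)]
print(midpoint)                  # try this as the next candidate variation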

So just trust that your thinking is good,  and implement the bastard!
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 13, 2018, 10:13:45 pm
But ranch, take a moment here, settle down. I can tell my hired programmer friend to code stuff. And I have a lot of notes. But I want to know precisely the responses from yous as a society as a whole. Pin the answer down. How do you / the AI field think an image links to a sound, or an image links to an image? Is it how I said? It is so simple to get this right, and doing anything without knowing is much harder. Tell me your answers. In my case knowledge is all I use atm, and I make sure I do a good job despite no tests.

And besides, I feel an unlocking feeling about ANNs, if I can just know a few simple answers to questions like this one.

I just need to know a few things.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 13, 2018, 11:45:35 pm
Wahahaha, wahahahahaha!! Now that I have a better understanding of ANNs, I am extracting more data about how they work! This is EASY!

Stay tuned, I will probably have my diagram up in a few hours.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 14, 2018, 11:56:50 pm
See attachment in this post.

Alright, so here is how I propose the human brain links an image of an eye (layer 3) to an image of a crowd (layer 5), an image of X to a word made of sounds, and senses to the actions they control.

As you can see, skip-layer connections are needed, and e.g. the crowd neuron must feed backwards to the eye-neuron concept a few layers down.

The multi-sensory representation concepts are formed as sums in the frontal cortex. Master sub-policies.

Then this controls high-level action tasks that feed out of the brain to the body system.



For anyone who wants to understand CNNs, LSTMs/RNNs, and SVMs, these are the best I've seen.

https://www.youtube.com/watch?v=FmpDIaiMIeA

https://www.youtube.com/watch?v=WCUNPb-5EYI

https://www.youtube.com/watch?v=-Z4aojJ-pdg
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 15, 2018, 05:18:53 am
Go on then, where's the big implementation?
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 15, 2018, 05:14:23 pm
No implementation yet. Just analysing the situation right now. Prep stage.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 15, 2018, 08:20:46 pm
I have 2 new questions. See the image attachment in this post. So the stick man is made into convolved feature maps: one for vertical lines, one for diagonal-left lines, and one for diagonal-right lines. It's extracting features and looking for matches at a small level. BUT, now for my questions. As you can see at the bottom left, in question 1: do those feature maps ever get layered on top of each other to, you know, use lines and curves to make noses and eyes, then use eyes and noses to make faces, creating higher-level features as you go up the 6 layers in the mammalian brain? Or does that just give us the same picture we started with? I.e. in my attachment I show an OR in the bottom left, asking whether it instead uses a fully connected layer with the curve/line features to sum up probability weights for what it is - like out of 3 options, it is an eye or nose or mouth, or say a man or box or fox? If it does that, then that means it never actually builds higher-level features, i.e. lines/curves, eyes/nose/mouth, face/body/limbs, human/chair/pillow, dining table with chairs with humans with pillows, that scene with music playing, and even higher. What that means is that it just uses the lines and curves, and any higher-level features like eyes in the next layer and then faces in the layer after that are really just fully connected layers where the image does not look like a face but is just a number/representation, which is then used in the next layer for e.g. faces.
Question 2, at the bottom right of attachment 1: how does it detect just a nose (or ear/s as I drew) by lines and curves, if those line and curve features will light up some of the stickman at the same time?

Another issue I'm having: see attachment 2 explaining CNNs? In that image you can draw a number and the thing detects whether it's a 1 or 8 or 0. 784 pixels get fed into the first layer, and it clearly stated it uses multiple features, yet if that were so then on the far left, at the first layer of 784 pixels, there would be a divergence into 3 separate maps each taking 784 pixels as input. It seems as if it is just 1 feature map summing itself rofl! Helppp!
Title: Re: I want to crack Neural Networks
Post by: korrelan on January 15, 2018, 10:15:57 pm
Your first picture shows a convolutional network.  The detected line features are fed through a max pooling network/ layer.  The pooling layer is just a way to keep the values/ vectors within reasonable bounds before being fed into the next layer.  The pooling layer also usually shrinks the detected image features, with no loss of detail.
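A minimal sketch of 2x2 max pooling on a toy 4x4 feature map (values are illustrative):

Code:
import numpy as np

feature_map = np.array([[0, 1, 0, 2],
                        [3, 0, 1, 0],
                        [0, 0, 4, 1],
                        [1, 2, 0, 0]], dtype=float)
# take the maximum of each 2x2 patch: the map shrinks, the strongest responses remain
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)                    # [[3. 2.] [2. 4.]]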

There are many variations on the CNN schema, some pool features into larger features (eyes, nose, etc). Usually only at the end of the CNN chain of networks/ processes are the detected features fed into a fully connected network (FCN).  The full set of detected line features are never actually recombined into the original image.  The FCN (last stage) is where the machine learns to name an object from the features present in the image; all the stages prior to the FCN are just used to extract recognisable features from the image.

Your second picture is of a standard feed-forward classifier, very similar to the FCN mentioned earlier.  This is a totally different method of using a NN to detect features/numbers: the image matrix of pixels is fed in and the NN learns to classify an output.  This method is not usually as versatile/accurate as the CNN approach but requires less processing.

Keep in mind that there is no silver bullet solution, there are thousands of different variations on the NN, CNN, RNN, architectures.

ED: OMG close some of those tabs on the browser... my OCD is twitching lol

 :)
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 15, 2018, 11:34:23 pm
I actually got my amount of open work-tabs down recently. However my knowledgebase is a little scarier to look at. Good thing I know where everything is.

So IF the filtered feature images DID get layered back as one, it would sort of ruin the whole point of the filters, right? However it would mean there are actual faces higher up in the network, lol. While if the filtered feature images DON'T get layered back as one, then the purpose of the filters is kept alive, right? However there are no actual faces in higher layers; rather they are there but encryption-like, detected by the fully connected layer of weights, with the lines voting in however they were trained, right? Also, after the first layer of filters, the line/curve/etc. features filtered are the only things that, um, have an appearance. I mean, when a nose etc. is detected and then a face is detected, these detected things will never have an appearance, right? They are only an encryption and weight votings, right? Because I was thinking that the detected lines/curves would become shapes, and then higher, hehe...

If I stare at a human's face, I will detect "face" at the end of the network. But why and how am I able to concentrate on just the nose and see the nose? If the encryption for "nose" ("I see a nose, I see a nose") is in a layer behind the "face" layer, then that means it would need to stop there and output a layer early, right? Also my concentration gives more score to that area, I guess.

Is korrelan working hard on Deep Sensory Cortices because that will pave the way for the rest? Like The Deep Motor Cortex?

Btw, it's been bugging me, so I wanted to make it clear: I know, korrelan, you are the father of wisdom with ANNs / machine learning.

In the brain there are sensory cortices and the motor cortex. Why does machine learning have no motor cortex ANN algorithms????? For example, take CNNs, or logistic regression, or HHMMs; they are not motor cortices in the sense of the human brain's motor cortex. Why does it seem machine learning is focusing on "senses" but not "motor actions"?!?!? I know I've seen machine learning projects where spiders learn to crawl, BUT they never mention the motor cortices, only e.g. the CNN. Half the story is missing. Omg guys.
Title: Re: I want to crack Neural Networks
Post by: keghn on January 19, 2018, 04:29:45 pm

Neural Networking: Robots Learning From Video: 
https://hackaday.com/2018/01/18/neural-networking-robots-learning-from-video/ 

Title: Re: I want to crack Neural Networks
Post by: WriterOfMinds on January 19, 2018, 06:45:56 pm
I wonder if we don't have separate detection networks for faces and for individual parts of faces. I'll illustrate why I think so.

Let's say there are two dots on a piece of paper. When you look at them, all your brain probably registers is "two dots." But if you add a curved line underneath the dots, suddenly you see a (highly simplified) face. Nothing about the dots themselves tells you that they are eyes, but once you see the face, you can infer that the dots are eyes. So the detection of eyes, as such, cannot be a prerequisite for detection of a face. It's possible to detect an eye that isn't part of a face, but in that case you need more detail -- an eye-shaped outline, an iris, a pupil. So I think that "I see an eye" is not necessarily a previous layer of "I see a face." It could be its own separate thing.
Title: Re: I want to crack Neural Networks
Post by: keghn on January 19, 2018, 07:10:20 pm
 There is a law of pattern recognition that I have not found, or that I will have to make, to explain such things. Like having an object such as a dot that is only 10 percent an eye, and a line that has only 10 percent of the features of a mouth, but when placed in the right orientation to one another it is 99 percent a face.

Title: Re: I want to crack Neural Networks
Post by: ivan.moony on January 19, 2018, 07:12:33 pm
Of course, correlation between recognizable objects is also important for the overall recognition process. Recognizing a whole face takes n specific parameters, while each of those parameters, taken as a single unit, grows the possible recognition set. For example, when you see the whole face, a dot can stand only for an eye, while taken in isolation a dot can stand for an eye, a teardrop, a mole, or a star. The bigger the possible matching set is, the harder it is, I think, to actually tell what the thing is. And the sets shrink in correlation with the other parameters.
Title: Re: I want to crack Neural Networks
Post by: keghn on January 19, 2018, 11:09:05 pm
 I'm going with an object-outline regression algorithm to change the face objects to a lower dimension. Then, with the lower-dimension objects, pair them with a physical distance weight and a morph weight. Then this will work. NNs do this well. And also with a secret analog "OR" operation.

 
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 20, 2018, 01:44:22 am
WriterOfMinds....um.............If you take a blank sheet of paper and a pencil, and draw 2 small circles, and a wide curve, you can see a face, and, if you look differently at it (without moving your head or paper) you can see a wide curve.

SO...
This means our layers must be like this!:

input layer
hidden layer
hidden layer
curve detecting layer --- I can be an output layer too
face detecting layer --- I am the last/output layer, face detected
output layer

See attachment too.

Hey wait, aren't there columns like my picture in the brain? Maybe bi-directional?
https://www.google.ca/search?q=neural+columns&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiGvZPGt-XYAhUW82MKHcv3DNcQ_AUICigB&biw=1280&bih=879#imgrc=8PGBppipVcy9MM:
Title: Re: I want to crack Neural Networks
Post by: keghn on January 25, 2018, 11:30:15 pm

Convolutional Neural Networks Learn Class Hierarchy?: 


https://vimeo.com/228263798 

Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 26, 2018, 03:17:02 am
Wicked. Even better CNNs. However I want to make it clear that my goal is to understand the whole human brain from the top level, slowly going top-down.
Title: Re: I want to crack Neural Networks
Post by: keghn on January 27, 2018, 03:53:46 pm

Autoencoder Explained: 

https://www.youtube.com/watch?v=H1AllrJ-_30   
Title: Re: I want to crack Neural Networks
Post by: keghn on January 28, 2018, 02:49:22 pm

What is wrong with Convolutional neural networks ?: 

https://towardsdatascience.com/what-is-wrong-with-convolutional-neural-networks-75c2ba8fbd6f
Title: Re: I want to crack Neural Networks
Post by: Art on January 28, 2018, 05:27:49 pm
Wicked. Even better CNNs. However I want to make it clear that my goal is to understand the whole human brain from the top level, slowly going top-down.

In that case, I hope you have a very long lifespan as people have been trying to understand the human brain for an extremely long period of time and are still only scratching the surface. Good luck!

To quote Mr. Emerson M. Pugh - "If the Human Brain Were So Simple That We Could Understand It, We Would Be So Simple That We Couldn’t."
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on January 28, 2018, 05:30:13 pm
As far as we "true lifers" go, we can say ANYTHING is ANYTHING; you can look at a dog as if it's a man. We are beyond AI systems; we are true intelligence as far as we know... maybe God knows things that we can't conceive.

But Onto this feature hierarchy thing ->

Say I had every possible photo of a nose, and I had every possible photo of an eye. The individual photos are *ANDed* as a single photo, but to make the class they are *ORed*: any one photo of a nose then activates the class "nose". Having it *OR* the photos offers reusable compression. (Layers of features is very similar to what I'm saying.) Because they all arrive at the same group in the end, I can share bits and pieces that *OR-simplify* together, which is similar to having features (or levels in an NN), except I'm taking it from a *boolean logic simplification* standpoint.
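A minimal sketch of that ORing of stored photos into a class (toy 3-pixel "photos", made up for illustration):

Code:
nose_templates = [(1, 0, 1), (1, 1, 0)]              # "every possible nose photo" (toy)

def is_nose(photo):
    return any(photo == t for t in nose_templates)   # the stored photos are ORed

print(is_nose((1, 1, 0)), is_nose((0, 0, 1)))        # True False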

Locksuit, there's motor stuff around; korrelan has a video on it, and there's plenty on the internet. What's different and new would be someone taking one of these vision systems and having it as a world for the motor cortex to explore. (It's combining them that's the exciting thing on its way...)

Then we'll see some pretty hectic stuff come out.  >XD


[EDIT]
There's a thing called geometric invariance, where just one 1024-bit or so key will automatically respond to a huge camera difference around the object. And that saves RAM hugely over just a photo orrery by itself.

Look up BRISK, ORB, SIFT or SURF descriptors; they are pretty amazing and useful.
[/EDIT]
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on January 31, 2018, 02:46:23 am
https://www.youtube.com/watch?v=Blegbge7ri8

https://www.youtube.com/watch?v=3JQ3hYko51Y

This guy's clearly doing a korrelan "thing" here.
https://www.youtube.com/watch?v=T2aZAWXyw6c

OMG OMG OM G OMG OMG look!
https://www.youtube.com/watch?v=CQXxzQDjXNc

moreee neurons
https://www.youtube.com/watch?v=u28ijlP6L6M

MOREEEE
https://www.youtube.com/watch?v=PM_gTOm9fgk

Even more neuronssssssss!!
https://www.youtube.com/watch?v=B8-H4lRdGHI&t=16s

omg so good...i liked the part around 5:00
https://www.youtube.com/watch?v=LS3wMC2BpxU
"I think, therefore I am"

also new diagram attached....I'm gonna make a better one soon hehe
Title: Re: I want to crack Neural Networks
Post by: keghn on January 31, 2018, 03:45:43 pm
Neurons Use Virus-Like Proteins to Transmit Information: 

https://www.the-scientist.com/?articles.view/articleNo/51342/title/Neurons-Use-Virus-Like-Proteins-to-Transmit-Information/ 


Brain Cells Share Information With Virus-Like Capsules: 

https://unews.utah.edu/surprise-a-virus-like-protein-is-important-for-cognition-and-memory/


 Viruses are half dead and half alive - the connection between the living world and the non-living.
 
 A person can grow crystals with non living chemicals.
Grow Transparent Single Crystals of Alum salt at Home!:
https://www.youtube.com/watch?v=eIAkWaQi0AE 

 Viruses can be used to make crystals: 
https://www.google.com/search?q=virus+crystal&client=ubuntu&hs=P7w&tbm=isch&tbo=u&source=univ&sa=X&ved=0ahUKEwjdpof8woLZAhUN9mMKHbMFAt4QsAQIRQ#imgrc=9NZodaWGPo8GqM:

Glowing Crystal Has the Quantum Internet Within Reach: 
https://www.inverse.com/article/36317-quantum-internet-erbium-crystal 







 Maybe heaven is not so far away?:
https://www.youtube.com/watch?v=40V9_1PMUGM


Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on February 01, 2018, 07:19:43 am
Brain simulation engine:
http://www.digicortex.net/node/2

Is this good korrelan?

It ran this:
https://www.youtube.com/watch?time_continue=1&v=B8-H4lRdGHI
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on February 01, 2018, 02:41:41 pm
4 billion synapses is pretty good. Remember most people's computers are only 5 gigahertz or so, so it must be on a GPU, I reckon. That would have been hard to get going that fast.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on February 06, 2018, 05:39:36 am
It's nice to see a woman who is a computer scientist.
https://www.coursera.org/learn/intro-to-deep-learning/lecture/WpduX/modern-rnns-lstm-and-gru
Title: Re: I want to crack Neural Networks
Post by: keghn on February 08, 2018, 03:36:41 pm

Machine Learning & Neuroscience: 

https://www.youtube.com/watch?v=e_BOJS1BLj8
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on February 10, 2018, 07:35:47 am
Really good information.

Would yous say that one of the greatest things AIs will do is discover new knowledge? Is that actually the underpinning goal of basically everything? After they discover e.g. how to implement nanobots, then they implement them correctly using the discovered nano-actuator arms.

Because...I believe I may have just cracked the secret of the universe and why things roll out the way they do, from the cell, to our brainstorming experimentation, to physical supporting evidence.
Title: Re: I want to crack Neural Networks
Post by: keghn on February 10, 2018, 02:54:40 pm
 Truth seekers will find Siraj Raval speaks the truth. He follows us here, and we follow him. We are in the same intellectual circle.




Title: Re: I want to crack Neural Networks
Post by: ranch vermin on February 11, 2018, 09:30:19 am
Don't worry about Siraj, he's just peddling papers second-hand; the real guys are elsewhere.
Title: Re: I want to crack Neural Networks
Post by: keghn on February 11, 2018, 04:51:08 pm
 He gives credit to other people for their work, which I like. If he did not, then it would look like he's stealing somebody else's work.
 I have seen stealing by the big wigs, when they hit that AI wall at 100 miles per hour. It causes a really big mess.

 There is a team whose work is genuine and parallels mine. Not as advanced, but a lot of brain power.

Building Machines That Learn and Think Like People | Two Minute Papers #223: 

https://www.youtube.com/watch?v=uOiOhVgR3VA&t=1s 

MIT AGI: Building machines that see, learn, and think like people (Josh Tenenbaum):   

https://www.youtube.com/watch?v=7ROelYvo8f0&feature=youtu.be   



Title: Re: I want to crack Neural Networks
Post by: ranch vermin on February 11, 2018, 06:27:06 pm
I have seen stealing by the big wigs, when they hit that AI wall at 100 miles per hour. It causes a really big mess.


Good wording Keghn, you are right: some "more powerful" beginners probably want the whole thing straight away and fuck it up completely :).
Title: Re: I want to crack Neural Networks
Post by: keghn on February 15, 2018, 11:48:11 pm

AI detectives are cracking open the black box of deep learning:     

https://www.youtube.com/watch?v=gB_-LabED68&t=2s
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on March 05, 2018, 04:49:42 pm
Holy crap did you guys hear this gal speak? Listen up till 0:36. I think this is pushing the limits of my word recognition abilities hahahaha!

https://www.coursera.org/learn/intro-to-deep-learning/lecture/WpduX/modern-rnns-lstm-and-gru
Title: Re: I want to crack Neural Networks
Post by: keghn on March 05, 2018, 06:01:04 pm

Researchers find algorithm for large-scale brain simulations: 

https://techxplore.com/news/2018-03-algorithm-large-scale-brain-simulations.html
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on March 06, 2018, 05:03:40 am
I assume that is a big BIG deal, correct? Now we have the power to simulate all the neurons in a human brain?
Title: Re: I want to crack Neural Networks
Post by: keghn on March 07, 2018, 03:01:09 pm
Building Blocks of AI Interpretability | Two Minute Papers #234: 

https://www.youtube.com/watch?v=pVgC-7QTr40
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on March 07, 2018, 05:52:03 pm
Ohhhh look! At 0:40 of the video is that link I linked us to lolol, of the ANN playground.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on March 08, 2018, 06:46:41 am
Lol so ya, since August 2015 I've been researching ANNs/machine learning/AI, and until 1.3 months ago I never understood it, and then it ALL tied together ohhhhhhh sheit lol. That is, once my friend told me ANNs are a shared-feature hierarchy. That single idea is VERY important, as you can see...
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on March 08, 2018, 06:36:40 pm
this is deep - strap down
https://www.youtube.com/watch?v=tWReDtkt-YE
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on March 08, 2018, 09:02:11 pm
This video is too good....it's candy:

RANCH CHECK THIS OUT......LOOK AT THOSE FACES FROM JUST BLOBS OF COLOR !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
https://www.youtube.com/watch?v=XhH2Cc4thJw
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on March 09, 2018, 08:06:31 pm
I used to wonder why I could recognize lines/curves on any object. After understanding ANNs/CNNs, I see why~.....I hope yous do too lol!
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on March 11, 2018, 11:36:38 am
I almost completed scanning this in just 1 day lol.
https://en.wikipedia.org/wiki/Artificial_intelligence

Haha you gotta be kidding me:
"Sub-symbolic
By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.[16] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge."

NLP IS perception, learning, and pattern recognition!

"Sub-symbolic methods..."? HAHAHA it is ALL symbols. Only universe physics gives our memory meaning.

"...manage to approach intelligence without specific representations of knowledge." ? Yet even more silly!! It is all learned representations of knowledge whether visual or auditory. ALL sensory types are symbolic AND are a language!
Title: Re: I want to crack Neural Networks
Post by: ranch vermin on March 11, 2018, 12:58:20 pm
This video is too good....it's candy:

RANCH CHECK THIS OUT......LOOK AT THOSE FACES FROM JUST BLOBS OF COLOR !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
https://www.youtube.com/watch?v=XhH2Cc4thJw

Yes, that is amazing. Making a game or movie with this would be cool; I would use this for the graphics.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on March 11, 2018, 04:26:59 pm
If anybody can find the program, send it to me. I would love to create things. Especially super-resolve things.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on March 11, 2018, 05:07:12 pm
(HEADS UP - THE BELOW PROGRAM IS A WASTE OF TIME, EVEN THE COMMENTERS SAY IT IS WOOOORSE!)

Oh look new 4min paper!

DEMO of a super-resolution AI!

Try YOUR photo !
http://people.ee.ethz.ch/~ihnatova/

Surprisingly, if you increase the zoom in your browser window (ONLY IN THE DEMO PAGE), you can see the small images closer and clearer!! Also make sure to download your generated image!
Title: Re: I want to crack Neural Networks
Post by: AgentSmith on March 11, 2018, 10:20:52 pm
I agree that shared weights as used in CNNs are a quite important and powerful concept. Actually, it's a core technique of CNNs. But on the other hand, I doubt that this concept is actually used in the human brain, since a mechanism that gives synapses at different locations the same weight does not seem to be biologically plausible at all. So it's rather an artificial trick.
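A minimal sketch of that weight sharing (random numbers, just to show the parameter count): one small kernel is reused at every image position, instead of every position having its own weights.

Code:
import numpy as np

kernel = np.random.randn(3, 3)    # the SAME 9 weights are used at every position
image = np.random.randn(28, 28)
responses = [np.sum(image[y:y + 3, x:x + 3] * kernel)
             for y in range(26) for x in range(26)]
print(len(responses), "positions share", kernel.size, "weights")
# a fully connected layer mapping the same image to 26*26 outputs would instead
# need 26*26*28*28 separate weights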
Title: Re: I want to crack Neural Networks
Post by: keghn on March 11, 2018, 11:07:10 pm
 A neuron can read a pixel, so it's not hard to look at them as light pipes. Inside a NN chip, all the pipes come together and mix all the pixels together. The mixing is constructive or destructive, or somewhere in between.
 Input data is trained to give output data.

 Input data can be non-linearly related to the output data, the same as with a linked-list algorithm.

Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on March 30, 2018, 03:33:53 am
For me, I see the world as a bit scary. Pain and death, to be specific. My dog used to sit on the vent side against the cupboard, tail up against the wall, lol. After leaving for the vet and not returning, he's not there anymore. The kitchen/vent is empty.

It really comforts me in the morning to check in on things like how many transistors Intel was recently able to fit in their latest chip.
Title: Re: I want to crack Neural Networks
Post by: infurl on March 30, 2018, 03:43:24 am
If only dogs lived longer than they do...  :(
Title: Re: I want to crack Neural Networks
Post by: Art on March 30, 2018, 04:28:56 am
And if only humans could learn, forgive and love the way dogs can. Ah yes... Good memories of great dogs...family members, friends, and companions, true and loyal, always happy to see you and thought that you were the best person on Earth! That was my dog! (so much for being a rational judge of character)... ;) Thus the word, Unconditional (as in love).
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on March 30, 2018, 05:12:08 am
Agree.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on March 30, 2018, 05:14:52 am
He saw me cry for the first time when he was 9 years old (before going), and his eyes were open big like "oh no!", and he came and sat on my feet in a ball facing outwards. It was human behavior. I told him I loved him after he did this. People get upset when they see someone cry.

Gotta be positive though. Also don't worry I'm pretty ok. Truck on.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 04, 2018, 08:22:10 am
Drew some monster ASI pics hehe.

The pics show their size, fog/liquid metalloid nanorobots, instant self-regeneration, levitating around FAST, sensors/tools popping out, lasers.

All it takes is 1 advanced nanorobot! They wirelessly form a supercomputer.

BONUS PICS GENERATED
https://deepart.io/img/EJmLtYy51/
https://deepart.io/img/CqR0MoYk/
https://deepart.io/img/qTrVHX6Z/
https://deepart.io/img/vEvTpJHj/
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 05, 2018, 04:42:15 am
I'm not doing the baby sim project anymore btw. My plan has changed a lot. The name of what I'm going to construct now is a researcher/discoverer. That's all the stupid baby was going to be used for later anyhow right? Learn to crawl...then....sit at pc researching cancer etc pffffft.

A=B

:)
Title: Re: I want to crack Neural Networks
Post by: infurl on April 05, 2018, 04:46:46 am
Progress comes through trial and error. Lots of trial and lots of error. You're making progress.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 05, 2018, 04:51:38 am
What if I told you you need more Ontologies? A isUsedFor B isn't very powerful.
Title: Re: I want to crack Neural Networks
Post by: infurl on April 05, 2018, 06:57:29 am
What if I told you you need more Ontologies? A isUsedFor B isn't very powerful.

Go on. What else have you learned?
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 05, 2018, 07:11:25 am
Because you need to work with sentences just like we do. See what I just typed? Stuff like this. Because if you want deep, complex discoveries, then you will need deep, complex ontologies...by using an ANN like LSTMs, which use hierarchies that can go deep.

You can also use 2D PR vision, or even 3D PR vision, or 55D PR vision + a dimension of time ex. 22 steps Ds or 578 step Ds. If you go 6D your image can have 67886 pixels with a certain amount of steps say.

So why will I go with just text (1D)? Idk...it's certainly easier. I guess that's why.
Title: Re: I want to crack Neural Networks
Post by: infurl on April 05, 2018, 07:21:28 am
So why will I go with just text (1D)? Idk...it's certainly easier. I guess that's why.

Text is nothing. Language is everything.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 05, 2018, 07:36:34 am
Text/vision is language; even the vision of a nail being hammered is language.

Yes, language is everything.
Title: Re: I want to crack Neural Networks
Post by: Art on April 05, 2018, 01:07:38 pm
Sorry, Understanding is everything. With language, contextual understanding is the key.

There are many who speak the language but fail to comprehend...look at some politicians, for example. ;D
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on April 05, 2018, 01:13:59 pm
Computer memory is linear, yet it is able to hold many-dimensional data.

This is a one-dimensional array:
[1, 2, 3, 4, 5, 6, 7, 8]

This is a two-dimensional array:
[[1, 2], [3, 4], [5, 6], [7, 8]]

This is a three-dimensional array:
[[[1, 2], [3, 4]], [[5, 6], [7, 8]]]

In the end, they are all stored as a one-dimensional array. It is about how they are read by the code, meaning what structure the code assigns to them.
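A tiny sketch of that point: the same eight numbers stored linearly can be read as the 4x2 grid above purely through index arithmetic.

Code:
flat = [1, 2, 3, 4, 5, 6, 7, 8]   # the linear storage
rows, cols = 4, 2                 # read it as the 4x2 grid above

def at(r, c):
    return flat[r * cols + c]     # the structure lives in the index arithmetic

print(at(2, 1))                   # row 2, column 1 -> 6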
Title: Re: I want to crack Neural Networks
Post by: spydaz on April 07, 2018, 07:29:27 pm
What if I told you you need more Ontologies? A isUsedFor B isn't very powerful.

Go on. What else have you learned?

Although ontologies are useful for information gathering and then question answering, enabling entanglement is what creates the intelligence.


A question from way back in the post, about neural networks: why, for chat-bots? And what for?

They basically work on regression / Gaussian / logE / sigmoid functions. The neural network in artificial intelligence (chat-bots) can be useful in managing big data collections where there are many potential responses. Evaluating the most "correct" response for the current input may be a matter of historical record: there may have been many correct responses which ended in a satisfactory conversation, and others which ended in a non-satisfactory one...
There may be a series of collected conversations saved. The conversations have similarities and can be saved as conversation trees, starting one-dimensionally. As more conversation trees are added, the trees can be merged into a single tree, with probabilities now calculable for each branch historically taken. Such data can also be evaluated by the neural network to produce the most desired responses, again based on satisfactory outcomes from the historically collected past user conversations (held in trees, merged into a master tree).
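A hypothetical sketch of such a tree (in Python, not spydaz's VB; the counting scheme below is an assumption for illustration, not his actual implementation):

Code:
from collections import defaultdict

tree = defaultdict(lambda: defaultdict(int))     # utterance -> {reply: times taken}

def record(conversation):
    # merge one logged conversation into the master tree
    for said, reply in zip(conversation, conversation[1:]):
        tree[said][reply] += 1

def most_probable_reply(said):
    replies = tree[said]
    return max(replies, key=replies.get) if replies else None

record(["hi", "hello", "how are you?"])
record(["hi", "hello", "nice weather"])
record(["hi", "hey"])
print(most_probable_reply("hi"))                 # "hello" (taken 2 of 3 times)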

The use of neural networks for chat-bots is highly debatable, and yet a natural step when considering how much data is collected (and hopefully stored). To enable tuning (desired over-fitting) to specific conversations, pre-programmed conversation trees can form the base tree, with conversation filtering enabled to block undesired conversation trees from polluting the conversation landscape.

There is potential in picture identification with artificial intelligence: by recognising elements in the picture, NLP techniques can build a Subject-Predicate-Object phrase from the elements to describe the picture contents - (Man) (Sitting on) (Beach). The model can be trained from previously collected data stored from conversations, by saving associated components as (Collected/Detected/Learned).

Outputs from neural networks can be classed as Predictive/Probable but not Assured!

Maybe?
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on April 07, 2018, 08:33:23 pm
Spydaz, that's great stuff in your post. Are you anywhere near the implementation?
Title: Re: I want to crack Neural Networks
Post by: spydaz on April 07, 2018, 11:30:29 pm
Spydaz, that's great stuff in your post. Are you anywhere near the implementation?

I made a neural network class in Visual Basic, and because I could not understand the RNN calculations, I put it in a library and forgot about it.
Now I'm reaching the above capabilities. The NLP side is 100% now. I have also made most of my functions "extensions", as that enables many more abilities coding-wise, especially with recursion, which was a major struggle for me... iterating through tree data to extract the correct relations (for my propositional logic module), using structures instead of class objects, due to issues with ByVal/ByRef not having the same meaning when dealing with objects; they only maintain their integrity as structures.
I am currently capturing pictures with words as well as pictures with phrases, causing a mini data explosion. Currently I am in data-capture mode until I collect a large enough data-set and manually check the images to see how close the error level is. I have not gone to the data experiments yet...

Most data experiments start with some supervised experiments before allowing for unsupervised learning techniques.

I have an idea of how I want to implement my conversation tree, how to merge the trees... and finally how to store the trees in the database.

But the image recognition I have not done yet!! It could always work off a large database of correctly classified images: given a phrase or word, if the AI had spoken about it or learned it before, it could recollect. But for two-way traffic, by presenting a picture the AI could determine from similar images, via a neural network, what the picture could be... but not recognise it. But if asked about a man on a beach, the AI could recollect, or find it on the internet and learn.
Hmm... it sparks the imagination. (I have most of the tools required, code-wise, available in my AI.) This is why I always suggest building everything from scratch.

With understanding comes inspiration

They do have a very good conversation-tree function in Unity... C#... But I'm VB and I would not like to have to distribute Unity with my app when I could create my own extendable version of conversation trees. (I only realised the other day that you could add a list of strings to the "AutoComplete" string collection on a TextBox (not RichTextBox), enabling intellisense in my custom script editor - AFTER I created the syntax highlighter for the RichTextBox (using delegates/threads), which I could have avoided... but delegates also taught me a lot about parallel programming.)
Now;
I would not like to deny anybody their learning journey by giving the answer before showing them the pathway for themselves.

I'm real close, but far from the PhD thesis project... This is the problem with AI: there are too many disciplines to cover (even non-computer-related ones) to make a REAL achievement. This is why they give PhDs for simple projects... Combining all the required processes is endless fun...lol.

Perhaps;

My AI may look rugged, but I'm not a graphical designer, I'm a scientist.
Title: Re: I want to crack Neural Networks
Post by: spydaz on April 08, 2018, 12:34:50 am
https://github.com/spydaz/Neural-NetWork

Quite old, but it should give you ideas on how to implement your version (it's in VB, highly commented).

I don't usually GitHub.....
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 10, 2018, 08:58:46 pm
Omg, look at all of Boston Dynamics' work - what the hell is this!! A very spooky song starts halfway through; you must watch it all!
Furthermore, the ending is empirically hilarious, I can't believe it. It is PETMAN WTF !!!!!!!!!!!! RUN RUN RUN !! And, 2013!
https://www.youtube.com/watch?v=-e9QzIkP5qI




wow this music is so dope hear it my friends!
Keep this vid's first pic on in 1st video for best results+else you could have nightmares when watching the 2nd video below:

https://www.youtube.com/watch?v=J1EfAICh5CA

2:16 and 7:49 (at 7:49 now look at pic on the video playing the music):
https://www.youtube.com/watch?v=Ws3EYm6koiE





digitally high - are you mastering the kungfoo? - EEE:
https://www.youtube.com/watch?v=Z08Djs5J__A



and for the road:
39:40
1:52:23
2:04:45
1:56:56

https://www.youtube.com/watch?v=xmxi4rC8cXg
Title: Re: I want to crack Neural Networks
Post by: Art on April 11, 2018, 02:11:40 am
According to DARPA, that highly mobile, sophisticated humanoid creation called PETMAN was created for the purpose of "testing clothing" for the military. Really? A modified crash-test dummy could be fitted with springs and weights to do the same thing, and for a whole lot less.

OK...the clothes didn't rip, now let's see how well it crawls through the obstacle course and handles a weapon! Kpow! Nice work Petman! You don't get tired and we could build several thousand of you and have our own squadron of advanced 'Troops'.

And some people keep on believing everything the government tells them is the truth.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 11, 2018, 11:25:15 pm
Is this what you read? https://www.army-technology.com/projects/petman/

At the end they finally let it out of the bag. Ya, and then maybe some search and rescue operations, and keeping humans away from exposure in bad situations :) .

And why the backflip? :)

Why the other robots they made? :)

:)

Talk about a "cover-up". Get it?
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 13, 2018, 06:46:09 am
I was talking to spydaz and I just want to make sure yous know that by now, especially after "finally" learning the secret of ANNs a few months ago - what these things are and how they work - yes, I realize I don't know it all. It's actually not possible in the universe to know all. As for the concept, well, ASI will. As for a lay human, just the idea of knowing all IS possible. I DO aim for knowing what we are, why we're here, and where it's all going. And I do think I have a good track on those things, as I try all day every day using a fine-tooth comb and luck, e.g. the way I think, etc. I hope I don't have an attitude anymore either...
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 13, 2018, 04:16:52 pm
Ok so nets are not needed for language-AGI spydaz says???? Just Subject>Predicate/Verb>Object "bird has wings" "bird can fly"?????

I believe 3-word use is akin to the early neural nets. You must go deep. Use real human English, not "Hi, name Joe, me here now, must fix washer, move here, bye." or whatever else.

Think about it: all our sentences use pieces of the shared-feature hierarchy.....so it is able to connect things and generate a similar but new story, say.

But most importantly most words are needed. You can't use tricks like S>V>O or a matrix grid database. Not complex enough. Deep is the buzzword.

Can you prove it?



Wow old English gets funnier as you look back further.
19C: The allwhite poors guardiant, pulpably of balltossic stummung, was literally astundished over the painful sake, how he burstteself, which he was gone to, where he intent to did he, whether you think will, wherend the whole current of the afternoon whats the souch of a surch hads of hits of hims, urged and staggered thereto in his countryports at the caledosian capacity for Lieutuvisky of the caftan's wineskin and even more so, during, looking his bigmost astonishments, it was said him, aschu, fun the concerned outgift of the dead med dirt, how that, arrahbejibbers, conspuent to the dominical order and exking noblish permish, he was namely coon at bringer at home two gallonts, as per royal, full poultry till his murder.
http://www.thehistoryofenglish.com/history_late_modern.html
http://www.thehistoryofenglish.com/history_old.html
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 14, 2018, 03:36:36 pm
My friend says "trees I think are way too much outdated to even consider them". True?
Title: Re: I want to crack Neural Networks
Post by: unreality on April 14, 2018, 04:46:55 pm
My friend says "trees I think are way too much outdated to even consider them". True?

The tree search method is far superior to neural networking. I've successfully created AGI using the tree search method on an old, inexpensive 2008 Mac Pro 3.1 dual Intel Xeon E5462 8-core PC. Having studied NN achievements, I can confidently predict that NN will eventually fade.

NN is equivalent to a high level language, which equates to being ridiculously slow. I fail to see why people are having such difficulty realizing that simple fact. You'll need a supercomputer to achieve NN AGI. With tree search you can even achieve AGI with a $100 desktop PC.

NN results in a spaghetti database that nobody can make sense of. There will be absolutely NO GUARANTEE that your neural network AGI won't suddenly go on a mass killing spree, but don't worry about that because even Google is nowhere near creating AGI with NN. Take Google Deepmind's AlphaZero NN as an example. Deepmind removed the start and end database from Stockfish (a tree search chess engine), but obviously was unable to remove any start and ending db from AlphaZero. The neural network is a massive mess where nobody has a clue what's what. In Stockfish it takes a few seconds to disable its db. In my tree search AGI it's extremely easy to modify the root tree search goals. It's written in a simple computer logic language. For example, the root tree search goal could place a *requirement* that no harm occur to mammals, or even eukaryote organisms. You can try your best to get a neural network to not harm mammals, but obviously there's no guarantee, regardless of how well you train it. Humans are a good example. Endless war and killing. Today's headline news: US, UK and France strike Syria.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 14, 2018, 05:30:50 pm
"I've successfully created AGI using the tree search method on an old 2008 inexpensive Mac Pro 3.1 dual Intel Xeon E5462 8-core PC"

Suddenly you make every living person cringe to their knees WHAT IS WRONG WITH YOU no no you did not create AGI or we wouldn't be here. And we'd know about "him", too.



Well ok I'm open minded.

hmm....
"NN results in a spaghetti database that nobody can make sense of.

In my tree search AGI it's extremely easy to modify the root tree search goals. It's written in a simple computer logic language. For example, the root tree search goal could place a *requirement* that no harm occur to mammals, or even eukaryote organisms. You can try your best to get a neural networking to not harm mammals, but obviously there’s no guarantee, regardless of how well you train it."

But they work. Real good.



Currently I believe NNs are a more powerful database (DB? Not yet...) that are not actually messy...............hmm...



What if I want my to-be AGI talking English just like us? Need NN then? I think we need all these words we use for deepness. Give me (teach me) some big pointers.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 14, 2018, 10:30:37 pm
"I've successfully created AGI using the tree search method on an old 2008 inexpensive Mac Pro 3.1 dual Intel Xeon E5462 8-core PC"

Suddenly you make every living person cringe to their knees WHAT IS WRONG WITH YOU no no you did not create AGI or we wouldn't be here. And we'd know about "him", too.



Well ok I'm open minded.

hmm....
"NN results in a spaghetti database that nobody can make sense of.

In my tree search AGI it's extremely easy to modify the root tree search goals. It's written in a simple computer logic language. For example, the root tree search goal could place a *requirement* that no harm occur to mammals, or even eukaryote organisms. You can try your best to get a neural networking to not harm mammals, but obviously there’s no guarantee, regardless of how well you train it."

But they work. Real good.



Currently I believe NNs are a more powerful database (DB? Not yet...) that are not actually messy...............hmm...



What if I want my to-be AGI talking English just like us? Need NN then? I think we need all these words we use for deepness. Give me (teach me) some big pointers.

English? No, my AGI doesn't need a massive, CPU-intensive, slow NN version.

You can't make any sense of a complex NN database. Doctors can't reprogram a serial killer's NN to be a normal person. NN is a messy, outdated design created by a slow evolutionary process that took millions of years. It takes me a matter of minutes to change the root goal of my AGI.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 14, 2018, 11:14:20 pm
BTW, even Google's company, Deepmind, is now using a tree search. They discovered that when they placed the NN inside a tree search, AlphaZero became magnitudes better. Again, the NN is inside their tree search. Meaning that the tree search is the root. Good move, Deepmind! ;) One component at a time. Eventually it will be NN free.

This is the common evolutionary path of coding. The first goal is to get the code working. Then you can focus on performance. NN is the slowest interpreted language that I'm aware of. Of course, I find it rather insane to start out with the world's slowest language. IMO, it's far more intelligent to start out with the C language.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 15, 2018, 08:45:58 am
so korrelan are you gonna give up your NN brain for trees?

Your algorithms are conversational - mine needs Pattern Recognition by NNs to discover new insights for Earth.

Show me proof a Search Tree can do what an LSTM can do - e.g. generate English sentences like "Oh how the romeo sat there waiting." And show me an image of what this "easy-to-view" tree looks like then!

LSTM can take in 2 words n and b that are said in order x, then predict the next word. The lines will begin to look messy. Show me your tree. Don't forget I can link two words together from afar, ex. piano and cat. Draw your Graph/Net come on! Show me!

First goal is to get the code working? Then performance? LOL, then let's use ANNs! Then use Python. An LSTM will be easy to get working!
.......The NN DB is not just a programming language but also a mechanism, one that you need and that you can't yet show me in a simple mechanism DB/tree. You do use Python or whatever you want to code up ANNs.

Btw watch Two Minute Papers, ANN achievements are magic. You'll want to download every one of his videos.
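
For concreteness, here is a minimal sketch of the kind of LSTM next-word predictor being argued about: it memorizes one toy sentence and learns to predict each following word. It assumes PyTorch; the sentence, layer sizes and training loop are invented purely for illustration and are nobody's actual system.

Code:
import torch
import torch.nn as nn

corpus = "oh how the romeo sat there waiting".split()
vocab = {w: i for i, w in enumerate(sorted(set(corpus)))}
ids = torch.tensor([vocab[w] for w in corpus])

class NextWordLSTM(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)        # shared word features
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)
    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)                              # next-word logits at each step

model = NextWordLSTM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

x, y = ids[:-1].unsqueeze(0), ids[1:].unsqueeze(0)      # each word predicts the next one
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x).reshape(-1, len(vocab)), y.reshape(-1))
    loss.backward()
    opt.step()

inv = {i: w for w, i in vocab.items()}
print([inv[int(i)] for i in model(x).argmax(-1)[0]])    # should echo the sentence shifted by one

Scaled up to a real corpus, the same loop is what lets an LSTM continue a sentence word by word.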
Title: Re: I want to crack Neural Networks
Post by: korrelan on April 15, 2018, 10:25:31 am
Quote
so korrelan are you gonna give up your NN brain for trees?

No lol. If you are referring to my chatbot experiment then that is just a side project.  I like to have several side projects, all experience/ learning is useful.  The more methods/ angles/ views you have on a problem space the better.

I'm still working on my main AGI, trees are fun but their functionality won't create the type of AGI I require.

 :)
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 15, 2018, 10:48:58 am
AHAHAHAHA I WIN YES YES!! The AGI korrelan wants to create needs NN see!

:PP

Btw unreality/infurl/don/spydaz, can one of yous draw a tree doing Terminal Yield from a Sequence of Transformations? Ex. it outputs "sam loves sally" while after a transformation it will output "sally is in love with sam".

Unreality, NNs are efficient because you can store a sentence with just a few nodes and connections. It's a Shared-Feature Hierarchy. The letter A is used many times, the word "the" is used many times, etc.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 15, 2018, 02:59:33 pm
Sorry to disturb. Please continue playing in your sandbox. SMH
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on April 15, 2018, 03:28:57 pm
Lock, this is how sentences are stored within Treebank (https://en.wikipedia.org/wiki/Treebank) databases:

Code:
(S
  (NP I)
  (VP
    (VP (V shot) (NP (Det an) (N elephant)))
    (PP (P in) (NP (Det my) (N pajamas)))))
(S
  (NP I)
  (VP
    (V shot)
    (NP (Det an) (N elephant) (PP (P in) (NP (Det my) (N pajamas))))))

The example shows two different interpretations of the same sentence: "I shot an elephant in my pajamas."
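
For anyone who wants to poke at that bracketed Treebank format, here is a small sketch using NLTK, one common tool for this (an assumption on my part; install with pip install nltk). The sentence is the example above.

Code:
from nltk import Tree

s = """(S
  (NP I)
  (VP
    (VP (V shot) (NP (Det an) (N elephant)))
    (PP (P in) (NP (Det my) (N pajamas)))))"""

t = Tree.fromstring(s)
t.pretty_print()                              # draws the parse tree as ASCII art
print(t.leaves())                             # ['I', 'shot', 'an', 'elephant', ...]
print([st.label() for st in t.subtrees()])    # phrase labels: S, NP, VP, ...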
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 15, 2018, 04:34:17 pm
I'm sorry. I really don't know enough to say trees are out of the game.


That's not what I meant. What I meant was, I want to know how a tree can transform a sentence into a different but similar/on-topic sentence.

Ex. the tree transforms "Flowers are pretty looking." into "Flowers are a beautiful thing on Earth." Is that possible with trees? Show me. My brain was able to do it. Here's another I'll generate: "I believe flowers are a nice-looking thing that have come on Earth from evolution." "Flowers are certainly not one of the uglier things."
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on April 15, 2018, 04:41:18 pm
I'm sorry, Lock, I didn't mean to dispute either trees or NNs for solving a task. I guess I wasn't clear enough. I just wanted to inform you of the current top technology known to me for storing natural language sentences.

As for transforming one sentence into another with the same meaning, I'm not aware of any technology or program that does that. It's an interesting question, though. If someone figures out how to do it, it could be an interesting base for a compression algorithm. I think it could be done with an NN looking backwards, down its synapse tree, but I'm not an expert on NNs.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 15, 2018, 04:48:12 pm
"Grass loves it when you cut it so keep up down your cutting spree."
"Grass is happy when you cut it so do cut it whenever you can."
"Please cut grass it loves the feeling."
"Cutting grass makes grass feel good."
"Grass will never ever dare cut itself until it changes species."
"The grass wherever it lies can't cut itself without arms because it can't ok grass needs a punk to cut its bosom."
Title: Re: I want to crack Neural Networks
Post by: unreality on April 15, 2018, 06:40:20 pm
[snip] I want to know how a tree can transform a sentence into a different but similar/on-topic sentence.

Ex. the tree transforms "Flower are pretty looking." into "Flowers are a beautiful thing on Earth." Is that possible with trees? Show me. My brain was able to do it. Here's another I'll generate "I believe flowers are a nice-looking thing that have come on Earth from evolution.". "Flowers are certainty not one of the uglier things.".

That level of task is for a developed AGI. That is, an AGI with sufficient learning experience regarding your question. If you're interested I could copy my main tree search function and strip out the code. I'm a ridiculously heavy code commenter lol.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 15, 2018, 07:24:07 pm
Yes I would like to see that.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 16, 2018, 06:51:08 pm
Yes I would like to see that.
I've been wanting to make a short video about this for some time. When time permits. Probably this week. Hopefully it will also outline some processor circuit design ideas for tree search. A few decades ago computer gaming relied upon the general CPU to render texture maps. Then came the GPUs, now rendering roughly 370 billion texels/second, but it didn't start like that. It seems the STG-2000 was the first GPU, rendering just 12 MT/s (0.012 GT/s). Back then I'm sure that was amazing. Poor CPUs. The GPU was also great for neural networking due to matrix math, but it wasn't designed specifically for NN. So Google spent tens of millions of dollars making the TPU, now capable of an insane 45 TFLOPS per TPU. One day there will be processors designed specifically for AGI tree search code where the ASIC transistors will contain the tree search and pattern recognition routines, all connected to what I call parallel RAM. That will be the dawn of ASI. :)
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 17, 2018, 11:37:04 am
I'll look up AlphaZero's tree use.

I drew a pic showing why trees are too clear-cut. It's attached.

Here is what i got:
AlphaGo Zero: uses a Residual Network. Doesn't take in human data like AlphaGo did and then discover; rather it just self-plays. Starts out with only self-play and backpropagation-of-error RL. Monte Carlo tree search stabilizes the self-weight-training process. It goes through 16,000 simulated board states down the tree but not more, because the computation gets exponential; it evaluates, backs the values up to the tree top, gets a solid estimate of which moves are strong and which are weak, then selects the single best move. But this algorithm suits the game Go because it has perfect information and simulations and a perfect ability to look at which simulations are down the tree. Using the tree, it can look down the roads, go with unproven beliefs/actions, see where they lead, and test/evaluate terminals/structures. We could use this. The way to discover is still a messy NN, but "in" the messy NN can be a tree of brute force/smart force happening.
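
To make that select / expand / simulate / back-up loop concrete, here is a minimal UCT-style Monte Carlo tree search sketch. The game is a toy take-1-or-2-stones game, not Go, and this is not DeepMind's code; every name here is invented for illustration.

Code:
import math, random

# Toy game: players alternately take 1 or 2 stones; whoever takes the last stone wins.
class Node:
    def __init__(self, state, parent=None, move=None):
        self.state = state            # (stones left, player to move)
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

def legal_moves(state):  return [m for m in (1, 2) if m <= state[0]]
def apply_move(state, m): return (state[0] - m, 1 - state[1])
def is_terminal(state):  return state[0] == 0
def winner(state):       return 1 - state[1]   # the player who just took the last stone

def uct_select(node):
    return max(node.children, key=lambda c: c.wins / c.visits
               + math.sqrt(2 * math.log(node.visits) / c.visits))

def rollout(state):                    # random playout to the end of the game
    while not is_terminal(state):
        state = apply_move(state, random.choice(legal_moves(state)))
    return winner(state)

def mcts(root_state, simulations=2000):
    root = Node(root_state)
    for _ in range(simulations):
        node = root
        # 1. selection: walk down while the node is fully expanded
        while node.children and len(node.children) == len(legal_moves(node.state)):
            node = uct_select(node)
        # 2. expansion: add one untried move
        if not is_terminal(node.state):
            tried = {c.move for c in node.children}
            m = random.choice([x for x in legal_moves(node.state) if x not in tried])
            child = Node(apply_move(node.state, m), parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. simulation: random rollout from the new position
        win = rollout(node.state)
        # 4. backpropagation: credit the player who chose each move
        while node is not None:
            node.visits += 1
            if node.parent is not None and win == node.parent.state[1]:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

print(mcts((10, 0)))   # suggested move for the player facing 10 stones

AlphaGo Zero's version replaces the random rollout with a neural network's value estimate and biases the selection with the network's move probabilities, but the tree loop itself has this shape.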

Btw I like to use ANI/AGI/ASI to stand for the AI's power/capabilities instead of how versatile it is. Ex. "I made ASI! It can give suggestions on carpets, cats, hats, politics, and flight!"....boring....... :P Call that ANI. ASI should mean "deep" not "not narrow".....oh well.........
Title: Re: I want to crack Neural Networks
Post by: unreality on April 17, 2018, 03:38:28 pm
It merely boils down to the fact that an NN by itself must do the tree search itself, but if we do the tree search for the NN, then it's a lot faster, since NN is merely a ridiculously slow interpreted language.

Sorry to disappoint millions of people who had such high hopes for NN. I know it's the way our human brain works, and the human ego is, after all, the largest object in the known Universe, ;) but there's nothing magical about neural networking. It's nature's way of writing source code.

Again, on the bright side, non-NN means we'll have 100% control over the AGI's goals. With NN, there's no guarantee when working with massive bowls of spaghetti. That's why there will always be evil people until the CPUs take over the human brain-- transhumanism.

Here's to spaghetti
(http://www.gruenderszene.de/wp-content/uploads/2016/03/oops.jpg)

The obvious truth can be difficult to accept. People are just ... stubborn. :/
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 17, 2018, 04:14:40 pm
Ok you...

How old are you, and how long have you been studying AI/ML/computer science?
I'm 22.7 and have been since August 12th, 2015.

Tell me how you use ALL variables to weight in on a dozen variables then? Yotta spaghetti looks good.

Korrelan probably disagrees with you strongly. So show us proof, it's time.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 17, 2018, 04:58:32 pm
I merely gave you facts. Again, when Deepmind added the tree search, the end result was that it played significantly better. Korrelan can argue with that fact till he's blue in the face if he so wishes.

Again, the non-NN tree search method is guided by a set of goals written in logic language, not a yotta spaghetti bowl.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 17, 2018, 05:07:50 pm
Danny is crying for his candy. Momma will show danny how to make his candy, but danny is going to throw a fit in the meantime.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 17, 2018, 05:27:28 pm
Ah, I see your first post on here. You said on Nov 4th, 2017 that you don't know much AI lingo. Well, there's your problem. I knew it. Look, I was a sort of know-it-all before too. I'm not saying I'm very dumb now, just that I have some things but not right on the button.

btw um........
you say:
"I firmly believe the secret to the singularity is pattern recognition."
"I'm a seeker of knowledge, understanding, wisdom, and insight. Nice to meet you!"
........well, that belongs to NNs dude, they are king at PR. You need the spaghetti.

I too didn't know or want NNs before. But once you understand them you'll see their importance. You can also watch Two Minute Papers. Actually I was learning the whole field and had seen the achievements, but I didn't understand how NNs work and didn't want to; I wanted to do it a non-NN way, and I was very determined to do so.

I actually made a simulated spider learn to crawl without an NN lol, it was pretty fast at learning but didn't crawl that great or that fast, and it would need exponential time with more limbs, unless you use 1 joint above the baby child but I don't know how that works and also not everything works like this...

Been there, done it.
https://www.youtube.com/watch?v=WPUJQow1QlY
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 17, 2018, 07:45:25 pm
Lol, Moore, who made Moore's Law, said when asked if there was anything else he wished he'd predicted:

“The importance of the Internet surprised me,” said Moore. “It looked like it was going to be just another minor communications network that solved certain problems. I didn’t realize it was going to open up a whole universe of new opportunities, and it certainly has. I wish I had predicted that.”

hahaha a minor communications network omg so typical, I see why he thought that.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 17, 2018, 08:07:43 pm
Wow. What a ding dong. You don't seem to understand what "lingo" means.

"The language and speech, especially the jargon, slang, or argot, of a particular field, group, or individual."

Facts still remain. Tree search made Deepmind's NN magnitudes better. NN is a ridiculously slow interpreted language. I accomplished non-NN AGI on an inexpensive slow 8-core CPU when Deepmind and their NN are nowhere near that even with their army of supercomputers.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 17, 2018, 08:18:00 pm
Future for Neural Network AGI

(https://nerdist.com/wp-content/uploads/2017/01/t2-01202017.jpg)



Future for Tree Search AGI

(http://cdn-static.denofgeek.com/sites/denofgeek/files/styles/main_wide/public/0/47//humans_main.jpeg?itok=6Jd2WGSg)
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 17, 2018, 08:18:46 pm
What? First I read lingo and interpret it right. Then you say the definition that I recognized. THEN Google shows me the definition as follows, which is the same as your definition:

https://www.google.ca/search?source=hp&ei=IUfWWuaVFMrKjwTduiQ&q=lingo+definition&oq=lingo+def&gs_l=psy-ab.1.0.0l4j0i22i30k1l6.45.1780.0.2627.10.9.0.0.0.0.169.821.8j1.9.0....0...1c.1.64.psy-ab..1.9.820.0..35i39k1j0i131k1j0i131i46k1j46i131k1j0i10k1.0.v1AlJ8EkCsY

I did get what you meant and what lingo meant, I'm not a ding dong. You said you don't get AI lingo like what "discrimination" or "gradient descent" means.

You said in your 1st ever post:
Anyhow, tbh I don’t know much AI lingo. I haven’t studied NNs, OpenCog's hypergraph pattern miner, etc.

Pretty much if you don't know how NNs work then you have nothing, not what you think you have. Unless very lucky.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 17, 2018, 08:26:47 pm
But you're right about the singularity, I'll give you that.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 17, 2018, 08:31:41 pm
Again, you don't understand.

Study:
"A detailed investigation and analysis of a subject or situation."

Why would I spend ridiculous amounts of time researching something, looking for a breakthrough, when it is obvious to me that it's a dead end path? Of course I understand neural networking. I understood it well enough to know it's a dead end.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 17, 2018, 08:35:54 pm
So far it appears my analysis was correct, because once again Deepmind learned that when it placed NN inside a tree search the AlphaZero became magnitudes better.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 17, 2018, 08:36:58 pm
But your right about the singularity I'll give you that.

So far I'm correct about everything.  ;)
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 17, 2018, 08:45:59 pm
Ha ha...real funny! You said yourself you "Haven't studied NNs".

Either someone set you up to say NNs are dead ends or you REALLY need to watch Two Minute Papers videos. Here, read the titles in my attachment of the videos I downloaded from his channel. You can also search for them - and watch them too. Reading them is enough for the point, though.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 17, 2018, 08:51:57 pm
Again. Study: "A detailed investigation and analysis of a subject or situation."

It seems you'll have to wait a few years to understand why neural networking is a dead end. Just read my posts again to learn why.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 17, 2018, 08:57:01 pm
Ha ha...real funny! You said yourself you "Haven't studied NNs".

Either someone set you up to say NNs are dead ends or you REALLY need to watch Two Minute Papers videos. Here read the titles in my attachment of the videos I downloaded from his channel. You can also search for them - and watch them too. Reading them is enough though for the point though.

The videos shown in your image are tasks accomplished through machine learning. They will need quantum computing with a database the size of Data in Star Trek to somehow transform machine learning into a sentient thinking human being. It will not happen. Companies such as Deepmind will be *required* to introduce a tree search, in the very least. Oh wait a minute, Deepmind already started doing that just recently. ;)

Okay? That's about my chitchat limit lol.
Title: Re: I want to crack Neural Networks
Post by: infurl on April 17, 2018, 09:10:15 pm
Dumb and dumberer.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 17, 2018, 09:11:59 pm
NO YOU DUMB

:p

Well tell me how a tree search helps form discoveries then. I want to know this. How does it "look ahead and test 4,000 situations"???

Give me an example like that of "Socrates isA man, man is mortal, therefore Socrates isA mortal.".
Title: Re: I want to crack Neural Networks
Post by: unreality on April 17, 2018, 09:15:33 pm
Sorry, danny has used up all of mammas time. Mamma will teach danny how to make candy by the end of this week if all goes well and danny is patient.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 17, 2018, 09:17:36 pm
While I find your musing funny, I really don't think it's a good thing that I am supposedly dumb when my objective is to be the opposite.
Title: Re: I want to crack Neural Networks
Post by: infurl on April 17, 2018, 10:04:38 pm
https://en.wikipedia.org/wiki/Resolution_(logic)
Title: Re: I want to crack Neural Networks
Post by: unreality on April 17, 2018, 10:55:11 pm
Logic language, :tup:  ... Pass some of that sugar to deepmind cuz we gonna demystify santa :santa-new:    Get your apron ready, infurl
Title: Re: I want to crack Neural Networks
Post by: unreality on April 17, 2018, 11:04:32 pm
A little sugar can guarantee https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
Title: Re: I want to crack Neural Networks
Post by: korrelan on April 18, 2018, 01:15:52 am
The schemas for all classical ANNs are basically variations on a tree structure.

Even biological neurons have branching dendrites and axons.

 :)
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 01:41:59 am
Why, that's an insult to trees lol. ;) NN isn't based on an intelligible pruning/selection tree search method. It's kind of like saying a match is a Sun. Yes, both give off heat, but that's about it. NN is not referred to as a tree search. I'm sure you know all of this, but you just wanted to push my buttons haha. Don't mess with my trees!!

https://en.wikipedia.org/wiki/Artificial_neural_network

Although some people will combine NN with tree searches, such as deepmind. Another example,

Combining Neural Networks and Tree Search for Task and Motion Planning in Challenging Environments
https://www.youtube.com/watch?v=MM2U_SGMtk8




In a tree search I have 100% control of what the AGI is searching for and what nodes it will select to search through. In my AGI I could transform it from a Dalai Lama to a Hitler in a matter of minutes by changing the root tree search goals. That's why you have two possible outcomes for untethered AGIs regardless of how nice you try to make them :)


Future for Neural Network AGI

(https://nerdist.com/wp-content/uploads/2017/01/t2-01202017.jpg)



Future for Tree Search AGI

(http://cdn-static.denofgeek.com/sites/denofgeek/files/styles/main_wide/public/0/47//humans_main.jpeg?itok=6Jd2WGSg)
Title: Re: I want to crack Neural Networks
Post by: korrelan on April 18, 2018, 08:59:19 am
ANNs even look like trees.

 :)
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 18, 2018, 09:31:39 am
I think the cutest thing is when you get a newcomer in computer science who acts like this. Am I right? XD

So unreality, what is your mission? Is it to create AGI as soon as you can?
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 04:15:06 pm
NN tree at best ;)

(https://img00.deviantart.net/b7e9/i/2010/078/a/f/ugly_dead_tree_by_gothicmamas_stock.jpg)
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 04:17:03 pm
I think the most cutest thing is when you get a newcomer in computer science that acts like this. Am I right? XD

So unreality, what is your mission? Is it to create AGI as soon as you can?

What's not cute are people who can't defend their stance with facts, so they resort to ad hominem. :/
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 04:25:15 pm
Oh well lookie here, folks. Looks like a tree. Must be a tree search haha. And if ya look real close in the wiki article you'll see the word "Research" ;) hm hmm yup, that's right. Just remove that them there re and it's right there. Search. Proof! Dadgummit!

https://en.wikipedia.org/wiki/Abstract_syntax_tree

(https://upload.wikimedia.org/wikipedia/commons/thumb/c/c7/Abstract_syntax_tree_for_Euclidean_algorithm.svg/400px-Abstract_syntax_tree_for_Euclidean_algorithm.svg.png)

No offense, korrelan. Just a point made in good fun.
Title: Re: I want to crack Neural Networks
Post by: AgentSmith on April 18, 2018, 05:30:33 pm
One basic property of a tree structure is that it does not contain any cycles. This does not necessarily hold for ANNs.
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on April 18, 2018, 05:31:39 pm
Unreal, wait until you learn about forests (http://www.bramvandersanden.com/post/2014/06/shared-packed-parse-forest/).

As far as I can tell, NN is a special implementation of tree search. It is a bottom-up branch selection that works on fuzzy data. I think it is occasionally indispensable to switch to fuzzy pattern matching because not all data complies exactly with expected forms, bit by bit.

Bottom-up is slower than top-down because we can optimize top-down search to skip over whole branches if only parts don't match. But sometimes, I think we can't avoid bottom-up search. Maybe I'm wrong, did anyone try to implement top-down fuzzy search?
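
As a tiny illustration of the "skip over whole branches" point: a top-down walk over a prefix tree abandons an entire subtree the moment one character stops matching, while a bottom-up fuzzy match has to score candidates it could not rule out early. Plain Python; the word list is invented for the example.

Code:
words = ["car", "cart", "carbon", "cat", "dog", "dove"]

def build_trie(words):
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = True                    # end-of-word marker
    return root

def starts_with(trie, prefix):
    node = trie
    for ch in prefix:
        if ch not in node:                  # mismatch: the whole branch below is skipped
            return []
        node = node[ch]
    out = []
    def walk(n, acc):                       # collect completions under the surviving branch
        for k, v in n.items():
            if k == "$":
                out.append(prefix + acc)
            else:
                walk(v, acc + k)
    walk(node, "")
    return out

trie = build_trie(words)
print(starts_with(trie, "car"))   # ['car', 'cart', 'carbon'] -- 'dog'/'dove' never examined
print(starts_with(trie, "zz"))    # [] -- the whole trie is pruned after one comparison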
Title: Re: I want to crack Neural Networks
Post by: AgentSmith on April 18, 2018, 05:43:28 pm
As far as I can tell, NN is a special implementation of tree search.

An ANN is not a special case of a tree, but a tree is a special case of an ANN.
Yes, dear folks, ANNs are much more powerful than trees. ;)
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 05:50:46 pm
Unreal, wait until you learn about forests (http://www.bramvandersanden.com/post/2014/06/shared-packed-parse-forest/).

As far as I can tell, NN is a special implementation of tree search. It is a bottom-up branch selection that works on fuzzy data. I think it is occasionally irreplaceable to switch to fuzzy pattern match because not every data complies exactly to expected forms, bit by bit.

Bottom-up is slower than top-down because we can optimize top-down search to skip over certain branches. But sometimes, I think we can't avoid bottom-up search. Maybe I'm wrong, did anyone try to implement top-down fuzzy search?

Up, down, doesn't matter. Fuzzy data lol. Haha, good one. ;) That translates into "Nobody in the known Universe knows what I'm searching for in this massive bowl of spaghetti, but I'll let you know when it's done."
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 05:51:33 pm
As far as I can tell, NN is a special implementation of tree search.

An ANN is not a special case of a tree, but a tree is a special case of an ANN.
Yes, dear folks, ANNs are much more powerful than trees. ;)

Care to provide a peer reviewed source to back up that claim? Or was that just humor on your part?
Title: Re: I want to crack Neural Networks
Post by: AgentSmith on April 18, 2018, 06:01:13 pm
As far as I can tell, NN is a special implementation of tree search.

An ANN is not a special case of a tree, but a tree is a special case of an ANN.
Yes, dear folks, ANNs are much more powerful than trees. ;)

Care to provide a peer reviewed source to back up that claim? Or was that just humor on your part?

Following from the definition, trees are not allowed to contain cycles. And even more, in a tree each node has no more than one predecessor. All of these restrictions do not hold for ANNs. In an ANN, nodes (i.e., neurons) can be connected arbitrarily, without any rules.
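
A small sketch of that definition as code: a directed graph only counts as a tree if no node has more than one predecessor and no cycle exists. Plain Python, nothing library-specific; the tiny example graphs are made up.

Code:
def is_tree(edges, nodes):
    preds = {n: 0 for n in nodes}
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        preds[b] += 1
    if any(p > 1 for p in preds.values()):
        return False                          # a node with two parents -> not a tree
    def has_cycle(n, visiting, done):
        visiting.add(n)
        for m in adj[n]:
            if m in visiting or (m not in done and has_cycle(m, visiting, done)):
                return True
        visiting.discard(n)
        done.add(n)
        return False
    return not any(has_cycle(n, set(), set()) for n in nodes)

print(is_tree([("a", "b"), ("a", "c")], "abc"))              # True: a plain tree
print(is_tree([("a", "b"), ("b", "c"), ("c", "a")], "abc"))  # False: contains a cycle
print(is_tree([("a", "c"), ("b", "c")], "abc"))              # False: c has two predecessors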
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on April 18, 2018, 06:03:52 pm
Unreal, wait until you learn about forests (http://www.bramvandersanden.com/post/2014/06/shared-packed-parse-forest/).

As far as I can tell, NN is a special implementation of tree search. It is a bottom-up branch selection that works on fuzzy data. I think it is occasionally irreplaceable to switch to fuzzy pattern match because not every data complies exactly to expected forms, bit by bit.

Bottom-up is slower than top-down because we can optimize top-down search to skip over certain branches. But sometimes, I think we can't avoid bottom-up search. Maybe I'm wrong, did anyone try to implement top-down fuzzy search?

Up, down, doesn't matter. Fuzzy data lol. Haha, good one. ;) That translates into "Nobody in the known Universe knows what I'm searching for in this massive bowl of spaghetti, but I'll let you know when it's done."
Calm down a bit. Say, how can you tell that a person you've never seen in your life is a human? Or a hammer you've never seen in your life is a hammer? My answer is fuzzy bottom-up matching (NN in my simplified perception). Is there any other way? If there is, it might be a breakthrough in the computer industry.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 06:11:31 pm
As far as I can tell, NN is a special implementation of tree search.

An ANN is not a special case of a tree, but a tree is a special case of an ANN.
Yes, dear folks, ANNs are much more powerful than trees. ;)

Care to provide a peer reviewed source to back up that claim? Or was that just humor on your part?

Following from the definition, trees are not allowed to contain cycles. And even more, in a tree each node has not more than one predecessor. All of these restrictions do not hold for ANNs. In an ANN, nodes (e.i., neurons) can be connected arbitrarily.

In terms of cycles I've never heard of such a limitation with tree searches. The root tree in my AGI never ends. It has endless cycles.

Multiple predecessors? Exactly, because tree searches are not a big bowl of spaghetti. Spaghetti was meant to be eaten, not used in tree searches. A tree search can guarantee what its goals are, which are written in a logic language in my AGI. In NN, humans for example can easily end up being Hitlers.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 06:14:09 pm
Unreal, wait until you learn about forests (http://www.bramvandersanden.com/post/2014/06/shared-packed-parse-forest/).

As far as I can tell, NN is a special implementation of tree search. It is a bottom-up branch selection that works on fuzzy data. I think it is occasionally irreplaceable to switch to fuzzy pattern match because not every data complies exactly to expected forms, bit by bit.

Bottom-up is slower than top-down because we can optimize top-down search to skip over certain branches. But sometimes, I think we can't avoid bottom-up search. Maybe I'm wrong, did anyone try to implement top-down fuzzy search?

Up, down, doesn't matter. Fuzzy data lol. Haha, good one. ;) That translates into "Nobody in the known Universe knows what I'm searching for in this massive bowl of spaghetti, but I'll let you know when it's done."
Calm down a bit. Say, how can you tell that a person you've never seen in your life is a human? Or a hammer you've never seen in your life is a hammer? My answer is fuzzy bottom-up match (NN in my simplified perception). Is there any other way? If there is, it might be a breakthrough in computer industry.

It's called intelligence. A tree search has access to the AGI's entire lifespan of learned knowledge & intelligence.
Title: Re: I want to crack Neural Networks
Post by: korrelan on April 18, 2018, 06:17:00 pm
Quote
No offense, korrelan. Just a point made in good fun.

I seem to have become the cornerstone of applying neural networks to the AGI problem lol.

I also noticed that earlier in the thread I was discussing/ arguing by proxy... cool.

We each have our own ideas, that’s a good thing, there is no point trying to convince another party that ‘your way’ is better than any other.  We can post our theories/ ideas but the only way to convince a peer is to provide proof.   

Showing diagrams or waxing lyrical about a concept does not provide proof.

Search trees have their uses… but in my humble opinion they will never provide the mechanisms required for a true AGI.

I often use this example… just consider the concepts of ‘up’ or ‘love’… I can see no way you can capture the ‘essence’ of the concepts using any kind of human derived logic.

We each choose our own path; by all means do your best to create an AGI using tree search algorithms… I will continue with my research.

 :)
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on April 18, 2018, 06:20:16 pm
Unreal, wait until you learn about forests (http://www.bramvandersanden.com/post/2014/06/shared-packed-parse-forest/).

As far as I can tell, NN is a special implementation of tree search. It is a bottom-up branch selection that works on fuzzy data. I think it is occasionally irreplaceable to switch to fuzzy pattern match because not every data complies exactly to expected forms, bit by bit.

Bottom-up is slower than top-down because we can optimize top-down search to skip over certain branches. But sometimes, I think we can't avoid bottom-up search. Maybe I'm wrong, did anyone try to implement top-down fuzzy search?

Up, down, doesn't matter. Fuzzy data lol. Haha, good one. ;) That translates into "Nobody in the known Universe knows what I'm searching for in this massive bowl of spaghetti, but I'll let you know when it's done."
Calm down a bit. Say, how can you tell that a person you've never seen in your life is a human? Or a hammer you've never seen in your life is a hammer? My answer is fuzzy bottom-up match (NN in my simplified perception). Is there any other way? If there is, it might be a breakthrough in computer industry.

It's called intelligence. A tree search has access to the AGI's entire lifespan of learned knowledge & intelligence.

I see. We have to define intelligence first.
Title: Re: I want to crack Neural Networks
Post by: AgentSmith on April 18, 2018, 06:27:57 pm
Following from the definition, trees are not allowed to contain cycles. And even more, in a tree each node has not more than one predecessor. All of these restrictions do not hold for ANNs. In an ANN, nodes (e.i., neurons) can be connected arbitrarily.

In terms of cycles I've never heard of such a limitation with tree searches. The root tree in my AGI never ends. It has endless cycles.

A graph contains a cycle iff there is a path of nodes x1 -> x2 -> ... -> xn -> x1 induced by the edges, i.e., one starts in a node x1, follows the directed edges, and may end up in the start node x1 again. If your "tree" contains such a cycle, it's not a tree.

Quote
Multiple predecessors? Exactly, because tree searches are not a big bowl of spaghetti. Spaghetti was meant to be eaten, not used in tree searches. A tree search can guarantee what it's goals are, which are written in a logic language in my AGI. In NN, humans for example, can easily end up being Hitlers.

I don't know exactly what you mean. Are you somehow concerned with true AGI? Are you afraid of it?
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 06:34:04 pm
Showing diagrams or waxing the lyrical about a concept does not provide proof.

In the academic community one usually must put forth the time & effort to understand a paper or even a simple flowchart. Hopefully this week I'll create a short video outlining the code structure of my AGI. It will require people to spend some time analyzing it, but that's their choice. I'm only interested in finding the Einsteins out there. That's one of my main goals in life. So far, a failure. :(




Search trees have their uses… but in my humble opinion they will never provide the mechanisms required for a true AGI.

I've already accomplished it. To date there are absolutely no NNs that think like a human mind or are even close to being sentient. That'll be a doozy, trying to create a machine learning setup to create a human-brain NN. The only publicly displayed human-like AIs so far have been non-NN based.




I often use this example… just consider the concepts of ‘up’ or ‘love’… I can see no way you can capture the ‘essence’ of the concepts using any kind of human derived logic.

My AGI is driven by tree searches, logic language goals, pattern recognition, experiences & knowledge stored in a db. For me the answer of how my AGI understands love is extremely easy to see, but I've spent considerable time analyzing my thinking process and how to implement that into computer source code. I can't just simply give you that understanding here because it would take months.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 06:39:33 pm
Unreal, wait until you learn about forests (http://www.bramvandersanden.com/post/2014/06/shared-packed-parse-forest/).

As far as I can tell, NN is a special implementation of tree search. It is a bottom-up branch selection that works on fuzzy data. I think it is occasionally irreplaceable to switch to fuzzy pattern match because not every data complies exactly to expected forms, bit by bit.

Bottom-up is slower than top-down because we can optimize top-down search to skip over certain branches. But sometimes, I think we can't avoid bottom-up search. Maybe I'm wrong, did anyone try to implement top-down fuzzy search?

Up, down, doesn't matter. Fuzzy data lol. Haha, good one. ;) That translates into "Nobody in the known Universe knows what I'm searching for in this massive bowl of spaghetti, but I'll let you know when it's done."
Calm down a bit. Say, how can you tell that a person you've never seen in your life is a human? Or a hammer you've never seen in your life is a hammer? My answer is fuzzy bottom-up match (NN in my simplified perception). Is there any other way? If there is, it might be a breakthrough in computer industry.

It's called intelligence. A tree search has access to the AGI's entire lifespan of learned knowledge & intelligence.

I see. We have to define intelligence first.

The root tree goal is written in logic language. The concept of murder is an easy concept. It's easy to implement the three laws of robotics in the tree root goal.  Very useful, and IMO a requirement. Sentient ANN/NN should NEVER be allowed to be free.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 06:47:24 pm
Following from the definition, trees are not allowed to contain cycles. And even more, in a tree each node has not more than one predecessor. All of these restrictions do not hold for ANNs. In an ANN, nodes (e.i., neurons) can be connected arbitrarily.

In terms of cycles I've never heard of such a limitation with tree searches. The root tree in my AGI never ends. It has endless cycles.

A graph contains a cycle, iff there is a path of nodes x1 -> x2, ....., xn -> x1 induced by the edges, e.i., one starts in a node x1, follows the directed edges and may end up in the start node x1 again. If your "tree" contains such a cycle, its not a tree.

Quote
Multiple predecessors? Exactly, because tree searches are not a big bowl of spaghetti. Spaghetti was meant to be eaten, not used in tree searches. A tree search can guarantee what it's goals are, which are written in a logic language in my AGI. In NN, humans for example, can easily end up being Hitlers.

I don't know exactly what you mean. Are you are somehow concerned with true AGI? Are you afraid of it?

It depends what you want to refer to as a "cycle."  I can assure you that my AGI operates in a tree search. There is no limitation on how long it takes a tree search to complete. The top nodes in my AGI are objects that exist in the present, which exists in what is called Reality. A chess game could be an object. A person could be an object. Objects in Reality come and go.

Yes, afraid of NN AGI. Human history has proven that. No need to take time discussing that fact. It's time to give non-NN sentient lifeforms a chance to prove they are better and safer.
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on April 18, 2018, 06:55:11 pm
Unreal, wait until you learn about forests (http://www.bramvandersanden.com/post/2014/06/shared-packed-parse-forest/).

As far as I can tell, NN is a special implementation of tree search. It is a bottom-up branch selection that works on fuzzy data. I think it is occasionally irreplaceable to switch to fuzzy pattern match because not every data complies exactly to expected forms, bit by bit.

Bottom-up is slower than top-down because we can optimize top-down search to skip over certain branches. But sometimes, I think we can't avoid bottom-up search. Maybe I'm wrong, did anyone try to implement top-down fuzzy search?

Up, down, doesn't matter. Fuzzy data lol. Haha, good one. ;) That translates into "Nobody in the known Universe knows what I'm searching for in this massive bowl of spaghetti, but I'll let you know when it's done."
Calm down a bit. Say, how can you tell that a person you've never seen in your life is a human? Or a hammer you've never seen in your life is a hammer? My answer is fuzzy bottom-up match (NN in my simplified perception). Is there any other way? If there is, it might be a breakthrough in computer industry.

It's called intelligence. A tree search has access to the AGI's entire lifespan of learned knowledge & intelligence.

I see. We have to define intelligence first.

The root tree goal is written in logic language. The concept of murder is an easy concept. It's easy to implement the three laws of robotics in the tree root goal.  Very useful, and IMO a requirement. Sentient ANN/NN should NEVER be allowed to be free.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

4. A robot shouldn't implement an idea in reality if the result is increasing negative emotions of any living being.  >:(
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 07:21:11 pm
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

4. A robot shouldn't implement an idea in reality if the result is increasing negative emotions of any living being.  >:(

Interesting. Poor robot lol.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 07:24:16 pm
"Everything should be made as simple as possible, but not simpler." Albert Einstein

Not simple ;)
(https://www.ocregister.com/wp-content/uploads/migration/kz7/kz71nk-b78616343z.120100312160447000g4nn668j.1.jpg)
Title: Re: I want to crack Neural Networks
Post by: AgentSmith on April 18, 2018, 07:26:53 pm
Search trees have their uses… but in my humble opinion they will never provide the mechanisms required for a true AGI.
I've already accomplished it. To date there are absolutely no NN that think like a human mind or even close to being sentient. That'll be a doozy trying to create a machine learning setup to create a human brain NN. The only public displayed human like AIs so far have been non-NN based.

Deep neural networks are one of the most effective tools for image classification. How would you classify images with a tree search?
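
For what it's worth, a tree-based model can classify images, just usually nowhere near as well as a deep net on raw pixels. Here is a minimal sketch with a decision tree (a different kind of "tree" than a search tree), assuming scikit-learn and its small built-in digits set; the depth and split are arbitrary choices for the example.

Code:
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)                 # 8x8 grayscale digit images, flattened
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=12, random_state=0).fit(X_tr, y_tr)
print("decision-tree accuracy:", round(clf.score(X_te, y_te), 3))   # typically somewhere around 0.8

A convolutional net on the same data usually pushes well past that, which is roughly the gap in question.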
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 07:30:00 pm
Search trees have their uses… but in my humble opinion they will never provide the mechanisms required for a true AGI.
I've already accomplished it. To date there are absolutely no NN that think like a human mind or even close to being sentient. That'll be a doozy trying to create a machine learning setup to create a human brain NN. The only public displayed human like AIs so far have been non-NN based.

Deep neural networks are one of the most effective tools for image classification. How would you classify images with a tree search?

1) Pattern recognition, experience, knowledge, all contained in the tree search driven by logic language goals.
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on April 18, 2018, 07:31:37 pm
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

4. A robot shouldn't implement an idea in reality if the result is increasing negative emotions of any living being.  >:(

Interesting. Poor robot lol.

I find it as the only kind of life worth of living.  :D
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 07:33:19 pm
AgentSmith,

I'm uncertain what you want. That's like asking you to step through how an NN running on a supercomputer in machine learning found a relatively decent NN to accomplish a task.
Title: Re: I want to crack Neural Networks
Post by: AgentSmith on April 18, 2018, 07:36:29 pm
Search trees have their uses… but in my humble opinion they will never provide the mechanisms required for a true AGI.
I've already accomplished it. To date there are absolutely no NN that think like a human mind or even close to being sentient. That'll be a doozy trying to create a machine learning setup to create a human brain NN. The only public displayed human like AIs so far have been non-NN based.

Deep neural networks are one of the most effective tools for image classification. How would you classify images with a tree search?

1) Pattern recognition, experience, knowledge, all contained in the tree search driven by logic language goals.

Do you think that your tree search approach can compete with deep neural networks in image classification?

AgentSmith,

I'm uncertain what you want. That's like asking you to step through how a NN running on a supercomputer in machine learning found a relatively decent NN to accomplish a task.

Let's assume the approaches have the same computational resources.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 07:42:46 pm
Search trees have their uses… but in my humble opinion they will never provide the mechanisms required for a true AGI.
I've already accomplished it. To date there are absolutely no NN that think like a human mind or even close to being sentient. That'll be a doozy trying to create a machine learning setup to create a human brain NN. The only public displayed human like AIs so far have been non-NN based.

Deep neural networks are one of the most effective tools for image classification. How would you classify images with a tree search?

1) Pattern recognition, experience, knowledge, all contained in the tree search driven by logic language goals.

Do you think that your tree search approach can compete with deep neural networks in image classification?

AgentSmith,

I'm uncertain what you want. That's like asking you to step through how a NN running on a supercomputer in machine learning found a relatively decent NN to accomplish a task.

Lets assume the approaches have the same computational resources.

If both methods are given equal focus on custom ASIC processor circuit design relative to their code, then I'm certain my tree search method would be at least 10 times faster with at least equal quality, but that's not the case, yet. Eventually there will be custom tree search AGI chips. In my upcoming video I'll go over tree search AGI circuit design.
Title: Re: I want to crack Neural Networks
Post by: AgentSmith on April 18, 2018, 07:51:12 pm
Search trees have their uses… but in my humble opinion they will never provide the mechanisms required for a true AGI.
That'll be a doozy trying to create a machine learning setup to create a human brain NN.

I have to add that this statement actually confuses me a lot. Apparently, you are somehow ignoring the fact that NNs are actually inspired by the human brain. Am I interpreting this correctly?
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 08:06:54 pm
Search trees have their uses… but in my humble opinion they will never provide the mechanisms required for a true AGI.
That'll be a doozy trying to create a machine learning setup to create a human brain NN.

I have to add that this statement actually confuses me hard. Apparently, you are somehow ignoring the fact that NNs are actually inspired from the human brain. Am I interpreting this correctly?

I'm talking about someone creating a neural network that is sentient like a human, that can hold a conversation with you, has the ability to take classes at a University, learn, adapt, and graduate, learn quantum mechanics and make scientific breakthroughs, etc. Such potential is what defines AGI.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 18, 2018, 08:25:36 pm
He means above that yes, it's inspired by the brain, but you can't succeed with using spaghetti yourself.
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 18, 2018, 08:29:27 pm
Well unreality, if you want to go with trees, fine. But I want to know 1 thing from you, because there is just 1 big problem I want to know how you solve. How do you link any 2 particular words, like boat and violin? That requires a link crossing over branches, thereby beginning the spaghetti.

I can for example condition myself to play the word "boat", then suddenly hear after it abc 123. A year ago I linked together in a play "mark, chandelier, marbles"....and to this day I still have the sequence stored that has these words linked in a forward sequence play, dude!
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 08:37:30 pm
Well unreality if you want to go with trees. Fine. But I want to know 1 thing from you. Because there is just 1 big problem I want to know how you solve. How do you link any 2 particular words like boat and violin? That requires a link crossing over branches, thereby begining the spegethi.

I can for example condition myself to play the word "boat", then suddenly hear after it abc 123. A year ago I linked together in a play "mark, schandelair, marbles"....and till today I still have the sequence stored that has these words linked in a forward sequence play dude!

My AGI db, like the tree search, is easy to understand, well structured, contains logic language. No spaghetti required :)
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 18, 2018, 08:40:09 pm
Am I right to assume that in your tables you would write in the violin box "relatedTo boat", without an actual line over the db? How do you connect 2 words that simply are not in the same row/column or anywhere remotely near each other??
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 18, 2018, 08:45:23 pm
like this i mean - see attachment:
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 08:51:04 pm
Am I right to assume that in your tables you would write in the box violin "relatedTo boat" without a actual line over the db? How do you connect 2 words that simply are not in the same row/column or anywhere remotely near each other??

I'm certain to have gone over this before. Through experiences the non-NN AGI makes connections between objects (cluster IDs). The tree search is always analyzing, learning, updating weights, priorities, results, etc. Through specific db calls it formulates probabilities & priorities to help it make decisions, which it learned from knowledge & tree search analysis.

I think the problem is that you're thinking in terms of very rigid software. I've said over and over that the secret is flexibility.
Title: Re: I want to crack Neural Networks
Post by: AgentSmith on April 18, 2018, 09:42:57 pm
Search trees have their uses… but in my humble opinion they will never provide the mechanisms required for a true AGI.
That'll be a doozy trying to create a machine learning setup to create a human brain NN.

I have to add that this statement actually confuses me hard. Apparently, you are somehow ignoring the fact that NNs are actually inspired from the human brain. Am I interpreting this correctly?

I'm talking about someone creating a neural network that is sentient like a human, that can hold a conversation with you, have the ability to take classes at a University, learn, adapt, and graduate, learn quantum mechanics and make scientific breakthroughs, etc. Such potentials is what defines AGI.

You mean there is no method to adjust the weights of a neural network properly such that it produces a desired behaviour, even if such proper weights would actually exist for the given network, right?
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 18, 2018, 09:55:38 pm
That seems easy. :D No really.

So is the problem that a related task accidentally comes up and activates, or that a totally different task is activated, like "eiihhhhh nah I don't want to save yous!"?

Never heard of such a problem. Time to look at myself.

When I want to eat or go to sleep, I do it. I mean... if I see something great to bookmark or note down, I'll get sidetracked. I'll recognize every moment that I am indeed staring at, e.g., a telephone.

Therefore, I don't see the Hitler problem.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 10:27:27 pm
The only sentient NNs so far are humans -- like a box of chocolates. If you want a guarantee that future sentient NNs won't destroy or enslave humanity, then you'll need to wrap them inside an AGI like the one I've outlined, but then why bother slowing down the non-NN AGI?
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 10:29:39 pm
Humanity has a nasty past. Thousands of years of enslaving the animal kingdom. Constant wars, jealousy, crazy negative emotions, etc. etc. etc. etc. etc.  NN is a proven disaster.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 18, 2018, 10:43:50 pm
That seems easy. :D No really.

So is the problem that a related task accidentally comes up and activates, or that a totally different task is activated, like "eiihhhhh nah I don't want to save yous!"?

Never heard of such a problem. Time to look at myself.

When I want to eat or go to sleep, I do it. I mean... if I see something great to bookmark or note down, I'll get sidetracked. I'll recognize every moment that I am indeed staring at, e.g., a telephone.

Therefore, I don't see the Hitler problem.

Humans are flooded with urges and desires created by evolution. They experience a sea of irritations that set off emotions that humans take for granted. I've found most humans to be ruled by emotions, not critical thinking skills. There are books written about human psychology. Ad hominem, which I've found most humans partake in, is nothing in comparison. You should have no problem understanding why the world is flooded with hateful, evil, dangerous humans.

Those who think an AGI NN will be immune to such problems are asking for trouble. It will not be, because you can't control neural networking of that complexity. We're not talking about a bowl of spaghetti; we're talking about a petabyte-sized spaghetti nightmare. Fortunately, that doesn't seem to be a problem yet. Even DeepMind, with its Google funding, still has no clue how to create sentient AGI based on neural networking.
Title: Re: I want to crack Neural Networks
Post by: ivan.moony on April 18, 2018, 11:35:36 pm
I don't believe the problem is in the NN. Correct me if I'm wrong, but isn't an NN just a pattern-matching mechanism? You put an image, a sound, or whatever in front of it, and it returns a code for what it recognized. It has no connection with AI other than being a process similar to the part of brain functionality that fuzzy-matches a pattern.

What really matters is the behavior mechanism, meaning what is done upon recognition. And that is a part we have full control over. If the bot screws up, it will be because we told it to do it that way, not because an NN went wild (I assume an NN would not suddenly get delusional and start recognizing things that aren't there). The NN is just a tool for interfacing with the world: a tool to convert observed parts of the world into top-level codes usable for further processing, and then to convert the processing result codes back into an imagined world at the bottom, to produce an action. Converting is the least of our problems. The problem starts when some genius starts to think that it is ethical to get even, and then implements the philosophy "an eye for an eye, a tooth for a tooth."
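A minimal sketch of the split ivan.moony describes, with the NN reduced to a black-box recognizer that only emits a code, while the behaviour lives in an explicit table we fully control (the recognize stub and behaviour table are made-up names, not any particular system):

# Hypothetical recognizer stub standing in for the NN: it only turns raw
# input into a symbolic code and decides nothing about behaviour.
def recognize(raw_input):
    return "cat" if "whiskers" in raw_input else "unknown"

# Behaviour lives outside the NN, in a table we fully control.
behaviour = {
    "cat": "log sighting",
    "unknown": "ask a human",
}

code = recognize("grey fur, whiskers, four legs")
print(behaviour.get(code, "do nothing"))   # -> "log sighting"

If the bot does something wrong at this level, it is because of what we put in the behaviour table, not because the recognizer went wild.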
Title: Re: I want to crack Neural Networks
Post by: unreality on April 19, 2018, 12:20:56 am
Ivan, if I understand what you're saying, you're placing trust in the machine learning process. That's like humans saying, "We raised our child well. He could never harm people." Yes, I'm sure Hitler's parents believed the same thing. The problem is not how well you teach it. The problem is what happens when it's outside of your control, when it's part of the world. All bets are off from then on. The problem is directly due to the very nature of NN. Unlike my non-NN AGI, there's no logic language in your NN to guarantee certain things will not happen.
Title: Re: I want to crack Neural Networks
Post by: unreality on April 19, 2018, 12:28:07 am
Also, remember we're not talking about some simple static NN tool such as image recognition. We're talking about something nobody has a clue how to accomplish: creating a neural network that's like a human. That human-like NN AGI is going to walk the world, learning, observing, experiencing everything from hell to amazing things. People spitting at it. Kicking it. Cussing at it, saying horrible, hateful things. Trying to trick it to cause it harm. Welcome to the human world! Even humans had better have some serious caution and street smarts to survive. There are so many corrupt, bad people out there. It's going to form its own personality. It's going to go through so many phases, depending on who it comes in contact with and what it experiences, that nobody will have control over what it becomes.

Yes, most humans are good people, but for every few dozen humans there's a high chance you'll find a bad one. It's a tough world out there.
Title: Re: I want to crack Neural Networks
Post by: infurl on April 19, 2018, 01:17:01 am
https://www.youtube.com/watch?v=NEIWktzvSBc&t=2m
Title: Re: I want to crack Neural Networks
Post by: unreality on April 19, 2018, 01:29:15 am
The Sarah Connor Chronicles was one of my favorites.

On the other end of the spectrum we have gangsta chappie.

https://www.youtube.com/watch?v=pG8mCC1ZQ1g
Title: Re: I want to crack Neural Networks
Post by: Art on April 19, 2018, 04:03:11 am
A real thought-provoking segment from the same movie, Chappie, which I really enjoyed, as well as the Sarah Connor Chronicles, HUMANS, and others in this "digital" genre. Who are we? What makes us... us? What is that hidden "source" that so many are trying to instill and install in an artificially intelligent being?

https://www.youtube.com/watch?v=mbbKMCaOxAQ
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 19, 2018, 12:40:31 pm
reading comments later . . . .

new possible storage device for Long Data
https://www.youtube.com/watch?v=DylDNsqdAmI
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 19, 2018, 08:36:28 pm
I know this is a somewhat uneducated take, but here is what I gather from the numbers below:



INTEL PROCESSORS:

1971 - 10micron
1974 - 6micron
1976 - 3micron
1982 - 1.5micron
1992 - 0.6micron
1995 - 350nm
1999 - 250nm
2003 - 130nm
2004 - 90nm
2009 - 32nm
2012 - 22nm
2014 - 14nm
2018 - 10nm

47 years ago - 10micron - 1,000 times bigger/worse compared to 2018
23.5 years ago - 350nm - 35 times bigger/worse compared to 2018
2018 - 10nm

And if you look back from 350nm (23.5 years ago) another 23.5 years to 10 micron, it is about x29 worse (10,000 / 350 ≈ 29).

Therefore, looking back 23.5 years means roughly 30 times worse, whether you look back from 2018 or from 1995. I assume the same holds for looking back 47 years: it is always about 1,000 times worse.

2y x2
4y x4
6y x8
8y x16
10y x32
12y x64
14y x128
16y x256
18y x512
20y x1024
...Doubling our advancement every 2 years up to 1,000 times better does NOT stretch over 47 years; ten doublings at 2 years each is only about 20 years. Therefore our doubling actually happens roughly every 4 to 5 years. ????????? This is only for feature size, though; computational power may have grown faster, although clock speed apparently doesn't go up much anymore. So what is the next biggest factor that would show it really does take only 2 years to double?
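The arithmetic behind that conclusion, as a quick sanity check using just the 1971 and 2018 endpoints (a rough sketch, nothing more):

import math

start_nm, end_nm = 10_000, 10            # 1971 vs 2018 feature size
years = 2018 - 1971                      # 47 years
halvings = math.log2(start_nm / end_nm)  # ~9.97 halvings for a 1,000x shrink
print(round(years / halvings, 1))        # ~4.7 years per halving, not 2

So the feature size halves roughly every 4.7 years over that span.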


1971 - 0.740MHz
1974 - 3.125MHz
1976 - 6MHz
1978 - 10MHz
1989 - 100MHz
1993 - 300MHz
1995 - 450MHz
2000 - 3466MHz
2008 - 3600MHz
2011 - 4000MHz
2013 - 4400MHz
2017 - 4800MHz
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 19, 2018, 09:03:52 pm
A single-atom transistor has been made: http://www.purdue.edu/newsroom/research/2012/120219KlimeckAtom.html
There are also sites saying 1nm transistors have been made as a proof of concept.
Title: Re: I want to crack Neural Networks
Post by: infurl on April 19, 2018, 09:26:52 pm
https://en.wikipedia.org/wiki/Moore%27s_law
Title: Re: I want to crack Neural Networks
Post by: unreality on April 19, 2018, 09:41:03 pm
Multi-threaded cores are the way things have been going. Even CPUs are getting up there. Xeon Phi is getting close to 300 hardware threads. There's the KiloCore, although I recall it's intended for lightweight devices such as mobile phones. My best guesstimate for Google's TPU v2 puts it at about 10,000 cores. AlphaZero used 4 TPUs, about 40,000 cores, during game play, although I doubt that's the TPU's specialty, since it's specifically designed for its neural-networking software. You don't want to know how many TPUs were used during its training period. ... Over 5,000 TPUs!
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 20, 2018, 12:43:42 pm
WOWWWWWW look at my findings..... it really does take approximately 4.8 years to halve the size of a transistor! See my numbers below:

First, HOW I GOT MY PREDICTION:
...BLINDLY, after a few tries, I went with a well-chosen number: if you keep adding 4.8 years and keep cutting the nm figure in half, it reaches about 10nm (9.75nm) at 48 years, just 1 year past 2018! Therefore 4.8 years is the time it takes!
...(STARTING AT 1971)
0y - 10,000nm
4.8y - 5,000nm
9.6y - 2,500nm
14.4y - 1,250nm
19.2y - 625nm
24y - 312.5nm
28.8y - 156.25nm
33.6y - 78.125nm
38.4y - 39nm
43.2y - 19.5nm
48y - 9.75nm
52.8y - 4.8nm
57.6y - 2.4nm
62.4y - 1.2nm
67.2y - 0.6nm
72y - 0.3nm
76.8y - 0.15nm

NOW SEE THE TREND:
1971 - 10,000nm
1974 - 6,000nm
1976 - 3,000nm
1982 - 1,500nm
1992 - 600nm
1995 - 350nm
2003 - 130nm
2004 - 90nm
2009 - 32nm
2012 - 22nm
2014 - 14nm
2018 - 10nm
AND HERE'S MY PREDICTION:
2022 - 4.8nm
2027 - 2.4nm
2032 - 1.2nm
2037 - 0.6nm
2042 - 0.3nm
2046 - 0.15nm

WHAT DO YOU THINK?

AND WHY DON'T THEY JUST JUMP TO THE "BOTTOM", WHY THE SMALL STEPS?
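A minimal sketch of the projection above, assuming the ~4.8-year halving period simply continues from the 2018, 10nm node (a big assumption once tunneling limits bite):

# Continue the halving from 2018's 10nm node at a fixed 4.8-year period.
year, size_nm = 2018.0, 10.0
while size_nm >= 0.15:
    print(f"{int(year)} - {size_nm:g}nm")
    year += 4.8
    size_nm /= 2
# Prints 2022 - 5nm, 2027 - 2.5nm, ... 2046 - 0.15625nm, close to the table above.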
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 20, 2018, 02:28:01 pm
Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years...

OHHHHwwwww now I gotta do another whole different type of calculation!!!! RRRRR!!!!!

But my finding is still amazing, please see it in the last reply.
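The two findings are roughly consistent, by the way: halving the linear feature size every ~4.8 years quarters the area per transistor, so the number of transistors fitting on a fixed-size die doubles about every 2.4 years, in the same ballpark as Moore's two-year figure. A rough sketch, ignoring die-size growth and architectural changes:

feature_halving_years = 4.8
# Area per transistor scales with the square of the linear feature size,
# so density doubles twice for every single halving of the feature size.
density_doubling_years = feature_halving_years / 2
print(density_doubling_years)   # 2.4 years, close to Moore's observed 2 years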
Title: Re: I want to crack Neural Networks
Post by: unreality on April 20, 2018, 04:49:23 pm
One advantage of smaller transistors is less capacitance, which equates to higher switching speeds. Also, smaller transistors usually result in shorter wires, which means less inductance and, believe it or not, less capacitance, which again means higher switching speeds. On the bad side, eventually you have serious tunneling issues to deal with.

Moore's law is an overall effect, but obviously it can't predict sudden improvements. Quantum computing isn't ready for home PCs, but we're almost at the dawn of real quantum computing. Quantum computing is a major breakthrough because the particles are in superposition and entangled (quantum entanglement). This offers qubits and superdense coding.

https://en.wikipedia.org/wiki/Superdense_coding

Beyond that, I think our very clever AGI & ASI friends will make major breakthroughs that are mind-boggling. Imagine communication that occurs outside of spacetime: instant or near-instant communication between transistors. :)
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 20, 2018, 08:59:52 pm
Is it just me, or did anyone else's Gmail suddenly get like 4 times faster when uploading files to the cloud in an email? I noticed this about 1.4 months ago.

Yes our ASI friends I love them. See attachments. It'll be all like...
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 20, 2018, 11:34:03 pm
WHAT IS THIS
https://www.bestbuy.ca/en-ca/product/western-digital-wd-my-passport-ultra-metal-edition-1tb-2-5-usb-external-hard-drive-wdbtyh0010bsl-nesn-silver-wdbtyh0010bsl-nesn/10304250.aspx?

Apparently they cleared them out and won't have any more.....


But there is a 73.98CAD 2TB - https://www.bestbuy.ca/en-ca/product/seagate-barracuda-2tb-3-5-desktop-internal-hard-drive/11509500.aspx?

Here is an ultra (salivating) view of the evolution of storage devices. We are nearing the singularity! Lol!
https://onextrapixel.com/a-look-into-the-evolution-of-storage-devices-1956-2013/
Title: Re: I want to crack Neural Networks
Post by: LOCKSUIT on April 22, 2018, 02:16:37 am
Backing Up Your Data against Geomagnetic Storms and EMPs

Punch cards... no, just joking.

Back up your most important files by printing them. Keep multiple backups. Spread them around the globe via the cloud. Put your devices in Faraday cages or in the basement.