Releasing full AGI/evolution research

  • 290 Replies
  • 161446 Views

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #135 on: October 23, 2020, 06:09:42 pm »
This year has been my most productive year. I learned to program in Python and made my first AI a year ago, and I now understand KNN/K-Means and Random Forests much better; I already had the random forests idea, but KNN was amazing, it took me two days to get so much out of it. I've made a dozen large discoveries and a hundred micro discoveries, and my AGI guide is further along and more incredible than ever before. Among the things I do every month, I just went through all of the Kaggle courses in two days (https://www.kaggle.com/learn/overview). I didn't bother with the code, because coding is expensive time-wise, but I still read everything they said and understood it well. They always go through what others do, and this was a fairly thorough pass: reinforcement learning, NLP, computer vision, word2vec, backprop... They didn't go into much detail, but it's clearer now what they know (and share; they never share everything, or clearly). NLP and RL are not to be separated, do not underestimate their relationship, and computer vision AI is also very similar to NLP AI. I also went through lots of other reading, and generated my own discoveries of course. Soon I'll be creating images, code, and vision versions for my guide to AGI, plus an exotic AGI group, so we can cover more ground and be surer and clearer.

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #136 on: October 23, 2020, 10:13:42 pm »
Also,

In some ways I feel there are two approaches to life/progress, and you could actually put it like this: you could take the Satanist approach or the religious approach. In the godly method you are hopeful, but too hopeful; there is no grounding in reality, just a single book, or what you could call "a thought". The Satanist way is grim and truthful too, but has too little hope: they tell you we die, they promote death, they are crass and nasty. I like to stay in between :). I'm very hopeful so that I have goals (I may seem blissful at times, or "detached and in the future already"), but I'm also very grim, dark, and honest, and I know we are machines that will die if we don't evolve, so that I can reach those future goals (I may seem evil at times, or "far in the past of history"). In AI terms these two things are frequency of observations and reward: they cause you to say things that likely/usually/should happen. You predict your future. Will it be nearby death, or a distant future? The biggest rewards tend to be sparse; how long do meals or love-making last? How long does victory last? We really want reward, but we really want frequency to walk us to the reward, and the walk needs reward to have a reason for the walking. When you lack one of these, either you don't reach the future because you can't walk, or you can't reach the future because you have no clue where to walk.

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #137 on: November 03, 2020, 07:07:33 am »
So I've been thinking. What if ANNs learn physics functions? Are they learning more than just entailments and co-occurrences? Are they acting like a Turing-complete neural computer?

I mean, if you count which words follow the word 'dog' in text, you can find out what dogs usually do, and predict the future in text better. The more data it sees, the better it gets. This models physics in a cheap but easy and pretty helpful way. And instead of taking all the particles' locations and speeds (which is impossible), you take 2D images of the world.

But you can also run a physics sim and get a really good prediction of the future rolling out; it's just super, super costly. There is one interesting thing, though: wood and most surfaces are reflective/refractive to some degree, so you can take the refractions in an image off of wood etc., merge all that data to recover the reflections as they were before refracting, and see something that is not shown in the image directly, e.g. a cat's face even though only a tail is shown.

How would an ANN learn that on its own, though? A net in the shape of a hierarchy, able to adjust its connections to make up rules: could it do it by itself?

If you have a net using backpropagation to find a mapping between input and output nodes, well, yes, it's adjusting the net's rules, but do you just tell it that it has error and to tweak the weights again until it predicts it sees a cat face even though there is only an image of a tail?

When we say backprop makes a mapping from inputs to outputs, we mean it will find the patterns and know which input activates which output. But this can't find all patterns; it seems like a blind way to find patterns.

Let's take a simple dataset and see how/why it'd form a net a certain way.

cat ran, cat ran, cat ran, cat sat

So far the net would, given one word, weight more strongly toward 'ran' than 'sat'. The net is trying to say: OK, this is the right output and you have the wrong output, so let's lower these weights and raise those, continuing through each layer. The idea may be that it gets rid of the non-pattern nodes and merges the pattern nodes where there is the most value, or at least that's what ends up "in" the net, with the simply wasted nodes prunable afterwards. Still not seeing much here, hmm...
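For comparison, here is a minimal frequency-counting sketch (my own toy illustration in Python, not anything from a library) over that same dataset; given 'cat' it predicts 'ran' 75% and 'sat' 25% of the time, which is the behaviour the trained net should end up approximating:

```python
from collections import Counter, defaultdict

corpus = "cat ran , cat ran , cat ran , cat sat".split()

# Count which word follows each word (a simple bigram frequency table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the next-word distribution as plain observed frequencies."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(predict_next("cat"))  # {'ran': 0.75, 'sat': 0.25}
```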

And what about double-sided backprop, i.e. "forbackprop"? Isn't building a bridge better if done from both sides? And middle zones too, no? Is that called a Hopfield net? Or an SVM? Does it exist?

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #138 on: November 03, 2020, 11:48:16 pm »
If the laws of our physics were random, there'd be no patterns/laws. Simple AI rules like counting FREQUENCY capture any pattern/law, even physics our universe doesn't have, at the cost that they aren't so accurate. Other rules are more precise but less general: one may predict the future in a physics sim perfectly but won't be able to predict other physics at all, and the reflections rule for seeing a hidden cat face only works on light, a subset of physics, yet is more flexible than the sim. It's more likely we are merging the general rules like frequency/recency/relatedness, and the rest are built from those; how we came up with physics sims ourselves is an example. Backprop and neural Turing machines seem to want to find patterns/target functions on their own, but they do so by using key parts like recency, long-term memory, and relationships, just like my AI can learn deeper patterns once it has the few common ones. It seems backprop is only a way to learn FREQUENCY and RANDOM FORESTS. FREQUENCY etc. are universal rules; they work well together, and work well on any "mapping" or functions/laws that need to be understood.
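To make that claim concrete in the simplest possible case, here is a small sketch (my own, plain numpy, no particular framework): a one-layer softmax "net" trained by gradient descent on the Reply #137 dataset converges to exactly the observed frequencies, 75% 'ran' and 25% 'sat'.

```python
import numpy as np

# Next words observed after "cat": ran, ran, ran, sat  ->  class indices 0, 0, 0, 1
vocab = ["ran", "sat"]
targets = np.array([0, 0, 0, 1])

logits = np.zeros(2)   # the whole "net": two output weights for the context "cat"
lr = 0.5

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(2000):
    p = softmax(logits)
    # Average cross-entropy gradient over the four examples: p - one_hot(target)
    grad = np.zeros(2)
    for t in targets:
        g = p.copy()
        g[t] -= 1.0
        grad += g
    grad /= len(targets)
    logits -= lr * grad   # the gradient-descent step ("backprop" for this one layer)

print(vocab, softmax(logits).round(3))   # ['ran', 'sat'] [0.75 0.25]
```

Whether deeper nets reduce to frequency in the same way is of course the open question; this only shows that the one-layer case agrees with plain counting.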

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #139 on: November 22, 2020, 11:57:48 am »
Lol, I'm reading this guy's research notes and he says exactly what I said:
http://www.adaptroninc.com/BookPage/1969-and-1970

"98.  Sight and attention: I can pay attention to a spot on the wall and my attention is on a specific very small area of the retina. I also can pay attention to something out of the corner (side) of my eye. I can stare at one thing but not see it but see something out of the corner of my eye but not in so much detail as if I looked straight at it. So my attention can switch not only to sight sound and feel but to a specific area of sight or even just the general picture of what I’m looking at. Now when you imagine something you combine the small specific sight areas into a general picture. Like a man with green togs a straw hat walking on a beach, each specific thing is seen in memory and then combined to form a general picture."

He sounds very precise so far, writing about how the body has wires, I/O, electricity (the "blood"), etc. That's something I do too.

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #140 on: November 28, 2020, 06:58:33 pm »
Geoffrey Hinton has said about backprop (which he helped bring into existence): "My view is throw it all away and start again".

Sometimes you need to go the wrong way to get to the right way; there's no "clear" path, or else we would have an easy walk! A common coach will tell you: keep going, get through it, don't give up.

HS

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #142 on: December 04, 2020, 01:20:19 pm »
No one mentioned my architecture even one bit in that video (and no one on the internet has my full architecture)... What's funny is that mine is energy-based too, but the energy only forms the network / accesses nodes. The energy is even part of the prediction score.

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #143 on: December 15, 2020, 04:54:31 pm »
After studying lateral inhibition, I now believe it does not exist. Check out the Mach band illusion, and White's effect illusion (the version with the two big blue balls).

For the lateral inhibition case, imagine two 1-pixel-wide lines side by side, both anchored to the bottom of the image editor so only one end (the tops) of the two lines is visible, and one of the pixel towers is 2 pixels shorter. The difference is noticeable, but the farther apart the lines are, the less you notice the height difference, because the distance between the two lines isn't anything you've seen as often before, and looking at one line at a time, each matches the other so closely. We know this isn't because of retina focus/location offset, because it happens even if you look at both lines when they are really close (when you look at all of both lines, not just the tips); thankfully you can also look at only the top tips of both at the same time, and doing that means you can't compare the complete lengths of the two, since you're no longer looking at the whole line. The only way to recognize which line is higher is when they are very close side by side, you look at both at the same time, and you look only at the tips. The idea here is that the Mach illusion is you recognizing the difference, and the closer they are, the more you do so.

The brain does have a strong emphasis on edge detection; it builds all objects, and you don't need lines to make a toaster, just differently shaded blobs of grey. But we don't change the sensor truth, and the brain already has a pooling effect where, if it sees enough evidence of a shape, it decides "OK, that is definitely a cube there, I'm fully positive." And while the illusion looks real, right in our vision, so do the motion illusion, the spinning ballerina silhouette illusion (which way she turns is based on your primed thoughts), Shepard's illusion (the vertical table looks longer), the Ponzo illusion (the same-size man farther back looks bigger), the brick road illusion (two copies of the same image, but the right one looks much more slanted), and everything else you think you see, lol.

White's effect happens because you pay attention to one sphere or both, and the white or black mixes in with whatever your retina "eye" is on in your retinal image.

Since I've incorporated vision and cortical columns into my AGI design, I strongly believe there are no place cells and that all senses come from other senses or sensors. When you move your hand around a cup with your eyes closed, multiple points of touch will weigh in and vote on what object it is. And when you move your hand and then feel something, you seem to know where it is, which helps you: for example, you feel a pointed tip (of a pen), then you know you've moved your hand 4 inches and then feel a clamp (of the pen, for shirt wearing), i.e. you know the distance between the tip and the clamp as well. But this is not happening because of place cells or anything motor-related; your vision is simply dreaming the image. As you visually imagine moving your hand 4 inches and feel the air hit your hand, you end up with an image of a tip 4 inches away from a clamp, plus the touch sensations of the pen's parts.

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #144 on: December 17, 2020, 05:01:38 pm »

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #145 on: December 25, 2020, 05:22:53 pm »
156,000 humans die per day (pre-COVID), and the causes are broken down at the link below. 49,000 deaths a day are blood-vessel related, which simplifies things: we need nanobots to go through the vessels and repair us. COVID is 11,600 deaths per day on average as of today, almost half as bad as cancer. https://www.visualcapitalist.com/how-many-people-die-each-day/

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #146 on: December 26, 2020, 09:34:56 am »
Can anyone's AI (Korrelan's, Google's, etc.) recognize the letter A in the images below? And how sure is your AI that the A is there, and how many training examples of what an A looks like were needed?

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #147 on: December 30, 2020, 07:22:10 pm »
One person said you'd need full AGI to solve the above. Remarkably, I say you need only a small part of AGI to ace it. How odd is that, to get two very opposing views in the same week! I'm thinking about coding it in the coming months. I believe I can ace it in accuracy using only one example of what an A looks like. All these distortions of the A are just location and brightness offsets: rotate an A, stretch it, blur it, flip it, brighten parts of it, upsize it, remove color, rotate parts of it, etc. A human, given one example picture of a never-before-seen object, e.g. an elephant-dragon-frog, will easily be able to recognize it later despite many distortions. All the mentioned distortions of the A (rotate/blur/etc.) are solved by a few tricks. To illustrate: imagine we see the A now, and it is the same but much brighter. When each pixel is compared to our stored copy it is off a bit, not the same brightness, but the amount of error is the same for the rest of the pixels: they are ALL 3 shades brighter, so it doesn't get penalized so badly. Do this for each layer and you have an efficient network. For location it is the same. The simplest part of my to-be AGI is recognition; everyone's just doing AI wrong... Recognizing the A is simple per se, but the distortions make it match less, right; yet most (99%) of the A is there, and there's relatively no difference between its parts. That is the pattern in recognizing the A: similar brightness, similar location, and similar relative error expectation, e.g. "hey, I'm off by 2 shades and my bro is off by 4 shades, we're similar, bro!" If this works, it will start everything, I think!! Lock finally tackles vision, omg! It'd be the 2nd algorithm I've coded.

It's once you merge the two, brightness AND location, that you get the full ability.
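A minimal sketch of the brightness half of that idea (my own toy illustration in Python/numpy, not a full recognizer): score a candidate by how consistent the per-pixel error is rather than how small it is, so a uniformly brighter A still matches while a shuffled image made of the same pixels does not.

```python
import numpy as np

rng = np.random.default_rng(0)

template = rng.integers(0, 200, size=(8, 8)).astype(float)   # stored copy of the "A" (toy image)
brighter = template + 30                                      # same A, every pixel 30 shades brighter
shuffled = rng.permutation(template.ravel()).reshape(8, 8)    # same pixels, rearranged

def error_spread(img, ref):
    """How consistent the per-pixel error is; 0 means every pixel is off by the same amount."""
    return (img - ref).std()

print(error_spread(brighter, template))   # 0.0   -> uniform offset, still recognized as the A
print(error_spread(shuffled, template))   # large -> different arrangement, not the A
```

Location offsets would get the same treatment (compare the gaps between parts rather than absolute positions), which is the "merge the two" part.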

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #148 on: January 01, 2021, 05:50:47 am »
It would be best if we made a new, from-scratch, full guide to AGI; there's always too much "go look here, link this, link that"... Just mentor us: make it a very intuitive guide that makes sense, and keep it short. I unfortunately work in another area, not exactly modern AI / backprop. I have learned a lot after 5 straight years of work towards AGI, and yet if I had a full guide of all we've learned so far from CNNs, Transformers, etc., I could maybe be much smarter after just a one-day read. But nobody can organize such a guide to AGI. I'm beginning to think more and more that nobody really understands the connections between architectures, nor can simply say what the tricks are. There's too much math behind it; even though it's fully understood, the math is hiding the actual understanding of what patterns in any dataset are.

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #149 on: January 02, 2021, 12:06:40 pm »
I think I'm onto something big. In my big AGI design there is a smaller part just for recognizing objects. When I tried thinking about how it'd work for images instead of text, I realized something new was needed, and it should help text recognition too. A human can see a new image they've never seen before, only one example of it, and then, given dummy images, easily spot which is the same image despite it being rotated, brighter, stretched, blurred, having parts rotated, being colorless, flipped horizontally, bigger, etc. Same for music: a slowed-down, higher-pitched, louder version of Jingle Bell Rock is very recognizable despite being a different "image". That's because there really is not much difference; relatively, all the parts are the same. Let me explain.

So, originally you store a node in the brain for a text word, e.g. "hello". If you then see "hell" you still recognize it to some degree, because of (1) how many of the expected parts are there (e.g. 4 of 5 letters are there, so it is 80% triggered/recognized), and (2) the time delay between where it expects the parts, e.g. "olleh" / "hellzzzzo" / "hZeZlZlZo". So it's flexible: it recognizes typos and delays in location. "you how arrrre ? doing", lol.
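A toy version of point #1 only (my own sketch; the delay/location part of point #2 is sketched a bit further down): score a seen string by what fraction of the stored word's letters appear in order.

```python
def part_score(seen, stored):
    """Fraction of the stored word's letters found, in order, in what was seen."""
    found, pos = 0, 0
    for ch in stored:
        hit = seen.find(ch, pos)
        if hit != -1:
            found += 1
            pos = hit + 1
    return found / len(stored)

print(part_score("hell", "hello"))       # 0.8 -> 4 of 5 parts present
print(part_score("hellzzzzo", "hello"))  # 1.0 -> all parts there, just delayed
print(part_score("olleh", "hello"))      # 0.2 -> order broken, barely recognized
```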

So with an image of a stop sign, say we see one that is just much brighter. Obviously, if we total up the per-pixel differences to the original memory image, the global sum of brightness difference will be huge: each pixel is 6 shades brighter than the pixel we compare it to. Yet an image of a frog, not so bright, can have a very similar global brightness sum (how bright the image is overall) simply by shuffling the pixels around. E.g. 3 pixels of brightness 3, 6, 2... if the image is 2 times brighter it is 6, 12, 4... it still looks like a stop sign image, just brighter... but if we take the original stop sign image 3, 6, 2 and shuffle the pixels, we get an image of a frog, e.g. 6, 3, 2! The time delay (location) is of course off, but it won't help enough on its own. In fact, in the paper below they cut out real wood pieces and, just by rearranging them, they get a different image; clearly the arrangement of those pieces is not usable to see a stop sign once it looks like a human face. https://light.informatik.uni-bonn.de/papers/IseringhausenEtAl-ComputationalParquetry-TOG2020.pdf

So how do we realize that the much brighter stop sign, although further from the stored copy in total brightness than the frog is (the frog being made by shuffling the pixels around, no brighter than the original stop sign!), is actually a stop sign and not a frog? Time delay helps some, but clearly won't help enough and will lie to us (frog = stop sign). Each pixel has a much different brightness, so for all we know the pixels could be arranged differently, and again, the frog image has less brightness difference in the global sum. With the brighter stop sign, we must not just compare pixel to original pixel; we need to look at it like this: suspect image pixel 1 is 6 shades brighter than original image pixel 1, and the other pixels, when compared, are also 6 shades brighter. That's the pattern. In text it looks like this: we see "hello" and then see "hzzzezzzlzzzlzzzo". Obviously the location delay is large, but after the first painful wait of 3 z's ("zzz"), it sees another identical wait ("zzz" again), so it is less upset and expects the locations of the rest of "hello". It is hello. Whereas (without the spaces and CAPS) "H sfw E uowfg L rtl L opywds O" is obviously not spaced evenly like "H rts E jui L dfg L awq O". Although both look really silly, the latter has the pattern "hello" in it, because there are 3 random letters between each of the letters we want; the former has random letters scattered all over and doesn't say hello. It is expected error, and really useful for image and music recognition, I believe.
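Continuing the toy sketch (again my own illustration, with hypothetical helper names): check whether the gaps between the expected letters are all the same, which is the "expected error" pattern described above.

```python
def gaps(seen, stored):
    """Gaps between the stored word's letters as they appear, in order, in 'seen'."""
    positions, pos = [], 0
    for ch in stored:
        hit = seen.find(ch, pos)
        if hit == -1:
            return None
        positions.append(hit)
        pos = hit + 1
    return [b - a for a, b in zip(positions, positions[1:])]

def evenly_spaced(seen, stored):
    g = gaps(seen, stored)
    return g is not None and len(set(g)) == 1   # every gap identical -> the error itself is a pattern

print(evenly_spaced("hzzzezzzlzzzlzzzo", "hello"))               # True  -> same wait each time
print(evenly_spaced("HsfwEuowfgLrtlLopywdsO".lower(), "hello"))  # False -> random spacing
print(evenly_spaced("HrtsEjuiLdfgLawqO".lower(), "hello"))       # True  -> 3 fillers between each letter
```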

For images it uses location, brightness, and color: relative expected error. So if all of the image is brighter, or stretched, it sees the same gap from expectation across the whole image. It still works fine if the stop sign's top-right and bottom-left patches have inverted brightness: it will process each of the 4 squares with less upsetness, and then, when it compares the 4 squares themselves to each other, it will see that square 1 and square 2 have very different brightnesses because of the inversion, but so do the other 2 corners.
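The same idea, one level up the hierarchy (again only my own toy reading of the paragraph above): measure each quadrant's offset against the stored image, confirm the error is clean inside each quadrant, and then compare the quadrant offsets to each other.

```python
import numpy as np

rng = np.random.default_rng(1)
template = rng.integers(50, 150, size=(8, 8)).astype(float)   # stored stop-sign image (toy)

# Candidate: top-right and bottom-left quadrants brightened, the other two darkened.
candidate = template.copy()
candidate[:4, 4:] += 30
candidate[4:, :4] += 30
candidate[:4, :4] -= 30
candidate[4:, 4:] -= 30

def quadrant_offsets(img, ref):
    """Mean brightness offset and error spread for each of the four quadrants."""
    out = []
    for rows in (slice(0, 4), slice(4, 8)):
        for cols in (slice(0, 4), slice(4, 8)):
            diff = img[rows, cols] - ref[rows, cols]
            out.append((diff.mean(), diff.std()))
    return out

for mean, std in quadrant_offsets(candidate, template):
    print(f"offset {mean:+.0f}, spread {std:.0f}")   # spreads are 0: each quadrant is a clean match
```

The four offsets (-30, +30, +30, -30) then form their own little pattern that the next layer up can match, exactly the way the pixels did.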

For a music track, it can be stretched longer, louder, or higher-pitched (size, brightness, color ;p), and it should work fine, seeing the expected error across it. If we feed a small hierarchical network the music track flipped and played backwards, it doesn't really look at it like that; it just puts each "pixel" into the bottom of the network hierarchy, and then, if the end of the song has the expected brightnesses, locations, and pitch, it will light up parts of the node in the higher layer of the neural network (the start of that node, despite it being at the end).

Does it make sense to you?

If you blur, stretch, or brighten a stop sign, it is very different to a computer matching it against a memory, but obviously it is really only a little different: it matches a lot, because between its parts everything is relatively as expected, the parts are relative, and each is equally off in error. When it comes to pattern finding, you start at exact matches (only I know this, others don't...); exact matching roots all other pattern matching. For example, if you see "cat ran, cat ran, cat ran, cat sleeps" and are given the prompt cat>?, frequency says "ran" is probably 75% likely and "sleeps" 25% likely as the next word. If we want to do better we can recognize translation, e.g. "dog" ran, slept, etc. just as much, so cat~dog, and so maybe cat barks and dog meows; they are extremely close words. And look at this rarer pattern problem: "hat man scarf, book building library, he frog she, wind dine blown, truck rat ?". It is a vehicle that comes next: in every triplet the 1st and 3rd items are similar, so we match each triplet to the others, then to the last triplet, and see that its 1st and 3rd should also be translates... lots of translate matching here, lol!! When it comes to images, a pixel may be brighter, so it isn't exactly exact text matching, but it still starts at exact matches as evidence for anything further: pixel a is a bit brighter than pixel b. It's rooted in something. Exact match. So back to image distortions: there is a lot of relativeness in them, so there is really not much distortion, it's clearly a stop sign, and it's pattern matching that reaches that activation/conclusion. It's very simple. But the AI field is not showing how simple it is, and I think that's where it really lacks; it withholds better understanding.

I'm going to try to code it in the coming months, but first I'm busy finishing the guide to AGI to a more stable version, so this may have to wait a bit to get worked on... but I thought I'd share some of the idea with you ahead of time, as it is very interesting/simple and something the AI field can't do as well; they need many examples of a stop sign and still fail over silly small 1-pixel attacks.

When I ask myself why it is still a stop sign image, I start with the pixels we're given, and I know we will do worse if some parts of the stop sign are missing, and I look at the difference in brightness, the location arrangement of parts-of-parts in a hierarchy network, and color too. Then what can we do next? The relative error difference ("how similar is the error") in brightness, etc. So although there is error and there are missing parts while matching a stop sign, we can also *relieve* a lot of the error by seeing the pattern across the errors. When you stretch an image object, it adds a lot of error (though not tons, or else we wouldn't easily recognize it), but it can be resolved by my trick of seeing the pattern of error across the object.

Do note there are other pattern helpers for prediction, e.g. every image I showed you for the past hour was a cloud, so the next image is probably a cloud even if it doesn't look like one; but that only applies if you have a history, and with no history you rely on the image alone. And the reward system simply weights prediction toward loved words/images; it does not tell you what is really in an image.

 

