Releasing full AGI/evolution research

  • 140 Replies
  • 19391 Views

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #135 on: October 23, 2020, 06:09:42 pm »
This year has been my most productive year. A year ago I learned to program in Python and made my first AI, and since then I've come to understand KNN/K-Means and Random Forests much better; I already had the random-forests idea, but KNN was amazing, and it took me only 2 days to get so much out of it. I've made a dozen large discoveries and a hundred micro discoveries, and my AGI guide is further along and better than ever. Among the things I do every month: I just went through all of the Kaggle courses in 2 days (https://www.kaggle.com/learn/overview). I didn't bother with the code, because coding is expensive time-wise, but I read everything they said and understood it well. They survey what others do, and this was a fairly thorough one: reinforcement learning, NLP, computer vision, word2vec, backprop... They didn't go into much detail, but it's clearer now what they know (and share; they never share everything, or clearly). NLP and RL should not be separated, so don't underestimate their relationship, and computer vision is also very similar to NLP AI. I also went through lots of other reading, of course, and generated my own discoveries. Soon I'll be creating images, code, and vision versions for my guide to AGI, plus an exotic AGI group, so we can cover more ground and be surer and clearer.
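A minimal from-scratch sketch of the KNN idea mentioned above (classify a point by majority vote of its nearest neighbours); the 2-D toy data is made up for illustration:

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Classify x by majority vote of its k nearest training points (Euclidean distance).
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical toy data: class 0 clustered near the origin, class 1 near (5, 5).
X_train = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]], dtype=float)
y_train = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X_train, y_train, np.array([0.5, 0.5])))  # -> 0
print(knn_predict(X_train, y_train, np.array([5.5, 5.0])))  # -> 1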

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #136 on: October 23, 2020, 10:13:42 pm »
Also,

In some way, I feel there are 2 approaches to life/progress, and you could actually put it like this: you can take the Satanist approach or the Religious approach. The godly method is hopeful, but too hopeful; there is no grounding in reality, just a single book, or what you could call "a thought". The Satanist way is grim and truthful, but has too little hope; they tell you we die, they promote death, they are crass and nasty. I like to stay in between :). I'm very hopeful, so that I have goals (I may seem blissful at times, or "detached and in the future already"), but I'm also very grim, dark and honest, and I know we are machines that will die if we don't evolve, so that I can reach those future goals (I may seem evil at times, or "far in the past of history").

In AI terms these 2 things are frequency of observations and reward; together they cause you to say things that likely/usually/should happen. You predict your future. Will it be a nearby death, or a distant future? The biggest rewards tend to be sparse: how long do meals or love-making last? How long does victory last? We really want reward, but we really need frequency to walk us to the reward, and the walk needs reward to give a reason for the walking. When you lack one of these, you either can't reach the future because you can't walk, or you can't reach the future because you have no clue where to walk.
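As a rough illustration of the frequency-plus-reward point, here is a tiny expected-value sketch; the outcome names and reward numbers are invented for the example:

from collections import Counter

# Hypothetical history of outcomes observed after taking some action,
# plus an invented reward for each outcome.
observations = ["small_win", "small_win", "small_win", "nothing", "big_win"]
reward = {"small_win": 1.0, "nothing": 0.0, "big_win": 10.0}

counts = Counter(observations)
total = len(observations)

for outcome, n in counts.items():
    frequency = n / total                   # how often it actually happens (the grounding)
    expected = frequency * reward[outcome]  # hope (reward) weighted by that grounding
    print(outcome, round(frequency, 2), reward[outcome], round(expected, 2))

A huge reward that almost never happens and a frequent outcome worth nothing both score low; you need both parts to pick a direction worth walking in.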

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #137 on: November 03, 2020, 07:07:33 am »
So I've been thinking. What if ANNs learn physics functions? Are they learning more than just entailments and co-occurrences? Are they acting like a Turing-complete neural computer?

I mean, if you count which word follows the word 'dog' in text, you can find out what dogs usually do, and predict the future in text better. The more data it sees, the better it gets. This models physics in a cheap but easy and pretty helpful way: instead of taking every particle's location and speed (which is impossible), you take 2D images of the world.
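A minimal sketch of that counting idea, assuming a made-up toy corpus: count which words follow 'dog' and turn the counts into next-word probabilities.

from collections import Counter, defaultdict

def build_bigram_counts(text):
    # For each word, count how often every other word directly follows it.
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word, k=3):
    # Return the k most frequent followers of `word` with their probabilities.
    followers = counts[word]
    total = sum(followers.values())
    if total == 0:
        return []
    return [(w, c / total) for w, c in followers.most_common(k)]

corpus = "the dog ran . the dog barked . the dog ran home ."  # hypothetical toy corpus
counts = build_bigram_counts(corpus)
print(predict_next(counts, "dog"))  # -> [('ran', 0.666...), ('barked', 0.333...)]

The more text it sees, the closer those counts get to what dogs "usually do".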

But you can also run a physics sim and get a really good prediction of the future rolling out; it's just super, super costly. There is one interesting thing, though: surfaces like wood are all reflective/refractive to some degree, so you can take the scattered reflections in an image, merge all that data to recover what the light was before it scattered, and see something that is not shown in the image directly, e.g. a cat's face even though only a tail is in view.

How would an ANN learn that on its own, though? Could a net shaped like a hierarchy, able to adjust its connections to make up rules, do it by itself?

If you have a net using backpropagation to find a mapping between input and output nodes, then yes, it is adjusting the net's rules, but all you do is tell it that it has error and should tweak the weights again, so that it predicts a cat's face even though there is only an image of a tail?

When we say backprop makes a mapping from inputs to outputs, we mean it will find the patterns and know which input activates which output; but it can't find all patterns, and it seems like a blind way of finding them.

Let's take a simple dataset and see how/why it'd form a net some way.

cat ran, cat ran, cat ran, cat sat

So far the net would, given that 1 word, weight 'ran' more strongly than 'sat'. The net is basically told "this is the right output and that was the wrong output, so lower these weights and raise those", continuing back through each layer, and the idea may be that it gets rid of the non-pattern nodes and merges the pattern nodes where there is the most value; or at least that's what ends up "in" the net, with the wasted nodes simply prunable afterwards. Still not seeing much here, hmm...
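As a sanity check on that weighting idea, here is a tiny single-layer softmax net trained by gradient descent (backprop through one layer) on the four examples above; it is only a sketch, but it settles at roughly 0.75/0.25 for ran/sat, i.e. exactly the frequencies:

import numpy as np

# Toy dataset from the post: "cat ran" x3, "cat sat" x1.
# The input is always the (one-feature) encoding of "cat"; targets are one-hots for ran/sat.
X = np.array([[1.0]] * 4)
Y = np.array([[1, 0], [1, 0], [1, 0], [0, 1]], dtype=float)  # columns: ran, sat

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(1, 2))  # weights: cat -> {ran, sat}
b = np.zeros(2)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(500):
    P = softmax(X @ W + b)      # predicted next-word probabilities
    grad = (P - Y) / len(X)     # cross-entropy gradient at the output
    W -= lr * (X.T @ grad)      # gradient-descent update of the weights
    b -= lr * grad.sum(axis=0)

print(softmax(np.array([[1.0]]) @ W + b))  # ~[[0.75, 0.25]]: 'ran' favoured 3-to-1

So at least in this tiny case, what backprop ends up storing in the weights is just the frequency of each continuation.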

And what about double-sided backprop, i.e. "forbackprop"? Isn't building a bridge better if done from both ends, and from middle zones too? Is that called a Hopfield net? Or an SVM? Does it exist?

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #138 on: November 03, 2020, 11:48:16 pm »
If the laws of our physics were random, there'd be no patterns/laws. A simple AI rule like counting FREQUENCY captures any pattern/law, even physics our universe doesn't have, at the cost of not being very accurate. Other rules are more precise but less general: a physics sim may predict the future perfectly yet be unable to predict any other physics, and the reflections rule for seeing a hidden cat's face only works on light, a subset of physics, though it is more flexible than the sim. It's more likely we are merging the general rules like frequency/recency/relatedness, and the rest are built from those; how we ourselves came up with physics sims is an example. Backprop and neural Turing machines seem to want to find patterns/target functions on their own, but do so using key parts like recency, long-term memory, and relationships, just as my AI can learn deeper patterns once it has the few common ones. It seems backprop is only a way of learning FREQUENCY and RANDOM FORESTS. FREQUENCY etc. are universal rules that work well together, and work well on any "mapping" or functions/laws that need to be understood.
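A toy sketch of mixing two of those general rules, frequency and recency, when scoring what should follow a word; the decay constant and the toy sequence are made up:

import math
from collections import defaultdict

def score_followers(words, target, decay=0.1):
    # Score candidate next words for `target`: each observed follower adds a vote,
    # and more recent observations get exponentially larger votes.
    scores = defaultdict(float)
    n = len(words)
    for i, (cur, nxt) in enumerate(zip(words, words[1:])):
        if cur == target:
            recency = math.exp(-decay * (n - i))  # recent occurrences count more
            scores[nxt] += recency                # frequency accumulates these votes
    return sorted(scores.items(), key=lambda kv: -kv[1])

text = "dog ran dog ran dog sat".split()  # hypothetical toy sequence
print(score_followers(text, "dog"))       # 'ran' wins on frequency; 'sat' gets the largest single (most recent) vote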

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #139 on: November 22, 2020, 11:57:48 am »
Lol, I'm reading this guy's research notes month by month, and he says exactly what I said:
http://www.adaptroninc.com/BookPage/1969-and-1970

"98.  Sight and attention: I can pay attention to a spot on the wall and my attention is on a specific very small area of the retina. I also can pay attention to something out of the corner (side) of my eye. I can stare at one thing but not see it but see something out of the corner of my eye but not in so much detail as if I looked straight at it. So my attention can switch not only to sight sound and feel but to a specific area of sight or even just the general picture of what I’m looking at. Now when you imagine something you combine the small specific sight areas into a general picture. Like a man with green togs a straw hat walking on a beach, each specific thing is seen in memory and then combined to form a general picture."

He sounds very precise so far, writing about how the body has wires, I/O, electricity (the "blood"), etc. That's something I do too.

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #140 on: November 28, 2020, 06:58:33 pm »
Geoffrey Hinton has said about backprop (which he helped bring into existence): "My view is throw it all away and start again".

Sometimes you need to go the wrong way to get to the right way; there's no "clear" path, or else we would have an easy walk! A common coach will tell you to "keep going, get through it, don't give up".