pre_AGI


keghn

pre_AGI
« on: December 03, 2017, 05:30:06 pm »

AGI is about unsupervised learning. To do unsupervised learning, you either A) look for repeating patterns in a stream of information, such as video and audio, or B) randomly create a detector, stick it into an information stream, and see if it detects anything; if not, you scramble its weights and try again. The idea is to build up a large number of detectors for the first layer of a neural network.
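
A rough sketch of option B in Python/numpy (the window width, similarity threshold, and keep-or-rescramble test here are just illustrative choices, not a fixed recipe):

```python
import numpy as np

def build_detector_bank(stream, n_detectors=64, width=16,
                        threshold=0.5, max_tries=5000, seed=0):
    """Randomly create detectors; keep one only if it responds strongly
    somewhere in the stream, otherwise scramble its weights and retry."""
    rng = np.random.default_rng(seed)
    # slide a window over the 1-D stream to get candidate patches
    patches = np.lib.stride_tricks.sliding_window_view(stream, width)
    patches = patches / (np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8)

    bank, tries = [], 0
    while len(bank) < n_detectors and tries < max_tries:
        tries += 1
        w = rng.normal(size=width)           # random detector weights
        w /= np.linalg.norm(w) + 1e-8
        response = patches @ w               # match against every patch
        if response.max() > threshold:       # it detects something: keep it
            bank.append(w)
        # otherwise it is discarded and a fresh scramble is tried next loop
    return np.array(bank)

# toy stream: a repeating pattern buried in noise
stream = np.tile(np.sin(np.linspace(0, 2 * np.pi, 16)), 200)
stream += 0.3 * np.random.default_rng(1).normal(size=stream.size)
first_layer = build_detector_bank(stream)
print(first_layer.shape)    # (n_kept, 16) -> weights for a first NN layer
```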

The next step is to look for repeating patterns in a data stream by forecasting it in a passive state. When all patterns have been found, the AGI starts producing output through its motors, in the hope of finding more patterns or improving the existing ones. When this phase is finished, all hand, arm, leg, and other personal body movement patterns will have been learned.
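
As a minimal illustration of the passive forecasting phase (the linear forecaster, learning rate, and "error stops shrinking" criterion are my own simplifications):

```python
import numpy as np

def passive_pattern_search(stream, width=8, lr=0.01, epochs=20, seed=0):
    """Forecast the next sample from the previous `width` samples;
    low prediction error marks stretches of the stream that repeat."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=width)     # linear forecaster weights
    X = np.lib.stride_tricks.sliding_window_view(stream[:-1], width)
    y = stream[width:]
    for _ in range(epochs):                   # plain SGD on squared error
        for xi, yi in zip(X, y):
            w -= lr * (xi @ w - yi) * xi
    return w, (X @ w - y) ** 2                # per-step forecast error

stream = np.tile(np.sin(np.linspace(0, 2 * np.pi, 16)), 100)
w, err = passive_pattern_search(stream)
print("mean forecast error:", err.mean())
# once the error stops improving, the passive phase is over and the AGI
# would start driving its motors to hunt for patterns it cannot yet predict
```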

Next is movement relative to others.

In this step the AGI uses an internal 3D simulator to learn "self" and "other", i.e. to become self-aware.



ivan.moony

Re: pre_AGI
« Reply #1 on: December 03, 2017, 06:08:32 pm »
I'm interested in learning from a textual input stream. Could it work the same way with a NN?
If you vaporize a teardrop, you get a salt.   :flake:


keghn

Re: pre_AGI
« Reply #2 on: December 03, 2017, 07:59:11 pm »
Yeah, sure, but it will be a lot more difficult to make a full-blown AGI with just text. Text has been stripped of a lot of shared and learned information. Text is a one-way hash of learned information. You can put text into a bot, but the bot has not learned it, because it cannot go back to the time when it was learned.

All people and other bots in the text AGI's world would need to be points of text emanation that move around like points of light. This way the most primitive 3D simulator could be used. The AGI would then become self-aware, because it could swap positions in the simulation while learning, e.g. learning the meaning of "me and you", "listen to me", "I will give you this word", and "give me that information". Also, this would not take a lot of "IF" code. It would mean collecting a lot of data, with the AGI sitting and listening to the other points of light talk for a while, then interacting with and modeling all the other text generators. Every other point of light would have a distance, direction, movement, and text typing style. Some would be more talkative than others, and some would stay out of range, or come into range, only at certain times.
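
Something like this could stand in for one of those points of light (the class name and fields are just my guess at the minimum you would need):

```python
from dataclasses import dataclass, field
import math

@dataclass
class TextEmitter:
    """One 'point of light' in the learner's internal 3D simulation."""
    name: str
    position: tuple        # (x, y, z) location in the simulated space
    velocity: tuple        # how it drifts around between updates
    typing_style: dict     # e.g. average message length, favourite words
    said: list = field(default_factory=list)   # text it has emitted so far

    def distance_to(self, other):
        return math.dist(self.position, other.position)

# the learner models itself with the same structure as everyone else,
# which is what lets it swap positions with another emitter later on
me = TextEmitter("self", (0, 0, 0), (0, 0, 0), {"avg_len": 5})
you = TextEmitter("other", (3, 4, 0), (0, 0, 0), {"avg_len": 12})
print(you.distance_to(me))   # 5.0 -> decides whether "you" is in range
```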

It would go through baby steps: first generating letters, outputting them into the void, and learning to assemble them in sequence; then learning to copy the texting of other people or bots; and then learning to apply this in a 3D simulation by swapping positions with others, or acting it out.
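
Purely as a toy, the first two baby steps could be something like a character-level bigram model (the sample sentence and stage names are mine):

```python
import random
from collections import defaultdict

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
rng = random.Random(0)

def babble(n=30):
    # stage 1: letters thrown into the void, no structure yet
    return "".join(rng.choice(ALPHABET) for _ in range(n))

def learn_bigrams(text):
    # stage 2: copy others by counting which letter follows which
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def imitate(counts, n=30):
    out = [rng.choice(list(counts))]
    for _ in range(n - 1):
        nxt = counts.get(out[-1])
        out.append(rng.choices(list(nxt), weights=list(nxt.values()))[0]
                   if nxt else rng.choice(ALPHABET))
    return "".join(out)

other = "give me that information listen to me i will give you this word"
print(babble())                       # stage 1 output
print(imitate(learn_bigrams(other)))  # stage 2 output, already word-like
```

The third step, swapping positions and acting dialogues out, would reuse the emitter positions from the 3D simulation above.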


korrelan

Re: pre_AGI
« Reply #3 on: December 03, 2017, 09:32:30 pm »
@Keghn

Quote
A) you look for repeating patterns in a stream of information, such as video and audio

After marvelling at Michelangelo’s statue of Goliath-vanquishing David, the Pope asked the sculptor, “How do you know what to cut away?”

Michelangelo’s reply? “It’s simple. I just remove everything that doesn't look like David.”

Human babies are born with billions more synapses than an adult's eventual quota.

It’s easier to forget what’s not relevant than to work out and remember what is.

:)
It thunk... therefore it is!

 

