pre_AGI


keghn

pre_AGI
« on: December 03, 2017, 05:30:06 pm »

 AGI is about unsupervised learning. There are two ways to do unsupervised learning: A) look for repeating patterns in a stream of information, such as video or audio; or B) randomly create a detector, place it in an information stream, and see whether it detects anything. If not, scramble its weights and try again. The idea is to build up a large bank of detectors for the first layer of a neural network.
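Option B could be sketched roughly as follows, assuming a 1-D signal and treating a "detector" as a random weight vector; the function names, the firing threshold, and the window size are my own illustration, not anything keghn specified:

```python
import random

def make_detector(size):
    # A detector is just a vector of scrambled (random) weights.
    return [random.uniform(-1.0, 1.0) for _ in range(size)]

def response(detector, window):
    # Dot product of the detector's weights with a slice of the stream.
    return sum(w * x for w, x in zip(detector, window))

def build_detector_bank(stream, window=8, bank_size=16,
                        threshold=1.5, max_tries=1000):
    """Keep a detector only if it fires strongly somewhere in the
    stream; otherwise discard it (i.e. re-scramble) and try again."""
    bank = []
    tries = 0
    while len(bank) < bank_size and tries < max_tries:
        tries += 1
        d = make_detector(window)
        fired = any(abs(response(d, stream[i:i + window])) > threshold
                    for i in range(len(stream) - window))
        if fired:
            bank.append(d)
    return bank
```

Detectors that never fire above the threshold are simply thrown away, which is the "scramble its weights and try again" loop in miniature.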

 The next step is to look for repeating patterns in the data stream by forecasting while in a passive state. When all such patterns have been found, the AGI starts producing output through its motors, in the hope of finding more patterns or improving the existing ones. When this phase is finished, all hand, arm, leg, and other personal body-movement patterns will have been learned.
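The passive forecasting step could be sketched as a simple count-based next-symbol predictor; this is a hypothetical stand-in for whatever network would really do the forecasting, and the names are mine:

```python
from collections import defaultdict, Counter

def train_forecaster(stream, order=2):
    # Count which symbol follows each length-`order` context.
    model = defaultdict(Counter)
    for i in range(len(stream) - order):
        ctx = tuple(stream[i:i + order])
        model[ctx][stream[i + order]] += 1
    return model

def predict(model, ctx):
    # Forecast the most frequent continuation; None if the context is new.
    counts = model.get(tuple(ctx))
    return counts.most_common(1)[0][0] if counts else None
```

A pattern would count as "found" once the forecaster predicts its continuation reliably; failures of prediction mark where new patterns remain to be learned.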

 Next is movement relative to others.

 In this step the AGI uses an internal 3D simulator to learn "self" versus "other", i.e. to become self-aware.



ivan.moony

Re: pre_AGI
« Reply #1 on: December 03, 2017, 06:08:32 pm »
I'm interested in learning from a textual input stream. Could it work the same way with a NN?


keghn

Re: pre_AGI
« Reply #2 on: December 03, 2017, 07:59:11 pm »
 Ya, sure, but it will be a lot more difficult to make a full-blown AGI from text alone. Text has been stripped of a lot of shared and learned information. Text is a one-way hash of learned information: you can put text into a bot, but the bot has not really learned it, because it cannot go back to the time when the information was learned.

 All people and other bots in the text AGI's world would need to be points of text emanation that move around like points of light. This way the most primitive 3D simulator could be used. The AGI would then become self-aware, because it could swap positions in the simulation while learning: learning the meaning of "me and you", "listen to me", "I will give you this word", and "give me that information".
 Also, this would not need a lot of "IF" code. The AGI would collect a lot of data by sitting and listening to the other points of light talking for a while, then interacting with and modeling all the other text generators. Every other point of light would have a distance, a direction, a movement, and a text-typing style. Some would be more talkative than others, and would stay out of range, or in range, at certain times.

 It would go through baby steps: first generating letters and outputting them into the void, then learning to assemble them in sequence, then learning to copy the texting of other people or bots, and then learning how to apply it in the 3D simulation by swapping positions with others, or acting it out.
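The first baby step, generating letters and learning to assemble them in sequence, could be sketched as a toy character-level bigram model; this is only an illustration of "copy others' texting", with names of my own invention:

```python
import random
from collections import defaultdict, Counter

def learn_bigrams(text):
    # Learn which character tends to follow each character.
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def babble(model, start, length=20):
    # Generate a sequence by repeatedly sampling a learned successor,
    # the "assemble letters in sequence" baby step.
    out = [start]
    for _ in range(length - 1):
        counts = model.get(out[-1])
        if not counts:
            break  # into the void: no learned successor
        chars, weights = zip(*counts.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)
```

Fed the texting of another "point of light", the model starts to reproduce that generator's typing style, which is the copying step in miniature.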


Korrelan

Re: pre_AGI
« Reply #3 on: December 03, 2017, 09:32:30 pm »
@Keghn

Quote
A) you look for repeating pattern in a stream of information. Such as video and audio.

After marvelling at Michelangelo’s statue of Goliath-vanquishing David, the Pope asked the sculptor, “How do you know what to cut away?”

Michelangelo’s reply? “It’s simple. I just remove everything that doesn't look like David.”

Human babies are born with billions more synapses than an adult's eventual quota.

It’s easier to forget what’s not relevant than to work out and remember what is.

:)


keghn

Re: pre_AGI
« Reply #4 on: January 11, 2018, 06:22:16 pm »

 Take a book and read it as a stream of characters or words flowing into the system. You can find repeating patterns of words or characters. The idea is to build up a database of repeating characters, words, paragraphs, pages, and eventually books, starting with the common pieces and moving on to the rarer arrangements of those common pieces: the many common letters versus the one rare novel.
 You have two search pointers: a target pointer that is compared against a source pointer.
 Kolmogorov complexity:
https://en.wikipedia.org/wiki/Kolmogorov_complexity
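Building that database of repeating pieces could be sketched like this, assuming a plain character stream and a fixed piece size; the function name and return shape are my own:

```python
from collections import Counter

def repeating_pieces(text, size):
    # Count every substring ("piece") of the given size in the stream.
    counts = Counter(text[i:i + size] for i in range(len(text) - size + 1))
    # Keep only the pieces that actually repeat, most common first --
    # common building blocks before rarer arrangements.
    return [(piece, n) for piece, n in counts.most_common() if n > 1]
```

The connection to Kolmogorov complexity is that the more repeating pieces a stream contains, the shorter the program that can regenerate it; a stream with no repeats at any size is incompressible.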

 In video, images, and computer vision there is the Region Of Interest, ROI. I call this the focus:
https://www.learnopencv.com/how-to-select-a-bounding-box-roi-in-opencv-cpp-python/

 Both organic eyes can mechanically focus on one region of interest, ROI. Everything within it is enhanced and everything outside it is blurred out.

 To extract common pieces out of a monolithic wall of data clutter, a program uses two pointers that move through the recorded data. These are two focuses.
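The two-focus scan could be sketched as two windows sliding over the recorded data and reporting positions where the same pattern appears twice; the name and the exact matching rule (exact equality) are my own simplification:

```python
def find_matching_focuses(data, window):
    """Slide a source focus and a target focus over the recorded data
    and report pairs of positions whose windows hold the same pattern."""
    matches = []
    for src in range(len(data) - window + 1):
        # The target focus starts past the source focus to avoid
        # trivially matching a window against itself.
        for tgt in range(src + window, len(data) - window + 1):
            if data[src:src + window] == data[tgt:tgt + window]:
                matches.append((src, tgt))
    return matches
```

Each reported pair is a candidate repeating piece worth recording in the database; the clutter between the two focuses is ignored.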

 The idea is to find two instances of the same object that have completely different backgrounds.

 If you take two completely different images that both contain the same object, make the images transparent, overlap them, and move one around until the two copies of the object line up. With the object found, the clutter outside the focus, or ROI, remains a wall of confusion, while the found object is cut away from the non-chaotic area of the main focus. The capture of the object is recorded into a database.
 If you took two copies of exactly the same image, made them transparent, and overlapped them, then a capture of the whole image would occur.
 But if it is an image of a special location you saw on a trip to a national park, then it is a rare image that cannot be used as a building block for other images: a very inflexible object capture.
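The transparent-overlap search could be sketched as sliding one image over the other and scoring how many pixels agree; with different backgrounds, agreement peaks where the shared object lines up. The function name, shift range, and exact-equality scoring are my own simplification of the idea:

```python
def best_overlap_shift(img_a, img_b, max_shift=3):
    """Slide img_b over img_a (both 2-D grids of pixel values) and
    return the (dy, dx) shift where the most pixels agree -- the
    analogue of moving one transparent image over the other until
    the shared object overlaps."""
    h, w = len(img_a), len(img_a[0])
    best = (0, 0, -1)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w \
                            and img_a[y][x] == img_b[yy][xx]:
                        score += 1
            if score > best[2]:
                best = (dy, dx, score)
    return best
```

For identical images the best shift is (0, 0) with every pixel agreeing, which is the whole-image capture case; for two images sharing only one object, the winning shift isolates that object against the clutter.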

 Unsupervised learning is about finding repeating sub-features, objects, and arrangements of smaller pieces: matching pictures, repeating subsequences, and sequences built out of smaller sequences, all from raw data.




 

