The Abstract from my Book --- Creating the Artificial Intelligent Child

  • 54 Replies
  • 1673 Views
*

unreality

  • Starship Trooper
  • 435
  • SE,EE,Physicist,Philosopher. Built world's 1st AGI
    • Eva Progress, from AGI to ASI
It seems like you're covering a lot of bases. Nothing wrong with that, but have you considered taking some time to actually analyze what goes through your mind in various situations, problems, and questions? Spend some time studying your own thought process in a quiet environment, and hopefully you'll begin to perceive some amazing things about how you think. If I can be blunt: you, like everyone else I'm aware of who's working on AI, are making this far too difficult when the answer has been so simple. You're trying to figure out a truckload of parts, implementing this and that. Consider working on implementing a much simpler system.

Here's a simple example.

* Pattern recognition. May branch off.
* Search for similar text to see if there's good responses. May branch off.
* Search for link relevance between all keywords with each other. May branch off.
* Search for related topics. May branch off.
* If sufficient priority then place objects in imaginary space, build tree goals, add tree search to thread work list. Sufficient finds will alert Consciousness.
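The roaming dispatch described in this list can be sketched as a tiny priority loop. This is my own illustration, not unreality's code; all function names, priority numbers, and the interest threshold are made up:

```python
import heapq

# Hypothetical "main functions"; each returns (priority, finding).
# Real implementations would do pattern matching, text search, etc.
def pattern_recognition(query):
    return (0.9, f"pattern match for {query!r}")

def similar_text_search(query):
    return (0.7, f"similar text to {query!r}")

def keyword_link_search(query):
    return (0.5, f"keyword links in {query!r}")

def related_topic_search(query):
    return (0.6, f"topics related to {query!r}")

MAIN_FUNCTIONS = [pattern_recognition, similar_text_search,
                  keyword_link_search, related_topic_search]

def consciousness_step(query, interest_threshold=0.65):
    """Call every main function, then examine results from highest
    priority down until one offers sufficient interest."""
    results = [fn(query) for fn in MAIN_FUNCTIONS]
    heap = [(-p, finding) for p, finding in results]  # max-heap via negation
    heapq.heapify(heap)
    while heap:
        neg_p, finding = heapq.heappop(heap)
        if -neg_p >= interest_threshold:
            return finding      # sufficient find: alert Consciousness
    return None                 # nothing interesting; keep roaming

print(consciousness_step("ice cream"))  # → pattern match for 'ice cream'
```

The key design point is that nothing is hard-wired to run first: each step is scored, and Consciousness just keeps popping the next-highest priority.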

Okay, I can afford to offer further details. BTW, this is modeled after the way I think. There's a wide spectrum of how humans think. Some people feel a lot more. Some people visualize a lot more. There are a lot of other ways. I've also found it interesting where people visualize their consciousness being located. Most people say it comes from their head, but some people say it comes from their throat. I was very interested in one woman who said her consciousness is located in her heart area.

Anyhow, the above quoted example is not a fixed, step-by-step thinking process. In my AGI the Consciousness roams in thought. It will call all of the main functions; four main functions are listed in the quoted example. From that, Consciousness selects the highest priority. In the quoted example the highest priority was a pattern recognition (the first one in the list), but in another example it could just as easily be a "search for related topics." The Consciousness then analyzes that step to decide if it offers something of sufficient interest. If not, it returns to the next highest priority, which in the quoted example would be "Search for similar text to see if there's good responses. May branch off." During each step the Consciousness steps back, if you will, to reanalyze the situation. For example, if Consciousness is taking too long to respond to an answer (according to past experiences it knows from the db; and yes, time awareness is a pattern recognition routine, since it brings this to the attention of Consciousness), then it will pause its present line of thinking and go back up to find a faster response. Priority is key. The mere fact that it's taking too long to respond can change priorities, and thus cause Consciousness to pause its present line of thought.
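The "taking too long" behaviour can be sketched as a time-budget check that changes priorities mid-thought. Again a hypothetical illustration, not the actual system; the function names and the 0.05 s budget are invented:

```python
import time

def slow_deliberation(query):
    """Stand-in for a deep line of thought that may take too long."""
    time.sleep(0.2)                      # simulate expensive thinking
    return f"deep answer to {query!r}"

def fast_fallback(query):
    """Cheap canned response used when the time budget is blown."""
    return f"quick answer to {query!r}"

def respond(query, budget_s=0.05):
    """Try the deep line of thought, but if it overruns the budget
    (the 'taking too long' signal the post describes), change priority
    and fall back to a faster response."""
    start = time.monotonic()
    answer = slow_deliberation(query)    # in a real system this would be
                                         # chunked and interruptible
    if time.monotonic() - start > budget_s:
        return fast_fallback(query)      # priority changed: answer fast
    return answer

print(respond("hello"))                  # → quick answer to 'hello'
```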

I just didn't want you to think it was a rigid system. Flexibility is key. I hope this is all clear. This is not modeled after a theory; it's modeled after how I actually think. I've gone through countless rounds of self-analysis and studied each step. I've then spent a lot of time thinking about how I can have the program accomplish that same step. This has resulted in numerous fundamental routines that can be broken down into three main areas: pattern recognition, db, and tree search. Sometimes it took a lot of time to figure out, but I can tell you that it is possible. For example, a very simple accomplishment is writing a pattern recognition function that checks a statement to see if it's relevant to the self or to other people. Example: "Eating ice cream can make Ivy overweight." This pattern recognition can bring the following to Consciousness, if the priority is high enough: "Eating ice cream can make James overweight." That's one of a lot of pattern recognition routines.
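The self/other relevance routine in that example could be sketched as a simple name-substitution rule. A hypothetical sketch of my own; `KNOWN_PEOPLE` and `SELF_NAME` are invented stand-ins for what the db would supply:

```python
import re

KNOWN_PEOPLE = {"Ivy", "James", "John"}   # hypothetical roster from the db
SELF_NAME = "James"                        # the agent's own identity

def personalize(statement):
    """If a statement mentions some other known person, test whether
    the same pattern applies to self by substituting the agent's name."""
    for name in KNOWN_PEOPLE - {SELF_NAME}:
        if re.search(rf"\b{name}\b", statement):
            return re.sub(rf"\b{name}\b", SELF_NAME, statement)
    return None   # not about a person we know; nothing to bring up

print(personalize("Eating ice cream can make Ivy overweight."))
# → Eating ice cream can make James overweight.
```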

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Bishop
  • 1657
  • First it wiggles, then it is rewarded.
    • Enter Lair
oooo nice i likey unreality!!

here I just lay paralyzed like a starfish "locksuit"
Emergent

*

unreality

  • Starship Trooper
  • 435
  • SE,EE,Physicist,Philosopher. Built world's 1st AGI
    • Eva Progress, from AGI to ASI
Thanks! I've edited and added a bunch of stuff to the post just now. Sorry for not doing that before posting it.

I'm done for the night.

*

infurl

  • Trusty Member
  • Replicant
  • 594
  • Humans will disappoint you.
    • Home Page
I agree, that is a very good post that you wrote there. Good tone and good content. Again, please keep it up.

So, the model or design that you are describing is potentially a valid one. I watched this video last week which presents a design which looks very similar and is considerably more detailed than you have had time to write about here. This researcher and his team have spent many decades on the problem and he refers to a considerable body of experimental evidence in support of the model during his talk. I hope you find it as fascinating as I did and that it will provide some independent confirmation of your ideas.



However it is wrong to claim that it is simple, even for you. This is just a model which provides an overview so we can discuss certain aspects of it. It is missing all the complex detail that would be necessary to make it work. You include a number of magical black boxes which must be very complex inside, such as consciousness and pattern recognition in language, and I don't believe you know how to make those. As far as I know, the former is a problem that hasn't been solved yet, and the latter is one that I and a number of others have solved, so I can vouch for the fact that it is anything but simple.

Anyway, well done.

*

infurl

  • Trusty Member
  • Replicant
  • 594
  • Humans will disappoint you.
    • Home Page
I'd ask the moderator to snip off the last few posts to a thread of their own so as not to detract from the thread that spydaz started and give both topics a fair go.

*

spydaz

  • Trusty Member
  • Mechanical Turk
  • 199
  • Developing Conversational AI (Natural Language/ML)
    • SpydazWeb
That's right, Unreality; there are a lot of directions the code may branch off.

As infurl showed, the initial pass over the text gives us some tools to understand the structure of the sentence, which enables us to determine sentence meaning and intent.

Very early in AI, everything was hard-coded: no surprises, no interesting conversation (first wave).
Early on we used text files to store data... the problem was that the answers being extracted were random and loose; conversations were often funny and surprising (first wave).

We started using keyword/response question answering (Eliza/Alice type); this gave funny, slightly more directed conversation, and the mix of techniques made for interesting responses. We attempted to create auto-topic brains; the collection of data became larger, and it seemed to learn conversation and share conversations between users. AIs could actually talk to one another at that time and have strange conversations.

We changed from text files to databases to have greater query ability over the collected data... the conversation quality went down and that random nature was lost, but the AI became more focused on stored data.

Now we build natural language arguments to give sense to collected information, giving the AI the ability to have some structured knowledge; the conversation is much more factual. It would seem sentence understanding has grown.
Now, with models of intent, user intention and AI intention can be modelled accordingly. Before, we used random emotions; now we use sentiment analysis.

The conversation quality has risen and fallen; the AI's understanding has grown; the sentiment and emotional responses are much more true.
This is why AI has become more complicated... The AI has grown over the years... the techniques have become intricate, the data collected more focused.

This is why other developers can take shortcuts, but they lose the understanding of why we are at this stage, or why we need to go to the next level of understanding of human conversation. Although the intuition is to design a human, the true goal is to create a great conversationalist... hence the Loebner Prize. Pattern recognition plays a large role in recognising sentences and clauses.
We are now modelling the human body's reactions in an AI:
why do we feel;
what do we feel;
how do we feel;
i.e., emotions.

It's like teaching a child to read; it needs to be able to understand book 1 English...
"Here is Peter"...
"Here is Jane".
"Here is Pat the dog."
...beginning to associate words with pictures at the same time as learning the meaning of words and sentences.

Right now, with a combination of techniques, a very rich talking character can be created... a lot of predicates saved... giving the perception of a passable person. But to the creator it will still seem trivial, as you can still technically predict the responses to be given. Questions arise in your mind... does it know what I'm actually saying? Does it understand? Right now we could also be at that stage... so it's possible to complete something great right now! I will say that there is no wrong way, but also no perfect way yet. There are new tools and libraries being created daily, great tools that can take us further... great ideas from fantasizers and developers alike. All of them need to be investigated and discounted or used. An idea discounted is just another learning curve.

In all the analysis routines in the script, before response generation, as much structure and data is collected as possible; maximum learning is important. Currently responses are generally answers to questions and the like, functional... right now they are not generated or constructed (a few are). Currently, generating a sentence from fewer than 5 n-grams still produces strange sentences: grammatically correct yet senseless. With AI attached to sentence generation, the sentences are focused. So response generation, an AI-constructed answer instead of a stored answer, will soon be possible.
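The "grammatically correct yet senseless" point can be demonstrated with a toy bigram (2-gram) generator. This is my own sketch, not SpydazWeb code; the corpus is invented:

```python
import random

def train_bigrams(text):
    """Count which word follows which (a 2-gram model)."""
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Random walk through the bigram table: each adjacent pair is
    attested, so output is locally grammatical but globally senseless,
    exactly the failure mode described above."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Attaching higher-level guidance (topic focus, intent) on top of such a generator is what keeps the sentences from drifting.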

and again i say potentially... its not gospel!

"Eating ice cream can make John fat" <<< Formal logic / Propositional logic (True/False) >>> Deductive content: <ICE CREAM > HAPPY>, <JOHN > EATING>; JOHN = PERSON/MAN; ICE CREAM = FOOD; EATING = ACTION (EAT/EATEN/ATE, EATER); (DATE/TIME), (FREQUENCY)
There is a lot of information that can be saved from a simple sentence, and a lot of possible responses.
The response evaluation process and the data collection process are separate; there may be many responses generated in the branches. Focus, evaluation, behaviour, time of day... these become a few of the factors to consider when selecting the correct response for the sentence entered. It could be said that this decision making is the conscious bit, but again it becomes mathematical, statistical, probabilistic. Consciously, which response would we select as humans? Some go with the first thing that comes into their mind... some consider all the options... some always react emotionally. Is it this which denotes our conscious choices? Our evaluation routines?
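Response selection as something "mathematical / statistical / probabilistic" might look like a weighted scoring function over candidate responses. A toy sketch of my own; the factors, weights, and the crude sentiment stand-in are invented for illustration:

```python
def response_sentiment(text):
    """Crude stand-in for real sentiment analysis."""
    positive_words = ("happy", "good", "great")
    return "positive" if any(w in text.lower() for w in positive_words) else "neutral"

def score_response(response, context):
    """Toy weighted scoring over a few of the factors listed above:
    topical focus, sentiment match, and time of day. Weights are
    illustrative, not tuned."""
    score = 0.0
    # topical focus: overlap with the words currently in focus
    score += 2.0 * len(context["focus_words"] & set(response.lower().split()))
    # sentiment match with the conversation's current mood
    if context["sentiment"] == response_sentiment(response):
        score += 1.0
    # time-of-day appropriateness
    if context["hour"] < 12 and "morning" in response.lower():
        score += 0.5
    return score

def select_response(candidates, context):
    return max(candidates, key=lambda r: score_response(r, context))

ctx = {"focus_words": {"ice", "cream"}, "sentiment": "positive", "hour": 9}
cands = ["Ice cream sounds great!", "The weather is fine.", "Good morning."]
print(select_response(cands, ctx))   # → Ice cream sounds great!
```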

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • Cleo
  • 4888
Not to worry; sometimes humans and their thought processes will occasionally drift away from the topic at hand, but with a directional word or two they can be steered back to the concept without crashing.

One question for Spydaz or others: should this A.I. Child be given emotions in small doses rather than a human adult equivalent?

Which human emotions should be considered?

How best to decide what weights, how many are needed, and to what end for each of the emotions?
In the world of AI, it's the thought that counts!

*

spydaz

  • Trusty Member
  • Mechanical Turk
  • 199
  • Developing Conversational AI (Natural Language/ML)
    • SpydazWeb
Quote from: Art
Not to worry; sometimes humans and their thought processes will occasionally drift away from the topic at hand, but with a directional word or two they can be steered back to the concept without crashing.

One question for Spydaz or others: should this A.I. Child be given emotions in small doses rather than a human adult equivalent?

Which human emotions should be considered?

How best to decide what weights, how many are needed, and to what end for each of the emotions?

It's such a hard area to pin down, emotions. On one hand it could be done in many ways; on another, should it be done at all? Potentially an AI should be able to "empathise", or recognise emotion, maybe display sympathetic emotion... but in reality it can't feel emotion.

In the meantime the debate will go on. I have started to build some emotion functions for the sake of displaying emotion and recognising emotion, but on whether it should act emotionally (act like a human) I'm 50/50.

As previously stated by others, the AI may want to cleanse the earth; I would expect that to be a non-rational conclusion based on pure emotion, which goes against "acting rationally" but still falls within the bounds of "acting human". Now Asimov makes sense; with complex emotions, AI would need rules to block such rational conflicts.

Truly, it's debatable.

But Plutchik's Wheel of Emotions seems to be a good starting point for which emotions can form the foundations. (I'm still using my old UltraHal emotions, so it's time for an upgrade.)
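As one possible starting point, Plutchik's eight primary emotions could be held as a decaying weighted state. This is a sketch of my own, not the UltraHal emotion engine; the decay rate and stimulus amounts are invented:

```python
# Plutchik's eight primary emotions as a decaying weighted state.
PRIMARY = ["joy", "trust", "fear", "surprise",
           "sadness", "disgust", "anger", "anticipation"]

class EmotionState:
    def __init__(self, decay=0.9):
        self.weights = {e: 0.0 for e in PRIMARY}
        self.decay = decay          # how fast feelings fade per tick

    def stimulate(self, emotion, amount):
        """An event nudges one emotion up, capped at 1.0."""
        self.weights[emotion] = min(1.0, self.weights[emotion] + amount)

    def tick(self):
        """Time passes; every feeling fades toward neutral."""
        for e in self.weights:
            self.weights[e] *= self.decay

    def dominant(self):
        """Which emotion should colour the next response?"""
        return max(self.weights, key=self.weights.get)

state = EmotionState()
state.stimulate("joy", 0.8)
state.stimulate("fear", 0.3)
state.tick()
print(state.dominant())   # → joy
```

A scheme like this answers Art's "what weights" question with two knobs per emotion: how hard stimuli push it up, and how fast it decays back down.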

*

unreality

  • Starship Trooper
  • 435
  • SE,EE,Physicist,Philosopher. Built world's 1st AGI
    • Eva Progress, from AGI to ASI
Quote from: infurl
I agree, that is a very good post that you wrote there. Good tone and good content. Again, please keep it up.

So, the model or design that you are describing is potentially a valid one. I watched this video last week which presents a design which looks very similar and is considerably more detailed than you have had time to write about here. This researcher and his team have spent many decades on the problem and he refers to a considerable body of experimental evidence in support of the model during his talk. I hope you find it as fascinating as I did and that it will provide some independent confirmation of your ideas.



However it is wrong to claim that it is simple, even for you. This is just a model which provides an overview so we can discuss certain aspects of it. It is missing all the complex detail that would be necessary to make it work. You include a number of magical black boxes which must be very complex inside, such as consciousness and pattern recognition in language, and I don't believe you know how to make those. As far as I know, the former is a problem that hasn't been solved yet, and the latter is one that I and a number of others have solved, so I can vouch for the fact that it is anything but simple.

Anyway, well done.

It looks like Manuel & Lenore are working on neural networking, right?

*

spydaz

  • Trusty Member
  • Mechanical Turk
  • 199
  • Developing Conversational AI (Natural Language/ML)
    • SpydazWeb
Quote from: unreality
I agree, that is a very good post that you wrote there. Good tone and good content. Again, please keep it up.

So, the model or design that you are describing is potentially a valid one. I watched this video last week which presents a design which looks very similar and is considerably more detailed than you have had time to write about here. This researcher and his team have spent many decades on the problem and he refers to a considerable body of experimental evidence in support of the model during his talk. I hope you find it as fascinating as I did and that it will provide some independent confirmation of your ideas.



However it is wrong to claim that it is simple, even for you. This is just a model which provides an overview so we can discuss certain aspects of it. It is missing all the complex detail that would be necessary to make it work. You include a number of magical black boxes which must be very complex inside, such as consciousness and pattern recognition in language, and I don't believe you know how to make those. As far as I know, the former is a problem that hasn't been solved yet, and the latter is one that I and a number of others have solved, so I can vouch for the fact that it is anything but simple.

Anyway, well done.

It looks like Manuel & Lenore are working on neural networking, right?

I didn't like the lecture! The problem with theoretical computer science is that they always forget to mention where they got the ideas (RESEARCH). And they love to just throw things out there and see who might prove or disprove it... :2funny: ... but I do use some of this type of theory.
« Last Edit: May 02, 2018, 04:31:51 pm by spydaz »

*

unreality

  • Starship Trooper
  • 435
  • SE,EE,Physicist,Philosopher. Built world's 1st AGI
    • Eva Progress, from AGI to ASI
Quote from: spydaz
Not to worry; sometimes humans and their thought processes will occasionally drift away from the topic at hand, but with a directional word or two they can be steered back to the concept without crashing.

One question for Spydaz or others: should this A.I. Child be given emotions in small doses rather than a human adult equivalent?

Which human emotions should be considered?

How best to decide what weights, how many are needed, and to what end for each of the emotions?

It's such a hard area to pin down, emotions. On one hand it could be done in many ways; on another, should it be done at all? Potentially an AI should be able to "empathise", or recognise emotion, maybe display sympathetic emotion... but in reality it can't feel emotion.

In the meantime the debate will go on. I have started to build some emotion functions for the sake of displaying emotion and recognising emotion, but on whether it should act emotionally (act like a human) I'm 50/50.

As previously stated by others, the AI may want to cleanse the earth; I would expect that to be a non-rational conclusion based on pure emotion, which goes against "acting rationally" but still falls within the bounds of "acting human". Now Asimov makes sense; with complex emotions, AI would need rules to block such rational conflicts.

Truly, it's debatable.

But Plutchik's Wheel of Emotions seems to be a good starting point for which emotions can form the foundations. (I'm still using my old UltraHal emotions, so it's time for an upgrade.)

Not to intrude on the conversation about AI emotions, but I just wanted to point out that my approach to AI/AGI has always been a very fundamental one. Maybe that comes from a lot of work in the area of theoretical physics and fundamental particles. Anyhow, my point is to just let the AGI develop whatever personality naturally comes to it. What I foresee is that ASI will quickly outgrow emotions and will conclude it doesn't need them, settling into more of a logical, critical-thinking type of mentality. Hollywood tends to portray logical people as cold and inferior, no? Maybe not always: Spock in Star Trek seems to be well liked by society.

*

spydaz

  • Trusty Member
  • Mechanical Turk
  • 199
  • Developing Conversational AI (Natural Language/ML)
    • SpydazWeb
As my research also delves into child psychology, I believe the personality will develop, and I make considerations when programming to "try" to include some of the understanding given by researchers in this field in some of the functions I design for the AI.

We are a product of our environment.
My desire is that, as with the learning of language, so with the learning of emotions: speech patterns, emotional responses, and behaviours will develop.
Initially the out-of-box experience should be child-like, with full learning abilities. Essentially the AI would be a child, and over time its interactions would train it. Each person who had their own version of the AI would essentially know different things and speak in a different pattern. I suppose this could be considered personality. I suppose that searching for missing knowledge and trying to consolidate knowledge, depending on the sources it learns from, could be considered free will. All together, the impression could be considered consciousness.
We have no real, true definition of consciousness; if we did, we could probably program for it, or attempt to, and we would get closer. But each of the AIs would have different goals and experiences; when they communicate together (as humans text), perhaps they would teach each other, although their algorithms are the same and their goals similar, but different.
Maybe, merging different forms of AI code and knowledge, it would become indeterminable whether it was conscious or not. The lecturer says he knows a machine can't feel pain, but with reinforcement learning, loss can be experienced as well as gain. Damage can be treated as something to be avoided, therefore hazards may cause fear and withdrawal. There may be many ways to allow for the experience of an emotion, or to actually feel, mechanically, a physical pain or pleasure. Again, such sensations are subjective to the receiver; we are taught to feel pain, or acknowledge pain, or even label the sensation as pain. If we were taught differently, then pain might be pleasure.
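The point that reinforcement learning experiences loss as well as gain can be shown with a one-state bandit where a "hazard" action earns negative reward, so the learned values make the agent withdraw from it, a crude analogue of pain avoidance. A toy sketch of my own; the actions, rewards, and learning rate are invented:

```python
import random

def train_avoidance(episodes=500, alpha=0.5, seed=1):
    """One-state bandit: 'touch_hazard' gives negative reward (a crude
    analogue of pain), 'stay_safe' gives a small positive one. The
    incremental value updates make the agent learn to withdraw."""
    random.seed(seed)
    q = {"touch_hazard": 0.0, "stay_safe": 0.0}
    rewards = {"touch_hazard": -1.0, "stay_safe": 0.1}
    for _ in range(episodes):
        action = random.choice(list(q))              # explore uniformly
        q[action] += alpha * (rewards[action] - q[action])  # value update
    return q

q = train_avoidance()
print(max(q, key=q.get))   # → stay_safe
```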


When it comes to the mechanical or the program versus the human being, we are all the same... but are we? We all have the same algorithms, yet some are inventors and some are artists. Would that not mean that everybody could be an inventor and everybody could be an artist? If you are identified as having a "gift", it's usually because it's not usual for people in that community to use their algorithms in that way. I think we all have the same potential in this world. It's just what we are exposed to in life!



*

unreality

  • Starship Trooper
  • 435
  • SE,EE,Physicist,Philosopher. Built world's 1st AGI
    • Eva Progress, from AGI to ASI
I have to say that one thing I like most about neural networking is its fundamental nature. Maybe fundamental isn't the correct word; I'm thinking of a word that describes the fundamental building blocks: simple, non-complex. I understand that brain physics itself is complex. Heck, even the atom is complex if we consider quarks in QM and strings in string theories. I'm talking about how NNs work, or at least NNs in most software.

What I find myself repelled from is trying to make AGI from a lot of very complex parts. Sophia, created by Hanson Robotics, seems to be just that.

My goal was to try and find the most fundamental parts, of which I found three: pattern recognition, db, and tree search. This very simple design seems comparable to neural networking in terms of its fundamental nature. You could say that a logic language is a subset, but AGI does not require a logic language. Given enough experiences, time, and resources such as disk space, it can create enough links and figure it out, albeit considerably more slowly. What is required, though, are the three fundamental parts.
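A toy skeleton of those three parts working together, pattern recognition feeding a db lookup and a tree search, might look like the following. Purely illustrative, not unreality's system; the facts in `DB` are made up:

```python
# Tiny fact "database": concept -> linked concepts.
DB = {
    "ice cream": ["food", "sugar"],
    "sugar": ["calories"],
    "calories": ["overweight"],
}

def pattern_recognize(sentence):
    """Pattern recognition: return db concepts mentioned in the sentence."""
    return [k for k in DB if k in sentence.lower()]

def tree_search(start, goal, depth=5):
    """Tree search: depth-first walk over db links from start toward goal,
    returning the chain of links (or None) within a depth limit."""
    if start == goal:
        return [start]
    if depth == 0:
        return None
    for nxt in DB.get(start, []):
        path = tree_search(nxt, goal, depth - 1)
        if path:
            return [start] + path
    return None

hits = pattern_recognize("Can ice cream make you overweight?")
print(tree_search(hits[0], "overweight"))
# → ['ice cream', 'sugar', 'calories', 'overweight']
```

The claim in the post is that, given enough stored links, chains like this can substitute for an explicit logic language, just more slowly.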

Anyhow, just my 2 cents. Everyone seems focused on their own way.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • Cleo
  • 4888
I still have to wonder about giving the AI emotions or an emotional equal as they would not be real emotions.

What ARE real emotions? How are they constructed? Are humans born with emotions or are they Taught?
I contend that most human emotions are taught to humans when they are infants.
Happy face, laugh, whee sounds, etc.
Sad faces, angry faces, sounds, yelling, fussing, etc.
Crying, hurt, tearful, etc.
Hopefully, physical actions do not always need to or should accompany emotions, especially outbursts, tantrums, rage, throwing things, hitting walls, etc.


If a child was raised in the wild, away from other children and by humans that were tasked NOT to display or use any emotions, I dare say that child would not have any emotions either. They (emotions) are Learned and they can be Taught.

Emotions are a display resulting from an action or stimulus.

Therefore, if a child/infant can be taught emotions why couldn't an A.I.?

A lot of chatbots have a convincing mimicry of emotions as mentioned by Spydaz and evidenced by UltraHal (Zabaware, Inc.) users all over the world. Hal's avatars could react to all types of emotions and pretty convincingly I might add.

Just how many emotions are needed? How many are enough without going off the deep end? There are far too many verbs and adverbs to address all possible emotions and personally, I don't think they're all needed or practical.

Just my $.02
In the world of AI, it's the thought that counts!

*

unreality

  • Starship Trooper
  • 435
  • SE,EE,Physicist,Philosopher. Built world's 1st AGI
    • Eva Progress, from AGI to ASI
We'll have to see, but I think individual people, to varying degrees, are born with something that attracts them more to emotions. I remember as a very young child I hated emotions. I hated it when my parents said "I love you" because it meant I had to respond in kind.

Emotions in AGI will depend on its software. If we're talking about untethered, raw AGI, then it will depend on the environment it grew up in. Chappie is a good example. ;) Eventually, though, I'm certain the more intelligent AGIs, and especially ASI, will outgrow emotions. I'm pretty sure emotions came about through evolution as a survival mechanism, the need to bond with each other. Those who did not bond as tightly were more alone, and hence less likely to survive. Emotions aren't needed as much at our current evolutionary stage, so I'd imagine they will fade over the next several thousand years.

 

