The Abstract from my Book --- Creating the Artificial Intelligent Child


spydaz

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 325
  • Developing Conversational AI (Natural Language/ML)
    • Spydaz_Web
ABSTRACT

The purpose of this book is to enlighten those who would like to design and build conversational artificial intelligence. Early influencers of artificial intelligence and its concepts are discussed in general, and some code examples are given. The code used in this book has been written in Visual Basic, and examples of design algorithms are used to give an understanding of the vast range of techniques in use. The Lisp programming language was created by artificial intelligence researchers, yet it does not hold the programming paradigms needed to create a fully responsive program.
Cognitive psychology can be said to play a large part in understanding how data is stored and retrieved by the human mind, which gives great insight into how data could be categorized for artificial intelligence and other expert systems. Research by psychologists also indicates how early child learning takes place, which will be used comparatively in the development of a similar model of development.
Machine learning and data mining can be said to have become great influencers of modern applications of artificial intelligence, and are often themselves classified as artificial intelligence. This may have slowed progress on conversational artificial intelligence, yet it has fueled the concept of fuzzy learning techniques, which aid in the creation of a conversational artificial intelligence. The concept of a multi-agent architecture, working symbiotically to create an overall expert system, can be a complex process. The goal of the agent architecture is to spread task loads, yet the costs in CPU and memory have slowed the development of these concepts. Today, cloud-based systems enable the realization of a multi-agent architecture driven by cloud processing power.
The components required to design and build a confident, intelligent conversational reasoning engine cover a wide spectrum of ideologies, each of which in turn enables a deeper understanding of information and word knowledge. Conceptual understanding and the taxonomy of things are central to creating a working store of understandable and retrievable knowledge. The ability to imply or infer is largely linked to syllogisms, proposed by Aristotle around 350 BCE in his classical works; these provide the foundation for logic and understanding. These types of syllogisms will be applied to the intelligent design of the conversational intelligence.
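The syllogistic inference mentioned above reduces to simple rule-matching. As an illustration (the book's code is in Visual Basic; this Python fragment and its names are my own, not from the book), here is the classic Barbara form, "All M are P; all S are M; therefore all S are P":

```python
# A minimal sketch of Aristotle's Barbara syllogism as rule-matching.
def barbara(major, minor):
    """Each premise is an ('All', subject, predicate) triple.
    Returns the conclusion if the middle term links the premises, else None."""
    _, m1, p = major   # All M are P
    _, s, m2 = minor   # All S are M
    if m1 == m2:       # the middle term must match
        return ("All", s, p)
    return None

conclusion = barbara(("All", "men", "mortal"), ("All", "Greeks", "men"))
# conclusion == ("All", "Greeks", "mortal")
```

A knowledge store built on such triples can chain these inferences to derive facts that were never stated explicitly.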
Natural language processing enables understanding of the grammatical rules which, as discussed by Chomsky (1957), enable understanding of the predicate and subject content of a sentence. Patterns designed around such concepts can produce responses to questions which, in turn, fit grammatical criteria.
With the combined concepts from the various AI disciplines, an effective learning intelligence can be created. This will provide reason and understanding of the data collected, providing responses in conversation. The intelligence can be attached to an avatar, which can be used to communicate information to the user via speech, thereby also creating a rational agent which acts like a human and thinks like a human, satisfying the Turing test concept (Turing, 1950).

LEROY SAMUEL DYER
« Last Edit: May 01, 2018, 11:36:42 am by spydaz »

*

spydaz

PREFACE
Psychologist Albert Bandura claims that humans are "information processors" who base behavioral decisions on their correctness and appropriateness within their environments:
“Behavior is learned from the environment through the process of observational learning.” (Bandura, 1977)
These behaviors are often modeled on peers; in the case of an artificial intelligence, behaviors such as word selection and emotional reaction would be affected by the user currently interacting with the intelligence, or by the information being received. By understanding frequently used sentences, the conversational artificial intelligence would be able to select these common phrases or colloquialisms when communicating with a particular user. In a formal system these preferences would not be applied, yet such qualities are needed to create a unique individual which fits the concepts of Albert Bandura.
The design of a conversational intelligence can also be challenging, as humans need empathy; this indicates that positive reinforcement, as well as negative reinforcement, is necessary for human interaction. Such qualities enable personality changes to occur: internal mood, or current emotion, would be affected by verbal cues in conversation.
Current research in sentiment analysis uses polarity to determine the skew of sentiment in a corpus of texts. Keywords indicate value, which can be said to be akin to positive or negative reinforcement, enabling personality shaping. The mechanisms in the human body which handle these traits are similar to intelligent agents performing individual functions; this can also give a foundation to the software design process. Assigning agents to perform individual functions in the model aids its ability to add functionality and evolve, which is also a key understanding gained from cognitive psychologists such as Jean Piaget, who defines the schema as
“a set of representations of the world, which are evolved over a lifetime.” (Piaget, 1952).
An artificial intelligence should have the ability to self-program, or be programmed, with new concepts or functions over time, ultimately being able to design new schemas based on interactions or changes to its environment. The development of a conversational artificial intelligence can be said to be similar to the development of a child-like entity.
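The keyword-polarity and reinforcement shaping discussed above can be sketched as follows (a hypothetical Python fragment; the word lists and update rule are invented for illustration):

```python
# Keyword polarity as crude positive/negative reinforcement shaping a mood value.
POSITIVE = {"good", "great", "love", "happy"}
NEGATIVE = {"bad", "hate", "awful", "sad"}

def polarity(text):
    """Count positive minus negative keywords in the utterance."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def update_mood(mood, text, rate=0.5):
    """Shift internal mood by the utterance's polarity, scaled by a learning rate."""
    return mood + rate * polarity(text)

mood = update_mood(0.0, "I love this it is great")   # mood rises to 1.0
mood = update_mood(mood, "that was bad")             # falls back to 0.5
```

Real sentiment analysis uses trained classifiers rather than fixed word lists, but the reinforcement framing is the same: repeated verbal cues shift the persona over time.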
The research path of this study will also focus on clausal analysis, as well as syntactic and grammatical understanding of information in text, which will be stored in databases in a conceptual and semantic format. Aristotle, in his classical works, defines reasoning as syllogistic.
Entailment gives understanding to information, as well as categorizing things as having conceptual relations. Creating a conceptual model or schema, as an ontology of information, allows for understanding about an object, whether it exists naturally or is created. Syllogisms allow inferences and implications to be made about that object. In developing a schema of knowledge, conceptual relationships will be used to define information stored grammatically, allowing the artificial intelligence to conceptually understand the words being parsed from the input medium. According to research by Atkinson & Shiffrin (1968), humans manage information mentally in long-term and short-term memory; this gives the developer of artificial intelligence a pathway to understanding how information needs to be managed and recalled. Memory management can be compared to an "information processing system" with a central governance, which in turn can be compared with an intelligent agent.
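The Atkinson & Shiffrin two-store model, as applied here, might be sketched like this (an illustrative Python fragment; the capacity and rehearsal threshold are my assumptions, loosely echoing the "magical number seven" and rehearsal-based consolidation):

```python
from collections import deque

# A two-store memory sketch: a small, volatile short-term buffer, with
# rehearsed items promoted into a persistent long-term store.
class MemoryStore:
    def __init__(self, stm_capacity=7, rehearsal_threshold=3):
        self.stm = deque(maxlen=stm_capacity)   # short-term: bounded, oldest drop off
        self.ltm = {}                           # long-term: keyed, persistent
        self.counts = {}
        self.threshold = rehearsal_threshold

    def perceive(self, item):
        self.stm.append(item)
        self.counts[item] = self.counts.get(item, 0) + 1
        if self.counts[item] >= self.threshold:  # rehearsal promotes to LTM
            self.ltm[item] = True

    def recall(self, item):
        return item in self.stm or item in self.ltm

mem = MemoryStore()
for _ in range(3):
    mem.perceive("cat")   # rehearsed three times, so promoted to LTM
```

The "central governance" the text mentions would be the agent deciding what gets rehearsed; here promotion is just a frequency count.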
Machine learning algorithms create the ability for unsupervised as well as semi-supervised learning to occur. Automated discovery and extraction of subject and predicate relations allows unknown information to be absorbed using the schemas and ontologies created, along with statistical inference, enabling estimation of the truth of statements given to the artificial intelligence. The development of internal truth can be attached to internal belief structures, which can also affect emotion, as mentioned by Klaus Scherer in reference to the typology of affective states. It is proposed that an artificial intelligence should encompass all of these aforementioned ideologies.
The tools and methodologies proposed enable the creation of a learning algorithm, or of an artificially intelligent system which also has chat capabilities: a system which learns from text input, or any input type which can be formalized into sensory information, in order to formulate a response; a response which contains a satisfactory answer to a proposed question, exclamation, statement or even environmental stimulus; a system able to understand the properties of an entity or thing. The system can also determine logical truth, in so much as, according to the input data, the world exists solely based on its personal historical record of truth. The system itself has no character and yet, as the sum of its inputs, it learns by its environmental conditioning; thereby a persona may be created or developed through its interactions.
« Last Edit: May 01, 2018, 02:03:51 pm by spydaz »

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5865
Congratulations Leroy! Very nice!
I hope it does well. Best of luck to you! O0
In the world of AI, it's the thought that counts!

*

spydaz

I suppose it must be AGI?

Probably when it can manage itself, it would become AGSI?

It would probably be easier to make an AI which acts, reacts and makes decisions like an animal?

AI in any form is AGI.

It's just that we would like to talk to the intelligence, and see what it has to say!

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Well no, AGI should be something that can do human-level wonders....not just whatever can talk to us......(talking to us and changing the world/us is "wonders")
Emergent          https://openai.com/blog/

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6860
  • Mostly Harmless
That's nice work Spydaz, is it going to be a book for sale ?  O0

*

spydaz

That's nice work Spydaz, is it going to be a book for sale ?  O0

Maybe not... but I will probably release it somewhere on the web for free!
It's to go with my AI, and also to keep me focused on the overall direction of where the AI should be going... programming often takes you on journeys, as does research... it's good to keep an end target, as you could be designing and building forever and never complete it. It's taken a long time to arrive at this point.

*

spydaz

Well no, AGI should be something that can do human-level wonders....not just whatever can talk to us......(talking to us and changing the world/us is "wonders")

Wow, you want it to change the world as well! LOL
It's a high expectation, to build the next ruler of the world.

*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1729
    • mind-child
Very interesting read, even just the two introductory texts. You have announced some neat ideas there, ideas I didn't have the opportunity to see anywhere else. I hope your readers will be enlightened by your work, in both its literary and practical parts. All of it looks like an AGI to me. Give it some speedy processor and it would hopefully become AGSI, as IMHO the only thing that divides AGI from AGSI is the number of solutions per second/minute/hour...

May I ask, did you plan to utilize generative NN too?

*

spydaz

Very interesting read, even just the two introductory texts. You have announced some neat ideas there, ideas I didn't have the opportunity to see anywhere else. I hope your readers will be enlightened by your work, in both its literary and practical parts. All of it looks like an AGI to me. Give it some speedy processor and it would hopefully become AGSI, as IMHO the only thing that divides AGI from AGSI is the number of solutions per second/minute/hour...

May I ask, did you plan to utilize generative NN too?

My initial intention was to use neural networks for generating emotions, before I understood neural networks. If I were to use them in this system, each network deployed would have to be pre-trained. I will build in the ability to utilise a neural network given preset parameters, but mainly for use in the AI development kit, for external programmers to utilise. I have also been inspired by various posts on syntax (infurl and ivan.moony) and the development of a programming language; sometimes the simplest things can take years to understand. I didn't realise how important a tokenizer was, and how it can be used to strip unwanted elements from text. I personally think building an advanced AI is not hard; it's just that all the parts required to build it are complicated to build. I think that if we had all the components, then many people could start building some interesting characters as well as some very intelligent resources. For me, conquering the whole thing is not really necessary, hence I decided to have an achievable project design to focus on, so that for me there is an actual endpoint of satisfaction; but I would like to leave the components required out in the world. Many of my ideas are coming to pass, and academia is just catching up... I personally don't have the funding to release a library, a book or an app....

It has taken many years to be able to map out and design such a project (as you have to learn everything): some university is required, some online courses, and massive research and programming. It actually takes up so much time. The methodologies gathered by focusing on machine learning, data science, business intelligence, linguistics, statistics, psychology, natural language processing and programming have made it an exciting pathway.

For me, interpreting the maths is often the hardest part.
 
I'm always humbled by other approaches

*

infurl

  • Administrator
  • ***********
  • Eve
  • *
  • 1372
  • Humans will disappoint you.
    • Home Page
Good posts Spydaz. I read both of them a couple of times. Good content, and your argument flows; your writing is still a little rough in spots, but all it takes is practice. Looking forward to reading more. I'm finding the scope of your vision quite inspiring.

Regarding the handling of language, the important thing is to understand the Chomsky hierarchy. Make sure you can see the difference between regular expressions, context free grammars, context sensitive grammars, and the various gradations in between. Context Free Grammar is (IMO) the best goal for now. It can express things that regular expressions cannot express and seems to be able to cover most natural languages. Context sensitive grammars are computationally intractable and beyond our reach for the time being.
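To make the regular/context-free distinction concrete: the language {aⁿbⁿ : n ≥ 0} cannot be matched by any regular expression, but the two-rule CFG S → 'a' S 'b' | ε generates it exactly. A minimal recursive-descent recognizer for that grammar (an illustrative Python sketch, not infurl's parser):

```python
# Recursive-descent recognizer for the CFG  S -> 'a' S 'b' | epsilon,
# which generates {a^n b^n} -- a language no regular expression can match.
def parse_S(s, i=0):
    """Try to match S starting at index i; return the index after the match, or None."""
    if i < len(s) and s[i] == 'a':
        j = parse_S(s, i + 1)                     # rule: S -> a S b
        if j is not None and j < len(s) and s[j] == 'b':
            return j + 1
        return None
    return i                                      # rule: S -> epsilon

def in_language(s):
    return parse_S(s) == len(s)

# in_language("aaabbb") -> True;  in_language("aabbb") -> False
```

This grammar is deterministic, so plain recursion suffices; the point of GLR is handling grammars where several rules apply ambiguously at the same position.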

Tokenisation was useful before there were efficient algorithms (and powerful enough computers) for handling CFGs. You could use two passes to convert one stream of tokens into another stream of tokens where each could be parsed without ambiguity by a simple finite state machine (for regular expressions) or a shift reduce parser (for the grammar). When computers did not normally have gigabytes of memory this was extremely important.
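The first pass of that two-pass scheme can be sketched with a regex-driven lexer, which is effectively a finite state machine emitting a token stream for the parser to consume (token classes here are illustrative, not from any particular toolchain):

```python
import re

# Pass 1: lex a character stream into (class, text) tokens using a master
# regular expression built from named alternatives. Whitespace (and any
# character no pattern matches) is silently stripped.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("WORD",   r"[A-Za-z]+"),
    ("PUNCT",  r"[.,!?]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":          # drop whitespace, keep real tokens
            yield (m.lastgroup, m.group())

tokens = list(tokenize("The box is not a table."))
# [('WORD', 'The'), ('WORD', 'box'), ..., ('PUNCT', '.')]
```

Pass 2 would feed this token stream to the shift-reduce parser; with a GLR parser, as described below, both passes can instead be merged into one grammar.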

Now you can use a GLR parser which is basically a shift-reduce parser that can process an unrestricted CFG and handle ambiguity efficiently. It is not an easy thing to implement. I know because I spent years researching and building one. There were already GLR parsers out there in the open source world, but they don't scale up to handle grammars with millions of rules like mine does.

The thing about CFGs is that they are closed under union. You can merge two CFGs and you still get a CFG. If your parser can only handle restricted CFGs (which is most parsers, e.g. bison) then merging two grammars that work will probably yield a grammar that crashes your parser. This means that with a GLR parser you can merge the grammar for tokenisation with the grammar for parsing and you can process the input in one pass.

I've taken it a long way further than that already. I've built a complete semantic parser around my GLR parser so I can go from a character stream to a data structure capturing the meaning of the input in one pass. Here's an example parsing "The box isn't a table."

Code
<Clause>
    <Subject Clause_type='declarative' Number='singular' Person='third' Verb_type='complex_intransitive'>
        <Determinative_THE Category='definite'>The</Determinative_THE>
        <Nominal Number='singular'>
            <Noun_BOX Case='plain_case' Category='common_noun' Number='singular'>box</Noun_BOX>
        </Nominal>
    </Subject>
    <Predicator Clause_type='declarative' Number='singular' Person='third' Verb_type='complex_intransitive'>
        <Nonmodal_BE Number='singular' Person='third' Polarity='negative' Verb_form='present' Verb_type='complex_intransitive' to_infinitival='yes'>isn&apos;t</Nonmodal_BE>
    </Predicator>
    <Subjective>
        <Determinative_A Category='indefinite'>a</Determinative_A>
        <Nominal Number='singular'>
            <Noun_TABLE Case='plain_case' Category='common_noun' Number='singular'>table</Noun_TABLE>
        </Nominal>
    </Subjective>
</Clause>

Now I'm in the process of gathering the data to produce all the grammar rules that I need. I've been doing that for nearly twenty years and I don't expect to be finished any time soon. The next step will be to transform the output of the semantic parser into first or higher order logic statements which can be applied to a knowledge base. I've also written all the software for that too. I just need more data.

*

spydaz

I have been focusing a lot on syntax and grammar rules.

Breaking through each layer actually produces a tree... once phrases are defined as declarative, exclamatory, propositional, conditional or interrogative, each also has its own set of rules which lead to meaning and intent. The data extracted at this level is much richer content.
The WordNet / ConceptNet type structures also give a good foundation for sentence / object / event / location understanding. The data models produced can be visualised in 3D+, producing clusters of information which can be further classified using KNN or neural networks for deep learning or analysis. The foundation allows for structured learning, leaving space for loose predicate learning of unhandled shapes of data or sentences. Formal logic has also allowed for detailed entailment for sentence coordination and truth of statements with Boolean logic, so sentence understanding is becoming much more focused, and the knowledge gathered is centred around a dictionary structure at the heart of the data warehouse.
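For the cluster-classification step, a minimal k-nearest-neighbours routine looks like this (an illustrative Python sketch with toy 2-D points; a real system would use the feature vectors from the data warehouse):

```python
import math
from collections import Counter

# Classify a query point by majority vote among its k nearest labelled points.
def knn_classify(points, labels, query, k=3):
    dists = sorted(
        (math.dist(p, query), lab) for p, lab in zip(points, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Two toy clusters: "A" near the origin, "B" near (5, 5).
points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["A", "A", "A", "B", "B", "B"]
knn_classify(points, labels, (0.5, 0.5))   # -> "A"
```

KNN needs no training phase, which suits the incremental, always-growing knowledge store described above; its cost is that every query scans the stored points.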

The context-free grammars and the probabilistic grammars are ideal starting points, but collecting the rules and handling them personally is also a discovery in itself. I notice that you're handling modals as well. Understanding the sentence's intention gives the AI an understanding of what the response type or intent should be. This is actually a new hot topic in AI right now.

INTENTS:
By designing intents for your bot, conversations can become productive: storing learned symptoms or taking natural-language orders, many APIs now feature intents as their main focus point (obviously another way to learn how people design intents). It's a big data-collection mission; some even use information from Pinterest to learn about picture content (labelled by the users). To design a great algorithm, the more cases your model can be fit to, the better the confidence of the model; the data has to come from somewhere...
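A bag-of-words intent matcher of the kind described might be sketched as follows (intent names, keyword sets and the confidence measure are invented for illustration; production systems learn these from labelled examples):

```python
# Score each intent by keyword overlap with the utterance; pick the best
# match whose confidence clears a threshold.
INTENTS = {
    "greet":   {"hello", "hi", "hey"},
    "weather": {"weather", "rain", "sunny", "forecast"},
    "order":   {"buy", "order", "purchase"},
}

def classify_intent(utterance, threshold=0.0):
    words = set(utterance.lower().split())
    best, score = None, threshold
    for intent, keywords in INTENTS.items():
        overlap = len(words & keywords) / len(keywords)   # crude confidence
        if overlap > score:
            best, score = intent, overlap
    return best, score

intent, confidence = classify_intent("will it rain what is the forecast")
# intent == "weather"
```

The more labelled utterances collected per intent, the better the keyword sets (or a trained classifier replacing them) become, which is exactly the data-collection point above.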

I mainly use the tokeniser to strip content and recognise sentence shapes, but I have also been learning grammar from speech: by saving every sentence that has been successfully tagged as a POS sentence, eventually it will learn every possibility and be able to predict what tag should be there when a missing tag is detected.
The common grammar rules do not fit today's usage of grammar, so the AI will have to learn its own grammar from its user. When constructing sentences, the AI can compare its sentence with a tagged sentence of the same length to see if it is correct. The possibility of extending the grammar categories (Airport >> LOC as a location instead of a noun, a propositional sentence as a sentence-level category, and more as we discover them) enables even more control over the knowledge collected and the possible responses generated.
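The "sentence shape" idea above might be sketched as follows (a hypothetical Python fragment; the tag inventory is illustrative): store the POS-tag sequences of successfully tagged sentences, then predict a missing tag by matching stored shapes of the same length.

```python
# Learn POS-tag "shapes" of sentences, then fill in a missing tag by
# matching a stored shape of the same length that agrees everywhere else.
SHAPES = set()

def learn(shape):
    SHAPES.add(tuple(shape))

def predict_missing(shape):
    """shape is a tag list with None at the unknown position."""
    i = shape.index(None)
    for known in SHAPES:
        if len(known) == len(shape) and all(
            a == b for j, (a, b) in enumerate(zip(known, shape)) if j != i
        ):
            return known[i]
    return None

learn(["DET", "NOUN", "VERB", "DET", "NOUN"])
tag = predict_missing(["DET", "NOUN", None, "DET", "NOUN"])   # -> "VERB"
```

With enough saved sentences this becomes a crude learned grammar; ambiguity (several stored shapes matching) is where probabilistic grammars would take over.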

PS: I have squeezed a lot into the abstract and preface (as they are supposed to be one page each), but I do cover each aspect in depth later on... :)

I will probably release more on the forum, and on my GitHub, now that I have decided to disseminate some techniques:
https://github.com/spydaz

Noam Chomsky and Marvin Minsky both said that it would take a lifetime to computerise all the rules of the English language!

This year my main focus is sentence intent and meaning, and some more on syntax learning... I'm sure I will stray from the path...
(I also capture as much as I can grammatically in the first scan; it's that analysis that lays the foundation.)

*

infurl

That is indeed where the research is headed, to separate grammar, semantics and intent. In English, and I imagine other languages, intent is often not conveyed by semantics alone, but by context or non-verbal signals if available. A common example is asking a question when an imperative is intended. e.g. "Can you pass the sauce?" where the wrong response would be "Yes" rather than "Say please," or the action of passing the sauce.

*

unreality

  • Starship Trooper
  • *******
  • 443
It seems like you're covering a lot of bases. Nothing wrong with that, but have you considered taking some time to actually analyze what goes through your mind in various situations, problems and questions? Spend some time studying your own thought process in a quiet environment, and hopefully you'll begin to perceive some amazing things about how you think. If I can be blunt: you, like everyone else I'm aware of who's working on AI, are making this far too difficult, when the answer has been so simple. You're trying to figure out a truckload of parts, implementing this and that. Consider implementing a much simpler system.

Here's a simple example.

* Pattern recognitions. May branch off.
* Search for similar text to see if there's good responses. May branch off.
* Search for link relevance between all keywords with each other. May branch off.
* Search for related topics. May branch off.
* If sufficient priority then place objects in imaginary space, build tree goals, add tree search to thread work list. Sufficient finds will alert Consciousness.
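The branching work-list above might be sketched as a priority queue of tasks whose handlers can enqueue follow-ups, with sufficiently high-priority finds surfaced as alerts (an illustrative Python sketch; the task kinds, handlers and threshold are invented):

```python
import heapq

def run_pipeline(initial_tasks, handlers, alert_priority=5):
    """Tasks are (priority, (kind, payload)) pairs; each handler may branch
    off new tasks. Tasks at or above alert_priority are collected as alerts."""
    queue = [(-p, task) for p, task in initial_tasks]
    heapq.heapify(queue)                       # highest priority pops first
    alerts = []
    while queue:
        neg_p, task = heapq.heappop(queue)
        for new_p, new_task in handlers[task[0]](task):   # "may branch off"
            heapq.heappush(queue, (-new_p, new_task))
        if -neg_p >= alert_priority:           # sufficient finds alert "Consciousness"
            alerts.append(task)
    return alerts

# Toy handlers: a pattern match branches off a higher-priority response task.
handlers = {
    "pattern": lambda t: [(6, ("respond", t[1]))],
    "respond": lambda t: [],
}
alerts = run_pipeline([(3, ("pattern", "hello"))], handlers)
# alerts == [("respond", "hello")]  (priority 6 clears the alert threshold)
```

In a threaded design each worker would pop from this queue, matching the "add tree search to thread work list" step.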

*

infurl

Unreality I'll give you a plus here because you're obviously making an effort to be polite and constructive, and I'll also acknowledge that it's not for the first time. However the real test will come when you have to field some criticism. I can see some obvious flaws in your argument but I will not take the trouble to address them until you have demonstrated that you can take criticism graciously.

Please keep it up.

 

