Ai Dreams Forum

Member's Experiments & Projects => AI Programming => Topic started by: Neogirl101 on January 17, 2015, 02:56:08 am

Title: Creature AGI (psuedo)coding thread
Post by: Neogirl101 on January 17, 2015, 02:56:08 am
Hi everyone,

So this is where I'll be posting my pseudocode and, someday, my code for my AGI.

I've recently decided on a new AGI architecture that is a combination of various other AGI architectures and processes (why reinvent the wheel, right?). I'll let my iPad note speak for itself.
Note: I'm sorry the note scrolls so far over...  :-\

Code
My AGI Architecture

Design principles:

1. Three mechanisms that apparently help create human behavior:

A. Powerful perceptual and "pre-cognitive" processing

B. Memory, along with processes that contextualize and generalize experiences for applicable future use

C. Satisficing ("good enough") reasoning


2. Two more important aspects of human cognition:

A. Progressive filtering and structuring of perception ("cognitive pyramid")

a. The cognitive pyramid solves the trade-off problem of quantity and sophistication by collapsing and integrating large volumes of perceptual information into more abstract aggregates that can then easily be stored, retrieved, manipulated and composed.

b. Once the information is filtered and aggregated, the various pieces of information compete for the AGI's attention (functional consciousness).

c. When the volume of incoming data is high, the architecture locally aggregates and filters it. This means that in the architecture's progressively higher levels, the reduced data volume allows more computationally expensive processing (such as symbolic reasoning) to have a lighter load.


B. Tight integration of functionalities and modalities

a. Tight integration is fine-grained, full-spectrum interaction between the cognitive levels.

b. Tight integration allows better leveraging of the inherent power of the individual components. With loose integration, information (including information about the situation) becomes "trapped" inside subsystems and cannot be effectively communicated.

c. All cognitive components will share a common memory, an integrated control process, and a shared language for communication between individual components.

d. Learning and information flows both forwards and backwards between the layers. For example, Proto-cognition's filtering and clustering are modulated by signals from Micro-cognition indicating goodness-of-fit and applicability of high-salience clusters to ongoing problem solving. This allows the architecture to adapt and optimize itself to the nature of its processing and the structure of its environment.

-----------------------------------------------------------

3. The three layers of the AGI (and their components*):

A. Pre-cognitive processing (proto-cognition)

Perception, attention, aggregation

B. Micro-cognition

Motivation, actuation, memory, communication, learning, quantitative functions, map formation, goal and sub-goal creation, databases

C. Macro-cognition

(Symbolic) reasoning (such as abduction, deduction, induction, etc.), social interaction, planning, emotion, modeling self/others, building/creation, knowledge (w/ cognitive synergy), meta-cognition, other

*See competency areas below.

-----------------------------------------------------------

Competency Areas*

Perception

Proprioception, introspection, speech recognition, pattern recognition, musical processing, vision, smell, touch, auditory, prediction, natural language processing, salience, face/emotion/image etc. recognition

---------------------

Memory

Working, episodic, implicit, semantic, procedural, short-term, long-term, past-present event comparison, forgetting

---------------------

Attention

Visual, auditory, social, behavioral, internal, dynamic allocation, competitive selection,
reflexive response

---------------------

Social interaction

Communication, appropriateness, social inference, cooperation, competition, relationships

---------------------

Planning

Tactical, strategic, physical, social

---------------------

Motivation

Drives (incl. social), appetence, aversion, goal and sub-goal planning and setting, affect-based, altruism

---------------------

Actuation

Physical skills, tool use, navigation, proprioceptive, face tracker/gaze director, animation database/generator, action selection, animation control, text-to-speech lip sync, conflict manager/scheduler

---------------------

(Symbolic) reasoning

Induction, deduction, abduction, physical, casual, associational, past-present event comparison

---------------------

Communication

Gestural, verbal, musical, pictorial, diagrammatical, language acquisition, natural language processing/understanding

---------------------

Learning

Experimentation, imitation, reinforcement, media-oriented, non-associative, cognitive, observational, non-monotonic learning, dialogical/through dialogue

---------------------

Emotion

Perceived, expressed, control, understanding, sympathy, empathy, mood, reactive

---------------------

Modeling self/other

Other-awareness, relationships, self-control, Theory of Mind, sympathy, empathy, self-awareness, self-reflection, metacognition

---------------------

Building/creation

Physical construction with objects, formation of novel concepts, verbal invention, social organization, internal simulation ("imagining"), concept creation

---------------------

Quantitative

Counting observed entities, comparison of quantitative properties of observed entities, measurement using simple tools

---------------------

Knowledge

Declarative, procedural, attentional, sensory, episodic, intentional, domain, meta-knowledge

---------------------

Databases

Query manager (incl. queries about processes and knowledge within the AGI's self,) agent models, faith engine, ontologies, situation model, context awareness, failsafes/protective programming

---------------------

Other

Beliefs, humor, conscious control of cognition, personality, likes/loves, dislikes/hates

-----------------------------------------------------------

The AGI's consciousness stems from three factors:

1. Different inputs competing for the creature's attention (a functional view of consciousness).

2. The AGI can think about and process (and thus be aware of) various parts of itself (e.g. its beliefs and emotions) and have a sense of selfhood. It can also relate other things to itself (e.g. "I just spilled punch all over the floor, I must be really clumsy...").

3. The AGI can be made aware of all or, at least in the beginning, some aspects of other people (e.g. personality, likes, dislikes, beliefs, etc.). This makes the AGI more than only self-conscious.

---------------------------------

Other notes:

Ontologies (as well as roles) in "Weaving a Fabric of Socially Aware Agents" can be stored in the databases module within the Micro-cognition level.

Modules within components (e.g. the visual module within the attention component) consist of even smaller codelets that only process information when they receive relevant or matching information. This saves time and computation expenses.
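The note's "cognitive pyramid" and attention-competition ideas can be sketched in a few lines of Python. Everything below (the percept strings, the salience-equals-count rule) is an invented toy to illustrate the flow, not part of the architecture itself:

```python
# Hypothetical sketch of the cognitive pyramid: many raw percepts are
# collapsed into abstract aggregates, which then compete for attention.

def aggregate(percepts, key):
    """Collapse raw percepts into per-category aggregates (chunks)."""
    groups = {}
    for p in percepts:
        groups.setdefault(key(p), []).append(p)
    # Toy rule: a chunk's salience grows with how much raw data it summarizes.
    return [{"label": k, "count": len(v), "salience": len(v)}
            for k, v in groups.items()]

def attend(chunks):
    """Functional consciousness: the most salient chunk wins attention."""
    return max(chunks, key=lambda c: c["salience"]) if chunks else None

percepts = ["red", "red", "loud", "red", "touch"]
chunks = aggregate(percepts, key=lambda p: p)
winner = attend(chunks)   # the "red" chunk, since it aggregates the most data
```

Only the winning chunk would then be passed up to the more expensive symbolic layers, which is the whole point of the pyramid.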

And here's the first bit of pseudocode I'm working on: attention (based on a PDF describing an AGI attention mechanism).

Code
{

{

# Activate

}

{

# Load all variables

}

{

# Determine which types of input (auditory, visual, etc.) should be sent to which modules while making sure
# compatibility between them is present

}

{

# Begin program loop

}

{

# Begin save loop

# Save every 3 seconds

}

{

# Accept up to 9 external and internal inputs

}

{

{

# Determine the creature's desired or expected drive(s), state(s), goal(s), sub-goal(s) and prediction(s)

}

{

# Send the creature's current drive(s)/state(s)/goal(s)/sub-goal(s)/prediction(s) and a request to search for
# similar needs/problems to the reasoning component

}

{

# Wait for results to return from the memory component

}

{

# Accept the result(s)

}

{

# If no similar problem(s) are found in the creature's "memories":

  # Skip this if/else block

# Else:

  # Create attentional templates based on any similar problems found

  # Establish the action(s) and word(s) that helped resolve the problem as automatically having a higher
  # positive bias

  # Skip the next block straight to the block regarding checking inputs against the creature's desire(s)
  # and/or expectations

}

{

# Create attentional templates based on all information received that is relevant to the creature's current
# and most prioritized drive(s)/state(s)/goal(s)/sub-goal(s)/prediction(s)

}

{

# Check all current inputs against the creature's current and most prioritized desire(s) and/or expectation(s)


}

{

# Give each input that is related to a current attentional template a positive bias increase relative to the
# priority of the creature's current desire(s)/expectation(s)

}

{

# Quickly scan the contents of the creature's memory component against all of the inputs currently being
# processed

}

{

# Compare novel and/or unexpected input(s) to the creature's past experience or the current context

}

{

# Give a saliency bias to all input(s) that have novelty and/or unexpectedness, in proportion to how
# different they are from the norm (while gradually reducing the saliency bias of novel and unexpected
# inputs the more frequently they occur)

}

{

# Ignore input(s) that are familiar, possibly expected and not relevant to any of the current attentional
# template(s)

}

{

# Determine which input has the highest bias (the external and internal inputs are determined in an equal
# manner)

}

{

# Select the input with the highest bias (whether internal or external)

}

{

# Send input to compatible module(s)

}

{

# Reset save loop

}

{

# Reset program loop

}

{

# If the creature is deactivated:

  # End save loop

  # End program loop

  # Deactivate

# Else:

  # Allow the program to keep running

}

}
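A single pass of the loop above might look like this in Python. The input records, bias numbers, and template priorities are all made up for illustration; this is a sketch of the selection step, not the whole mechanism:

```python
# Toy one-pass sketch of the attention loop: bias each input by template
# relevance plus novelty, then select the single highest-biased input.

def score(inp, templates):
    """Bias an input by attentional-template relevance plus novelty."""
    bias = 0.0
    if inp["kind"] in templates:        # matches a current attentional template
        bias += templates[inp["kind"]]  # weight = priority of that desire/drive
    bias += inp.get("novelty", 0.0)     # saliency bias for novel/unexpected input
    return bias

def select_input(inputs, templates):
    """Pick the one input (internal or external) with the highest bias."""
    scored = [(score(i, templates), i) for i in inputs]
    best_bias, best = max(scored, key=lambda s: s[0])
    return best if best_bias > 0 else None  # familiar, irrelevant inputs are ignored

templates = {"food": 0.9, "voice": 0.4}     # current drives -> priority (invented)
inputs = [
    {"kind": "food", "novelty": 0.0},
    {"kind": "voice", "novelty": 0.2},
    {"kind": "wall", "novelty": 0.0},
]
chosen = select_input(inputs, templates)    # "food" wins: 0.9 vs 0.6 vs 0.0
```

The chosen input would then be sent on to whichever modules are compatible with it, per the last blocks of the pseudocode.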

The reason I use only one if/else block is that the first four(!) times I tried to write this pseudocode, there were just too many. So I tried writing it without any if/else blocks to see if I could make a basic outline of the idea (and I did... I think).

However, I am still looking for input on this pseudocode, and I wouldn't mind more if/else blocks (as long as they don't make a giant if/else pyramid... yikes!).

So if you guys have any comments, ideas, questions and constructive criticism, I'd love to hear them!

But please remember, the AGI attention PDF said that this attention mechanism is supposed to run quickly and efficiently with limited computational resources (which I assume is why it's so short and simple).

But apart from that, fire away!  :)
Title: Re: Creature AGI (psuedo)coding thread
Post by: ranch vermin on January 17, 2015, 05:15:51 am
So whats actually going into the 'actual' project,  this is a little too dreamy to actually implement, no?
Title: Re: Creature AGI (psuedo)coding thread
Post by: ivan.moony on January 17, 2015, 05:34:29 am
Greatest projects always start with dreams. To cite David Bowie: ...and when she's dreaming I believe...
Title: Re: Creature AGI (psuedo)coding thread
Post by: Don Patrick on January 17, 2015, 08:26:19 am
That looks like a sound outline. It also looks like a lot of work on a lot of subsystems, but seeing as I'm doing that myself I can't argue. One tip: I'd recommend doing as much as possible without incorporating human language, or to use keyword commands or existing language processing systems, because language processing can be insanely time-consuming to work on.
Title: Re: Creature AGI (psuedo)coding thread
Post by: Neogirl101 on January 17, 2015, 10:07:01 am
@ranch: Yeah, I know what you mean... But then again, people living 200 years ago would've scoffed at the very idea of digital computers and smartphones. Just something to think about.  :)

I'm trying to make this AGI start out simple (even though I'm aware that it doesn't seem that way), and I plan to add improvements sometime after the initial code exists and has been tested. I'll start with some very simple code that is just enough to get the job done.

The craziest dreams, as ivan.moony suggests, can become realities in only decades or, more rarely, even years. But I do know that most of the time, these things don't happen right away.

@Don: Don't worry, I plan to program this AGI in Python 3, and I found a Python natural language processor to implement in it. I think it will work well.  :)
Title: Re: Creature AGI (psuedo)coding thread
Post by: Korrelan on January 17, 2015, 11:02:28 am
Good work so far.  I’m a connectionist but I’ll follow your work and help if I can.  There’s a lot of good stuff in the archives of this site regarding AGI theory and chat bots, lots of reading.  Keep up the good work.
Title: Re: Creature AGI (psuedo)coding thread
Post by: Neogirl101 on January 17, 2015, 02:49:04 pm
@korrelan: Thank you so much for your kind compliments!  ;D I appreciate any and all help I can get. :)

I found a disconnect between the AGI architecture and the attention pseudocode... and I improved them both slightly.

I added a prediction module to the reasoning component in the AGI architecture, and added a few things to the attention pseudocode as well.

Code
{

{

# Activate

}

{

# Load all variables

}

{

# Determine which types of input (auditory, visual, etc.) should be sent to which modules while making sure
# compatibility between them is present

}

{

# Begin program loop

}

{

# Begin save loop

# Save every 3 seconds

}

{

# Accept up to 9 external and internal inputs

}

{

{

# Determine the creature's desired or expected drive(s), goal(s), sub-goal(s) and prediction(s)

}

{

# Access the AGI's memory component

}

{

# Search for similar past drive(s)/goal(s)/sub-goal(s)/prediction(s)/internal and external events

}

{

# If no similar problem(s) are found in the creature's "memories":

  # Skip this if/else block

# Else:

  # Create attentional templates based on any similar problems found

  # Establish the action(s) and word(s) that helped resolve the problem as automatically having a higher
  # positive bias

  # Skip the next block straight to the block regarding checking inputs against the creature's desire(s)
  # and/or expectations

}

{

# Access the creature's motivation and reasoning components and search for the most recent drive(s)/goal(s)/
# sub-goal(s)/prediction(s)

}

{

# Create attentional templates based on all information received that is relevant to the creature's current
# and most prioritized drive(s)/goal(s)/sub-goal(s)/prediction(s)

}

{

# Check all current inputs against the creature's current and most prioritized desire(s) and/or expectation(s)


}

{

# Give each input that is related to a current attentional template a positive bias increase relative to the
# priority of the creature's current desire(s)/expectation(s)

}

{

# Access the information retrieved from the memory component

}

{

# Quickly scan the retrieved memory information against all of the inputs currently being
# processed

}

{

# Compare novel and/or unexpected input(s) to the creature's past internal or external experience or to
# the current context

}

{

# Give a saliency bias to all input(s) that have novelty and/or unexpectedness, in proportion to how
# different they are from the norm (while gradually reducing the saliency bias of novel and unexpected
# inputs the more frequently they occur)

}

{

# Ignore input(s) that are familiar, possibly expected and not relevant to any of the current attentional
# template(s)

}

{

# Determine which input has the highest bias (the external and internal inputs are determined in an equal
# manner)

}

{

# Select the input with the highest bias (whether internal or external)

}

{

# Send input to compatible module(s)

}

{

# Reset save loop

}

{

# Reset program loop

}

{

# If the creature is deactivated:

  # End save loop

  # End program loop

  # Deactivate

# Else:

  # Allow the program to keep running

}

}

Once again, I appreciate any comments or constructive criticism in making this AGI faster, more efficient, and even better written. Please don't be afraid to point out anything you see that can be improved in either the AGI outline (just imagine that prediction is in the reasoning component) or in the attention pseudocode.

Thank you all for your insights and creative inputs!  ;D
Title: Re: Creature AGI (psuedo)coding thread
Post by: toborman on January 19, 2015, 03:43:38 pm
I’m impressed. Your design is comprehensive. I’m wondering what sources you used to create your list of mental functions.

The inclusion of abductive reasoning is important to generating hypothesis candidates in scientific inquiry. Good work. You may want to add analogy and case-based reasoning, although I view these as derivatives of the main reasoning methods.

In symbolic reasoning did you mean to say casual or causal?

Your extension of the semantic, procedural, and episodic memory types is insightful.

As to language processing, I have split this into three parts: understanding (normalization and disambiguation), generation, and learning. Learning includes tutoring and pattern recognition (inference) methods of acquisition.

I’m expecting good things from your project.
Title: Re: Creature AGI (psuedo)coding thread
Post by: Neogirl101 on March 21, 2015, 06:28:52 am
First off, I am aware that starting new threads is greatly encouraged here, but I wanted to use this existing thread so as to not clutter up the forum. I'm sorry if I'm breaking any rules.  :-\

Toborman, thank you so much for your kind words. But in reality, I read a lot of papers on the matter, so most of it is actually other people's work. No need to reinvent the wheel, right?  :)

I'm quite glad (elated, actually) that you find it so comprehensive. It's interesting just how well all of those papers and the work contained in them fit together.

About the casual/causal thing, I meant casual. I believe as in "casual day" at work? If you were talking "causal" as in the cause of something, then I would have to say that would be a wonderful thing to add!

Also, I will take all of your suggestions into account. Thank you again.

And now for a brief update:

I have completely reworked the attention module, matching it very closely to the paper I got it from.

Here is the paper I used:

http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_16.pdf


And here is the pseudocode I'm working on:

Code
{

{

# Activate

}

{

# Load variables

}

{

# Begin program loop

}

{

# Look for high drives, goals, and/or predictions

# If there are none:

   # Send uncertainty increase to the drives in the motivation module

   # Repeat main loop

# Else:

   # Accept drive(s), goal(s), and/or prediction(s)

}

{

# Determine the importance of each of the drive(s)/goal(s)/prediction(s)

}

{

# Determine all components of the accepted drive(s)/goal(s)/prediction(s)

}

{

# Make attentional templates out of the components and importances of each of the
# separate drive(s)/goal(s)/prediction(s)

}

{

# Begin accepting input from both sensors and cognitive processes

}

{

# Match input(s) to the AGI's compressed memories and determine the input's
# relevance to any of the attentional templates

# If any data is relevant:

   # Assign the data a saliency bias based upon the current attentional template's
   # importance

# Else if any novel/unexpected/emotional data has been encountered before:

   # Give a slightly reduced saliency bias to the data, relative to its novelty,
   # unexpectedness, and emotionality

# Else if any data is novel, uncannily familiar, unexpected and/or emotional:

   # Give a saliency bias to the data relative to its novelty, unexpectedness, and
   # emotionality

# Else:

   # Do nothing

}

{

# Select the one piece of data with the highest saliency bias

}

{

# Determine time, resource constraints, tasks, and context conditions

}

{

# Use these conditions to affect which process(es) are selected

}

{

# Determine the most appropriate process(es) to process the data with (always
# including working memory)

}

{

# Select that/those process(es)

}

{

# Give the selected process(es) a positive bias for processing the information type that
# is to be processed

}

{

# Copy as much data as necessary

# Send data to the process(es)

}

{

# If the creature is deactivated:

   # Shut down

# Else:

   #  Repeat main loop

}

}
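The habituation step ("give a slightly reduced saliency bias... the more frequently they happen") could be sketched like this; the base bias and the halving decay rate are arbitrary choices, not from the paper:

```python
# Sketch of novelty habituation: a once-novel input loses saliency each
# time it recurs. Decay rate and base bias are invented for illustration.

seen_counts = {}

def novelty_bias(kind, base=1.0, decay=0.5):
    """Each repeat multiplies the saliency bias of a once-novel input by `decay`."""
    n = seen_counts.get(kind, 0)
    seen_counts[kind] = n + 1
    return base * (decay ** n)

first = novelty_bias("loud_bang")    # fully novel
second = novelty_bias("loud_bang")   # becoming familiar
third = novelty_bias("loud_bang")    # habituated
```

A real version would presumably also decay the counts over time so that a long-unseen input can become novel again.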

If there's anything you want to suggest or point out, please do. I am always open to suggestions. Thank you.  :)
Title: Re: Creature AGI (psuedo)coding thread
Post by: Freddy on March 21, 2015, 10:48:30 am
Quote
First off, I am aware that starting new threads is greatly encouraged here, but I wanted to use this existing thread so as to not clutter up the forum. I'm sorry if I'm breaking any rules.

Don't worry we're not very strict. It makes most sense to me too that you continued in this thread :)
Title: Re: Creature AGI (psuedo)coding thread
Post by: Neogirl101 on May 06, 2015, 10:59:20 pm
Hey everyone, it's me again.  :)

So I'm in the process of changing my AGI to a new design that is simpler and more elegant (since things that are too complex, like my last AGI, tend not to work).

I found an updated version of Dehaene and Changeux's model of consciousness, which I plan to implement in my new AGI. This is the first PDF attachment.

The second PDF is the Izhikevich neuron model that the first PDF uses.

So here's my problem. I'm trying to recreate the Izhikevich neuron in a very simple, elegant yet functional way. The thing is, I have no clue how to do that or even how to make a neural net from the neuron I'm trying to recreate.

I'm re-introducing myself to Python 3, my language of choice, but unfortunately knowledge of programming languages doesn't stick in my head easily. I've found a Python version of the Izhikevich neuron I want, but it needs to import externals (like numpy), and I really don't want to rely on any additions outside of Python. I also found it too complex.

Being an amateur Python programmer, I don't want the recreation I'm making of the Izhikevich neuron to be too complicated, since I would like others (and myself) to easily understand and modify it.

Thank you to anyone and everyone who can help me implement this neuron (and a neural net based on multiple neurons). I am very eager and willing to learn what I can.

If you're confused, I understand. Just ask about it, and I'll answer you as best I can.

Thanks again.  :)
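For what it's worth, the Izhikevich model itself is only two update equations and a reset rule, so a pure-Python version (no numpy) can be quite small. This is a minimal sketch using the regular-spiking parameters from Izhikevich's 2003 paper and a plain Euler step; treat it as a starting point, not the exact implementation from either PDF:

```python
# Minimal single Izhikevich neuron, no numpy. Parameters a, b, c, d are the
# "regular spiking" values from Izhikevich (2003); integration is plain Euler.

def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, steps=1000, dt=0.5):
    v, u = c, b * c            # membrane potential (mV) and recovery variable
    spikes = []
    for t in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike: record it, then reset v and bump u
            spikes.append(t)
            v, u = c, u + d
    return spikes

spikes = izhikevich()          # with constant input I=10 this fires repeatedly
```

A network would then just be a list of these neurons where each one's `I` includes weighted contributions from the others' recent spikes. Note that Euler with a large `dt` can be inaccurate near the spike; the reset rule keeps it from blowing up, but a smaller `dt` gives cleaner traces.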
Title: Re: Creature AGI (psuedo)coding thread
Post by: Korrelan on May 07, 2015, 11:57:27 am
I can’t help you with the Python programming, but I will say that I think you're taking the correct approach to AGI.  O0
Title: Re: Creature AGI (psuedo)coding thread
Post by: ivan.moony on May 07, 2015, 01:30:56 pm
I finally forced myself to learn something about neural networks. I partially read the first PDF you posted, but found it rather hard to understand, so I googled a bit and found: http://www.ai-junkie.com/ann/evolved/nnt1.html. It has only 8 pages. By page 4 my mind was a little bit lined up, but blurry, so I read the remaining 4 pages, returned to page 4 and thought to myself for some 45 minutes. I can say I have started to understand the basics, so I recommend you read those 8 pages (the first 4 are the important ones) and then shoot some questions at me. I might be able to answer them.

P.S.
But you will not be impressed with NNs once you figure them out. It is just a statistical method for recognizing patterns.
Title: Re: Creature AGI (psuedo)coding thread
Post by: Neogirl101 on May 07, 2015, 04:34:04 pm
@ Korrelan: Thank you so much! That means so much to me. :)

@ ivan: Thank you for the web page! I'll read it, but with what you said about neural networks at the end of your post, I might have to look into achieving the consciousness model a different way. But that will actually be so much easier, given that I have more room for my own ideas. I believe that in programming, the same thing (well, most things) can be accomplished in multiple ways. Thank you so much for your reply.  :)
Title: Re: Creature AGI (psuedo)coding thread
Post by: ivan.moony on May 07, 2015, 05:05:11 pm
The way we behave also conforms to some patterns, so you might consider putting some *weight* on NNs (although I'll try to manage my AGI without much NN; I have other ways in mind that don't require NNs).
Title: Re: Creature AGI (psuedo)coding thread
Post by: Neogirl101 on May 09, 2015, 11:57:48 am
I've finally read through the NN tutorial that Ivan posted, and it actually answered a lot of my questions, but not all of them. I found another site introducing neural networks, though, and I'm reading that one too.

In short, thank you, Ivan, for giving me the webpage. It really did help a lot.  :)
Title: Re: Creature AGI (psuedo)coding thread
Post by: Neogirl101 on May 10, 2015, 05:21:02 pm
Hey everyone,

I just redid my AGI... I think it's much simpler and more elegant, yet it has the perfect degree of complexity.

I tried to design it to have at least some form of consciousness (hence the global neuronal workspace), and hopefully at least some things not mentioned in the image will become emergent properties (like introspection and self-awareness, maybe?).

This time, I did the AGI map as an image instead of a long ramble. What do you guys think?

Thank you for any and all input you give.
Title: Re: Creature AGI (psuedo)coding thread
Post by: ranch vermin on May 11, 2015, 05:49:24 am
I look at it this way (and it's quite computational) -> virtualize sensors -> virtual statistical store of environment -> motor search engine -> actualize motor

I've got big issues now that I'm coming down from the dream, and really worrying about this thing actually being possible to run at even just 30hz.

I like your design, looks cool.  But when you said you don't want to "reinvent the wheel", I'd rather call it "codiscovering the wheel" or even maybe "codiscovering calculus", and I think if you're missing something from some patchy lectures, you have to fill the missing pieces in yourself, and they are the all-important codiscoveries.

One thing I have to tell you is we don't have enough computation power to run an a.i. in realtime these days (unless you want to prove me wrong, which may happen) with any decent enough state capacity.  But you can go with us (you look like you mean business) and co-discover the theory that Markov worked out back in the 1800's, with statistical chains, and it works in theory, as far as I can see now; just getting the thing into a working state is the tricky thing.

So I hope you like offline processing.

I can tell you, markov chains are good theory that will be true to your end result; back propagation and hopfield nets are also both good, and could be used as a finished working implementation.  But getting it to work on threaded cpus or gpus is still quite difficult, even just to get a basic system running, even once your implementation is quite unblurred.

My nets are huge and useless; when I finish this thing, I've got more cells than a human has in his head.  It was a total bitch to run it realtime, but it's nowhere near as good as even just an animal, though it has its robo-perks, like it can talk; it's a slightly different kettle of fish.

My big thing now is: "make 2 things the same thing, and you generate an option."

So if I swap all my hellos for hi's, and hi's for hello's, I get to include the later material in 2 different places... in its hypothetical playback.
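A first-order Markov chain of the kind ranch mentions is just a table of observed transitions plus random sampling. Here's a toy word-level sketch in Python (the training sentence is invented, and this is nowhere near his video system, just the bare chain idea):

```python
import random

# Toy first-order Markov chain over words: count which word follows which,
# then walk the table by random choice to generate a new sequence.

def build_chain(words):
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)   # record observed transition a -> b
    return chain

def generate(chain, start, length, rng):
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:                  # dead end: no observed follower
            break
        out.append(rng.choice(successors))
    return out

words = "the cat sat on the mat the cat ran".split()
chain = build_chain(words)
sample = generate(chain, "the", 8, random.Random(0))
```

The "fork points" ranch describes are exactly the states with more than one recorded successor ("the" can go to "cat" or "mat" here); a video version would use frame signatures as states instead of words.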
Title: Re: Creature AGI (psuedo)coding thread
Post by: ranch vermin on May 11, 2015, 06:36:58 am
All my crazy thinking just headed me off in one direction -> 'make two things the same, generate an option'

Just say you wanted to make a pick-a-path book out of some old funny video: you automatically generate the cross points, and the insensitivity to the frame differences between your swap points gives you more forks.

and that's actually the birth of the whole thing. You can use a really good memory to do it, but it can be done with a not-as-good job too. That's the basic idea, and it's markov chain video, at its simplest starting point!

actually making the robot a better markov chain video system is behaviour segmentation of the "forky playback", and that's it...  and something that I haven't even got up to yet is more integrated playback, and taking things in even smaller pieces.

Then your motor is a search engine, using these generated forks, itself being a part of the action.  So just say we used the most basic system to do it, the biggest problem being the robot only having a crc check or something of the photos to decide what it wants to do.  So give him a yellow filter and a blue filter.  Then, of all these forking points you develop from random behaviour that doesn't kill it, develop a web of all the video that went into it, and then it will be able to remember where the yellow and blue is.

One challenge of it is being able to see the full pathway, and I haven't solved that yet, because I'm still just writing this super-instancable spatial memory, and I wonder if I'm doing it the wrong way!  But I'm working on a really integratable sim of the environment.  (oh, you think that's easy as piss?)

Then if you keep all the photos of the robot, you can play him like a pick-a-path book at the end.
Title: Re: Creature AGI (psuedo)coding thread
Post by: Neogirl101 on May 11, 2015, 08:24:00 am
Hi ranch,

Thank you for your feedback. I'm afraid that I am a major novice in AI (and especially robotics,) and I would like if you could please explain the basics of virtual sensors and virtual statistical store to me. Thank you for your feedback again, and I will definitely look into Markov chains!  :)

P.S. Where do you suggest I use Markov chains?
Title: Re: Creature AGI (psuedo)coding thread
Post by: ranch vermin on May 11, 2015, 09:31:26 am
I'd like to tell you, but you can work it out yourself, because you're an intelligent girl; your diagram was excellent.
Title: Re: Creature AGI (psuedo)coding thread
Post by: Neogirl101 on May 11, 2015, 10:44:42 am
Thank you so much for your kind comment, ranch! :-) I understand a little more of what you've said now, too.

But I suppose that if there's ever a concept I don't understand (at least not fully), I can always jury-rig it to be simple, elegant, and effective to the best of my knowledge, though I know I'll eventually know more. Co-designing!
Title: Re: Creature AGI (psuedo)coding thread
Post by: infurl on July 23, 2015, 02:09:49 am
If you are comparatively new to robotics and artificial intelligence you might find it worthwhile to think about what is called a subsumption architecture. It is a practical and proven method of operating a robot or artificial intelligence in real time. As you become familiar with its strengths and limitations you may well find ways to improve on it, but it's undoubtedly a good place to start.

https://en.wikipedia.org/wiki/Subsumption_architecture
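The core of subsumption, higher-priority behaviors suppressing lower ones, fits in a short Python sketch. The behaviors and sensor keys below are invented for illustration; Brooks' original layers were wired as augmented finite state machines, so this is only the priority-arbitration idea:

```python
# Toy subsumption controller: each layer returns an action or None; the
# highest-priority layer that fires suppresses ("subsumes") all layers below.

def avoid_obstacle(sensors):
    return "turn_left" if sensors.get("obstacle_ahead") else None

def seek_food(sensors):
    return "move_to_food" if sensors.get("food_seen") else None

def wander(sensors):
    return "move_forward"        # lowest layer always produces a default action

LAYERS = [avoid_obstacle, seek_food, wander]   # highest priority first

def act(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

a1 = act({"obstacle_ahead": True, "food_seen": True})  # safety overrides food
a2 = act({"food_seen": True})
a3 = act({})
```

The appeal for a first project is that each layer is independently testable and the creature always does something sensible even when only the bottom layer exists.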
Title: Re: Creature AGI (psuedo)coding thread
Post by: Neogirl101 on July 23, 2015, 04:57:46 am
Thank you, infurl, for the link! I'm looking into it right now.  :)
Title: Re: Creature AGI (psuedo)coding thread
Post by: keghn on July 23, 2015, 02:52:10 pm
 AGI Brain. A cascading RNN.


Take a look at this video of a car racing around a race track:

https://www.youtube.com/watch?v=j_maFaAjcPY

It could be a pattern loop of life in an AGI brain. It has a beginning and an end and then
comes back to the start.

The car completed the loop around the track within 10,000 image frames.
Each one of these images is recorded or trained into a Neural Network Chip (NNC).
These chips are lined up like dominoes in a loop, just like the race track, and tied together
by wires, which are addressing and data buses.
It records by having a program pointer point to the first chip and clock in data; then the
program pointer is clocked to the next NNC, and then the next, etc.

At a later time this can be replayed, forward or backwards, at any speed.
NNCs can be hardware or software chips, and of any size.
The NNCs are trained as autoencoders and classifiers. They can later merge
into denser trained NNCs.

This race track, or pattern loop, sits in the middle of the AGI brain, surrounded by millions
and millions of other NNCs. All of these free NNCs are waiting to get into the loop, replace
one, or be added into the race pattern loop.


The NNCs can output onto an output bus.

The way it learns is by letting the weight states in all of the unused NN chips jump
around randomly, by action of a program, or by outside electromagnetic noise, or by a
little bit of ionizing radiation, or by outside electrostatic discharge:

http://www.eurekalert.org/pub_releases/2015-07/ru-ndb071615.php

When an image shows up on the bus, from a video camera, the NN chip that is in the
best state at that moment is selected, and its weight matrix is locked into place. If this
capture is better than the existing one, the old one is swapped out. Also, copies of learned
NN chips are copied into unused NN chips, and their weight matrices are vibrated very
slightly, randomly.

NN logic will form between NNCs to predict where sub-classified features, objects and
other stuff will show up.

Pattern loops, or engrams, can get very complex, with parallel loops, sub-loops and so on.