My AGI Architecture
Design principles:
1. Three mechanisms that apparently help create human behavior:
A. Powerful perceptual and "pre-cognitive" processing
B. Memory, along with processes that contextualize and generalize experiences for applicable future use
C. Satisficing ("good enough") reasoning
2. Two more important aspects of human cognition:
A. Progressive filtering and structuring of perception ("cognitive pyramid")
a. The cognitive pyramid resolves the trade-off between quantity and sophistication by collapsing and integrating large volumes of perceptual information into more abstract aggregates that can then easily be stored, retrieved, manipulated, and composed.
b. Once the information is filtered and aggregated, the various pieces of information compete for the AGI's attention (functional consciousness).
c. When the volume of incoming data is high, the architecture locally aggregates and filters it, so that at the architecture's progressively higher levels the reduced data volume lightens the load on more computationally expensive processing (such as symbolic reasoning).
B. Tight integration of functionalities and modalities
a. Tight integration is fine-grained, full-spectrum interaction between the cognitive levels.
b. Tight integration allows better leveraging of the inherent power of the individual components. With loose integration, information (including information about the situation) becomes "trapped" inside subsystems and cannot be communicated effectively.
c. All cognitive components share a common memory, an integrated control process, and a shared language for communicating with individual components.
d. Learning and information flow both forward and backward between the layers. For example, Proto-cognition's filtering and clustering are modulated by signals from Micro-cognition indicating the goodness-of-fit and applicability of high-salience clusters to ongoing problem solving. This allows the architecture to adapt and optimize itself to the nature of its processing and the structure of its environment.
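The pyramid-plus-feedback idea above can be sketched in a few lines of Python. Everything here (the `kind`/`value` percept fields, the 0.5 salience cutoff, the 0.9/0.1 update rule) is my own illustrative assumption, not part of the architecture itself:

```python
from collections import defaultdict

def aggregate(percepts):
    """Collapse raw percepts into compact abstract aggregates (here: group
    by kind). Stands in for Proto-cognition's filtering and clustering."""
    clusters = defaultdict(list)
    for p in percepts:
        clusters[p["kind"]].append(p["value"])
    # Each cluster becomes one small summary the higher levels can store,
    # retrieve, and manipulate cheaply.
    return {kind: {"mean": sum(vals) / len(vals), "count": len(vals)}
            for kind, vals in clusters.items()}

class Pyramid:
    """Two levels with feedback: the higher level reports goodness-of-fit,
    which modulates how strongly each cluster passes the filter next cycle."""
    def __init__(self):
        self.weights = defaultdict(lambda: 1.0)  # per-cluster salience weight

    def upward(self, percepts):
        clusters = aggregate(percepts)
        # Only sufficiently weighted clusters survive to the next level up.
        return {k: v for k, v in clusters.items() if self.weights[k] >= 0.5}

    def feedback(self, fit_scores):
        # Backward signal: clusters that helped problem solving gain weight,
        # unhelpful ones gradually fade (an exponential moving average).
        for kind, fit in fit_scores.items():
            self.weights[kind] = 0.9 * self.weights[kind] + 0.1 * fit
```

The point of the sketch is only the shape of the flow: a cheap many-to-few aggregation going up, and a scalar usefulness signal coming back down.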
-----------------------------------------------------------
The three layers of the AGI (and their components*):
A. Pre-cognitive processing (proto-cognition)
Perception, attention, aggregation
B. Micro-cognition
Motivation, actuation, memory, communication, learning, quantitative functions, map formation, goal and sub-goal creation, databases
C. Macro-cognition
(Symbolic) reasoning (such as abduction, deduction, induction, etc.), social interaction, planning, emotion, modeling self/others, building/creation, knowledge (w/ cognitive synergy), meta-cognition, other
*See competency areas below.
-----------------------------------------------------------
Competency Areas*
Perception
Proprioception, introspection, speech recognition, pattern recognition, musical processing, vision, smell, touch, auditory, prediction, natural language processing, salience, face/emotion/image etc. recognition
---------------------
Memory
Working, episodic, implicit, semantic, procedural, short-term, long-term, past-present event comparison, forgetting
---------------------
Attention
Visual, auditory, social, behavioral, internal, dynamic allocation, competitive selection, reflexive response
---------------------
Social interaction
Communication, appropriateness, social inference, cooperation, competition, relationships
---------------------
Planning
Tactical, strategic, physical, social
---------------------
Motivation
Drives (incl. social), appetence, aversion, goal and sub-goal planning and setting, affect-based, altruism
---------------------
Actuation
Physical skills, tool use, navigation, proprioceptive, face tracker/gaze director, animation database/generator, action selection, animation control, text-to-speech lip sync, conflict manager/scheduler
---------------------
(Symbolic) reasoning
Induction, deduction, abduction, physical, causal, associational, past-present event comparison
---------------------
Communication
Gestural, verbal, musical, pictorial, diagrammatical, language acquisition, natural language processing/understanding
---------------------
Learning
Experimentation, imitation, reinforcement, media-oriented, non-associative, cognitive, observational, non-monotonic learning, dialogical/through dialogue
---------------------
Emotion
Perceived, expressed, control, understanding, sympathy, empathy, mood, reactive
---------------------
Modeling self/other
Other-awareness, relationships, self-control, Theory of Mind, sympathy, empathy, self-awareness, self-reflection, metacognition
---------------------
Building/creation
Physical construction with objects, formation of novel concepts, verbal invention, social organization, internal simulation ("imagining"), concept creation
---------------------
Quantitative
Counting observed entities, comparison of quantitative properties of observed entities, measurement using simple tools
---------------------
Knowledge
Declarative, procedural, attentional, sensory, episodic, intentional, domain, meta-knowledge
---------------------
Databases
Query manager (incl. queries about processes and knowledge within the AGI's self), agent models, faith engine, ontologies, situation model, context awareness, failsafes/protective programming
---------------------
Other
Beliefs, humor, conscious control of cognition, personality, likes/loves, dislikes/hates
-----------------------------------------------------------
The AGI's consciousness stems from three factors:
1. Different inputs competing for the creature's attention (a functional view of consciousness).
2. The AGI can think about and process (and thus be aware of) various parts of itself (e.g., its beliefs and emotions) and have a sense of selfhood. It can also relate other things to itself (e.g., "I just spilled punch all over the floor; I must be really clumsy...").
3. The AGI can be made aware of all or, at least in the beginning, some aspects of other people (e.g., personality, likes, dislikes, beliefs, etc.). This makes the AGI more than only self-conscious.
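Factor 1's competition, nudged by factor 2's self-relevance, can be illustrated with a toy selection function. The labels and weights below are purely hypothetical:

```python
def conscious_content(inputs, self_relevance=None):
    """Toy model of functional consciousness: inputs compete for attention,
    and the highest-biased one becomes the current conscious content.
    `self_relevance` optionally adds a bonus for self-related items
    (beliefs, emotions), echoing the sense-of-selfhood factor."""
    self_relevance = self_relevance or {}
    scored = {label: bias + self_relevance.get(label, 0.0)
              for label, bias in inputs.items()}
    # Winner-take-all: whatever scores highest "enters consciousness".
    return max(scored, key=scored.get)
```

For example, a loud external event normally wins, but a strongly self-relevant internal state (hunger, a belief under threat) can outcompete it.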
---------------------------------
Other notes:
Ontologies (as well as roles) in "Weaving a Fabric of Socially Aware Agents" can be stored in the databases module within the Micro-cognition level.
Modules within components (e.g., the visual module within the attention component) consist of even smaller codelets that process information only when they receive relevant or matching information. This saves time and computational expense.
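A minimal sketch of that codelet idea in Python (the trigger/action split and the dict-shaped info items are my own framing, not from the architecture):

```python
class Codelet:
    """A tiny worker that runs only when incoming information matches its
    trigger, so unrelated data costs essentially no computation."""
    def __init__(self, trigger, action):
        self.trigger = trigger  # predicate over an info item
        self.action = action    # what to do when the trigger matches

    def receive(self, info):
        if self.trigger(info):
            return self.action(info)
        return None  # no match: the codelet stays dormant

def dispatch(codelets, info):
    """Offer one piece of information to every codelet in a module and
    collect results from the ones that fired."""
    results = [c.receive(info) for c in codelets]
    return [r for r in results if r is not None]
```

Only the matching codelet does any work; the rest return immediately, which is where the claimed savings come from.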
{
{
# Activate
}
{
# Load all variables
}
{
# Determine which types of input (auditory, visual, etc.) should be sent to which modules, ensuring
# compatibility between them
}
{
# Begin program loop
}
{
# Begin save loop
# Save every 3 seconds
}
{
# Accept up to 9 external and internal inputs
}
{
{
# Determine the creature's desired or expected drive(s), state(s), goal(s), sub-goal(s) and prediction(s)
}
{
# Send the creature's current drive(s)/state(s)/goal(s)/sub-goal(s)/prediction(s) and a request to search for
# similar needs/problems to the reasoning component
}
{
# Wait for results to return from the memory component
}
{
# Accept the result(s)
}
{
# If no similar problem(s) are found in the creature's "memories":
# Skip this if/else block
# Else:
# Create attentional templates based on any similar problems found
# Establish the action(s) and word(s) that helped resolve the problem as automatically having a higher
# positive bias
# Skip the next block straight to the block regarding checking inputs against the creature's desire(s)
# and/or expectations
}
{
# Create attentional templates based on all information received that is relevant to the creature's current
# and most prioritized drive(s)/state(s)/goal(s)/sub-goal(s)/prediction(s)
}
{
# Check all current inputs against the creature's current and most prioritized desire(s) and/or expectation(s)
}
{
# Give each input that is related to a current attentional template a positive bias increase relative to the
# priority of the creature's current desire(s)/expectation(s)
}
{
# Quickly scan the contents of the creature's memory component against all of the inputs currently being
# processed
}
{
# Compare novel and/or unexpected input(s) to the creature's past experience or the current context
}
{
# Give a saliency bias to all input(s) that are novel and/or unexpected, proportional to how
# different they are from the norm (while gradually reducing the saliency bias of the novel and
# unexpected inputs the more frequently they occur)
}
{
# Ignore input(s) that are familiar, possibly expected and not relevant to any of the current attentional
# template(s)
}
{
# Determine which input has the highest bias (external and internal inputs are evaluated in the same
# manner)
}
{
# Select the input with the highest bias (whether internal or external)
}
{
# Send input to compatible module(s)
}
{
# Reset save loop
}
{
# Reset program loop
}
{
# If the creature is deactivated:
# End save loop
# End program loop
# Deactivate
# Else:
# Allow the program to keep running
}
}
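The loop above might look roughly like this in Python. The additive bias formula and the 1/(1+n) habituation rule are simplifying assumptions of mine, not a specification:

```python
def attention_cycle(inputs, templates, seen_counts):
    """One simplified pass of the attention loop sketched above.
    inputs: list of (label, features); templates: {feature: priority}
    (the attentional templates); seen_counts: {label: times seen so far},
    used to gradually habituate the novelty bias."""
    best_label, best_bias = None, float("-inf")
    for label, features in inputs:
        # Inputs matching a current attentional template get a positive
        # bias relative to that template's priority.
        bias = sum(templates.get(f, 0.0) for f in features)
        # Novel/unexpected inputs get a saliency bias that shrinks the
        # more frequently they have been encountered.
        bias += 1.0 / (1 + seen_counts.get(label, 0))
        seen_counts[label] = seen_counts.get(label, 0) + 1
        if bias > best_bias:
            best_label, best_bias = label, bias
    return best_label  # the winner would then be routed to compatible modules
```

External and internal inputs go through the same scoring, matching the note that both are determined in an equal manner.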
{
{
# Activate
}
{
# Load all variables
}
{
# Determine which types of input (auditory, visual, etc.) should be sent to which modules, ensuring
# compatibility between them
}
{
# Begin program loop
}
{
# Begin save loop
# Save every 3 seconds
}
{
# Accept up to 9 external and internal inputs
}
{
{
# Determine the creature's desired or expected drive(s), goal(s), sub-goal(s) and prediction(s)
}
{
# Access the AGI's memory component
}
{
# Search for similar past drive(s)/goal(s)/sub-goal(s)/prediction(s)/internal and external events
}
{
# If no similar problem(s) are found in the creature's "memories":
# Skip this if/else block
# Else:
# Create attentional templates based on any similar problems found
# Establish the action(s) and word(s) that helped resolve the problem as automatically having a higher
# positive bias
# Skip the next block straight to the block regarding checking inputs against the creature's desire(s)
# and/or expectations
}
{
# Access the creature's motivation and reasoning components and search for the most recent drive(s)/goal(s)/
# sub-goal(s)/prediction(s)
}
{
# Create attentional templates based on all information received that is relevant to the creature's current
# and most prioritized drive(s)/goal(s)/sub-goal(s)/prediction(s)
}
{
# Check all current inputs against the creature's current and most prioritized desire(s) and/or expectation(s)
}
{
# Give each input that is related to a current attentional template a positive bias increase relative to the
# priority of the creature's current desire(s)/expectation(s)
}
{
# Access the information retrieved from the memory component
}
{
# Quickly scan the contents of the creature's memory information against all of the inputs currently being
# processed
}
{
# Compare novel and/or unexpected input(s) to the creature's past internal or external experience or to
# the current context
}
{
# Give a saliency bias to all input(s) that are novel and/or unexpected, proportional to how
# different they are from the norm (while gradually reducing the saliency bias of the novel and
# unexpected inputs the more frequently they occur)
}
{
# Ignore input(s) that are familiar, possibly expected and not relevant to any of the current attentional
# template(s)
}
{
# Determine which input has the highest bias (external and internal inputs are evaluated in the same
# manner)
}
{
# Select the input with the highest bias (whether internal or external)
}
{
# Send input to compatible module(s)
}
{
# Reset save loop
}
{
# Reset program loop
}
{
# If the creature is deactivated:
# End save loop
# End program loop
# Deactivate
# Else:
# Allow the program to keep running
}
}
{
{
# Activate
}
{
# Load variables
}
{
# Begin program loop
}
{
# Look for high drives, goals, and/or predictions
# If there are none:
# Send uncertainty increase to the drives in the motivation module
# Repeat main loop
# Else:
# Accept drive(s), goal(s), and/or prediction(s)
}
{
# Determine the importance of each of the drive(s)/goal(s)/prediction(s)
}
{
# Determine all components of the accepted drive(s)/goal(s)/prediction(s)
}
{
# Make attentional templates out of the components and importances of each of the
# separate drive(s)/goal(s)/prediction(s)
}
{
# Begin accepting input from both sensors and cognitive processes
}
{
# Match input(s) to the AGI's compressed memories and determine the input's
# relevance to any of the attentional templates
# If any data is relevant:
# Assign the data a saliency bias based upon the current attentional template's
# importance
# Else if any data is novel, uncannily familiar, unexpected and/or emotional:
# Give a saliency bias to the data relative to the novelty, unexpectedness, and
# emotionality of the data
# Else if any data that is novel/unexpected/emotional has been encountered before:
# Give a slightly reduced saliency bias to the data relative to the novelty,
# unexpectedness, and emotionality of the data
# Else:
# Do nothing
}
{
# Select the one piece of data with the highest saliency bias
}
{
# Determine time, resource constraints, tasks, and context conditions
}
{
# Use these conditions to affect which process(es) are selected
}
{
# Determine the most appropriate process(es) to process the data with (always
# including working memory)
}
{
# Select that/those process(es)
}
{
# Give the selected process(es) a positive bias for processing the information type that
# is to be processed
}
{
# Copy as much data as necessary
# Send data to the process(es)
}
{
# If the creature is deactivated:
# Shut down
# Else:
# Repeat main loop
}
}
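The constraint-driven process selection in this third loop could be approximated as follows; the per-process data-type and cost fields are assumptions for illustration only:

```python
def select_processes(data_type, time_budget, processes):
    """Choose which cognitive processes handle a piece of data under
    time/resource constraints, per the loop above. `processes` maps a
    process name to (accepted data types, cost); working memory is always
    included, as the pseudocode specifies."""
    chosen = ["working_memory"]
    for name, (accepted, cost) in processes.items():
        # A process is appropriate if it accepts this data type and its
        # cost fits within the current time/resource budget.
        if data_type in accepted and cost <= time_budget:
            chosen.append(name)
    return chosen
```

Under a tight budget, cheap reactive processes win; with slack, heavier symbolic reasoning can be brought in alongside them, which matches the satisficing ("good enough") principle above.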
First off, I am aware that starting new threads is greatly encouraged here, but I wanted to use this existing thread so as to not clutter up the forum. I'm sorry if I'm breaking any rules.