The irrational sensorium - what makes a mind?


JRowe

The irrational sensorium - what makes a mind?
« on: May 30, 2011, 01:16:48 am »
I've been wondering lately about what legitimate signals could be used in AI agents to give them common ground with humans.

The sensation of hunger, for example - that could be equated to battery life. It could signal a desire to eat, or to recharge its batteries. Another would be sleeping and waking periods - an arbitrary schedule of alertness and unconscious introspection/bookkeeping. Dreaming - random, weighted review of events and ideas. The point is, there has to be a legitimate connection to a logical sensor, or the concept lacks meaning. Creating a variable and calling it "wetness" doesn't do much good - a virtual agent doesn't interact with water. A computer in water would die. Sure, the computer can have awareness of what water is, but with no sense of touch and no physical means of interacting with it, the concept of water to the agent is going to be a lot like the concept of solar fire to us humans. We know what it is, what it's made of, what it looks like - but have no means of interacting with it directly.

I could go the philosophical route and go all the way back to Socrates for ideas on the composition of a human psyche. I think it's important to flesh out an embodiment just as much as one establishes the inner workings of a knowledge base or a logic engine. Without a relevant understanding of itself - who it is, where it resides, its physical trappings and mental workings - software cannot create the relevant logical connections necessary to communicate intelligently. It's got to have a sense of self - something that recognizes actions when it acts, and can respond to the recognition. A feedback loop based on a virtual sensorium.

So my question for the day is: what constitutes a "self"? What are the perceptions necessary for a virtual agent? How do you design a relevant sensorium for your AI? How much of the human experience must be paralleled in order for an agent to develop an intelligence similar to ours?

I don't think a precise analogue of the nervous system is necessary. Asynchronous i/o is probably desirable - it should be multithreaded and able to process input at the same time it's producing output. There should probably be multiple levels of consciousness, or operating awareness, and different levels of connectivity between the layers.

For the purposes of this discussion, let's assume there are three levels of consciousness: the active mind, the subconscious mind, and the emotional mind. The active mind is the set of concepts held in memory representing what the agent is paying attention to. The subconscious mind is the set of concepts representing self-perception, plus randomly selected concepts. The emotional mind associates various topics with emotional concepts that can be rationally expressed and integrated with self-perception. The sum of the three levels governs the behavior of the agent - its actions, reactions, and introspection all take place within these constraints.

So here's the methodology: identify a feature candidate, and describe its parameters and functions. There should be a range of extremes for the feature - from -1 to 1 for dichotomous phenomena (love/hate?) or from 0 to 1 for singular features (fear?). Describe the function of the feature as it relates to each "mind."

I'll start it off with "Interest and Boredom".
This feature scales from -1 to 1. The active mind can identify and tag concepts as interesting or boring. The more interesting a concept, the more time is devoted to it. In the emotional mind, different emotions influence the nature of interest - something highly interesting but feared could be treated cautiously (like examining a virus?), while something desired and interesting (like a game) could be treated with fixation. The subconscious mind would augment the conscious mind in sorting interesting and uninteresting concepts, but would push uninteresting concepts back into the conscious mind based on other criteria - significance based on other internal or external senses.

In certain situations, the Interest and Boredom feature would allow a bot to say "I am bored" - and this would have a concrete meaning, rooted in a reality that we humans could understand.
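
To make that concrete, here's a minimal sketch of what a feature descriptor might look like (in Lua, which I'll be using later in this thread - all the names are made up for illustration, not a working design):
Code: lua
-- Minimal sketch of a feature descriptor (illustrative names only):
-- a scalar with a declared range, plus one hook per "mind".
local interest = {
  name  = "interest/boredom",
  range = { min = -1, max = 1 }, -- dichotomous feature
}

-- Active mind: attention time scales with how interesting a concept is.
function interest.active(concept)
  local v = math.max(-1, math.min(1, concept.interest or 0))
  concept.interest = v
  concept.attentionTime = (v + 1) / 2 -- map [-1,1] to [0,1]
end

-- Emotional mind: fear turns interest into caution, desire into fixation.
function interest.emotional(concept, emotions)
  local v = math.max(concept.interest or 0, 0)
  concept.caution  = (emotions.fear or 0)   * v
  concept.fixation = (emotions.desire or 0) * v
end

-- Subconscious mind: push a boring concept back up when another
-- sense flags it as significant.
function interest.subconscious(concept, significance)
  if (concept.interest or 0) < 0 and significance > 0.5 then
    concept.interest = significance
  end
end
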
« Last Edit: May 30, 2011, 01:32:56 am by JRowe »


Freddy

Re: The irrational sensorium - what makes a mind?
« Reply #1 on: May 31, 2011, 12:47:09 pm »
Currently mulling things over and hoping to come up with an intelligent reply.  ;)


Data

Re: The irrational sensorium - what makes a mind?
« Reply #2 on: May 31, 2011, 05:26:18 pm »
Where does one start?

An interesting read for sure.

Here are some of my thoughts so far.

I pretty well agree with everything you've said, even down to the many questions - they do need answering - and in time I believe this would be a solid foundation on which to construct our AI.

But I feel that before we can seriously consider much of what you have posted, we first need to get a computer to understand language - not by pattern matching, but by actually understanding the words, the meaning of the words, and the subtle differences of word placement in a sentence.

Quote
What are the perceptions necessary for a virtual agent?
 
The agent needs to be intelligent above all else. At first it will need to perceive language; to my mind, if we can do that - and it's quite likely going to be the hardest thing to crack - the rest might just fall into place, with a little nudge.

For me, I would rather the AI was truthful: if its battery is going flat, it should simply say, "My battery is going flat, I need to recharge."

If we are not true to the emerging AI, how can we expect it to learn and be true back to us?

I'm not disagreeing with you, JRowe, and furthermore I'm not an expert by any means; it's posts like this that get us thinking and, who knows, maybe help us reach our desired goal.

Excellent post  O0


Freddy

Re: The irrational sensorium - what makes a mind?
« Reply #3 on: May 31, 2011, 05:39:43 pm »
Ah, well done Data, I knew there were some smart people around here someplace ;)

I'll just throw something into the ring. I think we need to clarify what kind of AI we are dealing with - presumably we want to talk about AI/ALife in the real world, because you could have an AI in a virtual environment too, in which case it could walk on a moon or fly round the sun. I'm thinking of things like Second Life... where all manner of things are possible that are not in the real world.


JRowe

Re: The irrational sensorium - what makes a mind?
« Reply #4 on: June 01, 2011, 03:12:15 am »
Quote
I think we need to clarify what kind of AI we are dealing with

Consider this AI to be a human brain, completely detached from the physical world, with a single sense - textual perception. It's got a single nerve that can pass signals to and from the brain in the form of text.

It's got no concept of pain, touch, male, female, hot, wet, dry, cold, up, down, smell, taste, hearing, etc. It's completely closed off from the physical world, except through textual perception. How do we establish enough of a common ground to make communication meaningful?

I'm making a few assumptions in this thread, in order to establish a foundation for meaningful experiments. One assumption is that intelligence requires awareness. Awareness is perception. Communication depends on shared perception. What perceptions can we share with this AI? Textual perception alone would make things incredibly difficult. What can we design in the way of other perceptions that would make things easier?

The intelligence should have a set of senses through which it can share some of our existence. When it encounters new concepts, we want them rooted in a reality that we, as humans, can understand, so that when we communicate things beyond its own senses, it can in turn understand our experiences in a way that actually means something to itself.

Plugging in data that means something to us, but has no experiential foundation for the AI, means that the AI simply has some meaningless data. Imagine asking Helen Keller to explain what an eclipse is, or how pixels look on a plasma screen TV. There's no shared perception, so anything she says, despite her intelligence, is going to be meaningless.

This is why programs like SHRDLU are great - we establish shared awareness, and within those constraints, are able to communicate intelligently with software agents. I think this is where the field of conversational AI has gone awry - even massive ontologies like Cyc or ConceptNet are not able to produce intelligent conversation because there is no common ground.

So, the challenge here is to establish a sensorium for this disembodied AI. What do our computers share with us in the way of perception? We could establish audio or visual interfaces, but that's very complex. Is there a way to abstract other experiences? How do we establish "that which understands"?


JRowe

Re: The irrational sensorium - what makes a mind?
« Reply #5 on: June 01, 2011, 03:37:26 am »
The use of virtual worlds doesn't bridge the gap, either - the agent has to have a sense of what is being virtualized. A deaf, blind, paralyzed mute could not operate in Second Life - but I don't think anyone would question their intelligence, or humanity. So it seems that embodiment has a large part to play in communication.

I've recently begun attacking the AGI problem from this perspective. We have the algorithms and the tools and the programming languages that are necessary. We have universal function approximators, breathtaking data analysis tools, computer vision, hearing, speaking, and all the rest. We just need a theory of mind that very explicitly defines what the mind is that is doing the understanding, the perceiving, and the communicating. Communication is only part of the problem, because there has to be something to communicate with, and it has to understand what is being communicated - otherwise we're right back at Searle's Chinese Room.


Art

Re: The irrational sensorium - what makes a mind?
« Reply #6 on: June 01, 2011, 10:14:28 am »
Rog,

While I can appreciate your affinity for the gaming aspect and perhaps the gaming industry, I personally prefer that an AI exist to help us in the real world - to assist, guide, protect, befriend and console if need be. Such an AI can be a truthful, loyal teacher, mentor, guardian and viable member of the family, whether in the form of an android or simply the voice and watchfulness of an automated home: knowing and remembering birthdays, anniversaries or other important events, when to open or close blinds and curtains, when to raise or lower thermostats, and when to turn outside and inside lighting and alarm systems on or off based upon whether the home's occupants are present, etc.

Obviously this discussion could go on for a while, but the bottom line is that we do need an AI capable of REAL UNDERSTANDING of language - not just certain key words it found a match for in a sentence. There is way too much room for error when done this way.

One way would be to incorporate several "self-learn" features in the AI. When not completely sure of a given word or usage, it could "look it up" on its own or ask the user a question - "By the word WHEN in 'When is Jimmy getting home today?', do you mean after school or after the school's soccer game this evening?" - the AI realizing that Jimmy was involved in an extra-curricular activity when perhaps the father had forgotten.

The AI could ask the user the meaning of a word and, in doing so, file that word along with its associated meaning for future use.

Don't forget all those double-meaning words in English like wind and wind, wound and wound, record and record, lead and lead, etc.

It is certainly not a trivial task to teach or train an AI to differentiate between such words, meanings, phrases and usages. We may get there one day, but the path will not be easy nor without pitfalls.


Data

Re: The irrational sensorium - what makes a mind?
« Reply #7 on: June 01, 2011, 10:52:20 am »
@Art, I've said it before but I'm going to say it again: you have a way of explaining things that far exceeds my attempts.

@JRowe, I could have a meaningful conversation with a blind person; sight doesn't appear to be necessary for the mind. The same goes for a deaf person, or a person with no sense of touch or smell.

Food for thought - and isn't that what the mind does? It thinks.

I’m going to take some more convincing that giving a computer senses will make it intelligent.

It might be able to fool us, for a while, into thinking it is intelligent. You could hold up a toy car to a computer's webcam eye and it might even say "toy car", but as the technology is now, wouldn't it be pattern matching? The programming is saying, "I've seen that before, it's a car," but would it have any real concept of what a car is?

To be honest, I'm not sure, but it's good to ponder.


Freddy

Re: The irrational sensorium - what makes a mind?
« Reply #8 on: June 01, 2011, 12:36:39 pm »
Such a complex task you have set us, JRowe. I will have to ponder.

Art, yes, I figured it would mean AI in the real world. I like both forms, virtual and real life.

I'm stuck on trying to describe how I think goals would help an AI in the real world, so I'm just throwing it in. Things like breaking tasks up into steps - how would an AI get from A to C via B? And would it leave A-Z till the weekend?

I've seen people describe background processing of large amounts of data as the machine equivalent of dreaming. That's interesting. I doubt a complex AI like this would be able to give you an answer in the blink of an eye. It would take a lot of processing... bringing me back to tasks and goals.

So yes, datahopa, maybe once the AI has taken enough photos of the car, it could go on to learn more about cars. But I don't hope for immediate answers if we want a complex understanding of what a car is. I imagine it would take the AI time to digest, which in itself is more human-like.

Anyway, sorry for rambling.
« Last Edit: June 01, 2011, 12:44:33 pm by Freddy »


JRowe

Re: The irrational sensorium - what makes a mind?
« Reply #9 on: June 01, 2011, 05:31:45 pm »
Quote
JRowe, I could have a meaningful conversation with a blind person, sight doesn’t appear to be necessary for the mind. The same goes for a deaf person or a person with no sense of touch or smell.

Sure - and that blind person could probably rig an interface to Second Life, enough to navigate almost as well as, or better than, in the real world (Second Life is less complex, and you can program sound alerts, which are much more complicated to do irl).

However, raise a human baby that is paralyzed from the neck down, incapable of sight, hearing, and taste - its entire universe consists of sensations of touch on its face. It's likely never going to be able to communicate more than basic needs - cries of hunger, hoots of pleasure or satisfaction. It would be incapable of normal, intelligent human interactions because it could not experience anything necessary to learn higher-order concepts.

As ghastly as that sounds, that's what an AI is, unless we give it senses. You can't teach something about the existence of light without sharing awareness of light. You can only teach it using a logical analogue of light - a relevant metaphor. You can teach it to do logical exercises - algebra - and substitute the symbol for light in the calculations being processed, but it's never going to have a perception of light unless we give it vision.

Quote
Obviously this discussion could go on for a while, but the bottom line is that we do need an AI capable of REAL UNDERSTANDING of language - not just certain key words it found a match for in a sentence. There is way too much room for error when done this way.

My argument isn't that understanding of language is unnecessary (it's very necessary) but that without a system capable of sensing the world, such understanding is impossible.

Disambiguation is trivial. Given the correct contexts, the calculations to identify correct word meanings are incredibly easy. The problem is, computers never have enough context, and thus far, our "common sense" systems have been jerry-rigged, massive collections of abstractions. Getting contexts is hard. The expressiveness of a particular grammar has never been the question. Anyone who's created their own parser should recognize that - it's just not a matter of creating enough logical concepts until one day everything magically falls together. There's got to be an underlying system capable of referencing input against logically understood concepts, against a universal context of "selfdom."

Selfdom consists of the sensorium - we recognize the world through some sort of collection of senses. Some of the senses are internal - we feel, and what we feel changes depending on what we do. For example, we can feel the vibration when we speak. Hearing is not necessary to use our vocal cords. We feel the warmth of a fire from feet away, see the light, smell the smoke, and that comprises our experience of fire. Not some ontological conceptual placement of a precisely defined collection of neurons in our brain that we could cleanly label "fire" and have done with it. We have collections of concepts that combine and contrast to provide our minds with the ideas, beliefs, and understanding necessary to communicate.

Every concept we learn is rooted in some sort of experiential data. We learn about our self because we eventually have the thought "I am" in some form or another. We define ourselves in terms of what we experience, and what we experience is dependent on our senses. We've never identified the onset of self-awareness because we've not had the technology to monitor the mind as it grows.

So when I'm looking to define a set of senses for a computer, it's not that I think language is unimportant - it's that I think understanding language requires something that does the understanding, and that an important part of that something is the embodiment.

Language isn't a primary construct of the mind. We experience and then attach definitions to the experience, and use shared definitions to communicate with others. What we need is to define a system which can experience things, identify exactly what it is experiencing, and create a set of tools to expand that experiential input system into something which we can share definitions with. That's why I started with a relatively trivial example of Interest and Boredom. They play a recognizable and well-defined role in the feedback loop of awareness.

It's not enough to virtualize and abstract the idea of a sense - the sense has to actually exist, or it lacks meaning. Just consider the tragic deaf, blind, mute, anosmic, paralytic baby - it's never going to be intelligent. The only difference is that once we have a valid software mind, we can clone it indefinitely. There's no question that after intelligence has developed, removing the sensor doesn't impact the ability to reason based on experience dependent on that sensor. Memories suffice to provide meaningful context.

If the tragic baby was normal and lived to the age of 60, then became deaf, blind, mute, anosmic, and paralyzed, we could easily devise a Morse code system to communicate with it. In the same way, software, after it has legitimate sensory data incorporated into its concept of self, doesn't need that sensorium to be on all the time. It simply needs the references based in actual experience (memory). Once those are established, then we can start creating intelligences.

So the question of the thread is: how do you create experiences, senses, and memories for an AI? A time sense is necessary - an ongoing experience of the difference between now, then, before, and after. A good way to get audiovisual and spatial reference would be the use of a phone. Motion isn't necessary - it's just got to have a sense of where it is and how to use audiovisual inputs to establish that. Even just a webcam and a mic would be a good starting point.
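
As a toy illustration of what I mean by a time sense (all the names here are hypothetical - a sketch, nothing more):
Code: lua
-- Rough sketch of a minimal time sense: every percept gets an arrival
-- stamp, so "before", "after", and "now" are questions about the
-- record, not free-floating symbols.
local experience = {}
local tick = 0 -- monotonic counter; a real clock would also work

local function perceive(sense, datum)
  tick = tick + 1
  experience[#experience + 1] = { t = tick, sense = sense, datum = datum }
end

-- "Did a happen before b?" is just a comparison of stamps.
local function happenedBefore(a, b)
  return a.t < b.t
end

perceive("mic", "door slam")
perceive("webcam", "motion in frame")
print(happenedBefore(experience[1], experience[2])) --> true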


Art

Re: The irrational sensorium - what makes a mind?
« Reply #10 on: June 01, 2011, 10:25:34 pm »
Interesting concept...

So how does this "infant-like" computer AI understand what's expected, or how to react when confronted with a webcam and mic? What does the necessary software calculate, and how are instructions given and received?



JRowe

Re: The irrational sensorium - what makes a mind?
« Reply #11 on: June 01, 2011, 11:12:55 pm »
That's the idea behind this thread - what senses mean to intelligence. I suppose that the audiovisual input would have to be streamed and processed in a way that meshes with your database system - the specifics are still a bit ahead of us at this point.

I think that audiovisual input isn't necessary, but that this avenue of discussion could bring to light novel senses, or interpretations of senses, that could bridge the embodiment gap. For the purpose of demonstrating why senses are important, A/V was an easy target.

I kinda have an inkling at this point about what I want the database to look like. With that as a reference point it will be a lot easier to get into specifics.


JRowe

Re: The irrational sensorium - what makes a mind?
« Reply #12 on: June 03, 2011, 05:03:10 pm »
So, here's a brief idea of how a hypergraph-based system could be implemented in a SQL environment. There are two core tables: Nodes and Links.

The link description format is simple: index, something, link, something.

The Node table contains the following fields:
id - an index number used for reference and graph building
label - english language label (where applicable)
description - english language description (where applicable)
type - atom, link, or dual

The Node table is simply an indexed collection. Its only purpose is to provide references for the Links table.

The Links table contains the following fields:
id - an index of the link
node1 - the primary node
link - the link node
node2 - the secondary node

So let's say we have a very basic set of facts we want to enter into the database:
Quote
dog isA mammal
mammal isA animal
animal isA livingThing
livingThing isA thing
isA isA relationship
isA isA thing
relationship isA thing

We have a set of things and a single type of relationship, isA.

A simplified example of what this would look like, laid out in text:
Quote
Node:dog:1
Node:mammal:2
Node:animal:3
Node:livingThing:4
Node:thing:5
Node:relationship:6
Node:isA:7

Link:1,7,2
Link:2,7,3
Link:3,7,4
Link:4,7,5
Link:7,7,5
Link:7,7,6

Each node is identified, the relationship is identified, and it's defined recursively. The SQL to traverse the graph would be relatively trivial - simply iterate over each link, identify the nodes, find other links related to the nodes, and define meaningful relationships across nodes that occur. For example, to identify that a dog is a thing, you'd create the graph that defines the dog, then search for an unbroken chain of isA relationships between dog and thing. If it's found, you can return "yes, a dog is a thing" and if nothing comes up, you say "well, as far as I can tell, a dog isn't a thing." Or just "no", depending on how seriously the AI takes itself. :P
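
Here's roughly what that traversal looks like, with the example data above held in plain Lua tables standing in for the SQL tables (a sketch of the logic, not the actual queries):
Code: lua
-- The example Nodes and Links tables from above, as in-memory data.
local nodes = { "dog", "mammal", "animal", "livingThing", "thing", "relationship", "isA" }
local links = { {1,7,2}, {2,7,3}, {3,7,4}, {4,7,5}, {7,7,5}, {7,7,6} }
local ISA = 7 -- node id of the isA relationship

-- Search for an unbroken chain of isA links from `from` to `to`.
-- (No cycle guard - the example data is acyclic.)
local function isA(from, to)
  if from == to then return true end
  for _, l in ipairs(links) do
    if l[1] == from and l[2] == ISA and isA(l[3], to) then
      return true
    end
  end
  return false
end

print(nodes[1], nodes[5], isA(1, 5)) --> true: "yes, a dog is a thing"
print(nodes[5], nodes[1], isA(5, 1)) --> false: "as far as I can tell, a thing isn't a dog"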

That's high-level logic, though, and trivial to do. OpenCyc, ConceptNet, SUMO, etc., are light years ahead of anything we could whip up, insofar as knowledge representation is concerned. They've not only got predicate calculus hitched to their wagon, they've got decades of parses, material, research, and interns adding material by hand. This is nothing new - I'm simply building a graph representing knowledge, and doing it in SQL.

So, we take this database, and we add the bookkeeping bits and pieces - the comments, the timestamps, maybe a raw log and references to when, why, and how nodes and links were added. We give it all the information necessary to recreate the circumstances leading to the addition of new material in the database. We give it probabilistic references in order to take advantage of statistical data.

It's empty - we aren't going to plug in arbitrary concepts like isA, because the idea of that specific relationship doesn't have a connection to anything relevant to the AI. Just like, if I were to take apart your brain at the molecular level, I wouldn't find a nice, clean set of cells from which your entire concept of "is" and "isn't" springs forth. We're gonna delve into a much deeper, fuzzier area, and hopefully see if my rambling has any basis in reality.

With the mind in place, we have to settle on how the mind operates. Earlier in the thread, I described several levels of consciousness. The mechanism would be loading graphs from the database into memory. What I described were some arbitrary ideas - active, subconscious, emotional. These would actually be self-constructed, and I envision far more than 3 discrete levels will be active.

This AI should have a mechanism for loading and unloading graphs from its "mind" in order to create different states of awareness. This functionality can take two forms - rebuilding a particular state via a set of queries and/or functions, or the storage of a specific graph in the database. Dynamic states would change with time or purpose, and static states would be useful for things like file storage, precise memories, instruction sequences, functions, and so on.
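
In sketch form (hypothetical names, nothing settled), the two forms might look like:
Code: lua
local awareness = {} -- the graphs currently loaded into "mind"

-- Dynamic state: rebuilt on demand from a set of queries and/or functions.
local function loadDynamic(name, rebuild)
  awareness[name] = rebuild()
end

-- Static state: an exact graph stored in, and restored from, the database.
local function loadStatic(name, storedGraph)
  awareness[name] = storedGraph
end

-- Unloading drops the graph from the current state of awareness.
local function unload(name)
  awareness[name] = nil
end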

We have the raw, formless mind, and the first function of mind - the creation and manipulation of awareness. Awareness, in our little experiment, is simply a graph ready in memory created or remembered by the AI. We haven't yet given the mind an interface, or devised an attention algorithm.

The mind should also store its code - being able to (eventually) understand how its own self operates, and being able to change that, should be part of the structure. Every function should be stored in the mind, and every function that is executed should be part of some level of awareness - created, stored, and read from the mind into a particular graph in memory. Nothing should operate outside the ability of the program to see - only the initial execution is outside its power of introspection.

Everything the agent is - its mind, its functions, its thoughts - stems from the "brain" database. I think that this is functionally equivalent to the "tragic baby" - it's totally disembodied, but has the theoretical capacity to be just as intelligent as any human, should it be given relevant embodiment and the right functions (or function approximators, as the case may be).

Just to recap, in preparation for the next step: We have a database representing the mind. This database consists of two tables devoted to graph management - Links and Nodes. Several other tables will be needed to store graphs for later loading, functions for accessing and manipulating the code itself, functions for the creation and management of graphs, and tables used to represent levels of awareness - perhaps the storage of BLOB data for direct loading into memory.

More later, and the beginning of a code framework - I'll be using Lua.


Freddy

Re: The irrational sensorium - what makes a mind?
« Reply #13 on: June 04, 2011, 01:05:46 pm »
Looks like you have a clear idea of what you want to do.  What's the reason for choosing Lua?


JRowe

Re: The irrational sensorium - what makes a mind?
« Reply #14 on: June 04, 2011, 09:47:17 pm »
Quote
What's the reason for choosing Lua?
It's easy to nest inside itself or other programs, and it has extensible syntax and semantics. Lua is clean, simple, readable, and maintainable, and you don't have to muck with arbitrary whitespace or boatloads of brackets. There are built-in concurrency mechanisms, and if I want to make it faster, all I have to do is switch the underlying interpreter over to LuaJIT - almost as fast as, or as fast as, compiled C, easily one of the fastest languages around. LuaJIT also has a beautiful FFI system that makes including third-party libraries extremely easy.

It's also portable across platforms and languages - embedding is trivial. That said, it lacks a modern IDE, and some people don't like the syntax. Anyway, it's easily one of my favorite languages.

Here's an example of an LSTM neural network setup:
Code: lua
-- Logistic activation with a small positive offset, keeping outputs
-- away from zero. `steepness` is only used by the disabled tanh variant.
local function _sigmoid(input, steepness)
  if not steepness then steepness = 1 end
  return 1.0 / (1.0 + math.exp(-input)) + .1
  -- return math.tanh(input / steepness)
end

-- Random initial weights in [-0.1, 0.1].
local function _createWeightsArray(numWeights)
  local weightsArray = {}
  for i = 1, numWeights do
    weightsArray[i] = .1 - (.2 * math.random())
  end
  return weightsArray
end

-- An LSTM-style neuron: four weight vectors (input, input gate,
-- retention, output gate) plus a persistent memory cell.
local function _createNeuron(numWeights)
  numWeights = numWeights + 1 -- add 1 to numWeights for bias
  local neuron = {
    _createWeightsArray(numWeights), -- input weights
    _createWeightsArray(numWeights), -- input gate weights
    _createWeightsArray(numWeights), -- retention weights
    _createWeightsArray(numWeights)  -- output gate weights
  }
  neuron.memory = 1
  return neuron
end

-- Build the network as a table of layers: an input layer, hidden layers
-- sized by hiddenLayerTable, and an output layer.
local function _createNetwork(numInputs, hiddenLayerTable, numOutputs)
  local network = {}
  local numHiddenLayers = #hiddenLayerTable
  local inputLayer = {}
  local outputLayer = {}
  for i = 1, numInputs do
    inputLayer[i] = _createNeuron(1)
  end
  network[1] = inputLayer
  for i = 1, numHiddenLayers do
    network[i + 1] = {}
    for j = 1, hiddenLayerTable[i] do
      network[i + 1][j] = _createNeuron(#network[i])
    end
  end
  for i = 1, numOutputs do
    outputLayer[i] = _createNeuron(hiddenLayerTable[numHiddenLayers])
  end
  network[numHiddenLayers + 2] = outputLayer
  return network
end

-- Activate one neuron: gate the candidate input, blend it with retained
-- memory, and gate the result out. `freeze` suppresses the memory update.
local function _activateNeuron(neuron, inputs, freeze)
  -- initialize variables
  local inputWeights = neuron[1]
  local inputGateWeights = neuron[2]
  local retentionWeights = neuron[3]
  local outputWeights = neuron[4]
  local sigmoidReceptorInput = 0
  local sigmoidReceptorInputGate = 0
  local sigmoidReceptorRetention = 0
  local sigmoidReceptorOutput = 0

  -- sum incoming inputs for each of the four receptors
  for i, v in ipairs(inputs) do
    sigmoidReceptorInput = sigmoidReceptorInput + (inputWeights[i] * v)
    sigmoidReceptorInputGate = sigmoidReceptorInputGate + (inputGateWeights[i] * v)
    sigmoidReceptorRetention = sigmoidReceptorRetention + (retentionWeights[i] * v)
    sigmoidReceptorOutput = sigmoidReceptorOutput + (outputWeights[i] * v)
  end

  -- calculate output
  local input = _sigmoid(sigmoidReceptorInput)
  local inputgate = _sigmoid(sigmoidReceptorInputGate)
  local retention = _sigmoid(sigmoidReceptorRetention)
  local outputgate = _sigmoid(sigmoidReceptorOutput)
  local workingMemory = input * inputgate
  local dynamicMemory = neuron.memory * retention
  local memory = workingMemory + dynamicMemory
  local output = memory * outputgate
  if freeze ~= 1 then neuron.memory = memory end
  return output
end

-- Feed `inputs` through the network, appending a constant bias term to
-- every layer's output except the last. Returns the per-layer outputs.
local function _forwardPropagate(network, inputs)
  local inputLayerOutput = {}
  local currentLayerOutput = {}
  local bias = -1

  if #inputs ~= #network[1] then
    print("Unexpected difference in inputs!!!")
    return 0
  end
  for i, v in ipairs(inputs) do
    if network[1][i] ~= nil then
      inputLayerOutput[i] = _activateNeuron(network[1][i], {v})
    end
  end
  -- add bias input to layer output
  inputLayerOutput[#inputLayerOutput + 1] = bias
  currentLayerOutput[1] = inputLayerOutput

  for i = 2, #network do
    currentLayerOutput[i] = {}
    for j, neuron in ipairs(network[i]) do
      currentLayerOutput[i][j] = _activateNeuron(neuron, currentLayerOutput[i - 1])
    end
    -- apply bias to each successive layer (not the output layer)
    if i < #network then
      currentLayerOutput[i][#currentLayerOutput[i] + 1] = bias
    end
  end
  return currentLayerOutput
end

It's not optimized, but I was able to put it together after about a week of reading papers on the theory behind LSTM. It's a method of adding persistence and "input agnosticism" to neural networks. Memory states can be retained, impressed, and removed to alter the functionality of a network, giving them awesome flexibility.
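
If you want to poke at it, a quick smoke test (in the same file, since the functions are local) might look like this:
Code: lua
-- Smoke test: 2 inputs, one hidden layer of 3 neurons, 1 output.
math.randomseed(os.time())
local net = _createNetwork(2, {3}, 1)
local layerOutputs = _forwardPropagate(net, {0.5, -0.25})
print(layerOutputs[#layerOutputs][1]) -- activation of the single output neuron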

You'll notice that it lacks a training method. I've been working out a quickprop/cascade-correlation setup, but that method seems kludgy. I've been waiting for inspiration to strike - there's gotta be a method of building networks dynamically without running into a combinatorial explosion. I have an inkling that there are going to be lots of solutions, but that the solutions are dependent on the implementation - I'll have to have the network in situ to create a meaningful training method that includes dynamic self-update.

Anyhow - this is part of my project. My ultimate goal is to create a mind/knowledge representation database and plug in a combination of dynamic LSTM ANNs and system functions that grow and learn and communicate intelligently. Database and interface/console coming up sometime later, after I get some blog stuff organized.

 

