We are computational machines after all!

  • 10 Replies
  • 10074 Views

frankinstien

We are computational machines after all!
« on: November 17, 2020, 12:42:49 am »
So, with the new AR glasses and engineering the real-time engagement feature, I realized that episodic memory is actually complex, but the way our brains do it is how you would expect a machine to do it. Here's an article that describes how we chunk events.


MikeB

Re: We are computational machines after all!
« Reply #1 on: November 18, 2020, 04:45:22 am »
The article uses MRI to scan the brain... but the problem is you don't know what people are responding to on screen. It could be anything from a hair style, the interior decoration, a word you haven't heard in a long time, etc. Long-term memory could involve the last time you saw/heard the things happening on screen.

It's not necessarily event-based time, location, emotion, and story. That may only be 10% of what their mind is doing.


frankinstien

Re: We are computational machines after all!
« Reply #2 on: November 18, 2020, 03:40:54 pm »
Quote
The article uses MRI to scan the brain... but the problem is you don't know what people are responding to on screen. It could be anything from a hair style, the interior decoration, a word you haven't heard in a long time, etc. Long-term memory could involve the last time you saw/heard the things happening on screen.

It's not necessarily event-based time, location, emotion, and story. That may only be 10% of what their mind is doing.

It doesn't matter what they're responding to on the screen. The model the authors of the paper developed detects changes within short time slices, and those time slices are then linked into higher-order, longer time slices, which are correlated with what is shown on the screen or what has changed on it.
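To make that concrete, here's a minimal sketch of the idea in Python. This is not the authors' actual model; the distance threshold and the mean summaries are stand-ins for whatever their network learns:

Code:
import numpy as np

def chunk_boundaries(slices, threshold):
    # Mark a boundary wherever the change between consecutive
    # time slices exceeds the threshold.
    boundaries = [0]
    for i in range(1, len(slices)):
        if np.linalg.norm(slices[i] - slices[i - 1]) > threshold:
            boundaries.append(i)
    return boundaries

def chunk(slices, threshold):
    # Group the time slices into chunks between boundaries.
    bounds = chunk_boundaries(slices, threshold) + [len(slices)]
    return [slices[a:b] for a, b in zip(bounds, bounds[1:])]

def higher_order_chunks(chunks, threshold):
    # A longer-period layer re-chunks the summaries (here: means)
    # of the layer below, linking short slices into longer ones.
    summaries = np.array([c.mean(axis=0) for c in chunks])
    return chunk(summaries, threshold)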


frankinstien

Re: We are computational machines after all!
« Reply #3 on: November 19, 2020, 12:26:20 am »
So I implemented a chunking scheme in Amanda and something fantastic came out of it! We all experience time differently depending on the situation. In a critical or anxious situation, such as catching a baseball or being in a car accident, time slows down and events look like they're moving in slow motion. Other times you can remember details but time just zips by, like when you're making out with your girlfriend as a teenager. The paper described four layers, one of which has a period of a minute or so. When in a critical or anxious mode we focus on the lower tiers of chunking, which have shorter periods, but in much less critical situations we can move up the chunking tiers, even to the highest tier, where time feels like it's moving much faster. With what I implemented, the AI will have essentially the same type of recall as a human being: depending on the scenario it can focus on the shorter-period tiers, where events are very detailed and seem to be in slow motion, or, in a more pleasant situation, on the higher, longer-period tiers that are simply summaries of the lower ones!  :D

There are other details as to how to handle long-term memory storage, but I was just blown away that what was described in the paper, and ultimately implemented in code, could lend itself to such an effect. So time is much more complex than just a clock for human beings, and so too for AI; at least for Amanda it will be. ;)
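In sketch form, the tier focus looks something like this (the tier names, periods, and arousal cutoffs here are illustrative placeholders, not the paper's values):

Code:
# Four tiers as described in the paper; all periods below are
# illustrative stand-ins.
TIER_PERIODS = {"T4": 0.010, "T3": 0.5, "T2": 5.0, "T1": 60.0}

def focus_tier(arousal):
    # High arousal -> attend the short-period tiers, so events are
    # detailed and feel like slow motion; low arousal -> attend the
    # long-period summary tiers, so time seems to zip by.
    if arousal > 0.8:
        return "T4"
    if arousal > 0.5:
        return "T3"
    if arousal > 0.2:
        return "T2"
    return "T1"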


HS

Re: We are computational machines after all!
« Reply #4 on: November 19, 2020, 01:05:11 am »
Ideally your internal framerate will attune itself to what you are observing. AI should be able to go far beyond what humans can do in this regard. The question is whether to carry over some of the human triggers (e.g. approaching object, loud noise) or leave it entirely up to the conscious control of the AI.


frankinstien

Re: We are computational machines after all!
« Reply #5 on: November 19, 2020, 03:04:17 am »
Quote
Ideally your internal framerate will attune itself to what you are observing. AI should be able to go far beyond what humans can do in this regard. The question is whether to carry over some of the human triggers (e.g. approaching object, loud noise) or leave it entirely up to the conscious control of the AI.

Framerate is a fair enough analogy for the sense of the passage of time, but the lowest tier is where data is sampled from the sensors, so moving up the layers is not just shifting time periods; it's a matter of dealing with all that data and how to organize it as you move up. As for going beyond what humans can do: since CPUs can clock themselves at nanosecond scales, that changes what an AI can observe, since it could modify its period to whatever may be appropriate.
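Roughly, each move up the layers looks like this (fan_in and the summarize function are placeholder names, not my actual code):

Code:
def roll_up(lower_entries, fan_in, summarize):
    # Each higher-layer entry summarizes fan_in consecutive entries
    # from the layer below: climbing a layer trades temporal detail
    # for a compressed overview, not just a longer clock period.
    return [summarize(lower_entries[i:i + fan_in])
            for i in range(0, len(lower_entries), fan_in)]

# e.g. tier3 = roll_up(tier4_windows, fan_in=50, summarize=average_features)
# where average_features is whatever summary function the layer uses.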


HS

Re: We are computational machines after all!
« Reply #6 on: November 19, 2020, 05:46:17 am »
Oh, so the lowest tier/fastest framerate is always there, working in the peripheral nervous system. It's ready to notice something quick happening, but the intelligence behind it may glean richer data by dedicating most of its attention to one of the more integrated time layers. So a neural net functions on several time-sampling circuits, each one specialized to sample time at a different "event length", combining the impressions from lower layers into higher-level concepts.


frankinstien

Re: We are computational machines after all!
« Reply #7 on: November 22, 2020, 09:11:04 pm »
Here's a summary with diagrams that help clarify what's being implemented in the chunking scheme.


MikeB

Re: We are computational machines after all!
« Reply #8 on: November 26, 2020, 08:37:45 am »
Quote
Ideally your internal framerate will attune itself to what you are observing. AI should be able to go far beyond what humans can do in this regard. The question is whether to carry over some of the human triggers (e.g. approaching object, loud noise) or leave it entirely up to the conscious control of the AI.

I don't know if it's possible to test this, but synapse speed may be linked to (or self-adjust to) screen and light bulb flicker rates. Whenever I'm thinking critically I can see screens and lights flickering. They're generally only ~60 Hz. If you see 2-3 flickers, then synapse speed is ~120-160 Hz. Modern gaming monitors and VR headsets are around 120-144 Hz.


frankinstien

Re: We are computational machines after all!
« Reply #9 on: January 11, 2021, 01:01:24 am »
I've implemented an infrastructure that captures various data inputs from video, audio, tactile, temperature, and pressure sensors. With that said, using a 10 ms time window for the shortest tier (T4), in about twenty minutes I run out of 125 GB of RAM! When you realize that the human brain has only 89 billion neurons, where each neuron has a single output whose spike-train range can be encoded with only 4 bits (16 possible output states), you start wondering how the brain is storing information. Realize that long-term potentiation is a consistent voltage state and not a spike train. So spiking neurons deliver a signal level proportional to the number of spikes within a time window, which can be replaced by, or is the logical equivalent of, the consistent output level of long-term potentiation. I found this article, which describes the dendritic trees of neurons as encoders or modulators of synaptic inputs! Meaning the inputs to a neuron are a form of information storage. So, say you have a neuron with 90,000 inputs (not uncommon) and those inputs represent pixels of an image or features extracted from an image. OK, so the inputs to a neuron can represent complex information, but the output of the neuron is simply a spike or spike train which represents the degree to which the neuron's inputs match the coded dendrites. The more inputs that match, the higher the spike train's firing rate. Effectively the neuron can code for complex inputs but can only validate whether those inputs are an adequate match to its codification. So, if the memory is of an image of a cat, this scheme can identify the cat, or, if there are only features similar to a cat, it can fire at a slower rate, indicating a partial feature match.
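A toy version of that idea, just to show its shape (the boolean matching and max_rate are arbitrary choices of mine, not anything from the article):

Code:
import numpy as np

def match_firing_rate(inputs, stored_pattern, max_rate=200.0):
    # The dendritic tree "stores" a pattern; the neuron's output only
    # reports how well the ~90,000 current inputs match it, not the
    # pattern itself. max_rate (spikes/s) is an arbitrary choice.
    inputs = np.asarray(inputs, dtype=bool)
    stored = np.asarray(stored_pattern, dtype=bool)
    match_fraction = np.mean(inputs == stored)
    return max_rate * match_fraction

# A full match fires at max_rate; cat-like-but-not-cat inputs produce
# a partial match and therefore a slower firing rate.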

This dendrite-coding scheme allows highly complex data to be represented by a single neuron, which can save a lot of resources, but the neuron can only report whether the inputs matched or partially matched its dendritic coding. This scheme also has the advantage of being a quick means for a life form to determine whether it has relatable information without having to do any further processing. This idea could also explain why human memories are vague, where we can be aware that we are familiar with some concept or stimulus we are experiencing but not have any more details about it. Here's where "use it or lose it" comes into play: the neuron can validate the past experience but has lost the connections for how to handle it, or the details of the experience, because those were not used in quite some time.

Everyone has had the experience of remembering events where the details range from sparse to nonexistent. While it seems counter-intuitive to have such imperfect memory, such a strategy motivates a dependency on other peer members who might have more recent experience with a particular subject, and it avoids overloading the scarce resource of only 89 billion neurons in our brains...

« Last Edit: January 11, 2021, 04:31:34 am by frankinstien »


frankinstien

Re: We are computational machines after all!
« Reply #10 on: January 12, 2021, 06:24:09 pm »
Quote
I've implemented an infrastructure that captures various data inputs from video, audio, tactile, temperature, and pressure sensors. With that said, using a 10 ms time window for the shortest tier (T4), in about twenty minutes I run out of 125 GB of RAM!

Update: I implemented some optimization tricks where noise from the sensors is filtered out. Also, only deltas are stored. Before, a T4 would be stored regardless of whether its collection of inputs was empty; I corrected that so only T4s with inputs that can be evaluated are stored. Now I don't get the memory overloading. So it looks like you can overcome the problem of information overload algorithmically.  :happyaslarry:
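The fix boils down to something like this (the names here are illustrative, not my actual code):

Code:
def store_t4_window(chunk_store, prev_inputs, inputs, noise_floor):
    # Keep only the deltas that clear the noise floor, and skip the
    # T4 window entirely when nothing survives -- windows with empty
    # input collections were what was eating the 125 GB of RAM.
    deltas = {k: v for k, v in inputs.items()
              if abs(v - prev_inputs.get(k, 0.0)) > noise_floor}
    if deltas:
        chunk_store.append(deltas)
    return inputs  # becomes prev_inputs for the next window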

 

