Dendrite Processing


frankinstien

Dendrite Processing
« on: November 19, 2021, 05:35:59 am »
Here is a very interesting and fairly recent paper on the morphology and signaling of dendritic spines.

The diagram below is from the article:

[image not reproduced: the paper's diagram of dendritic spine morphology and signaling]
Looking at the ion pumps on the plasma membrane and the endoplasmic reticulum (ER) membrane, it's clear that there is an ability to modulate signaling, allowing either amplification or reduction. Not only that, but there is also the ability to control a neuron's internal states, such as dichotomies that can act in a boolean fashion, where spine states within a region can cancel out or reinforce each other. The paper states that the spines can physically change, and that this has an effect on the plasticity of neurons.

The beauty of this kind of approach is that by simply anchoring to these spines you can build an astronomical amount of combinational logic.  :o
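
A toy sketch of that idea (mine, not the paper's): treat each spine in a region as a signed contribution, so states within a region can cancel or reinforce, and the region only passes a signal when the net excitation clears a threshold. The encoding and threshold here are illustrative assumptions.

Code:
def region_output(spine_states, threshold=2):
    """Sum signed spine contributions: +1 excitatory, -1 inhibitory, 0 silent.
    The region fires only if net excitation clears the threshold."""
    return sum(spine_states) >= threshold

# Two spines cancel, two reinforce: only the second region fires.
print(region_output([+1, -1, +1, -1]))  # False: contributions cancel out
print(region_output([+1, +1, +1, 0]))   # True: contributions reinforce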

Here's an older paper on the effects of caffeine on dendritic spines.


MikeB

Re: Dendrite Processing
« Reply #1 on: November 19, 2021, 01:55:29 pm »
Inhibition/excitation (plasticity) of synapses is controlled through chloride flow, and the cysteine component in glutamate (GAD65).

Apples, cranberries, and eggs contain the cysteine amino acid... You can literally eat them and five seconds later feel more mentally flexible, sociable, open-minded...

Some info here:
"In addition, some EAATs [Excitatory Amino Acid Transporters] also act as chloride channels or mediate the uptake of cysteine, required to produce the reactive oxygen species scavenger glutathione."

https://core.ac.uk/download/pdf/300327899.pdf


MagnusWootton

Re: Dendrite Processing
« Reply #2 on: November 19, 2021, 04:49:16 pm »
I don't think the secret of AI is in studying the brain. I believe in Socratic thought (thought with no experiment; it's conceivable with theory alone, seeing nothing) to solve it.


frankinstien

Re: Dendrite Processing
« Reply #3 on: November 19, 2021, 06:18:47 pm »
Quote from: MagnusWootton on November 19, 2021, 04:49:16 pm
I don't think the secret of AI is in studying the brain. I believe in Socratic thought (thought with no experiment; it's conceivable with theory alone, seeing nothing) to solve it.

I would disagree, since looking at neurons as state machines and cellular automata removes the stigma and mystery around them. Resolving their behaviors, or generalizing them into forms of boolean mathematics with a relevance dimension, allows for analogies, albeit scaled-down ones, to current electronics and even digital technologies. Working with programming languages that can mimic free association, something thought impossible for a computer 40 years ago, is very possible today using object-oriented relationships and O(1) lookups. I don't think we, as a culture, would have attempted such a thing if it weren't for the research that has revealed, so far, that neurons behave with fuzzy states.
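
To make the free-association point concrete, here's a minimal sketch of association-following with O(1) average-time dictionary lookups. The concept graph and the pick-first-neighbor rule are hypothetical illustrations, not a description of any particular system.

Code:
associations = {
    "coffee": ["caffeine", "morning", "bitter"],
    "caffeine": ["alertness", "dendritic spines"],
    "morning": ["sunrise", "coffee"],
}

def free_associate(start, hops=3):
    """Follow association links for a few hops; each lookup is O(1) on average."""
    chain, current = [start], start
    for _ in range(hops):
        neighbors = associations.get(current)
        if not neighbors:
            break
        current = neighbors[0]  # a real system might pick by relevance weight
        chain.append(current)
    return chain

print(free_associate("coffee"))  # ['coffee', 'caffeine', 'alertness']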


MagnusWootton

Re: Dendrite Processing
« Reply #4 on: November 19, 2021, 06:27:16 pm »
You can come up with cellular automata without studying biology at all. The whole idea of the Turing Machine was conceived by Alan Turing without studying any biology either.

I hate biology.

Leave the animals alone; you'll learn nothing through their pain, and cruelty towards them, that you couldn't work out by yourself.


frankinstien

Re: Dendrite Processing
« Reply #5 on: November 27, 2021, 05:42:06 pm »
When you realize that a neuron can have 100K-plus connections, where the combination of those inputs determines whether the neuron will fire, you can understand a beautiful quality of this kind of system. Neurons are very good at identifying partial features of data, since the combinations of inputs can vary quite a bit depending on how the neuron has configured its dendritic spines. For example, if the neuron statistically needs only 50 coincident inputs to trigger a firing, then from a combinational perspective (order is not important) even a pool of just 100 of its connections yields C(100, 50) ≈ 1.00891 × 10^29 qualifying combinations, and the full connection count pushes that figure astronomically higher. The beauty of this is that the neuron can literally respond to situations, i.e. combinations, that it hasn't even trained on.
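
The figure is easy to check with Python's exact binomial; the mantissa above matches C(100, 50), and widening the pool toward the neuron's full connection count makes the number explode:

Code:
import math

# Choosing 50 coincident inputs out of a pool of 100, order irrelevant:
print(math.comb(100, 50))  # 100891344545564193334812497256 ≈ 1.00891e29

# With the full 100K+ connections the count is astronomically larger:
print(math.comb(100_000, 50) > 10**180)  # True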

Since I've been establishing a means to do partial feature identification through hash tables, lookups that mimic what a neuron does would naively require a key for each combination. Not practical! But there is a way to do this without having a key for each combination. So how can it be achieved, you may ask? If each feature is its own key for each context or concept, then I need only count how many of its features any concept shares with the inputs being received. Yes, I have to apply the test for all inputs to each concept, and that may seem a bit clumsy at first, but I can constrain the lookups on a per-context basis to shrink the number of comparisons I need to do. Since these lookups are not linear and can be done concurrently, such a process can quickly sift through a lot of instances to find qualified candidates.
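
A hedged sketch of that scheme: an inverted index with one key per feature (not per combination), scoring concepts by how many of the current input features they share, so partial matches qualify without ever enumerating combinations. The data and threshold are illustrative assumptions.

Code:
from collections import Counter

# Inverted index: feature -> concepts carrying it (one key per feature).
feature_index = {
    "has_fur":  {"cat", "dog"},
    "barks":    {"dog"},
    "purrs":    {"cat"},
    "whiskers": {"cat", "rat"},
}

def match_concepts(input_features, min_hits=2):
    """Count feature hits per concept; partial matches past the threshold qualify."""
    hits = Counter()
    for feature in input_features:
        for concept in feature_index.get(feature, ()):
            hits[concept] += 1
    return [(c, n) for c, n in hits.most_common() if n >= min_hits]

# "cat" and "dog" each hit two of the three input features, with no
# combination key needed (result order may vary between the tied pair).
print(match_concepts({"has_fur", "whiskers", "barks"}))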

To give you an idea of how much can be done with this approach, here's an article on a GPU hash table. The author used a GTX 1060, on which 300 million insertions and 500 million deletions per second could be achieved; the lookup rate was not stated but would likely be higher than the deletion rate. With a more powerful GPU, say an RTX 3080, which is roughly 7 times more capable than a GTX 1060, lookups on the order of 3.5 billion per second are not infeasible. From an NLP perspective, with GPU cards having 8 to 10 GB of memory, and concepts encoded with just a simple integer identity plus a complex of features encoded numerically as bytes (with 16-bit floats for the few vectors that need them), the footprint of a concept with 100 features is well within 1 KB! Filling up to 80% of a 10 GB card with concepts allows for roughly 8 million concepts, where finding a full match against some set of inputs could be achieved in 0.01 seconds, and partial matches take even less time!  :o
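
A back-of-envelope check of the footprint and capacity claims, under an assumed layout (a 4-byte id, one byte per feature, a few float16 vector components; these are my numbers, not the article's):

Code:
concept_id = 4               # 32-bit integer identity
features   = 100 * 1         # 100 features encoded as single bytes
vectors    = 16 * 2          # e.g. 16 float16 components where needed
print(concept_id + features + vectors, "bytes per concept")  # 136, well under 1 KB

budget = int(10e9 * 0.8)     # 80% of a 10 GB card
print(budget // 1024, "concepts at a 1 KB budget each")      # 7812500, ~8 million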


MagnusWootton

  • Replicant
  • ********
  • 646
Re: Dendrite Processing
« Reply #6 on: November 27, 2021, 10:12:22 pm »
Quote from: frankinstien on November 27, 2021, 05:42:06 pm
[the full post above, quoted verbatim]

If you look at it from a data I/O standpoint, a transformation from input to output, what the neuron is isn't that important; as long as it can transform the data, anything will pretty much do. It doesn't interest me so much. A perceptron is basically an FPGA, and it has an exponential set of possible ways to configure it.

Finding the configuration is the important quest, not what the framework is; that's unimportant.
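
For a sense of scale on that "exponential set of possible ways to configure it": an n-input lookup table, the FPGA's building block, can realize 2^(2^n) distinct boolean functions, so the configuration space blows up almost immediately.

Code:
for n in (2, 3, 4, 5):
    print(n, "inputs:", 2 ** (2 ** n), "possible boolean functions")
# n = 5 already gives 4294967296 configurations to search through.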


MikeB

Re: Dendrite Processing
« Reply #7 on: November 28, 2021, 05:53:16 am »
There's nothing that can be learned about what information is stored where by looking at synapses, though.

You can prove they're used in learning/not learning... connecting dots, aspiration (as opposed to stern), epiphanies.

But to see where pictures & images are stored you need EEG headsets... which areas are highlighted when doing something...

Most areas are mapped on the homunculus chart for mind-body areas... What about where information is stored? That would be more useful for NLPs, as current NLPs are mostly epiphany/guessing machines.

What if all that information was in groups... now there's not 300 million choices, there's a handful...


MagnusWootton

Re: Dendrite Processing
« Reply #8 on: November 28, 2021, 03:01:28 pm »
If the brain isn't an exponential technology I'd be surprised. It makes a person?!!?? Or an animal!?? It must be at least as powerful as a quantum computer.

That means it is NOT classically implementable, not in the usual way you'd think computers work. But that doesn't mean it's using quantum mechanics, I don't mean that. I'm just saying it's possibly a level above your normal high-earning calculus mathematician's toolset, so you won't learn it at uni.

 

