Ignoring Noise

frankinstien

« on: June 11, 2020, 04:12:16 am »
There are various hypotheses about neural spike codes in which the complexity of the premise lies in a neuron's ion channels and how various spiking modes are produced. I am going to describe a much simpler, generalized coding scheme for neurons based on anatomical observations, in particular the auditory wiring of mammalian brains. Neurons exhibit firing rates that relate directly to the degree of input they receive relative to some desensitization of the neuron's dendritic membrane. If we examine the cochlea and how its hair cells stimulate neurons, we can see how nature transforms mechanical energy into electrical signaling. A vital data point of the brain's auditory system is the positioning of the neurons stimulated by the cochlear hairs: the position of a neuron represents the wavelength at which a cochlear hair vibrates. The second piece of information we can glean from the anatomy of the brain's auditory circuits is that the vibrating hair controls the rate at which those neurons fire. So mammalian brains codify sound by:

1. The position of a neuron along the cochlea, which represents a particular wavelength.
2. The firing rate of that neuron, which indicates the energy, or loudness, of the auditory signal being heard.
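
For concreteness, here is a minimal sketch of this place-plus-rate code; the channel count, frequency range, and rate scaling are my own illustrative assumptions, not anything taken from the anatomy:

```python
import math

# Minimal sketch of place + rate coding. NUM_CHANNELS, the frequency
# range, and the rate scaling are illustrative assumptions.
NUM_CHANNELS = 32
MIN_HZ, MAX_HZ = 20.0, 20000.0

def encode_tone(freq_hz: float, loudness: float) -> tuple[int, float]:
    """Map a pure tone to (cochlear position index, firing rate).

    Position encodes the wavelength (the symbol); firing rate encodes
    loudness (the degree of that symbol's truth)."""
    # Logarithmic spacing roughly mirrors the cochlea's tonotopic map.
    frac = math.log(freq_hz / MIN_HZ) / math.log(MAX_HZ / MIN_HZ)
    position = min(NUM_CHANNELS - 1, max(0, int(frac * NUM_CHANNELS)))
    firing_rate = loudness * 10.0   # arbitrary rate units
    return position, firing_rate

print(encode_tone(440.0, 0.8))   # e.g. A440 at moderate loudness -> (14, 8.0)
```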

Effectively, a neuron encodes by superimposing two pieces of information: one, a symbolic representation given by physical positioning, and two, the degree of signal strength of that symbolic representation. In other words, neurons produce fuzzy bits! The output of a neuron is more than true or false; it is a degree of truth represented by its firing rate. The truth table for a neuron depends on its resistance to inputs, a weighted barrier, as shown in the truth table below:

Neuron A Firing Rate | Neuron B Firing Rate | Weighted Barrier | Neuron C Output
1 | 1 | 5 | 0
2 | 2 | 5 | 0
2 | 3 | 5 | 0
3 | 3 | 5 | 1
4 | 3 | 5 | 2
4 | 4 | 5 | 3
5 | 5 | 5 | 5
6 | 6 | 5 | 7
7 | 7 | 5 | 9

The combined firing rates of the two input neurons, A and B, have to exceed the weighted barrier of neuron C's dendritic membrane; whatever exceeds the barrier passes through as C's output rate. With that said, the question becomes: can one build logic circuits similar to Boolean methods?
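
Reading the table, the rule appears to be output = max(0, A + B − barrier). A minimal sketch of that reading (my interpretation of the table, not code from the post):

```python
def rate_neuron(inputs, barrier):
    """Rate-coded neuron: fires at the rate by which its summed
    input rates exceed its weighted barrier, else stays silent."""
    return max(0, sum(inputs) - barrier)

# Reproduce neuron C's truth table (weighted barrier = 5):
for a, b in [(1, 1), (2, 2), (2, 3), (3, 3), (4, 3),
             (4, 4), (5, 5), (6, 6), (7, 7)]:
    print(f"A={a} B={b} -> C={rate_neuron([a, b], 5)}")
```

Running this reproduces every row of the table, including the silent rows where the summed inputs never clear the barrier.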

Below is a diagram of a full adder using this neuron spike-coding scheme:

[Diagram: five-neuron full-adder circuit with neighboring glial cells; image not reproduced]


You'll note that there are five neurons in the circuit; the truth table for the circuit is below:

A Neuron | B Neuron | C Weight | C Output | D Weight | D Output | E Weight | E Output
5 | 4 | 9 | 0 | 5 | 4 | 4 | 0
5 | 5 | 9 | -1*(5 influence) | 5 | 0 | 4 | 1
6 | 0 | 9 | 0 | 5 | 1 | 4 | 0
0 | 6 | 9 | 0 | 5 | 1 | 4 | 0
7 | 1 | 9 | 0 | 5 | 3 | 4 | 0
7 | 2 | 9 | 0 | 5 | 4 | 4 | 0
7 | 3 | 9 | -1*(5 influence) | 5 | 0 | 4 | 1
7 | 4 | 9 | -2*(5 influence) | 5 | 0 | 4 | 6
7 | 5 | 9 | -3*(5 influence) | 5 | 0 | 4 | 11
1 | 1 | 9 | 0 | 5 | 0 | 4 | 0
2 | 2 | 9 | 0 | 5 | 0 | 4 | 0
3 | 3 | 9 | 0 | 5 | 1 | 4 | 0

Inhibitory neurons here have five times the influence that excitatory neurons have. Wherever the D output is greater than 0 it is the equivalent of binary 1, and wherever the E output is greater than zero it too is the equivalent of binary 1. So, effectively, the binary inputs are represented through neurons A and B, but their outputs are effective only at certain firing rates! The combined output of neurons A and B must exceed neuron D's weighted barrier of 5, while neuron E depends on neuron C, which is inhibitory toward D but actually acts as an excitatory transmitter for neuron E. So together neurons A and B must exceed neuron C's weighted barrier of 9 to force neuron E to output.
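
To make that wiring concrete, here is a minimal sketch of my reading of the circuit: C's output, amplified by the 5x influence, suppresses D and drives E. The wiring is inferred from the truth table above, so treat it as an interpretation rather than the post's literal diagram:

```python
def rate_neuron(drive, barrier):
    """Fires at the rate by which its summed drive exceeds its barrier."""
    return max(0, drive - barrier)

def adder_circuit(a, b):
    """Five-neuron adder sketch. A and B are the rate-coded inputs.
    C (barrier 9) detects when both inputs are strong; its output,
    scaled by the 5x inhibitory influence, suppresses D (barrier 5,
    the sum-like output) and excites E (barrier 4, the carry-like
    output)."""
    c = rate_neuron(a + b, 9)               # inhibitory interneuron
    influence = 5 * c                       # 5x influence of inhibition
    d = rate_neuron(a + b - influence, 5)   # sum-like output
    e = rate_neuron(influence, 4)           # carry-like output
    return d, e

# Reproduce the circuit's truth table:
for a, b in [(5, 4), (5, 5), (6, 0), (0, 6), (7, 1), (7, 2),
             (7, 3), (7, 4), (7, 5), (1, 1), (2, 2), (3, 3)]:
    d, e = adder_circuit(a, b)
    print(f"A={a} B={b} -> D={d} (bit {int(d > 0)})  E={e} (bit {int(e > 0)})")
```

Run over the same inputs as the table, this reproduces the D and E columns, including the rows where the logic simply fails to engage.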

From the truth table we can see how neural logic can work: the neurons operate as Boolean adders only at certain firing rates of the neural circuit. That is, unlike discrete logic elements, which operate consistently over their entire input range (0 or 1), neural logic only operates at critical firing rates that can overcome the weighted barriers.

From a mere engineering perspective neural logic might seem useless, since there are states where the neurons just don't work out the logic at all! But neurons have to operate under conditions where discrete logic could not work at all: a very noisy environment.

You'll notice that the diagram depicts glial cells, and it has been observed that glial cells do discharge. One of the functions of glial cells is to act like little vacuum cleaners, sopping up the excess neurotransmitters or ions of firing neurons. This absorption builds up potential within the glial cell and eventually causes a discharge of the electrical potential within the cell. At first this may look like simple noise, but if you realize that glial cells absorb excess neurotransmitters in direct proportion to the firing rate of neighboring neurons, then we can see a very interesting effect: glial cells can actually provide feedback, in the form of a learning impetus, that forces neurons to adjust their weights based on their own and surrounding neurons' activity!

With that said: since, as the neural logic example shows, the circuits only operate under certain firing-rate conditions, it is implied that neurons learn to ignore noise. This is advantageous, since it doesn't behoove a neuron to fire upon any input given to it. From the perspective of a neuron evolving to survive in a noisy environment, firing randomly in response to noise would expend energy needlessly. A neuron that adapts to ignore noise conserves energy and benefits its host by being energy-parsimonious. The additional benefit of ignoring noise is that it forces the neuron to fire at signal strengths that improve the fidelity of information exchange between neurons.

In addition to the energy-conservation advantages mentioned, each neuron has its own set of inputs, along with feedback from glial discharges, and therefore uniquely derives its weighted barrier in response to those influences. Realize that there really isn't any way for a neuron to distinguish information from noise, so one could view the individual neuron either as learning to ignore noise or as learning the information patterns in its inputs; I am asserting that they are one and the same...
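
As a toy illustration of that assertion, here is a sketch, entirely my own construction with made-up constants, in which a glial-like feedback term, proportional to recent firing, nudges a neuron's barrier upward until random noise no longer drives it, while a genuinely strong signal still does:

```python
import random

random.seed(1)

barrier = 0.0
GLIAL_GAIN = 0.05   # hypothetical: fraction of absorbed activity fed back
DECAY = 0.001       # hypothetical: slow relaxation of the barrier

def rate_neuron(drive, barrier):
    return max(0.0, drive - barrier)

# Phase 1: pure noise. Glial feedback, proportional to the neuron's own
# firing, pushes the barrier up until the neuron mostly ignores the noise.
for step in range(5000):
    noise_drive = random.uniform(0.0, 2.0)
    out = rate_neuron(noise_drive, barrier)
    barrier += GLIAL_GAIN * out - DECAY   # discharge-driven adjustment
    barrier = max(0.0, barrier)

print(f"barrier after noise exposure: {barrier:.2f}")
print(f"response to noise-level input (1.0): {rate_neuron(1.0, barrier):.2f}")
print(f"response to a strong signal   (5.0): {rate_neuron(5.0, barrier):.2f}")
```

With these constants the barrier settles around the upper edge of the noise band, so a noise-level input mostly produces no output while a strong, correlated signal still fires through, which is exactly the "ignoring noise is learning the signal" point above.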

 

