When you realize that a neuron can have 100K plus connections, where the combination of those inputs determines whether the neuron fires, you can appreciate a beautiful quality of this kind of system. Neurons are very good at identifying partial features in data, since the combinations of inputs that trigger firing can vary quite a bit depending on how the neuron has configured its dendritic spines. For example, even considering just 100 of those connections, if the neuron statistically needs only 50 of them to be active to trigger firing, then from a combinational perspective (order does not matter) there are roughly 1.0089E29 combinations! The beauty of this is that the neuron can literally respond to situations, i.e. combinations, that it has never trained on.
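As a quick sanity check on that count, here's a small Python snippet (assuming the 50-of-100 reading of the example above) that computes the number of input combinations with math.comb:

```python
import math

# Ways to pick 50 active inputs out of 100 candidate connections,
# order not mattering, per the 50-of-100 example above.
combos = math.comb(100, 50)
print(f"C(100, 50) = {combos:.4e}")        # ~1.0089e+29

# With 100K+ connections the count is astronomically larger still.
print(f"C(100000, 50) = {math.comb(100000, 50):.3e}")
```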
Since I've been establishing a means of partial feature identification through hash tables, lookups that mimic what a neuron does would seem to require a key for each combination! That is not practical, but there is a way to do this without a key per combination. So how can it be achieved, you may ask? If each feature is its own key within a context or concept, then I need only count how many of a concept's features match the inputs being received. Yes, I have to apply that test for all inputs against each concept, and that may seem a bit clumsy at first. But I can constrain the lookups on a per-context basis, which shrinks the number of comparisons I need to do. And since these lookups are not linear and can be done concurrently, such a process can quickly sift through a lot of instances to find qualified candidates, as the sketch below illustrates.
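To make the counting idea concrete, here is a minimal Python sketch. The names (`concept_features`, `feature_index`) and the example features and threshold are my own illustrative choices, not anything prescribed above: an inverted index keeps one hash-table key per feature, so a set of inputs only touches the concepts that share at least one feature, and a counter accumulates partial-match scores per concept.

```python
from collections import defaultdict

# Hypothetical concepts, each defined by a set of feature keys.
concept_features = {
    "cat":  {"fur", "whiskers", "tail", "meow"},
    "dog":  {"fur", "tail", "bark", "fetch"},
    "bird": {"feathers", "beak", "tail", "song"},
}

# Inverted index: one hash-table key per feature, not per combination.
feature_index = defaultdict(set)
for concept, feats in concept_features.items():
    for feat in feats:
        feature_index[feat].add(concept)

def match(inputs, min_fraction=0.5):
    """Count how many of each concept's features appear in the inputs."""
    hits = defaultdict(int)
    for feat in inputs:
        for concept in feature_index.get(feat, ()):  # only touched concepts
            hits[concept] += 1
    # Score = fraction of the concept's features observed in the inputs.
    return {c: hits[c] / len(concept_features[c])
            for c in hits
            if hits[c] / len(concept_features[c]) >= min_fraction}

print(match({"fur", "tail", "whiskers"}))  # partial match favours "cat"
```

Each per-feature lookup is independent, which is why this style of matching parallelizes so naturally across threads or GPU lanes.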
To give you an idea of how much can be done using this approach, here's an article on a GPU hash table. The author used a GTX 1060, on which 300 million insertions and 500 million deletions per second could be achieved; the lookup rate was not stated, but would likely be higher than the deletion rate. With a more powerful GPU, say an RTX 3080, which is roughly 7 times more capable than a GTX 1060, lookups on the order of 3.5 billion per second are not unreasonable. From an NLP perspective, with GPU cards having 8 to 10 GB of memory, and with concepts encoded as a simple integer identity plus a complex of features encoded numerically in bytes (using 16-bit floats for the few vectors that need them), the footprint of a concept with 100 features is well within 1 KB! Filling up to 80% of the GPU's memory with concepts then allows for roughly 8 million concepts, where a full match against some set of inputs could be achieved in 0.01 seconds, and partial matches take even less time!
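The memory side of that back-of-the-envelope estimate can be checked with a quick calculation; the per-field byte counts below are my own illustrative assumptions, chosen only to show that 1 KB per concept is a comfortable budget:

```python
# Rough per-concept footprint (illustrative assumptions):
#   4-byte integer identity, 100 features at 1 byte each,
#   plus a few small 16-bit-float vectors (say 4 vectors of 64 dims).
concept_bytes = 4 + 100 * 1 + 4 * 64 * 2        # = 616 bytes, within 1 KB

gpu_memory_bytes = 10 * 1024**3                  # a 10 GB card
cache_bytes = 0.80 * gpu_memory_bytes            # cache 80% of it with concepts
print(f"{cache_bytes / 1024 / 1e6:.1f} million concepts at 1 KB each")  # ~8.4

# Scaling the GTX 1060 figure of ~500M ops/s by ~7x for an RTX 3080:
print(f"{500e6 * 7 / 1e9:.1f} billion lookups per second")              # 3.5
```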