I've implemented an infrastructure that captures various data inputs from video, audio, tactile, temperature, and pressure sensors. That said, using a 10 ms time window for the shortest tier (T4), I exhaust 125 GB of RAM in about twenty minutes! When you consider that the human brain has only about 86 billion neurons, where each neuron has a single output whose spike-train range can be encoded in just 7 bits (128 possible output states), you start wondering how the brain stores information. Note that long-term potentiation is a persistent state, not a spike train. So a spiking neuron delivers a signal level proportional to the number of spikes within a time window, and that level can be replaced by, or treated as the logical equivalent of, the sustained output level of long-term potentiation. I found this
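To make the resource argument concrete, here is a minimal sketch of that collapse, replacing a spike train inside one 10 ms window with a single 7-bit rate level. The function name and the 1 kHz rate ceiling are my own illustrative assumptions, not details from my actual infrastructure:

```python
WINDOW_MS = 10.0   # shortest-tier (T4) window from the post above
N_LEVELS = 128     # the 7-bit output range discussed above

def spikes_to_level(spike_times_ms, window_start_ms, max_rate_hz=1000.0):
    """Collapse one window's spike train into a single quantized rate level."""
    in_window = [t for t in spike_times_ms
                 if window_start_ms <= t < window_start_ms + WINDOW_MS]
    rate_hz = len(in_window) / (WINDOW_MS / 1000.0)   # spikes per second
    rate_hz = min(rate_hz, max_rate_hz)               # clip at assumed ceiling
    return int(round(rate_hz / max_rate_hz * (N_LEVELS - 1)))  # 0..127

# Three spikes in a 10 ms window -> 300 Hz -> level 38 of 127.
print(spikes_to_level([1.2, 4.7, 8.9], window_start_ms=0.0))
```

Storing one byte per channel per window instead of raw spike timestamps is exactly the kind of compression the LTP analogy suggests.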
article which describes the dendritic tree of neurons as an encoder or modulator of synaptic inputs! In other words, the inputs to a neuron are themselves a form of information storage. So, say you have a neuron with 90,000 inputs (not uncommon), and those inputs represent pixels of an image or features extracted from an image. The inputs to a neuron can thus represent complex information, but the output of the neuron is simply a spike or spike train that represents the degree to which the inputs match the coded dendrites. The more inputs that match, the higher the spike train's firing rate. Effectively, the neuron can code for complex inputs but can only report whether those inputs adequately match its codification. So, if the memory is of an image of a cat, this scheme can identify the cat; and if the input merely has features similar to a cat, the neuron fires at a slower rate, indicating a partial feature match.
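Here is a hedged sketch of that match-and-rate idea. The binary stored pattern, the per-input equality test, and the 200 Hz peak rate are illustrative choices of mine, not claims about actual dendritic biophysics:

```python
import numpy as np

rng = np.random.default_rng(0)
N_INPUTS = 90_000        # the input count from the example above
MAX_RATE_HZ = 200.0      # assumed peak firing rate

# The "memory": a fixed pattern the dendritic tree is coded for.
stored_pattern = rng.integers(0, 2, N_INPUTS).astype(np.uint8)

def firing_rate(inputs: np.ndarray) -> float:
    """Rate is proportional to how well the inputs match the dendritic coding."""
    match_fraction = np.mean(inputs == stored_pattern)
    return float(match_fraction * MAX_RATE_HZ)

exact = stored_pattern.copy()                       # the remembered cat image
partial = stored_pattern.copy()
flip = rng.choice(N_INPUTS, size=N_INPUTS // 4, replace=False)
partial[flip] ^= 1                                  # 25% of features disagree

print(firing_rate(exact))    # 200.0 Hz: full match
print(firing_rate(partial))  # 150.0 Hz: partial match fires slower
```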
This dendritic coding scheme allows highly complex data to be represented by a single neuron, which can save a lot of resources, but the neuron can only report whether the inputs matched or partially matched its dendritic coding. The scheme also has the advantage of giving a life form a quick way to determine whether it has relatable information, without any further processing. This idea could also explain why human memories are vague: we can be aware that we are familiar with some concept or stimulus we are experiencing, yet have no further details about it. Here is where "use it or lose it" comes into play: the neuron can still validate the past experience, but it has lost the connections to how to handle it, or to the details of the experience, because those were not used in quite some time.
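A quick sketch of that triage step, thresholding the match score before committing to any expensive downstream processing. The thresholds are purely illustrative, not values from anything above:

```python
FAMILIAR = 0.90   # illustrative cutoffs, assumed for this sketch
PARTIAL = 0.50

def familiarity(match_fraction: float) -> str:
    """Cheap triage on the neuron's output: decide whether to process further."""
    if match_fraction >= FAMILIAR:
        return "familiar: worth recalling details"
    if match_fraction >= PARTIAL:
        return "vaguely familiar: features match, details lost or pruned"
    return "novel: no stored coding matched"

print(familiarity(0.75))   # -> vaguely familiar
```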
Everyone has had the experience of remembering an event whose details are sparse to nonexistent. While such imperfect memory seems counter-intuitive, the strategy motivates a dependency on peer members who might have more recent experience with a particular subject, and it avoids overloading the scarce resource of only about 86 billion neurons in our brains...