Alright, weekendly update! Here are my attempts at further developing this non-rigorous perception idea. Also, I'm trying out a GCRERAC format; who knows, maybe it will guide me to the light.
Goal: My primary assumption here is that nature has refined the art of making effective divisions in experience to quite a high degree. These divisions are units of thought, shaped to fit together, and thus easily unified or held at arm's length when necessary. Seeing these "thinks" in terms of each other, in a web of interaction, could be a powerful tool for AGI. I will discuss the pros and cons of logical "Single File" perception vs. my theoretical "Wholeistic" perception, then try to discover and convey ways of representing them.
"Diagram":Conflict:There are some foreseeable consequences to just using a linear thought process. For example, after carefully reasoning, a one-at-a-time logical machine, (or person taught to use their mind in this way), might conclude they should devote their time to the pursuit of happiness. Then, not having access to the big picture as a cohesive whole, they're liable to overlook that pursuing happiness directly won't be a fulfilling use of their time, the linear logical pursuit can block itself. Like inadvertently walking in large circles by focusing exclusively on foot placement.
When we consider robots, we want to give them our best characteristics. Science and technology are in vogue right now, so we focus on those: the things we consider most civilized, the quantitative.
To discover what something is, we perform an experiment, take the results, the numerable facts, and associate them backwards with the object of the experiment. Which, objectively, is backwards thinking. Useful for a purpose, yes, but you don't want to be doing it all the time.
It's mostly useful here because it contrasts well against our wholeistic perception. A(~G)I's are on track to lack this property, and giving them solely analytic linear processing will limit their mental depth perception. How might that manifest? Well, humans are susceptible to the misapplication of thought types as well.
Our instincts probably take their cues from the greater world model, giving us advice which we may not immediately understand, nor need to understand. That's helpful, to a point. Continued running on reason without sufficient referral back to perception can produce errors, which can corrupt much of our world view. Perception, then assumptions/conclusions, then continuous verification. That's how it should go: teamwork.
When properly balanced, the wholeistic perception should keep the single file perception from doing anything laughably unwise, while the single file perception should greatly enhance the wholeistic system's ability to alter its environment.
In conclusion, I think the most effective way of experiencing the universe is like a traditional flashlight beam pattern: a cone of diffuse light with a bright spot at its center. The larger cone reveals where to apply the smaller one. A traditional robot would perceive with only the focused beam, and a dog with only the diffuse beam. Combine them, and you get human-like perception. We've got the logic down; the question is, how do we give robots diffuse perception?
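To make this a bit more concrete, here's a minimal sketch of the two-beam idea, with everything about it assumed for illustration: the wide beam is just a cheap low-resolution pass over the whole scene, "interesting" is crudely defined as brightest, and the narrow beam is a full-resolution crop aimed wherever the wide beam points.

```python
import numpy as np

def two_beam_perceive(image, fovea_size=8, coarse_factor=4):
    # Wide beam: heavy downsampling = diffuse, low-detail coverage of everything.
    coarse = image[::coarse_factor, ::coarse_factor]
    # The wide beam nominates the most "interesting" spot (here, just the brightest).
    cy, cx = np.unravel_index(np.argmax(coarse), coarse.shape)
    cy, cx = cy * coarse_factor, cx * coarse_factor
    # Narrow beam: a full-resolution crop around the nominated spot.
    h, w = image.shape
    y0 = max(0, min(h - fovea_size, cy - fovea_size // 2))
    x0 = max(0, min(w - fovea_size, cx - fovea_size // 2))
    fovea = image[y0:y0 + fovea_size, x0:x0 + fovea_size]
    return coarse, fovea

scene = np.random.rand(64, 64)
wide, narrow = two_beam_perceive(scene)
print(wide.shape, narrow.shape)  # (16, 16) diffuse map, (8, 8) detailed patch
```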
Diagram: Result: Firstly, since the wide beam will cover A LOT, we should make an attempt to simplify its job. It may be more efficient to interpret perception in terms of verbs, rather than nouns. It makes sense to use verbs because of their properties: there are fewer of them, they add a dimension, and they've got fuzzy boundaries! Maybe that's why memory and understanding have fuzzy edges. Maybe integration pays better than precision when handling large data. Maybe something to do with loose tolerances as well, so processes have lower odds of jamming up (and higher odds of jamming).
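If verbs really are the better currency, here's one guess at what a verb-first data structure could look like. The names and numbers are all made up; the point is just that the unit of perception is an interaction with a fuzzy confidence, not a crisp object inventory.

```python
from dataclasses import dataclass

@dataclass
class VerbEvent:
    # One fragment of broad-beam perception: an interaction, not an object.
    subject: str
    verb: str
    target: str
    confidence: float  # 0..1 fuzzy boundary: how strongly this reading fits

# A scene stored as interactions instead of an inventory of nouns.
scene = [
    VerbEvent("dog", "chases", "ball", 0.9),
    VerbEvent("ball", "rolls_toward", "fence", 0.6),
    VerbEvent("dog", "plays_with", "ball", 0.4),  # overlapping, fuzzy reading
]

# A fuzzy query returns every reading above a loose threshold, not one exact match.
print([e.verb for e in scene if e.subject == "dog" and e.confidence > 0.3])
```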
Secondly, the random bits of perception intended for the background mosaic should be fitted together symmetrically (like the first "Diagram"). (Fitting things together using symmetry is also efficient; balance conserves energy. But I suspect that perfect symmetry is too stable, too nonreactive; that's probably why it paradoxically misses the mark on humans.) Matching the pieces isn't easy. Our senses are not all-knowing; they only pick disjointed fragments out of the universal tapestry. We have to invent our own context for each of these fragments.
So it's not as if an AGI's "cohesive" world view would make exact sense, or possess much inherent symmetry. What we perceive can point to a greater order, but isn't itself ordered. So we need a fitting method to make use of the jumbled info we do get. It's like chess, only with fuzzy rules and an unknown number of invisible pieces. You can't brute force all the possible moves, and you can't ensure victory by only using the basic principles of the visible pieces. The invisible pieces will create exceptions to those rules.
Diagram: The losses from unsymmetrical interactions (bell curve?):
5x5 = 25        9x9 = ((8+10)/2)^2 = 81
4x6 = 24        8x10 = 80
3x7 = 21...     7x11 = 77...
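The pattern generalizes: split a fixed sum unevenly into (n-d) and (n+d), and the product is (n-d)(n+d) = n^2 - d^2. The loss is exactly d^2, so it grows with the square of the imbalance d, and the curve is nearly flat right around d = 0.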
Yup, why is it always bell curves!?!?!? But fortunately this means the effect is forgiving near the apex. Therefore you can do this:
Now you might ask: "HS? Why not pure symmetry? What exactly is wrong with that?" Yes, it is the easiest to move, but its movement has no novelty. If you take a double pendulum which is unbalanced, vs. balanced, vs. nearly balanced, and give them all a swing, the nearly balanced one will probably cross the most points in its natural reactive movement. This would correspond to the most general type of intelligence.
Emotion: The wide beam communicates with the small beam at high bandwidth using emotion. It's a ground-up approach. Acute intelligence is more suited to a top-down approach. E.g., [you are my / I am your] [wife/husband], which means we should interact like [ ex ]. Then you get a divorce... It's binary, on or off. Going the bottom-up route, the most comfortable natural relationship indicates the appropriate adjectives. It's basically listening to reality. This way behavior is less forced, and personal/interpersonal discord is reduced.
If it's true that perception is made from verbs, then that's the source of the phenomenon where people are miserable in apparently luxurious environments. Like the classic paradox of smiling peasants and sulky businessmen. Material (nounal) quality of life ≠ actual quality of life. Rather, actual quality of life = quality of personal narrative, and quality of personal narrative = (material quality of life)*(perceived verb sequences). The brain requires experiencing the correct verbs in the correct orders for proper informational nutrition. But verbs require nouns, the discrete things created by the strong beam, in order to exist in corresponding abundance.
Far be it from me to say (I feel like a complete waste of food when one of them shows up), but I think this may partially explain why highly intelligent people are more susceptible to depression and the like. We use what's available, right? High-IQ people have a better narrow beam, so they apply it more often, to a greater range of things. Conversely, it doesn't make sense for someone like me to make my way through the world using primarily logical analysis; I'd never get anywhere!
Point is, too great an imbalance between the apparent usefulness of the broad and narrow beams of perception can lead to a disproportionate use of one or the other. And this would do what? Remember "5*5=25"? And "(material quality of life)*(perceived verb sequences) = (quality of personal narrative)"? It could be that: (single file logic)*(wholeistic perception) = (a maximized self).
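A toy check of that last product, assuming (and it is an assumption) that the two faculties trade off against a fixed total capacity:

```python
# With a fixed total capacity, the product of the two beams peaks at an
# even split and falls off quadratically as the split skews (d^2 again).
total = 10
for narrow in range(total + 1):
    broad = total - narrow
    print(f"narrow={narrow} broad={broad} product={narrow * broad}")
# 5/5 -> 25 (max), 4/6 -> 24, 3/7 -> 21, ... 0/10 -> 0
```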
Diagram: Reason: What life forms are having the best go of it? In general, probably retrievers, labs, beagles... Linear, A-to-B, cause-and-effect reasoning will tell you that dogs have good lives thanks to humans; ergo, let's pat ourselves on the back, we are very generous. But when we step back and see the interactions of everything, the other perspective reveals that credit is due to their skill at selecting/creating the best niche. They can be viewed as winning at life more skillfully than any of us intellectually advanced lifeforms. This is indicative of really effective divisions in their broad beams of perception. We should learn from them to see how it's done, and convey these skills to AGI.
So it follows that the focused, dissective intelligence we're most proud of has little to do with acting rightly in the grand scheme of things. All it does is increase a given entity's influence on the world. It is more a tool of being than the core of being.
That being said, focused intelligence can still multiply the awesomeness of your creature by a brsdfgkjsillion. It could sort of juggle the various facets illuminated by the wide beam. I think you'd need several main methods of arranging this data into useful internal discoveries: a basic mental toolbox, a collection of useful algorithms. Each GI develops a unique bag of tricks, though each member of a species probably starts out with a similar "essentials" pack.
We should think carefully about these tools, because they should be designed to handle and pick apart whatever the broad beam presents. Verb handling tools...
"Diagram:"Anticipation:How could the pieces of wholistic perception be fitted together? Melted edges? Imagined framework? Bookshelf? Fitting into personal narrative like bricks and mortar? I like the last one, it fits with my "primary pathways and processes of intelligence" diagram and accidentally incorporates the verb idea. Maybe that's not how we do it at all, but there could be several ways of getting this party started. I mean, just getting cohesion between my ideas could be worth something, even if nature is doing it another way.
Alright. So let's represent it by showing the potentials of a chess board all at once, with verbs indicated by positive and negative shading like magnetic fields, and then applying logic to the critical paths. They can be represented as magnetic fields because the invisible pieces reduce the visible pieces' probability of influence with distance (x, y, t).
The other adjustment to regular chess is giving rules soft boundaries to introduce some room for error. I believe neural nets are able to think quickly and repetitively because they use approximations.
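Here's a rough sketch of that board-as-field idea. The pieces, their signed strengths, and the Gaussian falloff are all stand-ins; the one thing it illustrates is that the whole board's potentials can be held at once as a single soft-edged field, with an uncertainty knob playing the role of the invisible pieces eroding each visible piece's reach.

```python
import numpy as np

SIZE = 8
# Hypothetical piece list: (x, y, signed strength); the sign is the
# positive/negative "shading", like the two poles of a magnetic field.
pieces = [(0, 0, +3.0), (4, 4, -2.0), (7, 2, +1.5)]

def influence_field(pieces, uncertainty=1.5):
    # Each piece contributes a potential that decays with distance.
    # Larger `uncertainty` fattens the falloff: soft rule boundaries,
    # standing in for unseen pieces reducing a visible piece's influence.
    ys, xs = np.mgrid[0:SIZE, 0:SIZE]
    field = np.zeros((SIZE, SIZE))
    for px, py, strength in pieces:
        dist2 = (xs - px) ** 2 + (ys - py) ** 2
        field += strength * np.exp(-dist2 / (2 * uncertainty ** 2))
    return field

print(np.round(influence_field(pieces), 1))  # the whole board's potentials at once
```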
There are sure to be fewer glitches if thinking involves collisions of large numbers of signals. Given the brain's seeming "what the hell" attitude towards the fine points of internal structuring (respect), clashes of armies instead of duels look like an effective strategy for dependable, repeatable results. Like running a simulation multiple times, only in parallel.
Therefore, to get those kinds of forgiving mechanics, I'll represent each chess piece with 81 pieces of its type. For now, here's the basic idea.
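A minimal sketch of why 81 sloppy copies can beat one careful duelist, assuming each copy is just a cheap noisy estimate of the same quantity:

```python
import random

def noisy_judgement(true_value=1.0, noise=2.0):
    # One "soldier": a cheap, sloppy evaluation of the same question.
    return true_value + random.gauss(0, noise)

def army_judgement(n=81):
    # Clash of armies: pool 81 sloppy answers instead of trusting one.
    return sum(noisy_judgement() for _ in range(n)) / n

random.seed(0)
print(noisy_judgement())  # a lone duelist: often far from 1.0
print(army_judgement())   # 81 copies: reliably close to 1.0
```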
Diagram: The big external perceptual chess board becomes more and more internal with time, which helps to identify present situations from context noticed in past situations. Parts of patterns evoke larger memorized ones from past experience. Therefore broad perception will draw increasingly from these memorized big data slides. It could keep optimizing itself indefinitely. What's next? No clue. Maybe someone else has some ideas.
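One possible reading of those "big data slides" is a simple associative memory, sketched below with random patterns standing in for experience: a fragment of the present board evokes whichever memorized whole board matches it best.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical memorized "slides": five whole 8x8 boards, flattened to 64 values.
memory = rng.choice([-1.0, 1.0], size=(5, 64))

def complete(fragment_idx, fragment_vals):
    # A partial present-moment pattern evokes the closest memorized slide.
    scores = memory[:, fragment_idx] @ fragment_vals  # match on the seen squares only
    return memory[np.argmax(scores)]                  # recall the whole slide

# Glimpse 10 squares of slide 2, recover all 64.
idx = rng.choice(64, size=10, replace=False)
recalled = complete(idx, memory[2, idx])
print(np.array_equal(recalled, memory[2]))  # almost surely True
```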