Oh! I see… I’d had four cups of coffee in a row… that tends to make me waffle/rant/get overly literal lol.
We humans are the problem; we have such a narrow/limited methodology/terminology for describing complex problem spaces.
The essence of an AGI is the re-use/recombination of known base knowledge/variables into ever more complex representations. As this is the essence of an intelligent system, it is also our downfall.
Because the network topology has served us so well with other problems, we tend to want to re-use it/think in terms of networks. It’s the main ‘go-to’ technique we utilize… the vast majority of AGI theories are built around the network principle.
What we actually need is a new type of math or language to adequately describe intelligence… we need to break with tradition and known methods… think outside the box.
Hence my AGI program runs on a simulated wetware computer architecture; the logic and syntax of the program are totally alien to/incompatible with modern processors. It’s not just about the connections: where the nodes/neurons are located within the connectome’s 3D structure also affects processing.
I’m waffling again… time for a brain dump lol
An intelligent/processing system has to have some kind of base unit of knowledge from which all other knowledge is composed.
The base unit has to be in a format that can be recombined to represent other types of knowledge… at a meaningful processing level.
Ideally this knowledge unit should be the same format as the sensory input unit; the learned knowledge should exist in the system in the same sequence/state it was received/input. Intelligence/rules should be both derived from and applied to data in this single format.
Obviously for a digital computer it’s 0 and 1, the lowest base unit of information, which can be recombined to represent more complex concepts: sound, images, logic, etc.
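A quick illustration of that idea in Python (the byte values here are arbitrary, just picked for the demo): the same string of bits can be read as text or as a number, depending only on how it is recombined/interpreted.

```python
# The same raw bits can represent different kinds of "knowledge",
# depending only on the interpretation applied to them.
raw = bytes([72, 105])  # two arbitrary byte values

as_bits = " ".join(f"{b:08b}" for b in raw)  # the base units: 0s and 1s
as_text = raw.decode("ascii")                # recombined as characters
as_number = int.from_bytes(raw, "big")       # recombined as an integer

print(as_bits)    # 01001000 01101001
print(as_text)    # Hi
print(as_number)  # 18537
```

The bits never change; only the rule for combining them into a higher-level concept does.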
This ‘base unit of knowledge’ concept is the current limit of our human understanding of the AGI problem space. We simply don’t know of any other way to represent or process information… big things are made of smaller things… big knowledge comes from smaller knowledge… a + b = c.
I’ve forgotten what the question was… Oh! Yeah… lol.
For a schema to be able to recombine low-level representations of knowledge there has to be some structure/method/pathways/logic/order that represents the available options… a network, a matrix, or a tree (which is also a network).
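A minimal sketch of that last point, just to make it concrete (the node names are made up): a tree is simply a network/graph whose connections happen to contain no cycles, and enumerating the available options is just a traversal of that network.

```python
# A tree expressed as a plain network: a dict mapping each node
# to the nodes it connects to (an adjacency list).
tree = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": [],
    "a1": [],
    "a2": [],
}

def leaves(graph, node):
    """Walk the network and collect the end-point options reachable from node."""
    children = graph[node]
    if not children:
        return [node]
    out = []
    for child in children:
        out.extend(leaves(graph, child))
    return out

print(leaves(tree, "root"))  # ['a1', 'a2', 'b']
```

The same adjacency-list format works unchanged for a general network (matrix-style structures are just another encoding of the same connections), which is why the tree/matrix/network distinction is one of convenience rather than kind.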
To achieve a required threshold of representational complexity: the simpler the network, the more complex the data needs to be… and vice versa…
The only exceptions to these rules that I can think of ATM are natural laws of reality/ nature and physical mediums… like water.
End brain dump…