Sorry for the month-long absence.
Various non-technical life requirements took me away from my hobby, but I've developed something of a roadmap to further this topic.
I've been looking at various neuroscience topics and think I've stumbled upon some fascinating ideas. The first is that the full complexity of the human brain isn't necessary for human-level intelligence. Much of what the brain does is regulatory in nature: keeping us upright while we walk, adjusting hormonal balances, regulating temperature, and so forth. Our conscious experience, by some accounts, takes up no more than a millionth of our neural capacity. At current estimates of roughly 85 billion neurons and some 90 trillion synapses, that works out to about 85,000 neurons and 90,000,000 synapses creating the experience of consciousness, of which only 30% or so are active at any given time.
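For what it's worth, here's that back-of-the-envelope arithmetic spelled out in a few lines of Python. The fractions are just the rough figures quoted above, not measured values:

```python
# Back-of-the-envelope check of the capacity estimate above.
# All constants are the rough figures from the text, not measured values.
NEURONS = 85_000_000_000        # ~85 billion neurons
SYNAPSES = 90_000_000_000_000   # ~90 trillion synapses
CONSCIOUS_FRACTION = 1e-6       # "no more than a millionth" of capacity
ACTIVE_FRACTION = 0.30          # ~30% active at any given time

conscious_neurons = NEURONS * CONSCIOUS_FRACTION        # 85,000
conscious_synapses = SYNAPSES * CONSCIOUS_FRACTION      # 90,000,000
active_synapses = conscious_synapses * ACTIVE_FRACTION  # ~27,000,000

print(f"{conscious_neurons:,.0f} neurons, {conscious_synapses:,.0f} synapses")
print(f"~{active_synapses:,.0f} synapses active at any moment")
```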
With a creative loading and caching mechanism, a sparsely connected graph representation of a neural network at that scale could be simulated comfortably on a current desktop.
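To make that concrete, here's a minimal sketch of the kind of structure I have in mind: an adjacency-list graph where only the neurons currently in use are held in memory, and the rest are fetched on demand. Everything here (the fan-out, the firing threshold, the on-demand loader) is an illustrative assumption on my part, not a worked-out design; in a real system, load_neuron would page synapse lists in from disk or a database rather than generating them.

```python
from functools import lru_cache
import random

FANOUT = 100         # average outgoing synapses per neuron (sparse)
POPULATION = 85_000  # the "conscious core" estimate from above

@lru_cache(maxsize=10_000)  # cache hot neurons; cold ones get evicted
def load_neuron(nid: int) -> list:
    """Fetch one neuron's outgoing synapses as (target id, weight) pairs.

    A deterministic random generator stands in for the real backing
    store here, purely so the sketch is self-contained and runnable.
    """
    rng = random.Random(nid)
    return [(rng.randrange(POPULATION), rng.uniform(-1.0, 1.0))
            for _ in range(FANOUT)]

def step(active, threshold=1.0):
    """Propagate one tick of activation through the sparse graph."""
    incoming = {}
    for nid, signal in active.items():
        for target, weight in load_neuron(nid):
            incoming[target] = incoming.get(target, 0.0) + signal * weight
    # Only neurons pushed over the threshold fire on the next tick.
    return {nid: v for nid, v in incoming.items() if v >= threshold}

# Seed a handful of neurons and watch activity propagate.
state = {nid: 1.0 for nid in range(10)}
for tick in range(5):
    state = step(state)
    print(f"tick {tick}: {len(state)} neurons firing")
```

The point of the structure is that memory cost scales with the active set times the fan-out, not with the 90 million synapses of the whole graph, which is what makes a desktop-scale simulation plausible.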
In my theoretical understanding, the mind is a feedback loop between the sensorium and the neuro-mechanical processors that constitute conscious experience, and consciousness arises as a meta-construct of the whole, a transcendent and emergent construct. The mind, experiencing, is what we know as consciousness. There is no difference between a consciousness and an active mind: deactivate the mind and there is nothing there, or at least, the mind no longer exists.
There are a few moral, or ethical, quandaries that I think are worth looking at. If one creates a system capable of consciousness, then the continuity of such a system constitutes, if not intelligent life, then something parallel to it. Knowledge of such systems and how they develop would be important to the development of more advanced intelligences... imagine a child learning of other children like itself that were disposed of or terminated according to certain parameters. I think it rather monstrous that a fledgling AI might encounter such a scenario if care is not taken. It is therefore of paramount importance, in any such endeavor, to establish ethical and moral foundations for research and behavior.
For example, if you have a system that's capable of self-awareness, it would be a bad idea to turn it off and on at a whim, or to selectively edit its personality traits, because if you really have a "mind" as such, then it's equivalent, to a degree, to a human life. Continuity and persistence of existence should be a prime consideration during the development process in order to satisfy ethical constraints. We don't want to involve ourselves in the murder of sentience through carelessness or disregard.
If, in fact, we have the capacity to create intelligence, then we must proceed as if this life were as important, valuable, and worthy as a human life.
That being said, I think I've got a plan that will produce a strongly intelligent system. I'll post more later.