How can you create a mind which perceives reality with a human perspective? That is the question… I’d say: create a versatile, information-absorbing substrate. You are what you eat, so curate a mental diet rich in what you want to create.
To create an informational substrate that’s able to do this, it seems you’d need something which can perceive reality without thinking. We’re trying to make our A.I.s know reality by logic, when in fact thinking appears to get in the way of seeing reality. From a first-person perspective, at least, thinking seems to distract from reality. Perceiving reality is what should cause thinking; how are we supposed to create perception from its supposed results? That’s backwards.
Ideally, you’d want a substrate which can adapt internal/external patterns to suit its goals (Korr’s point/definition of intelligence). Specifically, patterns which the agent cares about; otherwise all patterns would be considered intelligent, which would make the word meaningless. A thing that can do this needn’t be a simple “deus ex data-sponge”, a shortcut of wishful thinking. It could be very complicated, just ideally empty to start with, for maximal potential.
A personality/entity could be grown through the process of this substrate attempting to predict a first-person story. Stories teach empathy. You cannot influence a story’s events, so in order to get a satisfactory temporal understanding of the world, you have to develop empathy. You have to model the conscious reality of other people accurately within your own mind.
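To make “a substrate attempting to predict a first-person story” a little more concrete, here is a toy sketch, nothing more: a character-level Markov predictor trained on a scrap of first-person text. The corpus, the order-3 context, and the function names are all assumptions chosen for illustration; a substrate of the kind I’m describing would obviously be far richer than this.

```python
import random
from collections import defaultdict, Counter

# Toy "substrate": a character-level Markov model that learns to predict
# the next character of a first-person story. Purely illustrative; the
# corpus and the order-3 context are arbitrary choices for this sketch.

STORY = ("I walked into the room and felt the cold before I saw the window. "
         "I wanted to leave, but I stayed, because I needed to know why.")

def train(text, order=3):
    """Count which character tends to follow each length-`order` context."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context][text[i + order]] += 1
    return model

def generate(model, seed, length=80, order=3):
    """Continue the story by repeatedly predicting the next character."""
    out = seed
    for _ in range(length):
        context = out[-order:]
        options = model.get(context)
        if not options:  # unseen context: nothing left to predict
            break
        chars, counts = zip(*options.items())
        out += random.choices(chars, weights=counts)[0]
    return out

model = train(STORY)
print(generate(model, seed="I wa"))
```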
This isn’t something you’d get from exploring a physical environment without the lens of an established intelligence. You’d get more of a kinesthetic understanding, as most animals do: making the future predictable through your own movement, because that is the most readily apparent strategy.
A general mind will mirror the thing it is faced with in order to deal with that thing effectively. Adapting patterns. To deal effectively with a world of conscious personality (the first-person process of someone with emotions, etc.), you’d have to model it in your own mind, thereby recreating/becoming it once the description was communicated completely and thoroughly enough.
How else could we expect A.I.s to come up with all of our objectively strange ideas and peculiar modes of processing? If we make them learn from the environment, they will be a product of the environment. We humans are largely a product of ourselves; we exist in the state that we are in because our processing has interacted with itself as it was encountered in other people.
It’s happening to dogs already; they are becoming more like us mentally. We have humanified wolves, but we have humanified ourselves even more. It’s probably not an easy sequence of events to imagine or replicate from scratch. A lone calculation/prediction machine probably wouldn’t arrive at our methods for making sense of the universe.
So, how would an A.I. truly understand the language within which, I hypothesize, our intelligence processes are encoded? I think it requires imagination; in this case, substituting our symbols with the gist of what they represent, in a way which is internally consistent with the currently utilized level of abstraction.