I agree. I hope you read the whole article because they say something like that at the end.
Umm.. now I have.
I take the view that they're both wrong, although nativism is less wrong than empiricism.
Yes, neither Nativism nor Empiricism appears to be a complete strategy; hearing them explained doesn’t produce an epiphany that kickstarts an understanding of intelligence. They describe the symptoms, but not the disease.
Intelligence is an ecosystem.
An ecosystem… Yes… That’s a good way to describe it. A system which attunes to the world. Overlapping loops of various functions, both detailing the present and building towards the future, synergistically reinforcing each other.
Regarding communication: more important than good words are clues about where the words are coming from. If there is no reason for it to exist, the most clever banter can be as thin and dry as a paper cut-out. We should first figure out how to create a system which generates reasons; then we won’t need to work so hard to make the resulting proceedings seem reasonable. They won’t have to be reasonable, logical, or consistent at the surface level. They just need to have the signature of a system running on faith and countless assumptions.
Seriously.
The test wouldn’t be about whether it makes sense and responds to questions in the correct way, but rather whether it’s able to make you self-conscious. The “Something perceives me, how do I appear?” reaction is what we’re truly hoping for. This could also provide a second, less Freudian explanation for all the pretty female robots. Those could be attempts to recreate that missing sense of presence with an optical trick, to make the robot appear closer to sentience than it actually is. We’d probably be best off with both.
But again, that’s an outside-in approach. Just grammar and glamour trying to distract from the empty space behind them. Are there any attempts at, or examples of, a core program which provides a basic functional system with some built-in goals, and which also supports near-omnidirectional growth?
Seems like one of the basic goals of a general intelligence would be to have an optimal interaction with the world, not necessarily a victorious one.