This topic also reminds me of 'The Wisdom of the Crowd', whereby a large number of people can give a better opinion than just one individual.
In his book 'The Perfect Swarm' (German edition: 'Schwarmintelligenz'), Len Fisher distinguishes between group intelligence and swarm intelligence.
The wisdom of the crowd is on the group intelligence side.
He describes various experiments, such as a group of people estimating a cow's weight or the number of marbles in a jar.
Interestingly, the group's average tends to come closer to the correct answer than any single expert (and than the majority!), provided the necessary requirements for crowd wisdom are met, such as a high diversity of group members.
The effect can, however, get lost entirely if those requirements are not met, and group dynamics can even turn the wisdom of the crowd into the dullness of the mob.
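To make the averaging effect concrete, here is a tiny Python sketch (my own toy illustration, not from Fisher's book; all numbers are made up): every individual guess of the marble count is quite noisy, yet the mean of many diverse, independent guesses lands far closer to the true value than the typical individual does.

```python
import random

# Toy illustration (my own, not from Fisher's book): diverse, independent
# guesses of the number of marbles in a jar. Each individual is noisy,
# but the group average comes out much closer to the truth.
random.seed(1)
true_count = 1500
guesses = [true_count * random.uniform(0.5, 1.5) for _ in range(200)]

crowd_estimate = sum(guesses) / len(guesses)
avg_individual_error = sum(abs(g - true_count) for g in guesses) / len(guesses)

print(f"true count:           {true_count}")
print(f"crowd average:        {crowd_estimate:.0f}")
print(f"crowd error:          {abs(crowd_estimate - true_count):.0f}")
print(f"avg individual error: {avg_individual_error:.0f}")
```

This only works under the requirements mentioned above: the guesses have to be independent and diverse; once group dynamics make them correlated, the averaging advantage disappears.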
The difference between the wisdom of many and swarm intelligence is:
The wisdom of many is a problem-solving strategy, whereas swarm intelligence emerges spontaneously (self-organisation) from the behaviour of group members following simple rules: the simple local rules create complex global behaviour, and the global system behaviour cannot be deduced from the behaviour of the individuals.
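Here is a minimal sketch of what "simple local rules creating global behaviour" can look like (my own toy example, not taken from Fisher's book): each agent only looks at its two nearest neighbours, yet the group as a whole contracts into a tight cluster without any agent ever seeing the global picture.

```python
import random

# Toy swarm sketch (my own illustration, not from Fisher's book): agents
# scattered along a line, each following one purely local rule -- take a
# step towards the midpoint of its immediate left and right neighbour.
# No agent sees the whole group, yet the global outcome is a tight cluster:
# ordered global behaviour emerging from a trivial local rule.
random.seed(0)
positions = sorted(random.uniform(0.0, 100.0) for _ in range(10))

def step(pos):
    new = []
    for i, p in enumerate(pos):
        left = pos[max(i - 1, 0)]               # local information only:
        right = pos[min(i + 1, len(pos) - 1)]   # the two nearest neighbours
        target = (left + right) / 2.0
        new.append(p + 0.5 * (target - p))      # move halfway towards their midpoint
    return new

print(f"initial spread: {positions[-1] - positions[0]:.2f}")
for _ in range(400):
    positions = step(positions)
print(f"final spread:   {positions[-1] - positions[0]:.2f}")
```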
You mentioned another problem-solving strategy over here, which is especially helpful when we have to deal with complex problems:
Reducing complexity by increasing the fuzziness.
Having less information can lead to more success in making decisions.
Classic example: American students were asked whether San Diego or San Antonio has more inhabitants.
Only 62% were right, but when the same question was asked in Munich, surprisingly all students had the right answer (San Diego).
The reason is assumed to be the German students' lack of information: most of them had never really heard of San Antonio, so they simply guessed it must be smaller than San Diego.
Interestingly, this method seems to lead to high success rates precisely when the problems are complex.
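A small sketch of that "less is more" strategy, known in the literature as the recognition heuristic (my own toy code; the example knowledge below is deliberately incomplete): if you recognise only one of the two cities, pick that one, and only fall back on explicit knowledge when you recognise both.

```python
# Toy sketch of the recognition heuristic described above (my own
# illustration; the "knowledge" below is deliberately incomplete).
# Rule: if you recognise only one of two cities, guess that it is the
# bigger one -- no further information needed.

def guess_bigger(city_a, city_b, recognised, population_beliefs):
    """Return the city guessed to have more inhabitants."""
    known_a, known_b = city_a in recognised, city_b in recognised
    if known_a and not known_b:
        return city_a                      # recognition alone decides
    if known_b and not known_a:
        return city_b
    # Both (or neither) recognised: fall back on whatever beliefs exist,
    # which may well be wrong or missing.
    pop_a = population_beliefs.get(city_a, 0)
    pop_b = population_beliefs.get(city_b, 0)
    return city_a if pop_a >= pop_b else city_b

# Hypothetical "Munich student": has heard of San Diego, never of San Antonio.
print(guess_bigger("San Diego", "San Antonio",
                   recognised={"San Diego"},
                   population_beliefs={}))   # -> San Diego
```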
Exebeche, thank you for answering my question. I thought we had similar thoughts on the subject, and I think I was right.
What you refer to as semantics in your posts I have referred to as symbol grounding. I've linked the Wikipedia article before, but here it is again.
http://en.wikipedia.org/wiki/Symbol_grounding
Hello dear Irh9
This is getting really exciting: now that I have read your link about symbol grounding, I realise that my idea had already been expressed by someone else, just in different terms.
I see two major parallels in the following sentences:
" Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to "
" Grounding connects the sensory inputs from external objects to internal symbols and states occurring within an autonomous sensorimotor system, guiding the system's resulting processing and output."
The way I understand it (and I can hardly get it wrong), this is precisely what I was trying to explain. It gives me the kick I have been searching for, and I can hardly express how thankful I am for this input.
I hope you stay on the hook for this discussion.
Alright, as you may have seen, I am trying to show how grounding can also exist for a being that lacks human consciousness.
Humans and robots share the same reality, but they live in different perceptual realms.
A robot may be able to recognise laughter through pattern recognition algorithms, but it relates to it in a different way.
Laughter refers to a state a human person can be in. I can enter a state of laughter and exit it, and when I see somebody else laugh, I have a model of what this state might be like for the other person.
A robot will lack this model because it has not experienced the state itself.
But does that mean that there cannot be any intersection of the groundings at all?
We should first realise at this point that a word's grounding is not something universal.
For example, when kids grow up and fall in love for the first time, they tend to wonder: "Is this real love?"
The grounding of the word 'love' is built through an individual process of experiences that is different for everyone.
The more abstract a word is, the more the groundings differ.
What does the word 'sin', for example, mean to you? What does it mean to a mullah in Afghanistan? Our lives are full of terms whose groundings don't match other people's.
Even less abstract terms carry this problem.
That's why there is a lot of misunderstanding.
When G.W. Bush says the word 'America', he relates to it very differently compared to when Bin Laden says it (or would have said it).
One extremely important thing to see is that a grounding does not look like a linear relation between A and B.
I suggest looking at the following link, where I entered the word 'sin' into the wiki mind map, to get a picture of what I mean:
http://www.wikimindmap.org/viewmap.php?wiki=en.wikipedia.org&topic=sin
Of course, this is not a natural grounding the way it would look in our brain (to be precise, our brain only holds the physical basis for the grounding). But it gives an image of what it could look like.
It has lots of connections to other terms, which would themselves be connected to further terms. This multiple connectedness is why I also like the term 'concept' for the 'idea'.
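To make the "not a linear relation between A and B" point concrete, here is a minimal sketch (my own toy data structure; the neighbouring terms are made up for illustration) of a grounding as a small web of connections rather than a single link:

```python
# Minimal sketch (my own toy example, with made-up associations) of the idea
# that a grounding is a web of connections rather than a single A -> B link.
grounding = {
    "sin":      {"guilt", "religion", "forgiveness", "law"},
    "guilt":    {"sin", "shame", "responsibility"},
    "religion": {"sin", "ritual", "community"},
}

def related(term, depth=2):
    """Collect everything reachable from a term within a few hops."""
    seen, frontier = {term}, {term}
    for _ in range(depth):
        frontier = {n for t in frontier for n in grounding.get(t, set())} - seen
        seen |= frontier
    return seen - {term}

print(sorted(related("sin")))
```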
How could a robot ever get such a grounding?
Well, first of all, we have to find the intersections of our perceptual realms in order to find possibilities for equivalent groundings.
A robot could for example be able to relate to the word 'walk'.
Actually that sounds easier than it is.
What does it take to relate to the word walk?
Let me make a long story short:
http://www.ted.com/talks/hod_lipson_builds_self_aware_robots.html
This clip explains something about an evolutionary step that has been taken in terms of robotic consciousness.
It's about robots that are simply equipped with a self-model.
Having a self-model and a world-model is, by the way, central to a neuroscience approach according to which these two things are major factors in having consciousness.
That doesn't mean it's enough for 'human' consciousness; however, neuroscience is no longer at the point where we thought humans were the only species that could have something similar to consciousness.
In any case, when you give a robot a self-model and a world-model, it can relate to itself and to the outside world.
The difference between a normal computer and a robot:
You can program a computer to recognise walking people by pattern recognition, and you can make it say 'walk' when it sees a walking person.
However, all this means to the computer is a change of zeroes and ones in its system. At this point the computer does not relate to the value 'walk', because it does not affect it in any way.
A robot that has a self-model, however, could learn that a particular state of its own is called 'walk'. It could also learn that other agents (e.g. robots) in its realm can be in states that are assigned the same value 'walk'.
A robot with a self-model creates a virtual image of itself walking that can be compared to the images of other agents that are walking.
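Here is a hedged sketch of that step (entirely my own toy code, not the architecture of Lipson's robots): the "self-model" is reduced to the robot's record of its own current state, and the label attached to that state can then be matched against the observed states of other agents.

```python
# Toy sketch (my own, not Lipson's architecture): a robot that labels its own
# internal state and can then recognise the same label in other agents.

class SelfModelingRobot:
    def __init__(self):
        self.state = "idle"    # minimal "self-model": its own current state
        self.labels = {}       # learned mapping: observed state -> word

    def learn_label(self, word):
        """Associate a word with whatever state the robot is currently in."""
        self.labels[self.state] = word

    def observe(self, other_agent_state):
        """Report the learned word if another agent is in a known state."""
        return self.labels.get(other_agent_state, "unknown")

robot = SelfModelingRobot()
robot.state = "walking"
robot.learn_label("walk")          # 'walk' is grounded in the robot's own state

print(robot.observe("walking"))    # -> walk: same label applied to another agent
print(robot.observe("laughing"))   # -> unknown: no shared grounding here
```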
If I could do that experiment personally, I would wait for the robot to assign the concept 'walk' to the other agents by itself. Such an experiment would be somewhat similar to the Talking Heads experiment.
The Talking Heads experiment (linked above), however, does not include any self-modelling, and thus it lacks grounding in my opinion (and probably in yours, Irh9).
The word (idea, concept) is a representation of the outside world.
As soon as the robot can relate to it, in the sense that the word 'walk' makes the robot picture itself walking, that is, compute a virtual image of itself within the virtual image of the world it perceives through its optical input, the word 'walk' acquires semantic depth.
At that moment, the command 'walk' could make the robot picture itself walking and falling into a hole. This might actually result in a conflict with a self-maintenance algorithm.
The robot would have to make a decision, such as refusing to follow the command.
Such an action would show understanding.
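A minimal sketch of that decision step (my own toy code; the hole, the world model and the safety rule are all made up for illustration): before executing 'walk', the robot simulates itself walking inside its world model, and if the predicted outcome conflicts with a self-maintenance rule, it refuses the command.

```python
# Toy sketch (my own illustration) of the refusal scenario described above:
# the robot simulates the command on its self-model before acting.

def simulate_walk(position, world):
    """Predict where walking one step forward would leave the robot."""
    next_position = position + 1
    terrain = world.get(next_position, "ground")
    return next_position, terrain

def decide(command, position, world):
    if command != "walk":
        return "ignore: unknown command"
    _, terrain = simulate_walk(position, world)
    if terrain == "hole":                      # self-maintenance rule
        return "refuse: walking would mean falling into a hole"
    return "execute: walk"

world_model = {0: "ground", 1: "ground", 2: "hole"}   # made-up world model
print(decide("walk", position=0, world=world_model))  # -> execute: walk
print(decide("walk", position=1, world=world_model))  # -> refuse: ...
```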
At the same moment, the robot's grounding of the word 'walk' has a lot in common with our grounding of the word 'walk'.
A meaning is really nothing more than a consensus about a word's grounding.
Whenever we use a word, we rely on an agreement about the word's content. When somebody has a different opinion about a word's meaning, we believe that they are wrong and we are right.
Such disagreements can even lead people to kill each other.
It's really not necessary to teach a robot a concept like 'god'.
And does a robot really have to share my grounding of the word 'cry'?
There are a lot of meanings robots will never 'understand', a lot of groundings that cannot be shared.
But there are groundings that robots will understand and share with us.