Semantics

  • 24 Replies
  • 8629 Views
*

Exebeche

  • Roomba
  • *
  • 11
Semantics
« on: July 24, 2011, 10:51:44 pm »
System is asking me to start a new topic rather than posting an answer.
So here we go.

I'm not sure I understand what you mean by "semantic".
Could you explain a little more about semantic understanding?

Would you allow me to do so? Because it is highly important.
Semantics is not really a term that natural scientists use a lot.
People who talk about literature and art tend to use it; it describes the meaning of a piece of language.
Other names for it would be the 'content' or, more rarely, the 'concept' behind something.
It is not something that can be defined the way you can define, say, velocity.
That is why natural scientists, and IT people too, don't pay much attention to it.

Which is too bad, because it led to some of their greatest mistakes.
When Turing said in 1946:
“In thirty years, it would be as easy to ask a computer a question as to ask a person”
he was simply wrong.
And in the 70s, when programmers said computers would be reading Shakespeare within a couple of years, they repeated the same mistake.

I bumped into the reason for this failure in some discussions.
Scientifically oriented people tend to want to deal only with hard facts. They tend to believe that programming a dictionary plus the correct grammar makes a computer talk.
I have really talked to highly educated people who held this belief.
The 'philosophy of mind' offers an explanation of why this model has to fail.
According to the philosophy of mind, language is not simply a set of rules but one of the pillars of the human mind itself, next to consciousness and intentionality.
Searle, by the way, is considered one of the representatives of the philosophy of mind.
Personally I do not quite subscribe to this philosophy, but I think its concepts have to be taken seriously.
To speak is an act of understanding, and understanding requires a mind.

Reducing language to grammar and vocabulary means ignoring its most important aspect, which is its semantic depth.
If you write a program that can handle grammar and vocabulary, you will have a system that deals with major aspects of language.
I would like to call this the horizontal dimension of language.
Actually speaking requires another dimension, a vertical one: semantic depth.

How could we describe semantic depth without getting blurry?
Trotter was already pretty close to what I mean when he said:
Then give the program the abilities to detect stimulus and link it with response. For instance, every time we give him food, we can write "here is your food", "time to eat", "you must be hungry, have some food".
The bot can then link the stimulus (the word "food") with action of eating

Let me explain in a little more detail:
The point of the Chinese Room is that a chatbot, even one that uses a language perfectly, will nevertheless be unable to understand the content of its words.
Even if a chatbot performs well enough to pass the Turing test, it has no understanding of WHAT it has said.
We have to look at Trotter's suggestion to understand why:
Any word a human uses has an equivalent in the three-dimensional world, creating a relation between the human and the word (concept) used.
The word itself is a representation of a state that reality is in, or can be in.
A word like 'white' describes a condition in a 3D world that humans can relate to because it has an effect on their own world. If something is white it can be seen, so it is not dark. Words and their implications create their own realms, and humans always relate to those realms in some way.
This is what a poor computer does not have.
And it is maybe why embodiment approaches are having some success.

To make your program attach a meaning to a word, give it a way of relating to that word.
Use a virtual 3D game world, for example. Make your bot look for food all the time; make it depend on food.
Give it a self-model and a world-model.
Make it vulnerable to other creatures.
Even base it on a neural network, so that learning patterns emerge by themselves.
It will develop the concepts of 'food' and 'being wounded' without these states having names.
Then you can teach it the right words for them.
Once the 'real' circumstance is already established as a concept in its neural network, attaching the vocabulary to it will be peanuts.
A bot that can be in many conditions, that can relate to many states of reality, will absorb the words you teach it like a sponge.
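A minimal sketch of this idea in Python (purely my own illustration, with all names and numbers invented, not a description of any existing system): the agent first lives through internal states, and only afterwards does a teacher's word get statistically attached to whatever state it co-occurs with.

Code:
# Hypothetical sketch: an agent forms internal "concepts" (drive states) from
# living in a tiny world, and only afterwards has words attached to those states.
import random
from collections import defaultdict

class Agent:
    def __init__(self):
        self.hunger = 0.0
        self.damage = 0.0
        # word -> {internal state -> co-occurrence count}; this is where
        # vocabulary gets attached to states the agent already "knows".
        self.word_state_counts = defaultdict(lambda: defaultdict(int))

    def live_one_step(self):
        """The agent's drives change simply by existing in the world."""
        self.hunger += 0.1
        if random.random() < 0.2:          # occasionally another creature attacks
            self.damage += 0.3

    def dominant_state(self):
        """The concept the agent is currently 'in' (no name needed for this)."""
        return "state_A" if self.hunger >= self.damage else "state_B"

    def hear(self, word):
        """A teacher utters a word; it gets associated with the current state."""
        self.word_state_counts[word][self.dominant_state()] += 1

    def meaning_of(self, word):
        """After training, the word points at the state it co-occurred with most."""
        counts = self.word_state_counts[word]
        return max(counts, key=counts.get) if counts else None

agent = Agent()
for _ in range(200):
    agent.live_one_step()
    # The teacher names whatever condition the agent happens to be in.
    agent.hear("hungry" if agent.hunger >= agent.damage else "wounded")
    if agent.hunger > 1.0:
        agent.hunger = 0.0                 # eating resets hunger

print(agent.meaning_of("hungry"), agent.meaning_of("wounded"))

The point of the sketch is only the ordering: the states exist and are lived through first, and the words land on them afterwards.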

*

infurl

  • Administrator
  • ***********
  • Eve
  • *
  • 1365
  • Humans will disappoint you.
    • Home Page
Re: Semantics
« Reply #1 on: July 25, 2011, 12:01:59 am »
The most successful natural language understanding software written to date (that has been published) takes into account three aspects of natural language and unless I am completely misunderstanding you, you seem to be thinking along these lines too. The aspects that I am talking about are syntax (the form of the words), semantics (the meaning of the words) and intention (the goals of the entities doing the communicating). When the software is developed to take into account all of these things it seems to become quite robust and useful. http://www.cs.rochester.edu/~james/
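Just to make those three aspects concrete, here is a toy sketch in Python. It is my own illustration, not how the Rochester systems are actually built, and every name in it is invented:

Code:
# Toy illustration (not the Rochester/TRIPS architecture) of keeping the three
# aspects separate: syntax (form), semantics (meaning), intention (goal).
from dataclasses import dataclass

@dataclass
class Utterance:
    syntax: str        # the surface form / parse
    semantics: dict    # what the words refer to
    intention: str     # what the speaker is trying to achieve

def interpret(text):
    """Crude hand-written mapping, just to show the three layers."""
    if text.lower().startswith("could you pass"):
        return Utterance(
            syntax="interrogative: could you pass <NP>",
            semantics={"action": "pass", "object": text.split()[-1].strip("?")},
            intention="request",   # not literally a yes/no question about ability
        )
    return Utterance(syntax="unknown", semantics={}, intention="unknown")

print(interpret("Could you pass the salt?"))

The example shows why intention is its own layer: the syntax is a question about ability, but the goal of the speaker is a request.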

*

DaveMorton

  • Trusty Member
  • ********
  • Replicant
  • *
  • 636
  • Safe, Reliable Insanity, Since 1961
    • Geek Cave Creations
Re: Semantics
« Reply #2 on: July 25, 2011, 05:34:06 am »
Boy, Exebeche, when you get involved, you jump right in there, don't you? :) That's wonderful!

I would express my joy at having another soul with which to debate (and I still might, at some point), but that would require an opposing viewpoint. Other than a minor disagreement over whether IT people ignore the word "semantic" (I've done IT work in the past, and in some ways I still do with my web design, and I firmly believe in semantics as it applies to web page structuring, though in that context the term is little more than a "buzzword"), we pretty much see eye to eye. I'm certain that we'll find topics to debate at some point, however. :)
Comforting the Disturbed, Disturbing the Comfortable
Chat with Morti!
LinkedIn Profile
CAPTCHA4us

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6855
  • Mostly Harmless
Re: Semantics
« Reply #3 on: July 25, 2011, 12:54:17 pm »
Quote
System is asking me to start a new topic rather than posting an answer.

Just putting my admin hat on for a moment. I'm assuming it gave the warning about it being an old thread? Generally we don't worry too much about that, as we often find the need to go back to a topic, or, as in this case, someone new does. It's good to see new perspectives, so don't take the warning too seriously if you think your reply fits there. I think in this case it was probably best to start a new topic, so thank you... Admin hat off.

*

Exebeche

  • Roomba
  • *
  • 11
Re: Semantics
« Reply #4 on: July 25, 2011, 04:40:09 pm »
Hello, that's great, I had not even expected a response so soon.
And I am especially happy to be getting positive feedback as well.
Generally we don't worry too much about that as often we find the need to go back to a topic or in this case someone new does.
Alright, got that.

The aspects that I am talking about are syntax (the form of the words), semantics (the meaning of the words) and intention (the goals of the entities doing the communicating).
That sounds exciting. How do these programs simulate intention? Is there a product name that I could do some research on?

but that would require an opposing viewpoint, and other than a minor disagreement over whether IT people ignore the word "semantic"
Don't worry about having opposing opinions. I am used to getting punched around in a philosophy forum.
Actually, it is only when opinions differ that the output is maximised.
My knowledge of neural networks is only based on playing with a tiny one called something like the 'Little Red Riding Hood network'.
Though it is more or less a toy, it worked perfectly to show how the (so-called) hidden neurons, after the training phase, turn out to be representations of particular concepts, which in this case were representations of 'wolf', 'hunter' and 'grandmother'.
Once you see it, it becomes clear how concepts get wired in our brain.
A particular signal can activate the corresponding group of neurons, which is that person's (or animal's) individual representation of the concept behind the signal. Big ears and a big mouth can trigger the concept 'wolf'.
What is one hidden neuron in an artificial network is represented by many neurons in our brain, I assume (the so-called grandmother neuron, which theoretically could have existed, was never located by neuroscientists, and the idea was pretty much given up). Still, the principle remains the same.
The neuronal representation of something in the outside world is like a reflection of it (the term 'representation' is actually used by neuroscientists).
It is a transcription of the original into an analogous pattern. The relations between things in the outside world are mirrored by the relations between their neuronal representations.
The more a being learns about its environment, the more precisely the relations between its neuronal representations will match the corresponding relations in the environment.
The biological foundation of this mechanism seems to rest on simple chemical reactions: 'neurons that fire together wire together'.
The ringing of a bell will be attached to the concept of food by a dog's neurons, without any reasoning about it, if you simply repeat the pairing over and over.
It's pure chemistry.
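As a hedged illustration of that last point (my own sketch, not taken from the Little Red Riding Hood demo): a single Hebbian weight that grows simply because the two signals keep firing together.

Code:
# Illustration of "neurons that fire together wire together":
# a single connection weight that grows whenever bell and food co-occur.
def hebbian_update(weight, pre_active, post_active, learning_rate=0.1):
    """Strengthen the connection only when both sides fire together."""
    if pre_active and post_active:
        weight += learning_rate
    return weight

w_bell_food = 0.0
for trial in range(20):
    bell = True          # the bell is rung...
    food = True          # ...and food arrives right after, every time
    w_bell_food = hebbian_update(w_bell_food, bell, food)

print(round(w_bell_food, 2))   # 2.0 -> the bell alone now carries real weight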

One of the most stunning examples of representations of the outside world in IT I found while shopping at a popular online book store everyone knows.
When you look at a book you already get suggestions saying "people who were interested in this book were also interested in...".
Good marketing concept. At first I wasn't interested, but on closer inspection I realised that this tiny algorithm (combined with a huge database of purchases) was able to recommend books to me better than any human ever has.
"What's so surprising about it?" people keep asking me at the philosophy forum. "It's just simple statistics."
Yes, of course it's simple; that's exactly why it's genius: it is statistics producing semantic content.
The relations between the recommended books are, 99.9 percent of the time, functioning representations of relations that actually exist between those books.
It is not very impressive to see an algorithm make one recommendation for the book you are looking at.
However, if you drew a 3D map of all the connections between books that are somehow related, you would quickly realise that it is an incredibly complex structure to look at.
And it is not just set up by an algorithm; every customer has actually made a small contribution towards making it something that reflects relations between books which 99.9 percent of all customers would acknowledge.

I call the algorithm and the database the horizontal dimension of this phenomenon. The fact, however, that it is linked up into a huge web which is a representation (a reflection) of real outside-world relations is what gives it the vertical, semantic dimension.
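A minimal sketch of such a co-purchase recommender in Python, purely illustrative and not the book store's actual algorithm; the orders, titles and function names are invented:

Code:
# Illustrative co-purchase recommender (not the real store's algorithm):
# every pair of books bought together gets its connection counted, and
# recommendations are simply the strongest connections.
from collections import defaultdict
from itertools import combinations

co_purchases = defaultdict(int)   # (book_a, book_b) -> how often bought together

def record_order(books):
    """Each customer order strengthens the links between the books in it."""
    for a, b in combinations(sorted(books), 2):
        co_purchases[(a, b)] += 1

def recommend(book, top_n=3):
    """Books most often bought together with the given one."""
    scores = defaultdict(int)
    for (a, b), count in co_purchases.items():
        if a == book:
            scores[b] += count
        elif b == book:
            scores[a] += count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

record_order(["Winnetou 1", "Winnetou 2"])
record_order(["Winnetou 1", "Winnetou 2"])
record_order(["Winnetou 1", "The Bible"])      # one random co-purchase
print(recommend("Winnetou 1"))                  # ['Winnetou 2', 'The Bible']

The algorithm never looks inside a single book; the semantic map comes entirely from the customers' behaviour being recorded.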

Guess I should stop here, it's already almost too long to read.
I am looking forward to exchanging ideas, GeekCaveCreations.
 :)

*

infurl

  • Administrator
  • ***********
  • Eve
  • *
  • 1365
  • Humans will disappoint you.
    • Home Page
Re: Semantics
« Reply #5 on: July 25, 2011, 11:05:00 pm »
That sounds exciting. How do these programs simulate intention? Is there a product name that i could do some research on?

I included a link at the end of my post. There isn't a commercially available product yet (as far as I know) but there have been several systems developed and deployed. For one example, watch the videos for the PLOW system in action. There are also numerous research papers to download and read that will give you more details. Here's the link again. http://www.cs.rochester.edu/~james/

*

Exebeche

  • Roomba
  • *
  • 11
Re: Semantics
« Reply #6 on: July 25, 2011, 11:25:29 pm »
Sorry for not seeing the link (thought it was a signature) and thank you for reposting.
« Last Edit: July 25, 2011, 11:32:21 pm by Exebeche »

*

Exebeche

  • Roomba
  • *
  • 11
Re: Semantics
« Reply #7 on: July 26, 2011, 12:03:04 am »
Sorry, but my pen (olive green crayon) doesn't want to stop writing, so I guess you may need to have a little more patience with me.

One extremely exciting aspect of the Talking Heads experiment is certainly that the robots don't just use a language, they actually make a language evolve:
http://talking-heads.csl.sony.fr/
It seems that in some experiments embodied systems have shown much better cognitive performance than their non-embodied brothers.
I don't know too much about it, but the Talking Heads experiment appears to be one result of this embodiment trend.
Personally I believe that all of this will eventually be possible in virtual spaces too, but at the moment there are certainly some technical obstacles.
Making an agent SEE a (let's say) yellow triangle in a virtual space may simply be more complicated than showing it a real yellow triangle in reality (although both certainly have their challenges).
Nevertheless, to me the Talking Heads experiment is another example of semantic depth.
The shapes and colours for which the robots create words are the 3D origins of the concepts (which are then labelled with the words).
I have no idea whether the robots are based on neural networks, so the representations in their processors that reflect the outside objects may not be neuronal but just binary; either way, they are representations.

Here we have a transcription of an original signal (from an object) into a completely different code (binary), which nevertheless will be triggered when the system is confronted with the original signal again.
The representation is linked to its original.
While in the case of the book store the link was established by an algorithm, here it is established by the robot's perception.
In both cases we have a logical (binary) representation of an outside-world phenomenon.
Another difference is that the robots are talking about (and have concepts of) actual objects, whereas the book store algorithm reflected a construction that was completely abstract; intangible.
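As a rough, hypothetical sketch of that kind of perception-grounded label (loosely inspired by naming-game experiments, not taken from the actual Talking Heads code; all names are invented):

Code:
# Hypothetical sketch of a perception-grounded label: a perceived object is
# reduced to a feature code, and labels attach to codes, not words to words.
lexicon = {}   # feature code -> invented word

def perceive(obj):
    """Reduce an object to a crude binary feature code (the 'representation')."""
    return (obj["colour"] == "yellow", obj["shape"] == "triangle")

def name_object(obj):
    """Reuse the word linked to this code, or invent one the first time."""
    code = perceive(obj)
    if code not in lexicon:
        lexicon[code] = f"word{len(lexicon)}"
    return lexicon[code]

print(name_object({"colour": "yellow", "shape": "triangle"}))   # word0
print(name_object({"colour": "red", "shape": "circle"}))        # word1
print(name_object({"colour": "yellow", "shape": "triangle"}))   # word0 again

The word here is anchored to something the system perceives, rather than to another word.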

The difference from chatbots is that a bot can assign patterns correctly, but the patterns are not a representation of anything.
Why do people have the feeling that connecting 'Have a nice evening' to 'Thank you, good night' is not enough to believe that a computer really understands what it says?
Some of us have been in foreign countries, adopting people's behaviour by, for example, just answering a 'Salaam' with 'Salaam' without having any idea what it means.
(Funny side note: I had a friend who, in Israel, permanently replied to Salaam with Shalom and vice versa, because she thought that's how it goes.  :D  )
In some cases our way of learning a language is NOT different from a chatbot's.
However, what we mean by the word 'understand' requires the semantic depth which is only given when the code (and thus the concept) represents a possible condition of the outside world (and/or of the system itself relating to the outside world).
A typical computer can only be found in conditions that relate to its own states.
It changes from a 001 into a 011.
The difference between these states is, to the computer, only of a formal nature. It means as much to the computer as it does to you or me.
If 001 translates into 'triangle' and 011 translates into 'yellow', and the computer tells us about its discovery, we may find a yellow triangle somewhere around; then we might share the same environment with the computer.

Which tells us another thing about understanding: to have the same understanding you need to share the same reality (simply for the representations to match).
We expect computers to UNDERSTAND our language before we really consider them intelligent.
It could also go the other way:
Maybe an AI is going to tell us one day that we are never going to understand the WEB in its semantic depth.
Because we cannot relate to all the possible states in that realm.
Honestly, I believe we are moving rapidly towards this point.


« Last Edit: July 26, 2011, 12:16:20 am by Exebeche »

*

Exebeche

  • Roomba
  • *
  • 11
Re: Semantics
« Reply #8 on: July 27, 2011, 01:54:47 pm »
Today I have something really mind-blowing for you.

The amazing but also confusing thing about that book store algorithm is that it is both:
a semantic concept and a map of semantic content.
One must not confuse these two things.
First of all, the connection between the books is based on their content; in other words, they are connected on a semantic level.
This is something that a computer does not actually understand.
No computer in this world would be able to process the content of all these books and, based on that processing, make the content-related connections between them correctly.
The reason the map of relations reflects the real semantic connections is that millions of humans created the input through their purchasing and clicking behaviour.
So the algorithm could simply record the connections.

And then it actually works the way neurons do.
This fact deserves a special amount of attention. Personally I find it mind-blowing.
Why does it work like neurons do, and how?
The principle is simple:
Connections that are established more often simply get stronger.
If you ring a bell before you feed your dog, it will not make the connection the first time you do it. The more often it happens, however, the stronger the connections between the synapses get. You cannot even prevent the synapses from doing so; it's a chemical process.
Same for the algorithm: let's say the first customer buys Winnetou 1 and Winnetou 2, and the connection gets recorded. The next customer buys Winnetou 1 and the Bible, and this connection also gets recorded.
Now we have one right and one false (actually random) connection.
There will always be false (random) connections, and they will be recorded. But at the same time Winnetou 1 and 2 will also keep getting connected and recorded, and if you simply count these correct connections, their number will steadily grow compared with the count of any single random connection.
The connection between the Bible and Winnetou 1 will still be at a value of 1 when the number of connections between Winnetou 1 and 2 is already at, let's say, 10000.
Now, if you treat the number of connections as a weight, then even if there are 10000 false (random) connections, each of them has a weight of 1 (or not much higher), and the weight of 10000 for Winnetou 1 and 2 outweighs the random connections by far.
This is exactly what neurons do. By strengthening the physical synapse itself with every connection that gets established, the neuron increases the weight of a signal.
Basically that's what our brain does: making connections and increasing their weight.

And it's also what artificial neural networks do. If you have ever played with one, you will remember that the major factor in getting the network to produce the correct output is how the weights are set.

In other words, the online book store algorithm is a demonstration of how a statistical mechanism creates an effect whose output is similar to that of neurons.
My guess would be that data mining makes use of this principle, but I haven't got into that topic yet.
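To make the weight argument concrete, a tiny simulation with invented numbers (my own, not real purchase data): many scattered random co-purchases each stay at a weight of about 1, while the one genuine relation keeps being reinforced and towers above the noise.

Code:
# Invented numbers, just to show how counting turns noise into a clear signal.
import random

weights = {}
for _ in range(10_000):
    if random.random() < 0.5:
        pair = ("Winnetou 1", "Winnetou 2")          # the genuine relation
    else:
        pair = ("Winnetou 1", f"random book {random.randrange(5000)}")
    weights[pair] = weights.get(pair, 0) + 1

genuine = weights[("Winnetou 1", "Winnetou 2")]
noise = max(v for k, v in weights.items() if k != ("Winnetou 1", "Winnetou 2"))
print(genuine, noise)   # roughly 5000 versus a handful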

Now back to our online book store semantic map.
We need to distinguish between the semantic concept itself and the semantic content that is being mapped.
The algorithm does not understand anything about the content of the books, but it nevertheless maps similar contents together, because it reflects the opinion of millions of customers.
The content itself can be called semantic content; the map of it, however, is itself a single semantic concept.
It is the most complex one that I know of, and a pretty impressive one, but it is still just one semantic concept.
A semantic concept is something that we attach to anything of which we believe that it simply IS.
Take vanilla ice cream. For everyone, vanilla ice cream is connected to some set of associations.
Maybe summer, blue sky, sun, the colour white, a particular taste, and so on.
This cloud of associations is your semantic concept of the thing.
Sometimes the concept exists before a word is attached to it, and sometimes you first hear a word and slowly fill it with attributes.
But the word is not to be confused with the semantic concept. The word is a tag, although one very strongly tied to the concept.
The semantic concept itself basically consists of a cloud of associations.
Its subject does not have to be something solid; it can also be abstract.
A concept like 'evil', for example, is made up of a cloud of associations that is different for everyone, as different as the retinas of your eyes.

When a group of associations appears to us as a pattern of its own (something that can be seen as one thing, and thus as an entity), its subject appears to us as something that exists.
This is how humans get the impression that 'evil' or 'honour' or 'beauty' or whatever other abstract concepts exist 'as such', meaning they have a way of being, in other words a metaphysical existence. But enough philosophy here.
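Just as a toy illustration of a 'cloud of associations' (my own, simplified to weighted word sets; the names and numbers are invented):

Code:
# Toy illustration of a semantic concept as a weighted cloud of associations;
# two people can share the same word yet hold different clouds.
vanilla_ice_cream_anna = {"summer": 0.9, "blue sky": 0.6, "white": 0.8, "sweet": 0.9}
vanilla_ice_cream_ben  = {"childhood": 0.7, "white": 0.8, "sweet": 0.9, "cold": 0.6}

def overlap(cloud_a, cloud_b):
    """How much of the two concepts' association weight is shared."""
    shared = set(cloud_a) & set(cloud_b)
    common = sum(min(cloud_a[k], cloud_b[k]) for k in shared)
    total = (sum(cloud_a.values()) + sum(cloud_b.values())) / 2
    return common / total

print(round(overlap(vanilla_ice_cream_anna, vanilla_ice_cream_ben), 2))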

The best picture one can get of what such a concept might look like is on Wiki Mind Map:
http://www.wikimindmap.org/
Select your language and search for some topics; it's pretty interesting.
The English version seems to be bigger than the other languages.
« Last Edit: July 27, 2011, 02:37:39 pm by Exebeche »

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6855
  • Mostly Harmless
Re: Semantics
« Reply #9 on: July 27, 2011, 01:56:12 pm »
Quote
One of the most stunning examples of representations of the outside world in IT i found when i was shopping in a popular online book store everyone knows.
When you look at a book you already get suggestions saying "people who were interested in this book also were interested in..."
Good marketing concept. First i wasn't interested, but with a closer look i realized that actually this tiny algorithm (combined with a huge database of purchases) had the ability to recommend me books better than any human person has been able to.

I like that you spotted that and that you were impressed.  I think things like that, as there are a lot of examples now, are some of the more useful things I see online.  They model a world that humans would find difficult or near impossible to do. Also you see things like search terms 'trending', not so useful to me but still a nice way of showing what is going on in the world. Maybe a virtual shopping assistant could tap into some of those things. Kind of like...'I couldn't find x but I know a lot of people are interested in y at the moment and it seems related.'

The Amazon API is already available of course, but I don't think it allows you to use that particular data.  All I remember seeing was related items, but I don't think that has anything quite as clever happening with it.

In a lot of ways this is like the charts we have been using all along anyway, you know, Top Ten CDs, Top 100 Best Films and so on. But those things were never real time and took a while to assemble. Can we wait a week to find out the top album any more? Not that I really care these days, but just to make a point...
« Last Edit: July 27, 2011, 03:46:17 pm by Freddy »

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5865
Re: Semantics
« Reply #10 on: July 28, 2011, 01:42:27 am »
Not just Amazon: the entire ensemble of shops, stores and markets online uses weighted purchasing and user selection / recommendation hooks for advertising and promotions.

It reminded me of the movie "Minority Report" (Tom Cruise), where people's eyes were scanned as they entered a store; the scan was linked to the store's database, so it knew the person's identity AND that person's shopping preferences based on past purchases. Way cool for the store, perhaps not so cool for the consumer, who gets besieged as soon as the store is entered. Kind of a scary future, but most likely inevitable.

I have an older book with programs that start off knowing only the rules of a game and progressively become "smarter", if not impossible to beat, after several games. No doubt they use weighted factors based on the opponent's playing selections, but it is still quite interesting.

I think there is also a place for what I call general pattern matching, where absolute or extremely detailed knowledge is not always required in order to make a connection. A guy walks into a store and displays what looks like a gun. First, is it hand-held? OK... a pistol, either a revolver or a semi-auto. The calibre is really NOT important at this point. Same scenario for a long gun: is it single-barrelled? Is the barrel large, say for a shotgun, or is there a scope, indicating a hunting rifle? It could be a .22 for small varmints or something very large for hunting bigger game. Suffice it to say that the fact that he has a gun in a store would indicate that he is not hunting game (unless he entered a gun store to have his gun repaired or traded for an upgrade, etc.).

The point is that, at some stage, details are not always as important as good general information. Sometimes we tend to over-focus on the finite little issues that only cloud the larger ones. Something to think about... Keep it simple... remember the K.I.S.S. principle...? Good!
In the world of AI, it's the thought that counts!

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6855
  • Mostly Harmless
Re: Semantics
« Reply #11 on: July 29, 2011, 01:33:46 pm »
Good points, Art. I don't remember that part in Minority Report; I have it here, so I will have to watch it again.

Like you say we already have our past preferences being used to offer us other things. I find that quite useful and sometimes go to Amazon just to see what it has dug up.

I agree with your observation: knowing simply that it is a gun and could be life-threatening is far more important at that point than what calibre or colour it is!

This topic also reminds me of 'The Wisdom of the Crowd', whereby a large number of people can give a better opinion than just one individual.  I guess that is if the individual is not Einstein...

Quite interesting if anyone wants a read: Wisdom of the Crowd.

A British magician called Derren Brown claims that it was by using this method that he was able to predict the outcome of our National Lottery live... of course, I remain sceptical on that one...

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5865
Re: Semantics
« Reply #12 on: July 29, 2011, 10:00:14 pm »
@Minority Report - Right after Tom gets his eye transplant he goes into a store, and his (and everyone else's) eyes are scanned... etc.

@Derren Brown - I've watched practically all of his videos, and yes, he is quite the picker of brains and messer of minds: suggesting a person select a RED BIKE, or paying for items with blank pieces of paper. I especially loved his roundabout with the chess players. Great stuff, and a rather unique individual, that Derren.

The rest of you, go to YouTube and enter Derren Brown... you will be entertained!! O0
In the world of AI, it's the thought that counts!

*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Re: Semantics
« Reply #13 on: August 02, 2011, 02:27:58 pm »
Exebeche, thank you for answering my question. I thought we had similar thoughts on the subject, and I think I was right.

What you refer to as semantics in your posts I have referred to as symbol grounding. I've linked the Wikipedia article before, but here it is again.

http://en.wikipedia.org/wiki/Symbol_grounding

Have you read it?

*

Exebeche

  • Roomba
  • *
  • 11
Re: Semantics
« Reply #14 on: August 03, 2011, 12:47:08 am »
This topic also reminds me of 'The Wisdom of the Crowd', whereby a large number of people can give a better opinion than just one individual.
In his book 'The Perfect Swarm' (German edition: 'Schwarmintelligenz'), Len Fisher distinguishes between group intelligence and swarm intelligence.
The wisdom of the crowd is on the group intelligence side.
He describes various experiments, like a group of people estimating a cow's weight or estimating the number of marbles in a glass.
Interestingly, the group average tends to be closer to the correct answer than any expert (and than the majority!), provided the necessary requirements for crowd wisdom are met, such as a high diversity of group members.
The effect can, however, get totally lost if the requirements are not met, and can even turn the wisdom of the crowd into the dullness of the mob once group dynamics come into play.
The difference between the wisdom of many and swarm intelligence is this:
The wisdom of many is a problem-solving strategy, whereas swarm intelligence emerges spontaneously (self-organisation) from the behaviour of group members following simple rules (the simple local rules create complex global behaviour, and the global system behaviour cannot be deduced from the individuals' behaviour).
You mentioned another problem-solving strategy here which is helpful especially when we have to deal with complex problems:
reducing complexity by increasing the fuzziness.
Having less information can lead to more successful decisions.
The classic example: American students were asked whether San Diego or San Antonio has more inhabitants.
Only 62% were right, but when the same question was asked in Munich, surprisingly all the students gave the right answer (San Diego).
The reason is assumed to be the German students' lack of information: most of them had never really heard of San Antonio, so they simply guessed it must be smaller than San Diego.
Interestingly, this method seems to lead to high success rates precisely when the problems are complex.
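A quick, hedged simulation of the averaging effect behind the wisdom of the crowd (invented numbers; it assumes individual errors are independent and unbiased, which is roughly what Fisher's diversity requirement is about):

Code:
# Invented numbers, only to show the averaging mechanism: many noisy,
# independent guesses average out close to the true value even though most
# individual guesses are far off.
import random

true_weight = 550                      # the cow's actual weight, say in kg
guesses = [true_weight + random.gauss(0, 100) for _ in range(1000)]

crowd_estimate = sum(guesses) / len(guesses)
typical_individual_error = sum(abs(g - true_weight) for g in guesses) / len(guesses)

print(round(crowd_estimate), round(abs(crowd_estimate - true_weight)),
      round(typical_individual_error))
# e.g. 553  3  80 -> the crowd's error is far smaller than a typical individual's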

Exebeche, thank you for answering my question. I thought we had similar thoughts on the subject, and I think I was right.
What you refer to as semantics in your posts I have referred to as symbol grounding. I've linked the Wikipedia article before, but here it is again.
http://en.wikipedia.org/wiki/Symbol_grounding

Hello dear lrh9,
This is getting really exciting: now that I have read your link about symbol grounding, I realise that my idea has already been expressed by someone else, just in different terms.
I see two major parallels in the following sentences:
"Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to"
"Grounding connects the sensory inputs from external objects to internal symbols and states occurring within an autonomous sensorimotor system, guiding the system's resulting processing and output."
The way I understand it (and I can hardly get it wrong), this is precisely what I was trying to explain. It gives me the kick I have been searching for, and I can hardly express how thankful I am for this input.
I hope you stay on the hook for this discussion.

Alright, as you may have seen, I am trying to show how grounding can also exist for a being that lacks human consciousness.
Humans and robots share the same reality, but they live in different perceptual realms.
A robot may be able to recognise laughter with some pattern recognition algorithms, but it relates to it in a different way.
Laughter refers to a state a human can be in. I can enter a state of laughter and exit it, and when I see somebody else laugh I have a model of what this state might be like for the other person.
A robot will lack this model because it has not experienced the state itself.
But does that mean there cannot be any intersection of the groundings at all?
We should first realise at this point that a word's grounding is not something universal.
For example, when kids grow up and fall in love for the first time, they tend to wonder, "is this real love?"
The grounding of the word 'love' is built by an individual process of experiences that is different for everyone.
The more abstract a word is, the more the groundings differ.
What does the word 'sin', for example, mean to you? What does it mean to a mullah in Afghanistan? Our lives are full of terms whose groundings don't match other people's.
Even less abstract terms carry this problem.
That's why there is so much misunderstanding.
When G. W. Bush says the word 'America' he relates to it very differently from the way Bin Laden did (or would have).

One extremely important thing to see is that a grounding does not look like a linear relation between A and B.
I suggest looking at the following link, where I entered the word 'sin' in the wiki mind map, to get a picture of what I mean:
http://www.wikimindmap.org/viewmap.php?wiki=en.wikipedia.org&topic=sin
Of course this is not a natural grounding the way it would look in our brain (strictly speaking, our brain only holds the physical basis for the grounding), but it gives an image of what it could look like.
It has lots of connections to other terms, which are themselves connected to further terms. This multiple connectedness is why I also like the term 'concept' for the 'idea'.

How could a robot ever get such a grounding?
Well, first of all we have to find the intersections of our perceptual realms, to find possibilities for equivalent groundings.
A robot could, for example, be able to relate to the word 'walk'.
Actually that sounds easier than it is.
What does it take to relate to the word 'walk'?
Let me make a long story short:

http://www.ted.com/talks/hod_lipson_builds_self_aware_robots.html
This clip explains something about an evolutionary step that has been made in terms of robotic consciousness.
It's about robots that are simply equipped with a self-model.
Having a self-model and a world-model is, by the way, a neuroscience approach according to which these two things are major factors in having consciousness.
That doesn't mean it's enough for 'human' consciousness; however, neuroscience is no longer at the point where we thought humans were the only species that could have something similar to consciousness.
Anyway, when you give a robot a self-model and a world-model, it can relate to itself and to the outside world.
The difference between a normal computer and such a robot:
You can program a computer to recognise walking people by pattern recognition, and you can make a computer say 'walk' when it sees a walking person.
However, all this means to the computer is a change of zeroes and ones in its system. The computer at this point does not relate to the value 'walk', because that value does not affect it in any way.
A robot that has a self-model, however, could learn that a particular state of its own is called 'walk'. It could also learn that other agents (e.g. robots) in its realm can be in conditions that are assigned the same value 'walk'.
A robot with a self-model creates a virtual image of itself walking that can be compared to the images of other agents that are walking.
If I could do the experiment personally, I would wait for the robot to assign the concept 'walk' to the other agents by itself. Such an experiment would be somewhat similar to the Talking Heads experiment.
The Talking Heads experiment (linked above), however, does not include any self-modelling, so in my opinion (and probably yours, lrh9) it lacks grounding.
The word (idea, concept) is a representation of the outside world.
As soon as the robot can relate to it, in the sense that the word 'walk' makes the robot picture itself walking, that is, compute a virtual image of itself inside the virtual image of the world it perceives through its optical input, the word 'walk' acquires semantic depth.
At that moment the command 'walk' could make the robot picture itself walking and falling into a hole. This might actually conflict with a self-maintenance algorithm.
The robot would then have to make a decision, such as refusing to follow the command.
Such an action would show understanding.
And at that moment, the robot's grounding of the word 'walk' has a lot in common with our grounding of the word 'walk'.
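A hedged sketch of that last idea (entirely my own framing; the world, the self-model and the refusal rule are invented for illustration, not taken from Lipson's robots):

Code:
# Invented illustration of grounding a command in a self-model: before obeying
# 'walk', the robot simulates itself walking inside its world-model and refuses
# if the predicted outcome violates self-maintenance.
world = {"hole_at": 3}          # crude world-model: there is a hole 3 steps ahead

class Robot:
    def __init__(self):
        self.position = 0       # crude self-model: where "I" am

    def simulate(self, command, steps=5):
        """Predict what would happen to 'me' if I carried out the command."""
        pos = self.position
        if command == "walk":
            for _ in range(steps):
                pos += 1
                if pos == world["hole_at"]:
                    return "I would fall into a hole"
        return "I would be fine"

    def obey(self, command):
        prediction = self.simulate(command)
        if prediction != "I would be fine":
            return f"Refusing '{command}': {prediction}"
        self.position += 1
        return f"Executing '{command}'"

robot = Robot()
print(robot.obey("walk"))   # Refusing 'walk': I would fall into a hole

The word 'walk' here is not just a pattern to be echoed; it triggers a simulation of the robot's own state, which is the semantic depth being argued for.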

A meaning is not really more than a consensus about a word's grounding.
Whenever we use a word, we work with an agreement about the word's content. When somebody has a different opinion about a word's meaning, we believe that he is wrong and we are right.
These disagreements can even lead people to kill each other.
It's really not necessary to teach a robot a concept like 'god'.
And does a robot really have to share my grounding of the word 'cry'?
There are a lot of meanings robots will never 'understand', a lot of groundings that cannot be shared.
But there are groundings that robots will understand and share with us.

 

