Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: Exebeche on July 24, 2011, 10:51:44 pm

Title: Semantics
Post by: Exebeche on July 24, 2011, 10:51:44 pm
System is asking me to start a new topic rather than posting an answer.
So here we go.

I'm not sure I understand what you mean by "semantic".
Could you explain a little more about semantic understanding?

Would you allow me to do so?
Because it is highly important.
The word semantics is not really a term that natural scientists use a lot.
People who talk about literature and art tend to use it, and it describes the meaning of a linguistic fragment.
Another synonym would be the 'content', or, more rarely, the 'concept' behind something.
It is not something that could be defined the way you can define, let's say, velocity.
That is why natural scientists and also IT guys don't really pay a lot of attention to it.

Which is too bad, because it has led to some of their greatest mistakes.
When Turing said in 1946:
“In thirty years, it would be as easy to ask a computer a question as to ask a person”
he was simply wrong.
And in the 70s, when programmers said computers would be reading Shakespeare within a couple of years, they repeated the same mistake.

I bumped into the reason for this failure in some discussions.
Scientifically oriented people tend to want to deal only with the hard facts. They tend to believe that programming a dictionary and the correct grammar makes a computer talk.
I have really talked to highly educated people who held this belief.
The 'philosophy of mind' has an approach that explains why this model has to fail.
Language, according to the 'philosophy of mind', is not simply a set of rules, but actually one of the pillars of the human mind itself, next to consciousness and intentionality.
Searle, by the way, is considered one of the representatives of the philosophy of mind.
Personally I do not quite subscribe to this philosophy; however, I think its concepts have to be taken into serious consideration.
To speak is an act of understanding, which requires a mind.

Reducing language to the aspects of grammar and vocabulary means ignoring the most important aspect which is its semantic depth.
If you write a program that can handle grammar and vocabulary you will have a system that deals with major aspects of language.
I would like to call this aspect the horizontal dimension of language.
Actually speaking requires another dimension, a vertical one, which is the semantic depth.

How could we describe semantic depth without getting blurry?
Trotter was already pretty close to what I mean when he said:
Then give the program the abilities to detect stimulus and link it with response. For instance, every time we give him food, we can write "here is your food", "time to eat", "you must be hungry, have some food".
The bot can then link the stimulus (the word "food") with action of eating

Let me explain in a little more detail:
The problem with the Chinese Room is that a chatbot that uses a language even perfectly will nevertheless in no way be able to understand the content of its words.
Even if a chatbot works perfectly enough to pass the Turing test, it has no understanding of WHAT it has said.
We have to look at Trotter's suggestion to understand why:
Any word a human person uses has an equivalent in a three-dimensional world, creating a relation between the human and the used word (concept).
The word itself is a representation of a state that reality is in or can be in.
A word like 'white' describes a condition in a 3D world that humans can relate to because it has an effect on their own world. If there is something white, it can already be seen, so it's not dark. Words and their implications create their own realms. And humans always relate to those realms in some way.
This is what a poor computer does not have.
And it is maybe why embodiment concepts are somewhat successful.

To make your program attach a meaning to a word, give it a way of relating to it.
Use a virtual 3d-gameworld for example. Make your bot look for food all the time, make it depend on food.
Give it a self-model and a world-model.
Make it vulnerable to other creatures.
Base it on a neural network, even, so that learning patterns emerge by themselves.
It will develop the concepts of 'food' and 'being wounded' without giving these states names.
Then you can teach it the right word for it.
After the 'real' circumstance itself is already established as a concept in its neuronal network, attaching the vocabulary to it will be just peanuts.
A bot that can be in many conditions, that can relate to many states of reality, will absorb the words you teach it like a sponge.
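To make this a bit more concrete, here is a tiny Python sketch of the kind of setup I mean (purely my own toy illustration with made-up numbers, not anyone's real experiment): the agent's internal state exists before any vocabulary, and the word 'food' only gets attached to that state afterwards, by counting which state is active whenever the word is heard.

import random

class Agent:
    def __init__(self):
        self.energy = 5.0
        self.word_links = {}                 # word -> {internal state: count}

    def step(self, food_offered):
        """Update the internal state for one time step and return it."""
        self.energy -= 1.0                   # living costs energy
        if food_offered:
            self.energy += 2.0
            return "eating"                  # the pre-verbal 'food' concept
        return "hungry" if self.energy < 3.0 else "fine"

    def hear(self, word, state):
        """Link the heard word to whatever internal state is currently active."""
        links = self.word_links.setdefault(word, {})
        links[state] = links.get(state, 0) + 1

agent = Agent()
for t in range(500):
    food = random.random() < 0.5
    state = agent.step(food)
    if food:
        agent.hear("food", state)            # caretaker: "here is your food"
    if random.random() < 0.3:
        agent.hear("hello", state)           # unrelated chatter, said at any time

print(agent.word_links["food"])              # tied exclusively to the 'eating' state
print(agent.word_links["hello"])             # spread across all states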
Title: Re: Semantics
Post by: infurl on July 25, 2011, 12:01:59 am
The most successful natural language understanding software written to date (that has been published) takes into account three aspects of natural language and unless I am completely misunderstanding you, you seem to be thinking along these lines too. The aspects that I am talking about are syntax (the form of the words), semantics (the meaning of the words) and intention (the goals of the entities doing the communicating). When the software is developed to take into account all of these things it seems to become quite robust and useful. http://www.cs.rochester.edu/~james/
Title: Re: Semantics
Post by: DaveMorton on July 25, 2011, 05:34:06 am
Boy, Exebeche, when you get involved, you jump right in there, don't you? :) That's wonderful!

I would express my joy that I have another soul with which to debate (and I still might, at some point), but that would require an opposing viewpoint, and other than a minor disagreement over whether IT people ignore the word "semantic" (I've done IT work in the past, and in some ways, I still do, with my web design, and I firmly believe in semantics as it applies to web page structuring, though the term in that context is little more than a "buzzword"), we pretty much see eye to eye. I'm certain that we'll find topics to debate, however, at some point. :)
Title: Re: Semantics
Post by: Freddy on July 25, 2011, 12:54:17 pm
Quote
System is asking me to start a new topic rather than posting an answer.

Just putting my admin hat on for a moment. I'm assuming it gave the warning about it being an old thread ? Generally we don't worry too much about that as often we find the need to go back to a topic or in this case someone new does. It's good to see new perspectives, so don't take it too seriously if you think it will work. I think in this case maybe it was best to start a new topic; so thank you... Admin hat off.
Title: Re: Semantics
Post by: Exebeche on July 25, 2011, 04:40:09 pm
Hello, that's great, I had not even expected a response so soon.
And i am especially happy about even getting positive feedback.
Generally we don't worry too much about that as often we find the need to go back to a topic or in this case someone new does.
Alright got that.

The aspects that I am talking about are syntax (the form of the words), semantics (the meaning of the words) and intention (the goals of the entities doing the communicating).
That sounds exciting. How do these programs simulate intention? Is there a product name that i could do some research on?

but that would require an opposing viewpoint, and other than a minor disagreement over whether IT people ignore the word "semantic"
Don't worry about having opposing opinions. I am used to being punched in a philosophy forum.
Actually, the output is only maximised when opinions differ.
My knowledge about neural networks is only based on playing with a tiny one called something like the 'Little Red Riding Hood' network.
Though it is more or less a toy, it worked perfectly to show how the (so-called) hidden neurons, after the training phase, turn out to be representations of particular concepts, which in this case were representations of 'wolf', 'hunter' and 'grandmother'.
Once you see it, it becomes clear how concepts get wired in our brain.
A particular signal can activate the corresponding group of neurons, which is the individual representation of this person's (or animal's) concept of the signal. Big ears and a big mouth can trigger the concept 'wolf'.
What is one hidden neuron in an artificial network is, I assume, represented by many neurons in our brain (the so-called grandmother neuron that theoretically could have existed was never located by neuroscientists; the idea was pretty much given up). Still, the principle remains the same.
The neuronal representation of something in the outside world is like a reflection of it (actually, the term representation is used by neuroscientists).
A transcription of the original into an analogous pattern. The relations between things in the outside world are mirrored by the relations between their neuronal representations.
The more a being learns about its environment, the more precisely the relations between neuronal representations will match the corresponding relations of the environment.
The biological foundation of this mechanism seems to be based on simple chemical reactions causing 'neurons that fire together wire together'.
A dog's neurons will attach the ringing of a bell to the concept of food, without any reasoning about it, when you simply repeat it over and over.
It's pure chemistry.
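Just to illustrate that last point, here is a toy Hebbian update in a few lines of Python (my own sketch, not a neuroscience model): the weight between two units grows a little every time both are active in the same step, so repeated co-occurrence wins over coincidence.

# (bell rings, food appears, light flickers) over a handful of trials
trials = [
    (1, 1, 0),
    (1, 1, 0),
    (1, 1, 1),   # the light happens to be on once, by pure coincidence
    (1, 1, 0),
    (1, 1, 0),
]

bell_food_weight = 0.0
bell_light_weight = 0.0
learning_rate = 0.1

for bell, food, light in trials:
    bell_food_weight += learning_rate * bell * food     # 'fire together, wire together'
    bell_light_weight += learning_rate * bell * light

print(bell_food_weight)    # 0.5 - a strong association after repetition
print(bell_light_weight)   # 0.1 - a weak, coincidental association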

One of the most stunning examples of representations of the outside world in IT, I found when I was shopping at a popular online book store everyone knows.
When you look at a book you already get suggestions saying "people who were interested in this book also were interested in..."
Good marketing concept. At first I wasn't interested, but on closer inspection I realized that this tiny algorithm (combined with a huge database of purchases) had the ability to recommend books to me better than any human has been able to.
"What's so surprising about it?" people keep asking me at the philosophy forum. "It's just simple statistics".
Yes, of course it's simple - that's why it's genius, because it's statistics causing semantic content.
The relations between the recommended books are 99.9 percent functioning representations of relations that do actually exist between these books.
It's not very impressive to see an algorithm make one recommendation for a book you look at.
However, if you drew a 3D map of all connections between books that are related somehow, you would quickly realise that this is an incredibly complex structure to look at.
And it is not just set up by an algorithm; actually every customer has made a little contribution to make it something that reflects relations between books which 99.9 percent of all customers would acknowledge.

I call the algorithm and the database the horizontal dimension of this phenomenon. The fact, however, that it is linked up into a huge web which is a representation (reflection) of real outside-world relations is what gives it the vertical, semantic dimension.

Guess i should stop here, it's already almost too long to read.
I am looking forward to exchanging ideas GeekCaveCreations.
 :)
Title: Re: Semantics
Post by: infurl on July 25, 2011, 11:05:00 pm
That sounds exciting. How do these programs simulate intention? Is there a product name that i could do some research on?

I included a link at the end of my post. There isn't a commercially available product yet (as far as I know) but there have been several systems developed and deployed. For one example, watch the videos for the PLOW system in action. There are also numerous research papers to download and read that will give you more details. Here's the link again. http://www.cs.rochester.edu/~james/
Title: Re: Semantics
Post by: Exebeche on July 25, 2011, 11:25:29 pm
Sorry for not seeing the link (thought it was a signature) and thank you for reposting.
Title: Re: Semantics
Post by: Exebeche on July 26, 2011, 12:03:04 am
Sorry but my pen (olive green crayon) doesn't want to stop writing, so i guess you may need to have a little more patience with me.

One extremely exciting aspect of the talking heads experiment is certainly that they don't just use language but they actually make a language evolve:
http://talking-heads.csl.sony.fr/
It seems that in some experiments embodied systems have shown much better cognitive output than their non-embodied brothers.
I don't know too much about it, but the talking heads experiment appears to be one result of this embodiment trend.
Personally I believe that all of this will be possible in virtual spaces; however, at the moment there are certainly some technical obstacles.
Making an agent SEE a (let's say) yellow triangle in a virtual space may simply be more complicated than showing it a real yellow triangle in reality (although certainly both have their challenges).
Nevertheless to me the talking head experiment is another example of semantic depth.
The shapes and colours for which the robots create words are the 3D origins of the concepts (which are labeled with the words).
I have no idea whether the robots are based on neural networks, so the representations in their processors reflecting the outside objects may not be neuronal but just binary; but anyway, they are representations.

Here we have a transcription of an original signal (from an object) into a completely different code (binary), which however will be triggered when the system gets confronted with the original signal again.
The representation is linked to its original.
While in the case of the book store the link was established by an algorithm, here it's the robot's perception.
In both cases we have a logical (binary) representation of an outside world phenomenon.
Another difference is that the robots are talking about (and have concepts of) actual objects, whereas the book store algorithm reflected a construction that was completely abstract. Not concrete.

The difference from chatbots is that a bot can assign patterns correctly; however, the patterns are not a representation of anything.
Why do people have the feeling that connecting 'Have a nice evening' to 'thank you, good night' is not enough to believe that a computer really understands what it says?
Some of us have been in foreign countries adopting people's behaviour by, for example, just answering a 'Salaam' with 'Salaam' without having any idea what it means.
(Funny side note: I had a friend who in Israel permanently replied to Salaam with Shalom and vice versa because she thought that's how it goes.  :D  )
In some cases our way of learning a language is NOT different from a chatbot's.
However what we mean by the word 'understand' requires the semantic depth which is only given when the code (thus concept) represents a possible condition of the outside world (and/or of the system itself relating to the outside world).
A typical computer will only be found in conditions that relate to its own states.
It changes from a 001 into a 011.
The difference between these states to a computer is only of formal nature. It means as much to the computer as it does to you or me.
When 001 translates into triangle and 011 translates into yellow, and it tells us about its discovery we may find a yellow triangle somewhere around, so we might share the same environment with the computer.

Which tells us another thing about understanding: To have the same understanding you need to share the same reality (simply for the representations to match).
We expect computers to UNDERSTAND our language before we really consider them intelligent.
It could also go the other way:
Maybe an AI is going to tell us one day that we are never going to understand the WEB in its semantic depth.
Because we can not relate to all the possible states in this realm.
Honestly i believe we are moving rapidly towards this point.


Title: Re: Semantics
Post by: Exebeche on July 27, 2011, 01:54:47 pm
Today i have something really mind blowing for you.

The amazing but also confusing thing about that bookstore algorithm is that it's both:
A semantic concept and a map of semantic content.
One must not confuse these two things.
First of all the connection between the books is based on their content - in other words they are connected on a semantic level.
This is something that a computer actually does not understand.
No computer in the world would be able to process the content of all these books and, based on that processing, make the content-related connections between them correctly.
The reason why the map of relations reflects the real semantic connections is that millions of humans created the input by their purchasing and clicking behaviour,
so that the algorithm could simply record the connections.

And then it actually works the way neurons do.
This fact deserves a special amount of attention. Personally i find it mind blowing.
Why does it work like neurons do and how so?
The principle is simple:
Connections that are established more often simply get stronger.
When you ring a bell before you feed your dog, it will not make the connection the first time you do it. The more often this happens, however, the stronger the connections between the synapses get. You cannot even prevent the synapses from doing so; it's a chemical process.
Same for the algorithm: let's say the first customer buys Winnetou 1 and Winnetou 2, and the connection gets recorded. The next customer buys Winnetou 1 and the Bible, and that connection also gets recorded.
Now we have a correct and a false (actually random) connection.
There will always be false (random) connections, and they will be recorded. But at the same time Winnetou 1 and 2 will keep being connected and recorded, and if you simply count these correct connections, their number will slowly increase compared to the number of random connections, each counted by itself.
The connection between the Bible and Winnetou 1 will still be at value 1 when the number of connections between Winnetou 1 and 2 is already at, let's say, 10,000.
Now, if you count the number of connections as a weight, even if there are 10,000 false (random) connections, each of them has a weight of 1 (or not much higher), so the weight of 10,000 for Winnetou 1 and 2 outweighs the random connections by far.
This is exactly what neurons do. By increasing the physical synapse itself with every connection that gets established, the neuron increases the weight of a signal.
Basically that's what our brain does: Making connections and increasing their weight.

And it's also what artificial neural networks do. If you have ever played with one, you will remember that the major factor in getting your neural network to produce the correct output is how the weights are set.

In other words the online book store algorithm is a demonstration of how a statistical mechanism creates an effect with an output similar to neurons.
My guess would be that data mining makes use of this principle; however, I haven't gotten into that topic yet.
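If it helps, here is the counting mechanism as I imagine it, in a few lines of Python (my own reconstruction of the principle, not the store's actual algorithm): every pair of books bought together gets its counter increased, and a recommendation is simply the highest-weighted neighbour.

from collections import defaultdict
from itertools import combinations

co_counts = defaultdict(int)        # (book A, book B) -> number of shared purchases

purchases = [
    ["Winnetou 1", "Winnetou 2"],
    ["Winnetou 1", "Winnetou 2"],
    ["Winnetou 1", "Bible"],        # a random, 'false' connection
    ["Winnetou 1", "Winnetou 2"],
]

for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1      # the 'synapse' between a and b gets stronger

def recommend(book):
    """Return the other books ordered by the weight of their connection to 'book'."""
    neighbours = {}
    for (a, b), weight in co_counts.items():
        if book == a:
            neighbours[b] = weight
        elif book == b:
            neighbours[a] = weight
    return sorted(neighbours.items(), key=lambda kv: kv[1], reverse=True)

print(recommend("Winnetou 1"))      # [('Winnetou 2', 3), ('Bible', 1)]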

Now back to our online book store semantic map.
We need to distinguish between the semantic concept itself and the semantic content that is being mapped.
The algorithm does not understand anything about the content of the books, but nevertheless maps similar contents because it reflects the opinion of millions of customers.
The content itself can be called semantic content, however the map of it is itself a single semantic concept.
It's the most complex one that I know and a pretty impressive one, but it's just one semantic concept.
A semantic concept is something that we attach to anything of which we believe that it simply IS.
Let's say vanilla ice cream. For everyone, vanilla ice cream is connected to some set of associations.
Maybe summer, blue sky, sun, the colour white, a particular taste, and so on.
This cloud of associations is your semantic concept of something.
Sometimes the concept exists before a word is attached to it. And sometimes you first hear a word and slowly fill it with attributes.
But the word is not to be confused with the semantic concept. It is a tag, although very strongly tied to the concept.
The semantic concept, however, basically consists of a cloud of associations.
Its subject does not have to be something solid but it can also be abstract.
A concept like 'Evil', for example, is made up of a cloud of associations that is different for everyone, as different as the retina of your eye.

When a group of associations appears to us as a pattern of its own (something that can be seen as one thing, thus an entity), its subject appears to us as something that exists.
This way humans have the impression that 'Evil' or 'honour' or 'beauty' or whatever other abstract concepts exist 'as such', meaning they have a way of being, in other words a metaphysical existence. Enough philosophy over here.

The best image one can get of how such a concept can be pictured is on WikiMindMap:
http://www.wikimindmap.org/
Select your language and search for some topics. It's pretty interesting.
The English version seems to be bigger than the other languages.
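One crude way to picture such a cloud in Python (the numbers are invented, this is just an illustration of the idea): a concept as a bag of weighted associations that differs from person to person, which is why two people's concepts never overlap completely.

vanilla_for_a = {"summer": 0.9, "blue sky": 0.6, "sun": 0.8, "white": 0.7, "vanilla taste": 1.0}
vanilla_for_b = {"summer": 0.4, "childhood": 0.9, "white": 0.5, "vanilla taste": 1.0, "too sweet": 0.6}

def overlap(c1, c2):
    """A rough measure of how much of the two association clouds is shared."""
    shared = set(c1) & set(c2)
    return sum(min(c1[k], c2[k]) for k in shared) / max(sum(c1.values()), sum(c2.values()))

print(sorted(set(vanilla_for_a) & set(vanilla_for_b)))   # the shared part of the concept
print(round(overlap(vanilla_for_a, vanilla_for_b), 2))   # well below 1.0: the clouds differ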
Title: Re: Semantics
Post by: Freddy on July 27, 2011, 01:56:12 pm
Quote
One of the most stunning examples of representations of the outside world in IT i found when i was shopping in a popular online book store everyone knows.
When you look at a book you already get suggestions saying "people who were interested in this book also were interested in..."
Good marketing concept. First i wasn't interested, but with a closer look i realized that actually this tiny algorithm (combined with a huge database of purchases) had the ability to recommend me books better than any human person has been able to.

I like that you spotted that and that you were impressed.  I think things like that, as there are a lot of examples now, are some of the more useful things I see online.  They model a world that humans would find difficult or near impossible to do. Also you see things like search terms 'trending', not so useful to me but still a nice way of showing what is going on in the world. Maybe a virtual shopping assistant could tap into some of those things. Kind of like...'I couldn't find x but I know a lot of people are interested in y at the moment and it seems related.'

The Amazon API is already available of course, but I don't think it allows you to use that particular data.  All I remember seeing was related items, but I don't think that has anything quite as clever happening with it.

In a lot of ways this is like the charts we had been using all the time already - you know, Top Ten CD's, Top 100 best films etc.  But those things were never real time and took a while to assemble.  Can we wait a week to find out the top album any more ?  Not that I really care these days, but just to make a point..
Title: Re: Semantics
Post by: Art on July 28, 2011, 01:42:27 am
Not just Amazon but the entire assemble of shops, stores and markets online use the weighted purchasing, user selection / recommendation hook for advertising and promotions.

It reminded me of the movie, "Minority Report" (Tom Cruise), where people's eyes were scanned as they entered a store; the computer scan was linked into the store's database so it knew the person's identity AND that person's shopping preferences based on past purchases. Way cool for the
store, perhaps not so cool for the consumer to get besieged as soon as the store is entered. Kind of a scary future but most likely inevitable.

I have an older book with programs that start only knowing the rules of a game and progressively become "smarter" if not impossible to beat after
several games. No doubt using weighted factors based on its opponent's playing selections but still quite interesting.

I think there is also a place for what I call General Pattern Matching, whereby an absolute or extremely detailed knowledge is not always required in order to make a connection. Guy walks into a store and displays what looks like a gun. First, is it hand held? OK...Pistol...either revolver or semi-auto.
The caliber is really NOT important at this point. Same scenario for a long gun...Is it single barreled? Is the barrel large, say for a shotgun, or does it maybe have a scope indicating a hunting rifle? Could be a 22 for small varmints or very large for hunting larger game. Suffice it to say that the fact that he had a gun in a store would indicate that he was not hunting game (unless he entered a gun store to have his gun repaired or traded for an upgrade, etc.).

Point is, at some time, details are not always as important as good general information. Sometimes we tend to over focus on the finite little issues that only cloud the larger ones. Something to think about...Keep it simple ...remember the K.I.S.S. principle...? Good!
Title: Re: Semantics
Post by: Freddy on July 29, 2011, 01:33:46 pm
Good points Art.  I don't remember that part in Minority Report, I have it here so will have to watch it again.

Like you say we already have our past preferences being used to offer us other things. I find that quite useful and sometimes go to Amazon just to see what it has dug up.

I agree with your observation, knowing simply that it is a gun and could be life threatening is far more important at that point than what calibre or colour it is !

This topic also reminds me of 'The Wisdom of the Crowd', whereby a large number of people can give a better opinion than just one individual.  I guess that is if the individual is not Einstein...

Quite interesting if anyone wants a read : Wisdom of the Crowd (http://en.wikipedia.org/wiki/Wisdom_of_the_crowd).

A British magician called Derren Brown claims that it was by using this method that he was able to predict the outcome of our National Lottery live... of course I remain sceptical on that one..
Title: Re: Semantics
Post by: Art on July 29, 2011, 10:00:14 pm
@Minority Report - Right after Tom gets his eye transplant then goes into a store, his (and everyone else's) eyes are scanned...etc.

@ Derren Brown - I've watched practically all of his videos and yes, he is quite the picker of brains and messer of minds. Suggesting
a person select a RED BIKE, or paying for items with blank pieces of paper. I especially loved his roundabout with the chess players.
Great stuff and a rather unique individual that Derren.

The rest of you, go to Youtube and enter Derren Brown...you will be entertained!! O0
Title: Re: Semantics
Post by: lrh9 on August 02, 2011, 02:27:58 pm
Exebeche, thank you for answering my question. I thought we had similar thoughts on the subject, and I think I was right.

What you refer to as semantics in your posts I have referred to as symbol grounding. I've linked the Wikipedia article before, but here it is again.

http://en.wikipedia.org/wiki/Symbol_grounding

Have you read it?
Title: Re: Semantics
Post by: Exebeche on August 03, 2011, 12:47:08 am
This topic also reminds me of 'The Wisdom of the Crowd', whereby a large number of people can give a better opinion than just one individual.
In his book 'The Perfect Swarm' (German edition: 'Schwarmintelligenz'), Len Fisher distinguishes between group intelligence and swarm intelligence.
The wisdom of the crowd is on the group intelligence side.
He describes different experiments, like a group of people estimating a cow's weight or estimating the number of marbles in a glass.
Interestingly, the group on average tends to guess the correct answer more precisely than any expert (and than the majority!), given the necessary requirements for crowd wisdom such as a high diversity of group members.
The effect can however get totally lost if the requirements are not met and even turn the wisdom of the crowd into the dullness of the mob if group dynamics come into play.
The difference between the wisdom of many and swarm intelligence is:
The wisdom of many is a problem-solving strategy, whereas swarm intelligence emerges spontaneously (self-organisation) from the behaviour of group members following simple rules (where the simple local rules create complex global behaviour, including that the global system behaviour could not be deduced from the individuals' behaviour).
You mentioned another problem-solving strategy over here, which is helpful especially when we have to deal with complex problems:
Reducing complexity by increasing the fuzziness.
Having less information can lead to more success in making decisions.
Classical example: American students being asked whether San Diego or San Antonio has more inhabitants.
Only 62% were right, but when the same question was asked in Munich, surprisingly all students had the right answer (San Diego).
The reason is assumed to be the lack of information among German students. Most of them had never really heard about San Antonio, so they simply guessed it must be smaller than San Diego.
Interestingly, this method seems to lead to high success rates when the problems to deal with are complex.

Exebeche, thank you for answering my question. I thought we had similar thoughts on the subject, and I think I was right.
What you refer to as semantics in your posts I have referred to as symbol grounding. I've linked the Wikipedia article before, but here it is again.
http://en.wikipedia.org/wiki/Symbol_grounding

Hello dear lrh9
This is getting really exciting, as now that I have read your link about symbol grounding I realise that my idea had already been expressed by someone else, just in different terms.
I see two major parallels in the following sentences:
" Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to "
" Grounding connects the sensory inputs from external objects to internal symbols and states occurring within an autonomous sensorimotor system, guiding the system's resulting processing and output."
The way I understand it (and I can hardly get it wrong), this is precisely what I was trying to explain. It gives me the kick that I have been searching for, and I can hardly express how thankful I am for this input.
I hope you stay on the hook for this discussion.

Alright, as you may have seen, I am trying to show how grounding can also exist for a being that lacks human consciousness.
Humans and robots share the same reality, but they do live in different perceptual realms.
A robot may be able to recognize laughter with some pattern recognition algorithms. But it relates to it in a different way.
Laughter refers to a state a human person can be in. I can enter a state of laughter and exit it, and when i see somebody else laugh i have a model of what this state might be like for the other person.
A robot will lack this model because it has not experienced the state itself.
But does that mean that there cannot be any intersection of the groundings at all?
We should first realise at this point that a word's grounding is not something universal.
For example, when kids grow up and fall in love for the first time, they tend to wonder "is this real love?"
The grounding of the word love is built by an individual process of experiences that is different for everyone.
The more abstract a word is, the more the groundings differ.
What does the word 'sin' for example mean to you? What does it mean to a mullah in Afghanistan? Our life is full of terms that don't match other peoples' groundings.
Even less abstract terms carry this problem.
That's why there is a lot of misunderstanding.
When G.W. Bush says the word 'America' he relates to it very differently compared to when Bin Laden says it (or would have said it).

One extremely important thing to see is that a grounding does not look like a linear relation between A and B.
I suggest looking at the following link, where I entered the word 'sin' in the wiki mind map, to get a picture of what I mean:
http://www.wikimindmap.org/viewmap.php?wiki=en.wikipedia.org&topic=sin
Of course this is not a natural grounding the way it would look in our brain (actually, to be precise, our brain only holds the physical basis for the grounding). But it gives an image of what it could look like.
It has lots of connections to other terms, which themselves would be connected to further terms. This multiple connectedness is why i also like the term concept for the 'idea'.

How could a robot ever get such a grounding?
Well first of all we have to find the intersections of our perceptional realm to find possibilities for equivalent groundings.
A robot could for example be able to relate to the word 'walk'.
Actually that sounds easier than it is.
What does it take to relate to the word walk?
Let me make a long story short:

http://www.ted.com/talks/hod_lipson_builds_self_aware_robots.html
This clip explains something about an evolutionary step that has been done in terms of robotic consciousness.
It's about robots that are simply equipped with a self-model.
Having a self-model and a world-model is by the way a neuroscience approach according to which these two things are major factors of having a consciousness.
That doesn't mean it's enough for having 'human' consciousness; however, neuroscience is no longer at the point where we thought humans were the only species that can have something similar to consciousness.
Whatever the case, when you give a robot a self-model and a world-model, it can relate to itself and to the outside world.
The difference between a normal computer and a robot:
You can program a computer to recognize walking people by pattern recognition. And you can make a computer say 'walk' when it sees a walking person.
However, all it means to the computer is a change of zeroes and ones in its system. The computer at this point does not relate to the value 'walk' because it does not affect it in any way.
A robot that has a self-model, however, could learn that a particular state of its own is called 'walk'. Plus it could learn that other agents (e.g. robots) in its realm can be in conditions that are assigned the same value 'walk'.
A robot with a self-model creates a virtual self-image of itself walking that can be compared to the images of other agents that are walking.
If I could do that experiment personally, I would wait for the robot to assign the concept 'walk' to the other agents by itself. Such an experiment would be somewhat similar to the talking heads experiment.
The talking heads experiment, however (linked above), does not include any self-modeling, so in my opinion (and probably yours, lrh9) it lacks grounding.
The word (idea, concept) is a representation of the outside world.
As soon as the robot can relate to it, in the sense that the word 'walk' makes the robot picture itself walking, that is, calculate a virtual image of itself in the virtual image of the world it perceives through its optical input, the word 'walk' acquires semantic depth.
At this moment the command 'walk' could make the robot picture itself walking and falling into a hole. This might actually result in a conflict with a self-maintenance algorithm.
The robot would have to make a decision, such as refusing to follow the command.
Such an action would show understanding.
At the same moment, the robot's grounding of the word 'walk' has a lot in common with our grounding of the word 'walk'.
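To make the thought experiment concrete, here is a little Python sketch of what such a decision could look like (all names and numbers are made up by me, this is not a real robot API): the robot simulates the command against its self-model and world-model before acting, and may refuse if the predicted outcome conflicts with self-maintenance.

class Robot:
    def __init__(self, position, holes):
        self.position = position             # self-model: where am I
        self.holes = set(holes)              # world-model: where the holes are

    def imagine_walk(self, steps):
        """Predict where the self would end up, without moving the real body."""
        return self.position + steps

    def command(self, word, steps=1):
        if word != "walk":
            return "unknown word"
        predicted = self.imagine_walk(steps)
        if predicted in self.holes:          # conflict with self-maintenance
            return "refuse: I would fall into a hole"
        self.position = predicted
        return "walking"

robot = Robot(position=0, holes=[3])
print(robot.command("walk"))                 # walking (position 0 -> 1)
print(robot.command("walk", steps=2))        # refuse: I would fall into a hole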

A meaning is not really more than a consensus of a word's grounding.
Whenever we use a word, we work with an agreement on the word's content. When somebody has a different opinion about a word's meaning we believe that he is wrong and we are right.
The disagreements can lead people to even kill each other.
It's really not necessary to teach a robot a concept like 'god'.
And does a robot really have to share my grounding of the word 'cry'?
There are a lot of meanings robots will never 'understand'. A lot of groundings that cannot be shared.
But there are groundings that robots will understand and share with us.
Title: Re: Semantics
Post by: lrh9 on August 04, 2011, 12:11:56 am
I'm still following.
Title: Re: Semantics
Post by: victorshulist on August 04, 2011, 01:37:55 pm
Both human and computer "groundings" are not directly related to the real world -- humans also build a model of reality only through our senses. The symbol grounding in humans is only grounded in our simple five senses.

raw-input(A) --> perceived-value X --> Alice --> says "red"
raw-input(A) --> perceived-value Y --> Bob --> says "ok, *that* is red."  (Bob being colour blind)

Bob really "sees" Y, Alice sees X; they both have different models of reality.

raw-input(visual of symbols "10 / 5"), perceived-value(???) --> human says "equals 2"
raw-input(keyboard ASCII codes of "10/5"), perceived-value "1010 / 101" --> computer says "2" (after converting from binary "10")

The process of calculating 10 divided by 5 for a biological entity is quite different from that of a digital silicon entity, but certainly the ability to perform the operation is intelligent in both cases, regardless of the type of grounding?

Our models of reality are not based *directly* on raw input, but on our senses.   Both human and computer will have different senses.  But does it really matter in terms of *intelligence* if we have different groundings?  (different perceived-values)

As a child, I only thought of "heat" as the effect it had on my skin -- if I placed my hand near a hot stove, the *effect* (which came to me only via my sense of touch), I thought was "heat" --- later physics told me it is simply the rate of motion, the kinetic energy of molecules.    I once thought of color as the effect it had on my retina; physics tells us no, it is only the wavelength of the EM radiation.  If a robot's grounding of 'heat' was the physics explanation, would it be considered "not knowing what the word really means", just because it differs from our primitive grounding?

We humans thus aren't truly "grounded".    So would a computer "know" what the word "heat" really means?  Well, not in the same WAY we do, no.  But does it need to, to be considered intelligent while talking about heat?   If it knows in its own way, and can reason about it, and come to the same correct conclusions as a human, then it doesn't matter.  The same way it doesn't matter that "10", "5", "divided by" and the process of doing the math differ between a machine and a human.

Perhaps all that matters to share semantics between humans and machines is the relationships of words, the functional relationships between them and their groupings into complex phrases and sentences.

Another question is:  is a machine intelligent if it passes a Turing test, regardless of how its symbols are grounded?
Or -must- its symbol grounding be equal to a human's grounding?

For my project, as long as a computer gets "2" when given "10/5", the same answer a human gets, is all that matters for intelligence (of course with natural language conversations and not simple math, but the same principle).
Title: Re: Semantics
Post by: Exebeche on August 19, 2011, 11:45:07 pm
Quote from: victorshulist
Both human and computer "groundings" are not directly related to the real world -- humans also build a model of reality only through our senses.
Thank you for the exciting input by the way.
I don't want to get too deep into philosophical talk, but one could mention Immanuel Kant at this point, who claimed that we will never be in touch with the 'real' world.
According to him we will always perceive the way something appears to be, without ever realizing the thing-in-itself.
Alice seeing a rose and calling it 'red' and Bob being colourblind but also calling it red doesn't make Alice's perception more real than Bob's.
A robot that sees only black and white in Grey shades may see the red rose and identify the colour as Grey K826.
Listening to Alice he will from now on assign the word 'red' to Grey K826 when talking to humans.
As Grey K826 is always caused by the same wavelength of light his assignment works perfectly fine.
Is the robot less capable of realizing the 'real' properties of the rose?
Not at all.
Its association of red with Grey K826 is 100% functional.
What we regard as red is only the way the electromagnetic radiation of this particular wavelength APPEARS to (most) humans.
The way it appears however is not to be confused with 'what it really is'. The way something appears is totally dependent on the sensor that perceives the information. Different sensors - different perceptions.
It's obvious why we can say very little about a thing's 'real' nature. According to Kant there's nothing we can say about the 'real' nature of a thing, and this corner of AI brings up a new angle on how to understand Kant.

All we have is the functionality of a concept. When the wavelength of Grey K826 is always assigned the word 'red' by Alice, Bob and the robot, then the concept 'red' can be used successfully by everyone.
Nobody has a 'false' grounding then.
Even though their groundings differ. From a functional perspective the differences of groundings have no relevance.
And in the end it's all about functionality.
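Here is a tiny Python sketch of that point (the sensor values and names are invented by me, just to illustrate): three completely different groundings that all map the same wavelength to the word 'red', so the concept works for everyone even though the inner states have nothing in common.

def alice_grounding(wavelength_nm):
    return "warm red sensation" if wavelength_nm > 620 else "something else"

def bob_grounding(wavelength_nm):            # colour-blind: a different inner state
    return "muddy brownish sensation" if wavelength_nm > 620 else "something else"

def robot_grounding(wavelength_nm):          # sees only grey shades
    return "Grey K826" if wavelength_nm > 620 else "Grey K100"

LEARNED_NAMES = {
    "warm red sensation": "red",
    "muddy brownish sensation": "red",
    "Grey K826": "red",                      # learned by listening to Alice
}

def label(grounding, wavelength_nm):
    """Each speaker names the colour from its own internal grounding."""
    return LEARNED_NAMES.get(grounding(wavelength_nm), "not red")

rose = 680  # nm, an assumed wavelength for the red rose
print(label(alice_grounding, rose), label(bob_grounding, rose), label(robot_grounding, rose))
# all three say 'red', although none of their inner states are alike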

Quote from: victorshulist
We humans thus aren't truly "grounded". 
I agree. Humans do not have the true or 'real' grounding. There is no such thing as a real grounding.
It is mere anthropocentrism to believe that an AI would have to think or act like a human to be considered intelligent.
That's essential.
Once we believed the earth was the center of the universe. Realising that it's not like that was painful but inevitable.
The next great insult is about to hit. We will have to deal with computers far more intelligent than us, but not like humans at all.
Really we should even forget about the Turing test.

Anyway, what's important is: We all share the same reality and can have completely different groundings which nevertheless work.
A robot's grounding of a word might have nothing in common with mine, but still be completely functional.
In a way this can lead to the conclusion that the robot will not understand me on a semantic level. However, that doesn't make its own understanding wrong.
Especially because, on the other hand, I may simply not be able to understand ITS grounding.
A robot's grounding is not false as long as it's functional (same for humans, by the way: the more a person's groundings lack functionality, the more likely we are to call him/her mad).
It could be interesting to regard that the Internet is a realm of its own and intelligent agents that we use may be establishing groundings within this environment of which we will never understand even a tiny bit on a semantic level.
For us it's sometimes hard to understand how two computers standing next to each other can be as distant from each other as two planets, while a computer on a different continent is just a couple of hops away.
To an intelligent agent however the internet might spread out like a landscape built of OSI layers.
The realm we call reality is not the only environment that offers a potential of groundings and semantic depth.
In some sort of way agents are already talking about the internet in a language of algorithms.
Patterns appear. Patterns repeat. Repeating patterns make bigger patterns emerge. If a pattern is a word, words turn into sentences.
Humans will only 'understand' the patterns on a functional, mathematical basis.

Fun Stuff. Really exciting.

Title: Re: Semantics
Post by: victorshulist on August 22, 2011, 03:18:00 pm
"From a functional perspective the differences of groundings have no relevance.
And in the end it's all about functionality."

Very well said.    This is why I never "bought in" to the Chinese Room argument.   *BOTH* human and robot symbol-groundings are arbitrary.  The robot may even have a much more detailed "grounding" of colour.. by actually getting a very high-fidelity recording of the actual EM wave that represents red.

In my bot, ALL knowledge is functional (or relational).   Again, like "10 / 2" ... binary digits to a CPU, and the division is done utterly differently, but who cares... I get 5 as a result, so does the machine.   Functionality, bottom line :)

"Once we believed the earth was the center of the universe. Realising that it's not like that was painful but inevitable."

also, very well said.   Isaac Newton's laws of gravity weren't really "grounded" (we are still struggling with defining what gravity really is; the generally accepted explanation deals with the Theory of General Relativity though).

*BUT*,   again, *functionally* Newton's laws were good enough... and still used today.

"Really we should even forget about the Turing test"

I basically already have, well, for the most part.   Turing Test deals a lot with making a computer deal with error prone humans ... having to do extra permutations for spelling mistakes for example, what a waste of time :)

"In a way this can lead to the conclusion that the robot will not understand me on a semantic level. However that doesn't make his own understanding wrong."

again, if semantics is defined in terms of functionality (or relational to other semantics), it would work.

Perhaps we need 2 terms "functional semantics" (FS) and "common-grounding semantics" (CGS)?

If I get the information I want from a computer system with FS, I don't care about CGS

Again, our simple math example, both me and the computer have the same FS regarding "10 divided by 2 = 5", but we don't share the same CGS for this problem and solution... but , who cares.. I got what I needed.

"It could be interesting to regard that the Internet is a realm of its own and intelligent agents that we use may be establishing groundings within this environment of which we will never understand even a tiny bit on a semantic level."

Yes, that *is* a rather fascinating idea !

"Fun Stuff. Really exciting"

Damn right ! That's why I spend pretty much all my free time working on my project.    Thank God I have a wife that thinks it is equally as cool  ! :)

Title: Re: Semantics
Post by: DaveMorton on August 22, 2011, 03:49:47 pm
Thank God I have a wife that thinks it is equally as cool  ! :)

That, right there, is WELL worth the price of admission, my friend. Don't ever let her go! :D
Title: Re: Semantics
Post by: victorshulist on August 23, 2011, 05:18:35 pm
Don't ever let her go! :D
I sure don't intend to!
Title: Re: Semantics
Post by: ivanv on August 24, 2011, 12:40:15 pm
If we divide the Universe into reality and cognition, then grounding is a matter of reality. Cognition may be implemented in a thousand ways, but reality is always the same. We can peek at reality through our sensors and test imagined rules to make assumptions about it.

Please allow a little digression with some thoughts I find interesting:
Is life the opposite of matter?
Could matter exist without life?
Can life interfere with the matter side of the Universe by changing its laws, or adding new ones?
Is the matter side of the Universe intelligent (esoteric and chaos happenings)?

Maybe matter can be considered syntax and cognition semantics?
Title: Re: Semantics
Post by: Art on August 24, 2011, 10:14:39 pm
Perhaps you should define your terms a bit more.

Matter is anything that takes up space and has mass or weight.
Matter does not have to be alive although humans, animals and plantlife fall into that category but they are not the absolute definition.

Also something to ponder...Virtual Reality. Where one is unable to distinguish the real from the unreal much like CGI characters compared to live counterparts.

The people who play Second Life are real and grounded and they exist in both the real world (their homes) and the virtual world (2nd life).

Sometimes our built-in sensors deceive us...CGI images, holograms, AI, interactive events, games, etc. It's all around us so how do some of us perceive our own level of "grounding" and others not?

Some interesting thoughts and we're only touching the tip of the iceberg.
Title: Re: Semantics
Post by: Exebeche on August 26, 2011, 10:49:22 pm
Perhaps we need 2 terms "functional semantics" (FS) and "common-grounding semantics" (CGS)?


I have to say, I like the idea.
Actually it sounds perfect to me.
I wonder what an expert on semantics would think about it.
Semantics is a science of signs (or signifiers) and thus a science of meaning,
which implies that it's a science of understanding.
This is why it's often related to consciousness immediately.

What would be the difference between functional and common-grounding semantics?
I don't think they should be seen as opposite sides.
My guess would rather be that the functionality is the basis of a bottom up model.

How far to the bottom would we be allowed to take this idea of functional semantics?
Personally I believe something like semantic information processing already exists at the level of physics, as soon as the information affects something we would call a system.
When a piece of information causes a determined effect in a system, we can say the information has a functional significance for that system.
This information can also be physical:
When you use a wedge to make a water mill either stop or run, the action of removing (or inserting) the wedge is physical information that gets processed by the system 'water mill'.
We can say the same about an atom, if it is determined to exchange electrons with another atom when they meet.
But nobody would want to call this process understanding.

But when can we start talking about understanding?
An enzyme is a physical catalyst that enables a molecule to react with other molecules at much lower temperatures, such as room temperature.
We don't however call exchange of electrons understanding even if it's information processing.
In an organism enzymes are used for example for a process called digesting which means, the chemical structure of a substance is being broken up so that the chemical energy stored in it can be absorbed and used by the digesting organism.
The digesting system however does not blindly try to break up anything that comes along.
Only what is of use for the organism will be processed (some side effects can be neglected for this thread of ideas).
So the system that holds the enzymes must have established some way of reading the chemical (physical) information of the substance that is getting eaten.
Chemical (or physical) information is being read, interpreted and processed, with a measurable output of maximising the energy budget of the organism.
Does this not have anything to do with understanding?
Suddenly exchanging electrons is not so far from understanding anymore.

How about this:
The three-way handshake used by the TCP protocol: http://en.wikipedia.org/wiki/Three-way_handshake#Connection_establishment
Machine 1 sends a row of digits to machine 2, for example (in non-binary numbers): xxxxxxxxxxxxxxx1384xxxx
Machine 2 perceives this and realizes (because the space where this number is situated has this particular meaning): 'Machine 1 is asking if I want to talk to it. If I do, I add 1 to the number 1384 and send it back'.
Machine 2 sends xxxxxxxxxxxxxxx13852100.
By the number 1385 machine 1 can read: 'Machine 2 is willing and ready to talk to me. If I am also ready, I am supposed to add 1 to its suggested number 2100.'
Machine 1 sends back xxxxxxxxxxxxxxx13852101.
Machine 2 can see: 'Machine 1 is ready to start the conversation. Connection established, conversation begins'.
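For fun, here is the same exchange in a few lines of Python (a simplified sketch of the principle with the numbers from my example, not a real TCP stack): each side shows it has 'understood' the other only by sending the other's number back incremented by one.

def machine2_respond(syn_number):
    """Machine 2 acknowledges machine 1's number and proposes its own."""
    my_number = 2100
    return syn_number + 1, my_number         # SYN-ACK: (1385, 2100)

def machine1_confirm(ack_for_me, their_number, my_original):
    """Machine 1 checks the acknowledgement and confirms machine 2's number."""
    if ack_for_me != my_original + 1:
        return None                          # no 'understanding', no connection
    return their_number + 1                  # ACK: 2101

syn = 1384                                   # machine 1 opens the exchange
ack1, seq2 = machine2_respond(syn)           # machine 2 answers with 1385 and 2100
final_ack = machine1_confirm(ack1, seq2, syn)
print("connection established" if final_ack == seq2 + 1 else "no connection")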

Obviously there is no human or other mind involved in this process.
And in fact the machines don't do anything other than what the water mill does. They follow their program blindly.
The effect and even the method itself is something that has equivalents in our complex human world.
What if you go to a club or disco (or whatever is the correct word in these days) and someone gives you a note with his/her telephone number on it?
This is almost precisely the same principle as the three way handshake.

Reading signs takes place on very primitive levels already.
Humans are not the only instances that read signs.
Functional significance already exists on a physical level. As soon as a piece of information has a determined effect on a system, it has functional significance.
Where does significance turn into meaning?
When intelligence comes into play.

Which actually means the information processing system has to profit from the act of information processing.
How can a dog prove its intelligence if it cannot earn a cookie?
No matter what experiment we do with animals, the only means of proving their intelligence is by letting them win a reward.
It's mostly called the ability to solve problems. There does not always have to be a problem; profit does the same (and at the same time you always profit from solving a problem).
An organism that reads the chemical structure of a substance to absorb its energy efficiently, profits from its ability of information processing.
This is why it makes a difference whether the enzyme reacts with a molecule while just floating in the ocean or inside an organism.

Functionality is the basis of semantic information.
The grounding comes into play on a higher layer of the cake.

I don't want to go too far though. First I would like to check whether my ideas are acceptable so far.
Title: Re: Semantics
Post by: victorshulist on January 07, 2012, 12:47:02 am
Sorry, I *was* going to reply to your post -- but it somehow slipped by me.

Yes, I like your explanation, please continue.   I think there is merit in defining FS & CGS.
basically,

FS: functional semantics - what the information DOES, how it interacts with other information you have already learned ,

-and-

CGS - common grounding semantics -- not what it DOES, but what the information *IS* .. this is where the "raw" feeling of sensing the outside world comes in.