Semantics


lrh9

Re: Semantics
« Reply #15 on: August 04, 2011, 12:11:56 am »
I'm still following.


victorshulist

Re: Semantics
« Reply #16 on: August 04, 2011, 01:37:55 pm »
Both human and computer "groundings" are not directly related to the real world -- humans also build a model of reality only through our senses. Symbol grounding in humans is grounded only in our five simple senses.

raw-input(A) ---> perceived-value X ---> Alice --> says "red"
raw-input(A) ---> perceived-value Y ---> Bob --> says "ok, *that* is red." (Bob being colour blind)

Bob really "sees" Y, Alice sees X; they each have a different model of reality.

raw-input(visual of symbols "10 / 5") ---> perceived-value(???) --> human says "equals 2"
raw-input(keyboard ASCII codes of "10/5") ---> perceived-value "1010 / 101" --> computer says "2" (after converting from binary "10")
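
In Python, a minimal sketch of these two pipelines (all names are illustrative, not from the post): the same question arrives as different raw input and is perceived differently, yet yields the same answer.

def human(raw_visual="10 / 5"):
    perceived = raw_visual                        # decimal symbols, as seen on paper
    a, b = (int(t) for t in perceived.split("/"))
    return "equals " + str(a // b)

def computer(raw_ascii=b"10/5"):
    a, b = (int(t) for t in raw_ascii.decode("ascii").split("/"))
    perceived = (bin(a), bin(b))                  # "0b1010" / "0b101" internally
    return str(int(perceived[0], 2) // int(perceived[1], 2))

print(human())     # equals 2
print(computer())  # 2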

The process of calculating 10 divided by 5 is quite different for a biological entity than for a digital silicon entity, but surely the ability to perform the operation is intelligent in both cases, regardless of the type of grounding?

Our models of reality are not based *directly* on raw input, but on our senses. Human and computer will have different senses. But does it really matter, in terms of *intelligence*, if we have different groundings (different perceived-values)?

As a child, I only thought of "heat" as the effect it had on my skin -- if I placed my hand near a hot stove, the *effect* (which came to me only via my sense of touch) was what I thought "heat" was. Later, physics told me it is simply the rate of motion, the kinetic energy of molecules. I once thought of colour as the effect it had on my retina; physics tells us no, it is only the wavelength of the EM radiation. If a robot's grounding of "heat" was the physics explanation, would it be considered "not knowing what the word really means", just because it differs from our primitive grounding?

We humans thus aren't truly "grounded". So would a computer "know" what the word "heat" really means? Well, not in the same WAY we do, no. But does it need to, in order to be considered intelligent while talking about heat? If it knows in its own way, can reason about it, and comes to the same correct conclusions as a human, then it doesn't matter -- the same way it doesn't matter that "10", "5", "divided by", and the process of doing the math all differ between machine and human.

Perhaps all that matters for sharing semantics between humans and machines is the relationships between words: their functional relationships and their groupings into complex phrases and sentences.

Another question is: is a machine intelligent if it passes a Turing test, regardless of how its symbols are grounded?
Or *must* its symbol grounding be equal to a human's grounding?

For my project, as long as the computer gets "2" when given "10/5" -- the same answer a human gets -- that is all that matters for intelligence (with natural language conversations rather than simple math, of course, but the same principle).
« Last Edit: August 04, 2011, 01:49:38 pm by victorshulist »


Exebeche

Re: Semantics
« Reply #17 on: August 19, 2011, 11:45:07 pm »
Quote from: victorshulist
Both human and computer "groundings" are not directly related to the real world -- humans also build a model of reality only through our senses.
Thank you for the exciting input by the way.
I don't want to get too deep into philosophical talk, but one could mention Immanuel Kant at this point, who claimed that we will never be in touch with the 'real' world.
According to him we will always perceive the way something appears to be, without ever realizing the thing-in-itself.
Alice seeing a rose and calling it 'red', and Bob being colour-blind but also calling it red, doesn't make Alice's perception more real than Bob's.
A robot that sees only in shades of grey may see the red rose and identify the colour as Grey K826.
Listening to Alice, it will from now on assign the word 'red' to Grey K826 when talking to humans.
As Grey K826 is always caused by the same wavelength of light, this assignment works perfectly well.
Is the robot less capable of realizing the 'real' properties of the rose?
Not at all.
Its association of red with Grey K826 is 100% functional.
What we regard as red is only the way electromagnetic radiation of this particular wavelength APPEARS to (most) humans.
The way it appears, however, is not to be confused with 'what it really is'. The way something appears is totally dependent on the sensor that perceives the information. Different sensors, different perceptions.
It's obvious why we can say very little about a thing's 'real' nature. According to Kant there's nothing we can say about the 'real' nature of a thing, and this corner of AI brings up a new angle on how to understand Kant.

All we have is the functionality of a concept. When the wavelength of Grey K826 is always assigned the word 'red' by Alice, Bob, and the robot, then the concept 'red' can be used successfully by everyone.
Nobody has a 'false' grounding then, even though their groundings differ.
From a functional perspective, the differences between groundings have no relevance.
And in the end it's all about functionality.
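
In Python, a minimal sketch of this point (all names are illustrative): Alice, Bob, and the robot each map the same wavelength to a different private percept, yet all emit the same public label, so the concept 'red' works for everyone.

ROSE_WAVELENGTH_NM = 700  # red light

def alice_perceive(nm): return "X"           # normal colour vision
def bob_perceive(nm): return "Y"             # colour-blind percept
def robot_perceive(nm): return "Grey K826"   # greyscale sensor

agents = [
    (alice_perceive, {"X": "red"}),          # each agent learned to attach
    (bob_perceive, {"Y": "red"}),            # the word 'red' to its own
    (robot_perceive, {"Grey K826": "red"}),  # private perceived-value
]

# Different groundings, identical functional behaviour:
assert all(label[p(ROSE_WAVELENGTH_NM)] == "red" for p, label in agents)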

Quote from: victorshulist
We humans thus aren't truly "grounded". 
I agree. Humans do not have the true or 'real' grounding. There is no such thing as a real grounding.
It is mere anthropocentrism to believe that an AI would have to think or act like a human to be considered intelligent.
That's essential.
Once we believed the earth was the center of the universe. Realising that it's not like that was painful but inevitable.
The next great insult is about to hit. We will have to deal with computers far more intelligent than us, but not like humans at all.
Really we should even forget about the Turing test.

Anyway, what's important is: We all share the same reality and can have completely different groundings which nevertheless work.
A robot's grounding of a word might have nothing in common with mine, but still be completely functional.
In a way this can lead to the conclusion that the robot will not understand me on a semantic level. However, that doesn't make its own understanding wrong.
Especially since, on the other hand, I may simply not be able to understand ITS grounding.
A robot's grounding is not false as long as it's functional (same for humans, by the way: the more a person's groundings lack functionality, the more likely we are to call him or her mad).
It is interesting to consider that the Internet is a realm of its own, and that the intelligent agents we use may be establishing groundings within this environment of which we will never understand even a tiny bit on a semantic level.
For us it's sometimes hard to understand how two computers standing next to each other can be as distant from each other as two planets, while a computer on a different continent is just a couple of hops away.
To an intelligent agent, however, the internet might spread out like a landscape built of OSI layers.
The realm we call reality is not the only environment that offers a potential of groundings and semantic depth.
In some sort of way agents are already talking about the internet in a language of algorithms.
Patterns appear. Patterns repeat. Repeating patterns make bigger patterns emerge. If a pattern is a word, words turn into sentences.
Humans will only 'understand' the patterns on a functional, mathematical basis.

Fun Stuff. Really exciting.

« Last Edit: August 19, 2011, 11:56:35 pm by Exebeche »


victorshulist

Re: Semantics
« Reply #18 on: August 22, 2011, 03:18:00 pm »
"From a functional perspective the differences of groundings have no relevance.
And in the end it's all about functionality."

Very well said. This is why I never "bought into" the Chinese Room argument. *BOTH* human and robot symbol-groundings are arbitrary. The robot may even have a much more detailed "grounding" of colour, by actually capturing a very high-fidelity recording of the actual EM wave that represents red.

In my bot, ALL knowledge is functional (or relational). Again, like "10 / 2": binary digits to a CPU, and the division is done utterly differently, but who cares... I get 5 as a result, and so does the machine. Functionality, bottom line :)

"Once we believed the earth was the center of the universe. Realising that it's not like that was painful but inevitable."

Also very well said. Isaac Newton's laws of gravity weren't really "grounded" (we are still struggling to define what gravity really is; the generally accepted account these days deals with the theory of General Relativity).

*BUT*,   again, *functionally* Newton's laws were good enough... and still used today.

"Really we should even forget about the Turing test"

I basically already have, for the most part. The Turing Test deals a lot with making a computer handle error-prone humans... having to do extra permutations for spelling mistakes, for example. What a waste of time :)

"In a way this can lead to the conclusion that the robot will not understand me on a semantic level. However that doesn't make his own understanding wrong."

again, if semantics is defined in terms of functionality (or relational to other semantics), it would work.

Perhaps we need 2 terms "functional semantics" (FS) and "common-grounding semantics" (CGS)?

If I get the information I want from a computer system with FS, I don't care about CGS.

Again, with our simple math example, both the computer and I have the same FS regarding "10 divided by 2 = 5", but we don't share the same CGS for this problem and solution... but who cares, I got what I needed.

"It could be interesting to regard that the Internet is a realm of its own and intelligent agents that we use may be establishing groundings within this environment of which we will never understand even a tiny bit on a semantic level."

Yes, that *is* a rather fascinating idea !

"Fun Stuff. Really exciting"

Damn right! That's why I spend pretty much all my free time working on my project. Thank God I have a wife who thinks it is just as cool! :)



DaveMorton

Re: Semantics
« Reply #19 on: August 22, 2011, 03:49:47 pm »
Thank God I have a wife who thinks it is just as cool! :)

That, right there, is WELL worth the price of admission, my friend. Don't ever let her go! :D


victorshulist

Re: Semantics
« Reply #20 on: August 23, 2011, 05:18:35 pm »
Don't ever let her go! :D
I sure don't intend to!


ivanv

Re: Semantics
« Reply #21 on: August 24, 2011, 12:40:15 pm »
If we divide the Universe into reality and cognition, then grounding is a matter of reality. Cognition may be implemented in a thousand ways, but reality is always the same. We can peek at reality through our sensors and test imagined rules to make assumptions about it.

Please allow a little digression into some thoughts I find interesting:
Is life the opposite of matter?
Could matter exist without life?
Can life interfere with the matter side of the Universe by changing its laws, or adding new ones?
Is the matter side of the Universe intelligent (esoteric and chaotic happenings)?

Maybe matter can be considered syntax, and cognition semantics?


Art

Re: Semantics
« Reply #22 on: August 24, 2011, 10:14:39 pm »
Perhaps you should define your terms a bit more.

Matter is anything that takes up space and has mass or weight.
Matter does not have to be alive; humans, animals, and plant life fall into that category, but they are not the absolute definition.

Also something to ponder: virtual reality, where one is unable to distinguish the real from the unreal, much like CGI characters compared to their live counterparts.

The people who play Second Life are real and grounded, and they exist in both the real world (their homes) and the virtual world (Second Life).

Sometimes our built-in sensors deceive us: CGI images, holograms, AI, interactive events, games, etc. It's all around us, so how is it that some of us perceive our own level of "grounding" and others do not?

Some interesting thoughts and we're only touching the tip of the iceberg.


Exebeche

Re: Semantics
« Reply #23 on: August 26, 2011, 10:49:22 pm »
Quote from: victorshulist
Perhaps we need 2 terms "functional semantics" (FS) and "common-grounding semantics" (CGS)?


I have to say, I like the idea.
Actually, it sounds perfect to me.
I wonder what an expert in semantics would think about it.
Semantics is a science of signs (or signifiers) and thus a science of meaning,
which implies that it's a science of understanding.
This is why it's so often immediately related to consciousness.

What would be the difference between functional semantics and common-grounding semantics?
I don't think they should be seen as opposites.
My guess is rather that functionality is the basis of a bottom-up model.

How far down would we be allowed to take this idea of functional semantics?
Personally, I believe something like semantic information processing already exists at the level of physics, as soon as the information affects something we would call a system.
When a piece of information causes a determined effect in a system, we can say the information has functional significance for that system.
This information can also be physical:
when you use a wedge to make a water mill either stop or run, the act of removing (or inserting) the wedge is a physical piece of information that gets processed by the system 'water mill'.
We can say the same about an atom, if it is determined to exchange electrons with another atom when they meet.
But nobody would want to call this process understanding.

But when can we start talking about understanding?
An enzyme is a physical catalyst that enables a molecule to react with other molecules at much lower temperatures, such as room temperature.
We don't, however, call an exchange of electrons understanding, even if it is information processing.
In an organism, enzymes are used, for example, in a process called digestion: the chemical structure of a substance is broken up so that the chemical energy stored in it can be absorbed and used by the digesting organism.
The digestive system, however, does not blindly try to break up anything that comes along.
Only what is of use to the organism will be processed (some side effects can be neglected for this thread of ideas).
So the system that holds the enzymes must have established some way of reading the chemical (physical) information of the substance being eaten.
Chemical (or physical) information is being read, interpreted, and processed, with a measurable output: maximising the energy household of the organism.
Does this not have anything to do with understanding?
Suddenly, exchanging electrons is not so far from understanding anymore.

How about this:
the three-way handshake used by the TCP protocol: http://en.wikipedia.org/wiki/Three-way_handshake#Connection_establishment
Machine 1 sends a row of digits to machine 2, for example (in non-binary numbers): xxxxxxxxxxxxxxx1384xxxx
Machine 2 perceives this and realizes (because the position where this number sits has this particular meaning): 'Machine 1 is asking if I want to talk to it. If I do, I add 1 to the number 1384 and send it back.'
Machine 2 sends xxxxxxxxxxxxxxx13852100.
From the number 1385, machine 1 can read: 'Machine 2 is willing and ready to talk to me. If I am also ready, I am supposed to add 1 to its suggested number 2100.'
Machine 1 sends back xxxxxxxxxxxxxxx13852101.
Machine 2 can see: 'Machine 1 is ready to start the conversation. Connection established, conversation begins.'
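
In Python, a minimal sketch of the number game just described (not real networking code; function and field names are illustrative only):

def syn(seq):
    # Machine 1: "do you want to talk to me?" -- proposes its number
    return {"SYN": True, "seq": seq}

def syn_ack(pkt, own_seq):
    # Machine 2: acknowledges by adding 1, and proposes its own number
    return {"SYN": True, "ACK": True, "ack": pkt["seq"] + 1, "seq": own_seq}

def ack(pkt):
    # Machine 1: acknowledges Machine 2's number the same way
    return {"ACK": True, "ack": pkt["seq"] + 1}

m1 = syn(1384)          # "asking if you want to talk"
m2 = syn_ack(m1, 2100)  # ack 1385 ("yes"), seq 2100 ("are you ready?")
m3 = ack(m2)            # ack 2101 -- connection established
print(m2["ack"], m3["ack"])  # 1385 2101, the numbers from the example above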

Obviously there is no human or other mind involved in this process.
And in fact the machines don't do anything more than the water mill does. They follow their program blindly.
Yet the effect, and even the method itself, has equivalents in our complex human world.
What if you go to a club or disco (or whatever the correct word is these days) and someone gives you a note with his or her telephone number on it?
That is almost precisely the same principle as the three-way handshake.

Reading signs takes place on very primitive levels already.
Humans are not the only instances that read signs.
Functional significance already exists at the physical level. As soon as a piece of information has a determined effect on a system, it has functional significance for that system.
Where does significance turn into meaning?
When intelligence comes into play.

Which actually means the information-processing system has to profit from the act of information processing.
How can a dog prove its intelligence if it cannot earn a cookie?
No matter what experiment we run with animals, the only means of proving their intelligence is by letting them win a reward.
It's usually called problem-solving ability. There does not always have to be a problem, though; profit does the same (and at the same time, you always profit from solving a problem).
An organism that reads the chemical structure of a substance to absorb its energy efficiently profits from its ability to process information.
This is why it makes a difference whether the enzyme reacts with a molecule while just floating in the ocean or inside an organism.

Functionality is the basis of semantic information.
The grounding comes into play on a higher layer of the cake.

I don't want to go too far, though. First I would like to check whether my ideas are acceptable so far.


victorshulist

Re: Semantics
« Reply #24 on: January 07, 2012, 12:47:02 am »
Sorry, I *was* going to reply to your post -- but it somehow slipped by me.

Yes, I like your explanation, please continue.   I think there is merit in defining FS & CGS.
basically,

FS: functional semantics -- what the information DOES, how it interacts with other information you have already learned,

-and-

CGS: common-grounding semantics -- not what it DOES, but what the information *IS*... this is where the "raw" feeling of sensing the outside world comes in.
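
In Python, a tiny sketch of the distinction, reusing the thread's division example (names are illustrative only): the two "agents" below share FS (same answer to the same question) without sharing CGS (the same internal grounding).

def human_answer(a, b):
    return a // b  # decimal symbols, "in the head"

def machine_answer(a, b):
    # the same question, perceived as binary bit patterns
    return int(bin(a), 2) // int(bin(b), 2)

shares_fs = human_answer(10, 5) == machine_answer(10, 5)  # True: same function
shares_cgs = "decimal symbols" == "binary registers"      # False: different grounding
print(shares_fs, shares_cgs)  # FS without CGS: "I got what I needed."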

 

