The Symbol Grounding Problem

  • 40 Replies
  • 32170 Views

Don Patrick

  • Trusty Member
  • Replicant
  • Posts: 633
    • AI / robot merchandise
Re: The Symbol Grounding Problem
« Reply #15 on: February 20, 2023, 09:00:24 am »
Maybe the confusion just lies in the way this sentence is written:
"Words are just collections of sounds (or squiggles on a surface), in no particular order, with no rules by which they were chosen."
I did indeed misread the second half of this sentence as referring to "words". My bad. Thank you for elaborating.

To me a symbol is an indivisible unit, a whole. Words can be regarded as such if one follows the mindset of '80s philosophers, but one can also regard them as information. Words are digital data in the same way that a camera-recorded visual is: a sequence of byte values. Whether those bytes end up in the system through sensors reporting them directly or humans reporting them indirectly doesn't make much difference in my opinion, save that it takes 1000 words to tell a picture. But perhaps I am arguing the wrong case. Any objections from my side are only against the general notion that computers cannot be grounded without sensory embodiment, as the symbol grounding problem is typically framed.
CO2 retains heat. More CO2 in the air = hotter climate.


WriterOfMinds

  • Trusty Member
  • Replicant
  • Posts: 616
    • WriterOfMinds Blog
Re: The Symbol Grounding Problem
« Reply #16 on: March 12, 2023, 06:41:45 pm »
Here is Part 2: https://writerofminds.blogspot.com/2023/03/sgp-part-ii-need-for-grounding.html

This article considers whether grounding is really necessary / what the deficiencies of an AI program without grounding might be.

Thanks for all the discussion on the first one!


ivan.moony

  • Trusty Member
  • Bishop
  • Posts: 1729
    • mind-child
Re: The Symbol Grounding Problem
« Reply #17 on: March 12, 2023, 08:29:06 pm »
Hi Writer, let me reveal some of my insights from investigating knowledge about knowledge.

About humaneness and groundings

While syntax is the study of correctly composed sentences, semantics is the study of the meaning of those sentences. This isn't restricted to textual data; it applies to any sensory input/output (you may think of the grammar of a photo of a face as a composition of brows, eyes, cheeks, nose, ...). To explain my view of semantics, imagine we understand only one language - our mother tongue - and we want to learn another one. During the learning process, we ground the learned language in words of our mother tongue. Once we know this new language, we may recursively learn another one, perhaps grounding it not in our primary language but in any language we learned in the meantime. We get onion layers of languages in which (guess what) each layer compiles to the one below it, until we reach our primary language, which we finally understand.
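As a toy sketch of this onion of languages (my own illustration with made-up dictionaries, not a real model of language learning): treat each learned language as a table that maps its words to words one layer below, so that "understanding" means translating all the way down to the primary layer.

```python
# Each learned "language" is a dictionary mapping its words to words one
# layer below; grounding = translating down to the primary language.
french_to_spanish  = {"pluie": "lluvia"}  # French grounded in Spanish
spanish_to_english = {"lluvia": "rain"}   # Spanish grounded in English

def ground(word, layers):
    # Walk down the onion, one "compilation" step per layer.
    for layer in layers:
        word = layer[word]
    return word

print(ground("pluie", [french_to_spanish, spanish_to_english]))  # rain
```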

Now, let's go in the other direction, from our mother tongue downward. What's beneath that? I guess it could be further layers of symbolic languages: from culturally learned responses, to responses we are taught in school, to our home upbringing, all the way down to the deepest instincts we were born with. These instincts are our groundings. No one taught them to us; we are born with them, and we just feel them. Pure emotions, some of which we like, some of which we don't, and some of which we have learned how to achieve by using the semantic levels above the basic one.

So, how deep does a machine need to go? We may teach it any semantic level, and that's it. That may be its basic instinct, so to say. It may form new semantic levels above it (to learn new languages, etc.). The thing is, if its deepest semantic level is somewhat aligned with ours, it will make sense to us when the machine, say, sees an injured man and calls a doctor on the phone. It may seem to exhibit intelligence even though it doesn't share our deepest grounding in emotions - any level above it will do the trick if it is complete enough.

About computing algebra and groundings

I happen to have explored lambda calculus, a functional formalism used to calculate things. It is based on functions that apply their parameters within function bodies composed of other functions, going downward as deep as we need. Here I draw a parallel to the semantic levels from above. In theory, as I learned from various sources, there is a special kind of function called a combinator. Combinators are functions whose bodies don't introduce new functions, but only recombine the order and presence of their parameters. For example, the constant TRUE is defined as `λx.λy.x`, which means: given parameters x and y, return x. FALSE is defined as `λx.λy.y`. Similarly, the operators AND, OR, and NOT are defined as `λp.λq.p q p`, `λp.λq.p p q`, and `λp.p FALSE TRUE`, respectively. The if-then-else operator is defined as `λp.λa.λb.p a b`.
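To make these definitions concrete, here is a minimal executable sketch of the same combinators written as plain Python lambdas (the upper-case names and the `to_bool` helper are my own illustrative additions):

```python
# Church booleans and their operators, transcribed from the lambda terms above.
TRUE  = lambda x: lambda y: x                   # λx.λy.x
FALSE = lambda x: lambda y: y                   # λx.λy.y
AND   = lambda p: lambda q: p(q)(p)             # λp.λq.p q p
OR    = lambda p: lambda q: p(p)(q)             # λp.λq.p p q
NOT   = lambda p: p(FALSE)(TRUE)                # λp.p FALSE TRUE
IF    = lambda p: lambda a: lambda b: p(a)(b)   # λp.λa.λb.p a b

def to_bool(church):
    # Collapse a Church boolean to a native bool so we can inspect results.
    return church(True)(False)

print(to_bool(AND(TRUE)(FALSE)))  # False
print(to_bool(OR(FALSE)(TRUE)))   # True
print(to_bool(NOT(FALSE)))        # True
print(IF(TRUE)("yes")("no"))      # yes
```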

As we can see, combinators describe constant values and operations we would expect to have to be grounded or hard-coded, yet they refer only to one another, forming a system we may use as if it were really grounded in something when it is not. With a powerful enough set of combinators, we can describe any operation and algorithm known to us (this is a well-known academic fact, I believe).

Why am I mentioning combinators? Well, I see them as a concrete semantic layer that doesn't need to pass meaning any further down, yet they are able to interpret and calculate anything we can imagine without referring to anything below them.

So, can combinators be considered grounding constants without actually being grounded in anything outside themselves?


WriterOfMinds

  • Trusty Member
  • Replicant
  • Posts: 616
    • WriterOfMinds Blog
Re: The Symbol Grounding Problem
« Reply #18 on: March 12, 2023, 10:12:50 pm »
Quote
... all the way down to the deepest instincts we were born with. These instincts are our groundings. No one taught them to us; we are born with them, and we just feel them.

Are you saying only inborn qualities are eligible to be the lowest level in which symbols are grounded?

A symbol, in the terminology I'm using, is something that is a representation of something else, that stands in for something else. So any primary experience that isn't about anything else, that is just itself, is a "lowest level" that symbols can be grounded in. I would think that includes a variety of experiences that aren't inborn, but are not taught, either - things that just happen to the self, and can be given names.

Quote
So, can combinators be considered grounding constants without actually being grounded in anything outside themselves?

You note that they don't refer to anything beyond themselves, so, maybe! If something about them can be said to have inherent meaning to the system using them. E.g. is the system trying to acquire more instances of the "true" combinator than the "false" one? I tend to relate all meaning back to goals. Or is the system's memory of its runtime, its experience as it were, composed of sightings of these combinators?


ivan.moony

  • Trusty Member
  • Bishop
  • Posts: 1729
    • mind-child
Re: The Symbol Grounding Problem
« Reply #19 on: March 12, 2023, 11:25:24 pm »
Quote
Are you saying only inborn qualities are eligible to be the lowest level in which symbols are grounded?

The entire world around us is merely an imprint on one side of our inborn qualities - on our sensors, so to say. I guess there are many kinds of sensory data. For example, the tickle of a raindrop on the skin. Another example would be the feeling in our body while pronouncing the word "raindrop".

Another side of our inborn qualities is intelligence. Using it, we learned to share the idea of a symbol. We taught each other that pronouncing the word "raindrop" relates to real raindrops, so we can talk about them in other contexts. Both the real raindrop and the word "raindrop" are felt by inborn qualities first. Only then is the word "raindrop" conceived as a symbol for the real raindrop in our unexplored mind. That symbol is an invention we use to, say, offer an umbrella when appropriate. So I believe intelligence is used to refer, directly or indirectly, by symbols to the first kind of inborn qualities: feelings.

But if you ask me about emotions as the opposite side of the first kind of inborn qualities, feelings, and whether emotions can be used as symbol groundings... that's too complex a question for my simple mind.

I don't really know; maybe there are feelings on one side, emotions on the other, with a symbol hierarchy in between.

I have pretty unstructured thoughts about all this.


WriterOfMinds

  • Trusty Member
  • Replicant
  • Posts: 616
    • WriterOfMinds Blog
Re: The Symbol Grounding Problem
« Reply #20 on: April 10, 2023, 10:25:31 pm »
In Symbol Grounding Problem Part 3, I make a brief (but unfortunately necessary) detour into consciousness-related strangeness. Searle thinks it's impossible for any entity that runs a formal computer program and doesn't have wetware (real biological neurons) to have symbol grounding or "intentionality." Is he right? (I don't think so.)

https://writerofminds.blogspot.com/2023/04/sgp-part-iii-on-brain-physics-qualia.html


ivan.moony

  • Trusty Member
  • Bishop
  • Posts: 1729
    • mind-child
Re: The Symbol Grounding Problem
« Reply #21 on: April 13, 2023, 02:57:26 pm »
I've got two thoughts, if anyone is interested:
  • Isn't the mind of a living being a kind of Chinese Room?
  • Suppose we have a programming language. Now suppose we write an interpreter for that language in the language itself. Obviously, both levels can execute any code. Since the main level is already grounded in whatever is beneath it (maybe assembler), isn't the second level grounded in the first? And if we build an interpreter within an interpreter within an interpreter... aren't they all grounded by the fact that the shallowest, main level is already grounded? Finally, what is that assembler grounded in? (A toy sketch of this tower follows below.)
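Here is that tower of interpreters as a toy Python sketch (my own illustration; the "translation" step is just the identity, standing in for a real interpreter):

```python
# Toy sketch: layer 0 is "grounded" in the host machine; every higher
# layer only translates its program and delegates execution downward.

def layer0(expr):
    # The "assembler" level: actually evaluates, e.g. "2+3" -> 5.
    return eval(expr, {"__builtins__": {}})

def make_layer(below):
    # A higher-level "interpreter" that just passes the program down.
    def run(expr):
        translated = expr  # identity translation keeps the sketch short
        return below(translated)
    return run

layer1 = make_layer(layer0)
layer2 = make_layer(layer1)

print(layer2("2+3"))  # 5 -- works only because layer 0 is already grounded
```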

« Last Edit: April 13, 2023, 06:05:24 pm by ivan.moony »


WriterOfMinds

  • Trusty Member
  • Replicant
  • Posts: 616
    • WriterOfMinds Blog
Re: The Symbol Grounding Problem
« Reply #22 on: April 13, 2023, 09:14:45 pm »
Quote
Isn't the mind of a living being a kind of Chinese Room?

A living being's mind can actually understand Chinese, but the Chinese Room cannot - that's the overarching point of the thought experiment, and it's also the part of Searle's paper that I agree with. Information cannot pass from the interior of the Room to the exterior, whereas minds (whether biological or not) can communicate their internal state to others and can also learn from the communications of others. So, could you elaborate on why you think a mind is no different from the Chinese Room?

I would say the symbols of an assembly language stand for (are grounded in) literal electrical activity inside the computer, which produces changes in the bits stored in various locations. But the question of how to ground a programming language feels a little different from the question of how to ground typical natural languages.


ivan.moony

  • Trusty Member
  • Bishop
  • Posts: 1729
    • mind-child
Re: The Symbol Grounding Problem
« Reply #23 on: April 14, 2023, 12:06:16 am »
Quote
So, could you elaborate on why you think a mind is no different from the Chinese Room?

I mean that we can see colors, hear sounds, and everything else, but what we express to the outer world are patterns we learned in the past from observing our environment. It may be a stretch, but if we didn't see an action in the past, we can't make it happen. At some point, when our vocabulary is rich enough, we learn to do random checks on our own to learn our future actions. If this speculation is right, it means we can learn to learn from ourselves without any need for groundings.

I'm only superficially familiar with the Chinese Room experiment, but if the rules for producing output from input can take variables, I bet it is all doable without grounding - learning patterns which can direct future outputs. A memory of combinators may do the trick (combinators are functions that return combinations of their parameters without introducing any constants).

Thinking further, there may be a thing that tells a mind apart from a machine, but I believe we can't prove to anyone but ourselves that this thing exists. It is just that little Yes/No feeling in our minds that leaves the question open. We think of some idea (which may be simulated by heuristics, genetic combination, or something similar), and the feeling decides whether we will do it or not. So, what is that Yes/No feeling? Where does it come from? I bet it can be simulated by some algorithm, but experiencing that feeling may be the only thing that tells us apart from machines. Maybe even the different streams of that binary feeling over time give us the illusion of all the colors and sounds and other wonders of this world.

But other than that feeling, I believe our logical (or random) reactions can be simulated by an algorithm. Maybe that is the point of the Chinese Room experiment? And that brings me to my next question: under which circumstances can that real Yes/No feeling materialize in an artificially made physical construction?


WriterOfMinds

  • Trusty Member
  • Replicant
  • Posts: 616
    • WriterOfMinds Blog
Re: The Symbol Grounding Problem
« Reply #24 on: April 14, 2023, 07:03:24 pm »
Quote
But what we express to the outer world are patterns we learned in the past from observing our environment ... I bet it is all doable without grounding - learning patterns which can direct future outputs

Ah, but the patterns in this case are the groundings. When you build a model of the world from taking in sensory data, that's the thing that the symbols of your language get grounded in.
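As a minimal sketch of what "grounded in a sensory model" could mean (my own toy example with made-up readings; real grounding would involve far richer world models and goals):

```python
# The tokens "cold" and "hot" mean something to this system only because
# each is tied to a cluster of the system's own temperature readings.
readings = [2.1, 3.0, 2.7, 35.2, 36.0, 34.8]  # made-up sensor data

groundings = {
    "cold": [r for r in readings if r < 15],
    "hot":  [r for r in readings if r >= 15],
}

def denotes(symbol, new_reading):
    # A symbol applies when a new reading is near its grounding cluster.
    cluster = groundings[symbol]
    mean = sum(cluster) / len(cluster)
    return abs(new_reading - mean) < 10

print(denotes("hot", 33.0))   # True
print(denotes("cold", 33.0))  # False
```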

The Chinese Room's problem is that it only operates on symbols (meaningless in and of themselves) which are unconnected to anything but each other. It has no clue about the goal-relevant patterns in the world that those symbols represent when humans use them.

When you describe the Yes/No feeling, I am guessing you are talking about qualia, and possibly the intuitive sensation of free will - but the Chinese Room has bigger problems than just the absence of these things, in my opinion. I do think our reactions can be simulated by an algorithm, but the Chinese Room is not the right algorithm to do it.


DaltonG

  • Bumblebee
  • Posts: 46
Re: The Symbol Grounding Problem
« Reply #25 on: April 17, 2023, 03:22:29 am »
Quote
In Symbol Grounding Problem Part 3, I make a brief (but unfortunately necessary) detour into consciousness-related strangeness. Searle thinks it's impossible for any entity that runs a formal computer program and doesn't have wetware (real biological neurons) to have symbol grounding or "intentionality." Is he right? (I don't think so.)

https://writerofminds.blogspot.com/2023/04/sgp-part-iii-on-brain-physics-qualia.html

I'm in agreement with you. Be it bio-neurons, artificial neurons, or even rule-based representations, grounding is achieved through linked relationships to other representations.


WriterOfMinds

  • Trusty Member
  • Replicant
  • Posts: 616
    • WriterOfMinds Blog
Re: The Symbol Grounding Problem
« Reply #26 on: May 09, 2023, 04:40:06 pm »
Symbol Grounding Problem Part 4! This is the big one. I examine and respond to a variety of claims that embodiment contributes something essential to symbol grounding (and intelligence generally). Human minds may need bodies, but minds in general do not!

https://writerofminds.blogspot.com/2023/05/sgp-part-iv-does-grounding-demand.html


DaltonG

  • Bumblebee
  • Posts: 46
Re: The Symbol Grounding Problem
« Reply #27 on: May 10, 2023, 12:14:45 pm »
Quote
There are only lingering questions about qualia because, again, it is not obvious whether qualia are informational or physical (or perhaps even some secret third thing).

https://writerofminds.blogspot.com/2023/04/sgp-part-iii-on-brain-physics-qualia.html

The above quote comes from a post on your website, and I would like to address the subject of qualia.

Qualia (as I see them)

Qualia extend the interpretation of mere sensation to include perceptual variations over time that are associated with an event or experience. Qualia are interpretations of variations in detectable sensory signals, sampled consecutively over a finite period of time. Temporal discrimination is required, so they involve both bottom-up and top-down comparisons. New episodic memories are instantiated only when the experience is truly novel; otherwise, analogically similar experiences from the past get temporally updated.

Qualia are extra-dimensional sources of meaning that arise from the physical and are then translated into the informational. They become one of many contextual elements affecting the interpretation of experiences. Qualia that qualify as contextual elements, and impart meaningful influence on how an experience is interpreted, are, to some degree, learned. The perception and interpretation of qualia by a receiver are highly variable due to the influences of demeanor, expectation, attention, and focus (relative to active and passive perception).

The physical aspects of qualia are bidirectional, in that sensory input stimuli bear the signal information required to initiate cognitive interpretation while, at the same time, they can be tied to autonomic, unconscious responses which feed back through other modalities to reach consciousness and provide a subjective experience.

Phonemes are a form of qualia and are subject to age-dependent developmental windows. I've seen it reported that if a newborn kitten is prevented from seeing anything for an extended period of time after opening its eyes, it may remain blind - another developmental window.

When it comes to the qualia that accompany tactile sensations, I have a personal hypothesis concerning the pleasure that can be experienced from back scratching. I've always thought of a back scratch as the next best thing to sexual pleasure. However, when I got married, I was dismayed to discover that giving a back scratch to my wife was met with indifference. If she had an itch: scratch-scratch-scratch, and she was satisfied, wanting no more. My guess was that that kind of affectionate exchange between parent and child wasn't something that went on in her family, and that she never acquired the association between it and pleasure. I suppose I could be wrong about this, and it could be related to a genetic difference, but by the same token, the pleasure sensation is a form of qualia and therefore may be subject to a developmental window. All in all, qualia may have a somewhat fixed window in which the fetus and neonate can allocate the neural resources needed to support them.

The connection between qualia and subjective experience implies that the cognitive response to qualia makes autonomic connections with the body. Once upon a time, when I was a whole lot younger, I had normal hearing. When listening to music, some compositions would trigger a tingling sensation that went up my neck and into my scalp. Since I haven't lost all my hearing, the triggering of those sensations must have been tied to certain regions of my perceptual range that no longer exist, for it's been a long, long time since I've had such a subjective experience. As far as I can recollect, the loss is concurrent with my hearing degradation. The point is that the propagation of qualia originates from physical sources, for I assume that the frequencies missing from my perception still exist in nature.

Language and communication have many dimensions of qualia in the form of prosody, or vocal inflections. Of all the non-symbolic sources of context, the qualia of prosody may be the most influential and important, for they can transpose the literal interpretation by 180 degrees, into the opposite meaning.

All in all, qualia are one dimension of influence that allows us to split hairs and select the correct meaning embodied in an experience.


WriterOfMinds

  • Trusty Member
  • Replicant
  • Posts: 616
    • WriterOfMinds Blog
Re: The Symbol Grounding Problem
« Reply #28 on: May 10, 2023, 03:03:03 pm »
Quote
Phonemes are a form of qualia
Language and communication have many dimensions of qualia in the form of prosody, or vocal inflections.
All in all, qualia are one dimension of influence that allows us to split hairs and select the correct meaning embodied in an experience.

Nice post, but I don't think you and I are talking about the same thing when we use the word "qualia."


DaltonG

  • Bumblebee
  • Posts: 46
Re: The Symbol Grounding Problem
« Reply #29 on: May 11, 2023, 06:56:53 pm »
Quote
Phonemes are a form of qualia
Language and communication have many dimensions of qualia in the form of prosody, or vocal inflections.
All in all, qualia are one dimension of influence that allows us to split hairs and select the correct meaning embodied in an experience.

Nice post, but I don't think you and I are talking about the same thing when we use the word "qualia."

Last comment on Qualia

The philosophical and so-called authoritative sources charged with the responsibility of defining terms are in a state of flux and discord with regard to the meaning of "qualia" (and many other terms). This means that it comes down to what you decide a term means, whether you can make that meaning work, and whether it satisfies your requirements or needs. Putting it to work is the acid test, the trial by fire, the proof of the pudding. Any time there's controversy or disagreement between authoritative sources, it's better to discount them entirely rather than take sides. It's time to go it alone and figure it out for yourself. Truth is always relative to context and the perspective from which it is viewed.

Like you (WOM), I often find myself disagreeing with and rejecting claims from the center aisle. A finely honed exception detector is quite helpful in separating the wheat from the chaff.

 

