The Symbol Grounding Problem

WriterOfMinds
The Symbol Grounding Problem
« on: February 09, 2023, 04:52:18 pm »
I wrote an article introducing the Symbol Grounding Problem, a classic AI issue. https://writerofminds.blogspot.com/2023/02/sgp-part-i-description-of-symbol.html

There will be further parts discussing my opinions on the problem and how I'm hoping to solve it myself, which I will add to this thread.


MagnusWootton
Re: The Symbol Grounding Problem
« Reply #1 on: February 10, 2023, 05:27:59 pm »
I hope you get your major breakthroughs. I imagine you will, even though the future is uncertain, but pretty much I don't think you're stupid. :2funny:

ivan.moony
Re: The Symbol Grounding Problem
« Reply #2 on: February 11, 2023, 06:55:23 pm »
AIs can go pretty far without actual grounding simply by copying and mimicking human conversations. Humans have some intersensory grounding for words, and if an AI just copies the situations in which those words are used, the words will make sense to anyone who possesses similar original intersensory grounding.

But there may be a need to question actual human grounding in situations where learning the referent is unethical. Again, such grounding may be avoided by choosing other, sufficiently ethical human speech corpora.

Will machines ever be capable of sharing a real grounded point of view with humans? Or will they just be copying the symbols humans usually use in similar situations?

I guess the answer could lie in the question: "How similar to a human can a machine be?" Or maybe logic between symbols is all that exists, meaning certain symbol connections always lead to similar groundings. Maybe that is how the famous conversation with Google LaMDA revealed that LaMDA has a perception of a soul as something bright floating in a dark medium, experiencing the world around it.

And another question: we have two polarities, the soul and the world around us, with our bodies in between. Which polarity's grounding really matters, the soul or the world around? If it is the former, we are on a pretty good path to being understood by machines (as the LaMDA experiment shows). If it is the latter, I doubt we will ever see the outer world through the same glasses as machines.
« Last Edit: February 11, 2023, 07:17:58 pm by ivan.moony »

MikeB
Re: The Symbol Grounding Problem
« Reply #3 on: February 12, 2023, 07:16:22 am »
The origin of anything is survival. Yourself, socially, in your local area, beyond that...

The first written language only had symbols such as: food, water, shelter, city, bull/bullish, army, directions. In most languages today, most people will understand "hello, hi, yes, no, excuse me, one moment".

If a robot says/writes these things, they are more grounded than most improv actors, politicians, screenwriters, comedians...

Words are supposed to relate to the survival of yourself, and of others socially, in some respect...

DaltonG
Re: The Symbol Grounding Problem
« Reply #4 on: February 12, 2023, 06:01:19 pm »
Symbol Grounding Problem

There appear to be a number of slightly different perspectives on what is meant by the "Symbol Grounding Problem." I'm going to assume that what you are interested in is how a symbol acquires meaning. The first thing that needs to be done is to outline the criteria required for something to qualify as a symbol. From this point on, I'm going to refer to a symbol as a symbolic representation, so that the concept can be applied to both auditory/linguistic symbols and visual/image symbols; we can then talk about a symbolic representation as a collection of features originating in either modality.

Mentalese and Higher Order Linguistic Processing

Before we get very far into this, it may be important to note that symbolic representations based on Mentalese, and those based on language, may operate concurrently. It is said that the more primitive Mentalese form (the mode of thinking and symbolic patterning found in lower-order animals and neonates) is sidelined as children acquire and adopt language, but I don't think that is entirely true. Exposure to novel experience may still draft the innate, fundamental neural architecture. I mention this because genetically based priors tend to be conserved throughout the metazoan kingdom, from local and cognitive innate reflexes all the way to higher-order complex representations and variable responses based on context.

I know a lot of you are primarily concerned with chatbots, but it's not much of an extension to believe that at some point in the future you will be concerned with robotics, and navigation may depend on the more primitive thought process of Mentalese. Subjective experience probably depends on Mentalese as well.

What does it take to qualify a set as being a symbolic representation?

This requires characterizing the symbol by linking it to its properties, dependencies, and associations. Terrence Deacon's "The Symbolic Species" claims that there are three levels of representation in long-term memory (LTM): Iconic, Indexical, and Symbolic. Do those levels actually exist? I think so. Even if they don't, it's a convenient way to think about LTMs.

Iconic Level

As I see it, the Iconic level consists of a collection of neurons commonly referred to as feature collectors. This level gets primed and collects raw data from the sensorium. No meaning exists at this level.

Indexical Level

Once again, as I see it, the winning feature collector cell primes 6 banks of 4 cells isolated from one another by 6 subnets representing the core emotions and a gradient of affect levels (intensities).

Both the Iconic and Indexical levels are common to all mammalian species.

Symbolic Level

It gets muddy here because symbolic representations are partly innate, supporting both recognition and production of sound, while humanity has adapted visual iconic representations of the sounds that make up a written language. In other words, symbolic representations have become multimodal and include the non-verbal motor system as well, for production.
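
As a toy sketch only (invented names, not a worked-out implementation), the three levels might be laid out roughly like this: iconic entries holding raw features, indexical entries tagging them with a core emotion and affect intensity, and symbolic entries binding multimodal tokens to them.

Code
from dataclasses import dataclass, field

# Toy structures illustrating Deacon's three levels of representation in LTM.
# All names and fields here are invented for illustration only.

@dataclass
class IconicEntry:
    """Raw feature collection from the sensorium; no meaning at this level."""
    modality: str            # e.g. "auditory" or "visual"
    features: list[float]    # raw values gathered by "feature collector" cells

@dataclass
class IndexicalEntry:
    """Ties an iconic entry to a core emotion and an affect intensity."""
    icon: IconicEntry
    emotion: str             # one of a small set of core emotions
    affect_level: int        # intensity gradient, e.g. 0..3

@dataclass
class SymbolicEntry:
    """Binds a multimodal token (spoken, written, motor) to the lower levels."""
    token: str                                            # e.g. the word "rose"
    modalities: dict[str, IconicEntry] = field(default_factory=dict)
    indexicals: list[IndexicalEntry] = field(default_factory=list)

# Example: the word "rose" bound to a visual icon and a mild positive affect.
rose_icon = IconicEntry(modality="visual", features=[0.8, 0.1, 0.4])
rose_index = IndexicalEntry(icon=rose_icon, emotion="joy", affect_level=1)
rose_symbol = SymbolicEntry(token="rose",
                            modalities={"visual": rose_icon},
                            indexicals=[rose_index])
print(rose_symbol.token, rose_symbol.indexicals[0].emotion)   # rose joy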

Grounding or Meaning

Grounding a symbolic representation with meaning comes as an aftereffect: assimilation. When a symbol has acquired grounding, it's said to have become crystallized. Grounding occurs post-instantiation via the establishment of associative links between related memories. This may be what dreaming is all about. The funny thing about associative linking is (as far as I can tell) that the links project from, and are received by, the feature collector level. When representations acquire enough associative links, they are called declarative memories and are highly resistant to change. Associative linking ties concepts together, adding depth, and thereby provides understanding and a form of meaning. The actual meaning depends on how it has been interpreted, and that's context-dependent.
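
A minimal sketch of that linking idea (invented names, arbitrary threshold, purely illustrative):

Code
from collections import defaultdict

# Toy illustration of grounding as associative linking between memories.
# A memory that collects enough links is treated as "declarative"
# (crystallized) and resistant to change. The threshold is arbitrary.

DECLARATIVE_THRESHOLD = 3

class AssociativeMemory:
    def __init__(self):
        self.links = defaultdict(set)   # memory id -> linked memory ids

    def associate(self, a: str, b: str) -> None:
        """Create a bidirectional associative link between two memories."""
        self.links[a].add(b)
        self.links[b].add(a)

    def is_declarative(self, memory_id: str) -> bool:
        """A memory with many associations counts as crystallized here."""
        return len(self.links[memory_id]) >= DECLARATIVE_THRESHOLD

mem = AssociativeMemory()
# "rose" becomes grounded as it accumulates links to related memories.
for related in ["flower", "thorn", "red", "garden-smell"]:
    mem.associate("rose", related)

print(mem.is_declarative("rose"))    # True: enough links to be crystallized
print(mem.is_declarative("thorn"))   # False: only one link so far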

Context

When you think about what constitutes context, you soon discover that almost everything looks like a contextual influence. I believe Mother Nature has provided us with a means of compressing contextual influences (which are just other memories) by assigning those peripheral elements to neurons biased by core emotional subnets and affect levels. Those peripheral elements (influential memories) compete, based on hierarchical emotional type, intensity, and associative bias, to provide a well-developed meaning.

As I see it.

Well done, and an approach different from mine.

Quote
Thus the Symbol Grounding Problem ignites two debates in the AI research community. Do we really need to worry about it? And if so, how can we solve it?

If you do not ground symbols with meaning, you are left with mimicry and no cognitive editing. That's the situation Microsoft found itself in when it had to disable its chatbot for racist remarks.

MagnusWootton
Re: The Symbol Grounding Problem
« Reply #5 on: February 12, 2023, 06:20:09 pm »
The ground symbol is the primary symbol, where we don't circularly define words with each other; they actually have a proper starting position, the beginning of definition itself.
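
A quick toy example of what I mean (made-up mini-dictionary, just an illustration): if every word is only defined by other words, every lookup goes in circles, unless some symbols are marked as grounded outside the dictionary.

Code
# Toy mini-dictionary: each word is defined only by other words.
# Without some ground symbols defined outside the dictionary (e.g. by
# pointing at the world), every definition chain is circular.

definitions = {
    "rose":     ["flower", "plant"],
    "flower":   ["plant", "bloom"],
    "bloom":    ["flower"],        # circular: bloom -> flower -> bloom
    "plant":    ["organism"],
    "organism": ["plant"],         # circular again
}

ground_symbols = set()             # starts empty: nothing is grounded yet

def is_grounded(word, seen=None):
    """A word is grounded if its definition chain reaches a ground symbol
    before looping back on itself."""
    if word in ground_symbols:
        return True
    seen = seen or set()
    if word in seen or word not in definitions:
        return False               # hit a cycle or an undefined word
    return any(is_grounded(d, seen | {word}) for d in definitions[word])

print(is_grounded("rose"))         # False: everything circles back
ground_symbols.add("plant")        # pretend "plant" gets grounded in perception
print(is_grounded("rose"))         # True: the chain now bottoms out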

Don Patrick
Re: The Symbol Grounding Problem
« Reply #6 on: February 13, 2023, 09:18:14 am »
Quote
Words are just collections of sounds (or squiggles on a surface), in no particular order, with no rules by which they were chosen. They are in no way *inherently* tied to or derived from the referents to which they point. "A rose by any other name would smell as sweet." And this is why the Symbol Grounding Problem is a Problem.
I am a little surprised to read this. How humans use words does follow very particular orders and rules. There are only certain orders in which they make sense and represent reality correctly, if crudely. Food for thought: When you dig deeper, words are not symbols. They are a sequence of letters, which in a computer is a sequence of numbers 0 to 255, deeper still a sequence of 1's and 0's, which is physically a group of electrical impulses or states.
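
A tiny illustration of that layering (nothing more than what Python already shows you):

Code
# The word "rose" as it exists inside a computer: a sequence of letters,
# then of byte values 0..255, then of bits.
word = "rose"
byte_values = list(word.encode("ascii"))            # [114, 111, 115, 101]
bit_strings = [format(b, "08b") for b in byte_values]

print(byte_values)    # [114, 111, 115, 101]
print(bit_strings)    # ['01110010', '01101111', '01110011', '01100101']
# Nothing in these numbers connects them to an actual rose; that link
# exists only by convention among the people (or programs) using them.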

WriterOfMinds
Re: The Symbol Grounding Problem
« Reply #7 on: February 13, 2023, 06:23:51 pm »
Quote
How humans use words does follow very particular orders and rules. There are only certain orders in which they make sense and represent reality correctly, if crudely.

Yes, I quite agree. I talked about syntax rules elsewhere in the article, and semantics impose added rules on the choice and order of words if you don't want to write any "nonsense" sentences.
Here, I was talking about how words themselves are created and assigned to referents. There is nothing about a rose that requires it to be represented by the letters 'r', 'o', 's', 'e' in that order. What I was getting at in that paragraph is what's expressed in the quote from Steels at the bottom:

"Anything can be a representation of anything by fiat. For example, a pen can be a representation of a boat or a person or upward movement. A broomstrick[sic] can be a representation of a hobby horse. The magic of representations happens because one person decides to establish that x is a representation for y, and others agree with this or accept that this representational relation holds."

Maybe the confusion just lies in the way this sentence is written:
"Words are just collections of sounds (or squiggles on a surface), in no particular order, with no rules by which they were chosen."

The second half of the sentence refers to the sounds or squiggles, i.e. the letters or other constituents in a word, not the words.

Quote
When you dig deeper, words are not symbols. They are a sequence of letters, which in a computer is a sequence of numbers 0 to 255, deeper still a sequence of 1's and 0's, which is physically a group of electrical impulses or states.

I think words are symbols in the sense being used here (sometimes groups of words are also symbols). What matters is not the fact that they can be broken down into smaller parts, but their connection to referents. "Rose" is a representation of a distinct category of thing, and is therefore a symbol for purposes of the Symbol Grounding Problem.
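
To put it another way (a deliberately trivial sketch of my own, not anything from Acuitas): whether a token counts as a symbol here depends on whether it has been mapped to a referent by convention, not on how it decomposes into smaller parts.

Code
# A symbol, for purposes of the Symbol Grounding Problem, is anything that
# stands for a referent by agreed convention. The mapping below is arbitrary;
# any other token could serve equally well for the same category of thing.
symbol_to_referent = {
    "rose": "a category of flowering plant with thorny stems",
    "Rosa": "the same category, under its Latin name",   # different token, same referent
}

def is_symbol(token: str) -> bool:
    """A token is a symbol here if it has been assigned a referent."""
    return token in symbol_to_referent

print(is_symbol("rose"))    # True: it points at something
print(is_symbol("esor"))    # False: same letters, no agreed referent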

If you still disagree, then what do you think a symbol is?
« Last Edit: February 13, 2023, 07:25:37 pm by WriterOfMinds »

WriterOfMinds
Re: The Symbol Grounding Problem
« Reply #8 on: February 13, 2023, 06:34:59 pm »
Mentalese and Higher Order Linguistic Processing

Could you expand on "Mentalese"? I assume this is a "language of thought" approach, but I'm not sure if you're referring to a specific one?

WriterOfMinds
Re: The Symbol Grounding Problem
« Reply #9 on: February 13, 2023, 07:43:17 pm »
Quote
And another question: we have two polarities, the soul and the world around us, with our bodies in between. Which polarity's grounding really matters, the soul or the world around? If it is the former, we are on a pretty good path to being understood by machines (as the LaMDA experiment shows). If it is the latter, I doubt we will ever see the outer world through the same glasses as machines.

I would say that LaMDA has no grounding to things either internal or external. It cannot accurately describe its own structure or state from introspection. It's a human-text mimicry engine, and the LaMDA experiments did not demonstrate understanding of any kind.

DaltonG
Re: The Symbol Grounding Problem
« Reply #10 on: February 14, 2023, 12:48:27 pm »
Mentalese and Higher Order Linguistic Processing

Could you expand on "Mentalese"? I assume this is a "language of thought" approach, but I'm not sure if you're referring to a specific one?

Mentalese

Wikipedia: The language of thought hypothesis, sometimes known as thought ordered mental expression, is a view in linguistics, philosophy of mind and cognitive science, forwarded by American philosopher Jerry Fodor. It describes the nature of thought as possessing "language-like" or compositional structure.

Steven Pinker: The hypothetical language of thought, or representation of concepts and propositions in the brain, in which ideas, including the meanings of words and sentences, are couched.

The above conform to the fashionable perspective on a definition of Mentalese. In my opinion, as long as they include references to words, language, and sentences, they go too far. If they were to substitute "mode of internal communication" for "language" and treat impulse as the mode of expression, I'd buy into it.

I see Mentalese as arising from primitive architectural predispositions that exist as cognitive reflexes. Quite a lot of intelligence can be embodied in certain types of architectural features: features like gradient detectors, together with comparisons of top-down against bottom-up summations that yield discrimination as a byproduct. The deterministic allocation of neural resources for the collection and retention of gradient/comparison products (LTMs) can endow an AI with a lot of useful primitive intelligence. The range of allocated resources supporting the gradients provides a means of deducing a potential for more than what has been experienced. I've compiled a rather sizeable and surprising list of what I now call Universal Conceptual Constants, based on gradient detector arrays for all the sensory modalities and the qualia we subjectively perceive.
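
As a very rough, invented illustration of the kind of primitive I mean (not a model of any real circuit): a one-dimensional gradient detector over a sensory array, combined with a comparison of a top-down expectation against the bottom-up signal, already yields a crude discrimination signal as a byproduct.

Code
# Toy illustration: a 1-D "gradient detector" over a sensory array, plus a
# top-down vs. bottom-up comparison whose mismatch acts as a crude
# discrimination signal. Values are arbitrary.

def gradients(signal):
    """Local differences between neighbouring sensor values."""
    return [b - a for a, b in zip(signal, signal[1:])]

def mismatch(expected, observed):
    """Summed absolute difference between a top-down expectation and the
    bottom-up signal; a large value flags something worth discriminating."""
    return sum(abs(e - o) for e, o in zip(expected, observed))

bottom_up = [0.1, 0.1, 0.9, 0.9, 0.2]   # an edge in the sensory input
top_down  = [0.1, 0.1, 0.1, 0.1, 0.1]   # expectation: nothing there

print(gradients(bottom_up))             # roughly [0.0, 0.8, 0.0, -0.7]: the edge shows up as a spike
print(mismatch(top_down, bottom_up))    # roughly 1.7: expectation violated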

Primitive vocalizations (laughing, crying, screaming, barking, growling, whimpering, purring, etc.) should not be construed as any form of language. They are simply innate motor reflexes initiated by biological drives in response to appropriate stimuli or conditions: a rudimentary vocal repertoire predating the evolutionary acquisition of higher communication.

The possible thought processes that hardware/wetware can directly support (constrained by biology) are limited to reasoning by comparison and reasoning by extension. All the other forms of reasoning, like induction, deduction, interpolation, and extrapolation, are procedural, require symbolic representations, and have to be learned. Analogical reasoning is quite natural and arises as a byproduct of the architecture, because feature sets have some features in common.

Mentalese relies heavily on Iconic-level representations, pattern matches, and either innate or learned cognitively generated reflexive responses. The reflexive response is propagated from Indexical-level neurons that conform to the dominant context. It's all in the innate wiring or in plastic adaptations acquired through experience.

Among the innate behavioral primitives a species is endowed with is foraging, or seeking, behavior. Such behavior arises from biological drive stress, and seeking serves to relieve the stress. Context may vary the target of such a fundamental response to include stealth when stalking, running when fleeing, jumping when startled, or fighting when threatened. The role that context plays in the way behavioral primitives get expressed should not be underestimated, for experiences need to be interpreted and choices made. Choices made with regard to context are usually successful, while choices made without regard to context are either impotent or disastrous. Habituation can, and often does, overshadow the influence of context in choosing and provoking responses, due to success in the past; but things can change, and habituation can lead us into an endless loop of failures (stuck in a rut).


MikeB
Re: The Symbol Grounding Problem
« Reply #11 on: February 15, 2023, 12:34:16 pm »
You guys should try watching an old Disney movie with the sound off.

Some characters are about burning power, some about light touch/senses, some about emotional intelligence/guidance, some about logic/raw components.

They all posture so strongly in character that you can guess the words they say (and their attitude) and be more or less right.

If you transfer that to real people... The more you posture in one way, the more language you learn for that posture, and the worse you become at doing other things...

A good group of social people knows that a variety of postures adds to the survival of the social group... versus one person who claims only logical people are allowed, or only burning power is allowed, etc.

So an innate posture creates the language.

HS
Re: The Symbol Grounding Problem
« Reply #12 on: February 17, 2023, 05:57:43 pm »
The fact that symbols already point to their referents in simple language spaces like dictionaries, yet don't produce real meaning, implies different types of grounding: links within reference frames, which do not create meaning, and links between reference frames, which might.

Because Acuitas only deals with text, maybe the symbols he works with could be thought of as a type of sensory pathway. After all, what we perceive as "real" sensory referents can be thought of as neural symbols for even "realer" referents.

So, if you created a new reference frame where you specified a referent's effects on text, then a symbol associated with that referent might point to an objective meaning for Acuitas. It seems like subjective meaning might also be possible if he applied value judgments to the symbolized effects.
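
Something like this, maybe (a toy sketch with made-up entries, not how Acuitas actually works): a dictionary-style frame only links symbols to other symbols, while a second frame records a referent's observed effects on the text stream itself, and candidate meaning comes from the links between the two.

Code
# Toy sketch: two "reference frames" for a text-only agent.
# Frame 1 links symbols to other symbols (dictionary-style); following these
# links never leaves the symbol space. Frame 2 records a referent's observed
# effects on the text stream, which is the agent's only "sensory" world.

within_frame = {                     # symbol -> defining symbols
    "user": ["person", "speaker"],
    "person": ["human", "user"],
}

between_frames = {                   # symbol -> observed effects on text
    "user": {
        "produces_input": True,            # text appears when a user is present
        "typical_reply_delay_sec": 5.0,    # made-up observation
    },
}

def has_cross_frame_grounding(symbol: str) -> bool:
    """A symbol linked to observed effects on text has a candidate grounding
    that purely dictionary-style links do not provide."""
    return symbol in between_frames

print(has_cross_frame_grounding("user"))     # True: tied to observable effects
print(has_cross_frame_grounding("person"))   # False: only defined by other symbols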

That's my best guess anyways… Based on what I know so far.
« Last Edit: February 17, 2023, 07:42:57 pm by HS »

WriterOfMinds
Re: The Symbol Grounding Problem
« Reply #13 on: February 17, 2023, 08:29:29 pm »
Quote
So, if you created a new reference frame where you specified a referent's effects on text, then a symbol associated with that referent might point to an objective meaning for Acuitas. It seems like subjective meaning might also be possible if he applied value judgments to the symbolized effects.

I won't comment heavily on this yet because it's getting into the topics of future blog posts, but I suspect we're thinking along some of the same lines.

MagnusWootton
Re: The Symbol Grounding Problem
« Reply #14 on: February 19, 2023, 08:26:16 am »
Your symbols are only as good as what you do with them.

If your robot is solving spatial puzzles, that's the level of detail that was developed in the machine.

That's the area I am working on, but maybe there is a cool way to automate AI where you don't have to hand-set everything and it happens by itself. I mean more so than what's happening with Acuitas; to me (I'm possibly wrong) it's a very manual job, and you're writing how it thinks line by line, method by method.

If you have an automatic system, it literally builds the whole complexity of its model itself, just from ordinary sensor input, and then no methods even had to be written for it at all.

That's what I'm looking to do for my big achievement, but the methods I actually know from my own thinking would just give an AI the ability to solve spatial puzzles.

Like doing chores around the house, and organizing a topology to solve a problem, which has been done a lot in the past, from what I've read.

 

