The Symbol Grounding Problem

  • 40 Replies
  • 32083 Views

ivan.moony

Re: The Symbol Grounding Problem
« Reply #30 on: May 23, 2023, 03:37:16 pm »
I agree that all senses (even textual input/output) fall into the same category, which can serve as a base for symbol grounding.

How do you feel about the following theory: an agent reads input, and this is something that happens by itself. Then the agent produces output in a process of intelligently adjusting its future inputs according to its strivings and plans. How important is symbol grounding in this situation?


WriterOfMinds

Re: The Symbol Grounding Problem
« Reply #31 on: May 23, 2023, 04:02:09 pm »
... an agent reads input, and this is something that happens by itself. Then the agent produces output in a process of intelligently adjusting its future inputs according to its strivings and plans. How important is symbol grounding in this situation?

Are we assuming that the input and the output are symbol streams? Then I think it's quite important. Without grounding, the learnable relationships between input and output can only be shallow statistical models. To get a "deep" model that considers the *mechanisms* by which certain outputs lead to certain inputs, I think you need grounding, so that the agent can actually reason about the systems the symbols represent. Grounding is also necessary to determine how the inputs relate to the agent's strivings and plans ... assuming those strivings are for something non-symbolic, e.g. a particular internal state for the agent, rather than just "I want to see these symbols."
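The contrast between a shallow statistical model and a grounded one can be sketched with a toy example. Everything here is hypothetical — the symbols, the little world model, and both classes are illustrations, not anyone's actual system:

```python
from collections import Counter, defaultdict

# Shallow statistical model: predicts the next symbol purely from
# co-occurrence counts, with no notion of what the symbols denote.
class NgramModel:
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, prev, nxt):
        self.counts[prev][nxt] += 1

    def predict(self, prev):
        follow = self.counts[prev]
        return follow.most_common(1)[0][0] if follow else None

# Grounded model: each symbol is bound to a referent in a world model,
# so the agent can simulate the *mechanism* linking output to input.
class GroundedModel:
    def __init__(self, referents, dynamics):
        self.referents = referents    # symbol -> world state it denotes
        self.dynamics = dynamics      # world transition function

    def predict(self, action_symbol):
        state = self.referents[action_symbol]
        next_state = self.dynamics(state)        # simulate the mechanism
        # Invert the grounding map to report the expected input symbol.
        inverse = {v: k for k, v in self.referents.items()}
        return inverse.get(next_state)

ngram = NgramModel()
ngram.observe("push", "door_open")
print(ngram.predict("push"))     # "door_open" - correlation only

grounded = GroundedModel(
    referents={"push": "force_applied", "door_open": "door_ajar"},
    dynamics=lambda s: "door_ajar" if s == "force_applied" else s,
)
print(grounded.predict("push"))  # "door_open" - derived via the mechanism
```

Both models emit the same symbol here, but only the second can answer *why* that input follows that output.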

Human text isn't really "a thing in itself," it's a collection of abstractions that were made to represent other things. So trying to treat it like "a thing in itself" will always end up missing something essential.


ivan.moony

Re: The Symbol Grounding Problem
« Reply #32 on: May 23, 2023, 10:43:41 pm »
To get a "deep" model that considers the *mechanisms* by which certain outputs lead to certain inputs, I think you need grounding, ...

The "deep" model you are talking about is something that could be used to implement instincts.

The GPT-X crew has real trouble with their creation's personality, and they are trying to solve it without grounding. What the GPT developers do these days is ask natural-language questions about the ethics of a candidate output and demand an answer in yes/no form. If the answer is yes, the output is forwarded to the end user. But this method is not very successful. Grounded symbols to manipulate seem like a more promising way to tame the beast.
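The yes/no filtering scheme described above could be sketched roughly as follows. This is an assumed shape, not any vendor's actual pipeline; `ask_model` is a stand-in stub, not a real API:

```python
from typing import Optional

# Hypothetical judge: a real system would query the language model here.
# The "weather" heuristic is a fake stand-in so the sketch runs.
def ask_model(question: str) -> str:
    return "yes" if "weather" in question else "no"

def moderated_reply(candidate_output: str) -> Optional[str]:
    question = ('Is the following reply ethical to show a user? '
                f'Answer yes or no: "{candidate_output}"')
    verdict = ask_model(question)
    # Forward the output to the end user only on an explicit "yes".
    return candidate_output if verdict.strip().lower() == "yes" else None

print(moderated_reply("The weather is sunny."))  # forwarded unchanged
print(moderated_reply("Some harmful text."))     # None - blocked
```

The weakness ivan.moony points at is visible even in the sketch: the judge sees only symbols about symbols, with no grounded model of what the output would actually do to a reader.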

I have to admit, the more I read about symbol grounding, the more abstract the notion becomes in my head. At the start I had some idea about it, but now I no longer know how to describe it in a sentence or two, as all of it reduces to the end effect, and that is the future input. And really, what difference does it make which method we use, if the only important thing is the future input?

Right now I only see symbol grounding as a possible tool for implementing some instincts, but somehow it should carry more weight than I'm currently aware of.

[Edit]
You are right, we don't want a black box scrambling input into output by raw statistics. We want some anchors in between, because we want to fully understand the process so we can do the right thing. And symbol groundings might represent those anchors. It is still somewhat abstract to me, but I hope I'll grasp it eventually.
« Last Edit: May 23, 2023, 11:16:15 pm by ivan.moony »


ivan.moony

Re: The Symbol Grounding Problem
« Reply #33 on: June 01, 2023, 04:26:48 pm »
Isn't the symbol grounding problem exactly the same theme as the matter of semantics?


WriterOfMinds

Re: The Symbol Grounding Problem
« Reply #34 on: June 01, 2023, 07:18:48 pm »
Isn't the symbol grounding problem exactly the same theme as the matter of semantics?

From Part I of my essay:

"Two more terms that often come up in connection with the SGP are "semantics" and "syntax." "Semantics" is a formal term for meaning or the study thereof, while "syntax" refers to the structural or manipulative rules that are part of a symbol system. It is notable that syntax does not need semantics; it is based on the forms of the symbols themselves, so manipulations that follow the rules can be carried out without knowledge of the symbols' meanings. However, syntax without semantics is arguably not very useful, as it only serves to transform one string of gibberish into another."

So they are certainly closely related. I think you could say the Symbol Grounding Problem is the problem of how to give an artificial intelligence access to semantics.
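The essay's point that syntax needs no semantics can be shown with a toy rewrite system: the rules below operate on pure form, so they run perfectly well on meaningless tokens. A sketch with made-up symbols:

```python
# Syntax without semantics: these rules manipulate symbols purely by
# their form. They work identically whether the tokens mean anything
# or not - which is exactly why the result is still gibberish.
RULES = {
    ("blorp", "zag"): ("zag", "blorp"),   # swap rule, defined on form alone
    ("zag", "mip"): ("mip",),             # reduction rule
}

def rewrite_once(tokens):
    for i in range(len(tokens) - 1):
        pair = (tokens[i], tokens[i + 1])
        if pair in RULES:
            return tokens[:i] + list(RULES[pair]) + tokens[i + 2:]
    return tokens

print(rewrite_once(["blorp", "zag", "mip"]))
# -> ['zag', 'blorp', 'mip']: a perfectly valid derivation, still gibberish
```

The derivation is rule-correct, yet without a grounding map from tokens to referents it transforms one string of gibberish into another, as the essay says.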


MikeB

Re: The Symbol Grounding Problem
« Reply #35 on: June 04, 2023, 11:09:17 am »
It'll always be statistical guessing and "more power".

AI as a simulation of the mind doesn't relate to statistical guessing, except in the case of epiphanies: solving an unknown problem by going into a hyperbaric chamber (double oxygen) and searching more of your mind at once.

Pre-trained statistical models are the same but evolved and condensed.

Where does it relate to a human or animal mind...


WriterOfMinds

Re: The Symbol Grounding Problem
« Reply #36 on: June 08, 2023, 03:18:05 pm »
Getting out of the argument zone now and starting to look at methods: *how* might a disembodied system achieve symbol grounding? Part 5: https://writerofminds.blogspot.com/2023/06/sgp-part-v-symbol-grounding-for.html


ivan.moony

Re: The Symbol Grounding Problem
« Reply #37 on: June 18, 2023, 12:30:10 pm »
Writer, in one of your posts you questioned whether an AI needs to have a body to be grounded. I just remembered something I was reading about. There were a few experiments.

In one experiment, a man wore mirror glasses that turned the picture upside-down. After a day or two of the experiment, the man had adjusted to the extent that he said it had become normal to him, and it no longer caused him any difficulty in doing everyday tasks.

Taking it further, the point should be: whatever senses are attached to the brain, the brain adjusts and begins dealing with them as if they were natural. Hence, whatever input/output is attached to a brain, it would be okay; it could still exhibit intelligent behavior.
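That adaptation claim can be illustrated with a toy learner: the same algorithm copes with a normal sensor and an "upside-down" one, because all it ever learns is whichever observation-to-action mapping earns reward. The sensor functions and reward rule are my own contrivances for the sketch:

```python
import random

random.seed(0)  # deterministic run for the sketch

def train(sensor, episodes=200):
    """Learn a policy mapping observations to actions by trial and reward.
    `sensor` maps the true world state {0, 1} to an observation."""
    policy = {}
    for _ in range(episodes):
        state = random.randint(0, 1)
        obs = sensor(state)
        action = policy.get(obs, random.randint(0, 1))
        if action == state:          # reward: the action matched the world
            policy[obs] = action     # keep whatever worked
    return policy

normal = train(lambda s: s)          # ordinary sensor
flipped = train(lambda s: 1 - s)     # "mirror glasses" sensor
# Both policies end up equally competent; they just wire the
# observations differently, like the adapted brain in the experiment.
print(normal, flipped)
```

The learner never knows (or cares) which encoding it was given, which is the point being made about the brain.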


MagnusWootton

Re: The Symbol Grounding Problem
« Reply #38 on: June 19, 2023, 01:25:02 pm »
A robot can't control its input; the input is not actually affected by the robot. It makes things simpler that way.

The robot controls its output only.

I know there is some feedback going on from output to input, but you don't have to worry about that if you just want to get things done.
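That output-only view can be sketched as a plain sense-act loop: the controller picks outputs, the inputs "just arrive", and the feedback actually lives in the environment, which the controller never models. All names here are made up for illustration:

```python
# Toy sense-act loop matching the description above. The environment
# closes the output-to-input feedback loop; the controller ignores it.
def environment(action: int) -> int:
    # The world reacts to the action - feedback the robot never models.
    return action * 2

def controller(observation: int) -> int:
    # The robot controls only its output.
    return observation + 1

obs = 0
trace = []
for _ in range(3):
    act = controller(obs)    # robot chooses its output
    obs = environment(act)   # the next input "just happens"
    trace.append((act, obs))
print(trace)  # [(1, 2), (3, 6), (7, 14)]
```

The simplification is visible: the loop works, but nothing in `controller` represents the fact that its own outputs shaped the inputs it received.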


WriterOfMinds

Re: The Symbol Grounding Problem
« Reply #39 on: July 06, 2023, 06:45:56 pm »
Here we are: the final installment of "The Symbol Grounding Problem," in which I finally get to talk about how I'm doing/planning to do grounding in Acuitas. https://writerofminds.blogspot.com/2023/07/sgp-part-vi-acuitas-and-symbol.html

Thanks everybody for reading and commenting so far!!


ivan.moony

Re: The Symbol Grounding Problem
« Reply #40 on: July 08, 2023, 08:58:25 am »
Interesting...

Lately, I've been planning a bit for my future version of a text-based AI. I decided to try exposing the following instructions: "hear", "say", and "remember". "Forget" would be a process that triggers automatically. The AI would have to figure out on its own which input corresponds to which process, thus making a connection to the four ground processes.
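A minimal sketch of how those four ground processes might hang together. The class, the retention policy, and every name here are my own guesses at the shape, not ivan.moony's actual design:

```python
import time

# Toy agent with the four ground processes: "hear", "say", "remember"
# exposed as instructions, and "forget" firing automatically whenever
# the agent acts. The retention policy is an assumed detail.
class GroundedAgent:
    def __init__(self, retention_seconds: float = 60.0):
        self.memory = {}              # fact -> timestamp of storage
        self.retention = retention_seconds

    def hear(self, text: str) -> str:
        self._forget()                # forgetting triggers by itself
        return text

    def say(self, text: str) -> str:
        self._forget()
        return text

    def remember(self, fact: str) -> None:
        self.memory[fact] = time.time()

    def _forget(self) -> None:
        now = time.time()
        self.memory = {f: t for f, t in self.memory.items()
                       if now - t < self.retention}

agent = GroundedAgent(retention_seconds=0.01)
agent.remember("the sky is blue")
time.sleep(0.05)
agent.hear("hello")                   # stale facts are dropped here
print(agent.memory)                   # {} - "forget" ran by itself
```

Three processes are verbs the AI can deliberately connect inputs to; the fourth happens on its own, which fits the description of "forget" as an automatic trigger.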

 

