I wrote an article on how chatbots can connect language to meaning


Yervelcome

And I'd like feedback before I send it to other people.

https://medium.com/@mycardboarddreams/language-and-motivation-in-chatbots-df6e9f651293

The TL;DR is that language can only have meaning if it's connected to the agent's motivations.


Don Patrick

Re: I wrote an article on how chatbots can connect language to meaning
« Reply #1 on: May 24, 2020, 08:42:36 pm »
It is quite a long read, but well written. I seem to agree with most of it, and I really like the clean pictures. One thing, however: I don't believe intention covers all meaning in language. It covers meaning when human psychology or goals are involved, but there are also situations where this is not applicable, like a landscape. The landscape does not want anything, nor does whoever describes the landscape, but it provides a shared context to draw meaning from to understand words.


Yervelcome

Re: I wrote an article on how chatbots can connect language to meaning
« Reply #2 on: May 24, 2020, 08:53:21 pm »
Thanks a lot for reading it. I tried to keep it concise and interesting.

As for meaning in landscapes, how do you separate the meaning in landscapes themselves from the meaning that we put into them? For instance, a "hill" isn't in a landscape; it's something that we define: we decide where a hill begins and ends.


Don Patrick

Re: I wrote an article on how chatbots can connect language to meaning
« Reply #3 on: May 24, 2020, 09:32:15 pm »
I was thinking more that if one sees or describes a hill, and then says something like "the slope", the earlier described hill serves as a shared context from which to derive the meaning of "the slope": that it then refers to the base of the hill rather than, for instance, the angle of a mathematical curve.
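
A toy way to picture that in code, as a minimal sketch: the sense inventory, glosses, and "related_to" links below are made-up illustrations, not anyone's actual lexicon.

Code
SENSES = {
    "slope": [
        {"gloss": "inclined side of a landform",
         "related_to": {"hill", "mountain", "valley"}},
        {"gloss": "gradient of a mathematical curve",
         "related_to": {"curve", "function", "graph"}},
    ],
}

def resolve(word, discourse):
    """Pick the sense that shares the most links with recently mentioned things."""
    return max(SENSES[word], key=lambda s: len(s["related_to"] & set(discourse)))

# After a hill has been described, "the slope" resolves to the landform sense.
print(resolve("slope", ["hill"])["gloss"])  # inclined side of a landform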


Yervelcome

Re: I wrote an article on how chatbots can connect language to meaning
« Reply #4 on: May 24, 2020, 09:45:56 pm »
I think you're right. In fact, I'm of the opinion that meaning itself can only be something that's "shared". To what degree do you think this detracts from the argument of the article, as opposed to being something to discuss in the next one? I'm trying to keep this one short, but still correct.


HS

Re: I wrote an article on how chatbots can connect language to meaning
« Reply #5 on: May 25, 2020, 12:54:20 am »
It’s a good point: how can we make an isolated language-processing system and expect it to act human, when the human language-processing system is influenced by more than just its internal rules?

If the AI had more emotions in addition to a negative signal, and if it could remember its emotional profiles from past situations, then it could learn to associate these specific emotional profiles with words. If it then encountered a new problem, details aside, the mere fact of having a problem could produce a recognizable emotional signature, for which the robot would now be equipped with the word “help”.

A useful emotional heuristic for generating appropriate words could be created without having to rationalize the differences between environments. Such a subsystem, which puts gradients of value judgement behind experience, coloring the bot’s perception with potential meanings, could allow the bot to recognize general solutions precisely through its lack of definition, and to speed up solutions by estimating into a problem before calculating out of it.
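
A minimal sketch of that association step, assuming made-up emotion axes (distress, urgency) and a hypothetical memory of (emotional profile, word) pairs:

Code
import math

# Hypothetical memory of past situations: an emotional profile vector
# (distress, urgency) stored alongside the word that was used in it.
memory = [
    ((0.9, 0.7), "help"),  # stuck and anxious -> someone said "help"
    ((0.1, 0.2), "okay"),  # calm, no problem -> "okay"
]

def word_for(feeling):
    """Return the word whose stored emotional profile is nearest to this feeling."""
    return min(memory, key=lambda entry: math.dist(entry[0], feeling))[1]

# A brand-new problem, details unknown, still produces a distress-like
# signature, so the bot reaches for "help" without reasoning about specifics.
print(word_for((0.8, 0.9)))  # help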


Yervelcome

Re: I wrote an article on how chatbots can connect language to meaning
« Reply #6 on: May 25, 2020, 01:44:53 am »
You preempted the next article, Hopefully :)

That's exactly what I was thinking too. You build up internal aversions and goals based on lower-level drives. And these are the basis of higher-level concepts.

For example:
The idea of "portal/door/entry" is a solution for when you're blocked from getting somewhere.
The idea of "extend/spread/expand" is a solution for having insufficient space.
The idea of "powerful/effective" is a solution to something/a tool being inadequate to your needs.

etc.
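
A deliberately tiny sketch of that mapping, with the concept groups above as illustrative data (a hypothetical lookup, not a proposed architecture):

Code
# Higher-level concepts indexed by the lower-level problem they solve.
concepts_by_problem = {
    "blocked from getting somewhere": ["portal", "door", "entry"],
    "insufficient space": ["extend", "spread", "expand"],
    "tool inadequate to your needs": ["powerful", "effective"],
}

def concepts_for(problem):
    """Look up which concept family addresses a given low-level drive."""
    return concepts_by_problem.get(problem, [])

print(concepts_for("insufficient space"))  # ['extend', 'spread', 'expand']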


infurl

Re: I wrote an article on how chatbots can connect language to meaning
« Reply #7 on: May 25, 2020, 01:49:04 am »
http://www.adampease.org/OP/

From SUMO, Geography.kif:
Code
;; A Hill is both a LandForm and an UplandArea.
(subclass Hill LandForm)
(subclass Hill UplandArea)

(documentation Hill EnglishLanguage "A &%Hill is a raised part of the earth's surface with sloping sides - an old mountain which because of erosion has become shorter and more rounded.")

;; Every instance of Hill has some SlopedArea as a part:
;; in SUMO, you cannot have a hill without a slope.
(=>
  (instance ?Hill Hill)
  (exists (?Incline)
    (and
      (instance ?Incline SlopedArea)
      (part ?Incline ?Hill))))


Don Patrick

Re: I wrote an article on how chatbots can connect language to meaning
« Reply #8 on: May 25, 2020, 08:06:37 am »
Quote from: Yervelcome on May 24, 2020, 09:45:56 pm
I think you're right. In fact, I'm of the opinion that meaning itself can only be something that's "shared". To what degree do you think this detracts from the argument of the article, as opposed to being something to discuss in the next one? I'm trying to keep this one short, but still correct.
I don't think it detracts from the article; rather, it's an addition. The article seemed to equate meaning with motivation alone on a number of occasions. As many people on this forum are caught up in a single angle, I thought it might need pointing out that there's more. Perhaps a later article, then.


Art

Re: I wrote an article on how chatbots can connect language to meaning
« Reply #9 on: May 25, 2020, 12:55:43 pm »
I think the overall premise and purpose of a chatbot should be defined and made clearly apparent to any potential users. Is it a neutral being: male/female, human/alien lifeform/entity? Is it primarily a helpbot, a personal assistant, a guide, an expert system versed in only one area of expertise (medical, insurance claims, finance, etc.), a conversational assistant, a companion bot, an English-as-a-second-language bot, etc.?

If the bot's purpose is to allow the user to pretend that it is a real person, then the human user should be of the mindset to "play along"; otherwise, the effect and the experience are basically wasted, neither fun nor engaging.

Some bots will carry on perfectly good conversations until the user asks a personal or perhaps insensitive question, at which point the bot will quickly remind the user that it is just a chatbot and not a real girl/boy/person. Some bots will even get sort of insensitive right back at the user... a taste of their own medicine, so to speak. Again, it depends on how "real" the bot's creator wishes it to be. So language and meaning will have a great amount of interplay, largely based on the initial parameters set at the start of the conversational meeting.


LOCKSUIT

Re: I wrote an article on how chatbots can connect language to meaning
« Reply #10 on: May 25, 2020, 01:29:29 pm »
Good article. I read your others too, about 4 of the 5; the last had a few work-in-progress parts, but I still gleaned enough from them and am done reading them. I'll go through the key points you provided, in my own words. Not my best writing, but it will have to do:

You say the AI responds about what else its son likes, but fails to respond that it has NO SON! Hmm, if it is asked to predict text in Wikipedia articles, it should rarely say it has no son; it's modeling other people's features, like Jessica or Wood. All it has to do is turn on the node feature "me" and it will select the right completion ^_^. Tom has a son, Mary doesn't, Jane has a baby girl.

You say our AI finds patterns from text to intention, not intention to text, that it has a bias that nurses are female, that it can be racist, and that it's saying only what we wrote. Well, several things are off here, erm... Nurses are usually female; there CAN be male nurses, and there are. Next, GPT-2 and the like can already generate new data from the distribution, not just what humans made. They make real discoveries: a model may never have seen "cats eat", only "dogs eat", and can still utter "cats eat". Next, humans, too, find patterns... Next, you've separated intention and text as if they were different, but in the brain it is all just data, be it text or images, and some features re-occur, which causes patterns. So the questions you wake up with every morning are text/sounds or images, and they ARE the intent: the text features have Reward (Dialog Goal) on them and force you to talk about certain subjects like AI or money or food, and the root goal is mostly food, which is also a mostly unchangeable reward.

You say "children who are just learning to speak will say %u201Cfood!%u201D when they are hungry, rather than %u201CI want food%u201D. A child doesn%u2019t need to develop a concept of %u201Cself%u201D before he or she understands the meaning of hunger.". Yes, the next likely word to predict is Food, indeed. Later it will leave this far away and talk about ex. AI, a much more distant thing, but definite way to get food forever :P. Now why doesn't it say Give me food? Or give my friend, not me, food? There must be some reward, or frequency behind the choice of predicting the next likeliest word or letter after the context given it has.

You say rewards, meaning, are key too. Yes, as I said above: we say/predict the next word by entailment frequency and by rewarding node features like food, etc.
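
A bare-bones sketch of that "frequency plus reward" selection, on a toy corpus (the reward table is a made-up stand-in for rewarding node features):

Code
from collections import Counter

corpus = "i want food . i want food . i want sleep".split()

# Bigram counts: how often each word follows a given context word.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

# Made-up reward weights standing in for rewarding node features.
reward = {"food": 2.0, "sleep": 1.0}

def predict(prev):
    """Score candidates by frequency times reward and return the best."""
    counts = follows[prev]
    return max(counts, key=lambda w: counts[w] * reward.get(w, 1.0))

print(predict("want"))  # food: both frequent and rewarding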

You say "memories are only formed when they solve your immediate problem. They must answer a question or solve a problem." Nope. Collecting data trains the model about frequency, semantics, etc. You always store it, you only forget connections if not activated enough. Rewardful answers you expected or wanted to see occur with aligning reason do make it more likely to remain stored.

You say it basically tries random answers to questions until it gets the desired answer. Yes, it's collecting data from an updated, more specialized source: a website, a distribution, motor-trial tweaking. You say it may reuse the word "help" in the future if it causes the answer to appear soon afterward. Yes: "help", "genie", "God" may all do the trick. Maybe dad or mom will do all you need.

You made me consider an example, "i threw the _ at the": if the model has seen many different words where the blank is, can it match all sorts of contexts confidently even when that word is missing? No matter what that word is? Can it be anything, for sure? Perhaps.
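
One way to see the frame itself act as the feature, on a toy corpus (a sketch, nothing more):

Code
import re

corpus = ("i threw the ball at the wall . i threw the rock at the window . "
          "i threw the frisbee at the dog .")

# Every filler ever seen in the frame "i threw the _ at the".
fillers = re.findall(r"i threw the (\w+) at the", corpus)
print(fillers)  # ['ball', 'rock', 'frisbee'] -- the frame matches whatever fills the blank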

You say we use already-existing features in the data. Yes; I show this in my work: a hierarchy, or semantic heterarchy web, and Byte Pair Encoding.
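
For reference, the core of Byte Pair Encoding is just this loop: repeatedly merge the most frequent adjacent pair of symbols. A minimal sketch on a toy vocabulary:

Code
from collections import Counter

def most_frequent_pair(vocab):
    """Count adjacent symbol pairs across all words, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in vocab.items():
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return pairs.most_common(1)[0][0]

def merge(vocab, pair):
    """Replace every occurrence of the pair with one merged symbol."""
    merged = {}
    for symbols, freq in vocab.items():
        out, i = [], 0
        while i < len(symbols):
            if tuple(symbols[i:i + 2]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy vocabulary: each word split into characters, with a corpus frequency.
vocab = {tuple("low"): 7, tuple("lower"): 5, tuple("lowest"): 2}
for _ in range(2):  # two merge steps
    vocab = merge(vocab, most_frequent_pair(vocab))
print(vocab)  # 'l','o','w' have fused into a single 'low' symbol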

You say you can see/hear a new object or question and not know a satisfying answer confidently, and that if you're told the answer, you're only satisfied if, e.g., you trust the teller and they say certain matching words. Also, if you're told that this object of gears and rope is a food cooker over fire, you'll remember it if you're already familiar with food cooking and clockwork precision: those features connect it to known templates in memory, which boosts it and makes it more likely to be stored.

Do note that predicting the next word and recognizing the context (yours or someone else's) are the same thing: you recognize by frequency, by related words, and by related positions of words, convolutionally.

Lastly, you showed us something like "le-nun-no-dor-e-vi-ncie" and made us recognize who this person is, but that is just similarity matching, like handling typos... :P
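
That kind of similarity matching is nearly a one-liner with the standard library (a sketch; the name list is made up):

Code
import difflib

known = ["leonardo da vinci", "isaac newton", "ada lovelace"]

# Strip the separators, then match the mangled name by character similarity.
garbled = "le-nun-no-dor-e-vi-ncie".replace("-", "")
candidates = [n.replace(" ", "") for n in known]
print(difflib.get_close_matches(garbled, candidates, n=1, cutoff=0.4))
# ['leonardodavinci']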


Yervelcome

Re: I wrote an article on how chatbots can connect language to meaning
« Reply #11 on: May 25, 2020, 06:18:06 pm »
Quote from: Don Patrick on May 25, 2020, 08:06:37 am
As many people on this forum are caught up in a single angle, I thought it might need pointing out that there's more. Perhaps a later article, then.

I agree. My goal with this blog is to present a comprehensive overview of all aspects of human thought, in all their variety. This is why I write in accessible language: maybe some young'un digging through it all will spot a pattern in it or have an insight. In my heart I've always been an educator. There is also value in current NLP research; I'd like to add to it, not replace it wholesale.

Quote from: Art on May 25, 2020, 12:55:43 pm
I think the overall premise and purpose of a chatbot should be defined and made clearly apparent to any potential users.
...
Again, it depends on how "real" the bot's creator wishes it to be.

This gets back to the question of why you are making the chatbot in the first place. As a developer/businessperson I personally skew towards providing business value, which may be why I value speech that has a connection to reality. That doesn't make it the only legitimate purpose, any more than "all games should be RPGs".

Quote from: LOCKSUIT on May 25, 2020, 01:29:29 pm
Do note that predicting the next word and recognizing the context (yours or someone else's) are the same thing: you recognize by frequency, by related words, and by related positions of words, convolutionally.

My thesis is that "context" is more about the underlying motivation of the person speaking. For instance, even if everyone in the world says "Cats: the Movie sucks", I can still say "Cats: the Movie is great". I can be the only person who ever connects those words, because I want others to like the same things I like.

To co-opt a popular saying: "Language is not the mirror you hold up to the world, but the hammer with which you shape it." (Originally, that was said about art.)

Also, thanks for the feedback. I appreciate that you went through it and thought critically about the content. I also appreciate the respectful and open dialog, which is rare to find online.

 

