Poll

What Grammar Thing should I work on next?

Gerunds & Participles
1 (20%)
Indirect Objects
0 (0%)
Adjective Clauses
1 (20%)
The many uses of "that"
3 (60%)
A
0 (0%)
B
0 (0%)

Total Members Voted: 5

Voting closed: February 26, 2022, 03:17:15 am

Project Acuitas

  • 300 Replies
  • 662810 Views

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 617
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #210 on: February 20, 2022, 09:55:09 pm »
Today I have results from another parser benchmark. I recommend dropping by the blog for this one, as there are a lot of pictures: https://writerofminds.blogspot.com/2022/02/acuitas-diary-46-february-2022.html

Quick version: This month I completed Part II of the Great Conjunction Upgrade. Since the output format of the Parser had become more expressive, I had to upgrade the Interpreter, the Conversation Engine, and the Narrative Engine to accept it, and to process the compounds appropriately.

The Parser tags each word with its part of speech and role in the sentence (subject, direct object, etc.). It provides a summary of the sentence structure. The Interpreter uses this information to detect the next layer of meaning: what is this sentence trying to say? E.g. is it a statement, question, or command? Does it describe a category membership, a state of being, an event, a desire? The Interpreter consumes a sentence structure and emits a more abstract knowledge representation, the "gist" of the sentence, if you will.

I redesigned the Interpreter to expand all compound sentence parts into full representations. For example, given "Jack and Jill eat beans," the Interpreter will output something akin to {AND, ["Jack->eat->beans", "Jill->eat->beans"]} ... as opposed to "{AND, [Jack,Jill]}->eat->beans". This simplifies downstream processing, since I can just loop over the list of complete atomic facts, instead of modifying all the inference tools and other machinery to handle the bewildering variety of possible sentence branches.
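To make that concrete, here's a much-simplified sketch of the expansion in Python. It's illustrative only; the function name and data structures are made up, not the real Acuitas code.

Code:
# Illustrative sketch: expand a compound subject into complete atomic facts.
def expand_compound_subject(subjects, verb, obj):
    facts = [{"subject": s, "verb": verb, "object": obj} for s in subjects]
    return {"operator": "AND", "facts": facts}

# "Jack and Jill eat beans" -> two atomic facts joined by AND:
print(expand_compound_subject(["Jack", "Jill"], "eat", "beans"))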

That changed the format of the Interpreter's output as well, so the CE and NE had to be adapted to accept it. I did a quick-and-dirty job on the CE: it accepts the new format so as to maintain previous functionality, but it ignores anything beyond the first entry in a compound output. I put my efforts into the NE. It will process all facts from a compound, though it is not yet capable of handling multiple/nested compounds in a sentence, and it doesn't grasp the meaning of OR. Despite all those caveats, I was able to try adding conjunctions to an existing story, and it sounds a lot more natural now.

Now for some performance assessment! I reformatted my benchmark test sets and ran them through the new Parser. You can read more about the test sets in a previous post, but here's a quick review: the text is drawn from two real children's books: The Magic School Bus: Inside the Earth, and Out of the Dark. Sentences that contain quotations have been broken in two, and abbreviations have been expanded. When a test is run, each sentence from the test set is parsed, and the output data structure is compared to a "golden" example (supplied by me) that expresses a correct way of interpreting the sentence structure. There are four categories in the results:

CORRECT: The Parser's output matched the golden example.
INCORRECT: The Parser's output did not match the golden example.
UNPARSED: No golden example was supplied for this sentence, because it contains grammar features the Parser simply does not support yet. However, the Parser did process it and generate an (incorrect) output without crashing.
CRASHED: Oh dear, the Parser threw an exception and never generated an output. Happily, membership in this category is zero at the moment.
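The scoring loop itself is nothing fancy. In much-simplified form (illustrative Python, not the actual test script), it amounts to:

Code:
# Illustrative sketch of the benchmark scoring loop described above.
def score_test_set(sentences, golden, parse):
    # golden maps each sentence to its hand-made "golden" structure (if any);
    # parse is the Parser under test.
    results = {"CORRECT": 0, "INCORRECT": 0, "UNPARSED": 0, "CRASHED": 0}
    for sentence in sentences:
        try:
            output = parse(sentence)
        except Exception:
            results["CRASHED"] += 1
            continue
        expected = golden.get(sentence)  # None if no golden example was supplied
        if expected is None:
            results["UNPARSED"] += 1
        elif output == expected:
            results["CORRECT"] += 1
        else:
            results["INCORRECT"] += 1
    return results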

Adding coordinating conjunction support to the Parser moved 10 sentences in the Out of the Dark test set out of the UNPARSED category, and moved 7 sentences in the Inside the Earth set out of UNPARSED. In both cases the majority of the newly parsed sentences went into CORRECT, although some had ambiguities or other quirks which the Parser cannot yet resolve.

   


MagnusWootton

  • Replicant
  • ********
  • 646
Re: Project Acuitas
« Reply #211 on: February 20, 2022, 10:40:16 pm »
Jack and Jill eat beans.

Jack OR Jill eat beans.   <-more like it.

But "and" and "or" don't get explicitly referred to.
Maybe when we say "and", we mean put both Jack and Jill in that eating-beans group/bucket; then it is AND, that they both OR.
It's in another context, and that's what robots can't understand...

If you want to see the "or" bucket, you will see both Jack and Jill in there, even though Jack or Jill is in there when you see them actually doing it.
 8)


WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 617
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #212 on: February 20, 2022, 11:17:21 pm »
Sorry Magnus, I'm confused ... not sure what you're trying to say there.

The meaning of English "or" is not the same as Boolean "or" ... it's typically more like Boolean "exclusive or." If you wanted to express Boolean "or" you would probably have to say something like "At least one of Jack and Jill eats beans," "One or both of Jack and Jill eat beans," "Jack or Jill or both eat beans," etc.

English "and" is roughly identical to Boolean "and," though. "Jack and Jill eat beans" means that both of them definitely eat beans. Possibly even at the same time, though whether they do it at the same time or not is irrelevant for purposes of this sentence.


MagnusWootton

  • Replicant
  • ********
  • 646
Re: Project Acuitas
« Reply #213 on: February 21, 2022, 01:53:19 am »
Yes, you're right, we say "or" instead of "xor".   :)
Boolean AND and Boolean OR are interchangeable when you say "and".


Would love to see the parser in action!! with some kind of reaction from the machine actually using it.


HS

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1178
Re: Project Acuitas
« Reply #214 on: February 21, 2022, 03:22:37 am »
It's nice to see the quantifiable progress on the graphs and charts! Your email generation test and the apparent diminishing returns from adding more parameters to the models got me thinking. Since Acuitas is learning to apply logic to text, could he eventually enhance the outputs of those text generators to a greater degree than additional parameters?


WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 617
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #215 on: February 21, 2022, 04:27:28 pm »
Quote
Would love to see the parser in action!! with some kind of reaction from the machine actually using it.

Well there's the old narrative demo. I haven't made a demo video with the new parser, but it would be roughly the same, just with more complex sentences.

Quote
Since Acuitas is learning to apply logic to text, could he eventually enhance the outputs of those text generators to a greater degree than additional parameters?

Maybe. I could imagine doing a filtering operation to extract the gist of an existing piece of text and overwrite any incorrect facts. But having him generate his own text might be easier? Someday!


WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 617
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #216 on: February 22, 2022, 03:19:38 am »
I added a poll to the thread, because I thought it would be fun to let people help decide what I add to the Parser next (though I won't be coming back to it for a while). There is a parallel poll running on Twitter; I will combine the results from both.


MagnusWootton

  • Replicant
  • ********
  • 646
Re: Project Acuitas
« Reply #217 on: February 23, 2022, 05:59:12 am »
That's actually the first time I've seen your AI in action; it was very impressive, Jenny-Sue!
Probably slightly better than what I saw Korrellan demonstrate, and I already liked his stuff.

One thing I'm a little skeptical about is how the AI deals with poor inputs; without being super friendly to it, does it still manage? I imagine there might be a cool (easier) way to handle that by adding another tier to it: some beginning filter, separate from the rest of the system, which clears the input up before it gets to the main "database" or whatever it is (some kind of transforming simplifier that gets all the red herrings out of it). Maybe it's not the best way to do it, but you might get some good results there, and it would be something quickish to add without changing the rest, if you wanted to put it on the front line and get it going in real life. :)

Thanks for showing the demo; it was fun to hear the AI sort through the sentences. It's very rare to see people take this as far as it's going here.

About the poll: I didn't even understand the first 3 options, so I just picked the 4th because it's all I understood.
Maybe we need to know more before we vote?


WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 617
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #218 on: February 23, 2022, 03:31:52 pm »
I know that not everybody knows grammar jargon, so honestly I'm fine with people just voting for the option that sounds the coolest. But here's a quick rundown:

Gerunds are verbs used as nouns, as in "Swimming is something I love." Participles are verbs used as adjectives, as in "Panting, I slowed down and approached the door," or "A land destroyed does not prosper," or "There was a man called Dan."

Indirect objects are not the direct object of the verb, but something else affected by it or indicating which way the action is directed, as in "I gave Bob the book," "I took Sarah the soup," "I gave the ball a kick."

Adjective clauses are whole subject/verb/object groups used as adjectives. "There was a man who was called Dan." "The book which you need is on the shelf."

The poll allows vote changes so feel free. And either way thank you for breaking the tie :)


MagnusWootton

  • Replicant
  • ********
  • 646
Re: Project Acuitas
« Reply #219 on: February 24, 2022, 04:36:09 am »
Understood. Thanks for the quick English lesson!  It wasn't my best subject.
So if you understand the laws of language, then an AI falls out of it.  An NLP AI.
Thanks for the demo and for opening my eyes to it.

The one I'm doing is the laws of physics, and an AI falls out of that as well: a sport robot. I bet you could probably team them up together, one handles the spatial topo and one handles the symbolic topo.

Hope you go all the way to AGI and get a million dollars.


Don Patrick

  • Trusty Member
  • ********
  • Replicant
  • *
  • 633
    • AI / robot merchandise
Re: Project Acuitas
« Reply #220 on: February 26, 2022, 08:20:58 am »
The poll is closed, but I would choose either gerunds or strictly distinguishing the meanings of the word "that", without yet processing the actual contents of relative clauses.
Gerunds, because when I worked on those I had to restructure my language model and grammar rules, and if you suspect that might be necessary, such things are better prepared sooner than fixed later.

I did not have to make many adjustments to handle indirect objects; they're really just a type of object with different indicators, and they usually only offer tangential information to the main clause, so I would consider indirect objects a relatively light task that can be postponed.

Processing relative clauses is something that I still consider one of the most advanced challenges, but just distinguishing whether the word "that" is meant as a stand-alone reference, or part of a noun phrase, or the start of a relative clause goes a long way towards correct interpretations. And participles are pretty rare in conversational interaction, though perhaps more common in literature.


WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 617
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #221 on: February 26, 2022, 04:07:28 pm »
The votes are in! Combining the results from here and the parallel poll on Twitter, we have "The many uses of 'that'" as the winner with 5 total votes. Adjective clauses got 3 votes, G&P got 2 (or 3 at most if I apply Don Patrick's vote there), and nobody loves indirect objects.

Given the current state of the Parser, I don't expect gerunds to be too difficult. I already handle infinitive phrases, and I expect to be able to repurpose a lot of that structure for gerund phrases.


WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 617
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #222 on: March 25, 2022, 03:41:18 pm »
This month I went back to put some more polish on the goal system. Goals were first introduced in September 2019, and I added the ability to model other agents' goals in February 2020. (Memories from the Before Time, wow.) Some primitive moral reasoning for resolution of conflicts between the goals of multiple people was added in October 2020. Goals are crucial for story comprehension, since a story is generally about agents trying to attain some goal. They also underpin various goal-seeking behaviors that Acuitas has.

As mentioned, modeling of other agents' goals was in there ... but it had so many problems that I wasn't really using it. You could tell Acuitas about goals by saying "I want ..." or "so-and-so wants ...," and the goal you described would be stored in the file for the given person. But there was no way to describe the goals' relative importance, which is vital for some of the goal-related reasoning Acuitas does. You also basically had to teach him either a complete goal list for each agent, or nothing at all. If Acuitas knew nothing about the goals of some entity, he would assume its goals were just like his (using himself as the best accessible analogy for other minds). And this usually worked well enough, since Acuitas has several very common goals: survive, maintain function, be comfortable, etc. But if you taught him just *one* goal for some other agent, then Acuitas would rely entirely on their custom goal list ... and end up treating that one goal as their *only* goal.

So this month I set out to fix all that. The functions that retrieve an agent's goal model now merge it with the default goal list; the agent's model takes precedence wherever there are differences, but info from the default model is still included to fill any gaps. I also added the ability for class members to inherit and override the goal model of their parent class. E.g. you could teach Acuitas some "generic human" goals, then just specify how any given human differs from the norm.
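In much-simplified form (illustrative Python, not the actual code; the goal names and numbers are made up), the merge works like this:

Code:
# Illustrative sketch: an agent's known goals override class goals, which
# override the default goal list, so gaps get filled instead of erased.
DEFAULT_GOALS = {"survive": 10, "maintain function": 8, "be comfortable": 5}

def effective_goals(agent_goals, class_goals=None, defaults=DEFAULT_GOALS):
    merged = dict(defaults)
    if class_goals:
        merged.update(class_goals)   # e.g. "generic human" goals
    merged.update(agent_goals)       # the individual's known differences
    return merged

# Teaching one goal for an agent no longer wipes out all the others:
print(effective_goals({"eat candy": 3}, class_goals={"socialize": 6}))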

To enable teaching goal priority, I had to make sure the text interpretation would handle sentences like "Joshua wants to live more than Joshua wants to eat candy." Adverb clauses were already supported; I just needed a small tweak to the Parser to support compound connectors like "more than" and "less than," and some other enhancements through the rest of the text processing chain to make sure the information in the adverb clause was picked up and transferred to memory.
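Conceptually (again, just an illustrative sketch, not the real storage format), what gets remembered from such a sentence is a pairwise ordering:

Code:
# Illustrative sketch: record the ordering learned from
# "Joshua wants to live more than Joshua wants to eat candy."
goal_priorities = {}

def record_comparison(agent, stronger_goal, weaker_goal):
    goal_priorities.setdefault(agent, []).append((stronger_goal, weaker_goal))

record_comparison("Joshua", "live", "eat candy")
print(goal_priorities)  # {'Joshua': [('live', 'eat candy')]}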

Yet another wrinkle I decided I needed to manage was goal duration ambiguity. You might recall that I already tangled with this when trying to distinguish short-term from long-term states. Well, goals have a similar issue. If I say that I "want to eat a taco," that doesn't mean eating tacos is one of my fundamental goals. I don't want to be doing it constantly for all time. (I really don't.) Eating a taco is a mere subgoal (possibly of "survive" or "be comfortable" or "enjoy pleasure" or all three) and is very ephemeral; as soon as I've eaten one, I'll stop wanting to eat, until some unspecified time when the urge hits me again.

Since it's hard to know the difference between a persistent fundamental goal, and an ephemeral subgoal, without a lot of background knowledge that Acuitas doesn't have yet ... he'll ask a followup question. I settled on "Is that generally true?" but I wonder if that makes it intuitively clear what's being asked. If you were talking to him, told him "I want ..." and were asked "Is that generally true," do you think you'd get the thrust of the question? What might be a better wording?
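The decision logic is roughly this (an illustrative sketch only, with the asking mechanism stubbed out):

Code:
# Illustrative sketch: ask the follow-up question when persistence is unknown.
def learn_want(agent, goal, ask):
    persistent = ask("Is that generally true?")
    kind = "fundamental goal" if persistent else "ephemeral subgoal"
    return {"agent": agent, "goal": goal, "kind": kind}

# Stubbed-in answer for the taco example:
print(learn_want("I", "eat a taco", lambda question: False))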

The last thing I threw in was an enhancement to the Moral Reasoning model, for detection of "perverse" goals. By this I mean any goal that would seem (by the reasoner's standards) to be fundamentally ridiculous or evil. It's a goal that, when seen in others, the reasoner does not respect. (See blog for more on this: https://writerofminds.blogspot.com/2022/03/acuitas-diary-47.html)

It was a lot of little stuff this month, but now that I write it all down, maybe it adds up to quite a bit of progress.


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Project Acuitas
« Reply #223 on: March 25, 2022, 04:09:18 pm »
Good work so far. For my understanding of goals when it comes to AI, I'm thinking one would mark certain words in its brain so that it loves those words (so it's like GPT-3, but you make it say certain unlikely words with much higher probability, by however much more you want it to say them). Words like "work on improving myself", "humans", "cars", "survival", "houses", and so on.

But the thing is, many of these words are already common in 40 GB of text off the internet, because humans usually love food, survival, cars, etc., so an AI will say them often too. There is one problem here though: not everyone says the word "AI" much, and I want most AIs to work on thinking about AI all day. That's where control comes in handy.

Learning new goals happens just by a thing being seen near the words or objects you often think about, or being related to them, e.g. if both eat, run, and are yellow.

Getting bored of saying these words happens on its own: it says food 60% of the time, cars 20% of the time, and so on, so it gives each a break; you can see this with GPT-3 in fact just by using it. Of course if it has to eat chips, then it will have to think about that until it gets them enough. In this case some goals are thought of more, e.g. food often, cars sometimes, so you are usually looking for food during the day while you haven't got it yet. But if you get too hungry, then no matter how much you already thought about food in the past 10 minutes, you think about it even longer, not giving cars its turn of thoughts now. So here you have the goal word changing its "own" probability by installed bias from birth. Can this happen for cars? For AI? Yes, and we do it. If I have not yet made AGI, for example, and I know of a deadline I have, I might think about it even more, and if that doesn't work in time or I know it won't anyway, I change some other goal that is attainable to be higher on thought alert, and hence the harder one is lower now, so that I get something. I do that by reasoning, though, that some other thing is a better goal and is closer to my original word in the end, which I missed earlier in my thinking.
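Something like this toy example, just to show the idea of boosting the goal words (purely illustrative, not any real model):

Code:
# Toy illustration: multiply chosen word probabilities by a boost, then renormalize.
def bias_distribution(probs, boosts):
    biased = {w: p * boosts.get(w, 1.0) for w, p in probs.items()}
    total = sum(biased.values())
    return {w: p / total for w, p in biased.items()}

next_word = {"food": 0.40, "cars": 0.25, "AI": 0.05, "houses": 0.30}
print(bias_distribution(next_word, {"AI": 5.0}))  # "AI" now gets said far more often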


MagnusWootton

  • Replicant
  • ********
  • 646
Re: Project Acuitas
« Reply #224 on: March 25, 2022, 05:21:37 pm »
Your work is really cool. Did you come up with that parser tree yourself? It looks like a very polished concept.
If you were running the AI class in school, the kids would all be really lucky.

 

