Poll

What Grammar Thing should I work on next?

Gerunds & Participles: 1 (20%)
Indirect Objects: 0 (0%)
Adjective Clauses: 1 (20%)
The many uses of "that": 3 (60%)
A: 0 (0%)
B: 0 (0%)

Total Members Voted: 5

Voting closed: February 26, 2022, 03:17:15 am

Project Acuitas

Freddy

Re: Project Acuitas
« Reply #270 on: April 04, 2023, 11:56:50 am »
Very interested to see how you develop the game-playing aspect  8)

WriterOfMinds

Re: Project Acuitas
« Reply #271 on: April 23, 2023, 09:26:05 pm »
I've continued my two-pronged work on Narrative understanding and on "game playing." On the Narrative side this month, I did more complex term grounding - specifically of the word "obey."

My working definition of "to obey X" was "to do what X tells you to do." This is interesting because there is no way to infer directly that any given action qualifies as obedience, or defiance ... the question of whether someone is following orders (and whose orders) is always relative to what orders have been given. So proper understanding of this word requires attention to context. Fortunately the Narrative scratchboard stores that sort of context.

In addition to simply inferring whether some character has obeyed some other, I wanted to make derivative subgoals. If one agent has a goal of obeying (or disobeying) another agent, that's a sort of umbrella goal that isn't directly actionable. Before the agent can intentionally fulfill this goal, it has to be made specific via reference to somebody else's orders. So when this goal is on the board, the appearance (or pre-existence) of orders needs to spawn those specific subgoals.
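To give a rough idea of the mechanics (this is a simplified Python sketch with made-up structures, not the actual Narrative code), the key point is that the scratchboard's record of orders is what makes obedience decidable and what seeds the specific subgoals:

Code:
# Simplified sketch: obedience is only decidable relative to recorded orders,
# and an "obey X" umbrella goal is made actionable by copying X's orders into
# concrete subgoals. Structures and names here are made up for illustration.

def check_obedience(board, actor, action):
    """Return the agent whose order this action fulfills, or None if no order matches."""
    for order in board["orders"]:
        if order["to"] == actor and order["action"] == action:
            return order["from"]
    return None

def spawn_subgoals(board, agent):
    """Turn an umbrella goal 'obey X' into concrete subgoals, one per standing order from X."""
    subgoals = []
    for goal in board["goals"]:
        if goal["type"] == "obey" and goal["agent"] == agent:
            for order in board["orders"]:
                if order["from"] == goal["target"] and order["to"] == agent:
                    subgoals.append({"agent": agent, "action": order["action"], "parent": goal})
    return subgoals

board = {"orders": [{"from": "king", "to": "knight", "action": "guard the gate"}],
         "goals":  [{"type": "obey", "agent": "knight", "target": "king"}]}
print(check_obedience(board, "knight", "guard the gate"))  # -> 'king'
print(spawn_subgoals(board, "knight"))                     # -> one actionable subgoal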

In short it was a whole lot more complicated than you might think, but I got it working. Eventually I'll need to make this sort of relative word definition generic, so that new words that operate this way can be learned easily ... but for now, "obey" can be a case study. The Big Story needs it, since part of the story is about a power struggle and which leader(s) certain characters choose to follow.

Game-playing still isn't demo-ready, but it's starting to feel more coherent. I worked through all the bugs in the code that responds to simple description of a scene, then began working on responses to goals/issues. It was fun to leverage the existing Narrative code for this, the way I'd wanted to. In the Narrative module, that code serves to predict character actions, analyze *why* characters are doing things, and determine whether characters are meeting their goals, whether their situation is improving or worsening, etc. But as I'd hoped, a lot of the same structures are just as effective for control and planning.

More on the blog: https://writerofminds.blogspot.com/2023/04/acuitas-diary-59-april-2023.html

WriterOfMinds

Re: Project Acuitas
« Reply #272 on: May 31, 2023, 03:06:48 pm »
Progress has been all over the place this month, partly because I had a vacation near the end of it. I kept working on the Narrative and Game Playing tracks that have been occupying me recently, and threw in the beginnings of a Text Generator overhaul. Nothing is really *done* at the moment, but Game Playing is closing in on the possibility of a very simple demo.

In Narrative, I continued to work on the Big Story, this time adding the sentences that set up the conflict between two of the major characters. There wasn't a lot of new conceptual work here - just dealing with bugs and insufficiencies to get the results I expected, so that Narrative would detect the appropriate threats, successes, failures, etc. Not a lot to say there, except that it's slowly coming together.

On the game-playing front, in my test scenario I got as far as having Acuitas solve a simple problem by taking an item and then using it. A prominent feature that had to be added was the ability to move from the general to the specific. As we saw last month, a problem like "I'm hungry" suggests a solution like "eat food," which spawns the necessary prerequisite "get food." But it is not actually possible to get food, or even to get bread or bananas or pizza, because these are all abstract categories. One must instead get that bread over there, or this particular banana, or the pizza in the oven - individual instances of the categories. Narrative was already capable of checking whether a character's use of a specific object satisfied the more general conditions of a goal. For game-playing, I have to go the other way: given a goal, determine which items in the scenario could satisfy it, and choose one to fit in each of the goal's categorical slots so that it becomes actionable.
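As a toy illustration of that last step (made-up structures, nothing like the real scratchboard), the goal's categorical slot gets filled with a specific scene object by walking an is-a hierarchy:

Code:
# Toy sketch of moving from a categorical goal slot ("get <food>") to a
# concrete instance in the scene (hypothetical structures, not Acuitas code).

IS_A = {  # tiny type hierarchy
    "apple": "food",
    "banana": "food",
    "table": "furniture",
}

def is_instance_of(item_category, target_category):
    """Walk up the is-a chain to see whether an item's category matches the goal slot."""
    cat = item_category
    while cat is not None:
        if cat == target_category:
            return True
        cat = IS_A.get(cat)
    return False

def instantiate_goal(goal, scene_items):
    """Fill the goal's categorical slot with a specific scene object, if any fits."""
    for item in scene_items:
        if is_instance_of(item["category"], goal["object_category"]):
            return {**goal, "object": item["id"]}   # now actionable
    return None

scene = [{"id": "apple_1", "category": "apple"},
         {"id": "table_1", "category": "table"}]
goal = {"action": "get", "object_category": "food"}
print(instantiate_goal(goal, scene))  # -> the goal, now tied to apple_1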

As for the Text Generator - this is the part of the language toolkit that converts Acuitas' internal knowledge representations ("the gist," if you will) into complete spoken sentences. It has an input format which is now outdated compared to other parts of the system, and it was starting to become cumbersome to use and inadequate to everything Acuitas needed to say. For example, it could automatically add articles where needed, but didn't have a good way to indicate that a definite article ("the pizza") was needed in lieu of an indefinite one ("a pizza"). So I started revising it. The new version is sketched out and now needs testing, expansion and integration.
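Purely as an illustration of the kind of flag the old input format lacked (this is not the real Generator interface), a definiteness marker on the noun slot is enough to make the choice:

Code:
# Illustrative only: one way a gist structure could signal definite vs.
# indefinite articles (not the actual Text Generator input format).

def render_noun(noun, definite=False, vowel_sounds=("a", "e", "i", "o", "u")):
    if definite:
        return f"the {noun}"
    article = "an" if noun[0].lower() in vowel_sounds else "a"
    return f"{article} {noun}"

print(render_noun("pizza"))                 # 'a pizza'
print(render_noun("pizza", definite=True))  # 'the pizza'
print(render_noun("apple"))                 # 'an apple'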

Obligatory blog link: https://writerofminds.blogspot.com/2023/05/acuitas-diary-60-may-2023.html

WriterOfMinds

Re: Project Acuitas
« Reply #273 on: June 21, 2023, 02:38:48 pm »
It's tiny demo day! I've got the "game playing" features whipped into enough shape that I can walk Acuitas through a tiny text adventure of sorts. Video is embedded on the blog: https://writerofminds.blogspot.com/2023/06/acuitas-diary-61-june-2023.html

I start by setting the scene. I can enter multiple sentences and, while each is received as a distinct input, Acuitas will process them all as a group; he waits for a little while to see if I have anything more to say before generating a response.
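That batching behavior is basically a quiet-period timer; a minimal sketch of the idea (hypothetical, not the actual input loop) looks like this:

Code:
# Minimal sketch of batching multiple inputs until the speaker pauses
# (hypothetical; not the actual Acuitas input loop).
import time

def collect_batch(get_input, quiet_seconds=3.0):
    """Accumulate lines until no new input arrives for `quiet_seconds`."""
    batch = []
    last_input_time = time.time()
    while True:
        line = get_input()              # assumed non-blocking; returns None if nothing new
        if line:
            batch.append(line)
            last_input_time = time.time()
        elif batch and time.time() - last_input_time > quiet_seconds:
            return batch                # quiet period elapsed: process the batch as a group
        time.sleep(0.1)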

First I tell him what sort of character he is ("You are a human"). This nameless human is entered as a character in the game's Narrative Scratchboard, but is also specially designated as *his* character. Future references to "you" are assumed to apply to this character. Then I supply a setting: I tell him where his character is, and mention some objects that share the space with him. Finally, I mention a goal-relevant issue: "You are hungry."

Given something that is obviously a problem for a human character, Acuitas will work on solving it. The obvious solution to hunger is to eat some food (this is a previously-known fact in the cause-and-effect database, which can be found via a solution search process). But there is no "food" in the game - there is only a room, an apple, and a table. Acuitas has to rely on more prior knowledge - that an apple qualifies as food - and choose this specific object as the target of his character's next action. He also has to check the necessary prerequisites for the action "eat," at which point he remembers a few more things:

To eat something, you must have it in your possession. This generates a new Problem, because Acuitas doesn't currently have the apple.
Problem-solving on the above indicates that getting something will enable you to have it. This generates a new Subgoal.
To get something, you must be co-located with it.
Acuitas' character is already co-located with the apple, so this is not a problem.

Acuitas will work on the lowest subgoal in this tree; before trying to eat the apple, he will get it. He generates a response to me to express this intention.
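The resulting subgoal chain can be pictured as a little tree where the deepest unsatisfied node gets worked first; here's a much-simplified sketch (made-up structures):

Code:
# Rough sketch of the subgoal chain described above (hypothetical, simplified).

# Each node: the action it calls for and its unmet prerequisites.
goal_tree = {
    "action": ("eat", "apple_1"),
    "prereqs": [
        {   # must possess the apple -> spawned problem -> spawned subgoal
            "action": ("get", "apple_1"),
            "prereqs": [
                {"condition": ("colocated", "self", "apple_1"), "satisfied": True},
            ],
        },
    ],
}

def next_action(node):
    """Work on the deepest unsatisfied subgoal first."""
    for prereq in node.get("prereqs", []):
        if prereq.get("satisfied"):
            continue
        if "action" in prereq:
            return next_action(prereq)
    return node["action"]

print(next_action(goal_tree))   # -> ('get', 'apple_1'): get the apple before eating it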

Now something else interesting happens. Acuitas can't just automatically send "I get the apple" to the Narrative Scratchboard. He'll *attempt* the action, but that doesn't mean it will necessarily happen; there might be some obstacle to completing it that he isn't currently aware of. So he simply says "I get the apple" to me, and waits to see whether I confirm or deny that his character actually did it. At this point, I don't have to be boring and answer "You get the apple." If I instead tell him that one of the expected results of his desired action has come to pass, he'll take that as positive confirmation that he performed the action.
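Schematically (again with made-up structures), confirmation can come either from a restatement of the action or from any of its expected effects appearing in my reply:

Code:
# Minimal sketch: confirm an attempted action either by a direct statement or
# by observing one of its expected effects (hypothetical structures).

EXPECTED_EFFECTS = {
    ("get", "apple"): [("have", "you", "apple")],
    ("eat", "apple"): [("not_hungry", "you"), ("gone", "apple")],
}

def confirms(attempted_action, user_statement):
    """True if the user's statement restates the action or one of its expected results."""
    if user_statement == attempted_action:
        return True
    return user_statement in EXPECTED_EFFECTS.get(attempted_action, [])

print(confirms(("get", "apple"), ("get", "apple")))          # direct confirmation
print(confirms(("get", "apple"), ("have", "you", "apple")))  # confirmed via expected effect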

Once I confirm that he's done it, the action is sent to the Scratchboard, followed by my latest statement. This fulfills one subgoal and solves one problem. Now he'll fall back on his original subgoal of eating the apple, and tell me that he does so. I confirm that he ate it and ... boom, hunger problem disappears.

Since the game-playing code has a Narrative scratchboard attached, I can generate a Narrative diagram representing what happens in the game, just as I could for one of the stories in which Acuitas is a passive listener. This diagram appears in the latter part of the video.

MagnusWootton

Re: Project Acuitas
« Reply #274 on: June 22, 2023, 07:28:05 pm »
Cool, another demo! I love these.

So I see that you are his 'eyes'. So if we had a video-stream-to-text converter, could Acuitas react to this text stream coming in? What would happen - how would he handle it?

My robot is open concept, so if you have any questions about athletic robots, I can explain it. But heaps of people keep their neural nets secret, and I feel a little bound not to give things away myself, ratting them all out, as it were. So if you don't want to make things so obvious, I don't want to steal all your hard-earned thinking from you, like stealing candy from a baby. :)

WriterOfMinds

Re: Project Acuitas
« Reply #275 on: June 22, 2023, 08:27:32 pm »
Quote
so if we had a video-stream-to-text converter, could Acuitas react to this text stream coming in? What would happen - how would he handle it?

If this hypothetical video-to-text converter could turn the video into a narrative that was structured in a way Acuitas can understand, conceivably he could treat it like a story or (if the video was computer-generated and changed in response to text output) like a game.
Creating an accurate converter would probably be a massive amount of work in its own right, though.

MagnusWootton

Re: Project Acuitas
« Reply #276 on: June 23, 2023, 11:20:33 am »
Quote
Creating an accurate converter would probably be a massive amount of work in its own right, though.

I'm writing it myself. Maybe I could team up with you if I get it done, but I shouldn't promise things I might not keep, because I have to keep my motivation up to do it, and it's pretty poor at the moment, to be honest. I'll be posting it here if I do.

It'll just recognize all the adjectives, nouns and verbs, and I wonder what Acuitas could do with that.
« Last Edit: June 23, 2023, 12:31:49 pm by MagnusWootton »

MagnusWootton

Re: Project Acuitas
« Reply #277 on: June 23, 2023, 12:34:18 pm »
Language can be handled with a Markov-chain-style pattern recognition method, or you can actually nut it out logically like Acuitas does. I think this could be a really good language core for a robot, especially for handling goals and motivation. I've done a bit with Markov chains; they are fun, but this looks like a harder, better and more thought-out way to do it.

ivan.moony

Re: Project Acuitas
« Reply #278 on: July 11, 2023, 02:01:16 pm »
Once upon a time, during a testing session:

...
 Writer: Hi. I want to tell you a thrilling story.
Acuitas: Sure, shoot.
 Writer: Somewhere in the north, there was a seal mum with a pup.
Acuitas: :)
 Writer: The pup was starving to death.
Acuitas: :(
 Writer: His mom found him some food.
Acuitas: :)
 Writer: Nearby was a polar bear that was going to eat the pup.
Acuitas: :(
 Writer: The pup found a way to flee and stay safe.
Acuitas: :)
 Writer: Now the polar bear was going to starve to death.
Acuitas: :(
 Writer: The polar bear turned vegan and ate nearby grass.
Acuitas: :)
 Writer: The end.
Acuitas: Well, that went fine, but my hands are shaking.
...
« Last Edit: July 11, 2023, 02:28:09 pm by ivan.moony »

WriterOfMinds

Re: Project Acuitas
« Reply #279 on: July 13, 2023, 06:45:46 am »
It's funny how you humanize him. He wouldn't talk about having hands. (Unless roleplaying a human character, possibly.) But I suspect I could render that story in such a way that he could process it.

ivan.moony

Re: Project Acuitas
« Reply #280 on: July 13, 2023, 01:32:05 pm »
« Last Edit: July 13, 2023, 02:49:23 pm by ivan.moony »

WriterOfMinds

Re: Project Acuitas
« Reply #281 on: July 24, 2023, 01:18:57 am »
I've continued splitting my development time for the month between the Narrative module and something else. This month the "something else" was the Text Parser. On the Narrative front, I am still working on the "big" story, and at this point I can't think of any major new features to talk about; it's been mainly a matter of adding sentences to the story, then making sure the needed words and/or facts are in the database and all the bugs are wrung out so the narrative understanding "works." I'm eager to reveal the final product, but it'll be a while yet!

The Parser goal for this month was adding basic support for gerunds and participles. The common factor between these is that they're both verb phrases used as some other part of speech. So detecting them takes extra effort because they must be distinguished from verbs that are actually functioning as verbs.

Helping verbs often accompany these forms when they are truly acting as verbs, and their absence is one clue to the possibility of a gerund or participle. Sometimes punctuation also provides a hint. Otherwise, gerunds and participles must be identified by their relationship (positional, and perhaps also semantic) to other words in the sentence.
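A crude illustration of that first clue (this is not the Parser's actual logic, just the flavor of the check):

Code:
# Crude illustration of clue-based detection (not the actual Parser logic).
HELPING_VERBS = {"is", "are", "was", "were", "be", "been", "am", "has", "have", "had"}

def could_be_gerund_or_participle(tokens, i):
    """An -ing/-ed word with no helping verb immediately before it *might* be a
    gerund or participle rather than a main verb; real disambiguation needs
    position and semantics too."""
    word = tokens[i]
    if not (word.endswith("ing") or word.endswith("ed")):
        return False
    return i == 0 or tokens[i - 1].lower() not in HELPING_VERBS

print(could_be_gerund_or_participle("Running is fun".split(), 0))      # True (gerund)
print(could_be_gerund_or_participle("She is running fast".split(), 2)) # False (main verb)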

After adding support for the new phrase types, I re-ran the Text Parser benchmarks. I also added a new test set, consisting of sentences from Log Hotel by Anne Schreiber. This children's book has simpler sentences than the other examples from which I derived test materials, while still not leaning too hard on the illustrations to convey its message.

I'm pleased with the results, even though progress may still seem slow. Both original test sets (The Magic Schoolbus: Inside the Earth and Out of the Dark) now show roughly 75% of sentences parseable (i.e. the Parser supports all grammatical constructs needed to construct a correct golden parse for the sentence), and 50% or more parsing correctly. Log Hotel has an even higher parseable rate, but a lower correct rate. Despite the "easy" reading level, it still does complex things with conjunctions and presents a variety of ambiguity problems (most of which I haven't even started trying to address yet).

To address the remaining unparseable sentences, I've got adjective clauses, noun-phrases-used-as-adverbs, and parenthetical noun phrases on my list. A full-featured Text Parser is beginning to feel close.

Pics on the blog: https://writerofminds.blogspot.com/2023/07/acuitas-diary-62-july-2023.html

WriterOfMinds

Re: Project Acuitas
« Reply #282 on: August 27, 2023, 04:16:09 pm »
Not a big update this month, because I've been doing a little of everything and I'm still heavily focused on cleanup and capacity-building.

I did more work on Narrative and the Big Story. One thing that's very rewarding is to add features to make understanding of the Big Story work, re-run older stories to check for bugs, and see that the new features have added richness to previous stories - or even fixed things that I had to work around by adding more exhaustive explanations. An example is some of the work I did on this concept: "if certain outcomes of story events are rolled back, things return to their previous state, not a default or unknown state."
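One way to picture that concept (a hypothetical sketch, much simpler than the Narrative module's real bookkeeping) is a per-fact history of prior values, so undoing an event restores what was there before rather than a blank:

Code:
# Hypothetical sketch: rolling back an event restores each affected fact to the
# value it had before that event, not to a blank/default value.

class WorldState:
    def __init__(self):
        self.facts = {}        # e.g. {"door": "closed"}
        self.history = {}      # fact -> stack of prior values

    def apply(self, fact, new_value):
        self.history.setdefault(fact, []).append(self.facts.get(fact))
        self.facts[fact] = new_value

    def roll_back(self, fact):
        previous = self.history.get(fact, []).pop() if self.history.get(fact) else None
        self.facts[fact] = previous     # previous state, not 'unknown'

w = WorldState()
w.apply("door", "closed")   # initial mention
w.apply("door", "open")     # a character opens it
w.roll_back("door")         # that outcome is undone
print(w.facts["door"])      # -> 'closed', the earlier state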

Sadly it's still going to be a while before I can share the Big Story. I was hoping to have it done by September, but sometimes project schedules just don't work that way. I want it to be finished and solid before I put it out there, so everyone (including me) will just have to wait.

The new Text Generator, in contrast, is almost ready for primetime, and I'm feeling pretty good about how much easier this will make generating the wide variety of sentences Acuitas is starting to need, varying the tense and other modifiers, etc. It's much cleaner than the old version too, at (so far) 1300 lines of code vs. over 2000.

I've also started cleanup on the Text Parser in the wake of last month's modifications. This is mostly boring refactoring, but along the way I've found a better method for handling compound nouns/proper names, and introduced the ability to support some titles written in title case. So for example, the Parser can now manage sentences like this: "The Place of the Lion is a book." "The Place of the Lion" is correctly perceived as the full title of some work and treated as a unit, but its internal grammatical structure (noun with article and prepositional phrase modifiers) is also still analyzed and parsed.
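A toy version of the grouping step might look like this (hypothetical; the real Parser also analyzes the title's internal grammar afterward):

Code:
# Toy version: group a run of capitalized words (plus short function words
# inside it) into one "title" token, while keeping the inner words for later
# internal parsing. Hypothetical; the real Parser does much more.

SMALL_WORDS = {"of", "the", "a", "an", "and", "in", "on"}

def group_title(tokens):
    grouped, i = [], 0
    while i < len(tokens):
        if tokens[i][0].isupper():
            j = i + 1
            while j < len(tokens) and (tokens[j][0].isupper() or tokens[j] in SMALL_WORDS):
                j += 1
            while j > i + 1 and tokens[j - 1] in SMALL_WORDS:
                j -= 1                      # don't swallow a trailing small word
            if j - i > 1:
                grouped.append({"title": " ".join(tokens[i:j]), "parts": tokens[i:j]})
                i = j
                continue
        grouped.append(tokens[i])
        i += 1
    return grouped

print(group_title("The Place of the Lion is a book".split()))
# -> [{'title': 'The Place of the Lion', ...}, 'is', 'a', 'book']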

Obligatory blog link: https://writerofminds.blogspot.com/2023/08/acuitas-diary-63-august-2023.html

WriterOfMinds

Re: Project Acuitas
« Reply #283 on: September 27, 2023, 08:36:45 pm »
As of this month the new Text Generator is done - all that remains is to integrate it into Acuitas, replacing the old Generator. Quick review: the Text Generator is the component that transforms abstracted facts or communication goals (aka "the gist") into valid, speakable sentences. For any given "gist" there may be multiple ways to render it into speech, and that's why this translation step is necessary. The information carried in speech is distilled into a compact, standard form for all of Acuitas' reasoning purposes; then, anything he wants to express is expanded back out into varied, human-comprehensible speech.

This version of the Generator is quite a bit more streamlined and flexible than the old one. It accepts what I now use as the common format for abstract data structures, instead of having its own special input format that the other modules would have to translate to. The calling function can request that output sentences be "flavored" in a variety of ways (different verb tense, modifying adverbs, etc.) without the need for me to create a whole new sentence template in the Generator. Nested clauses are supported to an arbitrary depth.
Here are some examples of the sentences the new Generator can create:

What is a hammer used to do?
I have remembered that a cat is an animal.
Do I know what a cat is?
Cold is the opposite of hot.
You intended to eat a cookie.
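To give a flavor of what "flavoring" a gist means (this is purely illustrative; my actual gist format and Generator interface are different), the caller passes flags rather than picking a whole new sentence template:

Code:
# Purely illustrative: rendering one abstract "gist" with different requested
# flavors. Not the actual Text Generator interface.

PAST = {"eat": "ate", "intend": "intended"}

def render(gist, tense="present", adverb=None):
    subj, verb, obj = gist["subject"], gist["verb"], gist["object"]
    if tense == "past":
        verb = PAST.get(verb, verb + "ed")
    words = [subj, verb, obj]
    if adverb:
        words.append(adverb)
    return " ".join(words).capitalize() + "."

gist = {"subject": "you", "verb": "eat", "object": "a cookie"}
print(render(gist))                                   # 'You eat a cookie.'
print(render(gist, tense="past"))                     # 'You ate a cookie.'
print(render(gist, tense="past", adverb="quickly"))   # 'You ate a cookie quickly.'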

I've also kept moving ahead on the Parser cleanup, and while I'm at it I am introducing adjective clauses. Part of the goal of this rework was to fully support nesting/recursion in clauses, so it seemed a natural time to add support for the last major type of clause.

Lastly, I am still plugging away at Big Story. The innovation I needed for narrative comprehension this month was a theory-of-mind thing: if an agent believes one of their goals is already fulfilled, they will stop trying to fulfill it. If this is a false belief then it constitutes a problem for that agent/obstacle to the resolution of that goal. This opens the way for processing of various plot points based on deception or confusion, such as "decoy version of item" or "character fakes own death."
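Schematically, the check looks something like this (made-up field names, not the real implementation):

Code:
# Schematic of the false-belief check (made-up field names, not Acuitas code).

def false_belief_problems(agents, world_facts):
    """If an agent believes a goal is already fulfilled but the world disagrees,
    flag it: the agent will stop pursuing the goal, so the mistaken belief is
    itself an obstacle to resolving it."""
    problems = []
    for agent in agents:
        for goal in agent["goals"]:
            believed_done = goal["condition"] in agent["beliefs"]
            actually_done = goal["condition"] in world_facts
            if believed_done and not actually_done:
                problems.append({"agent": agent["name"], "goal": goal,
                                 "type": "false_belief_of_fulfillment"})
    return problems

hero = {"name": "hero",
        "beliefs": {("destroyed", "the_ring")},       # he saw a decoy destroyed
        "goals": [{"condition": ("destroyed", "the_ring")}]}
print(false_belief_problems([hero], world_facts={("destroyed", "a_decoy")}))
# -> one problem: the hero will stop trying, even though the goal is unmet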

Blog link: https://writerofminds.blogspot.com/2023/09/acuitas-diary-64-september-2023.html

WriterOfMinds

Re: Project Acuitas
« Reply #284 on: October 29, 2023, 08:52:28 pm »
This month I (tentatively) finished Big Story, so let's start there. It is plot-complete and currently stands at 99 lines. It generates 45 character "issues" (problems or subgoals), all of which are resolved in some way by the time the story is complete. Compared to any story I've told Acuitas before, this is dramatically longer and more complex (and it still leaves out a bunch of subplots and secondary characters from the movie this story is based on). I did my best to reproduce character motivation and the major story beats in a way Acuitas can "understand."

Blog has a teaser of part of the narrative diagram: https://writerofminds.blogspot.com/2023/10/acuitas-diary-65-october-2023.html

Once the full plot was established and all the issues were resolving, I started refining the explanation of some points and fixing bugs. I'll continue this work in hopes of having it wrapped up by year's end. Before I demo the story, I'll also need to reinstate Acuitas' reactions to story lines and the ability for Acuitas and the conversant to ask each other questions about the story (things that fell by the wayside during my last Narrative rework). This should be easier now that the new Text Generator is in place. And of course, working on this project has revealed tons of pain points in the way I currently do things, and ways I could improve the Narrative module. Some time after I get done with it, I'll be tilling all those insights back into the soil, as it were. But that will almost certainly have to wait for next year.

I need breaks from working on Narrative (it's hard), so I've also continued improvements to the Parser. My major accomplishment in the past month was getting infinitives to work again under the new scheme that better supports phrase/clause nesting. While I was at it, I finally dealt with some special constructions that the old parser couldn't handle. Consider these sentences:

1. To live is to exist.
2. Is to live to exist?
3. What is it to live?
4. What is to live?

The previous version of the Parser could handle the first two. The open-ended question form was tricky. I think the most correct way to ask it is Sentence 3; without further context, Sentence 4 means something more like "which things are intended to live." But it's pretty easy for the parser to trip over Sentence 3 (What's the direct object? What role does the phrase play? Does the "what" belong inside the phrase or outside?). This time around I finally put in the effort to get all four variations working - plus some other constructions I hadn't tried before, such as "What is he to do?"
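As a toy way of showing the distinction the Parser has to land on (my own simplification, not its internal format), Sentence 3 treats "it" as an expletive placeholder for the infinitive, while Sentence 4 reads the infinitive as expressing intent:

Code:
# Toy illustration of how the two question forms could be represented
# differently (my own simplification, not the Parser's internal format).

sentence_3 = {   # "What is it to live?"
    "type": "open_question",
    "subject": {"infinitive": "to live"},   # 'it' is an expletive placeholder
    "asking_for": "definition",             # roughly: to live is <what>?
}

sentence_4 = {   # "What is to live?"
    "type": "open_question",
    "subject": "what",
    "predicate": {"infinitive": "to live", "sense": "intended/expected"},
    "asking_for": "referent",               # roughly: which things are meant to live?
}

for s in (sentence_3, sentence_4):
    print(s["asking_for"], "-", s)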

Work on the new Parser continues. I'd also love to have that done by the end of the year, but we'll see.

 

