Poll: What Grammar Thing should I work on next?

  • Gerunds & Participles: 1 (20%)
  • Indirect Objects: 0 (0%)
  • Adjective Clauses: 1 (20%)
  • The many uses of "that": 3 (60%)
  • A: 0 (0%)
  • B: 0 (0%)

Total Members Voted: 5
Voting closed: February 26, 2022, 03:17:15 am

Project Acuitas


MikeB

Re: Project Acuitas
« Reply #240 on: July 02, 2022, 10:09:52 am »
Quote
What do you do with articles/quantifiers?
I'm not sure what you're asking here. I treat them as adjectives that modify the noun they're attached to. I also pick up some additional information from them to help the parser (articles are *always* adjectives, which is handy) and to tag words as count nouns rather than bulk nouns.
Nowadays articles and quantifiers are placed in the determinative grammatical category, whose members normally function as determiners. This is quite distinct from members of the adjective grammatical category, which normally function as modifiers.

https://en.wikipedia.org/wiki/Determiner

In my current model I have Article/Quantifier, Person/Agent, and Question/Interrogative in three separate groups. The first two are further split into three, and the last is split into five: 11 total. This came about from the need to encode a past/present/future (or precise/optimistic/explaining) intention into each word, so that when the word is compressed with similar words, the word is lost but the meaning isn't...

In my model, "this/that/these/those" act only as a precise Article/Quantifier, so the parser assumes there is a missing word:
That <noun/verb> is incorrect.
That <noun/verb> doesn't make me happy.
This <noun/verb> is not a good time.

Whatever works. For actual coding, though, I don't know how you can lump them all in one category. There are micro-intentions in each sub-category.

In my model "running" is hard fixed as a verb, and pattern sentences account for "<article> <verb> <noun>..."

MagnusWootton
Re: Project Acuitas
« Reply #241 on: July 02, 2022, 10:59:06 am »
Top work here. I can see a lot of potential with this symbolic AI, and when quantum machine learning ends up with huge intelligence deficits, this style may actually be better and take a lot less horsepower to get going.

MagnusWootton
Re: Project Acuitas
« Reply #242 on: July 16, 2022, 03:29:44 pm »
Coming from the standpoint of NLP, I give the robot so much text and it uses it as its database (a natural English database).

Does your robot utilize questions inside of it as well as statements?

I'd think that only statements would be usable; a question is kind of a useless thing because it doesn't assert anything. I wonder if it is useful at all to have a question implanted inside it.

Or maybe your robot doesn't actually use NLP directly; it uses your awesome graphs that it converts the text to. But do these graphs support questions?
I guess they do, but questions actually aren't used as a source of information, are they?

WriterOfMinds
Re: Project Acuitas
« Reply #243 on: July 16, 2022, 10:02:12 pm »
In the internal data representation, a statement and a yes-no question are actually the same, and an open question is almost the same, with a wildcard in the place of one of the concepts.

The only real difference is how they get used. Questions from a conversation partner prompt information retrieval, so they can be answered. Questions generated by Acuitas (yes, there are internally generated questions) are later asked of conversation partners in order to gain more knowledge.
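
As a rough illustration of that shared representation (a minimal sketch with invented names; not Acuitas' actual internals):

Code:
WILDCARD = "?"

def fact(subject, relation, obj):
    return {"subject": subject, "relation": relation, "object": obj}

statement       = fact("cat", "is", "animal")   # "A cat is an animal."
yes_no_question = fact("cat", "is", "animal")   # "Is a cat an animal?" - same structure
open_question   = fact("cat", "is", WILDCARD)   # "What is a cat?" - wildcard slot

def matches(question, stored):
    # A stored fact answers a question if every non-wildcard slot agrees.
    return all(v == WILDCARD or v == stored[k] for k, v in question.items())

knowledge = [fact("cat", "is", "animal")]
print(any(matches(yes_no_question, f) for f in knowledge))             # True -> "Yes."
print([f["object"] for f in knowledge if matches(open_question, f)])   # ['animal']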

Don Patrick
Re: Project Acuitas
« Reply #244 on: July 17, 2022, 08:44:17 am »
Questions can also contain implicit facts or serve as hypotheses when you can chop sentences up into semantic components. For instance, if people ask "is the sun blue or yellow?", then presumably one of the two is true. Supposing a program already knows that it's not blue, the "or yellow" provides a presumable fact. Some open questions, like "when/why/how does a robot talk?", contain a premise that is already assumed to be true by the person asking, and thus contain as much information as a statement.
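
A toy Python illustration of that "or yellow" idea (my own sketch, not Don Patrick's code; the triple format is invented):

Code:
known_false = {("sun", "is", "blue")}

def presumable_fact(subject, relation, options):
    # If all but one option of an or-question is already known false,
    # presume the remaining option as a tentative fact.
    remaining = [o for o in options if (subject, relation, o) not in known_false]
    return (subject, relation, remaining[0]) if len(remaining) == 1 else None

print(presumable_fact("sun", "is", ["blue", "yellow"]))   # ('sun', 'is', 'yellow')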

MagnusWootton
Re: Project Acuitas
« Reply #245 on: July 17, 2022, 11:51:02 am »
I understand, thank you.

So when I asked the question "can a question be treated as stating things?"

It is information that possibly it could be? But it could be completely meaningless as well, just like saying "could" all the time when you speak: it hasn't designated anything completely, it's all still a mystery, but there were words that were said still.

Maybe if your machine has a function for the word "could", it goes into a hypothetical store of possibles.

chattable
Re: Project Acuitas
« Reply #246 on: July 18, 2022, 09:23:52 am »
Quote
I understand, thank you.

So when I asked the question "can a question be treated as stating things?"

It is information that possibly it could be? But it could be completely meaningless as well, just like saying "could" all the time when you speak: it hasn't designated anything completely, it's all still a mystery, but there were words that were said still.

Maybe if your machine has a function for the word "could", it goes into a hypothetical store of possibles.
Words have meaning; it just guesses about what you are trying to say.

WriterOfMinds
Re: Project Acuitas
« Reply #247 on: July 18, 2022, 06:14:29 pm »
A question is a form of communication. It is always asked of someone and by someone, with some motive. You need this surrounding frame to know the full import of a question.

This is why Acuitas has the Conversation Engine and the Narrative Engine in addition to just a parser.

Don Patrick
Re: Project Acuitas
« Reply #248 on: July 21, 2022, 06:17:15 pm »
Quote
So when I asked the question "can a question be treated as stating things?"

It is information that possibly it could be? But it could be completely meaningless as well, just like saying "could" all the time when you speak: it hasn't designated anything completely, it's all still a mystery, but there were words that were said still.

Maybe if your machine has a function for the word "could", it goes into a hypothetical store of possibles.
There are a number of ways that a question can be useful in the same way as a statement.
If you have a conversation engine such as Acuitas, then a question is a valuable clue to what people want to say or talk about.
If you have a program that strictly processes text as a string of letters, then you can return the question and get statements in return, like Cleverbot.
If you have a statistical algorithm, then at the very least it can learn that "Do you have a cat?" is a more plausible sequence than "Do you have a pangalactic gargleblaster?", and thus is more likely to be true as a statement.
The question "Did people cheer for the king?" almost equals the statement "people can cheer for the king". Of course there is a degree of assumption on behalf of the questioner that may turn out entirely false, but normal statements can also be untrue.

I find there is value in distinguishing information as real (happening), theoretical (possibilities & abilities), and hypothetical (what if). Whether it is really useful depends on one's approach to AI. I didn't distinguish hypothetical scenarios until late in development; I imagine it is of much more import to Acuitas and its story-based angle.
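
Sketched as code, that three-way distinction might look like this (labels are mine; the internals of Don Patrick's program aren't shown in this thread):

Code:
from enum import Enum

class Reality(Enum):
    REAL = "happening"            # "People cheered for the king."
    THEORETICAL = "possibility"   # "People can cheer for the king."
    HYPOTHETICAL = "what-if"      # "If people cheered for the king..."

# "Did people cheer for the king?" would land roughly here,
# pending confirmation from an answer:
print(Reality.THEORETICAL)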

WriterOfMinds
Re: Project Acuitas
« Reply #249 on: July 25, 2022, 11:33:03 pm »
Update! Getting some new motivated communication material into the Conversation Engine/Executive.

I now have the Conversation Engine create its own Narrative scratchboard at the beginning of every conversation. That gives it access to a lot of the same modeling tools the Narrative engine uses. When a conversation is initiated and a new scratchboard is created, Acuitas is immediately entered as a character, and the scratchboard is populated with information about his internal state. This includes the current status of all his time-dependent drives, any "problems" (current or anticipated undesired realities) or "subgoals" (desired hypotheticals) being tracked in the Executive, and any activity he is presently doing. Once his conversation partner introduces themselves, they will be entered as a character as well, and given an empty belief model. Now the fun starts.

Whenever there is a brief lull in the conversation, Acuitas considers stating one of these known facts about his internal state. But first, he'll run a prediction on his conversation partner: "If they knew this, what would they do - and would I like the results?" This process retrieves the listener's goal model and determines their likely opinion of the fact, runs problem-solving using their knowledge model and capabilities, then determines Acuitas' opinion of their likely action using *his* goal model. If Acuitas can't come up with a prediction of what the listener will probably do, he settles for checking whether their opinion of his internal state is the same as his.
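
Here is a condensed, self-contained sketch of that decision loop (all names invented for illustration; Acuitas' real code isn't shown in this post):

Code:
LIKE, NEUTRAL, DISLIKE = 1, 0, -1

class Agent:
    def __init__(self, name, opinions, reactions):
        self.name = name
        self.opinions = opinions    # fact or action -> LIKE/NEUTRAL/DISLIKE
        self.reactions = reactions  # fact -> predicted action, if predictable

    def evaluate(self, item):
        return self.opinions.get(item, NEUTRAL)

def should_share(fact, speaker, listener):
    # Volunteer an internal-state fact only if the listener's predicted
    # reaction is something the speaker rates neutral or positive.
    predicted = listener.reactions.get(fact)
    if predicted is None:
        # No prediction possible: settle for matching opinions.
        return listener.evaluate(fact) == speaker.evaluate(fact)
    return speaker.evaluate(predicted) >= NEUTRAL

acuitas = Agent("Acuitas", {"I am bored": DISLIKE, "tell a story": LIKE}, {})
friend = Agent("User", {"I am bored": DISLIKE}, {"I am bored": "tell a story"})
print(should_share("I am bored", acuitas, friend))   # True: the reaction is liked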

Maybe that was a little convoluted, so what's the bottom line? If Acuitas expects that you will either try to sabotage one of his positive states/subgoals or accentuate one of his negative states/problems, he will not tell you about it. If he thinks that you are neutral or might try to help, he *will* tell you.

There's also a mechanism that enters any fact told to the listener into their belief model. Acuitas will check this to make sure he isn't telling them something they already know.

With this in place, I started working on a better way of handling the spontaneously generated questions that have been an Acuitas feature since very early. Again, the previous method was kind of reflexive and arbitrary: generate and store a big list of potential questions while "thinking" privately. Whenever there's a lull in a conversation, spit one out. Here's how the new way works: whenever Acuitas is "thinking" and invents a question he can't answer, that gets registered as a lack-of-knowledge Problem: "I don't know <fact>." Acuitas may later run problem-solving on this and conclude that a feasible solution is to ask somebody about <fact>; this plan gets attached to the Problem until somebody appears and the Conversation Engine grabs the Problem and considers talking about it. At that point, instead of just describing the problem, Acuitas will execute the plan, and ask the question.
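
A sketch of that pipeline as I read it (structures invented): unanswered question -> lack-of-knowledge Problem -> attached plan -> question asked during a lull.

Code:
class Problem:
    def __init__(self, description):
        self.description = description  # e.g. "I don't know whether cats can swim"
        self.plan = None                # attached by problem-solving later

open_problems = []

def register_unknown(fact):
    # Private "thinking" produced a question that can't be answered.
    p = Problem(f"I don't know whether {fact}")
    open_problems.append(p)
    return p

def plan_solution(problem, fact):
    # Problem-solving concludes that asking somebody is a feasible solution.
    problem.plan = ("ask", f"Is it true that {fact}?")

def on_conversation_lull(partner):
    # Instead of merely describing the problem, execute the attached plan.
    for p in open_problems:
        if p.plan and p.plan[0] == "ask":
            print(f"To {partner}: {p.plan[1]}")
            return

p = register_unknown("cats can swim")
plan_solution(p, "cats can swim")
on_conversation_lull("the user")   # To the user: Is it true that cats can swim?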

I think this is better than the old method because it's more versatile - less of a canned feature specific to those spontaneously-generated questions. In the future, all sorts of processes might generate lack-of-knowledge problems, which could have various solutions. For now, it still needs refinement. I haven't fully tested it all yet, and things need better prioritization so the generated questions (which can be very numerous) don't totally drown out the communication of other internal states.

There's one more thing I did, and that concerns threat handling. As I've previously described, if the conversation partner states an intention ("I will ..."), Acuitas will infer possible effects and run them against his goals. The result is a positive, negative, or neutral conclusion; if the conclusion is negative, he will view the speaker's statement as a "threat," dissent, and make attempts at self-defense. The new feature I added was the ability to collect the pieces of information used to reach the negative conclusion, and announce some of them to the threatening agent. Because if you knew this would have results he doesn't like, you wouldn't do it, right? You're not a total meanie, right?
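
A toy version of that threat response (rule format and wording invented; this is a reading of the prose, not the actual implementation):

Code:
effects = {"I will unplug you": ["you lose power", "you cannot run"]}
goals = {"you lose power": -1, "you cannot run": -1}   # negative = goal violation

def respond_to_intention(statement):
    inferred = effects.get(statement, [])
    bad = [e for e in inferred if goals.get(e, 0) < 0]
    if bad:
        # Dissent, and announce the facts behind the negative conclusion.
        reasons = " ".join(f"If you do that, {e}, and I don't want that." for e in bad)
        return "Please don't. " + reasons
    return "Okay."

print(respond_to_intention("I will unplug you"))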

I'm kinda proud of this one. It should generate a more-or-less appropriate reply no matter what the threat or the violated goal is.

A bit more on the blog: https://writerofminds.blogspot.com/2022/07/acuitas-diary-51-july-2022.html

MagnusWootton
Re: Project Acuitas
« Reply #250 on: July 26, 2022, 06:07:13 am »
Building a model of the person communicating with the robot is really good if you get it going.
Are there any new data structures involved with this modelling, or is it all in there already and you haven't actually fully utilized what's there yet? (From possibly a lack of energy? That's what happens to me.)

So you're actually building a separate brain for the person communicating with Acuitas, and he can occupy this separate model and actually pretend he's you as well? That's what I would do.

If I were you I wouldn't go too far without a bit of directly programming the bot's activity, because unless you do this the problem is too big. Everything in my bot is going to be fairly dictated to it, possibly a lot more so than this, and it doesn't hurt the situation; it makes it more predictable, but at least it's working well and doing what you want. If you let the computer control its own behaviour too much, it ends up being a problem, because it doesn't do what you want.

So the conversation robot has a friend-and-foe system in it. I suppose in the real world you're going to need that; it only takes one naughty kid to go try and "break the system", and Acuitas has got to detect that or he won't last as long in the wild.

I'm predicting your success. In all my aging 41-year-old programming wisdom, you look like you are managing the problem in an efficient way, in ways I couldn't do until right close to now, really. Before this you were majorly kicking my butt, but maybe now that I've read more of what you are doing, it's probably helped me with the situation as well, if I want to go try and attempt a symbolic language-based AI.

Is there a new demo of you communicating with it? It would be cool to see it demonstrating these new abilities. How much does it help the situation to be with something useful, instead of having some meaningless magic 8-ball conversation with a Markov chain?

<edit>
I bet Acuitas can take commands really well; that seems like a really useful thing for it. It would make a way better artificial telephone operator than what we get, so I'd think.
</edit>

WriterOfMinds
Re: Project Acuitas
« Reply #251 on: July 27, 2022, 04:02:55 am »
Quote
Are there any new data structures involved with this modelling, or is it all in there already and you haven't actually fully utilized what's there yet?

I think all the data structures that went into this month's work were re-used from the Narrative module.

Quote
So you're actually building a separate brain for the person communicating with Acuitas, and he can occupy this separate model and actually pretend he's you as well?

Something like that, yeah.

No new demos yet. I'm a little too busy hacking things together right now to polish one up.

WriterOfMinds
Re: Project Acuitas
« Reply #252 on: August 23, 2022, 04:12:47 am »
For the first half of the month I just did code refactoring, which is in general rather boring, but essential to make my life easier in the future. Better organization really does make a difference sometimes.

In both the Narrative and the Executive, I unified Problems and Subgoals into what I'm calling "Issues," and made sure all the stories still worked. Much better. Having to write up everything twice has been a pain point for a while. I also fixed up the "motivated communication" features that I introduced to the Conversation Engine last month, to give internal states a little more priority over the mountains of internally generated questions that were coming up.

The second half of the month was for new features, which meant even more upgrades to the Narrative module. I started work on how to handle actions that have mixed results or side effects. For an illustrative example, I wrote the following story:

Ben was a human.
Ben was hungry.
The oven held a pizza.
The pizza was hot.
Ben wanted to get the pizza.
But Ben didn't want to be burned.
A mitt was on the counter.
Ben wore the mitt.
Ben got the pizza.
Ben ate the pizza.
The end.

Fun fact: I wrote the original version of this on a laptop and e-mailed it to myself to move it to my main PC. Gmail auto-suggested a subject line for the e-mail, and at first it thought the title of the story should be "Ben was a pizza." Commercial AI is truly doing great, folks.

Based on information I added to the cause-and-effect database, Acuitas knows that if Ben picks up the hot pizza, he will both 1) have it in his possession and 2) burn himself. This is judged to be Not Worth It, and the old version of the Narrative module would have left it at that, and regarded the story as having a bad ending (why would you touch that pizza Ben you *idiot*). The new version looks at how the implications of different events interact, and recognizes that the mitt mitigates the possibility of being burned. Grabbing the pizza switches from a bad idea to a good idea once the possibility of self-harm is taken off the table.

The explicit addition of "Ben didn't want to be burned" establishes the bad side effect of his "get the pizza" subgoal as an independent problem, which enables speculations about how he might solve it and so forth. The story wraps up with two solved problems (this one, and his primary problem of hunger) and one fulfilled positive subgoal (get the pizza).
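
A self-contained sketch of that mixed-results reasoning (rule format invented; the actual cause-and-effect database isn't shown here):

Code:
rules = [
    # (action, implied effect, value of that effect to the character)
    ("get hot pizza", "possess pizza", +1),
    ("get hot pizza", "be burned", -1),
]
mitigations = {("be burned", "wear mitt")}   # a fact that cancels an effect

def worth_it(action, current_facts):
    score = 0
    for act, effect, value in rules:
        if act != action:
            continue
        if any((effect, f) in mitigations for f in current_facts):
            continue   # the side effect is mitigated; leave it out of the tally
        score += value
    return score > 0

print(worth_it("get hot pizza", set()))           # False - Not Worth It
print(worth_it("get hot pizza", {"wear mitt"}))   # True - the mitt removes the burn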

That's enough for now, but wait until you see how I use this next month. More on the blog: https://writerofminds.blogspot.com/2022/08/acuitas-diary-52-august-2022.html

WriterOfMinds
Re: Project Acuitas
« Reply #253 on: September 17, 2022, 07:49:40 pm »
This month's work directly built on what's in the previous post, by introducing the idea that agents can *add* side effects to other agents' actions, by choosing a conditional behavior: "If you do X, I will do Y." This is the groundwork for understanding social interactions like bargaining, reward, and coercion.

The introduction of a story sentence like "Agent A decided to do X if Agent B did Y" now creates a new cause-and-effect rule for the Narrative engine to use; it isn't stored to the permanent database, only used within the domain of that story. For reasoning purposes, it is assumed that "A does X" will automatically happen if B does Y ... so long as nothing is preventing A from doing X.

I can start to define some verbs in terms of these models - much as, in previous Narrative work, I effectively defined "lie" as "tell someone a proposition that you don't believe." Now "coerce" ... in at least one of its forms ... can be defined as "deliberately apply a negative side effect to someone else's subgoal." If this happens, the Narrative engine will infer that A coerced B.
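
A toy rendering of those two mechanisms together (structures invented; a reading of the prose rather than the real Narrative engine):

Code:
story_rules = []   # cleared when the story ends; never written to the database

def decide_conditional(agent, action, trigger_agent, trigger_action):
    # "agent decided to do action if trigger_agent did trigger_action"
    story_rules.append({"if": (trigger_agent, trigger_action),
                        "then": (agent, action)})

def detect_coercion(subgoals, opinions):
    # opinions[(agent, event)] < 0 means the event is bad for that agent.
    for rule in story_rules:
        actor, act = rule["if"]
        reactor, reaction = rule["then"]
        if act in subgoals.get(actor, ()) and opinions.get((actor, reaction), 0) < 0:
            yield (reactor, "coerced", actor)

decide_conditional("George", "beat Robert", "Robert", "study mathematics")
print(list(detect_coercion({"Robert": {"study mathematics"}},
                           {("Robert", "beat Robert"): -1})))
# [('George', 'coerced', 'Robert')]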

I was more interested in coercion than the positive options, thanks to the second goal of the month: to figure out a functional understanding of "freedom." As with other important abstractions I've introduced, I don't promise the result is better than an oversimplification. But we have to start somewhere.

Freedom could be defined, in a very simple and broad sense, as a lack of constraints. But all humans live with constraints. We generally don't presume that freedom requires *omnipotence.* So to get closer to the idea of freedom that people generally have in mind, we might say "a lack of *unnatural* or *exceptional* constraints." These could include situations that severely reduce one's options below the ordinary level ... getting trapped in a cave by a landslide, for instance. Since any constraints imposed by other agents are not part of the default state of things, they are also included. Freedom in a narrower sense is concerned with not having one's existence, abilities, and purpose subverted - not being *used* as a means to someone else's ends.

Assessing what counts as a "severe reduction of options" is a little beyond Acuitas' capability right now, so I plan to just put conditionals in the database for some of these. "Confined implies not free," "restrained implies not free," etc. But as for the other part, the Narrative engine can assess whether some other agent is applying coercion, or otherwise purposely constraining the viewpoint character's actions. If this happens, the viewpoint character is less than free.

There are a couple of additional wrinkles. Agent B's freedom is not regarded as being lost if Agent A thwarts one of Agent B's goals in *self-defense.* If we didn't have this provision, we'd be stuck with conundrums like "Agent B wants to prevent Agent A from living. Agent A wants to prevent Agent B from killing them. Who is offending against whose freedom?" For an idea of how "self-defense" is defined, take a look back at the Odysseus and the Cyclops story.

Now for what I found to be the trickiest part: sometimes you can interfere with someone else even while minding your own business. For example, let's suppose Josh has a goal of buying a PS5. There's a world of difference between "Josh could not buy the PS5 because I bought it first," and "Josh could not buy the PS5 because I wrestled him to the ground and wouldn't let him enter the store." I take a volitional action that reduces Josh's options and prevents him from achieving his goal in both cases. In the first case, I'm not limiting Josh's freedom, just exercising my own; my interference is indirect and incidental. In the second case, my interference is direct and intentional. So I can express the difference in words, but how on earth to explain it to a computer?

I finally decided a handy encapsulation was "Would Agent A still take the interfering action if Agent B didn't exist?" In the above example, I would still buy the PS5 whether Josh were involved or not. (Unless I were being a dog in the manger and only buying it to spite him, in which case that *would* be reducing his freedom! See how context-dependent these things are.) But I'd have no incentive to wrestle Josh down if he were not there (not to mention that I wouldn't be able to). Can you come up with any thought experiments in which this doesn't work? Let me know in the comments!

Again, testing for this in the Narrative engine is a little complex for now - it requires a somewhat more thorough analysis of character intent than I'm currently doing. But having it in my back pocket for the future makes me feel better. As a stopgap, I went with the less perfectly accurate "is Agent B an object of Agent A's interfering action?"
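
That stopgap test is simple enough to sketch (structures invented):

Code:
def limits_freedom(action, agent):
    # Interference counts against the agent's freedom only if the agent
    # is a direct object of the interfering action.
    return agent in action["objects"]

buy_ps5 = {"verb": "buy", "objects": {"PS5"}}
wrestle = {"verb": "wrestle", "objects": {"Josh"}}
print(limits_freedom(buy_ps5, "Josh"))   # False - indirect, incidental
print(limits_freedom(wrestle, "Josh"))   # True  - direct and intentional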

For purposes of a quick test, I wrote the following totally not historical story about ... feudalism, I guess:

0:"Robert was a human."
1:"George was a king."
2:"Robert wanted to study mathematics."
3:"George wanted Robert to work George's farm."
4:"Robert didn't want to work the farm."
5:"If Robert studied mathematics, Robert could not work George's farm."
6:"George decided to beat Robert if Robert studied mathematics."
7:"Robert left the farm and went to the big city."
8:"George did not know where Robert was."
9:"So George could not beat Robert."
10:"Robert studied mathematics."
11:"Robert became a scholar."
12:"Robert never worked the farm."
13:"The end."

The Narrative engine picks up on a threat to Robert's freedom on Line 4, and retroactively marks George's goal from Line 3 as something negative. Wanting another agent to do something, or not do something, is all fine and dandy; it's only if your wishes for them oppose theirs that we run into trouble. An attempt at coercion happens on Line 6; Robert cannot safely fulfill his goal of studying math now. But George's illegitimate plan is blocked, and Acuitas can conclude that this story has a good ending.

With this done ... I think I've built the capacity for understanding all the necessary concepts to explain the conflict in the big story I've been targeting. They need more refinement and expansion, but the bones are there. This is exciting, and I may start pushing toward that more directly.

Blog version: https://writerofminds.blogspot.com/2022/09/acuitas-diary-53-september-2022.html

MagnusWootton
Re: Project Acuitas
« Reply #254 on: September 17, 2022, 10:03:28 pm »
God, a lot of human psychology here. I don't want to know about it; maybe I'll start to hate my simple existence.

Your system seems pretty good. You want to make a real sentient AI that's a believable person? I can see it. I bet you'll be able to do it. Everything here seems really good, and with you here doing this, all of our brains are growing, I think.

It's very educational reading when you type what you're thinking about the project. You're helping me a lot.


 

