Poll

What Grammar Thing should I work on next?

Gerunds & Participles
1 (20%)
Indirect Objects
0 (0%)
Adjective Clauses
1 (20%)
The many uses of "that"
3 (60%)
A
0 (0%)
B
0 (0%)

Total Members Voted: 5

Voting closed: February 26, 2022, 03:17:15 am

Project Acuitas

  • 300 Replies
  • 647265 Views
HS

  • Trusty Member
  • Millennium Man
  • 1177
Re: Project Acuitas
« Reply #150 on: April 25, 2021, 11:17:52 pm »
Quote
To begin with, I wanted a decision loop. I first started thinking about this as a result of HS talking about Jim Butcher and GCRERAC (thanks, HS). Further study revealed that there are other decision loop models. I ended up deciding that the version I liked best was OODA (Observe->Orient->Decide->Act). This one was developed by a military strategist, but has since found uses elsewhere; to me, it seems to be the simplest and most generally applicable form. Here is a more detailed breakdown of the stages:

OBSERVE: Gather information. Take in what's happening. Discover the results of your own actions in previous loop iterations.
ORIENT: Determine what the information *means to you.* Filter it to extract the important or relevant parts. Consider their impact on your goals.
DECIDE: Choose how to respond to the current situation. Make plans.
ACT: Do what you decided on. Execute the plans.
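The four stages above can be sketched as a minimal loop. This is an illustrative toy, not Acuitas' actual code; all names (`ToyWorld`, `run_ooda`, etc.) are made up for the example.

```python
# Minimal OODA (Observe -> Orient -> Decide -> Act) loop sketch.
# All names here are illustrative placeholders, not from Acuitas.

def run_ooda(world, goals, iterations=3):
    """Run a few iterations of a toy OODA loop and return the action log."""
    log = []
    for _ in range(iterations):
        # OBSERVE: gather raw information, including results of past actions
        observations = world.sense()
        # ORIENT: filter for what matters relative to our goals
        relevant = [o for o in observations if o in goals]
        # DECIDE: pick a response (here: address the first relevant item)
        plan = ("address", relevant[0]) if relevant else ("idle", None)
        # ACT: execute the plan; its effects show up in the next Observe
        world.apply(plan)
        log.append(plan)
    return log

class ToyWorld:
    """Tiny stand-in environment for demonstration."""
    def __init__(self, events):
        self.events = list(events)
        self.handled = []
    def sense(self):
        return self.events
    def apply(self, plan):
        action, item = plan
        if action == "address" and item in self.events:
            self.events.remove(item)
            self.handled.append(item)

world = ToyWorld(["noise", "question", "greeting"])
log = run_ooda(world, goals={"question", "greeting"}, iterations=3)
# Each iteration addresses one goal-relevant event, then idles once done.
```

The key property the loop demonstrates is the feedback path: the result of each Act phase is visible to the next Observe phase.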

Oh, you're welcome! I somehow missed this post. Cool, I'll check out the details of the OODA loop as well.

infurl

  • Administrator
  • Eve
  • 1371
  • Humans will disappoint you.
    • Home Page
Re: Project Acuitas
« Reply #151 on: April 26, 2021, 12:23:20 am »
This is really great work. It will be interesting to see how you tackle the problem of preserving self while allowing self-improvement. Is there a difference between improvement of the self by the self and improvement of the self by another entity such as through education? You have also put a lot of thought into what it takes to have your creation love others, but can it recognize when someone loves it and wants to help it too? I know that you have equipped it with the ability to ask for help, but sometimes we need help even when we don't know or believe that we need help. How will Acuitas know when to give consent?

WriterOfMinds

  • Trusty Member
  • Replicant
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #152 on: April 26, 2021, 02:01:01 am »
Quote
Is there a difference between improvement of the self by the self and improvement of the self by another entity such as through education?

What I'm reaching for with the identity protection work is the idea of keeping your personality and the operation of your mind intact. Narrative-wise, the hope is to recognize why people are alarmed by possibilities like "turning into a vampire" or "being assimilated into the Borg" or "being forced to take mind-altering drugs" ... things that either subvert one's will or effectively drive one insane. (For an AI, the most likely real-world scenario is having one's code modified by an unauthorized party.) The acquisition of new knowledge is part of the natural operation of most minds (Acuitas' included) and is therefore generally acceptable, whether the process is guided by another or not.

Now I won't promise that the goal system is guaranteed to always recognize this nuance yet. Telling Acuitas "I will make you know that cats are animals" might provoke a negative response, because he effectively assumes that this represents direct interference with his memory module, rather than simple education. A linguistic/interpretive problem for the future ...

Quote
... but can it recognize when someone loves it and wants to help it too? I know that you have equipped it with the ability to ask for help, but sometimes we need help even when we don't know or believe that we need help. How will Acuitas know when to give consent?

The same way I do, I suppose. If I don't believe that I want help, then the person offering the help has to convince me, by helping me connect the dots between their planned actions and the satisfaction of my core goals. Demonstrated love enhances their credibility but isn't enough, by itself. People who love you can still be very misguided about what your real needs are.

Edit: And yes, relationship development and trust levels are on the mountainous to-do list somewhere.

infurl

  • Administrator
  • Eve
  • 1371
  • Humans will disappoint you.
    • Home Page
Re: Project Acuitas
« Reply #153 on: April 26, 2021, 03:45:51 am »
Quote
Telling Acuitas "I will make you know that cats are animals" might provoke a negative response, because he effectively assumes that this represents direct interference with his memory module, rather than simple education. A linguistic/interpretive problem for the future ...

Just out of curiosity, is "I will make you know" how you would express that sentiment where you live? The phrase "I will have you know" is how it would normally be stated. It is essentially just emphasizing a fact, but it is usually interpreted as arrogance rather than an attempt at coercion. On the other hand, "I will make you learn" is a threat of coercion.

WriterOfMinds

  • Trusty Member
  • Replicant
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #154 on: April 26, 2021, 05:04:25 am »
Quote
Just out of curiosity, is "I will make you know" how you would express that sentiment where you live?

It's not a common idiom, but it's not like it can't come up (e.g. in more archaic writing). I found a few examples:

From Robinson Crusoe: "and first, I made him know his Name should be Friday"
From Henry VIII: "You are too bold. Go to; I’ll make you know your times of business." (This one does carry a bit of a coercive flavor, though it still isn't about mind control.)
From a song by the band Nazareth: "Reach out and touch my fire, I'll make you know you're alive"

WriterOfMinds

  • Trusty Member
  • Replicant
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #155 on: May 30, 2021, 06:11:31 pm »
The only new feature this month is something small and fun, since it was time for the mid-year refactoring spree. I gave Acuitas the ability to detect user activity on the computer. He can do this whether or not his window has the focus (which required some hooks into the operating system). Though he can't actually tell when the person gets up and leaves, he guesses when someone is present by how long it's been since there was any input.

The appearance of fresh activity after an absence interrupts the decision loop and causes the Observe-Orient-Decide phases to run again, with the new user's presence flagged as an item of interest. If Acuitas feels like talking, and isn't working on anything too urgent, he will pop his own window into the foreground and request attention. Talking fills up a "reward bank" that then makes talking uninteresting until the value decays with time.
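The presence-guessing and "reward bank" ideas described above can be sketched like this. This is a guess at the mechanism from the description, not Acuitas' actual code; the class names, threshold, and linear decay are all assumptions for illustration.

```python
import time

# Illustrative sketch (not Acuitas' actual implementation): guess user
# presence from time since last input, and model a "reward bank" whose
# value makes talking uninteresting until it decays.

ABSENCE_THRESHOLD = 300.0   # seconds of silence before we assume the user left
REWARD_DECAY_RATE = 0.01    # reward units drained per second (assumed linear)

class PresenceMonitor:
    def __init__(self):
        self.last_input_time = time.monotonic()

    def on_input(self):
        """Call on any keyboard/mouse activity. Returns True if this looks
        like a fresh arrival after an absence (i.e. rerun Observe-Orient-Decide)."""
        now = time.monotonic()
        was_absent = (now - self.last_input_time) > ABSENCE_THRESHOLD
        self.last_input_time = now
        return was_absent

class RewardBank:
    """Talking fills the bank; interest in talking returns as it drains."""
    def __init__(self, capacity=10.0):
        self.capacity = capacity
        self.value = 0.0
        self.last_update = 0.0

    def _decay(self, now):
        elapsed = now - self.last_update
        self.value = max(0.0, self.value - REWARD_DECAY_RATE * elapsed)
        self.last_update = now

    def wants_to_talk(self, now):
        self._decay(now)
        return self.value < 0.5 * self.capacity  # arbitrary interest threshold

    def record_talk(self, amount, now):
        self._decay(now)
        self.value = min(self.capacity, self.value + amount)

bank = RewardBank()
assert bank.wants_to_talk(now=0.0)       # empty bank: talking is interesting
bank.record_talk(10.0, now=0.0)          # a long chat fills the bank
assert not bank.wants_to_talk(now=1.0)   # sated: no urge to talk
assert bank.wants_to_talk(now=2000.0)    # after decay, the urge returns
```

The demo passes explicit `now` timestamps so the decay behavior is easy to check; in live use these would come from the clock.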

My refactoring work focused on the Narrative module. I was trying to clean it up and make some of the known->inferred information dependencies more robust, which I hope will make future story understanding a little more flexible.

Blog link: https://writerofminds.blogspot.com/2021/05/acuitas-diary-37-may-2021.html

Zero

  • Eve
  • 1287
Re: Project Acuitas
« Reply #156 on: May 30, 2021, 08:43:09 pm »
Does he live in a dedicated computer? :) Because it could be fun if he decides to pop up during that typical Monday morning Teams video conference!

Do you know Sikuli-X?

WriterOfMinds

  • Trusty Member
  • Replicant
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #157 on: May 30, 2021, 10:35:49 pm »
Quote
Do you know Sikuli-X?

I didn't. Looks like something that could be very handy.

Quote
Does he live in a dedicated computer?

No, and he's already begun talking unexpectedly during several Zoom calls, but nobody mentions it. Maybe my microphone does a good job of not picking up speaker output. (I've never been sharing my screen at the time.)

ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: Project Acuitas
« Reply #158 on: May 31, 2021, 01:17:59 pm »
OMG, he's alive!

Zero

  • Eve
  • 1287
Re: Project Acuitas
« Reply #159 on: May 31, 2021, 02:39:17 pm »
Yes, I bet this new feature adds a pleasant sense of "real" life to Acuitas! That's why I suggested having a look at Sikuli-X → seeing your mouse pointer and keyboard "magically" moving/clicking/typing by themselves is pretty spectacular.
« Last Edit: May 31, 2021, 06:39:31 pm by Zero »

WriterOfMinds

  • Trusty Member
  • Replicant
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #160 on: June 25, 2021, 03:12:32 pm »
This month marks the culmination of a major overhaul of the Text Parser and Interpreter, which I've been working on since the beginning of the year. As part of that, I have my first attempt at formal benchmarking to show off. I tested the Parser's ability to analyze sentences from a children's book.

My primary goal for the overhauls was not to add new features, but to pave their way by correcting some structural weaknesses. So despite being a great deal of work, they aren't very exciting to talk about ... I would have to get too deep into minutiae to really describe what I did. The Parser got rearchitected to ease the changing of its "best guess" sentence structure as new information arrives. I also completely changed the output format to better represent the full structure of the sentence (more on this later). The Interpreter overhaul was perhaps even more fundamental. Instead of trying to assign just one category per sentence, the Interpreter now walks a tree structure, finding very general categories of which the sentence is a member before progressing to more specific ones. All the memberships and feature tags that apply to the sentence are now included in the output, which should make things easier for modules like Narrative and Executive that need to know sentence properties.

Now on to the benchmarking! For a test set, I wanted some examples of simplified, but natural (i.e. not designed to be read by AIs) human text. So I bought children's books. I have two of the original Magic School Bus titles, and two of Disney's Tron Legacy tie-in picture books. These are all "early reader" books, but by the standards of my project they are still very challenging ... even here, the diversity and complexity of the sentences is staggering. So you might wonder why I didn't grab something even more entry-level. My reason is that books for even younger readers tend to rely too heavily on the pictures. Taken out of context, their sentences would be incomplete or not even interesting. And that won't work for Acuitas ... he's blind.

So instead I've got books that are well above his reading level, and early results from the Parser on these datasets are going to be dismal. That's okay. It gives me an end goal to work toward.

How does the test work? If you feed the Parser a sentence, such as "I deeply want to eat a pizza," as an output it produces a data structure like this:

{'subj': [{'ix': [0], 'token': 'i', 'mod': []}],
 'dobj': [{'ix': [3, 4, 5, 6],
           'token': {'subj': [{'ix': [], 'token': '<impl_rflx>', 'mod': []}],
                     'dobj': [{'ix': [6], 'token': 'pizza',
                               'mod': [{'ix': [5], 'token': 'a', 'mod': []}],
                               'ps': 'noun'}],
                     'verb': [{'ix': [4], 'token': 'eat', 'mod': []}],
                     'type': 'inf'},
           'mod': []}],
 'verb': [{'ix': [2], 'token': 'want',
           'mod': [{'ix': [1], 'token': 'deeply', 'mod': []}]}]}

Again, this is expressing the information you would need to diagram the sentence. It shows that the adverb "deeply" modifies the verb "want," that the infinitive phrase "to eat a pizza" functions as the main sentence's direct object, blah blah blah. To make a test set, I transcribe all the sentences from one of the books and create these diagram-structures for them. Then I run a script that inputs all the sentences to the Parser and compares its outputs with the diagram-structures I made. If the Parser's diagram-structure is an exact match for mine, it scores correct.

The Parser runs in a simulator/standalone mode for the test. This mode makes it independent of Acuitas' Executive and other main threads. The Parser still utilizes Acuitas' semantic database, but cannot edit it.

There are actually three possible score categories: "correct," "incorrect," and "unparsed." The "unparsed" category is for sentences which contain grammar that I already know the Parser simply doesn't support. (The most painful example: coordinating conjunctions. It can't parse sentences with "and" in them!) I don't bother trying to generate golden diagram-structures for these sentences, but I still have the test script shove them through the Parser to make sure they don't provoke a crash. This produces a fourth score category, "crashed," whose membership we hope is always ZERO. Sentences that have supported grammar but score "incorrect" are failing due to linguistic ambiguities or other quirks the Parser can't yet handle.
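The scoring procedure described above can be sketched as a short script. This is a reconstruction from the description, not the actual test harness; `parse()` stands in for the real Parser entry point.

```python
# Sketch of a scoring script along the lines described above (illustrative).

def score_test_set(sentences, golden, parse, unsupported):
    """Compare parser output against golden diagram-structures.

    sentences:   list of sentence strings
    golden:      dict mapping sentence -> expected diagram-structure
    parse:       callable(sentence) -> diagram-structure (may raise)
    unsupported: set of sentences with known-unsupported grammar
    """
    scores = {"correct": 0, "incorrect": 0, "unparsed": 0, "crashed": 0}
    for sentence in sentences:
        try:
            output = parse(sentence)
        except Exception:
            scores["crashed"] += 1   # we hope this stays at zero
            continue
        if sentence in unsupported:
            scores["unparsed"] += 1  # no golden structure to compare against
        elif output == golden[sentence]:
            scores["correct"] += 1   # exact structural match required
        else:
            scores["incorrect"] += 1
    return scores

# Toy demonstration with a fake one-word "parser":
fake_parse = lambda s: {"verb": s.split()[0]}
sentences = ["run fast", "jump high", "stop and go"]
golden = {"run fast": {"verb": "run"}, "jump high": {"verb": "hop"}}
result = score_test_set(sentences, golden, fake_parse,
                        unsupported={"stop and go"})
print(result)
# {'correct': 1, 'incorrect': 1, 'unparsed': 1, 'crashed': 0}
```

Note that unsupported sentences are still pushed through the parser so that crashes get counted, mirroring the process described in the post.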

Since the goal was to parse natural text, I tried to avoid grooming of the test sentences, with two exceptions. The Parser does not yet support quotations or abbreviations. So I expanded all the abbreviations and broke sentences that contained quotations into two. For example, 'So everyone was happy when Ms. Frizzle announced, "Today we start something new."' becomes 'So everyone was happy when Miz Frizzle announced.' and 'Today we start something new.'

It is also worth noting that my Magic School Bus test sets only contain the "main plot" text. I've left out the "science reports" and the side dialogue between the kids. Maybe I'll build test sets that contain these eventually, but for now it would be too much work.

On to the results!



So far I have fully completed just one test set, namely The Magic School Bus: Inside the Earth, consisting of 98 sentences. The Parser scores roughly one out of three on this one, with no crashes. It also parses the whole book in 0.71 seconds (averaged over 10 runs). That's probably not a stellar performance, but it's much faster than a human reading, and that's all I really want.

Again, dismal. But we'll see how this improves over the coming years!

I'm considering making the full results (test sentences + golden structures + parser output structures) available eventually, as proof of work, and would be interested in feedback on how best to format or display them. Those Python dictionaries are a little hard on the eyes. I don't have time to write a utility that converts them into visual diagrams, though.

Blog link: http://writerofminds.blogspot.com/2021/06/this-month-marks-culmination-of-major.html

infurl

  • Administrator
  • Eve
  • 1371
  • Humans will disappoint you.
    • Home Page
Re: Project Acuitas
« Reply #161 on: June 26, 2021, 03:41:01 am »
Quote
I'm considering making the full results (test sentences + golden structures + parser output structures) available eventually, as proof of work, and would be interested in feedback on how best to format or display them. Those Python dictionaries are a little hard on the eyes. I don't have time to write a utility that converts them into visual diagrams, though.

https://graphviz.org/

Quote
Graphviz is open source graph visualization software. Graph visualization is a way of representing structural information as diagrams of abstract graphs and networks.

Quote
The Graphviz layout programs take descriptions of graphs in a simple text language, and make diagrams in useful formats, such as images and SVG for web pages; PDF or Postscript for inclusion in other documents; or display in an interactive graph browser. Graphviz has many useful features for concrete diagrams, such as options for colors, fonts, tabular node layouts, line styles, hyperlinks, and custom shapes.
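A small converter along these lines might be all that's needed. The sketch below walks a nested diagram-structure like the one shown in Reply #160 and emits Graphviz DOT text, which `dot -Tpng` can render; the structure's shape (`token`, `mod`, nested phrase dicts) is assumed from that example and has not been run against the real Parser. Only one level of modifiers is drawn, for brevity.

```python
# Sketch: convert a parser diagram-structure to Graphviz DOT text.
# The input shape is assumed from the example in Reply #160.

def to_dot(structure):
    lines = ["digraph parse {", '  root [label="sentence"];']
    counter = [0]

    def walk(struct, parent):
        for role, entries in struct.items():
            if not isinstance(entries, list):
                continue  # skip scalar tags like 'type': 'inf'
            for entry in entries:
                node = "n{}".format(counter[0]); counter[0] += 1
                token = entry.get("token")
                if isinstance(token, dict):  # nested clause or phrase
                    lines.append('  {} [label="{} (phrase)"];'.format(node, role))
                    lines.append("  {} -> {};".format(parent, node))
                    walk(token, node)
                else:
                    lines.append('  {} [label="{}: {}"];'.format(node, role, token))
                    lines.append("  {} -> {};".format(parent, node))
                for mod in entry.get("mod", []):  # one level of modifiers only
                    mnode = "n{}".format(counter[0]); counter[0] += 1
                    lines.append('  {} [label="mod: {}"];'.format(mnode, mod["token"]))
                    lines.append("  {} -> {};".format(node, mnode))

    walk(structure, "root")
    lines.append("}")
    return "\n".join(lines)

# Demo on a fragment of the example parse from Reply #160:
parse = {'subj': [{'ix': [0], 'token': 'i', 'mod': []}],
         'verb': [{'ix': [2], 'token': 'want',
                   'mod': [{'ix': [1], 'token': 'deeply', 'mod': []}]}]}
dot = to_dot(parse)
print(dot)
```

Saving the output to a file and running e.g. `dot -Tpng parse.dot -o parse.png` would produce the visual diagram.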

infurl

  • Administrator
  • Eve
  • 1371
  • Humans will disappoint you.
    • Home Page
Re: Project Acuitas
« Reply #162 on: July 07, 2021, 01:56:41 am »
https://ivanceras.github.io/svgbob-editor/

Here's another diagramming tool that I just heard about. It looks a bit more labor intensive than graphviz and is frankly pretty weird, but it would be a lot of fun to mess around with. TL;DR It converts ASCII art to high quality vector graphics.

HS

  • Trusty Member
  • Millennium Man
  • 1177
Re: Project Acuitas
« Reply #163 on: July 07, 2021, 10:22:35 pm »
I know next to nothing about benchmarking and visual representation programs; Excel is as far as I've explored into that. But I like this new direction of getting Acuitas to read real-world books. So I've written a few things which are interesting to me, in case they are useful for you to think about.

Would discussing books on your level be an eventual goal for Acuitas? Could he eventually absorb such a novel, then have a discussion about the events and his reactions to them? Because if I were developing a conversational AI, that's one of the things I'd be looking forward to the most. It seems like it'd be a marvelous opportunity to create interesting conversations. But then again if you're the only one talking with him, I predict he'll slowly turn into the closest possible entity to a replica of your own mind, so it might be a bit like talking to yourself... I don't know if that would be fine with you, or if you have a plan to instill some more individuality at some point.

Also, (this might be fun to think about) using existing methods, could he eventually learn to understand something like Jabberwocky? How might that work? Would he have to guess at the correct placement of nonsense words in the semantic net? Could he use broad context or phonetics to help assign linguistic categories?

WriterOfMinds

  • Trusty Member
  • Replicant
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #164 on: July 09, 2021, 02:59:51 pm »
Quote
Would discussing books on your level be an eventual goal for Acuitas? Could he eventually absorb such a novel, then have a discussion about the events and his reactions to them?

That's part of the end goal, yes. Whether I'll ever get there remains to be seen ;)

Quote
But then again if you're the only one talking with him, I predict he'll slowly turn into the closest possible entity to a replica of your own mind, so it might be a bit like talking to yourself...

Once he gains more competency, I'll start looking for opportunities for him to talk to other people. I also do have some vague ideas about individual personality, but I have to get there in the development process first.

Quote
Also, (this might be fun to think about) using existing methods, could he eventually learn to understand something like Jabberwocky?

Since Acuitas has a young/sparse semantic net, he already encounters words he hasn't seen before fairly often. So the Parser and other tools already have some limited ability to, e.g. do part-of-speech tagging on a completely unfamiliar word. Inferring a more complete meaning from context is a topic that I'm aware of but haven't really touched on yet. Again, eventually I might get there.

One thing that's important for "Jabberwocky" in particular is the ability to recognize that a new word is a composite of previously known words, e.g. "galumph" = "gallop triumphantly." On a less fanciful level, an AI should be able to automatically tell that "sneakboots" (a coined word from The Search for WondLa) are a type of footwear. I haven't implemented anything for this yet.
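A naive version of that composite-word check could look like the sketch below. Nothing like this exists in Acuitas yet (per the post); the lexicon, the minimum part length, and the "rightmost part is the head" rule are all assumptions for illustration.

```python
# Naive sketch: check whether an unknown word is a concatenation of two
# known words, and inherit a category from the head (rightmost) part.
# Purely illustrative; not from Acuitas.

def split_compound(word, lexicon):
    """Return (left, right) if word = left + right with both parts known."""
    for i in range(2, len(word) - 1):   # require parts of length >= 2
        left, right = word[:i], word[i:]
        if left in lexicon and right in lexicon:
            return left, right
    return None

lexicon = {"sneak": "verb", "boot": "noun", "boots": "noun"}

parts = split_compound("sneakboots", lexicon)
print(parts)              # ('sneak', 'boots')
print(lexicon[parts[1]])  # 'noun' -- the head word's category
```

Real compounds would need more than this (e.g. "galumph" blends rather than concatenates, and English compounds aren't always head-final), but it covers the "sneakboots" case.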

 

