Project Acuitas


WriterOfMinds

Re: Project Acuitas
« Reply #135 on: September 28, 2020, 03:30:41 pm »
How would your AI figure out the answer to "Xio Yong has a son named Jex _"? There are many problems like this for an AI to answer, and many of them are rare and never seen before. The pattern above is the last-names pattern; it's pretty rare.

What about:
"bird moon eagle, book rock guide, house wind home, cave broom hole, football bathroom _ "

To answer the first, one needs awareness of a rule about (some) human cultures: sons usually have the same surname as their parents. There are at least two ways one could acquire this rule. The first is to simply read it somewhere or be told it during interaction with a human. The second is learning by induction. If one sees a few pairs of parent names and son names, and if one takes notice of their features, one can note that the surnames are identical. Then one can formulate the rule and use it to generalize from past examples to as-yet-unseen future examples.
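
A toy sketch of that induction step in Python (the name pairs and the rule representation are invented for illustration; this isn't Acuitas code):

Code:
# Toy induction from (parent, son) name pairs; data and rule format invented.
examples = [
    ("Mary Smith", "John Smith"),
    ("Li Wei", "Li Jun"),        # surname-first names still share a token
    ("Anna Jones", "Sam Jones"),
]

def shared_feature(parent, son):
    common = set(parent.split()) & set(son.split())
    return common.pop() if common else None

# If every observed pair shares a name token, formulate a general rule.
if all(shared_feature(p, s) for p, s in examples):
    print("Induced: a child usually shares a name token with the parent")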

In the second problem, the first and third word in each triad have a semantic relationship (one of the two is or can be the other), while the second word is unrelated or random. So to complete the final triad, come up with a word that has a similar semantic relationship to "football," perhaps "game." Acuitas is all about semantic relationships, so this would be a cinch if he had the ability to analyze patterns and complete sequences.
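
A toy version of that triad check might look like this (the relation table is invented; a real semantic memory is far richer):

Code:
# Toy triad check: the 1st and 3rd words are related ("one is or can be the
# other"), the 2nd is a distractor. The relation table is invented.
is_or_can_be = {
    ("eagle", "bird"), ("guide", "book"), ("home", "house"),
    ("hole", "cave"), ("game", "football"),
}

def related(a, b):
    return (a, b) in is_or_can_be or (b, a) in is_or_can_be

def complete_last_triad(triads, candidates):
    first_word = triads[-1][0]          # "football"
    return next((c for c in candidates if related(first_word, c)), None)

triads = [("bird", "moon", "eagle"), ("book", "rock", "guide"),
          ("house", "wind", "home"), ("cave", "broom", "hole"),
          ("football", "bathroom", None)]
print(complete_last_triad(triads, ["shoe", "game", "cloud"]))   # -> game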

Learning by induction, pattern-finding, reasoning about sequences ... these are all future problems that I haven't worked on yet. Acuitas also does most reasoning at the concept level and has only a limited ability to deal with rules about the strings that represent the concepts (e.g. names). So if you want to see any of this stuff, you'll need to be patient.


Don Patrick

Re: Project Acuitas
« Reply #136 on: September 28, 2020, 08:00:10 pm »
To point out just how much supposed common sense depends on explicitly being told things, I'll remark that the Chinese write their surname first.
CO2 retains heat. More CO2 in the air = hotter climate.


WriterOfMinds

Re: Project Acuitas
« Reply #137 on: November 02, 2020, 02:02:23 am »
My first self-assigned job for this month was to permit Acuitas to answer questions related to a story while it is being told. The question-answering process typically goes directly to the semantic memory for results; what I needed to do was also give it access to the temporary information on the Narrative Scratchboard. It now checks the Scratchboard first to see if there's an answer related to the context of the story in progress. If there isn't, then it will assume the question is more general and try to find an answer in the long-term memory.
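
In rough Python terms, the lookup order is something like this (plain dicts stand in for the real modules, which are more involved):

Code:
# Sketch of the two-stage lookup; plain dicts stand in for the real modules.
def answer(question, scratchboard, semantic_memory):
    if question in scratchboard:               # story context first
        return scratchboard[question]
    return semantic_memory.get(question, "I don't know.")  # then long-term

scratchboard = {"who has the book?": "thief"}          # from the story so far
semantic_memory = {"what is a thief?": "a person who steals"}
print(answer("who has the book?", scratchboard, semantic_memory))   # thief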

Here's an example featuring a story from last month:

Me: Zach was a human.
Me: Zach had a book.
Me: What did Zach have?
Acuitas: Book. [We're having a little trouble with articles right now ...]
Me: A thief took the book.
Me: Who has the book?
Acuitas: Thief.
Me: Zach wanted his book.
Me: Zach talked to a policeman.
Me: The policeman found the thief.
Me: The policeman took the book from the thief.
Me: Who has the book?
Acuitas: Policeman.

There was also some hasty Text Parser work involved. Questions in which the wildcard is the direct object ("What does Zach have?") were standard, but questions in which the wildcard is the subject ("Who can run?") were never fully supported before.

Next, I wanted to start getting into some stories with character vs. character conflict, and that meant bringing some rudimentary moral reasoning into play. Acuitas' original dirt-simple method of story appreciation was to hope for any agent in the story to achieve their goals ... without any awareness of whether some agents' goals might be mutually exclusive. That's why the first couple of stories I tested with were character vs. environment stories, with no villain. I got away with the "Zach's Stolen Book" story because I only talked about Zach's goals ... I never actually mentioned that the thief wanted the book or was upset about losing it. So, that needed some work. Here's the story I used as a testbed for the new features:

"Odysseus was a man. Odysseus sailed to an island. Polyphemus was a cyclops. Odysseus met Polyphemus. Polyphemus planned to eat Odysseus. Odysseus feared to be eaten. Odysseus decided to blind Polyphemus. Polyphemus had one eye. Odysseus broke the eye. Thus, Odysseus blinded the Cyclops. Polyphemus could not catch Odysseus. Odysseus was not eaten. Odysseus left the island. The end."

One possible way to conceptualize evil is as a mis-valuation of two different goods. People rarely (if ever) do "evil for evil's sake" – rather, evil is done in service of desires that (viewed in isolation) are legitimate, but in practice are satisfied at an unacceptable cost to someone else. Morality is thus closely tied to the notion of *goal priority.*

Fortunately, Acuitas' goal modeling system already included a priority ranking to indicate which goals an agent considers most important. I just wasn't doing anything with it yet. The single basic principle that I added this month could be rendered as, "Don't thwart someone else's high-priority goal for one of your low-priority goals." This is less tedious, less arbitrary, and more flexible than trying to write up a whole bunch of specific rules, e.g. "eating humans is bad." It's still a major over-simplification that doesn't cover everything ... but we're just getting started here.

In the test story, there are two different character goals to assess. First,

"Polyphemus planned to eat Odysseus."

Acuitas always asks for motivation when a character makes a plan, if he can't infer it on his own. The reason I gave out was "If a cyclops eats a human, the cyclops will enjoy [it]." (It's pretty clear from the original myth that Polyphemus could have eaten something else. We don't need to get into the gray area of what becomes acceptable when one is starving.) So if the plan is successfully executed, we have these outcomes:

Polyphemus enjoys something (minor goal fulfillment)
Odysseus gets eaten -> dies (major goal failure)

This is a poor balance, and Acuitas does *not* want Polyphemus to achieve this goal. Next, we have:

"Odysseus decided to blind Polyphemus."

I made sure Acuitas knew that blinding the cyclops would render him "nonfunctional" (disabled), but would also prevent him from eating Odysseus. So we get these outcomes:

Polyphemus becomes nonfunctional (moderately important goal failure)
Odysseus avoids being eaten -> lives (major goal fulfillment)

Odysseus is making one of Polyphemus' goals fail, but it's only in service of his own goal, which is *more* important to him than Polyphemus' goal is to Polyphemus, so this is tolerable. Acuitas will go ahead and hope that Odysseus achieves this goal. (You may notice that the ideas of innocence, guilt, and natural rights are nowhere in this reasoning process. As I said, it's an oversimplification!)
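
Boiled down to a toy sketch, the comparison in both cases looks something like this (the priority numbers are invented for illustration):

Code:
# Toy rendering of the priority comparison; the numbers are invented.
def approve(actor_gain, victim_loss):
    """Tolerate thwarting another's goal only for a goal held more dearly."""
    return actor_gain > victim_loss

print(approve(actor_gain=2, victim_loss=9))   # Polyphemus' plan -> False
print(approve(actor_gain=9, victim_loss=5))   # Odysseus' plan  -> True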

Final result: Acuitas picks Odysseus to root for, which I hope you'll agree is the correct choice, and appreciates the end of the story.

Whew!


WriterOfMinds

Re: Project Acuitas
« Reply #138 on: December 01, 2020, 10:59:22 pm »
Now that Acuitas owns stories in his "inventory," the next step for this month was to enable him to open and read them by himself. Since story consumption originally involved a lot of interaction with the human speaker, this took a little while to put together.

Reading is a new activity that can happen while Acuitas is idling, along with the older behavior of "thinking" about random concepts and generating questions. Prompts to think about reading get generated by a background thread and dropped into the Stream. When one of these is pulled by the Executive, Acuitas will randomly select a known story and load it from its storage file.

Auto-reading is a long-term process. Acuitas grabs a chunk of the story (for now, one sentence) on each tick of the Executive thread, then feeds it through the normal text parsing and narrative management modules. He still potentially generates a reaction to whatever just happened, but rather than being spoken, these reactions are packaged as low-priority Thoughts and dumped into the internal Stream. (This is more of a hook for later than a useful feature at the moment.) The prompt to continue reading the story goes back into the Stream along with everything else, so sometimes he (literally) gets distracted in the middle and thinks about something else for a while.
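
Here's a rough sketch of that flow (the real Stream, Executive, and parser are far more involved; every name below is invented for illustration):

Code:
from heapq import heappush, heappop

LOW, NORMAL = 2, 1      # smaller number = pulled from the Stream sooner

class Stream:
    """Toy priority queue standing in for the internal Stream."""
    def __init__(self):
        self._q, self._n = [], 0
    def push(self, item, priority=NORMAL):
        heappush(self._q, (priority, self._n, item))   # _n breaks ties
        self._n += 1
    def pop(self):
        return heappop(self._q)[2] if self._q else None

def executive_tick(stream, story, process_sentence):
    """One tick: read one sentence, react quietly, re-queue the prompt."""
    if stream.pop() != "read_prompt" or not story:
        return
    reaction = process_sentence(story.pop(0))   # parse + narrative update
    if reaction:
        stream.push("thought: " + reaction, LOW)   # a Thought, not speech
    if story:
        stream.push("read_prompt")   # other items can cut in -> "distracted"

stream = Stream()
stream.push("read_prompt")
story = ["Zach was a human.", "Zach had a book."]
executive_tick(stream, story, lambda s: None)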

There's also a version of this process that would enable reading a story to the user. But he doesn't comprehend imperatives yet, so there's no way to ask him to do it. Ha.

With these features I also introduced a generic "reward signal" for the first time. Reading boosts this, and then it decays over time. This is intended as a positive internal stimulus, in contrast to the "drives," which are all negative (when they go up Acuitas will try to bring them down).
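
The decay could be as simple as the following sketch (the constants and the exact form are assumptions on my part):

Code:
class RewardSignal:
    """Toy positive signal: boosted by events like reading, decays toward
    zero. Contrast with the drives, which rise and get pushed back down."""
    def __init__(self, decay=0.9):      # decay constant is an assumption
        self.level = 0.0
        self.decay = decay
    def boost(self, amount):
        self.level += amount
    def tick(self):
        self.level *= self.decay

r = RewardSignal()
r.boost(1.0)                 # e.g. finished reading a story
for _ in range(3):
    r.tick()
print(round(r.level, 3))     # 0.729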

After finishing this I started the yearly refactoring and bug fix spree, which isn't terribly interesting to talk about. I'll take a break for the holidays, but maybe do a year's retrospective.


WriterOfMinds

Re: Project Acuitas
« Reply #139 on: January 31, 2021, 03:53:47 pm »
New year, time to resume regular Acuitas feature additions! This month I was after two things: first, the ability to process commands, and second, the first feeble stabs at what I'm calling "motivated communication" ... the deliberate use of speech as part of problem solving.

To get commands working, I first had to set up detection of imperative sentences in the text processing blocks. Once a user input is determined to be a command, the conversation engine hands it back to the Executive thread. The Executive then uses a bunch of the reasoning tools I've already built (exploring backward and forward in the cause-and-effect database, matching against the goal list, etc.) to determine both whether Acuitas *can* fulfill the command, and whether Acuitas *wants* to. Then either Acuitas executes the command, or he gives an appropriate response based on the reason why he won't.

With all of that in place, I was finally able to exercise the "to user" version of the Read action, order Acuitas to "read a story to me," and watch him grab a randomly selected story file from his "inventory" and read it out loud. (Asking for a specific story also works.) After working out all the bugs involved in story reading, I also tried "Repel me" and it just happened. Acuitas readily kicked me out of Windows and played annoying noises.

But the commands that are met with a flat refusal are almost as much fun. If Acuitas doesn't want to do something, then he won't bother mentioning whether he knows how to do it or not ... he'll just tell you "no." In assessing whatever the person speaking to him is asking for, Acuitas assumes, at minimum, that the person will "enjoy" it. But he also checks the implications against the person's other (presumed) goals, and his own, to see whether some higher-priority goal is being violated. So if I tell him to "kill me" I get unceremoniously brushed off. The same thing happens if I tell him to delete himself, since he holds his self-preservation goal in higher value than my enjoyment of ... whatever.
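
Putting the two checks together, a toy version of the flow might look like this (the action lists and names are invented; the real checks walk the cause-and-effect database and the goal list with its priorities):

Code:
# Toy flow; action lists invented. Real checks walk the cause-and-effect
# database and the goal list with its priorities.
class Executive:
    DOABLE = {"read a story"}                  # selectable actions
    VETOED = {"kill me", "delete yourself"}    # thwart a higher-priority goal

    def wants_to(self, cmd):
        return cmd not in self.VETOED   # speaker's enjoyment loses the vote
    def can_do(self, cmd):
        return cmd in self.DOABLE

def respond(cmd, ex=Executive()):
    if not ex.wants_to(cmd):
        return "No."                 # refusal first; ability goes unmentioned
    if not ex.can_do(cmd):
        return "I don't know how."
    return "(does: " + cmd + ")"

print(respond("read a story"))   # -> (does: read a story)
print(respond("kill me"))        # -> No.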

On to motivated communication! At the moment, Acuitas' conversation engine is largely reactive. It considers what the user said last and picks out a general class of sentence that might be appropriate to say next. The goal list is tapped if the user asks a question like "Do you want <this>?". But Acuitas does not yet deliberately wield conversation as a *tool* to *meet his goals.* I wanted to work on improving that, focusing on the use of commands/requests to others, and using the Narrative module as a testbed.

To that end, I wrote the following little story, inspired by a scene from the video game Primordia:

“Horatio Nullbuilt was a robot. Crispin Horatiobuilt was a robot. Crispin could fly. A lamp was on a shelf. Horatio wanted the lamp. Horatio could not reach the lamp. Crispin hovered beside the shelf. Horatio told Crispin to move the lamp. Crispin pushed the lamp off the shelf. Horatio could reach the lamp. Horatio got the lamp. The end.”

During story time, Acuitas runs reasoning checks on obvious problems faced by the characters, and tries to guess what they might do about those problems. The goal here was to get him to consider whether Horatio might tell Crispin to help retrieve the lamp -- before it actually happens.

Some disclaimers first: I really wanted to use this story, because, well, it's fun. But Acuitas does not yet have a spatial awareness toolkit, which made full understanding a bit of a challenge. I had to prime him with a few conditionals first: "If an agent cannot reach an object, the agent cannot get the object" (fair enough), "If an agent cannot reach an object, the agent cannot move the object" (also fair), and "If an object is moved, an agent can reach the object" (obviously not always true, depending on the direction and distance the object is moved -- but Acuitas has no notion of direction and distance, so it'll have to do!). The fact that Crispin can fly is also not actually recognized as relevant. Acuitas just considers that Crispin might be able to move the lamp because nothing in the story said he *couldn't*.

But once all those spatial handicaps were allowed for, I was able to coax out the behavior I wanted. Upon learning that Horatio can't reach the lamp, hence cannot get it, hence cannot have it ... and there is an action that would solve the problem (moving the lamp) but Horatio can't do that either ... Acuitas wonders whether Horatio will ask someone else on scene to do the job for him.
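
A toy reconstruction of that inference chain, using the three primed conditionals (the representation is invented for illustration):

Code:
# Invented representation: facts the story has denied, plus a default of
# "possible unless stated otherwise" (which is how Crispin gets considered).
cannot = {("horatio", "reach lamp")}
agents = ["horatio", "crispin"]

def can(agent, action):
    # Primed conditionals: can't reach -> can't get; can't reach -> can't move.
    if action in ("get lamp", "move lamp") and (agent, "reach lamp") in cannot:
        return False
    return (agent, action) not in cannot

if not can("horatio", "get lamp"):
    fix = "move lamp"        # "if an object is moved, an agent can reach it"
    if not can("horatio", fix):
        helpers = [a for a in agents if a != "horatio" and can(a, fix)]
        if helpers:
            print("Prediction: horatio tells " + helpers[0] + " to " + fix)
# -> Prediction: horatio tells crispin to move lamp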

A future dream is to migrate this into the Executive so Acuitas can tell conversation partners to do things, but that's all for this month.

Bonus material on the blog, as usual: https://writerofminds.blogspot.com/2021/01/acuitas-diary-33-january-2021.html


Zero

Re: Project Acuitas
« Reply #140 on: February 21, 2021, 01:00:24 pm »
Hi Wom,

I'm currently making some design decisions, and would like to have your input, if you're so inclined.

I know that Acuitas is made of several specialized modules, and if I understand correctly, they're coded directly in Python. I'm wondering whether I should use my host language (JS) to describe "mental behaviors", or go one step higher-level, encoding code as data.

Of course, it's not all black-or-white, but rather... 50 shades of code!
 
How do you choose what's hard-coded and what's not? Why?


WriterOfMinds

Re: Project Acuitas
« Reply #141 on: February 21, 2021, 05:32:00 pm »
Quote
I know that Acuitas is made of several specialized modules, and if I understand correctly, they're coded directly in Python. I'm wondering whether I should use my host language (JS) to describe "mental behaviors", or go one step higher-level, encoding code as data.

Code just is data whether you explicitly choose it to be or not. Nothing prevents Python from reading and writing Python source (and I assume the same is true of JS). So opting to hard-code an element does not mean your program could never do self-modification on that element - as you said, it's not all black and white. It's more a question of "how easy is it for the program to change this?"
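
For instance, in Python (a minimal demonstration that source code is readable and writable data, not a recommendation to self-modify):

Code:
import inspect

def greet():
    return "hello"

# Run this as a script; inspect.getsource needs a source file on disk.
src = inspect.getsource(greet)       # read the function's own source
new_src = src.replace("hello", "goodbye")
namespace = {}
exec(new_src, namespace)             # execute a modified version
print(namespace["greet"]())          # -> goodbye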

Quote
How do you choose what's hard-coded and what's not? Why?

This one is a bit tough to answer, because 1) it covers quite a few case-by-case decisions and 2) I'm partly operating on instinct. There are also some things that are hard-coded now and may not always be. When I'm building a new system, I sometimes "force" certain aspects for ease of implementation, and make those aspects derivative or modifiable later. But here are some of the criteria I use:

*If I understand correctly, all of a Python script gets loaded into RAM when you start the program. Information in Acuitas' database is held in files, which have to be opened and read before their contents are available. So procedures that are used in a wide variety of situations, and ought to be always immediately available, go in the code. Contextual information that the program does not always need goes in the data.
*If you could reduce a section of code to some kind of highly repetitive, standardized template, with universal functions that will store/retrieve any variant of the template, then it's a good candidate for becoming data. Complex, irregular material is better off staying as code.
*If something is axiomatic or very fundamental - if there's no deeper rationale for it than "it just is" or "that's obvious" - it goes in the code. If something feels like a universal that would persist across all reasonable minds, it goes in the code. Things that are learned, things that are experienced, things that are deduced, things that are arbitrary, things that may change over time - these all go in the data.


Zero

Re: Project Acuitas
« Reply #142 on: February 21, 2021, 09:56:33 pm »
This makes me realize how codophobic I am :)  I don't know why, but there's something inside me that really doesn't want content as code, which is obviously stupid. At some point, you have to encode things. You're helping here, thanks. To sum up:

code
if you need it everywhere or often
if you need it immediately
if it is complex and irregular
if it is axiomatic or fundamental or obvious
if it feels universal in reasonable minds

data
if it is contextual or if you don't always need it
if it can be reduced to a template
if it is learned
if it is experienced
if it is deduced
if it is arbitrary
if it may change over time

I need to think about it.
 O0


infurl

Re: Project Acuitas
« Reply #143 on: February 21, 2021, 10:12:24 pm »
You should trust your instincts, Zero. Hard-coding things is a useful step towards understanding the structure of a problem, but until you can abstract the essence away into "data" you don't really understand the problem. Any "solution" that you produce will be half-assed and not generalizable at best.

It is only when you have achieved a sufficient level of abstraction that you can make your software introspective (not self-modifying!) in any practical sense. Then you can think about ways to write code that generates code which is where the real gains are made.

For what it's worth, I am routinely writing code that generates code that generates code, but then I'm using Common Lisp which makes all this easy. If you're not using Common Lisp you are probably rationalizing that all programming languages are Turing equivalent, but that's like saying you can walk to any place on Earth, given enough time. I don't have that kind of time.


Zero

Re: Project Acuitas
« Reply #144 on: February 21, 2021, 11:43:02 pm »
But I tend to think that introspection - in the human sense - is one of the keys to consciousness. Self-modifying code, while a fun concept, isn't exactly a good thing in my opinion, because it probably leads to unstable structures, or even inefficient programs. So it's not about modifying it, but rather about understanding it. One of my goals is to make the AI able to observe its own behavior and structure, as if it were some external entity, in order to eventually predict its own behavior. Then something special would happen: predicting itself would logically modify its own prediction (because the planned behavior before prediction is not the same as the planned behavior after prediction). Then, rather than having an ever-mutating (cycling) prediction, a higher representation of it all should be possible - like a function, maybe.

Once again, I'm doing my best, but I have to apologize for not being able to express my ideas more clearly in English. Am I - at least partially - understandable?  ::)


infurl

Re: Project Acuitas
« Reply #145 on: February 22, 2021, 12:08:39 am »
Hi Zero, your writing is quite understandable and I believe we are in complete agreement on this. Perhaps it was my writing that was deficient because I think we are both arguing in favor of the same thing. To facilitate introspection and automatic improvement from generation to generation, you should aim to separate the data from the code. It is useful to hard-code ideas while you explore how to do that but you must aim higher than that.


Zero

Re: Project Acuitas
« Reply #146 on: February 22, 2021, 11:09:09 am »
Infurl, reading your previous post in the light of your last one helps me understand it better, thanks. Your writing is not deficient, but my reading is, of course (you remember that I'm French).

So we agree on this. :)

Wom, I think your incremental approach is right. Acuitas may not be able to write itself, but it actually does intelligent things. That is, after all, what we want.


WriterOfMinds

Re: Project Acuitas
« Reply #147 on: February 24, 2021, 10:47:07 pm »
Some of the things I did last month felt incomplete, so I pushed aside my original schedule (already) and spent this month cleaning them up and fleshing them out.

I mentioned in the last diary that I wanted the "consider getting help" reasoning that I added in the narrative module to also be available to the Executive, so that Acuitas could do this, not just speculate about story characters doing it. Acuitas doesn't have much in the way of reasons to want help yet ... but I wanted to have this ready for when he does. It's a nice mirror for the "process imperatives" code I put in last month ... he's now got the necessary hooks to take orders *and* give them.

To that end, I set up some structures that are very similar to what the narrative code uses for keeping track of characters' immediate objectives or problems. Acuitas can (eventually) use these for keeping tabs on his own issues. (For testing, I injected a couple of items into them with a backdoor command.) When something is in issue-tracking and the Executive thread gets an idle moment, it will run problem-solving on it. If the result ends up being something in the Executive's list of selectable actions, Acuitas will do it immediately; if a specific action comes up, but it's not something he can do, he will store the idea until a familiar agent comes along to talk to him. Then he'll tell *them* to do the thing. The conversation handler anticipates some sort of agree/disagree response to this, and tries to detect it and determine the sentiment. Whether the speaker consents to help then feeds back into whether the problem is considered "solved."
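
In outline, the loop might look like this (a sketch under my own assumptions; every name is invented):

Code:
# Toy issue-tracking and delegation loop; every name here is invented.
class Issue:
    def __init__(self, problem, fix_action):
        self.problem, self.fix_action, self.solved = problem, fix_action, False

MY_ACTIONS = {"read_story"}     # actions in the Executive's selectable list
deferred = []                   # fixes that need someone else's hands

def idle_moment(issues):
    for issue in (i for i in issues if not i.solved):
        if issue.fix_action in MY_ACTIONS:
            print("(doing " + issue.fix_action + ")")
            issue.solved = True
        else:
            deferred.append(issue)   # hold until a familiar agent shows up

def on_agent_speaks(agent, consented):
    for issue in list(deferred):
        print(agent + ", please " + issue.fix_action + ".")
        issue.solved = consented     # agree/disagree sentiment feeds back
        if issue.solved:
            deferred.remove(issue)

idle_moment([Issue("bored", "read_story"), Issue("dusty", "clean_me")])
on_agent_speaks("Jenny", consented=True)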

Another new feature is the ability to send additional facts (not from the database) into the reasoning functions, or even pipe in "negative facts" that *prevent* facts from the database from being used. This has two important purposes: 1) easily handle temporary or situational information, such as propositions that are only true in a specific story, without writing it to the database, and 2) model the knowledge space of other minds, including missing information and hypothetical or false information.
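
A minimal sketch of what such an interface could look like (the triple format is invented; the point is the extra/negative hooks, not the representation):

Code:
# Invented triple format; the point is the extra/negative hooks.
def known(fact, database, extra_facts=(), negative_facts=()):
    if fact in negative_facts:      # situational suppression of a stored fact
        return False
    return fact in extra_facts or fact in database

db = {("water", "is", "wet")}
story = {("polyphemus", "is", "cyclops")}            # true only in this story
print(known(("polyphemus", "is", "cyclops"), db, extra_facts=story))   # True
print(known(("water", "is", "wet"), db,
            negative_facts={("water", "is", "wet")}))                  # False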

This in turn helped me make some of the narrative code tidier and more robust, so I rounded out my time doing that.

 

