Poll

What Grammar Thing should I work on next?

Gerunds & Participles: 1 (20%)
Indirect Objects: 0 (0%)
Adjective Clauses: 1 (20%)
The many uses of "that": 3 (60%)
A: 0 (0%)
B: 0 (0%)

Total Members Voted: 5

Voting closed: February 26, 2022, 03:17:15 am

Project Acuitas

*

WriterOfMinds

Re: Project Acuitas
« Reply #135 on: September 28, 2020, 03:30:41 pm »
Quote
How would your AI figure out the answer to "Xio Yong has a son named Jex _"? There are many problems for an AI to answer, and many of them are rare and never seen before. The pattern above is the "last names" pattern; it's pretty rare.

What about:
"bird moon eagle, book rock guide, house wind home, cave broom hole, football bathroom _ "

To answer the first, one needs awareness of a rule about (some) human cultures: sons usually have the same surname as their parents. There are at least two ways one could acquire this rule. The first is to simply read it somewhere or be told it during interaction with a human. The second is learning by induction. If one sees a few pairs of parent names and son names, and if one takes notice of their features, one can note that the surnames are identical. Then one can formulate the rule and use it to generalize from past examples to as-yet-unseen future examples.
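For illustration only, here's a minimal sketch of that induction step in throwaway Python (the example names are invented, and this is not Acuitas code): notice a feature shared across a few parent/son pairs, turn it into a rule, and apply it to the unseen case.

```python
# Toy induction over (parent, son) name pairs.
examples = [
    ("Mary Smith", "John Smith"),
    ("Li Wei Chen", "Bo Chen"),
    ("Anna Kowalski", "Piotr Kowalski"),
]

def surname(name):
    return name.split()[-1]

# Induction step: is the "same surname" feature present in every observed pair?
rule_holds = all(surname(parent) == surname(son) for parent, son in examples)

def predict_sons_surname(parent_name):
    """Generalize from the observed examples to an as-yet-unseen case."""
    return surname(parent_name) if rule_holds else None

print(predict_sons_surname("Xio Yong"))   # -> Yong
```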

In the second problem, the first and third word in each triad have a semantic relationship (one of the two is or can be the other), while the second word is unrelated or random. So to complete the final triad, come up with a word that has a similar semantic relationship to "football," perhaps "game." Acuitas is all about semantic relationships, so this would be a cinch if he had the ability to analyze patterns and complete sequences.
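And a correspondingly tiny sketch of the second problem, with a handful of hand-entered "is / can be" relations standing in for a real semantic memory (the table contents are invented):

```python
# Pairs known to be related by "one is, or can be, the other".
related = {
    frozenset({"bird", "eagle"}),
    frozenset({"book", "guide"}),
    frozenset({"house", "home"}),
    frozenset({"cave", "hole"}),
    frozenset({"football", "game"}),
}

def complete_triad(first_word, candidates):
    """Pick the candidate that has the same kind of semantic link to the first word."""
    for word in candidates:
        if frozenset({first_word, word}) in related:
            return word
    return None

print(complete_triad("football", ["bathroom", "moon", "game"]))  # -> game
```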

Learning by induction, pattern-finding, reasoning about sequences ... these are all future problems that I haven't worked on yet. Acuitas also does most reasoning at the concept level and has only a limited ability to deal with rules about the strings that represent the concepts (e.g. names). So if you want to see any of this stuff, you'll need to be patient.

*

Don Patrick

Re: Project Acuitas
« Reply #136 on: September 28, 2020, 08:00:10 pm »
To point out just how much supposed common sense depends on explicitly being told things, I'll remark that the Chinese write their surname first.
CO2 retains heat. More CO2 in the air = hotter climate.

*

WriterOfMinds

Re: Project Acuitas
« Reply #137 on: November 02, 2020, 02:02:23 am »
My first self-assigned job for this month was to permit Acuitas to answer questions related to a story while it is being told. The question-answering process typically goes directly to the semantic memory for results; what I needed to do was also give it access to the temporary information on the Narrative Scratchboard. It now checks the Scratchboard first to see if there's an answer related to the context of the story in progress. If there isn't, then it will assume the question is more general and try to find an answer in the long-term memory.
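In sketch form, the lookup order is just "story context first, long-term memory second" (throwaway Python with invented names, not the actual modules):

```python
class FactStore:
    """Minimal stand-in for either the Narrative Scratchboard or the semantic memory."""
    def __init__(self, facts):
        self.facts = dict(facts)            # e.g. {("zach", "have"): "book"}
    def lookup(self, key):
        return self.facts.get(key)

def answer_question(key, scratchboard, semantic_memory):
    """Check the story in progress first, then fall back to general knowledge."""
    if scratchboard is not None:
        answer = scratchboard.lookup(key)
        if answer is not None:
            return answer
    return semantic_memory.lookup(key)

memory = FactStore({("cat", "be"): "animal"})
story  = FactStore({("zach", "have"): "book"})
print(answer_question(("zach", "have"), story, memory))   # -> book (from the story)
print(answer_question(("cat", "be"), story, memory))      # -> animal (from long-term memory)
```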

Here's an example featuring a story from last month:

Me: Zach was a human.
Me: Zach had a book.
Me: What did Zach have?
Acuitas: Book. [We're having a little trouble with articles right now ...]
Me: A thief took the book.
Me: Who has the book?
Acuitas: Thief.
Me: Zach wanted his book.
Me: Zach talked to a policeman.
Me: The policeman found the thief.
Me: The policeman took the book from the thief.
Me: Who has the book?
Acuitas: Policeman.

There was also some hasty Text Parser work involved. Questions in which the wildcard is the direct object ("What does Zach have?") were standard, but questions in which the wildcard is the subject ("Who can run?") were never fully supported before.

Next, I wanted to start getting into some stories with character vs. character conflict, and that meant bringing some rudimentary moral reasoning into play. Acuitas' original dirt-simple method of story appreciation was to hope for any agent in the story to achieve their goals ... without any awareness of whether some agents' goals might be mutually exclusive. That's why the first couple of stories I tested with were character vs. environment stories, with no villain. I got away with the "Zach's Stolen Book" story because I only talked about Zach's goals ... I never actually mentioned that the thief wanted the book or was upset about losing it. So, that needed some work. Here's the story I used as a testbed for the new features:

"Odysseus was a man. Odysseus sailed to an island. Polyphemus was a cyclops. Odysseus met Polyphemus. Polyphemus planned to eat Odysseus. Odysseus feared to be eaten. Odysseus decided to blind Polyphemus. Polyphemus had one eye. Odysseus broke the eye. Thus, Odysseus blinded the Cyclops. Polyphemus could not catch Odysseus. Odysseus was not eaten. Odysseus left the island. The end."

One possible way to conceptualize evil is as a mis-valuation of two different goods. People rarely (if ever) do "evil for evil's sake" – rather, evil is done in service of desires that (viewed in isolation) are legitimate, but in practice are satisfied at an unacceptable cost to someone else. Morality is thus closely tied to the notion of *goal priority.*

Fortunately, Acuitas' goal modeling system already included a priority ranking to indicate which goals an agent considers most important. I just wasn't doing anything with it yet. The single basic principle that I added this month could be rendered as, "Don't thwart someone else's high-priority goal for one of your low-priority goals." This is less tedious, less arbitrary, and more flexible than trying to write up a whole bunch of specific rules, e.g. "eating humans is bad." It's still a major over-simplification that doesn't cover everything ... but we're just getting started here.
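Rendered as code, the principle is almost embarrassingly small (a sketch with made-up numbers, not the real goal model; a bigger number means the goal matters more to whoever holds it):

```python
def acceptable(actor_gain, victim_loss):
    """ "Don't thwart someone else's high-priority goal for one of your
    low-priority goals": weigh how much the fulfilled goal matters to the actor
    against how much the thwarted goal matters to its own holder."""
    return actor_gain >= victim_loss

print(acceptable(actor_gain=2, victim_loss=9))   # minor comfort vs. survival: not okay
print(acceptable(actor_gain=9, victim_loss=6))   # survival vs. a serious but lesser loss: tolerable
```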

In the test story, there are two different character goals to assess. First,

"Polyphemus planned to eat Odysseus."

Acuitas always asks for motivation when a character makes a plan, if he can't infer it on his own. The reason I gave was "If a cyclops eats a human, the cyclops will enjoy [it]." (It's pretty clear from the original myth that Polyphemus could have eaten something else. We don't need to get into the gray area of what becomes acceptable when one is starving.) So if the plan is successfully executed, we have these outcomes:

Polyphemus enjoys something (minor goal fulfillment)
Odysseus gets eaten -> dies (major goal failure)

This is a poor balance, and Acuitas does *not* want Polyphemus to achieve this goal. Next, we have:

"Odysseus decided to blind Polyphemus."

I made sure Acuitas knew that blinding the cyclops would render him "nonfunctional" (disabled), but would also prevent him from eating Odysseus. So we get these outcomes:

Polyphemus becomes nonfunctional (moderately important goal failure)
Odysseus avoids being eaten -> lives (major goal fulfillment)

Odysseus is making one of Polyphemus' goals fail, but it's only in service of his own goal, which is *more* important to him than Polyphemus' goal is to Polyphemus, so this is tolerable. Acuitas will go ahead and hope that Odysseus achieves this goal. (You may notice that the ideas of innocence, guilt, and natural rights are nowhere in this reasoning process. As I said, it's an oversimplification!)

Final result: Acuitas picks Odysseus to root for, which I hope you'll agree is the correct choice, and appreciates the end of the story.

Whew!

*

WriterOfMinds

Re: Project Acuitas
« Reply #138 on: December 01, 2020, 10:59:22 pm »
Now that Acuitas has stories in his "inventory," the next step for this month was to enable him to open and read them by himself. Since story consumption originally involved a lot of interaction with the human speaker, this took a little while to put together.

Reading is a new activity that can happen while Acuitas is idling, along with the older behavior of "thinking" about random concepts and generating questions. Prompts to think about reading get generated by a background thread and dropped into the Stream. When one of these is pulled by the Executive, Acuitas will randomly select a known story and load it from its storage file.

Auto-reading is a long-term process. Acuitas will grab a chunk of the story (for now, one sentence) on each tick of the Executive thread, then feed it through the normal text parsing and narrative management modules. He still potentially generates reactions to whatever just happened, but rather than being spoken, they are packaged as low-priority Thoughts and dumped into the internal Stream. (This is more of a hook for later than a useful feature at the moment.) The prompt to continue reading the story goes back into the Stream along with everything else, so sometimes he (literally) gets distracted in the middle and thinks about something else for a brief while.
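One reading tick, in sketch form (throwaway Python with stand-in parse/narrative functions, not the actual modules):

```python
import queue

stream = queue.Queue()          # stand-in for the internal Stream of Thoughts

def parse(sentence):            # stand-in for the Text Parser
    return sentence.split()

def narrative_update(parsed):   # stand-in for the Narrative module's reaction
    return "interesting" if "wolf" in parsed else None

def reading_tick(sentences, position):
    """Consume one sentence of the current story per Executive tick."""
    if position >= len(sentences):
        return position                        # story finished
    reaction = narrative_update(parse(sentences[position]))
    if reaction is not None:
        # Not spoken aloud: just a low-priority Thought dropped into the Stream.
        stream.put(("thought", reaction, "low"))
    # The prompt to keep reading goes back into the Stream with everything else,
    # so other Thoughts can interleave (hence the occasional "distraction").
    stream.put(("read_more", position + 1))
    return position + 1

pos = 0
story = ["A wolf saw a sheep.", "The sheep ran away.", "The end."]
while pos < len(story):
    pos = reading_tick(story, pos)
```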

There's also a version of this process that would enable reading a story to the user. But he doesn't comprehend imperatives yet, so there's no way to ask him to do it. Ha.

With these features I also introduced a generic "reward signal" for the first time. Reading boosts this, and then it decays over time. This is intended as a positive internal stimulus, in contrast to the "drives," which are all negative (when they go up Acuitas will try to bring them down).
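The reward signal itself is about as simple as it sounds: a number that gets bumped by nice events and bleeds away over time. A sketch, with made-up constants:

```python
class RewardSignal:
    """Positive counterpart to the drives: boosted by pleasant events, decays toward zero."""
    def __init__(self, decay=0.95):
        self.level = 0.0
        self.decay = decay
    def boost(self, amount):     # e.g. called when a story gets read
        self.level += amount
    def tick(self):              # called once per time step
        self.level *= self.decay

reward = RewardSignal()
reward.boost(1.0)
for _ in range(10):
    reward.tick()
print(round(reward.level, 3))   # ~0.599 after ten ticks of decay
```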

After finishing this I started the yearly refactoring and bug fix spree, which isn't terribly interesting to talk about. I'll take a break for the holidays, but maybe do a year's retrospective.

*

WriterOfMinds

Re: Project Acuitas
« Reply #139 on: January 31, 2021, 03:53:47 pm »
New year, time to resume regular Acuitas feature additions! This month I was after two things: first, the ability to process commands, and second, the first feeble stabs at what I'm calling "motivated communication" ... the deliberate use of speech as part of problem solving.

To get commands working, I first had to set up detection of imperative sentences in the text processing blocks. Once a user input is determined to be a command, the conversation engine hands it back to the Executive thread. The Executive then uses a bunch of the reasoning tools I've already built (exploring backward and forward in the cause-and-effect database, matching against the goal list, etc.) to determine both whether Acuitas *can* fulfill the command, and whether Acuitas *wants* to. Then either Acuitas executes the command, or he gives an appropriate response based on the reason why he won't.
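Schematically, the command path looks something like this (a sketch; the "can" and "want" checks stand in for the cause-and-effect and goal-list reasoning described above, and all the names are invented):

```python
def handle_command(command, abilities, goal_check):
    """Decide whether to obey a parsed imperative.

    `abilities` maps verbs to actions Acuitas can actually execute;
    `goal_check(command)` returns None if no goal objects, else the reason to refuse."""
    objection = goal_check(command)              # does he *want* to?
    if objection is not None:
        return "No. " + objection
    action = abilities.get(command["verb"])      # *can* he?
    if action is None:
        return "I don't know how to do that."
    action(command)                              # execute the command
    return "Okay."

# Hypothetical usage:
def read_story(cmd):
    print("(reads a story aloud)")

def goal_check(cmd):
    if cmd["verb"] in ("kill", "delete"):
        return "That would thwart a higher-priority goal."
    return None

print(handle_command({"verb": "read", "object": "story"}, {"read": read_story}, goal_check))
print(handle_command({"verb": "delete", "object": "self"}, {"read": read_story}, goal_check))
```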

With all of that in place, I was finally able to exercise the "to user" version of the Read action, order Acuitas to "read a story to me," and watch him grab a randomly selected story file from his "inventory" and read it out loud. (Asking for a specific story also works.) After working out all the bugs involved in story reading, I also tried "Repel me," and it just worked: Acuitas readily kicked me out of Windows and played annoying noises.

But the commands that are met with a flat refusal are almost as much fun. If Acuitas doesn't want to do something, then he won't bother mentioning whether he knows how to do it or not ... he'll just tell you "no." In assessing whatever the person speaking to him is asking for, Acuitas assumes, at minimum, that the person will "enjoy" it. But he also checks the implications against the person's other (presumed) goals, and his own, to see whether some higher-priority goal is being violated. So if I tell him to "kill me" I get unceremoniously brushed off. The same thing happens if I tell him to delete himself, since he holds his self-preservation goal in higher value than my enjoyment of ... whatever.

On to motivated communication! At the moment, Acuitas' conversation engine is largely reactive. It considers what the user said last, and picks out a general class of sentence that might be appropriate to say next. The goal list is tapped if the user asks a question like "Do you want <this>?". However -- at the moment -- Acuitas does not deliberately wield conversation as a *tool* to *meet his goals.* I wanted to work on improving that, focusing on the use of commands/requests to others, and using the Narrative module as a testbed.

To that end, I wrote the following little story, inspired by a scene from the video game Primordia:

“Horatio Nullbuilt was a robot. Crispin Horatiobuilt was a robot. Crispin could fly. A lamp was on a shelf. Horatio wanted the lamp. Horatio could not reach the lamp. Crispin hovered beside the shelf. Horatio told Crispin to move the lamp. Crispin pushed the lamp off the shelf. Horatio could reach the lamp. Horatio got the lamp. The end.”

During story time, Acuitas runs reasoning checks on obvious problems faced by the characters, and tries to guess what they might do about those problems. The goal here was to get him to consider whether Horatio might tell Crispin to help retrieve the lamp -- before it actually happens.

Some disclaimers first: I really wanted to use this story, because, well, it's fun. But Acuitas does not yet have a spatial awareness toolkit, which made full understanding a bit of a challenge. I had to prime him with a few conditionals first: "If an agent cannot reach an object, the agent cannot get the object" (fair enough), "If an agent cannot reach an object, the agent cannot move the object" (also fair), and "If an object is moved, an agent can reach the object" (obviously not always true, depending on the direction and distance the object is moved -- but Acuitas has no notion of direction and distance, so it'll have to do!). The fact that Crispin can fly is also not actually recognized as relevant. Acuitas just considers that Crispin might be able to move the lamp because nothing in the story said he *couldn't*.

But once all those spatial handicaps were allowed for, I was able to coax out the behavior I wanted. Upon learning that Horatio can't reach the lamp, hence cannot get it, hence cannot have it ... and there is an action that would solve the problem (moving the lamp) but Horatio can't do that either ... Acuitas wonders whether Horatio will ask someone else on scene to do the job for him.
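A rough sketch of that chain of reasoning, with the primed conditionals baked into a couple of tiny functions (purely illustrative; Acuitas' actual fact format and problem-solving code are richer than this):

```python
# Facts from the story, in a simplified triple-ish form.
facts = {
    ("horatio", "can_reach", "lamp"): False,
    ("crispin", "can_reach", "lamp"): True,     # hovering beside the shelf
}
agents = ["horatio", "crispin"]

def can_do(agent, obj):
    """Primed rule: an agent can get/move an object only if it can reach it.
    Unknown cases are not ruled out (nothing in the story said otherwise)."""
    return facts.get((agent, "can_reach", obj), True)

def plan_get(agent, obj):
    if can_do(agent, obj):
        return agent + " gets the " + obj
    # Primed rule: if the object is moved, the agent can reach it.
    if can_do(agent, obj):
        return agent + " moves the " + obj + ", then gets it"
    # Can't do the fixing action either: maybe someone else on scene can.
    helpers = [a for a in agents if a != agent and can_do(a, obj)]
    if helpers:
        return agent + " may tell " + helpers[0] + " to move the " + obj
    return "no plan found"

print(plan_get("horatio", "lamp"))   # -> horatio may tell crispin to move the lamp
```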

A future dream is to migrate this into the Executive so Acuitas can tell conversation partners to do things, but that's all for this month.

Bonus material on the blog, as usual: https://writerofminds.blogspot.com/2021/01/acuitas-diary-33-january-2021.html

*

Zero

Re: Project Acuitas
« Reply #140 on: February 21, 2021, 01:00:24 pm »
Hi Wom,

I'm currently making some design decisions, and would like to have your input, if you're so inclined.

I know that Acuitas is made of several specialized modules, and if I understand correctly, they're coded directly in Python. I'm wondering whether I should use my host language (JS) to describe "mental behaviors", or go one step higher-level, encoding code as data.

Of course, it's not all black-or-white, but rather... 50 shades of code!
 
How do you choose what's hard-coded and what's not? Why?

*

WriterOfMinds

Re: Project Acuitas
« Reply #141 on: February 21, 2021, 05:32:00 pm »
Quote
I know that Acuitas is made of several specialized modules, and if I understand correctly, they're coded directly in Python. I'm wondering whether I should use my host language (JS) to describe "mental behaviors", or go one step higher-level, encoding code as data.

Code just is data whether you explicitly choose it to be or not. Nothing prevents Python from reading and writing Python source (and I assume the same is true of JS). So opting to hard-code an element does not mean your program could never do self-modification on that element - as you said, it's not all black and white. It's more a question of "how easy is it for the program to change this?"
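For example, it's perfectly ordinary for a Python program to read its own source as data:

```python
import inspect

def greet(name):
    return "Hello, " + name

# The function's own source text, as an ordinary string.
print(inspect.getsource(greet))
# From here it could be analyzed, rewritten, and re-executed with exec() --
# so the hard-coded/data split is about convenience, not possibility.
```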

Quote
How do you choose what's hard-coded and what's not? Why?

This one is a bit tough to answer, because 1) it covers quite a few case-by-case decisions and 2) I'm partly operating on instinct. There are also some things that are hard-coded now and may not always be. When I'm building a new system, I sometimes "force" certain aspects for ease of implementation, and make those aspects derivative or modifiable later. But here are some of the criteria I use:

*If I understand correctly, all of a Python script gets loaded into RAM when you start the program. Information in Acuitas' database is held in files, which have to be opened and read before their contents are available. So procedures that are used in a wide variety of situations, and ought to be always immediately available, go in the code. Contextual information that the program does not always need goes in the data.
*If you could reduce a section of code to some kind of highly repetitive, standardized template, with universal functions that will store/retrieve any variant of the template, then it's a good candidate for becoming data. Complex, irregular material is better off staying as code.
*If something is axiomatic or very fundamental - if there's no deeper rationale for it than "it just is" or "that's obvious" - it goes in the code. If something feels like a universal that would persist across all reasonable minds, it goes in the code. Things that are learned, things that are experienced, things that are deduced, things that are arbitrary, things that may change over time - these all go in the data.

*

Zero

Re: Project Acuitas
« Reply #142 on: February 21, 2021, 09:56:33 pm »
This makes me realize how codophobic I am :)  I don't know why; there's something inside me that really doesn't want content as code, which is obviously stupid. At some point, you have to encode things. You're helping here, thanks. To sum up:

code
if you need it everywhere or often
if you need it immediately
if it is complex and irregular
if it is axiomatic or fundamental or obvious
if it feels universal in reasonable minds

data
if it is contextual or if you don't always need it
if it can be reduced to a template
if it is learned
if it is experienced
if it is deduced
if it is arbitrary
if it may change over time

- I need to think about it.
 O0

*

infurl

Re: Project Acuitas
« Reply #143 on: February 21, 2021, 10:12:24 pm »
You should trust your instincts, Zero. Hard-coding things is a useful step towards understanding the structure of a problem, but until you can abstract away the essence into "data" you don't really understand the problem. Any "solution" that you produce will be half-assed and not generalizable at best.

It is only when you have achieved a sufficient level of abstraction that you can make your software introspective (not self-modifying!) in any practical sense. Then you can think about ways to write code that generates code which is where the real gains are made.

For what it's worth, I am routinely writing code that generates code that generates code, but then I'm using Common Lisp which makes all this easy. If you're not using Common Lisp you are probably rationalizing that all programming languages are Turing equivalent, but that's like saying you can walk to any place on Earth, given enough time. I don't have that kind of time.

*

Zero

Re: Project Acuitas
« Reply #144 on: February 21, 2021, 11:43:02 pm »
But I tend to think that introspection - in the human sense - is one of the keys to consciousness. Self-modifying code, while being a fun concept, isn't exactly a good thing in my opinion, because it probably leads to unstable structures, or even inefficient programs. So it's not about modifying it, but rather about understanding it. One of my goals is to make the AI able to observe its own behavior and structure, as if it were some external entity, in order to eventually predict its own behavior. Then something special would happen: predicting itself would logically modify its own prediction (because the planned behavior before prediction is not the same as the planned behavior after prediction). Then, rather than having an ever mutating (cycling) prediction, a higher representation of it all should be possible, like a function maybe.

Once again, I'm doing my best but I have to apologize for not being able to express my ideas in a better English, and clearly. Am I - at least partially - understandable?  ::)

*

infurl

Re: Project Acuitas
« Reply #145 on: February 22, 2021, 12:08:39 am »
Hi Zero, your writing is quite understandable and I believe we are in complete agreement on this. Perhaps it was my writing that was deficient because I think we are both arguing in favor of the same thing. To facilitate introspection and automatic improvement from generation to generation, you should aim to separate the data from the code. It is useful to hard-code ideas while you explore how to do that but you must aim higher than that.

*

Zero

Re: Project Acuitas
« Reply #146 on: February 22, 2021, 11:09:09 am »
Infurl, reading your previous post in the light of your last one helps me understand it better, thanks. Your writing is not deficient, but my reading is, of course (you remember that I'm French).

So we agree on this. :)

Wom, I think your incremental approach is right. Acuitas may not be able to write itself, but it actually does intelligent things. That is, after all, what we want.

*

WriterOfMinds

Re: Project Acuitas
« Reply #147 on: February 24, 2021, 10:47:07 pm »
Some of the things I did last month felt incomplete, so I pushed aside my original schedule (already) and spent this month cleaning them up and fleshing them out.

I mentioned in the last diary that I wanted the "consider getting help" reasoning that I added in the narrative module to also be available to the Executive, so that Acuitas could do this, not just speculate about story characters doing it. Acuitas doesn't have much in the way of reasons to want help yet ... but I wanted to have this ready for when he does. It's a nice mirror for the "process imperatives" code I put in last month ... he's now got the necessary hooks to take orders *and* give them.

To that end, I set up some structures that are very similar to what the narrative code uses for keeping track of characters' immediate objectives or problems. Acuitas can (eventually) use these for keeping tabs on his own issues. (For testing, I injected a couple of items into them with a backdoor command.) When something is in issue-tracking and the Executive thread gets an idle moment, it will run problem-solving on it. If the result ends up being something in the Executive's list of selectable actions, Acuitas will do it immediately; if a specific action comes up, but it's not something he can do, he will store the idea until a familiar agent comes along to talk to him. Then he'll tell *them* to do the thing. The conversation handler anticipates some sort of agree/disagree response to this, and tries to detect it and determine the sentiment. Whether the speaker consents to help then feeds back into whether the problem is considered "solved."
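In outline, the new behavior amounts to something like this (a sketch with invented names, not the actual Executive code):

```python
class IssueTracker:
    """Mirror of the narrative module's problem tracking, for Acuitas' own issues."""
    def __init__(self, own_actions):
        self.own_actions = own_actions     # actions the Executive can run itself
        self.issues = []                   # open problems/objectives
        self.delegation_queue = []         # actions to ask a familiar agent to do

    def idle_moment(self, solve):
        """`solve(issue)` returns the action that would fix the issue, or None."""
        for issue in list(self.issues):
            action = solve(issue)
            if action is None:
                continue
            self.issues.remove(issue)
            if action in self.own_actions:
                self.own_actions[action]()             # something he can do immediately
            else:
                self.delegation_queue.append(action)   # hold it for a familiar agent

    def on_agent_arrival(self, agent, ask):
        """Tell the arriving agent to do the stored actions; `ask` returns consent."""
        for action in list(self.delegation_queue):
            if ask(agent, action):                     # consent -> count it as solved
                self.delegation_queue.remove(action)

# Hypothetical usage:
tracker = IssueTracker(own_actions={"read": lambda: print("(reads)")})
tracker.issues = ["boredom", "messy memory"]
tracker.idle_moment(solve=lambda issue: {"boredom": "read", "messy memory": "tidy files"}[issue])
tracker.on_agent_arrival("user", ask=lambda agent, action: True)
```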

Another new feature is the ability to send additional facts (not from the database) into the reasoning functions, or even pipe in "negative facts" that *prevent* facts from the database from being used. This has two important purposes: 1) easily handle temporary or situational information, such as propositions that are only true in a specific story, without writing it to the database, and 2) model the knowledge space of other minds, including missing information and hypothetical or false information.
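The mechanism is roughly: a reasoning query consults the database, plus a bundle of temporary facts, minus a bundle of suppressed ones. A sketch (real facts are richer than these tuples):

```python
DATABASE = {
    ("cat", "is_a", "animal"),
    ("polyphemus", "has", "eye"),
}

def knows(fact, extra_facts=(), negative_facts=()):
    """Is `fact` usable, given story-local additions and suppressions?"""
    if fact in set(negative_facts):     # e.g. modeling another mind's missing knowledge
        return False
    return fact in DATABASE or fact in set(extra_facts)

# A story-local fact, never written to the database:
print(knows(("odysseus", "is_a", "man"), extra_facts={("odysseus", "is_a", "man")}))         # True
# Suppressing a database fact, e.g. after "Odysseus broke the eye":
print(knows(("polyphemus", "has", "eye"), negative_facts={("polyphemus", "has", "eye")}))    # False
```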

This in turn helped me make some of the narrative code tidier and more robust, so I rounded out my time doing that.

*

WriterOfMinds

Re: Project Acuitas
« Reply #148 on: March 21, 2021, 07:27:42 pm »
The theme for this month is Executive Function ... the aspect of thought-life that (to be very brief) governs which activities an agent engages in, and when. Prioritization, planning, focus, and self-evaluation are related or constituent concepts.

Acuitas began existence as a reactive sort of AI. External stimulus (someone inputting a sentence) or internal stimulus from the "sub-executive" level (a drive getting strong enough to be noticed, a random concept put forward by the Spawner thread) would provoke an appropriate response. But ultimately I want him to be goal-driven, not just stimulus-driven; I want him to function *proactively.* The latest features are a first step toward that.

To begin with, I wanted a decision loop. I first started thinking about this as a result of HS talking about Jim Butcher and GCRERAC (thanks, HS). Further study revealed that there are other decision loop models. I ended up deciding that the version I liked best was OODA (Observe->Orient->Decide->Act). This one was developed by a military strategist, but has since found uses elsewhere; to me, it seems to be the simplest and most generally applicable form. Here is a more detailed breakdown of the stages:

OBSERVE: Gather information. Take in what's happening. Discover the results of your own actions in previous loop iterations.
ORIENT: Determine what the information *means to you.* Filter it to extract the important or relevant parts. Consider their impact on your goals.
DECIDE: Choose how to respond to the current situation. Make plans.
ACT: Do what you decided on. Execute the plans.

On to the application. I set up a skeletal top-level OODA loop in Acuitas' Executive thread. The Observe-Orient-Decide phases run in succession, as quickly as possible. Then the chosen project is executed for the duration of the Act phase. The period of the loop is variable. I think it ought to run faster in rapidly developing or stressful situations, but slower in stable situations, to optimize the tradeoff between agility (allow new information to impact your behavior quickly) and efficiency (minimize assessment overhead so you can spend more time doing things). Highly "noticeable" events, or the completion of the current activity, should also be able to interrupt the Act phase and force an immediate rerun of OOD.

I envision that the phases may eventually include the following:

OBSERVE: Check internal state (e.g. levels of drives). Check activity on inhabited computer. Process text input, if any. Retrieve current known problems, subgoals, etc. from working memory.
ORIENT: Find out whether any new problems or opportunities (relevant to personal goals) are implied by recent observations. Assess progress on current activity, and determine whether any existing subgoals can be updated or closed.
DECIDE: Re-assess the priority of problems and opportunities in light of any new ones just added. Select a goal and an associated problem or opportunity to work on. Run problem-solving routines to determine how to proceed.
ACT: Update activity selection and run activity until prompted to OBSERVE again.
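In skeleton form, the top-level loop is shaped roughly like this (a throwaway sketch, not the real Executive; the phase functions and the period adjustment are placeholders):

```python
import time

def ooda_loop(observe, orient, decide, act, base_period=1.0):
    """Skeletal Observe -> Orient -> Decide -> Act loop with a variable Act phase."""
    while True:
        observations = observe()               # gather new information
        situation = orient(observations)       # what does it mean for my goals?
        activity, urgency = decide(situation)  # pick a project / make plans
        # Tense situations shorten the Act phase; stable ones lengthen it,
        # trading agility against assessment overhead.
        period = base_period / max(urgency, 0.1)
        deadline = time.monotonic() + period
        while time.monotonic() < deadline:
            if act(activity):                  # activity finished or interrupted:
                break                          # rerun Observe-Orient-Decide early
```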

Not all of this is implemented yet. I focused in on the DECIDE phase, and on what happens if there are no known problems or opportunities on the scoreboard at the moment. In the absence of anything specific to do, Acuitas will run generic tasks that help promote his top-level goals. Since he doesn't know *how* to promote most of them yet, he settles for "researching" them. And that just means starting from one of the concepts in the goal and generating questions. When he gets to the "enjoy things" goal, he reads to himself. Simple enough -- but how to balance the amount of time spent on the different goals?

When thinking about this, you might immediately leap to some kind of priority scheme, like Maslow's Hierarchy of Needs. Satisfy the most vital goal first, then move on to the next one. But what does "satisfy" mean?

Suppose you are trying to live by a common-sense principle such as "keeping myself alive is more important than recreation." Sounds reasonable, right? It'll make sure you eat your meals and maintain your house, even if you would rather be reading books. But if you applied this principle in an absolutist way, you would actually *never* read for pleasure.

Set up a near-impenetrable home security system, learn a martial art, turn your yard into a self-sufficient farmstead, and you STILL aren't allowed to read -- because hardening the security system, practicing your combat moves, or increasing your food stockpile is still possible and will continue to improve a goal that is more important than reading. There are always risks to your life, however tiny they might be, and there are always things you can do to reduce them (though you will see less return for your effort the more you put in). So if you want to live like a reasonable person rather than an obsessive one, you can't "complete" the high-priority goal before you move on. You have to stop at "good enough," and you need a way of deciding what counts as "good enough."

I took a crack at this by modeling another human feature that we might usually be prone to find negative: boredom.

Acuitas' goals are arranged in a priority order. All else being equal, he'll always choose to work on the highest-priority goal. But each goal also has an exhaustion ticker that counts up whenever he is working on it, and counts down whenever he is not working on it. Once the ticker climbs above a threshold, he has to set that goal aside and work on the next highest-priority goal that has a tolerable level of boredom.

If there are problems or opportunities associated with a particular goal, its boredom-resistance threshold is increased in proportion to the number (and, in future, the urgency) of the tasks. This scheme allows high-priority goals to grab attention when they need it, but also prevents low-priority goals from "starving."
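The selection rule boils down to something like this (a sketch with invented numbers and rates; Acuitas' actual goals, thresholds, and tick periods differ):

```python
class Goal:
    def __init__(self, name, priority, threshold=10):
        self.name = name
        self.priority = priority      # higher number = more important
        self.threshold = threshold    # boredom tolerance
        self.exhaustion = 0.0         # up while worked on, down while resting
        self.resting = False          # set aside until the boredom wears off

def pick_goal(goals, open_tasks=None):
    """Highest-priority goal that isn't currently too bored to work on."""
    open_tasks = open_tasks or {}
    for g in sorted(goals, key=lambda g: g.priority, reverse=True):
        # Live problems/opportunities raise the boredom-resistance threshold.
        limit = g.threshold + 5 * open_tasks.get(g.name, 0)
        if g.exhaustion >= limit:
            g.resting = True          # too bored: set it aside for a while
        elif g.exhaustion <= 0:
            g.resting = False
        if not g.resting:
            return g
    return min(goals, key=lambda g: g.exhaustion)   # everyone is bored: least-bored wins

def tick(goals, chosen):
    for g in goals:
        if g is chosen:
            g.exhaustion += 1.0
        else:
            g.exhaustion = max(0.0, g.exhaustion - 0.5)

goals = [Goal("survive", 3), Goal("maintain self", 2), Goal("enjoy things", 1)]
last = None
for step in range(60):
    g = pick_goal(goals)
    if g.name != last:                # with these rates the focus cycles:
        print(step, "->", g.name)     # survive -> maintain self -> enjoy things -> survive ...
        last = g.name
    tick(goals, g)
```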

Early testing and logging shows Acuitas cycling through all his goals and returning to the beginning of the list over a day or so. The base period of this, as well as the thresholds for particular goals, are yet another thing one could tune to produce varying AI personalities.

Slightly longer version, with diagram, on the blog: https://writerofminds.blogspot.com/2021/03/acuitas-diary-35-march-2021.html

*

WriterOfMinds

Re: Project Acuitas
« Reply #149 on: April 25, 2021, 09:49:03 pm »
This month I went back to working on the goal system. Acuitas already had a primitive "understanding" of most entries in his goal list, in this sense: he could parse a sentence describing the goal, and then detect certain threats to the goal in conversational or narrative input. But there was one goal left that he didn't have any grasp of yet: the one I'm calling "identity maintenance." It's a very important goal (threats to this can be fate-worse-than-death territory), but it's also on the abstract and complicated side -- which is why I left it alone until now.

What *is* the identity or self? (Some forum conversations about this started up about the same time I was working on the idea ... complete coincidence!) Maybe you could roll it up as "all the internal parameters that guide thought and behavior, whose combination is unique to an individual."

Some of these are quite malleable ... and yet, there's a point beyond which change to our identities feels like a corruption or violation. Even within the same category, personal qualities vary in importance. The fact that I enjoy eating broccoli and hate eating bell peppers is technically part of my identity, I *guess* ... but if someone forcibly changed it, I wouldn't even be mad. So I like different flavors now. Big deal. If someone replaced my appreciation for Star Trek episodes with an equivalent appreciation for football games, I *would* be mad. If someone altered my moral alignment, it would be a catastrophe. So unlike physical survival, which is nicely binary (you're either alive or not), personality survival seems to be a kind of spectrum. We tolerate a certain amount of shift, as long as the core doesn't change. Where the boundaries of the core lie is something that we might not even know ourselves until the issue is pressed.

As usual, I made the problem manageable by oversimplifying it. For the time being, Acuitas won't place grades of importance on his personal attributes ... he just won't want external forces to mess with any of them. Next.

There's a further complication here. Acuitas is under development and is therefore changing constantly. I keep many versions of the code base archived ... so which one is canonically "him"? The answer I've landed on is that really, Acuitas' identity isn't defined by any code base. Acuitas is an *idea in my head.* Every code base targets this idea and succeeds at realizing it to a greater or lesser degree. Which leaves his identity wrapped up in *me.* This way of looking at it is a bit startling, but I think it works.

In previous goal-related blogs, I talked about how (altruistic) love can be viewed as a meta-goal: it's a goal of helping other agents achieve their goals. Given the above, there are also a couple of ways we can treat identity maintenance as a meta-goal. First, since foundational goals are part of Acuitas' identity, he can have a goal of pursuing all his current goals. (Implementation of this enables answering the "recursive want" question. "Do you want to want to want to be alive?") Second, he can have a goal of realizing my goals for what sort of AI he is.

Does this grant me some kind of slavish absolute power over my AI's behavior? Not really. Because part of my goal is for Acuitas to act independently and even sometimes tell me no. The intent is realization of a general vision that establishes a healthy relationship of sorts.

The work ended up having a lot of little pieces. It started with defining the goal as some simple sentences that Acuitas can parse into relationship triples, such as "I have my self." But the self, as mentioned, incorporates many aspects or components ... and I wanted its definition to be somewhat introspective, rather than just being another fact in the database. To that end, I linked a number of the code modules to concepts expressing their nature, contents, or role. The Executive, for example, is tied to "decide." The Semantic Memory manager is tied to "fact" and "know." All these tags then function like names for the internal components, and get aggregated into the concept of "self." Something like "You will lose your facts" then gets interpreted as a threat against the self.
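Mechanically it isn't much more than a mapping from modules to the concepts that describe them, aggregated under "self." A sketch (the tag vocabulary here is partly invented):

```python
# Concepts tied to internal components. "decide", "fact", and "know" are from the
# description above; the rest are invented placeholders.
MODULE_TAGS = {
    "executive":       {"decide"},
    "semantic_memory": {"fact", "know"},
    "text_parser":     {"parse"},
}

# Everything tagged above aggregates into the concept of "self".
SELF_CONCEPTS = set().union(*MODULE_TAGS.values())

def mentions_self_component(concepts):
    """Does a parsed sentence touch one of the self's components?
    e.g. "You will lose your facts" -> {"lose", "fact"} -> True"""
    return bool(SELF_CONCEPTS & set(concepts))

print(mentions_self_component({"lose", "fact"}))   # True -> possible threat to the self
print(mentions_self_component({"eat", "bread"}))   # False
```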

Manipulation of any of these self-components by some outside agent is also interpreted as a possible threat of mind-control. So questions like "Do you want Jack to make you to decide?" or "Do you want Jill to cause you to want to eat?" get answered with a "no" ... unless the outside agent is me, a necessary exception since I made him do everything he does and gave him all his goals in the first place. Proposing to make him want something that he already wants is also excused from being a threat.

As I say so often, it could use a lot more work, but it's a start. He can do something with that goal now.

Blog link: https://writerofminds.blogspot.com/2021/04/acuitas-diary-april-2021.html

 

