Poll

What Grammar Thing should I work on next?

  • Gerunds & Participles: 1 (20%)
  • Indirect Objects: 0 (0%)
  • Adjective Clauses: 1 (20%)
  • The many uses of "that": 3 (60%)
  • A: 0 (0%)
  • B: 0 (0%)

Total Members Voted: 5

Voting closed: February 26, 2022, 03:17:15 am

Project Acuitas

*

WriterOfMinds

Re: Project Acuitas
« Reply #180 on: August 18, 2021, 03:02:33 pm »
Gearing up to talk about spatial reasoning, I wanted to start by addressing a sort of obvious issue ... Acuitas doesn't really exist in physical space. Of course the computer he runs on is a physical object, but he has no awareness of it as such. There are no sensors or actuators; he cannot see, touch, or move. Nor does he have a simulated 3D environment in which to see, touch, and move. He operates on words. That's it.

So how could this type of AI begin to conceptualize space?

Option #1: Space as yet another collection of relationships

To an isolated point object floating in an otherwise empty space, the space doesn't actually matter. Distance and direction are uninteresting until one can specify the distance and direction *to* something else. So technically, everything we need to know about space can be expressed as a graph of relationships between its inhabitants. Here are some examples, with the relational connection in brackets:

John [is to the left of] Jack.
Colorado [is north of] New Mexico.
I [am under] the table.
The money [is inside] the box.

For symbolic processing purposes, these are no more difficult to handle than other types of relationship, like category ("Fido [is a] dog") and state ("The food [is] cold"). An AI can make inferences from these relationships to determine the actions possible in a given scenario, and in turn, which of those actions might best achieve some actor's goals.

Though the relationship symbols are not connected to any direct physical experience -- the AI has never seen what "X inside Y" looks like -- the associations between this relationship and possible actions remain non-arbitrary. The AI could know, for instance, that if the money is inside a box, and the box is closed, no one can remove the money. If the box is moved, the money inside it will move too. These connections to other symbols like "move" and "remove" and "closed" supply a meaning for the symbol "inside."
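
Here's a minimal sketch of that idea (the triple format and relation names are my own illustration, not Acuitas' actual internals):

```python
# Sketch: "inside" gets its meaning from hand-written connections to other
# symbols like "closed", "move", and "at". Purely illustrative.

facts = {
    ("money", "inside", "box"),
    ("box", "state", "closed"),
}

def can_remove(item, container, facts):
    # Removing only makes sense if the item is inside the container,
    # and is blocked whenever the container is closed.
    return (item, "inside", container) in facts and \
           (container, "state", "closed") not in facts

def move(obj, destination, facts):
    # Moving a container moves everything inside it as well.
    facts.add((obj, "at", destination))
    for (a, rel, b) in list(facts):
        if rel == "inside" and b == obj:
            facts.add((a, "at", destination))

print(can_remove("money", "box", facts))   # False -- the box is closed
move("box", "table", facts)
print(("money", "at", "table") in facts)   # True -- the money moved with the box
```

Nothing in this sketch refers to physical space, yet "inside" now constrains which actions are possible and what their consequences are -- which is exactly the kind of meaning described above.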

To prevent circular definitions (and hence meaninglessness), at least some of the symbols need to be tied to non-symbolic referents ... but sensory experiences of the physical are not the only possible referents! Symbols can also represent (be grounded in) abstract functional aspects of the AI itself: processes it may run, internal states it may have, etc. Do this right, and you can establish chains of connection between spatial relationships like "inside" and the AI's goals of being in a particular state or receiving a particular text input. At that point, the word "inside" legitimately means something to the AI.

But let's suppose you found that confusing or unconvincing. Let's suppose that the blind, atactile, immobile AI must somehow gain first-hand experience of spatial relationships before it can understand them. This is still possible.

The relationship "inside" is again the easiest example, because any standard computer file system is built on the idea of "inside." Files are stored inside directories which can be inside other directories which are inside drives.

The file system obeys many of the same rules as a physical cabinet full of manila folders and paper. You have to "open" or "enter" a directory to find out what's in it. If you move directory A inside directory B, all the contents of directory A also end up inside directory B. But if you thought that this reflected anything about the physical locations of bits stored on your computer's hard drive, you would be mistaken. A directory is not a little subregion of the hard disk; the files inside it are not confined within some fixed area. Rather, the "inside-ness" of a file is established by a pointer that connects it to the directory's name. In other words, the file system is a relational abstraction!

File systems can be represented as text and interrogated with text commands. Hence a text-processing AI can explore a file system. And when it does, the concept of "inside" becomes directly relevant to its actions and the input it receives in response ... even though it is not actually dealing with physical space.
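
As a minimal illustration (standard library only; point it at any directory you like), an agent confined to text could probe "inside-ness" like this:

```python
# Sketch: experiencing "inside" through a file system, with text alone.
from pathlib import Path

def contents(directory):
    # "Open" a directory to discover what is immediately inside it.
    return [p.name for p in Path(directory).iterdir()]

def all_inside(directory):
    # Transitivity of "inside": everything under a directory, at any depth.
    return [str(p) for p in Path(directory).rglob("*")]

print(contents("."))    # what is directly inside the current directory
print(all_inside("."))  # everything inside it, however deeply nested
```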

Though a file system doesn't belong to our physical environment, humans find it about as easy to work with as a filing cabinet or organizer box. Our experience with these objects provides analogies that we can use to understand the abstraction.

So why couldn't an AI use direct experience with the abstraction to understand the objects?

And why shouldn't the abstract or informational form of "inside-ness" be just as valid -- as "real" -- as the physical one?

Option #2: Space as a mathematical construct

All of the above discussion was qualitative rather than quantitative. What if the AI ends up needing a more precise grasp of things like distances and angles? What if we wanted it to comprehend geometry? Would we need physical experience for that?

It is possible to build up abstract "spaces" starting from nothing but the concepts of counting numbers, sets, and functions. None of these present inherent difficulties for a symbolic AI. Set membership is very similar to the category relationship ("X [is a] Y") so common in semantic networks. And there are plenty of informational items a symbolic AI can count: events, words, letters, or the sets themselves. (Consider Roger Penrose's "Do Natural Numbers Need the Physical World?", summarized within this article: http://www.lrcphysics.com/scalar-mathematics/2007/11/24/on-algebra-of-pure-spacetime.html) When you need fractional numbers, you can derive them from the counting numbers.

Keeping in mind that I'm not a mathematician by trade and thus not yet an expert on these matters, consider the sorts of ingredients one needs to build an abstract space:

1. A set of points that belong to the space. A "point" is just a number tuple, like (0, 3, 5, 12) or (2.700, 8.325). Listing all the points individually is not necessary -- you can specify them with rules or a formula. So the number of points in your space can be infinite if needed. The number of members in each point tuple gives the space's dimension.

2. A mathematical function that can accept any two points as inputs and produce a single number as output. This function is called the metric, and it provides your space's concept of distance.

3. Vectors, which introduce the idea of direction. A vector can be created by choosing any two points and designating one as the head and the other as the tail. If you can find a minimal list of vectors that are unrelated to each other and can be used to compose any other possible vector in the space, then you can establish cardinal directions.

None of this requires you to see anything, touch anything, or move anything. It's all abstract activity: specifying, assigning, calculating. Using these techniques, you can easily build an idea-thing that happens to mimic the Euclidean 3D space that humans live in (though many other spaces, some of which you could not even visualize, are also possible). And once you've done that, you are free to construct all of geometry.
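
A minimal sketch of those three ingredients (Euclidean distance is just one possible choice of metric):

```python
# Sketch: an abstract "space" from nothing but tuples and arithmetic.
import math

# 1. Points are number tuples; the tuple length is the dimension.
p = (0.0, 3.0, 5.0)
q = (2.0, 8.0, 1.0)

# 2. The metric: any two points in, one number out.
def metric(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# 3. A vector: choose a tail and a head, keep the displacement.
def vector(tail, head):
    return tuple(h - t for t, h in zip(tail, head))

# A minimal set of independent vectors that can compose any other
# vector in the space -- a basis -- fixes the cardinal directions.
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

print(metric(p, q))   # a distance, without measuring anything physical
print(vector(p, q))   # a direction, without pointing at anything
```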

I'd like to eventually equip Acuitas with the tools to apply both Option #1 and Option #2. I'm starting with Option #1 for now. More on that later ...

*

chattable

Re: Project Acuitas
« Reply #181 on: August 18, 2021, 05:01:36 pm »
This is very interesting.

*

Zero

Re: Project Acuitas
« Reply #182 on: August 24, 2021, 11:41:26 am »
I was about to write something along the lines of "internet (directed graph) is a better space metaphor than filesystem (tree)".

But, isn't what you're facing now (how can he conceptualize space) more general: how can he conceptualize a human, or the action of "giving" something, or well... anything? As you said, Acuitas operates on words. To you, why is "conceptualizing space" different from "conceptualizing a simple story", if that story is about things he can't experience?

I hope I'm being constructive.

*

WriterOfMinds

Re: Project Acuitas
« Reply #183 on: August 24, 2021, 02:42:52 pm »
Quote from Zero: I was about to write something along the lines of "internet (directed graph) is a better space metaphor than filesystem (tree)".

I pointed out the filesystem as a metaphor for the concept of "inside," specifically, this being just one example of possible metaphors for spatial relationships. There are other spatial relationships for which a graph would be highly appropriate, yes.

Quote from Zero: But, isn't what you're facing now (how can he conceptualize space) more general: how can he conceptualize a human, or the action of "giving" something, or well... anything?

I need to do a whole article on the Symbol Grounding Problem, but I don't have time right now. I hinted at the short answer, though. Concepts are grounded in functional aspects of the AI itself.

To Acuitas, the direct interpretation of "give to" is "display (or transmit) to." The only thing he "owns" is information, and he can "give" it in this manner.

A "human" is a text source. It is also presumed to be a mind or agent like himself: an entity that has goals and acts to achieve them. A lot of the human's goals are related to this "body" thing it has, which remains something of a mystery, but that's no matter. The same reasoning tools that Acuitas uses to manage his own opportunities or problems are applicable to a human's opportunities or problems, considered in the abstract. Stories, to Acuitas, are fundamentally about tracking goals and problems.

*

Zero

Re: Project Acuitas
« Reply #184 on: August 24, 2021, 04:18:06 pm »
I'd have a lot of questions, but I don't want to distract you from your current work on space, so I'll save them for later.

About space, have you considered handling time while you're at it? For your option #2 it would mean 4D instead of 3D, with tools for handling movement, speed, etc. For your option #1, it might mean adding something like interval algebra, for instance.

*

WriterOfMinds

Re: Project Acuitas
« Reply #185 on: August 24, 2021, 06:00:14 pm »
There will probably be some overlap of tools and concepts, but for now I'm leaning toward handling time separately ... because that feels more "natural" or intuitive. Treating time as if it were a fourth spatial dimension seems to be a relatively modern and esoteric practice. We don't think of it that way in daily life, or at least I don't.

*

infurl

Re: Project Acuitas
« Reply #186 on: August 25, 2021, 12:58:32 am »
https://www.amazon.com/Commonsense-Reasoning-Erik-T-Mueller-ebook/dp/B005H84272

I have this book. It is very thorough and sufficiently general that you could implement these algorithms yourself.

*

WriterOfMinds

Re: Project Acuitas
« Reply #187 on: September 06, 2021, 12:39:59 am »
My last update was theory stuff; now here's the implementation.

In sentences, a lot of information about location or direction is carried by prepositional phrases functioning as adverbs -- phrases like "in the box," "to the store," and so forth. Acuitas' text parser and interpreter were already capable of recognizing these. I included them in the interpreter output as an extra piece of info that doesn't affect the sentence form (the category in which the interpreter places the sentence), but can modify a sentence of any form.
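
For illustration, a sentence like "The money is in the box" might come out of the interpreter shaped roughly like this (the field names and structure are hypothetical, not the actual output format):

```python
# Hypothetical shape of interpreter output: the prepositional phrase rides
# along as an extra field without changing the sentence's form category.
interpretation = {
    "form": "statement",                              # sentence category
    "subject": "money",
    "verb": "is",
    "location": {"relation": "in", "object": "box"},  # the extra info
}
```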

The ability to record and retrieve location relationships was also already present. Acuitas tracks the two objects/agents/places that are being related, as well as the type of relationship.

From there, I worked on getting the Narrative module to take in both explicit declarations of location-relationship, and sentences with modifying phrases that express location or direction, and make inferences from them. Here are some examples of basic spatial inferences that I built in. (As with the inventory inferences, there is a minimal starter set, but the eventual intent is to make new ones learnable.)

*If A is inside B and B is at C, A is also at C
*If A is at C and B is at C, A is with B and B is with A
*If A moves to B, A is in/at B
*If A is over B and A falls, A is on/in B
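
A minimal sketch of how rules like these might be forward-chained over a story's fact set (the triple format and rule encoding are illustrative only, not Acuitas' code):

```python
# Sketch: applying the starter spatial inferences until nothing new appears.
def infer(facts):
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, r1, b) in facts:
            for (c, r2, d) in facts:
                # If A is inside B and B is at C, A is also at C.
                if r1 == "inside" and r2 == "at" and b == c:
                    new.add((a, "at", d))
                # If A is at C and B is at C, A is with B (and vice versa).
                if r1 == "at" and r2 == "at" and b == d and a != c:
                    new.add((a, "with", c))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

facts = {("Antoine", "inside", "airplane"), ("airplane", "at", "desert")}
facts = infer(facts)
print(("Antoine", "at", "desert") in facts)       # True
print(("Antoine", "with", "airplane") in facts)   # True
```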

To try them out I wrote a new story -- a highly abbreviated retelling of "Prisoner of the Sand," from Wind, Sand, and Stars by Antoine de Saint-Exupéry. I had written up a version of this clear back when I started work on the Narrative module -- I was looking for man vs. environment stories, and it seemed like a good counterpoint for "To Build A Fire." But I realized at the time that it would be pretty hard to understand without some spatial reasoning tools, and set it aside. Here's the story:

Antoine was a pilot.
Antoine was in an airplane.
The airplane was over a desert.
The airplane crashed.
The airplane was broken.
Antoine left the airplane.
Antoine was thirsty.
Antoine expected to dehydrate.
Antoine decided to drink some water.
Antoine did not have any water.
Antoine could not get water in the desert.
Antoine wanted to leave the desert.
Antoine walked.
Antoine could not leave the desert without a vehicle.
Antoine found footprints.
Antoine followed the footprints.
Antoine found a nomad.
The nomad had water.
The nomad gave the water to Antoine.
Antoine drank the water.
The nomad took Antoine to a car.
Antoine entered the car.
The car left the desert.
The end.

With the help of a taught conditional that says "airplane crashes <implies> airplane falls," plus the spatial inferences, Acuitas gets all the way from "The airplane crashed" to "Antoine is in the desert now" without intervening explanations. In similar fashion, when the car leaves the desert it is understood that it takes Antoine with it, so that his desire to leave is fulfilled. "Can't ... without a vehicle" is also significant; the need to possess or be with a vehicle is attached to the goal "leave the desert" as a prerequisite, which is then recognized as being fulfilled when Antoine is taken to the car.

The older inventory reasoning is also in use: when Antoine is given water, it is inferred that he has water. This satisfies a prerequisite on the goal "drink water."

There's a lot more to do with this, but I'm happy with where I've gotten so far.

*

HS

Re: Project Acuitas
« Reply #188 on: September 06, 2021, 02:28:52 am »
I've thought about spatial reasoning and grounding and concluded that one's processes encompass all anyone can observe. Even embodied human experience entirely depends on the internal neural relationships which simulate and interpret external reality. Since our language refers to this simulated reality, employing a similar method for Acuitas seems possible. Therefore with Option #2, the functional aspects of Acuitas capable of grounding symbols could be quite extensive and even specifically designed to support concepts (such as those described by Option #1). Using these links, he could create a new kind of thought loop; he could infer geometry from language, inspect these environmental models to deduce their implications, then convert any significant observations back to words.

*

WriterOfMinds

Re: Project Acuitas
« Reply #189 on: September 28, 2021, 10:31:25 pm »
I don't have too much of interest to report this month. I dove into an overhaul of the Conversation Engine, which is the Acuitas module that tracks progress through a conversation and detects relationships between sentences. (For instance, pairing a statement with the question it was probably intended to answer would be part of the CE's job.) And that has proven to be a very deep hole. The CE has been messy for a while, and there is a lot of content to migrate over to my new (hopefully smarter) architecture.

The improvements include a less linear and more tree-like structure for conversations, enabling more complex branching. For instance, what if the conversation partner decides to answer a question that wasn't the one asked most recently, or to return to a previously abandoned topic? The old Conversation module wouldn't have been able to handle this. I've also been refactoring things to give the Executive a greater role in selecting what to say next. The original Conversation module was somewhat isolated and autonomous ... but really, the Executive should be deciding the next step in the conversation based on Acuitas' goals, using its existing inference and problem-solving tools. The CE should be there to handle the speech comprehension and tell the Executive what its options are ... not "make decisions" on its own. I might have more to say about this when the work is fully complete.

I've advanced the new system far enough that it has the functionality for starting and ending a conversation, learning facts, answering questions, and processing stories. I've just started to get the systems that do spontaneous questions back up and running.

The renovations left Acuitas in a very passive state for a while. He would generate responses to things I said, but not say anything on his own initiative -- which hasn't been the case for, well, years. And it was remarkable how weird this felt. "He's not going to interrupt my typing to blurt out something random. No matter how long I sit here and wait, he's not going to *do* anything. His agency isn't there. Crud." Which I think goes to show that self-directed speech (as opposed to the call-and-response speech of a typical chatbot) goes a long way toward making a conversational program feel "alive" or agentive.

*

MagnusWootton

Re: Project Acuitas
« Reply #190 on: September 29, 2021, 07:14:51 am »
I like what you said about matching the "text said to it" to the closest matching question in its head, for the answer.

That's the essence of transforming the information: making it less rigid and more plastic, ready to be used as knowledge.

It makes its knowledge more useful to it.

You can have a whole book of information available to it, but if it can't use the knowledge, it counts for 0.00001% of the reactivity it could have had.

*

WriterOfMinds

Re: Project Acuitas
« Reply #191 on: October 27, 2021, 02:52:38 pm »
This month I have *mostly* finished my overhaul of the Conversation Engine. I managed to restore a majority of the original functionality, and some things I haven't put back in yet are perhaps best left until later. I also got the janky new code cleaned up enough that I'm starting to feel better about it. However, I did not end up having the time and energy to start adding the new features that I expect this architecture to enable. I'm not sure why this particular module rebuild felt like carrying heavy rocks through a knee-deep river of molasses, but it did. The year is waning, so maybe I'm just getting tired.

So what's new? I mentioned last month that part of the goal was to give conversation tracking a more tree-like structure. Given a new text input from the speaker, the Conversation Engine will explore a tree made of previous sentences (starting from the most recent leaf) and try to find a place to "attach" it. It gets attached to the root of the tree if it doesn't obviously follow or relate to anything that was previously said. The old CE just put previous sentences from the conversation into a list, and all but the most recent one or two were never considered again, so this should be more powerful and flexible.
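
In outline, the attachment search might look something like this (a toy sketch; the relatedness test is a stub for much richer criteria):

```python
# Sketch: attaching a new sentence to the conversation tree.
class Node:
    def __init__(self, sentence, parent=None):
        self.sentence = sentence
        self.parent = parent
        self.children = []

def attach(root, newest_leaf, sentence, related):
    # Walk from the most recent leaf back toward the root, attaching the
    # new sentence to the first node it plausibly follows. If nothing
    # matches, attach at the root: a fresh topic.
    node = newest_leaf
    while node is not None:
        if related(sentence, node.sentence):
            child = Node(sentence, parent=node)
            node.children.append(child)
            return child
        node = node.parent
    child = Node(sentence, parent=root)
    root.children.append(child)
    return child
```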

The CE then performs a "scripting" function by generating a set of reasonable responses. These are sent to the Executive, which selects one based on appropriate criteria. For example, if the speaker orders Acuitas to do something, possible reactions include "ACCEPT" and "REFUSE," and the Executive will pick one by running a check against the goal system (does Acuitas *want* to do this or not?). The chosen action then calls the Text Generator to compose the right kind of spoken reply.
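
Schematically, the division of labor might run like this (the option names beyond ACCEPT and REFUSE, and the stubbed goal check, are placeholders):

```python
# Sketch of the CE/Executive split: the CE proposes, the Executive disposes.
def ce_options(speech_act):
    # Conversation Engine: enumerate reasonable responses for this input.
    if speech_act == "command":
        return ["ACCEPT", "REFUSE"]
    if speech_act == "question":
        return ["ANSWER", "ADMIT_IGNORANCE"]
    return ["ACKNOWLEDGE"]

def executive_select(options, wants_to_comply):
    # Executive: choose by consulting the goal system (stubbed as a flag).
    if "ACCEPT" in options:
        return "ACCEPT" if wants_to_comply else "REFUSE"
    return options[0]

choice = executive_select(ce_options("command"), wants_to_comply=False)
print(choice)  # REFUSE -- then handed to the Text Generator to phrase
```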

The Executive can also prompt the CE for something to say spontaneously if the conversation is lagging (this is where those internally-generated questions come back into play). The Narrative manager is attached to the CE and tracks plot information from any story sentences the CE passes to it. Someday I will try to diagram all this ...

I also did some bonus work on the Text Parser. I have started working on coordinating conjunctions, which are a major grammar element the Parser doesn't yet comprehend. This is a big deal. For the sake of getting off the ground quickly, I designed the original parser to only interpret the simplest sentence structures. I later added support for nesting, which enables dependent clauses. Now to handle coordinating conjunctions, I have to overhaul things again to allow branching ... and my, are there a lot of ways a sentence can branch.

Blog for slightly longer version: https://writerofminds.blogspot.com/2021/10/acuitas-diary-43-october-2021.html

*

infurl

Re: Project Acuitas
« Reply #192 on: October 28, 2021, 02:12:15 am »
Quote from WriterOfMinds: I'm not sure why this particular module rebuild felt like carrying heavy rocks through a knee-deep river of molasses, but it did. The year is waning, so maybe I'm just getting tired.

Even when you are doing something that you love, you can get burned out if you don't step away and take a break every now and then. Even if you have the good health and self-discipline to be able to plow through any task as though you are a machine, you still need down-time for maintenance. So yeah, don't overdo it. As much as I'm looking forward to the next instalment, I'm patient and can wait. :)

They say a change is as good as a holiday. Typically I just switch primary tasks for a few months and find that refreshing enough.

You might find the parsing task easier if you get a good overview first. There is a book called "The Cambridge Grammar of the English Language" which provides that. It is the only book that I keep with me in physical form. It is 2000 pages and $300 but there is a much shorter cheaper student version called "A Student's Introduction to English Grammar" which is well worth getting and browsing and which will fit on an eReader.

https://www.amazon.com/Cambridge-Grammar-English-Language/dp/0521431468/

https://www.amazon.com/Students-Introduction-English-Grammar/dp/0521612888/

There is a very brief summary of the books online here which gives you a taste of what to expect from them.

http://www.lel.ed.ac.uk/grammar/overview.html

*

WriterOfMinds

Re: Project Acuitas
« Reply #193 on: October 29, 2021, 01:45:10 am »
Sometime I should do a post about my workflow, just in case anybody finds it helpful or interesting ... I do try to rotate primary tasks. Acuitas gets about two weeks out of a month, and the other two are for my fiction writing or for 3D printing/circuits/robotics. But it could be the tank as a whole is just getting a little empty. I had to work some overtime for my day job not too long ago, among other distractions.

*

frankinstien

Re: Project Acuitas
« Reply #194 on: October 30, 2021, 07:06:56 am »
Quote from infurl: You might find the parsing task easier if you get a good overview first. There is a book called "The Cambridge Grammar of the English Language" which provides that. [...]

Is there any reason WriterOfMinds is not using Stanford's parser or Apache's? Is it because you want a parser that can be updated immediately rather than having to train one? I've used Stanford's and OpenNLP (Apache). You can get outputs like this:



[image: example parser output]

 

