Poll

What Grammar Thing should I work on next?

Gerunds & Participles
1 (20%)
Indirect Objects
0 (0%)
Adjective Clauses
1 (20%)
The many uses of "that"
3 (60%)
A
0 (0%)
B
0 (0%)

Total Members Voted: 5

Voting closed: February 26, 2022, 03:17:15 am

Project Acuitas

  • 300 Replies
  • 637035 Views
*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #285 on: November 26, 2023, 08:14:28 pm »
This month has been all about the Text Parser. I'm pushing to get this latest revision done, and that has crowded out other work for the moment. The big thing I cracked this month - the thing this Parser revision was mostly aiming at - was the ability to nest branches and dependent clauses inside each other.

What I call "branching" takes place when there is a coordinating conjunction in the sentence (like "and" or "but"). Branching can produce simple compounds, as in "Cats and dogs are animals." But sentences can also divide at any point and continue along two separate paths, as in "I fed the dog his dinner and gave Sarah her book." Or start out divided at the beginning and merge, as in "Are you or are you not a man?" Adding conjunction processing and branch management was one of my major accomplishments from last year. But this first version only really supported conjunctions in the uppermost layer of the sentence - not inside or between dependent clauses. Any interaction between branching and that other vital feature - nesting - had the potential to confuse the parser horribly. Not to mention that the code was a huge mess.
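To illustrate the three branching shapes in toy Python (the class and field names here are my own invention for this post, not actual Acuitas internals):

```python
from dataclasses import dataclass, field

@dataclass
class BranchNode:
    """One branched region of a sentence.

    A sentence that divides at a coordinating conjunction is stored as a
    shared prefix, two or more alternative middles, and a shared suffix
    (which is empty when the paths never merge back together).
    """
    prefix: list = field(default_factory=list)    # words common to all paths
    branches: list = field(default_factory=list)  # one word-list per path
    suffix: list = field(default_factory=list)    # words common after the merge

    def paths(self):
        """Expand the structure back into the full word sequences it encodes."""
        return [self.prefix + b + self.suffix for b in self.branches]

# Divide partway through and never merge:
# "I fed the dog his dinner and gave Sarah her book"
s = BranchNode(prefix=["I"],
               branches=[["fed", "the", "dog", "his", "dinner"],
                         ["gave", "Sarah", "her", "book"]])

# Start out divided and merge at the end:
# "Are you or are you not a man?"
q = BranchNode(branches=[["Are", "you"], ["are", "you", "not"]],
               suffix=["a", "man"])
```

The hard part, of course, is not representing the branches but deciding where the divide and merge points fall while nesting is also going on.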

I'm a big believer in the design process that goes "Make a sloppy version that works; refine it later." Later became now and it was time to refine. I'm happy to report that I think I got clearer and better-organized code out of this month's work, in addition to enabling some sentences I couldn't manage before.

Blog with more: https://writerofminds.blogspot.com/2023/11/acuitas-diary-66.html

*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #286 on: December 24, 2023, 05:44:41 pm »
In amongst the fervor of Christmas-related activities, I managed to do development work this month. Not as much as I would have liked, but one major goal's been reached: the newest version of the Text Parser is parsing all the sentences from the previous one's regression tests, plus a number of new ones that would have been impossible for the previous Parser. I still need to do assorted code cleanup and adjust my reference outputs for some format changes, so I can run the benchmarks again.

With all the types of adjective and noun clauses now supported, the last major grammar feature that needs to go in will be parenthetical noun phrases. This should move the majority of common sentences into the "theoretically parseable" category. Then I can start refining the process and working on cool features I've been wanting to explore for a long time - like word sense disambiguation, modifier attachment disambiguation, and pronoun dereferencing ... there's always so much to do. It's taken much longer than I expected to reach this point, but at the same time there's a great deal that I've accomplished.

I'm soon due to start planning my loose schedule for the next year, which has me excited. I'm looking forward to the completion and demo of Big Story, a fresh run on the Parser benchmarks, and probably more work on Game Playing and agency.

https://writerofminds.blogspot.com/2023/12/acuitas-diary-67-december-2023.html

*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #287 on: January 28, 2024, 08:33:15 pm »
This month I cleaned up the code of the upgraded Text Parser and ran it on one of my benchmarks again.

So far I have re-run just one of my three test sets, the easiest one: Log Hotel by Anne Schreiber, which I first added last July. Preparing for this included ...

*Reformatting the existing golden outputs to match some changes to the output format of the Parser
*Updating the diagramming code to handle new types of phrases and other features added to the Parser
*Preparing golden outputs for newly parseable sentences
*Fixing several bugs or insufficiencies that were causing incorrect parses

I also squeezed a couple new features into the Parser. I admit these were targeted at this benchmark: I added what was necessary to handle the last few sentences in the set. The Parser now supports noun phrases used as time adverbs (such as "one day" or "the next morning"), and some conjunction groups with more than two joined members (as in "I drool over cake and pie and cookies").

The end result? ALL sentences in this test set are now "parseable," and two thirds of the sentences are being parsed correctly. See blog for pictured examples: https://writerofminds.blogspot.com/2024/01/acuitas-diary-68-january-2024.html

*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #288 on: February 25, 2024, 09:15:59 pm »
I am pleased to announce that the thing I've been teasing you about for months is finally here: Big Story! Which I can now give its proper title of "Simple Tron." It's been a goal of mine for, well, years now, to tell Acuitas the story of Tron, phrased in a way that he can understand. Yesterday I did it. The version of Tron that I told omits a lot of subplots and side characters, and there's still a long way I could go in deepening Acuitas' understanding of the story (he still doesn't fully grasp *why* all the agents in the story do the things they do, even though the information is there). But it's good enough for now and ready to show the world - the video is available AAAAAAAAA


https://youtu.be/gMfg5KL8jeE?si=wtHLwwfBPi9QdF2p

And of course it could get a whole lot better - the work so far has exposed a bunch of pain points in the way the Narrative module works, and additional things that need to be done. I'll probably keep grooming it over the coming months to improve on the existing framework. And although the whole thing is in real English, it still sounds repetitive and clunky to a human ear, thanks to Acuitas' language processing limitations. (I haven't even integrated that shiny new Text Parser yet.) But the start is done. This initial skeleton of the story fits together from beginning to end.

More details on the blog: https://writerofminds.blogspot.com/2024/02/acuitas-diary-69-february-2024.html

*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1729
    • mind-child
Re: Project Acuitas
« Reply #289 on: February 26, 2024, 09:17:04 pm »
Great work! I especially like the story development timeline at the end of the video.

*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #290 on: March 28, 2024, 02:58:28 pm »
This month it's all about refactoring, which has taken two major directions. First, I wanted to take some design patterns I developed while working on the game-playing engine and apply them to the main Executive code. The key change here is re-using the "scratchboard" from the Narrative understanding module as a working memory for tracking Acuitas' own current situation (or personal narrative, if you will). I also wanted to improve on some of the original OODA loop code and fatigue tracking with newer ideas from game-playing. I have a rough cut of the new Executive written and mostly integrated, though it needs more testing than I've had time for yet.

My second project was to merge some data formats. For a long while now, I've had one type of data structure that the Text Interpreter spits out, another that the Narrative Engine and its accessories use, and still another for facts kept in the Semantic Memory. The output of the Text Interpreter has proven to be a somewhat clunky intermediate format; I don't do a lot with it in its own right, I just end up converting it to Narrative's format. And the format used in the Semantic Memory is very old and limited, a relic of a time when I wasn't fully aware of what I needed in a knowledge representation. So my goal is to get rid of both of those and have a single unified format downstream of the Text Interpreter. This is a lot of work: I've had to rewrite many, many functions that access the semantic memory or otherwise manipulate knowledge data, create a script to convert the existing contents of the database, revise the Interpreter's output code, and more. I'm hoping this will pay off in increased clarity, consistency, efficiency, and expressiveness across the design.

https://writerofminds.blogspot.com/2024/03/acuitas-diary70-march-2024.html

*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #291 on: April 30, 2024, 05:36:37 pm »
This past month, I started adding proper support for "issue trees," a feature whose absence has pained me as I've worked on the Narrative and Game Engines. Problems and goals seem to naturally exist in a hierarchy: any problem spawns a plan for solving it, which can contain new tasks or subproblems that require their own solutions, and so on until one reaches atomic actions that can be performed without issue. The Narrative understanding code already included some procedures for inferring extra issues from those explicitly stated in a story. But after they were created, no connection was maintained between parent and child issues.

So my work included adding the proper tree relationships, plus some code that would enforce the recursive cascade of issue deactivation when a problem is solved or a goal realized, and testing to be sure this worked correctly in Narrative and didn't break anything in the Game Engine.
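A minimal sketch of what that cascade looks like (invented names, nothing like the real Narrative code, but the recursion is the point):

```python
class Issue:
    """A problem or goal node in an issue tree."""

    def __init__(self, description, parent=None):
        self.description = description
        self.active = True
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def resolve(self):
        """Mark this issue solved/realized and recursively deactivate all
        sub-issues, since they only existed in service of this one."""
        self.active = False
        for child in self.children:
            child.resolve()

# Problem -> plan -> subproblem, as in a story:
root = Issue("get the door open")
plan = Issue("find the key", parent=root)
sub = Issue("ask Sarah where the key is", parent=plan)
root.resolve()  # solving the parent deactivates the whole subtree
```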

I have also been pushing hard to get that previously-mentioned knowledge representation refactoring finished. I got to the point of bringing the reformatted semantic database online and moving a lot of changes into the live code - but I did not quite get it finished, so if the Acuitas codebase were a business, it would have "pardon our dust" signs everywhere. He can at least read stories without crashing, and get through a rudimentary "Hello, my name is ..." conversation, but there are a lot of bugs for me to clean up yet. I'm planning to revise the Conversation area soon anyway, though, so maybe it's okay?

More on the blog: https://writerofminds.blogspot.com/2024/04/acuitas-diary-71-april-2024.html

*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #292 on: May 29, 2024, 12:21:01 am »
My primary focus this month has been on an overhaul of the Conversation Engine. The last time I revised it, the crux of the work was to add a tree-like aspect to Acuitas' memory of the conversation. The expectation was that this would help with things like "one topic nested inside another," or "returning to a previous unfinished conversation thread." Well ... what does that sound similar to? Perhaps the "issue trees" I described in last month's post? The crux of this month's work was a unification of the Conversation Engine's tracking with the Narrative architecture, such that each conversation becomes, in effect, a narrative.

The CE now instantiates its own Narrative scratchboard to record conversational events, and logs conversational objectives as Issues on the board. For example, the desire to learn the current speaker's name is represented as something like "Subgoal: speaker tell self {speaker is_named ?}" When the speaker says something, the Conversation Engine will package the output from the Text Interpreter as an event like "speaker tell <fact>" or "speaker ask <fact>" before passing it to the scratchboard, which will then automatically detect whether the event matches any existing issues. The CE also includes a specialized version of the Executive code, to select a new issue to "work on" whenever the current issue has been fulfilled or thwarted. On his side of the conversation, Acuitas will look for ways to advance or solve the current issue ... e.g. by asking a question if he hopes to make the speaker tell him something.
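To give a flavor of the matching step (this tuple format and wildcard convention are my illustration, not the scratchboard's actual representation):

```python
# Toy pattern matcher: "?" is an open slot, so the subgoal
# (speaker, tell, self, (speaker, is_named, ?)) matches an incoming
# "tell" event carrying any name at all.

WILDCARD = "?"

def matches(pattern, event):
    """Recursively compare an issue pattern against an event."""
    if pattern == WILDCARD:
        return True
    if isinstance(pattern, tuple) and isinstance(event, tuple):
        return (len(pattern) == len(event)
                and all(matches(p, e) for p, e in zip(pattern, event)))
    return pattern == event

subgoal = ("speaker", "tell", "self", ("speaker", "is_named", WILDCARD))
event   = ("speaker", "tell", "self", ("speaker", "is_named", "Jenny"))
```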

This enables pretty much all the tree-like behaviors I wanted, in a tidier and more unified way than the old conversation tracking code did. My last overhaul of the Conversation Engine always felt somewhat clunky, even after I did a cleanup pass on the code, and I never fully cleared out all the bugs. I'm hoping that exploiting the well-developed Narrative code will make it a little more robust and easier to maintain.

So far, I've got the new CE able to do a greeting-introductions-farewell loop and basic question answering, and I've got it integrated with the main Acuitas code base. There's a ton of additional work to do to reproduce all the conversation functionality in this new format, but I also gave myself a lot of time for it, so expect further updates on this in the coming months.

Blog link: https://writerofminds.blogspot.com/2024/05/acuitas-diary-72-may-2024.html

*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #293 on: July 01, 2024, 12:37:21 am »
This was a light month for Acuitas work - which means not that I necessarily spent less time, but that I was busy taking care of technical debt. My main objective was to get that shiny new Text Parser revision I wrote last year integrated into the rest of the code. I also converted another of my benchmark test sets to the new Parser format.

There were some small, but significant, alterations to the output format of the Parser, so the greatest part of the work was revising the Text Interpreter to properly handle the new form of input. Nothing else in Acuitas views the output of the Parser directly, so these changes were nicely isolated. I did not have to crawl through the whole system making revisions, as I did during the knowledge representation refactor. It was sufficient to re-harmonize the Parser and Interpreter, and get the Interpreter regression to pass.

I converted and ran tests on the "Out of the Dark" benchmark set. Accuracy is sitting where it was the last time I benchmarked this set, about 50% (and if I spend some more time on Parser bugs, I am almost certain I can bring this up). The important difference is that many new sentences have moved out of the "Unparseable" category. Only 6 out of 116 sentences (about 5%) remain Unparseable, due to inclusion of parenthetical noun phrases or oddities that I might not bother with for a long while. The previous Unparseable portion for this set, from last July, was 27%. Better handling of conjunctions, dependent clauses, and noun phrases used as adverbs enabled most of the improvements.

The integration process and the new benchmark set flushed out a number of Parser bugs that hadn't shown up previously. Some of these were growing pains for the new features. For example, multiple sentences failed because the Parser's new facility for collecting groups of capitalized words into proper names was being too aggressive. The Parser can now, at least in theory, recognize "The End of Line Club" as a single unit. However, in a sentence like "But Flynn went to work anyway," it was wanting to treat "But Flynn" as a full name. You mean you never heard about Kevin Flynn's *other* first name, But? I cleaned up a lot of that stuff as I was working.
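The fix amounts to a guard condition on the grouping heuristic. Something in this spirit (the word lists and rule here are simplified illustrations, not the Parser's real logic, and this toy version can't yet keep "of" inside "The End of Line Club"):

```python
# Group consecutive capitalized words into proper names, but don't let a
# sentence-initial conjunction that's only capitalized by position
# ("But Flynn...") get swallowed into the name.

FUNCTION_WORDS = {"but", "and", "or", "so", "yet", "the", "a"}

def collect_names(tokens):
    names, current = [], []
    for i, tok in enumerate(tokens):
        starts_sentence = (i == 0)
        if tok[0].isupper() and not (starts_sentence
                                     and tok.lower() in FUNCTION_WORDS):
            current.append(tok)          # start or extend a name group
        else:
            if current:
                names.append(" ".join(current))
            current = []
    if current:
        names.append(" ".join(current))
    return names
```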

I'm still not quite ready to bring the newest Parser and Interpreter into the live code, because I want to test them on the stories and ensure there are no major hiccups. That is (hopefully!) a quick thing that I can do in the background while I keep working on the Conversation Engine.

Blog link: https://writerofminds.blogspot.com/2024/06/acuitas-diary-73-june-2024.html

*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #294 on: July 30, 2024, 05:23:38 pm »
My work this month was focused on cleaning up the Executive and Conversation Engine and getting them to play well together. This is important because the Conversation Engine has become like a specialized inner loop of the Executive. I think I ought to start at the beginning with a recap of what Acuitas' Executive does.

To put it simply, the Executive is the thing that makes decisions. Conceptually (albeit not technically, for annoying reasons) it is the main thread of the Acuitas program. It manages attention by selecting Thoughts from the Stream (a common workspace that many processes in Acuitas can contribute to). After selecting a Thought, the Executive also takes charge of choosing and performing a response to it. It runs the top-level OODA Loop which Acuitas uses to allocate time to long-term activities. And it manages a Narrative Scratchboard on which it can track Acuitas' current goals and problems.

A conversation amounts to a long-term activity which uses specialized decision-making skills. In Acuitas, these are embodied by the code in the Conversation Engine. So when a conversation begins, the CE in a sense "takes over" from the main Executive. It has its own Narrative Scratchboard that it uses to track actions and goals specific to the current conversation. It reacts immediately to inputs from the conversation partner, but also runs an inner OODA loop to detect that this speaker has gone quiet for the moment and choose something to say spontaneously. The top-level Executive thread is not quiescent while this is happening, however. Its job is to manage the conversation as an activity among other activities - for instance, to decide when it should be over and Acuitas should do something else, if the speaker does not end it first.

Though the Executive and the CE have both been part of Acuitas for a long time, their original interaction was more simplistic. Starting a conversation would lock the Executive out of selecting other thoughts from the Stream, or doing much of anything; it kept running, but mostly just served the CE as a watchdog timer, to terminate the conversation if the speaker had said nothing for too long and had probably wandered off. The CE was the whole show for as long as the conversation lasted. Eventually I tried to move some of the "what should I say" decision-making from the CE up into the main Executive. In hindsight, I'm not sure about this. I was trying to preserve the Executive as the central seat of will, with the CE only providing "hints" - but now I think that blurred the lines of the two modules and led to messy code, and instead I should view the CE as a specialized extension of the Executive. For a long time, I've wanted to conceptualize conversations, games, and other complex activities as units managed at a high level of abstraction by the Executive, and at a detailed level by their respective procedural modules. I think I finally got this set up the way I want it, at least for conversations.

So here's how it works now. When somebody puts input text in Acuitas' user interface, the Executive is interrupted by the important new "sensory" information, and responds by creating a new Conversation goal on its scratchboard. The CE is also called to open a conversation and create its sub-scratchboard. Further input from the Speaker still provokes an interrupt and is passed down to the CE immediately, so that the CE can react immediately. For the Executive's purposes, the Conversation goal is set as the active goal, and participating in the Conversation becomes the current "default action." From then on, every time the Executive ticks, it will either pull a Thought out of the Stream or select the default action. This selection is random but weighted; Acuitas will usually choose the default action. If he does, the Executive will pass control to the CE to advance the conversation with a spontaneous output. In the less likely event that some other Thought is pulled out of the Stream, Acuitas may go quiet for the next Executive cycle and think about a random concept from semantic memory, or something.

If Acuitas is not conversing with someone, the "default action" can be a step in some other activity - e.g. Acuitas reading a story to himself.
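The weighted selection at the heart of the tick could be sketched like this (the 0.8 weight and all names are made-up illustration values, not the real Executive's numbers):

```python
# Each Executive tick: usually take the default action (e.g. advance the
# current conversation), occasionally pull another Thought from the Stream.

import random

def executive_tick(stream_thoughts, default_action, p_default=0.8):
    if stream_thoughts and random.random() > p_default:
        return random.choice(stream_thoughts)  # e.g. muse on a random concept
    return default_action                      # e.g. advance the conversation

random.seed(0)
picks = [executive_tick(["muse"], "converse") for _ in range(1000)]
```

The bias keeps the conversation flowing while still leaving room for the occasional stray thought.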

Longer version on the blog: https://writerofminds.blogspot.com/2024/07/acuitas-diary-74-july-2024.html

*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #295 on: August 27, 2024, 10:19:31 pm »
This month I turned back to the Text Parser and began what I'm sure will be a long process: tackling sentence structure ambiguity. I was specifically focusing on ambiguity in the function of prepositional phrases. Consider these two sentences:

I gave the letter to John.
I gave Sarah the letter to John.

The prepositional phrase is "to John." The exact same phrase can modify either the verb, as in the first sentence (to whom did I give?) or the noun immediately preceding it, as in the second sentence (which letter?). In this example, the distinguishing factor is nothing in the phrase itself, but the presence or absence of an indirect object. In the second sentence, the indirect object takes over the role of indicating "to whom?", so by process of elimination, the phrase must indicate "which letter."

There are further examples in which the plain structure of the sentence gives no sign of a prepositional phrase's function. For instance, there are multiple modes in which "with" can be used:

I hit the nails with the hammer. (Use of a tool; phrase acts as adverb attached to "hit")
I found the nails with the hammer. (Proximity; phrase acts as adverb attached to "found")
I hit the nails with my friends. (Joint action; phrase acts as adverb attached to "hit")
I hit the nails with the bent shanks. (Identification via property; phrase acts as adjective attached to "nails")

How do you, the reader, tell the difference? In this case, it's the meaning of the words that clues you in. And the meaning lies in known properties of those concepts, and the relationships between them. This is where the integrated nature of Acuitas' Parser really shines. I can have it query the semantic memory for hints that help resolve the ambiguity, such as:

Are hammers/friends/shanks typically used for hitting?
Can hammers/friends/shanks also hit things?
Are hammers/friends/shanks something that nails typically have?
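A toy version of that lookup, loosely following the three questions above (the fact triples and decision rule are illustration-only assumptions, not Acuitas' actual knowledge format):

```python
# Decide whether a "with X" phrase attaches to the verb (adverb) or to
# the preceding noun (adjective) by consulting a small fact store.

FACTS = {
    ("hammer", "used_for", "hit"),   # hammers are typically used for hitting
    ("friend", "can_do", "hit"),     # friends can also hit things
    ("nail", "has_part", "shank"),   # shanks are something nails have
}

def attach_with_phrase(verb, head_noun, with_noun):
    """Return 'adverb' (attach to verb) or 'adjective' (attach to noun)."""
    if (with_noun, "used_for", verb) in FACTS:
        return "adverb"      # tool use: "hit the nails with the hammer"
    if (with_noun, "can_do", verb) in FACTS:
        return "adverb"      # joint action: "hit the nails with my friends"
    if (head_noun, "has_part", with_noun) in FACTS:
        return "adjective"   # property: "the nails with the bent shanks"
    return "adverb"          # default to verb attachment when unsure
```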

More on the blog: https://writerofminds.blogspot.com/2024/08/acuitas-diary-75-august-2024.html

*

MagnusWootton

  • Replicant
  • ********
  • 646
Re: Project Acuitas
« Reply #296 on: August 28, 2024, 12:45:49 am »
Really awesome work.

This would be really cool to add to a roomba so you could voice command it! :)

Acuitas is a form of sentience that doesn't involve a soul, so it seems good in a moral sense: you don't have to worry about the robot getting depressed even though it has a good intellectual response, and you still get the benefits of an intelligent machine.

It would be nice and convenient to just be able to speak to it to tell it what to do, instead of using a remote control, which is the other option for a bot - but there are things a remote can't do.

Tricky things like isolating the object in question from the rest of the objects in the scene (a usual part of a command) - that's when the voice commanding possibly gets a bit more involved. Then it definitely could do more than just a remote.
Another thing is the robot could make sure it's the owner issuing the command; that could take some good AI to handle the security as well.

So, really good project - and you don't need to use the GPT-4 code if you know how to code it yourself. :)

*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 616
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #297 on: September 25, 2024, 06:31:34 pm »
This month's update is all about the Conversation Engine upgrades. There are still a lot of "pardon our dust" signs all over that module, but it's what I've been putting a lot of time into recently, so I'd better talk about where it's going.

The goal here has been to start enhancing conversation beyond the "ask a question, get an answer" or "make a request, get a response" interactions that have been the bread and butter of the thing for a while. I worked in two different directions: first, expanding the repertoire of things Acuitas can say spontaneously, and second, adding responses to personal states reported by the conversation partner.

One of Acuitas' particular features - which doesn't seem to be terribly common among chatbots - is that he doesn't just sit around waiting for a user input or prompt, and then respond to it. The human isn't the only one driving the conversation; if allowed to idle for a bit, Acuitas will come up with his own things to say. This is a very old feature. Originally, Acuitas would only spit out questions generated while "thinking" about the contents of his own semantic memory, hoping for new knowledge from the human speaker. I eventually added commentary about Acuitas' own recent activities and current internal states. Whether all of this worked at any given time varied as I continued to modify the Conversation Engine.

In recent work, I used this as a springboard to come up with more spontaneous conversation starters and add a bit more sophistication to how Acuitas selects his next topic. For one thing, I made a point of having a "self-facing" and "speaker-facing" version of each option. The final list looks something like this:

States:
*Self: convey internal state
*Speaker: ask how speaker is
Actions:
*Self: say what I've done recently
*Speaker: ask what speaker has done recently
Knowledge:
*Self: offer a random fact from semantic memory
*Speaker: ask if the speaker knows anything new
Queries:
*Self: ask a question
*Speaker: find out whether speaker has any questions

Selection of a new topic takes place when Acuitas gets the chance to say something, and has exhausted all his previous conversation goals. The selection of the next topic from these options is weighted random. The weighting encourages Acuitas to rotate among the four topics so that no one of them is covered excessively, and to alternate between self-facing and speaker-facing options. A planned future feature is some "filtering" by the reasoning tools. Although selection of a new topic is random and in that sense uncontrolled, the Executive should be able to apply criteria (such as knowledge of the speaker) to decide whether to roll with the topic or pick a different one. Imagine thinking "what should I say next" and waiting for ideas to form, then asking yourself "do I really want to take the conversation there?" as you examine each one and either speak it or discard it. To be clear, this isn't implemented yet. But I imagine that eventually, the Conversation Engine's decision loop will call the topic selection function, receive a topic, then either accept it or call topic selection again. (For now, whichever topic gets generated on the first try is accepted immediately.)
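The rotation bias could be sketched like so (the weight values and decay rate are illustrative guesses on my part, not the actual numbers):

```python
# Weighted-random topic selection where a topic's weight is suppressed
# right after it is used and recovers gradually, so no topic dominates.

import random

TOPICS = ["states", "actions", "knowledge", "queries"]

def pick_topic(weights):
    topic = random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]
    for t in TOPICS:                  # unused topics drift back toward full weight
        weights[t] = min(1.0, weights[t] + 0.1)
    weights[topic] = 0.2              # the one just used is suppressed
    return topic

random.seed(1)
weights = {t: 1.0 for t in TOPICS}
history = [pick_topic(weights) for _ in range(400)]
```

Over a long run this keeps all four topics in circulation without making the rotation rigidly predictable.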

Each of these topics opens up further chains of conversation. I decided to focus on responses to being told how the speaker is. These would be personal states like "I'm tired," "I'm happy," etc. There are now a variety of things Acuitas can do when presented with a statement like this:

*Gather more information - ask how the speaker came to be in that state.
*Demonstrate comprehension of what the speaker thinks of being in that state. If unaware whether the state is positive or negative, ask.
*Give his opinion of the speaker being in that state (attempt sympathy).
*Describe how he would feel if in a similar state (attempt empathy).
*Give advice on how to either maintain or get out of the state.

Attempts at information-gathering, if successful, will see more knowledge about the speaker's pleasure or problem loaded into the conversation's scratchboard. None of the other responses are "canned"; they all call reasoning code to determine an appropriate reply based on Acuitas' knowledge and nature, and whatever the speaker actually expressed. For instance, the "give advice" response calls the problem-solving function.

Lastly, I began to rework short-term memory. You might recall this feature from a long time ago. There are certain pieces of information (such as a speaker's internal states) that should be stored for the duration of a conversation or at least a few days, but don't belong in the permanent semantic memory because they're unlikely to be true for long. I built a system that used a separate database file as a catch-all for storing these. Now that I'm using narrative scratchboards for both the Executive's working memory and conversation tracking, it occurred to me that the scratchboard provides short-term memory, and there's no need for the other system! Retrieving info from a dictionary in the computer's RAM is also generally faster than doing file accesses. So I started revising the knowledge-storing and question-answering code to use the scratchboards. I also created a function that will copy important information from a conversation scratchboard up to the main executive scratchboard after a conversation closes.

I'm still debugging all this, but it's quite a bit of stuff, and I'm really looking forward to seeing how it all works once I get it nailed down more thoroughly.

Blog version has hyperlinks to past articles: https://writerofminds.blogspot.com/2024/09/acuitas-diary-76-september-2024.html

*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1729
    • mind-child
Re: Project Acuitas
« Reply #298 on: September 25, 2024, 08:43:43 pm »
I believe what WriterOfMinds does may hold significant value. I believe LLMs are black boxes that we have to control from the outside in, by stating constraints on what they should not do (trial-and-error cycles). In contrast, Acuitas is built from the inside out, stating what it should do in certain situations. This reminds me of modelling something called instinct (approaching it thoughtfully and constructively).

Nature had 5 billion years and a gazillion tries to shape us into what we are today, so trial and error might work quite well at such quantities. We can copy that info from today's large human-made corpora, which are the current state of that evolution. But the result will still be a black box that mimics intelligence without us being entirely sure how. And I'm not sure how much we can advance such copies, except in the raw speed of performing tasks.

But taking the Acuitas inside-out approach may provide an inspiration which we may further steer in a direction of our choice. It leaves space for controlled improvements, putting us in the position of deciding what the future of intelligence may look like.

In short, I see LLMs as learning past knowledge, while the symbolic Acuitas approach may represent a step toward shaping future knowledge. Maybe their combination is what the whole world is after these days: a machine that quacks like a human, walks like a human, and flies like a human. And if the final result had all the qualities of a human (even externally), the question I'd dare to ask would be: "What kind of treatment would such a machine deserve from us, humans?"

*

MikeB

  • Autobot
  • ******
  • 224
Re: Project Acuitas
« Reply #299 on: September 30, 2024, 04:51:23 pm »
All attempts to actually understand and engineer language (plan, develop, & test cycles) have real value.

I'm counting down the days until a thousand people slap their forehead and realise AI is just guessing algorithms, and then there's a pivot to throwing money at engineers and solving language engineering problems.

 

