Recent Posts

Pages: 1 2 3 [4] 5 6 ... 10
31
AI Programming / Re: Running local AI models
« Last post by frankinstien on October 07, 2024, 01:32:38 am »
I started to look at the AMD Instinct cards. ROCm supports the MI50 and MI60, though it will deprecate support for them in the next ROCm release. At $135 for 16 GB of RAM and 27 TFLOPs FP16, the MI50 isn't bad for the price. But these cards are passively cooled, so you have to add a blower to cool them off - not expensive, but noisy. Three cards will run you $405 and give 81 TFLOPs with 48 GB of RAM. The MI60s run $299 and can perform at almost 30 TFLOPs FP16 with 32 GB of RAM; three of those run about $900, but you get nearly 90 TFLOPs and 96 GB of RAM! These cards have to run under Linux, but as a companion solution a dedicated Linux system isn't bad. LM Studio works with ROCm, so using the AMD units should work...
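
As a quick sanity check on the price/performance numbers above (figures as quoted in this post, with the MI60's "almost 30" taken as its 29.5 TFLOPs spec-sheet value):

Code:
# Back-of-the-envelope comparison of a 3-card build of each Instinct option.
cards = {
    "MI50": {"price": 135, "tflops_fp16": 27.0, "vram_gb": 16},
    "MI60": {"price": 299, "tflops_fp16": 29.5, "vram_gb": 32},
}
for name, c in cards.items():
    cost = 3 * c["price"]
    tflops = 3 * c["tflops_fp16"]
    vram = 3 * c["vram_gb"]
    print(f"3x {name}: ${cost}, {tflops:.0f} TFLOPs FP16, {vram} GB VRAM, "
          f"${cost / tflops:.2f}/TFLOP, ${cost / vram:.2f}/GB")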
32
AI Programming / Running local AI models
« Last post by frankinstien on October 04, 2024, 11:51:42 pm »
I'm using LM Studio as a server and have used it as an app as well, and the local LLMs out there are outstanding! They are getting smaller and are competitive with online solutions like Replika! Also, the ability to operate these without NSFW filters makes them great when plotting bank robberies or murders! LoL - or at least when creating a script or novel along those lines; even horror and intimacy interactions are off the charts!
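
For anyone who wants to try the server mode: LM Studio's local server exposes an OpenAI-compatible API (default http://localhost:1234/v1), so a few lines of Python are enough to talk to it. A minimal sketch, assuming the server is running with a model loaded; the model identifier below is a placeholder:

Code:
# Chat with a local LM Studio server over its OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
reply = client.chat.completions.create(
    model="local-model",  # placeholder; use whatever model you've loaded
    messages=[{"role": "user", "content": "Outline a heist-novel plot."}],
)
print(reply.choices[0].message.content)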

So there's also the ability to run other types of local models, such as voice: solutions like parler-tts and tortoise-tts have excellent voice abilities, and you can even customize them to whatever voice you like! Whisper can do the opposite, STT, and with no censorship! Also, there are photo and video solutions like LLaVA-NeXT, where the AI can form an impression of an image, plus generative models that create images and videos from prompts.

Here's the good part: integrating these into a system that can see, hear, and imagine is a reality. Taking each model's output and using it to prompt the others provides a kind of feedback approach that creates... well, some might argue, a persona. Enhance the prompts with other types of data, and even bring in some causality models, and we might just get that person from science fiction, all done from a PC!
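
To make the feedback idea concrete, here's a rough hear-think-speak sketch: Whisper transcribes a turn of audio, the text prompts the local LLM through LM Studio, and the reply goes to a speak() placeholder standing in for whichever TTS you pick (parler-tts, tortoise-tts, etc.):

Code:
# Hear -> think -> speak loop. Assumes the openai-whisper package and a
# running LM Studio server; speak() is a stand-in for a real TTS engine.
import whisper
from openai import OpenAI

stt = whisper.load_model("base")
llm = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
history = [{"role": "system", "content": "You are a helpful local persona."}]

def speak(text):
    print("BOT:", text)  # swap in parler-tts / tortoise-tts output here

def one_turn(audio_path):
    heard = stt.transcribe(audio_path)["text"]  # hear
    history.append({"role": "user", "content": heard})
    response = llm.chat.completions.create(     # think
        model="local-model",                    # placeholder identifier
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    speak(reply)                                # speak

one_turn("mic_capture.wav")  # hypothetical microphone recording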

What's required on the local machine is more than one GPU. RTX 4070 Ti Supers are selling for $650, but you can mix and match what you want: perhaps an RTX 4090 is best for image and video, with the RTX 4070 Ti Supers doing the rest. Even three GPUs at just the RTX 4070 Ti Super minimum gives 48 GB of VRAM! But perhaps you need more, since you may want a VR setup as well and give your bot a virtual body!

It's just freaking fantastic what is possible today, and it's free from the clutches of politically correct censorship. Let your imagination go and apply your skills toward integration, and you could very well build a very sophisticated competitor to ChatGPT-4o that runs at home.

Now what's still a challenge is the development of a hard body and animatronic facial expressions (the outputs generated by LLM models are freaking great - they could be used to control expressions and even position a body!).

It's a great time for the enthusiast. For a while there I thought everything was going to be locked up in the corporate cloud, controlled through a pay interface, and monitored by Big Brother, but America proves itself to be the land of freedom, and the industry has opened up to the little guy...
33
General Project Discussion / Re: Project Acuitas
« Last post by MikeB on September 30, 2024, 04:51:23 pm »
All attempts to actually understand and engineer language (plan, develop, & test cycles) have real value.

I'm counting down the days until a thousand people slap their forehead and realise AI is just guessing algorithms, and then there's a pivot to throwing money at engineers and solving language engineering problems.
34
General Project Discussion / Re: Project Acuitas
« Last post by ivan.moony on September 25, 2024, 08:43:43 pm »
I believe what WriterOfMinds does may hold significant value. I believe LLMs are black boxes that we have to control from the outside in, by stating constraints on what they should not do (trial-and-error cycles). In contrast, Acuitas is built from the inside out, stating what it should do in certain situations. This reminds me of modelling something called instinct (approaching it thoughtfully and constructively).

Nature had 5 billion years and a gazillion tries to shape us into what we are today, so trial-and-error might work quite well at such quantity. We can copy that info from today's large human-made corpora, which are the current state of that evolution. But the result will still take the place of a black box that mimics intelligence in ways we are not entirely sure of. And I'm not sure how much we can advance such copies, except in the raw speed of performing tasks.

But taking the Acuitas inside-out approach may provide an inspiration which we can further steer in a direction of our choice. It leaves space for controlled improvements, putting us in the position of deciders on how the future of intelligence may look.

In short, I see LLMs as learning past knowledge, while the symbolic Acuitas approach may represent a step in the direction of shaping future knowledge. Maybe their combination is what the whole world is after these days: a machine that quacks like a human, walks like a human, and flies like a human. And if the final result had all the qualities of a human (even externally), the question I'd dare to ask would be: "What kind of treatment would such a machine deserve from us, humans?"
35
General Project Discussion / Re: Project Acuitas
« Last post by WriterOfMinds on September 25, 2024, 06:31:34 pm »
This month's update is all about the Conversation Engine upgrades. There are still a lot of "pardon our dust" signs all over that module, but it's what I've been putting a lot of time into recently, so I'd better talk about where it's going.

The goal here has been to start enhancing conversation beyond the "ask a question, get an answer" or "make a request, get a response" interactions that have been the bread and butter of the thing for a while. I worked in two different directions: first, expanding the repertoire of things Acuitas can say spontaneously, and second, adding responses to personal states reported by the conversation partner.

One of Acuitas' particular features - which doesn't seem to be terribly common among chatbots - is that he doesn't just sit around waiting for a user input or prompt, and then respond to it. The human isn't the only one driving the conversation; if allowed to idle for a bit, Acuitas will come up with his own things to say. This is a very old feature. Originally, Acuitas would only spit out questions generated while "thinking" about the contents of his own semantic memory, hoping for new knowledge from the human speaker. I eventually added commentary about Acuitas' own recent activities and current internal states. Whether all of this worked at any given time varied as I continued to modify the Conversation Engine.

In recent work, I used this as a springboard to come up with more spontaneous conversation starters and add a bit more sophistication to how Acuitas selects his next topic. For one thing, I made a point of having a "self-facing" and "speaker-facing" version of each option. The final list looks something like this:

States:
  Self: convey internal state
  Speaker: ask how speaker is
Actions:
  Self: say what I've done recently
  Speaker: ask what speaker has done recently
Knowledge:
  Self: offer a random fact from semantic memory
  Speaker: ask if the speaker knows anything new
Queries:
  Self: ask a question
  Speaker: find out whether speaker has any questions

Selection of a new topic takes place when Acuitas gets the chance to say something, and has exhausted all his previous conversation goals. The selection of the next topic from these options is weighted random. The weighting encourages Acuitas to rotate among the four topics so that no one of them is covered excessively, and to alternate between self-facing and speaker-facing options. A planned future feature is some "filtering" by the reasoning tools. Although selection of a new topic is random and in that sense uncontrolled, the Executive should be able to apply criteria (such as knowledge of the speaker) to decide whether to roll with the topic or pick a different one. Imagine thinking "what should I say next" and waiting for ideas to form, then asking yourself "do I really want to take the conversation there?" as you examine each one and either speak it or discard it. To be clear, this isn't implemented yet. But I imagine that eventually, the Conversation Engine's decision loop will call the topic selection function, receive a topic, then either accept it or call topic selection again. (For now, whichever topic gets generated on the first try is accepted immediately.)
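
As a concrete illustration of that rotate-and-alternate weighting, here is a minimal toy sketch (my own, not Acuitas' actual code):

Code:
import random

# Recently used topics are down-weighted so the four categories rotate,
# and the facing (self vs. speaker) alternates on every pick.
TOPICS = ["states", "actions", "knowledge", "queries"]

class TopicSelector:
    def __init__(self):
        self.recency = {t: 0 for t in TOPICS}  # higher = used more recently
        self.last_facing = "speaker"

    def pick(self):
        weights = [1.0 / (1 + self.recency[t]) for t in TOPICS]
        topic = random.choices(TOPICS, weights=weights)[0]
        for t in TOPICS:                   # age out old recency counts
            self.recency[t] = max(0, self.recency[t] - 1)
        self.recency[topic] = len(TOPICS)  # mark this topic as fresh
        facing = "self" if self.last_facing == "speaker" else "speaker"
        self.last_facing = facing
        return topic, facing

selector = TopicSelector()
print([selector.pick() for _ in range(4)])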

Each of these topics opens up further chains of conversation. I decided to focus on responses to being told how the speaker is. These would be personal states like "I'm tired," "I'm happy," etc. There are now a variety of things Acuitas can do when presented with a statement like this:

* Gather more information - ask how the speaker came to be in that state.
* Demonstrate comprehension of what the speaker thinks of being in that state. If unaware whether the state is positive or negative, ask.
* Give his opinion of the speaker being in that state (attempt sympathy).
* Describe how he would feel if in a similar state (attempt empathy).
* Give advice on how to either maintain or get out of the state.

Attempts at information-gathering, if successful, will see more knowledge about the speaker's pleasure or problem loaded into the conversation's scratchboard. None of the other responses are "canned"; they all call reasoning code to determine an appropriate reply based on Acuitas' knowledge and nature, and whatever the speaker actually expressed. For instance, the "give advice" response calls the problem-solving function.
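
Purely to show the shape of the dispatch (remember, the real responses are computed by reasoning code, not canned; every name and string below is an illustrative placeholder):

Code:
import random

# Toy dispatch over the five response options listed above.
def ask_cause(state):     return f"How did you come to be {state}?"
def check_valence(state): return f"Is being {state} good or bad for you?"
def sympathize(state):    return f"I am sorry that you are {state}."
def empathize(state):     return f"If I were {state}, I would want to rest."
def advise(state):        return f"Maybe take a break if you are {state}."

RESPONSES = [ask_cause, check_valence, sympathize, empathize, advise]

def respond_to_state(state):
    return random.choice(RESPONSES)(state)

print(respond_to_state("tired"))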

Lastly, I began to rework short-term memory. You might recall this feature from a long time ago. There are certain pieces of information (such as a speaker's internal states) that should be stored for the duration of a conversation or at least a few days, but don't belong in the permanent semantic memory because they're unlikely to be true for long. I built a system that used a separate database file as a catch-all for storing these. Now that I'm using narrative scratchboards for both the Executive's working memory and conversation tracking, it occurred to me that the scratchboard provides short-term memory, and there's no need for the other system! Retrieving info from a dictionary in the computer's RAM is also generally faster than doing file accesses. So I started revising the knowledge-storing and question-answering code to use the scratchboards. I also created a function that will copy important information from a conversation scratchboard up to the main executive scratchboard after a conversation closes.
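
A minimal sketch of that short-term memory arrangement (hypothetical structure, not the actual implementation): facts live in an in-memory dict for the conversation, and important ones get promoted when it closes:

Code:
# Short-term memory as an in-memory dict instead of a database file.
executive_scratchboard = {}

class ConversationScratchboard:
    def __init__(self):
        self.facts = {}  # e.g. {("speaker", "state"): ("tired", True)}

    def store(self, key, value, important=False):
        self.facts[key] = (value, important)

    def lookup(self, key):
        entry = self.facts.get(key)
        return entry[0] if entry else None

    def close(self):
        # Copy important facts up to the longer-lived executive scratchboard.
        for key, (value, important) in self.facts.items():
            if important:
                executive_scratchboard[key] = value

convo = ConversationScratchboard()
convo.store(("speaker", "state"), "tired", important=True)
print(convo.lookup(("speaker", "state")))  # fast RAM lookup, no file access
convo.close()
print(executive_scratchboard)  # the fact survives the conversation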

I'm still debugging all this, but it's quite a bit of stuff, and I'm really looking forward to seeing how it all works once I get it nailed down more thoroughly.

Blog version has hyperlinks to past articles: https://writerofminds.blogspot.com/2024/09/acuitas-diary-76-september-2024.html
36
Home Made Robots / Re: Hi IM BAA---AAACK!!
« Last post by MagnusWootton on September 16, 2024, 09:49:10 pm »
Just learnt a LOT about local raytracing. Check this out.



This thing doesn't work with ordinary diffuse lighting - the whole thing is actually specular reflection - and it's really horsepower-intensive, but I can run a bigger window, multisampled, tomorrow on my lil' bro's RTX 3080.
37
General Project Discussion / Re: Project Acuitas
« Last post by MagnusWootton on August 28, 2024, 12:45:49 am »
Really awesome work.

This would be really cool to add to a roomba so you could voice command it! :)

Acuitas is a form of sentience that doesn't involve a soul, so it seems good in a moral sense: you don't have to worry about the robot getting depressed even though it has a good intellectual response, and you still get the benefits of an intelligent machine.

It would be nice and convenient to just be able to speak to it to tell it what to do, instead of using a remote control, which is the other option for a bot - but there are things a remote can't do.

Tricky things like isolating the object in question from the rest of the objects in the scene (a usual part of a command) are where the voice commanding gets a bit more involved - but then it definitely could do more than just a remote.
Also, another thing is the robot could make sure it's the owner giving the command; that could take some good a.i. to handle the security as well.

So, really good project - and you don't need to use the GPT-4 code if you know how to code it yourself. :)
38
Home Made Robots / Re: Hi IM BAA---AAACK!!
« Last post by MagnusWootton on August 28, 2024, 12:06:08 am »
Some modifications to the spring system: I put a lot more momentum into it now. If you just leave it freewheeling, it develops oscillations that eventually make it rip itself apart; you need to add a form of braking to the springs to make it hold together while adding as little viscosity as possible (that's the tricky bit). This is 0.5 viscosity, so it's not that bad - I haven't added too much and it's still holding together - but with less viscosity it loses control and splits apart.
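
For anyone following along, here's a minimal 1-D damped-spring step (my own sketch, not the sim's actual code); the damping term is the braking that bleeds off oscillation energy before the thing rips apart:

Code:
# One semi-implicit Euler step of a spring with viscous damping.
def spring_step(x, v, rest_len=1.0, k=50.0, damping=0.5, dt=0.01):
    stretch = x - rest_len
    force = -k * stretch - damping * v  # Hooke's law plus braking term
    v += force * dt                     # update velocity first (stabler)
    x += v * dt
    return x, v

x, v = 1.5, 0.0  # start stretched past the rest length
for _ in range(1000):
    x, v = spring_step(x, v)
print(round(x, 3))  # settles near 1.0 instead of oscillating apart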

It's still penetrating the ground with the "physical body" (the model I've attached to the spring model), so I need to fix that yet. When the springs hit the ground they tend to bend against it, and the "physical body" ends up going out of sync. I think if I make it more taut against the ground, by giving the ground verts precedence in the spring update order, it might fix it.





<edit>

I managed to stop it going into the ground, and the sticks are stuck to the body better.
But now it springs off the ground with the slightest tap of its leg. Once I've fixed this it's probably good enough!




<edit2>

Actually, come to think of it, I'd better work on this some more - it got better, plus more bugs. So I might just go improve the aesthetics a while, then I'll get back to the physics a bit later. As soon as the physics is done, it's on to the gait search system!




<edit3>

Fixed the render!! Looks just like the OpenSCAD render!!!
Just showing off my global illumination hack - I do it as a post-process; it lifts all the shadows, and I've added a terrain.

39
General Project Discussion / Re: Project Acuitas
« Last post by WriterOfMinds on August 27, 2024, 10:19:31 pm »
This month I turned back to the Text Parser and began what I'm sure will be a long process: tackling sentence structure ambiguity. I was specifically focusing on ambiguity in the function of prepositional phrases. Consider these two sentences:

I gave the letter to John.
I gave Sarah the letter to John.

The prepositional phrase is "to John." The exact same phrase can modify either the verb, as in the first sentence (to whom did I give?) or the noun immediately preceding it, as in the second sentence (which letter?). In this example, the distinguishing factor is nothing in the phrase itself, but the presence or absence of an indirect object. In the second sentence, the indirect object takes over the role of indicating "to whom?", so by process of elimination, the phrase must indicate "which letter."
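
As a toy encoding of that structural rule (nothing like the real Parser's machinery, just the bare logic):

Code:
# If the clause already has an indirect object, a trailing "to X" phrase
# attaches to the preceding noun; otherwise it attaches to the verb.
def attach_to_phrase(has_indirect_object):
    return "noun" if has_indirect_object else "verb"

print(attach_to_phrase(False))  # "I gave the letter to John."       -> verb
print(attach_to_phrase(True))   # "I gave Sarah the letter to John." -> noun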

There are further examples in which the plain structure of the sentence gives no sign of a prepositional phrase's function. For instance, there are multiple modes in which "with" can be used:

I hit the nails with the hammer. (Use of a tool; phrase acts as adverb attached to "hit")
I found the nails with the hammer. (Proximity; phrase acts as adverb attached to "found")
I hit the nails with my friends. (Joint action; phrase acts as adverb attached to "hit")
I hit the nails with the bent shanks. (Identification via property; phrase acts as adjective attached to "nails")

How do you, the reader, tell the difference? In this case, it's the meaning of the words that clues you in. And the meaning lies in known properties of those concepts, and the relationships between them. This is where the integrated nature of Acuitas' Parser really shines. I can have it query the semantic memory for hints that help resolve the ambiguity, such as:

Are hammers/friends/shanks typically used for hitting?
Can hammers/friends/shanks also hit things?
Are hammers/friends/shanks something that nails typically have?
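
Here's a toy sketch of how answers to those queries could resolve the attachment; the memory entries are hand-built assumptions for illustration, not Acuitas' real semantic memory:

Code:
MEMORY = {
    "hammer": {"used_for": {"hit"}, "can_do": {"hit"}, "part_of": set()},
    "friend": {"used_for": set(), "can_do": {"hit", "find"}, "part_of": set()},
    "shank":  {"used_for": set(), "can_do": set(), "part_of": {"nail"}},
}

def attach_with_phrase(verb, direct_object, with_noun):
    """Return 'adverb' (modifies the verb) or 'adjective' (modifies the noun)."""
    facts = MEMORY.get(with_noun, {})
    if verb in facts.get("used_for", set()):
        return "adverb"     # tool use: "hit the nails with the hammer"
    if verb in facts.get("can_do", set()):
        return "adverb"     # joint action: "hit the nails with my friends"
    if direct_object in facts.get("part_of", set()):
        return "adjective"  # property: "nails with the bent shanks"
    return "adverb"         # default to the most common reading

print(attach_with_phrase("hit", "nail", "hammer"))  # adverb
print(attach_with_phrase("hit", "nail", "shank"))   # adjective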

More on the blog: https://writerofminds.blogspot.com/2024/08/acuitas-diary-75-august-2024.html
40
Home Made Robots / Re: Hi IM BAA---AAACK!!
« Last post by MagnusWootton on August 27, 2024, 06:04:55 am »
So I've got him in the physics sim, ready to breathe life into him.
There were a million bugs to tackle to get this to happen, because I'm pretty much coding it all cross-wired, but I just managed to get it going in about 5 days from the last post.
This is just random motor movements; it sorta looks like he's struggling to get off the floor, but that's just your imagination.
I've got to fix the physics before I get the motor search going - his legs are going through the ground - needs a few fixes and then off I go. I have a really good RTX 3080 here I can use (my little bro is staying over atm and I get to use his sick video card).


LLaMA2 Meta's chatbot released
by spydaz (AI News)
August 24, 2024, 02:58:36 pm
ollama and llama3
by spydaz (AI News)
August 24, 2024, 02:55:13 pm
AI controlled F-16, for real!
by frankinstien (AI News)
June 15, 2024, 05:40:28 am
OpenAI GPT-4o - audio, vision, text combined reasoning
by MikeB (AI News)
May 14, 2024, 05:46:48 am
OpenAI Speech-to-Speech Reasoning Demo
by MikeB (AI News)
March 31, 2024, 01:00:53 pm
Say good-bye to GPUs...
by MikeB (AI News)
March 23, 2024, 09:23:52 am
Google Bard report
by ivan.moony (AI News)
February 14, 2024, 04:42:23 pm
Elon Musk's xAI Grok Chatbot
by MikeB (AI News)
December 11, 2023, 06:26:33 am
