Recent Posts

Pages: 1 ... 4 5 [6] 7 8 ... 10
51
Home Made Robots / Hi IM BAA---AAACK!!
« Last post by MagnusWootton on August 19, 2024, 03:35:32 am »


So this robot is the one I'm going to build. This is just an OpenSCAD render from a procedural-generation CAD script, and it knocked my socks off how good it turned out!

It's a hydraulic one. I'm not sure if my pipes and actuators are big enough, but I'll be doing a test actuator first (just the leg with a single actuator). This one is nearly done; just a bit more work to go before I try to build it.

If I had access to a 2 metre 3D printer I could print it out pretty easily (the body is only 20 centimetres, but the legs are just too long), but I don't. So I've got a technique organized that uses a large metre-scale 2D print instead, which is mostly just the pipes and actuators, and then I build the shell off it from there, in a top half and bottom half that mirror together. It should be pretty similar. I mainly need something that holds water, because that's what hydraulic robots do: push water from the body to the arm.

So you can see the leg sort of looks like a snake. That's because on the 2D print it's just laid out flat and straight, but I actually bend and twist it into position afterwards with a heat gun and some extra braces I'm going to make, to get it at exactly the right angle. That's how I get all the axes I need off a flat bar. I'm not sure it would work that well for a humanoid, but since a spider is a pretty scary creature, I don't think the extra twistiness of the legs makes it look less spidery; it sort of works for it.

It has a cool hinge configuration, because a) it can retract its leg close to its body, and b) its roll and yaw hinges interchange with each other as the pitch hinge changes angle. So it always has access to yaw and roll, but it swaps which hinge it uses for them depending on the current pitch angle.
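That hinge-axis swap is just a property of composing rotations. Here's a tiny sketch showing how a body-fixed "yaw" hinge axis turns into a world-frame roll axis after a 90° pitch (the axis conventions are my own assumption, purely for illustration):

```python
import math

def rot_y(theta):
    """3x3 rotation matrix for a pitch of theta radians about the y axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

yaw_axis_body = [0, 0, 1]   # hinge axis fixed in the leg segment
for pitch_deg in (0, 90):
    world_axis = apply(rot_y(math.radians(pitch_deg)), yaw_axis_body)
    print(pitch_deg, [round(x, 3) for x in world_axis])
# at pitch 0 the hinge axis is the world z axis (yaw);
# at pitch 90 the same physical hinge axis lies along world x (roll)
```

So the robot doesn't need a dedicated hinge per world axis: the prior pitch angle decides which physical hinge currently delivers yaw and which delivers roll.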


So a bit more work to go, but I think this is going to be the one! Seven years or so after I first wanted to do this, with lots of missing pieces in the jigsaw puzzle.
52
Welcome to AI Dreams forum. / Re: Server Upgrade
« Last post by Freddy on August 12, 2024, 03:20:04 pm »
Good luck :)
53
Welcome to AI Dreams forum. / Server Upgrade
« Last post by infurl on August 12, 2024, 05:25:31 am »
Hi folks, I'm in the process of upgrading the server that's hosting this forum so it may be temporarily inaccessible for short periods of time in the coming days. I'll keep the disruptions to a minimum.
54
Home Made Robots / Re: Attempting Hydraulics
« Last post by WriterOfMinds on August 11, 2024, 08:54:11 pm »
I've been working on this in the last few months and finally have a writeup of the updates! https://writerofminds.blogspot.com/2024/08/hydraulics-ii-pressure-is-on.html Click through to the blog for a video.

The quick version is that I did replace the syringe pump with a peristaltic pump (designed and made by me) and the "cylinder" actuator (also a syringe) with an inflatable bladder. This achieved more reasonable performance, and I'm looking forward to finding ways to do real work with this now. The bladder is made from heat-sealed plastic bags with a 3d-printed valve stem glued in. I'll be able to custom-make them in any size and shape I need at very little expense.
55
General Project Discussion / Re: Project Acuitas
« Last post by WriterOfMinds on July 30, 2024, 05:23:38 pm »
My work this month was focused on cleaning up the Executive and Conversation Engine and getting them to play well together. This is important because the Conversation Engine has become like a specialized inner loop of the Executive. I think I ought to start at the beginning with a recap of what Acuitas' Executive does.

To put it simply, the Executive is the thing that makes decisions. Conceptually (albeit not technically, for annoying reasons) it is the main thread of the Acuitas program. It manages attention by selecting Thoughts from the Stream (a common workspace that many processes in Acuitas can contribute to). After selecting a Thought, the Executive also takes charge of choosing and performing a response to it. It runs the top-level OODA Loop which Acuitas uses to allocate time to long-term activities. And it manages a Narrative Scratchboard on which it can track Acuitas' current goals and problems.
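As a toy illustration of that loop (all class and method names here are my own stand-ins, not Acuitas' actual code):

```python
from collections import deque

class Executive:
    """Minimal sketch: select Thoughts from a shared Stream and respond."""
    def __init__(self):
        self.stream = deque()   # common workspace other processes append to
        self.goals = []         # stand-in for the Narrative Scratchboard

    def post(self, thought):
        """Other processes contribute Thoughts to the Stream."""
        self.stream.append(thought)

    def tick(self):
        """One cycle: select a Thought and take charge of responding to it."""
        if self.stream:
            thought = self.stream.popleft()
            return f"responding to: {thought}"
        return "idle"

ex = Executive()
ex.post("user said hello")
print(ex.tick())   # responding to: user said hello
```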

A conversation amounts to a long-term activity which uses specialized decision-making skills. In Acuitas, these are embodied by the code in the Conversation Engine. So when a conversation begins, the CE in a sense "takes over" from the main Executive. It has its own Narrative Scratchboard that it uses to track actions and goals specific to the current conversation. It reacts immediately to inputs from the conversation partner, but also runs an inner OODA loop to detect when the speaker has gone quiet for the moment and choose something to say spontaneously. The top-level Executive thread is not quiescent while this is happening, however. Its job is to manage the conversation as an activity among other activities - for instance, to decide when it should be over and Acuitas should do something else, if the speaker does not end it first.

Though the Executive and the CE have both been part of Acuitas for a long time, their original interaction was more simplistic. Starting a conversation would lock the Executive out of selecting other thoughts from the Stream, or doing much of anything; it kept running, but mostly just served the CE as a watchdog timer, to terminate the conversation if the speaker had said nothing for too long and had probably wandered off. The CE was the whole show for as long as the conversation lasted. Eventually I tried to move some of the "what should I say" decision-making from the CE up into the main Executive. In hindsight, I'm not sure about this. I was trying to preserve the Executive as the central seat of will, with the CE only providing "hints" - but now I think that blurred the lines of the two modules and led to messy code, and instead I should view the CE as a specialized extension of the Executive. For a long time, I've wanted to conceptualize conversations, games, and other complex activities as units managed at a high level of abstraction by the Executive, and at a detailed level by their respective procedural modules. I think I finally got this set up the way I want it, at least for conversations.

So here's how it works now. When somebody puts input text in Acuitas' user interface, the Executive is interrupted by the important new "sensory" information, and responds by creating a new Conversation goal on its scratchboard. The CE is also called to open a conversation and create its sub-scratchboard. Further input from the Speaker still provokes an interrupt and is passed down to the CE immediately, so that the CE can react immediately. For the Executive's purposes, the Conversation goal is set as the active goal, and participating in the Conversation becomes the current "default action." From then on, every time the Executive ticks, it will either pull a Thought out of the Stream or select the default action. This selection is random but weighted; Acuitas will usually choose the default action. If he does, the Executive will pass control to the CE to advance the conversation with a spontaneous output. In the less likely event that some other Thought is pulled out of the Stream, Acuitas may go quiet for the next Executive cycle and think about a random concept from semantic memory, or something.
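A sketch of that weighted selection (the 0.8 weight and the function names are my guesses, purely for illustration; the actual weighting isn't stated):

```python
import random

DEFAULT_WEIGHT = 0.8   # assumed value, not from the source

def tick(stream, default_action):
    """One Executive cycle: usually take the current default action
    (e.g. letting the CE advance the conversation), occasionally pull
    a random Thought from the Stream instead."""
    if stream and random.random() > DEFAULT_WEIGHT:
        return ("thought", random.choice(stream))
    return ("default", default_action())

def advance_conversation():
    # stand-in for handing control to the Conversation Engine
    return "CE produces a spontaneous utterance"

stream = ["muse about 'cat'", "muse about 'story'"]
kind, result = tick(stream, advance_conversation)
print(kind, "->", result)
```

Note that with an empty Stream the default action always wins, which matches the idea that conversing stays the dominant behavior while a conversation is the active goal.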

If Acuitas is not conversing with someone, the "default action" can be a step in some other activity - e.g. Acuitas reading a story to himself.

Longer version on the blog: https://writerofminds.blogspot.com/2024/07/acuitas-diary-74-july-2024.html
56
AI News / Re: ollama and llama3
« Last post by infurl on July 21, 2024, 04:07:53 pm »
Does it matter if I double the RAM?  Thanks for your advice.

Doubling the amount of RAM that you have would allow you to run larger models than you can now. For example, one of the newest and best is gemma2:27b, which is 15 GB in size. That won't run on your current system but will certainly run on a 32 GB system. There are smaller versions of all the models that will run easily in 16 GB of RAM, though, and the larger models are significantly slower, so it probably isn't worth it. I think you would need to upgrade your CPU as well as the RAM to run the larger models fast enough to be useful.
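A back-of-the-envelope fit check (the ~0.55 bytes per parameter figure for a Q4-style quantised model and the OS overhead are my own rough assumptions, not anything published by ollama):

```python
def fits_in_ram(params_billion, ram_gb, bytes_per_param=0.55, os_overhead_gb=1.5):
    """Very rough estimate of whether a quantised model fits in system RAM."""
    need_gb = params_billion * bytes_per_param + os_overhead_gb
    return need_gb, need_gb <= ram_gb

for params, ram in [(27, 16), (27, 32), (8, 16)]:
    need, ok = fits_in_ram(params, ram)
    print(f"{params}B model: needs roughly {need} GB, fits in {ram} GB RAM: {ok}")
```

This lines up with the numbers above: a 27B model at ~15 GB plus the OS doesn't fit in 16 GB, but fits comfortably in 32 GB, while the ~5 GB 7-8B models fit easily.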
57
AI News / Re: ollama and llama3
« Last post by 8pla.net on July 21, 2024, 03:57:54 pm »
Thanks so much for the like you gave to my reply!  I really appreciate that!

I guess a business must have donated this ultra-portable business laptop to a nonprofit organization. I was able to get it for cheap. The maximum RAM specification is 16GB, yet my research revealed an undocumented 32GB RAM upgrade. So your LLM experience is very useful to me.

Does it matter if I double the RAM?  Thanks for your advice.
58
AI News / Re: ollama and llama3
« Last post by infurl on July 21, 2024, 03:08:42 pm »
Those of us on a budget can do "nproc --all" in a Linux terminal to get our number of cores... My laptop has 4 cores. Then, in a Linux terminal, "cat /proc/meminfo" displays my MemTotal to be 16GB. And the command glxinfo | egrep -i "Video memory:" displays "15733MB" for my VRAM.

There's nothing shabby about your laptop if it has 16GB of VRAM. Is it a gaming laptop, or have you got an external GPU? Your main constraint is the amount of system memory that you have because you need to be able to load the model data into memory in its entirety. You'll be able to run all the models that are less than 7 billion parameters with that, no problem, as they are around 5 GB.
59
AI News / Re: ollama and llama3
« Last post by 8pla.net on July 21, 2024, 02:49:22 pm »
Wow, you are so lucky to enjoy a "Linux system with 24 cores, 64 GB of RAM, and 16 GB of video RAM"!
Those of us on a budget can do, "nproc --all" in Linux terminal to get our number of cores...
My laptop CPU has 4 cores. Then, in a Linux terminal, "cat /proc/meminfo" displays my MemTotal to be 16GB. And the command glxinfo | egrep -i "Video memory:" displays "15733MB" for my VRAM.

C++ may be able to do more bare-bones generative artificial intelligence on less expensive hardware.
60
AI News / ollama and llama3
« Last post by infurl on July 10, 2024, 02:17:07 am »
Access to generative artificial intelligence just changed radically and for the better. Until recently our options were to use online services which were potentially very expensive and almost certainly heavily restricted, or to try to use open source models locally which required high end hardware to operate and which produced disappointing and mediocre results at best.

Last year we saw the release of ollama which made it incredibly easy to run just about any large language model locally no matter what platform you're on. You still needed a powerful system but at least you didn't have to learn a lot of obscure methods to use it.

https://ollama.com/

Last month the open source large language model llama3 was released. It has proven to be as capable as models two hundred times its size and is so efficient you can run it on a Raspberry Pi 5 if you want to, though it might take some patience.

I've been experimenting with it and it seems to be as good as any of the models that I have used online. I am running it on a Linux system with 24 cores, 64 GB of RAM, and 16 GB of video RAM. The smaller 8 billion parameter model responds to my queries almost instantly while the larger 70 billion parameter model can take a minute or two. Mostly the results produced by the smaller model are quite good enough.