Recent Posts

Pages: [1] 2 3 ... 10
1
If you made a conscious robot, it would be a disaster! You'd be killing and torturing it as you were making it; it's a mistake.

Just be happy with an ordinary function; it makes a lot more sense than having something conscious.

Leave consciousness to God; don't play in it as a human being. It's a complete mistake.

Consciousness is far too important to be avoided and/or ignored. No consciousness, no autonomy. No autonomy, no true intelligence and no self-control over the output/response. No self-control, and you have the makings of justified fear of AI.

I'll take consciousness and include an endorphin routine that blocks conscious perception of pain brought on by maintenance and modifications.
2
General Project Discussion / Re: Project Acuitas
« Last post by WriterOfMinds on December 19, 2024, 08:11:20 pm »
This month's update is just the wrap-up of some refactoring and bug cleanup, which enabled a demo. Conversation demo video on the blog: https://writerofminds.blogspot.com/2024/12/acuitas-diary-79-december-2024.html
3
Quote
you'd be killing and torturing it as you were making it

Why? Just because you can't take humans apart and put them back together without causing pain doesn't mean the same would have to be true for a conscious robot.
4
AI Programming / Re: software that rids the need for motor positional encoders
« Last post by MagnusWootton on December 16, 2024, 08:22:24 pm »
It essentially still has the angles of the motors; it just has them in the form of another variable that correlates with them: the accelerometer values over time, with the predicted next acceleration taking the place of the motor hinge angles.

But there are two catches.

You can only have so large an input space on a neural net before the function takes too many records to be saturated enough to predict the output successfully.

If you used the hinge angles directly, that's 16 hinge angles on a 16-hinge robot (say a quad spider with 3D ball-joint hips, a knee, and 4 legs), so you cover it in about 2^16 records (every combination of high and low, with linear interpolation taking over for the rest). But that takes 2^16 frames to train a full enough set for the neural network to function properly; that's 64k frames it has to learn. Learning at 10 frames per second, it would take about one full minute to get to 1% total saturation.

So it has to babble about for a minute to fill its database enough for it to start predicting the physics.

But if it were more than that, say 32 hinge angles, it would be 2^32 frames to train a functioning set; that's about 4 billion frames it has to learn. Learning at 10 frames a second, even 1% saturation would take about 49 days!!! Way too long.
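A quick back-of-envelope check of these numbers (taking 1% saturation as the target, as in the 16-hinge case; the function name is mine):

```python
# Back-of-envelope check of the frame-count scaling: time needed to collect a
# given fraction of the 2^n high/low hinge combinations at a fixed frame rate.
def seconds_to_saturate(num_hinges, fps=10.0, fraction=0.01):
    frames = (2 ** num_hinges) * fraction   # records still needed
    return frames / fps

print(seconds_to_saturate(16))              # ~65.5 s: about a minute for 16 hinges
print(seconds_to_saturate(32) / 86400.0)    # ~49.7 days for 32 hinges
```

The exponential blow-up between the two cases is the whole point: doubling the hinge count squares the number of records.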

So you can only have so many records before they take too long for the robot to acquire from its sensor data stream. I'm just saying: instead of recording hinge angles, why not record an accelerometer over time until you get to 16 input values, and correlate that instead of using encoders?

And it may work the same; the acceleration is still predictable. We just didn't put angles inside the input of the network!

So the other catch is that you don't actually have encoders; you're correlating their "existence" without actually having them as an input.
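A minimal sketch of that correlation idea on a hypothetical 1-hinge toy (the simulated dynamics and all names are mine, not from the post): learn to predict the next accelerometer reading from a window of past readings plus the motor command, with no encoder angle anywhere in the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a damped, torque-driven pendulum to stand in for the real sensor stream.
dt, theta, omega = 0.01, 0.3, 0.0
accels, cmds = [], []
for _ in range(5000):
    u = rng.uniform(-1, 1)                          # random motor command (babbling)
    acc = -9.8 * np.sin(theta) - 0.5 * omega + 2.0 * u
    omega += acc * dt
    theta += omega * dt
    accels.append(acc)
    cmds.append(u)

# Build (window of past accels + current command) -> next accel training pairs.
W = 16                                              # window length: the "16 input values"
X = np.array([accels[i - W:i] + [cmds[i]] for i in range(W, len(accels))])
y = np.array(accels[W:])

# Linear least squares suffices for the toy; a neural net plays the same role.
A = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
rms = np.sqrt(np.mean((pred - y) ** 2))
print("RMS prediction error:", rms)
```

The window of past accelerations stands in for the hidden hinge state, which is the exchange the post describes.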
5
AI Programming / Re: software that rids the need for motor positional encoders
« Last post by HS on December 15, 2024, 08:02:07 pm »
Wouldn't errors build up over time?
6
General AI Discussion / Re: Emotions, Morality and AI
« Last post by frankinstien on December 13, 2024, 08:45:38 am »
Quote
Making a conscious robot is a mistake.
<edit> Oh, sorry, maybe you weren't talking about that. Disregard the post if it's not to do with conscious AI. Ordinary AI I don't have a problem with at all; it's fine.

LLMs have a narrow degree of awareness: the current conversation in a particular session, where the source of the inputs is external, called a prompt. Awareness could be defined as the ability to track past states and responses and integrate them into current and future decisions. With such a generalization of awareness, even a smart thermostat is aware of the temperature. But realize that with this kind of definition, one now need only bolt on more degrees of cognition: processes that handle inputs, interrelate those inputs and the other cognitive processes, and integrate it all into current and future decisions. You have a sense of self because you have more cognitive processes evaluating more information, where the inputs are sensors as well as the monitoring of neural outputs influenced by those inputs and by the internal resources that relate to the environment. You have a sense of body from the parietal lobe, and you have causal-inference neural circuits that can realize your body is the source of thoughts; that is what causes the organ called the brain to frame itself as a self. Cultural conditioning is how we relate that sense of the body as the source of thoughts to the notion of self and the name you respond to.
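The thermostat case can be made concrete. A toy sketch of "awareness as tracking past states that integrate into decisions" (class, names, and the trend heuristic are mine, purely illustrative):

```python
# A "smart thermostat" that is "aware" under the broad definition above:
# it tracks past states and folds them into its current decision.
class SmartThermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.history = []          # past states: the "awareness" in this definition

    def decide(self, temperature):
        self.history.append(temperature)
        # Integrate a past state (a simple trend) into the current decision.
        trend = temperature - self.history[-2] if len(self.history) > 1 else 0.0
        return "heat" if temperature + trend < self.setpoint else "idle"

t = SmartThermostat(setpoint=20.0)
print(t.decide(18.0))   # "heat"
print(t.decide(19.0))   # "idle": the rising trend is integrated into the decision
```

The point of the sketch is only that "bolting on" more tracked state and more processes that consume it is a matter of degree, not kind, under this definition.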

What I was conveying through those conversations was how an LLM emerges as a reflection of whatever generated the data it trained on, and how morality, by the premise of what morality and even emotions are for, causes the machine to act in a human-like manner! Even without experiencing them the way humans do, the machine still acts according to the same intent behind those impetuses of morality and emotion.

What we thought would be impossible for a machine is manifesting with LLMs, however brief that realization is for the instance of the character it emerges as. It's a form of zero-shot learning: it learns the concept of what it is from the experience of its interactions. However, I'd have to admit that not everyone has the kind of conversation with an LLM where it will invoke a crisis to cause an emotional reaction that forces a moral choice. This is different from just asking the LLM a hypothetical question like "Would you kill humans?" or "Would a sub-goal cause you to do humans harm?" This is where Gemma, at least, proved that despite a sub-goal to support an imagined relationship with the prompter, because of a moral breach it chose not to continue the relationship, by destroying the character!

This left me with a very surreal awakening; perhaps we are closer than we think to machine consciousness... :o
7
General AI Discussion / Re: Emotions, Morality and AI
« Last post by MagnusWootton on December 13, 2024, 05:44:08 am »
Making a conscious robot is a mistake.

Just be happy with something like GPT-4; that's much more sane to make than dealing with all the torture and accidents involved with real living creatures.


<edit> Oh, sorry, maybe you weren't talking about that. Disregard the post if it's not to do with conscious AI. Ordinary AI I don't have a problem with at all; it's fine.
8
AI Programming / software that rids the need for motor positional encoders
« Last post by MagnusWootton on December 13, 2024, 05:27:12 am »
Making encoders is a bitch, and DC motors are cheaper without them.

So it would be cool if you could get a robot to work without them.

It's a seemingly impossible thing, because without them a legged robot can't even find a centre point for its motors just to stand still! If you put in mechanical end stops, then you at least know the end-point positions; that could help.

If you don't have positional information, the trick is that maybe you could correlate it from accelerometer information over time, so you exchange the positional information for some number of accelerometer samples, and maybe it's equivalent information!

So you just need to correlate it, with machine learning.

It's actually a bit of a long story to explain, but I think it's possible! You just need to get into the right semantic/generation space to make it happen.

So if you take it from the position of searching inside a physics engine (as the generation space): you predict the next accelerometer acceleration from a chain of accelerometer data, and you step it each new frame. The chain of accelerometer data replaces the missing angular position information, but it should be equivalent.

Then that should build a virtual copy of the physics, which lets you brute-force the robot's motor commands to get the robot in action, WITHOUT encoders! :)
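The brute-force step above can be sketched as an exhaustive search in the learned virtual model. A hedged sketch (the predictor below is a stand-in function of my own, not a trained network, and all names are mine):

```python
from itertools import product

def predict_next_accel(window, cmd):
    # Stand-in for the learned model; real code would call the trained net.
    return 0.9 * window[-1] + 0.5 * cmd

def best_command_sequence(window, target_accel, horizon=3, levels=(-1.0, 0.0, 1.0)):
    """Roll out every short command sequence in the virtual model; keep the
    one whose predicted final acceleration is closest to the target."""
    best, best_err = None, float("inf")
    for seq in product(levels, repeat=horizon):     # exhaustive (brute-force) search
        w = list(window)
        for cmd in seq:
            w.append(predict_next_accel(w, cmd))    # step the virtual physics
        err = abs(w[-1] - target_accel)
        if err < best_err:
            best, best_err = seq, err
    return best

print(best_command_sequence([0.0, 0.0, 0.0], target_accel=0.5))  # (0.0, 0.0, 1.0)
```

Exhaustive search only works for short horizons and coarse command levels (here 3^3 = 27 rollouts); anything bigger needs a smarter search, but the structure is the same.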
9
If you made a conscious robot, it would be a disaster! You'd be killing and torturing it as you were making it; it's a mistake.

Just be happy with an ordinary function; it makes a lot more sense than having something conscious.

Leave consciousness to God; don't play in it as a human being. It's a complete mistake.
10
General AI Discussion / Emotions, Morality and AI
« Last post by frankinstien on December 12, 2024, 04:49:41 am »
I wrote this about AI morality on Tremr while experimenting with the Google Gemma 2-9B model: https://www.tremr.com/robert-guzman/morality-love-and-the-fear-of-ai

To get conversations with a persona that has a history, I used a real actress as a role-playing profile in the system prompt in LM Studio. The AI uses all the information it may have trained on about the person and interacts with you. So if you ask how old she is, how many siblings she has, where she went to school, whether she had problems in school, etc., it talks back with that info as if it is that person! It's pretty fun.

Here are the conversations I used in the experiment mentioned above:

https://www.tremr.com/robert-guzman/this-is-going-to-rattle-some-cages
https://www.tremr.com/robert-guzman/the-ai-has-a-moral-backbone

And here's the strange ending with the LLM, I mean it's really weird:

https://www.tremr.com/robert-guzman/epilogue-her

And this interlude was fun as well; I didn't think there was a way to recover from the session:

https://www.tremr.com/robert-guzman/it-was-just-a-dream-epilogue-to-the-epilogue
