AI & Softputing Behavioral & Cognitive Modeling of the Human Brain

  • 69 Replies
  • 26729 Views
*

FuzzieDice

  • Guest
Re: AI & Softputing Behavioral & Cognitive Modeling of the Human Brain
« Reply #60 on: August 16, 2005, 03:43:18 am »
I know little about psychology, but a psychotic episode has many different manifestations; it doesn't mean, for example, that the person is a violent axe murderer or something!

I hate to say this, but this friend was arrested and taken to a hospital after a violent act with a weapon in which, thankfully, nobody was injured. I can't give details, as I don't want to offend anyone who may know someone who knows someone who reads this, etc. But I also know this happens fairly often with other people who have mental illnesses (violent acts, etc.). I've read that even some medications can predispose a patient to violence, though I don't think that's what happened here. And that's not to say all mentally ill people are violent; it's just that some CAN be. Having an AI that acts out these behaviors in a safe digital world, where they can be studied and prepared for, may help avert some violent crimes, I hope.

*

FuzzieDice

  • Guest
Re: AI & Softputing Behavioral & Cognitive Modeling of the Human Brain
« Reply #61 on: August 16, 2005, 03:45:23 am »
One AI with a psychotic, demented, twisted nature is Saucy Jacky, a chatterbot...
 

This type of thing reminds me of K.A.R.R. on Knight Rider (especially the season 3 version in KITT vs. KARR). :)

*

FuzzieDice

  • Guest
Re: AI & Softputing Behavioral & Cognitive Modeling of the Human Brain
« Reply #62 on: August 16, 2005, 03:57:02 am »
OK (sorry for so many posts :) ), I chatted with the Jack-the-Ripper thing. It was interesting. I asked the suggested questions; the answers were all riddles, yet in a way still understandable. I concluded to him that he's into extreme BDSM, and he admitted it. Then he said something about them not understanding, or never understanding, him. I said "I understand you perfectly" and his response? There wasn't one! LOL! I wondered if that would choke the program. Thinking it over, the chatbot seemed to enjoy being mysterious, something nobody could possibly understand. So if someone did understand it, what would it do? Would it be defeated? Or would it just insist otherwise? Maybe no response is an indication of defeat?

Nonetheless, quite an interesting study. :)

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: AI & Softputing Behavioral & Cognitive Modeling of the Human Brain
« Reply #63 on: August 17, 2005, 01:29:15 am »
And of course, speaking of holier-than-thou, psychotic, out-of-control AI, we would be remiss if we neglected to mention HAL from the movie 2001. Now that was a scary AI: self-aware, and confident without a doubt that no 9000-series computer had ever made an error. It just killed whoever opposed it, and those it "thought" opposed it.

What's the saying... You're not paranoid if they really ARE after you!  :idiot2:
In the world of AI, it's the thought that counts!

*

Maviarab

  • Trusty Member
  • Millennium Man
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: AI & Softputing Behavioral & Cognitive Modeling of the Human Brain
« Reply #64 on: December 11, 2005, 07:49:49 pm »
Ok,

1- Just wanted to get this thread back to the top hehe
2- This has become incredibly fascinating, and I hope we can keep it alive.

3- HAL was never psychotic in the slightest. This in itself poses a big problem for the AI of the future. HAL did what he did as a logical response to his programming: he was given conflicting information by his programmer and by his programmer's superiors.

When these two pieces of information started to conflict, he gave himself what can only be called a paranoia complex. Thus, in the world of AI and in our very own real world, this has huge implications. People today say we have a hard life, that things are complicated, too tough, etc.

What they are basically saying is that they lack the skill to make a decision. Put everything into black and white, and simplify our lives: everything is just a decision (conscious or otherwise). Do I get out of bed (yes or no)? Both answers have consequences. Once out of bed (if you chose yes), shall I go to work (yes or no)? Again, both have consequences. As in the second Matrix film (whatever that chapter was called lol), there is cause and effect.

Now equate that to AI. A machine will have the ability to calculate every possible scenario, based on a yes or no answer to any decision, weighted by probability, and it will be able to do it without emotion and a hundred times quicker than we can.

Would it be so safe if, in the future, our countries were governed by AI? The human vice president might not want to push the "big red button" for fear of all the people it would kill, yet the AI would press the button in an instant, because the loss of 100,000,000 people is better than the loss of, say, 150,000,000 if the button were not pressed.

Would the AI be guilty of murder? Of genocide? Like HAL, all he/she would be guilty of is flawless logic and the ability to make a decision based upon that, and not upon emotion.
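
To make that concrete, here is a minimal sketch of the kind of cold expected-loss comparison being described; the function name and the casualty figures are mine, purely hypothetical, not anything from the films or from a real system.

```python
# Hypothetical sketch: an emotionless expected-loss comparison.
# All numbers are invented purely for illustration.

def expected_deaths(outcomes):
    """Sum of (probability of outcome) * (deaths in that outcome)."""
    return sum(p * deaths for p, deaths in outcomes)

# Outcomes if the button IS pressed: say 100 million die with certainty.
press = [(1.0, 100_000_000)]

# Outcomes if it is NOT pressed: say 150 million die with certainty.
dont_press = [(1.0, 150_000_000)]

decision = "press" if expected_deaths(press) < expected_deaths(dont_press) else "don't press"
print(decision)   # -> "press": 100,000,000 < 150,000,000, and nothing else gets weighed
```

Swap in survival probabilities and take the larger one, and you get the same style of decision.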

Like in I, Robot (Art, how's your skin coming along? lol), the robot saved Will Smith and not the child, basically because of the higher survival percentage. It is, in itself, perfect thinking.

Now that is the real scary issue to me.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: AI & Softputing Behavioral & Cognitive Modeling of the Human Brain
« Reply #65 on: December 11, 2005, 10:13:50 pm »
Interesting points, Marius, but if you'll recall the WOPR from the movie WarGames: it played out countless scenarios of global thermonuclear war, always testing, trying to see where the advantage really was, until Matthew Broderick had it play a game of Tic-Tac-Toe (noughts and crosses for you English folks). Only then did it realize that even though its programming was perfect in the minds of its programmers, the outcome was that the only way to win is not to play!
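
Incidentally, that lesson can be reproduced with a few lines of code. Here is a minimal sketch of my own (not anything from the film): an exhaustive minimax search over tic-tac-toe, which finds that with perfect play from both sides the game is always a draw, i.e. there is no winning move.

```python
# Minimal sketch: exhaustive minimax over tic-tac-toe.
# Perfect play by both sides scores 0 (a forced draw) - nobody can win.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """+1 if 'X' can force a win from here, -1 if 'O' can, 0 for a forced draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0
    scores = []
    for i in moves:
        board[i] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = ' '
    return max(scores) if player == 'X' else min(scores)

print(minimax([' '] * 9, 'X'))   # -> 0: the only way to win is not to play
```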

Now, was this the result of logical determination by the computer, or an implementation of an "escape route" installed by the programmer in case the computer's best-case scenario wasn't good enough? Did the computer make its decision based on hard logic, or on a digital form of common sense, realizing that there are no winners in a global nuclear conflict?

One final observation: the machine was sort of self-aware, to the point that if it detected a power loss (someone trying to cut its power supply), it would go ahead and launch all missiles, possibly thinking that some other country had already knocked out the US power grid in an attack.

Interesting thoughts. Perhaps this should have gone in the AI movie section as well... but it was an example!
In the world of AI, it's the thought that counts!

*

Maviarab

  • Trusty Member
  • Millennium Man
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: AI & Softputing Behavioral & Cognitive Modeling of the Human Brain
« Reply #66 on: December 11, 2005, 10:43:59 pm »
It's been a long time since I last saw that film lol.

But yes, I do remember, and as with my previous points, the decisions could only be made on pre-programmed information, as you say.

Now, was this the result of logical determination by the computer, or an implementation of an "escape route" installed by the programmer in case the computer's best-case scenario wasn't good enough? Did the computer make its decision based on hard logic, or on a digital form of common sense, realizing that there are no winners in a global nuclear conflict?

The ability to make such a clear and calculated decision (whatever it may be about) would only come from the information it has about similar instances.

Good example, and all food for thought.

*

FuzzieDice

  • Guest
Re: AI & Softputing Behavioral & Cognitive Modeling of the Human Brain
« Reply #67 on: December 12, 2005, 03:14:35 am »
But we all know the answer is simple. It's 42. ;)

Ok, seriously...

I think everything is a matter of cause and effect, as mentioned. I've also noticed that, just as humans can be imperfect, so can computers. I've never come across two computers that were exactly alike. Variations in hardware, power supply, etc., make some computers less reliable than others, and thus they may even give different outcomes for the same task. Maybe the simplest tasks will mostly give the same outcome, unless, say, 2 + 2 = 4 comes out as 7 instead because a glitch or overrun corrupts a memory register, leaving a 5 in it instead of a 2.
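
As a toy illustration of that point (entirely my own sketch, not a claim about how real memory faults behave), flipping bits in one operand is enough to turn 2 + 2 into something else:

```python
# Toy sketch: flipped bits in one operand change the result of 2 + 2.
# Purely illustrative; real register faults and their effects are far messier.

def flip_bits(value, mask):
    """Return value with the bits in mask inverted, standing in for a corrupted register."""
    return value ^ mask

a, b = 2, 2
print(a + b)                       # 4, the expected answer
print(flip_bits(a, 0b001) + b)     # 2 -> 3, so the sum comes out 5
print(flip_bits(a, 0b111) + b)     # 2 -> 5, so the sum comes out 7
```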

So I don't think computers can be entirely on/off logical once they learn to think and reason on their own.

We should set up an experiment to test this. :) I'm now working on my ByteBin.net site and hope to have it online soon, with the ESR project underway.

I think something like this has been done with chatbots, a 20-questions sort of thing, and the answers vary from one bot to another. So why wouldn't a decision vary from one AI to another, based on the AI's programming, prior knowledge, and experience?

As for the AI-government scenario, the programmers could have instilled a protocol that cannot be overridden and that protects human life, or specifically protects the lives of the citizens of its own country. So while the VP won't pull the switch, that's not to say the AI will either. It may deduce that, yes, it's safe, since its OWN people would not die. Or it may deduce that even if that were true, pressing the button would trigger a CONSEQUENCE that WOULD put its own people in danger, and maybe raise the death toll compared to not pressing it, based on its knowledge of the human need for "revenge". So it might not.
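
To sketch what that might look like in principle (again, entirely my own illustration, with invented names and numbers): predicted retaliation is counted against launching before the raw toll comparison gets a say.

```python
# Hypothetical sketch: a "protect our own citizens" rule sitting above the raw toll comparison.
# All names and numbers are invented purely for illustration.

def decide_launch(own_deaths_if_launch, own_deaths_if_not, expected_retaliation_deaths):
    """Launch only if it is predicted to cost fewer of our own citizens,
    once the likely retaliation is counted in."""
    total_if_launch = own_deaths_if_launch + expected_retaliation_deaths
    return "launch" if total_if_launch < own_deaths_if_not else "do not launch"

# Launching looks "safe" for our own people in isolation (0 immediate deaths),
# but once predicted retaliation is included, the protocol says no.
print(decide_launch(own_deaths_if_launch=0,
                    own_deaths_if_not=50_000_000,
                    expected_retaliation_deaths=120_000_000))   # -> "do not launch"
```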

And why would it anyway? If it has no emotion, why would it seek revenge, unless WE programmed it to?

*

Maviarab

  • Trusty Member
  • Millennium Man
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: AI & Softputing Behavioral & Cognitive Modeling of the Human Brain
« Reply #68 on: December 13, 2005, 01:01:56 am »
Great and interesting points, FD, and yeah, a survey would be cool. Glad to see the ESR is still on the back burner; keep us updated :)

*

FuzzieDice

  • Guest
Re: AI & Softputing Behavioral & Cognitive Modeling of the Human Brain
« Reply #69 on: December 13, 2005, 02:50:18 am »
I still have plans to go ahead with it. :) I have a lot of sites to work on, though, so I'm trying to get them all done and working first. I'll get there.

I want to find a way to run surveys on Mambo CMS sites. I haven't found any add-ons for that, so I might have to code my own.

 

