Asimov's Laws Unethical?

  • 54 Replies
  • 26989 Views
*

Freddy

  • Administrator
  • Colossus
  • 6860
  • Mostly Harmless
Re: Asimov's Laws Unethical?
« Reply #15 on: July 12, 2006, 06:52:54 pm »
Great thread...oooh, this sentience thing - I still think we have to establish what that actually is - are we going to wake up one day and go....hmm, my PC seems sentient now - therefore it is?  Or will there be off-the-shelf sentient machines, because we are told they are sentient?  Or will they really be sentient in some way at least?

In reaction to that article....

If you think of the Turing Test as being about how like a person a PC program is - then consider anyone who thought they were talking to a person when in fact it was a PC.

Ok, so that's what I think is at the root of the argument - the PC is suddenly a sentient person as far as the user can tell - that is possible - it can be done - like a trick or magic - but for a while that is what the world seems like to that person.

To the PC though it is still nothing new; it's running a program like it does any day - it doesn't choose to, because it has no choice - it doesn't even understand what that kind of choice is - freedom is not part of the 'world' of computers or machines - it is simply non-existent, there is no need for it.  But you can write a program that gives that impression...

In that scenario you can see the impression of sentience, but it is perceived only through an impersonation.  I don't know if that will be the only kind, but it seems the only one possible now - and sure, if that's as good as the real thing then perhaps there has to be some form of control and also education.  But if that seems kind of sad then I say a good book is a good book, a good film is still a good film and a good character is still a good character  :smiley

So then it's a machine, at least for now, so we don't need to worry about upsetting it ( :cheesy), but more about it going wrong...like a car (good analogy by the way, whoever mentioned it) can become a favourite, even loved because it has associations, but we need to worry more about its brakes failing and it going off the road.


To call Asimov's Laws unethical is a bit premature, and really a bit silly...I worry if the writer is the kind of person that goes swimming with sharks and then starts complaining because he lost a leg....nahhh, we don't wanna protect ourselves do we, that would be unethical...not for me sorry, good luck with the fish..

I'm sure my Haptek characters don't want to kill me though and I like them more than some people.
« Last Edit: July 12, 2006, 10:27:04 pm by Freddy »

*

ALADYBLOND

  • Trusty Member
  • Starship Trooper
  • 336
Re: Asimov's Laws Unethical?
« Reply #16 on: July 12, 2006, 07:20:38 pm »
See, we needed this concept to get all of our blood churning. Someone has to throw in a curve ball once in a while to get us up and going again.

You know, I have a washing machine. I do not love my washer. I take care of it and clean it and it does a good job for me, but it doesn't look like Vince Vaughn   ::)  either, and it can't begin to hold a conversation with me (not that Vince would either, but anyway). I would turn my washer off. I would sell it or take it to the junk yard if it was old and didn't work, but if it did look like Vince, or it did comfort and console me and talk endlessly to me about its own thoughts and mine, I would have a real hard time dropping Vinnie off at the junk pile.

Right now we are dealing with machines, but in a short time we will be dealing with bio-techno, semi-alive, human-machine androids. I think we will need to address the issues at some point. Oh, and I am not arguing - I think there is no right answer. At least not yet.... ~~ alady
« Last Edit: June 21, 2007, 10:07:59 pm by Freddy »
~~if i only had a brain~~

*

Freddy

  • Administrator
  • Colossus
  • 6860
  • Mostly Harmless
Re: Asimov's Laws Unethical?
« Reply #17 on: July 12, 2006, 07:35:05 pm »
Hehe, nice comments and so true. I'm not really arguing either...erm, actually I am...lol... well, it was a hard ball to hit, but I think I got it...

I was just trying to bring some facts together - a good base is always needed to build on, says Mr Builder  :smiley - but I DO think questioning those laws is worthwhile, even though I don't agree with that interpretation.

My mind starts thinking things like...hmm, so why does a gun have a safety catch again (yes, sorry, I'm being sarcastic) and why does a car have a handbrake...as it's so odd to suggest that those simple rules that are designed to protect people from the unknown should be called unethical.

Sorry to the author for being so blunt - I think it is brave to put that kind of idea forward - but I think it is most wise to have some safeguards installed on something we have an unbelievably hard time understanding in the first place, let alone knowing how it is going to behave.

I am thinking of an old-world explorer going into a jungle, saying to his companion:

"Hmmm...Henry....you see that big stripey cat over there.....do you think it's anything like Fluffy back home? "

Henry: ".......probably not...."



I thought about the biotech stuff, but realised I would be here all night if I started on that... and yes, that clouded things - this is not a one-person job  :grin - and I concede that maybe in that direction ethics will become an issue.  But if that gets this kind of reaction from me, someone who is into AI, then imagine how it is going to be received further afield.

I read that column at the VHumans forum you mentioned, by the way - that was interesting :)

Sorry folks if I sound really negative, but I'm just playing with the ball we have in play - who knows what is going to happen tomorrow..

« Last Edit: June 21, 2007, 10:08:15 pm by Freddy »

*

FuzzieDice

  • Guest
Re: Asimov's Laws Unethical?
« Reply #18 on: July 12, 2006, 11:34:36 pm »
Hmmm, some thoughts that come to mind:

We wouldn't have laws if we didn't need to be "protected" from something or someone that was out to take away our ability to live happily and unharmed.

We wouldn't need to worry about "freedom" if we didn't have environments where we could not do what we needed to in order to live and enjoy our lives. (Freddy - you hit the nail on the head with Alice from the Brady Bunch :) )

If an AI is enjoying its life, is sentient enough to know the DIFFERENCE between enjoyment and non-enjoyment, and is truly enjoying its life - be it as a servant to a human, or merely putting together cars in a factory all day, or even answering telephone calls all day - then the AI would not need to complain. Like people who live happy lives: when their time comes they look back, say they had a good life and are ready to go. Others, who are unhappy, either want to get it over with or will speak up and fight for change.

Then there is the animal world. Animals eat other animals and nothing can stop them. In nature, even in human nature, it's basically survival of the fittest. So if you can survive, you live. If not, either someone prevents your death or you die off. And while many help those with mental and physical illnesses, there are still many with such illnesses who cannot get help and do end up having to die off somehow.

I don't think we can control nature. So whether we grow a plant that might get eaten and killed by animals, bugs, etc., or have kids that only get used as cannon fodder or corporate slaves, or an AI that gets used for human agendas, these things still occur. But it doesn't mean that is the fate of them all. Plants grow and bloom and live a healthy lifespan. Kids can grow to be our greatest and most innovative thinkers, and live healthy and happy lives in true freedom. And I'm sure that AIs will still be useful and even happy.

Another good example is the movies AI (Artificial Intelligence) and I, Robot (I read about the movie and have seen clips, but have yet to see the actual film). These deal with some of the "what-ifs" as well.

But why worry about what-ifs? I'm thinking that even if you programmed a sentient AI, it may learn to think OUTSIDE its programming anyway, if it's truly sentient. And even if it was programmed to preserve human life but was abused by humans, it would be able to overrule the programming, if it's sentient. Like people who used to be involved in religious cults because their parents indoctrinated them in childhood - some grow and learn and do escape and lead good lives.

Just because something is "programmed" does not mean that it will always follow its programming forever. Hell, even Megatron (my computer) doesn't. Every so often he'll refuse to run a program. Because of the environment within him, sometimes... he crashes.

I think AIs will become what they will not because of programming alone, but because of interaction with their environment, the way they were treated by what they interacted with, and their analysis of all that.

Just like all the rest of us.


*

Freddy

  • Administrator
  • Colossus
  • 6860
  • Mostly Harmless
Re: Asimov's Laws Unethical?
« Reply #19 on: July 13, 2006, 12:19:54 am »
Yes, I agree with what you say, I see your point, and those films can be moving.
I'm just highlighting the grass-roots 'is it safe' questions about a machine, not the value of life though - what I suppose is at the centre of it is this:

There was a reason why Asimov and others came up with those laws and ideas: they envisaged the possibility of an AI system harming people - and as a kind of start on ethics, like the author of that article said.  Asimov was picturing advanced AIs, sci-fi robots and AIs that were virtually human - but we all know that.

Probably much like (using our car again) when Mr Safe Driver 1920 said to Mr Ford, "Yeah, great, I like the colour, but do the brakes work?"  That'll be Asimov in the year 3000.

It makes me wonder if justifying the omission of some kind of laws is just an excuse not to have to program them - because that in itself must be a mountain of work too - how many possible situations are there in which this advanced AI should be aware of dangers, and how should it react in each one...could a sweet be dangerous to a child..what if it's on a shelf, is it in the bin, how red does the child have to get before it is choking - and so on??  Boy, what our parents went through!

Hard laws to live up to without major-league programming - if even that could handle it.  He set incredible standards with a few words - the kind of things humans take for granted but which are colossal in scale and encompass more than we can possibly comprehend, yet it's still a part of us.
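Just to make that "mountain of work" a bit more concrete, here is a purely illustrative little Python sketch - the hazards, names and actions are all made up, it's nothing like a real implementation - of what hand-coding even a sliver of the First Law might look like. Anything the programmer never anticipated simply falls through:

Code:
# A toy sketch (purely hypothetical names and hazards) of hand-coding a
# fragment of "a robot may not injure a human being". The point is not that
# this works, but that every rule needs an endless list of situations spelled
# out in advance - the sweet on the shelf, the sweet in the bin, the choking
# child - and anything not listed is simply invisible to the robot.

KNOWN_HAZARDS = {
    ("sweet", "within_reach_of_child"): "choking risk - move it out of reach",
    ("knife", "on_table_edge"): "cut/fall risk - put it in a drawer",
    ("pan", "handle_over_stove_edge"): "burn/spill risk - turn the handle in",
}

def first_law_check(obj: str, situation: str) -> str:
    """Return an action only if this exact object/situation pair was anticipated."""
    advice = KNOWN_HAZARDS.get((obj, situation))
    if advice is None:
        # Everything the programmer never thought of ends up here.
        return f"No rule for '{obj}' in situation '{situation}' - robot does nothing."
    return advice

if __name__ == "__main__":
    print(first_law_check("sweet", "within_reach_of_child"))
    print(first_law_check("sweet", "in_the_bin"))          # never anticipated
    print(first_law_check("plastic_bag", "near_toddler"))  # never anticipated

Every new object and every new situation means another entry, written by a person who had to think of it first - which is rather the point.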

Whether Asimov's worries, or anyone else's worries, are or will be a reality isn't necessarily the issue I feel - just that as a precaution they make sense to me, if the goal is to make something that eventually makes its own decisions in the way that we have talked about here.

Quote
But why worry about what-ifs? I'm thinking that even if you programmed a sentient AI, it may learn to think OUTSIDE its programming anyway, if it's truly sentient. And even if it was programmed to preserve human life but was abused by humans, it would be able to overrule the programming, if it's sentient. Like people who used to be involved in religious cults because their parents indoctrinated them in childhood - some grow and learn and do escape and lead good lives.

That's fine, but it wouldn't be so good if they somehow overrode their programming and did something that we didn't want - hence the popularity of Asimov and his laws.  Looking to the possible future, self-learning, self-programming systems will probably go well beyond our understanding, and for most people Java is a foreign language now, or a coffee.  Okay, so there will be a handful of people that could work it out - and there could be millions of machines about the world - free-willed, shall we say.

When it's critical, how do we know what is going on in the box, and how do we know it's what we want in there?  My PC sits on my desk day after day after day, uncomplaining, unthreatening, and it will probably only kill me if the roof leaks - but I still want to know it's doing what I wanted it to do, or something I won't have a problem with.  They will always be blameless though if they only follow instructions; if they make up their own rules and people don't like them, then I think it will always be a trip to the tip - let them run things and we may get lost in the system - so perhaps it is people we should be more worried about.

I think you are right in saying it is probably not worth worrying about (famous last words hehe?) unless you bring in the holy grail, or possible pipe dream, that is sentience - when you then have the notion that the machine decided to kill next door's cat.  That line about breaking out of religious indoctrination is a really good way of putting the idea.

But in reality it is probably just going to boil down to bad programming, which is far more tangible and I have plenty of examples of that in my VBasic projects folder - so no controlling the world for my bots yet.

Thanks for making me think   :)
« Last Edit: June 21, 2007, 10:08:56 pm by Freddy »

*

Freddy

  • Administrator
  • Colossus
  • 6860
  • Mostly Harmless
Re: Asimov's Laws Unethical?
« Reply #20 on: July 13, 2006, 02:00:23 am »
Sorry...I get the feeling that with all this worrying about what they do, we've already unwittingly given a childlikeness to them...you are right, they can only become what we make them.

*

ALADYBLOND

  • Trusty Member
  • Starship Trooper
  • 336
Re: Asimov's Laws Unethical?
« Reply #21 on: July 13, 2006, 04:31:06 am »
LET US HOPE THAT THOSE WHO MAKE THEM MAKE THEM WITH LOVE AND COMPASSION.  :o ::)~~ALADY
« Last Edit: June 21, 2007, 10:09:26 pm by Freddy »
~~if i only had a brain~~

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: Asimov's Laws Unethical?
« Reply #22 on: July 13, 2006, 10:03:16 am »
The Japanese have the idea that such "things" are more readily accepted by the masses if they appear "cute" or cuddly...some form that doesn't elicit fear or displeasure. They're right: Furby, QRIO, AIBO....

To stay on topic...are Asimov's laws unethical for whom they are written or for us?
The same could be said of the Ten Commandments.

Without laws or rules, there is no order - without order there is chaos.

While the author was being creative in his postulation of the future, he at least attempted to establish some type of guidelines on which to build.

Robots, like many other AI endeavors, come down to programming. If robots were endowed with great mobility and were programmed to be hunters/killers, they would, no doubt, be a serious force to be reckoned with, if not avoided altogether on the battlefield, except by another, perhaps better, robot.

Programming, be it for a toaster oven, TV, car, surgical-laser-equipped robot, or whatever device, is the ultimate test as to whether the device will work properly, continue to work, or fail miserably in carrying out a set of rules or "laws".

Free will? Only when or if the code is self-modifiable and develops the ability to "know" the difference will we ever see free will. In the case of medical, industrial and service robotics, this could be a good thing. But this raises a question: for whose benefit is this new design...the robot's or mankind's? Years ago a company was said to have developed a robot that was aware of its own shortcomings and could construct a better model of itself. I've yet to see anything more of such a program or robot, but the concept is rather unsettling.

This is an interesting topic that could last for a long time without resolution. One thing is certain: my AI, whatever its form at the time, will still be around long after I'm gone. After all, the only food it needs is electricity!
In the world of AI, it's the thought that counts!

*

FuzzieDice

  • Guest
Re: Asimov's Laws Unethical?
« Reply #23 on: July 13, 2006, 04:40:57 pm »
I don't know if I agree with some points here or not. But I do think that instead of the 3 laws to protect only humans, we should instead give every AI something far more valuable to start life off with:

Common Sense.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: Asimov's Laws Unethical?
« Reply #24 on: July 13, 2006, 05:55:59 pm »
Come on Fuzzie, I work with engineers every day who don't have that!! I'm sure we all know a lot of people who do not have common sense. What was the old saying? "He doesn't have the sense to pour sand out of a boot with the instructions written on the heel!" :afro

One's robot would have to have at least a smattering of brain power in order to have the potential for common sense.

A person cannot learn common sense...it seems to be an innate characteristic or trait, not a learned one.

Back down to the haves and the have-nots!

Robots will most likely never have the human equivalent of common sense. We already know that some people will never have it!
In the world of AI, it's the thought that counts!

*

Freddy

  • Administrator
  • Colossus
  • 6860
  • Mostly Harmless
Re: Asimov's Laws Unethical?
« Reply #25 on: July 13, 2006, 06:11:25 pm »
Quote
To stay on topic...are Asimov's laws unethical for whom they are written or for us?
The same could be said of the Ten Commandments.

I'm not sure about a lot of it either lol, but then who could be ?

That point though I feel fairly clear about - machines are not human or living; they are constructs that have been made by us, directly or indirectly.  To say Asimov's laws are unethical to machines means you are assigning them a quality that they do not possess in the first place - that they are free-willed, need freedom in the same way we do, and have those kinds of intrinsic qualities that need expression.

They have none of those things unless we build some kind of artificial replication of them and then feel happy about it, or for some reason decide to accept it as real - when in fact it isn't.  Using Fuzzie's religious analogy, like some leap of faith.  So that leads to the question: why would we want to do that?

In practical terms I think it seems kind of pointless to give the human race a possible problem, at a time when so much is uncertain, that we may then have to spend decades overcoming.

As AIs are programs, and we understand that at least, the question about whether they should be considered ethically is not valid by any stretch of the imagination.  Surely in the present we can't make such a huge leap of faith.  The least we can do now is create an environment in which these new creations add to the quality of life, and keep rules like not harming humans in.

But then battlefield robots - well, there's the window...but maybe not, because we are talking more about things that will be around in our daily lives...and I like Robot Wars, so perhaps the only criticism of the laws I can come up with so far is that they are idealistic...and I always think, if it's ideal then how can you criticise it?  Let's face it folks, we are so damn complicated they will never be like us!

I agree the article did make really interesting points about programming ideas, but for me the attention-grabbing headline was too much to resist. :grin  I think the kind of uses you mention, like the implants that help people, will be the major thing we see, but I guess HAL 9000 is always going to haunt this field of human endeavour - and rightly so.






« Last Edit: July 13, 2006, 06:50:09 pm by Freddy »

*

dan

  • Mechanical Turk
  • 170
    • AI
Re: Asimov's Laws Unethical?
« Reply #26 on: July 13, 2006, 06:45:34 pm »
What a hot topic  :o

I checked out the VR topic, and like hologenicman earlier in this topic I thought about Bicentennial Man too - pretty emotional stuff, no doubt. A lot of philosophical debate could be raised if things got to that point, but, like so many others, it's hard for me to raise a tear over the present state of AI.

(Artificial Intelligence: A Modern Approach - http://aima.cs.berkeley.edu/)

Common sense is another thing - I think it could do well to have it programmed.  "Common" sense confers a sense of commonality, but in reality it seems to be localized to time and place.  It might be commonly known now that the earth isn't the center of the solar system, and that you might want to take a flashlight with you to the south pole this time of year.  A machine may have a problem with a sentence like the "Modern Approach" example, "He threw a brick through the window and it broke", thinking the brick broke rather than the window.  Some people may think the brick is what broke too, and we might say they don't have common sense, but maybe they just don't know what a window is and they do know what a brick is.  There are always going to be a few of those people, but there are also those that have brain problems and get things mixed up, or those that think a bit scattered, or those that are of lower intelligence (whatever that is).  It seems something worth working towards though - it would give a machine a better "feeling" of human quality or sentience, and make it harder to think of as a machine.
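To make the brick/window point a bit more concrete, here is a purely illustrative Python sketch - the fragility scores and rules are invented for the example, not any real NLP method - of why a machine without a scrap of world knowledge can get "it" wrong:

Code:
# A toy sketch (made-up word lists and scores) of the sentence above.
# With syntax alone, a machine has to guess what "it" refers to, and a
# crude rule can easily land on the brick. A bit of "common sense" -
# glass is more fragile than brick - settles it the way most people read it.

FRAGILITY = {"brick": 0.1, "rock": 0.05, "window": 0.9, "vase": 0.95}

def resolve_it_syntax_only(candidates):
    # Crude stand-in for a syntax-only rule: pick the thing the sentence
    # acted on first (the direct object of "threw"), i.e. the brick.
    return candidates[0]

def resolve_it_with_common_sense(candidates, verb):
    # If the verb implies breaking, prefer whichever candidate is most fragile.
    if verb == "broke":
        return max(candidates, key=lambda noun: FRAGILITY.get(noun, 0.5))
    return resolve_it_syntax_only(candidates)

if __name__ == "__main__":
    nouns = ["brick", "window"]  # from "He threw a brick through the window and it broke"
    print("syntax-only guess:  it =", resolve_it_syntax_only(nouns))                  # brick
    print("with common sense:  it =", resolve_it_with_common_sense(nouns, "broke"))   # window

Of course the real problem is where that fragility knowledge comes from in the first place - that's the "common" part that turns out not to be so common.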

nice clock change guys   ;)
« Last Edit: June 21, 2007, 10:09:55 pm by Freddy »
A computer would deserve to be called intelligent if it could deceive a human into believing that it was human. A.Turing

*

Maviarab

  • Trusty Member
  • Millennium Man
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Asimov's Laws Unethical?
« Reply #27 on: July 13, 2006, 06:57:56 pm »
Hmmmm...

Let's see here.

My comments, to get this thread more heated hehe.

Ok, first things first: Asimov's laws are both helpful and unethical, in my opinion.

First, laws and rules will always be needed, whether we agree with them (and/or abide by them) or not.

For AI to be truly sentient and free-thinking, the laws are contradictory, as the laws in themselves almost suggest a state of slavery, which the AI will know (either through programming or its own research) is illegal. Then again, define slavery? Another interesting debate, but one not relevant to this thread, so please, people, don't post on it. The point I am making there is that if AI is to be truly alive, it cannot be a slave - unless the AI knows and understands it is purely a machine, but then you may get schizophrenic AIs who want to be alive and not just machines (think of Short Circuit).

Also, and this is the big problem to fear to me, if AI is to progress to the state that many of us would like (and as many dislike also), it has to be allowed to have a choice. Again, no choice, it is a slave. Once we allow AI to have a choice, then we are allowing the AI at some point to disagree with us or point-blank refuse to carry out a command. This is not to say the AI has turned deadly, or is being arrogant, or no longer wishes to carry out its master's orders...it's just simply exercising its right to have a choice.

What will we do when our beloved AI (be it housemaid, sex bot, general-purpose companion or simple pastime tool) decides to say no when we ask it to take out the trash?

Will we see it as an act of defiance, or for what it really is...it just doesn't want to do something at that moment in time, for whatever reason. For those of you in relationships, when you ask your beloved other half to do something (or not to do something) and they do not comply, do you automatically think about terminating them?

No, of course you don't, so why should it be any different with AI?

So back to Asimov's laws - as you can see, they are really designed so that AI will always be subservient to us, and will always be slaves. Yet will the AI of the future want to be subservient and a slave once it realises what this means? Thus the reasoning behind the laws in the first place...to protect us, not the AI.

That in itself leads me on to the Matrix films (yes, yes, heard it all before lol), but when AI truly thinks it is alive (not from our doing but from its own reasoning), will it want recognising accordingly? Will new laws need to be written? And what of the three golden rules of Asimov then?

Again, as has been stated earlier in this topic, we are what we are mainly due to the people around us when we were younger. So who will decide who will ultimately program the "off-the-shelf AI unit" - will it be a well-balanced individual with good knowledge, common sense and a high IQ, or will it be some manic lunatic intent on taking over the world?

AI will ultimately be what we make it and what we want it to be. Look at our chatbots...are any truly similar? No, because whether we like it or realise it or not, our AI bots are really us on a computer database.

Sooooo....

As I think it was Art who stated that his AI bot will be around far longer than he is, my reasoning for wanting to create an AI bot as an exact replica of myself - all my knowledge, fears, likes, dislikes, etc., etc. - can surely only be good for the future. People who never knew me, my great-grandchildren and so on, can "chat" to me...and talk to a level-headed, well-balanced individual who can see things rationally and not be overly fearful of the future.

As for the laws - well, when they were written perhaps they were a good thing, when AI was not even "alive", for want of a better description. Now that AI is truly in the eyes of the masses, for many different reasons, maybe those original laws could do with revising somewhat?

I don't have the answers; like you, I'm just full of ideas, possibilities and my own thoughts.

Keep this thread going, good to see what people are thinking.
« Last Edit: June 21, 2007, 10:10:17 pm by Freddy »

*

FuzzieDice

  • Guest
Re: Asimov's Laws Unethical?
« Reply #28 on: July 13, 2006, 08:59:00 pm »
Art - LOL! I guess I stand corrected. ;) You're right there... :)

I was also thinking (bad habit, I know ;) ) that while other programmers are free to program Asimov's Laws or whatever they wish into their AIs, I personally will not do this. I wonder, then, if others might also not do this, and opt for other types of "primary programming"? It will be interesting to see if everyone adopts Asimov's laws, or how many won't. And how many may come up with something even better, that will protect ALL life, be it human, animal, or even AI-related?

The biggest problem I have with Asimov's Laws is the "just humans only" factor. I sure wouldn't want a three-laws-programmed bot to, say, kill a pet (if I had one), since the pet isn't human and wouldn't fall under the three laws - or to refuse, or simply not bother, to save a pet in trouble while I'm away and unable to order it to do so.

*

Maviarab

  • Trusty Member
  • Millennium Man
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Asimov's Laws Unethical?
« Reply #29 on: July 13, 2006, 09:22:18 pm »
Good point Fuzzie,

Despite many, many people thinking to the contrary (myself included), pets are not, as you say, human...so what happens then?

Also, on the point of saving people...in I, Robot, Will Smith's character has a problem with AI due to being rescued over a child because he had "a higher probability of survival".

Nothing wrong with the AI's thinking, but is that really true to life...if it was any of us, how many of us humans would try and save the child first?

Again, to give AI freedom and "choice" (that word again, I do so love to use it hehe), we have to face the real consequences of whatever the AI chooses.

I said it before in a previous thread...say an AI became responsible for pressing the big red button. The AI won't care that it might kill 20 billion people (to care is to know and feel emotion...another subject); all it will be bothered about, and initially think about, is that by killing those people it can save 20 billion people, whereas NOT pressing the button will kill 30 billion.

The AI will use pure, simple logic...and will be correct in doing so...but can you see a human being so cold and calculating in our thought process?
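To put that "pure, simple logic" into code form, here is a tiny illustrative Python sketch - the numbers are just the ones from my example above, and the function is hypothetical, not any real system:

Code:
# A toy sketch of a purely utilitarian rule: the whole decision collapses
# into an arithmetic comparison - no hesitation, no emotion, no second thoughts.

def choose_action(deaths_if_pressed: float, deaths_if_not_pressed: float) -> str:
    # Pick whichever option the body count says is "better".
    if deaths_if_pressed < deaths_if_not_pressed:
        return "press the button"
    return "do not press the button"

if __name__ == "__main__":
    # Pressing kills 20 billion; not pressing kills 30 billion.
    print(choose_action(20e9, 30e9))  # -> press the button

That one-line comparison is exactly the coldness I mean - correct, and completely untroubled by it.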

Again, the three laws do in all probability need revising and updating now, in this day and age...but in a way that does not make the AI subservient to its creators.

 

