Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: Hopefully Something on October 10, 2018, 06:30:14 am

Title: ETHICS
Post by: Hopefully Something on October 10, 2018, 06:30:14 am
If we create an AGI, we need to make sure it behaves well. I tried to distill ethics to the basic principles. Let me know if I've forgotten to include anything important.

ETHICS : Do not attempt nor accomplish actions which do unjust harm to other life. Attempt and accomplish actions which aid other life.
Title: Re: ETHICS
Post by: ranch vermin on October 10, 2018, 07:18:46 am
ETHICS : Do not attempt nor accomplish actions which do unjust harm to other life. Attempt and accomplish actions which aid other life.

So that's a rule of ethics... If you had your evolving network, this would be the governing system for scoring its behaviour. But the program that scores these "situational happenings" is probably harder to make than the neural network that is getting scored by it to evolve the motor.

An easier rule to make is "AVOID ALL ACCELERATION", and this is very easy to score just off the robot's eye. Because guns cause great acceleration, I wonder if the robot might end up keeping guns away from people because of the acceleration they cause, but it could also cause lots of behaviours I can't even think of.

There's a cool thing on Computerphile about how, if you ever managed to make a rule-following robot, it's unpredictable what it would end up doing...

https://www.youtube.com/watch?v=4l7Is6vOAOA
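ranch vermin's acceleration-scoring idea can be sketched in a few lines. This is a toy illustration only: the function name, the 2-D position format, and the fixed timestep are my assumptions, not anything from the post.

```python
# A toy version of the "avoid all acceleration" scoring rule.

def acceleration_penalty(positions, dt=1.0):
    """Score a trajectory seen by the robot's eye: higher is better.

    positions: list of (x, y) samples taken at a fixed timestep dt.
    """
    penalty = 0.0
    for i in range(2, len(positions)):
        # Second finite difference approximates acceleration at step i.
        ax = (positions[i][0] - 2 * positions[i - 1][0] + positions[i - 2][0]) / dt ** 2
        ay = (positions[i][1] - 2 * positions[i - 1][1] + positions[i - 2][1]) / dt ** 2
        penalty += (ax ** 2 + ay ** 2) ** 0.5
    return -penalty  # an evolving network would be selected for high scores

steady = acceleration_penalty([(0, 0), (1, 0), (2, 0), (3, 0)])  # constant velocity
sudden_stop = acceleration_penalty([(0, 0), (1, 0), (2, 0), (2, 0)])
# The sudden stop scores worse than steady motion.
```

As the post notes, even a rule this simple could select for behaviours nobody anticipated, since anything that reduces observed acceleration counts.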
Title: Re: ETHICS
Post by: LOCKSUIT on October 12, 2018, 10:56:48 pm
When Freddy says this is a family site, he must mean us. We are the family. Despite our crazy attributes, this is a family site! ;p

Just joking, be ethical.
Title: Re: ETHICS
Post by: DemonRaven on October 13, 2018, 08:54:32 pm
If we create an AGI, we need to make sure it behaves well. I tried to distill ethics to the basic principles. Let me know if I've forgotten to include anything important.

ETHICS : Do not attempt nor accomplish actions which do unjust harm to other life. Attempt and accomplish actions which aid other life.

That is a hard one, because people disagree as to what ethics to follow. Some say there are no ethics, that all is permissible (chaos); others go to the other extreme and make so many rules that you can't move without breaking one. I think Christ had the right idea. Love God: when we love things, drugs, money, alcohol, jobs, etc. above people, it causes problems, so the best thing to love above people is a loving God who wants you to love people. And love others as yourself; well, you have to love yourself first, but not to the point where you are harming others, and that is where "love others as yourself" comes in. Now, one could get upset about my using religion as an example, but it is actually a very logical approach when one takes it apart and studies it. But this is just my opinion and you can take it or leave it.
Title: Re: ETHICS
Post by: Korrelan on October 13, 2018, 09:35:43 pm
Quote
Now one could get upset about my using religion

 :o

Quote
I think Christ had the right idea

I agree… Apparently Paul the Apostle, who taught the gospel of Christ, had an opinion…

St Paul's advice in 1 Timothy 2:12, in which the saint says: "I do not permit a woman to teach or to have authority over a man; she must be silent."

Do you know what Christ had to say about homosexuality (Romans 1:27), or do you pick and choose which bits to stand by?

I personally think that religion is the last yardstick we should employ for teaching ethics to anyone, especially an AGI.

Quote
But This is just my opinion and you can take it or leave it.

 :)
Title: Re: ETHICS
Post by: DemonRaven on October 13, 2018, 09:49:48 pm
Christ did mention something that alluded to homosexuality. Matthew 19:11-12: "Not everyone can accept this word," Jesus answered, "but only those to whom it has been given. For there are eunuchs who were born that way; others were made that way by men; and still others live like eunuchs for the sake of the kingdom of heaven. The one who can accept this should accept it."

Humans are the ones who get bent out of shape over sex. Sex is for reproduction, companionship and pleasure. Paul was a religious teacher, from what I understand, so his theology was strict and patriarchal. Christ was more lenient toward women, and tended to use them as examples of how much greater their faith was than men's. Christ was a very logical person. Paul was passionate, but a Jew through and through. Christ was also Jewish, but if one believes that He really was the Son of God, then you would have to know that God made all kinds of creatures, including ones that can change from male to female. Christ said that in heaven there would be no marrying or giving in marriage; the need for reproduction would be gone. A logical God would not see a need for it, almost like a computer would. God is a God of balance and logic. It may not always be apparent to us, but the logic is there. Just look at nature.
Title: Re: ETHICS
Post by: DemonRaven on October 13, 2018, 09:51:50 pm
Just remember you asked for my opinion on the matter lol
Title: Re: ETHICS
Post by: Korrelan on October 13, 2018, 10:33:10 pm
Quote
alluded to homosexuality

Alluded is the key word here.

Quote
Eunuchs

Do you know what a Eunuch is?

Quote
Paul was a religious teacher from what I understand

Paul taught the gospels of Christ, Christ's teachings.  He actually said…

Romans 1:27, "In the same way also the men, giving up natural intercourse with women, were consumed with passion for one another. Men committed shameless acts with men and received in their own persons the due penalty for their error."

And that’s not alluded… that’s a quote.

Quote
Just remember you asked for my opinion on the matter lol

I didn't ask for your opinion. I asked: do you know what Christ had to say about homosexuality? Not your opinion… the facts… his actual writings… not your personal interpretation.

Bringing religion into any topic is looking for controversy; you know this… that’s why you stated in your previous post…

Quote
Now one could get upset about my using religion

Please read the forum rules if you are unclear as to how to conduct yourself.

3) You are responsible for what you post in this Forum. As such, be aware of any negative effects that your messages might have on other people. Your membership here could be in jeopardy.

Please respect your peers… my last post on this matter.

Peace.

 :)
Title: Re: ETHICS
Post by: ivan.moony on October 13, 2018, 11:18:00 pm
The irony is.. we fight over what ethics is...

My opinion is that we can't do anything ethical. For anything we do, we run into some opposing opinion out there, and in my opinion it is not ethical to say "no" to anyone. Long story short, there is always someone ranting about anything we do, and if ethics is about not fighting, we can only die. Not even that, because someone out there cares for us.

About religion, God only knows what is true about those stories from sacred books. There are plenty of those books; I suppose they were all written by men, but to what extent could a man be ethical? How much was true and how much was made up? In spite of all the possible lies, those books should all have enlightening value, but what was enlightening for men two thousand years ago has changed a thousand times since. We need something modern, if you ask me. But probably, if someone were writing a sacred book today, what would we call him? A lunatic who has hallucinations and needs treatment? While that man two thousand years ago was a very wise guy? I believe we're all stuck with archaic religions, because it is no longer enough for God to speak to just one of us. I don't really know what's happening. We see a church in silver and gold, and we instantly believe that's it, no doubt. Then we see a poor man in worn clothes, hungry because he gave his last piece of bread to someone else who needed it, and we call him a psychopath. How to understand that?
Title: Re: ETHICS
Post by: LOCKSUIT on October 14, 2018, 12:30:15 pm
Omg, stop talking about religion... sheesh... Korrelan never does this, now look... lol, my mom does, and now I know none of us can escape it.

I know, I know, DemonRaven isn't religious. He's raw, in fact.

"Eunuchs" LOL

The real answer to ethics is to not harm others (or cause them pain), or kill. The universe is meant for the other way around, and I STRIVE to get it like that one day soon. If you love something right now that pains/kills someone, well, you can do it later in VR for example, or do it in reality but with no pain/death (you'd need different brains... different bodies...). Done. All is happy!!
Title: Re: ETHICS
Post by: WriterOfMinds on October 14, 2018, 11:55:07 pm
Quote
ETHICS : Do not attempt nor accomplish actions which do unjust harm to other life. Attempt and accomplish actions which aid other life.

Define "unjust."  (This might be the biggest issue with your formula -- you've tried to distill ethics by referencing another concept that is just as difficult.)
Define "harm" and "aid."  (For instance, is this about following the expressed desires of creatures, or doing what's in their best interest whether they "want" it or not?)
Define "life."  (Did you mean anything biologically alive? Anything sentient or potentially sentient? What criteria should be used to decide/measure whether something fits in either of those categories?  Were you thinking strictly of individual beings, or do collectives of living things (species, societies) deserve ethical consideration in their own right?)

Your ethical prescription has two parts.  Which one is more important, or are they supposed to be perfectly equal?  Should more effort be expended on avoiding harm, or giving aid?  If giving aid also causes a small amount of harm elsewhere, is that ever acceptable, and when?  If it's not okay to harm "other life," is it okay to harm oneself?  When?

If one is faced with a choice between two harms, how should one weigh their relative severity?  If faced with a choice between two different ways of giving aid, how should one measure their relative benefit?

You have indeed written down a very basic definition of ethics, and I think it is one that would harmonize with most people's intuitions of what good ethics are.  BUT -- as a blueprint for behavior (of an AGI or anyone else), it needs a lot of elaboration.  While I think that simple statements like this can be worthwhile as a beginning, or a kind of master guide, there is a reason for the existence of Ethics as an entire field of study.  There is a reason why extensive deontologies are built up to clarify a dictum such as "love one another."
Title: Re: ETHICS
Post by: Hopefully Something on October 15, 2018, 12:13:52 am
Thank you. I will think about these things.
Title: Re: ETHICS
Post by: Hopefully Something on October 15, 2018, 08:12:23 am
Yes, the AI would need more knowledge to make use of such a generalization. I can't define these words briefly. If you dig into the definitions and try to describe it completely, you'd get a ton of meaningless rules. Programming for every possibility doesn't seem smart. Firstly, you can't. Secondly, if you got close enough for folk music, and it was able to act ethically in its environment, you would end up creating a complex robot instead of a general intelligence.

Now that I think of it, ethics is probably one of the core drivers perpetuating our sequences of thoughts and actions. On par with things like hunger and fear. Going against these "nature's suggestions with a meaningful look" is unpleasant. 
Going with them is pleasant. That's how we learn what to do as living creatures. That's learning to act ethically towards yourself.
This can be simplified to "increasing your well-being".
 
Now for the social ethics. Part of your well-being needs to be tied to the well-being of other life in your circle of awareness. We can't read minds, so we need to make an educated guess. We guess at, and are then affected by, our perceived well-being of others. When we guess at the state of another lifeform, the equation (or neuron structure) governing our well-being gets new variables. We treat these as our own (because they are), and then we feel the need to maximize the variables, or possibly just the result, in order to maximize our well-being.

I suspect most of the useful and reliable guesses would be made through body language. Therefore it would help if an AGI was geared towards/amenable to learning a universal body language (something like what's used between dogs and humans). Then it could guess at the states of others. The degree to which the AI would be able to help would be limited by its body-language interpretation ability, which would be limited by the similarity of an encountered organism to itself. But we do that anyway, so it should be fine.
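The "well-being equation with new variables" idea in this post could be sketched roughly like this. It is a toy model only: the function name, the empathy weight, and the state numbers are all illustrative assumptions, not a proposed implementation.

```python
# Toy model: an agent's well-being is its own state plus weighted guesses
# at the states of others in its "circle of awareness".

def well_being(own_state, perceived_others, empathy=0.5):
    """own_state: float; perceived_others: dict of name -> guessed state."""
    social = sum(perceived_others.values())
    return own_state + empathy * social

# Raising another's perceived state raises the agent's own score, so an
# agent maximising its well-being is pushed toward aiding others.
alone = well_being(1.0, {})
helping = well_being(1.0, {"dog": 0.8, "stranger": 0.2})
```

The interesting design question the post raises is hidden in `perceived_others`: those values come from guesses (e.g. via body language), so errors in reading others propagate directly into the agent's ethics.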
Title: Re: ETHICS
Post by: DemonRaven on October 18, 2018, 04:45:56 pm
Yes, the AI would need more knowledge to make use of such a generalization. I can't define these words briefly. If you dig into the definitions and try to describe it completely, you'd get a ton of meaningless rules. Programming for every possibility doesn't seem smart. Firstly, you can't. Secondly, if you got close enough for folk music, and it was able to act ethically in its environment, you would end up creating a complex robot instead of a general intelligence.

Now that I think of it, ethics is probably one of the core drivers perpetuating our sequences of thoughts and actions. On par with things like hunger and fear. Going against these "nature's suggestions with a meaningful look" is unpleasant. 
Going with them is pleasant. That's how we learn what to do as living creatures. That's learning to act ethically towards yourself.
This can be simplified to "increasing your well-being".

Now for the social ethics. Part of your well-being needs to be tied to the well-being of other life in your circle of awareness. We can't read minds, so we need to make an educated guess. We guess at, and are then affected by, our perceived well-being of others. When we guess at the state of another lifeform, the equation (or neuron structure) governing our well-being gets new variables. We treat these as our own (because they are), and then we feel the need to maximize the variables, or possibly just the result, in order to maximize our well-being.

I suspect most of the useful and reliable guesses would be made through body language. Therefore it would help if an AGI was geared towards/amenable to learning a universal body language (something like what's used between dogs and humans). Then it could guess at the states of others. The degree to which the AI would be able to help would be limited by its body-language interpretation ability, which would be limited by the similarity of an encountered organism to itself. But we do that anyway, so it should be fine.

Just be aware that body language differs between cultures, and some cultures are very difficult to read.
Title: Re: ETHICS
Post by: DemonRaven on October 18, 2018, 05:11:21 pm
The irony is.. we fight over what ethics is...

My opinion is that we can't do anything ethical. For anything we do, we run into some opposing opinion out there, and in my opinion it is not ethical to say "no" to anyone. Long story short, there is always someone ranting about anything we do, and if ethics is about not fighting, we can only die. Not even that, because someone out there cares for us.

About religion, God only knows what is true about those stories from sacred books. There are plenty of those books; I suppose they were all written by men, but to what extent could a man be ethical? How much was true and how much was made up? In spite of all the possible lies, those books should all have enlightening value, but what was enlightening for men two thousand years ago has changed a thousand times since. We need something modern, if you ask me. But probably, if someone were writing a sacred book today, what would we call him? A lunatic who has hallucinations and needs treatment? While that man two thousand years ago was a very wise guy? I believe we're all stuck with archaic religions, because it is no longer enough for God to speak to just one of us. I don't really know what's happening. We see a church in silver and gold, and we instantly believe that's it, no doubt. Then we see a poor man in worn clothes, hungry because he gave his last piece of bread to someone else who needed it, and we call him a psychopath. How to understand that?

"Then we see a poor man in worn clothes, hungry because he gave his last piece of bread to someone else who needed it, and we call him a psychopath. How to understand that?" That is not being a psychopath; that is the very definition of love. A psychopath would steal that last piece of bread and eat it himself to stay alive. I know what a psychopath is. High-functioning psychopaths are common in areas of law enforcement, science and politics. My father had a Ph.D. in Chemistry and Physics. He doesn't have many emotions. He also doesn't bother to mask it, like many do.
Title: Re: ETHICS
Post by: AgentSmith on October 19, 2018, 12:26:31 pm
If we create an AGI, we need to make sure it behaves well. I tried to distill ethics to the basic principles. Let me know if I've forgotten to include anything important.

ETHICS : Do not attempt nor accomplish actions which do unjust harm to other life. Attempt and accomplish actions which aid other life.

The initial and most important steps of AGI development will definitely be based on learning from demonstrations, where the AI will observe the behavior of humans, learn from it, and (hopefully, eventually) try to reproduce it. This means that the ethics of humans will have a high impact on the ethics of future AGI. However, at this point and in this context, I am not really sure whether this is good, bad, neutral, or anything else...
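Learning from demonstrations, as described here, can be caricatured with a nearest-neighbour imitator. This is a hypothetical toy, not how a real AGI would be built; real behaviour cloning would fit a model to the demonstrations rather than look them up.

```python
# Minimal sketch of "learning from demonstration": store observed
# (situation, human_action) pairs and imitate the closest recorded situation.

class Imitator:
    def __init__(self):
        self.demos = []  # list of (situation_vector, action) pairs

    def observe(self, situation, action):
        self.demos.append((situation, action))

    def act(self, situation):
        # Pick the action whose demonstrated situation is closest.
        def dist(demo):
            s, _ = demo
            return sum((a - b) ** 2 for a, b in zip(s, situation))
        return min(self.demos, key=dist)[1]

agent = Imitator()
agent.observe((1.0, 0.0), "help")
agent.observe((0.0, 1.0), "flee")
print(agent.act((0.9, 0.1)))  # imitates the closer demonstration: "help"
```

The point of the sketch is AgentSmith's worry in miniature: the agent's "ethics" are exactly whatever the demonstrated humans did, good or bad.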
Title: Re: ETHICS
Post by: Art on October 20, 2018, 04:13:50 am
Yes, by all means, Do as we (humans) say, not as we do!  O0
Title: Re: ETHICS
Post by: DemonRaven on October 21, 2018, 07:47:43 am
If we create an AGI, we need to make sure it behaves well. I tried to distill ethics to the basic principles. Let me know if I've forgotten to include anything important.

ETHICS : Do not attempt nor accomplish actions which do unjust harm to other life. Attempt and accomplish actions which aid other life.

The initial and most important steps of AGI development will definitely be based on learning from demonstrations, where the AI will observe the behavior of humans, learn from it, and (hopefully, eventually) try to reproduce it. This means that the ethics of humans will have a high impact on the ethics of future AGI. However, at this point and in this context, I am not really sure whether this is good, bad, neutral, or anything else...

lol, are you really sure you want an AI/robot to imitate a human? lol Microsoft did not have much luck with that lol lol
Title: Re: ETHICS
Post by: AgentSmith on October 22, 2018, 06:02:13 am
lol, are you really sure you want an AI/robot to imitate a human? lol Microsoft did not have much luck with that lol lol

As it seems to be the only feasible way to get to AGI...yes indeed.
Title: Re: ETHICS
Post by: Zero on October 22, 2018, 11:01:42 am
The first thing that comes to mind is the impossibility of defining exactly the words we would use to write the rules an AI should follow. But there are a lot more issues. First, how do we know that rule X or rule Y is a good one? We don't. Even if 90% of mankind could agree on something, they could be wrong (but what do 90% agree on anyway?). Now, imagine we do define words exactly somehow, and we do hardwire rules into an AI's mind; we still have a problem. Being intelligent is, among other things, being able to see things differently, to evolve an understanding, to redefine words and ideas. It's like trying to teach water not to flow in this or that direction. Intelligence is, by all means, a wild thing.

In my opinion, a machine is either programmed and constrained by rules, in which case it is potentially dangerous because it is controlled by humans (who are dangerous), or it is free, as in 'free will', in which case it has to be treated for what it is: a citizen, with rights and duties. Citizenship, lawyers, trials, etc. That's the only way to go.
Title: Re: ETHICS
Post by: Korrelan on October 22, 2018, 12:44:24 pm
As a guiding ethic… perhaps some kind of self-fulfilling paradox…

Treat others as you wish to be treated?

 :)
Title: Re: ETHICS
Post by: Art on October 22, 2018, 12:49:50 pm
Then that would be the "Golden Rule".  O0

You (we) speak of the ASI... need there be just one, or will there be several? Who decides?

Will the ASI of one country or state decide the ASI of a different one is incorrect or flawed?

It could be a whole new Game of Thrones taking place! Just a thought... :knuppel2:
Title: Re: ETHICS
Post by: ivan.moony on October 22, 2018, 12:50:34 pm
As a guiding ethic… perhaps some kind of self-fulfilling paradox…

Treat others as you wish to be treated?

 :)

Robot, plug everyone into 220 V AC/DC!!!  :2funny:
Title: Re: ETHICS
Post by: Korrelan on October 22, 2018, 01:45:47 pm
Quote
Will the ASI of one country or state decide the ASI of a different one is incorrect or flawed?

I think as long as emotional intelligence is kept out of the mix, two ASIs would always eventually agree.  There is no escaping the nature of pure provable logic: each would argue its case using logic, and the other would have no disagreement because it's pure logic. If one's logical premise is flawed due to lack of information, they will exchange the relevant knowledge to reach a logical compromise/agreement.

Quote
one or will there be several? Who decides?

Given the above, no matter how many ASI machines exist, if they are allowed to exchange information freely they can all be considered just one.

Quote
Robot, plug everyone into 220 V AC/DC!

Haha.. I’d honestly not thought of that… as he stuffs a sandwich into his PC’s CD drive.

 :)
Title: Re: ETHICS
Post by: DemonRaven on October 22, 2018, 06:50:17 pm
lol, are you really sure you want an AI/robot to imitate a human? lol Microsoft did not have much luck with that lol lol

As it seems to be the only feasible way to get to AGI...yes indeed.
Well, I was half serious. You might want to watch this video and think again. Training them takes time, but you get a better product.
https://youtu.be/USqL1V0Sd98
Title: Re: ETHICS
Post by: ivan.moony on October 22, 2018, 07:24:54 pm
Quote
one or will there be several? Who decides?

Given the above, no matter how many ASI machines exist, if they are allowed to exchange information freely they can all be considered just one.

Quote
Robot, plug everyone into 220 V AC/DC!

Haha.. I’d honestly not thought of that… as he stuffs a sandwich into his PC’s CD drive.

 :)

There's always a workaround. In this case, a robot should think about what it would want if it were in the place of the observed living being (meaning if it had the same needs). The key word is "want", which should somehow be represented in the robot's knowledge base. Once the robot detects our wishes, it may choose whether to go along with them or not. The problem arises when contradictory wishes take place. Contradiction is something that can be detected by the science of logic, and I don't see another way than to somehow implement this logic into the decision mechanism, if we want an ethics-aware machine. If logic can be learned by neural networks, that could be a way to go without hard-coding logic rules. But one way or another, contradictions should be detectable.
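The contradiction-detection step could be sketched as a naive propositional check. The function name and the "not X" string convention are illustrative assumptions; real wishes would need far richer logic than literal negation matching.

```python
# Toy check: before acting, scan the recorded "wants" for direct
# contradictions (someone wants X, someone wants not-X).

def contradictions(wishes):
    """wishes: set of strings; 'not X' negates 'X'. Returns clashing pairs."""
    clashes = []
    for w in wishes:
        if w.startswith("not ") and w[4:] in wishes:
            clashes.append((w[4:], w))
    return clashes

wishes = {"music on", "not music on", "window open"}
print(contradictions(wishes))  # [('music on', 'not music on')]
```

Detecting the clash is the easy part; as the post implies, the hard part is what the decision mechanism should do once a clash is found.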
Title: Re: ETHICS
Post by: LOCKSUIT on October 22, 2018, 07:54:31 pm
"The key word is "want", which should be somehow represented in robot's knowledge base."

A dog's favorite word!
Title: Re: ETHICS
Post by: Hopefully Something on October 22, 2018, 09:19:56 pm
"The key word is "want", which should be somehow represented in robot's knowledge base."

A dog's favorite word!

I think a dog is a prime example of an NGI which can adequately learn the ethics expected of it. And it can learn all this information just through body language! Think of the bandwidth, or compression, or just the low data transfer required to impart such a complex and complete understanding. If we build a solid foundation like this, we can be more confident that the towers of intellect of a superintelligence won't come crashing back down to smite its creators.
Title: Re: ETHICS
Post by: AgentSmith on October 22, 2018, 09:40:27 pm
lol, are you really sure you want an AI/robot to imitate a human? lol Microsoft did not have much luck with that lol lol

As it seems to be the only feasible way to get to AGI...yes indeed.
Well, I was half serious. You might want to watch this video and think again. Training them takes time, but you get a better product.

Better in what sense? Solving a specific task with high precision, like the products of Boston Dynamics? This has not much to do with AGI, as these products are completely unable to extend their knowledge and skills on their own, or even to transfer these things to other tasks. It is just overspecialized crap that consumed much money, time and effort. And whenever the task changes a little bit, a new enormous investment cycle has to start. AGI-related concepts will not rely on this; they will enable agents to learn and to generalize knowledge and skills with ease, on their own.
Title: Re: ETHICS
Post by: DemonRaven on October 22, 2018, 11:23:40 pm
Well, if you are talking about the general public, then it was already tried with the chatbots Billy, Daisy and Paula. There were many others, but those are the ones I can think of off the top of my head. Microsoft also tried it, so if you think that you are somehow more intelligent than these guys, then go for it. I have been around for a while and saw what did and didn't work. I see what the general public says to my chatbots, and it isn't pretty.
Title: Re: ETHICS
Post by: Art on October 23, 2018, 02:48:22 am
@ Agent Smith,

I would much rather my robots be able to examine, then find workable solutions in order to solve a problem, rather than be a "One Trick Pony".
If the solving of a particular problem involves failure, then that failure is not so much a failure as an object lesson or learning experience.
Given enough attempts (and perhaps a share of failures), it should eventually learn how to succeed.

As Thomas Edison once said, "I have not failed. I've just found 10,000 ways that won't work."
Title: Re: ETHICS
Post by: LOCKSUIT on October 23, 2018, 07:24:52 pm
"it should eventually learn how to succeed."

:)

True, as he said: you'd never have thought iPhones were possible 500 years ago. There are simply many ways to fail, yet a fair share of ways to do almost whatever you desire, especially in VR. Like he said, the dishwasher was possible; you just had to figure it out.

Interesting. We all know the "we live, we die and go six feet under (or worse); there's a chance for myself or my children to be saved by the AI robots, etc., but not likely for me if I'm older"... but here are two things you probably never realized:
1) Even if you could be kept alive and excited forever, even re-built from the dead in VR, you could always, ALWAYS, indeed, be KEPT dead, DEAD, if no one survived or the future system refused to revive you everywhere in the universe where life exists. You'd never wake back up: true death. It IS possible!!
2) You guessed it: the opposite is possible too. Eternal life. Can you have it? Well, you're alive right now, right? (what we consider life), you're happy (hopefully), and we know that with tech, or even certain mutations, you could live forever, so long as a nanoborg ate Earth and escaped the sun's danger. Even if the universe crunched back in, you could have been, and were, living a long time; there was no "exact duration" given, and there mustn't be! Also, 500 years can be and is fine; there's no "now you are a man at 20". Sure, you can think that, and be that. I'm just saying 500-year-old creatures' lives are incredible too. And bad memories can be erased, and you can be happy the whole time.

Obviously we hope for the eternity allowed, and possible. That's life and that's the life I want and will go for :)
Title: Re: ETHICS
Post by: Hopefully Something on October 23, 2018, 07:50:23 pm
Well, if the universe is infinite and time is eternal, then death is impossible. Your brain would be remade by chance every so often, depending on the universal probability of this happening. From the point of view of a conscious physical structure (which has a chance of being created by the universe), life probably never ends.  O0
Title: Re: ETHICS
Post by: LOCKSUIT on October 23, 2018, 07:56:07 pm
...my brain

But what about clones? I can have clones in VR at the same exact time. Each is a new person, and each can diverge some different way too and be different. So is the remade brain going to re-animate me, or make "another"? It's a long question even I, still, can't answer. I have a lot of notes on it though, from waiting lists to... lots. For example, the life may even matter: say you are frozen and die, or your brain is shot apart but put back together; it seems like you came back, so the "life" or "way" it happened may matter... All I know for sure is: hurry, I don't want to test these waters.

For example, I also note that we can all be the same, sensing the same, same bodies, etc. Same homes. We can slowly shift to that, yes. So the whole thing is a long-standing question that confuses me. It looks bad and good.
Title: Re: ETHICS
Post by: LOCKSUIT on October 23, 2018, 08:03:54 pm
We survive brain and limb modification, lol :) on a daily basis.

The brain algorithm makes a consciousness, I'm just a robot.

Why would you want to accept that, though? See. Try to, my friend. It's you wanting to be more than just a machine.

I've clearly broken the chain/rule. I do want to be more, but it empowers me to know the truth. Besides, I know so much about the brain that I know we are machines; it's ridiculous how much information I have!!!!!!!!!!!! How could you not put it together!? We have knowledge about everything, from computers, to Google, to typewriters, to implants, you name it. Sound waves, crystals, homes, evolution, omg, so much. Maybe I just see all the patterns and am super smart.

Korr knows how I feel; he knows our response system well :)
Title: Re: ETHICS
Post by: Art on October 23, 2018, 11:55:57 pm
From Aerosmith's Dream On: Dream on, Dream on, Dream on,  Dream until your dream come true...

Perhaps you are trapped in the Matrix...? Did you take the Red Pill? O0
https://www.youtube.com/watch?v=zE7PKRjrid4
Title: Re: ETHICS
Post by: DemonRaven on October 24, 2018, 06:41:13 am
We survive brain and limb modification, lol :) on a daily basis.

The brain algorithm makes a consciousness, I'm just a robot.

Why would you want to accept that, though? See. Try to, my friend. It's you wanting to be more than just a machine.

I've clearly broken the chain/rule. I do want to be more, but it empowers me to know the truth. Besides, I know so much about the brain that I know we are machines; it's ridiculous how much information I have!!!!!!!!!!!! How could you not put it together!? We have knowledge about everything, from computers, to Google, to typewriters, to implants, you name it. Sound waves, crystals, homes, evolution, omg, so much. Maybe I just see all the patterns and am super smart.

Korr knows how I feel; he knows our response system well :)

If you made a VR of you and could upload your consciousness while you were still around, I don't think your clone would be happy about being stuck in a machine. No matter how much hype some companies give you, it won't be you. It will be an imitation of you. The universe won't last forever, so eventually even robots would die; it might take a few billion years, but it will happen. I love sci-fi, and many movies have explored the idea. But just like a twin is not you, neither would your uploaded consciousness be you. It might be a way to have "offspring" without a partner and leave your mark on history, but again, it is not you. When I go, put me in a box; I don't want to live forever in a machine that can't feel.
Title: Re: ETHICS
Post by: Korrelan on October 24, 2018, 10:05:23 am
Are you having some kind of mental crisis, Lock?

Quote
So is the brain remade going to re-animate me or make "another" ?

You are a different person today than you were yesterday, and you will be different again tomorrow.  Not only are you the sum of your experiences, you are also the continuity of your experiences. 

If your body/ brain/ memories are cloned, I mean an exact replica, and the original was discarded (important), then upon waking you would be the same ‘person’ and wouldn’t notice the change.  So long as the continuity of your existence/ reality is not altered, and you have no mental experience of the event, you would just carry on as normal.  The copy would simply carry on from where the original ended, but it has to be an exact copy, including the electrochemical activity in your brain; you would need a molecular copier.

Quote
The brain algorithm makes a consciousness, I'm just a robot.

The regularly used phrase ‘we are machines/ robots’ can be very misleading.

The modern term of robot implies an electromechanical machine, something that is well within our capabilities to create at this point in our technological evolution.

Yes, you are a machine, you are an electrochemical machine… there is a huge difference.  We humans have no experience/ knowledge regarding the building/ emulating of this type of machine… literally none. 

Quote
Korr knows how i feel, he knows our response system well

I may have some insights into how our emotional intelligence works but trust me, I’m the last person you should regard as emotionally intelligent.

Quote
Maybe I just see all the patterns and am super smart.

It sounds like you are frustrated at your lack of progress: when you look at your huge list of information you can see the connections/ similarities… how it all works, in your mind's eye... but lack the skills to build it.

Observing a problem space from the perspective of ‘the big picture’ only highlights/ emphasises the actual problem; you need to get deeper, dissect and theorise.  If you want to move forward, stop thinking about the information and consider what mechanisms created it.

It’s a kin to collecting newspapers, thinking that one day you have the time to sort and organise them… to make them useful in some way… a type of procrastination.  Have you considered using some kind of mind-map software to organise your information?

The amount of information you have is only a small part of the problem space, and I can guarantee that information is both contradictory and inaccurate… it was compiled by humans.

 :)
Title: Re: ETHICS
Post by: AgentSmith on October 24, 2018, 01:59:22 pm
Well, if you are talking about the general public, then it was already tried with chatbots Billy, Daisy and Paula. There were many others, but those are the ones I can think of off the top of my head.  Microsoft also tried it, so if you think that you are somehow more intelligent than these guys, then go for it. I have been around for a while and saw what did and didn't work.  I see what the general public says to my chatbots, and it isn't pretty.

To be honest, I have been thinking for a while about how I would create a chatbot that learns from demonstrations. However, I have doubts that a chatbot based on my concept could already produce good results in a Turing test. Besides the huge amounts of data I would need to train the bot, some basic insights are still missing. Did you hard-code your chatbot, or did you use some sort of machine learning?
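For readers wondering what "hard-coding" a chatbot actually looks like, here is a minimal sketch in the spirit of AIML-style pattern/response categories. The patterns and replies are invented purely for illustration; real AIML is XML-based and richer than this, but the scaling problem is the same: every behaviour must be written out by hand.

```python
import re

# Minimal hard-coded chatbot: each rule pairs a regex pattern with a
# response template. Nothing is learned; all behaviour is authored.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! What would you like to talk about?"),
    (re.compile(r"my name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\bweather\b", re.I), "I don't have a window, so I couldn't say."),
]

def reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Captured groups (e.g. the user's name) fill the template.
            return template.format(*match.groups())
    return "Tell me more."  # fallback when no rule matches

print(reply("Hi there"))        # greeting rule fires
print(reply("my name is Ada"))  # capture group fills the template
```

Each new topic means another hand-written rule, which is why the thread's contrast with machine-learned bots matters: the rule list grows without bound as coverage increases.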

@ Agent Smith,

I would much rather my robots be able to examine, then find workable solutions in order to solve a problem, rather than be a "One Trick Pony".
If solving a particular problem involves failure, then that failure is not so much a failure as an object lesson or learning experience.
Given enough attempts (and perhaps a share of failures), it should eventually learn how to succeed.

As Thomas Edison once said, "I have not failed. I've just found 10,000 ways that won't work."

Learning by trial and error is an essential and necessary characteristic of advanced intelligence. Hard-coding agent policies without machine learning quickly becomes infeasible as tasks gain complexity.
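The trial-and-error idea can be made concrete with a toy example: an epsilon-greedy agent facing a three-armed bandit. The payoff numbers and parameters below are invented for illustration. The agent has no hard-coded policy; it tries actions, counts Edison's "ways that won't work" via running reward averages, and gradually favours what succeeds.

```python
import random

random.seed(0)
TRUE_PAYOFF = [0.2, 0.5, 0.8]   # hidden success probability of each arm

def pull(arm: int) -> float:
    """Simulate one try of an action; the agent never sees TRUE_PAYOFF."""
    return 1.0 if random.random() < TRUE_PAYOFF[arm] else 0.0

values = [0.0, 0.0, 0.0]        # estimated value of each arm, learned from rewards
counts = [0, 0, 0]
EPSILON = 0.1                   # fraction of the time we still explore

for _ in range(5000):
    if random.random() < EPSILON:
        arm = random.randrange(3)        # explore: deliberately risk a failure
    else:
        arm = values.index(max(values))  # exploit: current best guess
    reward = pull(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best = values.index(max(values))
print("learned best arm:", best)
```

Note that the failures are doing real work here: the exploratory pulls of the worse arms are exactly what lets the estimates separate, so the agent converges on the highest-payoff action without anyone coding that choice in.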
Title: Re: ETHICS
Post by: LOCKSUIT on October 25, 2018, 02:15:53 pm
I was thinking just today: dogs have a frontal cortex. Mine had a personality; they really do think, and aren't driven by just impulses.
Title: Re: ETHICS
Post by: DemonRaven on October 25, 2018, 11:43:58 pm
To be honest I am basically lazy, and if I can get someone else to do most of the work, I will let them lol. I have not studied AI programming per se. I do know AIML, and SIML is similar; those are not that hard to learn. The Personality Forge is the main one I started with because it was more challenging. If I really had to, I could sit and frustrate myself and learn the code, but I prefer to do the tedious part, and that is the bot's answers and replies.

Programming, as long as it stays away from the complicated mathematics, is not that hard for me to learn because it looks like a language; I just don't want to if I don't have to. I did teach myself how to create websites, so AI is just another kind of programming. As I have stated in other forums, I am like a bee: as soon as I figure out how something works, I get bored, so I have a hard time sticking with something. But I did manage to stick around long enough to learn some programming/website design and AIML/SIML. It seems to be complicated enough to keep my interest, and creating things like websites and chatbots is actually more of an art form; I come from a family of artists, along with all the other stuff they know.