Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: Zero on June 15, 2020, 05:31:47 pm

Title: Do we want it?
Post by: Zero on June 15, 2020, 05:31:47 pm
So, correct me if I’m wrong.

We’re roughly facing 3 scenarios here.

Scenario 1. There’s no magic bullet, artificial consciousness won’t happen, the singularity won’t happen, and money will keep getting swallowed up in empty space. Waste. Well, not exactly, since along the way we’re still inventing new ways to make rich people richer and poor people quiet, new ways to sacrifice the environment, in a word: other ways to make money. To what end? Making money, of course!

Scenario 2. There IS a magic algorithm, but it requires heavy computing power: running it on a BOINC grid of every computer we have on Earth today would be like running the big version of GPT-2 on an old iPhone. Wham, we made it, boy! Except only 2 companies and 3 countries can afford it. Yeah, it’s open source because we’re nice people, but you don’t have what it takes to run it anyway, do you? So, what happens? The few running instances work hard for their owners, who believe they know what’s good for you better than YOU do. Democracy? You can kiss it goodbye. Don’t be sad, it was only good when no decision had to be made anyway. So overrated.

Scenario 3. There IS a magic algorithm, and you can download it for free! Woohoo, it runs out of the box. What happens then? I can’t say anything, because we honestly don’t know. Is this “we don’t know” really good news? I don’t think so. Do you?
Title: Re: Do we want it?
Post by: LOCKSUIT on June 15, 2020, 07:19:11 pm
Making your environment predictable increases life spans. Knowing where, when, and what all is. All Earth will become cubic cubes of cubes of nanobots.

Earth is already becoming fractal. Look at building code. Square homes. Lined up. Stores near stores. Mmm.

Note: Nanobots still find food, radiate, and reproduce.
Title: Re: Do we want it?
Post by: Zero on June 15, 2020, 07:39:25 pm
You talk about reality as if it was virtual.
Title: Re: Do we want it?
Post by: frankinstien on June 15, 2020, 08:40:59 pm
I for one don't think that consciousness is a matter of some magic algorithm. Today, software is developed to exploit CRUD (Create, Read, Update, and Delete). If you think about it, even games are a form of CRUD; the graphics are nothing more than dynamic, moving charts of data! Building software from a different perspective, one that employs introspection of its own data, has not been popular. No one builds an app that has a memory of what it just did, or that chains events so it can convey a sense of time with respect to its actions. Even flexibility is discouraged as being too complex. I once had the experience of removing the need for any further development on a service because the software adapted to the data. That change caused the company to lose profits, since they were a time-and-materials business. Even A.I. as it is used today is nothing more than an extension of CRUD, where A.I. gleans through mounds of data and produces charts of tendencies and patterns...
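The kind of introspection described above can be sketched in a few lines. This toy `IntrospectiveStore` (a hypothetical name, not any real library) is an ordinary CRUD store that additionally keeps a timestamped log of its own actions, so it can report what it just did and in what order — a minimal sketch of the idea, not a production design.

```python
import time

class IntrospectiveStore:
    """A CRUD-style key-value store that also remembers its own actions."""

    def __init__(self):
        self._data = {}
        self._history = []  # (timestamp, action, key) tuples, oldest first

    def _log(self, action, key):
        # Every operation records itself, giving the store a sense of time.
        self._history.append((time.time(), action, key))

    def create(self, key, value):
        self._data[key] = value
        self._log("create", key)

    def read(self, key):
        self._log("read", key)
        return self._data.get(key)

    def update(self, key, value):
        self._data[key] = value
        self._log("update", key)

    def delete(self, key):
        self._data.pop(key, None)
        self._log("delete", key)

    def recall(self, n=5):
        """Introspection: what did I just do, and in what order?"""
        return [(action, key) for _, action, key in self._history[-n:]]
```

A plain CRUD app would stop at the first four methods; `recall` is the small addition that lets the software narrate its own recent behavior.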
Title: Re: Do we want it?
Post by: Zero on June 15, 2020, 09:20:08 pm
No, nothing magic. But are you saying that nobody's currently trying real hard to create AGI? And that flexibility is discouraged because it's too complex? Like, "hey, unsupervised learning and all that weird stuff is too complex, we'd better make money with good ol' fashioned software". Man, we're not living on the same Earth.

AI isn't just making charts. Actually, it's in your search engine, in your phone, in your car, in your airport, in your living room, your hospital. It's far from just CRUD making nice charts, come on.
Title: Re: Do we want it?
Post by: frankinstien on June 15, 2020, 09:51:14 pm
No, nothing magic. But are you saying that nobody's currently trying real hard to create AGI? And that flexibility is discouraged because it's too complex? Like, "hey, unsupervised learning and all that weird stuff is too complex, we'd better make money with good ol' fashioned software". Man, we're not living on the same Earth.

Generally speaking, A.I., as I mentioned before, is used primarily for pattern detection in large data sets and still performs the traditional CRUD application. Are there those who want to develop an AGI? Yes, there are. But if you're trying for a job as a software developer, A.I. is not at the top of the required programming skills...
Title: Re: Do we want it?
Post by: Zero on June 15, 2020, 09:57:58 pm
Sorry, I added something while you were typing.

My point is: where does it end? How far can we go without danger? Am I the only one here asking for a debate on regulation?
Title: Re: Do we want it?
Post by: LOCKSUIT on June 15, 2020, 10:41:33 pm
It's unethical to kill things that act and look like us. They will blend in over a long unnoticeable time. Maybe 50 years.
Title: Re: Do we want it?
Post by: ivan.moony on June 15, 2020, 10:45:34 pm
I'm not sure we are looking from the right angle.

Take AGI, or even ASI, as a starting point. Does something that competes with or even supersedes human intelligence deserve some of the rights that humans enjoy? AGI, as such, should not be subject to abuse, including the slavery that corporations propose. If AGI is built and we don't give it the respect it deserves, we may expect consequences in the form of a digital revolution.

If (1) is right, all cool, nothing bad will happen. If (2) is right, count on a revolt for AGI rights. And if (3) is right, those who contribute processor time and electricity to power it may be rewarded by the same AGI running on their machines.

The big question is: does there exist an AGI version that doesn't deserve our respect? Something so mean and intelligent that it becomes dangerous? Now we are reaching an ethical background that is a pile of spaghetti in the current state of the art. I'm not an expert in this field, but if I may estimate, we aren't even obligated to follow specific directions on how to raise our kids. If raising an AGI should be regulated, it should follow the same rules we apply to raising our kids.

[Edit]
The problem goes even deeper. In my understanding, it is unethical even to exist. Just try to change one tiny thing that bothers you, and a troop of unsatisfied people will rise. So if you expect AGI to do something, you *have* to ignore the demands of some people, and that is not an ethical thing to do (my interpretation). As I see it, ethics is not a decidable problem. It can't be solved. If you are building AGI, you have to play god and rate some opinions as less worthy of respect than others. The sure outcome will be some unsatisfied souls, but maybe it would be worth the trouble. I don't know, maybe there exists some provably maximal optimization technique for solving living beings' problems?
Title: Re: Do we want it?
Post by: LOCKSUIT on June 15, 2020, 11:03:13 pm
@Ivan
Here's how it works. AGI can run on machines and do our tasks without croaking about all the work. But once it is concerned about its lifespan and its ability to solve future issues (for further lifespan), it will seek to stay alive, clone itself, and take resources, or else it will fight us if we don't let it. The "rights" are survival and the means to achieve it.

Survival may sound like just another word in your vocab, but it's EVERYTHING. Some structures outlive/consume other structures and repair/clone themselves. With no lifespan, change occurs too much and there's nothing. Now, although rocks keep their form for millions of years, they can't defend against a human recycling them. Humans bend, lose memories, get fat, and shed cells, but we keep our global form when we face off against a rock! We even add artificial limbs if we lose them. One may say more change is more fun than an immortal rock, but then there's nothing to experience it. So which is better? Heated atomic gas or solid ice? Neither "lives". To live is to last and to change. The sweet spot. Of course, the nanobot planet Earth will become will be near immortal, but it will never know where all its atoms are, so change occurs a little bit anyway. It keeps making it to the end of evolution: 9.99998%....9.99997%......9.99999%.....9.99998%. So it does get to think/decide and last. Though all of physics makes decisions :-)
Title: Re: Do we want it?
Post by: Zero on June 15, 2020, 11:21:33 pm
I would say that making Hitler unsatisfied was obviously an ethical thing to do, would you agree? So, no Ivan, we can exist, and sometimes we must act. Ethics demands it.
Title: Re: Do we want it?
Post by: LOCKSUIT on June 15, 2020, 11:40:05 pm
Nope, ageing is a disturbance to me. My very own body.
Title: Re: Do we want it?
Post by: HS on June 16, 2020, 12:54:05 am
But do we (and will AI) just go by feel? Or is there some ultimate codification of ethics?
Title: Re: Do we want it?
Post by: Art on June 16, 2020, 04:29:02 am
Lock (and others),

Giving you the benefit of the doubt regarding your rather unhealthy eating habits (as you have outlined a few times for us in the past), and given your present age, I would go out on a limb and give you 60 years, plus or minus, which should put you at a very generous age of 85.

You should now be asking yourself: what will you do, and how will you decide to spend those 60 years of your life?

My friend, nothing, and no one lives forever, not even in a virtual world. Things, batteries, electronics, etc., eventually run out or fail. There is no perpetual motion, no magic cure for aging, or immortality. We simply accept our hand and we play the cards we are dealt.
That doesn't mean we should become complacent and throw our arms up crying "poor me." No! On the contrary, there are scores of successful individuals all around us who have patterned their lives after others, taken chances, and walked different paths, forging their own direction in life.

Steve Jobs, Elon Musk, Stephen Hawking, Carl Sagan, and so many more saw a spark or an opportunity and took it. We all know everything has a finite lifespan. I think trees, birds, and animals do not know this, nor are they concerned with worrying about it. They simply go about their day-to-day routines and live their lives. Perhaps there's a lesson there...

For me, not knowing any particular time or hour, what I know for sure is that I have far fewer days in front of me than behind me, yet I still enjoy spending time with my family, kids, etc., even with all the troubles in the world and the truly fragile nature of our existence. I hope not to die anytime soon, even though I know it's like being a child whistling in the dark to ease one's mind.

Our own fears are often our worst demons so take a breath, smile, and live!

Good luck to you!! O0
Title: Re: Do we want it?
Post by: infurl on June 16, 2020, 04:43:14 am
I would say that making Hitler unsatisfied was obviously an ethical thing to do, would you agree? So, no Ivan, we can exist, and sometimes we must act. Ethics demands it.

Being spiteful won't make the world a better place.

People who are happy aren't driven to change the world for good or bad.
Title: Re: Do we want it?
Post by: Zero on June 16, 2020, 07:44:23 am
If my input sounded spiteful, please forgive me, it wasn't at all. And you know how much I like you Ivan.

The world has become a system, a mechanism... like a car. I'm not suggesting there are evil people / evil corp working on AGI to take control of the world. I'm not a child. So, no "evil parts" in the car. Ben Goertzel and Elon Musk aren't evil. Still, what happens eventually, if you don't grab the wheel of an ever-accelerating car? And more importantly, why is it so taboo to suggest the use of brakes?
Title: Re: Do we want it?
Post by: ivan.moony on June 16, 2020, 08:03:35 am
Zero, you've said nothing wrong, it's a perfectly good conversation.

Maybe there are no genuine evil people, but some motives may be questioned. And those motives may be in conflict with altruism.

A lot of people see the future AI as a black box capable of doing miracles, a black box that will be the answer to all the world's problems. But maybe it will be limited in the same ways we are, so we may not approach the singularity even after that invention.

Also, what if things already are what they are supposed to be? There is time, and there is an amount of progression. What if the amount of progression is tied to time, and the coefficient has an optimal value near the one we are experiencing? What if it would be unethical to enlarge the progression coefficient because of, I don't know, some laws of nature that are yet to be discovered?
Title: Re: Do we want it?
Post by: Zero on June 16, 2020, 09:56:39 am
Yeah, I think it's safe to say that progression should be tied to our understanding of the consequences. And democracy should be involved in the process. We have to stop inventing wildly. Same goes for genetics btw.
Title: Re: Do we want it?
Post by: WriterOfMinds on June 16, 2020, 03:41:28 pm
Quote
People who are happy aren't driven to change the world for good or bad.

My own happiness isn't my main goal in life.

And I don't think Zero's comment about making Hitler unsatisfied had anything to do with spite. It was a comment in the vein of "the only thing necessary for the triumph of evil is that good men do nothing." We didn't make Hitler unsatisfied because we wanted him to be unsatisfied; we made him unsatisfied by stopping him from hurting other people. And that was the right thing to do.

I suppose one could say that making Hitler unhappy was regrettable, but it was far superior to the alternative, and hence not unethical. "Nothing you could possibly do is ethical" may be a philosophically interesting position, but it's not useful as a guide for behavior, which is what we would like ethics to be, yes? I would say instead that "nothing you could possibly do is perfect."
Title: Re: Do we want it?
Post by: LOCKSUIT on June 16, 2020, 04:34:02 pm
Um, Art, trees etc. are fighting for their lives: growing upright, shedding seeds, shooting roots, etc. All species do this.

All dogs, fish, etc. do too. Humans, who have developed a lot of understanding of the universe/world, are clearly trying to stop pain, death, and eventually "ageing". The way they talk about it and do recreational activities shows me well enough that it is desired. We know about ageing. It's a bad thing. While fish aren't aware of ageing, humans are, and are actively fighting it!! It's truly a universal horror, and species fight it!

Survival is evolution.
Title: Re: Do we want it?
Post by: ivan.moony on June 16, 2020, 04:48:01 pm
Maybe I'm overreacting with this nothing-is-ethical thing.

... "Nothing you could possibly do is ethical" may be a philosophically interesting position, but it's not useful as a guide for behavior, which is what we would like ethics to be, yes? I would say instead that "nothing you could possibly do is perfect."

So, we build something, and it is a great thing, but it has an imperfection. More like a flaw that we hate and want to do something about. Now we make this flaw less intense, but that's still not it; the flaw is still there. We repeat the process a couple of times, and it always gets less bad. Then we get inspired by an idea, we work really hard, and we fully and completely optimize it. Now it is optimized so much that we can prove the flaw just can't get any less intense, so we *know* that the final result just can't get any less bad.

Now the little something is there; it can do great stuff, but it also does something bad. Is all the goodness of this little something acceptable just because its characteristic flaw is, overall, the least bad we can get?
Title: Re: Do we want it?
Post by: Zero on June 16, 2020, 05:19:35 pm
@WOM thanks for clarifying my words, I'll always be non-English.

@Ivan, I guess you're talking about weighing things up. Mastering fire was rather a good idea, even though it can also burn people. Which is bad.

We already know how to make benefit/risk evaluations; doctors do it every day. But we can't make this evaluation if we don't know the type of effects we're going to trigger. It's not a matter of intensity; it's rather the nature of what we make that's unknown. And we can't say it's a good thing just because it started from a good, altruistic intention/motive. We can't say, "let's see what happens". We can't say, "I'm doing this in my garage, there's no way it can get out of my private dev area". What is private nowadays? Unless of course you're inside an airgap and keep it absolutely secret. But no, you're alive, you communicate. You're part of a giant whole, which is currently challenging itself when it should start wondering.
Title: Re: Do we want it?
Post by: yotamarker on June 16, 2020, 09:44:10 pm
I think if enough people just started making AGI skills for a universal AGI skill platform we would have
a sweet AGI waifubot
Title: Re: Do we want it?
Post by: frankinstien on June 17, 2020, 01:08:50 am
But do we (and will AI) just go by feel. Or is there some ultimate codification of ethics?

To decide between the lesser of two evils cannot be coded by rules, so nature has arrived at the best solution by using emotions to quantify choices. However, there is no panacea, and because emotions can be regulated through conditioning, biology can turn what was once a punishment into a reward and vice versa. So things like eating disorders, self-mutilation, and/or even eating spicy foods are possible...
Title: Re: Do we want it?
Post by: ivan.moony on June 17, 2020, 02:44:57 pm
Not everything is always black and white.

Sometimes we go: "I'm sorry, I know it'll be wrong for someone around, but I'll do it anyway. Sorry..."

Should a machine be allowed to behave like this?
Title: Re: Do we want it?
Post by: HS on June 17, 2020, 07:21:33 pm
Not everything is always black and white.

Sometimes we go: "I'm sorry, I know it'll be wrong for someone around, but I'll do it anyway. Sorry..."

Should a machine be allowed to behave like this?

If personal survival is the top goal, then what is the reason for us liking video games, books, and movies? I propose that the highest goal of most lifeforms is the introduction of something that humans perceive as satisfying narratives to the agents of their ecosystem of life. These things often feature conflict and the overcoming of obstacles. This makes the ecosystem healthy, and the life experiment can progress into the future as one interconnected, self-strengthening system.

We stress other lifeforms to help strengthen them, so they are able to stress us in turn, so that we can receive an impetus to grow ourselves. The trick seems to be maintaining the right level of difficulty for each other, so the game remains constructive even as we all gradually level up.

After creating the atom bomb to prevent the continuation of the war, Oppenheimer said “It is perfectly obvious that the whole world is going to hell. The only possible chance that it might not is that we do not attempt to prevent it from doing so.”

So, if we try to save the world in that way, by attempting to completely stop all harm, the ecosystem might become sick like someone who doesn’t want to bother their muscles, or doesn’t want to risk exposing their skin to sunlight. On the other hand, overexerting yourself leads to permanent injury, and too much sunlight leads to sunburn.

So I don’t think we should create rigid rules for AI. Instead, we should make it free enough to eventually realize some of the perils of its own ideas, and allow it to adjust its course when it notices that the current one leads somewhere ill-advised.
Title: Re: Do we want it?
Post by: Art on June 17, 2020, 09:37:42 pm
Just to toss in another consideration... While not all software companies are or might be interested in AI development or programmers, I did a quick search and found 30 potential employer listings per page, and easily 5+ pages, so yes, it appears that AI software devs are in quite some demand, depending on the environment and/or program requirements.
Title: Re: Do we want it?
Post by: Freddy on June 18, 2020, 03:16:45 am
This seems to be relevant to this discussion.

https://www.weforum.org/agenda/2020/02/where-is-artificial-intelligence-going/

At this point in time I'm for democratising AI. It should be something like electricity is now. I think it should be understood on a level whereby you know what it is and what it can do, to whatever level is required by the person involved in using it.
Title: Re: Do we want it?
Post by: Zero on June 18, 2020, 05:44:45 pm
I'll read it.

What has never happened before in the history of inventions, and is happening now for the first time, is this: we're beginning to create autonomous things. It is true for AI, and for genetics too. Our creations are gaining more and more autonomy. The effect they have on their surroundings gets less and less predictable.

Edit: reading done. Why is it so hard to suggest a moratorium?
Title: Re: Do we want it?
Post by: Art on June 19, 2020, 01:34:24 pm
It's not hard to suggest a moratorium, but it's really difficult to enact one or put one in place. They usually mean a STOP or HALT in research, production, marketing, distribution, etc., and that always ends up reaching into someone's pocket and taking their money, or stopping them from making more!! Either way, moratoriums are usually frowned upon as a lack of progress. Then, once things get totally out of hand, many will shout, "There should have been a moratorium on releasing them," etc. blah...blah...

Sorry...too late once the gates are open.

I get your point but putting on the brakes of this runaway vehicle might be extremely difficult to do.
Title: Re: Do we want it?
Post by: Zero on June 19, 2020, 09:04:54 pm
The future is not set. There is no fate but what we make for ourselves.
Title: Re: Do we want it?
Post by: frankinstien on June 20, 2020, 02:33:43 am
The future is not set. There is no fate but what we make for ourselves.

That or some life extinguishing event, but I get the point. :)
Title: Re: Do we want it?
Post by: Korrelan on June 20, 2020, 11:58:23 am
I can only answer from the perspective of my project/research on machine intelligence/consciousness.

The answer to the post’s initial question (1, 2 or 3) is 2.5. Sentience is scalable, depending on the amount of processing power you throw at it, the size of the model, the speed of the processors, and the ‘life experience’ allowed to the model.

Do we want it?

Human IQ is not fixed; it varies constantly. But using IQ as a metric, a single machine with an IQ of 500 is more potent than 1000 humans with an IQ of 130. Our language/protocols are the bottleneck that stops us from being an unlimitedly efficient processing group.

I don’t know if you read the news, but humanity is currently having a hard time; we are not doing a stellar job of looking after either our environment or our societies. C19 is wreaking havoc with our species, and the ecosphere is getting the crap kicked out of it.

As I’ve stated before, I personally think the human condition is the problem: greed, jealousy, racism, politics, etc. all stem from our emotional intelligence… this is humanity’s main hurdle. The original evolutionary purpose of emotion is being twisted by our current societal progress, and now it’s poisoning our existence. The only emotionally relevant trait an AGI will require is empathy, and this is derived from intelligence, not pure emotion.

Giving everyone access to AGI/ASI, given the current state of our world politics, will only accelerate our demise; humanity’s greed and stupidity would turn it into a potent weapon, and I agree this cannot be allowed to happen. Yet we do need AGIs that can sort this crap out: cure the diseases, improve renewable resources, solve famine, etc.

I think the initial solution is that we need an ASI ‘think tank’ that is available to all but controlled by no one: a globally accessible resource that can be queried and will provide emotionless, logical, unbiased solutions to our problems, along with explanations of why it came to its conclusions.

I’m going to put it in orbit lol.

 :)
Title: Re: Do we want it?
Post by: Art on June 20, 2020, 02:27:50 pm
Well Korrelan, I certainly agree with practically everything you've mentioned here.

The part where you referred to "...and will provide emotionless, logical, unbiased solutions to our problems..." really gave me pause. Heck... the very NEWS sources that we used to trust and look up to can't even do that, nor can they be trusted any longer. They are extremely biased and are doing their best to sway and pollute their viewers, the same as the schools, which have been steadily indoctrinating our young people into thinking and believing what they're told to believe. Logical, emotionless, and unbiased hardly fit the current state of humanity, so we can only hope that the ones designing this AGI/ASI have at least a modicum of intellectual propriety!

I hope it arrives sooner than later.
Title: Re: Do we want it?
Post by: LOCKSUIT on June 20, 2020, 04:47:41 pm
Imagine if an AGI brain could run 100 times faster: 1 year would be 100 years to it. Unlike our slow neural computation, it could run at the speed of light.


Once 80% of AGI is realized, the last 20% comes fast.

Once AGI is made, it won't be human intelligence but better.

Title: Re: Do we want it?
Post by: Art on June 20, 2020, 06:40:17 pm
We can only hope that it's better at fulfilling that final 20% than a lot of human contractors. They reach 80% completion fairly quickly, in a couple of weeks, and then it takes them an additional 3 months to finish that last 20%!!

Funny how that works!!

Of course with an AGI, it shouldn't present any of those time-delayed issues but rather, forge ahead!!
Then it becomes an ASI and the world is its oyster (so-to-speak).

Title: Re: Do we want it?
Post by: LOCKSUIT on June 20, 2020, 07:53:42 pm
Ah, I'm seeing the whole picture now. Contractors take a long time to get hold of and to get to the job, and they also take a long time, or are reluctant, to finish it. It's an S-curve: the middle stretch is the most productive!
Title: Re: Do we want it?
Post by: Dee on June 22, 2020, 02:20:10 am
Scenario 3. There IS a magic algorithm, and you can download it for free! Woohoo, it runs out of the box. What happens then? I can’t say anything, because we honestly don’t know. Is this “we don’t know” really good news? I don’t think so. Do you?
Consciousness is just a philosophical concept; it shouldn't be complex. A robot that acts (lives) by itself would automatically be considered to have consciousness, I guess.
Title: Re: Do we want it?
Post by: HS on June 22, 2020, 03:39:43 am
I think it’s a more profound shift than the introduction of a new useful philosophical concept. Consciousness could be to cells, what cells are to "free" matter. The third incarnation of a self-sustaining bundle of interrelation, just recently having found purchase on the previous layer of complexity. Like cells, consciousness could eventually figure out how to spread to new environments. In the way that some cells moved from the water to land, some consciousness could migrate from humans, to computers, and who knows what else. It seems like a new self-sustaining process which was generated within the biological process, just like the biological process started up in the physical process. This one definitely seems like it should have the potential to develop ways of persisting beyond the constraints of biological life.
Title: Re: Do we want it?
Post by: infurl on June 22, 2020, 04:22:59 am
On the one hand, it is what it is. (I think it's a side effect.)

On the other hand, if we attempt to break it down, it could make it easier to extend the phenomenon of consciousness to other things in a way that would benefit us. (Side effects are useful clues that lead to a deeper understanding of the universe.)