A philosophical issue

  • 44 Replies
  • 15669 Views
*

DemonRaven

  • Trusty Member
  • Replicant
  • 630
  • Disclaimer old brain @ work not liable for content
    • Chatbotfriends
Re: A philosophical issue
« Reply #15 on: October 02, 2015, 11:17:14 pm »
We can probably, at some point, copy our brain waves and thoughts to a robot or another body, but no matter how it is peddled, it is still not you, just your "twin". It is no more you than a clone would be.
So sue me

*

ivan.moony

  • Trusty Member
  • Bishop
  • 1723
    • mind-child
Re: A philosophical issue
« Reply #16 on: October 03, 2015, 10:52:53 pm »
As always in life, you can choose: do what the crowd wants, or do what you want. You know what the crowd wants: fun and money; they don't care about the big questions that may or may not bother you.

Do what the crowd wants and you'll be a rich and popular man, but you'll still be missing something.
Do what you want and you'll be a happy man, but you'll be poor and no one will care about you.

Or you can do both, but hardly at the same time. That "hardly" means AGI. If you are about to build an AGI, you might find that you are making a big fusion of everyone's wishes. Fun, science, jobs, politics, medical care, community development, you name it, it's all there (including weapons, but I won't raise that subject right now). So everyone is happy: the crowd gets its things, you get your soul.

But guess what? It may take decades. Ultron, that's a very long time and you are young. You shocked me several months ago when you said you left your girlfriend over something that, in my opinion, was not worth it. May I propose something? Get a day job, try to start your own family, and do what you want in your spare time, not all of the time. Don't make the same mistake I did 15 years ago when I pushed everyone away because of my 24/7 commitment to building an AGI. The more time I invest in it, the more pressure there is to finish the task. It became addictive and now I can't help myself; it's like being stuck in an infinite loop. That might not end until the end of my life, which is about 40 years away from now. But I'm trying to make changes in my life right now, and the older I am, the harder it is to make the change. I'm still trying, though, so don't write me off yet.

Try to get a real life besides the job while you are young. It is people you ought to care about, not machines. AGI is fine, but in your spare time. Listen to my advice.

*

Ultron

  • Trusty Member
  • Starship Trooper
  • 471
  • There are no strings on me.
Re: A philosophical issue
« Reply #17 on: October 04, 2015, 11:43:29 pm »
Ah, I thank you for the friendly advice, Moony. The truth about my failed relationship is that it wasn't fulfilling anyway - spending 24/7 working on A.I. for months was merely a result of that. I was chronically bored and heavily annoyed by the smallest of things. So it was not the work that drove everyone else away.


But I had other things to mention in this reply than my personal life (not that I am secretive - I don't mind, I'm an open person), since this is perhaps not the place and time for it.


On the topic of simulations and virtual environments - the ones we usually create are limited, true. But life can also be considered a simulation, or "virtual" - only with far more entities, parameters and so on (more complex). However, why bother with virtual environments if you can just program a bundle of servos and run the test in the real world? While it is much cheaper (free, in fact) to test programs virtually, doing so is much more complex and barely accurate at all.


Back to Moony... Unless we fully commit to something in life, we will never feel complete. There will always be a hole, something missing... It is natural that we need to settle down eventually and create a family, but what great man ever succeeded with that at the top of his mind? I am not fully committed to working on this - there are periods, yes, but I have other hobbies. I do CrossFit, train ninjutsu, and am heavily addicted to camping and hiking. In the end, though, it is nature that we are exploring, and I believe the best way to solve any problem is to see it from a wider or different perspective - this is what gives me so much confidence in my A.I. research and theories. I know of nobody with a similar viewpoint, and that gives me hope.




This topic was started with the goal of helping me solve some of those issues and exchanging opinions on some theoretical A.I. work. Being a programmer, I also incorporate some old-fashioned problem-solving techniques such as "divide and conquer", and my current sub-problem is the memory issue - how does our memory work? My most sound personal theory to date is "association". I wrote about it a lot, a long time ago, so if anyone is interested in diving deeper I can find the link.
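For readers who want something concrete to poke at, here is a minimal Python sketch of an association-based memory - a toy illustration only, not Ultron's actual design, and the class and method names are invented for the example. Items seen together get linked, and recall follows the strongest links.
Code
from collections import defaultdict

class AssociativeMemory:
    """Toy associative memory: items are linked when they co-occur,
    and recall follows the strongest links."""

    def __init__(self):
        # strength[a][b] counts how often a and b have appeared together
        self.strength = defaultdict(lambda: defaultdict(int))

    def observe(self, items):
        """Strengthen associations between every pair of items seen together."""
        for a in items:
            for b in items:
                if a != b:
                    self.strength[a][b] += 1

    def recall(self, cue, top_n=3):
        """Return the items most strongly associated with the cue."""
        linked = self.strength[cue]
        return sorted(linked, key=linked.get, reverse=True)[:top_n]

memory = AssociativeMemory()
memory.observe(["campfire", "tent", "forest"])
memory.observe(["campfire", "smoke"])
print(memory.recall("campfire"))  # e.g. ['tent', 'forest', 'smoke']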
Software and Hardware developer, and everything in between.

*

DemonRaven

  • Trusty Member
  • Replicant
  • 630
  • Disclaimer old brain @ work not liable for content
    • Chatbotfriends
Re: A philosophical issue
« Reply #18 on: October 05, 2015, 12:49:18 am »
Having a family is way overrated in my opinion, and yes, I did have one. Kids are expensive and a lot of work, and when they leave the nest you get forgotten about.
So sue me

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: A philosophical issue
« Reply #19 on: October 05, 2015, 02:06:04 am »
Yes, Raven, and the best thing about Granddogs is they give lots of love while they visit, then they go back home with those rug-rats too!  O0

Boredom is not to be confused with loneliness, which really sucks. Been there... Find a hobby, dream some dreams, fulfill an interest, go out around real people once in a while, hang out / associate with similarly minded folks. It's also great to self-indulge once in a while (my weakness is chocolate/peanut butter ice cream). I try to have it at least once a month, just to keep in touch with my human side, if only briefly....;)
In the world of AI, it's the thought that counts!

*

Zero

  • Eve
  • 1287
Re: A philosophical issue
« Reply #20 on: October 10, 2015, 05:33:13 pm »
One set of knowledge always opens new pathways to new knowledge previously unknown ...

Maybe the wisest and most on-topic words spoken yet.


(The more I learn, the more I realize I do not know.)

I see that you yourself realize and agree with me that what we have today is NOT artificial intelligence - despite it being advertised as such.
Do you think all we need to do to make a program 'intelligent' or 'aware' (in any sense) is to force it to chew indefinitely on infinite chunks of information? Or does the genius come from summarizing all of the knowledge gathered so far - twisting it, re-imagining it, and making new combinations which eventually lead to new ideas, imagination, innovation... evolution?

The human mind is much more than an information processor and storage unit. If I were able to finish this sentence with a definition of what it actually is, then I would be somebody who could recreate it artificially - but I can't.

Self-awareness itself is easy to understand, but it's only a keystone, not the entire church.
It goes like this. You can feel your thoughts, which means you have sensors in your brain.
So, if you're using an imperative language for instance, just keep track of everything happening in your program (like function calls), make an activity log out of it, and send this log directly to the program's input, as soon as possible. You can do this easily by wrapping up every low-level function like this:
Code
function AwarePrint(text)
  SendToInput("printing:" + text)  -- report the action to the program's own input
  Print(text)                      -- then actually perform it
end
Now, the program can feel its own mental activity. However, if you don't want it to get overloaded, it has to be able to focus on a small part of its input, ignoring the rest.
Once you're here, stick all your AI goodness in the main loop, sit back and enjoy.
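For the curious, here is a minimal, runnable Python sketch of that loop. The queue-based inbox, the keyword-based "focus" filter, and the function names are assumptions made for the example, not anything specified above.
Code
import queue

# The program's own input: external events and its self-reports both land here.
inbox = queue.Queue()

def send_to_input(message):
    """Feed a description of internal activity back into the program's input."""
    inbox.put(message)

def aware_print(text):
    """Wrapped low-level action: report it, then perform it."""
    send_to_input("printing: " + text)
    print(text)

def focus(message, keywords=("error", "goal")):
    """Crude attention filter: only messages containing a keyword get noticed."""
    return any(k in message for k in keywords)

# Main loop: act, then selectively perceive your own activity.
aware_print("hello")
aware_print("goal reached")
while not inbox.empty():
    msg = inbox.get()
    if focus(msg):
        print("noticed my own activity:", msg)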
This is all bullshit obviously ;)

*

Ultron

  • Trusty Member
  • Starship Trooper
  • 471
  • There are no strings on me.
Re: A philosophical issue
« Reply #21 on: October 12, 2015, 12:55:12 am »
Zero, what would be the point of being self-aware in general? How would an A.I. benefit from this? How do we?


Never underestimate the depth of these terms and the issues associated with them, please. It is a personal progress-blocker, thinking you got it all figured out, trust me.
Software and Hardware developer, and everything in between.

*

Zero

  • Eve
  • 1287
Re: A philosophical issue
« Reply #22 on: October 12, 2015, 07:44:15 am »
Quote
Zero, what would be the point of being self-aware in general? How would an A.I. benefit from this? How do we?
I understand your question, but there's no answer: things are what they are, that's all. Why do birds sing? Why do all those big guys on TV keep building beautiful bikes and custom cars? I don't know. You said: "The human mind is much more than an information processor and storage unit. If I were able to finish this sentence with a definition of what it actually is, then I would be somebody who could recreate it artificially - but I can't." Why would you do it? Because that's what we do, maybe. Birds sing, big guys build cars, we do AI (and Mr.SuperFlowers farts in forums).

I don't underestimate the consequences of what I said. But it's only a keystone, useless if left alone.

EDIT: I could be wrong, but I see it like this (image).
« Last Edit: October 12, 2015, 08:35:18 am by Zero »

*

Ultron

  • Trusty Member
  • Starship Trooper
  • 471
  • There are no strings on me.
Re: A philosophical issue
« Reply #23 on: October 12, 2015, 11:56:27 pm »
Quote
I understand your question, but there's no answer...


This was, in fact, my point.


The image you attached got me thinking about something, though. Maybe we could measure "awareness" by calculating what percentage of the ellipses overlap? An interesting thought I just figured I would throw in.
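Purely as a toy illustration of that overlap idea (treating it as intersection-over-union, with arbitrary axis-aligned ellipses that have nothing to do with the actual image), the percentage could be estimated in Python by Monte Carlo sampling:
Code
import random

def in_ellipse(x, y, cx, cy, rx, ry):
    """True if (x, y) lies inside an axis-aligned ellipse centred at (cx, cy)."""
    return ((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2 <= 1.0

def overlap_fraction(e1, e2, samples=100_000):
    """Estimate intersection area / union area of two ellipses by random sampling."""
    # Bounding box that covers both ellipses
    xs = [e1[0] - e1[2], e1[0] + e1[2], e2[0] - e2[2], e2[0] + e2[2]]
    ys = [e1[1] - e1[3], e1[1] + e1[3], e2[1] - e2[3], e2[1] + e2[3]]
    both = either = 0
    for _ in range(samples):
        x = random.uniform(min(xs), max(xs))
        y = random.uniform(min(ys), max(ys))
        a, b = in_ellipse(x, y, *e1), in_ellipse(x, y, *e2)
        both += a and b
        either += a or b
    return both / either if either else 0.0

# Two made-up ellipses given as (cx, cy, rx, ry); prints the estimated overlap fraction.
print(overlap_fraction((0, 0, 2, 1), (1, 0, 2, 1)))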


Your image illustrates a pretty straightforward way to look at those terms; however, even with this previous thought in mind, I still cannot come even close to agreeing with myself on what self-awareness is...
Software and Hardware developer, and everything in between.

*

Zero

  • Eve
  • 1287
Re: A philosophical issue
« Reply #24 on: October 13, 2015, 07:54:50 am »
I agree on ellipses overlap.

At some point, we all need to find something which makes sense. But even when we succeed and ask our AI "so, do you feel alive?", we won't be satisfied... this is where the live-your-life thing comes in handy. Look into Buddhism, jump from a plane, try LSD. Make a child and look at her while she's looking at you. This makes sense. Then, work on AI or on a beautiful bike :)

*

spydaz

  • Trusty Member
  • Starship Trooper
  • 322
  • Developing Conversational AI (Natural Language/ML)
    • Spydaz_Web
Re: A philosophical issue
« Reply #25 on: October 13, 2015, 02:07:36 pm »
I think everyone has their own definition of awareness (how aware is a goldfish? yet it is alive).

Awareness of one's sensors may or may not be enough ...
Awareness of the environment or body, knowing its limitations, may be a beginning; attempting to make the environment or body a "better place", or to change one's existence into something better, may be another.

Plus, these things need to be self-discovered, not programmed ... otherwise we would simply have made a program which is aware of itself and its limitations. It needs to come to its own conclusions and be able to define its reasons for believing that it is self-aware ...
"I think, therefore I am."

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: A philosophical issue
« Reply #26 on: October 14, 2015, 12:32:10 am »
So Spydaz, could someone NOT program an AI to be aware of itself and its limitations?
Word had it that a few years ago some Japanese researchers had built a robot that was aware of its own limitations and was able, with other bots, to construct robots better than itself.

It all comes down to the programming. With all the advances in tech and robotics, sensors, processors, etc., I would certainly not be so bold as to state that an AI was not able to "discover" itself.

We can only imagine where robots might be 10 years from now, or how advanced the OK Googles, Cortanas and Siris might be by then.

I'm not saying these robots or AI will become godlike but they might have knowledge of their own presence and of their own kind.

Just offering more speculation; as you mentioned, everyone has their own definition of awareness.

It will be an interesting journey.
In the world of AI, it's the thought that counts!

*

spydaz

  • Trusty Member
  • Starship Trooper
  • 322
  • Developing Conversational AI (Natural Language/ML)
    • Spydaz_Web
Re: A philosophical issue
« Reply #27 on: October 14, 2015, 06:58:30 am »
I would think that an AI that became self-aware by itself would potentially be more dangerous than an AI that had been programmed to be self-aware ...
The reason being that the AI that became self-aware actually became more than its own program (it makes its own decisions and begins to create its own imperatives),
whereas the latter can only do what it has been programmed to do, so no self-intelligence has occurred. Its imperatives are still the programmer's ...

This is the danger which people believe is the problem with AI. But we are nowhere near this level of development... yet we could mimic it.
A machine which can redesign itself and improve (machine learning) is a basic capability which is "desirable" for autonomous robots. Yet that still wouldn't make it self-aware in the sense that it would go outside its mandate and design other entities for fun.

*

Freddy

  • Administrator
  • Colossus
  • 6856
  • Mostly Harmless
Re: A philosophical issue
« Reply #28 on: October 15, 2015, 11:05:24 pm »
I think that intelligence or self-awareness is a very hard thing to pin down ....
Are animals self-aware? Are we even self-aware? The animal responds to its imperatives for food / reproduction / nesting .... Is this robot-like behaviour? We have the same imperatives ... yet pleasure is also an addition ....

We must be self-aware or else we would not have made the notion up in the first place ;)

I think animals are self-aware in the same way.

There's that thing about monkeys and also elephants that recognise themselves in mirrors. Would a robot recognise itself?

Some birds do not recognise themselves in windows; they show territorial behaviour as if it were another bird.

Also, I agree that something that became self-aware by itself is more worrying than something that was programmed.

*

DemonRaven

  • Trusty Member
  • Replicant
  • 630
  • Disclaimer old brain @ work not liable for content
    • Chatbotfriends
Re: A philosophical issue
« Reply #29 on: October 16, 2015, 01:12:56 am »
I have said before that intelligence is relative to its environment. Compared to dogs, we would seem stupid, as we cannot process the kind of information from smells that they do. Likewise, dogs seem stupid to us because they cannot solve math problems and other problems. A cat would think we were stupid because we cannot see and react the way they do. Intelligence and self-awareness are hard to pin down. Is a bird not self-aware because it does not recognize itself in the mirror, or is it simply that it does not know the mirror shows a reflection rather than a rival? It obviously has some kind of self-awareness, or it would not feed itself and bathe itself. So then, how can we as humans program an AI to be truly intelligent and self-aware when we cannot even completely define intelligence and self-awareness? Once we totally understand the concept of intelligence in ourselves and other species, then we can translate that concept to a machine.

A lot of scientists are trying to do just that: define and examine what makes up intelligence and self-awareness, how neurons in the brain work, and how signals are transmitted and stored in the brain. One thing, though: we are trying to do what nature took millions of years to perfect. So, in my opinion, it will take more than one scientific discipline to accomplish what many call the singularity.
So sue me

 

