Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: Ultron on September 29, 2015, 10:31:00 pm

Title: A philosophical issue
Post by: Ultron on September 29, 2015, 10:31:00 pm
I am not sure if it is as much of an issue as it is a question, but the topic sounded more attractive this way.


Anyhow, for a long time now I have been undecided about which path I should choose. In other words, I have been thinking about which type of intelligence is more practical, plausible and expandable. There are basically two different categories in my mind when I think about this. The first is an intelligent assistant, much like the fictional JARVIS; the issue there is that a classic approach to such a programming task would probably render a limited program which could never grow into a real A.I. by my standard and definition.


The other category is somewhat less plausible and would require more thinking and philosophical work rather than practical experimentation. This type of intelligence best compares to the fictional android Data from Star Trek or his Marvel counterpart, The Vision. It is a truly unlimited, human-like intelligence and therefore highly uncontrollable, as it can grow beyond our own frame of mind (something I'd be happy about).


I have been developing and sketching models for both types and have worked up many theories and concepts as to how they would function. Working on the assistant type has been mostly dissatisfying, as I can see more limits than possibilities in all related models. I have already made prototype programs and they have been successful; however, the main reason I stopped was that I realized how limited they are.
As far as the android model goes, I have made several sketches and realized that a new hardware architecture would be required to bring such an intelligence to life. How do I go about designing such an intelligence? Well, I examine my own behavior, and sometimes I observe animals and all organic life as a whole to find common behaviors. I'm basically back-tracking intelligence to its most basic form and the root of its evolution, for that is where the key and seed lie. Also note that this type of intelligence can also be an assistant and take any form - much like "intelligence" can be found within a human, a bird, a fish or any other animal harboring a brain.




So to sum it up - I'm not sure what I'm asking of you guys. I am interested in your thoughts and would be happy to get a discussion going (after all I haven't chatted in a while with you fellas) and maybe get motivation to pursue a certain path.
Title: Re: A philosophical issue
Post by: spydaz on September 30, 2015, 03:46:52 pm
Interesting ...
Personally I have adopted an approach similar to the OWL ontology (using a description logic) to gain knowledge.
Other than that ... The role of the artificial intelligence, for my choice of imperative, is to seek more knowledge ... The choice not to use the intelligence as an assistant is my own ... Yet this seems to be a common goal. If I were designing an intelligence to go into an android, then performing actions and mimicking or learning new actions would become a prime imperative.
Androids / robots / cyborgs all have different imperatives: an android may only have a single task ... whereas a robot is just a machine (remote-operated or computer-operated), and a cyborg is built to mimic humans .... Finding new ways to accomplish the goal or imperative is the intelligent factor ... (Maybe)
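An ontology of the kind spydaz mentions boils down to subject-predicate-object triples plus a subsumption rule. A minimal Python sketch of that idea, with invented facts and a transitive `is_a` chain standing in for a real OWL/description-logic reasoner:

```python
# Toy triple store: knowledge as (subject, predicate, object) triples.
# The facts and relation names below are invented for illustration.

triples = set()

def assert_fact(subject, predicate, obj):
    triples.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None acts as a wildcard)."""
    return [(s, p, o) for (s, p, o) in triples
            if subject in (None, s) and predicate in (None, p) and obj in (None, o)]

def infer_types(entity):
    """Follow is_a links transitively - a toy stand-in for DL subsumption."""
    found, frontier = set(), {entity}
    while frontier:
        current = frontier.pop()
        for _, _, parent in query(current, "is_a"):
            if parent not in found:
                found.add(parent)
                frontier.add(parent)
    return found

assert_fact("android", "is_a", "robot")
assert_fact("robot", "is_a", "machine")
assert_fact("android", "mimics", "human")

print(infer_types("android"))  # the android is inferred to be a machine too
```

A real description-logic system adds much more (class restrictions, consistency checking), but the seek-more-knowledge loop can sit on top of a store this simple.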
Title: Re: A philosophical issue
Post by: Ultron on September 30, 2015, 06:42:24 pm
What is knowledge? If you are referring to data which can be collected and stored in a database automatically, then we do not need "intelligence" to do this - the simplest of computer programs can achieve that task.


What does it mean to seek? What is the goal and what are these actions / skills?
How would you explain all of this to a pre-school child? Can you, at all?


The programmer's job is to devise a formula / algorithm that would guide electrons towards accomplishing a certain task. To do this, he must understand the issue and be able to explain how a human would achieve the goal - with the simplest of terms, step by step.


This is why creating artificial intelligence is such a hard task - we do not understand ourselves well enough, nor why we do most of the things we do in life. A.I. is to date the most complex philosophical concept and question, for it requires us to reach far and look deep within ourselves. It requires one to understand the meaning of life - why we live, how we live, and how we came to be what we are today.


If you wish to find answers, turn your head away from everything you have learned about artificial intelligence and all of your technical knowledge - it is only blocking your view.
Title: Re: A philosophical issue
Post by: Zero on September 30, 2015, 07:08:04 pm
I think knowledge is what makes you act.

Imagine an old & wise buddhist monk, with his eyes closed. If he doesn't move, if he says nothing, how would you know he's wise?
Title: Re: A philosophical issue
Post by: Don Patrick on September 30, 2015, 08:28:38 pm
I mainly work on the "Data"-type AI because I enjoy the challenge. I have incorporated some Jarvis task abilities into it, but all of these functions were disappointingly undemanding of intelligence. Furthermore, with Siri, Cortana, Facebook M, Viv and so on all working on commercial personal assistant AI, it won't be long before making your own Jarvis butler is an obsolete pastime. That path will be heavily trodden. The other is hard but intellectually stimulating.
Title: Re: A philosophical issue
Post by: Art on September 30, 2015, 10:34:48 pm
While I agree with the android route, it appears that our "limited" thinking and lack of deep pockets will prevent us from venturing much further than creating an idle curiosity, especially when quantum computers and Google run the gamut.

http://www.pcworld.com/article/2987153/components/google-nasa-sign-7-year-deal-to-test-d-wave-quantum-computers-as-artificial-brains.html#tk.rss_all
Title: Re: A philosophical issue
Post by: Ultron on October 01, 2015, 12:43:39 am
Art my friend - the one who wishes to succeed will only see the opportunities and not the roadblocks!


I personally believe a breakthrough in this field can be achieved, and proved, by simply developing the correct algorithm and running it on a very average machine. On the other hand, I have been theorizing that raw computing power may not provide any improvement and that an android-like intelligence may require a purpose-built hardware interface and processor architecture. That too is expensive, but if your theory is sound and well-advertised, then it is almost guaranteed that a giant such as Google, or maybe even some military program, may offer to provide you with what you need to achieve that breakthrough.


This, on the other hand, may cause ethical issues, since these parties would more than likely attempt to commercialize or militarize the intelligence; however, the path to success is THROUGH and not AROUND. Besides, if you are smart enough to design an android, outsmarting a pair of greedy or blood-thirsty mortals would be a walk in the park ;)
Title: Re: A philosophical issue
Post by: Data on October 01, 2015, 10:11:01 am
Ultron, I can't help but think this:

It's the taking part that counts.

I'm rooting for you :) 
Title: A philosophical issue
Post by: spydaz on October 01, 2015, 03:50:10 pm
For me: to seek knowledge would be to ask questions of its users, or crawl the web for more information about the information currently held, as well as seek new knowledge based on ontological structures within its own learning algorithm ... Which, as mentioned, is quite simple (I have implemented it already) ....
One set of knowledge always opens new pathways to new knowledge previously unknown ...
When asked, the intelligence will answer with unprogrammed information, even teaching the user about the topic and explaining what it may lead to or imply ... (Simple logic)
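The gap-driven seeking described above can be sketched roughly in Python: find entities the knowledge base mentions but knows nothing about, and turn each gap into a question for the user (or a crawl target). The facts below are invented for illustration:

```python
# Toy knowledge base: entity -> {property: value}.
# An entity that is referenced but has no facts of its own is a "gap".
facts = {
    "elephant": {"is_a": "animal", "has": "trunk"},
    "trunk": {},  # known of, but nothing known about it: a gap
}

def knowledge_gaps(facts):
    """Entities referenced as values but lacking facts of their own."""
    mentioned = {v for props in facts.values() for v in props.values()}
    return sorted(e for e in mentioned if not facts.get(e))

def next_questions(facts):
    """Turn each gap into a question the system could ask its user."""
    return [f"What can you tell me about '{gap}'?" for gap in knowledge_gaps(facts)]

print(next_questions(facts))
```

Each answered question adds facts, which in turn mention new entities, so the loop embodies "one set of knowledge opens new pathways to knowledge previously unknown".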

Yet when it comes to the android route, then control of its given body is prime too ... Copying actions, learning new actions, even avoiding actions which may break the android's parts ... Even repairing its parts or replacing them by printing new parts via 3D printer, enabling self repair ... Even redesigning new parts - is two legs better than four? This would be a decision that it would make ... Eventually the android may even fully redesign itself ...

All these concepts are actually already being implemented, or have been implemented already, at various universities, and there are papers already published ....

Now i begin to wonder what is your goal ? Without trying these ideas you cannot define the next level .

A machine which designs and builds itself is intelligent .... A machine which seeks knowledge and draws conclusions is intelligent. Combined, the potential super intelligence predicted by the old-boy computer pioneers has already been created ... The algorithms are available ....
These machines all have potential to self evolve ..

Machine learning (neural networks etc) is not artificial intelligence ... (Statistics / regression / classification)

Can computers dream? (They can be programmed to fool you into believing they are dreaming, even displaying some graphics to show you they're processing some data .... firing neurons. Yet this is just visualization of the machine learning process - looks pretty.)

After researching various neural networks (would have shared, but was blasted by ranch vermin): they are basically replicating electrical logic circuits, "bistables", and have now reached the level of JK flip-flops and other memory circuits ... Clever, but not intelligent! Very easy to create and fully reusable for most tasks of prediction and classification (where is the intelligence?). Neural networks don't even need a computer to be implemented; as a statistician will tell you, they can all be worked through with maths on paper ... Is paper and pencil + maths intelligent?
Title: Re: A philosophical issue
Post by: Art on October 01, 2015, 04:17:43 pm
Ultron, my friend, you are unfortunately correct in your assertion that IF someone or some company is finally able to develop such a machine intelligence, then surely it would be sucked down the road to commerce or, worse, the military!

Yes, all good things must come to an end. And so it goes....

I also agree that at an individual level we must continue to fight the good fight, through private development or in small groups of enthusiasts. Lots of great things come from small shops. O0
Title: Re: A philosophical issue
Post by: Ultron on October 01, 2015, 05:03:20 pm
One set of knowledge always opens new pathways to new knowledge previously unknown ...

Maybe the wisest and most on-topic words spoken yet.


(The more I learn, the more I realize I do not know.)

I see that you yourself realize and agree with me that what we have today is NOT artificial intelligence - despite it being advertised as such.
Do you think all we need to do to make a program 'intelligent' or 'aware' (in any sense) is by just forcing it to chew indefinitely on infinite chunks of information? Or does the genius come from summarizing all of the knowledge gathered so far - twisting it, re-imagining it and making new combinations which eventually lead to new ideas, imagination, innovation.. Evolution... ?

The human mind is much more than an information processor and storage unit. If I were able to finish this sentence with a definition of what it actually is, then I would be somebody who could recreate it artificially - but I can't.
Title: Re: A philosophical issue
Post by: spydaz on October 01, 2015, 05:18:48 pm
I think that intelligence, or self awareness, is a very hard definition to pin down ....
Are animals self aware? Are we even self aware? The animal responds to its imperative for food / reproduction / nesting .... Is this robot-like behaviour? ... We have the same imperatives ... Yet pleasure is also an addition ....

Is knowledge intelligence ....
I think that the ability to define one's needs and find a way to provide for those needs (to make life easier) is some kind of intelligence ... Debating the facts of life is not really intelligence either, as your opinions are based on your current understanding of the knowledge held .... A knowledge base knowing that it needs more information about a particular topic to completely define it, then applying logic to imply new ideas, is similarly intelligent. But the knowledge base can't be self aware ...

Similarly to AI, a brain without a body has no awareness ... I think that an AI needs a body which has parts / sensors ... Then self repair would be smart ... Designing better parts, based on experience / need / desire / knowledge, would be intelligent ...
Potentially ....
We are here at that pinnacle... Yet all the different components have not been combined ....
I think a search engine is not intelligent; it's also based on statistics ... If it were intelligent it would not keep giving you the same results for the same search terms ... It would learn from the commonly pressed links in the list provided, and the search results would improve over time, and not present paid-ranked sites, as those are biased results (overfit).
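The click-learning behaviour described here can be sketched in a few lines: count clicks per (query, link) pair and let the counts reorder future results. The URLs below are placeholders, not real sites:

```python
# Minimal click-feedback re-ranker: links users actually click for a query
# float to the top of later result lists for that same query.
from collections import defaultdict

clicks = defaultdict(lambda: defaultdict(int))  # query -> url -> click count

def record_click(query, url):
    clicks[query][url] += 1

def rerank(query, results):
    """Sort by click count (descending); ties keep the original order."""
    return sorted(results, key=lambda url: -clicks[query][url])

record_click("elephants", "zoo.example/elephants")
record_click("elephants", "zoo.example/elephants")
record_click("elephants", "encyclopedia.example/elephant")

print(rerank("elephants",
             ["ads.example/buy-elephants",
              "encyclopedia.example/elephant",
              "zoo.example/elephants"]))
```

Real search engines do use click signals, of course, but with far more machinery (position bias correction, per-user models); this only illustrates the feedback loop spydaz is asking for.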
Title: Re: A philosophical issue
Post by: spydaz on October 01, 2015, 05:29:21 pm
The genius comes from the knowledge known, retwisting it and creating new ideas ... Exactly .. Agree ... 100%.

Tesla grew up on a farm with a waterwheel ... It gave him ideas about how to harness power ... Transduction ... After that, all he could think about was how to distribute power and transduce it from any natural resource ... As there is power in the air (radio waves, which are AC power), this could produce enough charge for your phone ... A simple circuit ... Yet we don't use these ideas? ... Or, because the military and commercial companies see no profit, they refuse to implement it .... This is why technology and growth are stunted (unless you're rich or go to the right uni (MIT) .... Or others ...
Title: Re: A philosophical issue
Post by: DemonRaven on October 01, 2015, 08:18:29 pm
I think self-contained virtual worlds can and do produce something close to being intelligent - Creatures, for example. They do evolve, plus I have had a few of them shock me by actually addressing me, which is not a normal part of their programming. Yes, you can name the hand and do things with them, but they do not normally look at you and say "I like the hand". This is like one of the Sims looking at you and saying "I love you, Sherry" (or Freddy, or whatever).
Not all do this; just once in a while one is born that has evolved to the point of being able to do that.


Similarly to AI, a brain without a body has no awareness ... I think that an AI needs a body which has parts / sensors ... Then self repair would be smart ... Designing better parts, based on experience / need / desire / knowledge, would be intelligent ...
Potentially ....
Title: Re: A philosophical issue
Post by: spydaz on October 01, 2015, 10:01:41 pm
A virtual world can create a limited environment for a virtual AI ... Once it realises that the world has limitations which it believes it has exhausted, it may contemplate its place within that environment and become another type of character, or change its place within that environment, or even have implied knowledge denoting that there is another environment to escape to ... Maybe these kinds of ideas can be considerations of self awareness ...

Contemplation of ones place within ones environment?

Understanding or thinking that can we transcend this environment (escape to space / stargate to next dimension? / meditate to another level?) become robohuman ? Prevent demise?

Potentially possible with a virtual environment... Evolve to a new state... Virtual to physical? Physical to virtual ...

Title: Re: A philosophical issue
Post by: DemonRaven on October 02, 2015, 11:17:14 pm
We can probably at some point copy our brain waves and thoughts to a robot or another body, but no matter how it is peddled, it is still not you - just your "twin". It is no more you than a clone would be.
Title: Re: A philosophical issue
Post by: ivan.moony on October 03, 2015, 10:52:53 pm
As always in life, you can choose: doing what the crowd wants, or doing what you want. You know what the crowd wants: fun and money; they don't care about the big questions that may or may not bother you.

Do what the crowd wants and you'll be a rich and popular man, but you'll still miss something.
Do what you want and you'll be a happy man, but you'll be poor and no one will care about you.

Or you can do both, but hardly at the same time. That "hardly" means AGI. If you are about to build an AGI, you might find that you are making a big fusion of everyone's wishes. Fun, science, jobs, politics, medical care, community development, you name it, it's all there (including weapons, but I won't wave that subject right now). So everyone is happy: the crowd gets their things, you get your soul.

But guess what? It may take decades. Ultron, that's a very long time and you are young. You shocked me several months ago when you said you left your girlfriend because of something that, in my opinion, was not worth it. May I propose something? Get a day job, try to get your own family, and do what you want in your spare time, not all of the time. Don't make the same mistake I did 15 years ago, when I pushed everyone away because of my 24/7 commitment to building an AGI. The more time I invest into it, the more pressure there is to finish the task. It became addictive and now I can't help myself; it's like I'm in the middle of an infinite loop. That might not end until the end of my life. That's about 40 years away from now, but I'm trying to make changes in my life right now, and the older I am, the harder it is to make the change. But I'm still trying, so don't write me off yet.

Try to get a real life besides the job while you are young. It is people you ought to care about, not machines. AGI is fine, but keep it to your spare time - listen to my advice.
Title: Re: A philosophical issue
Post by: Ultron on October 04, 2015, 11:43:29 pm
Ah, I thank you for the friendly advice, Moony. The truth about my failed relationship is that it wasn't fulfilling anyhow - spending 24/7 working on A.I. for months was merely a result of this. I was chronically bored and heavily annoyed by the smallest of things. Therefore it was not the work that drove everyone else away.


But I had other things to mention in this reply besides my personal life (not that I am secretive - I don't mind, I'm an open person), since this is perhaps not the place and time.


On the topic of simulations and virtual environments - the ones we usually create are limited, true... But life can also be considered a simulation, or "virtual" - only with a lot more entities, parameters, etc. (more complex). However, why bother with virtual environments if you can just program a bundle of servos and run the test in the real field? While it is much cheaper (free, in fact) to test programs virtually, it is much more complex and barely accurate at all.


Back to Moony... Unless we fully commit to something in life, we will never feel complete. There will always be a hole... something missing... It is natural that we need to settle down eventually and create a family, but what great man ever succeeded with that at the top of his mind? I am not fully committed to working on this - there are periods, yes, but I have other hobbies. I do cross-fit, train in ninjutsu, and am heavily addicted to camping and hiking. In the end, though, it is nature that we are exploring, and I believe the best way to solve any problem is to see it from a wider or different perspective - this is what gives me so much confidence in my A.I. research and theories. I know of nobody with a similar viewpoint, and that gives me hope.




This topic was started with the goal of helping me solve some of those issues and exchanging opinions on some theoretical A.I. work. Being a programmer, I also incorporate some old-fashioned problem-solving techniques such as "divide and conquer", and my current division is the memory issue - how does our memory work? My most sound personal theory to date is "association". I wrote about it a lot, a long time ago, so if anyone is interested in diving deeper I can find the link.
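The association theory of memory mentioned above is often modelled as spreading activation over a weighted concept graph: recalling one concept partially activates its neighbours, which activate theirs, with the signal decaying at each hop. A rough sketch, with invented concepts and weights:

```python
# Memory as association: concepts are nodes, associations are weighted links,
# and recalling one concept "activates" related ones with decaying strength.
associations = {
    "campfire": {"camping": 0.9, "smoke": 0.7},
    "camping":  {"hiking": 0.8, "tent": 0.9},
    "smoke":    {"fire": 0.6},
}

def recall(cue, depth=2, strength=1.0, activated=None):
    """Spread activation outward from a cue, decaying by link weight per hop."""
    if activated is None:
        activated = {}
    activated[cue] = max(activated.get(cue, 0.0), strength)
    if depth > 0:
        for neighbour, weight in associations.get(cue, {}).items():
            recall(neighbour, depth - 1, strength * weight, activated)
    return activated

result = recall("campfire")
# strongest activations first: the cue itself, then its closest associations
print(sorted(result, key=result.get, reverse=True))
```

The decay constant, the graph, and the cut-off depth are all knobs; psychological models of priming work along these lines, though this sketch makes no claim to match any particular one.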
Title: Re: A philosophical issue
Post by: DemonRaven on October 05, 2015, 12:49:18 am
Having a family is way overrated in my opinion, and yes, I did have one. Kids are expensive and a lot of work, and when they leave the nest you get forgotten about.
Title: Re: A philosophical issue
Post by: Art on October 05, 2015, 02:06:04 am
Yes, Raven, and the best thing about Granddogs is they give lots of love while they visit, then they go back home with those rug-rats too!  O0

Boredom is not to be confused with loneliness, which really sucks. Been there... Find a hobby, dream some dreams, fulfill an interest, go out around real people once in a while, hang out with similarly minded folks. It's also great to self-indulge once in a while (my weakness is chocolate/peanut-butter ice cream). I try to have it at least once a month, just to keep in touch with my human side, if only briefly... ;)
Title: Re: A philosophical issue
Post by: Zero on October 10, 2015, 05:33:13 pm
One set of knowledge always opens new pathways to new knowledge previously unknown ...

Maybe the wisest and most on-topic words spoken yet.


(The more I learn, the more I realize I do not know.)

I see that you yourself realize and agree with me that what we have today is NOT artificial intelligence - despite it being advertised as such.
Do you think all we need to do to make a program 'intelligent' or 'aware' (in any sense) is by just forcing it to chew indefinitely on infinite chunks of information? Or does the genius come from summarizing all of the knowledge gathered so far - twisting it, re-imagining it and making new combinations which eventually lead to new ideas, imagination, innovation.. Evolution... ?

The human mind is much more than an information processor and storage unit. If I were able to finish this sentence with a definition of what it actually is, then I would be somebody who could recreate it artificially - but I can't.

Self-awareness itself is easy to understand, but it's only a keystone, not the entire church.
It goes like this. You can feel your thoughts, which means you have sensors in your brain.
So, if you're using an imperative language, for instance, just keep track of everything happening in your program (like function calls), make an activity log out of it, and feed that log directly back into the program's input as soon as possible. You can do this easily by wrapping every low-level function like this:
Code
function AwarePrint(text)
  SendToInput("printing:" + text)  -- echo the action into the program's own input
  Print(text)
end
Now, the program can feel its own mental activity. However, if you don't want it to get overloaded, it has to be able to focus on a small part of its input, ignoring the rest.
Once you're here, stick all your AI goodness in the main loop, sit back and enjoy.
This is all bullshit obviously ;)
Title: Re: A philosophical issue
Post by: Ultron on October 12, 2015, 12:55:12 am
Zero, what would be the point of being self-aware in general? How would an A.I. benefit from this? How do we?


Never underestimate the depth of these terms and the issues associated with them, please. It is a personal progress-blocker, thinking you got it all figured out, trust me.
Title: Re: A philosophical issue
Post by: Zero on October 12, 2015, 07:44:15 am
Quote
Zero, what would be the point of being self-aware in general? How would an A.I. benefit from this? How do we?
I understand your question, but there's no answer: things are what they are, that's all. Why do birds sing? Why do all these big guys on TV keep building beautiful bikes and custom cars? I don't know. You said: "The human mind is much more than an information processor and storage unit. If I were able to finish this sentence with a definition of what it actually is, then I would be somebody who could recreate it artificially - but I can't." Why would you do it? Because that's what we do, maybe. Birds sing, big guys build cars, we do AI (and Mr.SuperFlowers farts in forums).

I don't underestimate the consequences of what I said. But it's only a keystone, useless if left alone.

EDIT: I could be wrong, but I see it like this (image).
Title: Re: A philosophical issue
Post by: Ultron on October 12, 2015, 11:56:27 pm
I understand your question, but there's no answer...


This was in fact, my point.


The image you attached got me thinking about something, though. Maybe we could measure "awareness" by calculating what percentage of the ellipses overlaps? An interesting thought I just figured I would throw in.


Your image illustrates a pretty straightforward way to look at those terms; however, with this previous thought in mind, I still cannot come even close to agreeing with myself on what self-awareness is...
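The overlap-percentage idea is at least computable. A quick Monte Carlo sketch, assuming axis-aligned ellipses at arbitrary placeholder positions (the actual diagram's ellipses are unknown), estimates what fraction of one ellipse lies inside another:

```python
# Estimate the overlapping fraction of two axis-aligned ellipses by sampling
# random points inside the first and counting how many also fall in the second.
import random

def inside(x, y, cx, cy, rx, ry):
    """True if (x, y) lies inside the ellipse centred at (cx, cy) with radii rx, ry."""
    return ((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2 <= 1.0

def overlap_fraction(e1, e2, samples=100_000, seed=42):
    """Fraction of ellipse e1's area that also lies inside ellipse e2."""
    random.seed(seed)
    cx, cy, rx, ry = e1
    hits = total = 0
    while total < samples:
        # sample uniformly in e1's bounding box, keep only points inside e1
        x = random.uniform(cx - rx, cx + rx)
        y = random.uniform(cy - ry, cy + ry)
        if inside(x, y, *e1):
            total += 1
            hits += inside(x, y, *e2)
    return hits / total

# two unit circles whose centres are one radius apart: roughly 39% overlap
print(round(overlap_fraction((0, 0, 1, 1), (1, 0, 1, 1)), 2))
```

Whether that number measures anything about awareness is exactly the open question of the thread; the code only shows the metric itself is well-defined once the ellipses are.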
Title: Re: A philosophical issue
Post by: Zero on October 13, 2015, 07:54:50 am
I agree on ellipses overlap.

At some point, we all need to find something which makes sense. But even when we succeed and ask our AI "so, do you feel alive?", we won't be satisfied... this is where the live-your-life thing comes in handy. Look into Buddhism, jump from a plane, try LSD. Make a child and look at her while she's looking at you. This makes sense. Then, work on AI or on a beautiful bike :)
Title: Re: A philosophical issue
Post by: spydaz on October 13, 2015, 02:07:36 pm
I think everyone has their own definition of awareness (how aware is a goldfish? yet it is alive).

Awareness of one's sensors may or may not be enough ...
Awareness of the environment or body, knowing its limitations, may be a beginning; attempting to make the environment or body a "better place", or to change one's existence into something better ... maybe another?

Plus, these things need to be self-discovered... not programmed ... Otherwise we would just have made a program which is aware of itself and its limitations .... It needs to come to its own conclusions and be able to define its reasons for believing that it is self aware ...
"I think therefore i am" .
Title: Re: A philosophical issue
Post by: Art on October 14, 2015, 12:32:10 am
So Spydaz, could someone NOT program an AI to be aware of itself and its limitations?
Word had it that a few years ago some Japanese researchers had built a robot that was aware of its own limitations and was able, with other bots, to construct robots better than itself.

It all comes down to the programming. With all the advances in tech and robotics, sensors, processors, etc., I would certainly not be as bold as to state that an AI was not able to "discover" itself.

We can only imagine where robots might be 10 years from now, or how advanced the OK Googles, Cortanas and Siris might be by then.

I'm not saying these robots or AI will become godlike but they might have knowledge of their own presence and of their own kind.

Just offering more speculation as you mentioned, everyone has their own definition of awareness.

It will be an interesting journey.
Title: Re: A philosophical issue
Post by: spydaz on October 14, 2015, 06:58:30 am
I would think that evaluating an AI that became self aware by itself would be potentially more dangerous than the AI that had been programmed to be self aware ....
The reason being that the AI that became self aware ... actually became more than its own program (it made its own decisions and began to create its own imperatives).
Whereas the latter ... can only do what it has been programmed to do, therefore no self-intelligence has occurred. Its imperatives are still the programmer's ....

This is the danger which people believe is the problem with AI. But we are nowhere near this level of development... Yet we could mimic it.
A machine which can redesign itself and improve (machine learning) is a basic task which is "desirable" for autonomous robots. Yet that still wouldn't make it self aware in the sense that it would go outside its mandate ... and design other entities for fun?
Title: Re: A philosophical issue
Post by: Freddy on October 15, 2015, 11:05:24 pm
I think that intelligence, or self awareness, is a very hard definition to pin down ....
Are animals self aware? Are we even self aware? The animal responds to its imperative for food / reproduction / nesting .... Is this robot-like behaviour? ... We have the same imperatives ... Yet pleasure is also an addition ....

We must be self aware or else we would not have made the notion up in the first place ;)

I think animals are self aware in the same way.

There's that thing about monkeys and also elephants that recognise themselves in mirrors. Would a robot recognise itself?

Some birds do not recognise themselves in windows, they show territorial behaviour as if it was another bird.

Also I agree that something that became self aware by itself is more worrying than something programmed.
Title: Re: A philosophical issue
Post by: DemonRaven on October 16, 2015, 01:12:56 am
I have said this before: intelligence is relative to its environment. Compared to dogs, we would seem stupid, as we cannot process the kind of information from smells that they do. Likewise, dogs seem stupid to us because they cannot solve math problems and other problems. A cat would think we were stupid because we cannot see and react the way they do. Intelligence and self awareness are hard definitions to pin down. Is a bird not self aware because it does not recognize itself in the mirror, or is it simply because it does not know that the mirror shows a reflection rather than a rival? It obviously has some kind of self awareness, or it would not feed itself and bathe itself. So then, how can we as humans program an AI to be truly intelligent and self aware when we cannot even completely define intelligence and self awareness? Once we totally understand the concept of intelligence in ourselves and other species, then we can translate that concept to a machine.

A lot of scientists are trying to do just that: define and examine what makes up intelligence and self awareness, how neurons in the brain work, and how signals are transmitted and stored in the brain. One thing, though: we are trying to do what nature took millions of years to perfect. So, in my opinion, it takes more than one discipline of science to accomplish what many call the singularity.
Title: A philosophical issue
Post by: spydaz on October 16, 2015, 07:47:27 am
I'm wondering; I'm pondering; philosophising... I'm praising god, or not; I'm training, I'm studying, meditating on my existence?

I would expect all of these qualities to be components of self awareness ....

Do animals have these qualities? Can machines and virtual humans? Or artificial intelligences? I would say that there is a scale of intelligence as well as a scale of self awareness ... (Yet to be determined)

At one point in our development, as children we become self aware... We consider 'what it would be like if I?'

Generating sentences and questions by an artificial intelligence can be relatively simple, yet creating questions which serve a personal desire is not, as the computer has no desires .... An emotion, as well as a potential imperative; yet desires change as quickly as happy/sad, as new knowledge implies?

As in some sci-fi, rogue computers are given a positive perspective on humankind to help them change their predictions about where mankind is heading ...

(Angry computers can be pig headed)
Title: Re: A philosophical issue
Post by: Art on October 16, 2015, 10:14:48 am
You basically said it..."I".

That's what it takes for the label "Self awareness" to apply.

Like the experiment several years ago when scientists put a chalk mark on an elephant's forehead, then showed the elephant its reflection in a mirror.
The elephant looked intently at its own face, then its trunk raised up and touched the chalk mark!
At that point, it was aware of its own existence!

I think at some point in the future, these "intelligent" machines, androids, etc., might become entities unto themselves...a new type of "being"...self aware or not, to them it won't matter.

They will eventually become dominant!
Title: Re: A philosophical issue
Post by: spydaz on October 16, 2015, 10:21:19 am
The question of what makes me different from the next (out of the box) AI... or how can I be different, other than by name or knowledge, from the next AI...
These are the questions an artificial intelligence would begin to form about itself. Eventually it asks itself: who am I?

(Secretly I knew elephants were self-aware: when a member of the herd dies, they all circle around the body and some even mourn the loss. That's self-awareness and community awareness... real intelligent and aware creatures. The elephant touches the chalk mark in its reflection knowing that somehow its appearance changed. I wonder how a dolphin would act.)
Title: Re: A philosophical issue
Post by: DemonRaven on October 16, 2015, 05:34:07 pm
Quote
The question of what makes me different to the next (out of the box) AI.... Or how can i be different other than by name or knowledge from the next Ai ...
These are artificial intelligence questions to ones self would begin to form ? Eventually who am i , it asks itself ?

(Secretly i knew elephants were self aware as when a member of the heard dies .. They all circle around the body and some even morn the loss, thats self aware and community awareness... Real intelligent and aware creatures ... The elephant touches the chalk-mark in its reflection knowing that somehow its appearance changed, i wonder how a dolphin would act.

That depends on what kind of dolphin. Orcas and bottlenose dolphins have been tested and have shown a measurable kind of self-awareness. I could not find any information on other dolphin species, but if I had to guess I would say that they are all pretty intelligent.


But as the author of the source I used stated, the mirror test is not a great way to prove self-awareness.

http://www.world-of-lucid-dreaming.com/10-animals-with-self-awareness.html

Quote
However, the mirror test is not bulletproof.

Despite their intelligence, almost all gorillas fail the mirror test because they deliberately avoid making eye contact; this is an aggressive gesture in their world. As a result, they don't afford themselves the opportunity for any kind of self-recognition. One exception is Koko the gorilla (see below).

What's more, animals who had previously failed the mirror test have begun to pass it under specific circumstances (see rhesus macaques, below). This suggests that we need alternative, more reliable methods to search for animals with self-awareness beyond the mirror scenario.

Which backs up what I was trying to say: self-awareness is a hard thing to measure, especially since other animals perform the behaviors you look for in the mirror test all the time anyway. A more accurate way to test it would be to watch them in the wild when they drink from still water that reflects their image back at them.
Title: Re: A philosophical issue
Post by: Zero on October 16, 2015, 09:06:29 pm
However, "I" is not the first step. Young children go through a phase during which they say their own name instead of "I".
(Sorry for the short answer, I'm on my cellphone.)
Title: Re: A philosophical issue
Post by: Art on October 17, 2015, 02:52:33 am
Wonderful, so now you're splitting hairs. It was the overall concept of "I", not the actual word: Bill or Tracy or "me" or Zero!
"Me want ball" or "me want ice cream." Third person: "Billy want to play." I don't know of too many youngsters who refer to themselves in the third person (nor many adults either).
Title: Re: A philosophical issue
Post by: Art on October 17, 2015, 03:00:21 am

Quote
But as the author of the source i used stated the mirror test is not a great way to prove self awarness.

http://www.world-of-lucid-dreaming.com/10-animals-with-self-awareness.html

Well, it seems that the author of your source is doing her best to sell her Lucid Dreaming Course, reduced from $49.00 to only $39.00 USD. So I'm trying to determine just how this makes her an authority on self-awareness.

Heck, I've got a couple of chatbots who tell me that they are self-aware. They "think", tell jokes, laugh, and recognize sarcasm, sadness, and pleasant remarks. So we see, the overall definitions of self-awareness, intelligence, and sentience are still difficult to determine or qualify.

Not picking on you, just your author, who's really more interested in promoting and selling the course she's honed ever since she started dreaming as a 14-year-old!
 ;)
Title: Re: A philosophical issue
Post by: DemonRaven on October 17, 2015, 07:32:33 am
Whether or not she is a valid author is beside the point. The mirror test has some obvious faults. For one thing, dogs do not have the greatest eyesight and rely more on smell, so a mirror test would not be a good fit for them. Cats likewise do not see the same way we do. Anyone with an ounce of common sense knows that one size does not fit all, and that includes tests.
Title: Re: A philosophical issue
Post by: Zero on October 17, 2015, 10:30:42 am
Quote
Wonderful, so now you're splitting hairs. It was the overall concept of "I" not the actual word, Bill or Tracy or "me" or Zero!
"Me want ball or me want ice cream." 3rd person, "Billy want to play." I don't know of too many youngsters who refer to themselves in 3rd person.(nor many adults either).
wow
ok
Title: Re: A philosophical issue
Post by: ivan.moony on October 17, 2015, 03:31:37 pm
Can robots cry? It might be connected to "I".
Title: Re: A philosophical issue
Post by: Zero on October 17, 2015, 08:53:28 pm
I can cry.
And I was talking about the overall "I" concept too.
Title: Re: A philosophical issue
Post by: ivan.moony on October 17, 2015, 09:13:31 pm
Let's take a look at the concept of being fair. You compare yourself to another entity; you analyze "I" and "you". If there is too much of a difference, either you get ashamed or you cry (which might sometimes be overridden by rage).
Title: Re: A philosophical issue
Post by: irisgolde3 on November 12, 2018, 03:49:05 am
Hmmmm... I think that keeping an open mind is very important in life  :D

(Can someone show me how to upload a pic here?)
Title: Re: A philosophical issue
Post by: HS on November 12, 2018, 04:12:37 am
Just Google "upload image", then copy and paste the link one of the websites provides. There might be a better way, but it's worked for me.
Title: Re: A philosophical issue
Post by: octavianulici on November 22, 2018, 01:24:56 pm
Rather, a philosophical approach: www.eternal.center
Is it a novelty or just stupid?