A philosophical issue

  • 41 Replies
  • 6056 Views
*

Ultron

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 458
  • There are no strings on me.
A philosophical issue
« on: September 29, 2015, 10:31:00 pm »
I am not sure if it is as much of an issue as it is a question, but the topic sounded more attractive this way.


Anyhow, for a long time now I have been undecided about which path I should choose. In other words, I have been thinking about which type of intelligence is more practical, plausible and expandable. There are basically two categories in my mind when I think about this. The first is an intelligent assistant, much like the fictional JARVIS; the issue there is that a classic approach to such a programming task would probably render a limited program which could never grow into a real A.I. by my standard and definition.


The other category is somewhat less plausible and would require more thinking and philosophical work rather than practical experimentation. This type of intelligence best compares to the fictional android Data from Star Trek or his Marvel counterpart, The Vision. It is a truly unlimited, human-like intelligence and therefore highly uncontrollable, as it can grow beyond our own frame of mind (something I'd be happy about).


I have been developing and sketching models for both types and have worked up many theories and concepts as to how they would function. Work on the assistant type has been mostly dissatisfying, as I can see more limits than possibilities in all related models. I have already made prototype programs, and they have been successful; however, the main reason I stopped was that I realized how limited they are.
As far as the android model goes, I have made several sketches and realized that a new hardware architecture would be required to bring such an intelligence to life. How do I go about designing such an intelligence? Well, I examine my own behavior, and sometimes I observe animals and all organic life as a whole to find common behaviors. I'm basically back-tracking intelligence to its most basic form and the root of its evolution, for that is where the key and the seed lie. Also note that this type of intelligence could also be an assistant and take any form - much like "intelligence" can be found within a human, a bird, a fish or any other animal harboring a brain.




So to sum it up - I'm not sure what I'm asking of you guys. I am interested in your thoughts and would be happy to get a discussion going (after all, I haven't chatted with you fellas in a while) and maybe gain the motivation to pursue a certain path.
Time... Doesn't seem so constant when you think about it, does it?

*

spydaz

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 106
Re: A philosophical issue
« Reply #1 on: September 30, 2015, 03:46:52 pm »
Interesting ...
Personally I have adopted an approach similar to an OWL ontology (using description logic) to gain knowledge.
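The core of the OWL-style approach is subsumption: asking whether one class is a kind of another by walking the class hierarchy. Below is a minimal sketch of that idea in Python; the class names and the single-parent taxonomy are invented for illustration, and real OWL reasoners (HermiT, Pellet, etc.) handle far richer description logics than this.

```python
# Toy taxonomy: each class maps to its (single) parent class.
# All names here are made up for the example.
SUBCLASS_OF = {
    "Android": "Robot",
    "Robot": "Machine",
    "Assistant": "Agent",
    "Machine": "Thing",
    "Agent": "Thing",
}

def is_subsumed_by(cls, ancestor):
    """Walk the subclass chain to test whether cls is a kind of ancestor."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF.get(cls)  # step up to the parent, or None at the top
    return False

print(is_subsumed_by("Android", "Machine"))   # True: Android -> Robot -> Machine
print(is_subsumed_by("Assistant", "Robot"))   # False: different branch
```

Even this tiny check shows why description logic is attractive for knowledge gathering: new facts ("an Android is a Robot") immediately license inferences that were never stored explicitly.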
Other than that ... The role of the artificial intelligence, for my choice of imperative, is to seek more knowledge ... The choice not to use the intelligence as an assistant is my own ... Yet this seems to be a common goal. If I were designing an intelligence to go into an android, then performing actions and mimicking or learning new actions would become a prime imperative.
Androids / robots / cyborgs all have different imperatives: an android may only have a single task ... whereas a robot is just a machine (remote-operated bot or computer-operated), and a cyborg mimics humans .... New ways to accomplish the goal or imperative are the intelligent factor ... (Maybe)

*

Ultron

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 458
  • There are no strings on me.
Re: A philosophical issue
« Reply #2 on: September 30, 2015, 06:42:24 pm »
What is knowledge? If you are referring to data which can be collected and stored into a database automatically, then we do not need "intelligence" to do this - the simplest of computer programs can achieve that task.


What does it mean to seek? What is the goal, and what are these actions / skills?
How would you explain all of this to a pre-school child? Can you, at all?


The programmer's job is to devise a formula / algorithm that will guide electrons towards accomplishing a certain task. To do this, he must understand the issue and be able to explain how a human would achieve the goal - in the simplest of terms, step by step.
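Ultron's point can be put in miniature: before a task can be programmed, it must be spelled out as the explicit steps a human would follow. A hypothetical everyday task, "find the tallest person in a group", reduces to exactly such steps:

```python
def tallest(heights):
    tallest_so_far = heights[0]    # step 1: start with the first person
    for h in heights[1:]:          # step 2: compare against everyone else
        if h > tallest_so_far:     # step 3: keep whichever is taller
            tallest_so_far = h
    return tallest_so_far          # step 4: report the answer

print(tallest([165, 180, 172]))  # 180
```

The hard part of A.I., as the surrounding discussion argues, is that for "be intelligent" nobody can write down step 1.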


This is why creating artificial intelligence is such a hard task - we do not understand ourselves well enough, nor why we do most of the things we do in life. A.I. is to date the most complex philosophical concept and question, for it requires us to reach far and look deep within ourselves. It requires one to understand the meaning of life - why we live, how we live and how we came to be what we are today.


If you wish to find answers, turn your head away from everything you have learned about artificial intelligence and all of your technical knowledge - it is only blocking your view.
Time... Doesn't seem so constant when you think about it, does it?

*

Zero

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 332
  • Fictional character
    • SYN CON DEV LOG
Re: A philosophical issue
« Reply #3 on: September 30, 2015, 07:08:04 pm »
I think knowledge is what makes you act.

Imagine an old and wise Buddhist monk with his eyes closed. If he doesn't move, if he says nothing, how would you know he's wise?

*

Don Patrick

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 386
    • Artificial Detective
Re: A philosophical issue
« Reply #4 on: September 30, 2015, 08:28:38 pm »
I mainly work on the "Data"-type AI because I enjoy the challenge. I have incorporated some JARVIS-style task abilities into it, but all of these functions were disappointingly needless of intelligence. Furthermore, with Siri, Cortana, Facebook M, Viv and so on all working on commercial personal assistant AI, it won't be long before making your own JARVIS butler is an obsolete pastime. That path will be heavily trodden. The other is hard but intellectually stimulating.
Personal project: NLP -> learning -> knowledge -> logical inference -> A.I.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • ******************
  • Hal 4000
  • *
  • 4407
Re: A philosophical issue
« Reply #5 on: September 30, 2015, 10:34:48 pm »
While I agree with the android route, it appears that our "limited" thinking and lack of deep pockets will prevent us from venturing further than creating an idle curiosity, especially when quantum computers and Google run the gamut.

http://www.pcworld.com/article/2987153/components/google-nasa-sign-7-year-deal-to-test-d-wave-quantum-computers-as-artificial-brains.html#tk.rss_all
In the world of AI, it's the thought that counts!

*

Ultron

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 458
  • There are no strings on me.
Re: A philosophical issue
« Reply #6 on: October 01, 2015, 12:43:39 am »
Art my friend - the one who wishes to succeed will only see the opportunities and not the roadblocks!


I personally believe a breakthrough in this field can be achieved and proven by simply developing the correct algorithm and running it on a very average machine. On the other hand, I have been theorizing that raw computing power may not provide any improvement, and that an android-like intelligence may require a purpose-built hardware interface and processor architecture. That too is expensive, but if your theory is sound and well-advertised, then it is almost guaranteed that a giant such as Google, or maybe even some military program, may offer to provide you with what you need to achieve that breakthrough.


This, on the other hand, may cause ethical issues, since these parties would more than likely attempt to commercialize or militarize the intelligence; however, the path to success is THROUGH and not AROUND. Besides, if you are smart enough to design an android, outsmarting a pair of greedy or blood-thirsty mortals would be a walk in the park ;)
Time... Doesn't seem so constant when you think about it, does it?

*

Data

  • Global Moderator
  • ***********
  • Eve
  • *
  • 1263
  • Overclocked // Undervolted
    • Datahopa - Share your thoughts ideas and creations
Re: A philosophical issue
« Reply #7 on: October 01, 2015, 10:11:01 am »
Ultron, I can't help but think this:

It's the taking part that counts.

I'm rooting for you :) 

*

spydaz

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 106
A philosophical issue
« Reply #8 on: October 01, 2015, 03:50:10 pm »
For me, to seek knowledge would be to ask questions of its users, or to crawl the web for more information about the information currently held, as well as to seek new knowledge based on ontological structures within its own learning algorithm ... Which, as mentioned, is quite simple (I have implemented it already) ....
One set of knowledge always opens new pathways to new knowledge previously unknown ...
When asked, the intelligence will answer with unprogrammed information, even teaching the user about the topic and explaining what it may lead to or imply ... (Simple logic)
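The claim that "one set of knowledge always opens new pathways to new knowledge previously unknown" can be sketched as a frontier expansion over a toy topic graph; the link table below is entirely invented, standing in for whatever a real system would discover by crawling or asking users:

```python
from collections import deque

# Invented example data: each known topic links to further topics.
LINKS = {
    "intelligence": ["learning", "knowledge"],
    "learning": ["memory"],
    "knowledge": ["ontology"],
    "ontology": ["description logic"],
}

def seek(seed):
    """Starting from one topic, keep pursuing every topic that known topics expose."""
    known, frontier = set(), deque([seed])
    while frontier:
        topic = frontier.popleft()
        if topic in known:
            continue
        known.add(topic)
        # Each newly learned topic may open pathways to previously unknown ones.
        frontier.extend(LINKS.get(topic, []))
    return known

print(sorted(seek("intelligence")))
```

Starting from a single seed topic, the loop ends up knowing about "description logic" and "memory", neither of which was reachable in one step - the mechanical version of knowledge opening new pathways.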

Yet when it comes to the android route, control of its given body is prime too ... Copying actions, learning new actions, even avoiding actions which may break the android's parts ... Even repairing its parts, or replacing them by printing new parts via 3D printer, enabling self-repair ... Even redesigning new parts - are two legs better than four? This would be a decision that it would make ... Eventually the android may even fully redesign itself ...

All these concepts are actually already being implemented, or have been implemented already, at various universities, and there are papers already published ....

Now i begin to wonder what is your goal ? Without trying these ideas you cannot define the next level .

A machine which designs and builds itself is intelligent .... A machine which seeks knowledge and draws conclusions is intelligent. Combined, the potential super intelligence predicted by the old computer pioneers has already been created ... The algorithms are available ....
These machines all have potential to self evolve ..

Machine learning (neural networks etc) is not artificial intelligence ... (Statistics / regression / classification)

Can computers dream? They can be programmed to fool you into believing they are dreaming, even display some graphics to show you they are processing some data .... Firing neurons - yet this is just visualization of the machine learning process (looks pretty).

After researching various neural networks (would have shared, but was blasted by ranch vermin): they are basically replicating electrical logic circuits, "bistables", and have now reached the level of JK flip-flops and other memory circuits ... Clever, but not intelligent! Very easy to create and fully reusable for most tasks of prediction and classification (where is the intelligence?). Neural networks don't even need a computer to be implemented; as a statistician will tell you, they can all be worked through with maths on paper ... Are paper and pencil + maths intelligent?

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • ******************
  • Hal 4000
  • *
  • 4407
Re: A philosophical issue
« Reply #9 on: October 01, 2015, 04:17:43 pm »
Ultron, my friend, you are unfortunately correct in your assertion that IF someone or some company is finally able to develop such a machine intelligence, then surely it would be sucked down the road to commerce or, worse, the military!

Yes, all good things must come to an end. And so it goes....

I also agree that at an individual level we must continue to fight the good fight, whether through private development or in small groups of enthusiasts. Lots of great things come from small shops. O0
In the world of AI, it's the thought that counts!

*

Ultron

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 458
  • There are no strings on me.
Re: A philosophical issue
« Reply #10 on: October 01, 2015, 05:03:20 pm »
One set of knowledge always opens new pathways to new knowledge previously unknown ...

Maybe the wisest and most on-topic words spoken yet.


(The more I learn, the more I realize I do not know.)

I see that you yourself realize and agree with me that what we have today is NOT artificial intelligence - despite it being advertised as such.
Do you think all we need to do to make a program 'intelligent' or 'aware' (in any sense) is force it to chew indefinitely on infinite chunks of information? Or does the genius come from summarizing all of the knowledge gathered so far - twisting it, re-imagining it and making new combinations which eventually lead to new ideas, imagination, innovation.. Evolution... ?

The human mind is much more than an information processor and storage unit. If I were able to finish this sentence with a definition of what it actually is, then I would be somebody who could recreate it artificially - but I can't.
Time... Doesn't seem so constant when you think about it, does it?

*

spydaz

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 106
Re: A philosophical issue
« Reply #11 on: October 01, 2015, 05:18:48 pm »
I think that intelligence or self awareness is a very hard definition to pin down ....
Are animals self-aware? Are we even self-aware? The animal responds to its imperative for food / reproduction / nesting .... Is this robot-like behaviour? ... We have the same imperatives ... Yet pleasure is also an addition ....

Is knowledge intelligence? ....
I think that the ability to define one's needs and find a way to provide for those needs (to make life easier) is some kind of intelligence ... Debating the facts of life is not really intelligence either, as your opinions are based on your current understanding of the knowledge held .... A knowledge base knowing that it needs more information about a particular topic to completely define it, then applying logic to imply new ideas, is similarly intelligent. But the knowledge base can't be self-aware ...

Similarly to AI, a brain without a body has no awareness ... I think that an AI needs a body which has parts / sensors ... Then self-repair would be smart ... Designing better parts, based on experience / need / desire / knowledge, would be intelligent ...
Potentially ....
We are here at that pinnacle... Yet all the different components have not been combined ....
I think a search engine is not intelligent; it's also based on statistics ... If it were intelligent it would not keep giving you the same results for the same search terms ... It would learn from the commonly pressed links in the list provided, and the search results would improve over time, and not present paid-ranked sites, as those are biased results (overfit).
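The click-adaptive ranking described here is simple to sketch: reorder a query's results by how often users have actually clicked each one, so the same search improves over time. The result names and click counts below are invented for the example:

```python
from collections import Counter

# Invented example data: results for one query, and accumulated clicks on each.
results = ["site-a", "site-b", "site-c"]
clicks = Counter({"site-c": 9, "site-a": 2})  # Counter gives 0 for unclicked sites

def rerank(results, clicks):
    """Order results by descending click count; unclicked results sink to the bottom."""
    return sorted(results, key=lambda r: -clicks[r])

print(rerank(results, clicks))  # ['site-c', 'site-a', 'site-b']
```

As more clicks accumulate, the ordering shifts toward what users actually choose - though, as the post notes, this remains statistics rather than intelligence, and a naive version can overfit to popular links.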

*

spydaz

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 106
Re: A philosophical issue
« Reply #12 on: October 01, 2015, 05:29:21 pm »
The genius is from the knowledge known, re-twisting it and creating new ideas ... Exactly .. Agree ... 100%.

Tesla grew up on a farm with a waterwheel ... It gave him ideas about how to harness power ... Transduction... After that, all he could think about was how to distribute power and transduce power from any natural resource... As there is power in the air (radio waves, which are AC power), this could produce enough charge for your phone ... A simple circuit ... Yet we don't use these ideas? ... Or, because the military and commercial companies see no profit, they refuse to implement them .... This is why technology and growth are stunted (unless you're rich or go to the right uni (MIT) .... or others) ...

*

DemonRaven

  • Trusty Member
  • ********
  • Replicant
  • *
  • 536
    • Chatbotfriends
Re: A philosophical issue
« Reply #13 on: October 01, 2015, 08:18:29 pm »
I think self-contained virtual worlds can and do produce something close to being intelligent. Creatures, for example. They do evolve, plus I have had a few of them shock me by actually addressing me, which is not a normal part of their programming. Yes, you can name the hand and do things with them, but they do not normally look at you and say "I like the hand". This is like one of the Sims looking at you and saying "I love you, Sherry" (or Freddy, or whatever).
Not all do this; just once in a while one is born that has evolved to the point of being able to do that.


Similarly to AI, a brain without a body has no awareness ... I think that an AI needs a body which has parts / sensors ... Then self-repair would be smart ... Designing better parts, based on experience / need / desire / knowledge, would be intelligent ...
Potentially ....

*

spydaz

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 106
Re: A philosophical issue
« Reply #14 on: October 01, 2015, 10:01:41 pm »
A virtual world can create a limited environment for a virtual AI ... Once it realises that the world has limitations which it believes it has exhausted, it may contemplate its place within that environment and become another type of character, or change its place within that environment, or even have implied knowledge denoting that there is another environment to escape to ... Maybe these types of ideas can be considered self-awareness ...

Contemplation of ones place within ones environment?

Understanding, or wondering whether we can transcend this environment (escape to space? / stargate to the next dimension? / meditate to another level?), become robo-human? Prevent demise?

Potentially possible with a virtual environment... Evolve to a new state... Virtual to physical? Physical to virtual ...

