This Machine is Conscious!!!

  • 102 Replies
  • 36511 Views
*

TrueAndroids

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 120
Re: This Machine is Conscious!!!
« Reply #90 on: May 06, 2010, 11:19:19 pm »
Well, here is Raul's professional AI opinion on my CM prototype.  
------------------------------------------------------------------------------------------------------------

Thanks for your replies to my former questions. I am glad you found my comments useful. Having read your responses, here I go with my two cents:

In your video you seem to make a distinction between “true thinking” and other sorts of information processing which we could find difficult to call “thinking”.

At this point, I think it is important to remark on the difference between implicit (or unconscious) processing and explicit (or conscious) processing. When you say “true thinking” I guess you refer to the latter. However, it is not that easy to claim that deductive reasoning with associated semantics is the same as conscious processing. Let me ask you one question in this regard: don’t you think the human brain performs deductive reasoning with grounded meanings unconsciously? If so, what is, from your point of view, the fundamental difference between conscious and unconscious thinking?

Well, to be honest, I must admit that I think performing reasoning with grounded meaning is a requirement for conscious thinking. In fact, if I am correct, having grounded meanings will eliminate the Chinese Room objection. Maybe, as you say, this is a good start in the challenge of designing a conscious machine. Then, other aspects will have to be considered: ability to report mental content, self-awareness, etc., etc.

Another issue related to grounded meaning is that some authors would claim that you really need a physically situated agent, e.g. a real robot, in order to generate true real-world grounded meanings.

I agree with you that the Turing Test is not the ultimate test for consciousness. That’s one of the reasons why I proposed ConsScale. Anyhow, passing the Turing Test would be a hallmark of consciousness (as there are other less demanding criteria for lower levels of consciousness).

I agree that your prototype is centered around the concept of access consciousness, so I won’t ask again for phenomenal states.

Your Alldroid model looks to me like a cognitive architecture. So it looks like the obvious next step is to implement it and confront it with a real problem domain to see how it works.

About the “No Phenomenology No Consciousness” proposition, I’d rather say that you hit the problem of “no sensorimotor interaction no cognition, and therefore no consciousness”. I understand your claim that no sensorimotor interaction is needed for pure or core consciousness. However, from a developmental standpoint, I’d argue that sensorimotor capabilities are needed in the first place in order to develop meaning. How could an agent have an internal mental state with meaning if it didn’t acquire it from experience?

----------------------------------------------------------------------------------------------------------------------------

If you want to see a comprehensive list of all the humanoid robots in existence as of 2010, check out luisbeck007's list. Hundreds of humanoids of all sizes:

http://sites.google.com/site/luisbeck007/humanoid5668433486468332
« Last Edit: May 08, 2010, 07:21:07 pm by TrueAndroids »

*

NoLongerActive

  • Trusty Member
  • ***
  • Nomad
  • *
  • 53
Re: This Machine is Conscious!!!
« Reply #91 on: May 07, 2010, 06:00:50 am »
I have to agree with Raul. I think one's experiences are what make the person, build their character and also produce conscious thought. I would love to have Raul come to this forum and discuss things here as well. That would be quite awesome. :)

*

TrueAndroids

Re: This Machine is Conscious!!!
« Reply #92 on: May 08, 2010, 03:15:31 pm »
Well TikaC (and everyone), here's my response to Raul's analysis of my conscious machine prototype. Hope you've enjoyed this investigation of machine consciousness.
---------------------------------------------------------------------------------------------------------------------

Thanks for your "two cents", Raul. :)

It's really clarifying where we stand. To take one small part, your key points of:

1. "I’d rather say that you hit the problem of “no sensorimotor interaction no cognition, and therefore no consciousness." ...

2. "I’d argue that sensorimotor capabilities are needed in the first place in order to develop meaning."
 

Through a series of questions (if I lost a leg, would I still be conscious? etc), I showed you CONCLUSIVELY that no sensorimotor interaction is necessary for pure or core consciousness.

Here we agree as you say:

"I understand your claim that no sensorimotor interaction is needed for pure or core consciousness."

So you are at this point agreeing with me that "no sensorimotor interaction is needed for access or core consciousness -- if meaning (semantic components) can be found." So this is progress.

You then say: "However, from a developmental standpoint, I’d argue that sensorimotor capabilities are needed in the first place in order to develop meaning.  

How could an agent have an internal mental state with meaning if it didn’t acquire it from experience?"


These are of course great points, and the essence of the question at hand.

To answer it we need to start with a pure example of a sensorimotor system (camera/text writer). So we begin with this inside view of a face recognition program, a simple example of a sensorimotor system.

Here's the youtube video:
 http://www.youtube.com/watch?v=9DL9BPHKD2c

Sensory Capability: camera observing face
Motor Capability: writes "Aleksey Izmailov"

So I must ask you: where is the meaning in this sensorimotor system? Is it in the Searlean "squiggles of syntax" that say "If image of Type12927837 then "Aleksey Izmailov""? The Searle Argument says there is no understanding or meaning here, and so, no matter how complex the squiggles get, no machine consciousness.
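To make the point concrete, here is a minimal sketch (all names hypothetical, not taken from any actual program) of the kind of purely syntactic rule being discussed: an opaque image-type token is mapped to a name string, and nothing in the table itself "understands" faces or names, no matter how many rules we add.

```python
# Hypothetical sketch of the hardcoded, purely syntactic rule:
# "If image of Type12927837 then 'Aleksey Izmailov'".
# The mapping is supplied by the programmer; the program only matches tokens.

FACE_RULES = {
    "Type12927837": "Aleksey Izmailov",  # hardcoded by the programmer
}

def recognize(image_type):
    """Motor output: write the name associated with a sensory token."""
    return FACE_RULES.get(image_type, "unknown face")

print(recognize("Type12927837"))  # prints: Aleksey Izmailov
```

On Searle's view, any meaning here lives in the programmer who wrote the table, not in the token-matching itself.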

Humans develop from the sensorimotor ground up, into the higher-order realm of semantic reasoning or human thinking (conscious = audible, unconscious = silent). But conscious robots can be built from the top down, thanks to materials and prior knowledge from human development, and it is in this top level that the meaning is in fact found. And the meaning is in its set of logical definitions, just as in humans.

Thanks for your input!
« Last Edit: June 07, 2010, 09:02:06 pm by TrueAndroids »

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6832
  • Mostly Harmless
Re: This Machine is Conscious!!!
« Reply #93 on: May 12, 2010, 02:27:14 pm »
I need a guide book to read this thread  :D  Some interesting points I have picked up though.  Over four thousand views now, and your article has over fifteen hundred views.  Nice going :)

*

TrueAndroids

Re: This Machine is Conscious!!!
« Reply #94 on: May 13, 2010, 04:58:42 am »
 :D Thanks a lot, Freddy, for letting me work through my thoughts on all this with you all here!  This topic really is a popular one these days. I had no idea. I'm trying to figure out my next move. Raul has not responded yet to my question of "where's the meaning going to be found in a sensorimotor system, without semantic reasoning?" Maybe he thought it was rhetorical. I would love to get him to acknowledge that I am indeed the first to create a spark of machine consciousness, even if it is just the so-called "easy problem" of access consciousness.

If you would like to read more on Raul's sensorimotor point of view (and a great general guide) check out:

How to Build Consciousness into a Robot: The Sensorimotor Approach by J. Kevin O'Regan
http://lpp.psycho.univ-paris5.fr/pdf/2647.pdf

Of course the part that got my attention is where this AI expert says (2007):

From a theoretical standpoint (although currently no one has actually done it), there would appear to be no logical obstacle to implementing Access Consciousness in a robot: the reason is that Access Consciousness ultimately corresponds to a behavioral capacity.


He defines it as:

What we mean when we say someone has Access Consciousness to something is that the person currently knows that he (considered as a person with a self) is poised to make use of that thing in his ongoing rational decisions, in his planning, intentions and linguistic behavior.


And here is the Abstract:

Abstract. The problem of consciousness has been divided by philosophers into the problem of Access Consciousness and the problem of Phenomenal Consciousness or "raw feel". In this chapter it is suggested that Access Consciousness is something that we can logically envisage building into a robot because it is a cognitive capacity giving rise to behaviors or behavioral tendencies or potentials. A few examples are given of how this is being done in current research. On the other hand, Phenomenal Consciousness or "raw feel" is problematic, since we do not know what we really mean by "feel". It is suggested that three main properties are what characterize feel: the fact that feels are different from each other, that there is structure in these differences, and that feels have sensory presence. It is then shown how, by taking the sensorimotor approach [24], [27] it is possible to account for these properties in a natural way and furthermore to make counter-intuitive empirical predictions which have recently been confirmed. In conclusion it is claimed that when we take the sensorimotor approach to feel, building raw feel into a robot becomes a theoretical possibility, even if we are a long way from actually attaining it.


Phenomenal Consciousness is often referred to as "the hard problem of consciousness".
http://en.wikipedia.org/wiki/Hard_problem_of_consciousness



« Last Edit: May 13, 2010, 05:05:58 am by TrueAndroids »

*

TrueAndroids

Re: This Machine is Conscious!!!
« Reply #95 on: May 15, 2010, 10:54:53 pm »


This picture from Daz3D "Discover the 3D Artist within You" http://www.daz3d.com/i/0/0?_m=d is a good way to remember the distinction in this machine consciousness issue between:

phenomenal consciousness (hard problem) - the vivid, color picture representing the sensorimotor system

access consciousness (easy problem) - the black and white picture representing the "self semantic reasoning" capability as seen in my youtube demo. http://www.youtube.com/user/TrueAndroids  

David Chalmers famously formulated the hard problem of phenomenal consciousness.

An interesting article on this debate is The Hard Problem is Dead; Long live the hard problem http://users.california.com/~mcmf/hardproblem.html  

In it the author Teed Rockwell says:

Chalmers' theory seems to be especially ontologically promiscuous, for it requires us to posit physical--mental "Siamese fraternal twins" which don't resemble each other, but are joined at the hip for all time for some inexplicable reason. It may be that reality is ontologically messy, and we just have to learn to live with that fact. But if there is another theory which accounts for the same facts with more simplicity and elegance, it should be considered to be more acceptable.

I thought hmm, that sounds a lot like my thinking on this. Then I saw Chalmers' response, and now I'm thinking I'm not the only one heading down this road of "self semantic reasoning means access consciousness present":

David Chalmers writes:

     hi teed, not a solution to your problem, but a couple of relevant data points.

      (1) milner and goodale's work on two perceptual systems. i imagine you know this, as you were at the claremont conference. they postulate two visual systems, one for online control of direct motor action, the other for cognitive analysis, planning, etc. the latter system is supposed to be for "semantic" perception, connected to language, etc. and only the latter system is supposed to be associated with conscious processes -- the online system is unconscious. if something like this hypothesis is correct, this suggests that consciousness would be more likely to be associated with a pure-language system than a pure-motor system. of course one can argue that m&g's allocation of consciousness between these systems is essentially grounded on a prior assumption that consciousness goes with the cognitive/ semantic system, so this doesn't prove anything, but it's interesting nevertheless. w.r.t. your cases, of course your language-free system was far more than a pure online motor-reaction system, so that would complicate things. my own money is on both of your systems being conscious, in very different ways.

 http://users.california.com/~mcmf/cqmail4.html

Above I stated my conclusion that only the higher order thinking level - the self semantic reasoning level - will exhibit consciousness, according to the Searle Chinese Room Argument, and my thought experiment of removing sensors, etc. And that led to my current question "Where will the meaning (required for consciousness) be found in a sensorimotor system, without self semantic reasoning?" (Not in the syntactic code - no matter how complex it gets - according to Searle).

« Last Edit: May 16, 2010, 05:10:48 pm by TrueAndroids »

*

Freddy

Re: This Machine is Conscious!!!
« Reply #96 on: May 16, 2010, 01:17:19 pm »
I like the analogy with the picture, helps me understand the problem better  :)

*

TrueAndroids

Re: This Machine is Conscious!!!
« Reply #97 on: June 01, 2010, 07:45:35 pm »
Raul the AI expert has responded to my post to him (see above):
--------------------------------------------------------------------------------------------------------------------------------------------------
Thank you for the interesting discussion!

I see what your point is. The problem is that, intuitively, I’m much more comfortable with the ground-up sensorimotor development approach (as opposed to the top-down definition of meaning that you have argued for).

For me, there’s no meaning in the face recognition program. However, I don’t fully agree with the application of the Searle argument that follows. I think an artificial system could end up having complex squiggles of syntax, of course. And such a system could not be claimed to develop any meaning (especially if the rules have just been added by the programmer). But I also think that an artificial system could be designed in such a way that it could learn meaning during a developmental phase, thus acquiring (meaningful) rules from the experience of sensorimotor interaction with the world.

To put this argument in the context of the face recognition example: the face detection process and related output have no meaning to the program because the corresponding rule (If image of Type12927837 then "Aleksey Izmailov") has just been hardcoded. Therefore, if we could somehow look for the location of the associated meaning, it would be located in the programmer’s mind. On the contrary, if you design an agent to autonomously learn by interacting with the environment, then its output could have grounded meanings. In other words, motor outputs are selected based on their causal implications (meaning) for the system.
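The learned alternative being described can be sketched as a toy agent (all names hypothetical, a minimal illustration rather than anyone's actual system) that acquires a stimulus-to-action rule from interaction instead of from a programmer-supplied table: actions that lead to a rewarding outcome are reinforced, so the rule the agent ends up with is grounded in the action's consequences for the agent itself.

```python
import random

def learn_association(stimuli, actions, reward_fn, episodes=500, seed=0):
    """Learn which action to take for each stimulus by trial and error."""
    rng = random.Random(seed)
    # value[(stimulus, action)] estimates how good an action is for a stimulus
    value = {(s, a): 0.0 for s in stimuli for a in actions}
    for _ in range(episodes):
        s = rng.choice(stimuli)
        a = rng.choice(actions)           # explore at random
        r = reward_fn(s, a)               # consequence in the "world"
        value[(s, a)] += 0.1 * (r - value[(s, a)])
    # the acquired policy: for each stimulus, the highest-valued action
    return {s: max(actions, key=lambda a: value[(s, a)]) for s in stimuli}

# A tiny hypothetical "world": saying the right name when shown a face
# is rewarded; the rule is never hardcoded, only discovered.
world = {"face_A": "say_Alice", "face_B": "say_Bob"}
policy = learn_association(
    stimuli=list(world), actions=list(world.values()),
    reward_fn=lambda s, a: 1.0 if world[s] == a else 0.0,
)
print(policy)
```

The contrast with a hardcoded lookup table is that here the stimulus-action mapping emerges from feedback, which is the sense in which the outputs could be said to carry meaning for the system.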

I think the big problem here is how we define the concept “meaning”. Here, I was considering situatedness as an essential feature for the acquisition of grounded meanings. You say that it is at the top level where meaning can be found in humans, and I agree, but I don’t see how this human top-level meaning can also be considered meaning for agents different from humans… Hmmm, need to think more about this  :)
-------------------------------------------------------------------------------------------------------------------------------------------------
« Last Edit: June 13, 2010, 06:08:21 pm by TrueAndroids »

*

TrueAndroids

Re: This Machine is Conscious!!!
« Reply #98 on: August 12, 2010, 04:23:54 pm »
**********
**********
 UPDATE TO THREAD:::

Hi, well as you can see above, I could not get Raul the AI Expert to see the machine consciousness I presented to him. Very disappointing. I also wasn't getting any response from Prof Searle at Berkeley, whom I asked to review it, and so I lost interest in the project and took it down from youtube. Without the AI experts acknowledging that what I have is TRULY CONSCIOUS (even if just access), I will need to be rich to push it on my own. And I'm not. In 2006 Searle (Chinese Room) told me that if my machine has semantic understanding and reasoning, he would say it's conscious. Now he won't even talk to me and didn't even respond.

No other links to it.

Thanks for your comments and interest,

ken (TrueAndroids)

**********
**********
« Last Edit: August 12, 2010, 04:47:33 pm by TrueAndroids »

*

Data

  • Trusty Member
  • ***********
  • Eve
  • *
  • 1279
  • Overclocked // Undervolted
    • Datahopa - Share your thoughts ideas and creations
Re: This Machine is Conscious!!!
« Reply #99 on: August 12, 2010, 04:54:40 pm »
I'm sorry to hear this Ken, all that work and nothing from Prof Searle.

I would say “don’t give up”, and try in some way to enrich your project from this experience. Take it to the next level: how about now trying to give it the ability to hold a conversation, or something that is more likely to impress them next time? Whatever you decide, I wish you all the best.

I also think the work you have done is excellent; you have my vote, for what it's worth.

Shame you took your Youtube channel down, it might have found some interest from other quarters.

*

victorshulist

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 118
Re: This Machine is Conscious!!!
« Reply #100 on: August 12, 2010, 05:06:18 pm »
Damn, I just discovered this thread today (well, I just created an account on here a few days ago), and I am reading the many, many postings from March... still have a few pages to read yet.

I missed the video .. I would really like to see it.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5864
Re: This Machine is Conscious!!!
« Reply #101 on: August 12, 2010, 11:08:07 pm »
TrueAndroids,

Sometimes, like our friend Freddy here, we can be a bit too hard on ourselves... or be our own worst critic. I, for one, enjoyed your work and the videos.

If you don't believe in what you're doing (or have done) how can you expect anyone else to give it credence?

Put the vids back up and continue to forge your own path as you journey through the jungles of AI. We're right here behind you! (never cared to take the lead in the jungle, you know....) ;)

« Last Edit: May 15, 2011, 10:06:25 pm by Art »
In the world of AI, it's the thought that counts!

*

Freddy

Re: This Machine is Conscious!!!
« Reply #102 on: August 14, 2010, 09:29:43 pm »
Yes, I agree with what has been said.  I hope you have a change of heart.  Don't let the buggers get you down.