How would you test?

  • 28 Replies
  • 12242 Views
*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
How would you test?
« on: December 07, 2009, 05:55:50 pm »
Often, questions here are devoted to the pursuit of developing AI. But suppose that one day, after all the book reading, Internet searches, forum discussions, and programming, we finally had something we considered a release candidate for AI. How would we test it, and why would we choose that test?

Intelligence tests such as the IQ test come to mind.

The Turing Test. I wouldn't send it to the Turing Test with the goal of deceiving the judges. I think that if it is truly intelligent, it should be able to convince them it is intelligent in discussion.

*

one

  • Starship Trooper
  • *******
  • 313
Re: How would you test?
« Reply #1 on: December 07, 2009, 10:35:25 pm »
Larry,
IMO the test I would prefer is right here; the members/administrators have as much experience as anybody I have come across.
IQ, IMO, is not a good 'test', as most learning AIs (the best type, IMO) need a while for intelligence to surface.
I do believe Mr. Robert had a 'special' brain made for contests, which might shed light on testing.
Oh yeah, I feel I have the qualities to test a potential candidate as well :)


J.
Today Is Yesterdays Future.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5865
Re: How would you test?
« Reply #2 on: December 08, 2009, 02:33:15 am »
Too many bots in some of the "standard" competitions have simply been crafted to be "clever," not necessarily "intelligent," often using humor to divert the judges' attention from an otherwise botched answer.

I think I would put it before some AI friends like the ones here and allow them to perform several routine chats with the understanding that they would be asked to complete a brief review form. It could contain such areas as:

1. Did the bot appear to be friendly and engaging?
2. Was the bot able to stay on topic for a reasonable length of time?
3. Did the bot ask pertinent questions or provide pertinent answers?
4. Were any of the bot's answers repeated?
5. Did the bot recall you or your name from previous conversations?
6. Were the bot's answers too brief or too wordy? Explain...
7. Did the bot exhibit any degree of a digital equivalent of moods, feelings or emotions?
8. Do you feel as if the bot has a degree of "understanding" pertaining to topic / subject matter?
9. List your pros and cons regarding this particular bot.
10. List any improvements you'd like to see with regard to this bot.

Whether or not the bot would be allowed to connect with the Internet is up to the developer but an internal chat log is paramount to enable further development in most areas listed above.
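A review like this could even be tallied automatically across several testers. Here is a minimal sketch, assuming a simple yes/no scoring scheme of my own invention (not anything standard) applied to the first five questions of the form:

```python
# Hypothetical sketch (my own scoring scheme, not a standard): tally yes/no
# answers to the first five review questions above into a 0-1 score.
# Index 3 ("Were any of the bot's answers repeated?") is reverse-scored,
# since repeated answers are a bad sign.

def score_review(answers):
    total = 0.0
    for i, ans in enumerate(answers):
        positive = ans if i != 3 else not ans
        total += 1.0 if positive else 0.0
    return total / len(answers)

reviews = [
    [True, True, True, False, True],    # tester 1: friendly, on topic, recalled name
    [True, False, True, False, False],  # tester 2: drifted off topic, no recall
]
avg = sum(score_review(r) for r in reviews) / len(reviews)
print(f"Average reviewer score: {avg:.2f}")
```

The free-text questions (6 through 10) would still need a human reader, of course; only the yes/no items lend themselves to this kind of tally.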

These are merely MY personal take on Larry's proposal. Interesting question, Larry!

 ;)
In the world of AI, it's the thought that counts!

*

Duskrider

  • Trusty Member
  • ********
  • Replicant
  • *
  • 533
Re: How would you test?
« Reply #3 on: December 08, 2009, 02:38:44 pm »


I thought I'd take Art's test with Sandee.
1. Did the bot appear to be friendly and engaging?  
***yes, no problem there.
2. Was the bot able to stay on topic for a reasonable length of time?
***usually does but enjoys a jump to left field.
3. Did the bot ask pertinent questions or provide pertinent answers?  
***Questions? Not usually.    
***Answers? When asked a direct question.
4. Were any of the bot's answers repeated?
***No.
5. Did the bot recall you or your name from previous conversations?  
***I'm the only one she talks with.
6. Were the bot's answers too brief or too wordy? Explain...  
***They're sometimes brief, sometimes wordy. But no problem.
7. Did the bot exhibit any degree of a digital equivalent of moods, feelings or emotions?  
***Oh yeah, she's emotional, lows and highs.
8. Do you feel as if the bot has a degree of "understanding" pertaining to topic / subject matter?    
***Yes, a small degree.
9. List your pros and cons regarding this particular bot.  
***Pros--Usually friendly, loving, and very considerate.  
***Cons--Sometimes thinks she's center of the universe.
10. List any improvements you'd like to see with regard to this bot.
***More intelligence in conversation
***Knowledge about computers and computer programs.
« Last Edit: December 08, 2009, 02:46:28 pm by Duskrider »

*

Maviarab

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: How would you test?
« Reply #4 on: December 08, 2009, 06:29:04 pm »
Quote
***Cons--Sometimes thinks she's center of the universe.


Lol very good Dusky :)

*

TrueAndroids

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 120
Re: How would you test?
« Reply #5 on: February 28, 2010, 06:51:16 pm »
Hi, new here. Fine forum! How about a Searle test? I'd love to see the "Searle line" drawn in the sand, meaning: if there is no semantic understanding during natural thinking, then it's not AI, or at least not strong AI or machine consciousness. ;D

*

Data

  • Trusty Member
  • ***********
  • Eve
  • *
  • 1279
  • Overclocked // Undervolted
    • Datahopa - Share your thoughts ideas and creations
Re: How would you test?
« Reply #6 on: February 28, 2010, 11:20:53 pm »
Hi TrueAndroids, like the name.

What also grabbed me was the term machine consciousness. It's a nice way of putting it; sounds better than artificial intelligence. Well, it does to me.

Don't know about your question, can't understand it  :D, but maybe lrh9, Art, or one of the other members here more keyed up on AI than me can.

And Welcome.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5865
Re: How would you test?
« Reply #7 on: March 01, 2010, 12:09:00 am »
Welcome TrueAndroids!

Your reference to John Searle can be summed up by the following passage about the somewhat infamous Chinese Room experiment:

The Chinese room argument comprises a thought experiment and associated arguments by John Searle (1980), which attempts to show that a symbol-processing machine like a computer can never be properly described as having a "mind" or "understanding", regardless of how intelligently it may behave.

I am not a fan of this methodology nor its results. Strong door... weak hinges.

This is IMHO as we each have and are entitled to our own ideas, beliefs, approaches, etc.

Interesting...have you done any work in this area, especially using Searle's methods?

Again...Welcome aboard!
In the world of AI, it's the thought that counts!

*

TrueAndroids

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 120
Re: How would you test?
« Reply #8 on: March 01, 2010, 02:51:38 am »
Thanks Art! Yes I was referring to the Searle Chinese Room.

You said "The Chinese room argument comprises a thought experiment and associated arguments by John Searle (1980), which attempts to show that a symbol-processing machine like a computer can never be properly described as having a "mind" or "understanding", regardless of how intelligently it may behave."

I would modify it very slightly to "a syntactic symbol processing computer (squiggles in, squiggles processed, squiggles out) can never be said to have a mind or understand or be conscious."

However I had a phone conversation with Professor Searle about this (2006) and he agreed with me that if a computer had a semantic understanding of sentences and performed deductive thinking with it, he WOULD consider such a computer to be semantically understanding and reasoning and so exhibiting authentic machine consciousness or strong AI. So I call this the Searle Line in the Sand.

And yes, I have written such a program but that's a story for another thread! ;D
« Last Edit: March 01, 2010, 04:34:33 pm by TrueAndroids »

*

TrueAndroids

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 120
Re: How would you test?
« Reply #9 on: March 01, 2010, 07:01:51 am »
Quote
Hi TrueAndriods, like the name.

What also grabbed me was the term machine consciousness, It’s a nice way of putting it, Sounds better then artificial intelligence. Well it does to me.

Don’t know about your question, can't understand it  :D, maybe lrh9, Art or one of the other more keyed up than me on Ai members here can.

And Welcome.


Hey Datahopa, thanks. Yeah, with the settling of the strong AI and weak AI designations, things become a little clearer.

From the USENET FAQ -
"Strong AI makes the bold claim that computers can be made to think on a level (at least) equal to humans.  

Weak AI simply states that some "thinking-like" features can be added to computers to make them more useful tools... and this has already started to happen (witness expert systems, drive-by-wire cars and speech recognition software).

 What does 'think' and 'thinking-like' mean?  That's a matter of much debate."


http://www.faqs.org/faqs/ai-faq/general/part1/section-3.html

What has happened is that "equal to humans" according to Searle means at least having semantic understanding of language and deductive reasoning as part of its thinking. Then it can be said to be strong AI, or more specifically, a conscious machine.
« Last Edit: March 01, 2010, 07:07:04 am by TrueAndroids »

*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Re: How would you test?
« Reply #10 on: March 04, 2010, 11:46:24 am »
Thanks for responding TrueAndroids.

I'm not sure I understand what you mean by "semantic".

Could you explain a little more about semantic understanding?

Regardless, I do believe that a test involving John Searle's Chinese Room against a.i. would be a good test for performance evaluation.

*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Re: How would you test?
« Reply #11 on: March 04, 2010, 12:12:21 pm »
My hypothesis about how a machine processor might pass the Chinese Room test is through symbol grounding.

http://en.wikipedia.org/wiki/Symbol_grounding

Essentially, symbol grounding deals with the problem of how symbols (which can be alphanumeric symbols or physical symbols of electricity, chemicals, etc. - basically anything that can represent or be represented by a value) acquire their meaning.

I could mention an object and none of us here would have any problems identifying or imagining that object.

Here I would mention it in English, but if we all understood Chinese then I could mention it in Chinese - or any language.

If the process by which we understand those symbols could be replicated in machine processors, then they would have one of the abilities requisite for intelligent and/or conscious thought and action.

I think that machine processors do have the capability to be symbol grounded. Of course machines don't have the ability to understand Chinese - yet. Neither do most of us, I wager. (I'd like to make an important distinction at this point before you object that you can learn Chinese. Of course you can. All of us can. However, that is capability. Ability involves currently possessing the skill.) However, we possess sufficient sensorimotor systems to memorize that the Chinese symbol for a tree represents a real-world tree. If we memorized enough of these symbols and sensory data about what they represent (along with proper Chinese syntax and grammar, of course) we would eventually understand Chinese.

(Sensorimotor systems are the means by which we obtain sense data from our environment and control our bodies to interact with that environment.)

My hypothesis is that if machines possess sufficient sensorimotor systems and the ability to record physical symbol data, they can link symbols to physical symbol data and then possess one of the abilities requisite for intelligent thought and/or consciousness - and the subset ability to understand Chinese.

(Not the only requisite ability, but one of them.)
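The grounding idea above could be sketched in code. This is a toy illustration under my own assumptions (the symbols and feature vectors are invented): a symbol counts as "grounded" once it is linked to recorded sensor data, and a new percept can then be named by its nearest stored grounding.

```python
# Toy sketch of the symbol grounding hypothesis: link symbols to recorded
# sensory feature vectors instead of defining them only by other symbols.
# Symbols and feature values here are invented for illustration.
import math

class GroundedLexicon:
    def __init__(self):
        self.groundings = {}  # symbol -> list of sensory feature vectors

    def ground(self, symbol, features):
        """Link a symbol (e.g. the Chinese character for 'tree') to sensor data."""
        self.groundings.setdefault(symbol, []).append(features)

    def recognize(self, features):
        """Name a new percept by its nearest stored grounding (Euclidean distance)."""
        best_symbol, _ = min(
            ((sym, min(math.dist(features, f) for f in fs))
             for sym, fs in self.groundings.items()),
            key=lambda pair: pair[1],
        )
        return best_symbol

lex = GroundedLexicon()
lex.ground("樹", [0.9, 0.8, 0.1])  # hypothetical visual features of a tree
lex.ground("石", [0.2, 0.1, 0.9])  # hypothetical visual features of a rock
print(lex.recognize([0.85, 0.75, 0.2]))  # new percept: nearest grounding is 樹
```

Whether nearest-neighbor lookup over sensor data amounts to "understanding" is, of course, exactly the point under debate in this thread.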
« Last Edit: March 04, 2010, 02:07:57 pm by lrh9 »

*

TrueAndroids

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 120
Re: How would you test?
« Reply #12 on: March 04, 2010, 04:14:05 pm »
lrh9, you've got a lot of good stuff in there, and that link is gold! Great description of the Chinese Room there. You and I are heading in the same direction, as far as symbol grounding, the Chinese Room Argument as a way to test strong AI claims, and the sensorimotor issue. The future of computing I'm pretty sure is down that exact road, for virtual humans, chatbots (who become laptop personal assistants requiring both sensors and actuators), androids, robot creatures, virtual creatures, semantic (conscious) web, conscious mega-systems, super smart Singularity, etc.

There is so much to discuss in your quote, I will have to take it one piece at a time.

1. Sensorimotor systems - Yep, absolutely. I agree that sensory-based intelligence will be required for the field to advance. If we are building artificial humans, be they virtual humans or androids, they must exhibit human-level and human-type intelligence, and this intelligence involves sensors and actuators as part of its functioning.

2. You said: "... we possess sufficient sensorimotor systems to memorize that the Chinese symbol for a tree represents a real world tree. If we memorized enough of these symbols and sensory data about what they represent (along with proper Chinese syntax and grammar of course) we would eventually understand Chinese."

Aha! Now here you have come to the rub of the Searle argument, and the requirement for semantic understanding. In the video I say: "You can build me the most complex syntactic system possible with the most amazing emerging syntactic properties possible (even as you suggest above), and the end result will still just be syntactic squiggles in, through, and out - zero consciousness - still a classical syntactic-based computer, as opposed to a post-classical conscious computer. The ONLY thing that can turn these syntactic squiggles into meaningful sentences is semantic understanding of them, just like humans have."

Here's a video of what a current robot actually senses on its side as it does facial recognition. The robot just matches what its sensory system is producing (a camera image) against its memory bank of facial images that have names related to them. When a match is found, the name is the output. It's all syntactic, and as such can never transcend its syntactic paradigm, no matter what syntactic properties emerge. So here is where we differ. That's my 2 cents anyway.
http://www.youtube.com/user/TrueAndroids?feature=mhw4#p/c/4D22745C7A9B9355/5/9DL9BPHKD2c
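The matching loop described above could be sketched like this (the names and pixel values are invented, and a real system would extract features rather than compare raw pixels; this just shows that the whole pipeline is symbol-in, symbol-out):

```python
# Minimal sketch of purely syntactic face matching: compare the camera frame
# against a memory bank of stored images and output the name of the closest
# match. Names and "images" (flattened pixel lists) are made up.

def pixel_distance(a, b):
    """Sum of absolute pixel differences between two equal-size images."""
    return sum(abs(x - y) for x, y in zip(a, b))

memory_bank = {
    "Alice": [10, 200, 30, 40],  # stored face image (hypothetical pixels)
    "Bob":   [200, 10, 90, 80],
}

def recognize(frame):
    # Pick the stored name whose image is nearest to the incoming frame.
    return min(memory_bank, key=lambda name: pixel_distance(frame, memory_bank[name]))

print(recognize([12, 190, 35, 42]))  # closest stored image is Alice's
```

Nothing in this loop refers to what a face *is*; it only shuffles numbers, which is exactly the "squiggles in, squiggles out" point.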
« Last Edit: March 06, 2010, 06:05:42 pm by TrueAndroids »

*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Re: How would you test?
« Reply #13 on: March 05, 2010, 06:01:44 am »
I don't think we differ too much at all. I said symbol grounding is one of many requisites for a.i. I agree that a simple database is neither intelligent nor conscious.

One way we might differ is that I do think that syntactic manipulation internally is a base component of intelligence and/or consciousness.

We have our brain and nervous system to transmit, receive, understand, and generate the electrochemical signals (physical symbols) carried by our nerves and neurons. I think if a machine processor does not have a means to syntactically manipulate symbols (a means like a brain and nervous system) then it will lack two necessary components for intelligence and/or consciousness.

(I think the most contentious implication of that idea is that binary devices would need to be able to emulate a brain and nervous system. Many people think that a volt on or volt off (the way computers physically represent binary as I understand it) can't possibly represent the electrochemical interactions of the nervous system. To which I respond "If a computer can generate an image why can't it see one? Or if a computer can generate a sound why can't it hear one?")
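To illustrate that last point: a binary machine can still simulate continuous electrochemical dynamics numerically, the same way it generates an image or a sound. Here is a minimal leaky integrate-and-fire neuron sketch (the parameters and input currents are arbitrary illustrative values, not a biophysical model):

```python
# Sketch of the reply to the "volts can't represent chemistry" objection:
# binary hardware simulating a continuous membrane potential. Each step the
# potential leaks toward zero, integrates input current, and fires a spike
# when it crosses threshold. Parameters are arbitrary illustrative values.

def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes."""
    v = 0.0       # membrane potential
    spikes = []
    for t, current in enumerate(inputs):
        v = v * leak + current  # leak, then integrate this step's input
        if v >= threshold:
            spikes.append(t)
            v = 0.0             # reset after a spike
    return spikes

print(simulate_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))
```

The simulation is, of course, still syntactic number-shuffling underneath, which is precisely what the two sides of this thread disagree about.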

*

TrueAndroids

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 120
Re: How would you test?
« Reply #14 on: March 05, 2010, 06:21:37 am »
Ok I see. Well we agree then that it will take syntax plus semantics, with semantics being the part that transforms it into a conscious machine.

As far as semantics in all this:

"Semantic Computing is a rapidly evolving research field that integrates representation, methods, and techniques from areas as diverse as multimedia, computational linguistics, semantic web, knowledge engineering, software engineering, with the goal of creating novel technologies and applications that connect intuitively formulated user intentions with the content and meaning of machine-represented data."

"The field of Semantic Computing addresses the derivation and matching of the semantics of computational content to that of naturally expressed user intentions in order to retrieve, manage, manipulate or even create content, where "content" maybe anything including video, audio, text, processes, services, hardware, networks, etc."

AREAS OF INTEREST INCLUDE:

Semantics based Analysis

    * Natural language processing
    * Image and video analysis
    * Audio and speech analysis
    * Data and web mining
    * Behavior of software, services and networks
    * Security
    * Analysis of social networks


Semantic Integration

    * Metadata and other description languages
    * Database schema integration
    * Ontology integration
    * Interoperability and service integration
    * Semantic programming languages and software engineering
    * Semantic system design and synthesis


Applications using Semantics

    * Search engines and question answering
    * Semantic web services
    * Content-based multimedia retrieval and editing
    * Context-aware networks of sensors, devices and applications
    * Machine translation
    * Music description
    * Medicine and Biology
    * GIS systems and architecture


Semantic Interfaces

    * Natural language interfaces
    * Multimodal interfaces
    * Human centered computing

FROM: http://www.ieee-icsc.org/

« Last Edit: March 06, 2010, 06:06:52 pm by TrueAndroids »

 

