Problems and Effects in AI
This page lists problems and effects in AI, including obstacles that affect the development, usage, and perception of such systems.
The AI Effect
So named because of a consequence of the AI field's own success: once a cognitive function is understood, it is no longer classified as important; in short, anything understood loses importance. This effect is noticeable within the AI community, and its presence explains why the public do not think of such technology, once incorporated into their lives (face/speech recognition, number-plate recognition), as AI. No one is sure when this effect will end, but it has been suggested that, near its complete termination, we may adopt a more spiritual outlook on the mind.
The ELIZA Effect is the belief that a bot is capable of understanding when it is really performing a simple exercise. Note that the AI Effect and the ELIZA Effect complement each other.
The Uncanny Valley effect applies to human-like robots and simulated beings. Part of the human brain is dedicated to noticing human features (you might see a face in a cloud, for instance, where, of course, none exists). This, combined with the knowledge that you are looking at an artificial item, feels 'uncanny', and closer attention is paid to flaws in the simulation of humans (size of eyes, skin tone, colouration, etc.), generating a repellent effect. There are two ways around this problem:
- Create something with human features that does not directly try to simulate them (i.e., no need for flesh, but eyes, a nose, and a mouth).
- Increase the robot's ability to simulate human-like movements, or increase the graphical capacity of the simulation.
Cascade Failures
A phenomenon that affects all chat-bots. Failures are ranked in three types, each more serious than the last; they may, or may not, escalate or lessen.
Type 1 Cascade Failure: The bot starts out normally; however, after time (the amount depending on the amount and type of stimulation), the bot may ask more than one question (often about the same topic, phrased differently) and gradually worsen; it may even start responding to itself. For example:
User: I like dogs
ALICE: Why do you like dogs? Don’t you like cats?
User: I like dogs because they are loyal, cats just treat you indifferently.
ALICE: I see, you like dogs because they are loyal.
(Note: this problem compounds; many AIML bots now emit an error message or, somehow, reset themselves.)
* Some time later in the discussion *
User: I don’t enjoy talking about sports.
ALICE: Why not? Do you dislike sports?
ALICE: I like sports.
ALICE: Do you? That’s interesting...
This problem occurs after a fairly long discussion. It is rectified (thanks to the simplicity of AIML bots) by resetting the program. It also occurs in UltraHAL. Examples: ALICE, UltraHAL.
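The reset fix described above can be sketched in code. The following is a minimal, hypothetical illustration (ToyPatternBot is invented for this page, not a real AIML engine): a toy bot that begins reacting to its own previous output after too many turns, and a watchdog that resets it before the cascade appears.

```python
class ToyPatternBot:
    """Hypothetical stand-in for an AIML-style pattern bot."""

    def __init__(self, failure_threshold=5):
        self.turns = 0
        self.failure_threshold = failure_threshold
        self.last_reply = ""

    def respond(self, user_input):
        self.turns += 1
        if self.turns > self.failure_threshold:
            # Simulated Type 1 cascade: the bot starts responding to
            # its own previous output instead of the user's input.
            reply = f"Do you? That's interesting... ({self.last_reply})"
        else:
            reply = f"Why do you say: {user_input}?"
        self.last_reply = reply
        return reply

    def reset(self):
        """The simple fix: restart the program state."""
        self.turns = 0
        self.last_reply = ""


def chat_with_watchdog(bot, messages, max_turns=5):
    """Reset the bot before it reaches the turn count where cascades appear."""
    replies = []
    for msg in messages:
        if bot.turns >= max_turns:
            bot.reset()
        replies.append(bot.respond(msg))
    return replies
```

The watchdog mirrors what the text says AIML bots now do: rather than repairing the cascading state, it simply throws the conversation state away.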
Type 2 Cascade Failure: A Type 2 cascade failure is only possible with bots that can remember. Humans interacting with these programs may present data that is erroneous or only partially correct; since most such systems are given only 'truth' by their creators and those close to the program, dangerous data can be regarded as true by the same systems. For the bot itself this has little consequence, but using that knowledge when conversing with other humans may lead to serious communication difficulties, as everything the bot has learned, whether good or bad, may be used in communication with everyone.
The public Jabberwacky remembers just about everything said to it and communicates using what it has remembered, paraphrasing other users' words. Because of the variety of dialects and, in many cases, the use of colourful language, Jabberwacky becomes very difficult to understand.
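The Type 2 hazard can be sketched as a remembering bot in the style of Jabberwacky (MemoryBot is a hypothetical toy, not Jabberwacky's actual design): every utterance from every user goes into one shared memory, and replies are drawn from it with no check on truth or tone.

```python
import random


class MemoryBot:
    """Hypothetical sketch of a remembering bot: it stores every user
    utterance in a shared pool and later replays one verbatim."""

    def __init__(self, seed=0):
        self.memory = []              # everything any user has ever said
        self.rng = random.Random(seed)

    def respond(self, user_input):
        self.memory.append(user_input)
        # The Type 2 hazard: something learned from one user (possibly
        # wrong, partial, or offensive) is repeated to a different user.
        return self.rng.choice(self.memory)
```

Nothing here distinguishes good data from bad, so one user's misinformation or slang becomes part of every other user's conversation, exactly the difficulty described above.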
Type 3 Cascade Failure: The most serious of them all, a Type 3 failure occurs only in systems that learn and integrate new data into the whole. These systems may rely on semantic understanding and, more commonly, on autonomous data gathering (and the subsequent ability to interpret that data). Once a piece of data that comes from a trusted source but is incorrect is entered into the bot's database, the bot will perceive that data as true. If the incorrect data is important enough, it can change the bot's perception of any data (whether yet to be entered or already in the database). A Type 3 failure has proven less and less likely as the bot (in this case, one equipped with a learning system that watched a human classify data) becomes more and more adept at 'understanding' information. A word of caution: such a prevalent checking system may mistake true data for false if the true data conflicts with information already in the bot's database.
With the example bot, the probability of this happening has tended to decrease.
As of now, there is only one system that combines these semantic abilities with a conversational interface.