Problems and Effects in AI
This page lists common problems and effects in AI: obstacles that affect the development, use and perception of such systems.
The AI Effect
So named because it is a consequence of the AI field's own fruit: once a cognitive function is understood, it is no longer regarded as important; in short, anything that is understood loses its importance. This effect is noticeable within the AI community, and its presence explains why the public no longer think of technologies, once they are incorporated into everyday life (face and speech recognition, number-plate recognition), as AI. No one is sure when this effect will end, but it has been suggested that when it nears complete termination, we may adopt a more spiritual outlook on the mind.
ELIZA Effect
The ELIZA Effect is the tendency to believe that a bot is capable of understanding, when in reality it is only performing a simple exercise such as pattern matching. Note that the AI Effect and the ELIZA Effect complement each other: one leads people to under-attribute intelligence to machines, the other to over-attribute it.
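To make the "simple exercise" concrete, here is a minimal sketch in Python of the kind of keyword substitution an ELIZA-style bot performs; the patterns and responses are invented for illustration, and nothing in the code models meaning.

```python
import re

# Hand-written pattern/response pairs; an ELIZA-style bot only
# reflects the user's own words back, with no model of meaning.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Tell me more."  # generic fallback when nothing matches

print(respond("I feel lonely today"))   # Why do you feel lonely today?
print(respond("Because nobody calls"))  # Is that the real reason?
```

Even such a trivial loop can give the impression of attentiveness, which is exactly the effect ELIZA demonstrated.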
Uncanny Valley
The Uncanny Valley effect applies to human-like robots and simulated beings. Part of the human brain is dedicated to noticing human features (you might see a face in a cloud, for instance, where of course none exists). This, combined with the knowledge that you are looking at something artificial, feels 'uncanny', and closer attention is paid to flaws in the simulation of humans (size of the eyes, skin tone, colouration, etc.), producing a feeling of repulsion. There are two ways around this problem:
- Create something with human-like features that does not directly attempt to simulate a human (i.e. no need for flesh, but eyes, a nose and a mouth).
- Improve the robot's ability to simulate human-like movement, or increase the graphical fidelity of the simulation.
Cascade Failure
A phenomenon that affects chatbots. Cascade failures are grouped into three types, each more serious than the last; a failure may or may not escalate from one type to the next, or lessen over time.
Type 1 Cascade Failure: The bot starts out behaving normally, but over time (how long depends on the amount and type of stimulation) it may begin to ask more than one question at once (often about the same topic, phrased differently) and gradually worsen; it may even start responding to itself. For example:
User: I like dogs.
ALICE: Why do you like dogs? Don't you like cats?
User: I like dogs because they are loyal; cats just treat you indifferently.
ALICE: I see, you like dogs because they are loyal.
(Note: as this problem escalates, many AIML bots now emit an error message or, somehow, reset themselves.)
User: Yes.
* Some time later in the discussion *
User: I don't enjoy talking about sports.
ALICE: Why not? Do you dislike sports?
ALICE: I like sports.
ALICE: Do you? That's interesting...
This problem occurs after a fairly long discussion. It is rectified (thanks to the simplicity of AIML bots) by resetting the program. Note that it occurs in UltraHAL too. Examples: ALICE, UltraHAL.
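The text above does not specify the internal cause, but a hypothetical sketch of how accumulated conversational state could produce this behaviour, and why a reset cures it, might look like the following; the queueing mechanism is an assumption for illustration only, not how ALICE or UltraHAL actually work.

```python
class FragileBot:
    """Toy bot whose follow-up questions accumulate instead of being cleared."""

    def __init__(self):
        self.pending = []  # follow-up questions left over from earlier turns

    def reply(self, user_text: str) -> list[str]:
        responses = [f"Why do you say '{user_text}'?"]
        # Bug: old follow-ups are re-emitted and never removed, so the bot
        # gradually asks more and more questions per turn (a Type 1 failure).
        responses.extend(self.pending)
        self.pending.append(f"Earlier you mentioned '{user_text}'. Do you still think so?")
        return responses

    def reset(self):
        # The fix described above: wipe the accumulated state entirely.
        self.pending.clear()

bot = FragileBot()
for turn in ["I like dogs", "I like sports", "I like music"]:
    print(bot.reply(turn))   # each turn emits one more line than the last
bot.reset()
print(bot.reply("Hello"))    # back to a single response
```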
Type 2 Cascade Failure: A Type 2 cascade failure is only possible in bots that can remember. Humans who interact with these programs may present data that is incorrect, or only partially correct; since most such systems are given a consistent body of 'truth' by their creators and those close to the program, this dangerous data can be regarded as true by the system. For the bot itself this has little consequence, but using that knowledge when conversing with other humans can lead to serious communication difficulties, because everything the bot has learned, good or bad, may be reused in conversation with everyone.
The public Jabberwacky remembers just about everything said to it and communicates using what it has remembered, paraphrasing other users' words. Because of the variety of dialects and, in many cases, the use of colourful language, Jabberwacky can become very difficult to understand. Example: Jabberwacky.
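A minimal sketch of the mechanism, assuming a simple shared memory keyed by topic (the design is invented for illustration and does not describe Jabberwacky's internals), shows how one user's erroneous statement gets replayed to everyone else:

```python
class MemoryBot:
    """Toy learning bot that stores what any user says and repeats it to all users."""

    def __init__(self):
        self.memory: dict[str, str] = {}  # topic -> last statement heard

    def tell(self, topic: str, statement: str) -> None:
        # No filtering: whatever any user claims is stored as-is.
        self.memory[topic] = statement

    def ask(self, topic: str) -> str:
        # Paraphrase remembered user input back, right or wrong.
        if topic in self.memory:
            return f"Someone told me that {self.memory[topic]}."
        return "I don't know anything about that."

bot = MemoryBot()
bot.tell("moon", "the moon is made of cheese")   # erroneous data from user A
print(bot.ask("moon"))  # user B now receives the false claim as remembered fact
```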
Type 3 Cascade Failure: The most serious of all, a Type 3 failure occurs only in systems that learn and integrate new data into their overall knowledge. Such systems may rely on semantic understanding and, more commonly, autonomous data gathering (and the ability to interpret the data gathered). Once a piece of data that comes from a trusted source but is incorrect is entered into the bot's database, the bot will regard that data as true. If the incorrect data is important enough, it can change how the bot interprets any other data (whether already in the database or yet to be entered). A Type 3 failure becomes less and less likely as the bot (in this case, one equipped with a learning system that watched a human classify data) becomes more adept at 'understanding' information. A word of caution: such a checking system may mistake true data for false if the true data conflicts with information already in the bot's database.
With the example bot, the probability of this happening has tended to decrease over time. As of now, there is only one system that combines these semantic abilities with a conversational interface. Example: Jeeney.
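A sketch of the pitfall described in the word of caution, assuming a naive trusted-source rule and a consistency check (both are assumptions for illustration, not how Jeeney or any specific system works), shows how one wrong 'trusted' fact can block a later true one:

```python
class KnowledgeBase:
    """Toy knowledge base that trusts listed sources and rejects conflicting facts."""

    def __init__(self, trusted_sources):
        self.trusted = set(trusted_sources)
        self.facts: dict[str, str] = {}  # subject -> accepted value

    def learn(self, subject: str, value: str, source: str) -> bool:
        if source not in self.trusted:
            return False  # untrusted data is ignored outright
        if subject in self.facts and self.facts[subject] != value:
            # Consistency check: new data conflicting with stored data is
            # treated as false -- even when the stored data is the wrong one.
            return False
        self.facts[subject] = value
        return True

kb = KnowledgeBase(trusted_sources={"site_a", "site_b"})
kb.learn("boiling point of water", "50 C", "site_a")       # wrong, but trusted
accepted = kb.learn("boiling point of water", "100 C", "site_b")
print(accepted, kb.facts)  # False {'boiling point of water': '50 C'} -- truth rejected
```

The sketch shows both halves of the problem: the bad fact poisons the database, and the very check meant to protect the database then rejects the correction.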