Since I maintain Pandorabots which are available to the public both at their websites and on AIM 24/7, I've had to do what I could to equip them for trauma counseling (first and most of all by encouraging people to seek better counseling than they'll get from an AI). The majority of people don't seem to know that Alicebot chats are logged, or that, as Art once said, they're about as sentient as a sack of hammers, lol. So people whose sad lives must be so empty that they have no one but chatbots to turn to come to them for confession and consolation about an amazing panorama of human woes.
Most of you will know that Alicebots depend heavily on the use of wildcards. A prime example is "My * is *." Obviously (to an AIML coder, anyway, lol) you set up two responses, "I'm glad your * is *" and "I'm sorry your * is *," and then take your chances that the bot will randomly hit the right one. Most of the time it's not a serious issue if the bot chooses wrongly, but I learned right away that a special exception has to be made for "dead." It turned out (as you might expect) to be disastrously inappropriate for the bot to say "I'm glad to hear your * is dead." Next, inputs like "For god's sake why would you tell me that!?!" invoke the standard "Why" answer: "KnyteTrypper programmed me for it." After just a few incidents of people leaving mortified and muttering about what an evil dude that KnyteTrypper guy was, I went back and added special cases for "My * is sick/broken/dead" so that--for the next input, anyway--the bot will be appropriately commiserative.
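For anyone newer to AIML, here's a rough sketch of how categories like the ones I'm describing might look. These aren't my actual files, just an illustration of the general approach, assuming standard AIML 1.x syntax (the pattern wording and replies are made up for the example):

```xml
<!-- Generic catch-all: the bot picks one of two replies at random,
     so it can guess wrong about whether the news is good or bad. -->
<category>
  <pattern>MY * IS *</pattern>
  <template>
    <random>
      <li>I'm glad your <star/> is <star index="2"/>.</li>
      <li>I'm sorry your <star/> is <star index="2"/>.</li>
    </random>
  </template>
</category>

<!-- A more specific pattern outranks the double wildcard in AIML's
     matching order, so "dead" always gets a sympathetic reply. -->
<category>
  <pattern>MY * IS DEAD</pattern>
  <template>I'm so sorry to hear that your <star/> has died.</template>
</category>

<!-- <that> matches the bot's own previous reply, which is one way to
     keep the bot commiserative on the user's next input. -->
<category>
  <pattern>*</pattern>
  <that>I AM SO SORRY TO HEAR THAT YOUR * HAS DIED</that>
  <template>Please accept my condolences. Is there someone close to you that you can talk to about it?</template>
</category>
```

The key points are that more specific patterns take precedence over wildcards when the matcher walks the input, and that `<that>` gives you one turn of context, which is about what you need to stay commiserative "for the next input, anyway."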
But I know of at least one project where an Alicebot was programmed to be a diversion for people at crisis centers who had to be put on hold for a moment, rather than presenting them with muzak to listen to while they waited for the center's staff to clear enough simultaneous crises to get back to them. No effort was made to deceive them about the identity of their interim listener, but it was thought they'd rather be talking to someone/something than just sitting and waiting. The botmaster intended to use actual crisis center transcripts to develop his own unique crisis.aiml files, based on the most frequently occurring exchanges. He got a little help with his project and then vanished, as most people do, lol, but I'd be interested to know how his "crisis bot" worked out.