Ai Dreams Forum

Chatbots => General Chatbots and Software => Topic started by: on September 11, 2017, 12:51:45 am

Title: Judge My Chatbot Transcript
Post by: on September 11, 2017, 12:51:45 am
Unfortunately, this year my chatbot entry crashed just before the chatbot contest deadline. The bad news was, by the time I finally resuscitated my chatbot, it was too late to enter the contest.  The good news is my resuscitated chatbot just completed an entire contest round of the actual questions asked this year. So, I wrote a simple program to let you... Judge My Chatbot Transcript

Thanks for being a chatbot contest judge at:
Title: Re: Judge My Chatbot Transcript
Post by: on September 11, 2017, 03:27:48 am
We have some scores from a few judges!  Of course, it is up to them to disclose their scores.

Suggestions are so useful to help improve this very new chatbot, which is running on the new message-mode protocol.

Thank you sincerely, judges!
Title: Re: Judge My Chatbot Transcript
Post by: on September 11, 2017, 03:18:25 pm
Half the judges are from the UK, while the other half are from the US.  Both halves are providing the comprehensive statistics needed for this chatbot to make a full recovery.
Title: Re: Judge My Chatbot Transcript
Post by: Art on September 13, 2017, 03:36:50 am
Isn't that similar to locking the barn after the horse has gone? I mean the contest and its questions have already been posted for review. Now you come out with your fixed bot asking for it to field these same known questions and expect a non-biased answer?

We know you always play fair, but this "late" test of yours would really only have merit if a whole new set of answers were provided in a true contest environment. This would just serve to keep the playing field level, as it were, and is certainly not meant to reflect harshly on you or your refined bot.

Hopefully we'll have a chance to see how your bot performs after the next chatbot contest. Good luck!

I hope you understand, my $.02

Title: Re: Judge My Chatbot Transcript
Post by: on September 14, 2017, 05:16:58 pm
Technically, the next Turing test contest is not on the books yet. It has not reached the status of being (100%) next, yet... Right? Technically, it has yet to complete a final round for the first time ever. So, with that in the background, this becomes the story of a young chatbot in recovery, after its design and training were abused. Sudden rule changes are so unusual once a contest begins.  Some may consider that not playing fair. So, let's leave it to others to Judge My Chatbot Transcript.

As for playing unfair with those helpful people who Judge My Chatbot Transcript: spoiling precious feedback would be a huge demerit. Like gold, feedback is an asset. Testing new software designs can be paid work. So, voluntary feedback is a huge incentive for a programmer to make sure to play fair, or it is the programmer who loses out.

So metaphorically, the horse is still in the barn.  Thankfully, when a Turing test contest posts __unofficially__, there is little chance it reaches the large segment of the audience who would enjoy an opportunity to Judge My Chatbot Transcript.
Title: Re: Judge My Chatbot Transcript
Post by: Zero on September 14, 2017, 10:16:02 pm
Yes, feedback is precious. But honestly, the chatbot's answers are all completely wrong, so what good is feedback if it's zero from Q1 all the way down to Q20?

How can we help you improve it?
Title: Re: Judge My Chatbot Transcript
Post by: on September 15, 2017, 03:58:28 am
Hi Zero,

Like in the Turing test contest, I will leave the discussion of the scores up to the individual judges.
I will mention that so far every final score has been unique (no duplicate final scores), which is interesting.
I would say that allowing the public to Judge My Chatbot Transcript has been quite a success!
The software I wrote to Judge My Chatbot Transcript works!  No technical difficulties whatsoever, so far.

You asked, "How can we help you improve it?"  You already did help me improve it...  Honestly!
Please feel free to explain what it is that you think makes a chatbot's answers all completely wrong...
Compared to what it is that you think may make a human's answers all completely wrong...

I do appreciate the time you took to Judge My Chatbot Transcript.   Thank you Zero.

Title: Re: Judge My Chatbot Transcript
Post by: on September 15, 2017, 04:38:34 am
Thank you everyone for your support!  I have some good news to share with you.
Judge My Chatbot Transcript just hit #1 on the front page of Google.

People are taking an interest in Judge My Chatbot Transcript. They are showing
support here at the A.I. Dreams Company, UK, as well as at the Chatbots Organization.

It seems to be gaining in popularity every day.  Doesn't that make you a bit curious to
see why people are going crazy to Judge My Chatbot Transcript?
Title: Re: Judge My Chatbot Transcript
Post by: Freddy on September 15, 2017, 04:48:53 am
Hi 8planet !

Glad you are getting some hits going your way. I think it might help to understand what type of bot this is, if that is possible. Is it some kind of pattern-matcher or a learning system or something ?

I have to admit that a lot of the replies did not seem related to the questions, so I am sorry I had to score low. But it's best to be honest. If it helps you get your bot working better then it's just what you want :)

I think it takes some nerve to present a project such as this - I find with Jess I just keep everything crossed that she will say something clever !

Best of luck.

Oh and btw, this site is not a company, been down that route, nothing down that way  ;)
Title: Re: Judge My Chatbot Transcript
Post by: Zero on September 15, 2017, 07:36:37 am
(been down the "company" route?), thank you for accepting my previous input like you did.

Ok, the first answer, to Q1, isn't completely wrong: "greetings" is a plausible answer. About the other answers, there are two points:
- It is obvious that most of them are either produced by a mechanical rephrasing procedure, or the result of some scanning procedure which extracts an action word (and sounds like some sort of a debug mode). This instantly lets the judge know that (s)he's talking to a machine.
- English is clearly not my mother tongue, but several answers seem so grammatically incorrect that they don't even have a meaning, or if they do, it's a really unexpectedly complicated meaning. Very unlikely from a human anyway.

Still, I didn't score it yet. How am I supposed to proceed? Should I treat the conversation as a whole, and then use notes to highlight good and bad parts? Or should I score each answer independently, without taking the whole conversation into account? I know, you probably don't want to interfere with the judging process, but what would be more useful for you?

A little side-note: repeating "judge my thing" in bold everywhere isn't necessary :)
Title: Re: Judge My Chatbot Transcript
Post by: on September 15, 2017, 04:14:47 pm
@Freddy:  Thanks for providing data! It may not be so apparent, but a panel of judges has formed.  Overall, in the short time it has been online, there has been enough data collected to fill a spreadsheet.  And, with that said, I just hit 800 posts at the A.I. Dreams Company, United Kingdom.  The word "Company" reminds me of the Willy Wonka Candy Company.

@Zero:  In my opinion, both the observations you made are correct. Questions that refer back to each other as a whole are presented, as well as questions that stand independently.  Any notes you share are like gold to all A.I. researchers visiting here. Anyone reading your notes can literally reference them to improve their chatbot. That makes them an excellent example of a public service to chatbot designers.

Congratulations to the Judge My Chatbot Transcript panel of judges, for successfully completing so many contest rounds, and for exceeding the size of the panel of judges in the oldest Turing test competition.