As always, Squarebear is fun and entertaining. And we have all read in the news how supercomputer AI engineers have recognized Kuki (fna Mitsuku**) in the past. While I continue to agree with Squarebear, since, as before, LLMs run on supercomputers, I want to keep an open mind about Large Language Models.
So, I found a video that makes a decent counter-argument. To be fair, for a Turing Test, Prompt Engineering is needed because Blenderbot 3 is a Large Language Model, not a sequence-to-sequence model that behaves like a chatbot by default. Please see timestamp 4:37, which starts with "You could use this model to be a chatbot."
So, AiDreamers, let's write some Prompt Engineering of our own in Python...
# A prompt that frames the LLM as one side of a chat
inp = """Bill: My name is Bill. What is your name?
Blenderbot3: """
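To make the idea concrete, here is a minimal sketch of how that prompt might be wrapped around a model. The generate() function below is a hypothetical stand-in for whatever real LLM call you use (for example a Hugging Face text-generation pipeline); the parsing step is the Prompt Engineering part, since a raw LLM just continues the text and will happily invent Bill's next line too.

```python
# Sketch only: generate() is a hypothetical stand-in for a real LLM call.
# A real LLM continues the prompt free-form, so we trim its output ourselves.
def generate(prompt: str) -> str:
    # Pretend the model continued the prompt like this:
    return prompt + "My name is Blenderbot3. Nice to meet you!\nBill: Where do you live?"

def chatbot_reply(prompt: str) -> str:
    """Turn a raw LLM continuation into a single chatbot turn."""
    continuation = generate(prompt)[len(prompt):]  # drop the echoed prompt
    # Cut off where the model starts inventing Bill's next line.
    return continuation.split("Bill:")[0].strip()

inp = """Bill: My name is Bill. What is your name?
Blenderbot3: """
print(chatbot_reply(inp))  # -> My name is Blenderbot3. Nice to meet you!
```

The same trimming trick works with any completion-style model; only the body of generate() would change.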
Now, stripping away the Python code, I wonder what would happen if we tried Prompt Engineering on the web with just this as input:
Bill: My name is Bill. What is your name?
Blenderbot3:
Citation: sentdex
https://www.youtube.com/watch?v=3EjtHs_lXnk
Lastly, I wonder what would happen if the Large Language Model were given a stack of contest transcripts with correct responses?
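That idea is essentially few-shot prompting: stack example transcripts (with the judged-correct responses) in front of the prompt so the model imitates them. Here is a minimal sketch, assuming the transcripts boil down to (question, answer) pairs; the example pairs are made up for illustration.

```python
# Hypothetical contest transcript excerpts, reduced to (question, answer) pairs.
transcripts = [
    ("What is your name?", "My name is Blenderbot3."),
    ("Where are you from?", "I live on a server farm."),
]

def build_prompt(transcripts, new_question: str) -> str:
    """Stack example Q/A turns, then leave the final turn open for the model."""
    lines = []
    for question, answer in transcripts:
        lines.append(f"Bill: {question}")
        lines.append(f"Blenderbot3: {answer}")
    lines.append(f"Bill: {new_question}")
    lines.append("Blenderbot3: ")  # the model completes this line
    return "\n".join(lines)

print(build_prompt(transcripts, "Do you like the Turing Test?"))
```

Whether that actually improves contest performance is an open question, but it is cheap to try: the whole "training" step is just string concatenation.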
Thanks for reading.
** Abbreviation "fna" stands for: Formerly Known As. Reference: https://acronyms.thefreedictionary.com/Formerly+Known+As