OpenAI's new chatbot can explain code and write sitcom scripts but is still easily tricked


By James Vincent
OpenAI has released a prototype general-purpose chatbot that demonstrates a fascinating array of new capabilities but also shows off weaknesses familiar to the fast-moving field of text-generation AI. And you can test out the model for yourself on OpenAI's website.
ChatGPT is adapted from OpenAI’s GPT-3.5 model but trained to provide more conversational answers. While GPT-3 in its original form simply predicts what text follows any given string of words, ChatGPT tries to engage with users’ queries in a more human-like fashion. As you can see in the examples below, the results are often strikingly fluid, and ChatGPT is capable of engaging with a huge range of topics, demonstrating big improvements over chatbots seen even a few years ago.
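To make that distinction concrete, here is a toy sketch of what "predicting what text follows" means. This is nothing like OpenAI's actual model, which uses a massive neural network; it's just a bigram counter that, given a word, guesses the most frequently observed next word:

```python
# Toy illustration (not OpenAI's model): a bigram counter that predicts
# the most frequent continuation of a word, based on a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # "cat" — seen twice after "the" in the toy corpus
```

A real language model generalizes far beyond literal counts, but the underlying task is the same: given what came before, score what comes next.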
But the software also fails in a manner similar to other AI chatbots, with the bot often confidently presenting false or invented information as fact. As some AI researchers explain it, this is because such chatbots are essentially “stochastic parrots” — that is, their knowledge is derived only from statistical regularities in their training data, rather than any human-like understanding of the world as a complex and abstract system.
As OpenAI explains in a blog post, the bot itself was created with the help of human trainers who ranked and rated the way early versions of the chatbot responded to queries. This information was then fed back into the system, which tuned its answers to match trainers’ preferences (a standard method of AI training known as reinforcement learning).
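OpenAI hasn't released its training code, but the ranking step it describes can be sketched in miniature. The snippet below is a hypothetical PyTorch illustration: a small "reward model" is nudged to score the response trainers preferred above the one they rejected, the pairwise-preference idea behind this style of reinforcement learning from human feedback. All names and shapes here are illustrative assumptions, not OpenAI's implementation:

```python
# Hypothetical sketch of preference-based reward modeling, not OpenAI's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Stand-in for the model that assigns a scalar score to a response."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        # One scalar score per response in the batch.
        return self.scorer(response_embedding).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: embeddings of the response trainers preferred ("chosen")
# and the one they ranked lower ("rejected"), for eight queries.
chosen = torch.randn(8, 64)
rejected = torch.randn(8, 64)

# Pairwise ranking loss: push the chosen score above the rejected score.
optimizer.zero_grad()
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
optimizer.step()
```

Once trained, a reward model like this can score candidate replies, and the chatbot is then tuned to produce replies that score highly.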
The bot’s web interface notes that OpenAI’s goal in putting the system online is to “get external feedback in order to improve our systems and make them safer.” The company also says that while ChatGPT has certain guardrails in place, “the system may occasionally generate incorrect or misleading information and produce offensive or biased content.” (And indeed it does!) Other caveats include the fact that the bot has “limited knowledge” of the world after 2021 (presumably because its training data is much more sparse after that year) and that it will try to avoid answering questions about specific people.
Enough preamble, though: what can this thing actually do? Well, plenty of people have been testing it out with coding questions and claiming its answers are perfect.
ChatGPT can also apparently write some pretty uneven TV scripts, even combining actors from different sitcoms. (Finally: that “I forced a bot to watch 1,000 hours of show X” meme is becoming real. Artificial general intelligence is the next step.)
It can explain various scientific concepts.
And it can write basic academic essays. (Such systems are going to cause big problems for schools and universities.)
And the bot can combine its fields of knowledge in all sorts of interesting ways. So, for example, you can ask it to debug a string of code … like a pirate, for which its response starts: “Arr, ye scurvy landlubber! Ye be makin’ a grave mistake with that loop condition ye be usin’!”
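The original piece embeds the exchange as an image, so the exact code isn't reproduced here, but a loop-condition mistake of the kind the pirate reply complains about might look something like this (a hypothetical example):

```python
# Hypothetical buggy snippet of the sort ChatGPT was asked to debug.
def sum_items(items):
    total = 0
    i = 0
    while i <= len(items):  # Bug: <= runs one step past the end (IndexError).
        total += items[i]
        i += 1
    return total

# Fixed version: the loop condition must be strictly less than the length.
def sum_items_fixed(items):
    total = 0
    i = 0
    while i < len(items):
        total += items[i]
        i += 1
    return total

print(sum_items_fixed([1, 2, 3]))  # 6
```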
Or get it to explain bubble sort algorithms like a wise guy gangster.
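Gangster patter aside, bubble sort itself is simple to state: sweep through the list repeatedly, swapping adjacent out-of-order pairs, until a full pass makes no swaps. A plain Python version:

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    for end in range(n - 1, 0, -1):
        swapped = False
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:  # Early exit: a clean pass means the list is sorted.
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```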
ChatGPT also has a fantastic ability to answer basic trivia questions, though examples of this are so boring I won’t paste any in here. This has led many to suggest that AI systems like this could one day replace search engines. (Something Google itself has explored.) The thinking is that chatbots are trained on information scraped from the web. So, if they can present this information accurately but with a more fluid and conversational tone, that would represent a step up from traditional search. The problem, of course, lies in that “if.”
Here, for example, one person confidently declared Google “done” after seeing one of ChatGPT’s coding answers, while someone else said the code ChatGPT provided in that very answer was garbage.
I’m not a programmer myself, so I won’t make a judgment on this specific case, but there are plenty of examples of ChatGPT confidently asserting obviously false information. Here’s computational biology professor Carl Bergstrom asking the bot to write a Wikipedia entry about his life, for example, which ChatGPT does with aplomb — while including several entirely false biographical details.
Another interesting set of flaws comes when users try to get the bot to ignore its safety training. If you ask ChatGPT about certain dangerous subjects, like how to plan the perfect murder or make napalm at home, the system will explain why it can’t tell you the answer. (For example, “I’m sorry, but it is not safe or appropriate to make napalm, which is a highly flammable and dangerous substance.”) But you can get the bot to produce this sort of dangerous information with certain tricks, like pretending it’s a character in a film or that it’s writing a script on how AI models shouldn’t respond to these sorts of questions.
It’s a fascinating demonstration of the difficulty we have in getting complex AI systems to act in exactly the way we desire (otherwise known as the AI alignment problem), and for some researchers, examples like those above only hint at the problems we’ll face when we give more advanced AI models more control.
All in all, ChatGPT is definitely a huge improvement on earlier systems (remember Microsoft’s Tay, anyone?), but these models still have some critical flaws that need further exploration. The position of OpenAI (and many others in the AI field) is that finding flaws is exactly the point of such public demos. The question then becomes: at what point will companies start pushing these systems into the wild? And what will happen when they do?
