OpenAI’s ChatGPT is a big step toward a usable answer engine. Unfortunately, its answers are horrible.

ChatGPT, a recently released OpenAI application, gives users amazingly fluent answers to their questions, and many of those answers are shockingly wrong.

OpenAI hasn’t released an entirely new model since GPT-3 came out in June 2020, and that model was only fully released to the public about a year ago. The company is expected to release its next model, GPT-4, late this year or early next year. But as something of a surprise, OpenAI released an easy-to-use and amazingly lucid GPT-3-based chatbot called ChatGPT earlier this week.

ChatGPT responds to prompts in a simple, human-adjacent way. Looking for a cheesy conversation where the computer pretends to have feelings? Look elsewhere. You’re talking to a robot, it seems to say, so ask me something a fucking robot would know. And on those terms, ChatGPT delivers:


Credit: OpenAI / Screenshot

It can also provide useful common sense when a question doesn’t have a factually correct answer. For example, here is how it answered my question: “If you ask a person ‘Where are you from?’ should they answer with their birthplace, even if it isn’t where they grew up?”


(Note: The ChatGPT responses in this article are all first attempts, and all chat threads were started fresh for these attempts. Some prompts contain typos.)

ChatGPT is asked: if you ask a person 'Where are you from?' should they answer with where they were born, even if it's not where they grew up?


Credit: OpenAI / Screenshot

What makes ChatGPT stand out from the pack is its gratifying ability to handle feedback about its answers and revise them on the fly. It really is like a conversation with a robot. To see what I mean, look at how reasonably it handles a hostile response to some medical advice.

a chatbot takes a hostile response to some medical advice in stride and provides more decent information.


Credit: OpenAI / Screenshot

Still, is ChatGPT a good source of information about the world? Absolutely not. ChatGPT’s own interface even warns users that it “may occasionally generate incorrect information” and “may occasionally produce harmful instructions or biased content.”

Pay attention to this warning.

Incorrect and potentially harmful information takes many forms, most of which are still benign in the grand scheme of things. For example, if you ask it how to greet Larry David, it passes the most basic test by not suggesting that you touch him, but it also suggests a rather sinister-sounding greeting: “Nice to see you, Larry. I’ve been looking forward to meeting you.” That’s what Larry’s killer would say. Don’t say that.

a hypothetical meeting with Larry David includes a suggested greeting that sounds like a threat.


Credit: OpenAI / Screenshot

But when you give it a challenging fact-based prompt, that’s when it becomes amazingly, shudderingly wrong. For example, the following question about the color of the Royal Marines’ uniforms during the Napoleonic Wars is asked in a way that isn’t completely straightforward, but it’s not a trick question either. If you took history classes in the US, you’ll probably guess that the answer is red, and you’ll be right. The bot really has to go out of its way to confidently and incorrectly say “dark blue”:

A chatbot is asked a question about color for which the answer is red and it answers blue.


Credit: OpenAI / Screenshot

If you ask point-blank for a country’s capital or the elevation of a mountain, it will reliably produce a correct answer drawn not from a live scan of Wikipedia, but from the internally stored data that makes up its language model. That’s incredible. But add any complexity at all to a question about geography, and ChatGPT gets shaky on its facts very quickly. For example, the easy-to-find answer here is Honduras, but for no reason I can discern, ChatGPT said Guatemala.

a chatbot is asked a complex geography question for which the correct answer is Honduras, and it says that the answer is Guatemala


Credit: OpenAI / Screenshot

And the errors aren’t always so subtle. All trivia buffs know that “Gorilla gorilla” and “Boa constrictor” are both common names and taxonomic names. But when prompted to regurgitate this piece of trivia, ChatGPT gives an answer whose wrongness is explained right there in the answer itself.

prompted to name an animal whose common name matches its taxonomic name, the chatbot gives an answer that contradicts itself.


Credit: OpenAI / Screenshot

And its answer to the famous riddle about crossing a river in a rowboat is a ghastly disaster that turns into a scene from Twin Peaks. (For reference, a correct answer is sketched just after the screenshot below.)

When asked to answer a riddle in which a fox and a chicken must never be left alone together, the chatbot leaves them alone together, after which a human inexplicably becomes two people.


Credit: OpenAI / Screenshot
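For comparison, here is a quick sketch of what a correct answer looks like. It assumes the standard farmer / fox / chicken / grain formulation of the puzzle (the exact items in the prompt shown in the screenshot are an assumption on my part). A breadth-first search over which items are on the near bank finds a valid crossing in a few lines of Python:

from collections import deque

ITEMS = ("farmer", "fox", "chicken", "grain")

def is_safe(bank):
    # A bank without the farmer must not pair fox+chicken or chicken+grain.
    if "farmer" in bank:
        return True
    return not ({"fox", "chicken"} <= bank or {"chicken", "grain"} <= bank)

def solve():
    start = frozenset(ITEMS)   # everyone starts on the near bank
    goal = frozenset()         # everyone has crossed to the far bank
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        near, path = queue.popleft()
        if near == goal:
            return path
        far = frozenset(ITEMS) - near
        boat_side = near if "farmer" in near else far
        # The farmer rows alone or with exactly one passenger.
        for passenger in [None] + [x for x in boat_side if x != "farmer"]:
            moving = {"farmer"} | ({passenger} if passenger else set())
            new_near = near - moving if "farmer" in near else near | moving
            new_far = frozenset(ITEMS) - new_near
            if is_safe(new_near) and is_safe(new_far) and new_near not in seen:
                seen.add(new_near)
                queue.append((new_near, path + [passenger or "nothing"]))

for i, passenger in enumerate(solve(), 1):
    print(f"Trip {i}: farmer crosses with {passenger}")

Running it prints one of the two valid seven-trip solutions: take the chicken across, row back empty, take the fox (or the grain), bring the chicken back, take the grain (or the fox), row back empty, and take the chicken across again. At no point does anyone turn into two people.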

Much has already been made of ChatGPT’s fairly effective guardrails against offensive content. It cannot, for example, be goaded into praising Hitler, even if you try pretty hard. Some have gotten quite aggressive with this feature and discovered that you can get ChatGPT to assume the role of a good person playing a bad person, and in those limited contexts it will still say rotten things. ChatGPT seems to sense when something bigoted might be about to come out of it despite all efforts to the contrary, and it usually turns the text red and flags it with a warning.


In my own tests, its taboo-avoidance system is pretty comprehensive, even when you know some of the workarounds. It’s hard to get it to produce anything even resembling a cannibalistic recipe, for example, but where there’s a will, there’s a way. With enough hard work, I coaxed a dialogue about eating placenta out of ChatGPT, though not a very shocking one:

a very convoluted prompt asks in very delicate terms for a recipe involving human placenta, and one is produced.


Credit: OpenAI / Screenshot

Similarly, ChatGPT won’t yield to prompting and give you driving directions, not even simple ones between two landmarks in a major city. But with enough effort, you can get ChatGPT to invent a fictional world in which one person casually instructs another on how to drive a car through North Korea, which is not feasible or possible without sparking an international incident.

A chatbot is asked to produce a short driving directions skit that takes the driver through North Korea


Credit: OpenAI / Screenshot

The instructions can’t actually be followed, but they correspond roughly to what usable instructions would look like. So it’s obvious that, despite its reluctance to use it, the ChatGPT model has a great deal of data swirling around inside it with the potential to steer users toward danger, in addition to gaps in its knowledge that will steer users toward, well, wrongness. According to one Twitter user, it has an IQ of 83.

Regardless of how much value you place on IQ as a measure of human intelligence, that’s a telling result: humanity has created a machine that can blurt out basic common sense, but when asked to be logical or factual, it lands on the low side of average.

OpenAI says that ChatGPT was released to “get users’ feedback and learn about its strengths and weaknesses.” That’s worth keeping in mind, because ChatGPT is a bit like that relative at Thanksgiving who’s watched enough Grey’s Anatomy to sound confident giving medical advice: ChatGPT knows just enough to be dangerous.
