
For the Love of God, AI Chatbots Can’t ‘Decide’ to Do Anything

Tech charlatans and U.S. Senators are now spreading misinformation about predictive AI tools, which are not sentient.
Janus Rose
New York, US
Image: A green parrot with wings spread stands on top of a robot vacuum cleaner. (Getty Images)

The incessant hype over AI tools like ChatGPT is inspiring lots of bad opinions from people who have no idea what they’re talking about. From a New York Times columnist describing a chatbot as having “feelings” to right-wing grifters claiming ChatGPT is “woke” because it won’t say the N-word, the hype train seems to chug along faster with every passing week, leaving a trail of misinformation and magical thinking about the technology’s capabilities and limitations.


The latest comes from a group of people that knows so little about technology that, just last week, it considered banning TikTok because it uses Wi-Fi to access the internet: politicians. On Monday, Connecticut Senator Chris Murphy tweeted an alarming missive claiming that “ChatGPT taught itself to do advanced chemistry.”

“It decided to teach itself, then made its knowledge available to anyone who asked,” the senator wrote ominously. “Something is coming. We aren't ready.”

As many AI experts pointed out in the replies, virtually every word of these statements is wrong. “ChatGPT is a system of averages. It is a language model and only understand[s] how to generate text,” reads a Twitter community note that was later appended to Murphy’s tweet. “It can 'appear' to understand text in the same way that AI can 'appear' to create images. It is not actual learning.”

While it’s true that large language models like ChatGPT can pick up tasks they weren’t specifically trained to perform, it’s not because these AI tools “decided” to brush up on their chemical equations.

This phenomenon is known as in-context learning, and it’s something AI researchers have been studying for a while. Essentially, language models can use their previous inputs and outputs to extrapolate knowledge and make new—and often correct—predictions on tasks they weren’t explicitly trained for. 


Researchers have hypothesized this is because the models are building smaller models inside the “hidden layers” of their neural networks, allowing them to generalize their predictive abilities. But this is the result of a predictive model adapting to new tasks in response to human prompting—not becoming sentient or “deciding” to do anything on its own.
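For the technically curious, here is roughly what in-context learning looks like from the outside. The snippet below is a minimal, illustrative sketch using the open-source Hugging Face transformers library and the small GPT-2 model (neither of which has anything to do with Murphy’s tweet); the point is that the model’s weights never change, it simply continues a pattern laid out in its prompt.

```python
# Minimal sketch of in-context learning via few-shot prompting (illustrative only).
# The model is never retrained; any "learning" is just next-token prediction
# conditioned on the examples already in the prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A few worked examples, followed by a new case the prompt never answers.
prompt = (
    "Translate chemical formulas into names.\n"
    "H2O -> water\n"
    "NaCl -> sodium chloride\n"
    "CO2 -> carbon dioxide\n"
    "CH4 ->"
)

# Greedy decoding of a handful of new tokens; the model predicts whatever
# continuation looks most likely given the pattern above.
completion = generator(prompt, max_new_tokens=5, do_sample=False)
print(completion[0]["generated_text"])
```

Whether the continuation is actually correct depends entirely on the statistical patterns absorbed during training; a small model like GPT-2 will often get it wrong, which is rather the point.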

The ability of language models to accomplish these tasks has also been greatly overstated. Several chemists and scientists commenting on Murphy’s tweet noted that ChatGPT’s understanding of chemistry is “superficial and often wrong”—a major flaw that is shared by other language models. Galactica, a model designed by Facebook’s parent company Meta to answer science questions, was taken down after users found it was generating plausible but dangerously inaccurate answers, including citations linking to scientific papers that don’t exist. ChatGPT has also been known to generate similarly fake scientific citations, and was banned from coding forums like Stack Overflow for its tendency to produce believable but dead-wrong answers to programming questions.

Large language models can produce believable (and sometimes accurate) text that often feels like it was written by humans. But in their current form, they are essentially advanced prediction engines that are really good at guessing the next word in a sentence. OpenAI’s recent announcement of GPT-4 and its ability to pass exams like the Bar has led to all kinds of speculation on whether we are steps away from sentient AI. This is a bit like being shocked that a computer can ace a test when it has the equivalent of an open textbook and the ability to process and recall information instantly. 
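To make “guessing the next word” concrete, here is another small illustrative sketch (again using GPT-2 via transformers, purely as a stand-in for much larger commercial models) that prints the probabilities the model assigns to its top candidates for the next token:

```python
# Illustrative sketch: a language model is, at bottom, a next-token probability machine.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The capital of France is"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Convert the scores for the final position into a probability distribution
# over every possible next token, then look at the five most likely ones.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r:>12}  {prob.item():.3f}")
```

Every response a chatbot produces is assembled from repeated draws from a distribution like this one; nowhere in that loop is there a step where the software “decides” anything.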

Even if it’s fun to imagine, the idea of language models as a nascent superintelligent AI benefits the corporations creating them. If large swaths of the public believe that we are on the cusp of giving birth to advanced machine intelligence, the hype not only pads the bottom line of companies like Google and OpenAI, but also helps them avoid taking responsibility for the bias and harm those systems cause. After all, what better way to brush off the well-documented impacts of AI bias than to have people believe these systems will soon become artificial general intelligence (AGI), as OpenAI has suggested multiple times recently?

So far, it is all just software. It can do impressive things, and it can cause a lot of harm in the same ways Motherboard has documented for years. But it doesn’t and can’t “decide” to teach itself anything. If anything, the fact that these systems produce answers that sound correct and intelligent to humans—including to people who write laws—may be the most dangerous aspect of all.