Scientists Made a Mind-Bending Discovery About How AI Actually Works

Researchers are trying to figure out why AI systems are good at learning so much with so little.
A row of computer servers with green, red and yellow wires.
picture alliance / Getty Images

Researchers are starting to unravel one of the biggest mysteries behind the AI language models that power text and image generation tools like DALL-E and ChatGPT. 

For a while now, machine learning experts and scientists have noticed something strange about large language models (LLMs) like OpenAI’s GPT-3 and Google’s LaMDA: they are inexplicably good at carrying out tasks they haven’t been specifically trained to perform. It’s a perplexing phenomenon, and just one example of how difficult, if not impossible in most cases, it can be to explain in fine-grained detail how an AI model arrives at its outputs. 

In a forthcoming study posted to the arXiv preprint server, researchers at the Massachusetts Institute of Technology, Stanford University, and Google explore this “apparently mysterious” phenomenon, which is called “in-context learning.” Ordinarily, to accomplish a new task, most machine learning models need to be retrained on new data, which can require researchers to feed in thousands of data points to get the output they want, a tedious and time-consuming endeavor. 

But with in-context learning, the system can learn to reliably perform new tasks from only a few examples, essentially picking up new skills on the fly. Once given a prompt, a language model can take a list of inputs and outputs and create new, often correct predictions about a task it hasn’t been explicitly trained for. This kind of behavior bodes very well for machine learning research, and unraveling how and why it occurs could yield invaluable insights into how language models learn and store information. 
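
To make the idea concrete, here is a minimal Python sketch of what such a prompt might look like. The country-to-capital task and the `build_few_shot_prompt` helper are illustrative assumptions for this article, not material from the study; the point is simply that the demonstrations live in the prompt rather than in the model’s weights.

```python
# Hypothetical helper showing how a few-shot, in-context prompt is assembled.
# The country -> capital task is an illustrative choice, not the study's task.

def build_few_shot_prompt(examples, query):
    """Format input/output pairs plus a query as a single prompt string."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")  # the model is asked to complete this line
    return "\n\n".join(lines)

demo_pairs = [("France", "Paris"), ("Japan", "Tokyo"), ("Kenya", "Nairobi")]
print(build_few_shot_prompt(demo_pairs, "Portugal"))
# Fed to a large language model, a prompt like this typically completes with "Lisbon",
# even though none of the model's parameters were updated for the task.
```

The same pattern scales from toy lookups like this to more open-ended tasks; only the contents of the examples change.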

But what’s the difference between a model that learns and one that merely memorizes? 

“Learning is entangled with [existing] knowledge,” Ekin Akyürek, lead author of the study and a PhD student at MIT, told Motherboard. “We show that it is possible for these models to learn from examples on the fly without any parameter update we apply to the model.”

This means the model isn’t just copying its training data; it’s likely building on previous knowledge, just as humans and animals do. The researchers didn’t test their theory with ChatGPT or any of the other popular machine learning tools the public has become so enamored with lately. Instead, Akyürek’s team worked with smaller models and simpler tasks. But because they’re the same type of model, the work offers insight into the nuts and bolts of other, more well-known systems. 

The researchers conducted their experiment by giving the model synthetic data, or prompts the program never could have seen before. Despite this, the language model was able to generalize and then extrapolate knowledge from them, said Akyürek. This led the team to hypothesize that AI models that exhibit in-context learning actually create smaller models inside themselves to achieve new tasks. The researchers were able to test their theory by analyzing a transformer, a neural network model that applies a concept called “self-attention” to track relationships in sequential data, like words in a sentence. 
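
The study probes this with simple synthetic tasks rather than natural language, and a rough sketch of how such never-before-seen prompts can be generated from random linear functions is below. The dimensions, sequence length, and noiseless targets are assumptions chosen for illustration, not the study’s exact setup.

```python
# Simplified sketch of generating synthetic in-context prompts from random linear
# tasks. The specific settings here are illustrative, not the paper's configuration.
import numpy as np

rng = np.random.default_rng(0)

def make_incontext_prompt(dim=8, n_examples=16):
    """Sample a hidden linear task and a sequence of (x, y) pairs drawn from it."""
    w = rng.normal(size=dim)                  # the hidden task: a random weight vector
    xs = rng.normal(size=(n_examples, dim))   # inputs shown in the prompt
    ys = xs @ w                               # targets produced by the hidden task
    query_x = rng.normal(size=dim)            # held-out input the model must label
    query_y = query_x @ w                     # the answer, used only for scoring
    return xs, ys, query_x, query_y

xs, ys, query_x, query_y = make_incontext_prompt()
# A transformer trained on many such sequences is then shown (x1, y1, ..., xn, yn, query_x)
# and judged on how closely its prediction matches query_y for a task it has never seen.
```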

By observing it in action, the researchers found that their transformer could write its own machine learning model in its hidden states, or the space in between the input and output layers. This suggests it is both theoretically and empirically possible for language models to seemingly invent, all by themselves, “well-known and extensively studied learning algorithms,” said Akyürek. 
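
For linear tasks like the ones sketched above, one such well-known algorithm is ordinary least-squares regression, and a natural way to test the idea is to compare the transformer’s in-context predictions against that explicit learner. The snippet below illustrates the comparison under those assumptions; it is not the authors’ evaluation code.

```python
# Baseline sketch: the "well-known" learner a transformer might be imitating on
# linear tasks is ordinary least squares, fit directly to the prompt examples.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=8)            # hidden task weights
xs = rng.normal(size=(16, 8))     # in-context inputs
ys = xs @ w                       # in-context targets
query_x = rng.normal(size=8)      # held-out query input

# Explicit learner: fit a linear model to the prompt examples, then predict the query.
w_hat, *_ = np.linalg.lstsq(xs, ys, rcond=None)
explicit_prediction = query_x @ w_hat

# If the transformer's own in-context prediction for query_x tracks
# explicit_prediction across many random tasks, that is evidence the network is
# running a comparable learning algorithm inside its hidden states.
```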

In other words, these larger models work by internally creating and training smaller, simpler models of their own. The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario. 

Of the team’s results, Facebook AI Research scientist Mike Lewis said in a statement that the study is a “stepping stone to understanding how models can learn more complex tasks, and will help researchers design better training methods for language models to further improve their performance.” 

While Akyürek agrees that language models like GPT-3 will open up new possibilities for science, he says they’ve already changed the way humans retrieve and process information. Previously, typing a query into Google only retrieved information, and we humans were responsible for choosing (read: clicking) whichever result best served that query. “Now, GPT can retrieve the information from the web but also process [it] for you,” he told Motherboard. “That's why it's very important to learn how to prompt these models for data cases that you want to solve.”

Of course, leaving the processing of information to automated systems comes with all kinds of new problems. AI ethics researchers have repeatedly shown how systems like ChatGPT reproduce sexist and racist biases that are difficult to mitigate and impossible to eliminate entirely. Many have argued it’s simply not possible to prevent this harm when AI models approach the size and complexity of something like GPT-3.

Though there’s still a lot of uncertainty about what future language models will be able to accomplish, and even about what current models can do today, the study concludes that in-context learning could eventually help solve many of the issues machine learning researchers will doubtless face down the road.