The 'Rationality' Workshop That Teaches People to Think More Like Computers

The Center For Applied Rationality wants to help humans make better choices.

Melissa Beswick, a research coordinator at the University of Pennsylvania and one of my closest friends, has tried for years to force herself into the habit of swimming laps.

"There have been times when I've tried to swim consistently, but it's only lasted a couple weeks, never more," she told me.

Finally, that's changed. Beswick now swims two to three times a week, and she's confident she can stick with it.

She credits her newfound motivation at least in part to a curious trip she took out to the Bay Area last fall. She flew across the country to spend five days in a cramped hostel with about three dozen others—mostly young tech workers—all there to attend a workshop hosted by the Center for Applied Rationality, or CFAR.

As far as self-help seminars go, CFAR is definitely unique. Instead of invoking spirituality or pointing toward a miracle cure-all, the organization emphasizes thinking of your brain as a kind of computer.

Throughout the workshop, participants and facilitators described their thinking patterns using programming and AI terms, Beswick said.

"One of the first things we had to do was create a 'bugs list,'" Beswick told me, meaning a list of personal thinking errors. The exercise was a nod towards the term programmers use to refer to problems in computer code.

"We had all noticed in different ways in different contexts that being smart, and being well educated and even being really well intentioned was far from a guarantee from making what turned out to be really stupid decisions."

Even the names of classes are sometimes derived from tech terms. One class, dubbed "propagating urges," takes its name from the machine learning term backpropagation.

CFAR likes to ask: if something, either a human or an AI, were to make a perfectly rational choice, what would that look like?

The founding of CFAR

CFAR's founders, Anna Salamon, Julia Galef, Michael Smith, and Andrew Critch, all have impressive backgrounds in math, artificial intelligence, science, or some combination of the three.

In 2011, Salamon, CFAR's earliest founder, was working at the Machine Intelligence Research Institute (MIRI), an artificial intelligence research organization that now shares its offices with CFAR in Berkeley. CFAR originally began as an extension of MIRI, she explained in an email.

"I was doing training and onboarding for the Machine Intelligence Research Institute, which in practice required a lot of rationality training. And I began to feel that developing exercises for training 'rationality'—the ability to form accurate beliefs in confusing contexts, and to achieve one's goals—was incredibly important, and worth developing in its own right," Salamon wrote.

As an experiment, she offered a test workshop. Critch, then completing a Ph.D. in math at the University of California, Berkeley, attended. Salamon was so impressed with him that she made him an instructor halfway through the program.

Julia Galef, one of the founders of CFAR, explaining a technique. Image: CFAR

Smith, who had recently completed a joint Ph.D. in math and science education at the University of California, San Diego and San Diego State University, found CFAR via an ad that Salamon posted. Within several weeks, Galef would join as well, after hearing about the nonprofit through mutual friends.

"We had all noticed in different ways in different contexts that being smart, and being well educated and even being really well intentioned was far from a guarantee from making what turned out to be really stupid decisions," Galef said.

Critch, Galef, Salamon, and Smith started offering classes through CFAR in 2012, and the organization is now recognized as a nonprofit. CFAR charges a $3,900 workshop fee, although prospective students can first take advantage of a free 20-minute virtual consultation. (Scholarships are also available; Beswick received one because she was a student at the time.)

Our brains just aren't equipped with the skills to make coherent choices in today's complex world, Galef said. Whereas our ancestors once faced problems focused on short-term reward, like avoiding predators and finding food, we're now tasked with wading through increasingly complex and confusing sets of circumstances on a daily basis.

Many of the differences that exist between how humans actually act and the way that an ideal reasoner would behave "stem from the fact that the human brain did not evolve in a context that is very much at all like the context we have to operate in now," Galef told me.

"The ancestral environment on the savannah that our ancestors were making decisions on did not involve complicated long term decisions," she said. "It also didn't involve abstractions."

In other words, it's hard to force yourself to go to the gym when the short-term reward of taking a nap seems so much sweeter. The abstract prize of long-term health oftentimes just isn't enough to motivate us. Complicating things even more is the fact that much of the time, we're not even aware of what's really motivating our behavior.

"A lot of the implicit algorithms that our brains are running have to do with our emotions and our motivations and a lot of messy, unconscious stuff," Galef said.

CFAR doesn't advertise much. Many participants, including Beswick, learn about the organization through a blog called Less Wrong, a forum focused on rationality and artificial intelligence that is linked to Eliezer Yudkowsky, who is listed as a curriculum consultant on CFAR's website.

Yudkowsky is a divisive personality, both for his controversial views on social issues like polyamory, and for the fact that he lacks a formal education and has never written a peer-reviewed article about artificial intelligence.

Many of CFAR's participants are followers of Yudkowsky's work, but the organization has also managed to appeal to a broader audience.

Beyond helping individuals reach their goals, CFAR's proponents fear that Earth's most powerful and important leaders aren't aware of their own shortcomings, and that this ignorance is at the root of many of the world's problems. If we can train our leaders to think more effectively about complex problems, CFAR says, we could save humanity from future crises.

Thinking like an AI

The task of saving the entire human race seemed lofty, but when I spoke to Galef about some of CFAR's specific techniques and their connections to AI, they appeared almost intuitive. One connection she made was between a statistical model and the way that humans actually make decisions.

"One type of artificial intelligence algorithm is a Bayesian algorithm," she began. These algorithms are based on Bayes' theorem, a probability theory that describes how much the likelihood of something being true changes based on new information.

The problem, Galef explained, is that we often don't adjust our beliefs, despite the contrary information we encounter.

In fact, there's good evidence to support the idea that hearing something that flies in the face of our beliefs only causes us to cling to them more. "In practice what the human brain tends to do is to dismiss evidence that contradicts our pet theory by finding some explanation that allows us to retain the strength of our beliefs," Galef said.

In other words, basic forms of AI that utilize Bayes' theorem, like spam filters, are sometimes better than humans at adjusting their beliefs based on new facts. There are instances in which human brains do operate somewhat like these algorithms, but "there are some significant ways that we diverge from this model of a perfect Bayesian reasoner," Galef pointed out.
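
To make the comparison concrete, here is a minimal sketch in Python of the kind of update a Bayesian spam filter performs. The numbers are invented for illustration, not drawn from CFAR's materials or any real filter.

def bayes_update(prior, likelihood, false_positive_rate):
    """Return P(hypothesis | evidence) using Bayes' theorem."""
    # Total probability of seeing the evidence at all, whether or not
    # the hypothesis is true
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: 20% of mail is spam, the word "prize" appears
# in 60% of spam but only 5% of legitimate mail. Seeing "prize" should
# push the filter's belief that a message is spam from 0.20 up to 0.75.
posterior = bayes_update(prior=0.20, likelihood=0.60, false_positive_rate=0.05)
print(round(posterior, 2))  # 0.75

A reasoner who updated this way would strengthen or weaken a belief in proportion to the evidence; the divergence Galef describes is that, faced with the same arithmetic, people often don't budge at all.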

CFAR's techniques aim to help bridge that gap by teaching participants to assess whether the reasons people have for their beliefs are actually sound ones.

So far, a fair number of the participants who have experienced CFAR's teachings firsthand report positive results in their lives, even a year later, at least according to survey data that CFAR has collected internally.

Participants at one of CFAR's workshops. Image: CFAR

CFAR conducted a small randomized trial in which 50 participants were admitted to its workshop, but only 25 were permitted to actually attend. Each group was asked to fill out a series of surveys containing both objective and subjective questions about their thinking patterns and habits, and to have loved ones do the same.

The data showed that those who did attend CFAR's workshop had a statistically significant decrease in what psychologists refer to as neuroticism, a fundamental personality trait marked by anxiety, fear, moodiness, envy, and loneliness. There was also a less significant but still marked increase in self-efficacy, or the belief in one's ability to accomplish goals.

One of the participants I spoke to, 19-year-old first-year Oxford University student Ben Pace, said that the workshop gave him "an appreciation of how much a single person could really achieve, if they understood how they thought, and planned accordingly." Pace had traveled all the way from the UK to attend, and even persuaded a friend to do the same at a later workshop.

"Everyone there truly wanted to be better, and that might sound trite because of course everyone wants to be a better version of themselves, but the people at this workshop were really willing to put in the effort, to put in the time, to look at themselves critically," Beswick told me.

It's hard to say with certainty that learning to think more rationally by using CFAR's techniques will save us all, but if the robots do come for us, it seems only reasonable that in the meantime, we try to learn to think like them.