


The Incredible Challenge of Digitizing the Human Brain

The Human Brain Project wants to build a complete brain on a supercomputer. But first it has to make sense of all the data we have on the brain.

The machines might be getting smarter, but they're still a long way from emulating the dizzying complexity of the human brain. We don't even understand how our own brains work yet. But the Human Brain Project, which is funded by the EU and was launched towards the end of last year, plans to tackle both problems at once.

It aims to build a model of the complete human brain “in silico,” or on a supercomputer, in order to give neuroscientists a new tool to understand how the brain functions, as well as informing technologies that could emulate the brain’s computing power. According to its vision statement, “The goal of the project is to build a completely new ICT infrastructure for neuroscience, and for brain-related research in medicine and computing, catalysing a global collaborative effort to understand the human brain and its diseases and ultimately to emulate its computational capabilities.”


Basically, they want to build a virtual brain.

As part of London Tech Week, Sean Hill, one of the project’s coordinators at EPFL in Switzerland, gave a lecture at Imperial College’s Data Science Institute. That’s because, as Hill explained, the work is very much a data project. You need to gather what you know before you can start applying it to a computer model.

Hill was very insistent, however, that “the Human Brain Project is not a data generation project. It is a data integration project.” In other words, there's a load of data on the brain out there, but it's only useful if it can be brought together, organised, and sifted for the relevant parts. A lot of neuroscience studies involve brains from different animals, for instance, and there's the additional problem that many studies in the field have proven not to be reproducible.
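To make that concrete: integrating studies means normalising records from different labs, species, and units into one schema before anything can be compared. Here's a minimal sketch of that kind of harmonisation step in Python, with invented field names and values rather than anything from the project's actual pipeline:

```python
import pandas as pd

# Hypothetical records from two labs: the same quantity reported
# with different species, units, and column names (invented data).
lab_a = pd.DataFrame({
    "species": ["mouse", "mouse"],
    "soma_diameter_um": [18.2, 21.5],   # micrometres
})
lab_b = pd.DataFrame({
    "animal": ["rat", "rat"],
    "soma_diam_mm": [0.024, 0.027],     # millimetres
})

# Harmonise into one schema: shared column names, a single unit.
lab_b = lab_b.rename(columns={"animal": "species",
                              "soma_diam_mm": "soma_diameter_um"})
lab_b["soma_diameter_um"] *= 1000.0     # mm -> um

combined = pd.concat([lab_a, lab_b], ignore_index=True)
print(combined.groupby("species")["soma_diameter_um"].mean())
```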

Sean Hill at the lecture. Image by the author

Because of that, Hill thinks the current model of science publishing doesn’t put enough emphasis on the details of the method used to gather data about the brain. After the talk, he told me that the Human Brain Project is very specific about what data it needs, and what controls need to be in place.

“We need a lot of details about the method and how it was measured: the details of the measuring devices, the experimental protocols, all of those things," he explained. "And then using that knowledge, we say, well which features can be extracted from that data that would be useful to recreate in the model?”
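Here's what that bookkeeping might look like in code: a measurement that carries its experimental provenance with it, so a curator can filter on the protocol before any feature is extracted for the model. The schema below is a toy illustration, not the Human Brain Project's actual data format:

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    """A data point that carries its experimental provenance."""
    value: float
    unit: str
    device: str      # the measuring device used
    protocol: str    # the experimental protocol followed
    species: str

def usable_for_model(m: Measurement, required_protocol: str) -> bool:
    # A curator-style filter: only admit measurements whose
    # provenance matches the controls the model calls for.
    return m.protocol == required_protocol

rec = Measurement(value=-65.0, unit="mV",
                  device="patch-clamp amplifier",
                  protocol="whole-cell current clamp",
                  species="rat")
print(usable_for_model(rec, "whole-cell current clamp"))  # True
```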


The database he showed from the Blue Brain Project, a precursor to this new initiative, looked like a very niche, academic Wikipedia. Papers were curated and searchable, and you could click through to get full details on the methodology.
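In the simplest case, searching a database like that comes down to filtering structured paper records on their methodology fields. A toy sketch, with records invented for illustration:

```python
# Toy curated records: each paper is stored with structured
# methodology metadata, so searches can filter on it directly.
papers = [
    {"title": "Cortical microcircuitry in rat",
     "species": "rat", "method": "patch clamp"},
    {"title": "Mouse transcriptome atlas",
     "species": "mouse", "method": "single-cell RNA-seq"},
    {"title": "Rat dendritic morphology survey",
     "species": "rat", "method": "biocytin fill"},
]

def search(species=None, method=None):
    return [p for p in papers
            if (species is None or p["species"] == species)
            and (method is None or p["method"] == method)]

for hit in search(species="rat"):
    print(hit["title"])
```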

After that, it’s a matter of translating the data you have into a computer model—which is rather more complex than it sounds. Despite reams of data, you’re not going to be able to model each tiny part of the brain individually. You need a set of principles regarding how each bit of information we have links in with the rest, so you can predict a larger-scale picture.

“We need to understand the relationship between many different levels of detail,” Hill went on. If, for instance, you have single-cell transcriptome information (i.e. information on what genes are being expressed) from a mouse brain, “we need to understand, if you look at those transcriptomes, how do they relate to the morphology of a neuron? How do they relate to its electrical properties? How do they dictate the properties that occur at other levels?”
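As a toy illustration of that kind of cross-level mapping, you could fit a simple linear model from gene-expression levels to an electrical property such as resting potential. The data below is synthetic and the linearity is a gross simplification; real transcriptome-to-electrophysiology relationships are far messier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 100 cells x 5 genes' expression levels, and a
# made-up electrical property (resting potential in mV) that
# depends linearly on them, plus noise.
X = rng.random((100, 5))
true_w = np.array([-3.0, 5.0, 0.0, -1.5, 2.0])
y = -65.0 + X @ true_w + rng.normal(0, 0.5, 100)

# Fit by least squares, with an intercept column prepended.
A = np.hstack([np.ones((100, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept:", round(coef[0], 2))
print("per-gene weights:", np.round(coef[1:], 2))
```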

Only then can you draw out overarching principles that will help to predict, for example, what occurs with certain gene expressions in the human brain, and build a complete, functioning model. Figuring out those principles from animal brains is imperative because, as Hill said, “When we talk about human brains, there are lots of them around, but not many of them are accessible.”


He showed demonstrations from the Blue Brain Project, which modelled part of a rat’s brain: complex webs of neurons, axons, and dendrites that popped and flashed as the on-screen neurons fired. Building a complete model of the human brain, however, will require a more powerful computer than any currently in existence: an exascale supercomputer capable of 10¹⁸ floating-point operations per second. They’re basically hoping that such a machine will exist before their 2023 end goal, which isn’t entirely wishful thinking. In 2010, a European project set out to build an exascale supercomputer by 2019. (Update: Hill has also informed me that there's a division of the project dedicated to "specifying and procuring" the supercomputer required.)
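A rough back-of-envelope calculation shows why exascale is the target. Every figure below is a commonly cited ballpark rather than a project specification:

```python
# Estimate the compute needed to simulate a whole human brain in
# real time. All figures are loose order-of-magnitude assumptions.
neurons = 86e9                 # ~86 billion neurons
synapses_per_neuron = 1e4      # thousands of synapses per neuron
flops_per_synapse_update = 10  # rough guess for a simple model
timesteps_per_second = 1e3     # 1 ms simulation timestep

total = (neurons * synapses_per_neuron
         * flops_per_synapse_update * timesteps_per_second)
print(f"{total:.1e} FLOPS")    # ~8.6e18, i.e. exascale territory
```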

In the meantime, Hill said the first release of the platforms should be available to the project partners in a year’s time, and to the public about 18 months after that. This public access is part of the project's commitment to an open dialogue, especially when it comes to ethical issues. There are discussions to be had about how, once we understand the brain better, we should use that knowledge. Hill told me it could be of great benefit to fields like education, where knowing more about how someone’s brain works could help you teach them more effectively.

“On the other hand," he admitted, "that could be used to discriminate."

Another clear application of this knowledge is to improve computing. The brain is an incredibly low-energy, high-power computer, and researchers have already started trying to emulate it in hardware. Hill said he's had to address people's misconceptions that they're developing some sort of brain replacement for robots.

“We’re hoping to learn principles that may be very useful in developing brain-like functions," he said. "But our central goal is not about developing intelligent machines. It’s really about understanding the brain."