Decode your brain
By James Mitchell Crow
7 October 2013
from http://www.cosmosmagazine.com/features/decode-your-brain/

The day you upload your brain just got closer. The latest audacious projects in Big Science that could do for neuroscience what the Human Genome Project did for genetics.
Visit the home page of the Human Brain Project and you’re in for a fantastic voyage. Dazzling 3D reconstructions whiz you through an alien universe, an unfathomable labyrinth of 100 billion neurons criss-crossed this way and that by a million billion junctions.

This labyrinth can think and learn in a way that no supercomputer can match, and all on the same amount of energy it takes to run a small light bulb. How it does so is a mystery that has defied generations of neuroscientists. But we live in an age of audacious science. In 2000 the Human Genome Project decrypted all three billion letters of the human genetic code. For the 21st century follow-up act, we are taking aim at the human brain, the most complex and mysterious structure in the universe.

The Human Brain Project aims to recreate a human brain inside a supercomputer. Moreover, scientists think they will have something to show within a decade. Launched this January with an enormous grant from the European Union, the billion-euro project is headquartered at the École Polytechnique Fédérale in Lausanne, Switzerland, as is the project’s visionary director, Henry Markram. As he intones softly on the HBP homepage: “It’s an infrastructure to be able to build and simulate the brain…It will enable us to ask questions that [right now] are impossible, experimentally and theoretically.”

The HBP also has a second agenda. It will usher in a new generation of supercomputers with brain-like processing power. So-called “neuromorphic” computers that “will combine the power of microelectronics with the flexibility of human intelligence”, predicts Thomas Lippert, head of the Jülich Supercomputing Centre in Germany and director of high performance computing in the HBP.

Meanwhile the US has answered with its own project, Brain Research through Advancing Innovative Neurotechnologies (BRAIN). With a US$100 million funding commitment announced by President Barack Obama, US researchers are embarking on a complementary route, not to simulate the brain but to build better microphones to listen to its chatter.

The two projects may well take us one step closer to the day science fiction jumps off the screen into reality. According to the most recent remake of the Superman movie, in the year 2163 Jor-El, father of the man-of-steel, adeptly avoids planetary collapse by uploading himself into silicon. We may have just begun to generate the tools for something like that to really happen.



Markram, 51, speaks in a soft, resonant tone with a clipped accent of indeterminate origin (he is South African by birth and has lived in Israel, the US and Europe). But driving that soft tone is a note of visionary fervour helped by blazing blue-grey eyes set in an open face topped by a mop of wavy dark hair. For years Markram has tirelessly spruiked his vision to simulate the brain. In 2005, he launched the “Blue Brain Project” to simulate part of a rat brain. The “Blue” in the title comes from the name of the BlueGene supercomputer IBM donated for the project. Now fuelled by a billion euros, hundreds of scientists in 80 institutions across Europe, and a few others outside, are sharing the Markram vision and buzzing with brain fever.

Many of the international team that Markram has drawn together within the HBP are computer scientists. Their job is to build computers capable of recreating the complex symphony of electrical signals that play incessantly across the brain.

Like a mighty philharmonic orchestra with woodwinds, strings and brass, the brain carefully arranges its performing sections within a multi-tiered structure. A section at the rear performs the visual functions; from deep in the middle, the limbic system plays our emotions; and perched on our frontal lobes, the section known as Broca’s area allows us to speak. What we know is that each neuron plays its own little part, and when all play in harmony the result is a magnificent symphony.

But for decades neuroscientists have only been able to eavesdrop on a neuron or two at a time, and they’ve never been able to pick out even the slightest strains of a melody.

Markram hopes that by pooling all the data from hundreds of researchers around the world, and integrating them in a massive simulation running on a supercomputer, he will finally be able to reproduce the symphony of human consciousness.

On the other side of the Atlantic, researchers in the US are gathering for their own “big science” brain project. In New York City, Rafael Yuste at Columbia University has reached the same conclusion as Markram: decades of scattered effort have not produced tangible progress towards understanding the human brain. Though taking a fundamentally different approach — wanting to hear more snippets of the real thing rather than trying to simulate the brain’s symphony — Yuste also envisages a “big science” solution to this problem.

At the moment researchers have tools that let them listen to one or two notes, for instance hair-thin electrodes that record from individual neurons, or devices such as Magnetic Resonance Imaging (MRI) scanners that let them hear the whole symphony but as if at a great distance, so that the important detail is lost. In between the single note and the indecipherable noise, however, there is a gap.

In 2011 Yuste proposed filling that gap by bringing researchers with different skills together to make new devices able to record entire choirs of neurons with high fidelity. The idea is to record bigger and bigger choirs until they can capture entire movements of brain symphonies. It is a complementary approach to the HBP, and in April, it won the support of the Obama administration.



But even before the US president’s announcement, some instrument makers were making good progress. Janelia Farm, an arm of the Howard Hughes Medical Institute, lies in a quiet corner of the northern Virginia countryside, a 30-minute drive from Washington DC. On this farm, mice and fruit flies are not unwanted pests but honoured guests: their brain chorus is music to the ears of the researchers who have been developing two types of instruments to listen in.

Traditionally, researchers have eavesdropped on neurons by skewering them, one at a time, with hair-thin glass electrodes. The Janelia researchers exploited computer-chip fabrication techniques to develop millipede-like electrodes with the potential to connect with thousands of neurons at once. “Right now people are simultaneously recording from a dozen neurons in a mouse, but a two-orders-of-magnitude improvement is really on the horizon with this technology,” enthuses Karel Svoboda, a Janelia neuroscience researcher.

Svoboda’s own lab is focused on a much newer technology, based on microscopes rather than microphones. No matter how closely you watch a neuron, there is no way to tell when it emits its signal. Svoboda is among the pioneers of an approach to change that. He is using genetically engineered mice whose neurons contain fluorescent proteins that flash whenever the neuron fires. “Right now people are imaging 100 neurons simultaneously and we are pushing technology that will allow us to image perhaps hundreds of thousands of neurons,” Svoboda says.

With the breakneck speed at which the researchers are locating choirs of co-performing neurons, Svoboda thinks that within the next decade they will have the mega-choir for a single activity, for instance all the mouse neurons involved in vision.

Why all this effort to decipher the deepest thoughts of a lab mouse or a fly? Studying animal brains can still teach us a lot about the human brain, says neuroscientist Steve Smith at Stanford University in California. “Don’t let your grandmother hear this, but the fly brain isn’t much different from ours — and the more we learn the more parallels we see.” The differences simply come down to scale, the sheer number of neurons in a human brain, he suspects.



A glance at the spec sheet of a typical human brain reveals the scale of the challenge. The human brain’s 100 billion neurons are linked together in complex networks by junctions called synapses (see diagram). Each synapse — and your brain contains about 10^15 of them — is no mere solder joint between wires, but a microprocessor in its own right. More complex still, no two synapses are alike, suspects Smith, a synapse specialist.
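
For a rough sense of that scale, here is a back-of-envelope calculation using the article's round numbers (100 billion neurons, 10^15 synapses); the figures, and the one-byte-per-synapse storage assumption, are purely illustrative:

```python
# Back-of-envelope scale check using the article's round numbers;
# these are illustrative estimates, not precise measurements.
neurons  = 100e9   # ~100 billion neurons
synapses = 1e15    # ~10^15 synapses

print(f"average synapses per neuron: {synapses / neurons:,.0f}")   # ~10,000

# Hypothetical storage cost: even at a single byte per synapse, a bare
# snapshot of every synaptic weight would occupy about a petabyte.
bytes_per_synapse = 1
print(f"one byte per synapse: {synapses * bytes_per_synapse / 1e15:.0f} PB")
```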

“This complexity is beyond any of the intellectual tools we humans have acquired over the millennia, but computation begins to give us a tool to grapple with it,” Smith says.

This is where computer simulation comes in, says Svoboda: “The model is like a container holding your current knowledge — you build it to convince yourself that you understand the brain network that you are working on.”

In other words, Svoboda is creating something like a “mouse avatar”.

He starts with a real mouse — one with fluorescing neurons — and watches the pattern of light that dances across its brain as it examines a box with its whiskers. Svoboda can sit back and watch the whole circuit in action. As the sensitive mouse whiskers brush the box he sees flickers of light corresponding to the raw sensory input entering a part of the brain called the “barrel cortex”. This is the processing centre. Whatever the cortex does to the input signal, what comes out is quite different: a new pattern of flickering lights that broadcasts a message to the mouse about the size and shape of the box. It is the mysterious processing that goes on inside the cortex that Svoboda is trying to decode.

To simulate it, he uses the real mouse data to program the behaviour of a mouse avatar, and then he starts to play. He has the mouse avatar wriggle its virtual whiskers over a virtual rectangular box. Then he tweaks the processing that takes place in the virtual cortex, trying to make its output neurons fire in just the same way as in the real mouse and broadcast the mouse pattern for “rectangular box”. Will the mouse avatar one day learn its shapes? He’s not there yet, but the avatar’s failures help guide which experiment he should do next.
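
Svoboda's real models are built from detailed recordings, but the logic he describes (tweak the virtual cortex until its output matches the real mouse's firing pattern) boils down to a parameter-fitting loop. A minimal toy sketch of that idea, with a stand-in "cortex" that is just a single weight and entirely made-up numbers, might look like this:

```python
import random

# Toy stand-in for the barrel-cortex processing step: one weight applied
# to the whisker input. Real models are vastly more complex; every name
# and number here is hypothetical.
def virtual_cortex(whisker_input, weight):
    return [weight * x for x in whisker_input]

def mismatch(simulated, recorded):
    return sum((s - r) ** 2 for s, r in zip(simulated, recorded))

whisker_input   = [0.2, 0.9, 0.4, 0.7]   # pretend sensory signal
recorded_output = [0.6, 2.7, 1.2, 2.1]   # pretend real-mouse firing rates

# Tweak the virtual cortex until its output resembles the recording.
best_weight, best_error = None, float("inf")
for _ in range(1000):
    w = random.uniform(0.0, 5.0)
    err = mismatch(virtual_cortex(whisker_input, w), recorded_output)
    if err < best_error:
        best_weight, best_error = w, err

print(f"best-fitting weight is about {best_weight:.2f}")   # converges near 3.0
```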



The HBP is a plan to make similar models on the grandest possible scale. Markram’s own expertise centres on supercomputer-based brain models. Since 2005 his work on the Blue Brain project has successfully reconstructed the circuitry of a column of some 10,000 neurons in the rat’s cortex. Now they need to repeat the process, at 100,000 times the scale. Critics of the idea point to the huge data gaps inherent in trying to build such comprehensive models, because many of the details are simply unknown. This leaves a huge number of holes that the simulation must “guess” how to fill, possibly veering away from reality in the process. HBP scientists are not ignoring this problem, however.

Like Svoboda, HBP researcher Seth Grant at the University of Edinburgh, in Scotland, is generating data from mice to feed brain models. But Grant is taking it further, trying to work out what belongs in those holes by dissecting the circuitry down to its genes. The HBP team has plans for testing whether the simulations it creates reflect reality, Grant says. He might test an avatar mouse by dropping it inside a virtual maze, watching on-screen as it tries to navigate to a reward, and comparing its performance with that of a real mouse.

Once the avatar’s program looks right, he would give it simulated “mutations”. In the lab, scientists would make the corresponding genetic mutations in a real mouse. If the real mouse and its virtual cousin are equally lost in the maze, that helps vindicate the model. “It will allow us to demonstrate to the sceptical, including ourselves, that these simulations have merit,” Grant says.

To see what can happen when scientists are galvanised to work together on an unimaginably daunting project, take a look at the Human Genome Project. It took a small army of researchers in the US and Europe 11 years and $3 billion to read all three billion letters of a human genome. Today companies are offering the same thing for $1,000, and in a single day — a rate of innovation that leaves Moore’s law, the gold standard for dazzlingly fast innovation, in the dust. The Human Genome Project now drives a revolution in personalised medicine. Diseases that were once unsolvable personal mysteries are revealing their secrets and, often, treatments. “That is what I hope will be the happy ending to this story,” says Smith. “We will make a big investment right now doing some stuff that’s incredibly hard, and that will drive the engineers of the world to work hard to make those technologies even better, faster and cheaper; and suddenly this business of mapping will cost $1,000 instead of $1 billion.”

The benefits of treating brain diseases should flow too. Yuste points to the deplorable ignorance when it comes to diseases such as autism, schizophrenia or depression. “We don’t understand how the machine works, so we cannot fix it.” By comparison he points to diseases that are understood enough to offer treatments such as stents for blocked arteries or drugs that effectively treat 50% of cancers. “When it comes to fixing psychiatric disorders, our record is 0%,” Yuste laments.

But perhaps the greatest advance will not come from what supercomputers teach us about the human brain. It may come from what the human brain has to teach supercomputers. For Karlheinz Meier, co-director of the HBP “neuromorphic computing” platform at the University of Heidelberg, it is all about understanding and exploiting the brain’s computing principles to derive new technologies. “If you want to solve a new problem on your computer you need new software. But your brain doesn’t need it; you can learn to handle completely new situations. These are the systems that we are directly modelling on the brain,” he says, holding up a gleaming CD inscribed with a maze of brain circuits.

Steve Furber, Meier’s counterpart based at the University of Manchester, UK, also wants to make computers that think a bit more like people do. To do that, he is using low-end mobile-phone chips. Furber and his HBP colleagues estimate that a multi-scale model of the human brain will need to be able to complete 10^18 operations per second.

The energy demands of a conventional supercomputer that powerful would exceed 20 megawatts. The brain uses about 25 watts — approximately 800,000 times less. “It is enormously more efficient than the best electronics we know how to build,” says Furber. The key to that efficiency, he says, is the humble neuron: “Each is a fairly slow and modest device, but their combined efforts are beyond anything that we can reproduce in computers.”
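
A quick check of the arithmetic behind that efficiency gap, using only the figures quoted above (the 10^18 operations-per-second target, 20 megawatts, 25 watts):

```python
# Figures quoted in the article (order-of-magnitude estimates).
supercomputer_watts = 20e6   # >20 megawatts for a conventional machine at brain scale
brain_watts         = 25     # ~25 watts for a human brain

print(f"the brain is roughly {supercomputer_watts / brain_watts:,.0f}x more power-efficient")  # ~800,000x

# Energy per operation at the 10^18 operations-per-second target the HBP team
# quotes (treating the brain as doing equivalent work is itself a simplifying
# assumption).
ops_per_second = 1e18
print(f"conventional machine: ~{supercomputer_watts / ops_per_second:.1e} J per operation")
print(f"brain:                ~{brain_watts / ops_per_second:.1e} J per operation")
```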

Conventional computers use a single, powerful processor to solve a calculation step-by-step, whereas the brain divides the calculation across countless neurons that each work simultaneously on their own little piece, arriving at the answer much faster. Furber’s phone-chip computer is a small step in that direction. His machine will combine the outputs of a million connected mobile-phone chips, drawing only 100 kilowatts in the process.
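
As a loose illustration of that serial-versus-parallel contrast (not of Furber's actual hardware), the same calculation can be done step-by-step by one worker, or split across several workers that each handle their own small piece:

```python
from concurrent.futures import ProcessPoolExecutor

# Illustrative only: one big sum done serially, then split across worker
# processes that each add up their own chunk before the results are combined.
data = list(range(1_000_000))

def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    # Serial: a single "processor" works through the whole list.
    serial_total = sum(data)

    # Parallel: divide the work into pieces handled simultaneously.
    chunks = [data[i::8] for i in range(8)]
    with ProcessPoolExecutor(max_workers=8) as pool:
        parallel_total = sum(pool.map(partial_sum, chunks))

    assert serial_total == parallel_total
    print(serial_total)
```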

Each processor within Furber’s device has the power to simulate 1,000 neurons in real time. Once his million-processor device is up and running, it will be capable of modelling a billion neurons at once. “We are still only at 1% of the scale of the human brain, but another way of looking at that is it’s 10 whole mice.”
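
The "10 whole mice" arithmetic follows from the figures Furber gives, assuming a mouse brain of roughly 100 million neurons (the figure his comparison implies; published estimates vary):

```python
# Scaling figures quoted for Furber's million-processor machine.
neurons_per_processor = 1_000
processors            = 1_000_000
simulated_neurons     = neurons_per_processor * processors   # one billion

human_brain_neurons = 100e9   # the article's round number
mouse_brain_neurons = 100e6   # assumed figure implied by "10 whole mice"

print(f"simulated neurons: {simulated_neurons:,}")
print(f"fraction of a human brain: {simulated_neurons / human_brain_neurons:.0%}")   # 1%
print(f"equivalent mouse brains: {simulated_neurons / mouse_brain_neurons:.0f}")     # 10
```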

Will this 10-mousepower machine, with its brain-like wiring, ever “think”? When British computer pioneer Alan Turing proposed his “Turing test” for measuring machine intelligence in 1950 — in which a human judge has to be fooled into believing that the entity behind the screen is in fact a person — he confidently predicted a computer would be able to pull off the deception by the year 2000. But despite decades of effort and sky-high optimism from artificial intelligence gurus such as MIT professor Marvin Minsky, no machine has passed Turing’s test.

Nevertheless artificial intelligence research has spawned some useful ideas — Google’s search algorithms, for one. “We haven’t come to grips with artificial intelligence because we haven’t really understood what natural intelligence is yet. Part of the motivation for our work is a back-to-basics approach,” says Furber.
If his machine can help explain natural intelligence, the path to artificial intelligence might just follow. Or we might just end up with something even more awesome than Google.

In a 2009 TED talk, Henry Markram boldly predicted that 10 years was all it would take to build a realistic model of the human brain inside a supercomputer. If he succeeded, he promised for his 2019 TED talk: “We will send a hologram to talk to you.”

Since then Markram has tempered his ambitions. He doesn’t promise to be able to conduct the entire human brain symphony within 10 years. Crucially, though, the platform on which he will one day do it should be finished — a supercomputing platform powerful enough to simulate a detailed model of the whole human brain. Scattered across this virtual stage, some sections of simulated neurons will be in place, a half-built avatar capable of simple learning and decision-making, and even experiencing sleep-wake cycles. The Turing test might take a while, but with the stage in place the rest should follow.

Meanwhile, the stage itself should help ensure the HBP’s long-term viability. The supercomputer, or rather “neuromorphic supercomputer”, will have been taught some new tricks on learning and memory by the mother of all teachers. The spin-off companies it generates will kick-start the next computer revolution.

If Markram can dream of a hologram giving a TED talk in 2019, then who’s to say that in 150 years we will not, like Jor-El, be uploading ourselves into silicon, perhaps to survive planetary collapse?

“Uploading a person’s brain is further off than the idea that we might build computer models that behave with the characteristics of an average human brain,” Furber says. “If you wanted to capture a human brain, you’d need to measure the efficiency of every synapse in that brain, and there are 10 to the power 15 of them. That’s not going to be easy.”

Not easy, but given our track record, perhaps not impossible.

OMFG Solid State Entity its happening!!!