Science is difficult to understand and even more difficult to explain. John Bohannon thinks that words are inept at explaining scientific concepts and should stay out of the way; PowerPoint is useless too. Instead, Bohannon argues, scientific concepts should be explained with dance. He foresees a boost to the economy if dancers were hired as aides to presenters, not only because those dancers would have jobs, but because science would be communicated more effectively, leading to more innovation.
Bohannon presents these ideas in an engaging TEDx talk, with the help of the Black Label Movement dance team. No doubt, seeing people dance out cellular locomotion is fun and more straightforward than hearing a verbal description of the same thing. I wonder, though, whether such concepts would be portrayed more accurately and understood more easily through animations. Perhaps there is something about seeing people perform live that is more engaging than seeing animations or the same performance on a screen. If that's true, then having dancers at one's presentations would be very helpful (it would also make that presentation stand out, if no one else has dancers).
When we look back at the important advances in neuroscience in the 20th and 21st centuries, what will we remember? What will we still find useful and worth pursuing further? The field is still in its nascent stages, even a century after Ramón y Cajal showed evidence for the neuron doctrine, establishing the neuron as the fundamental unit of the nervous system, and Brodmann published the cytoarchitecture studies that convinced the world that the brain is divided into distinct areas and likely uses them to divvy up processing. Yet we still have virtually no clue how the brain works: there is no central theory and no cures for brain diseases, only a whole lot of curious, enthusiastic, and optimistic minds and some funding to help them get stuff done.
Scientific American is collaborating with marine scientists on a project to crowd-source the analysis of whale songs and calls. Having gathered thousands of sound files from many species of whales, scientists now need to classify each call and song to get an understanding of each species' repertoire. Once the calls and songs are sorted and classified, scientists can pursue interesting questions like: is a whale's song repertoire related to its intelligence?
To classify the vocalizations, scientists are asking the public for help. On whale.fm, anyone (no expertise required) can sift through some spectrograms and embedded sound files and match them to a template. It's easy, fun and cool. Something that would take one person months or years to do can now be accomplished much faster by the public in a fun format.
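For a sense of what "matching a call to a template" involves computationally, here is a minimal sketch of one automated baseline: comparing the spectrograms of two calls by normalized correlation. This is not whale.fm's actual pipeline (which relies on human pattern matching); the synthetic signals, window sizes, and similarity measure below are all illustrative assumptions.

```python
import numpy as np

def spectrogram(signal, win=128, hop=64):
    """Naive magnitude spectrogram: windowed FFT of overlapping frames."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

def similarity(spec_a, spec_b):
    """Normalized correlation between two equal-shaped spectrograms."""
    a = (spec_a - spec_a.mean()) / (spec_a.std() + 1e-12)
    b = (spec_b - spec_b.mean()) / (spec_b.std() + 1e-12)
    return float((a * b).mean())

# Two synthetic "calls": a 440 Hz tone vs. a different 880 Hz tone
t = np.linspace(0, 1, 8000)
call = np.sin(2 * np.pi * 440 * t)
other = np.sin(2 * np.pi * 880 * t)

s_call, s_other = spectrogram(call), spectrogram(other)
print(similarity(s_call, s_call) > similarity(s_call, s_other))  # True
```

A call matches its own template perfectly (similarity 1.0) and a different call much less well, which is the decision the human volunteers are effectively making by eye.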
This follows previous efforts in scientific crowd-sourcing such as FoldIt, a game in which people fold proteins according to simple rules (a task computers can't do well), and the search for new galaxies by amateur astronomers in images taken by the Hubble telescope. Perhaps this type of effort could help the Connectome projects that aim to map the brain down to each synapse using electron microscopy, where every neurite in a cross-sectional image must be matched to itself in adjacent images. Tracing axons across thousands of EM images could actually make a fun and productive game.
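The matching step such a game would gamify can be sketched in a few lines: link each segmented neurite profile in one section to its best-overlapping counterpart in the adjacent section. The pixel sets, profile names, and overlap threshold below are invented for illustration; real connectomics pipelines use far more sophisticated segmentation.

```python
def iou(a, b):
    """Intersection-over-union of two sets of pixel coordinates."""
    return len(a & b) / len(a | b)

def link_sections(section_a, section_b, threshold=0.3):
    """Greedily link each neurite profile in one EM section to the
    best-overlapping profile in the adjacent section."""
    links = {}
    for name, pixels in section_a.items():
        best = max(section_b, key=lambda n: iou(pixels, section_b[n]))
        if iou(pixels, section_b[best]) >= threshold:
            links[name] = best
    return links

# Toy segmented sections: neurite cross-sections as pixel-coordinate sets
sec1 = {"axon1": {(0, 0), (0, 1), (1, 0)}, "axon2": {(5, 5), (5, 6)}}
sec2 = {"p": {(0, 0), (0, 1), (1, 1)}, "q": {(5, 5), (5, 6), (6, 6)}}

print(link_sections(sec1, sec2))  # {'axon1': 'p', 'axon2': 'q'}
```

Chaining these links across thousands of sections strings each neurite into a full 3D reconstruction; the hard, human-friendly part is catching the places where greedy overlap gets it wrong.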
It is no accident that philosophers from Plato to Hume to Adam Smith have advocated the division of labor as a driving force of society and its economy. The more specialized one's labor, the more advanced the resulting product. As people gain expertise in their distinct fields, they become better able to advance those fields. This is as true for labor as it is for science (in general, at least; there is a lot to be said for multi- and cross-disciplinary approaches and for the out-of-the-box thinking that specialization usually dampens, but on the whole it is more advantageous to specialize in a field than not). Scientific experts are the people society relies on to advance knowledge and establish facts; they are the people we go to when we need answers.
Here is why we need experts: Bret Stephens is a journalist and Wall Street Journal columnist whose training is, supposedly, in journalism and maybe economics (Wikipedia, which never lies, says he attended the London School of Economics). In his column today, Bret Stephens writes about global warming. Entitled “The Great Global Warming Fizzle,” the article compares climate change science to a religion – and a dying one too – whose adherents are “spectacularly unattractive people” and whose “claims are often non-falsifiable, hence the convenience of the term ‘climate change’ when thermometers don’t oblige the expected trend lines.”
Now if Bret Stephens were an environmental scientist with proper training, his criticism of climate science would be worth hearing. But just as we don't take physics advice from members of the Taliban (Sam Harris's favorite example), we shouldn't take climate change advice from Bret Stephens. Has he seen the data in question? Would he know how to interpret it? Would he draw the same conclusion if money and government intervention were not factors? The worry is that he is not fulfilling his role as a journalist, in which he is expected to provide the public, who are not experts, with fair interpretations of the science and policy. Instead of facts, we get an opinion piece on something Bret Stephens has no expertise in.
Here's the dilemma for those who care: is it better to ignore the vocal people who don't know what they're talking about, or to correct them and spread the right message? The latter would be a far more active and constructive choice, but it would have to be proactive rather than reactive, as this post is.
Slate magazine had a slideshow by Daniel Engber a little more than a week ago on unusual laboratory animals and why they’re important. The slideshow was prompted by Engber’s observation that mice and rats make up an enormous proportion of all lab animals, perhaps limiting what we can conclude from experimental results and narrowing our perspective on what questions to ask. In short, scientists need to start thinking outside the box when it comes to model organisms. Engber lists fourteen animals, some of which have already given important clues to specific questions. I will mention some of those here.
1. The squid: the squid peaked in importance in the 1950s, when Hodgkin and Huxley got the idea to use its giant axon (up to 1 mm in diameter) to study the properties of the action potential. The large diameter made it possible for them to insert microelectrodes directly into the intracellular space of the axon, thereby measuring the flow of ions across the membrane during various stages of the action potential or under different extracellular ionic concentrations. This work resulted in Hodgkin and Huxley's mathematical model of action potential generation, which earned them the 1963 Nobel Prize.
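The Hodgkin-Huxley model itself is compact enough to simulate in a few dozen lines. Below is a minimal forward-Euler sketch using the standard modern parameterization (resting potential near -65 mV); the injected current, duration, and time step are illustrative choices, not anything specific to the original papers.

```python
import numpy as np

# Classic Hodgkin-Huxley squid-axon parameters (modern -65 mV convention)
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3        # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.387            # reversal potentials, mV

def rates(V):
    """Voltage-dependent opening/closing rates for the m, h, n gates."""
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    return am, bm, ah, bh, an, bn

def simulate(I_inj, T=50.0, dt=0.01):
    """Forward-Euler integration; returns the voltage trace in mV."""
    V = -65.0
    am, bm, ah, bh, an, bn = rates(V)
    m, h, n = am / (am + bm), ah / (ah + bh), an / (an + bn)
    trace = []
    for _ in range(int(T / dt)):
        am, bm, ah, bh, an, bn = rates(V)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        I_ion = (gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK)
                 + gL * (V - EL))
        V += dt * (I_inj - I_ion) / C
        trace.append(V)
    return np.array(trace)

def count_spikes(trace, thresh=0.0):
    """Count upward threshold crossings (action potentials)."""
    return int(np.sum((trace[:-1] < thresh) & (trace[1:] >= thresh)))

print(count_spikes(simulate(10.0)))  # repetitive firing under 10 uA/cm^2
print(count_spikes(simulate(0.0)))   # no spikes at rest
```

Four coupled differential equations, fit to squid-axon voltage-clamp data, reproduce the action potential; that economy is a big part of why the model has lasted.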
The celebrated cognitive neuroscientist Michael Gazzaniga has a new book coming soon, Who's in Charge?, about the implications of neuroscientific findings for the law. To promote it, Slate printed an excerpt that asks what it means for responsibility and culpability if free will doesn't exist.
The idea that "if determinism is true, then no one is responsible for anything" doesn't have to be true: a person acting criminally is still the most proximal cause of the bad behavior and should be held accountable; the Big Bang isn't to blame for his criminality. Moreover, by this reasoning, no one who commits a crime is responsible, because their brain made them do it: if determinism is true, they had no choice but to 'sin.' So why should a seemingly healthy offender go to jail (where he doesn't get rehabilitated and doesn't learn not to repeat his offenses), while one with a brain tumor or schizophrenia is treated medically and perhaps even reinstated into society?
These questions, I think, lead us to think of determinism and free will as inappropriate frameworks for the legal system. If no one has a choice in their behavior, then clinically sick people shouldn't be treated any differently from anyone else; if no one is responsible for their actions, then sick people aren't somehow "less responsible" than others.
What emerges is that those who "can't help" but act criminally (e.g., people with schizophrenia) are treated medically and released back into society when they're healthy again (psychiatry has a whole lot of catching up to do if that is to actually happen; perhaps neuroscience can help?). So why don't we also treat those criminals who appear healthy? If their brain made them kill, then there must be something wrong with it. What I'm driving at is that a judicial system based on retribution doesn't make much sense. Wouldn't we be far better off if we actually fixed criminals? Perhaps that's wishful thinking. Or worse, dystopian.
Patch clamping is an electrophysiological method used to measure the electrical currents flowing across a cell's membrane. The setup involves attaching a pulled-glass pipette filled with conductive solution to a cell membrane and recording the currents that pass through that patch of membrane. The technique is notoriously difficult in cultured neurons or brain slices, and even more difficult in live animals (ultimately, the most appropriate system to study). Several scientists have recently developed an algorithm to automate the process, thereby reducing the skill level and time commitment required to perform patch-clamp experiments in vivo.
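One core step such an algorithm relies on can be sketched simply: the pipette descends while the system monitors its electrical resistance, and a sudden rise in resistance signals that the tip has likely contacted a cell membrane. The readings and thresholds below are invented for illustration; the real automated systems run a considerably more elaborate protocol (seal formation, suction control, break-in, and so on).

```python
def detect_contact(resistances, baseline_n=5, jump=0.25):
    """Return the index at which pipette resistance first rises more than
    `jump` (fractional) above the initial baseline, suggesting the tip
    has contacted a cell membrane; None if no contact is detected."""
    baseline = sum(resistances[:baseline_n]) / baseline_n
    for i, r in enumerate(resistances[baseline_n:], start=baseline_n):
        if r > baseline * (1 + jump):
            return i
    return None

# Simulated resistance readings (megaohms) during pipette descent:
# steady ~5 MOhm in the bath, then a jump as the tip meets a membrane
readings = [5.0, 5.1, 4.9, 5.0, 5.1, 5.0, 5.2, 6.8, 7.5]
print(detect_contact(readings))  # 7
```

Automating this kind of judgment call, which experimenters otherwise learn over months of practice, is exactly what lowers the skill barrier for in vivo patch clamping.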
fMRI has traditionally been used for mapping the brain and correlating brain function with specific structures. The method has become something of a laughingstock within the electrophysiological community because of the countless studies that proclaim region A to be responsible for function B. A typical blunder goes like this: "Increased activation of the amygdala during a fear conditioning task suggests that the amygdala is the brain's fear center." To be fair, the method is still very useful, and serious scientists don't fall into this fallacy as often as the popular media does. Some outstanding questions are what the measured signal (blood oxygenation level dependent; BOLD) actually means for neural activity; whether it's possible to disambiguate excitation from inhibition; how activation in one region affects connected regions; and what the causal relationships among activated regions are.

To address the last two of these, Ed Boyden and colleagues at MIT used a combination of optogenetics and fMRI (opto-fMRI) in awake mice. The idea is that if they can change the dynamics of a defined population of cells in a localized and fast way (they infected pyramidal cells in mouse somatosensory cortex with ChR2), the network effects of that activation will be revealed by fMRI, validating the network effects in both technologies. One limitation still inherent to fMRI is its slow temporal resolution: while optogenetic stimulation changes membrane potential with millisecond resolution, fMRI's hemodynamic response unfolds over seconds. Perhaps other imaging methods, like multiphoton imaging, will eventually be used to dissect large-scale circuits in awake animals.
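The temporal-resolution mismatch is easy to make concrete: convolving a brief optogenetic pulse with a canonical double-gamma hemodynamic response function shows the BOLD signal peaking seconds after a stimulus that lasted only half a second. The HRF parameters and stimulus timing below are standard but illustrative assumptions, not the values from the opto-fMRI study.

```python
import numpy as np
from math import gamma

def hrf(t, p1=6.0, p2=16.0, ratio=1 / 6):
    """Canonical double-gamma hemodynamic response function (t in s)."""
    g = lambda t, p: t ** (p - 1) * np.exp(-t) / gamma(p)
    return g(t, p1) - ratio * g(t, p2)

dt = 0.001                                   # 1 ms simulation step
t = np.arange(0, 30, dt)
stim = np.zeros_like(t)
stim[(t >= 1) & (t < 1.5)] = 1.0             # 500 ms optogenetic pulse

# Predicted BOLD response: stimulus convolved with the HRF
bold = np.convolve(stim, hrf(np.arange(0, 25, dt)))[:len(t)] * dt

peak_time = t[np.argmax(bold)]
print(round(peak_time, 1))  # peak arrives seconds after the stimulus
```

A millisecond-precise perturbation thus appears in the BOLD signal as a smooth bump peaking several seconds later, which is why opto-fMRI maps which regions respond, not when within a circuit they respond.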
Here is a nice interview with Jeff Lichtman of Harvard, who is working on a cellular-level map of synaptic connections in the brain (a connectome). The interview raises several questions, like how we can collect thousands of petabytes (a petabyte is a million gigabytes) of data on the structure of the brain at the level of individual cells. Do we even need so much data? Even though connectomics won't reveal much about neural dynamics (i.e., how neurons actually transmit or integrate information), it should be a useful tool for further work in theoretical neuroscience. Someone has to do it.
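A back-of-envelope calculation shows where those petabytes come from. Assuming illustrative acquisition parameters (4 nm lateral resolution, 30 nm section thickness, 8-bit voxels), imaging a single cubic millimeter of tissue already yields a couple of petabytes:

```python
# Back-of-envelope data volume for EM connectomics (illustrative numbers)
voxel_xy_nm = 4          # assumed lateral resolution
section_nm = 30          # assumed section thickness
bytes_per_voxel = 1      # 8-bit grayscale

mm3_in_nm3 = (1e6) ** 3  # a cubic millimeter in cubic nanometers
voxels_per_mm3 = mm3_in_nm3 / (voxel_xy_nm * voxel_xy_nm * section_nm)
bytes_per_mm3 = voxels_per_mm3 * bytes_per_voxel

petabyte = 1e15
print(f"~{bytes_per_mm3 / petabyte:.1f} PB per cubic millimeter")
```

Scale that by the hundreds of cubic millimeters in a mouse brain, let alone a human one, and the "thousands of petabytes" figure stops sounding hyperbolic.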
One caller in this interview asks a great question on the hard problem of consciousness: when scientists look at neuronal activity when one is thinking of a childhood pet, where in the universe is that image of the dog? All the scientists see, after all, is electrical activity…
Preparing for SfN 2011, I have to give a shout-out to one of the coolest emerging technologies in neuroscience, optogenetics. Optogenetics, as everyone no doubt knows by now, is a method that allows researchers to control the electrical activity of neurons using light. Scientists infect certain types of neurons with an algal transmembrane channel protein that allows ions to flow into a cell when light of the preferred wavelength shines on it. The method has been described well elsewhere (Steve Ramirez waxes poetic about it on the Mind the Gap Junction blog). Optogenetics is an amazing method for many reasons, but mainly because, by allowing us to directly activate or silence neurons, it makes it possible to establish causal relationships in neural circuits: if neuron A is hyperactive, the mouse runs around in circles; if A is silenced, perhaps the mouse is unable to run in circles; therefore, activity in neuron A causes the mouse to run in circles.
This is important because traditional electrophysiological methods allow us only to record activity without manipulating it directly, and the methods that do allow us to manipulate activity (e.g., pharmacology or stimulating electrodes) have a myriad of side effects that make the precise causes of behavior unclear (does TTX act only on sodium channels? Which types? And stimulating electrodes are rather crude spatially).
As optogenetics becomes more and more refined and widespread, I can't help but wonder what it will do for the most prevalent neurological diseases. Will this method cure Alzheimer's? How about Parkinson's? Optogenetics promises to show us circuit-level interactions among neurons and perhaps even to nail down the network effects of particular diseases. But if we're looking for cures for diseases instead of just fixes, we ought not to forget our molecular biologists and maybe even geneticists. That's not to say that treatments for neurological diseases are worthless! There are, after all, no cures for any brain diseases so far, so anything will be useful. With all this enthusiasm over optogenetics, we have to be honest about its capabilities and limitations.