Congress to Peer Review Science Funding?

There is a bill circulating in the House of Representatives, sponsored by Lamar Smith of Texas, that aims to give Congress oversight over government grants for scientific research. Grants from the National Science Foundation are awarded to basic research projects based on their scientific merit, as determined by a system of peer review. Smith’s legislation hopes to “improve” science funding (read: reduce government spending) with this proposal:

CERTIFICATION.—Prior to making an award of
any contract or grant funding for a scientific research project, the Director of the National Science Foundation shall publish a statement on the public website of the Foundation that certifies that the research project—

(1) is in the interests of the United States to  advance the national health, prosperity, or welfare, and to secure the national defense by promoting the progress of science; 

(2) is the finest quality, is ground breaking, and answers questions or solves problems that are of utmost importance to society at large; and 

(3) is not duplicative of other research projects
being funded by the Foundation or other Federal
science agencies


Perhaps the effort is genuine and Smith wants to boost scientific research and output in the U.S. Perhaps it is a naive micromanaging tendency that leads him to try to take over the peer review process (which is admittedly far from perfect), just as he did with the SOPA (“Stop Online Piracy Act”) bill. Or maybe the underlying motivation is more sinister.

Regardless, the point of basic science is that it does not promise to bring about any specific advances or cures for social or medical ailments. Rather, basic science advances society by a more stochastic process – two steps forward, one step back. Many scientific projects either don’t work out at all or produce negative or uninterpretable results; but all information produced through the scientific process is useful. If my experiment produces confusing results, the conclusion shouldn’t be that I wasted the government’s money; on the contrary, that money (and time!) is saved for the next person who wants to test a similar hypothesis – that person doesn’t have to duplicate my failed effort and can design a better (or different) experiment.

The National Science Foundation is distinguished from the National Institutes of Health primarily in that the latter aims to improve human health through science, while the NSF simply aims to fund scientific projects in order to produce knowledge. (For perspective, the NSF’s budget is ~$7 billion compared to the NIH’s $32 billion.) With such tight budgets, the funding agencies are already forced to choose only the most competitive and promising proposals. The agencies must be accountable, but holding them at gunpoint won’t produce any innovation.

More on the issue at The Huffington Post, American Science Blog and The New Yorker.

The President’s BRAIN Initiative

At three pounds and 100 billion cells, each making up to 10,000 connections, the human brain makes Facebook’s network look like child’s play, and not without reason: our brains are solely responsible for our every thought, emotion and action. The human brain is the most complicated machine in the known universe.

It is fitting, then, that President Obama announced this week that despite tremendous progress in the past century, our knowledge of brain function remains in a sort of swamp, and that it is time to pave our way out in an effort to solve how the brain functions.

The BRAIN Initiative seeks $100 million for the next fiscal year to fund new research into mapping connections between nerve cells, with the ultimate aim of curing monstrous diseases like Alzheimer’s, Parkinson’s, autism and PTSD. The cornerstone of the proposal, based on two idea-papers published by top neuroscientists in the last few months, in Science and Neuron, is to create new technologies capable of measuring the electrical activities of millions of brain cells at a time (the current state of the art is hundreds of cells).

The President hopes this sort of “big science” project will follow the Human Genome Project’s success in creating jobs and boosting the US economy, while unifying neuroscientists around the world in the pursuit of cures for major diseases (which, according to a Times article, exact a $600 billion annual worldwide toll for dementia care alone). While well-intentioned, the proposal is rife with serious problems. Neuroscientists, like Prince Herbert, must be cautious.

One problem with the proposal – and a way in which it differs from previous Big Science projects – is that it’s not clear what victory would look like. With the Moon Shot and the Human Genome Project, we had clear goals to work toward and knew exactly when they were achieved. How will we know when we’ve understood the brain? The Initiative’s aim to record from every neuron involved in a behavior doesn’t help its case – surely there are millions of possible behaviors one could study, each starting from any number of network states, not to mention the multitude of ways a neural network can change through learning. The proposal doesn’t clarify how such experiments could be set up or what information they would provide.

Interestingly, there is an effort already underway to map connections between brain cells – down to the approximately 10,000 inputs per neuron – in whole circuits or brains. The Connectome Project, led by Sebastian Seung, Jeff Lichtman and Winfried Denk, is quite controversial because its goal is to create static pictures of connections rather than ever-evolving ones of electrical activity; but it is also very well defined and promises clear answers, much like the Human Genome Project did. One of the experiments proposed by the Connectome team, according to Jeff Lichtman, is to create diagrams of the circuit responsible for generating songbirds’ songs (a highly complex and well-defined learned behavior) before and after learning. Such data would be tremendously important for our understanding of how neurons organize themselves during learning to produce sequences of complex movements.

In addition to the vagueness of the BRAIN Initiative’s goals, its promise to cure diseases like Alzheimer’s reveals either a misunderstanding of how basic science works, or simply a dangerous exaggeration that will discredit neuroscientists in the public eye. If we fail to find cures in the next decade, will conservatives in Congress conclude that science just doesn’t work?

And while the proposed “mapping” of activity will provide great scientific insight into brain function, it won’t find cures for horrible diseases such as Alzheimer’s, because those are rooted in molecular and genetic failures rather than in neuronal electrical activity itself. Aberrant activity is a result, not a cause, of molecular and genetic problems; manipulating activity, as Deep Brain Stimulation does for Parkinson’s, is a temporary fix. The Initiative will likely make the highest impact in cases of mechanical damage to the brain, like stroke, where circuit function goes awry because dead neurons don’t make critical computations. In such cases, measuring activity in healthy brains will be enormously helpful to treatments.

Elements most critical to the Initiative’s success will not be those that define it, but those outside of it. The organizers must ensure that funding for BRAIN does not siphon money from existing projects or from alternative, independent future proposals. Scientists must have the freedom to pursue their interests without having to follow the government’s vision.

Idolatry in Science

Gary Marcus recently celebrated Noam Chomsky in an essay about the famous linguist’s life and influence on the field of linguistics over the past fifty years. There is no doubt that Chomsky has had a tremendous impact on American intellectual life, from work on language to political and philosophical ideas. However, Marcus’s description of Chomsky’s influence on the field and his colleagues is somewhat troubling, and unfortunately not unique to Chomsky but prevalent in the sciences. In every scientific sphere, it seems, a handful of individuals hold excessive sway; these one-percenters are revered to such an extent that their opinions go unchallenged at best, or are treated as dogma at worst.

As Marcus points out, young linguists have a hard time studying what Chomsky finds uninteresting, the tragedy of which manifests itself in those people either not getting jobs and recognition in the field, or abandoning their interests in favor of Chomsky’s: “A good way for a young linguistics graduate student to make a name is to develop an intriguing idea that Chomsky mentions in one of his footnotes; it’s a riskier move to study something that Chomsky doesn’t find to be important.”

This is also the case in the life sciences, where such idols sit on funding committees and journal editorial boards, which not only act as gatekeepers to young investigators’ professional lives but also hold huge influence over technological and medical progress in Western society.

Growing up (i.e. in college), I had the impression, or dream, that scientists were in many ways removed from the regular vices the rest of humanity suffers from, chief among them the need for social status. The typical scientist, in my imagination, was a disheveled person wearing ragged, ripped clothes, sometimes overdue for a shower. Presumably these people are so immersed in their thoughts and experiments that they simply don’t care to follow social customs; they have no need to impress people with fancy designer clothes; they forget to be social and polite not because they’re rude or too cool, but simply because they’re living on another plane. Scientists don’t care what Lindsay Lohan did last night because they don’t look up to celebrities the way others do.

That was my imagination; the reality is not as cute. As the Chomsky example shows, scientists do have idols and status symbols. The most obviously silly status symbol is the journals one publishes in. Over the years, a handful of journals have accumulated such reverence that even one paper published in them can make a scientist’s career. Publishing in Nature or Science is the equivalent of owning a Porsche or dating a supermodel or [fill in whatever people want for the sole reason that other people want it]. Everyone is guilty. Knowing that this is a problem is not enough; I am still impressed by papers published in such journals and would be ecstatic if my work appeared in those glossy pages. (The counterargument is that top journals publish scientific papers based on their merit, so it only makes sense that those publications should be bellwethers of good science. For this to be true, though, the editors at those journals would have to be blind to the authors’ and universities’ identities – which they’re not.)

As far as idolatry goes, the real victims may be scientific theories that get shut down simply because the Chomskys of science don’t care for them (or have their own counter theories); or young scientists who need to publish and get grants (“the rich get richer”). I wish Dr. Chomsky all the best for his birthday and future, and can only hope for the sake of science that the old maxim attributed to Max Planck does not stay true much longer: Truth never triumphs — its opponents just die out.

Runaway Selection in Birds of Paradise

I watched a program on PBS the other night about birds of paradise – exotic birds from New Guinea with elaborate displays. To attract females, males have evolved intricate feathers and courtship dances and rituals. A Parotia male, for example, will clear out a dancing ground, and when a female is in sight, he will puff up the feathers around his chest into a sort of collar similar to those of Italian nobility of the Renaissance (or perhaps closer to a ballerina’s skirt), with bright iridescent feathers forming a shield below his neck; the long quills on his head, which usually point lazily toward his rear, stick straight up in a semicircle around his head. The most impressive part is the actual dance: with every feather aroused, he goes into a trance, shaking his head left to right, bouncing up and down on his feet, and encircling his audience – one lucky female who judges the performance and decides if the male is a worthy mate.

Such displays by males of hundreds of bird species, each unique and captivating, are the result of millions of years of evolution, with each generation ensuring the propagation of the best displays. The best displays, in turn, are supposed to convey fitness – how successful the male’s offspring will be and how good a father he will be. In tough climates, where food resources are scarce and predatory pressure is fierce, most animals evolve to survive by being the best at finding food and hiding from predators. In New Guinea, where most species of birds of paradise are found, the birds have for millennia enjoyed rich nutritional resources in the dense rainforest and limited pressure from predators. This easy lifestyle has allowed extravagant features to evolve – features that have nothing to do with actual fitness and that in some species would be a handicap in a mano-a-mano encounter with a predator.
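The runaway logic can be sketched numerically with a toy model loosely in the spirit of Lande’s quantitative-genetic treatment of sexual selection: a male display trait and the female preference for it are genetically correlated, so selection on one drags the other along. All parameter values below are illustrative assumptions, not measurements from any real bird.

```python
# Toy deterministic sketch of Fisherian runaway selection.
# t = mean male display trait, p = mean female preference.
a = 0.5   # mating advantage per unit of female preference (assumed)
c = 0.1   # survival cost of an exaggerated display (weak predation)
G = 0.5   # genetic variance of the display trait
B = 0.3   # genetic covariance between trait and preference

t, p = 0.1, 0.1
for generation in range(100):
    beta = a * p - c * t   # net selection gradient on the trait
    t += G * beta          # trait responds to direct selection
    p += B * beta          # preference is dragged along via correlation

# Because B/G (0.6) exceeds c/a (0.2), each generation's gain in the
# trait boosts the preference, which boosts the trait again: a runaway.
```

With weak predation (small `c`), trait and preference escalate together instead of settling at an equilibrium; with a large enough cost the same recursion converges, which is the toy-model analogue of a harsher environment.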

Like the birds of paradise, we humans have enjoyed a relatively pressure-free evolutionary existence for the past few hundred years. This timeframe is too short for macroscopic evolutionary changes, but the idea does make me wonder what extravagant features runaway selection may eventually endow humans with. Meanwhile, here is a video of the Parotia’s and the Superb Bird of Paradise’s courtship dances:


Aside

A couple of weeks ago in the New York Times, David Ewing Duncan wrote an article, “How Science Can Build a Better You,” describing a brain-machine interface called Braingate that supposedly uses a tiny bed of “electrons” to read out brain activity. Scientists recently described this device’s ability to decode neural signals to control a prosthetic arm; this and other devices promise to restore mobility in paralyzed or tetraplegic patients.

However, the Braingate device actually uses an array of electrodes, not electrons. An electron is a subatomic particle that carries negative charge; the flow of electrons constitutes electric current. Electrodes, on the other hand, are wires that measure changes in electrical potential.
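For the curious, the kind of decoding such a device performs can be caricatured in a few lines: record firing rates on many electrodes and fit a linear map from rates to intended movement. This is a synthetic-data sketch under assumed dimensions (a 96-electrode array, 2-D velocity), not Braingate’s actual algorithm; real systems use more sophisticated filters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_samples = 96, 500                  # assumed array size and trial count

# Hidden "tuning": how each unit's rate depends on intended (vx, vy).
true_weights = rng.normal(size=(n_units, 2))

velocity = rng.normal(size=(n_samples, 2))    # intended movements
rates = velocity @ true_weights.T + 0.1 * rng.normal(size=(n_samples, n_units))

# Fit decoder weights by least squares: velocity ≈ rates @ W
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
predicted = rates @ W

# Fraction of movement variance the decoder recovers.
r2 = 1 - ((velocity - predicted) ** 2).sum() / \
         ((velocity - velocity.mean(0)) ** 2).sum()
```

The point of the sketch is only that the electrodes supply the measurements; the decoding itself is ordinary statistics applied to those voltage recordings.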

While the spelling difference is trivial, the semantic error is significant. Writing about science is a challenge for those without scientific training, as is copy-editing it; the complexity of science should require journalists to reach a level of expertise in their field before bringing their reports to the world. On the other side, American readers should have enough basic education in science to know the difference between electrodes and electrons, and should not risk being branded as nerds for pointing out such mistakes. Investment in early childhood education is critical for basic science literacy, and the upcoming presidential election will help determine whether Americans choose “electrodes” over “electrons”.

Lost Thoughts in the Wake-Sleep Transition

I’ve been meaning to write about a curious phenomenon I experience every time I go to sleep. Lying in bed last night, I was thinking about a movie I had just finished watching – The Aviator, a great movie! – when I was overcome by a sudden frustration: some idea that had been running through my mind had simply vanished, replaced by something silly and mundane. Trying desperately to remember what I had just been thinking about, I could find no trace of my thoughts. It was as if they were never recorded. This happened several times, until I finally gave up and fell asleep. Even more perplexing is that I am aware of those lost thoughts; I know something is missing. I just can’t remember what it was.

If these aren’t freak phenomena, one can imagine something in the wake-sleep transition that interferes with short-term memory. It’s as if the network or assembly representing the would-be memory doesn’t undergo the short-term plasticity necessary to “solidify” those connections. This is of course overly simplistic and probably misleading language, but it’s one way to think about it. Perhaps this can be (or has been?) analyzed in rats, by analogy with the “replay” or reactivation, during sleep, of hippocampal activity from experienced events, as in this paper by Matt Wilson of MIT. One could examine lost thoughts in the wake-sleep transition by comparing the temporal structure of activity during that transition with the same activity during experience on a maze, for example. Perhaps this loss of thought depends on some subcortical “kick” that’s absent during sleep. Just a thought.

Link

The Daily Show aired a special report by Aasif Mandvi on “an expensive lesson about bringing fish back to life,” or the dangers of leaving children with the ability to make purchases on the Apple App Store. The point of the story is that children can’t inhibit behavior as well as adults can, due to their underdeveloped frontal cortices, and are therefore vulnerable targets for those whose sole purpose is to make easy money – not unlike drug dealers selling to addicts who just can’t help themselves.


Aside

Christopher Hitchens writes in the January edition of Vanity Fair about what he believes to be a nonsensical maxim: “What doesn’t kill me, makes me stronger.” Hitchens is suffering from esophageal cancer, the primary reason for his sentiment that he is not becoming “stronger” but is rather in terminal decline.

The phrase is attributed to Nietzsche, whose mental decline late in life, Hitchens notes, probably did not make him any stronger. Nor did the philosopher Sidney Hook consider himself stronger after a terrible experience in a hospital. Hitchens considers himself among the many who don’t conquer illness and come out stronger. But there is a flaw in this reasoning – the first condition for becoming stronger is not being killed. Hitchens is thankfully still alive and kicking (i.e. writing), but he hasn’t defeated his cancer (yet, hopefully); only after the cancer is over with can Hitchens say whether he’s stronger or weaker. Now is premature. The more important qualification is that “stronger” should mean mentally stronger, not physically. Diseases that target the mind specifically, like Nietzsche’s syphilis, should be discounted; all others should, one hopes, be an exercise for willpower and mental fortitude.

Whenever you think life is hard, remember Hitchens and the countless others who brave horrible diseases. Stay strong, Hitch!

Hitchens’s essay may be found here.

Science, Religion and Values: Magisteria Redefined

Science and religion have been archenemies for some time now, one on a quest for knowledge and truth, the other seeking to fill a perceived void of meaning in people’s lives. Logical inspection suggests the two systems are incompatible, since science requires evidence for all claims, whereas religion insists on faith even where there is no evidence whatsoever. But many people do have both science and religion in their lives. How do they deal with the conflict? Stephen Jay Gould argued in a 1997 essay on non-overlapping magisteria, NOMA, that there actually is no conflict between science and religion:

“No such conflict should exist because each subject has a legitimate magisterium, or domain of teaching authority—and these magisteria do not overlap (the principle that I would like to designate as NOMA, or “nonoverlapping magisteria”).

The net of science covers the empirical universe: what is it made of (fact) and why does it work this way (theory). The net of religion extends over questions of moral meaning and value. These two magisteria do not overlap, nor do they encompass all inquiry (consider, for starters, the magisterium of art and the meaning of beauty). To cite the arch cliches, we get the age of rocks, and religion retains the rock of ages; we study how the heavens go, and they determine how to go to heaven.”

Brainy Computers

“We’re not trying to replicate the brain. That’s impossible. We don’t know how the brain works, really,” says the chief of IBM’s Cognitive Computing project, which aims to improve computing by creating brain-like computers capable of learning in real-time and consuming less power than conventional machines. No one knows how the brain works, but have the folks at IBM tried to figure it out?

It seems strange to say that it’s impossible to replicate the brain, especially coming from a man whose blog’s caption reads, “to engineer the mind by reverse engineering the brain.” Perhaps I’m picking at his words – replicating and reverse engineering are different things; to replicate is to copy exactly, while reverse engineering isn’t as strict, since it’s concerned with macroscopic function rather than microscopic structure. But of all the things that seem conceptually impossible today, “engineer the mind” is the winner, especially if one can’t “replicate the brain.” The chances of engineering a mind are greater the closer the system is to the brain; that’s why my MacBook, to my continual disappointment, does not have a mind.

These little trifles haven’t stopped Darpa from funding IBM and scientists elsewhere. IBM now boasts a prototype chip with 256 super-simplified integrate-and-fire “neurons” and a thousand times as many “synapses.” This architecture is capable of learning to recognize hand-drawn single-digit numbers. Its performance may not be optimal, but it is still impressive considering the brain likely allocates far more neurons (and far more complicated ones) to the same task. On another front, the group reported using a 147,456-CPU supercomputer with 144TB of main memory to simulate a billion neurons with ten thousand times as many synapses. Now if only they could combine these two efforts and expand their chip from 256 to a billion neurons.
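For readers curious what a “super-simplified integrate-and-fire neuron” actually computes, here is a minimal sketch of the textbook leaky integrate-and-fire model: the membrane voltage leaks toward rest, accumulates input, and emits a spike and resets whenever it crosses threshold. The parameters are illustrative, not IBM’s actual values.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the membrane trace and spike times for an input-current trace."""
    v = v_rest
    spikes, trace = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest plus the injected input.
        v += dt / tau * (v_rest - v) + i_in * dt
        if v >= v_thresh:      # threshold crossing -> spike
            spikes.append(t)
            v = v_reset        # reset after firing
        trace.append(v)
    return np.array(trace), spikes

# A constant suprathreshold input produces regular, periodic firing.
trace, spikes = simulate_lif(np.full(200, 0.1))
```

Everything interesting in such chips comes from wiring many of these trivial units together through plastic “synapses”; the unit itself, as the sketch shows, is only a few lines of arithmetic.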