
Friday, April 22, 2011

Mild brain shocks may improve learning and cognition




Around 1800, Italian scientist Giovanni Aldini zapped the brains of dead felons with electricity to make their bodies move. He later reported using the same technique to cure 'melancholy.' This sounds like the history of electroconvulsive (shock) therapy, but those were actually the first experiments in transcranial direct-current stimulation (tDCS), tweaking the brain with very mild shocks, 1,000 times less intense than those delivered by shock therapy. A resurgence in tDCS is now underway. Indeed, neuroscientists at the University of New Mexico are using a tDCS device powered by a 9-volt battery to see if 2-milliamp shocks to certain regions of the scalp can improve cognition and learning. Early results are promising. In fact, tDCS may even prime neurons to respond to transcranial magnetic stimulation (TMS), a technique we've posted about on BB many times in which bursts from a magnetic coil near the head alter brain activity. TMS has been tested as a potential treatment for certain severe neurological and psychological disorders. The scientific journal Nature surveys the tDCS field in its latest issue. From Nature:


Last year a succession of volunteers sat down in a research lab in Albuquerque, New Mexico to play DARWARS Ambush!, a video game designed to train US soldiers bound for Iraq. Each person surveyed virtual landscapes strewn with dilapidated buildings and abandoned cars for signs of trouble — a shadow cast by a rooftop sniper, or an improvised explosive device behind a rubbish bin. With just seconds to react before a blast or shots rang out, most forgot about the wet sponge affixed to their right temple that was delivering a faint electric tickle. The volunteers received a few milliamps of current at most, and the simple gadget used to deliver it was powered by a 9-volt battery.


It might sound like some wacky garage experiment, but Vincent Clark, a neuroscientist at the University of New Mexico, says that the technique, called transcranial direct-current stimulation (tDCS), could improve learning. The US Defense Advanced Research Projects Agency funded the research in the hope that it could be used to sharpen soldiers' minds on the battlefield. Yet for all its simplicity, it seems to work.


Volunteers receiving 2 milliamps to the scalp (about one-five-hundredth the amount drawn by a 100-watt light bulb) showed twice as much improvement in the game after a short amount of training as those receiving one-twentieth the amount of current. 'They learn more quickly but they don't have a good intuitive or introspective sense about why,' says Clark.


The technique, which has roots in research done more than two centuries ago, is experiencing something of a revival. Clark and others see tDCS as a way to tease apart the mechanisms of learning and cognition. As the technique is refined, researchers could, with the flick of a switch, amplify or mute activity in many areas of the brain and watch what happens behaviourally. The field is 'going to explode very soon and give us all sorts of new information and new questions', says Clark. And as with some other interventions for stimulating brain activity, such as high-powered magnets or surgically implanted electrodes, researchers are attempting to use tDCS to treat neurological conditions, including depression and stroke. But given the simplicity of building tDCS devices, one of the most important questions will be whether it is ethical to tinker with healthy minds — to improve learning and cognition, for example. The effects seen in experimental settings 'are big enough that they would definitely have real-world consequences', says Martha Farah, a neuroethicist at the University of Pennsylvania in Philadelphia.


'Neuroscience: Brain buzz'
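As a quick back-of-the-envelope check on that 'one-five-hundredth of a light bulb' comparison, here's a sketch in Python. It assumes US 120-volt mains power (the article doesn't specify a supply voltage), so treat the result as a rough order-of-magnitude figure:

    BULB_POWER_W = 100.0      # the 100-watt bulb from the Nature excerpt
    MAINS_VOLTAGE_V = 120.0   # assumption: US household supply; not stated in the article
    TDCS_CURRENT_A = 0.002    # 2 milliamps, per the article

    # A resistive bulb draws I = P / V.
    bulb_current_a = BULB_POWER_W / MAINS_VOLTAGE_V   # about 0.83 A
    ratio = bulb_current_a / TDCS_CURRENT_A

    print(f"A 100 W bulb draws about {bulb_current_a:.2f} A")
    print(f"2 mA of tDCS is roughly 1/{ratio:.0f} of that")   # about 1/417

At 120 V the ratio comes out near 1/400, the same ballpark as the article's 'one-five-hundredth.'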





"

The end of "rare" music and other digitizable media

The end of "rare" music and other digitizable media: "woodstock-front.jpg

This Rolling Stones former-rarity is easy to find online.

My consciousness was forever altered when I happened on Kamandi #3 at age 11. I wanted to read every comic Jack Kirby had created up to that point. But early issues of Fantastic Four were rare and expensive. I bought what I could afford and treasured them. Today I'm sure I could get my hands on PDFs of every issue of Fantastic Four in short order (though I don't have to, because I bought the cheap, pulpy Essential Fantastic Four anthologies; the ones to get are Vols. 1 through 5, after which Kirby jumped ship for DC). Rare old comics, along with music and cult films, are no longer rare.


Bill Wyman of Slate explores 'what it means to have all music [and other digitizable media] instantly available.'


A rarity might be less popular; it might be less interesting. But it's no longer less available the way it once was. If you have a decent Internet connection and a slight cast of amorality in your character, there's very little out there you might want that you can't find. Does the end of rarity change, in any fundamental way, our understanding of, attraction to, or enjoyment of pop culture and high art?

...

In a recent issue of the New York Review of Books, the poet Dan Chiasson wrote at length about Keith Richards' autobiography and made an interesting point near the end, about how scarcity and rarity, long ago, actually fueled artistic endeavor:


[T]he experience of making and taking in culture is now, for the first time in human history, a condition of almost paralyzing overabundance. For millennia it was a condition of scarcity; and all the ways we regard things we want but cannot have, in those faraway days, stood between people and the art or music they needed to have: yearning, craving, imagining the absent object so fully that when the real thing appears in your hands, it almost doesn't match up. Nobody will ever again experience what Keith Richards and Mick Jagger experienced in Dartford, scrounging for blues records.

Point taken--but let's remember it's a small sacrifice. I have this or that fetish object--the White Album on two 8-tracks in a black custom case, for example, or a rare Elvis Costello picture disc. And I remember the joy of the find. But it's hard to feel bad about the end of rarity; didn't a lot of the thrill come from feeling superior when you had something others didn't? You really want to get nostalgic about that? We're finally approaching that nirvana for fans, scholars, and critics: Everything available, all the time. (Certainly Richards and Jagger would approve.) It's not an ideal state of affairs for a rights holder, of course. But for the rest of us, what is there to complain about?



Lester Bangs' Basement


"

The end of "rare" music and other digitizable media

The end of "rare" music and other digitizable media: "woodstock-front.jpg

This Rolling Stones former-rarity is easy to find online.

My consciousness was forever altered when I happened on Kamandi #3 at age 11. I wanted to read every comic Jack Kirby had created up to that point. But early issues of Fantastic Four were rare and expensive. I bought what I could afford and treasured them. Today I'm sure I could get my hands on PDFs of every issue of Fantastic Four in short order (but I don't have to because I bought the cheap pulpy Essential Fantastic Four anthologies - the ones to get are Vol 1, 2, 3, 4, and 5 -- after that Kirby jumped ship for DC). Rare old comics, along with music and cult films, are no longer rare.


Bill Wyman of Slate explores 'what it means to have all music [and other digitizable media] instantly available.'


A rarity might be less popular; it might be less interesting. But it's no longer less available the way it once was. If you have a decent Internet connection and a slight cast of amorality in your character, there's very little out there you might want that you can't find. Does the end of rarity change in any fundamental way, our understanding of, attraction to, or enjoyment of pop culture and high art?

...

In a recent issue of the New York Review of Books, the poet Dan Chiasson wrote at length about Keith Richards' autobiography and made an interesting point near the end, about how scarcity and rarity, long ago, actually fueled artistic endeavor:


[T]he experience of making and taking in culture is now, for the first time in human history, a condition of almost paralyzing overabundance. For millennia it was a condition of scarcity; and all the ways we regard things we want but cannot have, in those faraway days, stood between people and the art or music they needed to have: yearning, craving, imagining the absent object so fully that when the real thing appears in your hands, it almost doesn't match up. Nobody will ever again experience what Keith Richards and Mick Jagger experienced in Dartford, scrounging for blues records.

Point taken--but let's remember it's a small sacrifice. I have this or that fetish object--the White Album on two 8-tracks in a black custom case, for example, or a rare Elvis Costello picture disc. And I remember the joy of the find. But it's hard to feel bad about the end of rarity; didn't a lot of the thrill come from feeling superior when you had something others didn't? You really want to get nostalgic about that? We're finally approaching that nirvana for fans, scholars, and critics: Everything available, all the time. (Certainly Richards and Jagger would approve.) It's not an ideal state of affairs for a rights holder, of course. But for the rest of us, what is there to complain about?



Lester Bangs' Basement


"

Boris Indrikov's art

Boris Indrikov's art: "borisI.jpg


"

Do bacteria control your brain?

Do bacteria control your brain?: "ourecolioverlords.jpg


A new study has found evidence suggesting that you are not what you eat, so much as you are what's living in your gut. In mice, at least, the presence of normal gut bacteria has a significant impact on how an individual mouse behaves, and how its brain develops.



[T]his new study is the first to extensively evaluate the influence of gut bacteria on the biochemistry and development of the brain. The scientists raised mice lacking normal gut microflora, then compared their behavior, brain chemistry and brain development to mice having normal gut bacteria. The microbe-free animals were more active and, in specific behavioral tests, were less anxious than microbe-colonized mice.



In one test of anxiety, animals were given the choice of staying in the relative safety of a dark box, or of venturing into a lighted box. Bacteria-free animals spent significantly more time in the light box than their bacterially colonized littermates. Similarly, in another test of anxiety, animals were given the choice of venturing out on an elevated and unprotected bar to explore their environment, or of remaining in the relative safety of a similar bar protected by enclosing walls. Once again, the microbe-free animals proved themselves bolder than their colonized kin ...



Consistent with these behavioral findings, two genes implicated in anxiety -- nerve growth factor-inducible clone A (NGF1-A) and brain-derived neurotrophic factor (BDNF) -- were found to be down-regulated in multiple brain regions in the germ-free animals ...



When Pettersson's team performed a comprehensive gene expression analysis of five different brain regions, they found nearly 40 genes that were affected by the presence of gut bacteria. Not only were these primitive microbes able to influence signaling between nerve cells while sequestered far away in the gut, they had the astonishing ability to influence whether brain cells turn on or off specific genes.



Personally, I'd like to see more analysis of what these findings mean. The Scientific American story quoted above makes it sound like normal gut bacteria are, on the whole, kind of cramping the brain's style. Given the evidence that exists about healthy gut bacteria's importance to maintaining other aspects of physical health, I'm curious whether this study implies that we humans have accepted a bit of a trade-off. We get gut bacteria that help us digest food and train our immune systems—but we lose some control over how our brains function, possibly to our detriment, but possibly not, depending on the circumstances.



Oh, and, before the rest of you get a chance, I'm going to jump in here and make the obvious comment: 'I, for one, welcome our new E. coli overlords.'



Scientific American: The Neuroscience of the Gut
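As an aside for readers wondering what 'down-regulated' means in practice: differential-expression studies typically report it as a fold change between groups. Here's a toy Python sketch of that arithmetic; the numbers are invented purely for illustration and are not the study's data:

    from math import log2

    # Hypothetical expression measurements (arbitrary units) for one gene,
    # say BDNF, in one brain region. Invented for illustration only.
    germ_free_expression = [4.1, 3.8, 4.4]
    colonized_expression = [8.0, 7.6, 8.3]

    mean_gf = sum(germ_free_expression) / len(germ_free_expression)
    mean_col = sum(colonized_expression) / len(colonized_expression)

    fold_change = mean_gf / mean_col     # < 1 means lower in germ-free mice
    print(f"log2 fold change: {log2(fold_change):.2f}")
    # A negative log2 fold change is what's meant by 'down-regulated.'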



Via Matt Feltz




"

Carved-away goose-egg

Carved-away goose-egg: "
Instructables user Bbstudio has been doing some extraordinary egg carving for the Eggbot Easter challenge. This carved-away goose egg is probably the most physically impressive, though there are many more aesthetically pleasing (if less improbable) designs in his portfolio.


BBStudio

(via Neatorama)




"

DOG vs. BUBBLES

DOG vs. BUBBLES: "

Hysterical bubbles! (original) [Video Link]


"

Meet Science: What is "peer review"?

Meet Science: What is "peer review"?: "

When the science you learned in school and the science you read in the newspaper don't quite match up, the Meet Science series is here to help, providing quick run-downs of oft-referenced concepts, controversies, and tools that aren't always well-explained by the media.




'According to a peer-reviewed journal article published this week ...'



How often have you read that phrase? How often have I written that phrase? If we tried to count, there would probably be some powers of 10 involved. It's clear from the context that 'peer-reviewed journal articles' are the hard currency of science. But the context is less obliging on the whys and wherefores.



Who are these 'peers' that do the reviewing? What, precisely, do they review? Does a peer-reviewed paper always deserve respect, and how much trust should we place in the process of peer review itself? If you don't have a degree in the sciences, and you aren't particularly well-versed in the self-taught inside baseball of science, there's really no reason why you should know the answers to all those questions. You can't be an expert in everything, and this isn't something that's explicitly taught in most high schools or basic-level college science courses. And yet I and the rest of the science media continue to reference 'peer review' as if all our readers know exactly what we're talking about.



I think it's high time to rectify that mistake. Ladies and gentlemen, meet peer review:

What does the phrase 'peer-reviewed journal article' really mean?



This part you've probably already figured out. Journal articles are like book reports, usually written to document the methodology and results of a single scientific experiment, or to provide evidence supporting a single theory. Another common type of paper that I talk about a lot is the "meta-analysis" or "review"—a big-picture report that compares the results of lots of individual experiments, usually done by compiling all the previously published papers about a very specific topic. No single journal article is meant to be the definitive last word on anything. Instead, we're supposed to improve our understanding of the world by looking at what the balance of evidence, from many experiments and many articles, tells us. That's why I think reviews are often more useful for laypeople. A single experiment may be interesting, but it doesn't always tell you as much about how the world works as a review can.



Both individual reports and reviews are published in scientific journals. You can think of these as older, fancier, more heavily edited versions of 'zines. The same scientists who read the journals write the content that goes in the journals. There are hundreds of journals. Some publish lots of different types of papers on a very broad range of topics—"Science" and "Nature", for instance—while others are much, much more specific. "Acute Pain", say. Or "Sleep Medicine Reviews". Usually, you have to pay a journal a fee per page to be published. And you—or the institution you work for—have to buy a subscription to the journal, or pay steep prices to read individual papers.



Peer review really just means that other scientists have been involved in helping the editors of these journals decide which papers to publish, and what changes need to be made to those papers before publication.



How does peer review work?



It may surprise you to learn that this is not a standardized thing. Peer review evolved out of the informal practice of sending research to friends and colleagues to be critiqued, and it's never really been codified as a single process. It's still done on a voluntary basis, in scientists' free time, such as it is. And most journals do not pay scientists for the work of peer review. For the most part, scientists are not formally trained in how to do peer review, nor given continuing education in how to do it better. And they usually don't get direct feedback from the journals or other scientists about the quality of their peer reviewing.



Instead, young scientists learn from their advisors—often when that advisor delegates, to the grad students, papers he or she had volunteered to review. Your peer-review education really depends on whether your advisor is good at it, and how much time they choose to spend training you. Meanwhile, feedback is usually indirect. Journals do show all the reviews to all of a paper's reviewers, so you can see how other scientists reviewed the same paper you reviewed. That gives you a chance to see what flaws you missed, and to compare your work with others'. If you're a really incompetent peer reviewer, journals might just stop asking you to review altogether.



Different journals have different guidelines they ask peer reviewers to follow. But there are some commonalities. First, most journals weed out a lot of the papers submitted to them before those papers are even put up for peer review. This is because different journals focus on publishing different things. No matter how cool your findings are, if they aren't on-topic, then 'Acute Pain' won't publish them. Meanwhile, a journal like 'Science' might prefer to publish papers that are likely to be very original, important to a field, or particularly interesting to the general public. In that case, if your results are accurate, but kind of dull, you probably will get shut out.



Second, peer reviews are normally done anonymously. The editors of the journal will often give the paper's author an opportunity to recommend, or caution against, a specific reviewer. But, otherwise, they pick who does the reviewing.



Reviewers are not the people who decide which papers will be published and which will not. Instead, reviewers look for flaws—like big errors in reasoning or methodology, and signs of plagiarism. Depending on the journal, they might also be asked to rate how novel the paper's findings are, or how important the paper is likely to be in its field. Finally, they make a recommendation on whether or not they think the specific paper is right for the specific journal.



After that, the paper goes back to the journal's editors, who make the final call.
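To make that sequence concrete, here's a minimal Python sketch of the pipeline described above. The stage logic and the recommendation labels ('reject', and so on) are illustrative inventions for this post, not any journal's actual system:

    from dataclasses import dataclass, field

    @dataclass
    class Review:
        # One anonymous reviewer's report on a submitted paper.
        flaws_found: list        # e.g. reasoning errors, methodology problems, plagiarism
        recommendation: str      # illustrative labels: 'accept', 'revise', 'reject'

    @dataclass
    class Submission:
        title: str
        on_topic: bool
        reviews: list = field(default_factory=list)

    def editorial_decision(paper):
        # Stage 1: desk screening. Many papers are weeded out before review.
        if not paper.on_topic:
            return "desk-rejected (off-topic for this journal)"
        # Stage 2: reviewers flag flaws and make recommendations...
        if any(r.recommendation == "reject" for r in paper.reviews):
            return "rejected on reviewer advice"
        if any(r.flaws_found for r in paper.reviews):
            return "sent back to the author for revisions"
        # Stage 3: ...but the editors, not the reviewers, make the final call.
        return "accepted, pending the editors' final decision"

The point the code makes is the same one the text makes: reviewers advise at a single gate in the middle, while the desk screen and the final decision both belong to the editors.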



If a paper is peer reviewed does that mean it's correct?



In a word: Nope.



Papers that have been peer reviewed turn out to be wrong all the time. That's the norm. Why? Frankly, peer reviewers are human. And they're humans trying to do very in-depth, time-consuming work in a limited number of hours, for no pay. They make mistakes. They rush through, while worrying about other things they're trying to get done. They once had to share a lab with the guy whose paper they're reviewing and they didn't like him. They get frustrated when a paper they're reviewing contradicts research they're working on. By sending every paper to several peer reviewers, journals try to cancel out some of the inevitable slip-ups and biases, but it's an imperfect system. Especially when, as I said, there's not really any way to know whether or not you're a good peer reviewer, and no system for improving if you aren't. There's some evidence that, at least in the medical field, the quality and usefulness of reviews actually go down as the reviewers get older. Nobody knows exactly why that is, but it could have to do with the lack of training and follow-up, the tendency to get more set in our ways as we age, and/or reviewers simply feeling burnt out and too busy.



It's also worth noting that peer review is really not set up to catch deliberate fraud. If you fake your results, and do it convincingly, there's not really any good reason why a peer reviewer would catch you. Instead, that's usually something that happens after a paper has been published—usually when other scientists try to replicate the fraudster's spectacular results, or find that his research contradicts their own in a way that makes no sense.



If a paper isn't peer-reviewed, does that mean it's incorrect?



Technically, no. But, here's the thing. Flawed as it is, peer review is useful. It's a first line of defense. It forces scientists to have some evidence to back up their claims, and it is likely to catch the most egregious biases and flaws. It even means that frauds can't be really obvious frauds.



Being peer reviewed doesn't mean your results are accurate. Not being peer reviewed doesn't mean you're a crank. But the fact that peer review exists does weed out a lot of cranks, simply by saying, 'There is a standard.' Journals that don't have peer review do tend to be ones with an obvious agenda. White papers, which are not peer reviewed, do tend to contain more bias and self-promotion than peer-reviewed journal articles.



You should think critically and skeptically about any paper—peer reviewed or otherwise—but the ones that haven't been submitted to peer review do tend to have more wrong with them.



What problems do scientists have with peer review, and how are they trying to change it?



Scientists do complain about peer review. But let me set one thing straight: The biggest complaints scientists have about peer review are not that it stifles unpopular ideas. You've heard this truthy factoid from countless climate-change deniers and purveyors of quack medicine. And peer review is a convenient scapegoat for their conspiracy theories. There's just enough truth to make the claims sound plausible.



Peer review is flawed. Peer review can be biased. In fact, really new, unpopular ideas might well have a hard time getting published in the biggest journals right at first. You saw an example of that in my interview with sociologist Harry Collins. But those sorts of findings will often be published by smaller, more obscure journals. And, if a scientist keeps finding more evidence to support her claims, and keeps submitting her work to peer review, more often than not she's going to eventually convince people that she's right. Plenty of scientists, including Harry Collins, have seen their once-shunned ideas published widely.



So what do scientists complain about? This shouldn't be too much of a surprise. It's the lack of training, the lack of feedback, the time constraints, and the fact that, the more specific your research gets, the fewer people there are with the expertise to accurately and thoroughly review your work.



Scientists are frustrated that most journals don't like to publish research that is solid, but not ground-breaking. They're frustrated that most journals don't like to publish studies where the scientist's hypothesis turned out to be wrong.



Some scientists would prefer that peer review not be anonymous—though plenty of others like that feature. Journals like the British Medical Journal have started requiring reviewers to sign their comments, and have produced evidence that this practice doesn't diminish the quality of the reviews.



There are also scientists who want to see more crowd-sourced, post-publication review of research papers. Because peer review is flawed, they say, it would be helpful to have centralized places where scientists can go to find critiques of papers, written by scientists other than the official peer-reviewers. Maybe the crowd can catch things the reviewers miss. We certainly saw that happen earlier this year, when microbiologist Rosie Redfield took a high-profile peer-reviewed paper about arsenic-based life to task on her blog. The website Faculty of 1000 is attempting to do something like this. You can go to that site, look up a previously published peer-reviewed paper, and see what other scientists are saying about it. And the Astrophysics Archive has been doing this same basic thing for years.



So, what does all this mean for me?



Basically, you shouldn't canonize everything a peer-reviewed journal article says just because it is a peer-reviewed journal article. But, at the same time, being peer reviewed is a sign that the paper's author has done some level of due diligence in their work. Peer review is flawed, but it has value. There are improvements that could be made. But, like the old joke about democracy, peer review is the worst possible system except for every other system we've ever come up with.



If you're interested in reading more about peer review, and how scientists are trying to change and improve it, I'd recommend checking out Nature's Peer to Peer blog. They recently stopped updating it, but there's lots of good information archived there that will help you dig deeper.



Journals have also commissioned studies of how peer review works, and how it could be better. The British Medical Journal is one publication that makes its research on open access, peer review, research ethics, and other issues available online. Much of it can be read for free.



_____________________________________________________________



The following people were instrumental in putting this explainer together: Ivan Oransky, science journalist and editor of the Retraction Watch blog; John Moore, Professor of Microbiology and Immunology at Weill Cornell Medical College; and Sara Schroter, senior researcher at the British Medical Journal.





Image: Some rights reserved by Nic's events




"

Michael Chabon's introduction to The Phantom Tollbooth 50th anniversary edition

Michael Chabon has written a special introduction for the fiftieth anniversary edition of Norton Juster's wonderful, classic kids' book The Phantom Tollbooth. As you might expect, it's a lovely piece of work.



I am the son and grandson of helpless, hardcore, inveterate punsters, and when I got to Milo getting lost in The Doldrums where he found a (strictly analog) watchdog named Tock, it was probably already too late for me. I was gone on the book, riddled like a body in a crossfire by its ceaseless barrage of wordplay--the arbitrary and diminutive apparatchik, Short Shrift; the kindly and feckless witch, Faintly Macabre; the posturing Humbug, and, of course, the Island of Conclusions, reachable only by jumping. Puns--the word's origin, like the name of some pagan god, remains unexplained by etymologists--are derided, booed, apologized for.


When my father and grandfather committed acts of punmanship they were often, generally by the women at the table or in the car with them, begged if not ordered to cease at once. 'Every time I see you,' my grandfather liked to tell me, grinning, during the days of my growth spurt, 'you grusomer!' Maybe puns are a guy thing; I don't know. I can't see how anybody who claims to love language can fail to marvel at the beautiful slipperiness of meaning that puns, like aquarium nets, momentarily catch and bring shimmering to the surface. Puns act to shatter or at least compromise meaning; a pun condenses unrelated, even opposing meanings, like a collapsing dwarf star, into a singularity. Maybe it's this antisemantic vandalism that leads so many people to shun and revile them.


And yet I would argue--and it's a lesson I learned first from my grandfather and father and then in the pages of The Phantom Tollbooth--that puns, in fact, operate to generate new meanings, outside and beyond themselves. Anyone who jumps to conclusions, as to the island of Conclusions, is liable to find himself isolated, alone, unable to reconnect easily with the former texture and personages of his life. Without the punning island first charted by Norton Juster, we might not understand the full importance of maintaining a cautionary distance toward the act of jumping to conclusions, as Mr. Juster implicitly recommends.




'The Phantom Tollbooth' and the Wonder of Words


(Thanks, Zack!)




"