You may have noticed that I haven’t posted anything of neuro-substance since, oh, January 2012.
Well, between you and me, there is a perfectly good reason why that is the case.
That was about the time I completed the paperwork to end my neuroscience career and begin a new adventure in private industry as a software engineer.
The short version is that my wife and I have decided to make Santa Barbara, California our permanent home. Given that the postdoc is a temporary position usually lasting less than five years, the time was coming soon to move on up to being a professor or move on out of academia. After much deliberation, I have chosen the latter.
I opted not to announce the transition publicly for some time in case I changed my mind, or if the new position didn’t work out. I figured I could always head back to research as long as nobody noticed I had temporarily left. Now, here we are, almost two years later and I have neglected to let you, my closest internet-friends, in on the big news. How terrible!
Some important factors in the decision:
(a) Funding. There were plans afoot for me to make the transition from postdoctoral researcher to research faculty. This would have potentially kept the lights on in the Bennett household for many years to come. I easily could have enjoyed a long career focused solely on research in cognitive neuroscience. The only risk? Funding. Funding rates have been dropping year over year for decades. The average age at which a new investigator finally gets their first R01 grant has risen to 43+. The percent of funded research project grants has fallen from 32% in 1999 to 24% in 2007. The success rate for funding on your first grant submission is now 12%. [see Broken-Pipeline.pdf]
It’s not that I can’t handle the continuous funding chase, but it is a bitter struggle. The situation isn’t getting better either. Would you want to place a bet on whether success rates are going to fall below 20% or rise back to 30% as we move forward? Further, if a grant falls through, and there happens to be a gap in funding coverage, then my paycheck would simply cease to exist. Working for a private employer may be ‘at will’ employment, but the chances of a negative outcome may be far lower than rolling the dice with the NIH, NSF, or other funding agency every few years.
The lab that I was working in experienced just such a shortfall this last year as the federal sequester hit the campus. The lab was funded primarily by funds that, filtered through other agencies, came from the U.S. Army. When the sequester arrived, lab funding took a double-digit drop. My postdoc advisor was hard pressed to pay people’s salaries. Data acquisition stopped and everyone just had to hope that they had enough quality data to publish/graduate. That is a scary proposition.
(b) Compensation. The postdoctoral researcher is seen as a training position where you are learning important new skills and scientific perspectives from the principal investigator that you work for. Based on the salary data I collected as part of my job search, postdocs are typically getting paid a (literal) fraction of their value in private industry. This, again, tends to get better when you become an assistant professor, but it can take a while to climb the ladder. I would have received a bit of a bump if I had gone into a research faculty position, but it paled in comparison to the, um, significant increase I negotiated as part of my new software development position.
(c) Lifestyle. Part and parcel of academia is the fact that everyone is constantly moving around. Your best friend today may or may not be in the same town with you next year. While this stabilizes a bit when/if you finally land that coveted assistant professor position, the time leading up to that point is defined by flitting around the country. First you head off to grad school, then off to a postdoc, then to who knows where. I know some individuals who are on their fourth city in ten years, with another move on the way. For some this is exhilarating, as you are essentially getting paid to see the world. For me it has been harder, constantly making incredible friends and then having to say goodbye. I now have friends stretching from Washington, to Maine, to Florida, and back to California. It was finally time to stop flying around and put down some roots.
The other lifestyle point is that my wife and I have really fallen in love with Santa Barbara, California. The weather is amazing. We are able to live two blocks from the beach and an eight minute drive to hiking in the mountains. I am able to commute by bike almost every day of the year. These geographical and climate features are fantastic, but beyond that is the fact that we have met some amazing people in Santa Barbara, and it just feels like home now. I don’t want to say goodbye, and now I don’t have to.
Things aren’t all bad as a postdoc, however. I would be remiss if I didn’t highlight some incredible benefits of the old job:
(a) Schedule. Being able to set your own schedule is an incredible benefit of the postdoc. I am a night owl by nature, so working 10am-6pm with another hour or two of reading in the late evening suited me perfectly. Working an 8am-5pm job is not really my cup of tea. The flexibility to take time away from work was also fantastic as a postdoc. I had some genuinely awesome travel and volunteer experiences that I might never have had in a regular job, including trips to Hawaii and several excursions to Europe. Now I have to hoard my ATO days like they are bars of gold.
(b) People. I have met some of the most genuinely awesome people in my years as a researcher. There is something truly magical about a group of people who have similar interests and come together for a common cause. I found that special kinship in graduate school, my postdoc, and also with the larger community of cognitive neuroscientists. Even though I am leaving neuroscience professionally, I will always identify with this cadre of individuals who brand themselves ‘neuroscience geeks’. While there is some of this kinship in my new position, there is an overriding sense that many folks are just punching the clock.
(c) Academic Freedom. You are at the mercy of your postdoctoral advisor to a point, but at the end of the day you have an awesome job where, for all intents and purposes, you are getting paid to think. How absolutely freaking incredible is that? You know that one thing that you are most curious about? Here is some money to learn everything you can about that one thing and then here is some more money to find out the answers to questions we don’t even know to ask yet. I won’t lie to you – I miss it all. Frequently.
So, what am I doing now?
Well, I am working for a small scientific instrumentation company in Goleta, California that goes by the name [redacted]. Actually, I could tell you their name, but I would need to obtain prior approval from my corporate overlords (no joke). I work as a firmware engineer in the software group, helping to maintain the embedded applications that drive the instruments. As a neuroscientist, the best part of my week was spending an entire afternoon writing a gnarly MATLAB script. Now I get to do that every day. I could not have hoped for a better place to end up after falling out of the Ivory Tower.
I have had a really, really great run as a professional neuroscience researcher. Great friends. Great research. Amazing conferences. Deep debates on the nature of the brain and cognition. An Ig Nobel award! I credit any success I might have had to the wonderful mentors who took me under their wing, and to the long list of quality people I have encountered on my academic journey. I love cognitive neuroscience and scientific exploration more than I can adequately put into words. Still, I was not content to let the sunk costs of a graduate education dictate my future actions, especially when other options are now a better fit for my personal and professional goals.
“He will be mourned by his families and friends; he will be mourned by his nation; he will be mourned by the people of the world; he will be mourned by a Mother Earth that dared send one of her sons into the unknown.
In their exploration, they stirred the people of the world to feel as one; in his death, he binds more tightly the brotherhood of man.
In ancient days, men looked at stars and saw their heroes in the constellations. In modern times, we do much the same, but our heroes are epic men of flesh and blood.
Others will follow, and surely find their way home. Man’s search will not be denied. But he was the first, and he will remain the foremost in our hearts.
For every human being who looks up at the moon in the nights to come will know that there is some corner of another world that is forever mankind.”
- Nixon, in a speech prepared if Armstrong/Aldrin were stranded on the Moon’s surface. It has been paraphrased to be singular instead of plural.
The original text: http://www.lettersofnote.com/2010/11/in-event-of-moon-disaster.html
“It would be nice if all of the data which sociologists require could be enumerated because then we could run them through IBM machines and draw charts as the economists do. However, not everything that can be counted counts, and not everything that counts can be counted.” – William Bruce Cameron, Informal Sociology: A Casual Introduction to Sociological Thinking (1963)
The best humor always has a touch of reality. The comic is from the site Saturday Morning Breakfast Cereal – check it out!
Not all research findings make their way out of the lab. Sometimes they can get snagged on the way out the door. The reasons for this can range from funding, to politics, and even simple forgetfulness. Below is an abstract that I have been sitting on for over two years. It details an analysis that we conducted of the full fMRI Data Center (fMRIDC) archive. All datasets in the archive are from published manuscripts, so the analysis was an investigation of both fMRIDC archive quality and the quality of data used for publication in the early 2000s. Unfortunately, I don’t have the time or resources to do much more with it, so I will release it here in the hopes that our existing work might be of some utility.
The fMRI Data Center (fMRIDC) was founded as a large-scale repository for functional neuroimaging datasets from around the world. Since its inception the archive has grown to hold 122 fMRI datasets from a diverse array of cognitive domains. For years these datasets have been made available at no cost to any interested party. Within the last 12 months there have been 543 requests for 725 datasets coming from a mix of 60% domestic and 40% international sources. The goal of this project was to investigate data quality across the entire fMRIDC archive by holding each study up to the same stringent examination criteria. We hoped to determine what percent of studies could adequately be reused in a larger meta-analysis of functional imaging data.
We examined each of the 122 datasets contained in the fMRIDC archive. Initial criteria for inclusion required a dataset to contain functional MRI data in normal human volunteers. This eliminated all studies with only anatomical data, nonhuman data, and all clinical datasets. Further criteria for inclusion required datasets to have whole-brain coverage, no anomalous signal dropouts, no severe MR artifacts, and a minimum group size of 8 subjects. This eliminated all studies with gross data quality problems. It should be noted that only studies with data problems across all subjects were excluded on this basis. A single subject with bad data would not lead to disqualification.
Across all datasets we found that 48% of studies in the fMRIDC archive had issues that prevented their reanalysis. The most likely reason for exclusion was missing fMRI data (19 studies), with the second most likely reason being missing study metadata (11 studies). These issues have nothing to do with the data themselves, but center around problems related to acquisition of the data in a complete set. Other issues we found that would prevent the reuse of data included incomplete brain coverage (9 studies), corrupt/blank data (6 studies), data with severe visible artifact (6 studies), experiments with less than 8 subjects (5 studies), nonhuman data (2 studies), and experiments with only anatomical data (1 study).
This project represents the first step in understanding how data quality varies across a large sample of fMRI studies. From this analysis we can conclude that only about half of the studies met our criteria for further reanalysis. Still, the figure of 48% should not be taken as an indicator of quality across all fMRI experiments in the literature. The vast majority of issues had to do with the challenge of acquiring datasets and study metadata from the original authors.
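As a sanity check on the abstract above, the exclusion counts can be tallied in a few lines of Python (the category labels are my own shorthand for the reasons listed):

```python
# Tally of the exclusion counts reported in the abstract above.
exclusions = {
    "missing fMRI data": 19,
    "missing study metadata": 11,
    "incomplete brain coverage": 9,
    "corrupt/blank data": 6,
    "severe visible artifact": 6,
    "fewer than 8 subjects": 5,
    "nonhuman data": 2,
    "anatomical data only": 1,
}
total_excluded = sum(exclusions.values())
total_datasets = 122

# 59 of 122 datasets excluded, matching the 48% figure.
print(f"{total_excluded}/{total_datasets} = {total_excluded / total_datasets:.0%}")
```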
Steve Jobs died today. I found out while I was on the bus as I came home from work.
He was a technology pioneer to be sure. Certainly one of the most effective CEOs to come around since the title was invented. Through his leadership, a stream of amazing and beautiful devices was released to the public, turning Apple from a company on the verge of bankruptcy into one of the most profitable in the United States. From his early work on the Apple I to the wildly successful iPhone 4, he revolutionized the daily life of billions of people around the world.
I felt a strong sense of loss when I heard that he had died. It came from the untimely departure of a man I had never met, but from whom I nevertheless drew personal inspiration.
Why was I drawn to Steve Jobs? It was his idea that all details matter, even down to the individual pixel. It was the notion that even the intangible minutiae will impact our perception of an object, like the exact radius of a corner or the amount of friction on a piece of glass. It was the mandate that you aren’t finished until you pour a piece of your soul into your creation.
So many of my own greatest accomplishments have been done using tools that once existed only in his mind. Steve Jobs made me want to be a better creator, and a better person.
Before I heard the news I had spent the afternoon working through a book on Objective-C, the programming language used in the creation of Mac, iPhone, and iPad applications. I got an itch to do some OS X programming the night before, but I needed a refresher on the syntax of the language to get going again. In hindsight, I can think of no better tribute to the man than spending the day becoming a better programmer, honing my skills to one day create something insanely great.
While I was on the bus I downloaded his Stanford Commencement address and listened to it again with new perspective. One passage struck me in particular:
“Remembering that I'll be dead soon is the most important tool I've ever encountered to help me make the big choices in life. Because almost everything — all external expectations, all pride, all fear of embarrassment or failure — these things just fall away in the face of death, leaving only what is truly important. Remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose. You are already naked. There is no reason not to follow your heart.”
- Steve Jobs, Stanford Commencement, 2005
Stay Hungry. Stay Foolish. Thanks Steve.
Above Image: My first computer, an Apple //c. I am pretty sure that Steve didn’t have a hand in how it looked.
Full text of the Stanford Commencement:
Read stories on the creation of the Macintosh:
Andy Ihnatko’s remembrance:
Walt Mossberg’s remembrance:
I hate being late to a party. You finally arrive after the festivities have begun and you know that your friends have already been there for hours, having a grand time doing what they do best. So it is with the latest neuromarketing debacle involving the New York Times and the pseudoscience that appeared on the op-ed page. All the best stuff has already been written.
A branding consultant (Martin Lindstrom) commissioned a neuromarketing company (MindSign) to do a neuroimaging study. Sixteen subjects underwent fMRI data acquisition while being shown audio and video of ringing iPhones. Visual and auditory cortices were active across all conditions. There was also activity in the insula. The authors interpret the sensory cortex activity as a kind of cross-modal synesthesia experience. They further interpret the insula activity as the subjects experiencing feelings of love and compassion. Headlines around the web ring out loudly: “YOU LOVE YOUR iPHONE”.
Web points of interest:
1) Read the original op-ed piece by Martin Lindstrom to give yourself some context regarding what was said and the arguments that were made. It will probably make your skin crawl with tales of babies wanting cell phones to be iPhones and terrible definitions of synesthesia. Stick with it anyway.
2) Start at Russ Poldrack’s weblog and read his first post on the topic. He called it crap, and he was being direct and truthful.
3) Now read Tal Yarkoni’s excellent in-depth discussion of the problem. If you read nothing else today, go and check this one out.
4) Next, read the post by Vaughan Bell at Mind Hacks, which is also a nice follow-up. Double points for using the term “facepalm jamboree”.
5) Finally, see the list of people who support Poldrack’s position on the Lindstrom article. Many of the best minds in neuroscience are agreed that the Op-Ed piece is not representative of good science:
To be honest, I don’t have a whole lot to add to the conversation. On the topic of reverse inference you really can’t do better than Russ Poldrack and Tal Yarkoni. The Yarkoni blog post is particularly good, effectively nuking the Lindstrom piece from orbit. It is, in a way, poetic, since Poldrack and Yarkoni are working on the databases and methods that will enable probabilities to be put on arguments such as Lindstrom’s. That is: if insula activation is observed, how likely is it that the emotion of ‘love’ is being experienced? To give their technology a try, surf on over to http://neurosynth.org/ and check it out.
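For readers new to the reverse inference problem, the underlying math is just Bayes’ rule. Here is a toy sketch in Python where every probability is invented for illustration (none of these numbers come from Neurosynth or any actual study):

```python
# Toy reverse inference via Bayes' rule. All probabilities are made up.
p_act_given_love = 0.60   # P(insula activation | feeling love) - hypothetical
p_act_given_other = 0.35  # the insula activates in many other states too
p_love = 0.10             # hypothetical base rate of 'love' across task contexts

# P(activation), marginalized over both possibilities
p_act = p_act_given_love * p_love + p_act_given_other * (1 - p_love)

# The quantity Lindstrom's argument actually needs: P(love | activation)
p_love_given_act = p_act_given_love * p_love / p_act
print(f"P(love | insula activation) = {p_love_given_act:.2f}")  # 0.16
```

Even with a generous forward probability, the posterior ends up far from certain, because the insula activates under so many other conditions.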
One aspect of the debate that I am particularly interested in is the purported role of the insula in the experience of love and affection. Unfortunately, Lindstrom provided very little detail about the spatial location of their insula activity, effectively preventing anyone from criticizing the work on that basis. But, for the sake of argument, let’s put the insular question forward. Does it matter where in the insula the activity was observed? The short answer is: absolutely.
There is an excellent paper by A. D. “Bud” Craig entitled “Forebrain emotional asymmetry: a neuroanatomical basis?” that details how the left and right insula have a different pattern of connectivity to the homeostatic afferents that provide information on our current body state. Craig describes how the right insula is preferentially involved in sympathetic nervous system activity geared toward engaging with the environment, energy use, and even “fight or flight” responses. Conversely, the left insula is preferentially involved in parasympathetic activity geared toward contentment, energy conservation, and “rest and digest” responses.
In our evolution, humans seem to have bolted social components onto this underlying insular emotional asymmetry. The right insula seems to be associated with the experience of social disgust and social avoidance. This has been seen in work such as the original Phillips et al. (1997) paper, showing prominent right anterior insula activity during disgust. The left insula seems to be associated with the experience of social compassion and social approach. There is less evidence for this, but meta-analyses such as Ortigue et al. (2010) have reported this pattern.
In short, leaving out which hemisphere the results occurred in is a huge faux pas on the part of Lindstrom. It is not the greatest sin of the piece, and probably not even the greatest sin of the insula argument. Still, it is certainly a prominent FAIL from the perspective of a researcher with an interest in the insula.
One final point of discussion I would like to raise is with regard to an earlier prefrontal.org post on the Seven Sins of Neuromarketing. Let’s see which ones are most prominent in the current discussion:
1) The curtain of proprietary analysis methods limits our knowledge of how effective neuromarketing can be.
We have no idea what methods Lindstrom and his colleagues used to arrive at their findings. It could be the best study in the history of ever, or it could be riddled with common statistical flaws. We have no idea because the work isn’t peer-reviewed. As before, we don’t even know where in the insula the results were located!
3) Most people’s introduction to neuromarketing is through press releases, not peer-reviewed studies.
Let’s just establish this as a rule: the New York Times editorial page is not the right place to introduce the world to your cutting-edge, unproven fMRI methods. Period. In fact, we should come up with a verb for what always happens afterward: you get Poldrack’d.
4) Neuromarketing methods are not immune to subjectivity and bias.
In a way, scientific claims are guilty until proven innocent by empirical evidence. Honestly, can I trust a man who has written books with titles like Buyology, Brandwashed, and Brand Sense to be objective with regard to a neuromarketing study with a sensational headline? If this work were peer-reviewed then we could evaluate his evidence in a balanced manner, but an Op-Ed piece does not allow for this luxury and leaves the question of bias open.
6) People are rushing the field to make a quick buck, and not everyone is trustworthy.
I think that this represents the case in point.
Ortigue S, Bianchi-Demicheli F, Patel N, Frum C, Lewis JW. (2010). Neuroimaging of love: fMRI meta-analysis evidence toward new perspectives in sexual medicine. J Sex Med. 7(11): 3541-3552.
Phillips ML, Young AW, Senior C, Brammer M, Andrew C, Calder AJ, Bullmore ET, Perrett DI, Rowland D, Williams SC, Gray JA, David AS. (1997). A specific neural substrate for perceiving facial expressions of disgust. Nature. 389(6650): 495-498.
One of the first things you learn in an introductory psychology class is the topic of cognitive bias. These are situations or contexts in which human beings cannot reliably make effective judgements or discriminations. For instance, information that tends to confirm our own assumptions is generally judged to be correct (Confirmation Bias). Another example is the disproportionate attention given to negative experiences relative to positive experiences (Negativity Bias). In each situation perception and decision making are distorted even though we should know better. It may be the case that we need to come up with a new bias to explain investigator behavior. Significance Bias, anyone?
There is a great article by Nieuwenhuis, Forstmann, and Wagenmakers in this month’s edition of Nature Neuroscience. Entitled “Erroneous analyses of interactions in neuroscience: a problem of significance”, the paper discusses the problem of how to gauge when two effects differ in neuroscience. It turns out that many papers misjudge the difference between effects by basing their judgement on significance values, even though they should know better.
The crux of the issue is that it is improper to judge the difference between two effects by comparing their relative significance. The perceived difference between a significant effect (i.e., p < 0.05) and a non-significant effect (i.e., p > 0.05) does not necessarily mean that the two effects are themselves significantly different. You have to explicitly test for that.
In fMRI, this could mean comparing one brain area that is significant to another brain area that is not. The temptation is to describe the significant region as more active than the nonsignificant region simply because the latter fell below the significance threshold. This may or may not actually be the case.
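To make the fallacy concrete, here is a minimal simulation sketch in Python using NumPy and SciPy (the group size and effect sizes are arbitrary choices, not values from any study). Two regions can straddle the p = 0.05 line while a direct test of their difference shows nothing:

```python
# Sketch of the "difference in significance" fallacy with simulated data.
# Effect sizes and sample size are arbitrary choices for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects = 20

# Simulated effect estimates for two brain regions in the same subjects,
# with slightly different true means but plenty of noise.
region_a = rng.normal(loc=0.5, scale=1.0, size=n_subjects)
region_b = rng.normal(loc=0.3, scale=1.0, size=n_subjects)

# Testing each region against zero, as in a typical analysis
t_a, p_a = stats.ttest_1samp(region_a, 0.0)
t_b, p_b = stats.ttest_1samp(region_b, 0.0)

# The correct question: do the two regions differ from EACH OTHER?
# (a paired test, since both measurements come from the same subjects)
t_diff, p_diff = stats.ttest_rel(region_a, region_b)

print(f"region A vs. zero: p = {p_a:.3f}")
print(f"region B vs. zero: p = {p_b:.3f}")
print(f"A vs. B directly:  p = {p_diff:.3f}")
```

One region can clear the threshold while the other misses it, yet the direct comparison is often nowhere near significant. Only that third test licenses a claim that the regions differ.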
Andrew Gelman and Hal Stern wrote a similar article on the problem a few years ago. The focus of their piece was simply to draw attention to the issue through the use of several theoretical and real life examples. While they were able to say that the problem existed, they were unable to say how prevalent the problem was across any particular scientific discipline. The power of the Nieuwenhuis, Forstmann, and Wagenmakers paper is that it extends the Gelman & Stern work through an analysis of the existing literature to put concrete numbers on how widespread the problem is in neuroscience.
The authors conducted a survey of 513 articles in major neuroscience journals. They identified 157 papers containing an analysis where the authors would be tempted to make an inferential error by focusing on significance. They found that in 78 out of 157 cases (50%) the authors did indeed make an error. That is far higher than I would have guessed, and one of the reasons I felt compelled to write about it today. I mean, come on, fifty percent? Really?
In the next-to-last paragraph the authors specifically state that the error of comparing significance levels is particularly acute in neuroimaging. From my perspective we are almost set up for failure in this regard, as significant regions are visualized in a range of attention-grabbing colors while regions that are not significant are left completely blank.
I could rail on a bit longer, but that is time you could be using to go and read this article. There is a lot of good information in the text – it is short, punchy, and well worth your time.
Some additional discussion on the topic:
Gelman A and Stern H. (2006). The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant. The American Statistician 60(4), 328-331.
Nieuwenhuis S, Forstmann BU, and Wagenmakers EJ. (2011). Erroneous analyses of interactions in neuroscience: a problem of significance. Nature Neuroscience 14, 1105-1107.
Our lab is recruiting subjects for a new study of human memory across the lifespan. We are currently running our first phase of the study. If you are between the ages of 25 and 35 and live in the Santa Barbara area please read the text below and email us if you are interested. – Craig
Research Participants Wanted
The Human Memory and Neuroimaging Lab in the Department of Psychological and Brain Sciences at UCSB is seeking research participants for a functional magnetic resonance imaging (fMRI) study investigating the relationship between various personality and cognitive factors and memory. The study will take place on two separate days and will last about two hours each day. Participants will respond to questionnaires, complete cognitive tests, and have their brain activity measured using fMRI. Participants will be compensated with $20/hour and will receive an image of their brain.
To be eligible, participants must:
• Be between the ages of 25 and 35
• Be native English-speakers
• Not be pregnant
• Not have any metal in their bodies that cannot be removed
• Not be claustrophobic
Please email firstname.lastname@example.org or call (805) 283-9603 for more info.
Click here to read a PDF of our recruiting flyer.
Do you feel like neuromarketing is a disruptive new technology, or just another example of neurohype? Regardless of where you stand on the issue you might be interested in a debate I will be participating in next Monday, the 23rd of May, at Stanford Medical School.
The Stanford Interdisciplinary Group on Neuroscience and Society (SIGNS) is hosting the debate, which is focused on neuroscience in the marketplace. Jim Sullivan, the CEO of NeuroSky, Uma Karmarkar from the Stanford Graduate School of Business, and myself will all weigh in on the topic of whether neuroscience is being used to manipulate consumers.
I think you might already know where I stand based on my ‘Seven Sins’ neuromarketing post, but the event promises to be a lively affair with a diverse array of perspectives. Come check it out if you are in the Bay Area next week!