Pop psych placebo
My Cialdini rant yesterday prompted some killer insights from consultant-turned-lead-gen expert Ian Brodie:
I think we have a problem in marketing with accepting “laboratory-based” psychological experiments at face value and assuming they’ll work exactly the same in the completely different situation of the real world.
I used to do lots of consulting in medical marketing for pharma companies and so got very familiar with their research process.
To simplify: someone would come up with an idea that sounded plausible based on their understanding of how the body worked. They’d then do lab tests on the chemicals involved or, in more recent years, computer simulations. Then they’d test on animals. But before they accepted that a drug actually worked, they’d have to do large-scale, double-blind trials and prove a statistically significant improvement over a placebo.
In particular, the human body is incredibly complex. You might be able to say exactly what happens when chemical A hits a receptor in the brain, but you can’t predict what happens when someone ingests the drug: it gets processed in their stomach, gets into the bloodstream, and eventually reaches the brain. Completely different. Let alone trying to predict the side effects.
Yet in marketing when we’re dealing with the equally complex psychology of individuals and groups, we’re happy to jump from a small scale experiment performed in a psychology lab (usually on students) straight to the assumption that the exact same thing will happen in the real world despite the myriad of obfuscating and complicating factors in play.
And that’s assuming the lab experiments are even valid – most of the time, the findings aren’t reproduced when the experiments are repeated.
I’m as guilty of this as anyone. We see a cool new finding in psychology, or something some guy has published in a book, and we jump to retrofitting it to our own small-scale observations without actually testing it properly in real-world conditions.
Not that some of these things don’t or won’t work. Just that we need to properly test them, and undoubtedly some won’t work or will have much less impact than the carefully selected case studies imply.
Keep up the interesting emails – very much enjoying them.
Ian nails several points that were in the back of my mind yesterday:
- The majority of the so-called scientific studies of human behavior you read about have only been run once.
- The dirty little secret is that in the rare cases where researchers do try to recreate the study, most of the time they get different results.
- Even when these studies are reproducible, they involve a very non-representative set of participants (usually psychology majors at big universities who have volunteered).
- And even when the results are reproducible, and are found to apply to people at large, the underlying psychological principle is often completely drowned out by the ever-changing mosh pit that is life outside of a laboratory.
The ideas in books like Influence can be useful—they can prompt creative approaches that you can test out.
There’s nothing wrong with testing out whether having users click a button before they fill out a form will increase conversion rates.
(Usually it doesn’t in my experience, but occasionally it does give you a bump.)
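If you do run a test like that, it's worth checking whether the bump is real or just noise. A two-proportion z-test is one simple way to do it; here's a minimal Python sketch, with the function and the conversion numbers being hypothetical, purely for illustration:

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: is the difference between two conversion rates real?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: A is the plain form, B makes users click a button first.
z, p = two_proportion_ztest(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 would suggest a real difference
```

With numbers like these, the "bump" doesn't quite clear the usual significance bar, which is exactly Ian's point: you don't know until you test at a real-world scale.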
But in a head-to-head match, a simple 100-year-old marketing maxim will beat the latest “neuroscience breakthrough” almost every time.