
DEFECTIVE SCIENCE: The Big Lebowski shows us how to respond to scientists' hurt feelings

Last updated: April 9th, 2019

In his "Confronting Reality" series, Jack Surguy challenges commonly held assumptions that our society has adopted as truth. His thoroughly researched and authoritative writing manages to debunk various psychological, scientific, political and social theories that are commonly endorsed within the Western cultural milieu. Challenge your mind to change.

I'm excited because I finally get to work my all-time favourite movie into one of my articles. I'm of course speaking about the 1998 Coen brothers classic, The Big Lebowski, starring Jeff Bridges, John Goodman and Steve Buscemi.

The Dude


Jeff Bridges in a scene from The Big Lebowski
Jeff Bridges plays The Dude, a middle-aged hippie left over from the ’60s who unwittingly finds himself involved in a kidnapping. The movie has spawned the annual Lebowski Fest that occurs in various places throughout the United States, a respectable library of books (I own five myself), as well as its own “church,” The Church of the Latter-Day Dude. Several scholarly articles have also been written on The Big Lebowski.

Before their adventure, the lives of The Dude (Jeff Bridges), Walter Sobchak (John Goodman) and Donnie (Steve Buscemi) revolve around the game of bowling, in which a player rolls a heavy ball down a lane in an attempt to knock down as many pins as possible. The more pins a player knocks down, the higher their score.

However, as anyone remotely familiar with the game knows, if the rolled ball goes too far to the left or right, it’ll fall into the gutter and won’t hit any pins—the dreaded “gutter ball.” The trick is to avoid the gutters on either side while also knocking down as many pins as possible. Talented players will place a spin on the ball and roll it in such a way that it comes very close to going into the gutter, but then cuts back towards the pins due to the spin.

I often use this analogy when working with clients. I frequently tell my clients that it’s often when we’re operating in extremes that we find ourselves in trouble. If we go too far to the right, we hit the gutter, yet if we go too far to the left, we still hit the gutter. We need to find a middle path by trying to avoid extremes.

The “bowling way” of researching human behaviour


This is often the approach I take when it comes to scientific research as well (particularly when dealing with psychology and other social fields).

Research into human behaviour is an extremely important area of study and we need to continue trying to better our research practices. Declaring that psychological disorders are a subject too complex to scientifically study and that we should just give up is operating in the extreme. At the same time, claiming we have more confidence and certainty in regard to human behaviour and treatment than we actually do is also operating in the extreme.

We need a middle road. Part of developing this middle road involves looking at things honestly and realistically, without making excuses or justifications.

Only facts and data matter



Honesty, being realistic and not resorting to excuses or justifications are inherent aspects of the scientific process. In science, feelings don’t matter. Only facts and data matter. It doesn’t matter how the facts and data make me feel, and if they indicate a different conclusion than you’d like, tough!

Surely, the top researchers from the most prestigious schools in the United States believe and adhere to this. Let's review an article by Harvard University professor Jason Mitchell.

The “evidentiary emptiness of failed replications”


Mitchell earned his undergraduate degree at Yale University and his Ph.D. at Harvard University. His research areas include cognition, the brain, and behavioural and social psychology.

Mitchell has published more than 60 peer-reviewed articles in journals such as the Journal of Cognitive Neuroscience, Proceedings of the National Academy of Sciences, Psychology, Health & Medicine, and Brain Research. He has also written eight chapters in publications such as Cognitive Neuroscience, 3rd Edition and Social Neuroscience: Toward Understanding the Underpinnings of the Social Mind. In response to the reproducibility crisis in psychology and some other areas of science, Mitchell published the article "On the evidentiary emptiness of failed replications" in 2014.

In this article, Mitchell makes the following statements:

Recent hand-wringing over failed replications in social psychology is largely pointless, because unsuccessful experiments have no meaningful scientific value. (p. 1)

Replication efforts appear to reflect strong prior expectations that published findings are not reliable, and as such, do not constitute scientific output. (p. 1)

Whether they mean to or not, authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues. Targets of failed replications are justifiably upset, particularly given the inadequate basis for replicators’ extraordinary claims. (p. 1)

Someone who publishes a replication is, in effect, saying something like, “You found an effect. I did not. One of us is the inferior scientist.” (p. 6)

The field [social psychology] will have righted its course, not by reviewing its mistakes, but by instituting positive reforms for strengthening our methods of inquiry into the future. (p. 8)

So we should take note when the targets of replication efforts complain about how they are being treated. These are people who have thrived in a profession that alternates between quiet rejection and blistering criticism, and who have held up admirably under the weight of earlier scientific challenges. They are not crybabies. What they are is justifiably upset at having their integrity questioned. … it cuts at the very core of our professional identities, questioning a colleague's scientific intentions is therefore an extraordinary claim. (p. 9)

Replication and the nonsense of hurt feelings


According to the publication "How science works," "If a finding can't be replicated, it suggests that our current understanding of the study system or our methods of testing are insufficient." (p. 30)

The presence of cartoon illustrations suggests that this publication is aimed at teaching a younger audience the basics of research. As most people already understand, replicability is one of the foundational components of science. Yet a Harvard professor who also earned his Ph.D. from Harvard insists that "hand-wringing over failed replications in social psychology is largely pointless, because unsuccessful experiments have no meaningful scientific value."

From Mitchell's statements, it also appears that many scientists are getting their feelings hurt when their studies aren't replicated. Mitchell assures us that these researchers aren't crybabies. They're upset because someone is saying, "One of us is an inferior scientist." Mitchell also insists that our knowledge isn't going to improve by looking at mistakes. Yeah, when has anyone ever improved at something by examining the mistakes they might be making, right?!


Mitchell instead insists that instituting positive reforms for strengthening our methods of inquiry is the best method for progress. Of course, it may prove a little difficult to know what positive reforms need to be made if we don’t replicate or even look at our mistakes. Is this really what the scientific process has come to? If I decide to try and replicate a study, do I need to care about whether or not I hurt the original researchers’ feelings?

Mitchell goes on to state, “if a replication effort were to be capable of identifying empirically questionable results, it would have to employ flawless experimenters. Otherwise, how do we identify replications that fail simply because of undetected experimenter error? When an experiment succeeds, we can celebrate that the phenomenon survived these all-too-frequent shortcomings.” (p. 2)

Mitchell’s predicament


In making this statement, Mitchell has created something of a predicament. Essentially, he's stating that any study seeking to replicate another will have to be conducted flawlessly if the researchers hope to get the same results as the original.


If this is true, then why would clinicians try to use the interventions these studies test, when we're unable to employ them flawlessly with our clients in office settings? Furthermore, therapists working with the participants in a study often meet with more experienced practitioners to get advice and guidance on cases. The average clinician working in the community doesn't get this kind of support. If it's so difficult to reproduce the results from a study, why are evidence-based practices even pushed?

To state that we shouldn't be concerned when studies fail to replicate, yet at the same time to expect clinicians to replicate those results with clients in therapy sessions, is preposterous.

“My job is to prove Dr. Stewart wrong”


Mitchell's proposal isn't in line with the way scientific research has typically been conducted. Consider Dr. Alice Stewart, a rather obscure researcher. In the 1950s, Dr. Stewart discovered that using X-rays on pregnant women greatly increased the child's chances of developing cancer. Despite having evidence that supported her claims, she still spent several decades fighting with the medical industry before it reviewed her work and, thankfully, stopped the practice.

In a TED talk, “Dare to disagree,” presenter Margaret Heffernan spoke about Dr. Stewart’s research methodology:

Well, she had a fantastic model for thinking. She worked with a statistician named George Kneale, and George was pretty much everything that Alice wasn’t. So, Alice was very outgoing and sociable, and George was a recluse. Alice was very warm, very empathetic with her patients. George frankly preferred numbers to people. But he said this fantastic thing about their working relationship. He said, “My job is to prove Dr. Stewart wrong.” He actively sought disconfirmation. Different ways of looking at her models, at her statistics, different ways of crunching the data in order to disprove her. He saw his job as creating conflict around her theories. Because it was only by not being able to prove that she was wrong, that George could give Alice the confidence she needed to know that she was right. It’s a fantastic model of collaboration—thinking partners who aren’t echo chambers. I wonder how many of us have, or dare to have, such collaborators. Alice and George were very good at conflict. They saw it as thinking.

Dr. Stewart depended on Kneale to look at the data in as many ways as possible, with the intention of proving her wrong. Her feelings weren’t hurt when he questioned her or proved one of her theories incomplete or even false.

For Dr. Stewart and George Kneale, it wasn’t about feelings. It was about facts, and facts don’t care about feelings. Indeed, it was due to Dr. Stewart’s dedication and obsession with discovering truth via the scientific method that so many lives were ultimately saved.

Dr. Stewart unfortunately lived during a time in our history when male doctors, who dominated the field, often looked down on their female peers. Dr. Stewart’s research had been so thoroughly criticized and reviewed, however, that the male-dominated system was forced to take her results seriously.

More than 50 years after Dr. Stewart's experience, you'd think that scientists would be emphatically applying her methods when conducting research. Unfortunately, this isn't the case. According to Mitchell, "if the most likely explanation for a failed experiment is simply a mundane slip-up, and the replicators are themselves not immune to making such mistakes, then the replication efforts have no meaningful evidentiary value outside of the very local (and uninteresting) fact that Professor So-and-So's lab was incapable of producing an effect." (p. 2)

The Big Lebowski’s response to Mitchell


A scene from The Big Lebowski can provide us with some context that may help us decide how to respond to Mitchell's position.

As stated earlier, the main characters in The Big Lebowski find themselves unexpectedly caught up in a supposed kidnapping. Specifically, three German nihilists with heavy accents have supposedly kidnapped the trophy wife of a successful businessman.

Towards the end of the movie, The Dude (also known as Jeff Lebowski), Walter and Donnie are unexpectedly confronted by the nihilists outside their favourite bowling alley. The nihilists demand that The Dude give them the ransom money, even though they don't actually have the supposedly kidnapped woman. Their dialogue is as follows:


Nihilist: Ve vant ze money, Lebowski.
Nihilist #2: Ja, otherwise ve kill ze girl.
Nihilist #3: Ja, it seems you have forgotten our little deal, Lebowski.
The Dude: You don’t have the f**king girl. We know you never did!
Nihilist: Ve don’t care. Ve still vant ze money, Lebowski, or ve f**k you ups.
Walter: No, without a hostage, there is no ransom. That’s what a ransom is. Those are the f**king rules.
Nihilist #2: His girlfriend gave up her toe!
Nihilist #3: She thought we’d be getting million dollars!
Nihilist #2: Iss not fair!
Walter: Fair! Who's the f**king nihilists here! What are you, a bunch of crybabies?


There are rules when it comes to science


Just as there are evidently rules when it comes to ransom (according to Walter, anyway), there are rules when it comes to science.

Scientists assume their opinions will be respected because they spend a great deal of time studying certain subjects and following the rules of scientific methodology to find evidence to support their ideas. Part of the scientific process is replicability. If something can't be replicated, then it's back to the drawing board! If the results can be replicated, you move forward with further scientific investigation. Those are the rules.
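To make that rule a little more concrete, here is a small toy simulation, offered purely as an illustration rather than anything from Mitchell's paper or the studies discussed above. It assumes Python with NumPy and SciPy installed, and the sample sizes and effect sizes are made up for the sake of the example. The point is simply that a "finding" which exists only by chance rarely survives when the experiment is run again, while a genuine effect usually does, which is exactly why a failed replication is informative rather than evidentially empty.

```python
# Toy simulation (an illustration only, not from Mitchell's paper or any cited study):
# why a result that fails to replicate is worth worrying about.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def run_study(true_effect, n=50):
    """Simulate one two-group study and return its p-value."""
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_effect, 1.0, n)
    return stats.ttest_ind(treatment, control).pvalue

# With no real effect, roughly 5% of studies come out "significant" purely by chance,
# and those lucky false positives are exactly the results most likely to be published.
fake_rate = np.mean([run_study(true_effect=0.0) < 0.05 for _ in range(2000)])

# A genuine, medium-sized effect comes out "significant" far more often,
# so it tends to survive replication attempts instead of vanishing.
real_rate = np.mean([run_study(true_effect=0.5) < 0.05 for _ in range(2000)])

print(f"How often a fake effect looks significant in any one study: {fake_rate:.1%}")
print(f"How often a real effect looks significant in any one study: {real_rate:.1%}")
```

Run enough of these hypothetical studies and the pattern is hard to miss: chance findings hover around the 5% mark on a second attempt, while real effects keep showing up. That is the intuition behind treating replication, and failed replication, as evidence.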

If you conduct a study and get certain results, but others aren’t able to replicate your findings, then you go back to the drawing board to find out why. You don’t write articles condemning others for trying to replicate your findings! You don’t think to yourself, “Who’s the better scientist?” You don’t claim that not being able to replicate a study isn’t a big deal!

When a researcher does those three things, society at large needs to respond as Walter did:

No, without replicability, there’s no evidence. That’s what evidence is, replicability. Those are the f**king rules! Not fair? Who’s the f**king scientists here? What are you, a bunch of crybabies?!

Read the previous article in this series, DECEPTIVE PSYCHOLOGY: Examining evidence-based treatment and how it affects therapy for abused children»


images via 1. Cast Reunion: Julianne Moore, Jeff Bridges, John Goodman and Steve Buscemi at New York LebowskiFest 2011 by Chris Goldberg, Flickr (CC BY-NC) 2. the-big-lebowski-1 by tlyman1, Flickr (CC BY-NC 2.0) 3. Bowling via Pexels 4. Emotions via Pexels 5. Scientific research via Pexels 6. Walter Sobchak by Tim McG, Flickr  (CC BY-NC-SA 2.0)
