The reality of the scientific process
No amount of experimentation can ever prove me right; a single experiment can prove me wrong. – Albert Einstein

The scientific process is supposed to be the gold standard for the pursuit of knowledge and truth. When asked to think of a scientist, many people will imagine a somewhat “nerdish” individual who’s often more comfortable with numbers than people.
We imagine these scientists as being wholly dedicated to their research and unwavering in their pursuit of the truth. The idea that many published scientists are heavily influenced by politics, that scientific journals often compromise their standards for publications, and that many universities look the other way to secure federal funding, often comes as a shock to most people.
The reproducibility problem
In 2011, the Reproducibility Project, a community-based crowdsourcing effort headed by Brian Nosek of the University of Virginia, set out to reproduce 100 psychological studies from 2008. To the utter amazement of many within the field, only 39 of the 100 original findings could be reproduced.
In 2016, the journal Nature reported that of the 1,576 researchers who took an online questionnaire about reproducibility (reproducing another researcher’s study), more than 70 percent had tried and failed to reproduce another published study, and more than 50 percent reported that they’d failed to reproduce one of their own studies.
Given that reproducibility has been referred to as the touchstone of the scientific method, many are surprised to learn that incentives to publish positive replications are low; in fact, many journals are reluctant to publish negative findings.
According to Baker, the author of the Nature survey, “several respondents who had published a failed replication said that editors and reviewers demanded that they play down comparisons with the original study.”
The article goes on to state that 52 percent of the respondents agree that there’s a significant “crisis” in reproducibility. However, less than 31 percent of the respondents believe that a failure to reproduce published results indicates that the results are probably wrong, and most state that they still trust the published literature!
Only 6 out of 53 cancer studies were replicated
The issue of replication failure is found not only within the field of psychology. According to a 2012 article in the British Medical Journal (BMJ), only 6 out of 53 cancer studies were replicated. C. Glenn Begley, who led the study, stated, “These are the studies that the pharmaceutical industry relies on to identify new targets for drug development. But if you’re going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it’s true.”
Brian Nosek led another effort to replicate cancer studies and, again, was unable to replicate the published findings. In 2016, two further seminal studies couldn’t be replicated. One was a 1998 study on ego depletion, on which many further studies were based, and the other was a study I’ve referenced many times myself when working with clients: a 1988 study that found our facial expressions can influence our mood. In other words, if I just forced myself to smile, my mood would start to improve.
Researchers at the pharmaceutical company Bayer examined 67 cancer and cardiovascular studies, and only 14 (21 percent) could be reproduced. Given that in 2012 the U.S. government was providing nearly $31 billion a year in science funding through the National Institutes of Health alone, what could be contributing to this issue?
According to many researchers, two factors are greatly contributing to this issue: 1) pressure to publish (publish or perish) and 2) selective reporting.
Current effect on psychological therapy
Before we explore the two factors mentioned above, I’d like to discuss how these issues are currently affecting the field of psychology, and more specifically, therapy. Consider the following by Scott Lilienfeld in Psychology Today:
Surveys show, among other things, that only about 20 percent of people with major depressive disorder … receive anything close to optimal treatment, that many or most practitioners who treat clients with eating disorders are not administering scientifically supported therapies … that large proportions of clinicians who treat obsessive-compulsive disorder do not implement the clear-cut treatment of choice for this condition. …
… Many researchers, myself included, believe that the research leg … should be accorded the highest priority in the decision-making hierarchy. When the rubber meets the road, that is, when well-designed studies demonstrate that Intervention X works better than Intervention Y but when the clinician’s intuition tells him or her to use Intervention Y, we should side with research evidence unless there is a clear-cut reason to do otherwise. [emphasis added]
Lilienfeld wrote this article in 2014, which was after many of the issues with the research were discovered. Nevertheless, he declares that if a client isn’t receiving the intervention being promoted in the research literature, then the client isn’t receiving optimal, or the best, treatment available.
According to Nemade et al. (2007), evidence-based treatments (EBTs), that is, treatments supported by the studies found in research journals, are increasingly becoming the “gold standard” for mental health care.
Nemade advocates for the therapist to use and follow a treatment manual, which he states will specify “the number of sessions to be offered, what to talk about and teach during those sessions, and what techniques are to be employed during those sessions.” He says that the interventions are highly structured and focus on teaching specific skills to specific clients who will benefit from them.
Nemade further states that health care companies like these interventions because they’re short-term in nature. He then asserts that Cognitive Behavioural Therapy (CBT) is one of two therapies considered an EBT for depression.
Lilienfeld, along with Hal Arkowitz, also states that CBT has been the most extensively studied therapy by far. However, many of these studies are considered problematic due to bias and poor methodologies. For example, in a study by Clark et al. (2006), the researchers sought to determine if CBT was more effective than Exposure/Applied Relaxation (EX/AP) in treating certain phobias. The study did find CBT more effective than EX/AP.
In analyzing the study, however, it turns out that clients were exposed to anxiety-producing situations before they’d learned any skills for coping with those situations or stimuli. Amazingly, clients were actually instructed not to use any skills they might’ve learned in any of the sessions!
Subjecting these clients to feared stimuli, prior to teaching them any relaxation or coping skills, is contradictory to behavioural principles. Yet, tactics like this are not uncommon within CBT studies.
Whenever researchers want to determine whether CBT works better than another treatment model, such as Client-Centred Therapy, the researchers will typically do the following to the alternative model:
- Restructure and essentially remove any helpful interventions
- Create a treatment manual for the interventions that can be used
- Instruct therapists to avoid any interventions that may be found in CBT
- (Often) have therapists with little to no training in the alternative models apply the interventions
Essentially, after gutting the other therapeutic model, the researchers have the audacity to claim that CBT is superior to the other one!
Meanwhile, the CBT therapists typically receive more supervision, and the therapists and head researchers are all CBT practitioners, giving them a vested interest in the outcomes.
This is beginning to have a profound impact on the way therapists are able to practice. For example, in Indiana, the Department of Child Services’ Service Standard agreement states that therapists working with children in residential treatment centers must use Trauma-Focused CBT (TF-CBT) as a core competency. The agreement also states, “Approval of the Deputy Director of Placement Support and Compliance is required to utilize any other evidenced-based, trauma-informed practice instead of TF-CBT.”
Children with severe behavioural problems
Basically, I’m supposed to contact the Deputy Director any time I want to use an intervention other than one found within TF-CBT. Now, this essentially requires that I obtain certification in TF-CBT, which costs around $1,200.
As inconvenient as those issues are, the part I find most frustrating is that the developers of TF-CBT have acknowledged that TF-CBT is not an appropriate modality for children and adolescents exhibiting severe behavioural problems, much like the ones referred to the locked, secure facilities in which I work. Hodgdon et al. (2013) succinctly described this issue:
Most current available trauma treatments have been developed for the individual or group therapy context. Residential treatment, by nature, is ongoing 24 hours a day, 7 days a week, 365 days a year and cuts across all contexts through which youth move (e.g., school, milieu, social, and clinical), and therefore a circumscribed trauma treatment (i.e., one that is delivered only via individual or group therapy sessions) is less likely to be effective.
Additionally, the most commonly used trauma treatment for youth, Trauma Focused Cognitive Behavioral Therapy … is contraindicated for youth who have: (a) current self-harm or suicidal behaviors (a common problem for many youth in residential care), (b) lack a family system that can provide empathic support during trauma processing (youth in residential care often have unstable family connections, if any at all), and/or (c) are at risk for further trauma exposure (youth in residential care are at increased risk for victimization due to higher rates of run-away behaviors).
Children with abuse-related PTSD symptoms
Nevertheless, this doesn’t stop some from making misleading claims about the effectiveness of TF-CBT. In the Journal of Family Violence (2013), Zelechoski et al. state the following:
In a two-site, randomized control trial of 229 children with a history of sexual abuse, children in the TF-CBT condition showed significant reductions in PTSD symptoms, depression, and total behavioral problems when compared to participants in the child-centered therapy conditions … In the TF-CBT group, children also showed improvements in interpersonal trust and reduced shame. Moreover, the parents of the participants in the TF-CBT group demonstrated improvements in their levels of depression, distress related to the abuse, parenting practices, and parental support.
This obviously sounds like it supports TF-CBT. The research study Zelechoski et al. are referring to was completed by Judith Cohen, one of the main proponents of TF-CBT. The study, A Multi-Site, Randomized Controlled Trial for Children With Abuse-Related PTSD Symptoms, was published in the Journal of the American Academy of Child and Adolescent Psychiatry. In this study, 229 children and parents were assigned to two different groups: 1) TF-CBT and 2) Child-Centred Therapy (CCT).
Of the 229 who started, 26 (just over 11 percent) dropped out before the third session. Of the 203 who continued with the study, fewer than 75 percent actually attended all sessions (149 clients). In the end, only 89 participants in the TF-CBT group and 91 participants in the CCT group completed all 12 assessments that were administered. It was on these 180 participants (78 percent of those enrolled) that the statistical analyses were based.
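The attrition figures above can be checked with a few lines of arithmetic. This is simply my own sketch of the percentages, using the participant counts as reported in the study:

```python
# Attrition arithmetic for the trial, using the figures reported above:
# 229 enrolled, 26 early dropouts, 149 attending all sessions,
# and 89 + 91 participants completing all 12 assessments.
enrolled = 229
dropped_before_session_3 = 26
continued = enrolled - dropped_before_session_3   # 203 remained after session 3
attended_all_sessions = 149

completed_tf_cbt = 89
completed_cct = 91
analyzed = completed_tf_cbt + completed_cct       # 180 included in the analyses

print(f"Early dropout:         {dropped_before_session_3 / enrolled:.1%}")  # ~11.4%
print(f"Attended all sessions: {attended_all_sessions / continued:.1%}")    # ~73.4%
print(f"Included in analysis:  {analyzed / enrolled:.1%}")                  # ~78.6%
```

Running the numbers confirms the percentages quoted above: roughly 11 percent early dropout, under 75 percent full attendance, and 78 percent of enrollees in the final analyses.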
It should be noted that many studies experience about a 30 percent dropout rate. However, there’s more to consider in this study. In discussing the CCT intervention, the authors state:
[CCT] Therapists offered limited interpretations when clinically appropriate, and addressed behavioral difficulties by encouraging the parent and child to formulate their own personal strategies for behavioral change, rather than providing prescriptive advice in this regard. Although sessions were generally client directed, written psychoeducational information about child sexual abuse was provided and children specifically prompted to share feelings about the sexual abuse during two therapy sessions if they did not do so spontaneously.
In other words,
- The therapists in the CCT group provided little more than reflective listening (stating back to the client what they said) and encouragement.
- They provided no suggestions about how to address any problematic behaviors.
- They limited any interpretations that may have proven useful to the child and parent.
Prompting the child
Most disturbing, however, is the fact that the therapists provided information about sexual abuse to each child and parent without truly processing the information with them, and they prompted each child to discuss their sexual abuse in two sessions while withholding helpful therapeutic interventions.
The study doesn’t state when these prompts were made, but if they came towards the end of treatment, the children would be more reactive, resulting in inaccurate final assessments.
Regardless of when the children were prompted, the problem lies with a therapist prompting the children and only responding with limited interpretation or asking the parent and child how they believe they should handle the issues raised. The CCT “interventions” that were provided in this study were not like the interventions real CCT therapists provide. Furthermore, to trigger or force the child to recall the abuse, without providing any helpful therapeutic interventions, is highly questionable.
Despite the highly questionable CCT interventions used, initial results still found that “children and parents in both treatments improved significantly from pre- to post-treatment on all measures but the PSQ.”
The PSQ refers to the Parental Support Questionnaire, which assessed parents’ beliefs and perceptions regarding the type and degree of support they provided to their child following the discovery of sexual abuse. In other words, the only difference between treatment results within the two interventions could be found in the degree of support parents provided their children.
However, the authors state that since time was a “main effect” (that is, time itself influenced the results), they decided to run more complex statistical analyses. And, as Mark Twain stated, “There are lies, damned lies, and statistics.”
“Adjusting” the results
It was when statistical analysis was used, and when scores were “adjusted,” that differences were seen between the two groups. Either way, the results indicated that of the 89 participants who completed the TF-CBT interventions, 19 still met diagnostic criteria for PTSD. In other words, 21 percent of those treated with TF-CBT still had PTSD at the end of treatment.
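Combining the dropout rate with the share of completers who still met PTSD criteria gives a rough overall failure rate. This is a back-of-the-envelope sketch using the figures above; the assumption that completers respond at the TF-CBT completers' rate is mine, not the study's:

```python
# Back-of-the-envelope failure rate for TF-CBT, using the figures above.
enrolled = 229
analyzed = 180                                   # completed all assessments
dropout_rate = (enrolled - analyzed) / enrolled  # ~21.4% lost before analysis

still_ptsd = 19                                  # TF-CBT completers still meeting criteria
tf_cbt_completers = 89
non_response_rate = still_ptsd / tf_cbt_completers  # ~21.3% of completers

# Dropouts, plus non-responders among those who stayed (assumed to
# respond at the TF-CBT completer rate):
not_helped = dropout_rate + (1 - dropout_rate) * non_response_rate
print(f"Not helped: {not_helped:.0%}")           # ~38%, i.e. roughly 40 percent
```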
According to much of the research literature, then, what we should consider the “gold standard” treatment (TF-CBT) actually demonstrated a dropout rate of around 22 percent, and 21 percent of those who remained still had PTSD at the end of treatment. Overall, TF-CBT essentially failed to help around 40 percent of the participants. Nevertheless, the authors end their article by stating the following:
The TF-CBT approach evaluated here appears to not only effectively treat PTSD, but also is superior to CCT in reducing abuse-related attributions and shame. It is also effective in reducing parallel depression and parental distress about their children’s sexual abuse, and in enhancing parental support of the child and positive parenting practices. This study thus adds further support for the use of TF-CBT in treating multiply traumatized sexually abused children and adolescents.
Is this science? Are these the evidence-based practices and interventions being proclaimed as the best practices?
Now, I fully admit that the “soft sciences” (e.g., psychology, sociology) aren’t able to conduct the type of studies that the “hard sciences” (e.g., physics, engineering) can. I fully understand the limitations that researchers commonly face. What’s expected, however, is for researchers to be fully honest about their methods and to stop overstating the effectiveness of certain treatments.
I also believe it’s an unethical act to take a competing treatment model, reconstruct it and add obstacles in its delivery, and then claim your preferred form of treatment is superior to the competitor.
In the next article, I’ll examine peer review, the process studies go through to be considered for publication in journals. More than likely, you’ll again be amazed at the politics and outright bias that are often part of our so-called scientific process.