
On Implicit Bias: Part II

“The truth surpasses the business case for ending bias; it strengthens the culture case and underpins the justice case. We end bias for the sake of others and for our own.” – Nordell in The End of Bias

In my prior post, On Implicit Bias: Part I, I introduced the book The End of Bias: A Beginning: The Science and Practice of Overcoming Implicit Bias, by Jessica Nordell. In that post, I laid out the three topics for my writing on this book: (1) Understanding examples of implicit bias and microaggressions that can form the basis of a workplace complaint; (2) Eliminating implicit bias in the investigator in order to maintain neutrality and impartiality throughout the process; and (3) Identifying potential expressions of implicit bias in witnesses, understanding how this will impact our credibility determination of the witness.

In On Implicit Bias: Part I, I addressed the first prong: the ways in which allegations of implicit bias and microaggressions may serve as the basis of a complaint. I specifically discussed how knowledge of implicit bias can help us, as investigators, better assess the “reasonableness” of these claims; hone our interview skills in investigating these types of allegations; and inform the way we weigh this type of evidence in making our final determinations and investigation findings. In this post, I will discuss the remaining two prongs: eliminating implicit bias in the investigator, and identifying implicit bias in witnesses.

Eliminating Implicit Bias in the Investigator and Maintaining Neutrality

Understanding implicit bias is crucial to maintaining an investigator’s neutrality. One of the primary takeaways of Nordell’s book is that every human in our society (regardless of demographic makeup) is susceptible to implicit bias. As such, investigators should regularly engage in practices and precautions that limit the influence of their own potential bias and preserve the unwavering neutrality of the investigation. In this section I discuss (1) implicit bias and mental health; and (2) practices for decreasing bias in making credibility determinations.

The Correlation between Implicit Bias and Mental Health

First, Nordell discusses the relationship between mental health and implicit bias: “…’implicit’ stereotypes influence behavior especially when people are tired or stressed, under time pressure, or otherwise mentally taxed, while more ‘explicit’ beliefs dominate when people have the motivation or mental resources to think carefully about their actions.”

Implicit biases are essentially neurological shortcuts that fail to take in the entirety of the situation. When a person’s mind is especially taxed, fatigued and stressed, they are much more susceptible to taking a shortcut, acting on implicitly biased assumptions. In her book, Nordell specifically addresses this phenomenon in the context of police brutality. She writes: “Indeed, studies suggest that impaired officers do more racial profiling. One analysis of over ten thousand stops by Oakland, California, police found that when officers were stressed and fatigued, they searched and handcuffed African Americans at higher rates. A study of police recruits, too, found that those who were more fatigued showed more bias when performing in a simulation: they mistakenly identified an unarmed Black suspect as armed and decided to shoot.”

As attorney investigators, taking care of our mental health is crucial for providing neutral and effective investigative work. Mental health struggles are endemic within the legal profession. A report by the California Lawyers Association cites a 2022 study that found: “…of the 3,400 law firm respondents, 67% reported they suffered from anxiety, 35% suffered from depression, 44% suffered from isolation, and 19% contemplated suicide. Moreover, 44% felt mental health and substance use in the legal industry were at “crisis levels.” Interestingly, with respect to drug problems, only 2% admitted they had a drug problem, but 18% said they knew a colleague who did. With respect to alcohol problems, 9% admitted they had a problem, but about 44% knew of a colleague who did.”

Many lawyers experience pressure to attain high levels of billable hours and churn out work-product at an astounding pace; yet California rules of ethics also require that lawyers maintain a level of competence (that includes mental and emotional bandwidth) when rendering legal services for their clients. Rule 1.1 of the California Rules of Professional Conduct states: “For purposes of this rule, “competence” in any legal service shall mean to apply the (i) learning and skill, and (ii) mental, emotional, and physical ability reasonably necessary for the performance of such service.” Rule 1.1 thus explicitly includes the mental and emotional ability to competently perform services in its definition of “competence,” in addition to the more obvious requirement of learning and skill. Attorneys therefore have an ethical obligation to care for their mental and emotional wellbeing such that they can competently provide services to their clients.

With respect to attorney-investigators and implicit bias, an investigator who is stressed out and mentally taxed is more likely to take neurological shortcuts and make decisions based on implicit biases. Reducing mental stress and fatigue, and prioritizing mental health in general, therefore, can help us in providing more neutral, effective, and reliable investigations for our clients. The importance of these two issues in our profession is even further underscored by the California bar’s requirement of continuing education credits in the specific fields of competence and implicit bias.

The Uniform “Checklist” as a Mode for Decreasing Bias

Throughout her book, Nordell provides striking studies on the impact of implicit bias in our society; she also offers potential solutions for preventing and diminishing these trends moving forward.

One remarkable study found that implementing a basic checklist at the end of a doctor’s visit with a patient decreased fatal blood clot complications significantly: “…the number of internal medicine patients who returned to the hospital with blood clots within ninety days of discharge fell from twenty to two. And after the introduction of the checklist, the rate of fatal pulmonary embolism was cut in half.”

Originally, this study was meant to reduce human error in general: “[A checklist] plugs memory holes and hangs a safety net under human errors so they don’t add up.” The research team had not looked at the data for patterns involving implicit bias; when they went back to review their data along these lines, however, they were alarmed by what they saw. Indeed, “While 31 percent of male trauma patients had failed to get treatment, the rate was 45 percent for women. In other words, women had been nearly 50 percent more likely to miss out on blood clot prevention than men, and in greater danger of dying of this particular cause.” Nordell acknowledges that this disparity could have had other causes; for example, many of the patients who arrive with gunshot wounds are men, and doctors may be more inclined to prescribe greater blood clot prevention for more severe injuries. Nonetheless, after the team implemented their checklist, the gender gap completely disappeared: “Women and men received the right clot prevention at exactly the same rates.”

Nordell describes the checklist as a kind of “choice architecture” or “a way of shaping a doctor’s behavior not through persuasion but through design.” The checklist “forces doctors to disentangle the thinking that goes into a medical decision.” In other words, the doctor’s implicit assumptions are disrupted by an objective series of tasks to be applied uniformly to every patient.

While a checklist cannot supplant nuanced decision-making entirely (of course, a doctor must still be able to address the unique circumstances of their patient, and respond to nuanced subtleties), the checklist provides a “fail-safe” for decision-making, ensuring that important steps don’t fall through the cracks.

Within the realm of workplace and school investigations, the credibility factors in the California Department of Fair Employment and Housing’s Workplace Harassment Prevention Guide for California Employers can serve as a sort of uniform checklist when making credibility determinations.[1] The eight factors laid out in this guide – (1) the inherent plausibility of each person’s statement; (2) corroborating evidence that would tend to support or contradict each person’s statement; (3) each person’s motive to lie; (4) the extent to which a witness was able to perceive, recollect or communicate about the matter; (5) history of honesty/dishonesty; (6) habit/consistency; (7) inconsistent statements; and (8) manner of testimony – can be used as objective measurements of a witness’s credibility. In using these factors, the investigator can apply a uniform and objective analysis to each witness across each investigation, reducing the potential for biased or subjective findings.

Importantly, this guide also cautions against overreliance on “demeanor” as a credibility factor. When an investigator relies heavily on demeanor to make their determination, they are more likely to inadvertently imbue their analysis with significant bias. For example, an investigator might write that a female witness appeared “hysterical,” or that a Black witness appeared “well-spoken.”[2] While an investigator may be unaware of the bias present in these types of comments and observations, an employee (if they ever became privy to the investigative report) could use such comments as a basis for further complaints. Moreover, a savvy trial attorney could use these details to undermine the impartiality and integrity of an investigative report in litigation.

Generally, these types of demeanor observations rely significantly on an investigator’s own internal presumptions, even when unrelated to the witness’s demographic identity. For example, an investigator might assume that a witness is guilty or untruthful because they are sweating or fidgeting. In reality, an investigator has no way of knowing the true cause of a witness’s demeanor (perhaps the witness has a medical condition that causes them to sweat, or has had traumatic experiences in the past that now cause them to appear especially nervous in their interview).

The erroneous and even potentially discriminatory impact of these assumptions is again highlighted by Nordell in the context of racial bias and policing: “Officers are taught to notice ‘pre-attack’ indicators such as anxiety and reduced mental processing in suspects, but many of these behaviors are the same responses a person exhibits when feeling threatened. Black individuals in particular are at risk of the phenomenon known as stereotype threat: concern about being stereotyped which can affect one’s actions and behavior. They may be especially prone to appear anxious and therefore, in the eyes of the police, suspicious.” Here, the officers make incorrect assumptions about a suspect’s demeanor, failing to consider the other potential causes for the behavior.

The eight credibility factors listed above rely more heavily on objective facts and evidence than on internal presumptions about how someone should behave in their position. Relying on this “checklist” and avoiding demeanor-based conclusions will make an investigator’s findings more equitable and more impartial. As Nordell notes, however, the checklist cannot supplant the decision-making process entirely; an investigator must still retain the independent thinking, creativity, and mental flexibility to respond to each case’s unique evidence and facts.[3]

Identifying Expressions of Implicit Bias in Our Credibility Determinations of Key Witnesses

Finally, knowledge of these implicit bias processes and patterns can also help us better assess the credibility of key witnesses in our investigation. For example, obviously biased comments spoken by a witness (even if unrelated to the allegations) are likely to significantly decrease a witness’s overall credibility, especially if the issues being investigated relate to discrimination or harassment.

Furthermore (and perhaps less obviously), witnesses who present a “color blind” attitude will also be less credible. As Nordell explains, “color blind” attitudes often lead to greater instances of implicit bias: “Another study explored the impact of ‘color blindness,’ investigating how White employees’ attitudes about racial and ethnic differences affected the experiences of employees of color. Researchers surveyed nearly five thousand workers in eighteen different departments of a health-care organization, assessing the extent to which those departments practiced ‘color blindness’ or multiculturalism. The study found that in departments where differences were downplayed, employees of color perceived more bias and felt less engaged. By contrast, when White employees noted and appreciated differences, employees of color felt more engaged and detected less bias.”

Nordell continues, “There appears to be a neural basis for this: brain imaging studies suggest that when people are motivated to check biased behavior, they pay more attention to racial cues, and then work to curb their own stereotyping. Curtailing biased behavior seems to require paying attention to differences – the very opposite of trying to be ‘blind’ to them.”

I like to think of this phenomenon as trying to make out a series of shapes and colors with a vision impairment. In this case, one would be less able to notice details and differences and more likely to identify all the shapes by one consistent characteristic, such as all square or all blue. Then, one would attribute a set of characteristics (or stereotypes) to all the shapes identified as square or identified as blue. In this analogy, removing this “color blindness” would allow the person to see that some blue shapes actually had distinct patterns on them, or some square shapes actually had rounded edges.

In being able to see and recognize the unique distinctions among all the different shapes, the implicit categorizing of the shapes as all square or all blue would be disrupted, and the person would need to rely on new, more conscious manners of perceiving these shapes: “Thoughtfully attending to distinctions… also illuminates the absurdity of applying a broad stereotype: there are simply too many differences among members of a group for the stereotypes to be correct.” Essentially, making unconscious thought processes (“color blindness”) more conscious (“multiculturalism”) has been found to significantly reduce instances of implicit bias.

Thus, in an investigation, if a direct witness to an alleged event tells me, “I did not view the situation as race based. I don’t think of things in that way; I don’t see color. We’re all people at the end of the day,” their account that the incident did not appear race-based will be given significantly less weight in my final analysis. Not only is a person harboring a “color blind” mindset more likely to behave within their own set of implicit biases, but a witness who “does not see color” as a default, universal view is unlikely to have an objective perspective as to whether the incident appeared race-based. In other words, their default presumption is that these biases do not exist in our society, when we know, objectively, that they do.

Conclusion

Ultimately, knowledge of implicit bias is an important part of our jobs as investigators. While implicit bias evidence alone is currently unlikely to give rise to legal liability for employers in litigation, this type of evidence frequently comes up in workplace and school investigations, and as investigators, we must know how and when to consider it. We can even use this knowledge to improve our own investigation skills, become more neutral fact finders, and reflect greater impartiality in our final reports, all of which helps us provide better and more reliable investigations to our clients.

Nordell is an American author, poet, science writer, and interdisciplinary scholar with degrees in both poetry and physics. Her first book, The End of Bias, was shortlisted for the 2022 Columbia Journalism/Lukas Prize for Excellence in Nonfiction, the 2022 NYPL Bernstein Book Award for Excellence in Journalism, and the 2021 Royal Society Science Book Prize. The book presents the culmination of fifteen years of research, reporting, and writing on the subject of implicit bias.


[1] The Department of Fair Employment and Housing (“DFEH”) is now known as the California Civil Rights Department (“CRD”).

[2] As discussed in my first post on this book, labeling women as “hysterical” or overly emotional is a common bias. In addition, referring to Black professionals as “well-spoken” is also known to be a common microaggression.

[3] The importance of mental flexibility and creativity was introduced in last month’s An Introduction to the Book Blog, and will be a recurring theme in my blog post series.
