In our Engaged Scholarship series, we speak with members of the UC Law SF faculty who are influential scholars, advocates, and thought leaders. Our latest conversation is with Chancellor and Dean David Faigman, John F. Digardi Distinguished Professor of Law.

David Faigman is an influential academic whose landmark work has shaped the field of scientific evidence. Over the past five years, he has been cited more than any other evidence scholar in the United States and has served as an expert witness in criminal cases from Kansas City to Tel Aviv.

Faigman is co-author and co-editor of the five-volume treatise Modern Scientific Evidence: The Law and Science of Expert Testimony (Thomson West), which has been cited by the U.S. Supreme Court. He has published three books and more than 60 articles and essays on scientific evidence, constitutional decision-making, and the Supreme Court.

***

Q: You’re a law school dean, a lawyer, a researcher, an author, and an advocate. How do these identities interact with each other in your life?

A: The scholarship always precedes the advocacy. I once was on a panel at a symposium for the National Academy of Sciences and one of the questioners asked whether I was questioning prosecutorial use of forensic science because I was politically progressive. I responded that I didn’t have an agenda; my only agenda is in favor of good science; and I don’t care who’s using bad science, whether it’s criminal defendants or plaintiffs in civil cases or prosecutors in criminal cases. My agenda is to bring good research data into legal decision making. That has marked my career.

My early scholarship was thought to put me on the conservative political side, because I wrote my law school Note on the “battered women syndrome” and questioned the empirical research that had been put forward, which was actually only clinical anecdote. Hence, my challenging prosecutorial use of forensic science is not at all political. I object to the use of bad science in court, no matter who is offering it.

A good example of my non-partisanship, I suppose, was my first major article, published in the Emory Law Journal, which concerned the question of how social science evidence should be—or should not be—received by the courts. It set forth pretty clearly my objective of insisting that courts exclude bad science and only rely on valid research literatures. Indeed, this general view was adopted in 1993 by the United States Supreme Court in the case of Daubert v. Merrell Dow Pharmaceuticals. After that decision, I began broadening my perspective to other areas of scientific evidence beyond behavioral psychology, including forensic identification, toxic torts and medical causation, and, most recently, neuroscience. In short, Daubert called upon judges to be gatekeepers against bad, or “junky,” science and I sought to examine myriad areas in which such testimony was being offered.

My other writing, which is grounded in the area of scientific evidence in the courtroom but extends to doctrinal considerations surrounding the Constitution more generally, involves the question of how scientific research should be used in constitutional cases. There are a surprising number of constitutional cases that depend on what might be termed “constitutional facts.” In fact, I wrote an entire book on the subject, Laboratory of Justice: The Supreme Court’s 200-Year Struggle to Integrate Science and the Law.

For instance, Roe v. Wade involves the question, when does fetal viability occur? That is a legitimate scientific question that the Court used to set the threshold for navigating between a woman’s fundamental right of autonomy to choose how she’s treated and the government’s interest in protecting life. Viability is a medical and scientific issue that relates to the likelihood that the fetus could survive outside the womb. The Court held that, under Due Process, viability was the point in a pregnancy at which the state’s interest became compelling and the state could outlaw abortion. But when viability occurs as a factual matter is a scientific question.

There are lots of those kinds of issues in constitutional law, where the Supreme Court relies on factual premises to support its holdings. In Brown v. Board of Education, the Supreme Court asserted that the psychological impacts of segregation supported its holding. Indeed, especially as later cases made clear, both the issue of segregation and that of affirmative action involved social science questions surrounding the impacts of discrimination.

In whatever area of the law’s use of scientific research, whether it is ordinary civil litigation or extraordinary constitutional litigation, the courts should insist on sound science. My scholarship and my advocacy are driven by this principle.

Q: That’s interesting. I might have thought of Roe v. Wade as a case with a central scientific element but wouldn’t have put Brown v. Board in that category.

A: There’s another famous case you might not think of: New York Times v. Sullivan. In that case, the Supreme Court held that when a public figure sues for defamation, he or she must prove that the defendant acted with “actual malice,” which means that the defendant either knew the statement was false or recklessly disregarded whether or not it was true. In reaching this holding, Justice Brennan justified this “breathing space” for free speech on the basis that the failure to provide it would chill speech. This behavioral assertion is eminently testable and is essentially an empirical hypothesis about how people act.

In fact, many of the disagreements between the Federalists and Anti-Federalists over whether to ratify the US Constitution involved factual disagreements. Two in particular. One concerned whether small states or large states would be the best guarantor of liberty. Madison argued that a large federal government was needed, because the many factions in such a government would cancel one another out. The Anti-Federalists argued for small states, and against a large national government, citing Montesquieu’s defense of small republics as the best guarantors of liberty. This is ultimately a factual question for political scientists.

The second concerned a debate between Madison and Jefferson. Jefferson believed that people were essentially good, but government corrupted them. Madison argued that government was needed to protect against the base impulses of what he would have called “the mob.” This view was made famous by Madison’s quote in The Federalist: “If men were angels, no government would be necessary.”

So how you organize government is itself based on questions of behavioral psychology, political science, and economics. My career has been based on evidence-based policy and fact-based jurisprudence; I don’t believe we ought to have the luxury of making up facts because they suit our advocacy goal. When I wrote on the battered women syndrome, I was highly critical of using it in defense of women who kill in self-defense.

However, in that same article, I recognized that women suffering domestic violence were not well-served by conventional rules of self-defense—which were based on “Old West” notions of men fighting men. I argued that the law of self-defense needed to take into account the physical reality that a smaller woman might need to defend herself in less conventional ways, especially when police were unresponsive and she or her children might be endangered if they tried to escape the battering relationship. In short, I argued that the law didn’t need to employ a pseudo-science to accomplish what justice demanded.

Q: You’ve been writing and thinking about the law for more than 30 years. What work are you proudest of?

A: I’m proudest of my work in two areas. One would be my work on the scientific evidence questions that arise in constitutional law. Evidence scholars typically don’t teach constitutional law, so they don’t see all the evidentiary questions that arise in cases like the ones I spoke about earlier or even cases like Dred Scott v. Sandford or Plessy v. Ferguson. Virtually all the Court’s landmark cases involve these kinds of empirical questions. As I taught constitutional law, it struck me that everyone on the Court was either assuming or relying on empirical statements that might or might not have research bases behind them. I was proud to be one of – and I continue to be one of – very few people thinking about the scientific evidence questions raised in constitutional decision making and how to resolve them. I’ve written two books and half a dozen articles on the matter. Well, the Supreme Court has yet to adopt my suggestions, but time will tell. [Laughs.]

The other area would be the work I’ve done more recently on what I call “G2i,” which is the problem of reasoning from group data in science to individual decision making in the law. And again, this is a systemic problem that most people have not been thinking about. But they are starting to think about it, and that’s where a lot of my citations are coming from now.

When you are a scientist, you study populations or samples of populations. So if I’m a researcher interested in whether benzene causes leukemia or whether glyphosate, the active ingredient in Roundup, causes non-Hodgkin’s lymphoma, what do I do? Well, I look at a group of people that used Roundup and a group of very similar people that didn’t use Roundup and I see if the rate of non-Hodgkin’s lymphoma is greater among those who use Roundup than those who don’t. That gives me an insight into whether Roundup is causing this particular illness. And that’s all group data.

When you get to court, there’s one plaintiff. Let’s say the plaintiff has non-Hodgkin’s lymphoma and used Roundup. Well, the problem is that lots of people use Roundup and never get non-Hodgkin’s lymphoma – and lots of people get non-Hodgkin’s lymphoma who never used Roundup. So there isn’t a perfect correspondence; it isn’t what’s called a “signature disease.” The problem this presents is, how do you reason from the group data scientists are producing to the question of whether this individual case is an instance of the group phenomenon? The manner of statistical inference is fundamentally different in these two determinations.
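
To make that group-to-individual gap concrete, here is a minimal numeric sketch, using entirely hypothetical cohort counts rather than data from any real study, of the kind of group-level quantities an epidemiologist might report. The relative risk and the attributable-fraction heuristic (RR − 1)/RR shown below are standard epidemiological measures, not a method drawn from this interview; the point is that even these numbers describe the exposed group as a whole, not any single plaintiff.

```python
# Hypothetical cohort counts; illustrative only, not data from any real study.
exposed_cases, exposed_total = 30, 10_000       # used the product and developed lymphoma
unexposed_cases, unexposed_total = 15, 10_000   # never used it and developed lymphoma

risk_exposed = exposed_cases / exposed_total          # 0.0030
risk_unexposed = unexposed_cases / unexposed_total    # 0.0015

relative_risk = risk_exposed / risk_unexposed         # 2.0: exposure doubles the group-level risk

# A common (and contested) heuristic for individual attribution:
# the fraction of exposed cases that would not have occurred absent exposure.
attributable_fraction = (relative_risk - 1) / relative_risk   # 0.5

print(f"Relative risk: {relative_risk:.2f}")
print(f"Attributable fraction among the exposed: {attributable_fraction:.2f}")

# Even at a relative risk of 2.0, the group data say only that about half of the
# exposed cases are attributable to exposure; they cannot say which half.
# That is the G2i problem: the statistic describes the population, not this plaintiff.
```

Some courts have treated a relative risk above 2 as satisfying the “more likely than not” threshold for specific causation; whether that leap from a group statistic to an individual verdict is sound is precisely the kind of question the G2i work examines.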

You can see this by looking at other sciences, like meteorology. If a meteorologist talks about the likelihood that a hurricane will hit Miami, they wouldn’t say they have a reasonable degree of scientific or meteorological certainty that the hurricane will hit Miami. They say, instead, something like there’s a 48 percent likelihood, based on a 95 percent confidence model, that a category-five hurricane is going to hit Miami – and they will produce probability distribution charts for its likely trajectory. Only in the law do courts say, you can’t tell us what the probability is. You must tell us that this case is an instance of that, to some reasonable degree of forensic, psychological, or scientific certainty.

Scientists talk at the group level and lawyers talk at the individual level, but you can’t fully translate group data to individual cases. My recent work on this question of G2i illuminates the fundamental disconnect between what scientists do for a living and what courts do. It very much represents the work that I’ve done my entire career, which is to ask how we integrate scientific research into legal decision making.

Q: On a more granular level, how do you integrate your scholarship and research with your work as a law school dean? Do you carve out time on the weekends, set an hour aside during the workday, or is it more catch-as-catch-can?

A: It’s a lot of weekend time, which is mainly a matter of practical necessity; I don’t work as well at night as I used to. I usually get home around seven and I like to just hang out with my wife, grandkids, or whoever is around. And one thing I’m doing more now is writing with other authors; I’ll have an idea and pitch it to somebody else, or somebody else who has an idea will pitch it to me. That way I’m not working on one article full time just by myself.

Over the past 10 years, I’ve been working to define the group-to-individual problem. In the two articles I’m working on now, I’m trying to manage the problem. I can’t say that this problem can be solved, because there’s a fundamental disconnect, but I do believe it can be managed.

The first of those two articles is on what’s called “differential etiology.” I have three co-authors: a sociologist at the University of Houston, a statistician at the Cleveland Clinic, and a statistician at the University of Cambridge. I understand statistics, but I’m not operating on the same level as these very high-powered statisticians. When we approached the problem of how to reason from group data to individual cases, we really needed statisticians to help us.

The other article is in a symposium issue of the William & Mary Law Review. But I didn’t have time to do all the research. So I asked my research assistant, Kelsey Geiser, who did a lot of work getting the footnotes in line and making sure I wasn’t saying anything stupid in terms of the research literature, to co-author it with me. She’s been great and is incredibly talented.

I’m working on two other pieces. One is a short piece on firearm forensics that I am coauthoring with a couple of psychologists. The other is also on firearms identification, which will likely be a longer and more technical piece on the problems with this research literature.

Q: The pandemic has had a profound personal and professional impact on many people. What have you learned about yourself in the past 18 months?

A: I learned that I’m more of a social animal than I realized. Being a professor is very social; you’re around students, you’re doing committee work, and you’re talking to your colleagues. Of course, there’s a lot of time spent alone, because nobody’s sitting at your elbow while you’re writing. And I’ve always enjoyed a little bit of isolation; I like the Zen experience of writing.

But COVID gave me too much of that and helped me realize that I need a mixture of the two. Life is not as enjoyable when it’s spent too much alone.

Q: You’ve also spent quite a bit of time serving as an expert witness during the pandemic. What has that experience been like – and what message are you working to convey in your testimony?

A: Now that we’re getting back to in-person work, I’ll probably be doing less expert testimony work, because I can’t travel as much. Ironically, perhaps, the pandemic allowed me to do more work as an expert, because there was no travel involved. I could block a morning and testify in Baltimore, without losing several days traveling and testifying. I’m testifying on firearms research in a case in Israel now. In no other universe than the Zoom universe would I be able to serve as an expert witness in a case in Tel Aviv. So that’s been a nice opportunity and it has made my work more international than it otherwise would have been.

In these cases, I offer testimony on why the judgments rendered by firearms examiners should be limited. They claim the ability to identify a particular cartridge case or bullet as coming from a specific gun, very often stating that they can do so “to the exclusion of all guns in the world.” Well, the research simply does not support their ability to individualize in this way.

The most serious problem associated with the research studies on firearms involves their mistreatment of error rates. The study designs involve testing examiners’ ability to identify whether a bullet or cartridge case “matches” or “does not match” a bullet or cartridge case that came from a known source. Hence, in the research, there are only two possible answers, match or no match. However, in fieldwork, examiners sometimes find that there is too little information to rule in or rule out a match; these are labeled “inconclusive.”

Following this three-possible-answer approach, firearms researchers allow their examiner-subjects to answer “match,” “no-match,” or “inconclusive,” even though there are only two actual answers. Worse, the researchers treat an “inconclusive” answer as correct—which is completely absurd. An alternative approach, suggested by some of these researchers, is not to count inconclusive answers at all. But then examiners can answer only the easy questions and call all the hard ones “inconclusive.”

Imagine if the bar exam adopted that policy. Nobody would fail if you only graded the questions that the examinee decided to answer.
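
To see how much the treatment of “inconclusive” responses matters, here is a minimal sketch using hypothetical counts (not results from any actual validation study) that compares the error rates produced by three different scoring rules.

```python
# Hypothetical results from a validation study of 100 comparisons in which the
# ground truth (match or no match) is known to the researchers.
# The counts are illustrative only.
correct_calls = 60        # examiner said match or no-match and was right
incorrect_calls = 5       # examiner said match or no-match and was wrong
inconclusive_calls = 35   # examiner declined to decide
total = correct_calls + incorrect_calls + inconclusive_calls

# Rule 1: score "inconclusive" as correct (the practice criticized above).
rate_if_scored_correct = incorrect_calls / total                               # 5.0%

# Rule 2: drop inconclusives from the denominator entirely.
rate_if_dropped = incorrect_calls / (correct_calls + incorrect_calls)          # ~7.7%

# Rule 3: treat every inconclusive as a failure to reach the true answer.
rate_if_scored_error = (incorrect_calls + inconclusive_calls) / total          # 40.0%

for label, rate in [
    ("inconclusive scored as correct", rate_if_scored_correct),
    ("inconclusive excluded", rate_if_dropped),
    ("inconclusive scored as error", rate_if_scored_error),
]:
    print(f"{label}: {rate:.1%}")
```

The spread between those figures is the point of the bar-exam analogy: the headline error rate depends as much on the scoring convention as on the examiners’ actual accuracy.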

The fundamental problem is that these research methods were not designed by mainstream scientists and do not meet the most basic standards of scientific validity. When I serve as an expert witness, I am working to explain the serious research design and execution flaws in this research and, hopefully, persuade the court that they should not permit firearms experts to testify that a particular bullet or cartridge case was fired from a specific gun. The most that the literature supports is that the bullet or cartridge case was fired from a particular type of gun, not a particular gun.

Q: How many cases do you think you’ve served on during the pandemic?

A: Probably around a dozen, if you add the ones I’ve done to the ones that I’ve agreed to do. I have a couple I’ll be doing next year. The two biggest problems I’ve had are that I don’t know how to say no very well and I have a blind spot about my future calendar. If it’s October and someone asks me to do something in May, I think, “Oh sure. By May, I won’t have any trouble getting it done. There’s plenty of time between now and then.”

What I don’t always appreciate is that my calendar will look the same every week from now through April. But it does make you very productive if you can’t say no. [Laughs.]

On a more personal note, I don’t like writing to deadlines. I enjoy the process of writing, but I mostly enjoy the process of editing my own writing. I like the element where you think about voice and realize you need to add two more sentences here or move this paragraph around. It’s that artistic part of writing that I really enjoy. I’ve found over time that when I need to write something with a real deadline, it feels like pulling teeth and the artistry drops down a little bit.

All my books have been written on my own time, except for one, Laboratory of Justice. In that case, the publishers decided when it was going to go out and there was no leeway. It was mostly written, but they were very specific about the last chapter – and I had some very late nights before that deadline.

Q: Many writers would concede that even if they hate deadlines, they wouldn’t start — or finish — without them. So having that inner motivation is a beautiful thing. Where do you think that comes from? Is it the pressure of wanting what you have to say to be said? 

A: Yeah, I think that’s it. I have a sense that I have ideas that are new or different and I want to get the word out. It’s not that I’m so smart, but the law has historically been pretty dumb about science; it doesn’t take a rocket scientist to see the problems with a lot of scientific evidence, like non-DNA forensic identification evidence. There’s a lot that is bad in the law’s use of scientific research and so there’s a lot to write about.

There’s that quote, “I don’t like writing, but I like having written.” When I’m first sitting down and putting pen to paper, I have to remember to let the pen flow and worry about it afterwards. If I get up at six in the morning and write until 11 a.m., I can write 12 or 13 pages. I’ll only keep three or four of them in the end. But I give the pen a lot of credit and I try to let it move.

Q: What’s the most important piece of wisdom you’ve received?

A: I remember talking to one of my undergraduate professors, Dr. Virginia Pratt, about whether I should go to graduate school for history or psychology or something else. (I always knew I wanted to be a professor.) She was a diehard supporter of mine and she said, “Whatever you do, you will be successful, but do what you love. The rest will follow.”