Research

Epistemic Risks in the Diversity-Trumps-Ability Model

Heather Douglas (2009) claims that when the confirmation of a hypothesis carries potential risks, scientists should raise their evidential standards to protect public safety. If Douglas is correct that scientists must weigh the bad consequences of making erroneous claims, raising their evidential standards to avoid causing negligent harm, does it follow that scientists should likewise consider the potential benefits of accepting or rejecting a hypothesis? Suppose the hypothesis in question supports a social good. Should a scientist then relax their evidential standards, since accepting the hypothesis has positive consequences? I attempt to answer these questions in relation to the construction and application of mathematical models. I use Hong and Page’s ‘diversity trumps ability’ model as a key example in which academics have lowered their epistemic standards because the model’s stated results support a social good.

Special Issue of Synthese, forthcoming

Will Transparency Bolster or Hinder Public Trust in Science?

The concept of transparency has received a great deal of attention in the philosophy of science literature, especially as it relates to communicating scientific studies and findings to non-experts. Although transparency is usually considered necessary for bolstering public trust in science, philosophers have more recently argued that transparency may actually generate unwarranted skepticism. The worry is that if scientists are completely transparent, they will expose non-experts to practices that conflict with those non-experts’ idealized normative assumptions about proper scientific methodology, thereby producing more skepticism about science. The transparency debate has consequently focused on the discrepancy between actual scientific practice and non-experts’ assumptions. I argue that claiming we shouldn’t be transparent because non-experts hold a false folk philosophy of science oversimplifies the issue. When determining whether transparency will bolster or hinder public trust in science, we must instead consider the role special interest groups play in intentionally fostering doubt.

This paper is currently under review.

Algorithms and the Ethics of Discrimination in the Insurance Context

Increasingly, algorithms associated with AI, big data, and machine learning play a central role in public and private practices, and it has often been noted lately that the outcomes of these processes can be worse for people of color, for women, and for people in other minority or marginalized communities. There is much debate over when and how these processes produce outcomes worse than those of the practices they typically replace, and, when they do, how those outcomes should be evaluated. A context of particular interest is the insurance industry. While it is unethical, and often illegal, to discriminate along lines of race, sex, and other protected classes, modern algorithms frequently incorporate data that correlate with these characteristics, introducing the possibility of discrimination by proxy. For example, car insurance rates may be based on data from credit reports, leading to worse rates for low-income people and, by extension, for racialized people, who tend to have lower incomes. Does this outcome mean that the use of such algorithms is racist, or discriminatory in other unethical ways? Recent scholarship in law and sociology has connected these new algorithmic practices to the social and ethical norms associated with insurance before their introduction, and has supplied useful distinctions and helpful concepts. This paper explores the issue from a philosophical and ethical point of view.

This paper is co-authored with Patricia Marino.

Draft available upon request.
