An introduction
Hello! I’m Darcy. I have a PhD in philosophy, but I’ve successfully divested myself of ‘philosopher’ as a central part of my identity. I now work as a research analyst at a tech company, and I hope to write more about my process of leaving philosophy and finding this job. But I also have some lingering philosophical projects that I want to write about here.
First, here’s a little background on my academic work. My dissertation focused on citation bias in science. A substantial body of empirical evidence shows that women are cited less often than men across a variety of scientific fields. This likely poses a number of problems for the normal functioning of science: I assume that all people are equally capable of contributing to the progress of science, so if some voices are being left out, science is worse off.
My dissertation examines this problem through both traditional philosophical analysis and empirical methods. I used the ‘three paper’ model rather than the more traditional monograph. In the first paper, I argue that citation bias is an epistemic injustice against women: they are denied recognition as epistemic agents (or, less fancily, as people who know things that others might want to learn). In the second paper, I argue that a scientific community where citation bias occurs is epistemically inferior to one where it does not, and that scientists therefore have an obligation to improve the practices of their community. In the final paper, I build an agent-based model to quantify some of the harms faced by women who are discriminated against, especially as the use of citation metrics in science increases.
I knew for the last few years of my PhD that I did not want to compete for an academic position in philosophy. I love philosophy, but it has never loved me back. I was, and remain, unwilling to make the sacrifices the academic job market demands: I was not willing to risk investing years in postdocs, VAPs, and adjuncting only to never land a permanent position, or to find security only in a place I don’t want to live.
I found a job outside of academia as a research analyst for a tech company with brands around housing, cars, and people search. My team generates content that improves the SEO of those brands. To do so, we conduct data studies using publicly available datasets to find newsworthy insights connected to the company’s brands. My first study was on electricity prices. The work draws on some of the quantitative skills I developed in the final paper of my dissertation, but more generally it requires strong analytical skills (something philosophers have in spades!).
Even though I’ve left academic philosophy, I still have a few lingering philosophical projects I want to pursue. In particular, I’m interested in applying philosophical methods to problems of AI. AI ethics is a growing field that draws on the insights of a wide variety of disciplines, although philosophers have not been at the forefront, despite generally being the experts on ethics.
I spent the last few years of my PhD working on the ethics of another emerging technology: neural tech. As a neuroethics researcher with the Center for Neurotechnology at the University of Washington, I worked on projects that gave me the opportunity to talk with neural tech researchers about how ethics does and should shape their research. The research group I worked with is preparing a report for publication on how to effectively integrate ethics into novel research, which we hope will be helpful to academic research centers (and any interested industry research labs, too!).
But approaching AI as a philosopher of science and epistemologist can also help make sense of some of AI’s developing problems. In particular, I think focusing on the structures of AI research communities, and on the impact of industry being the primary source of funding for AI research, is more productive than focusing only on ethical issues that are in many ways downstream of those structural causes. Luckily, philosophers of science are very interested in both of those topics, and I want to bring some of the most valuable insights from those literatures to discussions of AI.
I hope to write updates here regularly!