Beatrice Marchegiani
Interview with Jonathan Birch
In this interview, Beatrice Marchegiani (MSt Practical Ethics student at Kellogg College) engages in a thought-provoking conversation with Dr. Jonathan Birch, Professor of Philosophy in LSE's Department of Philosophy, Logic and Scientific Method. Dr. Birch is a renowned philosopher specialising in questions of sentience. His research focuses on the evolution of social behaviour, animal sentience, and the connection between sentience and welfare. This interview explores sentience, animal ethics, and moral status.
A. Could you please provide a brief introduction of yourself, including your academic background, and share some insights about your current research?
I'm a Professor of Philosophy at the London School of Economics, and Principal Investigator on the Foundations of Animal Sentience project. I lead a research team with two main goals: to develop better methods for studying the subjective feelings of animals scientifically, and to put the emerging science of animal sentience to work to design better policies, laws, and ways of caring for animals.
B. What initially sparked your interest in the field of philosophy, and more specifically, what led you to focus on the concept of sentience?
I don't think I'd heard of philosophy (as an academic subject) until my second year of university. I was studying Natural Sciences at Cambridge, and there was a second-year option called History and Philosophy of Science, which was very well taught. I had a lecturer called Peter Lipton who spoke about philosophy and its importance with real charisma and gravitas - sadly, he died soon afterwards, in 2007.
I've had an interest in animal welfare for at least as long as I've been interested in philosophy. I don't know when I first started thinking about animal welfare in terms of "sentience" - the capacity to have experiences that feel good or feel bad from the animal's point of view, such as pain and pleasure. Donald Broom's book Sentience and Animal Welfare (2014) was a key influence.
C. To what extent do you believe philosophy can address inquiries concerning the nature of sentience, and in contrast, what aspects fall within the domain of empirical sciences like biology?
I don't see a sharp border between philosophy and the sciences. On my project I've had postdocs and students coming to questions of sentience from biology and animal welfare science (the scientific field that studies animal welfare, often based in veterinary medicine departments), and others who've come to the same questions from philosophy.
Scientists often want space to reflect on foundational questions in their discipline (What are we studying? Why are we studying it? What is the ethical and social relevance of this?) and don't get this space in a normal scientific job, and so they are drawn towards philosophy. Meanwhile, questions that sound philosophical (Which other animals are conscious? Why does that matter?) require close engagement with scientific evidence.
Our centre at the LSE - the Centre for Philosophy of Natural and Social Science - provides an environment where scientists and philosophers can collaborate on foundational questions of common interest.
D. Sentience, often defined as the capacity to experience, has been frequently singled out as the foundation for moral consideration of a being. Do you agree with this perspective? Do you believe that the ability to undergo valenced experiences (such as pleasure and pain), as opposed to merely being sentient, constitutes a more suitable criterion for determining moral status?
I use "sentient" to refer to the capacity for valenced experience: experiences that feel good or feel bad from the animal's point of view. This is how the term is often used in animal ethics, bioethics, and animal welfare science. So, when people in these areas say that sentience is necessary and sufficient for moral status, they most likely mean in this sense.
The term can also be used in a broader sense, as a term for the capacity to have any kind of conscious experience, valenced or not. This is more common in other areas of philosophy, such as philosophy of mind.
There is then a debate about whether conscious experience of any kind is sufficient for moral status, or whether you also need valence.
My view is: conscious experience alone is not enough to ground interests. Imagine a being that is only capable of experiencing light and dark - nothing else - and has no emotional response to either state: light vs. dark is a completely neutral matter to them. I don't think such a being has interests. They have no interest in lightness, darkness, or anything else. If you then ask: "What would you need to add to give this being morally significant interests?" - I think one way would be to add valence. If lightness or darkness feels good or feels bad from the being's point of view, it has interests that we ought to take into account.
E. Are there specific agricultural practices involving animals that you find particularly problematic but that receive insufficient attention? If you had to prioritise, would you focus your efforts on ending practices that exploit and harm complex animals (like mammals) or those that affect a significantly larger number of simpler beings (like fishes and insects)?
Our treatment of other animals, in general, receives insufficient attention. The issue should be on all of our minds virtually all of the time.
Think of it like this: don't you want to live in a way that reduces the amount of suffering in the world, rather than adding to it? What's the point of being alive if your existence just adds to the world's suffering? Yet intensive animal agriculture, in its current form, puts us all in the position of living our lives atop a vast mountain of animal suffering. It's very hard to live in a way that reduces the size of the mountain rather than increasing it.
The question about prioritisation of different causes is hard. I certainly resist any attempt to frame the issue in terms of "complex vs. simpler beings". Invertebrates are complex beings.
People will often eat large numbers of invertebrate animals at a single sitting - think of someone demolishing 25-30 shrimps at once - so the numbers involved in the industry globally are mind-blowing. These animals are often farmed in systems that completely neglect their welfare. For example, it's routine to perform eyestalk ablation (cutting off the eyestalks) on breeding female shrimp to induce faster ovulation.
However, this isn't necessarily a reason to prioritise invertebrates over mammals when forced to choose. Mammals are substantially more likely to be sentient, because they share the brain areas most closely linked to sentience in our own case. Those of us who want to help animals should not forget invertebrates, but we are taking a big risk if we pour all our resources into helping invertebrates.
Luckily, for the most part, we don't have to choose: we should advocate for the welfare of all sentient animals, and caring more about shrimps doesn't have to mean caring less about pigs or chickens. If someone asks "should I eat chicken or shrimp?" - the advice should be to eat plants.
F. What is your stance on the relationship between sentience and artificial intelligence (AI)? How can insights from the study of animals and sentience inform our understanding of whether AI possesses sentience?
I don't rule out the possibility of sentience in AI. The main problem is that AI systems trained on vast corpora of human-generated data (like Large Language Models) are able to "game" any behavioural criteria we might come up with for use in animals. Information about what humans find persuasive is implicitly included in the training data.
By "gaming the criteria", I mean that they can skilfully mimic behaviours that humans regard as evidence of sentience without possessing any of the underlying mechanisms. They are akin to a patient who wants to present as though they have disease X, has memorised the whole diagnostic manual for disease X, and is able to skilfully mimic any outward symptom in that manual. I call this the "gaming" problem.
Is there a way around this? I wrote a piece with Kristin Andrews called "To understand AI sentience, first understand it in animals". I think the long-term plan here has to be to study as many instances of sentience in the animal kingdom as we can. By doing this, we can learn something about what sorts of computational features (if any) are essential to sentience, as opposed to being just a contingent aspect of how sentience is implemented in the human brain. There are already suggestions (like possession of a ‘global workspace’, or ‘perceptual reality monitoring’) but the study of sentience in other animals is still at a very early stage. If we robustly establish that some cluster of features is reliably indicative of sentience, we can then look for those features in AI. We are really a long way off being able to do this, which is why it is very misleading to say we have "tests" for consciousness in AI.
G. In your view, what approach holds the most promise for assessing sentience in various beings? Should we employ distinct testing methodologies for AI as compared to animals, or can a unified approach suffice?
In animals, we have a lot of markers of sentience we can look for - a lot of things that raise the probability of sentience when we find them. Some markers are behavioural, others neural.
For example, a study by Robyn Crook looked at how octopuses responded to an injection of acetic acid in the arm. The octopuses guarded and tended the affected arm, scraped at the skin with their beak, became averse to the chamber where they experienced the effects of the injury, and came to prefer a chamber where they could experience the effects of a local anaesthetic on the injured area. When we see behaviour like this in any other animal (such as a rat), scientists conclude: this animal is in pain. Behaviour like this is clearly probability-raising.
With AI, the science is less mature. There is less agreement about what is even probability-raising. Some colleagues and I have recently proposed some signs that may raise the probability of sentience in AI, but I think they only raise the probability a little bit - they are not as probative as the markers we have for animals, which exploit the fact we have a shared evolutionary history with them.
H. When confronted with the challenge of objectively comparing subjective experiences among vastly different beings, what is your approach? Considering that animals may have radically different phenomenological experiences from those of humans, such as their perception of time, do you believe there are methods to accommodate these disparities in cross-species comparisons?
We don't currently have a solution to the problem of "interspecies welfare comparisons". It's often noted that the intensity of felt pain can vary even among people, so of course it might vary between a person and a shrimp.
As you note, even if we could compare intensity, there would also be the problem of comparing experienced duration. Bees in some sense "live fast": their vision has a very high refresh rate, so that they see an ordinary computer screen as a pulsing of static frames. But does this mean that one second of pain for a bee feels like much longer than one second of pain for us? We have no idea. At present it's unclear how to get empirical tractability on such a question, but we're working on it.
Crucially, though, we don't need a solution to these problems to be able to start treating these animals better - we can give them moral consideration without claiming to be able to give an exact numerical weight to their interests.
Suggested further reading
Andrews, K., & Birch, J. (2023). What has feelings? Aeon, 23 February 2023. https://aeon.co/essays/to-understand-ai-sentience-first-understand-it-in-animals
Browning, H., & Birch, J. (2022). Animal sentience. Philosophy Compass, 17(5), e12822. https://doi.org/10.1111/phc3.12822
Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., ... & VanRullen, R. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness. arXiv preprint arXiv:2308.08708.