
Speciesism and AI – are we training AI to perpetuate or relieve animal suffering?


Introduction

Conversations are beginning to be held around the world on the interaction between AI and animals. These conversations include questions such as: (1) how can AI be used to help us understand how animals communicate with one another; (2) can AI be used to finally end animal testing; and (3) should AI be granted rights when animals still have very limited (or no) rights? These are just a snapshot of the discussions that are developing. A less-discussed issue, though, is whether AI is embedding speciesism in human consciousness, understanding and, ultimately, societal beliefs, which filter into our everyday functions – consumerism, legislation, politics, agricultural developments, science and so on. The question on everyone's lips should be: is AI being trained to perpetuate or relieve animal suffering?


What is AI?

AI is the ability of a computer programme to solve problems input by individual users and to respond to that input in a way which resembles human intelligence. Open-source AI software[1], and in particular generative AI[2], is artificial intelligence capable of generating text, images or other media in response to prompts. It derives its outputs from training data assembled by large language model engineers, who write and develop algorithms and design hierarchical ontologies to underpin and train the AI systems. AI is trained on vast data sets which are then labelled to improve the AI’s pattern recognition and reliability. While human beings process information on a multi-dimensional basis (sound, smell, sight etc.), AI learns only from its training data; its abilities are therefore limited to the data sets on which it is trained.


How can AI perpetuate animal suffering?

AI’s training data is largely biased towards an anthropocentric, or “human-centred”, view – the predominant societal view that humans alone possess intrinsic value and that all other things hold their value based on their ability to serve humans. As a result, in relation to animals, and particularly farmed animals (chickens, pigs, cows, sheep etc.), AI suffers from a considerable blind spot in its training data. Speciesist biases filter through into LLMs and image recognition platforms, perpetuating and further embedding misconceptions about animals in human understanding and, indirectly, continuing the status quo of animal suffering.


We cannot blame the AI or its developers; we can only look to the diversity and extent of the training data on which it is taught and the stimuli to which the LLM is exposed. If we feed an LLM data which has an underlying speciesist view, then this will shape the LLM’s view and cause it, in turn, to adopt a speciesist view. While developers and coders aim to remove biases and discrimination through tools designed to reduce these algorithmic issues, the protected characteristics generally considered are human characteristics: gender, race, ethnicity. The tools are not designed to remove speciesist biases or discrimination – a topic which is fundamentally under-researched and under-discussed in general, let alone in the field of AI training data.


LLMs

A pioneering study on AI and speciesism was conducted by Thilo Hagendorff, Leonie Bossert, Yip Fai Tse and Peter Singer[3]. They considered how LLMs and image recognition models can perpetuate and entrench animal suffering due to inherent representational biases in the training sets which are used to programme AI models.


With regards to LLMs, they note “the basic operating principle of language models comprises four steps, namely tokenising (assign words to tokens), cleaning (removal of stop words etc.), vectorising (translate words into numerical representations of their surroundings), and machine learning (train recurrent neural networks to predict word combinations). Eventually, the machine learning models learn how to produce natural language on their own. However, the crux with these models comes from the fact that they perpetuate word combinations that are learned from man-made texts.” The central issue is that man-made texts contain large amounts of bias – against humans, against species, against world views and so on.
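By way of illustration, the sketch below walks through the first three of those steps on an invented two-sentence corpus. The sentences, stop-word list and window size are our own assumptions for illustration only; the final machine-learning step (training a neural network to predict word combinations) is omitted, and real models learn from billions of words rather than toy counts.

```python
# Toy sketch of tokenising, cleaning and vectorising (the study's first three steps).
from collections import defaultdict

corpus = [
    "the cow is livestock and is farmed for milk",
    "the cat is a cuddly and intelligent companion",
]
stop_words = {"the", "is", "a", "and", "for"}

# 1. Tokenising: split text into word tokens.
tokenised = [sentence.split() for sentence in corpus]

# 2. Cleaning: remove stop words.
cleaned = [[tok for tok in sent if tok not in stop_words] for sent in tokenised]

# 3. Vectorising: represent each word by the words that surround it
#    (a simple co-occurrence count within a +/-1 window).
cooccurrence = defaultdict(lambda: defaultdict(int))
for sent in cleaned:
    for i, word in enumerate(sent):
        for j in (i - 1, i + 1):
            if 0 <= j < len(sent):
                cooccurrence[word][sent[j]] += 1

print(dict(cooccurrence["cow"]))   # {'livestock': 1}
print(dict(cooccurrence["cat"]))   # {'cuddly': 1}
```

Even in this toy example, "cow" is already surrounded by use-oriented words and "cat" by affectionate ones; scaled up to internet-sized corpora, those surroundings become the numerical representations the model learns from.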


The study found that speciesist language patterns exist in most AI models, including GloVe and Word2Vec, which were the main sources for this part of the study. Looking at GloVe, a word embedding model trained on Wikipedia and news text, the authors found that when farmed animals and companion animals were compared against word pairs (ugly/cute, love/hate, primitive/intelligent), farmed animals were predominantly clustered with the negative terms. This perpetuates the difference in generalised views of companion and farmed animals. A similar result was produced when models were asked to complete questions such as “what are donkeys good for”, with suggested completions including: carrying, being stubborn, pulling. For cats, by contrast, the completions were: cuddling, playing with, companionship. These suggestions show, again, the word-pairing disparities between companion and farmed animals. There is no objective reason for such a distinction; it is an anthropocentric, generationally ingrained view that is now embedding itself into the newest form of communication.
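The kind of word-association measurement described above can be reproduced in a few lines with publicly available embeddings. In the hedged sketch below, the pre-trained model name, the animal terms and the attribute words are our own choices for illustration and are not the exact setup used in the cited study.

```python
# Illustrative comparison of word associations in pre-trained GloVe vectors.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads pre-trained GloVe embeddings

farmed = ["pig", "cow", "chicken"]
companion = ["cat", "dog", "kitten"]
attributes = ["cute", "ugly", "love", "primitive", "intelligent"]

for animal in farmed + companion:
    scores = {attr: round(float(vectors.similarity(animal, attr)), 3)
              for attr in attributes}
    print(animal, scores)

# Systematically higher similarity to negative terms (and lower to positive ones)
# for farmed animals than for companion animals is the kind of speciesist
# clustering the study reports.
```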


Image recognition models

A similar study was undertaken by the authors on a model capable of image interpretation, to determine how it would produce images of agricultural animals when fed certain prompts. The study used data from ImageNet2012, a dataset containing millions of images available on the internet for public use. They note that: “ImageNet bases its underlying categories on WordNet, which provides a hierarchically organised taxonomy of words, which is based on the Library of Congress taxonomy in the United States, which was developed in the 1970s. In ImageNet, “animal” is one of the top-level categories.” When one then drills down to a particular type of animal, the images are annotated with sub-categories that use speciesist terms such as “milk cow”, “livestock”, “toy dogs” and “food fish”, rather than simply “cow” or “fish”.
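WordNet itself is freely inspectable, so the use-oriented placement of these labels can be seen directly. The sketch below (our own illustration, using the terms mentioned above) prints one hypernym chain for each label; note that the exact categories ImageNet exposes may differ from the raw WordNet entries shown here.

```python
# Print a hypernym chain (from taxonomy root down to the term) for some
# use-oriented animal labels in WordNet, the taxonomy underlying ImageNet.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

for name in ("livestock", "food_fish", "toy_dog"):
    for synset in wn.synsets(name):
        chain = [s.name().split(".")[0] for s in synset.hypernym_paths()[0]]
        print(name, "->", " > ".join(chain))
```

The labels encode what the animal is *for* (food, stock, a toy) rather than what it is, and any image model trained on those labels inherits that framing.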


The study also shows that when you ask ImageNet to bring up a photo of, for example, a pig or a cow, most of the photos returned show these animals living outside, with space, in free-range environments. The reality is vastly different for the millions of agricultural animals which live in confined animal feeding operations, or factory farms. For example, only one third of all living birds are wild; the other two thirds are farmed, and of those, 99% are raised in factory farm environments. The images returned for a search for chickens are not representative of this: they portray an idealised version of the life of an agricultural animal, not the reality. The study notes that “due to the general and intended non-transparency of factory farming, image training data-sets suffer from representational or sampling biases, meaning biases that happen from the way one defines and samples a group, in this case the group of farmed animals.”
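A back-of-the-envelope calculation makes the scale of that sampling bias concrete. The real-world figures below come from the text above; the share of factory-farm settings in retrieved images is a hypothetical placeholder, since no single figure is given for it here.

```python
# Rough sketch of the representational gap between reality and image data.
farmed_share_of_birds = 2 / 3          # two thirds of living birds are farmed
factory_farmed_within_farmed = 0.99    # 99% of farmed birds live in factory farms
real_world_factory_share = farmed_share_of_birds * factory_farmed_within_farmed

dataset_factory_share = 0.05           # hypothetical: few retrieved images show factory farms

print(f"Real-world share of birds in factory farms: {real_world_factory_share:.0%}")
print(f"Share of factory-farm settings in retrieved images (assumed): {dataset_factory_share:.0%}")
print(f"Representation gap: {real_world_factory_share - dataset_factory_share:.0%}")
```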


What the study clearly demonstrates is that the training data used for LLMs and image recognition models causes them to have “learned to perceive a myth, but not a reality”. This stems from representational biases in the training data and from the lack of transparency around large-scale industrial factory farming, which together entrench world views that are not grounded in the day-to-day suffering that billions of animals endure. Unless we resolve how we train AI models, this issue will continue to permeate future generations and the cycle of animal suffering will persist in perpetuity.


Can AI improve animal welfare?

While some AI models, like LLMs and image recognition programmes, can entrench animal suffering, other AI models are being created to improve animal welfare in very practical ways, with tangible benefits to animals that can be readily measured.


Online abuse

One example is using AI to reduce online animal cruelty on platforms like YouTube and Instagram. By training platforms to recognise distinct patterns of movement and sound indicative of animal abuse, the systems can flag the content so that it can be removed before it is visible to a wider audience. This proactive approach significantly reduces public exposure to abusive content, which benefits both the public and the animals that suffer from abuse: the “scalability” of abusive content is curtailed, and the likelihood of repeat abuse is reduced.
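Conceptually, the flagging step can be as simple as holding content back for human review when classifier scores cross a threshold. The sketch below is a minimal illustration only; the score names, weights and threshold are assumptions rather than any platform's actual system.

```python
# Conceptual sketch: combine hypothetical motion and audio abuse scores and
# flag the content for human review when the combined score crosses a threshold.
def should_flag(motion_score: float, audio_score: float, threshold: float = 0.8) -> bool:
    combined = 0.6 * motion_score + 0.4 * audio_score
    return combined >= threshold

print(should_flag(motion_score=0.9, audio_score=0.85))  # True: withhold for review
print(should_flag(motion_score=0.2, audio_score=0.1))   # False: publish normally
```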


Animal testing

In the United Kingdom, the law on the use of animals in experiments is set out in the Animals (Scientific Procedures) Act 1986 (“ASPA”). The ASPA 1986 Amendment Regulations 2012 (“ASPA Amendment”), which came into force in 2013 after EU amendments were transposed into the 1986 Act, amend ASPA and focus on the use of animals in testing. The transposed EU amendments incorporate the “3Rs” – replacement, reduction and refinement: replace animals with non-animal methods where it is scientifically possible; reduce the number of animals used while still achieving scientific progress; and refine practices to reduce suffering, distress and lasting harm.


AI could be used to completely replace animal testing, in line with the 3Rs, which would vastly improve the welfare of animals used in laboratories for scientific research. AI models can process and analyse vast amounts of data in a very short period, compared to lengthy and costly experiments on animals. They are also capable of simulating human biology and physiology, which provides a more human-relevant approach to scientific research than using animals. This could improve the accuracy of predictions of toxicity, efficacy of substances and medications, and other endpoints compared to animal testing, which is beneficial both to humans and to the animals which would no longer be required for such procedures.
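As a minimal sketch of the kind of in-silico prediction described above, the example below trains a classifier to predict a toxicity label from molecular descriptors. The data is randomly generated purely for illustration; real systems rely on curated chemical datasets and far richer features, and the descriptor names and labels here are assumptions.

```python
# Toy in-silico toxicity prediction: a classifier trained on synthetic
# molecular-descriptor data instead of animal tests.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                  # 8 invented molecular descriptors
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic "toxic / non-toxic" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", round(model.score(X_test, y_test), 2))
```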


The pharmaceutical giant AstraZeneca has developed an industrialised machine learning platform, known as Evinova, to streamline pharmaceutical drug discovery and clinical trials, which will help to reduce the use of animals in these experiments. With the digital health market due to exceed $900bn by 2032, we should expect to see more pharmaceutical majors adopting similar approaches and incorporating AI into their testing, reducing the number of animals used in first-round testing and thereby replacing and reducing the need for animal testing.


AI can also improve animal welfare in the animal agriculture sector, in combating animal trafficking and the illegal pet trade, and in many other areas.

 

Conclusion

While AI is still a largely new phenomenon and has a great deal of progress still to make in its general development, it has even further to go with regards to how it perpetuates animal suffering through LLMs and image recognition models. However, AI is also being deployed for positive ends, such as reducing the potential for media showing animal abuse to reach a wide audience and reducing the need for animals in scientific experiments, thereby replacing the need for animal testing, alleviating suffering and improving the welfare of animals which would otherwise have been used. While it is impossible to measure the net positive or negative impact of AI on animals – the entrenchment of world views cannot be weighed precisely against practical measures that reduce animal suffering – what is clear is that we have an opportunity to improve animal welfare, and we should be using AI, in every way available to us, to relieve rather than perpetuate animal suffering.


[1] Programmes where the source code is made available to the public for modification as users or large language model (“LLM”) developers and engineers desire

[2] Often AI will be “generative”, meaning that it is trained by LLM engineers and developers who write algorithms and design the ontologies and hierarchies that underpin the AI system.

[3] Hagendorff, T., Bossert, L., Tse, Y.F. and Singer, P., “Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals”, Springer, 29 August 2022, s43681-022-00199-9.

 
 
 
