The Unethical Underbelly of AI: A Call for Universities to Take a Stand

2 May 2025

Caroline Ball, Academic Librarian, University of Derby

The rapid rise of generative artificial intelligence (Gen AI) has sparked intense debate about its implications across many sectors, including higher education. While supporters of Gen AI in universities praise its potential to revolutionise teaching, learning, and research, a closer look reveals a deeply troubling ethical landscape that demands our attention.

The attraction of AI in academia is undeniable. AI-powered tools hold the potential to personalise learning experiences, streamline administrative tasks, and most importantly, accelerate research discoveries. However, these potential benefits are overshadowed by the ethical pitfalls that permeate the AI ecosystem. The current state of AI development and deployment raises serious concerns about algorithmic bias, censorship, copyright infringement, environmental sustainability, labour exploitation, and systemic inequities.

One of the most pressing ethical concerns surrounding the use of AI is algorithmic bias. AI algorithms are trained on massive datasets that often reflect and amplify existing societal biases and prejudices. When we introduce AI into admissions systems, assessment tools, library catalogues, or research methodologies, we risk embedding discriminatory patterns into our educational infrastructure, perpetuating systemic inequities and reinforcing stereotypes. The opacity of many AI algorithms, often referred to as ‘black boxes’, further hinders transparency and accountability, making it difficult to identify and address potential biases.

The very foundations of AI systems like Gemini, ChatGPT, DeepSeek and Claude are often built upon the exploitation of marginalised communities and the environment. The extraction of vast amounts of data without informed consent perpetuates a system of surveillance and control that undermines democratic principles and disproportionately affects vulnerable populations. The reliance on low-paid workers in the Global South to perform data labelling and content moderation tasks further exacerbates global inequalities, exposing these individuals to exploitative practices and precarious working conditions.

The use of AI in content creation and data mining raises significant copyright concerns. The legal ownership of AI-generated content remains ambiguous, and the potential for AI to infringe on the copyrights of original creators is a growing and much-publicised concern. The training data used by AI models often consists of copyrighted material scraped from the internet without permission or compensation to the original creators. Content creators, including artists, writers, and musicians, are witnessing their work used to generate profits for AI companies without recognition or remuneration.

To add insult to injury, academic publishers are now beginning to license access to their content to AI companies, some without providing academics the opportunity to opt out. This forces complicity on academics, turning their intellectual contributions into commodities for AI profit without their consent and with no remuneration for them or their institutions.

Active censorship is another concern. In accordance with guidelines that require Chinese AI tools to align with the country’s ‘core socialist values’, DeepSeek, the newest of the Gen AI models, censors answers relating to topics sensitive to China’s political leadership, including Taiwan, Tiananmen Square, and the Dalai Lama. Other AI systems exhibit their own forms of bias and censorship, even when less explicit, as already discussed.

In this era of climate change, the environmental impact of AI also cannot be overlooked. The training and operation of AI models, especially large language models, demand enormous computational resources, resulting in a considerable carbon footprint. The data centres that power a single AI model can consume more water in a year than a mid-sized town. This resource-intensive nature necessitates a critical examination of AI’s environmental sustainability and its long-term impact on the planet.

Universities, as institutions committed to knowledge, ethics, and social responsibility, have a crucial role to play in shaping the future of AI, not just through the research that influences its development, but through the ethical standards they adopt, which could determine the futures of AI, the educational sector, and humanity itself. It is imperative that universities take a principled stance and lead in developing ethical frameworks for AI use, rather than setting ethics aside for fear of being left behind in the uncritical rush to adopt AI.

Universities need to develop comprehensive AI literacy programmes that help students, researchers, and staff understand AI tools’ limitations, biases, and ethical concerns. Institutions should establish strong ethical AI acquisition policies that question AI vendors on transparency, data practices, algorithmic bias testing, environmental impact, and fair labour standards. Universities across the UK could collaborate to develop shared assessment criteria, reducing duplication of effort and creating stronger collective standards.

Intellectual output must be protected by advocating for opt-out rights for university-produced content, ensuring that research publications and educational materials cannot be used for AI training without consent and appropriate compensation. Where relevant, universities could draw on their own computing and science departments to develop small-scale, transparent AI tools trained on carefully curated, ethically obtained datasets, demonstrating alternative approaches to AI development.

Active engagement with UK and EU AI regulation is also essential, with universities contributing academic perspectives to emerging frameworks such as the EU AI Act and UK policies currently in development, and advocating for regulatory approaches that embody values of transparency, equity, and knowledge justice.

Importantly, universities should commit to documenting and sharing AI impacts by establishing monitoring systems that record both the benefits and harms of AI implementations across their operations, creating an evidence base for future decision-making and contributing to the growing body of research on AI ethics in higher education.

Just as drugs can have unintended side effects and risks, so too can technologies, yet there is no regulatory framework to ensure that innovative technologies are rigorously evaluated for safety, efficacy, and potential societal harms before being widely deployed. We must rely on our own critical faculties, and yet in the sector where, above all others, one would expect to see such criticality, it is glaringly absent.