How AI May Be Chipping Away at Human Dignity

Björn Fasterling is professor of law and business ethics at EDHEC Business School in France. He is a member of EDHEC’s Augmented Law Institute and directs its Digital Ethics Officer Program. His research and teaching interests cover the human rights responsibilities of business and digital ethics. He currently serves as an executive board member of the European Business Ethics Network (EBEN). In the fall of 2025, he visited Nova Law School in Lisbon. In a prior life he was a business lawyer in Berlin.

 

How AI May Be Chipping Away at Human Dignity[1]

Almost every academic colleague I speak with has encountered student assignments that appear to be written largely with the help of large language model (LLM) tools such as ChatGPT, Gemini, and related applications. As an academic journal reviewer, I have also seen manuscript submissions containing non-existent references, errors that are difficult to explain except as model-generated fabrications. I have even heard of reviewers delegating peer review to AI systems, which has reportedly led some authors to embed hidden prompts in manuscripts (for example, in white text) instructing the system to recommend acceptance.

 

The loss of human experience

What worries me about these practices is not only the element of cheating. In some respects, the deeper concern for me is the loss of human experience that accompanies the outsourcing of cognitive work. Students may spend less time writing text, but they also lose the learning that comes from struggling with argumentation, structure, and language.[2] The same applies to academic authors and reviewers: authors lose part of the craft of writing, and reviewers lose the experience of engaging critically with a colleague’s work and participating in scholarly dialogue.

The issue extends beyond academia. If one can “create masterpieces” using music generators such as Suno, why invest time in learning an instrument, an activity that is time-consuming and often frustrating? The lost experience, in that case, is precisely the hard practice and the very gradual acquisition of musical understanding. A similar dynamic may emerge with real-time translation tools built into everyday devices: they can reduce the incentive to learn additional languages. The legal profession offers further examples. As a young lawyer, I remember disliking contract drafting because of its tedious repetition. AI can now produce draft contracts quickly, but junior lawyers may lose the formative experience of learning how contracts are built (there is also a labor-market implication: since firms historically assigned drafting to junior lawyers, automation can reduce demand for entry-level positions, with predictable pressure on the job market).

 

“Unleashing” potential?

At this point, the obvious counterargument is that AI merely automates tedious, data-intensive cognitive tasks. From this perspective, automation would “free” humans for more context-sensitive and strategic work and would also increase the importance of distinctly human qualities such as empathy, love, and social connection. A historical parallel of technology diminishing the relevance of one cognitive capacity but “unleashing” many others is the invention of writing. Writing reduced the need to memorize, yet it advanced civilization by enabling the storage and transmission of information, so that cognitive effort could be shifted to other activities.

The analogy to writing is suggestive but has limits. Writing arguably made one cognitive experience (memorization) less relevant but expanded many others. Contemporary AI tools, by contrast, risk displacing an increasing range of cognitive experiences at once: writing, summarizing, translating, drafting, and more, to the point that there are not many types of cognitive experience left that could be expanded. Moreover, the claim that AI will lead humans to cultivate empathy and love does not appear very compelling to me, at least not for now. It is not obvious that gains in automation are matched by gains in empathy and love. In some contexts, the trend clearly runs in the opposite direction, as illustrated by the prospect of cheaper warfare enabled by AI-powered drones.

The care sector is perhaps where the optimistic argument is most plausible. In hospitals, technology can support clinicians and, in principle, free up time and attention that could be reinvested in the clinician-patient relationship. If efficiency gains were used to increase the time and attention available to patients, AI could indeed enhance the experience of human interaction in a care setting. But if efficiency gains are instead used primarily to reduce staffing, so that fewer clinicians treat more patients without any meaningful increase in clinician attention per patient, then there is no comparable experience gain. Given budget pressures in many health systems, I do not currently expect major improvements in clinician-patient relationships, at least none that would qualify as “unleashing empathy” in the health sector. That said, the amount of time clinicians can spend with patients is ultimately an empirical question. Claims about the effects of AI in this respect can be confirmed or refuted, and positive developments are not out of reach.

 

Any relevance for human rights? Attritional harm

Up to now, I’ve been describing a somewhat diffuse kind of loss: less practice, less dialogue, less attention invested in what we do. The moment you try to translate that into the language of human rights, however, things become difficult. Human rights debates around AI usually focus on more readily identifiable harms such as privacy breaches, discrimination, due process failures in automated decision-making, or exploitation in digital and physical supply chains. So where, if anywhere, does this kind of “experience loss” belong? At first it appears that it doesn’t, at least not in any straightforward way. There is no human right to “preserve a range of human experiences”. The fact that students may have less of the learning that comes with writing during their academic careers does not amount to a human rights “violation”. After all, people remain free to write their assignments without extensive reliance on GenAI. AI may place pressure on, or disincentivize, certain human experiences, but it does not preclude them. In that sense, the human rights frame can seem ill-suited to the problems I have been describing. This may also explain why few human rights scholars have engaged with the issue.

Still, the fact that the above concerns do not amount to “violations” does not mean they are irrelevant to human rights. One way to apply human rights differently is through an account of gradual, attritional harm, an approach developed by Sue Anne Teo, who speaks of “slow violence to human rights”.[3] Her argument focuses on disempowerment: as individuals become less able to understand algorithmic mediation enabled by mass data, they also become less able to question the conditions that harm them.

 

The loss of shared space between humans, and human dignity

I think the potentially attritional effect of AI on dignity may run deeper than that. The automation of cognitive abilities holds out a promise of dramatic productivity gains: we can write more text, generate more music, and process more problems in far less time. In competitive settings, for example in business or the legal profession, those gains are scarcely optional; they are becoming pressures.

But while AI expands what we can produce, it does nothing to expand our capacity to attend. Human attention and experience remain biologically limited, no matter how much AI increases our output. Something has to give. My concern is that what gives is the experience itself. We come to experience less of what we do. And if we experience less, we also have less experience to share with others. Take the simple academic example already mentioned above: when a student really thinks through a problem, writes her own solution, and submits it, she shares a learning experience with me. I can assess it, respond to it, and offer feedback that connects to what she understood. If the assignment is instead a collage of prompted text, there is much less experience being shared, and much less that I can meet with feedback.

In my opinion, this matters for dignity. A diminished experience of the world produces a thinner space shared between humans. Dignity is not only about each person being unique and therefore deserving respect. There is also a relational dimension. Dignity is something that shapes how we stand in relation to one another. We can extend dignity to others through attention, recognition, and meaningful engagement. Where there is less shared human experience, there are fewer occasions for that extension of dignity.

I think that this is one way to understand how AI may be “chipping away” at human dignity.

 

DISCLOSURE: There is obvious irony here. I used ChatGPT 5.2 to polish some of the above sentences.

 

[1] I am grateful to Claire Bright and her students from Nova Law School in Lisbon, where I held a short “technology and human rights” class in October 2025. As our time together was very limited, I could not discuss all the ideas I would have liked to. This blog entry spells out one of those undiscussed ideas. Since the idea is still only loosely developed, perhaps a blog entry is the most appropriate medium in which to share and mature it. The title “chipping away at human dignity” was not suggested by ChatGPT or other LLM-based technology, but by my colleague Dorothee Baumann after hearing me out on the subject. So, thank you to Dorothee, and of course, I do not imply that she endorses what I am saying in this blog contribution.

[2] A widely discussed study reports harmful effects of LLM use on learning; see Kosmyna, N. et al. (2025), “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task”, https://doi.org/10.48550/arXiv.2506.08872

[3] Sue Anne Teo (2025), Artificial Intelligence and its “slow violence” to human rights. AI and Ethics 5 (3), 2265-2280. https://doi.org/10.1007/s43681-024-00547-x

 

Suggested citation: B. Fasterling, ‘How AI May Be Chipping Away at Human Dignity’, NOVA BHRE Blog, 10 January 2026