This text was submitted and positively assessed as a seminar paper in the research seminar “Ethics in an Era of Emergencies”.
1. Introduction
When talking about AI ethics, it is essential to approach the topic from a broad perspective. There are many interesting aspects, such as: What consequences does AI have for the environment? What does the development of Narrow AI, Artificial General Intelligence or Artificial Superintelligence mean for our view of human beings? Take a car crash as an example: who is responsible if the AI of a self-driving car decides who will be killed in the case of a crash (Max Planck Institute for Human Development 2020)? The German Institute for Employment Research (2024) forecasts that AI will change many jobs due to automation (see also Davenport 2018, 131; Sholler & MacInnes 2024, 338-340). Therefore, we can ask whether or how AI will change our jobs, our social systems and our society. As you see, there are several important questions of responsibility and sociality as well as anthropological and ethical questions.
To handle these issues, it is important to de-generalize and to focus on one aspect. The most interesting point for me is the question of AI and what it means for our thinking about work, which is a major topic in itself. It is worth discussing what the term work means today and for the future, as well as how work and AI correspond in general. Jean-Philippe Deranty, Carl Rhodes and Ruth Yeoman address these questions in an article (2023), and their thoughts serve as a general basis for this paper (regarding the term work). Beyond that, Deranty deals with the question of the future of work, in particular as a critique of the post-work conception (2024). Without recapitulating the whole anthropological debate, the following explanation will be used as a working definition of the term work for this paper:
“[W]ork is any activity that is engaged in with the aim of producing an output that is of value for someone. The output to which value is attached, in the most general sense of being valuable to someone, can be a material object but it can also be a change in someone’s circumstances which from their perspective is useful […]. Work is one of the characteristic features of the human species, alongside religion, war, art, living in a particular kinship form, having one’s action constrained by moral rules that can be enforced through symbolic and material punishment, and so on.” (Deranty 2024, 8)
This short and general definition tries to summarize the characteristics of work that are discussed nowadays. It also includes aspects like care work, educational work, forms of self-employment and work for economic organizations.
It seems almost self-evident that AI will transform work in such a way that AI will “become able to perform tasks once the unique domain of humans” (Deranty et al. 2023, 801). Under this premise, the research question is: Should we substitute some human work by AI? This paper uses a normative approach. First, the potentials of AI are discussed with a focus on human work. Then, the spotlight is set on the previously mentioned factors of work. The third and final step is to weigh the arguments and to answer the research question.
2. The Expectations of AI
Today, we are not yet able to seriously estimate the real potential of AI. With ChatGPT becoming well known to the public, it feels like a new era of AI, but a clear view reveals that humanity is still at the beginning of AI development. Today’s famous AI products, e.g. ChatGPT, Microsoft Copilot and Google Gemini, are based on so-called Large Language Models (LLMs). An LLM is a complex statistical model of human language: after the user has prompted a task, the AI calculates the probabilities of language-based tokens and generates the mathematically most likely answer. The AI does not check whether the answer is correct or wrong. This kind of Narrow AI might be useful for getting simple daily tasks done, for example producing a short overview or summary of the answers of survey participants. But the user has no guarantee that the result is right.
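The core mechanism sketched above, scoring candidate tokens and emitting the most probable one, can be illustrated in a few lines of code. This is a deliberately minimal sketch: the vocabulary and scores below are invented for demonstration, and no real LLM works with such a tiny hand-written table.

```python
import math

def softmax(scores):
    """Turn raw model scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def most_likely_token(candidates):
    """Greedy decoding: return the token with the highest probability."""
    tokens, scores = zip(*candidates)
    probs = softmax(scores)
    best = max(range(len(tokens)), key=lambda i: probs[i])
    return tokens[best], probs[best]

# Hypothetical scores for the next token after the prompt "The sky is"
candidates = [("blue", 2.1), ("green", 0.3), ("falling", 0.9)]
token, prob = most_likely_token(candidates)
print(token)  # prints "blue": the statistically most likely token, not a checked fact
```

The point of the illustration is the last comment: the procedure selects what is probable, not what is true, which is exactly why the user has no guarantee that the result is right.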
Following Leopold Aschenbrenner, a former researcher and developer at OpenAI (the company which developed ChatGPT), there are several more steps to be taken in AI development. Within the next couple of years, he expects huge developmental leaps: from Narrow AI to Artificial General Intelligence (AGI) and thereafter to (Artificial) Superintelligence. While ChatGPT-4 has the capability of a “smart high schooler”, AGI can be “smart as PhDs or experts that can work beside us as coworkers”; on reaching the level of Superintelligence, AI will exceed human capabilities and “dramatically accelerate scientific and technological progress” as well as enable “an industrial and economic explosion” (all Aschenbrenner 2024).[1]
Following these technical questions and the outlined technological progress, it is to be expected that the increasing use and significance of AI will force a significant economic and cultural transformation (Aschenbrenner 2024; Deranty et al. 2023, 801; Koeszegi et al. 2024, 18; Vallor & Ganesh 2024, 402). Because AI can “enable new forms of agential power” (Vallor & Ganesh 2024, 397), it must be discussed who is responsible for the results of automated work. The mathematical complexity of the code of LLM-based AI alone opens this question. Transferred to Aschenbrenner’s concept of Superintelligence, this issue becomes much more urgent. This new power is built on two factors: the quantity and the quality of the results. Quantity means in this context that AI can fulfill tasks faster than humans; i.e. if you prompt an LLM-based AI well, it produces a useful text within moments or draws a picture faster than people can take one with their smartphones. The factor quality means that for some tasks the outcome of AI is already better than the human one. At the latest under the premise of AGI or Superintelligence, we can expect much better results from AI than people were ever able to achieve. Following Sabine Koeszegi et al., it is to be expected that decisions made by AI- respectively automation-driven systems will bring “increased efficiency in decision-making, including lower costs and better outcomes” (2024, 18) in all areas of life: for the economy, the public sector and healthcare.
At first glance, these issues – responsibility, quantity and quality – seem to be almost banal questions with quick answers for the topic at hand. However, the thesis of Shannon Vallor and Bhargavi Ganesh reveals their explosiveness: AI is joining “the metaphysical lineage of families, communities, religious institutions and political states” (2024, 399). Following this thought, AI is becoming a new independent social institution with potential power over the life of every single person. In the future, AI will become a relevant political and cultural factor. AI means a new vulnerability for the human being. This is caused by factors like the bias of AI systems (Davenport 2018, 175-179; Theodorou & Aler Tubella 2024, 38), a lack of transparency of the code (Davenport 2018, 179-182; Koeszegi et al. 2024, 25) or the possible independent agency of AI (Aschenbrenner 2024; Vallor & Ganesh 2024).
Aschenbrenner states that the AI-developing companies will create “superintelligence by the end of the decade” (Aschenbrenner 2024). Following his hypotheses, we will get a powerful tool that exceeds human capabilities. This new tool might open the gate to a transhumanist or posthumanist transformation of our culture, our society and our thinking about humanity. The point is not to discuss which door will be opened – the transhumanist or the posthumanist one, as Eugen Dolezal (2024) discussed using the example of Nick Bostrom – but rather, as Stefan Selke (2023, 120) summarizes: in an age of Superintelligence, resurrection will be defined completely anew. He warns against anthropomorphizing Superintelligence, but at the same time he demands that we accept that this technology can be a real game changer. Therefore, we should seriously ask the relevant anthropological and ethical questions. The important point is that AI and Superintelligence have the potential to change the narratives of humanity (like resurrection), and this means a considerable vulnerability for every human being.
The topic of this paper is not Superintelligence. However, this section was meant to sketch the potentials of Artificial Intelligence and the expectations placed on AI. This paper uses the escalation model of AI – Superintelligence – to emphasize the seriousness of this issue.
3. Some Factors of Substituting Human Work by AI
Certainly, there are a lot of ethical and anthropological questions regarding AI to discuss: e.g. freedom and autonomy, political participation, social justice and justice in general, truth, ecological sustainability and data protection. Due to the expected profound cultural transformation, as outlined in the previous section, I will focus the discussion on some categories of work. To finally answer the guiding question, the spotlight will be set on the following questions in the context of work: What does it mean if AI can do human work better and faster? What does the issue of political power and responsibility mean for the use case of work? Biographies and social integration are massively affected by labor – what does it mean if AI can do the same work at the same or a better level? These questions relate to the definition of work by Jean-Philippe Deranty quoted in the introduction of this paper and its central aspects.
3.1. The Quality of the Work Results
To discuss this aspect, we can use the Facebook example of Thomas Davenport (2018, 172-175): AI-generated fake news hijacked this social media platform on several occasions. Facebook’s automatic algorithms could not fend off this kind of attack. Later on, Facebook established a system of (journalistic) fact-checking and technical solutions. Nowadays, Facebook uses AI to respond to AI-caused issues (Davenport 2018, 174; Meta/Facebook 2025). This example shows that the quality of work results can be improved by using AI – in this case, to identify and fight against fake news or hate speech. And that is the point: the use of AI should improve the results of acting. What better results means should be defined by subject-relevant criteria – in the example: to reduce fake news more consistently. As the example also shows, this better is part of a concrete, conscious or unconscious, moral perspective. It is not enough to do things more efficiently or effectively. In the Facebook example, the “Community Standards define what is and isn’t allowed on Facebook” (Meta/Facebook 2025), and better means enforcing these rules and safeguarding a discourse on the platform (without AI hijackers) that is transparent, at least as desirable from Meta’s perspective. In this case, AI is used to implement moral values, the so-called Community Standards. On the way to the goal of achieving morally better results with AI, there are still open challenges regarding fairness and equality, e.g. bias and the fact “that the outcomes of some machine learning algorithms put certain groups at a disadvantage” (Davenport 2018, 175). Finally, the question remains whether this is an aspect that is directly relevant to AI technology or rather a question of the data with which AI is trained.
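The logic of rule-based moderation described in this example can be sketched as follows. This is a deliberately simplified toy, not Meta’s actual system, which relies on trained classifiers and human review; the “standards” and the sample posts are invented.

```python
# Toy sketch of automated content moderation: check each post against
# explicit rules derived from "community standards" and flag violations.
# The patterns and posts are invented; real systems use trained models.

BANNED_PATTERNS = ["miracle cure", "rigged election"]  # hypothetical rule set

def moderate(post: str) -> str:
    """Return a moderation decision for a single post."""
    text = post.lower()
    if any(pattern in text for pattern in BANNED_PATTERNS):
        return "flagged for review"
    return "allowed"

print(moderate("Doctors hate this miracle cure!"))  # prints "flagged for review"
print(moderate("Nice weather in Berlin today."))    # prints "allowed"
```

Even this toy makes the normative point visible: the moral judgment sits in the rule set, not in the mechanism that applies it.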
3.2. The Impact of Work on Workers or Employees
Another dimension is the impact of AI on employees and their self-image as working people. One of the central premises of thinking about AI is to understand AI as a virtual assistant. This system of assistance may change the daily work in hundreds of offices, factories and other institutions. AI is, or will soon become, a tool to decrease the personal workload of every employee. For every transformation of working environments, and especially for this new form of divided work, Ruth Yeoman diagnoses:
“When instituted into work-regimes characterized by high levels of voicelessness, alienation, and lack of collective decision-making, technology undercuts the human knowledge, skill, and craft needed to form cognitive capabilities—including evaluation, judgment, and decision-making—and even emotional capabilities, such as empathy and social connectedness.” (2021, 36)
Responding to this factor of the quality of work, Yeoman recommends in her conception of Meaningful Work developing new narratives of work and thereby keeping labor meaningful for the individual worker. Following her argumentation, this meaningfulness requires a well-balanced self-image between technical possibilities and well-reflected moral values. In her suggested conception of Meaningful Work, an “ethic of care”, Yeoman describes the work of an AI future as going beyond a technocratic definition of work: human work will be to care for each other and for the resources of the world, and it “extends even to caring for technology” (Yeoman 2021, 42). If we replace human work by AI in the future, it will result in new capacities to solve the substantial questions of life, those questions that make an essential difference. If the use of AI “help[s] us enhance our ethical, relational, and specialist capabilities so that we are equipped to identify, frame, and respond to complex moral and practical problems in the human and natural world” (Yeoman 2021, 42-43), the quality of the life of working people and the quality of being a society will increase. Only then will there be no alienation of workers, but rather a chance to reverse this effect of industrialized labor. In this case, work can be more inventive and – following Yeoman – more care-ful.
Denise Kleinrichert argues that the “proposition of empathy defines the cognitive human duty toward others’ well-being, and is grounded in a theoretical concern toward initiating others’ improved states of well-being by designing and deploying AI with an empathetic human lens to empower others” (2024, 2755). This adds an explicitly deontological approach to the question of the impact of AI on working persons. It reminds us that AI should not only support humans in pursuing the right aims, but should also be a moral good in itself. With her conception of empathy, she also emphasizes the significance of meaningful workplaces. Subsequently, she warns against “a lack of direct interpersonal human contacts” (Kleinrichert 2024, 2748) due to the use of AI in workplaces. In this context, she deliberates on the (Kantian) concept of dignity. Additionally, she expands the topic of AI and work to a more holistic view: work is not an end in itself but rather aims at fulfilling the interests of third parties. Therefore, she describes empathy as a rational conception and “a human duty to interpersonal connectedness, resulting in prosocial human agency” (Kleinrichert 2024, 2573).
To summarize the three major points: (1) AI should not lead to a new kind of alienation; (2) to resist this danger, workplaces should be meaningful and human work should (again) ask the meta-questions of work; (3) one of the big challenges of humans working with AI is to strengthen the interpersonal dimension of work.
3.3. The Bias of AI – an Aspect of the Responsible Use of AI
The topic of responsibility and AI (and work) is discussed by nearly every author. The issue of who is ultimately accountable for decisions made by AI is broadly discussed in the ethical and scientific literature. The answer seems almost axiomatic: the consensus is that ultimate moral responsibility lies, or must lie, with a human. This premise may depend on the other premise of working with AI, namely to think of AI as an assistant to employees or workers. But in the everyday practice of using AI, this consensus depends especially on the aspect of AI bias.
Koeszegi, Zafari and Grabler criticize the lack of transparency of AI code and the apparent neutrality of decisions made by AI. They point to the deceptive sense of security about the fairness and rationality of computer-made decisions. The non-humanity and automation of a process “lends it certain neutrality and legitimacy and thus weakens the users’ sense of competency. It has been argued that algorithmic and automated processes could increase the perceived ‘neutrality’ of the result” (Koeszegi et al. 2024, 24). Furthermore, they criticize the opacity of the code and the difficulty of understanding it even for experts. For non-experts, these automated procedures seem completely inscrutable.
In addition to this first aspect, Andreas Theodorou and Andrea Aler Tubella point out the dimension of AI bias. They note that in current AI systems it is completely unclear which and whose values are (consciously or unconsciously) implemented in the code. As part of this problem they identify “the incorporation of high-level values such as transparency, fairness, and accountability in the design and deployment of intelligent systems” (Theodorou & Aler Tubella 2024, 34). To solve this problem, they demand the deliberate embedding of values into AI systems to increase the trustworthiness of these systems. By concretizing, operationalizing and contextualizing moral values, it is possible to develop a catalogue of values and implement it in the AI. Making this value implementation a part of AI design secures “meaningful human control” (Theodorou & Aler Tubella 2024, 39) over AI and its outcomes. In particular, from the perspective of philosophical and theological ethics, it must be asked critically which and whose values these could be that shape the code. Theodorou and Aler Tubella leave this question unanswered under the keyword of the “operationalization or abstraction gap” (2024, 41). The crucial aspect for them is to achieve sustainable social control over AI systems and to de-bias these systems.
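What operationalizing a catalogue of values might mean in practice can be hinted at with a small sketch. This is my own illustration of the general idea, not an implementation proposed by Theodorou and Aler Tubella; the rules and the sample decision are invented.

```python
# Sketch: moral values reduced to machine-checkable rules that audit an
# automated decision after the fact. The value catalogue and the decision
# record below are hypothetical examples.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ValueRule:
    """One operationalized value: a name and a machine-checkable test."""
    name: str
    check: Callable[[Dict], bool]

def audit(decision: Dict, rules: List[ValueRule]) -> List[str]:
    """Return the names of all value rules the decision violates."""
    return [rule.name for rule in rules if not rule.check(decision)]

catalogue = [
    ValueRule("transparency", lambda d: "explanation" in d),
    ValueRule("fairness", lambda d: d.get("uses_protected_attribute") is False),
]

decision = {"outcome": "loan denied", "uses_protected_attribute": False}
print(audit(decision, catalogue))  # prints ['transparency'] – no explanation was given
```

Even this toy exposes the “abstraction gap” the authors name: whether checking for an explanation field really captures transparency is precisely the open question.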
The critical question remains: how can AI systems be protected from malicious manipulation, and how can we protect society, workers, employees and workplaces from harmful consequences of using AI? Independent of the proposal to embed “good” values into AI, a further aspect is to educate workers and employees, as well as pupils and students, i.e. future workers and future employees. Davenport (2018, 144-146) as well as Koeszegi et al. (2024, 24) consider it important to train for new skills in handling AI. They insist that the central future skill is to look critically at AI and its results, and to gain an understanding of the structures of the data and the biases of AI. Over and beyond this, it is important to remain human and not to compete with the machine. As emotionally intelligent beings, “humans have a competitive advantage in the workplace if they are perceptive, sensitive, and insightful about human emotions” (Davenport 2018, 145).
Following the inductive method of this section, it can be generalized: there is no neutral or transparent AI, due to its basis in and training on human thought. Responsibility for the results remains with the individual. However, employees and workers must be empowered to take on this responsibility, on the structural side (critically reviewing the bias and transparency of the code) as well as on the individual side (future skills). Referring to Theodorou and Aler Tubella, this also means that it is worth considering whether there should be ways to determine the (moral) values processed by AI. It might be a chance to increase transparency, not in general but in a concrete way, by designing the values for every request.
3.4. Work as a Part of Social Justice
Linking the terms work and the sociality of man has to be done with utter care. The connection between work and, e.g., freedom or social participation “has been used ideologically to justify some of the most brutal forms of oppression in human history” (Smith 2021, 17). In his critique of Axel Honneth’s conception that “to participate in the labour market” (Smith 2021, 24) is an expression of social freedom, Nicholas Smith confirms the assumption that work is an integral part of the social freedom of every human being. He states that social freedom must be guaranteed even under the condition of automation:
“It is a freedom that requires a capacity for subjective appropriation, mutual completion, and self-determination across social roles. Social emancipation, understood as the unleashing of social freedom, is a matter of human beings coming to realize themselves through their key social relationships, relationships we enter in the course of meeting our individual and collective needs.” (Smith 2021, 28)
These aspects of political power, social justice as well as freedom and work should also be discussed with regard to AI. The central condition of that discussion is that AI does not lead to a lack of freedom, that humans are not sorted out by their usefulness, and that humans do not get suppressed in any other way. If these conditions are not met, we will not have learned the lesson from history and must end the use of AI at work. The aim of using AI as a labor tool must rather be to protect human freedom. At this point my argumentation follows on from the previous aspect (the responsibility for the results of using AI); e.g.: How can the freedom of AI users be ensured if there is no sufficient transparency of and knowledge about the code of the tool they are using?
Previously, Ruth Yeoman’s demand to create new narratives of work was considered from the individual perspective. Its social dimension shall be described briefly. The loss of appropriate narratives for work due to transformation, Yeoman states, has consequences that undermine democratic social structures. She calls for a strong and comprehensive cooperation between the different relevant actors “at a system-level if they are to bring ethically enabled technology to bear upon complex challenges” (Yeoman 2021, 44): the public administration, AI companies, politics and civil society. The aim of this cooperation is to preserve democratic values in a society working with AI. She has “described these goods elsewhere as freedom as non-domination, autonomy as non-alienation, and dignity as being recognized as a particular person with a life of one’s own to lead” (Yeoman 2021, 35). Of course, her descriptions of freedom, autonomy and dignity are worth discussing. But the point she addresses is: for the upcoming technical transformation of workplaces and of work itself, a society needs suitable values and meanings to leverage it and subsequently to preserve social justice and political empowerment.
A third sub-aspect is the practical question of income and paying the daily bills if we substitute human work by AI. Using data from the Federal Reserve and the OECD, Dan Sholler and Ian MacInnes observe a “general trend of wage-productivity decoupling” (2024, 343) over the last decades. If the productivity of an economy increases due to technical progress, this does not mean that employees will earn a higher income. In their opinion, AI will speed up this development, i.e. “wage increases will go to the highest earners, while the lowest earners will have modest to no growth in their incomes” (Sholler & MacInnes 2024, 344). This unequal share in economic growth is an aspect to be considered. If a society is to achieve any kind of social justice, and subsequently political justice and social participation as well, would it not be helpful not to widen this income gap too far?
4. Conclusion
“A positive imagining of the future of work can invigorate aspirations to repair, rehabilitate, and heal experiences of alienation in our present relationship with work. Positive technologically-driven future of work narratives are hopeful that a synthesis of human-machine capabilities will enable human beings to experience more freedom, autonomy and dignity in their work and that long-held aspirations of meaningful work for all can be realized.” (Deranty et al. 2023, 802)
I basically share this positive view of the future held by Deranty, Rhodes and Yeoman. However, the challenges on the way into this positive future must be overcome. The leading question of this paper was: Should we substitute some human work by AI? It led into the heart of the debate about the challenges of the future of work. As shown in this paper, labor can be replaced by AI. But for this substitution, three conditions have to be fulfilled:
- The use of AI increases the meaningfulness of human work. I pointed out Stefan Selke’s argument that AI will challenge us to redefine resurrection. With dignity and the desire for narratives in mind, it is time to at least reinterpret the biblical narrative of resurrection and thus avoid a new and maybe stronger form of alienation. An effort from the whole of society is needed to answer this anthropological question from multiple perspectives, with widely differing moral values, and to take advantage of the opportunities to sharpen the understanding of work. It would be, with reference to Yeoman, an expression of care (Yeoman 2021, 39).
- AI must increase the quality of the results of the task to be fulfilled and of the labor itself. Both perspectives have to be included: the results for the recipients of the service or product, and the anthropological and ethical conditions of work for the employees. When using AI as an assistant, a high level of transparency is needed regarding the AI code used as well as the bias of the AI. It is beyond question that the human user of AI is ultimately responsible. Therefore, it is important that the user has a tangible and genuine chance to know what she or he does by using a certain AI system.
- The use of AI at work must not endanger freedom or political and social justice. This shows the importance of the aspects of bias and transparency, of the employed and unemployed, of meaningfulness and alienation, of a good salary and a growing income gap. These complex and interconnected aspects require solutions that do not fall into under-complexity. Without these necessary clarifications, we would have to face the avoidable consequences of implementing AI. A serious social debate, excluding AI to ensure a minimum of truthfulness in the debating arena, is needed. As important as this demand seems: is it not also utopian?
Bibliography
Aschenbrenner, Leopold: (2024, accessed on: January 2nd, 2025). Situational Awareness. Retrieved from https://situational-awareness.ai
Bostrom, Nick (2014): Superintelligence: paths, dangers, strategies. Oxford: Oxford University Press.
Davenport, Thomas H. (2018): The AI advantage: how to put the artificial intelligence revolution to work (Management on the cutting edge). Cambridge, Massachusetts: The MIT Press.
Deranty, Jean-Philippe (2024): The case for work. Oxford: Oxford University Press.
Deranty, Jean-Philippe, Rhodes, Carl & Yeoman, Ruth (2023): Does work have a future? The need for new meanings and new valuings of work. Organization (London, England), 30, 799-808. doi:10.1177/13505084231186236
Dolezal, Eugen R. (2024): Der überholte Mensch: Kritische Anfragen an Nick Bostroms Konzept von Superintelligenz (BestMasters). Wiesbaden: Springer VS.
Institute for Employment Research: (2024, accessed on: December 12th, 2024). Job Futuromat. Retrieved from https://job-futuromat.iab.de/en/
Kleinrichert, Denise (2024): Empathy: an ethical consideration of AI & others in the workplace. AI & SOCIETY, 39(6), 2743-2757. doi:10.1007/s00146-023-01831-w
Koeszegi, Sabine T., Zafari, Setareh & Grabler, Reinhard (2024): The computer says no: how automated decision systems affect workers’ role perceptions in socio-technical systems. In: Garcia-Murillo, Martha, MacInnes, Ian, & Renda, Andrea (eds.): Handbook of Artificial Intelligence at Work. Cheltenham, UK: Edward Elgar Publishing, 16-31.
Max Planck Institute for Human Development: (2020, accessed on: December 12th, 2024). Moral Machine. Retrieved from https://www.moralmachine.net
Meta/Facebook: (2025, accessed on: February 2nd, 2025). How Facebook uses artificial intelligence to moderate content. Retrieved from https://www.facebook.com/help/1584908458516247/?helpref=uf_share
Selke, Stefan (2023): Technik als Trost: Verheißungen Künstlicher Intelligenz. Bielefeld: transcript Verlag.
Sholler, Dan & MacInnes, Ian (2024): AI and income inequality: the danger of exacerbating existing trends toward polarization in the US workforce. In: Garcia-Murillo, Martha, MacInnes, Ian, & Renda, Andrea (eds.): Handbook of Artificial Intelligence at Work. Cheltenham, UK: Edward Elgar Publishing, 338-355.
Smith, Nicholas H. (2021): Work as a realm of social freedom. In: Breen, Keith & Deranty, Jean-Philippe (eds.): The Politics and Ethics of Contemporary Work. Whither Work? London and New York: Routledge, 16-30.
Theodorou, Andreas & Aler Tubella, Andrea (2024): Responsible AI at work: incorporating human values. In: Garcia-Murillo, Martha, MacInnes, Ian, & Renda, Andrea (eds.): Handbook of Artificial Intelligence at Work. Cheltenham, UK: Edward Elgar Publishing, 32-46.
Vallor, Shannon & Ganesh, Bhargavi (2024): Artificial intelligence and the imperative of responsibility: Reconceiving AI governance as social care. In: Kiener, Maximilian (ed.): The Routledge Handbook of Philosophy of Responsibility. Abingdon, Oxon: Routledge, 395-406.
Yeoman, Ruth (2021): The future of meaningfulness in work, organizations, and systems. In: Breen, Keith & Deranty, Jean-Philippe (eds.): The Politics and Ethics of Contemporary Work. Whither Work? London and New York: Routledge, 31-48.
[1] I am aware that there are other and sometimes more differentiated prognoses about Superintelligence than that of Aschenbrenner, e.g. those of Nick Bostrom (2014) or Stefan Selke (2023, 116-123). But for the focus of this paper, the expectations of Leopold Aschenbrenner are well suited to develop a clear argumentation.
