
Research Group Sven Nyholm



Sven Nyholm

Prof. Dr.

Principal Investigator

Ethics of Artificial Intelligence

Sven Nyholm is Professor of Ethics of Artificial Intelligence at LMU Munich.

His research and teaching encompass applied ethics (particularly, but not exclusively, ethics of artificial intelligence), practical philosophy, and philosophy of technology. Currently, he is working on his fourth book, which will be about the ethics of artificial intelligence. His previous books were concerned with Kantian ethics, the ethics of human-robot interactions and the ethics of technology.

Team members @MCML

PhD Students


Dilin Gong

Ethics of Artificial Intelligence

Recent News @MCML


26.11.2024

Artificial Intelligence as a Radio Host


10.07.2024

Our PI Sven Nyholm About AI in Government Services


12.06.2024

Sven Nyholm About the Role of AI in India's Political Campaigns

Publications @MCML

2025


[7]
S. Campbell, P. Liu and S. Nyholm.
Can Chatbots Preserve Our Relationships with the Dead?
Journal of the American Philosophical Association First View (Feb. 2025). DOI
Abstract

Imagine that you are given access to an AI chatbot that compellingly mimics the personality and speech of a deceased loved one. If you start having regular interactions with this ‘thanabot’, could this new relationship be a continuation of the relationship you had with your loved one? And could a relationship with a thanabot preserve or replicate the value of a close human relationship? To the first question, we argue that a relationship with a thanabot cannot be a true continuation of your relationship with a deceased loved one, though it might support one’s continuing bonds with the dead. To the second question, we argue that, in and of themselves, relationships with thanabots cannot benefit us as much as rewarding and healthy intimate relationships with other humans, though we explain why it is difficult to make reliable comparative generalizations about the instrumental value of these relationships.

MCML Authors

Sven Nyholm

Prof. Dr.

Ethics of Artificial Intelligence


[6]
B. D. Earp, S. P. Mann, M. Aboy, E. Awad, M. Betzler, M. Botes, R. Calcott, M. Caraccio, N. Chater, M. Coeckelbergh, M. Constantinescu, H. Dabbagh, K. Devlin, X. Ding, V. Dranseika, J. A. C. Everett, R. Fan, F. Feroz, K. B. Francis, C. Friedman, O. Friedrich, I. Gabriel, I. Hannikainen, J. Hellmann, A. K. Jahrome, N. S. Janardhanan, P. Jurcys, A. Kappes, M. A. Khan, G. Kraft-Todd, M. Kroner Dale, S. M. Laham, B. Lange, M. Leuenberger, J. Lewis, P. Liu, D. M. Lyreskog, M. Maas, J. McMillan, E. Mihailov, T. Minssen, J. Teperowski Monrad, K. Muyskens, S. Myers, S. Nyholm, A. M. Owen, A. Puzio, C. Register, M. G. Reinecke, A. Safron, H. Shevlin, H. Shimizu, P. V. Treit, C. Voinea, K. Yan, A. Zahiu, R. Zhang, H. Zohny, W. Sinnott-Armstrong, I. Singh, J. Savulescu and M. S. Clark.
Relational Norms for Human-AI Cooperation.
Preprint (Feb. 2025). arXiv
Abstract

How we should design and interact with social artificial intelligence depends on the socio-relational role the AI is meant to emulate or occupy. In human society, relationships such as teacher-student, parent-child, neighbors, siblings, or employer-employee are governed by specific norms that prescribe or proscribe cooperative functions including hierarchy, care, transaction, and mating. These norms shape our judgments of what is appropriate for each partner. For example, workplace norms may allow a boss to give orders to an employee, but not vice versa, reflecting hierarchical and transactional expectations. As AI agents and chatbots powered by large language models are increasingly designed to serve roles analogous to human positions - such as assistant, mental health provider, tutor, or romantic partner - it is imperative to examine whether and how human relational norms should extend to human-AI interactions. Our analysis explores how differences between AI systems and humans, such as the absence of conscious experience and immunity to fatigue, may affect an AI’s capacity to fulfill relationship-specific functions and adhere to corresponding norms. This analysis, which is a collaborative effort by philosophers, psychologists, relationship scientists, ethicists, legal experts, and AI researchers, carries important implications for AI systems design, user behavior, and regulation. While we accept that AI systems can offer significant benefits such as increased availability and consistency in certain socio-relational roles, they also risk fostering unhealthy dependencies or unrealistic expectations that could spill over into human-human relationships. We propose that understanding and thoughtfully shaping (or implementing) suitable human-AI relational norms will be crucial for ensuring that human-AI interactions are ethical, trustworthy, and favorable to human well-being.

MCML Authors

Benjamin Lange

Dr.

Ethics of Artificial Intelligence


Sven Nyholm

Prof. Dr.

Ethics of Artificial Intelligence


2024


[5]
S. Nyholm.
Digital Duplicates and Personal Scarcity: Reply to Voinea et al and Lundgren.
Philosophy & Technology 37.132 (Nov. 2024). DOI
Abstract

In our recent paper in this journal (‘Digital Duplicates and the Scarcity Problem: Might AI Make Us Less Scarce and Therefore Less Valuable?’, Danaher & Nyholm 2024), John Danaher and I discussed the possibility of creating digital duplicates of particular people (e.g. by means of creating fine-tuned language models whose outputs sound like those of a particular person). We were specifically interested in how this might be seen as affecting the value of particular people as unique individuals and as scarce resources…

MCML Authors

Sven Nyholm

Prof. Dr.

Ethics of Artificial Intelligence


[4]
S. Milano and S. Nyholm.
Advanced AI assistants that act on our behalf may not be ethically or legally feasible.
Nature Machine Intelligence 6 (Jul. 2024). DOI
Abstract

Google and OpenAI have recently announced major product launches involving artificial intelligence (AI) agents based on large language models (LLMs) and other generative models. Notably, these are envisioned to function as personalized ‘advanced assistants’. With other companies following suit, such AI agents seem poised to be the next big thing in consumer technology, with the potential to disrupt work and social environments. To underscore the importance of these developments, Google DeepMind recently published an extensive report on the topic, which they describe as “one of [their] largest ethics foresight projects to date”. The report defines AI assistants functionally as “artificial agent[s] with a natural language interface, the function of which is to plan and execute sequences of actions on the user’s behalf across one or more domains and in line with the user’s expectations”. The question the Google DeepMind researchers argue we should be pondering is ‘what kind of AI assistants do we want to see in the world?’. But a more fundamental question is whether AI assistants are feasible, given basic ethical and legal requirements. Key issues that will impact the deployment of AI agents concern liability and the ability of users to effectively transfer some of their agential powers to AI assistants.

MCML Authors

Sven Nyholm

Prof. Dr.

Ethics of Artificial Intelligence


2023


[3]
B. H. Lang, S. Nyholm and J. Blumenthal-Barby.
Responsibility Gaps and Black Box Healthcare AI: Shared Responsibilization as a Solution.
Digital Society 2.52 (Nov. 2023). DOI
Abstract

As sophisticated artificial intelligence software becomes more ubiquitously and more intimately integrated within domains of traditionally human endeavor, many are raising questions over how responsibility (be it moral, legal, or causal) can be understood for an AI’s actions or influence on an outcome. So-called ‘responsibility gaps’ occur whenever there exists an apparent chasm in the ordinary attribution of moral blame or responsibility when an AI automates physical or cognitive labor otherwise performed by human beings and commits an error. Healthcare administration is an industry ripe for responsibility gaps produced by these kinds of AI. The moral stakes of healthcare are often life and death, and the demand for reducing clinical uncertainty while standardizing care incentivizes the development and integration of AI diagnosticians and prognosticators. In this paper, we argue that (1) responsibility gaps are generated by ‘black box’ healthcare AI, (2) the presence of responsibility gaps (if unaddressed) creates serious moral problems, (3) a suitable solution is for relevant stakeholders to voluntarily responsibilize the gaps, taking on some moral responsibility for things they are not, strictly speaking, blameworthy for, and (4) should this solution be taken, black box healthcare AI will be permissible in the provision of healthcare.

MCML Authors

Sven Nyholm

Prof. Dr.

Ethics of Artificial Intelligence


[2]
J. Smids, H. Berkers, P. Le Blanc, S. Rispens and S. Nyholm.
Employers Have a Duty of Beneficence to Design for Meaningful Work: A General Argument and Logistics Warehouses as a Case Study.
The Journal of Ethics (Oct. 2023). DOI
Abstract

Artificial intelligence-driven technology increasingly shapes work practices and, accordingly, employees’ opportunities for meaningful work (MW). In our paper, we identify five dimensions of MW: pursuing a purpose, social relationships, exercising skills and self-development, autonomy, self-esteem and recognition. Because MW is an important good, lacking opportunities for MW is a serious disadvantage. Therefore, we need to know to what extent employers have a duty to provide this good to their employees. We hold that employers have a duty of beneficence to design for opportunities for MW when implementing AI-technology in the workplace. We argue that this duty of beneficence is supported by the three major ethical theories, namely, Kantian ethics, consequentialism, and virtue ethics. We defend this duty against two objections, including the view that it is incompatible with the shareholder theory of the firm. We then employ the five dimensions of MW as our analytical lens to investigate how AI-based technological innovation in logistic warehouses has an impact, both positively and negatively, on MW, and illustrate that design for MW is feasible. We further support this practical feasibility with the help of insights from organizational psychology. We end by discussing how AI-based technology has an impact both on meaningful work (often seen as an aspirational goal) and decent work (generally seen as a matter of justice). Accordingly, ethical reflection on meaningful and decent work should become more integrated to do justice to how AI-technology inevitably shapes both simultaneously.

MCML Authors

Sven Nyholm

Prof. Dr.

Ethics of Artificial Intelligence


[1]
S. Nyholm.
Is Academic Enhancement Possible by Means of Generative AI-Based Digital Twins?
American Journal of Bioethics 23.10 (Sep. 2023). DOI
MCML Authors

Sven Nyholm

Prof. Dr.

Ethics of Artificial Intelligence