Strategic Delegation of Moral Decisions to AI

Christopher Sprigman

Our study examines how individuals perceive the moral agency of artificial intelligence (AI) — specifically, whether individuals believe that, by using AI as their agent, they can offload to the AI some of their responsibility for a morally sensitive decision. Existing literature shows that people often delegate self-interested decisions to human agents to mitigate their moral responsibility. This research explores whether individuals will similarly delegate such decisions to AI to reduce moral costs.

Participants (hereinafter, “Allocators”) took part in a dictator game, allocating a $10 endowment between themselves and a Recipient. In the experimental treatment, Allocators could involve ChatGPT in their allocation decision, at the cost of added time to complete the experiment. When engaged, the AI executed the transfer by informing the Recipient of a payment code required to collect the funds. Around 35% of Allocators chose to involve the AI, despite the opportunity cost of a substantially prolonged process.

To isolate the effect of the AI’s perceived responsibility, a control condition replaced the AI with a non-agentive computer program, while maintaining identical decision protocols.

Allocators who involved the AI transferred significantly less money to the Recipient, suggesting that delegating the transfer to AI reduced the moral costs associated with self-interested decisions. Prosocial individuals, who face higher moral costs from violating a norm, were significantly more likely to involve the AI than proself individuals. A responsibility measure indicated that Allocators who attributed more responsibility to the AI were also more likely to involve it.

The study suggests that AI systems provide human actors with an easily accessible, low-cost, and hard-to-monitor means of offloading personal moral responsibility. This highlights the need for AI regulation to consider not only the inherent risks of AI output, but also how AI’s perceived moral agency can influence human behavior and ethical accountability in human-AI interaction.