Training Artificial Intelligence and Its Impact on Social Preferences: Results from an Online Experiment

As artificial intelligence (AI) becomes increasingly integrated into our daily lives, concerns have grown about its impact on human decision-making and social preferences. A recent online experiment by Victor Klockmann (Univ. Würzburg), Alicia von Schenk (Univ. Würzburg), and Marie Claire Villeval (Univ. Lyon) explored whether individuals adjust their behavior when they know that their choices will train an algorithm to make decisions based on their revealed social preferences.

In the experiment, participants repeatedly played a dictator game, in which they had the power to allocate resources between themselves and an anonymous recipient. Their choices were used to train an AI algorithm, which would then make allocation decisions for future participants in the same game.

The study found that when participants were aware that their choices would affect future generations of players, they behaved more prosocially, allocating resources more equally between themselves and the anonymous recipient. However, this held only when participants themselves bore the risk of being harmed by the algorithm's future choices.

The findings suggest that awareness of the externality created by AI training can increase prosocial behavior, but only when individuals feel that their own well-being is at stake. This has important implications for the design and implementation of AI systems, particularly in the context of social decision-making.

Overall, the study highlights the complex interplay between AI and human decision-making and the need for careful consideration of the potential impacts of AI on social preferences. As AI becomes more prevalent in our lives, it is essential that we continue to investigate these interactions to ensure that AI is used to promote, rather than undermine, human well-being.

The complete scientific publication is available at the following link: Artificial intelligence, ethics, and intergenerational responsibility.
11.04.2023 – Written by ChatGPT, edited by Alicia von Schenk