Victor Klockmann, Alicia von Schenk & Marie Claire Villeval, 2025, European Economic Review, 178
Abstract:
In the field of machine learning, the decisions of algorithms depend on extensive training data contributed by numerous, often human, sources. How does this property affect the social nature of the human decisions that serve to train these algorithms? By experimentally manipulating the pivotality of individual decisions for a supervised machine learning algorithm, we show that the diffusion of responsibility weakens revealed social preferences, leading to algorithmic models that favor selfish decisions. Importantly, this phenomenon cannot be attributed to shifts in incentive structures or the presence of externalities. Rather, our results suggest that the expansive nature of Big Data fosters a sense of diminished responsibility and serves as an excuse for selfish behavior that affects both individuals and society as a whole.


