Moritz von Zahn, Lena Liebich, Ekaterina Jussupow, Oliver Hinz & Kevin Bauer, 2025,
Information Systems Research, [Published Online: 8 December 2025]
Abstract:
The use of explainable artificial intelligence (XAI) to render black-box artificial intelligence (AI) more interpretable to humans is gaining practical relevance. Prior research has shown that XAI can influence how humans “think”. Yet little is known about whether XAI also affects how people “think about their thinking” (i.e., their metacognitive processes). We address this gap by investigating whether XAI affects the metacognitive processes that guide humans’ confidence judgments about their ability to perform a task and, thereby, their decision whether to delegate the task to AI. We conducted two incentivized experiments in which domain experts repeatedly performed prediction tasks, with the option to delegate each task to an AI. We exogenously varied whether participants initially received explanations of the AI’s overall prediction logic. We find that AI explanations improve how well humans understand their own ability to perform the task (metacognitive accuracy). This improvement causally increases both the frequency and the effectiveness of human-to-AI delegation. Additional analyses show that these effects primarily occur when explanations reveal to humans that the AI’s prediction logic diverges from their own, leading to a systematic reduction of overconfidence. Our findings highlight metacognitive processes as a central, previously overlooked channel through which AI explanations can influence human-AI collaboration. We discuss practical implications of our results for organizations implementing XAI to comply with regulatory transparency requirements, such as those outlined in the European Union Artificial Intelligence Act.


