{"id":3551,"date":"2026-01-19T15:04:56","date_gmt":"2026-01-19T14:04:56","guid":{"rendered":"https:\/\/flex.uni-frankfurt.de\/?p=3551"},"modified":"2026-01-19T15:04:56","modified_gmt":"2026-01-19T14:04:56","slug":"knowing-not-to-know-explainable-artificial-intelligence-and-human-metacognition","status":"publish","type":"post","link":"https:\/\/flex.uni-frankfurt.de\/index.php\/publications\/knowing-not-to-know-explainable-artificial-intelligence-and-human-metacognition\/","title":{"rendered":"Knowing (Not) to Know: Explainable Artificial Intelligence and Human Metacognition"},"content":{"rendered":"<div class=\"custom-pagination-text-research-img\"><a href=\"https:\/\/doi.org\/10.1287\/isre.2024.1431\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-3089\" src=\"https:\/\/flex.uni-frankfurt.de\/wp-content\/uploads\/2023\/09\/cover-1.png\" width=\"193\" height=\"261\" \/><\/a><\/div>\n<div class=\"custom-pagination-text-research-content\">Moritz von Zahn, Lena Liebich, Ekaterina Jussupow, Oliver Hinz &#038; Kevin Bauer,\u00a02025, <em><br \/>\nInformation Systems Research, <\/em> [Published Online: 8 December 2025]<em><br \/>\n<\/em><\/p>\n<div><\/div>\n<div class=\"custom-pagination-text-research-content\">\n<p><!--more Continue reading--><\/p>\n<div class=\"custom-pagination-text-research-content\"><a href=\"https:\/\/doi.org\/10.1287\/isre.2024.1431\">link to publication<\/a><\/div>\n<div class=\"custom-pagination-text-research-abstract\">Abstract:<br \/>\nThe use of explainable artificial intelligence (XAI) to render black-box artificial intelligence (AI) more interpretable to humans is gaining practical relevance. Prior research has shown that XAI can influence how humans \u201cthink\u201d. Yet little is known about whether XAI also affects how people \u201cthink about their thinking\u201d (i.e., their metacognitive processes). 
We address this gap by investigating whether XAI affects the metacognitive processes that guide humans\u2019 confidence judgments about their ability to perform a task and, thereby, their decision whether to delegate the task to AI. We conducted two incentivized experiments in which domain experts repeatedly performed prediction tasks, with the option to delegate each task to an AI. We exogenously varied whether participants initially received explanations of the AI\u2019s overall prediction logic. We find that AI explanations improve how well humans understand their own ability to perform the task (metacognitive accuracy). This improvement causally increases both the frequency and the effectiveness of human-to-AI delegation. Additional analyses show that these effects occur primarily when explanations reveal to humans that the AI\u2019s prediction logic diverges from their own, leading to a systematic reduction of overconfidence. Our findings highlight metacognitive processes as a central, previously overlooked channel through which AI explanations can influence human-AI collaboration. 
We discuss practical implications of our results for organizations implementing XAI to comply with regulatory transparency requirements, such as those outlined in the European Union Artificial Intelligence Act.\n<\/div>\n<\/div>\n<\/div>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16],"tags":[],"class_list":["post-3551","post","type-post","status-publish","format-standard","hentry","category-publications"],"_links":{"self":[{"href":"https:\/\/flex.uni-frankfurt.de\/index.php\/wp-json\/wp\/v2\/posts\/3551","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/flex.uni-frankfurt.de\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/flex.uni-frankfurt.de\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/flex.uni-frankfurt.de\/index.php\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/flex.uni-frankfurt.de\/index.php\/wp-json\/wp\/v2\/comments?post=3551"}],"version-history":[{"count":1,"href":"https:\/\/flex.uni-frankfurt.de\/index.php\/wp-json\/wp\/v2\/posts\/3551\/revisions"}],"predecessor-version":[{"id":3552,"href":"https:\/\/flex.uni-frankfurt.de\/index.php\/wp-json\/wp\/v2\/posts\/3551\/revisions\/3552"}],"wp:attachment":[{"href":"https:\/\/flex.uni-frankfurt.de\/index.php\/wp-json\/wp\/v2\/media?parent=3551"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/flex.uni-frankfurt.de\/index.php\/wp-json\/wp\/v2\/categories?post=3551"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/flex.uni-frankfurt.de\/index.php\/wp-json\/wp\/v2\/tags?post=3551"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}