From Machine Learning to Machine Teaching (ML2MT)
This project is part of the larger consortium "From Machine Learning to Machine Teaching – Making Machines AND Humans Smarter (ML2MT)" funded by the Volkswagen Foundation.
Our specific focus is on how humans can learn from AI explanations. As AI tools become increasingly sophisticated and ubiquitous, users face growing challenges in understanding and trusting AI-generated results. The reasoning processes behind many AI models remain opaque (the so-called "black box" problem), which limits effective human-AI collaboration.
To address this, we use a house price estimation task to investigate whether and how humans can learn from Explainable Artificial Intelligence (XAI), a set of methods designed to make machine learning models' decisions more transparent and interpretable. Specifically, we examine how model and data complexity influence human learning with or without AI explanations.
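To illustrate the kind of explanation involved, here is a minimal sketch (not the project's actual model, data, or stimuli): for a linear house-price model, an additive feature attribution of the sort XAI methods such as SHAP produce has a closed form, where each feature's contribution is its weight times its deviation from a baseline house, and the contributions sum exactly to the difference from the baseline prediction.

```python
# Illustrative sketch only: a toy linear house-price model with additive
# feature attributions. Feature names, weights, and values are invented.

features = ["size_m2", "rooms", "distance_km"]
weights  = [3000.0, 15000.0, -2000.0]   # price change per unit of each feature
bias     = 50000.0

baseline = [100.0, 3.0, 5.0]  # an "average" reference house
house    = [120.0, 4.0, 2.0]  # the house to be explained

def predict(x):
    """Linear model: price = bias + sum of weight_i * feature_i."""
    return bias + sum(w * v for w, v in zip(weights, x))

def explain(x):
    """Per-feature contribution relative to the baseline house."""
    return {f: w * (xv - bv)
            for f, w, xv, bv in zip(features, weights, x, baseline)}

contributions = explain(house)
# Additivity: prediction = baseline prediction + sum of contributions.
assert abs(predict(house) - (predict(baseline) + sum(contributions.values()))) < 1e-9
```

An explanation like `contributions` ("size added 60,000; the short distance to the center added 6,000") is the type of transparent, decomposable output that study participants could in principle learn the underlying price structure from.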
Through this approach, the project aims to advance our understanding of human learning from AI, contributing to more effective and trustworthy human-AI interaction with implications across various domains of everyday life.
Related Publications:
Guo, D., & Shing, Y. L. (2024). Linking the congruency effect in memory to confirmation bias in decision-making across the lifespan—Common roles of the medial prefrontal cortex: A selective review. European Psychologist, 29(4), 245–256. https://doi.org/10.1027/1016-9040/a000536

