🤝
Trust in AI
How humans develop, calibrate, and sometimes misplace trust in algorithmic systems, from algorithm aversion to automation bias.
3 research papers
Algorithm Aversion: People Erroneously Avoid Algorithmic Forecasters After Seeing Them Err
A landmark study demonstrating that people are more likely to abandon algorithmic forecasters after witnessing them make errors, even when the algorithm consistently outperforms humans.
Understanding the Effect of Accuracy on Trust in Machine Learning Models
This study investigates how stated and observed accuracy levels influence human trust in machine learning models. The findings reveal that trust is not a simple linear function of accuracy.
Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
A theoretical framework that formalizes the concept of trust in AI by analyzing its prerequisites, causes, and goals. The paper distinguishes between contractual trust (based on ex...