Artificial Intelligence (AI) scientists are challenged to create intelligent, autonomous agents that can make rational decisions. In this challenge, they confront two questions: which decision theory to follow and how to implement it in AI systems. This paper answers these questions and makes three contributions. The first is to discuss how economic decision theory, Expected Utility Theory (EUT), can help AI systems with utility functions deal with the problem of instrumental goals, the possibility of utility function instability, and coordination challenges in multi-actor and human-agent collective settings. The second contribution is to show that using EUT restricts AI systems to narrow applications, which are "small worlds" where concerns about AI alignment may lose urgency and be better labelled as safety issues. The paper's third contribution points to several areas where economists may learn from AI scientists as they implement EUT. These include consideration of procedural rationality, overcoming computational difficulties, and understanding decision-making in disequilibrium situations.
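For reference, EUT prescribes choosing the action that maximizes utility averaged over uncertain states of the world. A standard textbook formulation (the notation here is ours, not the paper's) is

a^* = \arg\max_{a \in A} \mathbb{E}[U \mid a] = \arg\max_{a \in A} \sum_{s \in S} P(s)\, U\big(o(a, s)\big),

where A is the set of available actions, S the set of possible states, P the agent's subjective probability distribution over states, and U the utility assigned to the outcome o(a, s) of taking action a in state s. The paper's "small worlds" point corresponds to the requirement that A, S, P, and U all be fully specified in advance.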