Accountability for Practical Reasoning Agents
Cranefield, Stephen; Oren, Nir; Vasconcelos, Wamberto
Cite this item:
Cranefield, S., Oren, N., & Vasconcelos, W. (2019). Accountability for Practical Reasoning Agents. In M. Lujak (Ed.), Agreement Technologies: 6th International Conference, AT 2018, Bergen, Norway, December 6-7, 2018, Revised Selected Papers (pp. 33–48). Presented at the 6th International Conference on Agreement Technologies, Springer. doi:10.1007/978-3-030-17294-7_3
Permanent link to OUR Archive version:
http://hdl.handle.net/10523/9234
Abstract:
Artificial intelligence has been increasing the autonomy of man-made artefacts such as software agents, self-driving vehicles and military drones. This increase in autonomy, together with the ubiquity and impact of such artefacts in our daily lives, has raised many concerns in society. Initiatives such as transparent and ethical AI aim to allay fears of a “free-for-all” future where amoral technology (or technology amorally designed) will replace humans with terrible consequences. We discuss the notion of accountable autonomy, and explore this concept within the context of practical reasoning agents. We survey literature from distinct fields such as management, healthcare, policy-making, and others, and differentiate and relate concepts connected to accountability. We present a list of justified requirements for accountable software agents and discuss research questions stemming from these requirements. We also propose a preliminary formalisation of one core aspect of accountability: responsibility.
Date:
2019
Editor:
Lujak, Marin
Publisher:
Springer
Pages:
33-48
Conference:
6th International Conference on Agreement Technologies, Bergen, Norway
Keywords:
Accountability; Responsibility; Answerability; BDI; Agents
Research Type:
Conference or Workshop Item (Paper published in proceedings)
Languages:
English
Notes:
The final authenticated version is available online at http://doi.org/10.1007/978-3-030-17294-7_3.