Can LLMs Reason About Trust?: A Pilot Study
Preprint   Open access

Anushka Debnath, Stephen Cranefield, Emiliano Lorini and Bastin Tony Roy Savarimuthu
arXiv.org
Cornell University
11/06/2025
Handle:
https://hdl.handle.net/10523/47425

Abstract

Subjects: Computer Science - Computation and Language; Computer Science - Computers and Society; Computer Science - Human-Computer Interaction; Computer Science - Multiagent Systems
Keywords: trust; Large Language Models (LLMs); trust reasoning
In human society, trust is an essential component of social attitudes that helps build and maintain long-term, healthy relationships, creating a strong foundation for cooperation and enabling individuals to work together effectively to achieve shared goals. As many human interactions now occur through electronic means such as mobile apps, the potential arises for AI systems to assist users in understanding the social state of their relationships. In this paper we investigate the ability of Large Language Models (LLMs) to reason about trust between two individuals in an environment that requires fostering trust relationships. We also assess whether LLMs are capable of inducing trust by role-playing one party in a trust-based interaction and planning actions that can instil trust.
URL:
https://doi.org/10.48550/arXiv.2507.21075
Preprint (Author's original). All Rights Reserved. Open access.
