Abstract
Developing Artificial Intelligence (AI) systems that human users can trust is a core issue influencing the advancement of human–machine cooperation. To date, research on trust in human–machine cooperation has come mainly from the field of computer science and has focused on how to build, implement, and optimize the computing and processing capabilities of AI systems for specific tasks. Research on questions such as which factors influence human trust in AI systems, and how that trust can be accurately measured, is still in its infancy: empirical studies involving human users are scarce, and existing experimental work often lacks rigorous behavioral-science methods. This paper reviews methods for studying interpersonal trust in the behavioral sciences and, within the framework of human–machine cooperation, examines the trust relationship between humans and AI systems and identifies the factors that shape an individual's trust attitude toward an AI system. The findings are intended to provide a theoretical basis for building computational models of trust in human–machine cooperation.
Author
ZHU Yi (Nanjing Audit University, Nanjing 210000, China)
Source
National Defense Technology (《国防科技》), 2021, No. 4, pp. 4-9 (6 pages)
Funding
Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 62006121).
Keywords
human–machine trust
behavioral science
ergonomics