If you think you know what trustworthy AI entails, this article by Sarah Pink and colleagues might prompt you to question your assumptions. It introduces a perspective that challenges the notion of trust as a transactional component in human-technology interactions. The article suggests considering trust as more of “a feel” – an anticipation of future events. This viewpoint, enriched by ethnographic studies conducted by Sarah and colleagues, indicates that placing trust in technology, individuals, or organizations does not guarantee their trustworthiness. AI systems, humans, and organizations cannot be intrinsically labeled as trustworthy or ethical, because trust and ethics are context-dependent attributes rather than fixed properties or tangible assets that can be obtained or owned.
So, what does this mean for trust and AI? Sarah and colleagues argue for an interdisciplinary research framework that combines computing and the social sciences. This framework envisions trust as a unifying element bridging disciplines, stakeholders, practitioners, and technologies, thereby enabling a more holistic exploration of trust within the realm of AI. This perspective raises complex questions about the role of trust in AI design and development, as well as the ethical implications for AI and the training of its developers. Yet it is precisely these harder questions that should concern anyone genuinely invested in the concept of trustworthy AI.
Needless to say, I'm very happy to collaborate with Sarah on our Repair project, exploring these and related questions.