From the opacity to the social misattribution problem

We are pleased to announce that TAS group member Prof. Alessandro Facchini will deliver a seminar entitled “From the opacity to the social misattribution problem: a plea for more interdisciplinary collaboration in addressing epistemic and ethical challenges of (X)AI”. The seminar will be held on Wednesday, 10 December 2025 at 11:30am in room C1.02 at Campus East, Viganello.

Abstract

Artificial Intelligence (AI) systems are increasingly integrated into many aspects of our daily lives. However, despite the range of potentially beneficial applications, their broader adoption in critical domains (e.g., healthcare or education) has been limited to date and faces many inhibiting factors. One well-known key inhibitor contributing to the underuse or non-use of AI, and thus hindering successful AI adoption, is the difficulty humans face in determining how much to trust algorithmic behaviour. This often stems from the opaque behaviour and structure of many AI algorithms, which makes it challenging to evaluate individual outcomes and render them understandable to the stakeholders concerned. Given this challenge, a growing number of scholars and policymakers support the thesis that transparency and explainability are suitable means of facilitating stakeholder trust in opaque AI systems, thereby making them more acceptable as supporting tools, e.g. in decision making. Nevertheless, empirical research and real-world experience reveal that we currently face a prima facie paradoxical situation: on the one hand, explainability and broader transparency (as opposed to AI system opacity) appear to be key requirements for enhancing trust and the virtuous appropriation of AI systems; on the other hand, attempts to implement this goal using standard tools and methods from eXplainable AI (XAI) often trigger various impediments to such appropriation. These impediments can result from the existence or emergence of cognitive and epistemic biases, which sometimes lead users to wrongly attribute capabilities not afforded by the system, resulting in detrimental interactions with the technology. The “paradox” actually lies in the contextual and multi-form nature of the opacity problem.

To overcome these difficulties, the research community has thus begun investigating AI through a socio-technical lens, attempting to contextualise this technology within its broader social and organisational environment. Through fruitful collaboration between the humanities and computer science, the perspective that AI systems are artefacts embedded in networks of norms that shape their design and influence trust has gained prominence in scientific discourse.

By endorsing this perspective, this presentation will attempt to clarify the many facets of the underlying problems and demonstrate how a more “human-centred” and interdisciplinary approach – one aimed at designing not merely the technical AI system but the (local) socio-technical system – can address some of the epistemic and ethical challenges of AI. Furthermore, we will discuss how, by looking at AI through the lens of the humanities, we can not only address “old” problems but also formulate new ones, such as the problem of social misattribution in the case of LLMs.

The seminar is based on joint works and/or ongoing research collaborations with (among others): Andrea Ferrario (SUPSI, IDSIA / U. Zurich / ETHZ), Aleksandra Przegalinska (U. Kozminski), Emanuele Ratti (U. Bristol), Elisa Rubegni (U. Lancaster / Poli. Milano), and Alberto Termine (SUPSI, IDSIA).

Short bio

Alessandro Facchini is Associate Professor in Epistemology, Logic and Ethics of AI, working since 2015 at IDSIA (USI-SUPSI). Before that, he was Assistant Professor at the Informatics Institute of the University of Warsaw, where he led a Homing Plus project on fixpoint logics. Earlier, he was a Visiting Researcher at the Computer Science Department of the UCSC, and later a Post-Doc in the Information and Language Processing Systems group of the University of Amsterdam. He finished his PhD in 2010, under the co-supervision of Jacques Duparc (U. Lausanne) and Igor Walukiewicz (LaBRI – U. Bordeaux 2). Before that, in 2006 he completed a two-year Doctoral Programme in Logic and Foundations of Mathematics at the University of Barcelona. His current interests range from probability theory and logic to epistemic and ethical issues related to AI/XAI. Since January 2025 he has been the co-coordinator of the BSc in Data Science and AI, and the SUPSI coordinator for the European University Alliance EUonAIR.

