Trustworthy Autonomous Systems (TAS) @IDSIA USI-SUPSI
Latest news and events
-
Safety-Critical Control for Virtual Coupling Train Operations
We are pleased to announce a seminar entitled “Safety-Critical Control for Virtual Coupling Train Operations”. The seminar will be held on 18 March 2026 at 15:00 in room B1.02 at USI East Campus. The speaker will be Yike Li (University of Cagliari / Politecnico di Bari). Abstract: Virtual Coupling (VC) is an emerging railway signaling
-
EUonAIR Open Innovation Challenge on Responsible AI
We are pleased to highlight the EUonAIR Open Innovation Challenge 2025, an exciting call for proposals focused on Responsible AI in Practice, promoted within our community by Prof. Alessandro Facchini, member of the Trustworthy Autonomous Systems (TAS) research group. The challenge invites interdisciplinary teams (2–5 members) from EUonAIR partner institutions to develop working AI prototypes
-
Call For Papers: Special Session on Engineering Trust at WCCI 2026
Please find below a Call for Papers for a Special Session on Engineering Trust at a flagship conference, co-sponsored by the TAS group within the Ethical, Legal, Social, Environmental and Human Dimensions of AI/CI (SHIELD) Technical Committee. 📣 CALL for PAPERS 📣 Special Session on “Engineering Trust: Ethical, Legal and Societal Impacts of Computational Intelligence on Human
-
From the opacity to the social misattribution problem
We are pleased to announce that TAS group member Prof. Alessandro Facchini will deliver a seminar entitled “From the opacity to the social misattribution problem: a plea for more interdisciplinary collaboration in addressing epistemic and ethical challenges of (X)AI”. The seminar will be held on Wednesday, 10 December 2025 at 11:30am in room C1.02 at
-
Balancing Accuracy and Interpretability: TAS contribution at TRUST-AI
The work “Balancing Accuracy and Interpretability in Multi-Sensor Fusion through Dynamic Bayesian Networks” (F. Corradini, C. Grigioni, A. Antonucci, J. Guzzi, F. Flammini) was presented by PhD student Franca Corradini at the TRUST-AI workshop, co-located with the 28th European Conference on Artificial Intelligence (ECAI) 2025, held last week in Bologna (Italy). The work proposed a transparent-by-design sensor fusion framework for perception
-
Franca Corradini presents her research on robust and explainable perception
Yesterday, Franca Corradini, PhD student and member of the TAS group, presented her research prospectus focused on the use of probabilistic models to achieve robust and explainable multi-sensor perception in autonomous vehicles. Her work explores probabilistic approaches to model uncertainty, support explainable decision-making, and enhance perception robustness. A key aspect of the research is the
-
AI4GOOD workshop hosting a TAS speech on REXASI-PRO
Last week Prof. Francesco Flammini participated in the AI readiness workshop held in Geneva within the AI4GOOD Summit 2025, organized by the International Telecommunication Union (ITU), the specialized agency of the United Nations (UN) for information and communication technologies. AI4GOOD is the UN's leading platform on Artificial Intelligence to solve global challenges. The focus
-
Explainable hybrid neural models for logistic terminal automation
The paper entitled “Towards explainable decision support using hybrid neural models for logistic terminal automation” (R. D’Elia, A. Termine, and F. Flammini, TAS group members) has been accepted for publication in conference proceedings and presented at the International Summer Conference 2025 (ISC25) “Intelligent Systems & Decision Making: Human Insights in the Era of AI”, held
-
Workshop organized by TAS members
On Tuesday 27 May 2025, we will host a mini-workshop with talks by Giuseppe Primiero and Francesca Doneda, from the LUCI lab (https://luci.unimi.it/) at the University of Milan. Here is the program of the day: 13.30-14.15: Talk by Francesca Doneda, “AI vs. Human Candidates: A Performance Comparison in Logic Questions”; 14.15-15.00: Talk by Giuseppe Primiero, “From trust evaluation to
-
New cooperation in trustworthy AI assessment
The TAS group has recently become affiliated with the Z-Inspection initiative for trustworthy AI assessment. Z-Inspection® is a methodology for assessing the trustworthiness of AI systems. It uses a holistic, participatory approach, incorporating ethical principles from the EU framework and other sources, and employing socio-technical scenarios to identify potential risks and ethical tensions. The process