About

Artificial Intelligence (AI) innovations have led to increasingly complex models, such as foundational and generative multimodal models. These models achieve state-of-the-art performance across various domains; however, their black-box nature raises concerns about trust, accountability, and ethical deployment. The INSAIT Workshop (INterpretable Systems for Artificial Intelligence Transparency) aims to bring together researchers and practitioners to explore interpretability in AI, focusing on methods that enhance transparency, explainability, and human-centric trust. Understanding why AI systems work so well, and why they fail, is essential for their integration into real-world applications used by millions of people.

We are now accepting self-nominations for reviewers; please complete this form.

Call for papers

This workshop invites contributions from academia and industry on theoretical and practical topics including, but not limited to:

- Explanation methods: post-hoc interpretability techniques (e.g., SHAP, LIME, attention-based explanations, mechanistic interpretability).
- Interpretability by design: inherently interpretable architectures (e.g., prototype-based models, capsule networks, concept bottleneck models).
- Concept learning and unlearning techniques: for example, how removing unwanted concepts affects a model.
- Human-AI interaction: how interpretability improves user trust and decision-making.
- Benchmarking interpretability: standardized evaluation metrics for explainability.
- Ethical & societal implications: ensuring fairness, accountability, and transparency through interpretability.
- Case studies: applications of interpretability in healthcare, finance, legal systems, and autonomous systems.

Paper submissions

All submissions will go through a double-blind review process. Papers will be selected based on relevance, significance, novelty of results, technical merit, and clarity of presentation. Only previously unpublished work will be considered.

There are two tracks for submission to INSAIT:

- Proceedings track: papers submitted to this track must not exceed 12 pages (including references). Accepted papers will be published in a dedicated volume of Springer Lecture Notes in Computer Science (LNCS).
- Extended abstract track: in this track, we invite papers of up to 6 pages (including references) presenting high-impact, innovative ideas or work in progress that may not be ready for full publication.

Submissions to both tracks will be featured during the workshop's poster session, and a subset of all submissions will be selected for spotlight talks.

Papers must be in English. Each accepted paper must be covered by at least one author registration (either a Full registration or a Workshop/Tutorial registration if you plan to attend the workshops/tutorials only). We suggest that workshop papers be prepared and submitted using the official ICIAP template (there is also this Overleaf template). Please submit your papers in PDF format through OpenReview.

Important Dates

- Submission deadline: June 7, 2025
- Author notification: July 1, 2025
- Camera-ready deadline: July 10, 2025

Note: all deadlines are Anywhere on Earth (AoE).

Workshop Event

When: September 15 or 16 (to be announced), 2025
Where: Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome

Registration

For detailed instructions and information on registration fees, please visit the ICIAP registration page.

Schedule

To be announced.
Speakers

Stephan Alaniz
Assistant Professor, Télécom Paris, Institut Polytechnique de Paris
Explainable AI

Organizers

- Riccardo Renzulli, postdoc at the University of Turin; his interests lie in representation learning, interpretability, and medical imaging.
- Eleonora Poeta, PhD student at Politecnico di Torino; her interests lie in trustworthy AI, explainable AI, and fairness.
- Francesca Naretto, researcher at the University of Pisa; her interests lie in explainable artificial intelligence, data privacy, and federated learning.
- Mirko Zaffaroni, researcher at CentAI; his interests lie in explainable AI, synthetic data, and multimodal feature fusion.
- Alan Perotti, senior researcher at CentAI; his interests lie in representation learning, interpretability, and industry applications.

Program Committee

- Marco Grangetto, University of Turin
- Enrico Cassano, University of Turin
- Muhammad Rashid, University of Turin
- Marco Nurisso, Politecnico di Torino
- Georgios Leontidis, University of Aberdeen
- Aiden Durrant, University of Aberdeen
- Sonia Laguna Cillero, ETH Zurich
- Eliana Pastor, Politecnico di Torino
- Salvatore Greco, Politecnico di Torino
- Lia Morra, Politecnico di Torino
- Giulia Vilone, Analog Devices
- ... (to be updated)

Contact

Email: insait.workshop@gmail.com
X: @INSAIT_workshop
Instagram: @INSAIT_workshop
Bluesky: @insaitworkshop.bsky.social