Welcome to the ICMLA'26 Official Web Site


Special Session 6:
Explainable and Trustworthy Machine Learning for High-Stakes Applications


Machine learning systems are increasingly deployed in high-stakes, real-world domains such as healthcare, autonomous systems, finance, and public information platforms, where their decisions can have significant consequences. In such scenarios, model accuracy alone is no longer sufficient: there is a growing demand for machine learning methods that are explainable, robust, reliable, and trustworthy. While recent advances in explainable AI have improved the transparency of complex models, many existing approaches still face challenges related to reliability, generalization, security, and the accountability of explanations. In this context, modern machine learning techniques, including deep learning, robust learning, uncertainty modeling, and secure learning frameworks, show strong potential for enabling the safe and responsible deployment of AI systems.

Research on explainable and trustworthy machine learning has increasingly focused on how transparency, robustness, and reliability can be ensured in real-world AI systems. This special session brings together work that addresses these challenges across different stages of the machine learning pipeline, including explainability, robustness under uncertainty and distribution shift, evaluation of explanation methods, and secure and accountable AI systems. The session also includes recent developments related to deep learning models, such as explainability for neural networks, robustness and calibration, and applications in safety-critical domains including healthcare, autonomous systems, and industrial settings.

Scope and Topics:

This session invites high-quality submissions on topics related, but not limited, to the following:

  • Explainable and interpretable machine learning
  • Trustworthy and accountable AI systems
  • Robustness, reliability, and calibration of machine learning and deep learning models
  • Explainable AI under distribution shift and uncertainty
  • Evaluation and benchmarking of explainability methods
  • Data provenance, audit trails, and reproducibility in AI systems
  • Secure and verifiable machine learning pipelines
  • Cryptographic guarantees for model integrity and explanations
  • Fairness, bias mitigation, and ethical AI
  • Adversarial robustness and attack-resilient explainable AI
  • Explainability methods for deep learning models (CNNs, Transformers, foundation models)
  • Post-hoc and intrinsic explainability for deep learning systems
  • Explainable AI and deep learning for medical imaging, healthcare, and vision-based applications
  • Explainability in safety-critical, autonomous, and industrial AI systems

Chairs:

  • Chair Emails

  • Mehmet Akif Gulum: mehmetakifgulum@hitit.edu.tr
    Mehmed Kantardzic

  • Chair Biographies

Mehmet Akif Gulum is an Assistant Professor of Computer Science at Hitit University, Turkey. His research focuses on explainable artificial intelligence, trustworthy machine learning, robustness and reliability of AI systems, and medical imaging. His work addresses the development of transparent and dependable machine learning models for high-impact real-world applications, with particular emphasis on healthcare, safety-critical systems, and responsible AI. He has authored and co-authored several peer-reviewed publications in leading journals and international conferences in the areas of explainable AI, medical imaging, and applied machine learning.
    Mehmed Kantardzic is a Professor of Computer Science at the University of Louisville, USA. His research interests include data mining, machine learning, big data analytics, and knowledge discovery. He is the author of multiple books and numerous research publications in leading journals and conferences, and has extensive experience organizing international conferences and scientific events in the fields of machine learning and data analytics.

Technical Committee

  • Hanqing Hu, Hobart and William Smith, USA
  • Jason Turner, Humana, USA
  • Rukiye Savran Kiziltepe, Ankara University, Turkey
  • Tolga Buyuktanir, Agra Fintech, Turkey
  • Mariofanna Milanova, University of Arkansas at Little Rock, USA
  • Tegjyot Singh Sethi, Google, USA

Paper Submission Instructions

All papers will be double-blind reviewed and must present original work.

  • CMT Submission Site
  • Select the track: Special Session 6: Explainable and Trustworthy Machine Learning for High-Stakes Applications

Papers submitted for review must conform to IEEE specifications. Manuscript templates can be downloaded from:

  • IEEE website

Key Dates

  • Submission Deadline: June 20, 2026
  • Notification of Acceptance: July 10, 2026

Registration

For your paper to be published in the proceedings, you must register for the conference.

Paper Presentation Instructions

Papers accepted to this track will be presented in person at the conference; there is no virtual presentation option for this session.
