• Home
  • About Us
    Organizers Program Committee Previous ICMLAs Organize ICMLA'26
  • Key Dates
  • Contributes
    Call for Papers Call for Special Sessions Call for Workshops Call for Tutorials How to Submit
  • Programs
    Keynotes Schedule Special Sessions Workshops Tutorials
  • Participants
    Accepted Papers Registration Posters Awards
  • Locations
    Accommodation Conference Location Places of Interest Conference Pictures
  • Sponsors

Sidebar[Skip]



Welcome to the ICMLA'25 Official Web Site


Special Session 4:
ROSE-LLM: Robustness and Security of Large Language Models


The widespread integration of Large Language Models (LLMs) into real-world systems has introduced new challenges that demand urgent attention from the machine learning, cybersecurity, and systems research communities. While LLMs demonstrate exceptional performance in a variety of natural language processing tasks and are increasingly combined with other modalities such as vision and audio, their deployment in high-stakes environments, including healthcare, law enforcement, autonomous vehicles, and critical infrastructure - exposes serious vulnerabilities that can no longer be ignored. This special session will focus on the technical foundations, emerging risks, and defense mechanisms surrounding the robustness and security of LLMs. Theoretical advancements and empirical studies have shown that even the most capable LLMs are susceptible to manipulation through prompt injection, adversarial queries, or syntactic perturbations that bypass safety mechanisms and generate harmful or misaligned outputs. Moreover, model performance often deteriorates sharply under distributional shifts or input noise, making them brittle in real-world conditions. In the multimodal setting, where LLMs interact with vision encoders or other sensory modules, the attack surface becomes significantly broader, raising complex challenges in cross-modal consistency, modality-specific perturbation, and unified threat modeling. Another growing concern is the leakage of sensitive information through model outputs. The ROSE-LLM special session aims to address these interconnected concerns by creating a dedicated platform for advancing the state of research in secure, robust, and trustworthy LLM development. We invite contributions that propose new threat models, characterize vulnerabilities in both text-only and multimodal LLMs, and develop principled defense strategies grounded in adversarial training, robust optimization, certified defense, data obfuscation, or secure model fine-tuning. Equally important are efforts that propose systematic evaluation protocols, scalable benchmarks, and toolkits for measuring LLM behavior under adversarial and real-world stress conditions. We are also interested in system-level studies that explore the deployment of LLMs on edge devices, federated environments, or distributed systems with strict privacy, latency, and compliance constraints. Moreover, we are also interested in how foundation models can be used for anomaly detection.

This special session seeks to unite researchers across machine learning theory, security, privacy, and AI systems design, providing a forum to exchange ideas, share tools, and establish best practices through a half-day long special session. With the combination of technical paper presentations, invited talks, panel discussions, and interactive sessions, ROSE-LLM will catalyze a robust dialogue around the future of secure language modeling.

Scope and topics:

Topics of interest include, but are not limited to:

  • Threats and Vulnerabilities in LLMs
  • Efficient Fine-tuning of LLMs
  • Robustness of LLMs
  • Defense Mechanisms for LLMs
  • Evaluation Frameworks and Benchmarks for LLM Robustness
  • Analysis of Cost and Resource Efficiency of LLMs
  • LLMs Deployment on edge devices, federated environments, or distributed systems
  • Multimodal LLM Security and Robustness
  • Scalable and Trustworthy LLM Applications
  • Adversarial Attacks on Instruction-Tuned and Open-Weight LLMs
  • Anomaly Detection with Foundation Models
  • Trustworthiness and reliability of foundation models in critical anomaly detection applications
  • LLM Misuse and Social Impact Studies
  • Applications and Case Studies on Foundation Models
  • Limitations of Current Foundation Models in detecting Anomalies within Complex and Noisy Datasets
  • Future Directions for Anomaly Detection as Foundation Models
  • Best Practices, Policy, and Governance for LLM Security

Chairs:

  • Chair Emails

  • Dr. Ahmed Imteaj: imteaj@ieee.org
    Dr. M. Hadi Amini: moamini@fiu.edu

  • Chair Biographies
  • Dr. Ahmed Imteaj is a tenure-track Assistant Professor of Electrical Engineering and Computer Science department at Florida Atlantic University, where he leads the Security, Privacy, and Intelligence for Edge Devices Lab (SPEED Lab). Dr. Imteaj received the NSF CRII Grant Award, US DHS Grant, ORAU Research Innovation Partnership Grant, 2024 Outstanding Teacher of the Year award, nominated for 2025 SIU Rising Star Faculty Award and Early Career Faculty Excellence Award. Dr. Imteaj earned his Ph.D. in Computer Science from Florida International University in 2022, recognized with the prestigious FIU Real Triumph Graduate Award. During his time at FIU, he also earned his M.Sc. degree, recognized with the Outstanding Master's Degree Graduate Award. He has a B.Sc. degree in Computer Science and Engineering. More details are available here .

    Dr. M. Hadi Amini: Details are available here .

Technical Committee

  • Dr. Zhen Ni, Florida Atlantic University
  • Dr. Musfiqur Sazal, Oak Ridge National Lab
  • Dr. Hasib-Al Rashid, Amazon Web Services (AWS)
  • Dr. Minhaj Alam, University of North Carolina Charlotte
  • Dr. Abdur R Shahid, Southern Illinois University
  • Dr. Khaled Mohammed Saifuddin, Northeastern University
  • Dr. Nur Imtiazul Haque, University of Cincinnati
  • Dr. Deepti Gupta, Texas A&M University - Central Texas
  • Dr. Alvi Ataur Khalil, Florida International University
  • Dr. Adnan Maruf, Missouri State University
  • Dr. Khandaker Mamun Ahmed, Dakota State University

Paper Submission Instructions

All papers will be double-blind reviewed and must present original work.

  • CMT Submission Site
  • Select the track: Special Session 4: ROSE-LLM: Robustness and Security of Large Language Models

Papers submitted for reviewing should conform to IEEE specifications. Manuscript templates can be downloaded from:

  • IEEE website

Keydates

  • Submission due date: August 20, 2025
  • Notification of Acceptance: September 10, 2025
  • Camera Ready Papers: September 20, 2025
  • Pre-registration: September 20, 2025
  • Conference: December 3-5, 2025

Registration

In order for your paper to be presented and published in the proceedings, you must register to the conference.

Paper Presentation Instructions

The papers submitted to this track will be presented in person as part of the conference. There is no virtual presentation for this session.

Note: More details about this special session can be explored here.





ICMLA'25