Artificial Intelligence (AI) systems are increasingly integrated into a wide range of domains, reshaping the landscape of information security. From synthetic content generation to intelligent system deployment, AI technologies now influence every layer of communication, data processing, and multimedia production. This evolution brings both opportunities and risks. On one hand, AI can enhance defenses by detecting hidden information, tracing content origins, and identifying covert channels; on the other hand, it can empower adversaries to design stealthier data-hiding methods, generate highly realistic steganographic media, and optimize side-channel attacks. Moreover, AI models themselves, as fundamental assets for companies, can be watermarked or targeted by data-exfiltration attacks.
In this context, the 1st Workshop on Hidden Information, Steganography, Privacy, and Emerging Risks (WHISPER 2026), co-located with ITASEC & SERICS, aims to bring together researchers and practitioners from both academia and industry to explore the intersection of AI, steganography, digital watermarking, and privacy. The workshop will focus on emerging risks, defensive and adversarial strategies, and the dual role of AI, as both a potential threat and a powerful safeguard, in the protection, tracing, and authentication of information.
Topics of Interest
Submissions are invited on all topics related to information hiding, watermarking, and privacy in the era of AI, including but not limited to:
- Watermarking schemes, usages, and challenges, including model and content watermarking for intellectual property protection.
- Novel cloaking mechanisms within models and/or datasets with robustness and fairness properties.
- New threat models leveraging information-hiding techniques.
- Secure watermarking of large language models, generative models, and traditional machine learning models.
- Deep learning solutions for detecting and generating covert communication.
- Adversarial attacks and defenses in data hiding and watermarking.
- Data-exfiltration attacks against AI models, including membership inference attacks and privacy leakage.
Submission Guidelines
Submissions must be written in English and provided in PDF format, adhering to the ITASEC conference EasyChair template. Manuscripts should be at most 10 pages in length, excluding references. We accept both previously published papers and original submissions presenting new ideas or preliminary results. All submissions will undergo peer review by at least two members of the Program Committee.
Important Dates
- Paper Submission: 5 December 2025
- Notification: 20 December 2025
- Camera-ready: 10 January 2026
- Workshop: 9 February 2026