
14th International Workshop on Biometrics and Forensics

Côte d’Azur, EURECOM, April 23–24, 2026.

Keynote Speakers

IWBF 2026 will feature keynote talks by leading experts at the intersection of biometrics, forensics, security, and AI. Below are the confirmed keynote speakers.

Prof. Didier Meuwly – Keynote Speaker

Prof. Didier Meuwly

Keynote title: AI in Forensic Biometrics: the two sides of the medal

Abstract: The forensic process supports several forensic biometric applications: intelligence, investigation and evaluation in court. It aims to answer forensic questions about the source of biometric items and about the reconstruction of activities (e.g. type, location, time and sequence) involving biometric items. Concretely, the process consists of examining (i.e. recovering, analysing and interpreting) biometric items, and of answering the questions by reporting the observations made and the opinions formulated. The rapid development and widespread use of artificial intelligence in society also affects forensic biometrics. This keynote lecture presents the generative (GenAI) and discriminative (DisAI) applications derived from artificial intelligence models that are playing a role in forensic biometrics. On one side, it explains the use of GenAI to produce artificial forensic biometric items (e.g. morphing, deepfakes). On the other side, it describes the use of GenAI and DisAI in the recovery, analysis, interpretation and reporting steps of the forensic biometric process. Finally, it explores the accountability requirements for a forensic biometric process embedding GenAI and DisAI, from a scientific, legal and societal perspective.

Short bio: Didier shares his time between the Netherlands Forensic Institute, where he is a principal scientist, and the University of Twente, where he holds the chair of forensic biometrics. He specialises in the probabilistic evaluation of biometric data, the validation of likelihood ratio models and identity management. Didier has served in several international terrorist and war-crime case reviews at the request of the ICTY, the STL, the UN, the UK and Switzerland. He has (co-)authored more than 65 peer-reviewed publications in forensic science. He is an associate and guest editor of Forensic Science International, a member of the ENFSI R&D Standing Committee and of ISO Technical Committee 272.

Dr. John Howard – Keynote Speaker

Dr. John Howard

Keynote title: Fairness in Forensic Face Recognition: What We Get Wrong, Why It Happens, and How to Fix It

Abstract: To be announced soon.

Short bio: Dr. John Howard is a computer scientist specializing in the evaluation of artificial intelligence systems for bias and the impact of automated-system bias on human performance. He has served as principal investigator on numerous research and development efforts across industry and government, and is the former Chief Data Scientist at the U.S. Maryland Test Facility, a leading AI test and evaluation laboratory. He currently serves as editor of ISO/IEC 19795-10 and ISO/IEC 19795-6, the international standards for testing bias in biometric systems and evaluating the performance of deployed biometric technologies. Dr. Howard has acted as an expert witness in legal cases involving data-privacy regulations, patent disputes, and the forensic use of face recognition in the United States, and holds an academic appointment at Southern Methodist University’s AT&T Center.

Prof. Heather Zheng – Keynote Speaker

Prof. Heather Zheng

Keynote title: Navigating AI’s impact on Human Creativity and Identity

Abstract: Today’s generative AI models are proficient at producing artistic content that closely mimics the art styles of human creators. These models are being misused to replicate the unique styles of individual artists without consent or compensation, and to deceive individuals and businesses seeking to license or purchase authentic human-made art. In this talk, I will discuss the progress and challenges in protecting human creatives against such harms, and in distinguishing AI-generated content from artwork created by human artists. Finally, I will discuss the issue of limited transparency in model provenance and ways to address it.

Short bio: Heather Zheng is the Neubauer Professor of Computer Science at the University of Chicago. She received her PhD from the University of Maryland, College Park. Prior to joining the University of Chicago in 2017, she spent 6 years in industry labs (Bell Labs and Microsoft Research Asia) and 12 years as a faculty member at the University of California, Santa Barbara. At UChicago, she co-directs the SAND Lab (Security, Algorithms, Networking and Data) together with Prof. Ben Y. Zhao. She was one of MIT Technology Review’s Innovators Under 35 in 2005, and her research on cognitive radios was featured by MIT Technology Review as one of the 10 Emerging Technologies of 2006. More recently, her work on protecting human artists against unethical data exploitation received the USENIX Internet Defense Prize, the Chicago Innovation Award, a special mention in TIME Magazine’s Best Inventions of 2023, and the Community Impact Award from the Concept Art Association in 2024. She is a fellow of the ACM and IEEE, and has served on several editorial boards and steering committees for journals and conferences.