LivDet-Iris 2025

Iris Liveness Detection Competitions



Competition Description




Summary



LivDet-Iris 2025 is the sixth edition of the iris liveness competition of the LivDet-Iris series. These competitions serve as evaluations of iris presentation attack detection (PAD) and are organized approximately every two to three years.

The main research questions to be answered by LivDet-Iris 2025 are: (a) the detection of unknown attacks (i.e., when the exact spoof type is not given to the algorithm) on iris recognition systems, (b) the effect of 'aging' on iris PAD algorithms trained on older datasets when detecting textured contact lenses produced more recently, and (c) the response of iris PAD methods to realistic-looking, ISO-compliant iris images retouched by modern generative models (e.g., blending features of two identities into one iris image). All three problems are among the most important research efforts related to the security of iris recognition systems. LivDet-Iris 2025 has been included in the official IJCB 2025 competition list.

A summary of all previous LivDet-Iris competitions is available as Chapter 7 of the new Handbook of Biometric Anti-Spoofing, and this IJCB 2023 paper presents the most recent LivDet-Iris 2023 competition.



Competition Parts



This competition has two parts; competitors may participate in either one part or both.



Part 1: "Algorithms-Independently-Tested" will involve the evaluation of software solutions (submitted to the organizers) on a large dataset of near-infrared, ISO-compliant iris images that represent either authentic irises or presentation attacks.



Part 2: "Systems" will involve the systematic testing of submitted iris recognition systems against physical artifacts presented to the sensors. Performance in each part will be analyzed independently, and a winner will be determined for each part: the algorithm and the system with the lowest error rates, respectively.



Part 1: Tasks



The algorithms submitted to Part 1 will be evaluated in three distinct tasks described below.



Task 1 -- "Industry Partner’s Tests": One of the industry partners (PayEye) will run all submissions on their sequestered dataset representing the most popular physical attacks observed and/or anticipated in the operational scenario of iris recognition-based payments. The dataset will not be released to the participants; however, the presentation attack instruments represented in the dataset are known: paper printouts, e-book reader presentations, other artifacts (artificial eyes, doll eyes, mannequin eyes, etc.), and samples synthesized by Generative Adversarial Networks.



Task 2 -- "Deep Learning-Aided Morphing": Submissions will be tested against a selection of morphed samples encompassing various classes of morphing.

The sequestered test dataset for this task is made by taking iris images from two different identities and splicing patches of the iris texture from one onto the other. The two images are selected to have similar pupil sizes and are preprocessed to match brightness and contrast. However, simply pasting patches of one iris into another could produce unrealistic boundaries. Thus, two techniques are used to increase the realism of the resulting morphed iris images: (i) alpha blending (the classical approach to handling unnatural boundaries when splicing two images), and (ii) deep-learning-based inpainting (the boundary regions are inpainted using models trained specifically to inpaint iris textures).
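As an illustration only (not the organizers' actual morphing pipeline), the alpha-blending step can be sketched as below; the feathering width and the random arrays standing in for normalized iris crops are assumptions made for the example:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def alpha_blend_patch(target, source, mask, feather=5):
    """Splice the `mask` region of `source` into `target`,
    feathering the patch boundary with a soft alpha ramp so the
    transition between the two iris textures is not abrupt."""
    # Alpha ramps linearly from 0 at the patch boundary to 1 at
    # `feather` pixels inside it; outside the mask alpha stays 0,
    # so the target image is kept unchanged there.
    dist = distance_transform_edt(mask)
    alpha = np.clip(dist / feather, 0.0, 1.0)
    return alpha * source + (1.0 - alpha) * target

# Toy example: blend a central square "texture patch" from one
# random image into another (stand-ins for aligned iris images).
rng = np.random.default_rng(0)
target = rng.random((64, 64))
source = rng.random((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
morph = alpha_blend_patch(target, source, mask)
```

Deep inside the patch the morph equals the source texture, outside the mask it equals the target, and only the thin feathered band mixes the two.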



Task 3 -- "Robustness of PAD to Advanced Manufacturing Methods of Textured Contact Lens Patterns": This task focuses on evaluating the robustness of iris PAD against modern manufacturing methods of textured contact lens patterns. As modern high-resolution printing, multi-layered texturing, and enhanced pigmentation techniques make textured lenses increasingly difficult to distinguish from natural irises, PAD methods trained on older datasets may struggle to maintain accuracy. This task aims to answer the question of where we are, as a community, in our readiness to detect the newest textured contact lenses, given the training datasets collected in the past.



Part 2: Evaluation Details



Laboratory staff will systematically attempt to spoof the system. The vendor shall indicate whether they are participating in the PAD-only evaluation and/or the full PAD + comparison evaluation; for each, the submitted device shall output the corresponding decision. The vendor may also output a score to allow further analysis.

The parameters adopted for the performance evaluation will be as follows:

  • There will be at least 500 bona fide attempts and 500 spoof attempts.
  • Bona fide attempts will be performed following best practices in iris image acquisition (such as those provided by NIST's IREX-V recommendations) to maximize the probability of acquiring a sample compliant with ISO/IEC 19794-6.
  • Spoof attempts will be performed in a way that makes the sensor produce an iris image.
  • Spoof types may include printed irises, displayed irises, and textured contact lenses. The two evaluations may include different spoof types.
  • A system's ability to not acquire a spoof will be considered a correct rejection of a spoof and counted as an attack presentation non-response event. A failure to acquire a live iris will be considered an incorrect rejection of a live iris and counted as a failure-to-acquire event. The attack presentation non-response rate (APNRR) and the failure-to-acquire rate (FTA) will be considered when computing the overall average and determining the winner.
  • If the system is able to generate a liveness score for the presented live iris or spoof, a stored image should be supplied; however, this is not mandatory. If the system cannot detect that a live iris or spoof has been presented to the device, no image can be stored, and a non-response event will be recorded.
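Under the assumption that the event types above are simply tallied per attempt, the two acquisition-related rates reduce to plain ratios; a minimal sketch (the counts are hypothetical):

```python
def acquisition_rates(attack_nonresponses, attack_attempts,
                      live_failures, live_attempts):
    """APNRR and FTA as ratios of tallied acquisition events.

    APNRR: fraction of spoof attempts for which the sensor produced
           no image (a correct rejection of a spoof).
    FTA:   fraction of bona fide attempts for which the sensor
           failed to acquire an image (an incorrect rejection).
    """
    apnrr = attack_nonresponses / attack_attempts
    fta = live_failures / live_attempts
    return apnrr, fta

# With the competition's minimum of 500 attempts per class:
apnrr, fta = acquisition_rates(450, 500, 5, 500)
```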



Evaluation Metrics and Winner Selection



Part 1: All submissions will be evaluated using the metrics recommended by ISO/IEC 30107-3: APCER (Attack Presentation Classification Error Rate) and BPCER (Bona Fide Presentation Classification Error Rate). The Area Under the ROC Curve (AUROC), where the ROC is built from APCER and 1-BPCER scores, will be used to provide performance estimates for each task. The closer the AUROC is to 1.0, the better the algorithm. In the IJCB paper we also plan to report APCER and BPCER on the test sets for a fixed acceptance threshold of 0.5, to assess the generalization of the submitted algorithms to unknown data without the possibility of fine-tuning the acceptance threshold.
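A minimal sketch of how these metrics can be computed; the score/label convention here (labels of 1 for bona fide and 0 for attack, liveness scores in [0, 1] with higher meaning more likely bona fide) is an assumption made for the example, not a statement of the competition's submission format:

```python
import numpy as np

def pad_metrics(scores, labels, threshold=0.5):
    """APCER and BPCER at a fixed threshold, plus AUROC.

    Assumed convention: `labels` is 1 for bona fide, 0 for attack;
    a sample is accepted as bona fide when score >= threshold.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    attacks = scores[labels == 0]
    bona = scores[labels == 1]
    apcer = np.mean(attacks >= threshold)   # attacks wrongly accepted
    bpcer = np.mean(bona < threshold)       # bona fides wrongly rejected
    # AUROC via the Mann-Whitney statistic: the probability that a
    # random bona fide sample outscores a random attack sample
    # (ties counted as 0.5), equivalent to the area under the ROC.
    diff = bona[:, None] - attacks[None, :]
    auroc = np.mean((diff > 0) + 0.5 * (diff == 0))
    return apcer, bpcer, auroc

# Perfectly separated toy scores: two bona fides, two attacks.
apcer, bpcer, auroc = pad_metrics([0.9, 0.8, 0.1, 0.2], [1, 1, 0, 0])
```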

Since all three tasks in Part 1 are different, we plan to announce one winner for each task. The winning algorithm will be the one demonstrating the largest AUROC for that particular task. Multiple submissions (addressing one or more tasks) from one team/institution are allowed and welcome. The same team/institution may be a winner in one, two, or all three tasks.

Part 2: In the full-system tests (iris PAD + comparison), the winning system will be the one with the lowest RIAPAR (Relative Impostor Attack Presentation Accept Rate), which is the sum of FRR (False Rejection Rate) and IAPAR (Impostor Attack Presentation Accept Rate). FRR will be obtained during genuine live presentations to the sensor. IAPAR will be obtained through systematic impersonation tests in which true identities will be mimicked by irises displayed on a Kindle and printed on paper. In the PAD-only tests, the winning system will be the one with the lowest sum of BPCER and APCER.
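The full-system ranking criterion is just the sum of two rates; a sketch with hypothetical counts as inputs:

```python
def riapar(genuine_rejected, genuine_attempts,
           attacks_accepted, attack_attempts):
    """RIAPAR = FRR + IAPAR, the Part 2 full-system criterion.

    FRR:   fraction of genuine live presentations falsely rejected.
    IAPAR: fraction of impostor attack presentations (e.g., irises
           shown on a Kindle or printed on paper) falsely accepted
           as the mimicked identity.
    """
    frr = genuine_rejected / genuine_attempts
    iapar = attacks_accepted / attack_attempts
    return frr + iapar

# Hypothetical counts: 5 of 500 genuine attempts rejected,
# 10 of 500 impersonation attempts accepted.
score = riapar(5, 500, 10, 500)
```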



Important Dates




  • May 31, 2025: Participants’ deadline for algorithm submissions, along with (optional) short descriptions of methods from non-anonymous participants who wish to co-author the IJCB paper.
  • June 7, 2025: The evaluation of submissions completed.
  • June 14, 2025: Official competition results ready and announced. First version of the paper draft ready and circulated to co-authoring participants.
  • June 23, 2025: The paper summarizing the competition submitted to IJCB.


How to Participate



To participate in Part 1 -- "Algorithms-Independently-Tested":
  • Register for the competition by filling out this Google Form. A small sample of images demonstrating the image format in the sequestered test data will be sent to the registered participants.
  • (Optional) Execute the database license agreements for datasets used in past LivDet-Iris competitions. Upon correct execution of the license agreements, you will obtain instructions from the institution redistributing the data on how to download a copy of the dataset(s).
  • Train algorithm(s) with training data of your choice.
  • Prepare the submission package following the instructions in this LivDet-Iris GitHub.
  • Submit your package via this Google Form.



To participate in Part 2 -- "Systems":
  • Register for the competition by filling out this Google Form. A small sample of images demonstrating the image format in the sequestered test data will be sent to the registered participants.
  • (Optional) Execute the database license agreements for datasets used in past LivDet-Iris competitions. Upon correct execution of the license agreements, you will obtain instructions from the institution redistributing the data on how to download a copy of the dataset(s).
  • Ship the hardware with the installation instructions to the nearest academic partner (University of Notre Dame or University of North Carolina - Charlotte) (please contact us at contact@livdet-iris.org to discuss the shipping details). It should be possible to install the submitted system on a clean Windows or Linux system, or operate the system in a standalone way. The equipment made available to the organizers will be returned to the participants right after the competition is concluded.


Winners and IJCB Paper Summary


Winner: To be announced on June 14, 2025

Paper Summary: Link to a pre-print to be posted here after the paper draft is ready.



Organizers


Academic partners

University of Notre Dame, IN, USA
Dr. Adam Czajka
Dr. Kevin Bowyer
Siamul Karim Khan
Mahsa Mitcheff
Samuel Webster

University of North Carolina – Charlotte, USA
Dr. Stephanie Schuckers

Clarkson University, NY, USA
Afzal Hossain

NASK -- National Research Institute, Warsaw, Poland
Ewelina Bartuzi-Trokielewicz
Alicja Martinek
Adrian Kordas