Figure 3. More examples of synthetic irides generated using different Generative Adversarial Networks (GANs) for partially and fully synthetic iris images.
• Discuss future directions for overcoming the challenges of current methods in order to generate enhanced synthetic iris datasets.

2. Iris Recognition

The foundational technology behind modern automated iris recognition systems can be traced back to the work of John Daugman [20], who is credited with the development of the core algorithms that make such systems possible. Daugman's work leverages the distinct patterns found
Table 1. Examples of iris datasets captured in different spectra using various sensors, illustrating the limited number of samples and subjects in existing datasets.
Dataset Name | Devices | Spectrum | Number of Images (Subjects)
CASIA-Iris-Thousand [4] | Iris scanner (Irisking IKEMB-100) | NIR | 20,000 (1,000)
CASIA-Iris-Interval [4] | CASIA close-up iris camera | NIR | 2,639 (249)
CASIA-Iris-Lamp [4] | OKI IRISPASS-h | NIR | 16,212 (411)
CASIA-Iris-Twins [4] | OKI IRISPASS-h | NIR | 3,183 (200)
CASIA-Iris-Distance [4] | CASIA long-range iris camera | NIR | 2,567 (142)
ICE-2005 [1] | LG2200 | NIR | 2,953 (132)
ICE-2006 [2] | LG2200 | NIR | 59,558 (240)
IIITD-CLI [44] | Cogent and VistaFA2E single iris sensor | NIR | 6,570 (240)
UBIRIS v1 [64] | Nikon E5700 | VIS | 1,877 (241)
UBIRIS v2 [63] | Canon EOS 5D | VIS | 11,102 (261)
MILES [24] | MILES camera | VIS | 832 (50)
MICHE DB [92] | iPhone 5, Samsung Galaxy (IV + Tablet II) | VIS | 3,732 (184)
CSIP [72] | Xperia Arc S, iPhone 4, THL W200, Huawei Ideos X3 | VIS | 2,004 (100)
WVU BIOMDATA [15] | Irispass | NIR | 3,099 (244)
IIT-Delhi Iris Dataset [5] | JIRIS, JPC1000 and digital CMOS camera | NIR | 1,120 (224)
CASIA-BTAS [92] | CASIA Module v2 | NIR | 4,500 (300)
IIITD Multi-spectral Periocular [75] | Cogent Iris Scanner | NIR, VIS, Night Vision | 1,240 (62)
CROSS-EYED [73] | Dual Spectrum Sensor | NIR, VIS | 11,520 (240)
in the human iris for secure and accurate human recognition. There have been significant improvements on his initial work for various security, identification and privacy applications [68, 50, 51]. Iris recognition can be divided into four stages:

• Iris Segmentation: Images in most iris datasets do not contain only the iris itself but also regions in its vicinity, such as the pupil, sclera and eyelashes. So, the first step towards iris recognition is to segment the iris from the captured image in order to remove unnecessary or extraneous information. Most of the initial segmentation approaches, including Daugman's [21], involve identifying the pupil and iris boundaries. In traditional approaches, the occlusions due to eyelashes and eyelids are minimized by edge detection and curve-fitting techniques.

• Normalization: Post segmentation, the variations in the size of segmented irides (caused by distance from the sensor, or contraction and dilation of the pupil) are minimized via geometric normalization, where annular irides are unwrapped to a fixed-size rectangular image.

• Feature Encoding: Iris features are extracted from the geometrically normalized iris image and encoded so that they can be used for matching. The most common techniques for iris feature extraction involve Gabor filtering [21], BSIF [18], etc., which help capture the unique textural properties of the iris.

• Matching: Once features are encoded, various matching algorithms can be used for iris recognition. Daugman [21] used Gabor phase information to encode the iris and the Hamming distance to compare the encoded irides. These have been improved over time to account for variations in image quality, occlusion and noisy data.

For a detailed explanation of iris recognition, we recommend referring to [9], [20], [59].
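To make the normalization and matching stages concrete, below is a minimal sketch of Daugman-style rubber-sheet unwrapping and fractional Hamming-distance matching. It is an illustration rather than any surveyed system's implementation; the function names, the linear boundary interpolation, and the decision threshold mentioned in the comment are simplifying assumptions.

```python
import numpy as np

def unwrap_iris(image, cx, cy, r_pupil, r_iris, out_h=64, out_w=512):
    """Rubber-sheet normalization: map the annular iris region between
    the pupil boundary and the iris boundary to a fixed-size rectangle
    (rows = radial position, columns = angle)."""
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    out = np.zeros((out_h, out_w), dtype=image.dtype)
    for i, r in enumerate(np.linspace(0, 1, out_h)):
        radius = (1 - r) * r_pupil + r * r_iris  # interpolate between boundaries
        xs = (cx + radius * np.cos(thetas)).astype(int).clip(0, image.shape[1] - 1)
        ys = (cy + radius * np.sin(thetas)).astype(int).clip(0, image.shape[0] - 1)
        out[i, :] = image[ys, xs]
    return out

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits that are valid (unoccluded) in both masks.
    Distances below roughly 0.32 are commonly treated as a match."""
    valid = mask_a & mask_b
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / max(int(valid.sum()), 1)
```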
With recent developments in the field of deep learning, deep networks have found application in all four stages of iris recognition. Iris segmentation and feature extraction have particularly benefited from deep learning-based approaches, as they often handle noise and complexities in iris datasets better than traditional approaches. Nguyen et al. [59] studied the performance of pre-trained convolutional neural networks (CNNs) in the domain of iris recognition. The study reveals that features derived from off-the-shelf CNNs can efficiently capture the complex characteristics of irides. These features are adept at isolating distinguishing visual attributes, which leads to encouraging outcomes in iris recognition performance. While the progress in the field of deep learning has helped improve the reliability of iris recognition systems by improving their performance under various conditions, the lack of iris data with sufficient inter- and intra-class variations limits the training and testing of these systems. Therefore, we need to explore the field of generative AI to generate synthetic iris datasets with sufficient inter- and intra-class variations to help train and test robust iris recognition systems.

3. Iris Presentation Attack Detection

Presentation attack detection (PAD) is an essential component of iris recognition systems. As the reliance on iris recognition systems grows, so does the sophistication of attacks designed to exploit them. Here, we briefly examine the nature of iris presentation attacks, the methodologies developed to detect them, and the challenges faced in enhancing iris PAD systems.
Figure 4. Examples of bonafide and PA iris images from MSU-IrisPA-01 [87]: (a) bonafide samples and (b) presentation attacks: (i) artificial eye, (ii) & (iii) printed eyes, (iv) Kindle display and (v) cosmetic contact lens [87, 88].
Iris presentation attacks (PAs), sometimes known as spoofs, refer to physical artifacts that aim to impersonate someone, obfuscate one's identity, or create a virtual identity (see Figure 4). Several types of presentation attacks have been considered in the literature:

• Print Attack: One of the simplest forms of iris PA involves the attacker presenting a high-quality photograph of a valid subject's iris to the biometric system. Basic systems might be misled by the photograph's visual fidelity unless they are designed to detect the absence of depth or natural eye movements.

• Artificial Eyes: Attackers may employ high-grade artificial (prosthetic or doll) eyes that replicate the iris's texture and three-dimensionality. These artificial eyes seek to deceive scanners that are not sophisticated enough to discern liveness indicators such as the pupil's response to light stimuli.

• Cosmetic Contact Lens: A more nuanced approach involves the use of cosmetic contact lenses that have been artificially created with iris patterns that can either conceal the attacker's true iris or mimic someone else's iris. This type of attack attempts to bypass systems that match iris patterns by introducing false textural elements.

• Replay Attack: Playing back a video recording of a bonafide iris to the sensor constitutes another PA. Advanced iris recognition systems counter this by looking for evidence of liveness, like blinking or involuntary pupil contractions.

Researchers have proposed different methods to effectively detect different types of PAs. The authors of [31, 44] proposed utilizing textural descriptors like GIST, LBP and HOG to detect printed eyes and cosmetic contact lenses. Similarly, Raghavendra and Busch [66] utilized cepstral features with binarized statistical image features (BSIF) to distinguish between bonafide irides and print attacks. Another way to detect print attacks is a liveness test, which printed eyes fail [17, 39]. Liveness tests can also be helpful in detecting attacks like artificial eyes. Eye-gaze tracking [48] and multi-spectral imaging [12] have shown good results in detecting printed eyes and artificial eyes. The authors of [33, 54] proposed deep network-based PA detection methods to detect different types of PAs. To achieve this, Hoffman et al. [33] proposed a deep network that utilizes patch information with a segmentation mask to learn features that can distinguish bonafide irides from iris PAs.

While these iris PA detection methods perform well on various datasets, attackers are continuously finding new ways to bypass them, leading to an arms race between security experts and attackers. As a result, PAD methods need to be constantly updated (re-trained or fine-tuned) and tested against the latest forms of attacks. This calls for PA detection methods that can generalize well over new (or unseen) PAs without the hassle of re-training or fine-tuning. Here, "seen PAs" are those to which the PAD methods have been exposed during the training phase. In contrast, "unseen PAs" are not included in the training phase, posing a concerning challenge for accurate PA detection. Recent developments in PAD methods have focused on enhancing the ability of systems to generalize, distinguishing bonafide irides from PAs even when encountering previously unseen PAs. Gupta et al. [30] proposed a deep network called MVANet, which uses multiple convolutional layers for generalized PA detection. This network not only improves PA detection accuracy, but also addresses the high computational costs typically associated with training deep neural networks by using a simplified base model structure. Evaluations across different databases indicate MVANet's proficiency in generalizing to detect new and unseen PAs. In [76], Sharma and Ross proposed D-NetPAD, a PAD method based on DenseNet that generalizes over seen and unseen PAs. It has demonstrated a strong ability to generalize across diverse PAs, sensors, and data collections. Their rigorous testing confirms D-NetPAD's robustness in detecting generalized PAs [76].

Most PAD methods formulate PA detection as a binary-class problem (see the sketch below), which demands the availability of a large collection of both bonafide and PA samples to train classifiers. However, obtaining a large number of PA samples can be much more difficult than obtaining bonafide iris samples. Further, classifiers are usually trained and tested on similar PAs, but PAs encountered in operational systems can be diverse in nature and may not be available during the training stage. Therefore, we need to explore generative methods to produce partially synthetic iris images (as identity is not the focus in PA detection) that can help build balanced iris PA datasets. This will help researchers better train and test their PAD methods.
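To illustrate the binary formulation referenced above, here is a minimal, hypothetical sketch of a texture-feature-plus-SVM PAD pipeline evaluated at the TDR at 1% FDR operating point used later in Table 3. The gradient-magnitude histogram is a toy stand-in for descriptors such as BSIF or LBP, and the random arrays merely take the place of real bonafide and PA images.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_curve

def texture_histogram(img, bins=64):
    """Toy texture descriptor (stand-in for BSIF/LBP/HOG):
    histogram of local gradient magnitudes."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, mag.max() + 1e-6))
    return hist / hist.sum()

# Placeholder data: X = iris images, y = 1 for bonafide, 0 for PA.
rng = np.random.default_rng(0)
X = rng.random((200, 64, 64))
y = rng.integers(0, 2, 200)

feats = np.stack([texture_histogram(img) for img in X])
clf = SVC(kernel="rbf", probability=True).fit(feats, y)

# True Detection Rate at 1% False Detection Rate.
scores = clf.predict_proba(feats)[:, 1]
fpr, tpr, _ = roc_curve(y, scores)
idx = min(np.searchsorted(fpr, 0.01), len(tpr) - 1)
tdr_at_1pct_fdr = tpr[idx]
```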
4. Generating Synthetic Irides

As mentioned earlier, synthetic iris images offer several advantages, including scalability, diversity, and control over the generated data. Some of the methods used to generate such images are listed below, categorized on the basis of the method used:

• Texture Synthesis: This technique has been widely used for generating synthetic iris images. These methods analyze the statistical properties of real iris images and generate new images based on those statistics. Shah and Ross [74] proposed an approach for generating digital renditions of iris images using a two-step technique. In the first stage, they utilized a Markov Random Field model to generate a background texture that accurately represents the global appearance of the iris. In the subsequent stage, various iris features, including radial and concentric furrows, collarette, and crypts, are generated and seamlessly embedded within the texture field. In another example, Makthal and Ross [52] introduced an approach for synthetic iris generation using Markov Random Field (MRF) modeling. The proposed method offers a deterministic synthesis procedure, which eliminates the need for sampling a probability distribution and simplifies computational complexity. Additionally, the study highlights the distinctiveness of iris textures compared to other non-stochastic textural patterns. Through clustering experiments, it is demonstrated that the synthetic irides generated using this technique exhibit content similarity to real iris images. In a different approach, Wei et al. [83] proposed a framework for synthesizing large and realistic iris datasets by utilizing iris patches as fundamental elements to capture the visual primitives of iris texture. Through patch-based sampling, an iris prototype is created, serving as the foundation for generating a set of pseudo-irises with intra-class variations. Qualitative and quantitative analyses demonstrate that the synthetic datasets generated by this framework are well-suited for evaluating iris recognition systems.

• Morphable Models: Morphable models have been utilized for generating synthetic iris images by capturing the shape and appearance variations in a statistical model. These models represent the shape and texture of irises using a low-dimensional parameter space. By manipulating the parameters, synthetic iris images with different characteristics, such as size, shape, and texture, can be generated. Most of the research in this category focuses on generating synthetic iris images with gaze estimation and rendering eye movements. Wood et al. [84] proposed a 3-D morphable model for the eye region with gaze estimation and re-targeting of gaze using a single reference image. Similarly, [7] focuses on achieving photo-realistic rendering of eye movements in 3D facial animation. The model is built upon 3D scans of a face captured from various gaze directions, enabling the capture of realistic motion of the eyeball, eyelid deformation, and the surrounding skin. To represent these deformations, a 3D morphable model is employed.

• Image Warping: Image warping techniques involve applying geometric transformations to real iris images to generate synthetic images. These transformations can include rotations, translations, scaling, and deformations. Image warping allows for the generation of synthetic iris images with variations in pose, gaze direction, and occlusions. In [11], Cardoso et al. aimed to generate synthetic degraded iris images for evaluation purposes. The method utilizes various degradation factors such as blur, noise, occlusion, and contrast changes to simulate realistic and challenging iris image conditions. The degradation factors are carefully controlled to achieve a realistic representation of degraded iris images commonly encountered in real-world scenarios. In [16], a method combining principal component analysis (PCA) and super-resolution techniques is proposed. The study begins by introducing an iris recognition algorithm based on PCA, followed by the presentation of the iris image synthesis method. The proposed synthesis method involves the construction of coarse iris images using predetermined coefficients. Subsequently, super-resolution techniques are applied to enhance the quality of the synthesized iris images. By manipulating the coefficients, it becomes possible to generate a wide range of iris images belonging to specific classes.

• Generative Adversarial Networks (GANs): GANs have gained significant attention for generating realistic and diverse synthetic iris images. In a GAN framework, a generator network learns to generate synthetic iris images, while a discriminator network distinguishes between real and synthetic images. The two networks are trained in an adversarial manner, resulting in improved image quality over time (a minimal training-step sketch is given after this list). GANs can generate iris images with realistic features, including iris texture, color, and overall appearance. Minaee and Abdolrashidi [55] proposed a framework that utilizes a generative adversarial network (GAN) to generate synthetic iris images sampled from a learned prior distribution. The framework is applied to two widely used iris datasets, and the generated images demonstrate a high level of realism, closely resembling the distribution of images within the original datasets. Similarly, Kohli et al. [45] proposed iDCGAN (iris Deep Convolutional Generative Adversarial Network), a novel framework that leverages deep convolutional generative adversarial networks and iris quality metrics to generate synthetic iris images that closely resemble real iris images. Bamoriya et al. [6] proposed a novel approach, called Deep Synthetic Biometric GAN (DSB-GAN), for generating realistic synthetic biometrics that can serve as large training datasets for deep learning networks, enhancing their robustness against adversarial attacks.
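The following self-contained sketch illustrates the adversarial training step described in the last item above. The single-layer generator and discriminator are placeholders (actual iris GANs use deep convolutional architectures), so the shapes and hyperparameters are illustrative assumptions only.

```python
import torch
from torch import nn

G = nn.Sequential(nn.Linear(100, 256 * 256), nn.Tanh())   # noise -> "image"
D = nn.Sequential(nn.Flatten(), nn.Linear(256 * 256, 1))  # "image" -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):  # real: (B, 256*256) batch of flattened iris images
    z = torch.randn(real.size(0), 100)
    fake = G(z)
    # Discriminator update: push real logits toward 1, fake toward 0.
    # The fake batch is detached so only D's weights change here.
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: try to make D label the fakes as real.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```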
Currently, GAN-based methods for generating synthetic biometrics have proven to be far superior at capturing the intricate details of various biometric cues. Therefore, in the remainder of the paper we will mainly focus on these methods and the iris images they generate for our study and analysis.
4.1. Partially Synthetic Irides

Partially-synthetic data refers to synthetic samples that contain artificial components mixed with real biometric data. In this approach, certain aspects or attributes of the biometric data are synthetically generated, while other parts are derived from real individuals (see Figure 5). The goal of partially-synthetic data is to introduce controlled variations or augmentations to the real data, thereby increasing the diversity and robustness of the dataset. This can be particularly useful in scenarios where the real data is limited, imbalanced, or lacks specific variations. For example, in iris presentation attack (PA) detection, where the detection methods aim to detect PAs (such as printed eyes, cosmetic contact lenses, etc.), limited PA data is available to train the detection methods, which can limit their development and testing. Also, with improvements in technology, more advanced PAs are appearing in the real world (such as good-quality textured contact lenses, replay attacks using high-definition screens, etc.), and current detection methods are not generalized enough to detect these new and unseen attacks. As mentioned earlier, Kohli et al. [45] proposed iDCGAN, which utilizes a deep convolutional generative adversarial network to generate synthetic iris images that are realistic looking and closely resemble real iris images. This framework aims to explore the impact of the synthetically generated iris images when used as presentation attacks on iris recognition systems. In [6], Bamoriya et al. proposed a novel approach, DSB-GAN, which is built upon a combination of a convolutional autoencoder (CAE) and DCGAN; the evaluation of DSB-GAN is conducted on three biometric modalities: fingerprint, iris, and palmprint. One of the notable advantages of DSB-GAN is its efficiency due to a low number of trainable parameters compared to existing state-of-the-art methods. Yadav et al. [87, 88] leveraged RaSGAN to generate high-quality partially synthetic iris images in the NIR spectrum and evaluated the effectiveness and usefulness of these images as both bonafide samples and presentation attacks. They also proposed a novel one-class presentation attack detection method known as RD-PAD for unseen presentation attack detection, addressing the challenge of generalizability in PAD algorithms. Zou et al. [97] proposed 4DCycle-GAN, which is designed to enhance databases of iris PA images by generating synthetic iris images with cosmetic contact lenses. Building upon the Cycle-GAN framework, the 4DCycle-GAN algorithm stands out by adding two discriminators to the existing model to increase the diversity of the generated images. These additional discriminators are engineered to favor the images from the generators rather than those from real-life captures. This approach reduces the bias towards generating repetitive textures of contact lenses, which typically make up a significant portion of the training data. In [89], Yadav and Ross proposed to generate bonafide images as well as different types of presentation attacks in the NIR spectrum using a novel image-translative GAN known as CIT-GAN. The proposed architecture translates the style of one domain to another using a styling network to generate realistic, high-resolution iris images.

4.2. Fully Synthetic Irides

Fully-synthetic biometric data refers to entirely artificial biometric samples that do not correspond to any real individuals in the training data. These synthetic samples are created using mathematical models, statistical distributions, or generative algorithms to simulate the characteristics of real biometric data (see Figure 6). Some of the texture-based methods focus on generating new iris identities (fully-synthetic) that are distinct from the training samples. These methods aim to generate new iris images with both inter- and intra-class variations to help mitigate the issue of small training data size by increasing the size of the dataset. This can help in the development of recognition systems and their testing. Also, by generating fully-synthetic identities that do not correspond to real identities, we can address the privacy concerns associated with using a real person's biometric data. Wang et al. [81] proposed a novel algorithm for generating diverse iris images, enhancing both the variety and the number of images available for analysis. The technique employs contrastive learning to separate features tied to identity (like iris texture and eye orientation) from those that change with conditions (such as pupil
Figure 5. Examples of partially-synthetic iris PAs generated using CIT-GAN [90]: (a) printed eyes, (b) artificial eyes and (c) cosmetic contact lenses.
size and iris exposure). This separation allows for precise identity representation in synthetic images. The algorithm uniquely processes iris topology and texture through a dual-channel input system, enabling the generation of varied iris images that retain specific texture details. Yadav and Ross [90] proposed iWarpGAN, which aims to disentangle identity and stylistic elements within iris images. It achieves this through two distinct pathways: one that transforms identity features from real irides to create new iris identities, and another that captures style from a reference image to infuse it into the output. By merging these modified identity and style elements, iWarpGAN can produce iris images with a wide range of inter- and intra-class variations. Limited work has been done in this category to generate irides with identities that do not match any identity in the training data. This is an important and upcoming topic that needs more focus.

5. Generative Adversarial Networks (GANs)

A generative adversarial network (GAN) consists of two networks: a generator and a discriminator, which are trained simultaneously through adversarial learning. The generator network aims to generate data, such as images, audio, or text, that is indistinguishable from real data, while the discriminator aims to differentiate between real and generated data. In this study, we have utilized five different types of GANs that have been shown in the literature to generate realistic-looking iris images:

• Relativistic Average GAN (RaSGAN): RaSGAN [36] aims to overcome the shortcomings of traditional GANs by introducing a relativistic discriminator. In standard GANs, the discriminator's objective is to distinguish between real and fake samples, while the generator's objective is to generate samples that are indistinguishable from real ones. In RaSGAN, however, both the generator and discriminator are trained to consider the relative likelihood that a real sample is more realistic than a synthetic one and vice-versa. The aim is to provide the discriminator with feedback not only on the real and synthetic samples themselves but also on how realistic each sample is with respect to the other, thus improving the stability and convergence properties of the GAN by providing more informative gradients to both the generator and discriminator (a sketch of this loss appears after this list). RaSGANs have been shown to produce realistic synthetic samples across various domains, including images [36, 87], text [58], and audio [93, 57].

• StarGAN-v2: Choi et al. [14] proposed a multi-domain image-translative GAN known as StarGAN-v2 that aims to generate realistic-looking images in multiple domains. The domains here refer to styles such as hair color, facial expression, etc. The generator in StarGAN-v2 takes a source image and a reference style code generated by a mapping network as inputs, and translates the source image into a synthetic image from the target domain exhibiting the same style properties as the reference style code. This method showed improvement for multi-domain synthetic image generation when compared with StyleGAN [42] and StarGAN [13].

• Cyclic Image Translative GAN (CIT-GAN): Yadav and Ross [88] proposed CIT-GAN to generate different types of iris PAs as well as bonafide images. In this method, the generator aims to translate a given
Figure 6. Some examples of fully-synthetic irides generated using iWarpGAN with inter- and intra-class variations [90]. SScore here refers to the similarity score between two iris images computed using VeriEye.
input image into an image from the reference domain. This translation is facilitated by a Style Network that takes a reference image as input and actively learns to generate a style code that indicates the style properties of the reference domain. In comparison to StarGAN-v2, which uses a mapping network to generate a style code from a random noise vector, this method is able to learn the intra-class variations that are present in the multiple domains, thereby improving the diversity of the generated images for each domain. The "cyclic" term here refers to the cycle-consistency loss that is utilized to ensure that the translated image can be translated back to the original source domain. This helps ensure that, while translating the image from the source domain to the reference domain, underlying characteristics of the input image such as iris-pupil shape and size are not changed.

• StyleGAN-3: Karras et al. first proposed StyleGAN [42] for good-quality synthetic image generation, with some improvements in StyleGAN-2 [40] to handle multi-domain image generation, but these approaches lack diversity in the generated images and generalization capability of the network. In [41], Karras et al. proposed StyleGAN-3, which overcomes these shortcomings by introducing adaptive discriminator augmentation (ADA), which aims at improving the generalization capability of the discriminator network. ADA dynamically adjusts the strength of data augmentation during discriminator training, effectively making the discriminator more robust to diverse variations in the training data.

• iWarpGAN: In [90], Yadav and Ross proposed iWarpGAN, which aims to generate fully synthetic iris images, i.e., the identities in the generated iris images do not resemble any identity seen during training. This is achieved by disentangling identity and style using two transformation pathways: (1) the style transformation pathway, which aims to generate iris images with styles extracted from a reference image without altering the identity, and (2) the identity transformation pathway, which aims to transform the identity of the input image in the latent space to generate identities different from the input image. This method can generate iris images with identities distinct from the training data, generate multiple images per newly generated identity, and scale to hundreds of thousands of identities.
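As a concrete reference for the relativistic formulation described in the RaSGAN entry above, the sketch below implements the relativistic average losses under the usual BCE-with-logits parameterization. It is a minimal illustration, not the surveyed implementation; for the discriminator update, the fake logits are expected to come from detached generator outputs.

```python
import torch
import torch.nn.functional as F

def rasgan_losses(d_real, d_fake):
    """Relativistic average GAN losses: each logit is judged relative
    to the mean logit of the opposite class, so the discriminator
    estimates whether a real sample looks more realistic than the
    average fake (and vice versa), rather than judging in isolation."""
    ones = torch.ones_like(d_real)
    zeros = torch.zeros_like(d_real)
    rel_real = d_real - d_fake.mean()  # real relative to average fake
    rel_fake = d_fake - d_real.mean()  # fake relative to average real
    d_loss = (F.binary_cross_entropy_with_logits(rel_real, ones) +
              F.binary_cross_entropy_with_logits(rel_fake, zeros)) / 2
    g_loss = (F.binary_cross_entropy_with_logits(rel_fake, ones) +
              F.binary_cross_entropy_with_logits(rel_real, zeros)) / 2
    return d_loss, g_loss
```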
6. Comparative Analysis

In this section, we discuss the different experiments that were conducted to evaluate the capability of different GAN methods for generating fully and partially synthetic iris images. For this, we test the realism of the generated images with respect to real iris data using different methods. Further, the uniqueness of the generated images in terms of identity, and the utility of the generated images for PA detection and iris recognition, are evaluated in this section. The different GAN methods we studied in this research are: RaSGAN [36, 87], CIT-GAN [88], StarGAN-v2 [14], StyleGAN-3 [96] and iWarpGAN [90].
Figure 7. Yadav and Ross [88] proposed CIT-GAN, where the generator aims to translate a given input image into an image from the reference domain. This translation is facilitated by a Style Network that takes a reference image as input and actively learns to generate a style code that indicates the style properties of the reference domain.
6.1. Datasets Used

In this paper, we conducted our experiments and analysis using three iris datasets:

• CASIA-Iris-Thousand [4]: Developed by the Chinese Academy of Sciences Institute of Automation, the CASIA-Iris-Thousand dataset is a popular resource for studying iris patterns and for advancing iris recognition technologies. This dataset comprises 20,000 iris images from 1,000 participants, accounting for 2,000 distinct identities when considering images of both the left and right eyes. These images are captured at a resolution of 640x480 pixels. The dataset has been partitioned into training and testing subsets, with a distribution of 70% for training (1,400 identities) and 30% for testing (600 identities).

• CASIA Cross Sensor Iris Dataset (CSIR) [86]: The training portion of the CASIA-CSIR dataset, provided by the Chinese Academy of Sciences Institute of Automation, was employed in our study. It includes a total of 7,964 iris images from 100 individuals, representing 200 unique identities when both eyes are considered. Similar to the first dataset, a 70-30 split based on unique identities was used to divide the images into training (5,411 images) and testing (2,553 images) sets, intended for the training and evaluation of deep learning models for iris recognition.

• IITD [5]: Originating from the Indian Institute of Technology, Delhi, the IITD-iris dataset was collected in an indoor setting and consists of 1,120 iris images from 224 subjects. The images were captured using JIRIS JPC1000 and digital CMOS cameras, each with a resolution of 320x240 pixels. In line with the previous datasets, this one also utilizes a 70-30 split based on unique identities for its training (314 identities) and testing (134 identities) sets.

For the preparation of the data, we processed and resized all the iris images to 256x256 pixels, centering on the iris region as determined by the iris and pupil coordinates from VeriEye.²

² www.neurotechnology.com/verieye.html
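As an illustration of this preprocessing step, the following is a minimal sketch assuming the iris center and radius have already been obtained from a segmenter (the VeriEye-derived coordinates mentioned above would play that role); the margin factor and function name are illustrative assumptions.

```python
from PIL import Image

def crop_and_resize_iris(img_path, cx, cy, iris_radius, out_size=256, margin=1.2):
    """Center-crop a square window around the iris using the provided
    center (cx, cy) and iris radius, then resize to out_size x out_size."""
    img = Image.open(img_path)
    half = int(iris_radius * margin)
    box = (cx - half, cy - half, cx + half, cy + half)
    return img.crop(box).resize((out_size, out_size), Image.BILINEAR)
```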
6.1.1 Iris PA Datasets

Our exploration into synthetic images for iris PA detection involves leveraging five distinct iris PA datasets: CASIA-Iris-Fake [77], BERC-Iris-Fake [49], NDCLD15 [22], LivDet2017 [91], and MSU-IrisPA-01 [87], each comprising authentic iris images alongside various categories of PAs such as cosmetic contact lenses, printed iris images, artificial eyes, and display-based attacks. As mentioned earlier, we processed and resized the images to 256x256 pixels, centering on the iris region as determined by the iris and pupil coordinates from VeriEye. Images that
Figure 8. Yadav and Ross [90] proposed iWarpGAN, which aims to generate fully synthetic iris images, i.e., the identities in the generated iris images do not resemble any identity seen during training. This is achieved by disentangling identity and style using two transformation pathways: (1) the style transformation pathway, which generates iris images with styles extracted from a reference image without altering the identity, and (2) the identity transformation pathway, which transforms the identity of the input image in the latent space to generate identities different from the input image.
VeriEye failed to process correctly were excluded from the study, with our focus being primarily on the aspect of image synthesis. The resulting processed dataset for analysis contains 24,409 genuine iris images, 6,824 images with cosmetic contact lenses, 680 artificial eye representations, and 13,293 printed iris images. The train and test division of this dataset is explained later in this section.

6.2. Quality of Generated Images

In order to evaluate the realism and quality of the generated iris images, the different GAN methods (RaSGAN, CIT-GAN, StarGAN-v2, StyleGAN-3 and iWarpGAN) are trained using real irides from the CASIA-Iris-Thousand, CASIA Cross Sensor Iris and IITD-iris datasets, separately. Using the trained networks, we generate three sets of 20,000 synthetic bonafide images (one set per dataset) for each of the GANs mentioned above. We then evaluate the realism of the generated images and the quality of the iris using three different methods: (1) the Fréchet Inception Distance score [71], (2) the VeriEye rejection rate and (3) the ISO/IEC 29794-6 Standard Quality Metrics [3].

6.2.1 Fréchet Inception Distance Score

The Fréchet Inception Distance (FID) score is a metric used to assess the quality of synthetically generated images by comparing their distribution to that of real images, resulting in a score based on the differences. The objective is to minimize this score, as a lower FID score suggests greater resemblance between the synthetic and real datasets. It has been noted that FID scores can span a broad range, with extremely high scores in the 400-600 range indicating significant deviation from the real data distribution and, consequently, poor synthetic image quality [71].

In our specific analysis of the quality of synthetically generated iris images produced by the different GANs used in this study, we obtained average FID scores of 24.33 and 31.82 for RaSGAN and StarGAN-v2, while scores of 26.90, 15.72 and 17.62 were obtained for CIT-GAN, StyleGAN-3 and iWarpGAN, respectively. As mentioned earlier, the lower the FID score, the more realistic the generated images are with respect to real images. Therefore, we conclude that StyleGAN-3 and iWarpGAN generate the most realistic iris images. The distribution of these FID scores is shown in Figure 9.
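For reference, the FID used above is the Fréchet distance between Gaussian approximations of the Inception-feature distributions of the real and synthetic images:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_s \rVert_2^2
  + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_s - 2\,(\Sigma_r \Sigma_s)^{1/2} \right)
```

where (μ_r, Σ_r) and (μ_s, Σ_s) denote the mean and covariance of the Inception embeddings of the real and synthetic images, respectively.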
Figure 9. Histograms showing the realism scores (i.e., FID scores) of real iris images from the three different datasets and of the synthetically generated iris images: (a) FID score distribution when the GANs are trained using the CASIA-Iris-Thousand dataset; (b) when trained using the CASIA-CSIR dataset; (c) when trained using the IITD-iris dataset. The lower the FID score, the more realistic the generated iris images.
Iris images that could not be successfully assessed by this standard, for example due to errors in segmentation, are assigned a score of 255. As shown in Figure 10, the quality scores for the 20,000 synthetic iris images obtained using iWarpGAN, CIT-GAN and StyleGAN-3 are on par with those of real iris images. Conversely, a noticeable number of images generated by RaSGAN were assigned the score of 255, reflecting their inferior quality. Additionally, a comparison across the three datasets showed that the CASIA-CSIR dataset contained a higher proportion of images with the lowest score of 255, in contrast to the IITD-iris and CASIA-Iris-Thousand datasets.

Figure 10. Histograms depicting the quality of real irides alongside the quality of synthetic irides: (b) CASIA-CSIR dataset versus synthetic iris images from various GANs; (c) IITD-iris dataset versus synthetic iris images from various GANs. These evaluations are in accordance with the ISO/IEC 29794-6 Standard Quality Metrics, with the quality scale set between 0 and 100, where a higher score denotes superior quality. Iris images that were not successfully assessed by this standard were assigned a score of 255. As seen from the figure, the score distribution for real images is closely resembled by images generated using iWarpGAN and StyleGAN-3, followed by CIT-GAN. However, the same cannot be said for RaSGAN and StarGAN-v2.
Table 3. PAD-Experiment-1: True Detection Rate (TDR, in %) at 1% False Detection Rate (FDR) for different iris PAD methods when trained using real bonafide irides, real PAs and synthetic PAs generated using different GAN methods. Comparing the baseline performance with the performance here, we can see that iris PA images generated by StyleGAN-3 and iWarpGAN are better substitutes for the real PAs.
GAN Method | BSIF+SVM [22] | Fine-Tuned VGG-16 [27] | Fine-Tuned AlexNet [47] | D-NetPAD [76]
RaSGAN | 28.31 | 81.11 | 86.74 | 87.97
CIT-GAN | 29.43 | 85.81 | 88.37 | 88.86
StarGAN-v2 | 29.73 | 83.47 | 88.71 | 88.45
StyleGAN-3 | 34.05 | 88.14 | 90.95 | 90.75
iWarpGAN | 32.09 | 86.58 | 89.18 | 90.28
Figure 11. Uniqueness of iris images generated using iWarpGAN, relative to the other GANs, when the GANs are trained using the CASIA-Iris-Thousand dataset. The y-axis represents the similarity scores obtained using VeriEye. Here, R = Real, S = Synthetic, Gen = Genuine and Imp = Impostor. As indicated by the arrow, the similarity between images generated by iWarpGAN is the lowest in comparison to the other GANs, followed by StyleGAN-3. This indicates the uniqueness of the identities generated by these GANs with respect to the real identities as well as the other generated identities.
Analysis: The variation in the number of samples across different PA categories influences the effectiveness of the
Figure 12. Uniqueness of iris images generated using iWarpGAN, relative to the other GANs, when the GANs are trained using the CASIA-CSIR dataset. The y-axis represents the similarity scores obtained using VeriEye. Here, R = Real, S = Synthetic, Gen = Genuine and Imp = Impostor. As indicated by the arrow, the similarity between images generated by iWarpGAN is the lowest in comparison to the other GANs, followed by StyleGAN-3. This indicates the uniqueness of the identities generated by these GANs with respect to the real identities as well as the other generated identities.
PAD techniques. This impact is evident when the outcomes of PAD-Experiment-0 are compared with PAD-Experiments-1 and 2. In PAD-Experiment-2, the PAD methods are trained with 9,439 bonafide samples and an equalized set of PA samples (namely, 5,000 from each PA category), including both real and synthesized PAs. Based on the data in Table 2 and Table 4, there is a discernible enhancement in the performance of each PAD approach when trained with balanced samples from each class. Moreover, a comparative analysis of synthetic PA samples and actual PA samples was conducted through PAD-Experiment-1. In this experiment, a portion of the real PA samples in the training set was substituted with synthetic PAs. When comparing the performance metrics in Table 2 and Table 3, only a marginal difference in PAD efficacy is noticeable.

6.4.2 Iris Recognition

As mentioned earlier, the lack of a sufficient number of unique identities with large intra-class variations in a dataset can affect the training and testing of many iris recognition methods, especially recognition methods based on deep networks that need a large number of training samples for better performance. As seen from the previous experiments, among the GANs studied in this paper, only iWarpGAN has the capability of generating irides whose identities are
Figure 13. Uniqueness of iris images generated using iWarpGAN, relative to the other GANs, when the GANs are trained using the IITD-iris dataset. The y-axis represents the similarity scores obtained using VeriEye. Here, R = Real, S = Synthetic, Gen = Genuine and Imp = Impostor. As indicated by the arrow, the similarity between images generated by iWarpGAN is the lowest in comparison to the other GANs, followed by StyleGAN-3. This indicates the uniqueness of the identities generated by these GANs with respect to the real identities as well as the other generated identities.
sufficiently different from those in the training data. Therefore, we train iWarpGAN using the CASIA-Iris-Thousand, CASIA-CSIR and IIT-Delhi iris datasets, separately, to generate synthetic irides with both inter- and intra-class variations. The generated dataset is utilized in this experiment to evaluate its usefulness for improved iris recognition.
Recog-Experiment-0: In this baseline experiment, EfficientNet [34], ResNet-101 [56] and DenseNet-201 are trained using the triplet training approach (a sketch of this objective is given below). Training and testing have been done using a cross-dataset method, i.e., when trained using real irides from CASIA-Iris-Thousand and CASIA-CSIR, testing is done on the IITD-iris dataset.

Recog-Experiment-1: This experiment focuses on assessing the impact of synthetic iris datasets on enhancing the performance of iris recognition methods based on deep learning. In this context, EfficientNet, ResNet-101 and DenseNet-201 are trained not only with the real irides from the CASIA-Iris-Thousand, CASIA-CSIR, and IITD-iris datasets but also with a synthetically generated iris dataset derived from iWarpGAN.
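For reference, the triplet training mentioned in Recog-Experiment-0 optimizes an objective of the following form. This is a generic sketch (the margin, embedding size and batch composition are illustrative assumptions), not the exact training configuration used in these experiments.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull embeddings of the same identity (anchor, positive) together
    and push different identities (anchor, negative) at least `margin`
    farther apart than the positive pair."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Embeddings would come from the backbone being trained
# (EfficientNet / ResNet-101 / DenseNet-201 in these experiments).
anc = torch.randn(32, 256, requires_grad=True)  # anchor identities
pos = torch.randn(32, 256)                      # same identities as anchors
neg = torch.randn(32, 256)                      # different identities
loss = triplet_loss(anc, pos, neg)
loss.backward()
```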
Analysis: Figures 14 and 15 illustrate that the recognition accuracy of deep learning-based iris matchers is enhanced with the incorporation of a larger dataset containing synthetic samples. This enhancement is particularly evident when the matcher is trained using a combination of both real iris images and those synthetically produced by iWarpGAN. Although the baseline performance of ResNet-101 and EfficientNet is somewhat modest, they exhibit a notably substantial enhancement in performance. Similar behavior is seen for DenseNet-201.

7. Summary & Future Work

In this section, we summarize the studies conducted in this paper and also discuss the future scope of the field of generating synthetic irides.

7.1. Current Techniques & their Limitations
In this research, we reviewed and analyzed different GAN methods for generating synthetic images of both bonafide irides and different presentation attack instruments. The generated irides were evaluated for their realism, quality, identity uniqueness and utility. Using these experiments as our criteria for comparison, we conclude that: (1) GAN methods like RaSGAN, StarGAN-v2 and CIT-GAN can generate realistic-looking synthetic datasets, but fail to generate enough samples whose identities are different from those in the training dataset, i.e., the identities in the generated dataset have high similarity with the training dataset and also with each other. Similar behavior was seen for StyleGAN-3; however, the images generated by StyleGAN-3 are highly realistic and very close to the original dataset in terms of quality (as seen in Figure 10). (2) On the other hand, iWarpGAN showed the capability of generating synthetic irides with both inter- and intra-class variations, which can augment real iris datasets for training and testing iris recognition methods. This method is scalable to multiple domains (using an attribute vector), and can also be utilized to generate bonafide images and various PAs that can be used to enhance the performance of different PAD methods (as shown in Table 4). While this method provides solutions for both fully and partially synthetic iris generation, iWarpGAN utilizes image transformation, whereby the network requires both an input and a style reference image to modify the identity and style, resulting in the generation of an output image. Such a process could potentially constrain the range of features that iWarpGAN is able to explore. Also, there is still some similarity observed between training samples and generated irides.
7.2. Future Work & Scope

Numerous researchers are dedicating their efforts to the creation of synthetic face images encompassing varied attributes, styles, identities, spectra, and more [61, 53, 65, 46]. However, more study has to be done in the field of synthetic iris generation. This opens up a wide array of opportunities that warrant in-depth investigation and exploration. Some possible future directions in this field are listed here:

• Generalizable Solution for Fully-Synthetic Iris Images: The first area of exploration involves developing a generalizable solution for creating fully synthetic iris images. This involves not just replicating the physical appearance of an iris but also ensuring that the synthetic images can adapt or respond to different lighting conditions and camera specifications, just as a real iris would. Such a solution would have huge implications for enhancing the realism and applicability of synthetic irides in various fields, including iris recognition and presentation attack detection. Also, demographic attributes such as gender, age, etc. need to be accounted for while generating synthetic iris images.

• Generating Ocular Images: Another intriguing direction is the generation of complete ocular images, which include not only the iris but also other parts of the eye. So far, the research work in this field mainly focuses on generating cropped iris images, and in some cases the image quality deteriorates as more information is introduced into the image [88]. Therefore, this area needs attention from researchers in order to study the other distinguishing features of an eye apart from the iris [11, 43]. Creating realistic ocular images that accurately represent the myriad variations in human eyes could also aid in the development of more robust facial recognition technologies by providing a method to generate faces that also capture the intricate details of a real iris, which seems to be missing from most face generation methods.

• Synthetic Iris Videos to Mimic Liveness of Real Irides: The creation of synthetic iris videos that can mimic the liveness of real irides is a particularly challenging topic. Such advancements would be beneficial in developing more robust PA detection methods. By simulating the natural movements and minute dynamic changes in the iris, these videos could provide an authentic and effective tool for training and improving liveness detection algorithms in iris recognition systems. Also, as mentioned earlier, this could aid in developing robust facial recognition technologies.

• Multi-spectrum Iris Image Generation: The generation of multi-spectrum iris images presents another frontier [70, 10]. The human iris exhibits different characteristics under various light spectra - a feature that is often leveraged in biometric systems. Developing synthetic iris images that can accurately reflect these multi-spectral properties would not only enhance the realism of these images but also expand their utility in biometric recognition systems. Such multi-spectrum images could serve as a valuable resource for researchers and developers, offering a versatile tool for testing and improving multi-spectral iris recognition technologies.
Figure 14. Performance of DenseNet-201, EfficientNet and ResNet-101 in the cross-dataset evaluation scenario, i.e., trained using the CASIA-Iris-Thousand & CASIA-CSIR datasets and tested using the IIT-Delhi iris dataset. An improvement in performance is seen when the size of the training set is increased using synthetic irides.
Figure 15. Performance of DenseNet-201, EfficientNet and ResNet-101 in the cross-dataset evaluation scenario, i.e., trained using the CASIA-Iris-Thousand & IIT-Delhi iris datasets and tested using the CASIA-CSIR dataset. An improvement in performance is seen when the size of the training set is increased using synthetic irides.
• Improved GAN Latent Space Interpretability: As indicated by iWarpGAN, enhancing GANs to produce disentangled representations in their latent space is vital for generating fully synthetic iris images. Here, improving semantic interpretability can help ensure that changes in the latent space correspond to meaningful and coherent changes in the generated output [38, 8]. Also, developing new visualization techniques to map and understand the high-dimensional latent space is another crucial step.

• Diffusion Generative Adversarial Networks: Diffusion GANs [82] offer a promising approach for generating synthetic iris images with high realism with respect to real iris images. By employing a diffusion process, these networks iteratively refine generated images over multiple steps, allowing for better control and coherence in synthesis. With this, Diffusion GANs can capture the complex spatial dependencies inherent in iris textures, ensuring the production of realistic patterns consistent with real iris images. Moreover, they exhibit improved stability and convergence during training, mitigating issues like mode collapse and artifact generation.

• Application in Deepfake Detection: Synthetic images can play a crucial role in deepfake detection [67, 94] by serving as a tool for training and evaluating detection algorithms. By incorporating a diverse range of synthetic data into the training process, detection algorithms can better generalize and identify subtle inconsistencies or artifacts indicative of deepfakes.
Moreover, synthetic images allow researchers to experiment with various image editing techniques, enhancing the robustness of detection systems against emerging threats. Therefore, synthetic images can serve as a critical resource in the advancement and development of robust deepfake detection methods.
The potential applications of successfully generated synthetic iris images are vast and varied. In security and biometric recognition systems, these images can help improve the accuracy and robustness of systems by providing a diverse range of data for training and testing. In the medical field, synthetic iris images could be used for training purposes, enabling medical professionals to recognize and diagnose eye-related diseases more effectively.

Furthermore, in the realm of entertainment and virtual reality, realistic synthetic iris images could enhance the visual experience by providing more lifelike and expressive characters. The ability to generate eyes that accurately mimic human emotions could revolutionize the way we interact with virtual environments and characters. In conclusion, while the generation of realistic and unique synthetic iris images is still in the development stage, it presents an opportunity for research and exploration.
8. Acknowledgments

This research is based upon work supported by NSF CITeR funding. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF CITeR.
References [16] J. Cui, Y. Wang, J. Huang, T. Tan, and Z. Sun. An iris im-
age synthesis method based on pca and super-resolution. In
[1] The Iris Challenge Evaluation (ICE) conducted by National Proceedings of the 17th International Conference on Pattern
Institute of Standards and Technology (NIST). https: Recognition, 2004. ICPR 2004., volume 4, pages 471–474
//www.nist.gov/programs- projects/iris- Vol.4, 2004.
challenge-evaluation-ice, 2005. [17] A. Czajka. Database of iris printouts and its application: De-
[2] The Iris Challenge Evaluation (ICE) conducted by National velopment of liveness detection method for iris recognition.
Institute of Standards and Technology (NIST). https: In 18th IEEE International Conference on Methods & Mod-
//www.nist.gov/programs- projects/iris- els in Automation & Robotics (MMAR), pages 28–33, 2013.
challenge-evaluation-ice, 2006. [18] A. Czajka, D. Moreira, K. Bowyer, and P. Flynn. Domain-
[3] ISO-Quality-Metrics-Iris-29794-6. Information technology specific human-inspired binarized statistical image features
Biometric sample quality Part 6: Iris image data. Standard, for iris recognition. In IEEE Winter Conference on Applica-
International Organization for Standardization, Geneva, tions of Computer Vision (WACV), pages 959–967, 2019.
CH. (https://www.iso.org/standard/54066. [19] F. K. Dankar and M. Ibrahim. Fake it till you make it: Guide-
html, 2014. lines for effective synthetic data generation. Applied Sci-
[4] CASIA Iris Image Database Version 4.0. ences, 11(5):2158, 2021.
http : / / biometrics . idealtest . org /
[20] J. Daugman. New methods in iris recognition. In IEEE
dbDetailForUser.do?id=4, 2017.
Transactions on Systems, Man, and Cybernetics, Part B (Cy-
[5] IIT Delhi Database. http://www4.comp.polyu. bernetics), 37(5):1167–1175, 2007.
edu . hk / ˜csajaykr / IITD / Database _ Iris .
[21] J. G. Daugman. High confidence visual recognition of per-
htm., 2017.
sons by a test of statistical independence. In IEEE Trans-
[6] P. Bamoriya, G. Siddhad, H. Kaur, P. Khanna, and A. Ojha.
actions on Pattern Analysis and Machine Intelligence, pages
DSB-GAN: Generation of deep learning based synthetic bio-
1148–1161, 1993.
metric data. Displays, 74:102267, 2022.
[22] J. S. Doyle and K. W. Bowyer. Robust detection of textured contact lenses in iris recognition using BSIF. IEEE Access, 3:1672–1683, 2015.
[23] P. Drozdowski, C. Rathgeb, and C. Busch. SIC-Gen: A synthetic iris-code generator. In IEEE International Conference of the Biometrics Special Interest Group (BIOSIG), pages 1–6, 2017.
[24] M. Edwards, A. Gozdzik, K. Ross, J. Miles, and E. J. Parra. Quantitative measures of iris color using high resolution photographs. American Journal of Physical Anthropology, pages 141–149, 2012.
[25] J. J. Engelsma, S. Grosz, and A. K. Jain. PrintsGAN: Synthetic fingerprint generator. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(5):6111–6124, 2022.
[26] M. Fang, M. Huber, and N. Damer. SynthASpoof: Developing face presentation attack detection based on privacy-friendly synthetic data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1061–1070, 2023.
[27] L. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 262–270, 2015.
[28] E. Gent. A cryptocurrency for the masses or a universal ID?: Worldcoin aims to scan all the world's eyeballs. IEEE Spectrum, 2023.
[29] S. A. Grosz and A. K. Jain. SpoofGAN: Synthetic fingerprint spoof images. IEEE Transactions on Information Forensics and Security, 18:730–743, 2022.
[30] M. Gupta, V. Singh, A. Agarwal, M. Vatsa, and R. Singh. Generalized iris presentation attack detection algorithm under cross-database settings. In 25th IEEE International Conference on Pattern Recognition (ICPR), pages 5318–5325, 2021.
[31] P. Gupta, S. Behera, M. Vatsa, and R. Singh. On iris spoofing using print attack. In IEEE International Conference on Pattern Recognition (ICPR), pages 1681–1686, 2014.
[32] J. Han, S. Karaoglu, H.-A. Le, and T. Gevers. Improving face detection performance with 3D-rendered synthetic data. arXiv preprint arXiv:1812.07363, 2018.
[33] S. Hoffman, R. Sharma, and A. Ross. Convolutional neural networks for iris presentation attack detection: Toward cross-dataset and cross-sensor generalization. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1620–1628, 2018.
[34] C.-S. Hsiao, C.-P. Fan, and Y.-T. Hwang. Design and analysis of deep-learning based iris recognition technologies by combination of U-Net and EfficientNet. In 9th IEEE International Conference on Information and Education Technology (ICIET), pages 433–437, 2021.
[35] H. Huang, R. He, Z. Sun, T. Tan, et al. IntroVAE: Introspective variational autoencoders for photographic image synthesis. In Advances in Neural Information Processing Systems (NIPS), 2018.
[36] A. Jolicoeur-Martineau. The relativistic discriminator: A key element missing from standard GAN. In 7th International Conference on Learning Representations (ICLR), 2019.
[37] I. Joshi, M. Grimmer, C. Rathgeb, C. Busch, F. Bremond, and A. Dantcheva. Synthetic data in human analysis: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
[38] M. Kahng and D. H. Chau. How does visualization help people learn deep learning? Evaluation of GAN Lab. In IEEE Workshop on EValuation of Interactive VisuAl Machine Learning Systems, volume 1, page 4, 2019.
[39] M. Kanematsu, H. Takano, and K. Nakamura. Highly reliable liveness detection method for iris recognition. In IEEE SICE Annual Conference, pages 361–364, 2007.
[40] T. Karras, M. Aittala, J. Hellsten, S. Laine, J. Lehtinen, and T. Aila. Training generative adversarial networks with limited data. In Advances in Neural Information Processing Systems (NIPS), pages 12104–12114, 2020.
[41] T. Karras, M. Aittala, S. Laine, E. Härkönen, J. Hellsten, J. Lehtinen, and T. Aila. Alias-free generative adversarial networks. In Advances in Neural Information Processing Systems (NIPS), pages 852–863, 2021.
[42] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401–4410, 2019.
[43] H. Kaur and R. Manduchi. EyeGAN: Gaze-preserving, mask-mediated eye image synthesis. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 310–319, 2020.
[44] N. Kohli, D. Yadav, M. Vatsa, and R. Singh. Revisiting iris recognition with color cosmetic contact lenses. In IEEE International Conference on Biometrics (ICB), pages 1–7, 2013.
[45] N. Kohli, D. Yadav, M. Vatsa, R. Singh, and A. Noore. Synthetic iris presentation attack using iDCGAN. In IEEE International Joint Conference on Biometrics (IJCB), pages 674–680, 2017.
[46] J. N. Kolf, T. Rieber, J. Elliesen, F. Boutros, A. Kuijper, and N. Damer. Identity-driven three-player generative adversarial network for synthetic-based face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 806–816, 2023.
[47] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097–1105, 2012.
[48] E. C. Lee, Y. J. Ko, and K. R. Park. Fake iris detection method using Purkinje images based on gaze position. Optical Engineering, 47(6):1–16, 2008.
[49] S. J. Lee, K. R. Park, Y. J. Lee, K. Bae, and J. H. Kim. Multifeature-based fake iris detection method. Optical Engineering, 46(12):127204, 2007.
[50] X. Liu, K. W. Bowyer, and P. J. Flynn. Experiments with an improved iris segmentation algorithm. In 4th IEEE Workshop on Automatic Identification Advanced Technologies (AutoID'05), pages 118–123, 2005.
[51] S. U. Maheswari, P. Anbalagan, and T. Priya. Efficient iris recognition through improvement in iris segmentation algorithm. International Journal on Graphics, Vision and Image Processing, 8(2):29–35, 2008.
[52] S. Makthal and A. Ross. Synthesis of iris images using Markov random fields. In 13th European Signal Processing Conference, pages 1–4, 2005.
[53] P. Melzi, C. Rathgeb, R. Tolosana, R. Vera-Rodriguez, D. Lawatsch, F. Domin, and M. Schaubert. GANDiffFace: Controllable generation of synthetic datasets for face recognition with realistic variations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3086–3095, 2023.
[54] D. Menotti, G. Chiachia, A. da Silva Pinto, W. R. Schwartz, H. Pedrini, A. X. Falcao, and A. Rocha. Deep representations for iris, face, and fingerprint spoofing detection. IEEE Transactions on Information Forensics and Security, 10:864–879, 2015.
[55] S. Minaee and A. Abdolrashidi. Iris-GAN: Learning to generate realistic iris images using convolutional GAN. arXiv preprint arXiv:1812.04822, 2018.
[56] S. Minaee and A. Abdolrashidi. DeepIris: Iris recognition using a deep learning approach. arXiv preprint arXiv:1907.09380, 2019.
[57] T. Nakatsuka, Y. Tsuchiya, M. Hamanaka, and S. Morishima. Audio-oriented video interpolation using key pose. International Journal of Pattern Recognition and Artificial Intelligence, 35(16):2160016, 2021.
[58] S. Nam, S. Jeon, and J. Moon. Generating optimized guessing candidates toward better password cracking from multi-dictionaries using relativistic GAN. Applied Sciences, 10(20):7306, 2020.
[59] K. Nguyen, C. Fookes, A. Ross, and S. Sridharan. Iris recognition with off-the-shelf CNN features: A deep learning perspective. IEEE Access, 6:18848–18855, 2017.
[60] I. Nigam, M. Vatsa, and R. Singh. Ocular biometrics: A survey of modalities and fusion approaches. Information Fusion, 26:1–35, 2015.
[61] M. Osadchy, Y. Wang, O. Dunkelman, S. Gibson, J. Hernandez-Castro, and C. Solomon. GenFace: Improving cyber security using realistic synthetic face generation. In First International Conference on Cyber Security Cryptography and Machine Learning, pages 19–33. Springer, 2017.
[62] A. Perala. Princeton Identity tech powers Galaxy S8 iris scanning. https://mobileidworld.com/princeton-identity-galaxy-s8-iris-003312, 2017. [Online; accessed 16-December-2017].
[63] H. Proenca, S. Filipe, R. Santos, J. Oliveira, and L. Alexandre. The UBIRIS.v2: A database of visible wavelength iris images captured on-the-move and at-a-distance. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 32(8):1529–1535, August 2010.
[64] H. Proença and L. Alexandre. UBIRIS: A noisy iris image database. In 13th International Conference on Image Analysis and Processing (ICIAP), volume LNCS 3617, pages 970–977, Cagliari, Italy, September 2005. Springer.
[65] R. Queiroz, M. Cohen, J. L. Moreira, A. Braun, J. C. Júnior, and S. R. Musse. Generating facial ground truth with synthetic faces. In 23rd IEEE SIBGRAPI Conference on Graphics, Patterns and Images, pages 25–31, 2010.
[66] R. Raghavendra and C. Busch. Presentation attack detection algorithm for face and iris biometrics. In European Signal Processing Conference (EUSIPCO), pages 1387–1391, 2014.
[67] M. S. Rana, M. N. Nobi, B. Murali, and A. H. Sung. Deepfake detection: A systematic literature review. IEEE Access, 10:25494–25513, 2022.
[68] D. Rankin, B. Scotney, P. Morrow, R. McDowell, and B. Pierscionek. Comparing and improving algorithms for iris recognition. In 13th IEEE International Machine Vision and Image Processing Conference, pages 99–104, 2009.
[69] A. Ross, S. Banerjee, C. Chen, A. Chowdhury, V. Mirjalili, R. Sharma, T. Swearingen, and S. Yadav. Some research problems in biometrics: The future beckons. In IEEE International Conference on Biometrics (ICB), pages 1–8, 2019.
[70] A. Ross, R. Pasula, and L. Hornak. Exploring multispectral iris recognition beyond 900nm. In 3rd IEEE International Conference on Biometrics: Theory, Applications, and Systems, pages 1–8, 2009.
[71] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems (NIPS), pages 2234–2242, 2016.
[72] G. Santos, E. Grancho, M. V. Bernardo, and P. T. Fiadeiro. Fusing iris and periocular information for cross-sensor recognition. Pattern Recognition Letters, 57:52–59, 2015.
[73] A. Sequeira, L. Chen, P. Wild, J. Ferryman, F. Alonso-Fernandez, K. B. Raja, R. Raghavendra, C. Busch, and J. Bigun. Cross-Eyed: Cross-spectral iris/periocular recognition database and competition. In IEEE International Conference of the Biometrics Special Interest Group (BIOSIG), pages 1–5, 2016.
[74] S. Shah and A. Ross. Generating synthetic irises by feature agglomeration. In IEEE International Conference on Image Processing, pages 317–320, 2006.
[75] A. Sharma, S. Verma, M. Vatsa, and R. Singh. On cross spectral periocular recognition. In IEEE International Conference on Image Processing (ICIP), pages 5007–5011, 2014.
[76] R. Sharma and A. Ross. D-NetPAD: An explainable and interpretable iris presentation attack detector. In IEEE International Joint Conference on Biometrics (IJCB), pages 1–10, 2020.
[77] Z. Sun, H. Zhang, T. Tan, and J. Wang. Iris image classification based on hierarchical visual codebook. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 36(6):1120–1133, 2014.
[78] D. J. Suroso, P. Cherntanomwong, and P. Sooraksa. Synthesis of a small fingerprint database through a deep generative model for indoor localisation. Elektronika ir Elektrotechnika, 29(1):69–75, 2023.
[79] F. Taherkhani, A. Rai, Q. Gao, S. Srivastava, X. Chen, F. de la Torre, S. Song, A. Prakash, and D. Kim. Controllable 3D generative adversarial face model via disentangling shape and appearance. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 826–836, 2023.
[80] P. Voigt and A. Von dem Bussche. The EU General Data Protection Regulation (GDPR): A Practical Guide. 1st ed., Cham: Springer International Publishing, 2017.
[81] C. Wang, Z. He, C. Wang, and Q. Tian. Generating intra- and inter-class iris images by identity contrast. In IEEE International Joint Conference on Biometrics (IJCB), pages 1–7, 2022.
[82] Z. Wang, H. Zheng, P. He, W. Chen, and M. Zhou. Diffusion-GAN: Training GANs with diffusion. In The 11th International Conference on Learning Representations (ICLR), 2023.
[83] Z. Wei, T. Tan, and Z. Sun. Synthesis of large realistic iris databases using patch-based sampling. In 19th International Conference on Pattern Recognition, pages 1–4, 2008.
[84] E. Wood, T. Baltrušaitis, L.-P. Morency, P. Robinson, and A. Bulling. A 3D morphable model of the eye region. Optimization, 1:0, 2016.
[85] A. B. V. Wyzykowski and A. K. Jain. Synthetic latent fingerprint generator. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 971–980, 2023.
[86] L. Xiao, Z. Sun, R. He, and T. Tan. Coupled feature selection for cross-sensor iris recognition. In 6th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), pages 1–6, 2013.
[87] S. Yadav, C. Chen, and A. Ross. Synthesizing iris images using RaSGAN with application in presentation attack detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
[88] S. Yadav, C. Chen, and A. Ross. Relativistic discriminator: A one-class classifier for generalized iris presentation attack detection. In IEEE Winter Conference on Applications of Computer Vision, pages 2635–2644, 2020.
[89] S. Yadav and A. Ross. CIT-GAN: Cyclic image translation generative adversarial network with application in iris presentation attack detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2412–2421, 2021.
[90] S. Yadav and A. Ross. iWarpGAN: Disentangling identity and style to generate synthetic iris images. In IEEE International Joint Conference on Biometrics (IJCB), pages 1–10, 2023.
[91] D. Yambay, B. Walczak, S. Schuckers, and A. Czajka. LivDet-Iris 2015 – Iris liveness detection competition 2015. In IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), pages 1–6, 2017.
[92] M. Zhang, Q. Zhang, Z. Sun, S. Zhou, and N. U. Ahmed. The BTAS competition on mobile iris recognition. In 8th IEEE International Conference on Biometrics Theory, Applications and Systems (BTAS), pages 1–7, 2016.
[93] Z. Zhang, C. Deng, Y. Shen, D. S. Williamson, Y. Sha, Y. Zhang, H. Song, and X. Li. On loss functions and recurrency training for GAN-based speech enhancement systems. In Interspeech, 2020.
[94] H. Zhao, W. Zhou, D. Chen, T. Wei, W. Zhang, and N. Yu. Multi-attentional deepfake detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2185–2194, 2021.
[95] T. Zhou, S. Tulsiani, W. Sun, J. Malik, and A. A. Efros. View synthesis by appearance flow. In European Conference on Computer Vision, pages 286–301. Springer, 2016.
[96] T. Zhu, J. Chen, R. Zhu, and G. Gupta. StyleGAN3: Generative networks for improving the equivariance of translation and rotation. arXiv preprint arXiv:2307.03898, 2023.
[97] H. Zou, H. Zhang, X. Li, J. Liu, and Z. He. Generation textured contact lenses iris images based on 4DCycle-GAN. In 24th International Conference on Pattern Recognition (ICPR), pages 3561–3566, 2018.
[98] J. Zuo, N. A. Schmid, and X. Chen. On generation and analysis of synthetic iris images. IEEE Transactions on Information Forensics and Security, 2(1):77–90, 2007.