Face recognition has recently become widespread in security applications. Although advancing technology has improved the performance of these systems, they remain prone to various attacks, including spoofing. The inherent feature extraction capability of machine learning techniques and deep neural networks has enabled more accurate spoofing detection. However, challenges remain in the generalisation of these methods. One significant challenge is the limited size and variance of training datasets. This paper investigates how different train/test ratios and variance in the training data affect model performance on the NUAA dataset for spoofing detection. We show how different splits of this dataset yield models with different performance. We also open up new research directions by demonstrating how the problem of generalisation can be neatly exposed with an existing, manageable dataset.
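The central observation above — that different train/test splits of the same dataset can yield noticeably different measured performance — can be illustrated with a minimal sketch. This is not the paper's code: it uses synthetic 1-D scores in place of NUAA face features and a crude threshold classifier, purely to show how the split ratio and shuffle change the reported accuracy.

```python
import random

# Synthetic stand-in for live (1) vs spoof (0) samples: two Gaussian clusters.
random.seed(0)
data = [(random.gauss(mu, 1.0), label)
        for label, mu in ((0, -1.0), (1, 1.0))
        for _ in range(250)]

def evaluate(split_ratio, seed):
    """Shuffle, split at split_ratio, fit a threshold at the training mean,
    and return accuracy on the held-out test portion."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * split_ratio)
    train, test = shuffled[:cut], shuffled[cut:]
    threshold = sum(x for x, _ in train) / len(train)  # crude decision boundary
    correct = sum((x > threshold) == bool(y) for x, y in test)
    return correct / len(test)

# The same data, evaluated under different train fractions and shuffles,
# produces different accuracy figures.
for ratio in (0.5, 0.7, 0.9):
    for seed in (1, 2):
        print(f"train fraction {ratio:.1f}, seed {seed}: "
              f"accuracy {evaluate(ratio, seed):.3f}")
```

In a real anti-spoofing study the effect is amplified by correlated samples (many frames per subject), which is one reason split choice matters for generalisation claims.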