Varying direction and energy distribution of the ambient illumination, together with the 3D structure of the human face, can lead to severe differences in the shading and shadows on the face. Such variation in facial appearance can be much larger than the variation caused by personal identity.
A novel approach is proposed to obtain ambient-illumination-invariant faces using active Near-IR imaging. Active Near-IR illumination projected by a Light Emitting Diode (LED) light source attached to the camera provides constant illumination. The difference between two face images, captured with the LED light on and off respectively, is the image of the face under the LED illumination alone, and is therefore independent of the ambient illumination.
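To make the differencing step concrete, here is a minimal sketch in Python/NumPy, assuming 8-bit grayscale frames captured in quick succession with the scene effectively static between them; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def led_only_image(frame_led_on: np.ndarray, frame_led_off: np.ndarray) -> np.ndarray:
    """Recover the face image lit only by the LED source.

    Assumes two temporally adjacent 8-bit frames of a (nearly) static scene,
    one with the Near-IR LEDs on and one with them off. With a linear camera
    response, ambient light contributes equally to both frames, so their
    difference keeps only the LED-illuminated component.
    """
    on = frame_led_on.astype(np.int16)     # widen to avoid uint8 wrap-around
    off = frame_led_off.astype(np.int16)
    diff = np.clip(on - off, 0, 255).astype(np.uint8)
    return diff
```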
The use of difference imaging distinguishes the proposed approach from other approaches based on infrared imaging. Simply applying infrared filters, as those approaches usually do to obtain infrared face images, also admits the infrared component of the ambient illumination, so the captured infrared image is NOT independent of the ambient illumination. The differencing technique in the proposed approach solves this problem.
A face database with 40 subjects, 2 sessions separated by an interval of weeks, 4 ambient illumination conditions and 6 shots per condition was captured indoors using the system shown in Figure 1, giving 40 × 2 × 4 × 6 = 1920 ambient face images in total and the same number of LED face images. A ring of 4 fluorescent lamps is used to provide ambient illumination from different directions.
Face recognition experiments are carried out on the ambient faces and the LED faces respectively, using different face representations and classifiers. Three test protocols are defined: Cross Session, Cross Illumination, and Combined (across both session and illumination).
The recognition results of these tests are summarized in Table 1.
For automatic face localization in Near-IR face images, a multistage approach is proposed that combines a feature-based face localization method with a global appearance-based method using FloatBoost. The LEDs attached to the camera produce a bright-pupil effect which is very beneficial for automatic face localization. The circular shape of the bright pupil is a scale- and rotation-invariant feature, which the first method exploits to quickly detect pupil candidates. Support Vector Machines (SVMs) trained on local eye-region appearance and global face appearance are then employed to validate the candidates. Sometimes the bright pupils are missing from the face image or fail to be detected, causing this feature-based method to fail; in that situation, the second face localization method, based on FloatBoost, is employed as a remedy.
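As an illustration of the feature-based first stage, the sketch below picks out small bright circular blobs with OpenCV's Hough circle transform. This is only one plausible way to exploit the circular bright-pupil shape; the threshold values, the choice of detector and the omitted SVM validation stage are assumptions for illustration, not the paper's actual implementation.

```python
import cv2
import numpy as np

def detect_pupil_candidates(led_face_gray: np.ndarray) -> np.ndarray:
    """Find bright-pupil candidates in a Near-IR (LED-on minus LED-off) face image.

    Bright pupils appear as small, very bright, roughly circular spots, so a
    simple stand-in detector is: keep the brightest pixels, then search for
    small circles. Candidates would still need validation (e.g. by SVMs on
    eye-region and face appearance, as in the proposed approach).
    """
    # Keep only the brightest pixels (illustrative threshold).
    _, bright = cv2.threshold(led_face_gray, 200, 255, cv2.THRESH_BINARY)
    bright = cv2.medianBlur(bright, 3)  # suppress isolated noise pixels

    # Circular-shape search: scale and rotation invariant by construction.
    circles = cv2.HoughCircles(
        bright, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
        param1=100, param2=8, minRadius=2, maxRadius=10)

    if circles is None:
        return np.empty((0, 3), dtype=int)
    return np.round(circles[0]).astype(int)  # one (x, y, radius) row per candidate
```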
Localization experiments are performed on the LED faces. As shown in Figure 6, taking d_eye = 0.05 as the threshold for a successful localization, the success rate achieved by the bright-pupil detector is 96.5%, which is 6% higher than that of the FloatBoost detector. A further improvement of 1% is achieved by the multistage approach.
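The d_eye measure is presumably the commonly used normalized eye-localization error (the Jesorsky criterion); a short sketch under that assumption is given below, with illustrative argument names.

```python
import numpy as np

def d_eye(det_left, det_right, gt_left, gt_right) -> float:
    """Normalized eye-localization error:

        d_eye = max(||det_l - gt_l||, ||det_r - gt_r||) / ||gt_l - gt_r||

    where det_* are the detected eye centres and gt_* the ground-truth ones.
    A localization is then counted as successful when d_eye <= 0.05, the
    threshold used for the success rates quoted above.
    """
    det_l, det_r = np.asarray(det_left, float), np.asarray(det_right, float)
    gt_l, gt_r = np.asarray(gt_left, float), np.asarray(gt_right, float)
    worst = max(np.linalg.norm(det_l - gt_l), np.linalg.norm(det_r - gt_r))
    return worst / np.linalg.norm(gt_l - gt_r)
```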
Face recognition experiments are then conducted on the LED faces registered using the automatic localization results of the multistage approach.
As shown above in Tables 2 and 3:
1) Very low error rates are achieved in all tests on the automatically localized faces. Nearly all the error rates are below 0.8%, whether the system is trained on manually or automatically registered data, which confirms once again that the proposed multistage approach provides accurate face localization.
2) The combined test on Auto/Auto faces (automatic registration of both training and test images) best represents a practical application scenario; surprisingly, it yields an error rate of less than 0.8%, which is better than the result obtained with manually registered training images. This demonstrates the excellent performance of the proposed automatic face recognition system in a practical environment with varying illumination.
For more information, please contact Prof. Josef Kittler.