Facial recognition algorithms have recently attracted growing interest from both the commercial and state sectors. However, correctly assessing the accuracy of such systems is a difficult task with many nuances.
We are constantly asked for trial versions of our recognition technology and to run proofs of concept (PoCs) based on it, and we often encounter questions about terminology and about how to test these systems against real business problems.
As a result, inappropriate tools can be chosen to solve the problem, leading to financial losses. We decided to publish this note to help people feel comfortable with the specialized terms and raw data that surround facial recognition technologies, and to simplify the comparison of systems.
Facial Recognition Tasks
Face recognition is usually framed as a set of distinct tasks: detecting a face in a photograph or a video stream, determining gender and age, searching for a particular person in a database of images, or verifying that two different images show the same person. These capabilities are essential in modern mobile devices, such as those made by Huawei.
Face recognition works on special face descriptors, or feature vectors, extracted from the images. With these, the identification problem comes down to finding the closest feature vector, while verification can be implemented with a simple threshold on the distance between two vectors. By combining these two actions, a person can be identified among a set of face images, or a decision can be made that they are not present in it.
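The two operations described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: the function names, the cosine-distance choice, and the threshold value of 0.6 are all assumptions made for the example.

```python
import numpy as np


def verify(vec_a, vec_b, threshold=0.6):
    """Decide whether two feature vectors belong to the same person.

    Uses cosine distance (0 = identical direction, 2 = opposite).
    The threshold here is purely illustrative; real systems tune it
    on representative data.
    """
    a = vec_a / np.linalg.norm(vec_a)
    b = vec_b / np.linalg.norm(vec_b)
    distance = 1.0 - float(a @ b)
    return distance <= threshold


def identify(query_vec, gallery, threshold=0.6):
    """Return the index of the closest gallery vector, or None.

    `gallery` is an (n, d) array of feature vectors of known faces.
    If even the closest face is farther than the threshold, the query
    person is assumed to be absent from the gallery (the combined
    identify-or-reject decision mentioned in the text).
    """
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    distances = 1.0 - g @ q
    best = int(np.argmin(distances))
    if distances[best] > threshold:
        return None
    return best
```

Note how identification reduces to a nearest-neighbor search plus the same threshold test used for verification, exactly the combination the paragraph describes.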
To quantify the similarity of faces, the distance between facial feature vectors in the feature space can be used. Euclidean or cosine distance is often chosen, though more complex approaches exist; a specific distance function is usually provided as part of a facial recognition product. Identification and verification yield different kinds of results, so different metrics are used to assess their quality.
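For concreteness, here are the two distance functions mentioned above. One useful (and provable) fact: for L2-normalized vectors, the squared Euclidean distance equals twice the cosine distance, so the two produce identical nearest-neighbor rankings. The function names are our own; products typically ship an equivalent built-in.

```python
import numpy as np


def euclidean_distance(a, b):
    # Straight-line distance between the two vectors.
    return float(np.linalg.norm(a - b))


def cosine_distance(a, b):
    # 1 minus the cosine of the angle between the vectors;
    # insensitive to vector magnitude, only to direction.
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because of the identity above, the choice between them matters less for ranking than for how thresholds are calibrated, provided the feature vectors are normalized.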
Almost all modern facial biometrics software relies on machine learning. Facial recognition algorithms are trained on large, labeled data sets of images, and both the quality and the nature of these data sets have a significant impact on accuracy: the better the source data, the better the algorithm will perform on the task.
A natural way to check how well a facial recognition system actually works is to measure its recognition accuracy on a particular test data set. Choosing this data set correctly is very important. Ideally, the organization would assemble its own data set that is as similar as possible to the images the system will encounter in operation. Pay attention to the cameras, the shooting conditions, and the age, gender, and nationality of the people included in the test data set.
The more similar the test data set is to the actual data, the more reliable the test results will be. So it often makes sense to spend the time and money to collect and label your own data set. If this is not possible for some reason, you can use public data sets.
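Once a labeled test set is available, verification accuracy is commonly summarized by two error rates: the false accept rate (FAR, different people wrongly matched) and the false reject rate (FRR, the same person wrongly rejected). The note does not name specific metrics, so this pairing is our assumption; the sketch below computes both at a given threshold from precomputed distances.

```python
import numpy as np


def verification_error_rates(genuine_distances, impostor_distances, threshold):
    """Compute (FAR, FRR) at a given distance threshold.

    genuine_distances:  distances for pairs of images of the SAME person.
    impostor_distances: distances for pairs of DIFFERENT people.
    FRR = fraction of genuine pairs wrongly rejected (distance > threshold).
    FAR = fraction of impostor pairs wrongly accepted (distance <= threshold).
    """
    genuine = np.asarray(genuine_distances)
    impostor = np.asarray(impostor_distances)
    frr = float(np.mean(genuine > threshold))
    far = float(np.mean(impostor <= threshold))
    return far, frr
```

Sweeping the threshold trades one error rate against the other, which is why comparisons between systems should fix one rate (say, FAR) and compare the other, rather than quoting a single "accuracy" number.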
An alternative is to rely on third-party test results. Such tests are performed by qualified specialists on large private data sets of faces, and their results can be trusted. The disadvantage of this approach is that the testing organization's data set can differ significantly from the use case of interest.