Facial recognition differs from conventional camera surveillance: it is not mere passive recording, but rather entails identifying an individual by comparing newly captured images with images saved in a database.
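That comparison step can be sketched in a few lines. This is a minimal illustration, not any particular vendor's method: it assumes each face image has already been converted into a numeric feature vector (an "embedding", typically produced by a trained neural network), and the names and threshold below are purely illustrative.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two embedding vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_against_database(probe, database, threshold=0.8):
    # Compare a newly captured embedding against every enrolled one
    # and return the best-scoring identity if it clears the threshold.
    best_id, best_score = None, -1.0
    for identity, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None

# Toy vectors standing in for face embeddings.
db = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.95, 0.3]}
print(match_against_database([0.88, 0.12, 0.21], db))  # -> alice
```

The threshold is the key privacy-relevant knob: set too low, the system confuses strangers for enrolled individuals; set too high, it fails to recognise the people it should.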
The status in Europe
Although facial recognition is not yet specifically regulated in Europe, it is covered by the General Data Protection Regulation (GDPR) as a means of collecting and processing personal biometric data, including facial images and fingerprints. Facial recognition is therefore only permissible under the criteria set out in the GDPR.
Biometric data provides a high level of accuracy when identifying an individual, owing to the uniqueness of the identifiers (a facial image or fingerprint), and has great potential to improve business security.
The processing of biometric data, which is considered sensitive data, is in principle prohibited, subject to certain exceptions: for reasons of substantial public interest, to protect the vital interests of the data subject or another person, or where the data subject has given explicit consent, to name a few.
Moreover, other factors such as proportionality and power imbalance are weighed when determining whether an exception is valid. For instance, facial recognition can be considered disproportionate for tracking attendance at a school, since less intrusive options are available. Even when the data subject has explicitly consented to the processing of biometric data, consideration should be given to the potential imbalance of power between the individual data subject and the institution processing the data. In a student-and-school scenario, for example, there could be doubts as to whether the consent of a student's parents to the use of facial recognition techniques is freely given in the manner intended by the GDPR, and is therefore a valid exception to the prohibition on processing.
One of the challenges in this field is that the underlying technology used for facial recognition, such as AI, can present serious risks of bias and discrimination, affecting and discriminating against many people without the social control mechanisms that govern human behaviour. Bias and discrimination are inherent risks of any societal or economic activity, and human decision-making is not immune to mistakes and biases. The same bias, however, when present in an AI system, can have a much larger effect.
Authentication vs. identification
Biometrics for authentication (a security mechanism that verifies a person is who they claim to be) is not the same as remote biometric identification (used, for instance, in airports or public spaces, to establish the identities of multiple individuals at a distance and in a continuous manner by checking them against data stored in a database).
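The distinction can be made concrete: authentication is a 1:1 comparison against a single claimed identity, while identification is a 1:N search across everyone enrolled. A minimal sketch, assuming precomputed embeddings and a simple distance-based similarity score (the function names, data and threshold are illustrative assumptions, not taken from any real system):

```python
def similarity(a, b):
    # Toy similarity: negative squared distance between embedding vectors.
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def authenticate(probe, claimed_id, database, threshold=-0.01):
    # 1:1 check - compare the probe only against the claimed identity.
    return similarity(probe, database[claimed_id]) >= threshold

def identify(probe, database, threshold=-0.01):
    # 1:N search - compare the probe against every enrolled identity.
    scores = {i: similarity(probe, e) for i, e in database.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

db = {"alice": [0.9, 0.1], "bob": [0.1, 0.9]}
probe = [0.89, 0.11]
print(authenticate(probe, "alice", db))  # -> True
print(identify(probe, db))               # -> alice
```

The 1:N case is what the Commission flags as most intrusive: it runs the comparison against everyone in the database, for everyone who walks past the camera, with no claim of identity involved.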
The collection and use of biometric information for facial recognition and identification in public spaces carries specific risks for fundamental rights. Indeed, the European Commission (EC) has warned that remote biometric identification is the most intrusive form of facial recognition, and it is in principle prohibited in Europe.
So where is all this going?
What should prevail: the protection of fundamental rights, or the advancement that comes with invasive and overpowering new technologies?
New technologies, like AI, bring some benefits, such as technological advancement and more efficiency and economic growth, but at what cost?
Using a risk-based approach, the EC has considered the use of AI for remote biometric identification and other intrusive surveillance technologies to be high-risk, since it could compromise fundamental rights such as human dignity, non-discrimination and privacy protection.
The EU Commission is currently investigating whether additional safeguards are needed, or whether facial recognition should be disallowed in certain cases or areas, opening the door to a debate on the scenarios that could justify the use of facial recognition for remote biometric identification.
Artificial intelligence entails great benefits but also several potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes.
To address these challenges, the Commission, in its white paper on AI issued in February this year, has proposed a new regulatory framework for high-risk AI and a prior conformity assessment, including testing and certification of high-risk AI facial recognition systems, to ensure that they abide by EU standards and requirements.
The regulatory framework will include additional mandatory legal requirements related to training data, record-keeping, transparency, accuracy, oversight and application-based use, and specific requirements for some AI applications, specifically those designed for remote biometric facial recognition.
We should therefore expect new regulation aimed at an AI framework that complies with current legislation and does not compromise fundamental rights.
Opportunities for startups?
Facial recognition technologies are here to stay, so if you are thinking about changing your hair colour, watch out: your phone might not recognize you! At the speed with which facial recognition is growing, we should not have to wait long for new forms of 'selfie payment'.
Facial recognition is already being used quite successfully in several areas, among them:
- Health: Where, thanks to face analysis, it is already possible to track patients' use of medication more accurately;
- Market and retail: Where facial recognition promises the most, since 'knowing your customer' is a hot topic. This means placing cameras in retail outlets to analyse shoppers' behaviour and improve the customer experience, subject of course to the corresponding privacy checks; and,
- Security and law enforcement: That is, to find missing children, identify and track criminals or accelerate investigations.
With lots of choices on the horizon for facial recognition, it remains to be seen whether European startups will lead new innovations in this area.