Maintaining compliance with face biometrics
Updated: Nov 19
Face biometrics are becoming increasingly standard for identity verification and user authentication. They offer notable advantages: impressive precision, a frictionless user experience, strong security, and high availability (people cannot forget or lose their faces).
At the same time, troubling stories about face recognition routinely make their way into the news, making compliance a key concern for seasoned security professionals evaluating such technologies. This post attempts to address those concerns with a quick review of the legal state of affairs regarding face biometrics in the United States.
Disclaimer: This blog post is for informational purposes only and not for the purpose of providing legal advice. You should contact your legal department to obtain advice with respect to any particular issue or problem. This is a very dynamic technical field that also has ongoing legislation, regulation, and judicial decisions. Make sure you are up to date!
Let’s first clarify two fundamental terms that tend to get mixed up yet differ greatly in their privacy and compliance implications: authentication and identification.
Biometric authentication answers the question “is this person the one I expect?” Such technology captures biometric information from an individual, and compares that information only to the biometric signature of the expected person. In this way biometric authentication can verify the person is who he or she is supposed to be. In other words this is a 1:1 comparison.
Biometric identification answers the question “who is this person?” Here too the system captures biometric information from an individual, but it then cross-references it against many other people in its system (technically a 1:N search) in order to determine who that person is. It is the use of face biometric identification that gets almost all the negative press.
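The difference between the two comparison modes can be sketched in a few lines of Python. Everything below is illustrative: the embedding vectors, cosine similarity metric, and decision threshold are assumptions for the sketch, not any particular vendor's API.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (hypothetical representation)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.8  # illustrative match threshold; real systems tune this carefully

def authenticate(probe, enrolled_template):
    """1:1 comparison: is this the ONE person we expect?"""
    return cosine_similarity(probe, enrolled_template) >= THRESHOLD

def identify(probe, gallery):
    """1:N search: compare the probe against EVERY enrolled template."""
    best_id, best_score = None, 0.0
    for person_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= THRESHOLD else None
```

Note how `identify` necessarily touches every record in the gallery, which is exactly why 1:N identification raises the privacy questions that 1:1 authentication does not.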
With these definitions in mind we can now move to review the legislation as it applies to each of these use cases.
While there are no rules specifically targeting biometric authentication, five states (New York, California, Washington, Illinois, and Texas) regulate the commercial use of biometric technology generally. Illinois was a real pioneer in this regard: its 12-year-old Biometric Information Privacy Act (“BIPA”) is a comprehensive statute that gave rise to a number of class-action lawsuits and paved the way for legislation in other states, including Texas and Washington.
California became the fourth state with a biometric privacy law when the California Consumer Privacy Act (“CCPA”) took effect in 2020. Although it targets the much broader scope of personal data, the CCPA also covers any business entity that collects biometric identifiers for commercial purposes. The most recent legislation is New York State’s Stop Hacks and Improve Electronic Data Security (SHIELD) Act, which took full effect in March 2020 and follows a similar path by explicitly defining biometric data as private information.
While an exhaustive study of these laws is beyond the scope of this blog post, we can note a few key elements that recur across them:
- A repeated requirement for written notice to, and written consent from, the user before collecting and using any biometric identifier or information about them.
- A trend to exclude cases in which the biometric information is used exclusively for fraud prevention or security purposes.
- A further trend to exclude cases in which the collecting company can show that biometric data is handled with the same level of privacy protection required for PII under applicable privacy laws.
However, biometric data also falls under the more broadly defined Personally Identifiable Information (PII). That makes the more generic privacy laws, such as the GDPR in the EU, apply to any handling of biometric data. Since most companies already have privacy and compliance programs dealing with PII, it should be straightforward to extend them to cover biometric data as well.
That said, it is important to keep in mind that biometric data carries a combination of characteristics that poses privacy risks which typically do not exist with traditional PII: it is permanent (to a good approximation; bio-modifications are a topic for another post), and it is strongly identifying (false positives are quite rare). From a compliance standpoint, the perpetual nature of biometric data means that storing PII together with biometric data in a single data store, with cross-references between them, is extremely risky. You should make every effort to avoid such a data architecture.
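As a rough illustration of that delinking advice, here is a minimal Python sketch of one possible architecture: biometric templates live in their own store, keyed only by random opaque references, while the PII record holds just that reference and never the template itself. All class and field names here are hypothetical, not a prescribed design.

```python
import secrets

class PiiStore:
    """Hypothetical PII store (names, emails); would live in its own system."""
    def __init__(self):
        self._users = {}

    def create_user(self, user_id, name, email, template_ref):
        # The user record holds only an opaque reference, never biometric data.
        self._users[user_id] = {"name": name, "email": email,
                                "template_ref": template_ref}

    def template_ref(self, user_id):
        return self._users[user_id]["template_ref"]

class BiometricStore:
    """Hypothetical template store; physically separate and contains no PII."""
    def __init__(self):
        self._templates = {}

    def enroll(self, template):
        ref = secrets.token_hex(16)  # random opaque key, meaningless on its own
        self._templates[ref] = template
        return ref

    def fetch(self, ref):
        return self._templates[ref]
```

With this split, a breach of the biometric store alone yields templates with no names attached, and a breach of the PII store alone yields no biometric data.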
Since identification is the broader use case, all the rules and regulatory requirements that apply to authentication apply here as well. Special attention should be paid to the requirement for explicit consent.
Much of the negative media attention focuses on biometric identification of people who are unaware that they are being monitored, or even that they have been labeled to begin with. Examples range from identification of protesters by law enforcement agencies to automatic labeling of people appearing in images on commercial social networks.
Yet for all the public attention, and despite the real risks and ethical issues, there is relatively little legislation specifically addressing biometric identification beyond the generic requirements mentioned above. And the laws that do exist are scattered.
Portland, OR has introduced a full ban on any 1:N biometric identification, permitting only login/authentication usage. San Francisco, CA has also introduced limiting measures, mainly targeting the government's use of such technologies in public.
This patchwork regulatory environment means that companies using 1:N biometric identification should be extra cautious, keep a close eye on local regulations, and keep the number of a good PR firm close at hand. At the same time, it is still legal in 45 states for software to identify an individual using images taken without consent while they are in public.
To conclude, maintaining a compliant privacy program, and handling biometric data under that program as PII, should serve as the baseline for any company working with biometric data. Avoiding biometric identification and delinking biometric data from other PII are good fundamentals for steering clear of regulatory issues and public scrutiny. Most importantly: be obsessed with strong user consent. The tremendous potential of biometrics to enhance security, usability, and experience certainly justifies the effort.
It does not stop here. The fast pace of technological development, and the regulatory catch-up that characterizes the field of biometrics, means that keeping your finger on the pulse is the smart thing to do. Just on November 3rd, voters passed a Portland, Maine initiative banning face biometrics use by local police and agencies. Meanwhile in Baltimore, Maryland, a proposal to ban the use of face biometrics in the city was defeated, albeit by a narrow margin.
Lastly, and most significantly, the California Privacy Rights Act (CPRA) has now passed. This new act overhauls the preexisting CCPA and becomes legally enforceable in 2023. Among other changes, it better defines who is subject to it, and provides consumers with additional rights to determine how businesses use their data. It also quantifies the risk of data loss by setting fines for companies that allow customer data to be leaked.
This reinforces my first recommendation above: beyond all the specifics of biometric data, the general recommendation of maintaining a compliant privacy program is key. Stick to that and you are probably already 90% on target.