By Chris Burkett, Partner, Baker McKenzie LLP
There is no doubt that the rapid pace of technological advancement in our society is both exciting and, at times, concerning, not to mention dizzying. The full impact of the ongoing technological revolution remains to a large extent unknown, and legal regimes are struggling to keep up. Increasingly there are calls for greater regulation and oversight in the technology space.
One particularly powerful technological tool that is garnering significant attention and debate is facial recognition technology (FRT). Opponents argue that widespread use of FRT poses unique threats to human rights and privacy interests. Those opposed to FRT have described it as a “menace disguised as a gift,” arguing that it is an “irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control.”
On the other hand, proponents see the tremendous advantages that FRT can provide to governments and businesses. For example, FRT systems can be used to spot known shoplifters as they enter a store, allow for frictionless border passage, and identify a missing child or a wanted person on a crowded subway platform.
Despite the benefits, even those who are actively developing FRT technology have called for increased regulation to ensure appropriate limits. Microsoft recently promoted the idea that the US government should take an active role in managing FRT’s use through a bipartisan commission, while at the same time setting out an internal set of principles to address concerns around the use of FRT.
What Exactly Is FRT?
FRT is a form of biometric identification (in the same category as fingerprints, retinal scans, and DNA markers) that relies on the unique characteristics of our faces. FRT systems use algorithms to identify facial features from a photograph, such as the distance between the eyes or the shape of the chin, and then convert that information into a mathematical representation that is compared against a database of previously collected faces to seek a match. As Woodrow Hartzog, Professor of Law and Computer Science at Northeastern University School of Law and the College of Computer and Information Science, has noted, surveillance conducted with FRT is different from that based on other biometric identifiers. Its mere presence, he argues, harms freedoms because people change their behavior when they believe they are being watched.
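The matching step described above can be sketched in simplified form: each face is reduced to a numeric feature vector ("template"), and a probe image's vector is compared against enrolled templates by distance, with a match declared only below some threshold. The vectors, labels, and threshold below are hypothetical illustrations; real systems use high-dimensional vectors produced by trained neural networks.

```python
import math

def euclidean_distance(a, b):
    """Distance between two face templates (feature vectors)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_match(probe, database, threshold=0.6):
    """Return the label of the closest enrolled face, or None if no
    stored template is within the matching threshold."""
    best_label, best_dist = None, float("inf")
    for label, template in database.items():
        d = euclidean_distance(probe, template)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None

# Hypothetical enrolled templates; real deployments store hundreds of
# dimensions per face, derived from the geometry of facial features.
database = {
    "alice": [0.1, 0.8, 0.3],
    "bob":   [0.9, 0.2, 0.5],
}

print(find_match([0.12, 0.79, 0.31], database))  # close to "alice"
print(find_match([0.5, 0.5, 0.9], database))     # no confident match: None
```

The threshold is where many of the article's concerns arise in practice: set it loosely and the system produces false matches; set it strictly and it fails to recognize legitimate matches, with error rates that can differ across demographic groups.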
The Growth of FRT and Responses
FRT has grown exponentially in sectors and spaces that touch the wider public, at times without consent: law enforcement, immigration, healthcare, marketing, and retail. Controversially, its use in mass surveillance through body cameras and CCTV has led to legislative moratoriums on the technology and responses from the companies that manufacture and use FRT themselves. Government surveillance is known to have a chilling effect on the willingness of individuals to engage in public debate, or even to associate with those whose values or political views may be considered controversial.
Despite the growing tally of critics, FRT continues to expand as a market and in our everyday lives. China leads the way in the integration of facial-recognition systems into daily governance: FRT forms a major piece of its so-called social credit system, which has blurred the lines between public and private in an effort to shape social behavior and track targeted groups. The risk of complicity in human rights abuses remains high, and calls for accountability are mounting.
Calls for Broad, Principled Regulatory and Ethical Frameworks
The overwhelming concern for potential abuse has led regulators and businesses alike to draft broad regulatory and ethical frameworks on technology for greater guidance.
The European Union, with one of the world’s strongest privacy regimes, has published a set of Ethics Guidelines for Trustworthy Artificial Intelligence on how companies and governments should develop ethical technology applications. The Council of Europe’s Commissioner for Human Rights has also published recommendations for improving compliance with human rights regulations by parties developing, deploying, or implementing technology such as FRT. NGOs and human rights groups have prepared similar guidelines and directives for the public and private sectors. One particular concern has been the disproportionate number of errors that occur when FRT is applied to non-Caucasian faces.
In addition, as mentioned earlier, businesses like Microsoft have taken an ethical stand in response to the public critique by calling for legal reform and adopting their own set of ethical principles. These six principles constitute a nascent framework: fairness, transparency, accountability, non-discrimination, notice and consent, and lawful surveillance. Microsoft has stated that taking a principled approach to FRT will provide valuable experience that can be shared with and used by other companies.
The Council of Europe has also released recommendations that call for even more in regard to how businesses and public authorities should proceed. It calls for the use of a human rights law framework to map out duties and responsibilities in carrying out human rights due diligence. In particular, Human Rights Impact Assessments of business projects can help identify and evaluate such issues through applying a human rights-based approach that is consistent with the United Nations Guiding Principles on Business and Human Rights. Independent oversight and other governance mechanisms are also called upon to handle compliance and complaints.
Perhaps one of the most effective yet complicated areas to navigate without more effective legislation is the matter of remedies. The Council of Europe writes that “Responsibility and accountability for human rights violations associated with [technology] must always lie with a natural or legal person. At a minimum, individuals should be able to obtain human intervention. Effective remedies should be implemented to ensure individuals have redress for any harm suffered as a result of AI systems.” Without this last piece, an ethical framework will likely leave individuals harmed by misuse without appropriate redress.
Whether you are for or against the expansion of FRT in our daily lives, it is clear that we as a society must grapple with how to best protect our freedom and privacy in an age of ever-expanding surveillance technology. It is important that businesses and governments seek to ensure the responsible use of FRT with clear rules that uphold democratic freedoms and human rights. As Brad Smith of Microsoft recently noted, democracies depend on our ability to freely associate without constant surveillance. FRT is being refined and its use expanded at a rapid pace. Only time will tell if responsible business practices and regulatory developments can keep up.
About the Author
Chris Burkett is a Partner at Baker McKenzie LLP, based in Toronto. Chris has extensive experience in litigation, disputes, internal investigations, and regulatory compliance, and he has appeared before a variety of tribunals, all levels of trial court, and the Court of Appeal for Ontario. He has worked in the firm’s London office and conducted internal investigations of anti-corruption and human rights compliance issues for multinational corporations and their subsidiaries across many international jurisdictions. Chris also appears regularly on local and national TV news broadcasts as a legal analyst. Prior to joining the firm, Chris was a Crown Attorney and acted as lead prosecutor in numerous trials. His experience extends to matters involving administrative tribunals, judicial review applications, injunctions, trials, appeals, and sensitive internal investigations.