The World Economic Forum (WEF) has released the first framework for the safe and trustworthy use of facial recognition technology.
The Framework for Responsible Limits on Facial Recognition was built by the Forum, industry actors, policymakers, civil society representatives and academics. It is intended to be deployed and tested as a tool to mitigate risks from potential unethical practices of the technology.
The market for facial recognition tools and services is expected to more than double in value to $7 billion by 2024, according to a report published by Markets and Markets. However, the technology's use raises concerns in areas such as ethics and privacy, and politicians and civil rights agencies have called for safeguards against its potential misuse.
Biometric monitoring and susceptibility to unfair bias are primary concerns, and the lack of industry standards is a barrier to companies and governments realising the technology's potential benefits.
“Although the progress in facial recognition technology has been considerable over the past few years, ethical concerns have surfaced regarding its limitations,” said Kay Firth-Butterfield, head of artificial intelligence (AI) and machine learning at the World Economic Forum. “Our ambition is to empower citizens and representatives as they navigate the different trade-offs they will face along the way.”
The framework aims to operationalise use cases for two distinct audiences: engineering teams and policymakers. Members of the working group have played two complementary roles.
The first are contributors: industry representatives (Groupe ADP, Amazon Web Services, IDEMIA, IN Groupe, Microsoft and SNCF); policymakers (members of the French Parliament and OPECST); academics; civil society organisations; and AFNOR Certification.
The second are observers: the French Data Protection Authority (Commission Nationale de l’informatique et des libertés - CNIL) and the French Digital Council (Conseil National du Numérique).
The framework itself is structured around four steps.
Jean-Luc Dugelay, computer vision researcher at the French graduate school and research centre Eurecom Sophia Antipolis, said recent scientific progress, both in AI and in computer vision more specifically, has enabled a significant breakthrough in areas related to facial recognition.
He added: “For that reason, I believe that it is essential to accompany these advances in science with a global policy reflection on the appropriate use of this technology, through a multi-stakeholder collaboration that involves academics, engineers, technology providers and users, policymakers, lawyers and citizens.”
Concerns over facial recognition technology have already prompted some city governments to take action. San Francisco became the first US city to ban its use by local agencies, and Somerville in Massachusetts passed an ordinance stopping the city government from using it.
Portland in the US has proposed strict controls on the use of the technology, but private companies have indicated they may oppose an outright ban.