It is unquestionably heartening that the Black Lives Matter movement has inspired millions of Americans to question their understandings of race, policing, justice and the true nature of history. It represents, perhaps, the most hopeful global development in this most tumultuous year. Some cities and states have already taken steps toward reforming or altogether eliminating their police departments in an unprecedentedly swift translation of passion into action. This, too, gives cause for hope, but for reasons both political and logistical, a national approach to police reform has yet to materialize.
“Defund the Police”: A Simple Slogan for a Complex Problem
Policing in America is, by design, a state and local responsibility. But this has not stopped the federal government from weighing in on police matters or funding them in significant ways. New national-level reforms to law enforcement will present obvious challenges, but nowhere could they make a greater impact than in regulating the technologies and tools available to police. Police have never been more powerful on this front, and particularly worrisome is the availability of invasive — and often malfunctioning — facial recognition software. Intelligent regulation of tools like these must navigate some problematic global developments regarding these technologies and, where appropriate, take cues from the industry responsible for their production.
Globally, there appear to be two trends developing regarding government regulation and use of facial recognition software. While both are underwhelming, they each underscore truths about the nature of these tools with which any better alternatives must also contend.
First is the more immediately worrisome approach to facial recognition, which comes in the form of a familiar Faustian bargain: the exchange of privacy for safety. This approach involves a full embrace of facial recognition’s capabilities in the name of national security, now including essential and expansive COVID-19 contact tracing. While straightforward, this rationale for facial recognition is far too easy to exploit. For example, it is widely reported and understood that facial recognition software is indispensable to China’s ongoing efforts to target Uighur Muslims. Similarly, the use of facial recognition in India is fertile ground for privacy and civil liberties concerns that show no signs of abating.
The second approach appears to be, for lack of a better term, punting. This approach was best epitomized by the European Union Commission’s omission of any meaningful reference to facial recognition in its latest white paper on artificial intelligence. This will certainly be remembered as a moment the EU missed. While it might still articulate a comprehensive regional approach to the use of such tools, it is clear that attempting to do so in the age of coronavirus was too much to ask.
While the EU may have punted such a decision into the future, the US federal government has seemingly punted it to local authorities. What has arisen instead of a coherent national strategy is a patchwork: some states and cities ban certain uses of certain tools, others ban them completely, a swarm of jurisdictions uses them with gusto, and many have no facial recognition policy at all.
Fortunately, global demonstrations against the overuse of police power have, for the time being, held further expansion of these tools at bay. On this front, the most meaningful reform has come from technology companies themselves. IBM suspended its development of facial recognition software, citing its problematic implications in the hands of law enforcement. Similarly, Amazon suspended the sale of its facial recognition platform, Rekognition, to law enforcement agencies for one year, reasoning that lawmakers would need that time to regulate facial recognition programs in a fair, transparent way. Microsoft and Google have also announced that they will not provide facial recognition software to law enforcement agencies for the time being.
Line of Business
When IBM, Amazon, Microsoft and Google all decide that a line of business is not worth the risk or the money, lawmakers need to pay attention. That said, concerned citizens must note that most of these encouraging company policies are designed to be temporary, and even those not billed as such can change as quickly as one can schedule a board meeting. Just as critically, these firms mainly cite hypothetical improper uses of their software in the hands of others, rather than flaws in the software itself, as the reason for their restraint. One must squint harder to discern whether these firms believe facial recognition tools are problematic in and of themselves.
To miss this distinction is to miss the entire need for government action and the limits of corporate self-regulation as a national strategy. Though unmentioned in these tech titans’ official statements, the full technical context for why such tools are concerning in any user’s hands is neatly summarized in reports like the 2018 study from MIT Media Lab and Microsoft Research, which lays out specific ways in which facial recognition programs routinely misidentify non-white faces, particularly those of non-white women. These findings demonstrate that such technologies, even when used competently, will not make policing any smarter, less discriminatory or less capable of encroaching on people’s freedoms. The replacement of systemic, interpersonal racism with hard-coded, digital racism is no reform at all.
People need to know what tools are currently in use to police their communities. And they are, indeed, in use. In the fall of 2016, Georgetown Law’s Center on Privacy & Technology published a study finding that 50% of American adults already had images of their faces captured in at least one police database. Additionally, as recently as 2019, the Department of Homeland Security announced plans to use facial recognition software to monitor 97% of air travelers. A lack of transparency and further delays to meaningful regulation only allow these programs to become ever more essential to the day-to-day operations of law enforcement.
The effects of national incoherence on this critical issue are not neutral. Self-imposed bans, incongruous local decisions lacking clear federal guidance, and the steady march of more authoritarian uses of such technologies around the world are insufficient responses to the questions this moment of global reckoning has thrust upon the world’s police and security agencies.
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.