Finding the balance between safety and freedom in the shadow of COVID-19

Countries around the globe are focusing their collective attention on humanity’s most immediate existential threat. The coronavirus threatens jobs, global economic activity, international relations, the health of our loved ones, and our own lives. To combat this pandemic, epidemiologists need data to better understand where and how the coronavirus may be spreading among populations. Leaders from the international level down to local governments need to be able to track the spread of the virus in order to make informed decisions about how to manage resources, handle shelter-in-place restrictions, and reopen businesses.

The technologies politicians are testing, like phone-based contact tracing, thermal scanning, and facial recognition, all amount to surveillance under friendlier names, and the tradeoffs being weighed now could extend well beyond this crisis.

Before the pandemic, one of the most important — and popular — movements in ethics and social justice was the push against technology-powered surveillance, especially AI technologies like facial recognition. It’s a rich topic about power, one that pits everyday people against the worst parts of big tech, overreaching law enforcement, and potential government abuse. “Surveillance capitalism” is as gross as its name implies, and speaking truth to that particular sort of power feels good.

But now, with millions suddenly unemployed and some 80,000 deaths from COVID-19 in the U.S. alone, the issue is no longer corporate profits or policing efficacy versus privacy, security, and power. In a global pandemic, the tradeoff may very well be privacy, security, and power versus life itself.

The spread of the coronavirus poses an immediate life-and-death threat. No one alive has experienced anything like it on such a scale, and everyone is scrambling to adjust. Against such a dire backdrop, theoretical concerns about data privacy or overreaching facial recognition-powered government surveillance are easily brushed aside.

Is it really such a bad thing if our COVID-19-related medical records go into a massive database that helps frontline health care workers battle the disease? Or if that data helps epidemiologists track the virus and understand how and where it spreads? Or aids researchers in developing cures? Who cares if we have to share some of our smartphone data to find out whether we’ve come into contact with a COVID-19 patient? Is it really that onerous to deploy facial recognition surveillance if it prevents super-spreaders from blithely infecting hundreds or thousands of people?

Those are legitimate questions, but on the whole it’s a dangerously shallow perspective to take.

A similar zeitgeist permeated the United States after 9/11. Out of fear — and a strong desire for solidarity — Congress quickly passed the Patriot Act with broad bipartisan support. But the country lacked the foresight to demand and implement guardrails, and the federal government has held onto broad surveillance powers in the nearly two decades since. What we learned — or should have learned, at least — from 9/11 and the Patriot Act is that a proactive approach to threats should not exclude forward-looking protections. Anything less is panic.

The dangers posed by a hasty and wholesale surrender of privacy and other freedoms are not theoretical. They’re just perhaps not as immediate and clear as the threat posed by the coronavirus. Giving up your privacy amounts to giving up your power, and it’s important to know who will hold onto all that data.

In some cases, it’s tech giants like Apple and Google, which already face widespread distrust, but it could also be AI surveillance tech companies like Palantir, or Clearview and Banjo, which have ties to far-right extremists. In other cases, your power flows directly into the government’s hands. Sometimes, as when the government contracts a tech company to perform a task like facial recognition-powered surveillance, you could be giving your data and power to both at the same time.

Perhaps worse, some experts and ethicists believe systems built or deployed during the pandemic will not be dismantled. That means if you agree to feed mobile companies your smartphone data now, it’s likely they’ll keep taking it. If you agree to quarantine enforcement measures that include facial recognition systems deployed all over a city, those systems will likely become a standard part of law enforcement after the quarantines are over. And so on.

This isn’t to say that the pandemic doesn’t require some tough tradeoffs — the difficult but crucially important part is understanding which concessions are acceptable and necessary and what legal and regulatory safeguards need to be put in place.

For a start, we can look to some general best practices. The International Principles on the Application of Human Rights to Communications Surveillance, which have been signed by hundreds of organizations worldwide, have for years insisted that any mass surveillance effort must be necessary, adequate, and proportionate. Health officials, not law enforcement, need to drive the decision-making around data collection. Privacy considerations should be built into tools like contact tracing apps. Any compromises made in the name of public health need to be balanced against the costs to privacy, and if a surveillance system is installed, it needs to be dismantled once the immediate threat of the coronavirus subsides. Data collected during the pandemic must have legal protections, including stringent restrictions on who can access that data, for what purpose, and for how long.

In this special issue, we explore the privacy and surveillance tradeoffs lawmakers are working through, outline methods of tracking the coronavirus, and examine France as a case study in the challenges governments face at the intersection of politics, technology, and people’s lives.

This is a matter of life and death. But it’s about life and death now and life and death for years to come.