
An Interview with Andrew Burt
Andrew Burt

On March 17, 2020, Andrew Burt, a former policy advisor at the Federal Bureau of Investigation's Cyber Division, joined GJIA by phone to discuss the widespread implications of developing European Union (EU) regulations on facial recognition data and technology. He discusses the challenges of crafting laws that mitigate algorithmic bias, the differences between EU and US regulatory approaches, and the implications for businesses and citizens.

Georgetown Journal of International Affairs:

The EU is in the preliminary stages of crafting policy on how to regulate machine learning (ML)-based facial recognition research, development, and implementation. What are the most significant legal, ethical, and technical factors that the EU should consider when designing regulations, particularly regarding human review and correcting data bias?

Andrew Burt:

There is a lot to say about this large and growing field. And the truth is that many dissertations need to be written about how to manage all of the different risks and liabilities that can arise from ML-based systems. That said, I would posit two reactions. First, there are a host of existing laws and regulatory frameworks that already provide helpful guidance and ways to think about minimizing the risks of ML-based systems and artificial intelligence (AI) more broadly. One that I would cite, and hope European policymakers draw from, is a regulatory framework called SR 11-7, which broadly governs how US financial institutions use statistical models. One particularly significant concept within the SR 11-7 framework is called "effective challenge," which essentially means that third-party technical, governance, and compliance-minded personnel need to be reviewing, auditing, and "red teaming" all of the models being developed.

Second, I would make one more broad point about efforts to regulate facial recognition technology. I am incredibly sympathetic to what regulators are trying to prevent and the dangers they are reacting to. However, my feeling is that these regulatory efforts are fundamentally incomplete because they are not addressing the broader issue. The biggest challenge is not facial recognition narrowly but, more broadly, the increasingly easy and powerful ability to perform automated identification using all of the data we as individuals and consumers keep generating.

GJIA:

This year, the European Commission considered proposals that would place a temporary ban on the use of facial recognition technologies, but ultimately did not pursue the policy. Would you support a temporary ban on facial recognition technologies?

AB:

I would not support the idea of a temporary ban for two reasons. First, I do not think a temporary ban gets to the root of the actual problem, which I think is much larger than just facial recognition. What I am deeply concerned about is the ease with which organizations can quickly identify people in an automated fashion—again, it is a much broader problem than facial recognition. Second, a ban is too broad an approach to have the type of impact we are looking for. It is more practical to consider the most concerning uses of facial recognition technologies, voice identification technologies, and a host of other techniques that can make the identification of individuals seamless and automated—that is, to create a grouping of the technologies we are actually most concerned about. Then, once we identify the technologies that cause the most considerable harms, we can figure out a way to raise the compliance burden on companies that design them in order to ensure responsible behavior.

GJIA:

What are the most significant differences and similarities between EU and US facial recognition regulation? Do you see a potential divergence in regulations? If so, what would be the source of such a divergence?

AB:

We are still in the early days of any serious regulatory efforts on both sides of the Atlantic. Right now, there are very few laws on the books that directly govern how facial recognition technology can be deployed. In the EU there is the General Data Protection Regulation (GDPR), which is especially relevant to this discussion in how it applies to the use of biometric data. While the EU approach to privacy regulations is much more sweeping, is...
