I don’t like the look of your face

There is a problem with automated governance and outsourcing decision making to artificial intelligence. And that problem is probability.

And the probable problem at hand is that the innocent are flagged as guilty, and the right candidates flagged as wrong, with unacceptably high probability.

As we use automated systems such as facial recognition technology, even highly reliable ones, to separate the good guys from the bad guys, we need to be realistic about the probable harm caused by probable cause.

Take MIT’s “Gaydar”. With the Gaydar, MIT claimed it could predict with 91% accuracy whether someone is gay or straight just by analysing a photograph.

Ethical issues of even using such software aside (and in a world where being non-heterosexual is still a capital offence in no fewer than TEN countries, the ethical issues should in fact be front and centre), there are also mathematical issues with that “91% accuracy rate”.

Let’s do a word problem to break down the problems with that “91% accuracy rate”:

  • According to the results of this study, approximately 6.5% of men in the UK have reported feeling an attraction to another man at some point, so for our example we will go with “6.5% of men in the UK could be considered as gay”.
  • There are approximately 31 million men in the UK

In light of the above figures and definition, this means that there are approximately

  • 2,015,000 gay men in the UK
  • 28,985,000 straight men in the UK

Now, if you were to run all 31 million men in the UK through the MIT Gaydar, the system would guess each man’s sexuality correctly 91% of the time. Which sounds rather reliable, until you look a little closer. Here’s what the Gaydar will actually flag:

  • 1,833,650 gay men, correctly, as gay
  • 181,350 gay men, incorrectly, as straight
  • 26,376,350 straight men, correctly, as straight
  • 2,608,650 straight men, incorrectly, as gay

In other words, out of the 4,442,300 men flagged as gay by the “91% accurate” Gaydar, only 41% are actually gay.

In other other words, if the government of Yemen (one of the lovely places where being gay can get you shot or hanged by the government) decides to use the “91% effective” MIT Gaydar to prosecute suspected non-heterosexuals, they will murder the wrong man roughly three out of every five times.
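If you want to check the arithmetic yourself, here is a minimal Python sketch of the same calculation. The population, base rate and accuracy figures are simply the assumptions from the example above, not data from the study itself:

    # Base-rate arithmetic for the Gaydar example (assumed figures only)
    population = 31_000_000   # approximate number of men in the UK
    base_rate = 0.065         # assumed share of men who could be considered gay
    accuracy = 0.91           # claimed chance of labelling any one man correctly

    gay_men = population * base_rate
    straight_men = population - gay_men

    true_positives = gay_men * accuracy              # gay men correctly flagged as gay
    false_negatives = gay_men * (1 - accuracy)       # gay men wrongly flagged as straight
    true_negatives = straight_men * accuracy         # straight men correctly flagged as straight
    false_positives = straight_men * (1 - accuracy)  # straight men wrongly flagged as gay

    flagged_as_gay = true_positives + false_positives
    share_actually_gay = true_positives / flagged_as_gay

    print(f"Men flagged as gay: {flagged_as_gay:,.0f}")             # 4,442,300
    print(f"Share of them actually gay: {share_actually_gay:.0%}")  # 41%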

Now, for another example: What happens when a country uses facial recognition to identify terrorists? Even if the system is 99% accurate in identifying potential terrorists, if less than 0.1% of people actually ARE terrorists, more than 90 out of every 100 potential terrorists flagged by the system would be INNOCENT.

Here’s the maths again:

  • Sample size: 100,000
  • Pre-crime system accuracy: 99%
  • Actual prevalence of terrorists: 0.1%

Therefore there are:

  • 100 terrorists
  • 99,900 innocent citizens

System results:

  • 99 terrorists correctly identified as terrorists
  • 1 terrorist incorrectly identified as innocent
  • 98,901 innocent citizens correctly identified as innocent
  • 999 innocent citizens incorrectly identified as terrorists

Again, out of the 1,098 people “identified” as terrorists, only 99, or 9%, are actual terrorists.

In other words, if you are flagged as a terrorist, you have a 91% chance of being falsely accused.
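The same back-of-the-envelope calculation in Python, again using only the assumed figures from this example, shows where that 91% comes from:

    # Base-rate arithmetic for the terrorist-screening example (assumed figures only)
    sample = 100_000
    prevalence = 0.001   # assumed share of people who actually are terrorists
    accuracy = 0.99      # assumed chance the system labels any one person correctly

    terrorists = sample * prevalence
    innocents = sample - terrorists

    correctly_flagged = terrorists * accuracy     # real terrorists flagged
    wrongly_flagged = innocents * (1 - accuracy)  # innocent people flagged

    total_flagged = correctly_flagged + wrongly_flagged
    chance_innocent = wrongly_flagged / total_flagged

    print(f"People flagged as terrorists: {total_flagged:,.0f}")            # 1,098
    print(f"Chance a flagged person is innocent: {chance_innocent:.0%}")    # 91%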

This is highly alarming, considering that maths is not a prerequisite for political science degrees (and even some BA law degrees) in many universities around the world: the people who make decisions based on the “probability” of big data inferences have a very limited understanding of probability!

And that’s even before we get into the bits about facial recognition being racist and sexist (the technology is significantly better at identifying white men than it is at identifying people of other ethnicities and sexes).

Moral of the story? Artificial intelligence and probability-based systems, even when very accurate across large populations (and even when they are intelligent enough to avoid mistaking photographs on bus advertising for jaywalkers), are terribly inaccurate about individuals.

So, what will destroy more lives in the future? Actual terrorists or the Weapons of Math Destruction we deploy against innocent citizens in the pursuit of terrorists?

And, more importantly, do you want to live in a world where you are literally judged at face value?

 

 
