Citizens or code? – who do you trust?

From the calls to “defund the police” that rang across the USA in the lead-up to the 2020 presidential election, to the community groups that mobilised to defend their homes and businesses during the July 2021 riots that ripped through South Africa from Durban to Johannesburg, citizens are taking a closer interest in the viability and necessity of privatising safety and security, as they feel they can no longer rely on state institutions to adequately protect them.

Of course, private security solutions are hardly a new feature of life in South Africa, which has a long history of community watch groups, boomed-off suburbs, street surveillance cameras, and armed response services. What is new, however, is how a wave of technology companies is repackaging and gamifying “safety as a service” into a sort of anti-social team sport that encourages neighbours to watch each other and report suspicious persons and activities, often without the fully informed consent of all participants, and with some potentially serious side effects.

Take Citizen, for example (formerly, aptly, known as Vigilante), a popular US-based “personal safety app” that sends users location-based alerts and notifications about crimes and potential security threats in their area, based on both user-generated reports and scanned police communications. (Think of it as a combination of a WhatsApp group and Grindr, only for crime instead of gossip and hookups.)

Citizen gained notoriety in recent months after the app’s users misidentified a local vagrant as the arsonist responsible for starting a wildfire in California. Citizen spotted the marketing opportunity and offered a $30,000 bounty for bringing the suspect to arrest. A manhunt for the (ultimately innocent) man ensued, leading to all sorts of uncomfortable questions about how far is too far when it comes to private citizens and companies taking law and order into their own hands.

However, those ethical and legal issues aside (as important as they are), perhaps the deeper question we need to interrogate is one of social trust. In effect, what apps and companies like Citizen are really doing is encouraging us to replace trust in each other with trust in technology. They are asking us to trust the world as presented to us through our apps rather than the other individuals who share our world with us.

Similar trends towards replacing human relationships with computer code are currently playing out in other areas of economics, politics and society too. DeFi (decentralised finance) and cryptocurrencies aim to replace trust in fallible trade partners and financial intermediaries with trust in data locked in immutable blockchain databases. Similarly, the so-called “social credit score” systems currently being rolled out across China aim to quantify and codify the social “trustworthiness”, as well as the financial creditworthiness, of each individual citizen and company.

All these systems and technologies are intended to make the world a safer place to live and work in. There is no doubt that there are many untrustworthy individuals among us, and there are understandable reasons to want to defer to an algorithmic authority instead of our own judgement. However, as the Citizen story shows us, sometimes the data in the app or the blockchain is wrong.

This raises the question: Is working to replace community trust and interpersonal relationships with trust in impersonal code really a desirable end goal? Or is it just a way to avoid addressing the reasons why we do not trust each other in the first place?

This article was first published in Brainstorm magazine.