AI pattern matching is a black box. You can’t know how a decision is made until it’s too late to unmake it. A private surveillance firm plugging AI into policing doesn’t democratize safety or create objectivity; it accelerates suspicion based on existing grievances.
Except when it’s designed to suspect nothing. Flock’s response to controversies about privacy has included supposed “transparency” features, as well as tools that it claims will enable “public audits” of searches and results. And if your small police department that’s turned to Flock as a “force multiplier” doesn’t have the staff to run audits? No worries: “To support agencies with limited resources for audit monitoring, we are developing a new AI-based tool.… This tool will help agencies maintain transparency and accountability at scale.” Using an AI to monitor an AI is a level of absurdity Philip K. Dick never quite got to. Maybe someone can write a ChatGPT prompt for a novel in his style.
I think Dick would recognize another irony: AIs surveilling AIs surveilling us sounds like a dispassionate threat from without, but the ghost in the machine is that we cannot scrub away the passions and resentments that incite the obsession to begin with. The paternalism that launches the drone for our good doesn’t curb the risk that something will go wrong. When you use sophisticated technology to pursue vengeance, you are not elevating the action to a cause. Involving an AI doesn’t make violence an abstraction. An automated vigilante isn’t impersonal, just efficient.