AI is entering the state before society is ready


While we are still debating whether AI will take our jobs, it has quietly taken something far more consequential.

It has entered the state.

Not as an experiment. Not as a pilot. But as infrastructure.

Over the past few months, the signals have been hard to ignore. The Reserve Bank of India is seeking feedback from banks on deploying AI-powered facial recognition systems at ATMs, branches, and fraud hotspots across the country. Banks have already flagged concerns about cost and data privacy, but the consultation is underway.

Delhi Police deployed AI-enabled smart glasses with built-in facial recognition and thermal imaging for the first time this Republic Day, with officers scanning crowds against a database of 65,000 criminals in real time. The glasses are already being fitted into moving police vehicles for mobile surveillance.

The Ministry of Home Affairs has approved a pilot rollout of AI facial recognition at seven major railway stations, including Mumbai CST, New Delhi, Bengaluru, and Howrah, cross-referencing faces against India’s National Database on Sexual Offenders, which contains over two million records. In Visakhapatnam, an AI-powered humanoid robot called ASC Arjun already made its first arrest in February, identifying two habitual offenders from a criminal database and alerting security personnel in real time.

And then there are the classrooms. In February 2026, Karnataka’s government formally approved an AI-driven facial recognition attendance system under the Nirantara project, to be rolled out across 52,686 government and aided schools covering over five lakh students. A coalition of 31 organisations has written to the Chief Minister urging cancellation, warning of risks ranging from data leaks to child trafficking. The government is proceeding.

This is not a tech story anymore. This is a governance story.

 

The shift we are missing

We tend to think of AI as a consumer technology. Chatbots. Image generators. Productivity tools.

But that phase is already behind us.

AI is now being embedded into the systems that define how society functions: who gets access, who gets flagged, who gets watched, who gets believed.

In other words, AI is moving from interface to infrastructure.

And infrastructure has a very different kind of power. It does not ask for your attention. It shapes your reality.

 

The quiet normalization of surveillance

Every deployment today is justified by a reasonable goal.

Prevent fraud. Improve safety. Increase efficiency. Protect women and children.

It sounds hard to argue with. 

If facial recognition reduces ATM fraud, why resist? If AI helps police identify criminals faster, why question it? If schools can track attendance without wasting teaching time on roll calls, why hesitate?

But this is how surveillance systems have always scaled across history. Not through coercion, but through convenience.

The problem is not that these systems exist. The problem is that they are expanding faster than the frameworks governing them.

Banks are already raising concerns about cost and data privacy in the RBI consultation process. Child rights organizations and digital rights experts have pointed out that Karnataka’s school system has no consent framework for collecting biometric data from minors, no clear accountability mechanism if the data is breached, and no answer to what happens when a child’s face is misidentified.

Globally, similar AI surveillance deployments have repeatedly shown limited evidence of improving safety outcomes, while measurably expanding state control and chilling self-expression in public spaces. We are importing the capability before resolving the consequences.

 

The illusion of control

There is a comforting assumption underlying all this: that humans remain in control. That AI is just a tool. That the decisions are still human.

But that assumption becomes fragile when AI is embedded into high-speed systems.

When a system flags you as a criminal match, what happens next? When a biometric system fails on your face because of lighting, age, or the simple fact that the training data did not include people who look like you, who is accountable? When an algorithm decides what counts as fraud, threat, or suspicious behaviour, who audits it?

We are building systems where decisions happen faster than appeals, detection happens without transparency, and accountability becomes diffused across vendors, regulators, and government departments until it disappears entirely.

Once such systems are operational, rolling them back is nearly impossible.

 

India’s moment of choice

India is at a unique point in this transition.

On one hand, it is pushing for sovereign AI, local models, and large-scale adoption. On the other, it does not yet have a unified, enforceable AI governance framework. The Digital Personal Data Protection Act is on the books, but its enforcement mechanisms remain limited, particularly around sensitive categories like biometric data and minors.

What we have instead is a patchwork: sectoral adoption in banking, policing, transport, and education; reactive regulation through labelling mandates and takedown timelines; and fragmented oversight that no single citizen can trace from end to end.

This creates a dangerous asymmetry. The state becomes AI-enabled faster than the citizen becomes AI-aware.

 

What we are not preparing for

The real risk is not job loss. That debate is already tired.

The real risk is a shift in the balance of power.

When AI enters governance, opacity increases, contestability decreases, and trust becomes fragile.

And in a country as large and diverse as India, these systems will not affect everyone equally. They will disproportionately shape the lives of those already navigating structural inequality, those whose faces are least represented in training data, those who cannot afford the legal recourse to challenge a wrongful flag.

Because AI does not erase bias. It operationalizes it. It gives bias the authority of a system and the speed of a machine.

 

The uncomfortable question

We are asking the wrong question.

Not “Is AI good or bad?” Not even “How do we regulate AI?”

The real question is: what happens when systems of power become algorithmic before society becomes literate enough to question them?

Angela Lipps, a 50-year-old grandmother in Tennessee, spent nearly six months in jail because a facial recognition system identified her as a bank fraud suspect. She had never visited the state where the crime occurred. The system moved on. She is still rebuilding her life.

Her case is in the global AI Incident Database. The pattern it represents, of wrongful identification, absent redress, and diffused accountability, is not confined to America. It is the pattern that follows every facial recognition deployment that prioritizes operational speed over human rights.

By the time we start asking, the systems will already be in place.

 

The bottom line

AI is not waiting for society to understand it.

It is being adopted by institutions that shape society. Quietly. Rapidly. With each deployment justified individually, and almost never examined as a cumulative shift in who holds power and who is subject to it.

This is no longer about innovation.

This is about power.

And we are entering that phase unprepared, one reasonable-sounding deployment at a time.





Disclaimer

Views expressed above are the author’s own.


