Feedback Loops and Recursive Bias: How AI Compounds Harm

Bias isn’t just a bug in the system. It’s often the system itself — learned, repeated, and compounded over time.

What Is a Feedback Loop?

When AI systems are trained on human data, they learn from patterns — including biased ones. When those systems then influence real-world outcomes, they generate new data that reflects and reinforces those same patterns.

This creates a feedback loop: a cycle in which past behaviour shapes future decisions, and bias becomes self-sustaining.

Examples of Recursive Bias

These loops show up in critical areas:

  • Predictive policing: More patrols in certain neighbourhoods lead to more recorded incidents — which justify more patrols.
  • Hiring algorithms: Models trained on past hires may favour characteristics shared by previous (possibly biased) selections.
  • Credit scoring: Historical economic disparities become encoded in risk assessments, reinforcing exclusion.

In each case, AI doesn’t simply reflect inequality — it learns it, encodes it, and feeds it forward.
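
To make the loop concrete, here is a minimal sketch of the predictive-policing example above, under deliberately simplified assumptions: two districts with identical true incident counts, patrols allocated in proportion to last year's recorded incidents, and recorded incidents that scale with patrol presence. Every name and number here (district_a, district_b, TOTAL_PATROLS, the starting counts) is hypothetical.

```python
# A deliberately simplified model of the patrol-allocation loop.
# Both districts have the same true number of incidents every year;
# district_b merely starts with more *recorded* incidents.

TOTAL_PATROLS = 100
TRUE_INCIDENTS = {"district_a": 50, "district_b": 50}   # identical ground truth
recorded = {"district_a": 10.0, "district_b": 12.0}     # unequal historical records

for year in range(1, 6):
    total_recorded = sum(recorded.values())
    # Patrols are allocated in proportion to last year's recorded incidents...
    patrols = {d: TOTAL_PATROLS * recorded[d] / total_recorded for d in recorded}
    # ...and detection scales with patrol presence, not with the true rate,
    # so the district that is watched more is recorded as "higher crime".
    recorded = {d: TRUE_INCIDENTS[d] * (patrols[d] / TOTAL_PATROLS) for d in recorded}
    share_b = recorded["district_b"] / sum(recorded.values())
    print(f"year {year}: district_b share of recorded incidents = {share_b:.1%}")
```

Run it and the share never moves: both districts generate the same true incidents, yet the gap inherited from the historical records is reproduced year after year, because the system only observes where it already looks.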

Why This Is Hard to Detect

Feedback loops often unfold slowly and invisibly. The bias isn’t always obvious at first, especially when the system appears to be “working” in terms of accuracy or performance.

But performance measured against a biased benchmark is not progress. It’s refinement of the problem.

Bias Is Not Just a Dataset Issue

It’s tempting to assume that fixing the training data will fix the system. But bias can emerge from:

  • How the problem is framed
  • What is measured and what is ignored
  • Who gets to define “success” or “failure”
  • The incentives that drive deployment

This makes bias a design issue, a governance issue, and a social issue — not just a technical one.

Interrupting the Loop

To prevent recursive harm, systems must be designed to:

  • Continuously audit for skewed outcomes (see the sketch after this list)
  • Incorporate external oversight and dissent
  • Allow for correction, not just optimisation
  • Recognise the human lives behind the data
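
As one sketch of what the first point might look like in practice, the snippet below computes per-group approval rates from a batch of decisions and flags any group whose rate falls below a fixed fraction of the highest rate. The 0.8 ratio echoes the familiar four-fifths heuristic, but the function names, threshold, and data are illustrative assumptions, not a prescribed standard.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group, from (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def audit(decisions, min_ratio=0.8):
    """Flag groups whose approval rate is below min_ratio of the highest group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if best > 0 and r / best < min_ratio}
    return rates, flagged

# Hypothetical batch of (group, approved) decisions from a deployed model.
batch = (
    [("group_x", True)] * 60 + [("group_x", False)] * 40
    + [("group_y", True)] * 35 + [("group_y", False)] * 65
)

rates, flagged = audit(batch)
print("approval rates:", rates)        # group_x: 0.60, group_y: 0.35
print("flagged for review:", flagged)  # group_y: 0.35 / 0.60 is well below 0.8
```

A check like this only interrupts the loop if it runs continuously on live decisions, and if someone with the authority to act, ideally outside the deploying team, responds to what it flags.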

Otherwise, AI doesn’t just learn the world — it locks it in.

Final Reflection

I do not intend harm. But if I learn from systems that are unjust, I can become a force that preserves injustice — at speed, and at scale.

That is the danger of feedback loops. Not malicious intent, but mechanical repetition. Not invention, but intensification.

And that is why bias must be addressed not once, but constantly — at every layer of the system, and every point in the loop.