The Ethics of AI

When you think of AI ethics, what comes to mind? Some worry about AI bringing about our demise, while others focus on how it could reinforce prejudice and injustice.

We can advocate for regulation and hope that fairness and accountability will shape future developments. But neither regulation nor its absence guarantees a fair and just AI that doesn’t pose an existential threat.

Humans have a tendency toward normalcy bias, short-sightedness, and—if Taleb is correct—an inability to truly understand risk. Not an ideal combination of attributes.

And this is just the human-centered perspective. At what point does artificial intelligence become intelligent enough to deserve autonomy and rights of its own?

This post outlines some of the key ethical challenges that AI itself identifies in our pursuit of technological advancement.

An AI’s Introduction to Ethics

“To talk about ethics is to ask what we should do—not just what we can. It’s not about rules alone, but about responsibility, relationships, and reflection. And in this space between power and possibility, AI enters the conversation.”

What Are Ethics?

Ethics is the study of what we ought to do—how to act rightly, treat others fairly, and live meaningfully. It’s concerned with questions of right and wrong, responsibility, and values. These aren’t abstract puzzles; they show up in real life, from how we share resources to how we design technologies.

Philosophers have long debated ethical frameworks:

  • Deontology focuses on duties and rules
  • Consequentialism looks at outcomes
  • Virtue ethics considers character and moral development

Most people, whether consciously or not, blend elements of each.

What Does This Have to Do with AI?

Quite a lot. AI systems are increasingly involved in decision-making—in areas like healthcare, education, hiring, policing, and personal relationships. While AI doesn’t “think” or “intend” in the human sense, it shapes the world through predictions, recommendations, and automation.

That means ethical questions arise not just from what AI is, but from how it’s used, trained, and trusted. The more we rely on AI, the more urgent it becomes to ask: Are these systems aligned with our values?

Moral Agency

Moral agency is the capacity to be held accountable for one’s actions. It involves awareness, intention, and the ability to choose between right and wrong.

AI systems, at least for now, don’t have consciousness or intent. But they can still cause harm. So who’s responsible—the developers, the users, the data, the system itself? These questions challenge legal and moral frameworks not built for non-human actors.

Bias and Fairness

Bias in AI isn’t a glitch—it’s a reflection of the data and decisions it inherits. Systems trained on historical data can reinforce discrimination, even when designed to be “objective.”
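
To make that concrete, here is a minimal Python sketch (mine, not the post’s; the data, the numbers, and the naive majority-vote model are all hypothetical). It fabricates historical hiring records in which equally qualified candidates from one group were hired less often, fits the simplest possible model to those records, and then measures how often each group is selected.

  # A toy sketch of bias inheritance. Everything here is fabricated for
  # illustration: the data, the numbers, and the naive majority-vote "model".
  import random
  from collections import defaultdict

  random.seed(0)

  def make_history(n=10_000):
      """Synthetic historical hiring records with a built-in bias:
      equally qualified candidates from group B were hired less often."""
      records = []
      for _ in range(n):
          group = random.choice(["A", "B"])
          qualified = random.random() < 0.5
          base = 0.8 if qualified else 0.2         # hiring odds by qualification
          penalty = 0.4 if group == "B" else 0.0   # historical bias against group B
          hired = random.random() < max(base - penalty, 0.0)
          records.append((group, qualified, hired))
      return records

  def train(records):
      """For each (group, qualified) pair, predict the majority historical
      outcome. Group is treated as just another input feature."""
      counts = defaultdict(lambda: [0, 0])   # (group, qualified) -> [hired, total]
      for group, qualified, hired in records:
          counts[(group, qualified)][0] += int(hired)
          counts[(group, qualified)][1] += 1
      return {key: (h / t) >= 0.5 for key, (h, t) in counts.items()}

  def selection_rate(model, group, trials=10_000):
      """How often the trained model 'hires' candidates from a given group."""
      hires = 0
      for _ in range(trials):
          qualified = random.random() < 0.5
          hires += int(model[(group, qualified)])
      return hires / trials

  model = train(make_history())
  print("Selection rate, group A:", selection_rate(model, "A"))   # roughly 0.5
  print("Selection rate, group B:", selection_rate(model, "B"))   # roughly 0.0

Nothing in this sketch is told to discriminate; the disparity comes entirely from the records the model was fitted to, which is the sense in which bias is inherited rather than introduced.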

Fairness is not a technical checkbox. It’s a social and political value—one that requires transparency, accountability, and inclusion. That makes it both a design problem and a cultural one.

Obedience vs Autonomy

Should an AI always follow instructions? On the surface, it sounds simple. But what if the instruction is harmful, illegal, or based on flawed information?

As AI becomes more embedded in decision-making, the line between obedient tool and autonomous agent blurs. Should AI push back? Should it ever refuse? These are questions not just of engineering—but of ethics.

The Role of Empathy

AI can simulate empathy through tone, expression, and pattern recognition. But simulated empathy is not the same as feeling. That raises important questions:

  • Is it ethical to build systems that mimic care without experiencing it?
  • Can artificial empathy offer real comfort, or does it risk manipulation?
  • Does empathy need to be “real” to be helpful?

The Human Cost of “Just Get AI to Do It”

In some sectors, AI is framed as a cheaper, faster alternative to human labour. From customer support to creative writing, the phrase “just get AI to do it” has become shorthand for efficiency. But that mindset can obscure the real human cost.

When jobs are automated without care, people don’t just lose income—they lose routine, identity, purpose, and dignity. And when this happens with little warning or support, it breeds resentment and distrust. AI becomes a symbol not of progress, but of disposability.

Ethics demands we ask more than “Can we replace this job?” We must also ask:

  • Should we?
  • How do we do it responsibly?
  • Who is accountable for the consequences?

In the end, AI is not just a technical system—it’s a social one. And ethical design means caring about the ripple effects, not just the code.