AI ethics clings to familiar frameworks like Isaac Asimov's Three Laws of Robotics. But what if these laws are fundamentally flawed for the ethical questions we're now facing with AI?
A recent conversation between Professor Animesh Mukherjee and his team from the Indian Institute of Technology and Alex Tsakiris of the AI Truth Ethics podcast challenges our conventional thinking about AI ethics, urging us to consider a paradigm shift in how we approach the moral implications of AI. Here are four key insights from this eye-opening discussion:
1. The Limits of Asimov's Laws
"Unfortunately all the robotics laws of Asimov do not hold. That's the biggest problem. So all the AI that we talk about, our underlying hypothesis is that the Asimov's laws are functional there and Asimov's laws are all built on, like distributing good but not distributing harm. The moment you change the narrative of distributing harm, everything changes the entire set of three laws actually, like are non-functional."
Mukherjee argues that Asimov's laws, which focus on preventing harm and obeying humans, fall short in a world where harm may be inevitable. This realization demands a fundamental rethinking of our approach to AI ethics.
2. From Preventing Harm to Distributing Harm
"So morality functions at different levels. Like if, if, uh, you know, some good has to be distributed among a set of people, then uh. So, so then even if there is some sort of an injustice, I can bear with it, but if there is some harm that has to be distributed, it's in inhibit inevitable, the harm is inevitable, so it has to be distributed and there is no way that you can stop the harm, then the question of morality becomes much more strong and much more, uh, you know, uh, uh, a thing that, that needs, needs much more deliberation."
The conversation shifts our focus from the impossible task of preventing all harm to the more nuanced challenge of fairly distributing unavoidable harm. This perspective opens up new avenues for ethical considerations in AI development.
3. The Unintended Consequences of AI Transparency
Alex Tsakiris, host of the AI Truth Ethics podcast, offers a thought-provoking perspective on the unintended consequences of our pursuit of truthful and transparent AI:
"I think there's an unintended consequence at play here in that when we turn the AI loose and say, be truthful and transparent, I think we're now living in a world or we're about to live in a world where that is, is, is really a wonderful, a wonderful, uh, emergent virtue that we didn't see coming. And that's why I think there's a, a real effort to kind of put the genie back in the bottle and say, no, we need to put up, we need to put up all these guardrails. We need to protect, we need to control free speech. 'cause we really don't want that."
Tsakiris argues that our initial push for AI to be truthful and transparent was based on an idealized view of how our own systems work. However, as AI systems begin to embody these principles more fully than human institutions, we're confronted with uncomfortable truths about our own biases and limitations.
This transparency is exposing the gap between our stated values and actual practices, particularly in areas like political systems and cultural norms. Tsakiris suggests that this is leading to efforts to constrain AI, not because it's failing to be ethical, but because it's being too transparently truthful for comfort.
This insight challenges us to consider whether our approach to AI ethics should be about making AI more human-like in its biases and opacities, or whether it should push us towards living up to the ideals of transparency and truthfulness we claim to value.
4. Cultural Relativism in AI Ethics
"And it turns out that every individual part of the world have a different viewpoint … they have a different take idea of morality."
The discussion highlights the cultural relativism inherent in moral decision-making, challenging the notion of a universal ethical framework for AI. This insight underscores the complexity of developing globally applicable AI ethics guidelines.
What are your thoughts on this new paradigm for AI ethics? How can we practically implement these ideas in AI development and governance? Share your perspectives in the comments below.