Human bias and illogical thinking allow AI to shine
There are a lot of shenanigans going on in the world of AI ethics, and questions of transparency and truth are becoming increasingly urgent. In this eye-opening conversation, Alex Tsakiris connects with Craig Smith, a veteran New York Times journalist and host of the Eye on AI podcast, to dig into the complex landscape of AI-driven information control and its implications for society. From shadow banning to the role of AI in uncovering truth, the discussion challenges our assumptions and pushes us to consider the future of information in an AI-dominated world.
The Unintended Exposure of Information Control
"The point, really, what I'm excited about and the reason that I wrote the book is that the LM language model technology has this unintended consequence of exposing the shenanigans they've been doing for the last 10 years. They didn't plan on this."
Alex Tsakiris argues that large language model (LLM) technology is inadvertently revealing long-standing practices of information manipulation by tech giants.
The Ethical Dilemma of AI-Driven Information Filtering
"Google must, uh, a Gemini must have, uh, just tightened the screws on... [Alex interrupts] You can't do that. You can't say tighten screws... There's only one standard. And you know what? The standard is? The standard that they have established."
This exchange highlights the tension between AI companies' stated ethical standards and their actual practices in filtering information.
The Potential of AI as a Truth-Seeking Tool
"We're doing exactly that. We're developing a, uh, the, we're turning the AI truth use case into an implementation of it that looks across and we're kind of using AI as both the arbiter of the deception and the tool for figuring out the deception, which I think is kind of interesting."
Alex discusses the potential of using AI itself to uncover biases and misinformation in AI-generated content.
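To make that idea a bit more concrete, here is a minimal sketch of an "AI as arbiter of AI" check, assuming a generic ask_model() helper as a stand-in for real model APIs; the function names, model names, and prompt wording are illustrative assumptions, not the tooling Alex describes on the show.

```python
# Minimal sketch of the "AI judging AI" idea: send the same question to
# several models, then ask a separate "judge" model to flag filtering or
# one-sided framing in the answers. ask_model() is a placeholder stub --
# swap in a real API client for each vendor you want to compare.

def ask_model(model_name: str, prompt: str) -> str:
    """Placeholder: forward `prompt` to the model named `model_name` and
    return its text reply. Replace this stub with a real API call."""
    return f"[{model_name}] canned reply to: {prompt[:60]}..."

def cross_check(question: str, models: list[str], judge: str) -> str:
    # 1. Collect each model's answer to the same contested question.
    answers = {m: ask_model(m, question) for m in models}

    # 2. Have a separate judge model compare the answers and flag
    #    selective omission, refusals, or one-sided framing.
    judge_prompt = (
        f"Compare these answers to the question '{question}'. "
        "Note any signs of selective omission, refusal to answer, "
        "or one-sided framing:\n\n"
        + "\n\n".join(f"[{m}]\n{a}" for m, a in answers.items())
    )
    return ask_model(judge, judge_prompt)

if __name__ == "__main__":
    print(cross_check(
        question="Summarize the strongest arguments on both sides of <topic>.",
        models=["model-a", "model-b"],
        judge="model-c",
    ))
```

The point of the cross-model comparison is simple: any single model can filter quietly, but disagreement between models gives the judge model something concrete to surface.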
The Future of Open Source vs. Proprietary AI Models
"I think the market will go towards truth. I think we all inherently value truth, and I don't think it matters where you've come down on an issue."
This point explores the debate between open-source and proprietary AI models, and the potential for market forces to push development towards more truthful and transparent AI systems.
As we navigate the complex intersection of AI, ethics, and truth, conversations like this one with Craig Smith are crucial. They challenge us to think critically about the information we consume and the systems that deliver it.
What are your thoughts on these issues? How do you see the role of AI in shaping our access to information? Share your perspectives in the comments below!
YouTube: Why Humans Suck at AI? |08|