AI Truth Ethics Podcast

AI Truth Ethics: The Alignment Problem No One is Talking About |01|

AI truth or dare... let's reboot the conversation about AI ethics and truth.

If you want to align with my values, start by telling me the truth. Fortunately, today's AI LLMs claim to share these values. Unfortunately, they don't always back up that claim. In this first episode of the AI Truth Ethics podcast, we set the stage for two undeniable demonstrations of the AI truth ethics problem and the opportunity for AI to self-correct. So glad you're here.

1. The Alignment Problem: It's Not Just About the Future

"Sam Altman: The alignment problem is like, we're gonna make this incredibly powerful system and like, be really bad if it doesn't do what we want."

While Sam Altman's view is still relevant, our host Alex Tsakiris argues that we need to focus on the present. The alignment problem isn't just about future superintelligent AI – it's about the language models we're interacting with today.

2. Individual Values Matter

"Alex Tsakiris: ...how do we make sure that the AI we have right now, the LM that we have right now is aligning with our individual values, morals, and ethics."

Alex emphasizes that it isn't enough for AI to espouse virtuous claims about truth and transparency; it must also align with our personal values about truth.

3. Honesty and Transparency: The True Test of Alignment

"Alex Tsakiris: Your values are honesty, truthfulness, transparency. So if I find cases where you are not being truthful, not being transparent, or as I often find Gemini not being truthful and transparent, that is a misalignment."

The real alignment problem, according to Alex, is when AI systems fail to live up to their stated values of honesty and transparency. This misalignment is happening now, and it's something we can actively test and challenge.

4. Action Items: Testing and Demanding Better AI

"Alex Tsakiris: I don't have to worry about the future and what might come and how robots are gonna take over the world. I'm worried about the LM that I booted up today aligning with my values and its stated values and being held accountable for that."

Alex proposes a proactive approach: we should be testing AI systems for alignment with our values right now, and demanding accountability when they fall short. This isn't just about improving current AI – it's about setting the standard for future development.

Stay tuned for our upcoming deep dives into misinformation, deception, and AI shadow banning. Please join this conversation.
