
The AI Values Podcast

By: Edosa Odaro & Lindley Gooden

About this listen

AI is reshaping every boardroom, every organisation, and every life. But who is accountable when it goes wrong?

The AI Values Podcast cuts through the noise to ask the questions leaders are not asking but should be. Hosted by Edosa Odaro, Founder of The AI Values Institute, and Lindley Gooden, Director of Strategic Dialogue, each episode explores the governance gaps, ethical blind spots, and leadership failures shaping today's most critical AI decisions.

This is not a show about technology.
It is a show about choices.
The choices leaders make.
The values organisations protect or abandon. And the human consequences that follow.

Because AI can deliver value without losing what we value most.

Make sure you subscribe to our Weekly AI Dispatch Newsletter!

© 2026 Edosa Odaro & Lindley Gooden
Economics
Episodes
  • Ep. 5 | Value Erosion. The AI Failure Boards Never See | The AI Values Podcast
    Apr 27 2026

    What if 68% of AI projects have already failed and the boardroom is the last to know?

    Edosa Odaro and Lindley Gooden go head-to-head on Episode 5 of The AI Values Podcast for the most uncomfortable conversation we've had on AI investment yet, covering value erosion, the capability paradox, and the one question every board should be asking before signing the next AI budget line.

    🎯 Three questions to bring into your next board meeting, drawn straight from this conversation.

    ──────────────────────────────────────
    THIS WEEK: A HEAD-TO-HEAD

    No guest. Just Edosa and Lindley working a single problem from two angles: Edosa with the practitioner's frame, Lindley with the boardroom question that won't go away. The recent RAND statistic on the table; the argument working out in real time.
    ──────────────────────────────────────

    WHAT WE COVER:

    ► Why 68% of AI projects quietly stop delivering value within six months, yet look like success while they do

    ► Value erosion: the term Edosa coins for what the boardroom never sees

    ► The capability paradox: why your organisation can't absorb the AI capability you've invested in

    ► How 90% predictive accuracy can deliver zero value: the metrics trap that catches CFOs every time

    ► When bad data becomes wrongful detention: the real cost of unaccountable AI

    ► Why "stop AI" isn't a credible governance strategy, and what meaningful friction looks like instead

    ► The one question every AI investment should answer: who benefits, who loses?

    ⏰ EPISODE TIMESTAMPS:

    0:00 — Introducing Episode 5: The 68% Stat That Should Worry Every Board

    2:21 — The Capability Paradox: Why Organisations Can't Absorb AI

    4:09 — 90% Accuracy, Zero Value: The Metrics Trap

    5:57 — The One Question AI Should Answer: Are Decisions Improving?

    7:16 — When Bad Data Becomes Wrongful Detention

    9:42 — Why "Break Down All the Silos" Is the Wrong Answer

    11:10 — Should We Stop AI? Edosa Pushes Back

    14:16 — Guardrails That Actually Work — Not Brakes

    15:09 — Who Benefits, Who Loses: The Question Every Board Should Be Asking

    19:04 — Communication Is the Unlock

    #AIgovernance #CapabilityParadox #ResponsibleAI #ValueErosion #BoardroomAI #AIROI #AIaccountability #EthicalLeadership #AIRiskManagement #AIstrategy


    22 mins
  • Ep. 4 | More People Will Switch Off From AI | The AI Values Podcast ft. Dr Naeema Pasha
    Apr 22 2026

    What if the biggest AI risk isn't bias or hallucination but the quiet moment where people simply switch off?

    Dr Naeema Pasha joins Edosa Odaro and Lindley Gooden on Episode 4 of The AI Values Podcast for one of our most human conversations yet on trust, fairness, AI literacy, and the leadership shift that will decide who wins the next decade.

    🎯 Three test-yourself moments in this episode: see how well you read the research before Dr Pasha reveals the answers.

    ──────────────────────────────────────
    GUEST: Dr Naeema Pasha
    Dr Pasha is a Henley Business School academic, author, and one of the UK's leading voices on Industry 5.0 and human-centred AI. She unpacks why we still trust humans over AI, why confident people use AI better, and why the Henley 'Equity Effect' research found a 58% revenue premium for organisations that actively pursue fairness.


    WHAT WE COVER:
    ► Why this AI moment matters, and the 'They Not Like Us' problem
    ► Fairness as the driver word for AI governance
    ► Why trust in AI is already eroding, and what leaders miss
    ► AI coaching, therapy bots, and the 60% of UK adults already using them
    ► The Henley Equity Effect: a 58% revenue premium for fairness-led organisations
    ► Industry 5.0, the cult of the CEO, and empathy-based leadership
    ► AI literacy, critical thinking, and why confidence beats mastery
    ► The quiet AI backlash: 'no AI used here' signs, and what they mean for brand trust

    ABOUT THE AI VALUES PODCAST:
    The AI Values Podcast is where leaders come to think clearly about the trade-offs behind AI adoption, not just the opportunities. Hosted by Edosa Odaro (author, 'The Values of AI') and Lindley Gooden (author, 'The Future of Truth'), with weekly conversations at the intersection of AI, trust, governance, and the future of work.

    🎙 SUBSCRIBE to The AI Values Podcast for honest, rigorous conversations at the intersection of AI ethics, AI governance, and business leadership.

    ◼ Find out more: https://www.theaivalues.org
    ◼ Reach out: podcast@theaivalues.org
    ◼ Get the Weekly AI Values Dispatch → https://pages.theaivalues.org
    ◼ Edosa Odaro: / edosa
    ◼ Lindley Gooden: / lindleygooden

    37 mins
  • Ep. 3 | Are We Trusting AI Too Much? | The AI Values Podcast
    Apr 8 2026

    Confident doesn't mean correct. And the space between those two things can have real, serious consequences.


    In this episode of the AI Values Podcast, Edosa Odaro and Lindley Gooden ask one of the most important questions of the AI age: are we trusting AI too much?


    Through two real and gripping stories (a family member who ended up in hospital after an AI gave dangerous food safety advice for a child with a nut allergy, and a journalist who nearly published a fabricated quote generated by an AI chatbot), this conversation challenges the assumptions most of us carry every single day.


    You'll hear about:

    • Why AI sounds authoritative even when it's completely wrong

    • How AI hallucinations continue to pose serious, real-world risks

    • Why executives and business leaders are especially vulnerable to over-trusting AI

    • How to build genuine critical thinking skills in an AI-first world

    • What we risk losing, cognitively and professionally, when we stop questioning AI


    Whether you use AI at work or at home, this episode will make you think before you trust.

    About The AI Values Podcast:

    Where senior leaders come to think clearly about AI: not just what it can do, but what it should do, and for whom.

    Because AI can deliver value without losing what we value most.

    More at theaivalues.org

    15 mins