Into AI Safety

By: Jacob Haimes

About this listen

The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved in the conversations surrounding the rules and regulations that should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence," or "AI." For better formatted show notes, additional resources, and more, go to https://kairos.fm/intoaisafety/

© Kairos.fm
Episodes
  • Drawing Red Lines w/ Su Cizem
    Apr 6 2026

    Technology has been moving faster than policy for some time now, and the advent of AI isn't changing that. So what can we do to maintain safety despite uncertainty? Su Cizem has spent the last few years trying to answer that question. As an analyst at The Future Society, she works on global AI governance, specifically on building international consensus around AI red lines: the thresholds we collectively agree must never be crossed. In this conversation, Su walks through her path from philosophy to policy, the evolution of the global AI safety summit series, why voluntary commitments from AI labs aren't enough, and what it would actually take to make international cooperation on AI safety real.

    Chapters

    • (00:00) - Introduction
    • (03:23) - From Philosophy to Policy
    • (22:25) - What AI Governance Actually Means
    • (26:49) - The Summit Series
    • (43:01) - Drawing The Red Lines
    • (01:10:51) - Can These Companies Govern Themselves?
    • (01:24:01) - Breaking Into The Field
    • (01:27:51) - Closing Thoughts & Outro

    Critical Links
    Below are the most important links for this episode. For more, visit the episode page on Kairos.fm.
    • Su's LinkedIn
    • Global Call for AI Red Lines
    • The Future Society report - “Facing the Stakes of AI Together”: 2025 Athens Roundtable Report
    • Politico article - How the global effort to keep AI safe went off the rails
    • TechPolicy.Press article - A Timeline of the Anthropic-Pentagon Dispute
    • The Guardian article - AI got the blame for the Iran school bombing. The truth is far more worrying
    • Google and OpenAI Employee open letter - We Will Not Be Divided
    • The Register article - Altman said no to military AI abuses – then signed Pentagon deal anyway
    • SaferAI report - Evaluating AI Providers’ Frontier AI Safety Frameworks
    1 hr and 32 mins
  • Thinking Through "Digital Minds" w/ Jacy Reese Anthis
    Mar 10 2026

    Jacy Reese Anthis, founder of Sentience Institute and researcher at Stanford, began his journey working in animal welfare, but is now finishing up a PhD with research at the intersection of neuroscience, philosophy, social science, and machine learning. While this may seem like an odd jump at first, Jacy explains how his work has always centered on the idea of moral circle expansion. In this episode, we dig into what sentience actually means (or at least how we can begin to think about it), why anthropomorphization is more complicated than it sounds, and how language models might be leveraged as an effective tool for social science research.

    Jacy also shares his median AGI estimate somewhere in there, so stay tuned if you want to catch it.

    Chapters

    • (00:00) - Introduction
    • (05:41) - From Animal Welfare to Digital Minds
    • (09:00) - Founding Sentience Institute
    • (22:00) - Defining Sentience
    • (27:13) - The Anthropomorphization Problem
    • (47:51) - Why "Digital Minds" (Not "Artificial Intelligence")
    • (51:05) - LLMs as Social Science Tools
    • (01:07:03) - Jacy’s AGI Timeline & The Singularity
    • (01:09:23) - Final Thoughts & Outro

    Critical Links
    Below are the most important links for this episode. For more, visit the episode page on Kairos.fm.
    • Jacy's website
    • Wikipedia article - Jacy Reese Anthis
    • Sentience Institute website
    • CHI paper - Digital Companionship: Overlapping Uses of AI Companions and AI Assistants
    • ICML paper - LLM Social Simulations Are a Promising Research Method
    • ACL paper - The Impossibility of Fair LLMs
    • Wikipedia article - ELIZA effect
    • The Atlantic article - How a Google Employee Fell for the Eliza Effect
    1 hr and 11 mins
  • Scaling AI Safety Through Mentorship w/ Dr. Ryan Kidd
    Feb 2 2026

    What does it actually take to build a successful AI safety organization? I'm joined by Dr. Ryan Kidd, who has co-led MATS from a small pilot program to one of the field's premier talent pipelines. In this episode, he reveals the low-hanging fruit in AI safety field-building that most people are missing: the amplifier archetype.

    I pushed Ryan on some hard questions, from balancing funder priorities and research independence to building a robust selection process for both mentors and participants. Whether you're considering a career pivot into AI safety or already working in the field, this conversation offers practical advice on how to actually make an impact.

    Chapters

    • (00:00) - Intro
    • (08:16) - Building MATS Post-FTX & Summer of Love
    • (13:09) - Balancing Funder Priorities and Research Independence
    • (19:44) - The MATS Selection Process
    • (33:15) - Talent Archetypes in AI Safety
    • (50:22) - Comparative Advantage and Career Capital in AI Safety
    • (01:04:35) - Building the AI Safety Ecosystem
    • (01:15:28) - What Makes a Great AI Safety Amplifier
    • (01:21:44) - Lightning Round Questions
    • (01:30:30) - Final Thoughts & Outro

    Links
    • MATS

    Ryan's Writing
    • LessWrong post - Talent needs of technical AI safety teams
    • LessWrong post - AI safety undervalues founders
    • LessWrong comment - Comment permalink with 2025 MATS program details
    • LessWrong post - Talk: AI Safety Fieldbuilding at MATS
    • LessWrong post - MATS Mentor Selection
    • LessWrong post - Why I funded PIBBSS
    • EA Forum post - How MATS addresses mass movement building concerns

    FTX Funding of AI Safety
    • LessWrong blogpost - An Overview of the AI Safety Funding Situation
    • Fortune article - Why Sam Bankman-Fried’s FTX debacle is roiling A.I. research
    • NY Times article - FTX probes $6.5M in payments to AI safety group amid clawback crusade
    • Cointelegraph article - FTX probes $6.5M in payments to AI safety group amid clawback crusade
    • FTX Future Fund article - Future Fund June 2022 Update (archive)
    • Tracxn page - Anthropic Funding and Investors

    Training & Support Programs
    • Catalyze Impact
    • Seldon Lab
    • SPAR
    • BlueDot Impact
    • YCombinator
    • Pivotal
    • Athena
    • Astra Fellowship
    • Horizon Fellowship
    • BASE Fellowship
    • LASR Labs
    • Entrepreneur First

    Funding Organizations
    • Coefficient Giving (previously Open Philanthropy)
    • LTFF
    • Longview Philanthropy
    • Renaissance Philanthropy

    Coworking Spaces
    • LISA
    • Mox
    • Lighthaven
    • FAR Labs
    • Constellation
    • Collider
    • NET Office
    • BAISH

    Research Organizations & Startups
    • Atla AI
    • Apollo Research
    • Timaeus
    • RAND CAST
    • CHAI

    Other Sources
    • AXRP website - The AI X-risk Research Podcast
    • LessWrong blogpost - Shard Theory: An Overview
    1 hr and 32 mins