
The Psychology of Health

By: Milan Toma

Summary

Each episode is a clear, accessible synthesis of research studies on timely and controversial health topics: no hot takes, no hype, just what the science actually says. Hosted by Milan Toma, Ph.D., this podcast cuts through the noise. Instead of speculation and hearsay, you’ll get evidence-based insights on everything from sleep and weight gain to the anatomy of misinformation and the psychology behind public health debates. If you’re frustrated by the flood of opinions online and want to know what the research really shows, this is the show for you.

Categories: Hygiene & Healthy Living, Physical Illness & Disease
Episodes
  • The Limits of Chatbots in Clinical Decision‑Making
    May 7 2026

    Chatbots and large language models are becoming increasingly common in everyday life, but their growing presence in healthcare has raised an important question: Should probabilistic AI systems be used to help make medical decisions? This episode takes a clear, grounded look at why the answer is far more complicated—and potentially far more dangerous—than many people realize.

    Modern chatbots work by predicting the most statistically likely response based on patterns found in massive amounts of text. That makes them great for conversation, brainstorming, and general information, but not for something as complex and high‑stakes as medical diagnosis. In clinical settings, symptoms like persistent cough and chest pain can point to a wide range of possible conditions. A probabilistic model might default to the most common explanation, but medicine doesn’t work on majority statistics—it works on understanding nuance, context, risk, and rare but critical exceptions.

    This episode explores how relying on “most likely” answers can lead to missed diagnoses, delayed treatments, and dangerous oversights. You’ll hear how serious conditions such as pulmonary embolism or early lung cancer can present with the same symptoms as common respiratory infections, making a simplistic, probability‑driven guess both insufficient and unsafe. We also dive into the accuracy paradox—how an AI system can appear highly accurate while still being clinically untrustworthy, simply because it always chooses the dominant category.
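The accuracy paradox described above can be shown in a few lines of Python. This is a toy sketch, not anything from the episode itself, and the patient counts are hypothetical, chosen only to make the class imbalance obvious:

```python
# Toy illustration of the "accuracy paradox": a model that always
# predicts the most common diagnosis scores high on accuracy while
# missing every rare but critical case. All numbers are hypothetical.

# 1,000 patients presenting with cough and chest pain:
# 950 common respiratory infections, 50 pulmonary embolisms.
cases = ["infection"] * 950 + ["embolism"] * 50

# A majority-class "model": it always guesses the statistically
# most likely explanation, ignoring context and rare exceptions.
predictions = ["infection"] * len(cases)

# Overall accuracy looks impressive.
accuracy = sum(p == c for p, c in zip(predictions, cases)) / len(cases)

# But the clinically decisive question: how many embolisms did it catch?
embolisms_caught = sum(
    p == "embolism" for p, c in zip(predictions, cases) if c == "embolism"
)

print(f"accuracy: {accuracy:.0%}")                      # 95% -- looks great
print(f"embolisms detected: {embolisms_caught} of 50")  # 0 -- clinically useless
```

A 95% accurate system that catches zero of the dangerous cases is exactly the kind of "highly accurate yet clinically untrustworthy" behavior the episode warns about.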

    Beyond the risks, this episode highlights what real medical reasoning involves: integrating visual cues, patient history, audio signals, imaging studies, laboratory data, physiological waveforms, and much more. Human clinicians synthesize all these inputs at once, something a probabilistic chatbot was never designed to do. By understanding this difference, listeners will gain a deeper appreciation for the limitations of current AI tools and why responsible, deterministic models are essential in healthcare.

    Whether you’re a clinician, medical student, AI researcher, or simply curious about how technology intersects with patient care, this episode offers a clear and accessible exploration of why chatbots, despite their impressive capabilities, should not be mistaken for diagnostic tools.

    8 mins
  • Viral AI-Beats-Doctors Study
    May 4 2026

    Another week, another headline declaring AI has officially surpassed physicians. This time, it's a study published in Science on April 30, 2026, claiming that OpenAI's o1 model "outperformed physician baselines" across multiple diagnostic reasoning tasks. The research comes from Harvard, Stanford, and Beth Israel Deaconess Medical Center. It's rigorous. It's peer-reviewed. And it's already being cited as proof that doctors are obsolete.

    But here's what those viral headlines won't tell you: the study tested AI on text alone.

    No images. No audio. No physical exams. No watching a patient walk through the door in distress before they utter a single word. No recognizing the subtle facial asymmetry that suggests stroke. No hearing the quality of a cough. No feeling a mass during examination. No interpreting the fear in a patient's eyes.

    In other words—not real medicine.

    In this episode, we unpack why this study, despite its methodological rigor, may be doing more harm than good. We explore the "headline-to-reality pipeline"—how clickbait economics strips away the authors' own caveats until all that remains is a misleading soundbite. We discuss the real-world consequences: misinformed patients with unrealistic expectations, demoralized clinicians, misallocated healthcare resources, and a generation of medical trainees learning exactly the wrong lessons about AI.

    Perhaps most critically, we address the "chatbot conflation problem." When the public hears "AI in medicine," they picture ChatGPT. But as of late 2025, over 850 AI-enabled medical devices have received FDA clearance—more than 70% related to medical imaging. These task-specific systems detecting pulmonary nodules, identifying intracranial hemorrhages, and flagging diabetic retinopathy are fundamentally different from large language models answering text prompts. Different architecture. Different validation. Different regulatory pathways. Different levels of evidence. Lumping them together under "AI" does a disservice to both.

    We also tackle a question the headlines never ask: What would a fair evaluation of AI in medicine actually look like? Hint—it would require multimodal inputs, messy real-world data, and a fundamentally different benchmark: not "Can AI beat doctors?" but "Do doctors WITH AI outperform doctors WITHOUT AI?"

    Finally, we make the case for why medical education must lead this conversation. If we don't teach our students—and frankly, the broader public—the critical distinctions between AI tools, what happens? Clinicians lose trust not just in overhyped chatbots, but in all medical AI, including the FDA-cleared tools actually saving lives. That erosion of trust could take a generation to repair.

    The technical findings of this study may be sound. But science doesn't exist in a vacuum. It exists in a media ecosystem that rewards sensationalism, in a healthcare system desperate for solutions, and in a culture increasingly willing to believe AI can do anything. The responsible approach is to be louder about limitations than findings.

    Because right now, we're celebrating an AI that aced a written exam—while the actual test, the messy, multimodal, deeply human reality of clinical medicine, remains completely ungraded.

    What You'll Learn:
    • Why text-based AI evaluations fundamentally misrepresent clinical medicine
    • The critical distinction between task-specific medical AI and general chatbots
    • How clickbait economics transforms nuanced research into dangerous misinformation
    • What fair AI evaluation in healthcare would actually require
    • Why medical educators must lead the conversation on AI literacy

    Resources Mentioned:
    • Brodeur PG, et al. "Performance of a large language model on the reasoning tasks of a physician." Science. 2026;392(6797):524-527
    • FDA AI-Enabled Medical Device Database
    • Clinical AI Course (NYIT College of Osteopathic Medicine)

    8 mins
  • Medical Education Must Teach AI Differently
    Apr 14 2026

    Artificial intelligence is rapidly moving into classrooms, clinics, and daily healthcare decision-making, but much of the public conversation is built on a dangerous misunderstanding. Too often, people treat artificial intelligence as if it simply means chatbots. In this episode, Dr. Milan Toma explains why that confusion matters and why healthcare professionals must learn to distinguish between conversational tools and task-specific medical systems.

    This episode explores the long history of artificial intelligence in medicine, why chatbots are optimized for fluent language rather than true clinical understanding, and why strong performance on text-based clinical vignettes should not be mistaken for real-world diagnostic ability. Dr. Toma also examines the risks of artificial intelligence sycophancy, the danger of overfitting, the limits of accuracy as a metric, and how data leakage or hidden shortcuts can make weak systems look impressive during development.

    Most importantly, this is a conversation about education and patient safety. Healthcare professionals need more than basic exposure to artificial intelligence tools. They need to understand how different systems work, how they fail, how to evaluate claims critically, and why clinicians must work closely with developers before these tools are trusted in practice.

    The goal is not simply to teach people how to use artificial intelligence. It is to teach them how to question it, evaluate it, and apply it responsibly. The future of healthcare will include artificial intelligence, but safe healthcare depends on how well we teach people to understand it.

    37 mins