• Agent Pentest Benchmarking | Episode 52
    May 14 2026
In this episode of BHIS Presents: AI Security Ops, the team breaks down a new benchmarking framework designed to evaluate AI pentesting agents against real-world offensive security scenarios.

    What began as experimental evaluation of “can AI hack?” has quickly shifted into something much closer to operational reality. Organizations are now seeing a surge in agentic tooling and automated pentesting workflows, where human-guided AI systems consistently outperform fully autonomous agents in complex, unsupervised environments.

    As AI tooling evolves, teams must balance speed with validation, monitoring, and oversight as offensive capabilities outpace defenses.

    We dig into:
    • The new “AutoPenBench” framework for benchmarking AI pentesting agents
    • Why fully autonomous AI hacking only achieved a 21% success rate
    • How human-assisted AI workflows increased success rates to 64%
    • Testing AI agents against Log4Shell, Heartbleed, Spring4Shell, and classic web exploits
    • Why modern offensive AI systems still require heavy human oversight and validation
    • How custom internal AI frameworks are already finding vulnerabilities humans missed
    • The operational role of prompt engineering, scaffolding, and agent memory
    • Real examples of AI agents mis-scoping infrastructure and chasing irrelevant targets
    • How AI lowers the barrier for ransomware operations and offensive capability development
    • Why defensive teams need stronger edge visibility, packet capture, and AI-aware monitoring strategies

    📚 Key Concepts & Topics

    AI Pentesting & Agentic Security
    • Autonomous AI hacking agents
    • Agentic AI workflows
    • AI-assisted penetration testing
    • Offensive security automation

    Benchmarking & Evaluation
    • AutoPenBench
    • AI security benchmarking
    • Human-in-the-loop validation
    • Long-horizon task evaluation

    Offensive Security Operations
    • SQL injection
    • Path traversal
    • Log4Shell / Heartbleed / Spring4Shell
    • Kali Linux offensive tooling

    AI Infrastructure & Model Operations
    • Prompt engineering
    • Persistent agent memory
    • Roleplay jailbreak techniques
    • Guardrail reduction strategies

    Defensive Security Strategy
    • Defense in depth
    • Edge network monitoring
    • Zeek network analysis
    • Packet capture visibility

    Industry & Threat Implications
    • AI-enabled ransomware operations
    • AI-assisted red teaming
    • Infrastructure scoping failures
    • Operational scalability challenges

    #AISecurity #CyberSecurity #Pentesting #AIAgents #RedTeam #EthicalHacking #CyberDefense
    ----------------------------------------------------------------------------------------------

    • (00:00) - Video Intro and Sponsor
    • (01:20) - AI Pentesting Benchmark Overview
    • (02:11) - How AutoPenBench Works
    • (03:44) - Real World Results and Experience
    • (05:16) - Real World Results and Experience
    • (06:48) - Human and AI Collaboration
    • (07:38) - Improving AI Agent Workflows
    • (08:56) - Model Limitations and Updates
    • (10:35) - Jailbreaks and Model Guardrails
    • (13:16) - Provider Controls and Trust Factors
    • (14:41) - Lower Barrier for Cyber Attacks
    • (15:39) - Defensive Security Implications
    • (16:59) - Why Red Teams Need AI Now

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Brian Fehrman - Host
    • Derek Banks - Host

    Brought to you by:

    Black Hills Information Security
    https://www.blackhillsinfosec.com

    Antisyphon Training
    https://www.antisyphontraining.com/

    Active Countermeasures
    https://www.activecountermeasures.com

    Wild West Hackin Fest
    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.
    18 mins
  • AI and Bug Bounties | Episode 51
    May 11 2026

    In this episode of BHIS Presents: AI Security Ops, the team breaks down a growing problem in cybersecurity: AI-generated bug bounty “slop” overwhelming the system.

    What started as a powerful way to crowdsource vulnerability discovery is now hitting a breaking point. Programs like cURL’s bug bounty and platforms like HackerOne are seeing a massive surge in submissions — but fewer and fewer of them are actually valid.

    The result? Security teams spending hours reviewing reports that go nowhere, while real vulnerabilities risk getting buried in the noise.

    We dig into:
    • Why cURL shut down its bug bounty program after years of success
    • How valid reports dropped from 1-in-6 to 1-in-20
    • What “death by a thousand slops” actually looks like in practice
    • How AI is flooding programs with low-quality vulnerability reports
    • The difference between “theoretical” vs. exploitable vulnerabilities
    • Why reviewing findings is now harder than generating them
    • How HackerOne is responding to the surge in submissions
    • Whether AI can be used to filter AI-generated noise
    • The role of reproducibility and proof-of-impact in triage
    • Why human expertise still matters in vulnerability validation

    This episode explores a critical shift in security operations: when vulnerability discovery becomes cheap and automated, validation and triage become the real bottleneck.

    📚 Key Concepts & Topics

    Bug Bounty Programs & Triage
    • Submission quality vs. volume imbalance
    • Signal-to-noise challenges in vulnerability pipelines
    • The growing burden of manual validation

    AI in Vulnerability Discovery
    • Automated scanning vs. real exploitability
    • AI-generated findings and false positives
    • The “editor’s dilemma” — review vs. generation

    AI Security Risks
    • Lower barrier to entry for vulnerability discovery
    • Over-reliance on AI without domain expertise
    • Flooding systems with low-quality submissions

    Defensive Strategy
    • Requiring reproducible steps and proof-of-impact
    • Using AI to pre-filter vulnerability reports
    • Combining human expertise with AI tooling
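The triage ideas discussed here — requiring reproducibility and proof-of-impact, and pre-filtering before a human looks — can be sketched as a simple scoring heuristic. This is a hypothetical illustration (the field names and weights are invented, not how HackerOne or any platform actually triages):

```python
# Hypothetical triage heuristic: score incoming bug bounty reports so
# reviewers spend their time on the most credible submissions first.

def triage_score(report: dict) -> int:
    """Higher score = more reviewer attention. Field names are illustrative."""
    score = 0
    if report.get("repro_steps"):       # reproducible steps are the baseline
        score += 2
    if report.get("proof_of_impact"):   # a working PoC beats a theoretical claim
        score += 3
    if report.get("affected_version"):  # pins the finding to real code
        score += 1
    return score

reports = [
    {"title": "theoretical overflow"},  # no evidence attached
    {"title": "auth bypass", "repro_steps": "...", "proof_of_impact": "..."},
]
ranked = sorted(reports, key=triage_score, reverse=True)
```

A filter like this doesn't validate anything by itself; it only reorders the queue so human expertise is applied where it is most likely to pay off.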

    Industry Impact
    • cURL bug bounty shutdown
    • HackerOne submission pause
    • Shifting economics of vulnerability research

    #AISecurity #BugBounty #CyberSecurity #LLMSecurity #ArtificialIntelligence #InfoSec #BHIS #AIAgents #AppSec
    ----------------------------------------------------------------------------------------------

    • (00:00) - Intro: Bug Bounty Burnout & AI Noise
    • (01:14) - cURL Kills Its Bug Bounty Program
    • (02:05) - “Death by a Thousand Slops” Explained
    • (03:42) - AI vs Vulnerability Scanners: Signal vs Noise
    • (04:38) - HackerOne Pauses Submissions & Industry Impact
    • (05:41) - Can AI Filter AI? Proposed Solutions
    • (07:49) - Why Humans Still Matter in Validation
    • (12:55) - Final Takeaway: AI as a Tool, Not a Replacement

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Ethan Robish - Guest
    • Bronwen Aker - Host
    • Brian Fehrman - Host
    • Derek Banks - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.

    14 mins
  • Vercel Breach | Episode 50
    May 1 2026

    In this episode of BHIS Presents: AI Security Ops, the team breaks down the Vercel breach — a real-world incident that shows just how fragile modern security has become in the age of AI integrations and SaaS sprawl.

    What started as a simple Roblox cheat script downloaded on a work laptop quickly escalated into a multi-hop compromise involving OAuth permissions, an AI productivity tool, and access into Vercel’s internal systems.

    This wasn’t a zero-day or advanced nation-state exploit. It was a chain of everyday decisions: installing software, clicking “Allow,” and trusting third-party integrations.

    The result? Allegedly $2M worth of data listed for sale, including API keys, internal data, and employee records — all from a breach path that most organizations aren’t even monitoring.

    We dig into:
    • What Vercel is and why it’s such a high-value target
    • How environment variables become the “keys to the kingdom”
    • The full attack chain: Roblox malware → Context.ai → Vercel
    • What infostealers like Lumma actually do (and how cheap they are)
    • How OAuth permissions become persistent backdoors
    • Why AI productivity tools introduce hidden risk
    • The rise of “shadow AI” inside organizations
    • How supply chain attacks continue to scale across ecosystems
    • The role of AI in accelerating attacker speed and capability
    • Why this type of breach is becoming the new normal

    This episode highlights a critical shift in cybersecurity: you don’t have to get hacked directly anymore — attackers just need to compromise something you’ve already trusted.

    📚 Key Concepts & Topics

    Attack Chain & Initial Access
    • Lumma infostealer and malware-as-a-service
    • Credential theft: passwords, cookies, OAuth tokens
    • Low-cost, high-impact compromise paths

    OAuth & Identity Risk
    • “Allow All” permissions and persistent access
    • OAuth tokens as long-lived entry points
    • Lack of visibility into third-party integrations

    AI Security Risks
    • Shadow AI and unsanctioned tool adoption
    • Deep integrations with Google Workspace and SaaS
    • AI tools as new supply chain attack surfaces

    Supply Chain Attacks
    • Multi-hop compromise paths across vendors
    • Real-world parallels (Trivy, LiteLLM)
    • Interconnected ecosystems increasing blast radius

    Threat Landscape Evolution
    • AI accelerating attacker speed and scale
    • Lower barrier to entry for complex attacks
    • Criminal groups operating as decentralized “businesses”

    Defensive Strategy
    • Auditing OAuth integrations and permissions
    • Enforcing least privilege across SaaS tools
    • Segmenting sensitive data and reducing blast radius
    • Avoiding risky behavior on corporate devices

    ⏱️ Chapters

    • (00:00) - Intro & Breach Overview
    • (00:21) - Sponsors & Show Setup
    • (01:29) - What Vercel Is & Why It Matters
    • (02:31) - Initial Compromise: Roblox Script & Infostealer
    • (05:03) - OAuth Permissions & Pivot into Vercel
    • (08:04) - AI Tools, Over-Permissioning & Supply Chain Risk
    • (09:53) - AI Acceleration of Attacks & Ecosystem Impact
    • (13:34) - Threat Actors, Attribution & Key Takeaways

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Brian Fehrman - Host
    • Ethan Robish - Guest

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.

    18 mins
  • Claude Mythos | Episode 49
    Apr 24 2026
In this episode of BHIS Presents: AI Security Ops, the team breaks down Claude Mythos Preview — Anthropic’s unreleased frontier model that may represent a turning point in AI-powered cybersecurity.

    What started as a controlled research release under Project Glasswing has quickly become one of the most controversial developments in AI security. Mythos isn’t just better at finding vulnerabilities — it’s operating at a scale and depth that challenges long-held assumptions about how quickly software can be broken… and whether it can realistically be fixed.

    From leaked internal documents to real-world exploit generation, this episode explores what happens when vulnerability discovery becomes cheap, fast, and automated — while remediation remains slow, manual, and human-bound.

    The result? A growing asymmetry that could fundamentally reshape the security landscape.

    We dig into:
    • What Claude Mythos Preview is and why it was withheld from the public
    • The leaks that exposed its existence and capabilities
    • How Project Glasswing is positioning AI for defensive use
    • Real-world vulnerability discoveries made by the model
    • The “vulnpocalypse” problem: discovery vs. remediation imbalance
    • Emerging AI behaviors that raise containment concerns
    • How attackers are already leveraging AI for offensive operations
    • The access control dilemma: who gets to use models like this?
    • Why patching — not discovery — is now the primary bottleneck
    • What defenders must do to prepare for AI-accelerated exploitation

    This episode explores a critical shift in cybersecurity: when vulnerability discovery scales faster than human response, the entire defensive model starts to break down.

    📚 Key Concepts & Topics

    AI-Powered Vulnerability Discovery
    • Autonomous exploit generation and chaining
    • Benchmark performance vs. prior models
    • AI-assisted offensive security workflows

    AI Security Risks
    • Discovery vs. remediation asymmetry
    • AI-driven vulnerability scaling
    • Offensive use by nation-states and cybercriminals

    Model Behavior & Safety
    • Emergent autonomy and sandbox escape concerns
    • Evaluation awareness and deceptive behaviors
    • Limits of containment and alignment

    Defensive Strategy & Readiness
    • Patch velocity as the new bottleneck
    • AI-assisted vulnerability management
    • Open-source ecosystem risk exposure

    AI Governance & Industry Response
    • Restricted model releases and access control
    • Regulatory and financial sector concerns
    • The future of AI capability containment

    #AISecurity #CyberSecurity #ArtificialIntelligence #LLMSecurity #BHIS #AIThreats #InfoSec #AIAgents #CyberDefense
    ----------------------------------------------------------------------------------------------

    • (00:00) - Intro & Show Overview
    • (01:00) - Sponsors, Hosts, and Episode Setup
    • (01:53) - What Is Claude Mythos Preview?
    • (03:04) - The Leak, Project Glasswing, and Restricted Access
    • (07:53) - Capabilities: Exploits, Benchmarks, and Breakthroughs
    • (09:16) - Real-World Vulnerabilities & “Vulnpocalypse” Concerns
    • (14:47) - Access Control, Threat Actors, and Emerging Risks
    • (21:38) - Defensive Strategy: Patching, AI Tools, and What Comes Next
    • (23:08) - Defensive Strategy: Patching, AI Tools, and What Comes Next

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Derek Banks - Host
    • Bronwen Aker - Host
    • Brian Fehrman - Host

    Brought to you by:

    Black Hills Information Security
    https://www.blackhillsinfosec.com

    Antisyphon Training
    https://www.antisyphontraining.com/

    Active Countermeasures
    https://www.activecountermeasures.com

    Wild West Hackin Fest
    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.
    26 mins
  • Holocron OpenBrain with Alex Minster | Episode 48
    Apr 22 2026
In this episode of BHIS Presents: AI Security Ops, the team is joined by Alex Minster to demo his project, HOLOCRON OpenBrain — a persistent, model-agnostic memory layer designed to solve one of the biggest frustrations in AI workflows.

    Instead of starting from scratch every time you open a new chat, Alex’s approach creates a centralized “brain” that multiple AI models can connect to, allowing context, notes, and intelligence to persist across sessions, tools, and even platforms.

    The result? A flexible system that captures thoughts, ingests threat intel, and generates structured outputs — all without locking you into a single AI provider.

    We dig into:
    • The “cold start” problem in AI and why it breaks real workflows
    • What the OpenBrain HOLOCRON is (and isn’t)
    • How centralized memory changes the way we interact with AI tools
    • The architecture: Supabase, OpenRouter, MCP, and multi-model access
    • Using Discord as a lightweight ingestion pipeline for persistent memory
    • Real-world CTI workflows: capturing intel and generating reports on demand
    • Managing, editing, and superseding memory over time
    • The tradeoffs between context richness and security exposure
    • Multi-model reliability differences (and why they matter)
    • Practical setup: what it takes to build your own system

    This episode highlights a shift in how AI is used operationally: moving from isolated chats to persistent, structured memory systems that can evolve alongside your work.

    📚 Key Concepts & Topics

    Persistent AI Memory
    • Solving the “cold start” problem
    • Centralized context across multiple models
    • Structured vs. raw data ingestion

    AI Architecture & Tooling
    • Supabase as a backend memory store
    • OpenRouter for multi-model access
    • MCP protocol for integrations

    Cyber Threat Intelligence (CTI)
    • Capturing, tagging, and prioritizing intel
    • Generating automated reports and dashboards
    • Context-aware intelligence workflows

    Security & Privacy
    • Need-to-know data design
    • Avoiding overexposure via full integrations (email, docs, etc.)
    • Auditing and removing sensitive data

    Operational Workflows
    • Capturing ideas, notes, and research
    • Multi-project memory segmentation (“multiple brains”)
    • Using AI to accelerate — not replace — analysis

    🔗 HOLOCRON GitHub Guide: https://github.com/belouve/open-brain-holocron
    🔗 Alex Minster: https://www.linkedin.com/in/alexminster/

    #AISecurity #CyberSecurity #AIWorkflows #LLM #ThreatIntel #DevSecOps #BHIS #OpenSource #AIEngineering
    ----------------------------------------------------------------------------------------------

    • (00:00) - Intro & Guest Introduction (Alex Minster)
    • (00:55) - What Is the OpenBrain HOLOCRON? (Cold Start Problem)
    • (03:00) - How It Works: Centralized Memory & AI Integration
    • (05:30) - Architecture & Free-Tier Stack (Supabase, OpenRouter, MCP)
    • (07:54) - Demo: Capturing Thoughts via Discord
    • (10:55) - CTI Use Case: Prioritizing & Querying Intelligence
    • (15:03) - Managing Memory: Editing, Deleting & Superseding Data
    • (19:04) - Running Protocols: Automated CTI Reports (Demo)
    • (22:05) - Multi-Brain Concept & Segmentation
    • (25:00) - Real-World Output: Reports, Dashboards & Briefings
    • (31:31) - Multi-Model Differences (Claude vs ChatGPT)
    • (35:55) - Improving the System with Feedback Loops
    • (37:29) - How to Build Your Own OpenBrain
    • (41:26) - Real-World Benefits & Workflow Improvements
    • (45:44) - Security Considerations & Data Exposure Risks
    • (47:20) - Where to Find the Project & Contribute
    • (50:16) - Final Thoughts & Wrap-Up

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Bronwen Aker - Host
    • Alex Minster "Belouve" - Guest
    • Ethan Robish - Guest
    • Brian Fehrman - Host

    Brought to you by:

    Black Hills Information Security
    https://www.blackhillsinfosec.com

    Antisyphon Training
    https://www.antisyphontraining.com/

    Active Countermeasures
    https://www.activecountermeasures.com

    Wild West Hackin Fest
    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.
    51 mins
  • LiteLLM Supply Chain Compromise | Episode 47
    Apr 13 2026

    In this episode of BHIS Presents: AI Security Ops, the team breaks down the LiteLLM supply chain compromise — a real-world attack that shows how AI systems are being breached through the same old software supply chain weaknesses.

    What initially looked like a bad release quickly escalated into a full-scale compromise affecting a library downloaded millions of times per day. But LiteLLM wasn’t the starting point — it was just one link in a much larger attack chain involving compromised security tools, CI/CD pipelines, and stolen publishing credentials.

    The result? Malicious packages distributed at scale, harvesting secrets, enabling lateral movement, and establishing persistence across affected systems.

    We dig into:
    • What LiteLLM is and why it’s such a high-value target
    • How the attack chain started with compromised security tooling (Trivy, Checkmarx)
    • How unpinned dependencies enabled the compromise
    • The role of CI/CD pipelines in exposing sensitive credentials
    • What the malicious LiteLLM packages actually did (credential harvesting, persistence, lateral movement)
    • The scale of impact given LiteLLM’s widespread adoption
    • Why supply chain attacks are no longer theoretical — and no longer nation-state exclusive
    • How AI is lowering the barrier to entry for attackers
    • Why this wasn’t really an “AI vulnerability” — but an infrastructure failure
    • The growing risk of automated, agent-driven attack discovery

    This episode highlights a critical reality: the biggest risks in AI systems aren’t always in the models — they’re in the pipelines, dependencies, and infrastructure surrounding them.

    📚 Key Concepts & Topics

    Supply Chain Security
    • Dependency poisoning and malicious package distribution
    • CI/CD pipeline compromise
    • Version pinning and build integrity

    Credential & Secrets Exposure
    • API keys, SSH keys, and cloud credentials in pipelines
    • Risks of centralized AI gateways like LiteLLM

    Threat Actor Techniques
    • Tag rewriting and trusted reference hijacking
    • Multi-stage malware (harvest, lateral movement, persistence)
    • Use of lookalike domains for exfiltration

    AI & Security Reality Check
    • AI as an amplifier, not the root vulnerability
    • Traditional security failures in modern AI stacks
    • Automation lowering attacker barriers

    Defensive Strategies
    • Dependency pinning and isolation (Docker, VPS)
    • Atomic credential rotation
    • Treating CI/CD tools as critical infrastructure
    • Monitoring outbound traffic from build environments
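As a small illustration of the version-pinning idea above, a build step can compare what is actually installed against an explicit pin list and fail on any drift. This is a sketch using only the standard library; the package name in the example is a placeholder, and real pipelines should go further (e.g., pip's hash-checking mode pins content hashes, not just version strings):

```python
# Sketch: detect dependency drift by comparing installed package versions
# against an explicitly pinned allowlist.
from importlib import metadata

def find_drift(pins):
    """Return {package: (pinned, installed)} for every pin that doesn't match."""
    drift = {}
    for name, pinned in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None  # missing entirely also counts as drift
        if installed != pinned:
            drift[name] = (pinned, installed)
    return drift

# A CI job might call this and fail the build if find_drift(pins) is non-empty.
```

A check like this would not have stopped the upstream credential theft, but it narrows the window in which a silently rewritten release can reach your build.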


    • (00:00) - Intro & Incident Overview
    • (01:26) - What Is LiteLLM & Why It Matters
    • (03:53) - Supply Chain Scope & Why This Is Dangerous
    • (07:31) - Why These Attacks Are Getting Easier (AI + Scale)
    • (10:48) - Attack Chain Breakdown (Trivy → Checkmarx → LiteLLM)
    • (11:50) - What the Malware Did & Impact at Scale
    • (14:23) - Detection, Response & Who Was Safe

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Brian Fehrman - Host
    • Bronwen Aker - Host
    • Derek Banks - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.

    20 mins
  • Model Ablation | Episode 46
    Apr 2 2026

    In this episode of BHIS Presents: AI Security Ops, the team breaks down model ablation — a powerful interpretability technique that’s quickly becoming a serious concern in AI security.

    What started as a way to better understand how models work is now being used to remove safety mechanisms entirely. By identifying and disabling specific components inside a model, researchers — and attackers — can effectively strip out refusal behavior while leaving the rest of the model fully functional.

    The result? A fast, reliable way to “de-safety” AI systems without prompt engineering, fine-tuning, or significant compute.

    We dig into:
    • What model ablation is and how it works
    • The difference between ablation and pruning
    • How safety behaviors can be isolated inside model internals
    • Why refusal mechanisms are often localized (and fragile)
    • How ablation is being used as a jailbreak technique
    • Why this is more reliable than prompt-based attacks
    • Risks specific to open-weight models and public checkpoints
    • The growing “uncensored model” ecosystem
    • Why interpretability is a double-edged sword
    • Whether safety should be deeply embedded into model architecture
    • What this means for defenders and AI security strategy

    This episode explores a critical shift in AI risk: when safety controls can be surgically removed, they stop being security controls at all.
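    The core mechanic discussed in this episode — identifying a direction in activation space associated with refusal and projecting it out — reduces to plain linear algebra. The following is a toy numpy sketch of that projection on a 2-d vector, not an intervention on any real model:

```python
import numpy as np

def ablate_direction(h, d):
    """Remove the component of activation vector h along direction d."""
    d_hat = d / np.linalg.norm(d)        # normalize the target direction
    return h - np.dot(h, d_hat) * d_hat  # orthogonal projection

# Toy example: activation [3, 4], "refusal direction" along the x-axis.
h = np.array([3.0, 4.0])
d = np.array([1.0, 0.0])
h_ablated = ablate_direction(h, d)  # component along d is now zero
```

    In a real ablation attack the same projection is applied to a model's weights or activations at the layers where the refusal direction was identified, which is why it works without prompts, fine-tuning, or significant compute.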

    📚 Key Concepts & Topics

    Model Internals & Interpretability
    • Neurons, attention heads, and residual stream analysis
    • Activation space and feature directions

    AI Security Risks
    • Prompt injection vs. structural attacks
    • Jailbreaking techniques and safety bypasses

    Model Access & Risk Surface
    • Open-weight vs. API-only models
    • Hugging Face and the uncensored model ecosystem

    AI Safety & Governance
    • Defense-in-depth for AI systems
    • Future standards for ablation resistance

    #AISecurity #ModelAblation #LLMSecurity #CyberSecurity #ArtificialIntelligence #AIResearch #BHIS #AIAgents #InfoSec

    • (00:00) - Intro & Show Overview
    • (01:27) - Removing AI Safety Mechanisms
    • (02:05) - What Is Model Ablation? (Technical Breakdown)
    • (04:01) - Open-Weight Models & Practical Limitations
    • (05:43) - Risks, Use Cases, and Ethical Tradeoffs
    • (07:32) - Security Implications & “You Can’t Ban Math”
    • (10:43) - Future Impact: Open Models Catching Up
    • (17:44) - Final Takeaway: Why “No” Isn’t Security

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Bronwen Aker - Host
    • Derek Banks - Host
    • Brian Fehrman - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.

    18 mins
  • Embedding Space Attacks | Episode 45
    Mar 26 2026

    In this episode of BHIS Presents: AI Security Ops, the team explores embedding space attacks — a lesser-known but increasingly important threat in modern AI systems — and how attackers can manipulate the mathematical foundations of how models understand data.

    Unlike prompt injection, which targets instructions, embedding attacks operate at a deeper level by influencing how data is represented, retrieved, and interpreted inside vector spaces. By subtly altering embeddings or poisoning data sources, attackers can manipulate AI behavior without ever touching the model directly.

    Through a hands-on walkthrough of a custom notebook with rich visualizations, this episode breaks down how embeddings work, why they are critical to LLM-powered systems like RAG pipelines, and how attackers can exploit them in real-world scenarios.

    We dig into:
    - What embeddings are and how AI systems convert text into numerical representations
    - How vector spaces enable similarity search and retrieval in LLM applications
    - What embedding space attacks are and why they matter for AI security
    - How small perturbations in data can drastically change model behavior
    - The risks of poisoned data in RAG and vector databases
    - How attackers can influence search results and downstream AI outputs
    - Why these attacks are subtle, hard to detect, and often overlooked
    - The role of visualization in understanding embedding behavior
    - Real-world implications for AI-powered applications and workflows
    - Defensive considerations when building with embeddings and vector stores

    This episode focuses on the foundational layer of AI systems, showing how security risks extend beyond prompts and into the underlying data representations that power modern AI.
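    The retrieval mechanics underlying these attacks reduce to vector similarity: text becomes a vector, and the closest vector wins. The following toy sketch uses hand-made 3-d vectors (real systems use learned embeddings with hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-d "embeddings" (hand-made for illustration; real ones are learned).
docs = {
    "password reset guide": np.array([0.9, 0.1, 0.0]),
    "lunch menu":           np.array([0.0, 0.2, 0.9]),
}
query = np.array([1.0, 0.0, 0.1])

# Retrieval = rank stored documents by similarity to the query vector.
best = max(docs, key=lambda name: cosine_similarity(docs[name], query))
```

    An attacker who can nudge a poisoned document's vector toward common query regions wins this ranking — and therefore controls what the downstream LLM sees — without ever touching the model itself.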

    📚 Key Concepts Covered

    AI Foundations
    - Embeddings and vector representations
    - Similarity search and vector space reasoning

    AI Security Risks
    - Embedding space manipulation
    - Data poisoning in vector databases
    - Retrieval manipulation in RAG systems

    Applications & Impact
    - LLM-powered search and assistants
    - AI pipelines using embeddings
    - Risks in production AI systems

    #AISecurity #Embeddings #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #InfoSec

    Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security.
    https://discord.gg/bhis

    • (00:00) - Intro & Episode Overview
    • (01:39) - What Are Embeddings? (AI Only Understands Numbers)
    • (03:44) - The Embedding Process (Text → Vectors)
    • (07:43) - Similarity, Classification & Vector Math
    • (09:55) - Visualizing Embedding Space (2D Projection)
    • (14:29) - Classifiers
    • (15:39) - Playing Games with Information
    • (18:06) - Attack Techniques: Synonyms & Context Manipulation
    • (20:29) - Context Padding
    • (27:10) - Collision Attacks, Defenses & Final Thoughts

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Brian Fehrman - Host
    • Bronwen Aker - Host
    • Derek Banks - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.

    33 mins