BSides Charlotte 2026 Workshops
Tickets for Workshops
(Link to tickets coming soon!)
ADventuring in Active Directory (Full Day Workshop)
ADventuring in Active Directory is a workshop focused on common attacks used against Active Directory environments. Through a mixture of lecture and hands-on exercises, participants will discover ways to examine an Active Directory environment for a variety of common misconfigurations and then exploit these issues to pivot and escalate their access. Not only will the workshop cover the different attacks but also the details of why the attacks work and how an environment can be made resilient to them, making it useful to those looking to hone their offensive skills as well as those who are protecting networks.
Instructor
Eric Kuehn is a Principal security consultant at Secure Ideas as well as an IANS faculty member. He leverages his extensive experience with Microsoft infrastructures and Active Directory to perform penetration tests and offer guidance around system security and architecture. He also is the author of the Red Team Fundamentals for Active Directory course, where he explains the concepts, techniques and best practices for exploiting and defending AD environments. Eric has been working with Active Directory since its release and was the technical leader and architect of one of the largest and most complex AD implementations out there. He holds the CISSP certification and is passionate about sharing his knowledge and skills with others. Eric has delivered talks on Active Directory security and other topics at various conferences, events, webcasts, and Antisyphon Training.
Tickets for Workshops
(Link to tickets coming soon!)
Breaking AI: Prompt Injection, Data Exfiltration and Practical Defenses That Work (1/2-day Workshop, Morning & Afternoon Sessions)
AI systems don’t break like software, they fail in silence, misclassify with confidence, and hallucinate under pressure. This 4-hour hands-on workshop exposes the core vulnerabilities of modern AI, from adversarial image attacks to LLM manipulation. Participants will actively craft exploits from tricking a car dealership chatbot to provide a 99% discount to using chatbots to exfiltrate data. The focus is practical: how do these attacks work, how can you launch them, and what can actually stop them?
A previous version of this workshop was presented BSidesNova 2025. This edition has been significantly updated and expanded to focus on the growing threat landscape around LLMs and foundation models. This version will have a more interactive and hands on component.
OUTLINE:
Introduction
We start by defining what “AI vulnerabilities” actually look like in deployed products: not model weights getting hacked, but authority boundaries getting blurred between prompts, tools, retrieval, and privileged actions. Participants learn a practical threat model for LLM apps, including the difference between direct prompt injection, indirect prompt injection via untrusted content, and tool/agent abuse. The goal is to set a shared language for the rest of the workshop and make it clear why accuracy and safety guardrails don’t equal security.
What AI vulnerabilities look like
Participants will be given a car dealership website with a chatbot connected to an internal database and a few business tools (inventory, pricing, lead notes) inside an isolated sandbox. Using only prompts, you’ll induce the assistant to perform unauthorized or unintended actions, showing how “friendly chat” becomes a control plane when the model has tool access. We then implement and test defenses side-by-side: stronger defense prompts, strict tool schemas, input/output sanitization, injection classifiers/policy gates, and confirmation workflows for high-impact operations.
AI Data Exfiltration
Participants will use prompt injection patterns to extract data the model should not reveal, across two scenarios: a professor’s assignment website with hidden instructions that you paste into the LLM, and a malicious document designed to poison the model’s behavior during summarization or retrieval. We show how these “indirect prompt injections” work in real RAG-like workflows and why naïve filtering fails. This module includes a short demo and case study of EchoLeak (CVE-2025-32711) to illustrate how production copilots can be driven into disclosing sensitive content when untrusted inputs are treated as instructions.
AI Code Execution Vulnerabilities
This module demonstrates what happens when an AI is given broad privileges, especially access to command execution, automation, or administrative actions, where a single malicious or ambiguous prompt can trigger destructive behavior or system instability. Participants will reproduce controlled “tool abuse” failures in a sandbox and observe the operational blast radius (resource exhaustion, unsafe file actions, unintended network calls, etc.). We then lock it down using scalable patterns: delegated permissions, least-privilege tool design, command allowlists with structured arguments, sandboxing/timeouts, and human approval for dangerous actions.
Image attacks
We shift to vision models and craft classic adversarial examples (FGSM, PGD, and C&W) to force confident misclassification under small perturbations. The emphasis is practical: how attack parameters change outcomes, what a real attacker would optimize for, and why “high accuracy” can still be brittle. We close this module by mapping defenses to reality, robustness evaluation, adversarial training tradeoffs, detection pitfalls, and monitoring strategies that catch drift and abuse rather than chasing perfect prevention.
Governance
The final section zooms out to the organizational system: how to threat model AI features, define security requirements for LLM/RAG/agent integrations, and run repeatable red teaming without chaos. We cover disclosure and incident response considerations unique to AI apps (prompt logs, tool audit trails, sensitive retrieval traces) and how to measure resilience over time with benchmarks instead of anecdotes. Participants leave with a governance blueprint: policies for tool permissions, data access boundaries, evaluation gates before release, and a practical checklist for third-party model risk.
Instructor
Pavan Reddy is an AI security researcher and the founder of Adversarial Lab, an open-source framework for red-teaming machine learning models. His work focuses on adversarial attacks, prompt injection vulnerabilities, and the systemic weaknesses of foundation models. He’s presented research at venues like HCI International and FLAIRS, and has taught workshops at conferences like BSides, SquadCon, ACM SIGCITE. Overall, he has delivered over 15 talks and workshops. He educates a wide audience online, where over 80,000 followers tune in for his insights on AI risk and resilience. At Automata, he is a software engineer, leading the company’s effort towards FIPS, FedRAMP ATO and AI initiatives in Vulnerability and Compliance management.
Tickets for Workshops
(Link to tickets coming soon!)
AWS JAM – GenAI & Security on AWS
This workshop is a hands‑on AWS Jam, a gamified learning event where participants work in small teams to solve real‑world cloud challenges in a safe AWS sandbox environment. The session will focus on AI and security scenarios, guiding attendees to use services such as Amazon SageMaker, AWS AI/ML services, and core security tools (for example IAM, CloudTrail, Config, KMS, and WAF) to investigate incidents, protect data, and implement secure AI workloads following AWS best practices. Teams earn points as they complete challenges of increasing difficulty, encouraging collaboration, experimentation, and practical skill‑building that participants can immediately apply to their own AWS environments.
Instructor
Bhavana Dhowdary Dodda is a Solutions Architect at AWS and in her role she helps our customers build cloud native solutions on AWS. Bhavana also serves as a trusted advisor for customers to help transform business using generative AI.
Tickets for Workshops
(Link to tickets coming soon!)
