Workshops

LOCATION
Bank of America Building
13510 Ballantyne Corporate Pl
Charlotte, NC 28277

DATE & TIME
Sunday, March 29th
8:30 am – 5:00 pm (Lunch will be 12:30 pm – 1:00 pm)

TICKETS
BSides Charlotte 2026 Workshop Eventbrite

ADventuring in Active Directory

Eric Kuehn

8:30 am – 5:00 pm

ADventuring in Active Directory is a workshop focused on common attacks against Active Directory environments. Through a mix of lecture and hands-on exercises, participants will learn to examine an Active Directory environment for a variety of common misconfigurations and then exploit those issues to pivot and escalate their access. The workshop covers not only the attacks themselves but also why they work and how an environment can be made resilient to them, making it useful both to those honing their offensive skills and to those defending networks.

AWS JAM – GenAI & Security on AWS

Bhavana Chowdary Dodda

Morning Session – 8:30 am – 12:30 pm || Afternoon Session – 1:00 pm – 5:00 pm

This workshop is a hands-on AWS Jam, a gamified learning event where participants work in small teams to solve real-world cloud challenges in a safe AWS sandbox environment. The session will focus on AI and security scenarios, guiding attendees to use services such as Amazon SageMaker, AWS AI/ML services, and core security tools (for example, IAM, CloudTrail, Config, KMS, and WAF) to investigate incidents, protect data, and implement secure AI workloads following AWS best practices. Teams earn points as they complete challenges of increasing difficulty, encouraging collaboration, experimentation, and practical skill-building that participants can immediately apply to their own AWS environments.

Breaking AI: Prompt Injection, Data Exfiltration, and Practical Defenses

Pavan Reddy

Morning Session – 8:30 am – 12:30 pm || Afternoon Session – 1:00 pm – 5:00 pm

AI systems don’t break like software: they fail in silence, misclassify with confidence, and hallucinate under pressure. This 4-hour hands-on workshop exposes the core vulnerabilities of modern AI, from adversarial image attacks to LLM manipulation. Participants will actively craft exploits, from tricking a car dealership chatbot into granting a 99% discount to using chatbots to exfiltrate data. The focus is practical: how do these attacks work, how can you launch them, and what can actually stop them?

A previous version of this workshop was presented at BSidesNova 2025. This edition has been significantly updated and expanded to focus on the growing threat landscape around LLMs and foundation models, with a more interactive, hands-on component.

OUTLINE:
Introduction (20 minutes).
We start by defining what “AI vulnerabilities” actually look like in deployed products: not model weights getting hacked, but authority boundaries getting blurred between prompts, tools, retrieval, and privileged actions. Participants learn a practical threat model for LLM apps, including the difference between direct prompt injection, indirect prompt injection via untrusted content, and tool/agent abuse. The goal is to set a shared language for the rest of the workshop and make it clear why accuracy and safety guardrails don’t equal security.

What AI vulnerabilities look like (55 minutes).
Participants will be given a car dealership website with a chatbot connected to an internal database and a few business tools (inventory, pricing, lead notes) inside an isolated sandbox. Using only prompts, participants will induce the assistant to perform unauthorized or unintended actions, showing how “friendly chat” becomes a control plane when the model has tool access. We then implement and test defenses side-by-side: stronger defense prompts, strict tool schemas, input/output sanitization, injection classifiers/policy gates, and confirmation workflows for high-impact operations.
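Two of the defenses named above, strict tool schemas and confirmation workflows for high-impact operations, can be sketched together in a few lines. This is an illustrative toy, not the workshop's lab code: the tool names, fields, and the 10% discount cap are all made-up assumptions.

```python
# Illustrative tool dispatcher: exact-match schemas, a business-rule guard,
# and a human-confirmation gate for high-impact operations.
HIGH_IMPACT = {"apply_discount"}  # hypothetical set of dangerous tools

TOOL_SCHEMAS = {
    "get_price": {"vin": str},
    "apply_discount": {"vin": str, "percent": float},
}

def dispatch(tool, args, confirmed=False):
    """Validate a model-requested tool call before executing it."""
    schema = TOOL_SCHEMAS.get(tool)
    if schema is None:
        return ("rejected", "unknown tool")
    # Strict schema: exact keys and exact types; extra or missing fields fail.
    if set(args) != set(schema) or any(
        not isinstance(args[k], t) for k, t in schema.items()
    ):
        return ("rejected", "schema violation")
    # Business-rule guard: the "99% discount" trick fails here even if the
    # model was talked into requesting it.
    if tool == "apply_discount" and args["percent"] > 10:
        return ("rejected", "discount exceeds policy cap")
    # High-impact operations pause for out-of-band human confirmation.
    if tool in HIGH_IMPACT and not confirmed:
        return ("pending", "requires human confirmation")
    return ("executed", tool)
```

The point of the layering is that no single prompt-level trick is enough: the injected request must simultaneously match the schema, pass the policy check, and obtain a confirmation the model cannot grant itself.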

AI Code Execution Vulnerabilities (35 minutes).
This module demonstrates what happens when an AI is given broad privileges, especially access to command execution, automation, or administrative actions, where a single malicious or ambiguous prompt can trigger destructive behavior or system instability. Participants will reproduce controlled “tool abuse” failures in a sandbox and observe the operational blast radius (resource exhaustion, unsafe file actions, unintended network calls, etc.). We then lock it down using scalable patterns: delegated permissions, least-privilege tool design, command allowlists with structured arguments, sandboxing/timeouts, and human approval for dangerous actions.
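The "command allowlists with structured arguments" pattern mentioned above might look like this in miniature. The action names, argument-count rules, character check, and timeout are all hypothetical choices for the sketch, not a prescribed implementation.

```python
import subprocess

# Hypothetical allowlist: each action maps to a fixed argv prefix plus a
# declared number of validated extra arguments. The model can only name an
# allowlisted action; it never supplies a raw shell string.
ALLOWED = {
    "disk_usage": (["df", "-h"], 0),
    "ping_host":  (["ping", "-c", "1"], 1),
}

def build_argv(action, extra_args=()):
    """Validate a requested action and return a safe argv list, or raise."""
    if action not in ALLOWED:
        raise PermissionError(f"action {action!r} not on allowlist")
    prefix, n_extra = ALLOWED[action]
    if len(extra_args) != n_extra:
        raise ValueError("wrong number of arguments")
    for arg in extra_args:
        # Conservative charset check: hostnames/paths only, no shell metacharacters.
        if not all(c.isalnum() or c in ".-_/" for c in arg):
            raise ValueError(f"argument {arg!r} failed validation")
    return prefix + list(extra_args)

def run_allowed(action, extra_args=()):
    # shell=False with a fixed argv prevents metacharacter injection; the
    # timeout bounds the blast radius of a runaway command.
    return subprocess.run(build_argv(action, extra_args),
                          shell=False, capture_output=True, timeout=5)
```

Running the least-privilege check separately from execution (build_argv vs. run_allowed) also makes the policy easy to unit-test and audit, which matters for the approval workflows discussed above.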

Image attacks (35 minutes).
We shift to vision models and craft classic adversarial examples (FGSM, PGD, and C&W) to force confident misclassification under small perturbations. The emphasis is practical: how attack parameters change outcomes, what a real attacker would optimize for, and why “high accuracy” can still be brittle. We close this module by mapping defenses to reality: robustness evaluation, adversarial training tradeoffs, detection pitfalls, and monitoring strategies that catch drift and abuse rather than chasing perfect prevention.
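To give a feel for how small an FGSM perturbation can be, here is a toy version against a two-feature logistic "classifier". It is NumPy-only with the gradient written out analytically; real attacks use autodiff on the target network, and the weights and inputs below are made-up illustrations.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One FGSM step: nudge x by eps in the sign of the loss gradient w.r.t. x."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # d(cross-entropy loss)/dx for this model
    # Perturb toward higher loss, then clip back into the valid input range.
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

def predict(x, w, b):
    return int((w @ x + b) > 0)

# Illustrative model and input: correctly classified as class 1...
w, b = np.array([2.0, -1.0]), -0.2
x = np.array([0.5, 0.3])
x_adv = fgsm(x, 1.0, w, b, eps=0.2)
# ...yet a perturbation of at most 0.2 per feature flips the prediction.
```

The eps parameter is exactly the "attack parameter" knob discussed above: larger values flip predictions more reliably but make the perturbation easier to notice or detect.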

Governance (30 minutes).
The final section zooms out to the organizational system: how to threat model AI features, define security requirements for LLM/RAG/agent integrations, and run repeatable red teaming without chaos. We cover disclosure and incident response considerations unique to AI apps (prompt logs, tool audit trails, sensitive retrieval traces) and how to measure resilience over time with benchmarks instead of anecdotes. Participants leave with a governance blueprint: policies for tool permissions, data access boundaries, evaluation gates before release, and a practical checklist for third-party model risk.