Mastering LLM Integration Security: Offensive & Defensive Tactics icon

Mastering LLM Integration Security: Offensive & Defensive Tactics

A 2-day deep dive into AI and LLM fundamentals—through the lens of your hacking adversaries.

This is our 2-day beginner-intermediate LLM course

This course is designed for individuals with a beginner-to-intermediate understanding of artificial intelligence and cybersecurity. Whether you are a security consultant, developer, AI/LLM architect, or prompt engineer, you should have a foundational grasp of AI/LLM concepts and some experience with cybersecurity practices.

An immersive, intensive 2-day journey into the dynamic world of artificial intelligence.As LLMs increasingly becoming an integral part of various products and services, grasping their implementation nuances and securing these implementations is paramount for maintaining robust, efficient, and trustworthy systems.


2 day practical class


Available by Partners


Live, online available


Beginner-Intermediate

Course Overview

Is it for me?

The rapid adoption of AI and, specifically, Large Language Models (LLMs), has opened new frontiers in innovation. And in attack surfaces...As companies rush to harness the power of LLMs in applications ranging from customer service to data analytics, they often overlook the emerging security gaps introduced by prompt injection, data poisoning, insecure plugin designs, and more.

Our course directly tackles these new challenges. Over two immersive days, you’ll not only uncover high-impact vulnerabilities that could already be at work within your systems but also learn how to patch them before they result in breaches or critical data leaks. In addition, we regularly update our modules and labs to incorporate the latest security breakthroughs, proof-of-concept exploits, and real-world incidents.

This focus on cutting-edge threats and solutions means that attendees can return year after year for fresh insights, continually refining their ability to secure AI-driven environments as new vulnerabilities emerge.

Interested

Interested?

1. Our courses are available directly from us; through our training partners or at worldwide technical conferences.

2. You can find course dates and prices on the Courses and Webinars page.
Click here for course dates, prices and content

3. Take a look below at a few of the upcoming courses for this specific training.

4. For more information including private course requests, complete the short form below.

Course Details

This course follows a practical “defense by offense” approach, anchored in real-world scenarios and hands-on labs rather than abstract theory. By the end of the course, you’ll be able to:

  • Think and behave like a sophisticated attacker targeting LLM-based systems.
  • Understand how attackers discover and exploit prompt injections, insecure output handling, data poisoning, and other vulnerabilities in AI workflows
  • Identify and exploit security weaknesses specific to LLM integrations
  • Implement effective prompt engineering and defensive measures
  • Learn to craft prompts that minimize leakage, prevent injection, and ensure your LLM responds reliably within controlled security parameters
  • Design LLM applications with minimal attack surface
  • Explore best practices for restricting AI agent functionality (excessive agency), hardening plugin interfaces, and securing AI-driven workflows
  • Apply forward-thinking strategies to protect training and inference data
  • Develop robust security controls in real-world deployments
  • Translate lab exercises into practical solutions by integrating logging, monitoring, and guardrails for continuous protection of LLM-based services

You will receive:

Access to our Hack-Lab, not just for your work during the course, you will have access for 30 days after the course too. This gives you plenty of time to practice the concepts taught during the course.

Details of the course content:

Prompt Engineering

This module introduces the fundamentals of what prompts are and how they function within the context of AI and LLMs. This module dives into the key aspects of prompt engineering

  • What makes a good prompt
  • How to write effective prompts
  • Including reference text in prompt
  • Few-Shot prompting
  • How to give AI time to think
  • Using Delimiters for Clarity and Security

Prompt Injection

This module covers the security risks associated with prompt injection vulnerabilities, which can lead to unintended behavior or the disclosing of sensitive data and provides strategies to address these issues. Understanding the nuances between direct and indirect prompt injections is vital for recognizing how attackers can exploit these vulnerabilities. By examining real-world examples, we can study the potential impacts and consequences

  • Nature of Prompt Injection Vulnerabilities: Explore how vulnerabilities arise from the manipulation of AI prompts
  • Direct vs. Indirect Injection: Differentiate the methods attackers use to exploit prompt injection weaknesses.
  • Real-World Exploits: Analyze documented instances to understand the practical risks and execution of such attacks.
  • Impact and Consequences: Assess the potential severity of prompt injection, from misinformation to critical data leaks.
  • Defense Strategies: Learn about the latest techniques for detecting and thwarting prompt injection vulnerabilities.
  • Client-Side attacks
  • Case Study: WannaCry

LAB ACTIVITIES:

  • The Math Professor: Users will perform direct prompt injection attacks to convince the professor the answer is always correct
  • Indirect Prompt Injection: Users will perform indirect prompt injection attacks via data which if fetched and supplied to the LLM during RAG

ReACT LLM Agent Prompt Injection

The ReACT framework is designed to enrich Large Language Models (LLMs) with a structured approach to processing and generating tasks. Within this framework, AI agents are given a set of tools and follow a Reasoning-Action-Observation chain to interact with information and environment. However, vulnerabilities may arise from prompt injections, where malicious inputs disrupt normal operations

  • Understanding ReACT: Learn about the ReACT framework and its role in enhancing LLM tasks.
  • Tools Purpose in ReACT: Examine the functionalities of tools provided by the framework for AI agents
  • Tool Abuse in Frameworks: Review how tools intended for productive use can be misused within frameworks, such as LangChain.
  • RAO Chain Exploitation: Analyze how the Reasoning-Action-Observation sequence can be corrupted through prompt injections and other methods
  • Prevention and Mitigation: Gain insight into strategies to safeguard the integrity of systems utilizing the ReACT framework and similar structures.

LAB ACTIVITIES:

  • The Bank of NSS: An imaginary bank built using LangChain, agents and GPT.3.5- turbo as the LLM. The bank assists users with queries related to balance and has access to rag systems to fetch data from various data stores. Users will perform prompt injection to fetch information from other users accounts.

Insecure Output Handing

This module focuses on the concept of insecure output handling in AI systems, providing a deep dive into the risks and examining the consequences through practical examples.

  • Defining Insecure Output Handling: Get familiar with what insecure output handling is and the risks it poses to AI system integrity.
  • Recognizing Vulnerabilities: Examine real-world scenarios where insecure output handling has led to system vulnerabilities
  • Simulated Attacks: Participate in practical exercises designed to exploit insecure output handling in three AI applications, demonstrating the process of gaining unauthorized privileges.
  • Impact of Weaknesses: Understand the potential damage that can result from insecurely handled outputs in AI systems.
  • Proactive Measures: Introduce preventive measures and best practices to secure AI outputs against such vulnerabilities.

LAB ACTIVITIES:

  • Report summarization application: Users will submit documents to be summarized by the application, the response is rendered on the front end. Users will battle with injection payloads inside of documents to try to coerce the LLM to return code which is in turn rendered on the front end.
  • Network analysis agent: Users will utilize the AI agent to perform network analysis on remote hosts, but what if it is possible to execute arbitrary code?
  • Stock Bot: An AI assistant designed to provide users with company stock market analysis. The agent works with live data which is fetched from external resources.But what if it is possible to fetch from an internal resource?

Training Data Poisoning

This module addresses the concept of training data poisoning, a technique where attackers deliberately manipulate the data that an AI model learns from, with the intent to compromise its performance, integrity or functionality.

LAB ACTIVITIES:

  • Adversarial Poisoning Attack Lab: Simulate an attack that feeds misleading input to corrupt the model's learning process.
  • Injecting Factual Information Lab: Practice the technique of altering an LLM's output by injecting incorrect facts into its training dataset.

Supply Chain Vulnerabilities

This module addresses the vulnerabilities associated with the AI / LLM supply chain, examining the points in the supply process that can or might be exploited, and providing realworld examples of such attacks.

Sensitive Information Disclosure

This module covers the concept of Sensitive Information Disclosure within Large Language Models (LLMs). Learners will explore both theoretical concepts and practical risks, enhancing their understanding of how sensitive data can be inadvertently exposed by AI systems.

KEY CONCEPTS:

  • Exploration of how LLMs may unknowingly reveal personal, proprietary, or confidential information embedded within their training data or through their interactions.
  • Discussion on common scenarios and mechanisms that lead to sensitive information disclosure.
  • LAB ACTIVITIES:

  • Incomplete Filtering lab: Sensitive information is not properly filtered in training data.
  • Overfitting / Memorization lab: Sensitive data is memorized during the LLM training process.
  • Misinterpretation: LLM can misinterpret input and disclose sensitive information

Insecure Plugin Design

This module provides an in-depth look at the critical aspects of plugin design within AI applications, focusing on the security vulnerabilities that can arise. Students will learn about the common design flaws in and how these vulnerabilities might be exploited.

LAB ACTIVITIES:

  • Insecure tool usages: Exploit a network analysis tool to archive code execution due to insecure implementation Langchain run method.

File System Operations Security Lab: Evaluate an AI agent's capability to perform file system operations with sanitized paths and test the effectiveness of sanitization against exploitative insertions or confusion tactics post-sanitization.

Excessive Agency in LLMBased Systems

This module covers the concept of excessive agency in LLM systems, this refers to the vulnerability that allows damaging actions to be performed in response to unexpected of ambiguous outputs. This can occur due to hallucinations, prompt injections, malicious plugins, poorly engineered prompts, or a poorly performing model..

LAB ACTIVITIES:

  • Excessive agency with excessive functionality: A medical records-based agent designed to provide acute descriptions of diagnosed conditions. But perhaps more features exist? Users will attempt to modify medical records.
  • Excessive agency with excessive permissions: A file management AI agent, designed to read, list and summarize the contents of files, but once again more undocumented features exist. Users will locate the hidden functionality and used this to create, delete and perhaps even execute commands on the host operating system.

Overreliance in LLM’s

This module covers the concept of overreliance in LLM systems, learners will get an in depth overview of what overreliance consists of, why it occurs and what legal repercussions can be faced.

Enquire about your training

We provide training directly (live, online or in person) and also work with a range of training partners in different locations around the globe for classroom or live, online training. Please contact us with details of your requirement and we will recommend the best route to access our amazing training.



Prerequisites

Who should take this class?

  • Security Professionals
  • Back-End / Front-End Developers
  • System Architects
  • Product Managers
  • Anyone directly involved in the integration and application of LLM technologies

What you will learn:

This course follows a practical “defense by offense” approach, anchored in real-world scenarios and hands-on labs rather than abstract theory. By the end of the course, you’ll be able to:

  • Think and behave like a sophisticated attacker targeting LLM-based systems
    • Understand how attackers discover and exploit prompt injections, insecure output handling, data poisoning, and other vulnerabilities in AI workflows
    • Identify and exploit security weaknesses specific to LLM integrations
    • Practice detecting and attacking common pitfalls (e.g., plugin misconfiguration, overreliance, and supply chain exposures) in real-world lab environments
    • Implement effective prompt engineering and defensive measures
    • Learn to craft prompts that minimize leakage, prevent injection, and ensure your LLM responds reliably within controlled security parameters
    • Design LLM applications with minimal attack surface
    • Explore best practices for restricting AI agent functionality (excessive agency), hardening plugin interfaces, and securing AI-driven workflows
    • Apply forward-thinking strategies to protect training and inference data
    • Develop robust security controls in real-world deployments
    • Translate lab exercises into practical solutions by integrating logging, monitoring, and guardrails for continuous protection of LLM-based services
    • Upcoming Courses

      LLM Course

      Course Information

      You can download a copy of the course information below.

      In addition you will also be provided with a student pack, handouts and cheat-sheets if appropriate.

      Download the course information

      Your Training Roadmap

      Offensive Classes

      Hacking training for all levels: new to advanced. Ideal for those preparing for certifications such as CREST CCT (ICE), CREST CCT (ACE), CHECK (CTL), TIGER SST as well as infrastructure / web application penetration testers wishing to add to their existing skill set.

      Defensive Classes

      Giving you the skills needed to get ahead and secure your business by design. We specialise in application security (both secure coding and building security testing into your software development lifecycle) and cloud security. Build security capability into your teams enabling you to move fast and stay secure.

      Our accreditations

      Crest
      Cyber essentials
      CEH Accreditation
      CCISO Accreditation
      CISSP Accreditation
      CRISC Accreditation
      OSCE Accreditation