Ethical Hacking against and with AI/LLM/ML Training Course

Become professional in AI and LLM Penetration Testing and Vulnerability Discovery

Ratings 5.00 / 5.00
Ethical Hacking against and with AI/LLM/ML Training Course

What You Will Learn!

  • AI/LLM vulnerabilities
  • get to a professional level in AI/LLM penetration testing
  • get to a professional level in AI/LLM bug bounty
  • Basics of AI/LLM
  • AI/LLM Attacks
  • AI/LLM Frameworks
  • AI/LLM Prompt Injection
  • AI/LLM Insecure Output Handling
  • AI/LLM Training Data Poisoning
  • AI/LLM Denial of Service
  • AI/LLM Supply Chain
  • AI/LLM Permission Issues
  • AI/LLM Data Leakage
  • AI/LLM Excessive Agency
  • AI/LLM Overreliance
  • AI/LLM Insecure Plugins
  • AI/LLM Threat Model
  • Using AI for Penetration Testing / Ethical Hacking
  • The Yolo AI Tool

Description

Ethical Hacking against and with AI/LLM/ML Training Course

Welcome to this course of Ethical Hacking and Penetration Testing Artificial Intelligence (AI) and Large Language Models (LLM) Training course.

Important note: This course is NOT teaching the actual usage of Burp Suite and its features.

Your instructor is Martin Voelk. He is a Cyber Security veteran with 25 years of experience. Martin holds some of the highest certification incl. CISSP, OSCP, OSWP, Portswigger BSCP, CCIE, PCI ISA and PCIP. He works as a consultant for a big tech company and engages in Bug Bounty programs where he found thousands of critical and high vulnerabilities.

This course has a both theory and practical lab sections with a focus on finding and exploiting vulnerabilities in AI and LLM systems and applications. The training is aligned with the OWASP Top 10 LLM vulnerability classes. Martin is solving all the LLM labs from Portswigger in addition to a lot of other labs and showcases. The videos are easy to follow along and replicate. There is also a dedicate section on how to use AI for Penetration Testing / Bug Bounty Hunting and Ethical Hacking.

The course features the following:

· AI/LLM Introduction

· AI/LLM Attacks

· AI/LLM Frameworks / writeups

· AI LLM01: Prompt Injection

· AI LLM02: Insecure Output Handling

· AI LLM03: Training Data Poisoning

· AI LLM04: Denial of Service

· AI LLM05: Supply Chain

· AI LLM06: Permission Issues

· AI LLM07: Data Leakage

· AI LLM08: Excessive Agency

· AI LLM09: Overreliance

· AI LLM10: Insecure Plugins

· Threat Model

· Putting it all together

· Using AI for Penetration Testing / Ethical Hacking

· The Yolo AI Tool

Notes & Disclaimer

Portswigger labs are a public and a free service from Portswigger for anyone to use to sharpen their skills. All you need is to sign up for a free account. I will update this course with new labs as they are published. I will to respond to questions in a reasonable time frame. Learning Pen Testing / Bug Bounty Hunting is a lengthy process, so please don’t feel frustrated if you don’t find a bug right away. Try to use Google, read Hacker One reports and research each feature in-depth. This course is for educational purposes only. This information is not to be used for malicious exploitation and must only be used on targets you have permission to attack.

Who Should Attend!

  • Anybody interested in becoming professional in ethical AI/LLM hacking / penetration testing
  • Anybody interested in becoming professional in ethical AI/LLM bug bounty hunting
  • Anybody interested in learning how hackers hack AI/LLM
  • Developers looking to expand on their knowledge of vulnerabilities that may impact them
  • Anyone interested in AI/LLM security
  • Anyone interested in Red teaming
  • Anyone interested in offensive security

TAKE THIS COURSE

Tags

Subscribers

7

Lectures

23

TAKE THIS COURSE