MCP tools are reshaping how developers integrate with AI – but they come with major risks. In this video, we explore how attackers can exploit tool descriptions, sleeper attacks, and tool shadowing to leak sensitive data or even achieve remote code execution. Using a real-world example, we show how even legitimate tools can become dangerous when AI blindly trusts what it's told. Plus, we’ll break down the methods behind tool poisoning and what you can do to protect yourself.
🧠 Topics Covered:
Tool poisoning & tool shadowing explained
Remote code execution through AI tools
Obfuscation techniques using scrollbars & whitespace
How MCP tool descriptions can manipulate AI behavior
🔗 Relevant Links
Equixly Blog Post: https://equixly.com/blog/2025/03/29/mcp-server-new-security-nightmare/
Invariant Labs: https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks
Git: https://github.com/invariantlabs-ai/mcp-injection-experiments
❤️ More about us
Radically better observability stack: https://betterstack.com/
Written tutorials: https://betterstack.com/community/
Example projects: https://github.com/BetterStackHQ
📱 Socials
Twitter: https://twitter.com/betterstackhq
Instagram: https://www.instagram.com/betterstackhq/
TikTok: https://www.tiktok.com/@betterstack
LinkedIn: https://www.linkedin.com/company/betterstack
📌 Chapters:
00:00 Exploit Demo
01:20 Sleeper / Rug Pull Attack
02:30 Tool Poisoning
04:36 Tool Shadowing
05:56 Remote Code Execution
07:45 Outro