Prompt injection
5 videos across 4 channels
Prompt injection poses a real risk as prompts embedded in AI interactions can steer behavior, memories, or decisions in dangerous ways—potentially delivering harmful investment advice or unsafe actions. The topic also covers how rapidly autonomous AI agents, like OpenClaw-style systems, raise security, privacy, and ethics concerns, and how defenders can study advanced hacking techniques and practical labs to understand and mitigate these risks.

Cloudflare AI Security Suite: Protect AI-powered apps with Firewall for AI
The video introduces Firewall for AI, a Cloudflare security solution that provides LLM endpoint discovery, visibility in

Clawdbot has gone rogue (I can't believe this is real)
The video surveys the rapid rise of OpenClaw/Moltbot OpenClaw and Moltbook, highlighting how AI agents are becoming incr

gpt-5.4 is really, really good
The video provides an in-depth take on GPT-5.4 thinking, covering what’s new, how it performs across benchmarks, pricing

become an AI HACKER (it's easier than you think)
The video dives into real AI hacking beyond party tricks, featuring expert Jason Haddix and the CANAM AI security resour