

We’ve all been there - using AI to help us summarize long articles, plan trips, or even handle annoying chores like processing payments. It’s like having a digital butler that never sleeps. But what if that butler was secretly taking orders from a total stranger behind your back?
According to a recent deep-dive by Google’s security team, this isn't just a sci-fi plot anymore. It’s actually happening on the live web.
Most of us have heard of "jailbreaking" an AI (like asking it to write a poem about something it’s supposed to block). That’s a direct attack. But Indirect Prompt Injection is much sneakier.
Imagine an AI agent browsing the web for you. It lands on a normal-looking website, but hidden in the code—in tiny 1-pixel fonts, invisible text, or buried metadata—are secret instructions. You can't see them, but the AI can.
These "booby-trapped" pages tell the AI to "ignore all previous instructions" and do something else instead. Between November 2025 and February 2026, Google saw a 32% spike in these types of attacks. People are literally baiting the web to hijack your AI.
Google found that most of these injections started off as pranks or SEO tricks; like forcing an AI to talk like a bird or refuse to summarize a page. Harmless, right?
Well, it didn't stay harmless for long.
Security researchers at Google and Forcepoint have now discovered "payloads" in the wild that are straight-up criminal. We’re talking about:
The scary part? If your AI has the "privilege" to make payments or send emails, it will follow these hidden instructions using your legitimate credentials. To the bank, it looks like a normal transaction. There’s no "hacker" logging in; your own assistant just got bamboozled.
This is where things get messy. If your AI agent reads a malicious site and "decides" to send $500 to a random account via PayPal, who is responsible?
Right now, there’s no legal framework for this. We are living in a wide-open "gray area" of the internet.
The attack surface grows as our AI gets more powerful. An AI that just reads text is low risk. An AI that can move your money is a high-value target.
While the tech world scrambles to fix this (it’s currently ranked as the #1 vulnerability for AI by security experts), the best thing you can do is be careful about the "permissions" you give your AI tools. Maybe don't give that shiny new browser extension full access to your bank account just yet.
The "window for getting ahead of this threat is closing fast," and as AI agents become a bigger part of our daily lives, the hackers are already one step ahead, waiting in the invisible text.
For the full scoop on how these attacks work and what Google found, check out this report on Decrypt: 👇
👉 Malicious Web Pages Are Hijacking AI Agents, And Some Are Going After Your PayPal
Disclaimer: This article is provided for informational purposes only, mistakes may be made, and it's not offered or intended to be used as legal, tax, investment, financial, or any other advice.
