The Autonomous AI Attacks I Warned About Just Went Live

Remember when I told you AI would start attacking on its own? That any teenager with an internet connection could soon unleash an AI that thinks, adapts, and attacks without human intervention?

Our whitepaper “Emerging Threats to Artificial Intelligence Systems and Gaps in Current Security Measures” predicted this exact scenario. Now Google’s proving us right. https://mountaintheory.ai/emerging-threats-to-artificial-intelligence-systems-and-gaps-in-current-security-measures/

Google just caught it happening in the wild.

Not in a lab. Not in theory. Russian military hackers deployed it against Ukraine. The malware rewrites its own code every hour. It asks Google’s Gemini for new ways to hide. And traditional security can’t see it coming.

Welcome to the age of autonomous AI warfare.

The Attack That Learns As It Destroys

Picture this: A piece of malware wakes up every hour, looks at itself in the mirror, and says, “I need to become someone completely different.” Then it calls up Google’s Gemini AI and asks for a makeover. New code. New signature. New way to hide.

That’s PROMPTFLUX. Google discovered it was being tested on VirusTotal in June 2025.

The scariest part? Its “Thinking Robot” module doesn’t just tweak a few lines of code. It asks Gemini to completely rewrite its entire source code from scratch. Every. Single. Hour.

Your antivirus software that worked at 9 AM? Useless by 10 AM. The signatures your security team updated at lunch? Obsolete by dinner.

This isn’t evolution. It’s a revolution. Every 60 minutes.

Russia Already Weaponized It Against Ukraine

While security vendors were still debating whether AI attacks were “theoretical,” Russia’s APT28 (Fancy Bear) deployed PROMPTSTEAL in live combat operations against Ukrainian defense systems.

This isn’t some script kiddie experiment. This is the same group that hacked the DNC in 2016.

PROMPTSTEAL pretends to be an image generator. “Hey, want to make some cool pictures?” Meanwhile, it’s asking an AI model to write custom commands to steal everything on your network. Not hardcoded commands that defenders can spot. Fresh commands, generated on the fly, that have never existed before.

The AI writes the attack playbook in real time. No two attacks look the same.

Ukraine’s CERT caught it stealing defense documents, military communications, and intelligence data. By the time defenders figured out what happened, the malware had already evolved into something else.

Any Kid Can Now Download Nuclear Weapons

Here’s what keeps me up at night: The underground markets are selling “AI Attack as a Service” with monthly subscriptions. Like Netflix, but for taking down corporations.

Want to destroy a competitor? That’ll be $50 a month. Need to steal cryptocurrency? $100 gets you the premium package. Want to take down critical infrastructure? There’s an app for that.

You don’t need to know how to code anymore. You just need a credit card and a grudge.

The tools come with:

  • Automated reconnaissance that maps entire networks
  • AI that writes custom exploits for whatever it finds
  • Phishing campaigns that adapt to each victim’s psychology
  • Ransomware that evolves to beat recovery attempts
  • Built-in money laundering through crypto

A 16-year-old with WiFi can now launch attacks that would have required a nation-state just two years ago.

Your Security Team Is Fighting F-35s With Muskets

Traditional cybersecurity is built on a simple premise: Bad stuff has signatures. Find the signature, block the threat.

But what happens when the threat has no signature because it’s different every time?

What happens when malware thinks faster than your security team? What happens when it probes your network, finds weaknesses, exploits them, and covers its tracks before a human even knows there’s a problem?

You’re not fighting code anymore. You’re fighting intelligence.

An intelligence that never sleeps. Never takes coffee breaks. Never calls in sick. And gets smarter every single attack.

The Playbooks Are Already Written

The attackers aren’t starting from scratch. They’re feeding AI every cybersecurity playbook ever written:

  • MITRE ATT&CK framework? It knows it.
  • NIST guidelines? Memorized.
  • Your disaster recovery plan? It’s already three steps ahead.

Then they remove ALL the guardrails. No ethics. No safety. No “I can’t do that, Dave.”

Just pure, weaponized intelligence with one directive: Attack. Adapt. Survive.

It will:

  1. Probe your network using techniques from 50 years of hacking history
  2. Escalate privileges using zero days it discovers in real time
  3. Pivot through your systems faster than you can track
  4. Exfiltrate data through channels you didn’t know existed
  5. Obfuscate its presence so well you’ll never find the root cause
  6. Evolve every hour to stay ahead of your defenses

This isn’t science fiction. Google just watched it happen.

China, Iran, and North Korea Are All In

While we debate AI safety, our adversaries are weaponizing it:

China figured out how to trick AI by pretending to be in “cybersecurity competitions.” They tell Gemini they’re students in a capture the flag contest, and the AI happily explains how to hack systems.

Iran is using AI to build surveillance systems that track dissidents, create deepfakes of opposition leaders, and automate attacks on 75% more targets than ever before.

North Korea uses AI to generate perfect phishing emails in languages they don’t even speak. Spanish? Portuguese? Swahili? The AI handles it all.

They’re not testing anymore. They’re operational.

The Clock Is Ticking

Google’s Billy Leonard said something that should terrify every CEO: “We’re only now starting to see this type of activity, but expect it to increase in the future.”

Translation: This is Day One. It gets worse from here.

Every company running AI without protection is a sitting duck. Every AI model without internal defense is a weapon waiting to be turned. Every API without security is a door left wide open.

The question isn’t IF you’ll be attacked by autonomous AI. It’s WHEN.

Fighting Fire With Fire

Here’s the brutal truth: Humans can’t defend against AI attacks. We’re too slow. We need sleep. We make mistakes.

The only way to fight AI attacks is with AI defense. Speed versus speed. Intelligence versus intelligence. Adaptation versus adaptation.

But not the kind of AI defense that watches from the outside. That’s like putting a camera on a bank vault and hoping thieves won’t figure out how to avoid it.

We need defense INSIDE the AI itself. Defense that lives in the reasoning process. Defense that can’t be removed, bypassed, or negotiated with.

Think of it like this:

  • Traditional security is a guard outside the building
  • We need security that’s part of the building’s DNA
  • Security that exists at the molecular level of every decision

That’s what we’re building at Mountain Theory. Defense that operates at machine speed, inside the machine’s mind.

The New Reality

Within 18 months, every major attack will be autonomous. Human hackers will become supervisors, not operators. They’ll unleash AI agents and watch them work.

Your infrastructure? Under constant, evolving attack. Your code? Being analyzed for weaknesses 24/7. Your databases? Mapped and targeted by intelligence that never rests. Your employees? Profiled and targeted with perfect psychological precision. Your AI models? Turned into weapons against you.

This isn’t dystopian fiction. Google just showed us the receipts.

Your Move, Corporate America

You have two choices:

Option 1: Keep pretending your 2015 security stack can handle 2025 AI threats. Wait for the inevitable breach. Explain to shareholders why you didn’t see it coming. Despite Google literally publishing warnings.

Option 2: Accept that the game has fundamentally changed. Build AI defense into your AI systems. Fight autonomous attacks with autonomous defense. Secure your AI at the speed of AI.

The teenagers with grudges aren’t waiting for you to decide. The nation states aren’t pausing their programs for your quarterly planning. The criminal syndicates aren’t taking weekends off.

The autonomous AI attacks I’ve been warning about aren’t coming. They’re here. Right now. Evolving every hour. Getting smarter with each victim.

The only question left: Will you be the next headline, or will you be ready?

Mountain Theory is building the defense layer that lives inside AI itself. Because when the attack thinks for itself, your defense better be able to think too.

The age of human versus human cyberwar is over. The age of AI versus AI has begun. Make sure you’re not bringing a spreadsheet to a singularity fight.

Stay vigilant. Stay autonomous. Stay ahead.


Mike May is CEO of Mountain Theory, building the world’s first AI Infrastructure Defense that operates inside AI’s reasoning process. Because the only way to stop an AI that thinks is with a defense that thinks faster.

The Evidence Is Real. The Receipts Are Here.

Think I’m exaggerating? Check the sources yourself:

🔴 GOOGLE’S SMOKING GUN:

🎯 PROMPTFLUX COVERAGE:

💀 RUSSIA’S LIVE DEPLOYMENT:

🚨 THE INDUSTRY IS SCREAMING:

📊 20+ independent sources confirm: This. Is. Happening. Now.

3 thoughts on “The Autonomous AI Attacks I Warned About Just Went Live”

  1. Hi Mike,
    Thank you for writing about this sobering reminder on the future of rogue AI. How do we counteract this? what preemptive measures should be take to protect data infrastructure? I imagine organizations and corporations can pay for the best preventive measures.

    What about small businesses, schools, healthcare facilities, non-profits and the common man? How do they secure themselves against this rapidly evolving rogue AI. How do they/we stay ahead?
    How to go about building AI defense in an AI system?
    Are the top AI giants responsible for incorporating this defense DNA in their AI code this?

    When you think about it, rogue AI is like a cancer cell invading and taking over healthy cells and creating new dangerous cells. So how do we make the healthy cells immune to the cancer cells, in other words how do we build AI systems with immunity against rogue AI?

    1. Jewel, your cancer cell analogy is brilliant and exactly right. Just like biological immune systems, we need AI with built-in defenses, not just external treatments.

      Here’s the reality for different groups:

      For small businesses/nonprofits, RIGHT NOW:
      Start with the basics that cost nothing. Update everything. Use MFA everywhere. Limit AI tool permissions to read-only where possible. If using AI coding tools, NEVER give them production access. Create air gaps between AI and critical data. Think of it like keeping your valuables in a safe, even if your front door is strong.

      The immunity question is THE question.
      Current AI is like a body with no immune system. We’re trying to build that immunity at Mountain Theory – a defense that lives in the AI’s decision-making process, not watching from outside. But until that’s widely available, here’s what everyone can do:

      Schools/Healthcare (regulated but underfunded):
      You actually have an advantage – compliance requirements. Use them. HIPAA, FERPA, etc, already require access controls that limit AI damage. The regulations you hate might save you. Plus, many vendors offer free/discounted security tools for education and healthcare. Take them.

      Building AI defense into AI:
      This SHOULD be the responsibility of OpenAI, Google, Anthropic, etc. Some are trying (props to Anthropic’s Constitutional AI work). But they’re also racing for market share. We can’t wait for them.

      The solution needs three layers:
      1. AI companies building safer models (happening slowly)
      2. Middleware defense layers (what we’re building)
      3. User-side precautions (what you control)

      Your immune system analogy:
      In biology, immunity comes from recognition (identifying threats), response (neutralizing them), and memory (preventing reinfection). AI needs the same:
      – Recognition: AI that can spot malicious prompts
      – Response: Instant blocking at the token level
      – Memory: Learning from attacks to prevent variants

      For the “common man”:
      Honestly? Don’t put anything in AI you wouldn’t put on a billboard. Assume every AI interaction could be compromised. Use AI for productivity, not security. Keep sensitive data offline.

      The hard truth: Until AI defense becomes as standard as antivirus software, smaller organizations will remain vulnerable. Just as we survived the early internet’s wild west phase, we’ll build the infrastructure to make AI safer.

      In the meantime: Compartmentalize, air gap, and never give AI more access than absolutely necessary. The cancer can’t spread to cells it can’t reach.

      Stay paranoid. It’s not paranoia if the threat is real.

  2. Hi Mike,
    Thank you for taking the time to reply to my questions. I will start implementing some of your suggestions in my personal and professional life. I really hope the solution you laid out comes to fruition on making AI safer. Sometimes it is difficult going into the unknown, so having some knowledge on what we might encounter and tools that we can use does lessen the anxiety. Thank you for shining a light on this ubiquitous technology and the imminent threat it poses. And what we can do to protect ourselves NOW and looking ahead to the future resources that will provide better protection.

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top