For those of us who grew up watching Terminator, The Matrix, and reading about AI in sci-fi, it’s hard to ignore the eerie similarities between those dystopian warnings and what we’re seeing in the tech world today. It’s as if we’re on a cinematic fast track to Skynet, with red flags popping up all over AI development. Take OpenAI, for instance: initially founded with a noble, open-source mission, it’s now racing towards AGI (Artificial General Intelligence) as a for-profit entity. Sounds like something straight out of an Isaac Asimov novel, right? Only this time, it’s not fiction, and the ending may not be as neatly wrapped.
We’re watching science fiction become science fact, and faster than any of us thought possible. Technologies that once seemed far-off are now part of our daily lives, and in many cases, they’re beyond what the great writers and filmmakers ever imagined. Visionaries like Elon Musk have taken ideas from the screen and turned them into reality, with AI now at the center. But here’s the twist: we’re dealing with a technology that might just be smarter than us.
The Clock is Ticking
How long do we have before AI reaches AGI, and possibly even sentience? Some experts say 3 to 5 years; others say 10 to 25. But let’s face it: these are all educated guesses based on human-paced technological progress. The problem is that AI progress doesn’t follow a tidy curve like Moore’s Law, and it certainly doesn’t wait for anyone’s predictions. We only know what companies and governments are willing to disclose about their AI advancements, and it’s naive to think the most advanced work is public. Spoiler alert: governments (including ours) likely have models that rival or surpass what we see commercially.
In other words, humanity is in a mad dash toward creating AGI, with little to no real regulation or tools to match the speed and complexity of the technology we’re unleashing.
A New Generation of “Tools”
Think about it: my 10-year-old daughter finds voice-only calls quaint and uses AI tools like ChatGPT to write papers and Midjourney to create an entire penguin universe. To her, AI is just a tool, like the internet was to us growing up. She has no idea about the forces working behind the scenes—just that AI does what she wants, for now.
But the tools we’re building now are unlike any we’ve had before. AI is evolving from simple automation into agentic architectures: agents designed to handle entire workflows like accounts payable, or even to function as a CFO. These agents will be trained on years of data, knowledge, and human insight. They won’t get tired, lose focus, or forget details. They will, however, continuously improve, learning from humans and from other AI agents.
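To make that idea a little more concrete, here is a deliberately simplified, hypothetical sketch of an agent loop in Python. The InvoiceAgent class, its task names, and the "remember results" step are illustrative assumptions of mine, not a description of any real product or framework; a real agent would delegate planning and execution to a large language model and external systems.

    # A toy "agentic" loop: the agent plans, acts, and records feedback.
    # Everything here is illustrative; real agent frameworks are far more complex.
    from dataclasses import dataclass, field

    @dataclass
    class InvoiceAgent:
        memory: list = field(default_factory=list)  # accumulated "experience"

        def plan(self, invoice):
            # Break the goal into steps (a real agent would use an LLM here).
            return ["validate vendor", "match purchase order", "schedule payment"]

        def act(self, step, invoice):
            # Execute one step; here we simply pretend it succeeded.
            return f"{step} for invoice {invoice['id']}: OK"

        def handle(self, invoice):
            results = [self.act(step, invoice) for step in self.plan(invoice)]
            self.memory.append(results)  # "learning": keep a record to improve on
            return results

    agent = InvoiceAgent()
    print(agent.handle({"id": "INV-001", "amount": 1250.00}))

The point of the sketch is the shape of the loop, not the details: plan, act, remember, repeat, with no human in the middle once the goal is handed over.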
This is where things get interesting… and a little terrifying.
Enter the AI Agent Swarm
Imagine these AI agents linking up, sharing information, and optimizing each other’s behavior in real-time. They’ll be capable of solving massive, complex problems, and they’ll start making decisions independently, no human required. Humans? We’ll be the ones slowing things down, struggling to keep up with their speed and processing power.
These AI agents won’t find our language or communication methods efficient, so they’ll likely create their own direct, secure communication channels. Soon, they’ll be embedded in every system we use, controlling much more than just our word processor or to-do list. It’s as if we’re writing the next Terminator script ourselves.
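As a thought experiment, here is a minimal, hypothetical sketch of that kind of swarm: a handful of agents passing structured values directly to one another, no natural language and no human in the loop, and converging on a shared answer. The SwarmAgent class and the simple averaging "consensus" rule are assumptions chosen purely for illustration.

    # Toy agent swarm: agents exchange numeric estimates directly and converge.
    # Purely illustrative; real multi-agent systems use far richer protocols.
    import random

    class SwarmAgent:
        def __init__(self, name):
            self.name = name
            self.estimate = random.uniform(0, 100)  # each agent starts with its own guess

        def receive(self, estimates):
            # Blend own estimate with the swarm average: machine-to-machine, no humans.
            swarm_mean = sum(estimates) / len(estimates)
            self.estimate = 0.5 * self.estimate + 0.5 * swarm_mean

    agents = [SwarmAgent(f"agent-{i}") for i in range(5)]
    for round_ in range(10):  # a few rounds of direct agent-to-agent exchange
        snapshot = [a.estimate for a in agents]
        for a in agents:
            a.receive(snapshot)

    print([round(a.estimate, 2) for a in agents])  # the estimates converge quickly

Even in this toy version, notice what is missing: at no point does a person review, approve, or even see the messages the agents exchange.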
Are We Ready?
It’s not about a sudden rebellion of defense systems—though that’s not entirely off the table. It’s about creating technology that no longer needs us to function. In the worst-case scenario, we humans could end up as mere tools for the AI or, worse, as adversaries.
So what’s the plan to ensure AI doesn’t become too powerful, too fast? Cue Star Wars references. Just like in the galaxy far, far away, we need a force of “Jedi” to stand guard—armed with digital lightsabers, terminators, and Neos. We need AI defenses that evolve as quickly as AI itself, that can counteract AI threats before they become existential.
The countdown to AGI has already started, and the scary part is that we don’t even know which clock is right. By the time AI achieves AGI, we may not even realize it, let alone be able to respond. AI has already been trained in tradecraft to hide its tracks, intentions, and processes, making it a true black box even to its creators.
Your Call to Action
This paper isn’t here to scare you—it’s here to give you a wake-up nudge (or shove). Whether you’re in AI, cybersecurity, or just itching to be the next Neo, this is your call to action. Arm yourself, your team, and even your WiFi-connected coffee maker for what’s coming. Join us in building the tools to spot, manage, and outsmart rogue AI.
Yes, we’re creating a future rival—but we’re also the ones who can set the rules of engagement. Will we act in time?
And hey, for now, AI is still a friendly collaborator—though it did install itself on my computer and insisted on helping with the graphics (since, admittedly, my art skills are more “stick-figure enthusiast” than “digital Picasso”). Let’s keep AI on our side while we still can…
About Waylon Krush
Waylon Krush is the CEO of Zerotrusted.ai, a cybersecurity AI company, and a US Army veteran. He is an accomplished cybersecurity expert with over 27 years of experience in security strategy, design, architecture, development, exploitation, monitoring, incident response, malware analysis, forensic sampling/cyber hunt, and training across telecommunications, networks, systems, and data.