
February 14, 2026
Artificial intelligence is evolving at an extraordinary pace. What began as simple chat interfaces has rapidly transformed into autonomous AI agents capable of taking action on our behalf. These tools promise to eliminate repetitive tasks, streamline workflows, and dramatically improve productivity.
But as discussed in Episode 53, innovation without oversight can create serious vulnerabilities.
In this episode, Nick, Robert, and Adam unpack the rise of Clawbot, an AI personal assistant that quickly gained attention for its capabilities and, just as quickly, for its risks. What seemed like a breakthrough in automation revealed deeper questions about trust, security, and the future of AI agents in both personal and corporate environments.
Let’s break it down.
Clawbot (formerly known as Claudebot and later Maltbot) is an AI personal assistant designed to run locally on a user’s machine. Unlike cloud-based chatbots that simply generate responses, Clawbot integrates directly with messaging platforms like WhatsApp and Telegram and can execute tasks across multiple applications.
Its capabilities include sending and receiving messages on the user’s behalf, executing tasks across multiple applications, and acting on instructions without step-by-step supervision.
One of its most powerful features is persistent memory. This allows the AI to remember past interactions, maintain context over time, and operate continuously in the background. In theory, it’s the ultimate executive assistant: always on, always learning, always acting. But that level of autonomy requires deep system access. And deep access creates deep risk.
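To make that architecture concrete, here is a minimal sketch of how a locally running assistant with persistent memory might be structured. It is illustrative only: the class, method names, and JSON-file store are assumptions for this example, not Clawbot’s actual implementation.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

class PersistentMemory:
    """Append-only interaction log kept on local disk.

    A toy stand-in for the persistent memory described above; real
    agents typically layer retrieval and summarisation on top.
    """

    def __init__(self, path: str = "assistant_memory.json"):
        self.path = Path(path)
        self.entries = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def remember(self, role: str, text: str) -> None:
        # Every interaction is written to disk, so context survives
        # restarts and accumulates indefinitely.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "role": role,
            "text": text,
        })
        self.path.write_text(json.dumps(self.entries, indent=2))

    def recall(self, limit: int = 20) -> list[dict]:
        # Return the most recent interactions to rebuild context.
        return self.entries[-limit:]

memory = PersistentMemory()
memory.remember("user", "Book me a flight to Melbourne next Thursday.")
print(memory.recall())
```

Note what even this toy version makes obvious: everything the user ever tells the assistant ends up stored on disk, which is exactly why a compromised agent is so damaging.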

The attraction is obvious.
For busy professionals and executives, AI agents promise relief from the endless stream of administrative tasks that consume time and attention. Instead of manually responding to emails or coordinating travel logistics, users can simply issue a command and let the AI handle the details.
Imagine sending a message that says:
“Book me a flight to Melbourne next Thursday and reserve a hotel near the conference venue.”
And it happens automatically.
That convenience represents the next phase of AI adoption, moving from assistance to delegation.
But delegation requires trust. And trust requires security.
During the podcast discussion, Adam highlighted research that uncovered a troubling reality within Clawbot’s ecosystem.
Researchers discovered 386 malicious skills inside the platform’s skills repository.
These were not minor bugs; they were components built for malicious activity.
In addition, the environment reportedly contained a key-stealing application — a direct red flag for anyone concerned about system integrity.
This is where the narrative shifts.
An AI agent with persistent memory and system-level permissions is not just another app. It becomes an embedded operator inside your digital environment. If compromised, it can act with the same access and authority as the user.
That’s no longer convenience. That’s exposure.
Clawbot is not just a standalone case study. It represents a broader shift in how AI tools function.
We are transitioning from AI as advisor to AI as autonomous operator.
When AI agents can execute actions, access messaging platforms, manage credentials, and operate continuously in the background, they effectively become insiders within your system.
And insiders — whether human or digital — carry significant risk.
Robert emphasised the importance of understanding what users are installing on their networks. Many professionals adopt productivity tools without fully assessing how they interact with internal systems, APIs, or corporate credentials.
This creates two major vulnerabilities: unvetted tools gain access to internal systems and credentials, and that access often goes unmonitored.
In short, you’re not just installing software. You’re granting operational authority.
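To see what that authority means in practice, the snippet below shows what any process launched under a user account can already reach by default. The paths are common examples of sensitive locations, not Clawbot-specific behaviour.

```python
import os
from pathlib import Path

# Any process running as the user, an AI agent included, inherits
# this access by default. No exploit is required.
home = Path.home()

sensitive = [
    home / ".ssh",     # SSH private keys
    home / ".aws",     # cloud credentials
    home / ".config",  # application tokens and settings
]

for path in sensitive:
    if path.exists():
        print(f"readable: {path}")

# Environment variables frequently carry secrets as well.
for name in os.environ:
    if "KEY" in name or "TOKEN" in name:
        print(f"exposed env var: {name}")
```

An agent that can run code, or load a malicious skill that does, gets all of this for free.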
One of the most important themes from Episode 53 was the concept of shadow AI.
Shadow AI occurs when employees adopt AI tools without formal approval from IT or security teams. From the employee’s perspective, the tool improves efficiency. From a security standpoint, it may introduce an unmonitored entry point into the network.
As AI assistants become more powerful, organisations must rethink their approach to governance.
This includes knowing which AI tools are in use, establishing formal approval processes, and monitoring what those tools can access once deployed.
Traditional antivirus models were not built for autonomous AI agents with persistent memory and cross-platform access.
This is a new category of threat.
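As a small illustration of what such governance can look like, the sketch below audits an inventory of installed tools against an organisation’s approved register. The tool names and the register itself are hypothetical.

```python
# Hypothetical approved register; in practice this would live in a
# managed policy store, not in source code.
APPROVED_AI_TOOLS = {"copilot-enterprise", "internal-summariser"}

def audit(installed_tools: list[str]) -> list[str]:
    """Return installed AI tools that lack formal approval."""
    return [t for t in installed_tools if t not in APPROVED_AI_TOOLS]

inventory = ["copilot-enterprise", "clawbot", "unknown-agent"]
for tool in audit(inventory):
    print(f"shadow AI candidate: {tool}")
```

Inventory alone is not governance, but you cannot govern what you cannot see.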
Episode 53 is not anti-AI. It is pro-responsibility.
AI agents have extraordinary potential to increase productivity and streamline operations. But capability without governance creates risk faster than value.
Clawbot demonstrates a critical turning point in AI adoption:
We can no longer treat AI assistants like simple productivity apps.
They are embedded operators: always on, deeply integrated, and able to act with the user’s full access and authority.
As AI evolves from passive responder to active executor, organisations must implement governance, approval, and monitoring controls that match this new level of autonomy.
The future of AI is autonomous. The question is whether our security practices evolve just as quickly.
Clawbot is more than a cautionary tale about one platform. It is a preview of the challenges that come with autonomous AI.
As these systems become more capable, the stakes increase. The productivity upside is enormous — but so is the potential damage if something goes wrong.
The key lesson from Episode 53 is simple:
Innovation must be balanced with responsibility.
Before asking, “What can this AI do for me?”, we must also ask, “What could it do without my knowledge?”
That’s the conversation every executive, technologist, and security leader needs to be having now.