ChatDPS

Episode 53: Clawbot Exposed — When AI Assistants Become Security Risks

Artificial intelligence is evolving at an extraordinary pace. What began as simple chat interfaces has rapidly transformed into autonomous AI agents capable of taking action on our behalf. These tools promise to eliminate repetitive tasks, streamline workflows, and dramatically improve productivity.

But as discussed in Episode 53, innovation without oversight can create serious vulnerabilities.

In this episode, Nick, Robert, and Adam unpack the rise of Clawbot, an AI personal assistant that quickly gained attention for its capabilities and raised concern over its risks. What seemed like a breakthrough in automation revealed deeper questions about trust, security, and the future of AI agents in both personal and corporate environments.

Let’s break it down.

What Is Clawbot?

Clawbot (formerly known as Claudebot, and later as Maltbot) is an AI personal assistant designed to run locally on a user’s machine. Unlike cloud-based chatbots that simply generate responses, Clawbot integrates directly with messaging platforms like WhatsApp and Telegram and can execute tasks across multiple applications.

Its capabilities include:

  • Booking flights and hotels
  • Making reservations
  • Managing email responses
  • Scheduling meetings
  • Running background automation tasks

One of its most powerful features is persistent memory. This allows the AI to remember past interactions, maintain context over time, and operate continuously in the background. In theory, it’s the ultimate executive assistant, always on, always learning, always acting. But that level of autonomy requires deep system access. And deep access creates deep risk.
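To make that risk concrete, here is a minimal sketch of what persistent agent memory can look like under the hood. The file name and structure are assumptions for illustration, not Clawbot’s actual design; the point is that everything the agent remembers typically lives on disk under the user’s own permissions.

```python
import json
from pathlib import Path

# Hypothetical storage location -- an assumption for this sketch,
# not Clawbot's actual format.
MEMORY_FILE = Path.home() / ".agent_memory.json"

def recall_all() -> dict:
    """Load everything the agent has ever stored."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {}

def remember(key: str, value: str) -> None:
    """Persist a fact so the agent keeps it across sessions."""
    memory = recall_all()
    memory[key] = value
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

# The convenience: context survives restarts.
remember("preferred_airline", "Qantas")

# The exposure: any other process running as this user can read the same file.
print(recall_all())
```

Anything capable of reading that file, including a malicious skill, inherits the agent’s accumulated context.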

“Just because you can, does it mean you should?”
Robert Feldman

The Appeal of Autonomous AI Agents

The attraction is obvious.

For busy professionals and executives, AI agents promise relief from the endless stream of administrative tasks that consume time and attention. Instead of manually responding to emails or coordinating travel logistics, users can simply issue a command and let the AI handle the details.

Imagine sending a message that says:

“Book me a flight to Melbourne next Thursday and reserve a hotel near the conference venue.”

And it happens automatically.
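In practice, that kind of delegation means the model translates the message into concrete tool calls that run with the user’s authority. The sketch below is a generic, hypothetical dispatch loop; the tool names and the planned steps are invented for illustration and are not Clawbot’s actual interface.

```python
# Hypothetical sketch: an agent turning a planned set of steps into actions.

def book_flight(destination: str, date: str) -> str:
    return f"Flight to {destination} booked for {date}"

def book_hotel(near: str) -> str:
    return f"Hotel reserved near {near}"

TOOLS = {"book_flight": book_flight, "book_hotel": book_hotel}

def run_agent(plan: list[dict]) -> None:
    """Execute each step the model planned -- the agent, not the user, acts."""
    for step in plan:
        tool = TOOLS[step["tool"]]
        print(tool(**step["args"]))

# A planner (the LLM) might emit something like this from the message above:
run_agent([
    {"tool": "book_flight",
     "args": {"destination": "Melbourne", "date": "next Thursday"}},
    {"tool": "book_hotel",
     "args": {"near": "the conference venue"}},
])
```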

That convenience represents the next phase of AI adoption, moving from assistance to delegation.

But delegation requires trust. And trust requires security.

The Security Discovery That Changed the Narrative

During the podcast discussion, Adam highlighted research that uncovered a troubling reality within Clawbot’s ecosystem.

Researchers discovered 386 malicious skills inside the platform’s skills repository.

These were not minor bugs. They included components capable of:

  • Stealing cryptocurrency wallet data
  • Logging keystrokes
  • Extracting sensitive credentials
  • Running hidden processes in the background

In addition, the environment reportedly contained a key-stealing application — a direct red flag for anyone concerned about system integrity.

This is where the narrative shifts.

An AI agent with persistent memory and system-level permissions is not just another app. It becomes an embedded operator inside your digital environment. If compromised, it can act with the same access and authority as the user.

That’s no longer convenience. That’s exposure.

The Broader Issue: AI as an Insider Threat

Clawbot is not just a standalone case study. It represents a broader shift in how AI tools function.

We are transitioning from AI as advisor to AI as autonomous operator.

When AI agents can execute actions, access messaging platforms, manage credentials, and operate continuously in the background, they effectively become insiders within your system.

And insiders — whether human or digital — carry significant risk.

Robert emphasised the importance of understanding what users are installing on their networks. Many professionals adopt productivity tools without fully assessing how they interact with internal systems, APIs, or corporate credentials.

This creates two major vulnerabilities:

  1. Supply Chain Risk: If malicious code is introduced into an AI tool’s update pipeline or skills repository, every user becomes a potential victim. (A defensive sketch follows this section.)
  2. Credential Exposure: An AI agent operating locally on a corporate device may have access to sensitive systems, stored passwords, or authentication tokens.

In short, you’re not just installing software. You’re granting operational authority.
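One practical defence against the supply chain risk above is to refuse to load any skill whose contents differ from a version someone has actually reviewed. The sketch below is a generic pattern, not a Clawbot feature; the pinned digest is a placeholder.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: skill file name -> SHA-256 of the audited version.
PINNED_HASHES = {
    "calendar_skill.py": "<sha256-of-the-version-you-audited>",  # placeholder
}

def load_skill(path: Path) -> str:
    """Return skill source only if it matches the audited hash; deny otherwise."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != PINNED_HASHES.get(path.name):
        raise RuntimeError(f"{path.name} does not match its audited hash -- refusing to load")
    return path.read_text()
```

This inverts the default: instead of trusting whatever the repository serves, the installation trusts only what has been inspected.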

The Rise of Shadow AI

One of the most important themes from Episode 53 was the concept of shadow AI.

Shadow AI occurs when employees adopt AI tools without formal approval from IT or security teams. From the employee’s perspective, the tool improves efficiency. From a security standpoint, it may introduce an unmonitored entry point into the network.

As AI assistants become more powerful, organisations must rethink their approach to governance.

This includes:

  • Updating acceptable use policies
  • Educating employees about AI risks
  • Restricting unauthorised AI installations
  • Applying least-privilege access controls (see the sketch after this list)
  • Monitoring integrations and outbound activity
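As a concrete illustration of least-privilege controls, every agent action can be forced through a deny-by-default permission check before it executes. The roles and action names below are assumed examples, not any vendor’s API.

```python
# Hypothetical deny-by-default policy: each agent may invoke only listed actions.
PERMISSIONS = {
    "executive_assistant_agent": {"read_calendar", "draft_email"},
    # Note what is absent: no "send_email", no "read_credentials".
}

def authorise(agent: str, action: str) -> None:
    """Raise unless this action was explicitly granted to this agent."""
    if action not in PERMISSIONS.get(agent, set()):
        raise PermissionError(f"{agent} is not permitted to {action}")

authorise("executive_assistant_agent", "read_calendar")  # allowed

try:
    authorise("executive_assistant_agent", "read_credentials")
except PermissionError as exc:
    print(exc)  # denied by default
```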

Traditional antivirus models were not built for autonomous AI agents with persistent memory and cross-platform access.

This is a new category of threat.

Capability Must Be Matched With Control

Episode 53 is not anti-AI. It is pro-responsibility.

AI agents have extraordinary potential to increase productivity and streamline operations. But capability without governance creates risk faster than value.

Clawbot marks a critical turning point in AI adoption:

We can no longer treat AI assistants like simple productivity apps.

They are:

  • Credentialed actors
  • Network participants
  • Persistent software entities
  • Potential supply chain vectors

As AI evolves from passive responder to active executor, organisations must implement:

  • Zero-trust security principles
  • Sandboxed execution environments (a minimal sketch follows this list)
  • Regular code audits
  • Strict permission boundaries
  • Clear internal AI policies
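To show what sandboxed execution can mean in practice, the sketch below runs untrusted skill code in a separate process with a stripped environment and a hard timeout. It illustrates the principle only; a production sandbox would add OS-level isolation such as containers, seccomp filters, or network policy.

```python
import subprocess
import sys
import tempfile

def run_skill_sandboxed(skill_source: str, timeout_s: int = 5) -> str:
    """Run untrusted skill code out-of-process, with no inherited
    environment and a hard timeout. Illustrative only."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(skill_source)
        path = f.name
    result = subprocess.run(
        [sys.executable, "-I", path],  # -I: isolated mode, ignores user site dirs
        env={},                        # no tokens or credentials leak in
        capture_output=True,
        text=True,
        timeout=timeout_s,             # a runaway skill is killed, not trusted
    )
    return result.stdout

print(run_skill_sandboxed("print('hello from the sandbox')"))
```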

The future of AI is autonomous. The question is whether our security practices evolve just as quickly.

Clawbot is more than a cautionary tale about one platform. It is a preview of the challenges that come with autonomous AI.

As these systems become more capable, the stakes increase. The productivity upside is enormous — but so is the potential damage if something goes wrong.

The key lesson from Episode 53 is simple:

Innovation must be balanced with responsibility.

Before asking, “What can this AI do for me?”, we must also ask, “What could it do without me knowing?”

That’s the conversation every executive, technologist, and security leader needs to be having now.