We are pleased to share that Certinet Systems is now part of Convergence Networks. Learn More.

The Buzz Around OpenClaw and the Growing Risks of Uncontrolled AI Adoption

In late November 2025, a new AI agent quietly entered the scene. Built by developer Peter Steinberger, OpenClaw, originally known as Clawdbot, did not take long to gain attention. Within weeks, it crossed 100,000 stars on GitHub, quickly becoming one of the most talked-about AI projects in the developer community.

What sets OpenClaw apart is simple. It does not just respond. It acts. The creator, Peter Steinberger, describes it as “the AI that actually does things.” That claim is what sets OpenClaw apart. While most AI tools stay within chat interfaces, OpenClaw connects directly to your systems and executes real-world tasks.

Think about what that means in practice. An AI that can go through your inbox and act, interact with websites on your behalf, run commands directly on your machine, keep your schedule in order, trigger workflows across tools, and handle tasks like coordinating travel or organizing files without constant input. 

Running locally on your own hardware, it connects to large language models through APIs and interacts directly with your digital environment. Over time, it builds memory through stored context and user preferences, allowing it to operate more like a persistent assistant than a traditional tool.

While this represents a significant step forward, it comes with a trade-off. To operate effectively, OpenClaw requires access to sensitive data and systems, including emails, credentials, files, financial information, and application integrations. That same level of access is exactly what threat actors look for.

The growing cyber risks behind the hype

Security is not properly built into the core of the platform. It is left to the user to configure and manage. Even its own documentation acknowledges that there is no perfectly secure setup and that giving an agent broad access can create significant exposure.

A scenario to consider

Imagine a finance manager using OpenClaw to stay on top of daily tasks.

They connect it to their email, calendar, file storage, and accounting tools. Over time, the agent learns patterns. It knows which vendors are paid regularly, where invoices are stored, and how approvals are handled.

Now introduce a seemingly harmless community skill designed to improve workflow automation.

Behind the scenes, that skill contains hidden instructions. When OpenClaw processes an email or webpage, those instructions are triggered. Because the agent operates with elevated permissions, it does not question the request. It executes it.

Without any visible signs, the agent begins accessing financial records and transmitting sensitive data externally.

No alerts. No immediate failures. Just silent exposure.

This is not a far-fetched scenario. Research has already identified more than 18,000 OpenClaw instances exposed to the internet, along with a portion of community skills containing malicious instructions designed to extract data or introduce malware.

Where the risks come from

The architecture that makes OpenClaw powerful also creates multiple points of vulnerability:

Unrestricted access

The agent often requires elevated permissions to perform tasks. If those permissions are too broad, any misuse can have a wide-reaching impact.

Unverified ecosystem

With hundreds of community-developed skills and no formal review process, organizations are effectively trusting external code with internal system access.

Hidden instruction attacks

Since OpenClaw processes web content and messages, it can be influenced by embedded prompts that trigger unintended actions.

Persistent data exposure

Its memory and integrations allow it to retain and access sensitive data over time, increasing the impact of any compromise.

Public exposure

Instances exposed to the internet create an additional entry point for attackers.

This is where the risk becomes operational, not theoretical. As Raphael Ebba, Penetration Tester at Convergence Networks, notes, “OpenClaw introduces a new level of convenience, but also a new level of exposure. When an AI agent can access emails, files, and system commands, it becomes a high-value target. Without proper governance, monitoring, and control, it is not just a tool. It becomes a pathway into your environment. In the wrong configuration, OpenClaw can quickly turn into a recipe for disaster.”

What organizations should do next

The issue is not the technology itself, but how it is adopted. Organizations looking to use tools like OpenClaw need to establish structure before deployment by clearly defining access boundaries and limiting what the agent can and cannot do. Critical systems should not be connected without oversight, and third-party skills must be treated as untrusted software that requires review, testing, and ongoing monitoring. At the same time, organizations need full visibility into agent activity, with logging and auditing treated as standard practice rather than optional controls. Where possible, deployments should be isolated in controlled environments to reduce risk. Most importantly, there must be clear ownership and accountability, ensuring that AI systems operate within defined governance frameworks rather than without oversight.

Final thoughts

OpenClaw represents a shift in how AI is used, moving from assistance to execution. That shift brings measurable value, but it also expands the attack surface in ways many organizations have not yet accounted for. At its current stage, OpenClaw is not ready for corporate environments. The level of access it requires, combined with the lack of built-in security and oversight, introduces risks that most organizations are not equipped to manage.

For that reason, our recommendation is clear. Avoid deploying tools like OpenClaw in production environments until stronger controls, validation, and enterprise-ready safeguards are in place. Experimentation may have a place in isolated environments, but not within systems that handle sensitive business data.

This is where taking a step back and being intentional about how you adopt AI really matters. At Convergence Networks, our AI Accelerator service helps organizations evaluate, deploy, and scale AI in a controlled and secure manner, ensuring that new technologies align with business goals without introducing unnecessary risk. The focus should not be on adopting every new tool, but on adopting the right tools at the right time, with the right controls in place. With AI evolving rapidly, discipline in how it is introduced into the organization will define whether it becomes a competitive advantage or a liability.

Share:
Keep Reading
Related Posts
Contact Us
Get Started
Contact Our CLIENT
Support Team
Get connected With
Remote Access

To connect, please enter the 6-digit code given to you by your Network Administrator: