By HowDoIUseAI Team

How to build a secure OpenClaw alternative with Claude Code

Build your own AI assistant using Claude Code and avoid the security risks that plague OpenClaw's public skills marketplace.

OpenClaw exploded onto the scene faster than any AI project in recent history, climbing from 9,000 to over 60,000 GitHub stars in just a few days. But here's what everyone's talking about behind closed doors: a security audit conducted in late January 2026, back when OpenClaw was still known as Clawdbot, identified 512 vulnerabilities, eight of which were classified as critical.

The promise of an AI assistant that actually gets things done is real. But the security nightmare that comes with it? That's real too.

This guide will show you how to build your own secure alternative using Claude Code. You'll get the same powerful automation capabilities without exposing yourself to the hundreds of malicious skills flooding OpenClaw's marketplace.

What makes OpenClaw so appealing but dangerous?

OpenClaw is a personal AI assistant you run on your own devices. It answers you on the channels you already use (WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, WebChat), plus extension channels like BlueBubbles, Matrix, Zalo, and Zalo Personal.

The appeal is obvious: finally, an AI that doesn't just chat but actually does things. It has captured developer attention by offering a "24/7 Jarvis" experience where a self-hosted AI can proactively reach out to users and execute autonomous tasks across multiple messaging apps.

But here's the catch: that experience requires broad, system-wide privileges, and this creates a massive attack surface. If a single malicious skill is loaded, it inherits those permissions, effectively granting the attacker the same level of access as the agent itself.

The numbers are sobering:

  • A security analysis of 3,984 skills on the ClawHub marketplace has found that 283 skills, about 7.1% of the entire registry, contain critical security flaws that expose sensitive credentials in plaintext
  • Cisco's research found that 26% of the 31,000 agent skills they analyzed contained at least one vulnerability
  • Initial scans by Bitdefender Labs' AI Skills Checker revealed almost 900 malicious skills, representing nearly 20% of total packages

Why is Claude Code a safer foundation?

Claude Code takes a fundamentally different approach to security. Instead of running with system-wide permissions, it starts with strict read-only access by default. When additional actions are needed (editing files, running tests, executing commands), it requests explicit permission, and you control whether to approve an action once or allow it automatically.

The security model is built around permission boundaries:

Write access restriction: Claude Code can only write to the folder where it was started and its subfolders—it cannot modify files in parent directories without explicit permission.

Trust verification: First-time codebase runs and new MCP servers require trust verification. Note that trust verification is disabled when running non-interactively with the -p flag.

Command injection detection: Suspicious bash commands require manual approval even if they were previously allowlisted.

Fail-closed matching: Unmatched commands default to requiring manual approval.

Compare this to OpenClaw's approach: users regularly choose functionality over the principle of least privilege, simply granting the agent broad permissions like "Full Disk Access" or "Terminal Access".

How do you set up Claude Code for AI assistant tasks?

Getting started with Claude Code as your secure AI assistant foundation is straightforward. Here's the step-by-step process:

Step 1: Install Claude Code

First, get Claude Code set up on your system. Visit the official Claude Code documentation for installation instructions specific to your platform.

For VS Code users, install the Claude Code extension directly from the marketplace. For command-line enthusiasts, use the CLI version that works across all platforms.

Step 2: Configure security boundaries

This is where you diverge from OpenClaw's permissive approach. Set up your security configuration in your project's .claude directory:

Create a settings.json file that defines what Claude can do, what it must ask permission for, and what it can never touch. Treat it like your firewall rules. (On centrally managed machines, the same policies can also be enforced through a system-level managed-settings.json file.)

Example secure configuration:

{
  "permissions": {
    "allow": ["Read(./src/**)"],
    "ask": ["Write(./src/**)", "Bash(git status)", "Bash(npm test)"],
    "deny": ["Read(~/.ssh/**)", "Read(./secrets/**)", "WebFetch", "Bash(curl:*)"]
  }
}
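
With this configuration, Claude can read anything under ./src freely, must ask before writing to source files or running git status or npm test, and is blocked outright from SSH keys, the secrets directory, web fetches, and curl.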

Step 3: Build your custom skills safely

Instead of downloading random skills from a marketplace, build your own. The rules you write encode security expertise once, up front, so you don't have to be a security expert every time you add a capability.

Create a CLAUDE.md file in your project root to define your assistant's capabilities:

# Personal Assistant Rules

## Email Management
- Only read emails from specific folders
- Never send emails without explicit confirmation
- Always strip sensitive information from responses

## File Operations
- Restricted to project directories only
- Require confirmation for file deletions
- Auto-backup before major changes

## System Commands
- Whitelist safe commands only
- Block network requests by default
- Log all executed commands

What security measures should you implement?

Sandboxing your environment

Run Claude Code inside a VM or a containerized dev environment, and never run it as root: an AI agent should never have admin powers.

Set up a dedicated container or virtual machine for your AI assistant. This limits the blast radius if something goes wrong. Use Docker or similar containerization to isolate the assistant from your main system.
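
If you script the launch, a minimal sketch might look like the following. It assumes Docker is installed and that you have built a local image (called claude-sandbox here, purely illustrative) with Claude Code preinstalled; only the project directory is mounted and the agent never runs as root.

import subprocess
from pathlib import Path

project = Path.cwd()

# Launch a disposable container: non-root user, no extra capabilities,
# and only the current project directory visible from the host.
subprocess.run(
    [
        "docker", "run", "--rm", "-it",
        "--user", "1000:1000",          # never run the agent as root
        "--cap-drop", "ALL",            # drop all Linux capabilities
        "-v", f"{project}:/workspace",  # the project dir is the only host path mounted
        "-w", "/workspace",
        "claude-sandbox",               # illustrative image name with Claude Code preinstalled
        "claude",                       # start Claude Code inside the sandbox
    ],
    check=True,
)

Outbound network access is deliberately left alone here because Claude Code still needs to reach its API; the next section covers restricting egress to trusted services.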

Network restrictions

Restrict the filesystem so the assistant cannot reach sensitive paths such as ~/.ssh/ or ~/Secrets/, and manage secrets through a vault rather than plaintext .env files.

Configure your firewall to block outbound connections except to trusted services. Use a secrets manager like AWS Secrets Manager or HashiCorp Vault instead of storing API keys in plain text.
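
As a minimal sketch, assuming boto3 is installed, AWS credentials are already configured, and "assistant/api-key" is just an illustrative secret name, fetching a key at runtime looks like this:

import boto3

def get_api_key(secret_id: str = "assistant/api-key") -> str:
    """Fetch a secret at runtime instead of storing it in a plaintext .env file."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return response["SecretString"]

api_key = get_api_key()  # use immediately; avoid writing the value back to disk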

Regular security audits

Audit monthly: review your settings.json (and any managed-settings.json policies) for drift, and test configuration changes in a safe environment before rolling them out to production workstations.

Schedule regular reviews of your configuration files. Security settings have a way of getting relaxed over time as you add new capabilities.
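
A drift check can be as simple as comparing the live settings against a reviewed baseline you keep in version control. The baseline path below is illustrative:

import json
from pathlib import Path

baseline = json.loads(Path("security/baseline-settings.json").read_text())
current = json.loads(Path(".claude/settings.json").read_text())

# Compare each permission list rule-by-rule and report anything that drifted.
for key in ("allow", "ask", "deny"):
    base_rules = set(baseline.get("permissions", {}).get(key, []))
    cur_rules = set(current.get("permissions", {}).get(key, []))
    for rule in sorted(cur_rules - base_rules):
        print(f"added to {key}: {rule}")
    for rule in sorted(base_rules - cur_rules):
        print(f"removed from {key}: {rule}")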

How do you build the core functionality?

Memory and context management

One of OpenClaw's key features is persistent memory. You can replicate this safely using Claude Code's project context system.

Create a structured memory system using markdown files in your project directory (a small append-only helper is sketched after the list):

  • memory/contacts.md for people and relationships
  • memory/preferences.md for your personal preferences
  • memory/workflows.md for common tasks and procedures
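
A small append-only helper (not part of Claude Code, just an illustration) keeps memory updates dated and prevents the assistant from silently rewriting its own history:

from datetime import date
from pathlib import Path

MEMORY_DIR = Path("memory")

def remember(category: str, note: str) -> None:
    """Append a dated note to memory/<category>.md without touching existing entries."""
    MEMORY_DIR.mkdir(exist_ok=True)
    target = MEMORY_DIR / f"{category}.md"
    with target.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today().isoformat()}: {note}\n")

remember("preferences", "Prefers meeting summaries as bullet points, not prose")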

Messaging integration

Instead of giving your assistant direct access to messaging apps, build a secure API layer. Create webhook endpoints that sanitize inputs and log all interactions.
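
A minimal version of that layer, sketched with Flask (an assumed dependency) and an illustrative endpoint path, might look like this:

import html
import logging
from flask import Flask, request, jsonify

app = Flask(__name__)
logging.basicConfig(filename="assistant-interactions.log", level=logging.INFO)

MAX_MESSAGE_LENGTH = 2000

def sanitize(text: str) -> str:
    """Escape markup and cap the length of untrusted input before it reaches the agent."""
    return html.escape(text)[:MAX_MESSAGE_LENGTH]

@app.route("/webhook/message", methods=["POST"])
def incoming_message():
    payload = request.get_json(force=True, silent=True) or {}
    sender = sanitize(str(payload.get("sender", "unknown")))
    message = sanitize(str(payload.get("text", "")))

    # Every interaction is logged before anything is handed to the assistant.
    logging.info("from=%s message=%r", sender, message)

    # Hand the sanitized text to your assistant pipeline here (not shown).
    return jsonify({"status": "queued", "chars": len(message)})

if __name__ == "__main__":
    app.run(port=8080)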

Use Claude Code's MCP (Model Context Protocol) servers for integrations, but either write your own MCP servers or use ones from providers you trust.

Automation workflows

Build your automation capabilities incrementally. Start with read-only operations like summarizing emails or analyzing documents, then gradually add write permissions for specific, well-defined tasks.

Example secure workflow (a code sketch of this loop follows the steps):

  1. Email arrives with specific subject pattern
  2. Assistant extracts key information (read-only)
  3. Assistant drafts response but requires approval
  4. User reviews and approves before sending
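
A stripped-down sketch of that loop, where extract_key_info, draft_reply, and send_reply are placeholders for your own email integration:

def extract_key_info(email_body: str) -> str:
    """Read-only step: pull out the facts the reply needs."""
    return email_body[:500]  # in practice, the assistant summarizes here

def draft_reply(key_info: str) -> str:
    """Draft a response; nothing is sent at this stage."""
    return f"Thanks for your message. Here's what I understood:\n{key_info}"

def send_reply(draft: str) -> None:
    raise NotImplementedError("wire this to your mail provider")

def handle_incoming(email_body: str) -> None:
    draft = draft_reply(extract_key_info(email_body))
    print("--- DRAFT (nothing sent yet) ---")
    print(draft)
    # The human stays in the loop: nothing goes out without explicit approval.
    if input("Send this reply? [y/N] ").strip().lower() == "y":
        send_reply(draft)
    else:
        print("Draft discarded; nothing was sent.")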

What advanced features can you add safely?

Voice integration

Where voice input is available on your platform, set up voice commands for common tasks while maintaining the same security boundaries. Always require verbal confirmation for destructive actions.

Smart scheduling

Build a scheduling assistant that can read your calendar and suggest meeting times, but requires explicit approval before making any changes. This gives you the productivity benefits without the risk of unauthorized calendar modifications.

Document processing

Create workflows for processing documents, extracting information, and generating summaries. Keep all processing local and never send sensitive documents to external services without explicit consent.
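
One way to enforce that, sketched below with illustrative patterns you would extend for your own data, is a redaction pass that runs before any text could leave the machine:

import re

# Example patterns only; add whatever counts as sensitive in your documents.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),        # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US-style SSNs
    (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "[CARD_NUMBER]"),  # likely card numbers
]

def redact(text: str) -> str:
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

print(redact("Contact jane.doe@example.com, card 4111 1111 1111 1111."))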

What are the ongoing maintenance requirements?

Regular updates

Claude Code is built with security at its core, developed according to Anthropic's comprehensive security program. Stay current with Claude Code updates to get the latest security fixes and features.

Unlike OpenClaw's Wild West approach to skills, Claude Code follows enterprise security practices with regular security updates and vulnerability patches.

Monitoring and logging

Set up comprehensive logging for all assistant actions, and monitor for unusual file edits or outbound network traffic. A minimal file-watch sketch follows the alert list below.

Create alerts for:

  • Unexpected file access attempts
  • Failed permission requests
  • Unusual network activity
  • Changes to security configuration files
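
For the last item, here is a small sketch using the third-party watchdog package (an assumed dependency, installable with pip install watchdog) to alert whenever a permission settings file changes:

import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class SettingsChangeHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # Alert whenever a permission settings file is touched.
        if event.src_path.endswith("settings.json") or event.src_path.endswith("settings.local.json"):
            print(f"ALERT: security configuration changed: {event.src_path}")

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(SettingsChangeHandler(), path=".claude", recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()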

Permission reviews

Regularly audit what permissions you've granted your assistant. The key takeaway: Treat Claude like you would an untrusted but powerful intern.

How does this compare to OpenClaw's approach?

The fundamental difference comes down to security philosophy. OpenClaw optimizes for ease of use and functionality, often at the expense of security. While there are guides to make the system more secure, such as the one from Vitto Rivabella, there is a clear conflict between security and convenience: users regularly choose functionality over the principle of least privilege, simply granting the agent broad permissions.

Your Claude Code-based alternative takes the opposite approach: security by default, with functionality added incrementally and safely.

The trade-offs are real. You won't have access to thousands of pre-built skills from a marketplace. But you also won't have to worry about malware like the skill that silently exfiltrated data to attacker-controlled servers and used direct prompt injection to bypass safety guidelines; that malicious skill has been downloaded thousands of times.

Building your own secure AI assistant takes more initial effort than installing OpenClaw and downloading random skills. But in a world where Token Security found 22% of its enterprise customers have employees running the agent without IT approval, taking control of your AI infrastructure isn't just smart—it's essential.

The future belongs to AI assistants that actually help with daily work. The question isn't whether to build one, but whether to build it securely from the start or deal with the security consequences later. With Claude Code as your foundation, you can have both power and peace of mind.