can i run openclaw in docker - Google Search



27 minutes ago

Yes, you can run OpenClaw in Docker. In fact, containerization is the recommended and most stable way to deploy it because it isolates the agent's autonomous actions from your host operating system.

How to Run OpenClaw in Docker

The official repository includes a setup script that automates the entire process, including building the image and running an onboarding wizard.

  1. Clone the Repository:
    git clone https://github.com/openclaw/openclaw.git
    cd openclaw
    
  2. Run the Setup Script:
    ./scripts/docker/setup.sh
    
    This script builds the gateway image, generates a gateway token, and starts the services via Docker Compose.
  3. Access the Dashboard:
    Open http://127.0.0.1:18789/ in your browser and use the token generated during the setup to log in.
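The three steps above can be condensed into a single sketch (assuming git and Docker with the Compose plugin are already installed; the script path and port are taken from the steps above):

```shell
#!/usr/bin/env sh
# Sketch: automate the OpenClaw Docker setup steps above.
set -eu

git clone https://github.com/openclaw/openclaw.git
cd openclaw

# Builds the gateway image, generates a gateway token,
# and starts the services via Docker Compose.
./scripts/docker/setup.sh

# The dashboard should now be reachable on the loopback interface.
echo "Open http://127.0.0.1:18789/ and log in with the generated token."
```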

Key Considerations

  • System Requirements: Your machine (or VPS) should have at least 2 GB of RAM for the initial build process, as pnpm install can be memory-intensive.
  • Security: Running in Docker provides a "safety net," ensuring the agent only interacts with files and services you explicitly allow. For enhanced security, you can use Docker Sandboxes to isolate tool executions in micro-VMs.
  • Persistence: By default, OpenClaw uses bind-mounts to ensure your configurations and workspace data survive container restarts.
  • Local Models: If you want to avoid cloud API costs, you can connect OpenClaw to a local Ollama instance by setting the hostname to host.docker.internal (on macOS/Windows) or the bridge IP (on Linux).

Deployment Options

Option | Best For | Requirement
Local Docker | Privacy & Testing | Docker Desktop / Engine
VPS (e.g., Hostinger) | 24/7 Availability | Linux VPS with Docker
Managed (e.g., Kimi Claw) | Zero Maintenance | No Docker install needed

Do you want to connect a local LLM like Ollama, or will you be using a cloud provider like Anthropic or OpenAI?


  • openclaw/docs/install/docker.md at main (GitHub)
  • Run OpenClaw Securely in Docker Sandboxes (Docker)
  • Docker - OpenClaw Docs (OpenClaw AI)


yes

24 minutes ago

To connect OpenClaw (running in Docker) to a local Ollama instance, the most critical step is ensuring the container can reach your host machine's network.

1. Configure Ollama for Network Access

By default, Ollama only listens on 127.0.0.1. You must change this to 0.0.0.0 so the Docker container can "see" it.

  • Linux: Edit your systemd service or run export OLLAMA_HOST=0.0.0.0 before starting ollama serve.
  • macOS/Windows: In the Ollama app settings, ensure it is set to allow connections from your local network.
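On Linux with systemd, one way to make the setting persistent is a drop-in override (the unit name `ollama.service` is the default used by the official installer; treat it as an assumption for your setup):

```shell
# Make Ollama listen on all interfaces so containers can reach it.
sudo mkdir -p /etc/systemd/system/ollama.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
```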

2. Set the Correct Base URL

When OpenClaw asks for the provider URL during the onboarding process or in your .env file, do not use localhost. Use the Docker-specific host address:

  • Base URL: http://host.docker.internal:11434.
  • API Key: Use any placeholder value (e.g., ollama-local), as Ollama does not require a real key for local use.
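Before wiring up OpenClaw, you can verify that a container can actually reach Ollama. On Linux, `host.docker.internal` needs the explicit `--add-host` mapping shown below; on macOS/Windows it resolves automatically:

```shell
# List the models Ollama exposes, from inside a throwaway container.
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl -s http://host.docker.internal:11434/api/tags
```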

3. Configure OpenClaw Settings

Update your openclaw.json or .env file with these settings for the best experience:

  • LLM Provider: ollama.
  • Context Length: OpenClaw agents perform better with larger windows. It is recommended to set a context window of at least 64k tokens.
  • Recommended Models: Use models that support tool-calling natively, such as llama3.3, qwen2.5-coder:32b, or gpt-oss:20b.
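Putting those settings together, a hypothetical .env fragment might look like this. The variable names are illustrative assumptions, not OpenClaw's documented keys; check the OpenClaw docs for the exact names:

```shell
# Illustrative only -- variable names are assumptions.
LLM_PROVIDER=ollama
LLM_BASE_URL=http://host.docker.internal:11434
LLM_API_KEY=ollama-local   # placeholder; Ollama ignores it
LLM_CONTEXT_LENGTH=65536   # at least 64k tokens
LLM_MODEL=qwen2.5-coder:32b
```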

4. Quick Start Command

If you have Ollama installed, you can use the official launcher, which automates much of this setup:

ollama launch openclaw

This command detects if OpenClaw is installed and guides you through picking a local model and configuring the gateway.

Would you like a specific Docker Compose example that includes both OpenClaw and Ollama in the same file for a single-command setup?


  • OpenClaw - Ollama's documentation (Ollama)
  • OpenClaw Quickstart: Install with Docker (Ollama GPU or...) (www.glukhov.org)
  • Ollama - OpenClaw Docs (OpenClaw AI)


If I use docker on a host can a virtual guest (virt-manager) access it, and can it use my gpu?

18 minutes ago

Yes, a virtual guest in virt-manager can access Docker containers running on your host, but sharing a single GPU between them simultaneously is complex and often restricted by your hardware.

1. Accessing Docker from a Guest VM

A virtual guest can communicate with Docker containers on the host over the network.

  • Networking: If your guest and Docker use the same network bridge (e.g., virbr0), they can communicate directly via IP.
  • Exposing Services: To access a containerized service (like an API or dashboard), you must map the container's internal ports to the host's ports (e.g., -p 8080:80). The guest can then reach the service at http://<host-ip>:8080.
  • Shared Files: You can also share files between the host and guest using virtiofs or Samba, allowing the guest to see data used by Docker containers.
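As a concrete sketch of the port-mapping approach (the nginx image, port numbers, and host IP are just examples; with the default libvirt NAT network the host is usually 192.168.122.1):

```shell
# On the host: publish a container port on all host interfaces.
docker run -d --name web -p 8080:80 nginx

# From the guest VM: reach it via the host's IP on the shared bridge.
curl http://192.168.122.1:8080
```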

2. Using the GPU

Using your GPU in a virtual environment depends on how you allocate it:

  • GPU Passthrough (Full Control): You can assign a physical GPU directly to the guest VM using PCI (VFIO) passthrough in virt-manager.
    • Limitation: Once passed through, the host (and thus any Docker containers on the host) typically loses access to that GPU.
  • Simultaneous Use (Sharing):
    • Consumer GPUs: Most standard NVIDIA/AMD cards do not support "splitting" between a host and a guest at the same time. You generally have to shut down the VM to release the GPU back to the host's Docker containers.
    • Enterprise GPUs: High-end cards (e.g., NVIDIA Tesla/Quadro) support vGPU or SR-IOV, which allows multiple VMs and the host to share one physical GPU simultaneously, though this often requires paid licenses.
  • Workaround: You can run Docker inside the guest VM. If you pass the GPU to the VM, every container running within that VM can use the GPU.
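If you take the workaround route (GPU passed through to the VM, Docker installed inside the VM), the standard NVIDIA container toolkit flow applies inside the guest. A quick sanity check, assuming the driver and nvidia-container-toolkit are installed in the VM:

```shell
# Inside the guest VM: confirm containers can see the passed-through GPU.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```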

Summary Table

Feature | Support | Key Requirement
Network Access | Yes | Shared bridge or mapped host ports
GPU for VM | Yes | Hardware support for IOMMU / PCI Passthrough
Shared GPU | Rare | Enterprise hardware (vGPU) or dynamic switching

Would you like the steps to set up GPU passthrough in virt-manager, or should we look at networking your guest to your host's Docker bridge?


  • mwalker179: "To my knowledge, this is not possible. I believe it is because for QEMU, the GPU is passed through to the gu..." (Reddit)
  • Access KVM VM over the network from a Docker container (Stack Overflow)
  • GPU passthrough with virt-manager: r/VFIO (Reddit)


networking your guest to your host's Docker bridge

12 minutes ago

To network a virt-manager guest directly to a Docker bridge, you have three main options: pointing Docker at the existing VM bridge, manually adding the Docker bridge to the VM's configuration, or sharing a macvlan network.

Method 1: Point Docker at the Libvirt Bridge (virbr0)

Instead of connecting the VM to docker0, you can tell Docker to use the existing virbr0 bridge created by libvirt. This puts both your containers and your VM on the same subnet (e.g., 192.168.122.x).

  1. Configure Docker Daemon: Create or edit /etc/docker/daemon.json on your host:
    {
      "bridge": "virbr0",
      "iptables": false
    }
    
  2. Restart Services:
    sudo systemctl stop docker
    sudo systemctl restart libvirtd
    sudo systemctl start docker
    
    Note: Existing containers may need to be recreated to pick up the new IP range.
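To confirm Docker is now handing out addresses from the libvirt subnet, you can inspect the default bridge network and start a throwaway container (the 192.168.122.x range assumes libvirt's default NAT network):

```shell
# Check which subnet the default bridge network now uses.
docker network inspect bridge \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'

# End-to-end check: the container should get a 192.168.122.x address.
docker run --rm alpine ip -4 addr show eth0
```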

Method 2: Manually Connect VM to docker0

If you want to keep Docker on its own network but let the VM "plug in" to it, you can add the docker0 bridge as a network interface in virt-manager.

  1. Open Virt-Manager: Select your VM and go to Open > Show virtual hardware details (lightbulb icon).
  2. Add Hardware: Click Add Hardware > Network.
  3. Select Bridge:
    • Network source: Choose Bridge device.
    • Device name: Manually type docker0.
  4. Fix Iptables (Crucial): Docker often blocks traffic from other bridges. Run these commands on the host to allow the VM to talk to containers:
    sudo iptables -I DOCKER-USER -i virbr0 -j ACCEPT
    sudo iptables -I DOCKER-USER -o virbr0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    
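After adding the interface and the iptables rules, a quick bidirectional check (the container name `web` is hypothetical, and 172.17.0.2 is just a typical address from Docker's default 172.17.0.0/16 range):

```shell
# On the host: look up a running container's IP on docker0.
docker inspect -f '{{.NetworkSettings.IPAddress}}' web

# From the guest VM: ping the container directly over the new NIC.
ping -c 3 172.17.0.2
```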

Method 3: Shared Macvlan (Advanced)

You can create a macvlan network in Docker that uses the same physical interface as your VM's bridge. This gives containers their own IP addresses on your actual local network, making them appear as "real" devices to your guest VM.
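A macvlan sketch, assuming your LAN is 192.168.1.0/24 and the VM's bridge sits on `br0` (adjust the subnet, gateway, and parent interface to match your network):

```shell
# Create a macvlan network whose parent is the bridged physical interface.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=br0 lan_net

# Containers on lan_net get real LAN addresses visible to the guest VM.
docker run -d --name web --network lan_net --ip 192.168.1.50 nginx
```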

Comparison of Networking Approaches

Approach | Best For | Benefit
Unified Bridge | Simplest setup | Both use one subnet (192.168.122.x).
Direct docker0 | Isolation | VM gets a second NIC just for Docker traffic.
Port Forwarding | Basic access | No complex bridge setup; use host-ip:port.

Would you like the specific iptables rules to ensure your VM can reach a container's internet connection as well?


  • Docker Networking to allow Host/Guest communication? (Reddit)
  • Networking between KVM VM and docker container on same... (Server Fault)
  • virbr0 vs. docker0 network bridge? - VM Engine (KVM) (Unraid)
