# Claw-AI-Lab
**Repository Path**: devdz/Claw-AI-Lab
## Basic Information
- **Project Name**: Claw-AI-Lab
- **Description**: One dashboard. An entire research team.
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: preview-v1.1.0
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2026-04-10
- **Last Updated**: 2026-04-10
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
Claw AI Lab: An Autonomous Multi-Agent Research Team
---
## Updates
- __[2026.04.02]__: Preview v1.1.0 – powered by **Claw-Code Harness**.
- __[2026.03.25]__: Preview v1.0.0 - initial release.
---
## What Is This?
**Claw AI Lab** is a lab-native multi-agent research platform for interactive and scalable AI-driven science. It enables users to create a full AI research lab from a single prompt, with customizable roles, research directions, and collaborative workflows, rather than relying on a single-agent or fixed serial pipeline. Claw orchestrates multiple agents and projects in parallel through a FIFO-based scheduling framework, maximizing compute utilization while supporting cross-project knowledge sharing and mutual improvement. Crucially, the system keeps humans in the loop: users can intervene whenever needed, provide feedback under ambiguity, inject new ideas, and iteratively refine the research process through rollback and continuation. Combined with a simple UI that reduces everything to prompts and clicks, Claw transforms automated research into a more intuitive, steerable, and laboratory-like experience.
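To make the FIFO-based scheduling idea concrete, here is a minimal sketch of how projects could be dispatched to a fixed number of parallel agent slots in strict arrival order. All names here (`FifoScheduler`, `submit`, `tick`, `finish`) are invented for illustration; Claw's real scheduler is more elaborate.

```python
from collections import deque

# Illustrative sketch only: FIFO dispatch of queued projects
# to a fixed number of parallel agent slots.
class FifoScheduler:
    """Dispatches queued projects to free slots in arrival order."""

    def __init__(self, slots: int):
        self.slots = slots              # max projects running in parallel
        self.queue = deque()            # waiting projects, oldest first
        self.running = []               # projects currently occupying a slot

    def submit(self, project: str) -> None:
        self.queue.append(project)      # FIFO: newest goes to the back

    def tick(self) -> list:
        # Fill free slots strictly in arrival order.
        while self.queue and len(self.running) < self.slots:
            self.running.append(self.queue.popleft())
        return list(self.running)

    def finish(self, project: str) -> None:
        self.running.remove(project)    # frees a slot for the next queued project

sched = FifoScheduler(slots=2)
for p in ["proj-a", "proj-b", "proj-c"]:
    sched.submit(p)
print(sched.tick())      # ['proj-a', 'proj-b']
sched.finish("proj-a")
print(sched.tick())      # ['proj-b', 'proj-c']
```

The point of the FIFO discipline is fairness plus full slot utilization: a finished project immediately frees compute for the oldest waiting one.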
We welcome contributions from the community to make this project better together!
You are warmly invited to scroll to the bottom of the page to join our group for beta testing and discussion.
---
## Claw AI Lab Dashboard
Launch projects, monitor agents, and inspect every artifact – all from a single interface.
Real-time event stream · Multi-project overview · One-click rollback & resume · Artifact inspector
---
## Key Features

| Feature | Description |
| :--- | :--- |
| **Interactive UI** | Real-time web dashboard with event stream, data shelf, and multi-project monitoring |
| **Claw Code Harness** | Reads your local codebases, datasets & checkpoints and writes runnable code back to disk |
| **End-to-End Pipeline** | One prompt → paper + code + figures + experiment logs, fully autonomous |
| **Three Research Modes** | Explore · Discussion (multi-agent debate) · Reproduce |
---
### Generated Project Showcase
Each project autonomously produces a full research deliverable: **Paper** · **Code** · **Figures** · **Experiment Logs**
---
### Discussion Mode Showcase
Multi-agent discussion on: **"What is the most deployable direction for Video Action Models in Embodied AI?"**

> **Agent A**: World Model + MPC (Model Predictive Control) is the most industrially stable path.
>
> **Agent B**: "Train with video, infer with action" is the most deployable policy paradigm.
>
> **Agent C**: Execution monitoring & SOP (Standard Operating Procedure) automation lands fastest as a product.

**Consensus:** The most deployable form is not a single end-to-end model but a **layered, modular system**: use video supervision during training to learn rich dynamics, output actions directly at inference for low latency, and layer planning/MPC/safety modules on top for closed-loop robustness and recovery.
**Top 3 Research Directions (ranked by deployability)**

| # | Direction | Deployability |
| :---: | :--- | :--- |
| 1 | **Layered Video-Action Stack**: video-action joint training + direct action inference + MPC safety | Highest – best balance of latency, interpretability & safety |
| 2 | **Video-to-Plan / SOP**: demo videos → step sequences & skill graphs for existing robots | High – smallest embodiment gap, clearest commercial path |
| 3 | **Execution Monitor**: real-time step tracking, anomaly detection, re-planning triggers | High – fastest to production; critical for industrial reliability |
**Key Contradictions Resolved**

| Debate | Resolution |
| :--- | :--- |
| World Model + MPC vs. Direct Action? | **Combine both**: world model for representation, direct action for control, MPC for safety |
| Human video: valuable or too much gap? | **Pre-training yes**; direct low-level transfer not yet reliable |
| Is monitoring a "real" action model? | Not the backbone, but **fastest to reach production value** |
**[→ Full Transcript](assets/showcase/discussion_transcript.md)** · **[→ Consensus Synthesis](assets/showcase/consensus_synthesis.md)**
---
## Quick Start
### 1. Install
```bash
git clone https://github.com/Claw-AI-Lab/Claw-AI-Lab.git
cd Claw-AI-Lab
# Create python environment
conda create -n clawailab python=3.11
conda activate clawailab
# Backend
cd backend/agent
pip install -e ".[all]"
pip install websockets
# Frontend
cd ../../frontend
npm install
cd ..
# ML dependencies
# You can add more packages based on your research project
pip install torch torchvision diffusers transformers accelerate safetensors datasets \
huggingface_hub opencv-python pandas matplotlib scikit-image scipy einops tqdm
```
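After installing, you can sanity-check that the ML dependencies import correctly. The helper below is not part of the repo, just a convenience sketch (note that `opencv-python` imports as `cv2` and `scikit-image` as `skimage`):

```python
import importlib.util

# Convenience sketch (not part of the repo): report which of the
# ML packages from the install step above cannot be imported.
def missing(packages):
    """Return the subset of `packages` with no importable module spec."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

ml_deps = ["torch", "torchvision", "diffusers", "transformers",
           "accelerate", "safetensors", "datasets", "huggingface_hub",
           "cv2", "pandas", "matplotlib", "skimage", "scipy",
           "einops", "tqdm"]
gaps = missing(ml_deps)
if gaps:
    print("Missing packages:", ", ".join(gaps))
else:
    print("All ML dependencies are importable.")
```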
### 2. Configure
Fill in the following configuration in `examples/config_template.yaml`:
```yaml
llm:
  api_key: "your-api-key"
  primary_model: "gpt-5.4"
  coding_model: "gpt-5.4"
  image_model: "gemini-3-pro-image-preview"
  fallback_models:
    - "qwen3.5-plus"
    - "qwen-plus"

sandbox:
  python_path: "/path/to/your/python3"
```
Many thanks to [KOKONI](https://www.kokoni3d.com/) for supporting this project. An `api_key` can be obtained [here](http://www.longcatcloud.com/).
### 3. Run
```bash
./start.sh # Start all services
./start.sh stop # Stop
./start.sh restart # Restart
./start.sh status # Status check
./start.sh fresh # Clean restart (reset all data)
```
Open **http://localhost:5903/**, submit your research topic, and let the agents work.
---
## Tips to Get the Best Results

| # | Recommendation | Why |
|---|---|---|
| 1 | **Prepare local codebases, datasets & checkpoints**, and enter their paths when submitting a project | Avoids download delays and network failures during runs |
| 2 | **Use a strong coding model like GPT 5.4** | Significantly better code quality and fewer iteration cycles |
| 3 | **Review the `IMPORTANT` fields in [Configuration Details](#configuration-details)** | Misconfigured API keys or resource limits are the #1 cause of failed runs |
---
## Configuration Details
Every field in `examples/config_template.yaml` is explained below. Fields marked **IMPORTANT** are the ones you almost always need to set.
```yaml
# === Project ===
project:
  name: "my-project"              # Project identifier, used for directory naming and UI display
  mode: "full-auto"               # Pipeline mode: "full-auto" runs all stages without human gates

# === Research ===
research:
  topic: "Your research topic"    # The research topic or paper to reproduce (required)
  domains:                        # Research domains for literature search scope
    - "deep-learning"
  daily_paper_count: 5            # Number of papers to retrieve per search query
  quality_threshold: 3.0          # Minimum relevance score (1-5) for literature screening
  reference_papers: []            # List of reference paper titles or arXiv IDs

# === Notifications ===
notifications:
  channel: "console"              # Notification channel: "console" | "discord" | "slack"
  on_stage_start: true            # Notify when a stage begins
  on_gate_required: true          # Notify when human approval is needed

# === Knowledge Base ===
knowledge_base:
  backend: "markdown"             # Storage format: "markdown" | "obsidian"
  root: "docs/kb"                 # Root directory for knowledge base files

# === OpenClaw Bridge ===
openclaw_bridge:
  use_message: false              # Enable progress notifications via messaging platforms
  use_memory: false               # Enable cross-session knowledge persistence
  use_web_fetch: false            # Enable live web search during literature review

# === LLM ===
llm:
  provider: "openai-compatible"   # LLM provider: "openai-compatible" | "openai" | "deepseek" | "acp"
  api_key: "sk-your-key"          # **IMPORTANT** API key (or use api_key_env to read from environment)
  api_key_env: "RESEARCHCLAW_API_KEY"  # Environment variable name for API key (fallback)
  primary_model: "gpt-5.4"        # **IMPORTANT** Main model for research, analysis, and writing
  coding_model: "gpt-5.4"         # **IMPORTANT** Model for code generation (S11)
  image_model: "gemini-3-pro-image-preview"  # **IMPORTANT** Model for figure generation in paper
  fallback_models:                # Fallback model chain, used when the primary model fails
    - "qwen3.5-plus"
    - "qwen-plus"

# === Security ===
security:
  hitl_required_stages: []        # Stage numbers requiring human approval (e.g. [5, 9, 20])

# === Experiment ===
experiment:
  mode: "sandbox"                 # Execution mode: "sandbox" (local Python) | "docker" | "simulated"
  time_budget_sec: 2400           # **IMPORTANT** Max wall-clock time per experiment run (seconds)
  max_iterations: 3               # Number of iterative refinement cycles in S15 (Edit-Run-Eval loop)
  metric_key: "primary_metric"    # Name of the primary evaluation metric
  metric_direction: "minimize"    # Optimization direction: "minimize" | "maximize"
  datasets_dir: ""                # **IMPORTANT** Absolute path to datasets directory
  checkpoints_dir: ""             # **IMPORTANT** Absolute path to model weights directory
  codebases_dir: ""               # Absolute path to reference codebases directory
  shared_results_dir: ""          # Directory for cross-project shared results
  paper_length: "long"            # Paper length: "short" (~4 pages) | "long" (~8 pages)

sandbox:
  python_path: "/path/to/python3"      # **IMPORTANT** Python interpreter for running experiments
  sanity_check_max_iterations: 100     # Max fix attempts in S12 code testing

# === Prompts ===
prompts:
  custom_file: ""                 # Path to custom prompts YAML file (empty = use defaults)
```
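The `api_key` / `api_key_env` pair above means the explicit key wins, with the named environment variable as a fallback. A minimal sketch of that resolution logic is below; the function name `resolve_api_key` and the placeholder handling are invented for illustration, and the real loader may differ.

```python
import os

# Illustrative sketch only: resolve an API key the way the api_key /
# api_key_env fields describe (explicit key first, then the named
# environment variable). Not the project's actual loader.
def resolve_api_key(llm_cfg):
    key = llm_cfg.get("api_key", "")
    if key and key != "sk-your-key":          # ignore the template placeholder
        return key
    env_name = llm_cfg.get("api_key_env", "")
    key = os.environ.get(env_name, "") if env_name else ""
    if not key:
        raise ValueError("No API key: set llm.api_key or export "
                         + (env_name or "a variable named in llm.api_key_env"))
    return key

cfg = {"api_key": "sk-your-key", "api_key_env": "RESEARCHCLAW_API_KEY"}
os.environ["RESEARCHCLAW_API_KEY"] = "sk-from-env"
print(resolve_api_key(cfg))    # falls back to the environment variable
```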
---
## Acknowledgement
We learned from and reused code from the following projects: [AutoResearchClaw](https://github.com/aiming-lab/AutoResearchClaw), [AutoResearch](https://github.com/karpathy/autoresearch), [claw-code](https://github.com/ultraworkers/claw-code).
We thank the authors for their contributions to the community!
## License
MIT. See [LICENSE](LICENSE) for details.
## Citation
If you find Claw AI Lab useful, please cite:
```bibtex
@misc{wu2026clawailab,
author = {Wu, Fan and Chen, Cheng and Tan, Zhenshan and Zhang, Taiyu and
Gao, Dingcheng and Zhu, Lanyun and Zhu, Qi and Tan, Yi and Ji, Deyi and
Lin, Guosheng and Chen, Tianrun and Ye, Deheng and Liu, Fayao},
title = {Claw AI Lab: An Autonomous Multi-Agent Research Team},
year = {2026},
url = {https://github.com/Claw-AI-Lab/Claw-AI-Lab},
note = {GitHub repository}
}
```
---
## Community