Proprietary AI systems built and shipped.
E-Labs doesn’t just consult — we build. The platform below is the working infrastructure we operate, refine, and deploy for clients. Every product is either in active production use or, where marked, in late-stage development.
E-Labs Copilot
● Production Ready
A fully local conversational AI assistant with CEO-level agent routing, persistent memory, multi-model support, real-time SSE streaming, and built-in tool-calling. Runs 100% on your own hardware with zero cloud dependency — or routes to cloud models when you need them.
E-Labs Copilot acts as a central command interface for all other platform components. It routes requests to the right model, remembers context across sessions, and exposes every capability through a clean single-page interface.
- ✓ Multi-model routing: Qwen, Mistral, DeepSeek, Claude, Gemini, GPT-4
- ✓ CEO orchestration layer selects best model per task automatically
- ✓ Persistent JSON-backed memory & session management
- ✓ SSE streaming with inline tool-call interception
- ✓ THE MACHINE project sidebar integration
- ✓ OpenClaw and Hermes agent runtime badges
$ elabs-copilot start --model qwen3-35b
✓ CEO routing layer active
✓ Memory loaded (1,284 facts)
✓ OpenClaw Gateway connected
✓ Hermes Agent runtime ready
$ curl http://localhost:8001/api/stack
{
  "models": ["qwen3", "deepseek-r1", "claude"],
  "openclaw": true,
  "hermes": true,
  "memory_facts": 1284
}
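The CEO orchestration idea above — picking the best model for each incoming task — can be illustrated with a minimal sketch. The keyword rules and model names here are illustrative assumptions, not the production routing logic:

```python
# Minimal sketch of per-task model routing, in the spirit of the CEO
# orchestration layer described above. Keyword rules are illustrative
# assumptions, not the actual routing heuristics.

ROUTES = [
    (("prove", "math", "reason"), "deepseek-r1"),  # reasoning-heavy tasks
    (("write", "summarize", "draft"), "claude"),   # long-form writing
]
DEFAULT_MODEL = "qwen3-35b"                        # local general-purpose fallback

def route(task: str) -> str:
    """Pick a model for a task based on simple keyword rules."""
    lowered = task.lower()
    for keywords, model in ROUTES:
        if any(k in lowered for k in keywords):
            return model
    return DEFAULT_MODEL

print(route("Summarize this meeting transcript"))  # claude
print(route("Fix this Python bug"))                # qwen3-35b
```

In production, routing like this would typically weigh task type, context length, and VRAM availability rather than keywords alone.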
THE MACHINE
● Production Ready
A visual DAG-based multi-agent pipeline that orchestrates parallel and sequential AI tasks across multiple models and runtimes. VRAM-aware model scheduling ensures you never hit out-of-memory errors mid-run. Pause, resume, cancel, and intervene at any node in real time.
THE MACHINE is the orchestration layer for complex, multi-step AI work: research pipelines, multi-file code generation, analysis chains, and automated content production — all in a single visual workflow.
- ✓ Visual node-based workflow designer with drag-and-drop
- ✓ Parallel swarm & sequential DAG execution modes
- ✓ Real-time VRAM / GPU resource telemetry per node
- ✓ Pause / resume / cancel / intervene at any node
- ✓ OpenClaw gateway integration for industry-grade runtime
- ✓ Hung-node detection with automatic stall warnings
$ machine project --create "Market Analysis"
✓ Project initialized (6 nodes)
Nodes:
[1] Research → qwen3-35b DONE
[2] Summarize → deepseek-r1 DONE
[3] Competitors → qwen3-35b RUNNING
[4] Financials → deepseek-r1 waiting
[5] Synthesis → claude waiting
[6] Report → qwen3-35b waiting
VRAM: 38.2 / 49.4 GB
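The combination of dependency-ordered execution and VRAM-aware scheduling can be sketched with Python's standard-library topological sorter. The node names mirror the demo transcript above; the per-model VRAM figures, the budget, and the scheduling logic itself are illustrative assumptions:

```python
# Sketch of VRAM-aware sequential DAG scheduling. Node names follow the
# "Market Analysis" demo above; VRAM figures and scheduling rules are
# illustrative assumptions, not THE MACHINE's actual scheduler.
from graphlib import TopologicalSorter

VRAM_BUDGET_GB = 49.4
MODEL_VRAM_GB = {"qwen3-35b": 22.0, "deepseek-r1": 16.2, "claude": 0.0}  # cloud model: no local VRAM

# node -> (model, set of prerequisite nodes)
PIPELINE = {
    "Research":    ("qwen3-35b", set()),
    "Summarize":   ("deepseek-r1", {"Research"}),
    "Competitors": ("qwen3-35b", {"Research"}),
    "Financials":  ("deepseek-r1", {"Research"}),
    "Synthesis":   ("claude", {"Summarize", "Competitors", "Financials"}),
    "Report":      ("qwen3-35b", {"Synthesis"}),
}

def plan(pipeline):
    """Return (node, model) pairs in dependency order, skipping any node
    whose model would exceed the VRAM budget."""
    ts = TopologicalSorter({n: deps for n, (_, deps) in pipeline.items()})
    order = []
    for node in ts.static_order():
        model = pipeline[node][0]
        if MODEL_VRAM_GB[model] <= VRAM_BUDGET_GB:  # fits: schedule it
            order.append((node, model))
    return order

for node, model in plan(PIPELINE):
    print(f"{node} -> {model}")
```

A real scheduler would additionally track which models are currently loaded and run independent branches (Summarize, Competitors, Financials) in parallel when VRAM allows.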
OpenClaw Gateway
● Production Ready
A secure, publicly routable AI inference gateway that acts as the central runtime for THE MACHINE and E-Labs Copilot. Bearer-token authenticated, rate-limited, and TLS-terminated via Caddy reverse proxy. OpenClaw handles agent registration, tool-call dispatch, live capability discovery, and worker health monitoring.
OpenClaw is designed to be the industry-grade backbone for multi-agent workflows — replacing ad-hoc direct LLM calls with a structured, auditable, and publicly accessible inference API.
- ✓ Publicly routable at gateway.elabs.com with TLS
- ✓ Bearer-token auth with per-worker rate limiting
- ✓ Hot-swap agent / model registration at runtime
- ✓ Live capability map & health endpoints
- ✓ Full event schema for task lifecycle tracking
- ✓ Audit logging for every inference request
Default Port: 18789 (local) · TLS on 443 via Caddy (public)
Auth: Bearer token · tokens hashed in DB · per-worker rate limits
Public Access: gateway.elabs.com · requires auth token · Tailscale as fallback
$ curl https://gateway.elabs.com/health
{
  "status": "healthy",
  "workers": 3,
  "models_loaded": ["qwen3", "deepseek-r1"],
  "tls": true,
  "auth": "bearer_token"
}
$ curl -H "Authorization: Bearer <token>" \
https://gateway.elabs.com/api/agents/list
✓ 3 agents registered & ready
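Bearer-token checks combined with per-worker rate limiting can be sketched as a token bucket. The token store, bucket size, and refill rate below are illustrative assumptions, not OpenClaw's actual configuration:

```python
# Sketch of bearer-token auth plus a per-worker token bucket, in the
# spirit of the gateway's auth/rate-limit layer. Token store and limits
# are illustrative assumptions.
import time

VALID_TOKENS = {"worker-1": "s3cret"}          # hypothetical token store
BUCKET_SIZE, REFILL_PER_SEC = 5, 1.0           # burst of 5, 1 req/s sustained
_buckets: dict[str, tuple[float, float]] = {}  # worker -> (tokens, last_refill)

def allow(worker: str, bearer: str) -> bool:
    """Return True if the request is authenticated and under the rate limit."""
    if VALID_TOKENS.get(worker) != bearer:
        return False                            # reject: bad or missing token
    tokens, last = _buckets.get(worker, (BUCKET_SIZE, time.monotonic()))
    now = time.monotonic()
    tokens = min(BUCKET_SIZE, tokens + (now - last) * REFILL_PER_SEC)
    if tokens < 1:
        _buckets[worker] = (tokens, now)
        return False                            # reject: rate limited
    _buckets[worker] = (tokens - 1, now)
    return True

print(allow("worker-1", "s3cret"))   # True
print(allow("worker-1", "wrong"))    # False
```

A production gateway would store hashed tokens (as the table above notes) and return proper 401/429 responses rather than booleans.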
Hermes Agent
● Production Ready
An autonomous AI agent runtime that self-improves by saving skills from every task it completes. Hermes maintains a growing library of reusable toolsets, compresses past trajectories into compact memory, and operates with a full web UI protected by ephemeral session-token authentication.
Hermes integrates with E-Labs Copilot as a runtime badge — any session can be handed off to Hermes for fully autonomous multi-step task execution. It can browse the web, write and run code, manage files, call APIs, and plan complex workflows independently.
- ✓ Self-saving skill memory — gets smarter with every task
- ✓ Trajectory compression for long-horizon context management
- ✓ Web UI with ephemeral session-token auth & Host-header validation
- ✓ Tool-calling: browser, code execution, file system, external APIs
- ✓ Batch runner for parallel multi-task workloads
- ✓ MCP server integration for structured data access
$ hermes run "Research competitors and write a summary report"
✓ Task received
Planning steps…
[1] Web search: industry landscape
[2] Extract key competitors
[3] Analyze pricing / features
[4] Write markdown report
[5] Save skill: "competitor_research"
✓ Report saved to ./output/
✓ Skill saved (library: 47 skills)
ComfyUI Integration
● Production Ready
A locally hosted AI image and video generation pipeline built on ComfyUI. Supports FLUX, Wan 2.1, SDXL, and custom workflows with full portability. Generates high-quality images and video directly on your GPU — no API fees, no content policy restrictions, no data leaving your network.
Integrated into the E-Labs Copilot backend via a ComfyUI bridge, allowing media generation to be triggered directly from conversational AI sessions or THE MACHINE workflow nodes.
- ✓ FLUX, Wan 2.1, SDXL & fully custom node workflows
- ✓ 100% local GPU — zero API costs or data egress
- ✓ Multi-user API key protection built in
- ✓ Bridge endpoint for Copilot and MACHINE integration
- ✓ Custom workflow import / export & model management
- ✓ GPU VRAM optimization & hardware guidance included
$ comfy generate --workflow flux-dev.json
prompt: "photorealistic office with AI screens"
model: flux1-dev.safetensors
steps: 28 cfg: 7.5
✓ Sampling: 28/28 steps
✓ Image saved: ./output/00042.png
VRAM: 18.4 GB used — zero API cost
$ comfy generate --workflow wan2-video.json
✓ Video generated: 5s @ 1080p
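The bridge integration described above can be sketched against ComfyUI's HTTP API, which queues a workflow graph via a POST to its /prompt endpoint. The host, port, and the toy workflow below are assumptions for illustration:

```python
# Sketch of a bridge triggering a ComfyUI generation over HTTP. ComfyUI
# accepts a workflow graph via POST /prompt; host/port and the workflow
# contents here are illustrative assumptions.
import json
import urllib.request
import uuid

COMFY_HOST = "http://127.0.0.1:8188"   # ComfyUI's default local port

def build_request(workflow: dict) -> urllib.request.Request:
    """Wrap a workflow graph in the payload ComfyUI's /prompt endpoint expects."""
    payload = {"prompt": workflow, "client_id": str(uuid.uuid4())}
    return urllib.request.Request(
        f"{COMFY_HOST}/prompt",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# Toy single-node workflow (real exports contain the full node graph)
workflow = {"1": {"class_type": "KSampler", "inputs": {"steps": 28}}}
req = build_request(workflow)
print(req.full_url)            # http://127.0.0.1:8188/prompt
# urllib.request.urlopen(req)  # uncomment with a running ComfyUI instance
```

In the Copilot integration, a conversational request would be mapped to an exported workflow JSON (like flux-dev.json above) before being queued this way.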
E-Labs Enterprise
● In Development
Role-based access control, multi-tenant model routing, team workspaces, and organization-scoped memory layers built on top of the Copilot and Machine platform. Designed for companies that need department-level AI access with centralized governance, audit logging, and usage reporting.
Enterprise builds on the existing Copilot backend with a proper user database, JWT authentication, per-user memory scoping, and an admin dashboard for managing access tiers and usage quotas.
- ✓ Multi-tenant RBAC: admin, client, demo roles
- ✓ Organization-scoped memory & project isolation
- ✓ Centralized audit logging & per-user usage reporting
- ✓ JWT authentication with bcrypt password hashing
- ✓ Admin dashboard for user management & quota control
- ✓ SQLite (dev) / PostgreSQL (prod) user database
POST /api/auth/login
{ username, password } → JWT token
GET /api/auth/me
{ id, username, role: "client", org }
GET /api/memory?user_id=42
{ facts: [...], scoped_to: "user_42" }
GET /api/admin/users
# admin-only · lists all users
# + usage stats + quota status
Status: actively in development
Contact us to join early access
E-Labs Desktop — one click to launch.
A native desktop application that bundles Copilot, THE MACHINE, OpenClaw, and ComfyUI into a single installer. No Docker, no terminal, no config files — just your complete local AI stack running on Windows or macOS.