Compare commits


66 Commits

Author SHA1 Message Date
Bas Nijholt
3c1cc79684 refactor(docker): Use multi-stage build to reduce image size
Reduces image size from 880MB to 139MB (84% smaller) by:
- Building with uv in a separate stage
- Using python:3.14-alpine as runtime base (no uv overhead)
- Pre-compiling bytecode with --compile-bytecode
- Copying only the tool virtualenv and bin symlinks to runtime
2025-12-18 00:58:06 -08:00
Bas Nijholt
12bbcee374 feat(web): Handle invalid config gracefully with error banner (#16) 2025-12-18 00:40:19 -08:00
Bas Nijholt
6e73ae0157 feat(web): Add command palette with Cmd+K (#15) 2025-12-18 00:12:38 -08:00
Bas Nijholt
d90b951a8c feat(web): Vendor CDN assets at build time for offline use
Add a Hatch build hook that downloads JS/CSS dependencies during wheel
builds and rewrites base.html to use local paths. This allows the web UI
to work in environments without internet access.

- Add data-vendor attributes to base.html for declarative asset mapping
- Download htmx, tailwind, daisyui, and xterm.js during build
- Bundle LICENSES.txt with attribution for vendored dependencies
- Source HTML keeps CDN links for development; wheel has local paths
2025-12-17 23:45:06 -08:00
Bas Nijholt
14558131ed feat(web): Add search and host filter to sidebar and services list
- Add search input and host dropdown to sidebar for filtering services
- Add search input and host dropdown to Services by Host section
- Show dynamic service count in sidebar that updates with filter
- Multi-host services appear when any host is selected
2025-12-17 23:35:18 -08:00
Bas Nijholt
a422363337 Update uv.lock 2025-12-17 23:21:37 -08:00
Bas Nijholt
1278d0b3af fix(web): Remove config caching so changes are detected immediately
Config was cached with @lru_cache, causing the web UI to show stale
sync status after external config file edits.
2025-12-17 23:17:25 -08:00
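The staleness bug this commit fixes can be reproduced in a few lines. This is an illustrative sketch, not the project's actual API: a path-keyed `lru_cache` never sees later edits to the file, so the fix is simply to re-read on every call.

```python
from functools import lru_cache
from pathlib import Path


@lru_cache
def load_config_cached(path: str) -> str:
    # Cached on the path alone, so later edits to the file are never
    # seen -- the bug class described in the commit message above.
    return Path(path).read_text()


def load_config(path: str) -> str:
    # The fix taken here: read the file every time. A config load is
    # cheap relative to the SSH work the tool does anyway.
    return Path(path).read_text()
```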
Bas Nijholt
c8ab6271a8 feat(web): Add rainbow hover effect to Compose Farm headers
Animated gradient appears on hover for both sidebar and mobile navbar headers.
2025-12-17 23:09:57 -08:00
Bas Nijholt
957e828a5b feat(web): Add Lucide icons to web UI (#14) 2025-12-17 23:04:53 -08:00
Bas Nijholt
5afda8cbb2 Add web UI with FastAPI + HTMX + xterm.js (#13) 2025-12-17 22:52:40 -08:00
Bas Nijholt
1bbf324f1e Validate services exist in config with friendly error
Show a clean error message instead of a traceback when a service
is not found in config. Includes a hint about adding to config.
2025-12-17 11:04:48 -08:00
Bas Nijholt
1be5b987a2 Support "." as shorthand for current directory service name
Running `cf up .` now resolves to the current directory name,
allowing quick operations when inside a service directory.
2025-12-17 10:57:48 -08:00
Bas Nijholt
6b684b19f2 Run startup time test 6 times 2025-12-17 09:08:45 -08:00
Bas Nijholt
4a37982e30 Cleanup 2025-12-17 09:07:52 -08:00
Bas Nijholt
55cb44e0e7 Drop service discovery mention 2025-12-17 09:07:30 -08:00
Bas Nijholt
5c242d08bf Add cf apply to post 2025-12-17 09:06:59 -08:00
Bas Nijholt
5bf65d3849 Raise Linux CLI startup threshold to 0.25s for CI headroom 2025-12-17 09:05:53 -08:00
Bas Nijholt
21d5dfa175 Fix check-readme-commands hook to use uv run for CI compatibility 2025-12-17 08:58:45 -08:00
Bas Nijholt
e49ad29999 Use OS-specific thresholds for CLI startup test (Linux: 0.2s, macOS: 0.35s, Windows: 2s) 2025-12-17 08:57:50 -08:00
Bas Nijholt
cdbe74ed89 Return early from CLI startup test when under threshold 2025-12-17 08:56:35 -08:00
Bas Nijholt
129970379c Increase CLI startup threshold to 0.35s for macOS/Windows CI 2025-12-17 08:55:40 -08:00
Bas Nijholt
c5c47d14dd Add CLI startup time test to catch slow imports
Runs `cf --help` and fails if startup exceeds 200ms. Shows timing
info in CI logs on both pass and failure.
2025-12-17 08:53:32 -08:00
Bas Nijholt
95f19e7333 Add pre-commit hook to verify all CLI commands are documented in README
Extracts commands from the Typer app and checks each has a corresponding
--help section in the README. Runs when README.md or CLI files change.
2025-12-17 08:45:16 -08:00
Bas Nijholt
9c6edd3f18 refactor(docs): move reddit-post.md into docs folder 2025-12-17 08:45:16 -08:00
github-actions[bot]
bda9210354 Update README.md 2025-12-17 16:35:24 +00:00
Bas Nijholt
f57951e8dc Fix cf up -h output in README.md 2025-12-17 08:34:52 -08:00
basnijholt
ba8c04caf8 chore(docs): update TOC 2025-12-17 16:31:40 +00:00
Bas Nijholt
ff0658117d Add all --help outputs 2025-12-17 08:31:14 -08:00
Bas Nijholt
920b593d5f Fix mypy error: add type annotation for proc variable 2025-12-17 00:17:20 -08:00
Bas Nijholt
27d9b08ce2 Add -f shorthand for --full in apply command 2025-12-17 00:10:16 -08:00
Bas Nijholt
700cdacb4d Add 'a' alias for apply command (cf a = cf apply) 2025-12-17 00:09:45 -08:00
Bas Nijholt
3c7a532704 Add comments explaining lazy imports for startup performance 2025-12-17 00:08:28 -08:00
Bas Nijholt
6048f37ad5 Lazy import pydantic for faster CLI startup
- Create paths.py module with lightweight path utilities (no pydantic)
- Move Config imports to TYPE_CHECKING blocks in CLI modules
- Lazy import load_config only when needed

Combined with asyncssh lazy loading, cf --help now starts in ~150ms
instead of ~500ms (70% faster).
2025-12-17 00:07:15 -08:00
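The deferred-import pattern behind these two startup commits, sketched with a stdlib module standing in for the heavy dependency (`decimal` here plays the role of pydantic/asyncssh; names are illustrative):

```python
import sys


def stats_command() -> str:
    # Deferred import: the module loads only when the command body runs,
    # not when the CLI package is imported for `cf --help`. This is what
    # keeps help output fast.
    import decimal

    return str(decimal.Decimal("1.5") * 2)
```

Combined with `from __future__ import annotations` and `TYPE_CHECKING`-guarded imports for type hints, the module can still be fully annotated without paying the import cost at startup.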
Bas Nijholt
f18952633f Lazy import asyncssh for faster CLI startup
Move asyncssh import inside _run_ssh_command() so it's only loaded
when actually executing SSH commands. This cuts CLI import time
from 414ms to 200ms (52% faster).

cf --help now starts in ~260ms instead of ~500ms.
2025-12-16 23:59:37 -08:00
Bas Nijholt
437257e631 Add cf config symlink command
Creates a symlink from ~/.config/compose-farm/compose-farm.yaml to a
local config file using absolute paths (avoids broken relative symlinks).

Usage:
  cf config symlink                     # Link to ./compose-farm.yaml
  cf config symlink /path/to/config.yaml  # Link to specific file

State files are automatically stored next to the actual config (not
the symlink) since config_path.resolve() follows symlinks.
2025-12-16 23:56:58 -08:00
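The absolute-path detail is the whole trick: a relative symlink target is resolved against the link's own directory, so linking to `./compose-farm.yaml` from `~/.config/...` would dangle. A sketch with a hypothetical function name:

```python
from pathlib import Path


def create_config_symlink(target: Path, link: Path) -> Path:
    # Always link to an absolute path so the symlink resolves correctly
    # from any directory (avoids the broken-relative-symlink pitfall).
    target = target.resolve()
    link.parent.mkdir(parents=True, exist_ok=True)
    if link.is_symlink() or link.exists():
        link.unlink()
    link.symlink_to(target)
    return target
```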
Bas Nijholt
c720170f26 Add --full flag to apply command
When --full is passed, apply also runs 'docker compose up' on all
services (not just missing/migrating ones) to pick up any config
changes (compose file, .env, etc).

- cf apply          # Fast: state reconciliation only
- cf apply --full   # Thorough: also refresh all running services
2025-12-16 23:54:19 -08:00
Bas Nijholt
d9c03d6509 Feature apply as the hero command in README
- Update intro and NOTE block to lead with cf apply
- Rewrite "How It Works" to show declarative workflow first
- Move apply to top of command table (bolded)
- Reorder examples to show apply as "the main command"
- Update automation bullet to highlight one-command reconciliation
2025-12-16 23:49:49 -08:00
Bas Nijholt
3b7066711f Merge pull request #12 from basnijholt/feature/orphaned-services
Add apply command and refactor CLI for clearer UX
2025-12-16 23:34:34 -08:00
Bas Nijholt
6a630c40a1 Update apply command description to include starting missing services 2025-12-16 23:32:27 -08:00
Bas Nijholt
9f9c042b66 Remove up --migrate flag in favor of apply
Simplifies CLI by having one clear reconciliation command:
- cf up <service>  = start specific services (auto-migrates if needed)
- cf apply         = full reconcile (stop orphans + migrate + start missing)

The --migrate flag was redundant with 'apply --no-orphans'.
2025-12-16 23:27:19 -08:00
github-actions[bot]
2a6d7d0b85 Update README.md 2025-12-17 07:21:38 +00:00
Bas Nijholt
6d813ccd84 Merge af9c760fb8 into affed2edcf 2025-12-17 07:21:23 +00:00
Bas Nijholt
af9c760fb8 Add missing service detection to apply command
Previously, apply only handled:
1. Stopping orphans (in state, not in config)
2. Migrating services (in state, wrong host)

Now it also handles:
3. Starting missing services (in config, not in state)

This fixes the case where a service was stopped as an orphan, then
re-added to config - apply would say "nothing to do" instead of
starting it.

Added get_services_not_in_state() to state.py and updated tests.
2025-12-16 23:21:09 -08:00
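The three reconciliation buckets reduce to set operations over the service-to-host mappings. A sketch with an illustrative signature (the real code lives in `state.py`):

```python
def plan_apply(
    config: dict[str, str], state: dict[str, str]
) -> tuple[list[str], list[str], list[str]]:
    """Return (missing, migrations, orphans) for an apply run.

    Both arguments map service name -> host name.
    """
    missing = sorted(set(config) - set(state))      # in config, not running: start
    orphans = sorted(set(state) - set(config))      # running, not in config: stop
    migrations = sorted(                            # running on the wrong host: move
        s for s in set(config) & set(state) if config[s] != state[s]
    )
    return missing, migrations, orphans
```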
Bas Nijholt
90656b05e3 Add tests for apply command and down --orphaned flag
Tests cover:
- apply: nothing to do, dry-run preview, migrations, orphan cleanup, --no-orphans
- down --orphaned: no orphans, stops services, error on invalid combinations

Lifecycle.py coverage improved from 20% to 61%.
2025-12-16 23:15:46 -08:00
github-actions[bot]
d7a3d4e8c7 Update README.md 2025-12-17 07:10:52 +00:00
Bas Nijholt
35f0b8bf99 Merge be6b391121 into affed2edcf 2025-12-16 23:10:36 -08:00
Bas Nijholt
be6b391121 Refactor CLI commands for clearer UX
Separate "read state from reality" from "write config to reality":
- Rename `sync` to `refresh` (updates local state from running services)
- Add `apply` command (makes reality match config: migrate + stop orphans)
- Add `down --orphaned` flag (stops services removed from config)
- Modify `up --migrate` to only handle migrations (not orphans)

The new mental model:
- `refresh` = Reality → State (discover what's running)
- `apply` = Config → Reality (reconcile: migrate services + stop orphans)

Also extract private helper functions for reporting to match codebase style.
2025-12-16 23:06:42 -08:00
Bas Nijholt
7f56ba6a41 Add orphaned service detection and cleanup
When services are removed from config but still tracked in state,
`cf up --migrate` now stops them automatically. This makes the
config truly declarative - comment out a service, run migrate,
and it stops.

Changes:
- Add get_orphaned_services() to state.py for detecting orphans
- Add stop_orphaned_services() to operations.py for cleanup
- Update lifecycle.py to call stop_orphaned_services on --migrate
- Refactor _report_orphaned_services to use shared function
- Rename "missing_from_config" to "unmanaged" for clarity
- Add tests for get_orphaned_services
- Only remove from state on successful down (not on failure)
2025-12-16 22:53:26 -08:00
Bas Nijholt
4b3d7a861e Fix migration and update for services with buildable images
Use `pull --ignore-buildable` to skip images that have `build:` defined
in the compose file, preventing pull failures for locally-built images
like gitea-runner-custom. The build step then handles these images.
2025-12-16 19:42:24 -08:00
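`--ignore-buildable` is a real `docker compose pull` flag that skips any service with a `build:` section. Assembling the command might look like this (hypothetical helper; argument handling simplified):

```python
def compose_pull_cmd(compose_file: str, ignore_buildable: bool = True) -> list[str]:
    # --ignore-buildable prevents pull failures for locally-built images;
    # a subsequent `docker compose build` handles those instead.
    cmd = ["docker", "compose", "-f", compose_file, "pull"]
    if ignore_buildable:
        cmd.append("--ignore-buildable")
    return cmd
```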
Bas Nijholt
affed2edcf Refactor operations.py into smaller helpers
- Add PreflightResult NamedTuple with .ok property
- Extract _run_compose_step to handle raw output and interrupts
- Extract _up_single_service for single-host migration logic
- Deduplicate pull/build handling in _migrate_service
- Add was_running check to only rollback if service was running
2025-12-16 17:08:25 -08:00
Bas Nijholt
34642e8b8e Rollback to old host when migration up fails
When up fails on the target host after migration has already stopped
the service on the old host, attempt to restart on the old host as a
rollback. This prevents services from being left down after a failed
migration attempt.
2025-12-16 16:59:09 -08:00
Bas Nijholt
4c8b6c5209 Add init-network hint when network is missing 2025-12-16 16:22:06 -08:00
Bas Nijholt
2b38ed28c0 Skip traefik regeneration when all services fail
Don't update traefik config if all services in the operation failed.
This prevents adding routes for services that aren't actually running.
2025-12-16 16:21:09 -08:00
Bas Nijholt
26b57895ce Clean up orphaned containers when migration up fails
If `up` fails after migration (when we've already run `down` on the
old host), run `down` on the target host to clean up any containers
that were created but couldn't start (e.g., due to missing devices).

This prevents orphaned containers from lingering on the failed host.
2025-12-16 16:16:09 -08:00
Bas Nijholt
367da13fae Fix path existence check for permission denied
Use stat instead of test -e to distinguish "no such file" from
"permission denied". If stat fails with permission denied, the
path exists (just not accessible), so report it as existing.

Fixes false "missing path" errors for directories with restricted
permissions like /mnt/data/immich/library.
2025-12-16 15:08:38 -08:00
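The `stat`-based check boils down to classifying the remote command's result. A hypothetical helper (the real check runs `stat` over SSH and inspects its output):

```python
def path_exists_from_stat(returncode: int, stderr: str) -> bool:
    # A failed `stat` still proves existence when the failure is
    # "permission denied" rather than "no such file" -- the distinction
    # that `test -e` cannot make.
    if returncode == 0:
        return True
    return "permission denied" in stderr.lower()
```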
Bas Nijholt
d6ecd42559 Consolidate service requirement checks into shared function
Move check_service_requirements() to operations.py as a public function
that verifies paths, networks, and devices exist on a target host. Both
CLI check command and pre-flight migration checks now use this shared
function, eliminating duplication. Also adds device checking to the
check command output.
2025-12-16 14:53:59 -08:00
Bas Nijholt
233c33fa52 Add device checking to pre-flight migration checks
Services with devices: mappings (e.g., /dev/dri for GPU acceleration)
now have those devices verified on the target host before migration.
This prevents the scenario where a service is stopped on the old host
but fails to start on the new host due to missing devices.

Adds parse_devices() to extract host device paths from compose files.
2025-12-16 14:35:52 -08:00
Bas Nijholt
43974c5743 Abort on first Ctrl+C during migrations
Detect when SSH subprocess is killed by signal (exit code < 0 or 255)
and treat it as an interrupt. This allows single Ctrl+C to abort the
entire operation instead of requiring two presses.
2025-12-16 14:31:33 -08:00
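The detection rule is a small predicate on the exit code (hypothetical name; Python's subprocess APIs report signal deaths as negative return codes, and `ssh` itself exits 255 on errors):

```python
def was_interrupted(returncode: int) -> bool:
    # returncode < 0: subprocess killed by a signal (Python convention).
    # returncode == 255: ssh's own error exit, seen when the remote side
    # dies abnormally. Either way, treat the whole operation as aborted.
    return returncode < 0 or returncode == 255
```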
Bas Nijholt
cf94a62f37 docs: Clarify pull/build comments in migration 2025-12-16 14:26:48 -08:00
Bas Nijholt
81b4074827 Pre-build Dockerfile services during migration
After pulling images, also run build for services with Dockerfiles.
This ensures build-based services have their images ready before
stopping the old service, minimizing downtime.

If build fails, abort the migration and leave the service running
on the old host.

Extract _migrate_service helper to reduce function complexity.
2025-12-16 14:17:19 -08:00
Bas Nijholt
455657c8df Abort migration if pre-pull fails
If pulling images on the target host fails (e.g., rate limit),
abort the migration and leave the service running on the old host.
This prevents downtime when Docker Hub rate limits are hit.
2025-12-16 14:14:35 -08:00
Bas Nijholt
ee5a92788a Pre-pull images during migration to reduce downtime
When migrating a service to a new host, pull images on the target
host before stopping the service on the old host. This minimizes
downtime since images are cached when the up command runs.

Migration flow:
1. Pull images on new host (service still running on old)
2. Down on old host
3. Up on new host (fast, images already pulled)
2025-12-16 14:12:53 -08:00
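The numbered flow above, plus the abort and rollback handling from the neighboring commits, can be sketched with injectable step callables (hypothetical signature; each step returns True on success):

```python
from collections.abc import Callable

Step = Callable[[], bool]


def migrate(pull: Step, down_old: Step, up_new: Step, up_old: Step) -> str:
    if not pull():            # pre-pull/build on the target host first...
        return "aborted"      # ...so failure leaves the old host running
    if not down_old():
        return "aborted"
    if up_new():
        return "migrated"     # fast: images were already pulled
    if up_old():
        return "rolled-back"  # restarted on the old host after a failed up
    return "down"             # worst case: report the outage
```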
Bas Nijholt
2ba396a419 docs: Move Compose Farm to first column in comparison table 2025-12-16 13:48:40 -08:00
Bas Nijholt
7144d58160 build: Include LICENSE file in package distribution 2025-12-16 13:37:15 -08:00
Bas Nijholt
279fa2e5ef Create LICENSE 2025-12-16 13:36:35 -08:00
Bas Nijholt
dbe0b8b597 docs: Add app.py to CLAUDE.md architecture diagram 2025-12-16 13:14:51 -08:00
59 changed files with 5927 additions and 445 deletions

.github/check_readme_commands.py vendored Executable file

@@ -0,0 +1,88 @@
#!/usr/bin/env python3
"""Check that all CLI commands are documented in the README."""
from __future__ import annotations

import re
import sys
from pathlib import Path
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import typer

from compose_farm.cli import app


def get_all_commands(typer_app: typer.Typer, prefix: str = "cf") -> set[str]:
    """Extract all command names from a Typer app, including nested subcommands."""
    commands = set()
    # Get registered commands (skip hidden ones like aliases)
    for command in typer_app.registered_commands:
        if command.hidden:
            continue
        name = command.name
        if not name and command.callback:
            name = command.callback.__name__
        if name:
            commands.add(f"{prefix} {name}")
    # Get registered sub-apps (like 'config')
    for group in typer_app.registered_groups:
        sub_app = group.typer_instance
        sub_name = group.name
        if sub_app and sub_name:
            commands.add(f"{prefix} {sub_name}")
    # Don't recurse into subcommands - we only document the top-level subcommand
    return commands


def get_documented_commands(readme_path: Path) -> set[str]:
    """Extract commands documented in README from help output sections."""
    content = readme_path.read_text()
    # Match patterns like: <code>cf command --help</code>
    pattern = r"<code>(cf\s+[\w-]+)\s+--help</code>"
    matches = re.findall(pattern, content)
    return set(matches)


def main() -> int:
    """Check that all CLI commands are documented in the README."""
    readme_path = Path(__file__).parent.parent / "README.md"
    if not readme_path.exists():
        print(f"ERROR: README.md not found at {readme_path}")
        return 1
    cli_commands = get_all_commands(app)
    documented_commands = get_documented_commands(readme_path)
    # Also check for the main 'cf' help
    if "<code>cf --help</code>" in readme_path.read_text():
        documented_commands.add("cf")
        cli_commands.add("cf")
    missing = cli_commands - documented_commands
    extra = documented_commands - cli_commands
    if missing or extra:
        if missing:
            print("ERROR: Commands missing from README --help documentation:")
            for cmd in sorted(missing):
                print(f" - {cmd}")
        if extra:
            print("WARNING: Commands documented but not in CLI:")
            for cmd in sorted(extra):
                print(f" - {cmd}")
        return 1
    print(f"✓ All {len(cli_commands)} commands documented in README")
    return 0


if __name__ == "__main__":
    sys.exit(main())

.gitignore vendored

@@ -42,3 +42,5 @@ htmlcov/
compose-farm.yaml
!examples/compose-farm.yaml
coverage.xml
.env
homepage/


@@ -1,4 +1,13 @@
repos:
  - repo: local
    hooks:
      - id: check-readme-commands
        name: Check README documents all CLI commands
        entry: uv run python .github/check_readme_commands.py
        language: system
        files: ^(README\.md|src/compose_farm/cli/.*)$
        pass_filenames: false
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:


@@ -11,11 +11,12 @@
```
compose_farm/
├── cli/ # CLI subpackage
│ ├── __init__.py # Main Typer app, version callback
│ ├── __init__.py # Imports modules to trigger command registration
│ ├── app.py # Shared Typer app instance, version callback
│ ├── common.py # Shared helpers, options, progress bar utilities
│ ├── config.py # Config subcommand (init, show, path, validate, edit)
│ ├── lifecycle.py # up, down, pull, restart, update commands
│ ├── management.py # sync, check, init-network, traefik-file commands
│ ├── lifecycle.py # up, down, pull, restart, update, apply commands
│ ├── management.py # refresh, check, init-network, traefik-file commands
│ └── monitoring.py # logs, ps, stats commands
├── config.py # Pydantic models, YAML loading
├── compose.py # Compose file parsing (.env, ports, volumes, networks)
@@ -27,6 +28,10 @@ compose_farm/
└── traefik.py # Traefik file-provider config generation from labels
```
## Web UI Icons
Icons use [Lucide](https://lucide.dev/). Add new icons as macros in `web/templates/partials/icons.html` by copying SVG paths from their site. The `action_btn`, `stat_card`, and `collapse` macros in `components.html` accept an optional `icon` parameter.
## Key Design Decisions
1. **Hybrid SSH approach**: asyncssh for parallel streaming with prefixes; native `ssh -t` for raw mode (progress bars)
@@ -54,15 +59,16 @@ CLI available as `cf` or `compose-farm`.
| Command | Description |
|---------|-------------|
| `up` | Start services (`docker compose up -d`), auto-migrates if host changed. Use `--migrate` for auto-detection |
| `down` | Stop services (`docker compose down`) |
| `up` | Start services (`docker compose up -d`), auto-migrates if host changed |
| `down` | Stop services (`docker compose down`). Use `--orphaned` to stop services removed from config |
| `pull` | Pull latest images |
| `restart` | `down` + `up -d` |
| `update` | `pull` + `down` + `up -d` |
| `apply` | Make reality match config: migrate services + stop orphans. Use `--dry-run` to preview |
| `logs` | Show service logs |
| `ps` | Show status of all services |
| `stats` | Show overview (hosts, services, pending migrations; `--live` for container counts) |
| `sync` | Discover running services, update state, capture image digests |
| `refresh` | Update state from reality: discover running services, capture image digests |
| `check` | Validate config, traefik labels, mounts, networks; show host compatibility |
| `init-network` | Create Docker network on hosts with consistent subnet/gateway |
| `traefik-file` | Generate Traefik file-provider config from compose labels |


@@ -1,16 +1,20 @@
# syntax=docker/dockerfile:1
FROM ghcr.io/astral-sh/uv:python3.14-alpine
# Install SSH client (required for remote host connections)
# Build stage - install with uv
FROM ghcr.io/astral-sh/uv:python3.14-alpine AS builder
ARG VERSION
RUN uv tool install --compile-bytecode "compose-farm[web]${VERSION:+==$VERSION}"
# Runtime stage - minimal image without uv
FROM python:3.14-alpine
# Install only runtime requirements
RUN apk add --no-cache openssh-client
# Install compose-farm from PyPI
ARG VERSION
RUN uv tool install compose-farm${VERSION:+==$VERSION}
# Copy installed tool virtualenv and bin symlinks from builder
COPY --from=builder /root/.local/share/uv/tools/compose-farm /root/.local/share/uv/tools/compose-farm
COPY --from=builder /usr/local/bin/cf /usr/local/bin/compose-farm /usr/local/bin/
# Add uv tool bin to PATH
ENV PATH="/root/.local/bin:$PATH"
# Default entrypoint
ENTRYPOINT ["cf"]
CMD ["--help"]

LICENSE Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 Bas Nijholt
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md

@@ -10,7 +10,7 @@
A minimal CLI tool to run Docker Compose commands across multiple hosts via SSH.
> [!NOTE]
> Run `docker compose` commands across multiple hosts via SSH. One YAML maps services to hosts. Change the mapping, run `up`, and it auto-migrates. No Kubernetes, no Swarm, no magic.
> Run `docker compose` commands across multiple hosts via SSH. One YAML maps services to hosts. Run `cf apply` and reality matches your config—services start, migrate, or stop as needed. No Kubernetes, no Swarm, no magic.
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
@@ -27,6 +27,7 @@ A minimal CLI tool to run Docker Compose commands across multiple hosts via SSH.
- [Multi-Host Services](#multi-host-services)
- [Config Command](#config-command)
- [Usage](#usage)
- [CLI `--help` Output](#cli---help-output)
- [Auto-Migration](#auto-migration)
- [Traefik Multihost Ingress (File Provider)](#traefik-multihost-ingress-file-provider)
- [Comparison with Alternatives](#comparison-with-alternatives)
@@ -43,7 +44,7 @@ I used to run 100+ Docker Compose stacks on a single machine that kept running o
Both require changes to your compose files. **Compose Farm requires zero changes**—your existing `docker-compose.yml` files work as-is.
I also wanted a declarative setup—one config file that defines where everything runs. Change the config, run `up`, and services migrate automatically. See [Comparison with Alternatives](#comparison-with-alternatives) for how this compares to other approaches.
I also wanted a declarative setup—one config file that defines where everything runs. Change the config, run `cf apply`, and everything reconciles—services start, migrate, or stop as needed. See [Comparison with Alternatives](#comparison-with-alternatives) for how this compares to other approaches.
<p align="center">
<a href="https://xkcd.com/927/">
@@ -56,18 +57,26 @@ Before you say it—no, this is not a new standard. I changed nothing about my e
Compose Farm just automates what you'd do by hand:
- Runs `docker compose` commands over SSH
- Tracks which service runs on which host
- Auto-migrates services when you change the host assignment
- **One command (`cf apply`) to reconcile everything**—start missing services, migrate moved ones, stop removed ones
- Generates Traefik file-provider config for cross-host routing
**It's a convenience wrapper, not a new paradigm.**
## How It Works
1. You run `cf up plex`
2. Compose Farm looks up which host runs `plex` (e.g., `server-1`)
3. It SSHs to `server-1` (or runs locally if `localhost`)
4. It executes `docker compose -f /opt/compose/plex/docker-compose.yml up -d`
5. Output is streamed back with `[plex]` prefix
**The declarative way** — run `cf apply` and reality matches your config:
1. Compose Farm compares your config to what's actually running
2. Services in config but not running? **Starts them**
3. Services on the wrong host? **Migrates them** (stops on old host, starts on new)
4. Services running but removed from config? **Stops them**
**Under the hood** — each service operation is just SSH + docker compose:
1. Look up which host runs the service (e.g., `plex` → `server-1`)
2. SSH to `server-1` (or run locally if `localhost`)
3. Execute `docker compose -f /opt/compose/plex/docker-compose.yml up -d`
4. Stream output back with `[plex]` prefix
That's it. No orchestration, no service discovery, no magic.
@@ -228,6 +237,7 @@ The CLI is available as both `compose-farm` and the shorter `cf` alias.
| Command | Description |
|---------|-------------|
| **`cf apply`** | **Make reality match config (start + migrate + stop orphans)** |
| `cf up <svc>` | Start service (auto-migrates if host changed) |
| `cf down <svc>` | Stop service |
| `cf restart <svc>` | down + up |
@@ -235,7 +245,7 @@ The CLI is available as both `compose-farm` and the shorter `cf` alias.
| `cf pull <svc>` | Pull latest images |
| `cf logs -f <svc>` | Follow logs |
| `cf ps` | Show status of all services |
| `cf sync` | Discover running services + capture image digests |
| `cf refresh` | Update state from running services |
| `cf check` | Validate config, mounts, networks |
| `cf init-network` | Create Docker network on hosts |
| `cf traefik-file` | Generate Traefik file-provider config |
@@ -246,13 +256,17 @@ All commands support `--all` to operate on all services.
Each command replaces: look up host → SSH → find compose file → run `ssh host "cd /opt/compose/plex && docker compose up -d"`.
```bash
# Start services (auto-migrates if host changed in config)
cf up plex jellyfin
cf up --all
cf up --migrate # only services needing migration (state ≠ config)
# The main command: make reality match your config
cf apply # start missing + migrate + stop orphans
cf apply --dry-run # preview what would change
cf apply --no-orphans # skip stopping orphaned services
cf apply --full # also refresh all services (picks up config changes)
# Stop services
cf down plex
# Or operate on individual services
cf up plex jellyfin # start services (auto-migrates if host changed)
cf up --all
cf down plex # stop services
cf down --orphaned # stop services removed from config
# Pull latest images
cf pull --all
@@ -263,9 +277,9 @@ cf restart plex
# Update (pull + down + up) - the end-to-end update command
cf update --all
# Sync state with reality (discovers running services + captures image digests)
cf sync # updates state.yaml and dockerfarm-log.toml
cf sync --dry-run # preview without writing
# Update state from reality (discovers running services + captures digests)
cf refresh # updates state.yaml and dockerfarm-log.toml
cf refresh --dry-run # preview without writing
# Validate config, traefik labels, mounts, and networks
cf check # full validation (includes SSH checks)
@@ -284,6 +298,10 @@ cf logs -f plex # follow
cf ps
```
### CLI `--help` Output
Full `--help` output for each command. See the [Usage](#usage) table above for a quick overview.
<details>
<summary>See the output of <code>cf --help</code></summary>
@@ -316,12 +334,13 @@ cf ps
│ down Stop services (docker compose down). │
│ pull Pull latest images (docker compose pull). │
│ restart Restart services (down + up). │
│ update Update services (pull + down + up).
│ update Update services (pull + build + down + up). │
│ apply Make reality match config (start, migrate, stop as needed). │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Configuration ──────────────────────────────────────────────────────────────╮
│ traefik-file Generate a Traefik file-provider fragment from compose │
│ Traefik labels. │
sync Sync local state with running services.
refresh Update local state from running services. │
│ check Validate configuration, traefik labels, mounts, and networks. │
│ init-network Create Docker network on hosts with consistent settings. │
│ config Manage compose-farm configuration files. │
@@ -331,6 +350,9 @@ cf ps
│ ps Show status of all services. │
│ stats Show overview statistics for hosts and services. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Server ─────────────────────────────────────────────────────────────────────╮
│ web Start the web UI server. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
@@ -338,6 +360,541 @@ cf ps
</details>
**Lifecycle**
<details>
<summary>See the output of <code>cf up --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf up --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf up [OPTIONS] [SERVICES]...
Start services (docker compose up -d). Auto-migrates if host changed.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ services [SERVICES]... Services to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all services │
│ --host -H TEXT Filter to services on this host │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
<details>
<summary>See the output of <code>cf down --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf down --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf down [OPTIONS] [SERVICES]...
Stop services (docker compose down).
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ services [SERVICES]... Services to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all services │
│ --orphaned Stop orphaned services (in state but removed from │
│ config) │
│ --host -H TEXT Filter to services on this host │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
<details>
<summary>See the output of <code>cf pull --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf pull --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf pull [OPTIONS] [SERVICES]...
Pull latest images (docker compose pull).
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ services [SERVICES]... Services to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all services │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
<details>
<summary>See the output of <code>cf restart --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf restart --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf restart [OPTIONS] [SERVICES]...
Restart services (down + up).
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ services [SERVICES]... Services to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all services │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
<details>
<summary>See the output of <code>cf update --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf update --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf update [OPTIONS] [SERVICES]...
Update services (pull + build + down + up).
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ services [SERVICES]... Services to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all services │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
<details>
<summary>See the output of <code>cf apply --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf apply --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf apply [OPTIONS]
Make reality match config (start, migrate, stop as needed).
This is the "reconcile" command that ensures running services match your
config file. It will:
1. Stop orphaned services (in state but removed from config) 2. Migrate
services on wrong host (host in state ≠ host in config) 3. Start missing
services (in config but not in state)
Use --dry-run to preview changes before applying. Use --no-orphans to only
migrate/start without stopping orphaned services. Use --full to also run 'up'
on all services (picks up compose/env changes).
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --dry-run -n Show what would change without executing │
│ --no-orphans Only migrate, don't stop orphaned services │
│ --full -f Also run up on all services to apply config │
│ changes │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
**Configuration**
<details>
<summary>See the output of <code>cf traefik-file --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf traefik-file --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf traefik-file [OPTIONS] [SERVICES]...
Generate a Traefik file-provider fragment from compose Traefik labels.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ services [SERVICES]... Services to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all services │
│ --output -o PATH Write Traefik file-provider YAML to this path │
│ (stdout if omitted) │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
<details>
<summary>See the output of <code>cf refresh --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf refresh --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf refresh [OPTIONS]
Update local state from running services.
Discovers which services are running on which hosts, updates the state file,
and captures image digests. This is a read operation - it updates your local
state to match reality, not the other way around.
Use 'cf apply' to make reality match your config (stop orphans, migrate).
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --config -c PATH Path to config file │
│ --log-path -l PATH Path to Dockerfarm TOML log │
│ --dry-run -n Show what would change without writing │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
<details>
<summary>See the output of <code>cf check --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf check --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf check [OPTIONS] [SERVICES]...
Validate configuration, traefik labels, mounts, and networks.
Without arguments: validates all services against configured hosts. With
service arguments: validates specific services and shows host compatibility.
Use --local to skip SSH-based checks for faster validation.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ services [SERVICES]... Services to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --local Skip SSH-based checks (faster) │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
<details>
<summary>See the output of <code>cf init-network --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf init-network --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf init-network [OPTIONS] [HOSTS]...
Create Docker network on hosts with consistent settings.
Creates an external Docker network that services can use for cross-host
communication. Uses the same subnet/gateway on all hosts to ensure consistent
networking.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ hosts [HOSTS]... Hosts to create network on (default: all) │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --network -n TEXT Network name [default: mynetwork] │
│ --subnet -s TEXT Network subnet [default: 172.20.0.0/16] │
│ --gateway -g TEXT Network gateway [default: 172.20.0.1] │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
<details>
<summary>See the output of <code>cf config --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf config --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf config [OPTIONS] COMMAND [ARGS]...
Manage compose-farm configuration files.
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ───────────────────────────────────────────────────────────────────╮
│ init Create a new config file with documented example. │
│ edit Open the config file in your default editor. │
│ show Display the config file location and contents. │
│ path Print the config file path (useful for scripting). │
│ validate Validate the config file syntax and schema. │
│ symlink Create a symlink from the default config location to a config │
│ file. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
**Monitoring**
<details>
<summary>See the output of <code>cf logs --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf logs --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf logs [OPTIONS] [SERVICES]...
Show service logs.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ services [SERVICES]... Services to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all services │
│ --host -H TEXT Filter to services on this host │
│ --follow -f Follow logs │
│ --tail -n INTEGER Number of lines (default: 20 for --all, 100 │
│ otherwise) │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
<details>
<summary>See the output of <code>cf ps --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf ps --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf ps [OPTIONS]
Show status of all services.
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
<details>
<summary>See the output of <code>cf stats --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf stats --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf stats [OPTIONS]
Show overview statistics for hosts and services.
Without --live: Shows config/state info (hosts, services, pending migrations).
With --live: Also queries Docker on each host for container counts.
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --live -l Query Docker for live container stats │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
**Server**
<details>
<summary>See the output of <code>cf web --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf web --help -->
<!-- echo '```' -->
<!-- CODE:END -->
</details>
### Auto-Migration
When you change a service's host assignment in config and run `up`, Compose Farm automatically:
@@ -346,7 +903,7 @@ When you change a service's host assignment in config and run `up`, Compose Farm
3. Runs `up -d` on the new host
4. Updates state tracking
Use `cf apply` to automatically reconcile all services—it finds and migrates services on wrong hosts, stops orphaned services, and starts missing services.
```yaml
# Before: plex runs on server-1
@@ -358,6 +915,14 @@ services:
plex: server-2 # Compose Farm will migrate automatically
```
**Orphaned services**: When you remove (or comment out) a service from config, it becomes "orphaned"—tracked in state but no longer in config. Use these commands to handle orphans:
- `cf apply` — Migrate services AND stop orphans (the full reconcile)
- `cf down --orphaned` — Only stop orphaned services
- `cf apply --dry-run` — Preview what would change before applying
This makes the config truly declarative: comment out a service, run `cf apply`, and it stops.
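For instance, with a config like the following (service and host names are illustrative), commenting out a service and re-running `cf apply` is all it takes:

```yaml
# compose-farm.yaml (illustrative names)
services:
  plex: server-1
  # jellyfin: server-1   # commented out -> now orphaned; `cf apply` stops it
```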
## Traefik Multihost Ingress (File Provider)
If you run a single Traefik instance on one "frontdoor" host and want it to route to
@@ -462,16 +1027,16 @@ Update your Traefik config to use directory watching instead of a single file:
There are many ways to run containers on multiple hosts. Here is where Compose Farm sits:
| | Compose Farm | Docker Contexts | K8s / Swarm | Ansible / Terraform | Portainer / Coolify |
|---|:---:|:---:|:---:|:---:|:---:|
| No compose rewrites | ✅ | | | ✅ | ✅ |
| Version controlled | ✅ | ✅ | ✅ | | |
| State tracking | | | ✅ | ✅ | ✅ |
| Auto-migration | ✅ | ❌ | ✅ | ❌ | ❌ |
| Interactive CLI | | ❌ | ❌ | ❌ | |
| Parallel execution | | | ✅ | ✅ | ✅ |
| Agentless | ✅ | ✅ | ❌ | ✅ | ❌ |
| High availability | ❌ | | | ❌ | ❌ |
**Docker Contexts** — You can use `docker context create remote ssh://...` and `docker compose --context remote up`. But it's manual: you must remember which host runs which service, there's no global view, no parallel execution, and no auto-migration.


@@ -4,8 +4,31 @@ services:
volumes:
- ${SSH_AUTH_SOCK}:/ssh-agent:ro
# Compose directory (contains compose files AND compose-farm.yaml config)
- ${CF_COMPOSE_DIR:-/opt/stacks}:${CF_COMPOSE_DIR:-/opt/stacks}
environment:
- SSH_AUTH_SOCK=/ssh-agent
# Config file path (state stored alongside it)
- CF_CONFIG=${CF_COMPOSE_DIR:-/opt/stacks}/compose-farm.yaml
web:
image: ghcr.io/basnijholt/compose-farm:latest
command: web --host 0.0.0.0 --port 9000
volumes:
- ${SSH_AUTH_SOCK}:/ssh-agent:ro
- ${CF_COMPOSE_DIR:-/opt/stacks}:${CF_COMPOSE_DIR:-/opt/stacks}
environment:
- SSH_AUTH_SOCK=/ssh-agent
- CF_CONFIG=${CF_COMPOSE_DIR:-/opt/stacks}/compose-farm.yaml
labels:
- traefik.enable=true
- traefik.http.routers.compose-farm.rule=Host(`compose-farm.${DOMAIN}`)
- traefik.http.routers.compose-farm.entrypoints=websecure
- traefik.http.routers.compose-farm-local.rule=Host(`compose-farm.local`)
- traefik.http.routers.compose-farm-local.entrypoints=web
- traefik.http.services.compose-farm.loadbalancer.server.port=9000
networks:
- mynetwork
networks:
mynetwork:
external: true

docs/reddit-post.md (new file)

@@ -0,0 +1,79 @@
# Title options
- Multi-host Docker Compose without Kubernetes or file changes
- I built a CLI to run Docker Compose across hosts. Zero changes to your files.
- I made a CLI to run Docker Compose across multiple hosts without Kubernetes or Swarm
---
I've been running 100+ Docker Compose stacks on a single machine, and it kept running out of memory. I needed to spread services across multiple hosts, but:
- **Kubernetes** felt like overkill. I don't need pods, ingress controllers, or 10x more YAML.
- **Docker Swarm** is basically in maintenance mode.
- Both require rewriting my compose files.
So I built **Compose Farm**, a simple CLI that runs `docker compose` commands over SSH. No agents, no cluster setup, no changes to your existing compose files.
## How it works
One YAML file maps services to hosts:
```yaml
compose_dir: /opt/stacks
hosts:
nuc: 192.168.1.10
hp: 192.168.1.11
services:
plex: nuc
jellyfin: hp
sonarr: nuc
radarr: nuc
```
Then just:
```bash
cf up plex # runs on nuc via SSH
cf apply         # makes running services match the config on all hosts (like Terraform apply)
cf up --all # starts everything on their assigned hosts
cf logs -f plex # streams logs
cf ps # shows status across all hosts
```
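The mapping above boils down to something like this sketch (hypothetical, not Compose Farm's actual code — names and IPs are made up):

```python
# Sketch: resolve a service to its host, then build the SSH + compose command.
config = {
    "compose_dir": "/opt/stacks",
    "hosts": {"nuc": "192.168.1.10", "hp": "192.168.1.11"},
    "services": {"plex": "nuc", "jellyfin": "hp"},
}

def compose_command(service: str, action: str = "up -d") -> str:
    """Return the shell command that would run `docker compose` on the right host."""
    host = config["hosts"][config["services"][service]]
    project = f"{config['compose_dir']}/{service}"
    return f"ssh {host} docker compose --project-directory {project} {action}"

print(compose_command("plex"))
# → ssh 192.168.1.10 docker compose --project-directory /opt/stacks/plex up -d
```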
## Auto-migration
Change a service's host in the config and run `cf up`. It stops the service on the old host and starts it on the new one. No manual SSH needed.
```yaml
# Before
plex: nuc
# After (just change this)
plex: hp
```
```bash
cf up plex # migrates automatically
```
## Requirements
- SSH key auth to your hosts
- Same paths on all hosts (I use NFS from my NAS)
- That's it. No agents, no daemons.
## What it doesn't do
- No high availability (if a host goes down, services don't auto-migrate)
- No overlay networking (containers on different hosts can't talk via Docker DNS)
- No health checks or automatic restarts
It's a convenience wrapper around `docker compose` + SSH. If you need failover or cross-host container networking, you probably do need Swarm or Kubernetes.
## Links
- GitHub: https://github.com/basnijholt/compose-farm
- Install: `uv tool install compose-farm` or `pip install compose-farm`
Happy to answer questions or take feedback!

hatch_build.py (new file)

@@ -0,0 +1,170 @@
"""Hatch build hook to vendor CDN assets for offline use.
During wheel builds, this hook:
1. Parses base.html to find elements with data-vendor attributes
2. Downloads each CDN asset to a temporary vendor directory
3. Rewrites base.html to use local /static/vendor/ paths
4. Fetches and bundles license information
5. Includes everything in the wheel via force_include
The source base.html keeps CDN links for development; only the
distributed wheel has vendored assets.
"""
from __future__ import annotations
import re
import shutil
import subprocess
import tempfile
from pathlib import Path
from typing import Any
from urllib.request import Request, urlopen
from hatchling.builders.hooks.plugin.interface import BuildHookInterface
# Matches elements with data-vendor attribute: extracts URL and target filename
# Example: <script src="https://..." data-vendor="htmx.js">
# Captures: (1) src/href, (2) URL, (3) attributes between, (4) vendor filename
VENDOR_PATTERN = re.compile(r'(src|href)="(https://[^"]+)"([^>]*?)data-vendor="([^"]+)"')
# License URLs for each package (GitHub raw URLs)
LICENSE_URLS: dict[str, tuple[str, str]] = {
"htmx": ("MIT", "https://raw.githubusercontent.com/bigskysoftware/htmx/master/LICENSE"),
"xterm": ("MIT", "https://raw.githubusercontent.com/xtermjs/xterm.js/master/LICENSE"),
"daisyui": ("MIT", "https://raw.githubusercontent.com/saadeghi/daisyui/master/LICENSE"),
"tailwindcss": (
"MIT",
"https://raw.githubusercontent.com/tailwindlabs/tailwindcss/master/LICENSE",
),
}
def _download(url: str) -> bytes:
"""Download a URL, trying urllib first then curl as fallback."""
# Try urllib first
try:
req = Request( # noqa: S310
url, headers={"User-Agent": "Mozilla/5.0 (compatible; compose-farm build)"}
)
with urlopen(req, timeout=30) as resp: # noqa: S310
return resp.read() # type: ignore[no-any-return]
except Exception: # noqa: S110
pass # Fall through to curl
# Fallback to curl (handles SSL proxies better)
result = subprocess.run(
["curl", "-fsSL", "--max-time", "30", url], # noqa: S607
capture_output=True,
check=True,
)
return bytes(result.stdout)
def _generate_licenses_file(temp_dir: Path) -> None:
"""Download and combine license files into LICENSES.txt."""
lines = [
"# Vendored Dependencies - License Information",
"",
"This file contains license information for JavaScript/CSS libraries",
"bundled with compose-farm for offline use.",
"",
"=" * 70,
"",
]
for pkg_name, (license_type, license_url) in LICENSE_URLS.items():
lines.append(f"## {pkg_name} ({license_type})")
lines.append(f"Source: {license_url}")
lines.append("")
lines.append(_download(license_url).decode("utf-8"))
lines.append("")
lines.append("=" * 70)
lines.append("")
(temp_dir / "LICENSES.txt").write_text("\n".join(lines))
class VendorAssetsHook(BuildHookInterface): # type: ignore[misc]
"""Hatch build hook that vendors CDN assets into the wheel."""
PLUGIN_NAME = "vendor-assets"
def initialize(
self,
_version: str,
build_data: dict[str, Any],
) -> None:
"""Download CDN assets and prepare them for inclusion in the wheel."""
# Only run for wheel builds
if self.target_name != "wheel":
return
# Paths
src_dir = Path(self.root) / "src" / "compose_farm"
base_html_path = src_dir / "web" / "templates" / "base.html"
if not base_html_path.exists():
return
# Create temp directory for vendored assets
temp_dir = Path(tempfile.mkdtemp(prefix="compose_farm_vendor_"))
vendor_dir = temp_dir / "vendor"
vendor_dir.mkdir()
# Read and parse base.html
html_content = base_html_path.read_text()
url_to_filename: dict[str, str] = {}
# Find all elements with data-vendor attribute and download them
for match in VENDOR_PATTERN.finditer(html_content):
url = match.group(2)
filename = match.group(4)
if url in url_to_filename:
continue
url_to_filename[url] = filename
content = _download(url)
(vendor_dir / filename).write_bytes(content)
if not url_to_filename:
return
# Generate LICENSES.txt
_generate_licenses_file(vendor_dir)
# Rewrite HTML to use local paths (remove data-vendor, update URL)
def replace_vendor_tag(match: re.Match[str]) -> str:
attr = match.group(1) # src or href
url = match.group(2)
between = match.group(3) # attributes between URL and data-vendor
filename = match.group(4)
if url in url_to_filename:
return f'{attr}="/static/vendor/{filename}"{between}'
return match.group(0)
modified_html = VENDOR_PATTERN.sub(replace_vendor_tag, html_content)
# Write modified base.html to temp
templates_dir = temp_dir / "templates"
templates_dir.mkdir()
(templates_dir / "base.html").write_text(modified_html)
# Add to force_include to override files in the wheel
force_include = build_data.setdefault("force_include", {})
force_include[str(vendor_dir)] = "compose_farm/web/static/vendor"
force_include[str(templates_dir / "base.html")] = "compose_farm/web/templates/base.html"
# Store temp_dir path for cleanup
self._temp_dir = temp_dir
def finalize(
self,
_version: str,
_build_data: dict[str, Any],
_artifact_path: str,
) -> None:
"""Clean up temporary directory after build."""
if hasattr(self, "_temp_dir") and self._temp_dir.exists():
shutil.rmtree(self._temp_dir, ignore_errors=True)
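As a quick sanity check of the `VENDOR_PATTERN` regex above, here is a standalone sketch (the tag and filenames are made up for illustration):

```python
import re

# Same pattern as in hatch_build.py: captures attr name, CDN URL,
# attributes in between, and the local vendor filename.
VENDOR_PATTERN = re.compile(r'(src|href)="(https://[^"]+)"([^>]*?)data-vendor="([^"]+)"')

tag = '<script src="https://cdn.example.com/htmx.min.js" defer data-vendor="htmx.js"></script>'
m = VENDOR_PATTERN.search(tag)
assert m is not None
print(m.group(1))  # → src
print(m.group(2))  # → https://cdn.example.com/htmx.min.js
print(m.group(4))  # → htmx.js
```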

pyproject.toml

@@ -4,6 +4,7 @@ dynamic = ["version"]
description = "Compose Farm - run docker compose commands across multiple hosts"
readme = "README.md"
license = "MIT"
license-files = ["LICENSE"]
authors = [
{ name = "Bas Nijholt", email = "bas@nijho.lt" }
]
@@ -47,6 +48,13 @@ dependencies = [
"rich>=13.0.0",
]
[project.optional-dependencies]
web = [
"fastapi[standard]>=0.109.0",
"jinja2>=3.1.0",
"websockets>=12.0",
]
[project.urls]
Homepage = "https://github.com/basnijholt/compose-farm"
Repository = "https://github.com/basnijholt/compose-farm"
@@ -71,6 +79,9 @@ version-file = "src/compose_farm/_version.py"
[tool.hatch.build.targets.wheel]
packages = ["src/compose_farm"]
[tool.hatch.build.hooks.custom]
# Vendors CDN assets (JS/CSS) into the wheel for offline use
[tool.ruff]
target-version = "py311"
line-length = 100
@@ -100,7 +111,7 @@ ignore = [
]
[tool.ruff.lint.per-file-ignores]
"tests/*" = ["S101", "PLR2004", "S108", "D102", "D103"] # relaxed docstrings + asserts in tests
"tests/*" = ["S101", "PLR2004", "S108", "D102", "D103", "PLC0415", "ARG001", "ARG002", "TC003"] # relaxed for tests
[tool.ruff.lint.mccabe]
max-complexity = 18
@@ -118,6 +129,10 @@ ignore_missing_imports = true
module = "tests.*"
disallow_untyped_decorators = false
[[tool.mypy.overrides]]
module = "compose_farm.web.*"
disallow_untyped_decorators = false
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
@@ -152,4 +167,11 @@ dev = [
"ruff>=0.14.8",
"types-pyyaml>=6.0.12.20250915",
"markdown-code-runner>=0.7.0",
# Web deps for type checking (these ship with inline types)
"fastapi>=0.109.0",
"uvicorn[standard]>=0.27.0",
"jinja2>=3.1.0",
"websockets>=12.0",
# For FastAPI TestClient
"httpx>=0.28.0",
]


@@ -8,6 +8,7 @@ from compose_farm.cli import (
lifecycle, # noqa: F401
management, # noqa: F401
monitoring, # noqa: F401
web, # noqa: F401
)
# Import the shared app instance


@@ -18,14 +18,14 @@ from rich.progress import (
TimeElapsedColumn,
)
from compose_farm.config import Config, load_config
from compose_farm.console import console, err_console
from compose_farm.executor import CommandResult # noqa: TC001
from compose_farm.traefik import generate_traefik_config, render_traefik_config
if TYPE_CHECKING:
from collections.abc import Callable, Coroutine, Generator
from compose_farm.config import Config
from compose_farm.executor import CommandResult
_T = TypeVar("_T")
@@ -57,7 +57,9 @@ _STATS_PREVIEW_LIMIT = 3 # Max number of pending migrations to show by name
@contextlib.contextmanager
def progress_bar(label: str, total: int) -> Generator[tuple[Progress, TaskID], None, None]:
def progress_bar(
label: str, total: int, *, initial_description: str = "[dim]connecting...[/]"
) -> Generator[tuple[Progress, TaskID], None, None]:
"""Create a standardized progress bar with consistent styling.
Yields (progress, task_id). Use progress.update(task_id, advance=1, description=...)
@@ -75,12 +77,15 @@ def progress_bar(label: str, total: int) -> Generator[tuple[Progress, TaskID], N
console=console,
transient=True,
) as progress:
task_id = progress.add_task("", total=total)
task_id = progress.add_task(initial_description, total=total)
yield progress, task_id
def load_config_or_exit(config_path: Path | None) -> Config:
"""Load config or exit with a friendly error message."""
# Lazy import: pydantic adds ~50ms to startup, only load when actually needed
from compose_farm.config import load_config # noqa: PLC0415
try:
return load_config(config_path)
except FileNotFoundError as e:
@@ -93,7 +98,10 @@ def get_services(
all_services: bool,
config_path: Path | None,
) -> tuple[list[str], Config]:
"""Resolve service list and load config."""
"""Resolve service list and load config.
Supports "." as shorthand for the current directory name.
"""
config = load_config_or_exit(config_path)
if all_services:
@@ -101,12 +109,28 @@ def get_services(
if not services:
err_console.print("[red]✗[/] Specify services or use --all")
raise typer.Exit(1)
return list(services), config
# Resolve "." to current directory name
resolved = [Path.cwd().name if svc == "." else svc for svc in services]
# Validate all services exist in config
unknown = [svc for svc in resolved if svc not in config.services]
if unknown:
for svc in unknown:
err_console.print(f"[red]✗[/] Unknown service: [cyan]{svc}[/]")
err_console.print("[dim]Hint: Add the service to compose-farm.yaml or use --all[/]")
raise typer.Exit(1)
return resolved, config
def run_async(coro: Coroutine[None, None, _T]) -> _T:
"""Run async coroutine."""
return asyncio.run(coro)
try:
return asyncio.run(coro)
except KeyboardInterrupt:
console.print("\n[yellow]Interrupted[/]")
raise typer.Exit(130) from None # Standard exit code for SIGINT
def report_results(results: list[CommandResult]) -> None:
@@ -139,11 +163,27 @@ def report_results(results: list[CommandResult]) -> None:
raise typer.Exit(1)
def maybe_regenerate_traefik(cfg: Config) -> None:
"""Regenerate traefik config if traefik_file is configured."""
def maybe_regenerate_traefik(
cfg: Config,
results: list[CommandResult] | None = None,
) -> None:
"""Regenerate traefik config if traefik_file is configured.
If results are provided, skips regeneration if all services failed.
"""
if cfg.traefik_file is None:
return
# Skip if all services failed
if results and not any(r.success for r in results):
return
# Lazy import: traefik/yaml adds startup time, only load when traefik_file is configured
from compose_farm.traefik import ( # noqa: PLC0415
generate_traefik_config,
render_traefik_config,
)
try:
dynamic, warnings = generate_traefik_config(cfg, list(cfg.services.keys()))
new_content = render_traefik_config(dynamic)
@@ -199,5 +239,5 @@ def run_host_operation(
results.append(result)
if result.success:
state_callback(cfg, service, host)
maybe_regenerate_traefik(cfg)
maybe_regenerate_traefik(cfg, results)
report_results(results)


@@ -14,8 +14,8 @@ from typing import Annotated
import typer
from compose_farm.cli.app import app
from compose_farm.config import load_config, xdg_config_home
from compose_farm.console import console, err_console
from compose_farm.paths import config_search_paths, default_config_path, find_config_path
config_app = typer.Typer(
name="config",
@@ -23,14 +23,6 @@ config_app = typer.Typer(
no_args_is_help=True,
)
# Default config location (internal)
_USER_CONFIG_PATH = xdg_config_home() / "compose-farm" / "compose-farm.yaml"
# Search paths for existing config (internal)
_CONFIG_PATHS = [
Path("compose-farm.yaml"),
_USER_CONFIG_PATH,
]
# --- CLI Options (same pattern as cli.py) ---
_PathOption = Annotated[
@@ -84,18 +76,8 @@ def _get_config_file(path: Path | None) -> Path | None:
if path:
return path.expanduser().resolve()
# Check environment variable
if env_path := os.environ.get("CF_CONFIG"):
p = Path(env_path)
if p.exists():
return p.resolve()
# Check standard locations
for p in _CONFIG_PATHS:
if p.exists():
return p.resolve()
return None
config_path = find_config_path()
return config_path.resolve() if config_path else None
@config_app.command("init")
@@ -108,7 +90,7 @@ def config_init(
The generated config file serves as a template showing all available
options with explanatory comments.
"""
target_path = (path.expanduser().resolve() if path else None) or _USER_CONFIG_PATH
target_path = (path.expanduser().resolve() if path else None) or default_config_path()
if target_path.exists() and not force:
console.print(
@@ -144,7 +126,7 @@ def config_edit(
console.print("[yellow]No config file found.[/yellow]")
console.print("\nRun [bold cyan]cf config init[/bold cyan] to create one.")
console.print("\nSearched locations:")
for p in _CONFIG_PATHS:
for p in config_search_paths():
console.print(f" - {p}")
raise typer.Exit(1)
@@ -189,7 +171,7 @@ def config_show(
if config_file is None:
console.print("[yellow]No config file found.[/yellow]")
console.print("\nSearched locations:")
for p in _CONFIG_PATHS:
for p in config_search_paths():
status = "[green]exists[/green]" if p.exists() else "[dim]not found[/dim]"
console.print(f" - {p} ({status})")
console.print("\nRun [bold cyan]cf config init[/bold cyan] to create one.")
@@ -227,7 +209,7 @@ def config_path(
if config_file is None:
console.print("[yellow]No config file found.[/yellow]")
console.print("\nSearched locations:")
for p in _CONFIG_PATHS:
for p in config_search_paths():
status = "[green]exists[/green]" if p.exists() else "[dim]not found[/dim]"
console.print(f" - {p} ({status})")
raise typer.Exit(1)
@@ -247,6 +229,9 @@ def config_validate(
err_console.print("[red]✗[/] No config file found")
raise typer.Exit(1)
# Lazy import: pydantic adds ~50ms to startup, only load when actually needed
from compose_farm.config import load_config # noqa: PLC0415
try:
cfg = load_config(config_file)
except FileNotFoundError as e:
@@ -261,5 +246,68 @@ def config_validate(
console.print(f" Services: {len(cfg.services)}")
@config_app.command("symlink")
def config_symlink(
target: Annotated[
Path | None,
typer.Argument(help="Config file to link to. Defaults to ./compose-farm.yaml"),
] = None,
force: _ForceOption = False,
) -> None:
"""Create a symlink from the default config location to a config file.
This makes a local config file discoverable globally without copying.
Always uses absolute paths to avoid broken symlinks.
Examples:
cf config symlink # Link to ./compose-farm.yaml
cf config symlink /opt/compose/config.yaml # Link to specific file
"""
# Default to compose-farm.yaml in current directory
target_path = (target or Path("compose-farm.yaml")).expanduser().resolve()
if not target_path.exists():
err_console.print(f"[red]✗[/] Target config file not found: {target_path}")
raise typer.Exit(1)
if not target_path.is_file():
err_console.print(f"[red]✗[/] Target is not a file: {target_path}")
raise typer.Exit(1)
symlink_path = default_config_path()
# Check if symlink location already exists
if symlink_path.exists() or symlink_path.is_symlink():
if symlink_path.is_symlink():
current_target = symlink_path.resolve() if symlink_path.exists() else None
if current_target == target_path:
console.print(f"[green]✓[/] Symlink already points to: {target_path}")
return
# Update existing symlink
if not force:
existing = symlink_path.readlink()
console.print(f"[yellow]Symlink exists:[/] {symlink_path} -> {existing}")
if not typer.confirm(f"Update to point to {target_path}?"):
console.print("[dim]Aborted.[/dim]")
raise typer.Exit(0)
symlink_path.unlink()
else:
# Regular file exists
err_console.print(f"[red]✗[/] A regular file exists at: {symlink_path}")
err_console.print(" Back it up or remove it first, then retry.")
raise typer.Exit(1)
# Create parent directories
symlink_path.parent.mkdir(parents=True, exist_ok=True)
# Create symlink with absolute path
symlink_path.symlink_to(target_path)
console.print("[green]✓[/] Created symlink:")
console.print(f" {symlink_path}")
console.print(f" -> {target_path}")
# Register config subcommand on the shared app
app.add_typer(config_app, name="config", rich_help_panel="Configuration")

View File

@@ -1,11 +1,14 @@
"""Lifecycle commands: up, down, pull, restart, update."""
"""Lifecycle commands: up, down, pull, restart, update, apply."""
from __future__ import annotations
from typing import Annotated
from typing import TYPE_CHECKING, Annotated
import typer
if TYPE_CHECKING:
from compose_farm.config import Config
from compose_farm.cli.app import app
from compose_farm.cli.common import (
AllOption,
@@ -19,12 +22,15 @@ from compose_farm.cli.common import (
run_async,
run_host_operation,
)
from compose_farm.console import console
from compose_farm.console import console, err_console
from compose_farm.executor import run_on_services, run_sequential_on_services
from compose_farm.operations import up_services
from compose_farm.operations import stop_orphaned_services, up_services
from compose_farm.state import (
add_service_to_host,
get_orphaned_services,
get_service_host,
get_services_needing_migration,
get_services_not_in_state,
remove_service,
remove_service_from_host,
)
@@ -34,28 +40,11 @@ from compose_farm.state import (
def up(
services: ServicesArg = None,
all_services: AllOption = False,
migrate: Annotated[
bool, typer.Option("--migrate", "-m", help="Only services needing migration")
] = False,
host: HostOption = None,
config: ConfigOption = None,
) -> None:
"""Start services (docker compose up -d). Auto-migrates if host changed."""
from compose_farm.console import err_console # noqa: PLC0415
if migrate and host:
err_console.print("[red]✗[/] Cannot use --migrate and --host together")
raise typer.Exit(1)
if migrate:
cfg = load_config_or_exit(config)
svc_list = get_services_needing_migration(cfg)
if not svc_list:
console.print("[green]✓[/] No services need migration")
return
console.print(f"[cyan]Migrating {len(svc_list)} service(s):[/] {', '.join(svc_list)}")
else:
svc_list, cfg = get_services(services or [], all_services, config)
svc_list, cfg = get_services(services or [], all_services, config)
# Per-host operation: run on specific host only
if host:
@@ -64,7 +53,7 @@ def up(
# Normal operation: use up_services with migration logic
results = run_async(up_services(cfg, svc_list, raw=True))
maybe_regenerate_traefik(cfg)
maybe_regenerate_traefik(cfg, results)
report_results(results)
@@ -72,10 +61,37 @@ def up(
def down(
services: ServicesArg = None,
all_services: AllOption = False,
orphaned: Annotated[
bool,
typer.Option(
"--orphaned", help="Stop orphaned services (in state but removed from config)"
),
] = False,
host: HostOption = None,
config: ConfigOption = None,
) -> None:
"""Stop services (docker compose down)."""
# Handle --orphaned flag
if orphaned:
if services or all_services or host:
err_console.print("[red]✗[/] Cannot use --orphaned with services, --all, or --host")
raise typer.Exit(1)
cfg = load_config_or_exit(config)
orphaned_services = get_orphaned_services(cfg)
if not orphaned_services:
console.print("[green]✓[/] No orphaned services to stop")
return
console.print(
f"[yellow]Stopping {len(orphaned_services)} orphaned service(s):[/] "
f"{', '.join(orphaned_services.keys())}"
)
results = run_async(stop_orphaned_services(cfg))
report_results(results)
return
svc_list, cfg = get_services(services or [], all_services, config)
# Per-host operation: run on specific host only
@@ -97,7 +113,7 @@ def down(
remove_service(cfg, base_service)
removed_services.add(base_service)
maybe_regenerate_traefik(cfg)
maybe_regenerate_traefik(cfg, results)
report_results(results)
@@ -124,7 +140,7 @@ def restart(
svc_list, cfg = get_services(services or [], all_services, config)
raw = len(svc_list) == 1
results = run_async(run_sequential_on_services(cfg, svc_list, ["down", "up -d"], raw=raw))
maybe_regenerate_traefik(cfg)
maybe_regenerate_traefik(cfg, results)
report_results(results)
@@ -134,11 +150,150 @@ def update(
all_services: AllOption = False,
config: ConfigOption = None,
) -> None:
"""Update services (pull + down + up)."""
"""Update services (pull + build + down + up)."""
svc_list, cfg = get_services(services or [], all_services, config)
raw = len(svc_list) == 1
results = run_async(
run_sequential_on_services(cfg, svc_list, ["pull", "down", "up -d"], raw=raw)
run_sequential_on_services(
cfg, svc_list, ["pull --ignore-buildable", "build", "down", "up -d"], raw=raw
)
)
maybe_regenerate_traefik(cfg)
maybe_regenerate_traefik(cfg, results)
report_results(results)
def _format_host(host: str | list[str]) -> str:
"""Format a host value for display."""
if isinstance(host, list):
return ", ".join(host)
return host
def _report_pending_migrations(cfg: Config, migrations: list[str]) -> None:
"""Report services that need migration."""
console.print(f"[cyan]Services to migrate ({len(migrations)}):[/]")
for svc in migrations:
current = get_service_host(cfg, svc)
target = cfg.get_hosts(svc)[0]
console.print(f" [cyan]{svc}[/]: [magenta]{current}[/] → [magenta]{target}[/]")
def _report_pending_orphans(orphaned: dict[str, str | list[str]]) -> None:
"""Report orphaned services that will be stopped."""
console.print(f"[yellow]Orphaned services to stop ({len(orphaned)}):[/]")
for svc, hosts in orphaned.items():
console.print(f" [cyan]{svc}[/] on [magenta]{_format_host(hosts)}[/]")
def _report_pending_starts(cfg: Config, missing: list[str]) -> None:
"""Report services that will be started."""
console.print(f"[green]Services to start ({len(missing)}):[/]")
for svc in missing:
target = _format_host(cfg.get_hosts(svc))
console.print(f" [cyan]{svc}[/] on [magenta]{target}[/]")
def _report_pending_refresh(cfg: Config, to_refresh: list[str]) -> None:
"""Report services that will be refreshed."""
console.print(f"[blue]Services to refresh ({len(to_refresh)}):[/]")
for svc in to_refresh:
target = _format_host(cfg.get_hosts(svc))
console.print(f" [cyan]{svc}[/] on [magenta]{target}[/]")
@app.command(rich_help_panel="Lifecycle")
def apply(
dry_run: Annotated[
bool,
typer.Option("--dry-run", "-n", help="Show what would change without executing"),
] = False,
no_orphans: Annotated[
bool,
typer.Option("--no-orphans", help="Only migrate, don't stop orphaned services"),
] = False,
full: Annotated[
bool,
typer.Option("--full", "-f", help="Also run up on all services to apply config changes"),
] = False,
config: ConfigOption = None,
) -> None:
"""Make reality match config (start, migrate, stop as needed).
This is the "reconcile" command that ensures running services match your
config file. It will:
1. Stop orphaned services (in state but removed from config)
2. Migrate services on wrong host (host in state ≠ host in config)
3. Start missing services (in config but not in state)
Use --dry-run to preview changes before applying.
Use --no-orphans to only migrate/start without stopping orphaned services.
Use --full to also run 'up' on all services (picks up compose/env changes).
"""
cfg = load_config_or_exit(config)
orphaned = get_orphaned_services(cfg)
migrations = get_services_needing_migration(cfg)
missing = get_services_not_in_state(cfg)
# For --full: refresh all services not already being started/migrated
handled = set(migrations) | set(missing)
to_refresh = [svc for svc in cfg.services if svc not in handled] if full else []
has_orphans = bool(orphaned) and not no_orphans
has_migrations = bool(migrations)
has_missing = bool(missing)
has_refresh = bool(to_refresh)
if not has_orphans and not has_migrations and not has_missing and not has_refresh:
console.print("[green]✓[/] Nothing to apply - reality matches config")
return
# Report what will be done
if has_orphans:
_report_pending_orphans(orphaned)
if has_migrations:
_report_pending_migrations(cfg, migrations)
if has_missing:
_report_pending_starts(cfg, missing)
if has_refresh:
_report_pending_refresh(cfg, to_refresh)
if dry_run:
console.print("\n[dim](dry-run: no changes made)[/]")
return
# Execute changes
console.print()
all_results = []
# 1. Stop orphaned services first
if has_orphans:
console.print("[yellow]Stopping orphaned services...[/]")
all_results.extend(run_async(stop_orphaned_services(cfg)))
# 2. Migrate services on wrong host
if has_migrations:
console.print("[cyan]Migrating services...[/]")
migrate_results = run_async(up_services(cfg, migrations, raw=True))
all_results.extend(migrate_results)
maybe_regenerate_traefik(cfg, migrate_results)
# 3. Start missing services (reuse up_services which handles state updates)
if has_missing:
console.print("[green]Starting missing services...[/]")
start_results = run_async(up_services(cfg, missing, raw=True))
all_results.extend(start_results)
maybe_regenerate_traefik(cfg, start_results)
# 4. Refresh remaining services (--full: run up to apply config changes)
if has_refresh:
console.print("[blue]Refreshing services...[/]")
refresh_results = run_async(up_services(cfg, to_refresh, raw=True))
all_results.extend(refresh_results)
maybe_regenerate_traefik(cfg, refresh_results)
report_results(all_results)
# Alias: cf a = cf apply
app.command("a", hidden=True)(apply)
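The three reconcile sets that `apply` computes (orphaned, migrations, missing) boil down to set arithmetic over the config and the state file. A minimal sketch with hypothetical data, not the real `state` module API:

```python
# Hypothetical state/config data; the real code loads these from files.
config_hosts = {"web": "host-a", "db": "host-b", "cache": "host-a"}  # desired
state_hosts = {"db": "host-a", "old-svc": "host-b"}                  # reality

# 1. Orphaned: tracked in state but removed from config -> stop them.
orphaned = {s: h for s, h in state_hosts.items() if s not in config_hosts}

# 2. Migrations: in both, but running on the wrong host -> down + up elsewhere.
migrations = [s for s in state_hosts
              if s in config_hosts and state_hosts[s] != config_hosts[s]]

# 3. Missing: in config but never started -> up.
missing = [s for s in config_hosts if s not in state_hosts]

print(orphaned, migrations, missing)
# {'old-svc': 'host-b'} ['db'] ['web', 'cache']
```

With `--full`, every service not already in `migrations` or `missing` also gets an `up` to pick up compose/env changes.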

View File

@@ -5,7 +5,7 @@ from __future__ import annotations
import asyncio
from datetime import UTC, datetime
from pathlib import Path # noqa: TC003
from typing import Annotated
from typing import TYPE_CHECKING, Annotated
import typer
from rich.progress import Progress, TaskID # noqa: TC002
@@ -22,14 +22,13 @@ from compose_farm.cli.common import (
progress_bar,
run_async,
)
from compose_farm.compose import parse_external_networks
from compose_farm.config import Config # noqa: TC001
if TYPE_CHECKING:
from compose_farm.config import Config
from compose_farm.console import console, err_console
from compose_farm.executor import (
CommandResult,
check_networks_exist,
check_paths_exist,
check_service_running,
is_local,
run_command,
)
@@ -42,8 +41,12 @@ from compose_farm.logs import (
merge_entries,
write_toml,
)
from compose_farm.operations import check_host_compatibility, get_service_paths
from compose_farm.state import load_state, save_state
from compose_farm.operations import (
check_host_compatibility,
check_service_requirements,
discover_service_host,
)
from compose_farm.state import get_orphaned_services, load_state, save_state
from compose_farm.traefik import generate_traefik_config, render_traefik_config
# --- Sync helpers ---
@@ -52,41 +55,10 @@ from compose_farm.traefik import generate_traefik_config, render_traefik_config
def _discover_services(cfg: Config) -> dict[str, str | list[str]]:
"""Discover running services with a progress bar."""
async def check_service(service: str) -> tuple[str, str | list[str] | None]:
"""Check where a service is running.
For multi-host services, returns list of hosts where running.
For single-host, returns single host name or None.
"""
assigned_hosts = cfg.get_hosts(service)
if cfg.is_multi_host(service):
# Multi-host: find all hosts where running (check in parallel)
checks = await asyncio.gather(
*[check_service_running(cfg, service, h) for h in assigned_hosts]
)
running_hosts = [
h for h, running in zip(assigned_hosts, checks, strict=True) if running
]
return service, running_hosts if running_hosts else None
# Single-host: check assigned host first
assigned_host = assigned_hosts[0]
if await check_service_running(cfg, service, assigned_host):
return service, assigned_host
# Check other hosts
for host_name in cfg.hosts:
if host_name == assigned_host:
continue
if await check_service_running(cfg, service, host_name):
return service, host_name
return service, None
async def gather_with_progress(
progress: Progress, task_id: TaskID
) -> dict[str, str | list[str]]:
services = list(cfg.services.keys())
tasks = [asyncio.create_task(check_service(s)) for s in services]
tasks = [asyncio.create_task(discover_service_host(cfg, s)) for s in cfg.services]
discovered: dict[str, str | list[str]] = {}
for coro in asyncio.as_completed(tasks):
service, host = await coro
@@ -216,59 +188,60 @@ def _check_ssh_connectivity(cfg: Config) -> list[str]:
return asyncio.run(gather_with_progress(progress, task_id))
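The discovery helper fans out one task per service and consumes results as they finish, so the progress bar advances incrementally rather than waiting for the slowest host. The same pattern in isolation (service names and the `check` body are placeholders for the real SSH probe):

```python
import asyncio

async def check(service: str) -> tuple[str, bool]:
    # Stand-in for check_service_running over SSH.
    await asyncio.sleep(0)
    return service, True

async def discover(services: list[str]) -> dict[str, bool]:
    tasks = [asyncio.create_task(check(s)) for s in services]
    found: dict[str, bool] = {}
    for coro in asyncio.as_completed(tasks):  # yields results in completion order
        service, running = await coro
        found[service] = running              # a progress bar would advance here
    return found

print(asyncio.run(discover(["web", "db"])))
```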
def _check_mounts_and_networks(
def _check_service_requirements(
cfg: Config,
services: list[str],
) -> tuple[list[tuple[str, str, str]], list[tuple[str, str, str]]]:
"""Check mounts and networks for all services with a progress bar.
) -> tuple[list[tuple[str, str, str]], list[tuple[str, str, str]], list[tuple[str, str, str]]]:
"""Check mounts, networks, and devices for all services with a progress bar.
Returns (mount_errors, network_errors) where each is a list of
Returns (mount_errors, network_errors, device_errors) where each is a list of
(service, host, missing_item) tuples.
"""
async def check_service(
service: str,
) -> tuple[str, list[tuple[str, str, str]], list[tuple[str, str, str]]]:
"""Check mounts and networks for a single service."""
) -> tuple[
str,
list[tuple[str, str, str]],
list[tuple[str, str, str]],
list[tuple[str, str, str]],
]:
"""Check requirements for a single service on all its hosts."""
host_names = cfg.get_hosts(service)
mount_errors: list[tuple[str, str, str]] = []
network_errors: list[tuple[str, str, str]] = []
device_errors: list[tuple[str, str, str]] = []
# Check mounts on all hosts
paths = get_service_paths(cfg, service)
for host_name in host_names:
path_exists = await check_paths_exist(cfg, host_name, paths)
for path, found in path_exists.items():
if not found:
mount_errors.append((service, host_name, path))
missing_paths, missing_nets, missing_devs = await check_service_requirements(
cfg, service, host_name
)
mount_errors.extend((service, host_name, p) for p in missing_paths)
network_errors.extend((service, host_name, n) for n in missing_nets)
device_errors.extend((service, host_name, d) for d in missing_devs)
# Check networks on all hosts
networks = parse_external_networks(cfg, service)
if networks:
for host_name in host_names:
net_exists = await check_networks_exist(cfg, host_name, networks)
for net, found in net_exists.items():
if not found:
network_errors.append((service, host_name, net))
return service, mount_errors, network_errors
return service, mount_errors, network_errors, device_errors
async def gather_with_progress(
progress: Progress, task_id: TaskID
) -> tuple[list[tuple[str, str, str]], list[tuple[str, str, str]]]:
) -> tuple[list[tuple[str, str, str]], list[tuple[str, str, str]], list[tuple[str, str, str]]]:
tasks = [asyncio.create_task(check_service(s)) for s in services]
all_mount_errors: list[tuple[str, str, str]] = []
all_network_errors: list[tuple[str, str, str]] = []
all_device_errors: list[tuple[str, str, str]] = []
for coro in asyncio.as_completed(tasks):
service, mount_errs, net_errs = await coro
service, mount_errs, net_errs, dev_errs = await coro
all_mount_errors.extend(mount_errs)
all_network_errors.extend(net_errs)
all_device_errors.extend(dev_errs)
progress.update(task_id, advance=1, description=f"[cyan]{service}[/]")
return all_mount_errors, all_network_errors
return all_mount_errors, all_network_errors, all_device_errors
with progress_bar("Checking mounts/networks", len(services)) as (progress, task_id):
with progress_bar(
"Checking requirements", len(services), initial_description="[dim]checking...[/]"
) as (progress, task_id):
return asyncio.run(gather_with_progress(progress, task_id))
@@ -276,12 +249,12 @@ def _report_config_status(cfg: Config) -> bool:
"""Check and report config vs disk status. Returns True if errors found."""
configured = set(cfg.services.keys())
on_disk = cfg.discover_compose_dirs()
missing_from_config = sorted(on_disk - configured)
unmanaged = sorted(on_disk - configured)
missing_from_disk = sorted(configured - on_disk)
if missing_from_config:
console.print(f"\n[yellow]On disk but not in config[/] ({len(missing_from_config)}):")
for name in missing_from_config:
if unmanaged:
console.print(f"\n[yellow]Unmanaged[/] (on disk but not in config, {len(unmanaged)}):")
for name in unmanaged:
console.print(f" [yellow]+[/] [cyan]{name}[/]")
if missing_from_disk:
@@ -289,7 +262,7 @@ def _report_config_status(cfg: Config) -> bool:
for name in missing_from_disk:
console.print(f" [red]-[/] [cyan]{name}[/]")
if not missing_from_config and not missing_from_disk:
if not unmanaged and not missing_from_disk:
console.print("[green]✓[/] Config matches disk")
return bool(missing_from_disk)
@@ -297,17 +270,15 @@ def _report_config_status(cfg: Config) -> bool:
def _report_orphaned_services(cfg: Config) -> bool:
"""Check for services in state but not in config. Returns True if orphans found."""
state = load_state(cfg)
configured = set(cfg.services.keys())
tracked = set(state.keys())
orphaned = sorted(tracked - configured)
orphaned = get_orphaned_services(cfg)
if orphaned:
console.print("\n[yellow]Orphaned services[/] (in state but not in config):")
console.print("[dim]These may still be running. Use 'docker compose down' to stop them.[/]")
for name in orphaned:
host = state[name]
host_str = ", ".join(host) if isinstance(host, list) else host
console.print(
"[dim]Run 'cf apply' to stop them, or 'cf down --orphaned' for just orphans.[/]"
)
for name, hosts in sorted(orphaned.items()):
host_str = ", ".join(hosts) if isinstance(hosts, list) else hosts
console.print(f" [yellow]![/] [cyan]{name}[/] on [magenta]{host_str}[/]")
return True
@@ -359,6 +330,21 @@ def _report_network_errors(network_errors: list[tuple[str, str, str]]) -> None:
console.print(f" [red]✗[/] {net}")
def _report_device_errors(device_errors: list[tuple[str, str, str]]) -> None:
"""Report device errors grouped by service."""
by_service: dict[str, list[tuple[str, str]]] = {}
for svc, host, dev in device_errors:
by_service.setdefault(svc, []).append((host, dev))
console.print(f"[red]Missing devices[/] ({len(device_errors)}):")
for svc, items in sorted(by_service.items()):
host = items[0][0]
devices = [d for _, d in items]
console.print(f" [cyan]{svc}[/] on [magenta]{host}[/]:")
for dev in devices:
console.print(f" [red]✗[/] {dev}")
def _report_ssh_status(unreachable_hosts: list[str]) -> bool:
"""Report SSH connectivity status. Returns True if there are errors."""
if unreachable_hosts:
@@ -404,8 +390,8 @@ def _run_remote_checks(cfg: Config, svc_list: list[str], *, show_host_compat: bo
console.print() # Spacing before mounts/networks check
# Check mounts and networks
mount_errors, network_errors = _check_mounts_and_networks(cfg, svc_list)
# Check mounts, networks, and devices
mount_errors, network_errors, device_errors = _check_service_requirements(cfg, svc_list)
if mount_errors:
_report_mount_errors(mount_errors)
@@ -413,8 +399,11 @@ def _run_remote_checks(cfg: Config, svc_list: list[str], *, show_host_compat: bo
if network_errors:
_report_network_errors(network_errors)
has_errors = True
if not mount_errors and not network_errors:
console.print("[green]✓[/] All mounts and networks exist")
if device_errors:
_report_device_errors(device_errors)
has_errors = True
if not mount_errors and not network_errors and not device_errors:
console.print("[green]✓[/] All mounts, networks, and devices exist")
if show_host_compat:
for service in svc_list:
@@ -468,19 +457,21 @@ def traefik_file(
@app.command(rich_help_panel="Configuration")
def sync(
def refresh(
config: ConfigOption = None,
log_path: LogPathOption = None,
dry_run: Annotated[
bool,
typer.Option("--dry-run", "-n", help="Show what would be synced without writing"),
typer.Option("--dry-run", "-n", help="Show what would change without writing"),
] = False,
) -> None:
"""Sync local state with running services.
"""Update local state from running services.
Discovers which services are running on which hosts, updates the state
file, and captures image digests. Combines service discovery with
image snapshot into a single command.
file, and captures image digests. This is a read operation - it updates
your local state to match reality, not the other way around.
Use 'cf apply' to make reality match your config (stop orphans, migrate).
"""
cfg = load_config_or_exit(config)
current_state = load_state(cfg)

View File

@@ -23,7 +23,6 @@ from compose_farm.cli.common import (
report_results,
run_async,
)
from compose_farm.config import Config # noqa: TC001
from compose_farm.console import console, err_console
from compose_farm.executor import run_command, run_on_services
from compose_farm.state import get_services_needing_migration, load_state
@@ -31,6 +30,8 @@ from compose_farm.state import get_services_needing_migration, load_state
if TYPE_CHECKING:
from collections.abc import Mapping
from compose_farm.config import Config
def _group_services_by_host(
services: dict[str, str | list[str]],

View File

@@ -0,0 +1,48 @@
"""Web server command."""
from __future__ import annotations
from typing import Annotated
import typer
from compose_farm.cli.app import app
from compose_farm.console import console
@app.command(rich_help_panel="Server")
def web(
host: Annotated[
str,
typer.Option("--host", "-H", help="Host to bind to"),
] = "0.0.0.0", # noqa: S104
port: Annotated[
int,
typer.Option("--port", "-p", help="Port to listen on"),
] = 8000,
reload: Annotated[
bool,
typer.Option("--reload", "-r", help="Enable auto-reload for development"),
] = False,
) -> None:
"""Start the web UI server."""
try:
import uvicorn # noqa: PLC0415
except ImportError:
console.print(
"[red]Error:[/] Web dependencies not installed. "
"Install with: [cyan]pip install compose-farm[web][/]"
)
raise typer.Exit(1) from None
console.print(f"[green]Starting Compose Farm Web UI[/] at http://{host}:{port}")
console.print("[dim]Press Ctrl+C to stop[/]")
uvicorn.run(
"compose_farm.web:create_app",
factory=True,
host=host,
port=port,
reload=reload,
log_level="info",
)

View File

@@ -203,6 +203,51 @@ def parse_host_volumes(config: Config, service: str) -> list[str]:
return unique
def parse_devices(config: Config, service: str) -> list[str]:
"""Extract host device paths from a service's compose file.
Returns a list of host device paths (e.g., /dev/dri, /dev/dri/renderD128).
"""
compose_path = config.get_compose_path(service)
if not compose_path.exists():
return []
env = _load_env(compose_path)
compose_data = yaml.safe_load(compose_path.read_text()) or {}
raw_services = compose_data.get("services", {})
if not isinstance(raw_services, dict):
return []
devices: list[str] = []
for definition in raw_services.values():
if not isinstance(definition, dict):
continue
device_list = definition.get("devices")
if not device_list or not isinstance(device_list, list):
continue
for item in device_list:
if not isinstance(item, str):
continue
interpolated = _interpolate(item, env)
# Format: host_path:container_path[:options]
parts = interpolated.split(":")
if parts:
host_path = parts[0]
if host_path.startswith("/dev/"):
devices.append(host_path)
# Return unique devices, preserving order
seen: set[str] = set()
unique: list[str] = []
for d in devices:
if d not in seen:
seen.add(d)
unique.append(d)
return unique
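The extraction rules above (split on `:`, keep the host side, accept only `/dev/` paths, de-duplicate while preserving order) can be exercised on a plain dict without the config machinery. Illustrative data, not a real compose file:

```python
compose = {
    "services": {
        "jellyfin": {"devices": ["/dev/dri:/dev/dri", "/dev/dri:/dev/dri"]},
        "plex": {"devices": ["/dev/dri/renderD128:/dev/dri/renderD128:rwm"]},
        "web": {},
    }
}

seen: set[str] = set()
devices: list[str] = []
for svc in compose["services"].values():
    for entry in svc.get("devices", []):
        host_path = entry.split(":")[0]   # host_path:container_path[:options]
        if host_path.startswith("/dev/") and host_path not in seen:
            seen.add(host_path)
            devices.append(host_path)

print(devices)  # ['/dev/dri', '/dev/dri/renderD128']
```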
def parse_external_networks(config: Config, service: str) -> list[str]:
"""Extract external network names from a service's compose file.

View File

@@ -3,16 +3,12 @@
from __future__ import annotations
import getpass
import os
from pathlib import Path
import yaml
from pydantic import BaseModel, Field, model_validator
def xdg_config_home() -> Path:
"""Get XDG config directory, respecting XDG_CONFIG_HOME env var."""
return Path(os.environ.get("XDG_CONFIG_HOME", Path.home() / ".config"))
from .paths import config_search_paths, find_config_path
class Host(BaseModel):
@@ -153,24 +149,10 @@ def load_config(path: Path | None = None) -> Config:
3. ./compose-farm.yaml
4. $XDG_CONFIG_HOME/compose-farm/compose-farm.yaml (defaults to ~/.config)
"""
search_paths = [
Path("compose-farm.yaml"),
xdg_config_home() / "compose-farm" / "compose-farm.yaml",
]
if path:
config_path = path
elif env_path := os.environ.get("CF_CONFIG"):
config_path = Path(env_path)
else:
config_path = None
for p in search_paths:
if p.exists():
config_path = p
break
config_path = path or find_config_path()
if config_path is None or not config_path.exists():
msg = f"Config file not found. Searched: {', '.join(str(p) for p in search_paths)}"
msg = f"Config file not found. Searched: {', '.join(str(p) for p in config_search_paths())}"
raise FileNotFoundError(msg)
if config_path.is_dir():

View File

@@ -9,7 +9,6 @@ from dataclasses import dataclass
from functools import lru_cache
from typing import TYPE_CHECKING, Any
import asyncssh
from rich.markup import escape
from .console import console, err_console
@@ -53,6 +52,15 @@ class CommandResult:
stdout: str = ""
stderr: str = ""
# SSH returns 255 when connection is closed unexpectedly (e.g., Ctrl+C)
_SSH_CONNECTION_CLOSED = 255
@property
def interrupted(self) -> bool:
"""Check if command was killed by SIGINT (Ctrl+C)."""
# Negative exit codes indicate signal termination; -2 = SIGINT
return self.exit_code < 0 or self.exit_code == self._SSH_CONNECTION_CLOSED
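The interrupt check relies on two exit-code conventions: negative codes mean the process was killed by a signal (asyncio reports `-signum`, so Ctrl+C gives `-2`), and 255 is what ssh returns when the connection closes unexpectedly. The same predicate as a standalone sketch:

```python
SSH_CONNECTION_CLOSED = 255  # ssh's exit code for a dropped connection

def was_interrupted(exit_code: int) -> bool:
    # Any negative code is a signal death; -2 == SIGINT (Ctrl+C).
    return exit_code < 0 or exit_code == SSH_CONNECTION_CLOSED

print(was_interrupted(-2), was_interrupted(255), was_interrupted(1))
# True True False
```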
def is_local(host: Host) -> bool:
"""Check if host should run locally (no SSH)."""
@@ -156,6 +164,8 @@ async def _run_ssh_command(
success=result.returncode == 0,
)
import asyncssh # noqa: PLC0415 - lazy import for faster CLI startup
proc: asyncssh.SSHClientProcess[Any]
try:
async with asyncssh.connect( # noqa: SIM117 - conn needed before create_process
@@ -210,7 +220,16 @@ async def run_command(
stream: bool = True,
raw: bool = False,
) -> CommandResult:
"""Run a command on a host (locally or via SSH)."""
"""Run a command on a host (locally or via SSH).
Args:
host: Host configuration
command: Command to run
service: Service name (used as prefix in output)
stream: Whether to stream output (default True)
raw: Whether to use raw mode with TTY (default False)
"""
if is_local(host):
return await _run_local_command(command, service, stream=stream, raw=raw)
return await _run_ssh_command(host, command, service, stream=stream, raw=raw)
@@ -418,12 +437,15 @@ async def check_paths_exist(
"""Check if multiple paths exist on a specific host.
Returns a dict mapping path -> exists.
Handles permission denied as "exists" (path is there, just not accessible).
"""
# Only report missing if stat says "No such file", otherwise assume exists
# (handles permission denied correctly - path exists, just not accessible)
return await _batch_check_existence(
config,
host_name,
paths,
lambda esc: f"test -e '{esc}' && echo 'Y:{esc}' || echo 'N:{esc}'",
lambda esc: f"stat '{esc}' 2>&1 | grep -q 'No such file' && echo 'N:{esc}' || echo 'Y:{esc}'",
"mount-check",
)
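The switch from `test -e` to `stat` matters because `test -e` reports false when a path is unreadable, which would flag a valid but permission-restricted mount as missing; only an explicit "No such file" should count as absent. A local-only sketch of the same lenient check (assuming a POSIX `stat` binary; not the project's batched SSH version):

```python
import subprocess

def path_exists_lenient(path: str) -> bool:
    # Only an explicit "No such file" from stat means missing; permission
    # errors and other failures are treated as "exists but inaccessible".
    proc = subprocess.run(["stat", path], capture_output=True, text=True)
    if proc.returncode == 0:
        return True
    return "No such file" not in (proc.stdout + proc.stderr)

print(path_exists_lenient("/tmp"), path_exists_lenient("/definitely/missing"))
```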

View File

@@ -8,8 +8,8 @@ from dataclasses import dataclass
from datetime import UTC, datetime
from typing import TYPE_CHECKING, Any
from .config import xdg_config_home
from .executor import run_compose
from .paths import xdg_config_home
if TYPE_CHECKING:
from collections.abc import Awaitable, Callable, Iterable

View File

@@ -6,24 +6,69 @@ CLI commands are thin wrappers around these functions.
from __future__ import annotations
from typing import TYPE_CHECKING
import asyncio
from typing import TYPE_CHECKING, NamedTuple
from .compose import parse_external_networks, parse_host_volumes
from .compose import parse_devices, parse_external_networks, parse_host_volumes
from .console import console, err_console
from .executor import (
CommandResult,
check_networks_exist,
check_paths_exist,
check_service_running,
run_command,
run_compose,
run_compose_on_host,
)
from .state import get_service_host, set_multi_host_service, set_service_host
from .state import (
get_orphaned_services,
get_service_host,
remove_service,
set_multi_host_service,
set_service_host,
)
if TYPE_CHECKING:
from .config import Config
class OperationInterruptedError(Exception):
"""Raised when a command is interrupted by Ctrl+C."""
class PreflightResult(NamedTuple):
"""Result of pre-flight checks for a service on a host."""
missing_paths: list[str]
missing_networks: list[str]
missing_devices: list[str]
@property
def ok(self) -> bool:
"""Return True if all checks passed."""
return not (self.missing_paths or self.missing_networks or self.missing_devices)
async def _run_compose_step(
cfg: Config,
service: str,
command: str,
*,
raw: bool,
host: str | None = None,
) -> CommandResult:
"""Run a compose command, handle raw output newline, and check for interrupts."""
if host:
result = await run_compose_on_host(cfg, service, host, command, raw=raw)
else:
result = await run_compose(cfg, service, command, raw=raw)
if raw:
print() # Ensure newline after raw output
if result.interrupted:
raise OperationInterruptedError
return result
def get_service_paths(cfg: Config, service: str) -> list[str]:
"""Get all required paths for a service (compose_dir + volumes)."""
paths = [str(cfg.compose_dir)]
@@ -31,58 +76,111 @@ def get_service_paths(cfg: Config, service: str) -> list[str]:
return paths
async def discover_service_host(cfg: Config, service: str) -> tuple[str, str | list[str] | None]:
    """Discover where a service is running.

    For multi-host services, checks all assigned hosts in parallel.
    For single-host, checks assigned host first, then others.

    Returns (service_name, host_or_hosts_or_none).
    """
    assigned_hosts = cfg.get_hosts(service)
    if cfg.is_multi_host(service):
        # Check all assigned hosts in parallel
        checks = await asyncio.gather(
            *[check_service_running(cfg, service, h) for h in assigned_hosts]
        )
        running = [h for h, is_running in zip(assigned_hosts, checks, strict=True) if is_running]
        return service, running if running else None
    # Single-host: check assigned host first, then others
    if await check_service_running(cfg, service, assigned_hosts[0]):
        return service, assigned_hosts[0]
    for host in cfg.hosts:
        if host != assigned_hosts[0] and await check_service_running(cfg, service, host):
            return service, host
    return service, None
async def check_service_requirements(
cfg: Config,
service: str,
host_name: str,
) -> PreflightResult:
"""Check if a service can run on a specific host.
Verifies that all required paths (volumes), networks, and devices exist.
"""
# Check mount paths
paths = get_service_paths(cfg, service)
path_exists = await check_paths_exist(cfg, host_name, paths)
missing_paths = [p for p, found in path_exists.items() if not found]
# Check external networks
networks = parse_external_networks(cfg, service)
missing_networks: list[str] = []
if networks:
net_exists = await check_networks_exist(cfg, host_name, networks)
missing_networks = [n for n, found in net_exists.items() if not found]
# Check devices
devices = parse_devices(cfg, service)
missing_devices: list[str] = []
if devices:
dev_exists = await check_paths_exist(cfg, host_name, devices)
missing_devices = [d for d, found in dev_exists.items() if not found]
return PreflightResult(missing_paths, missing_networks, missing_devices)
async def _cleanup_and_rollback(
cfg: Config,
service: str,
target_host: str,
current_host: str,
prefix: str,
*,
was_running: bool,
raw: bool = False,
) -> None:
"""Clean up failed start and attempt rollback to old host if it was running."""
err_console.print(
f"{prefix} [yellow]![/] Cleaning up failed start on [magenta]{target_host}[/]"
)
await run_compose(cfg, service, "down", raw=raw)
if not was_running:
err_console.print(
f"{prefix} [dim]Service was not running on [magenta]{current_host}[/], skipping rollback[/]"
)
return
err_console.print(f"{prefix} [yellow]![/] Rolling back to [magenta]{current_host}[/]...")
rollback_result = await run_compose_on_host(cfg, service, current_host, "up -d", raw=raw)
if rollback_result.success:
console.print(f"{prefix} [green]✓[/] Rollback succeeded on [magenta]{current_host}[/]")
else:
err_console.print(f"{prefix} [red]✗[/] Rollback failed - service is down")
def _report_preflight_failures(
service: str,
target_host: str,
preflight: PreflightResult,
) -> None:
"""Report pre-flight check failures."""
err_console.print(
f"[cyan]\\[{service}][/] [red]✗[/] Cannot start on [magenta]{target_host}[/]:"
)
for path in preflight.missing_paths:
err_console.print(f" [red]✗[/] missing path: {path}")
for net in preflight.missing_networks:
err_console.print(f" [red]✗[/] missing network: {net}")
if preflight.missing_networks:
err_console.print(f" [dim]hint: cf init-network {target_host}[/]")
for dev in preflight.missing_devices:
err_console.print(f" [red]✗[/] missing device: {dev}")
async def _up_multi_host_service(
@@ -100,9 +198,9 @@ async def _up_multi_host_service(
# Pre-flight checks on all hosts
for host_name in host_names:
preflight = await check_service_requirements(cfg, service, host_name)
if not preflight.ok:
    _report_preflight_failures(service, host_name, preflight)
results.append(
CommandResult(service=f"{service}@{host_name}", exit_code=1, success=False)
)
@@ -130,6 +228,97 @@ async def _up_multi_host_service(
return results
async def _migrate_service(
cfg: Config,
service: str,
current_host: str,
target_host: str,
prefix: str,
*,
raw: bool = False,
) -> CommandResult | None:
"""Migrate a service from current_host to target_host.
Pre-pulls/builds images on target, then stops service on current host.
Returns failure result if migration prep fails, None on success.
"""
console.print(
f"{prefix} Migrating from [magenta]{current_host}[/] → [magenta]{target_host}[/]..."
)
# Prepare images on target host before stopping old service to minimize downtime.
# Pull handles image-based services; build handles Dockerfile-based services.
# --ignore-buildable makes pull skip images that have build: defined.
for cmd, label in [("pull --ignore-buildable", "Pull"), ("build", "Build")]:
result = await _run_compose_step(cfg, service, cmd, raw=raw)
if not result.success:
err_console.print(
f"{prefix} [red]✗[/] {label} failed on [magenta]{target_host}[/], "
"leaving service on current host"
)
return result
# Stop on current host
down_result = await _run_compose_step(cfg, service, "down", raw=raw, host=current_host)
return down_result if not down_result.success else None
async def _up_single_service(
cfg: Config,
service: str,
prefix: str,
*,
raw: bool,
) -> CommandResult:
"""Start a single-host service with migration support."""
target_host = cfg.get_hosts(service)[0]
current_host = get_service_host(cfg, service)
# Pre-flight check: verify paths, networks, and devices exist on target
preflight = await check_service_requirements(cfg, service, target_host)
if not preflight.ok:
_report_preflight_failures(service, target_host, preflight)
return CommandResult(service=service, exit_code=1, success=False)
# If service is deployed elsewhere, migrate it
did_migration = False
was_running = False
if current_host and current_host != target_host:
if current_host in cfg.hosts:
was_running = await check_service_running(cfg, service, current_host)
failure = await _migrate_service(
cfg, service, current_host, target_host, prefix, raw=raw
)
if failure:
return failure
did_migration = True
else:
err_console.print(
f"{prefix} [yellow]![/] was on "
f"[magenta]{current_host}[/] (not in config), skipping down"
)
# Start on target host
console.print(f"{prefix} Starting on [magenta]{target_host}[/]...")
up_result = await _run_compose_step(cfg, service, "up -d", raw=raw)
# Update state on success, or rollback on failure
if up_result.success:
set_service_host(cfg, service, target_host)
elif did_migration and current_host:
await _cleanup_and_rollback(
cfg,
service,
target_host,
current_host,
prefix,
was_running=was_running,
raw=raw,
)
return up_result
async def up_services(
cfg: Config,
services: list[str],
@@ -140,54 +329,16 @@ async def up_services(
results: list[CommandResult] = []
total = len(services)
try:
    for idx, service in enumerate(services, 1):
        prefix = f"[dim][{idx}/{total}][/] [cyan]\\[{service}][/]"
        # Handle multi-host services separately (no migration)
        if cfg.is_multi_host(service):
            results.extend(await _up_multi_host_service(cfg, service, prefix, raw=raw))
        else:
            results.append(await _up_single_service(cfg, service, prefix, raw=raw))
except OperationInterruptedError:
    raise KeyboardInterrupt from None
return results
@@ -196,17 +347,95 @@ async def check_host_compatibility(
cfg: Config,
service: str,
) -> dict[str, tuple[int, int, list[str]]]:
"""Check which hosts can run a service based on paths, networks, and devices.

Returns dict of host_name -> (found_count, total_count, missing_items).
"""
# Get total requirements count
paths = get_service_paths(cfg, service)
networks = parse_external_networks(cfg, service)
devices = parse_devices(cfg, service)
total = len(paths) + len(networks) + len(devices)
results: dict[str, tuple[int, int, list[str]]] = {}
for host_name in cfg.hosts:
preflight = await check_service_requirements(cfg, service, host_name)
all_missing = (
preflight.missing_paths + preflight.missing_networks + preflight.missing_devices
)
found = total - len(all_missing)
results[host_name] = (found, total, all_missing)
return results
async def stop_orphaned_services(cfg: Config) -> list[CommandResult]:
"""Stop orphaned services (in state but not in config).
Runs docker compose down on each service on its tracked host(s).
Only removes from state on successful stop.
Returns list of CommandResults for each service@host.
"""
orphaned = get_orphaned_services(cfg)
if not orphaned:
return []
results: list[CommandResult] = []
tasks: list[tuple[str, str, asyncio.Task[CommandResult]]] = []
# Build list of (service, host, task) for all orphaned services
for service, hosts in orphaned.items():
host_list = hosts if isinstance(hosts, list) else [hosts]
for host in host_list:
# Skip hosts no longer in config
if host not in cfg.hosts:
console.print(
f" [yellow]![/] {service}@{host}: host no longer in config, skipping"
)
results.append(
CommandResult(
service=f"{service}@{host}",
exit_code=1,
success=False,
stderr="host no longer in config",
)
)
continue
coro = run_compose_on_host(cfg, service, host, "down")
tasks.append((service, host, asyncio.create_task(coro)))
# Run all down commands in parallel
if tasks:
for service, host, task in tasks:
try:
result = await task
results.append(result)
if result.success:
console.print(f" [green]✓[/] {service}@{host}: stopped")
else:
console.print(f" [red]✗[/] {service}@{host}: {result.stderr or 'failed'}")
except Exception as e:
console.print(f" [red]✗[/] {service}@{host}: {e}")
results.append(
CommandResult(
service=f"{service}@{host}",
exit_code=1,
success=False,
stderr=str(e),
)
)
# Remove from state only for services where ALL hosts succeeded
for service, hosts in orphaned.items():
host_list = hosts if isinstance(hosts, list) else [hosts]
all_succeeded = all(
r.success
for r in results
if r.service.startswith(f"{service}@") or r.service == service
)
if all_succeeded:
remove_service(cfg, service)
return results

src/compose_farm/paths.py Normal file

@@ -0,0 +1,33 @@
"""Path utilities - lightweight module with no heavy dependencies."""
from __future__ import annotations
import os
from pathlib import Path
def xdg_config_home() -> Path:
"""Get XDG config directory, respecting XDG_CONFIG_HOME env var."""
return Path(os.environ.get("XDG_CONFIG_HOME", Path.home() / ".config"))
def default_config_path() -> Path:
"""Get the default user config path."""
return xdg_config_home() / "compose-farm" / "compose-farm.yaml"
def config_search_paths() -> list[Path]:
"""Get search paths for config files."""
return [Path("compose-farm.yaml"), default_config_path()]
def find_config_path() -> Path | None:
"""Find the config file path, checking CF_CONFIG env var and search paths."""
if env_path := os.environ.get("CF_CONFIG"):
p = Path(env_path)
if p.exists() and p.is_file():
return p
for p in config_search_paths():
if p.exists() and p.is_file():
return p
return None
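The env-var override in `xdg_config_home` can be exercised directly (the `/tmp/xdg` value below is an invented example):

```python
import os
from pathlib import Path


def xdg_config_home() -> Path:
    """Same logic as paths.py: XDG_CONFIG_HOME wins, else ~/.config."""
    return Path(os.environ.get("XDG_CONFIG_HOME", str(Path.home() / ".config")))


os.environ["XDG_CONFIG_HOME"] = "/tmp/xdg"
config_path = xdg_config_home() / "compose-farm" / "compose-farm.yaml"
print(config_path)  # /tmp/xdg/compose-farm/compose-farm.yaml
```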


@@ -142,3 +142,25 @@ def get_services_needing_migration(config: Config) -> list[str]:
if current_host and current_host != configured_host:
needs_migration.append(service)
return needs_migration
def get_orphaned_services(config: Config) -> dict[str, str | list[str]]:
"""Get services that are in state but not in config.
These are services that were previously deployed but have been
removed from the config file (e.g., commented out).
Returns a dict mapping service name to host(s) where it's deployed.
"""
state = load_state(config)
return {service: hosts for service, hosts in state.items() if service not in config.services}
def get_services_not_in_state(config: Config) -> list[str]:
"""Get services that are in config but not in state.
These are services that should be running but aren't tracked
(e.g., newly added to config, or previously stopped as orphans).
"""
state = load_state(config)
return [service for service in config.services if service not in state]
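The two helpers above are simple set differences between state and config; a minimal sketch with invented service/host names:

```python
state = {"plex": "nas", "old-svc": "server"}        # deployed, per the state file
config_services = {"plex": "nas", "new-svc": "pi"}  # desired, per the config file

# In state but not in config -> orphaned (stop and untrack).
orphaned = {s: h for s, h in state.items() if s not in config_services}
# In config but not in state -> not yet started.
not_started = [s for s in config_services if s not in state]

print(orphaned)     # {'old-svc': 'server'}
print(not_started)  # ['new-svc']
```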


@@ -0,0 +1,7 @@
"""Compose Farm Web UI."""
from __future__ import annotations
from compose_farm.web.app import create_app
__all__ = ["create_app"]


@@ -0,0 +1,51 @@
"""FastAPI application setup."""
from __future__ import annotations
import sys
from contextlib import asynccontextmanager, suppress
from typing import TYPE_CHECKING
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles
from pydantic import ValidationError
from compose_farm.web.deps import STATIC_DIR, get_config
from compose_farm.web.routes import actions, api, pages
if TYPE_CHECKING:
from collections.abc import AsyncGenerator
@asynccontextmanager
async def lifespan(_app: FastAPI) -> AsyncGenerator[None, None]:
"""Application lifespan handler."""
# Startup: pre-load config (ignore errors - handled per-request)
with suppress(ValidationError, FileNotFoundError):
get_config()
yield
# Shutdown: nothing to clean up
def create_app() -> FastAPI:
"""Create and configure the FastAPI application."""
app = FastAPI(
title="Compose Farm",
description="Web UI for managing Docker Compose services across multiple hosts",
lifespan=lifespan,
)
# Mount static files
app.mount("/static", StaticFiles(directory=str(STATIC_DIR)), name="static")
app.include_router(pages.router)
app.include_router(api.router, prefix="/api")
app.include_router(actions.router, prefix="/api")
# WebSocket routes use Unix-only modules (fcntl, pty)
if sys.platform != "win32":
from compose_farm.web.ws import router as ws_router # noqa: PLC0415
app.include_router(ws_router)
return app


@@ -0,0 +1,32 @@
"""Shared dependencies for web modules.
This module contains shared config and template accessors to avoid circular imports
between app.py and route modules.
"""
from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING
from fastapi.templating import Jinja2Templates
if TYPE_CHECKING:
from compose_farm.config import Config
# Paths
WEB_DIR = Path(__file__).parent
TEMPLATES_DIR = WEB_DIR / "templates"
STATIC_DIR = WEB_DIR / "static"
def get_config() -> Config:
"""Load config from disk (always fresh)."""
from compose_farm.config import load_config # noqa: PLC0415
return load_config()
def get_templates() -> Jinja2Templates:
"""Get Jinja2 templates instance."""
return Jinja2Templates(directory=str(TEMPLATES_DIR))


@@ -0,0 +1,5 @@
"""Web routes."""
from compose_farm.web.routes import actions, api, pages
__all__ = ["actions", "api", "pages"]


@@ -0,0 +1,95 @@
"""Action routes for service operations."""
from __future__ import annotations
import asyncio
import uuid
from typing import TYPE_CHECKING, Any
from fastapi import APIRouter, HTTPException
if TYPE_CHECKING:
from collections.abc import Callable, Coroutine
from compose_farm.web.deps import get_config
from compose_farm.web.streaming import run_cli_streaming, run_compose_streaming, tasks
router = APIRouter(tags=["actions"])
# Store task references to prevent garbage collection
_background_tasks: set[asyncio.Task[None]] = set()
def _start_task(coro_factory: Callable[[str], Coroutine[Any, Any, None]]) -> str:
"""Create a task, register it, and return the task_id."""
task_id = str(uuid.uuid4())
tasks[task_id] = {"status": "running", "output": []}
task: asyncio.Task[None] = asyncio.create_task(coro_factory(task_id))
_background_tasks.add(task)
task.add_done_callback(_background_tasks.discard)
return task_id
async def _run_service_action(name: str, command: str) -> dict[str, Any]:
"""Run a compose command for a service."""
config = get_config()
if name not in config.services:
raise HTTPException(status_code=404, detail=f"Service '{name}' not found")
task_id = _start_task(lambda tid: run_compose_streaming(config, name, command, tid))
return {"task_id": task_id, "service": name, "command": command}
@router.post("/service/{name}/up")
async def up_service(name: str) -> dict[str, Any]:
"""Start a service."""
return await _run_service_action(name, "up")
@router.post("/service/{name}/down")
async def down_service(name: str) -> dict[str, Any]:
"""Stop a service."""
return await _run_service_action(name, "down")
@router.post("/service/{name}/restart")
async def restart_service(name: str) -> dict[str, Any]:
"""Restart a service (down + up)."""
return await _run_service_action(name, "restart")
@router.post("/service/{name}/pull")
async def pull_service(name: str) -> dict[str, Any]:
"""Pull latest images for a service."""
return await _run_service_action(name, "pull")
@router.post("/service/{name}/update")
async def update_service(name: str) -> dict[str, Any]:
"""Update a service (pull + build + down + up)."""
return await _run_service_action(name, "update")
@router.post("/service/{name}/logs")
async def logs_service(name: str) -> dict[str, Any]:
"""Show logs for a service."""
return await _run_service_action(name, "logs")
@router.post("/apply")
async def apply_all() -> dict[str, Any]:
"""Run cf apply to reconcile all services."""
config = get_config()
task_id = _start_task(lambda tid: run_cli_streaming(config, ["apply"], tid))
return {"task_id": task_id, "command": "apply"}
@router.post("/refresh")
async def refresh_state() -> dict[str, Any]:
"""Refresh state from running services."""
config = get_config()
task_id = _start_task(lambda tid: run_cli_streaming(config, ["refresh"], tid))
return {"task_id": task_id, "command": "refresh"}


@@ -0,0 +1,212 @@
"""JSON API routes."""
from __future__ import annotations
import contextlib
import json
from typing import TYPE_CHECKING, Annotated, Any
import yaml
if TYPE_CHECKING:
from pathlib import Path
from fastapi import APIRouter, Body, HTTPException
from fastapi.responses import HTMLResponse
from compose_farm.executor import run_compose_on_host
from compose_farm.paths import find_config_path
from compose_farm.state import load_state
from compose_farm.web.deps import get_config, get_templates
router = APIRouter(tags=["api"])
def _validate_yaml(content: str) -> None:
"""Validate YAML content, raise HTTPException on error."""
try:
yaml.safe_load(content)
except yaml.YAMLError as e:
raise HTTPException(status_code=400, detail=f"Invalid YAML: {e}") from e
def _get_service_compose_path(name: str) -> Path:
"""Get compose path for service, raising HTTPException if not found."""
config = get_config()
if name not in config.services:
raise HTTPException(status_code=404, detail=f"Service '{name}' not found")
compose_path = config.get_compose_path(name)
if not compose_path:
raise HTTPException(status_code=404, detail="Compose file not found")
return compose_path
def _get_compose_services(config: Any, service: str, hosts: list[str]) -> list[dict[str, Any]]:
"""Get container info from compose file (fast, local read).
Returns one entry per container per host for multi-host services.
"""
compose_path = config.get_compose_path(service)
if not compose_path or not compose_path.exists():
return []
compose_data = yaml.safe_load(compose_path.read_text()) or {}
raw_services = compose_data.get("services", {})
if not isinstance(raw_services, dict):
return []
# Project name is the directory name (docker compose default)
project_name = compose_path.parent.name
containers = []
for host in hosts:
for svc_name, svc_def in raw_services.items():
# Use container_name if set, otherwise default to {project}-{service}-1
if isinstance(svc_def, dict) and svc_def.get("container_name"):
container_name = svc_def["container_name"]
else:
container_name = f"{project_name}-{svc_name}-1"
containers.append(
{
"Name": container_name,
"Service": svc_name,
"Host": host,
"State": "unknown", # Status requires Docker query
}
)
return containers
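The container-name fallback used in `_get_compose_services` mirrors docker compose's default of `{project}-{service}-1`; a sketch with invented project/service names:

```python
project_name = "media"  # docker compose defaults the project name to the directory name
raw_services = {
    "plex": {"container_name": "plex"},  # explicit container_name wins
    "sonarr": {},                        # falls back to the compose default
}

names = [
    svc.get("container_name") or f"{project_name}-{name}-1"
    for name, svc in raw_services.items()
]
print(names)  # ['plex', 'media-sonarr-1']
```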
async def _get_container_states(
config: Any, service: str, containers: list[dict[str, Any]]
) -> list[dict[str, Any]]:
"""Query Docker for actual container states on a single host."""
if not containers:
return containers
# All containers should be on the same host
host_name = containers[0]["Host"]
result = await run_compose_on_host(config, service, host_name, "ps --format json", stream=False)
if not result.success:
return containers
# Build state map
state_map: dict[str, str] = {}
for line in result.stdout.strip().split("\n"):
if line.strip():
with contextlib.suppress(json.JSONDecodeError):
data = json.loads(line)
state_map[data.get("Name", "")] = data.get("State", "unknown")
# Update container states
for c in containers:
if c["Name"] in state_map:
c["State"] = state_map[c["Name"]]
return containers
def _render_containers(
service: str, host: str, containers: list[dict[str, Any]], *, show_header: bool = False
) -> str:
"""Render containers HTML using Jinja template."""
templates = get_templates()
template = templates.env.get_template("partials/containers.html")
module = template.make_module()
result: str = module.host_containers(service, host, containers, show_header=show_header)
return result
@router.get("/service/{name}/containers", response_class=HTMLResponse)
async def get_containers(name: str, host: str | None = None) -> HTMLResponse:
"""Get containers for a service as HTML buttons.
If host is specified, queries Docker for that host's status.
Otherwise returns all hosts with loading spinners that auto-fetch.
"""
config = get_config()
if name not in config.services:
raise HTTPException(status_code=404, detail=f"Service '{name}' not found")
# Get hosts where service is running from state
state = load_state(config)
current_hosts = state.get(name)
if not current_hosts:
return HTMLResponse('<span class="text-base-content/60">Service not running</span>')
all_hosts = current_hosts if isinstance(current_hosts, list) else [current_hosts]
# If host specified, return just that host's containers with status
if host:
if host not in all_hosts:
return HTMLResponse(f'<span class="text-error">Host {host} not found</span>')
containers = _get_compose_services(config, name, [host])
containers = await _get_container_states(config, name, containers)
return HTMLResponse(_render_containers(name, host, containers))
# Initial load: return all hosts with loading spinners, each fetches its own status
html_parts = []
is_multi_host = len(all_hosts) > 1
for h in all_hosts:
host_id = f"containers-{name}-{h}".replace(".", "-")
containers = _get_compose_services(config, name, [h])
if is_multi_host:
html_parts.append(f'<div class="font-semibold text-sm mt-3 mb-1">{h}</div>')
# Container for this host that auto-fetches its own status
html_parts.append(f"""
<div id="{host_id}"
hx-get="/api/service/{name}/containers?host={h}"
hx-trigger="load"
hx-target="this"
hx-select="unset"
hx-swap="innerHTML">
{_render_containers(name, h, containers)}
</div>
""")
return HTMLResponse("".join(html_parts))
@router.put("/service/{name}/compose")
async def save_compose(
name: str, content: Annotated[str, Body(media_type="text/plain")]
) -> dict[str, Any]:
"""Save compose file content."""
compose_path = _get_service_compose_path(name)
_validate_yaml(content)
compose_path.write_text(content)
return {"success": True, "message": "Compose file saved"}
@router.put("/service/{name}/env")
async def save_env(
name: str, content: Annotated[str, Body(media_type="text/plain")]
) -> dict[str, Any]:
"""Save .env file content."""
env_path = _get_service_compose_path(name).parent / ".env"
env_path.write_text(content)
return {"success": True, "message": ".env file saved"}
@router.put("/config")
async def save_config(
content: Annotated[str, Body(media_type="text/plain")],
) -> dict[str, Any]:
"""Save compose-farm.yaml config file."""
config_path = find_config_path()
if not config_path:
raise HTTPException(status_code=404, detail="Config file not found")
_validate_yaml(content)
config_path.write_text(content)
return {"success": True, "message": "Config saved"}


@@ -0,0 +1,267 @@
"""HTML page routes."""
from __future__ import annotations
import yaml
from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse
from pydantic import ValidationError
from compose_farm.paths import find_config_path
from compose_farm.state import (
get_orphaned_services,
get_service_host,
get_services_needing_migration,
get_services_not_in_state,
load_state,
)
from compose_farm.web.deps import get_config, get_templates
router = APIRouter()
@router.get("/", response_class=HTMLResponse)
async def index(request: Request) -> HTMLResponse:
"""Dashboard page - combined view of all cluster info."""
templates = get_templates()
# Try to load config, handle errors gracefully
config_error = None
try:
config = get_config()
except (ValidationError, FileNotFoundError) as e:
# Extract error message
if isinstance(e, ValidationError):
config_error = "; ".join(err.get("msg", str(err)) for err in e.errors())
else:
config_error = str(e)
# Read raw config content for the editor
config_path = find_config_path()
config_content = config_path.read_text() if config_path else ""
return templates.TemplateResponse(
"index.html",
{
"request": request,
"config_error": config_error,
"hosts": {},
"services": {},
"config_content": config_content,
"state_content": "",
"running_count": 0,
"stopped_count": 0,
"orphaned": [],
"migrations": [],
"not_started": [],
"services_by_host": {},
},
)
# Get state
deployed = load_state(config)
# Stats
running_count = len(deployed)
stopped_count = len(config.services) - running_count
# Pending operations
orphaned = get_orphaned_services(config)
migrations = get_services_needing_migration(config)
not_started = get_services_not_in_state(config)
# Group services by host
services_by_host: dict[str, list[str]] = {}
for svc, host in deployed.items():
if isinstance(host, list):
for h in host:
services_by_host.setdefault(h, []).append(svc)
else:
services_by_host.setdefault(host, []).append(svc)
# Config file content
config_content = ""
if config.config_path and config.config_path.exists():
config_content = config.config_path.read_text()
# State file content
state_content = yaml.dump({"deployed": deployed}, default_flow_style=False, sort_keys=False)
return templates.TemplateResponse(
"index.html",
{
"request": request,
"config_error": None,
# Config data
"hosts": config.hosts,
"services": config.services,
"config_content": config_content,
# State data
"state_content": state_content,
# Stats
"running_count": running_count,
"stopped_count": stopped_count,
# Pending operations
"orphaned": orphaned,
"migrations": migrations,
"not_started": not_started,
# Services by host
"services_by_host": services_by_host,
},
)
@router.get("/service/{name}", response_class=HTMLResponse)
async def service_detail(request: Request, name: str) -> HTMLResponse:
"""Service detail page."""
config = get_config()
templates = get_templates()
# Get compose file content
compose_path = config.get_compose_path(name)
compose_content = ""
if compose_path and compose_path.exists():
compose_content = compose_path.read_text()
# Get .env file content
env_content = ""
env_path = None
if compose_path:
env_path = compose_path.parent / ".env"
if env_path.exists():
env_content = env_path.read_text()
# Get host info
hosts = config.get_hosts(name)
# Get state
current_host = get_service_host(config, name)
return templates.TemplateResponse(
"service.html",
{
"request": request,
"name": name,
"hosts": hosts,
"current_host": current_host,
"compose_content": compose_content,
"compose_path": str(compose_path) if compose_path else None,
"env_content": env_content,
"env_path": str(env_path) if env_path else None,
},
)
@router.get("/partials/sidebar", response_class=HTMLResponse)
async def sidebar_partial(request: Request) -> HTMLResponse:
"""Sidebar service list partial."""
config = get_config()
templates = get_templates()
state = load_state(config)
# Build service -> host mapping (empty string for multi-host services)
service_hosts = {
svc: "" if host_val == "all" or isinstance(host_val, list) else host_val
for svc, host_val in config.services.items()
}
return templates.TemplateResponse(
"partials/sidebar.html",
{
"request": request,
"services": sorted(config.services.keys()),
"service_hosts": service_hosts,
"hosts": sorted(config.hosts.keys()),
"state": state,
},
)
@router.get("/partials/config-error", response_class=HTMLResponse)
async def config_error_partial(request: Request) -> HTMLResponse:
"""Config error banner partial."""
templates = get_templates()
try:
get_config()
return HTMLResponse("") # No error
except (ValidationError, FileNotFoundError) as e:
if isinstance(e, ValidationError):
error = "; ".join(err.get("msg", str(err)) for err in e.errors())
else:
error = str(e)
return templates.TemplateResponse(
"partials/config_error.html", {"request": request, "config_error": error}
)
@router.get("/partials/stats", response_class=HTMLResponse)
async def stats_partial(request: Request) -> HTMLResponse:
"""Stats cards partial."""
config = get_config()
templates = get_templates()
deployed = load_state(config)
running_count = len(deployed)
stopped_count = len(config.services) - running_count
return templates.TemplateResponse(
"partials/stats.html",
{
"request": request,
"hosts": config.hosts,
"services": config.services,
"running_count": running_count,
"stopped_count": stopped_count,
},
)
@router.get("/partials/pending", response_class=HTMLResponse)
async def pending_partial(request: Request, expanded: bool = True) -> HTMLResponse:
"""Pending operations partial."""
config = get_config()
templates = get_templates()
orphaned = get_orphaned_services(config)
migrations = get_services_needing_migration(config)
not_started = get_services_not_in_state(config)
return templates.TemplateResponse(
"partials/pending.html",
{
"request": request,
"orphaned": orphaned,
"migrations": migrations,
"not_started": not_started,
"expanded": expanded,
},
)
@router.get("/partials/services-by-host", response_class=HTMLResponse)
async def services_by_host_partial(request: Request, expanded: bool = True) -> HTMLResponse:
"""Services by host partial."""
config = get_config()
templates = get_templates()
deployed = load_state(config)
# Group services by host
services_by_host: dict[str, list[str]] = {}
for svc, host in deployed.items():
if isinstance(host, list):
for h in host:
services_by_host.setdefault(h, []).append(svc)
else:
services_by_host.setdefault(host, []).append(svc)
return templates.TemplateResponse(
"partials/services_by_host.html",
{
"request": request,
"hosts": config.hosts,
"services_by_host": services_by_host,
"expanded": expanded,
},
)


@@ -0,0 +1,55 @@
/* Editors (Monaco) - wrapper makes it resizable */
.editor-wrapper {
resize: vertical;
overflow: hidden;
min-height: 150px;
}
.editor-wrapper .yaml-editor,
.editor-wrapper .env-editor,
.editor-wrapper .yaml-viewer {
height: 100%;
border: 1px solid oklch(var(--bc) / 0.2);
border-radius: 0.5rem;
}
.editor-wrapper.yaml-wrapper { height: 400px; }
.editor-wrapper.env-wrapper { height: 250px; }
.editor-wrapper.viewer-wrapper { height: 300px; }
/* Terminal - no custom CSS needed, using h-full class in HTML */
/* Prevent save button resize when text changes */
#save-btn, #save-config-btn {
min-width: 5rem;
}
/* Rainbow hover effect for headers */
.rainbow-hover {
transition: color 0.3s;
}
.rainbow-hover:hover {
background: linear-gradient(
90deg,
#e07070,
#e0a070,
#d0d070,
#70c080,
#7090d0,
#9080b0,
#b080a0,
#e07070
);
background-size: 16em 100%;
background-clip: text;
-webkit-background-clip: text;
color: transparent;
animation: rainbow 4s linear infinite;
}
@keyframes rainbow {
to {
background-position: 16em center;
}
}


@@ -0,0 +1,546 @@
/**
* Compose Farm Web UI JavaScript
*/
// ANSI escape codes for terminal output
const ANSI = {
RED: '\x1b[31m',
GREEN: '\x1b[32m',
DIM: '\x1b[2m',
RESET: '\x1b[0m',
CRLF: '\r\n'
};
// Store active terminals and editors
const terminals = {};
const editors = {};
let monacoLoaded = false;
let monacoLoading = false;
// Terminal color theme (dark mode matching PicoCSS)
const TERMINAL_THEME = {
background: '#1a1a2e',
foreground: '#e4e4e7',
cursor: '#e4e4e7',
cursorAccent: '#1a1a2e',
black: '#18181b',
red: '#ef4444',
green: '#22c55e',
yellow: '#eab308',
blue: '#3b82f6',
magenta: '#a855f7',
cyan: '#06b6d4',
white: '#e4e4e7',
brightBlack: '#52525b',
brightRed: '#f87171',
brightGreen: '#4ade80',
brightYellow: '#facc15',
brightBlue: '#60a5fa',
brightMagenta: '#c084fc',
brightCyan: '#22d3ee',
brightWhite: '#fafafa'
};
/**
* Create a terminal with fit addon and resize observer
* @param {HTMLElement} container - Container element
* @param {object} extraOptions - Additional terminal options
* @param {function} onResize - Optional callback called with (cols, rows) after resize
* @returns {{term: Terminal, fitAddon: FitAddon}}
*/
function createTerminal(container, extraOptions = {}, onResize = null) {
container.innerHTML = '';
const term = new Terminal({
convertEol: true,
theme: TERMINAL_THEME,
fontSize: 13,
fontFamily: 'Monaco, Menlo, "Ubuntu Mono", monospace',
scrollback: 5000,
...extraOptions
});
const fitAddon = new FitAddon.FitAddon();
term.loadAddon(fitAddon);
term.open(container);
fitAddon.fit();
const handleResize = () => {
fitAddon.fit();
if (onResize) {
onResize(term.cols, term.rows);
}
};
window.addEventListener('resize', handleResize);
new ResizeObserver(handleResize).observe(container);
return { term, fitAddon };
}
/**
* Create WebSocket connection with standard handlers
* @param {string} path - WebSocket path
* @returns {WebSocket}
*/
function createWebSocket(path) {
const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
return new WebSocket(`${protocol}//${window.location.host}${path}`);
}
/**
* Initialize a terminal and connect to WebSocket for streaming
*/
function initTerminal(elementId, taskId) {
const container = document.getElementById(elementId);
if (!container) {
console.error('Terminal container not found:', elementId);
return;
}
const { term, fitAddon } = createTerminal(container);
const ws = createWebSocket(`/ws/terminal/${taskId}`);
ws.onopen = () => {
term.write(`${ANSI.DIM}[Connected]${ANSI.RESET}${ANSI.CRLF}`);
setTerminalLoading(true);
};
ws.onmessage = (event) => term.write(event.data);
ws.onclose = () => setTerminalLoading(false);
ws.onerror = (error) => {
term.write(`${ANSI.RED}[WebSocket Error]${ANSI.RESET}${ANSI.CRLF}`);
console.error('WebSocket error:', error);
setTerminalLoading(false);
};
terminals[taskId] = { term, ws, fitAddon };
return { term, ws };
}
window.initTerminal = initTerminal;
/**
* Initialize an interactive exec terminal
*/
let execTerminal = null;
let execWs = null;
function initExecTerminal(service, container, host) {
const containerEl = document.getElementById('exec-terminal-container');
const terminalEl = document.getElementById('exec-terminal');
if (!containerEl || !terminalEl) {
console.error('Exec terminal elements not found');
return;
}
containerEl.classList.remove('hidden');
// Clean up existing
if (execWs) { execWs.close(); execWs = null; }
if (execTerminal) { execTerminal.dispose(); execTerminal = null; }
// Create WebSocket first so resize callback can use it
execWs = createWebSocket(`/ws/exec/${service}/${container}/${host}`);
// Resize callback sends size to WebSocket
const sendSize = (cols, rows) => {
if (execWs && execWs.readyState === WebSocket.OPEN) {
execWs.send(JSON.stringify({ type: 'resize', cols, rows }));
}
};
const { term } = createTerminal(terminalEl, { cursorBlink: true }, sendSize);
execTerminal = term;
execWs.onopen = () => { sendSize(term.cols, term.rows); term.focus(); };
execWs.onmessage = (event) => term.write(event.data);
execWs.onclose = () => term.write(`${ANSI.CRLF}${ANSI.DIM}[Connection closed]${ANSI.RESET}${ANSI.CRLF}`);
execWs.onerror = (error) => {
term.write(`${ANSI.RED}[WebSocket Error]${ANSI.RESET}${ANSI.CRLF}`);
console.error('Exec WebSocket error:', error);
};
term.onData((data) => {
if (execWs && execWs.readyState === WebSocket.OPEN) {
execWs.send(data);
}
});
}
window.initExecTerminal = initExecTerminal;
/**
* Refresh dashboard partials while preserving collapse states
*/
function refreshDashboard() {
const isExpanded = (id) => document.getElementById(id)?.checked ?? true;
htmx.ajax('GET', '/partials/sidebar', {target: '#sidebar nav', swap: 'innerHTML'});
htmx.ajax('GET', '/partials/stats', {target: '#stats-cards', swap: 'outerHTML'});
htmx.ajax('GET', `/partials/pending?expanded=${isExpanded('pending-collapse')}`, {target: '#pending-operations', swap: 'outerHTML'});
htmx.ajax('GET', `/partials/services-by-host?expanded=${isExpanded('services-by-host-collapse')}`, {target: '#services-by-host', swap: 'outerHTML'});
htmx.ajax('GET', '/partials/config-error', {target: '#config-error', swap: 'innerHTML'});
}
/**
* Load Monaco editor dynamically (only once)
*/
function loadMonaco(callback) {
if (monacoLoaded) {
callback();
return;
}
if (monacoLoading) {
// Wait for it to load
const checkInterval = setInterval(() => {
if (monacoLoaded) {
clearInterval(checkInterval);
callback();
}
}, 100);
return;
}
monacoLoading = true;
// Load the Monaco loader script
const script = document.createElement('script');
script.src = 'https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/loader.js';
script.onload = function() {
require.config({ paths: { vs: 'https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs' }});
require(['vs/editor/editor.main'], function() {
monacoLoaded = true;
monacoLoading = false;
callback();
});
};
document.head.appendChild(script);
}
/**
* Create a Monaco editor instance
* @param {HTMLElement} container - Container element
* @param {string} content - Initial content
* @param {string} language - Editor language (yaml, plaintext, etc.)
* @param {boolean} readonly - Whether editor is read-only
* @returns {object} Monaco editor instance
*/
function createEditor(container, content, language, readonly = false) {
const options = {
value: content,
language: language,
theme: 'vs-dark',
minimap: { enabled: false },
automaticLayout: true,
scrollBeyondLastLine: false,
fontSize: 14,
lineNumbers: 'on',
wordWrap: 'on'
};
if (readonly) {
options.readOnly = true;
options.domReadOnly = true;
}
const editor = monaco.editor.create(container, options);
// Add Command+S / Ctrl+S handler for editable editors
if (!readonly) {
editor.addCommand(monaco.KeyMod.CtrlCmd | monaco.KeyCode.KeyS, function() {
saveAllEditors();
});
}
return editor;
}
/**
* Initialize all Monaco editors on the page
*/
function initMonacoEditors() {
// Dispose existing editors
Object.values(editors).forEach(ed => {
if (ed && ed.dispose) ed.dispose();
});
Object.keys(editors).forEach(key => delete editors[key]);
const editorConfigs = [
{ id: 'compose-editor', language: 'yaml', readonly: false },
{ id: 'env-editor', language: 'plaintext', readonly: false },
{ id: 'config-editor', language: 'yaml', readonly: false },
{ id: 'state-viewer', language: 'yaml', readonly: true }
];
// Check if any editor elements exist
const hasEditors = editorConfigs.some(({ id }) => document.getElementById(id));
if (!hasEditors) return;
// Load Monaco and create editors
loadMonaco(() => {
editorConfigs.forEach(({ id, language, readonly }) => {
const el = document.getElementById(id);
if (!el) return;
const content = el.dataset.content || '';
editors[id] = createEditor(el, content, language, readonly);
if (!readonly) {
editors[id].saveUrl = el.dataset.saveUrl;
}
});
});
}
/**
* Save all editors
*/
async function saveAllEditors() {
const saveBtn = document.getElementById('save-btn') || document.getElementById('save-config-btn');
const results = [];
for (const [id, editor] of Object.entries(editors)) {
if (!editor || !editor.saveUrl) continue;
const content = editor.getValue();
try {
const response = await fetch(editor.saveUrl, {
method: 'PUT',
headers: { 'Content-Type': 'text/plain' },
body: content
});
const data = await response.json();
if (!response.ok || !data.success) {
results.push({ id, success: false, error: data.detail || 'Unknown error' });
} else {
results.push({ id, success: true });
}
} catch (e) {
results.push({ id, success: false, error: e.message });
}
}
    // Show the result, distinguishing success from failure
    if (saveBtn && results.length > 0) {
        const failed = results.filter(r => !r.success);
        if (failed.length > 0) {
            console.error('Save errors:', failed);
            saveBtn.textContent = 'Save failed';
        } else {
            saveBtn.textContent = 'Saved!';
        }
        setTimeout(() => saveBtn.textContent = saveBtn.id === 'save-config-btn' ? 'Save Config' : 'Save All', 2000);
        refreshDashboard();
    }
}
/**
* Initialize save button handler
*/
function initSaveButton() {
const saveBtn = document.getElementById('save-btn') || document.getElementById('save-config-btn');
if (!saveBtn) return;
saveBtn.onclick = saveAllEditors;
}
/**
* Global keyboard shortcut handler
*/
function initKeyboardShortcuts() {
document.addEventListener('keydown', function(e) {
// Command+S (Mac) or Ctrl+S (Windows/Linux)
if ((e.metaKey || e.ctrlKey) && e.key === 's') {
// Only handle if we have editors and no Monaco editor is focused
if (Object.keys(editors).length > 0) {
// Check if any Monaco editor is focused
const focusedEditor = Object.values(editors).find(ed => ed && ed.hasTextFocus && ed.hasTextFocus());
if (!focusedEditor) {
e.preventDefault();
saveAllEditors();
}
}
}
});
}
/**
* Initialize page components
*/
function initPage() {
initMonacoEditors();
initSaveButton();
}
// Initialize on page load
document.addEventListener('DOMContentLoaded', function() {
initPage();
initKeyboardShortcuts();
});
// Re-initialize after HTMX swaps main content
document.body.addEventListener('htmx:afterSwap', function(evt) {
if (evt.detail.target.id === 'main-content') {
initPage();
}
});
/**
* Expand terminal collapse and scroll to it
*/
function expandTerminal() {
const toggle = document.getElementById('terminal-toggle');
if (toggle) toggle.checked = true;
const collapse = document.getElementById('terminal-collapse');
if (collapse) {
collapse.scrollIntoView({ behavior: 'smooth', block: 'start' });
}
}
/**
* Show/hide terminal loading spinner
*/
function setTerminalLoading(loading) {
const spinner = document.getElementById('terminal-spinner');
if (spinner) {
spinner.classList.toggle('hidden', !loading);
}
}
// Handle action responses (terminal streaming)
document.body.addEventListener('htmx:afterRequest', function(evt) {
if (!evt.detail.successful || !evt.detail.xhr) return;
const text = evt.detail.xhr.responseText;
// Only try to parse if it looks like JSON (starts with {)
if (!text || !text.trim().startsWith('{')) return;
try {
const response = JSON.parse(text);
if (response.task_id) {
// Expand terminal and scroll to it
expandTerminal();
// Wait for xterm to be loaded if needed
const tryInit = (attempts) => {
if (typeof Terminal !== 'undefined' && typeof FitAddon !== 'undefined') {
initTerminal('terminal-output', response.task_id);
} else if (attempts > 0) {
setTimeout(() => tryInit(attempts - 1), 100);
} else {
console.error('xterm.js failed to load');
}
};
tryInit(20); // Try for up to 2 seconds
}
} catch (e) {
// Not valid JSON, ignore
}
});
// Command Palette
(function() {
const dialog = document.getElementById('cmd-palette');
const input = document.getElementById('cmd-input');
const list = document.getElementById('cmd-list');
const fab = document.getElementById('cmd-fab');
if (!dialog || !input || !list) return;
const colors = { service: '#22c55e', action: '#eab308', nav: '#3b82f6' };
let commands = [];
let filtered = [];
let selected = 0;
const post = (url) => () => htmx.ajax('POST', url, {swap: 'none'});
const nav = (url) => () => window.location.href = url;
const cmd = (type, name, desc, action) => ({ type, name, desc, action });
function buildCommands() {
const actions = [
cmd('action', 'Apply', 'Make reality match config', post('/api/apply')),
cmd('action', 'Refresh', 'Update state from reality', post('/api/refresh')),
cmd('nav', 'Dashboard', 'Go to dashboard', nav('/')),
];
// Add service-specific actions if on a service page
const match = window.location.pathname.match(/^\/service\/(.+)$/);
if (match) {
const svc = decodeURIComponent(match[1]);
const svcCmd = (name, desc, endpoint) => cmd('service', name, `${desc} ${svc}`, post(`/api/service/${svc}/${endpoint}`));
actions.unshift(
svcCmd('Up', 'Start', 'up'),
svcCmd('Down', 'Stop', 'down'),
svcCmd('Restart', 'Restart', 'restart'),
svcCmd('Pull', 'Pull', 'pull'),
svcCmd('Update', 'Pull + restart', 'update'),
svcCmd('Logs', 'View logs for', 'logs'),
);
}
// Add nav commands for all services from sidebar
    const services = [...document.querySelectorAll('#sidebar-services li[data-svc] a[href]')].map(a => {
        const href = a.getAttribute('href');
        // Decode for display; navigate with the original (still-encoded) href
        const name = decodeURIComponent(href.replace('/service/', ''));
        return cmd('nav', name, 'Go to service', nav(href));
    });
commands = [...actions, ...services];
}
function filter() {
const q = input.value.toLowerCase();
filtered = commands.filter(c => c.name.toLowerCase().includes(q));
selected = Math.max(0, Math.min(selected, filtered.length - 1));
}
function render() {
list.innerHTML = filtered.map((c, i) => `
<a class="flex justify-between items-center px-3 py-2 rounded-r cursor-pointer hover:bg-base-200 border-l-4 ${i === selected ? 'bg-base-300' : ''}" style="border-left-color: ${colors[c.type] || '#666'}" data-idx="${i}">
<span><span class="opacity-50 text-xs mr-2">${c.type}</span>${c.name}</span>
<span class="opacity-40 text-xs">${c.desc}</span>
</a>
`).join('') || '<div class="opacity-50 p-2">No matches</div>';
}
function open() {
buildCommands();
selected = 0;
input.value = '';
filter();
render();
dialog.showModal();
input.focus();
}
function exec() {
if (filtered[selected]) {
dialog.close();
filtered[selected].action();
}
}
// Keyboard: Cmd+K to open
document.addEventListener('keydown', e => {
if ((e.metaKey || e.ctrlKey) && e.key === 'k') {
e.preventDefault();
open();
}
});
// Input filtering
input.addEventListener('input', () => { filter(); render(); });
// Keyboard nav inside palette
dialog.addEventListener('keydown', e => {
if (!dialog.open) return;
if (e.key === 'ArrowDown') { e.preventDefault(); selected = Math.min(selected + 1, filtered.length - 1); render(); }
else if (e.key === 'ArrowUp') { e.preventDefault(); selected = Math.max(selected - 1, 0); render(); }
else if (e.key === 'Enter') { e.preventDefault(); exec(); }
});
// Click to execute
list.addEventListener('click', e => {
const a = e.target.closest('a[data-idx]');
if (a) {
selected = parseInt(a.dataset.idx, 10);
exec();
}
});
// FAB click to open
if (fab) fab.addEventListener('click', open);
})();
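The palette re-clamps the selection index on every keystroke so arrow-key navigation never points past the end of the filtered list. The same logic, sketched in Python for clarity (function name illustrative):

```python
def filter_commands(commands: list[str], query: str, selected: int) -> tuple[list[str], int]:
    """Case-insensitive substring filter; clamp the selection into the new list."""
    q = query.lower()
    filtered = [c for c in commands if q in c.lower()]
    # max(0, ...) keeps the index at 0 even when nothing matches
    return filtered, max(0, min(selected, len(filtered) - 1))
```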

View File

@@ -0,0 +1,114 @@
"""Streaming executor adapter for web UI."""
from __future__ import annotations
import asyncio
import os
from pathlib import Path
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from compose_farm.config import Config
# ANSI escape codes for terminal output
RED = "\x1b[31m"
GREEN = "\x1b[32m"
DIM = "\x1b[2m"
RESET = "\x1b[0m"
CRLF = "\r\n"
def _get_ssh_auth_sock() -> str | None:
"""Get SSH_AUTH_SOCK, auto-detecting forwarded agent if needed."""
sock = os.environ.get("SSH_AUTH_SOCK")
if sock and Path(sock).is_socket():
return sock
# Try to find a forwarded SSH agent socket
agent_dir = Path.home() / ".ssh" / "agent"
if agent_dir.is_dir():
sockets = sorted(
agent_dir.glob("s.*.sshd.*"), key=lambda p: p.stat().st_mtime, reverse=True
)
for s in sockets:
if s.is_socket():
return str(s)
return None


# In-memory task registry
tasks: dict[str, dict[str, Any]] = {}


async def stream_to_task(task_id: str, message: str) -> None:
    """Send a message to a task's output buffer."""
    if task_id in tasks:
        tasks[task_id]["output"].append(message)


async def run_cli_streaming(
    config: Config,
    args: list[str],
    task_id: str,
) -> None:
    """Run a cf CLI command as a subprocess and stream output to the task buffer.

    This reuses all CLI logic including Rich formatting, progress bars, etc.
    The subprocess has no real TTY, so FORCE_COLOR is set to make Rich emit
    ANSI codes anyway.
    """
    try:
        # Build command - the config option goes after the subcommand
        cmd = ["cf", *args, f"--config={config.config_path}"]
        # Show the command being executed
        cmd_display = " ".join(["cf", *args])
        await stream_to_task(task_id, f"{DIM}$ {cmd_display}{RESET}{CRLF}")
        # Force color output even though there's no real TTY;
        # set COLUMNS so Rich/Typer wrap output to a sane width
        env = {"FORCE_COLOR": "1", "TERM": "xterm-256color", "COLUMNS": "120"}
        # Ensure an SSH agent is available (auto-detect if needed)
        ssh_sock = _get_ssh_auth_sock()
        if ssh_sock:
            env["SSH_AUTH_SOCK"] = ssh_sock
        process = await asyncio.create_subprocess_exec(
            *cmd,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.STDOUT,
            env={**os.environ, **env},
        )
        # Stream output line by line
        if process.stdout:
            async for line in process.stdout:
                text = line.decode("utf-8", errors="replace")
                # Convert \n to \r\n for xterm.js
                if text.endswith("\n") and not text.endswith("\r\n"):
                    text = text[:-1] + "\r\n"
                await stream_to_task(task_id, text)
        exit_code = await process.wait()
        tasks[task_id]["status"] = "completed" if exit_code == 0 else "failed"
    except Exception as e:  # noqa: BLE001 - surface any failure in the terminal
        await stream_to_task(task_id, f"{RED}Error: {e}{RESET}{CRLF}")
        tasks[task_id]["status"] = "failed"


async def run_compose_streaming(
    config: Config,
    service: str,
    command: str,
    task_id: str,
) -> None:
    """Run a compose command (up/down/pull/restart) via CLI subprocess."""
    # Split the command into args (e.g., "up -d" -> ["up", "-d"])
    args = command.split()
    cli_cmd = args[0]  # up, down, pull, restart
    extra_args = args[1:]  # -d, etc.
    # Build CLI args: subcommand, service, then extra flags
    cli_args = [cli_cmd, service, *extra_args]
    await run_cli_streaming(config, cli_args, task_id)
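The `\n` to `\r\n` conversion inside `run_cli_streaming` exists because xterm.js interprets a bare line feed as "move down one row" without returning the cursor to column 0, which produces stair-stepped output. The same normalization as a standalone sketch (helper name illustrative):

```python
def to_crlf(line: str) -> str:
    """Rewrite a trailing LF as CRLF, leaving existing CRLF endings untouched."""
    if line.endswith("\n") and not line.endswith("\r\n"):
        return line[:-1] + "\r\n"
    return line
```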


@@ -0,0 +1,67 @@
{% from "partials/icons.html" import github, hamburger %}
<!DOCTYPE html>
<html lang="en" data-theme="dark">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>{% block title %}Compose Farm{% endblock %}</title>
<!-- daisyUI + Tailwind -->
<link href="https://cdn.jsdelivr.net/npm/daisyui@5" data-vendor="daisyui.css" rel="stylesheet" type="text/css" />
<script src="https://cdn.jsdelivr.net/npm/@tailwindcss/browser@4" data-vendor="tailwind.js"></script>
<!-- xterm.js -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@xterm/xterm@5.5.0/css/xterm.css" data-vendor="xterm.css">
<!-- Custom styles -->
<link rel="stylesheet" href="/static/app.css">
</head>
<body class="min-h-screen bg-base-200">
<div class="drawer lg:drawer-open">
<input id="drawer-toggle" type="checkbox" class="drawer-toggle" />
<!-- Main content -->
<div class="drawer-content flex flex-col">
<!-- Mobile navbar with hamburger -->
<header class="navbar bg-base-100 border-b border-base-300 lg:hidden">
<label for="drawer-toggle" class="btn btn-ghost btn-square">
{{ hamburger() }}
</label>
<span class="font-semibold rainbow-hover">Compose Farm</span>
</header>
<main id="main-content" class="flex-1 p-6 overflow-y-auto" hx-boost="true" hx-target="#main-content" hx-select="#main-content" hx-swap="outerHTML">
{% block content %}{% endblock %}
</main>
</div>
<!-- Sidebar -->
<div class="drawer-side">
<label for="drawer-toggle" class="drawer-overlay" aria-label="close sidebar"></label>
<aside id="sidebar" class="w-64 bg-base-100 border-r border-base-300 flex flex-col min-h-screen">
<header class="p-4 border-b border-base-300">
<h2 class="text-lg font-semibold flex items-center gap-2">
<span class="rainbow-hover">Compose Farm</span>
<a href="https://github.com/basnijholt/compose-farm" target="_blank" title="GitHub" class="opacity-50 hover:opacity-100 transition-opacity">
{{ github() }}
</a>
</h2>
</header>
<nav class="flex-1 overflow-y-auto p-2" hx-get="/partials/sidebar" hx-trigger="load" hx-swap="innerHTML">
<span class="loading loading-spinner loading-sm"></span> Loading...
</nav>
</aside>
</div>
</div>
<!-- Command Palette -->
{% include "partials/command_palette.html" %}
<!-- Scripts - HTMX first -->
<script src="https://unpkg.com/htmx.org@2.0.4" data-vendor="htmx.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@xterm/xterm@5.5.0/lib/xterm.js" data-vendor="xterm.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@xterm/addon-fit@0.10.0/lib/addon-fit.js" data-vendor="xterm-fit.js"></script>
<script src="/static/app.js"></script>
{% block scripts %}{% endblock %}
</body>
</html>
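The source base.html keeps CDN links for development; per the build-hook commit, a wheel build downloads each asset and rewrites the tags carrying a `data-vendor` attribute to local paths. A rough sketch of such a rewrite (the regex approach and the `/static/vendor/` prefix are assumptions, not the actual Hatch hook):

```python
import re


def rewrite_vendor_urls(html: str, prefix: str = "/static/vendor/") -> str:
    """Swap CDN URLs for local paths on tags marked with data-vendor="name"."""
    # Matches src/href with an http(s) URL followed (after other attrs) by data-vendor
    pattern = r'(src|href)="https?://[^"]+"([^>]*?)data-vendor="([^"]+)"'

    def repl(m: re.Match[str]) -> str:
        attr, rest, name = m.group(1), m.group(2), m.group(3)
        return f'{attr}="{prefix}{name}"{rest}data-vendor="{name}"'

    return re.sub(pattern, repl, html)
```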

View File

@@ -0,0 +1,73 @@
{% extends "base.html" %}
{% from "partials/components.html" import page_header, collapse, stat_card, table, action_btn %}
{% from "partials/icons.html" import check, refresh_cw, save, settings, server, database %}
{% block title %}Dashboard - Compose Farm{% endblock %}
{% block content %}
<div class="max-w-5xl">
{{ page_header("Compose Farm", "Cluster overview and management") }}
<!-- Stats Cards -->
{% include "partials/stats.html" %}
<!-- Global Actions -->
<div class="flex flex-wrap gap-2 mb-6">
{{ action_btn("Apply", "/api/apply", "primary", "Make reality match config", check()) }}
{{ action_btn("Refresh", "/api/refresh", "outline", "Update state from reality", refresh_cw()) }}
<button id="save-config-btn" class="btn btn-outline">{{ save() }} Save Config</button>
</div>
{% include "partials/terminal.html" %}
<!-- Config Error Banner -->
<div id="config-error">
{% if config_error %}
{% include "partials/config_error.html" %}
{% endif %}
</div>
<!-- Config Editor -->
{% call collapse("Edit Config", badge="compose-farm.yaml", icon=settings(), checked=config_error) %}
<div class="editor-wrapper yaml-wrapper">
<div id="config-editor" class="yaml-editor" data-content="{{ config_content | e }}" data-save-url="/api/config"></div>
</div>
{% endcall %}
<!-- Pending Operations -->
{% include "partials/pending.html" %}
<!-- Services by Host -->
{% include "partials/services_by_host.html" %}
<!-- Hosts Configuration -->
{% call collapse("Hosts (" ~ (hosts | length) ~ ")", icon=server()) %}
{% call table() %}
<thead>
<tr>
<th>Name</th>
<th>Address</th>
<th>User</th>
<th>Port</th>
</tr>
</thead>
<tbody>
{% for name, host in hosts.items() %}
<tr class="hover:bg-base-300">
<td class="font-semibold">{{ name }}</td>
<td><code class="text-sm">{{ host.address }}</code></td>
<td><code class="text-sm">{{ host.user }}</code></td>
<td><code class="text-sm">{{ host.port }}</code></td>
</tr>
{% endfor %}
</tbody>
{% endcall %}
{% endcall %}
<!-- State Viewer -->
{% call collapse("Raw State", badge="compose-farm-state.yaml", icon=database()) %}
<div class="editor-wrapper viewer-wrapper">
<div id="state-viewer" class="yaml-viewer" data-content="{{ state_content | e }}"></div>
</div>
{% endcall %}
</div>
{% endblock %}


@@ -0,0 +1,19 @@
{% from "partials/icons.html" import search, command %}
<dialog id="cmd-palette" class="modal">
<div class="modal-box max-w-lg p-0">
<label class="input input-lg bg-base-100 border-0 border-b border-base-300 w-full rounded-none rounded-t-box sticky top-0 z-10 focus-within:outline-none">
{{ search(20) }}
<input type="text" id="cmd-input" class="grow" placeholder="Type a command..." autocomplete="off" />
<kbd class="kbd kbd-sm opacity-50">esc</kbd>
</label>
<div id="cmd-list" class="flex flex-col p-2 max-h-80 overflow-y-auto">
<!-- Populated by JS -->
</div>
</div>
<form method="dialog" class="modal-backdrop"><button>close</button></form>
</dialog>
<!-- Floating button to open command palette -->
<button id="cmd-fab" class="btn btn-circle glass shadow-lg fixed bottom-6 right-6 z-50 hover:ring hover:ring-base-content/50" title="Command Palette (⌘K)">
{{ command(24) }}
</button>


@@ -0,0 +1,52 @@
{# Page header with title and optional subtitle (supports HTML) #}
{% macro page_header(title, subtitle=None) %}
<div class="mb-6">
<h1 class="text-3xl font-bold rainbow-hover">{{ title }}</h1>
{% if subtitle %}
<p class="text-base-content/60 text-lg">{{ subtitle | safe }}</p>
{% endif %}
</div>
{% endmacro %}
{# Collapsible section #}
{% macro collapse(title, id=None, checked=False, badge=None, icon=None) %}
<div class="collapse collapse-arrow bg-base-100 shadow mb-4">
<input type="checkbox" {% if id %}id="{{ id }}"{% endif %} {% if checked %}checked{% endif %} />
<div class="collapse-title font-medium flex items-center gap-2">
{% if icon %}{{ icon }}{% endif %}{{ title }}
{% if badge %}<code class="text-xs ml-2 opacity-60">{{ badge }}</code>{% endif %}
</div>
<div class="collapse-content">
{{ caller() }}
</div>
</div>
{% endmacro %}
{# Action button with htmx #}
{% macro action_btn(label, url, style="outline", title=None, icon=None) %}
<button hx-post="{{ url }}"
hx-swap="none"
class="btn btn-{{ style }}"
{% if title %}title="{{ title }}"{% endif %}>
{% if icon %}{{ icon }}{% endif %}{{ label }}
</button>
{% endmacro %}
{# Stat card for dashboard #}
{% macro stat_card(label, value, color=None, icon=None) %}
<div class="card bg-base-100 shadow">
<div class="card-body items-center text-center">
<h2 class="card-title text-base-content/60 text-sm gap-1">{% if icon %}{{ icon }}{% endif %}{{ label }}</h2>
<p class="text-4xl font-bold {% if color %}text-{{ color }}{% endif %}">{{ value }}</p>
</div>
</div>
{% endmacro %}
{# Data table wrapper #}
{% macro table() %}
<div class="overflow-x-auto">
<table class="table table-zebra">
{{ caller() }}
</table>
</div>
{% endmacro %}


@@ -0,0 +1,8 @@
{% from "partials/icons.html" import alert_triangle %}
<div class="alert alert-error mb-4">
{{ alert_triangle(size=20) }}
<div>
<h3 class="font-bold">Configuration Error</h3>
<div class="text-sm">{{ config_error }}</div>
</div>
</div>


@@ -0,0 +1,27 @@
{# Container list for a service on a single host #}
{% from "partials/icons.html" import terminal %}
{% macro container_row(service, container, host) %}
<div class="flex items-center gap-2 mb-2">
{% if container.State == "running" %}
<span class="badge badge-success">running</span>
{% elif container.State == "unknown" %}
<span class="badge badge-ghost"><span class="loading loading-spinner loading-xs"></span></span>
{% else %}
<span class="badge badge-warning">{{ container.State }}</span>
{% endif %}
<code class="text-sm flex-1">{{ container.Name }}</code>
<button class="btn btn-sm btn-outline"
onclick="initExecTerminal('{{ service }}', '{{ container.Name }}', '{{ host }}')">
{{ terminal() }} Shell
</button>
</div>
{% endmacro %}
{% macro host_containers(service, host, containers, show_header=False) %}
{% if show_header %}
<div class="font-semibold text-sm mt-3 mb-1">{{ host }}</div>
{% endif %}
{% for container in containers %}
{{ container_row(service, container, host) }}
{% endfor %}
{% endmacro %}


@@ -0,0 +1,148 @@
{# Lucide-style icons (https://lucide.dev) - 24x24 viewBox, 2px stroke, round caps #}
{# Brand icons #}
{% macro github(size=16) %}
<svg height="{{ size }}" width="{{ size }}" viewBox="0 0 16 16" fill="currentColor"><path d="M8 0C3.58 0 0 3.58 0 8c0 3.54 2.29 6.53 5.47 7.59.4.07.55-.17.55-.38 0-.19-.01-.82-.01-1.49-2.01.37-2.53-.49-2.69-.94-.09-.23-.48-.94-.82-1.13-.28-.15-.68-.52-.01-.53.63-.01 1.08.58 1.23.82.72 1.21 1.87.87 2.33.66.07-.52.28-.87.51-1.07-1.78-.2-3.64-.89-3.64-3.95 0-.87.31-1.59.82-2.15-.08-.2-.36-1.02.08-2.12 0 0 .67-.21 2.2.82.64-.18 1.32-.27 2-.27.68 0 1.36.09 2 .27 1.53-1.04 2.2-.82 2.2-.82.44 1.1.16 1.92.08 2.12.51.56.82 1.27.82 2.15 0 3.07-1.87 3.75-3.65 3.95.29.25.54.73.54 1.48 0 1.07-.01 1.93-.01 2.2 0 .21.15.46.55.38A8.013 8.013 0 0016 8c0-4.42-3.58-8-8-8z"/></svg>
{% endmacro %}
{# UI icons #}
{% macro hamburger(size=20) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<line x1="4" x2="20" y1="6" y2="6"/><line x1="4" x2="20" y1="12" y2="12"/><line x1="4" x2="20" y1="18" y2="18"/>
</svg>
{% endmacro %}
{% macro command(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M15 6v12a3 3 0 1 0 3-3H6a3 3 0 1 0 3 3V6a3 3 0 1 0-3 3h12a3 3 0 1 0-3-3"/>
</svg>
{% endmacro %}
{# Action icons #}
{% macro play(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<polygon points="6 3 20 12 6 21 6 3"/>
</svg>
{% endmacro %}
{% macro square(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<rect width="14" height="14" x="5" y="5" rx="2"/>
</svg>
{% endmacro %}
{% macro rotate_cw(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M21 12a9 9 0 1 1-9-9c2.52 0 4.93 1 6.74 2.74L21 8"/><path d="M21 3v5h-5"/>
</svg>
{% endmacro %}
{% macro download(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M21 15v4a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2v-4"/><polyline points="7 10 12 15 17 10"/><line x1="12" x2="12" y1="15" y2="3"/>
</svg>
{% endmacro %}
{% macro cloud_download(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M12 13v8l-4-4"/><path d="m12 21 4-4"/><path d="M4.393 15.269A7 7 0 1 1 15.71 8h1.79a4.5 4.5 0 0 1 2.436 8.284"/>
</svg>
{% endmacro %}
{% macro file_text(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M15 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V7Z"/><path d="M14 2v4a2 2 0 0 0 2 2h4"/><path d="M10 9H8"/><path d="M16 13H8"/><path d="M16 17H8"/>
</svg>
{% endmacro %}
{% macro save(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M15.2 3a2 2 0 0 1 1.4.6l3.8 3.8a2 2 0 0 1 .6 1.4V19a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V5a2 2 0 0 1 2-2z"/><path d="M17 21v-7a1 1 0 0 0-1-1H8a1 1 0 0 0-1 1v7"/><path d="M7 3v4a1 1 0 0 0 1 1h7"/>
</svg>
{% endmacro %}
{% macro check(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M20 6 9 17l-5-5"/>
</svg>
{% endmacro %}
{% macro refresh_cw(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"/><path d="M21 3v5h-5"/><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"/><path d="M8 16H3v5"/>
</svg>
{% endmacro %}
{% macro terminal(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<polyline points="4 17 10 11 4 5"/><line x1="12" x2="20" y1="19" y2="19"/>
</svg>
{% endmacro %}
{# Stats/navigation icons #}
{% macro server(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<rect width="20" height="8" x="2" y="2" rx="2" ry="2"/><rect width="20" height="8" x="2" y="14" rx="2" ry="2"/><line x1="6" x2="6.01" y1="6" y2="6"/><line x1="6" x2="6.01" y1="18" y2="18"/>
</svg>
{% endmacro %}
{% macro layers(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="m12.83 2.18a2 2 0 0 0-1.66 0L2.6 6.08a1 1 0 0 0 0 1.83l8.58 3.91a2 2 0 0 0 1.66 0l8.58-3.9a1 1 0 0 0 0-1.83Z"/><path d="m22 17.65-9.17 4.16a2 2 0 0 1-1.66 0L2 17.65"/><path d="m22 12.65-9.17 4.16a2 2 0 0 1-1.66 0L2 12.65"/>
</svg>
{% endmacro %}
{% macro circle_check(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<circle cx="12" cy="12" r="10"/><path d="m9 12 2 2 4-4"/>
</svg>
{% endmacro %}
{% macro circle_x(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<circle cx="12" cy="12" r="10"/><path d="m15 9-6 6"/><path d="m9 9 6 6"/>
</svg>
{% endmacro %}
{% macro home(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M15 21v-8a1 1 0 0 0-1-1h-4a1 1 0 0 0-1 1v8"/><path d="M3 10a2 2 0 0 1 .709-1.528l7-5.999a2 2 0 0 1 2.582 0l7 5.999A2 2 0 0 1 21 10v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2z"/>
</svg>
{% endmacro %}
{% macro box(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M21 8a2 2 0 0 0-1-1.73l-7-4a2 2 0 0 0-2 0l-7 4A2 2 0 0 0 3 8v8a2 2 0 0 0 1 1.73l7 4a2 2 0 0 0 2 0l7-4A2 2 0 0 0 21 16Z"/><path d="m3.3 7 8.7 5 8.7-5"/><path d="M12 22V12"/>
</svg>
{% endmacro %}
{# Section icons #}
{% macro settings(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M12.22 2h-.44a2 2 0 0 0-2 2v.18a2 2 0 0 1-1 1.73l-.43.25a2 2 0 0 1-2 0l-.15-.08a2 2 0 0 0-2.73.73l-.22.38a2 2 0 0 0 .73 2.73l.15.1a2 2 0 0 1 1 1.72v.51a2 2 0 0 1-1 1.74l-.15.09a2 2 0 0 0-.73 2.73l.22.38a2 2 0 0 0 2.73.73l.15-.08a2 2 0 0 1 2 0l.43.25a2 2 0 0 1 1 1.73V20a2 2 0 0 0 2 2h.44a2 2 0 0 0 2-2v-.18a2 2 0 0 1 1-1.73l.43-.25a2 2 0 0 1 2 0l.15.08a2 2 0 0 0 2.73-.73l.22-.39a2 2 0 0 0-.73-2.73l-.15-.08a2 2 0 0 1-1-1.74v-.5a2 2 0 0 1 1-1.74l.15-.09a2 2 0 0 0 .73-2.73l-.22-.38a2 2 0 0 0-2.73-.73l-.15.08a2 2 0 0 1-2 0l-.43-.25a2 2 0 0 1-1-1.73V4a2 2 0 0 0-2-2z"/><circle cx="12" cy="12" r="3"/>
</svg>
{% endmacro %}
{% macro file_code(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M10 12.5 8 15l2 2.5"/><path d="m14 12.5 2 2.5-2 2.5"/><path d="M14 2v4a2 2 0 0 0 2 2h4"/><path d="M15 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V7z"/>
</svg>
{% endmacro %}
{% macro database(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<ellipse cx="12" cy="5" rx="9" ry="3"/><path d="M3 5V19A9 3 0 0 0 21 19V5"/><path d="M3 12A9 3 0 0 0 21 12"/>
</svg>
{% endmacro %}
{% macro search(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<circle cx="11" cy="11" r="8"/><path d="m21 21-4.3-4.3"/>
</svg>
{% endmacro %}
{% macro alert_triangle(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="m21.73 18-8-14a2 2 0 0 0-3.48 0l-8 14A2 2 0 0 0 4 21h16a2 2 0 0 0 1.73-3"/><path d="M12 9v4"/><path d="M12 17h.01"/>
</svg>
{% endmacro %}


@@ -0,0 +1,38 @@
{% from "partials/components.html" import collapse %}
<div id="pending-operations">
{% if orphaned or migrations or not_started %}
{% call collapse("Pending Operations", id="pending-collapse", checked=expanded|default(true)) %}
{% if orphaned %}
<h4 class="font-semibold mt-2 mb-1">Orphaned Services (will be stopped)</h4>
<ul class="list-disc list-inside mb-4">
{% for svc, host in orphaned.items() %}
<li><a href="/service/{{ svc }}" class="badge badge-warning hover:badge-primary">{{ svc }}</a> on {{ host }}</li>
{% endfor %}
</ul>
{% endif %}
{% if migrations %}
<h4 class="font-semibold mt-2 mb-1">Services Needing Migration</h4>
<ul class="list-disc list-inside mb-4">
{% for svc in migrations %}
<li><a href="/service/{{ svc }}" class="badge badge-info hover:badge-primary">{{ svc }}</a></li>
{% endfor %}
</ul>
{% endif %}
{% if not_started %}
<h4 class="font-semibold mt-2 mb-1">Services Not Started</h4>
<ul class="menu menu-horizontal bg-base-200 rounded-box mb-2">
{% for svc in not_started | sort %}
<li><a href="/service/{{ svc }}">{{ svc }}</a></li>
{% endfor %}
</ul>
{% endif %}
{% endcall %}
{% else %}
<div role="alert" class="alert alert-success mb-4">
<svg xmlns="http://www.w3.org/2000/svg" class="stroke-current shrink-0 h-6 w-6" fill="none" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9 12l2 2 4-4m6 2a9 9 0 11-18 0 9 9 0 0118 0z" /></svg>
<span>All services are in sync with configuration.</span>
</div>
{% endif %}
</div>


@@ -0,0 +1,41 @@
{% from "partials/components.html" import collapse %}
{% from "partials/icons.html" import layers, search %}
<div id="services-by-host">
{% call collapse("Services by Host", id="services-by-host-collapse", checked=expanded|default(true), icon=layers()) %}
<div class="flex flex-wrap gap-2 mb-4 items-center">
<label class="input input-sm input-bordered flex items-center gap-2 bg-base-200">
{{ search() }}<input type="text" id="sbh-filter" class="w-32" placeholder="Filter..." onkeyup="sbhFilter()" />
</label>
<select id="sbh-host-select" class="select select-sm select-bordered bg-base-200" onchange="sbhFilter()">
<option value="">All hosts</option>
{% for h in services_by_host.keys() | sort %}<option value="{{ h }}">{{ h }}</option>{% endfor %}
</select>
</div>
{% for host_name, host_services in services_by_host.items() | sort %}
<div class="sbh-group" data-h="{{ host_name }}">
<h4 class="font-semibold mt-3 mb-1">{{ host_name }}{% if host_name in hosts %}<code class="text-xs ml-2 opacity-60">{{ hosts[host_name].address }}</code>{% endif %}</h4>
<ul class="menu menu-horizontal bg-base-200 rounded-box mb-2 flex-wrap">
{% for svc in host_services | sort %}<li data-s="{{ svc | lower }}"><a href="/service/{{ svc }}">{{ svc }}</a></li>{% endfor %}
</ul>
</div>
{% else %}
<p class="text-base-content/60 italic">No services currently running.</p>
{% endfor %}
<script>
function sbhFilter() {
const q = (document.getElementById('sbh-filter')?.value || '').toLowerCase();
const h = document.getElementById('sbh-host-select')?.value || '';
document.querySelectorAll('.sbh-group').forEach(g => {
if (h && g.dataset.h !== h) { g.hidden = true; return; }
let n = 0;
g.querySelectorAll('li[data-s]').forEach(li => {
const show = !q || li.dataset.s.includes(q);
li.hidden = !show;
if (show) n++;
});
g.hidden = !n;
});
}
</script>
{% endcall %}
</div>


@@ -0,0 +1,45 @@
{% from "partials/icons.html" import home, search %}
<!-- Dashboard Link -->
<div class="mb-4">
<ul class="menu" hx-boost="true" hx-target="#main-content" hx-select="#main-content" hx-swap="outerHTML">
<li><a href="/" class="font-semibold">{{ home() }} Dashboard</a></li>
</ul>
</div>
<!-- Services Section -->
<div class="mb-4">
<h4 class="text-xs uppercase tracking-wide text-base-content/60 px-3 py-1">Services <span class="opacity-50" id="sidebar-count">({{ services | length }})</span></h4>
<div class="px-2 mb-2 flex flex-col gap-1">
<label class="input input-xs input-bordered flex items-center gap-2 bg-base-200">
{{ search(14) }}<input type="text" id="sidebar-filter" placeholder="Filter..." onkeyup="sidebarFilter()" />
</label>
<select id="sidebar-host-select" class="select select-xs select-bordered bg-base-200 w-full" onchange="sidebarFilter()">
<option value="">All hosts</option>
{% for h in hosts %}<option value="{{ h }}">{{ h }}</option>{% endfor %}
</select>
</div>
<ul class="menu menu-sm" id="sidebar-services" hx-boost="true" hx-target="#main-content" hx-select="#main-content" hx-swap="outerHTML">
{% for service in services %}
<li data-svc="{{ service | lower }}" data-h="{{ service_hosts.get(service, '') }}">
<a href="/service/{{ service }}" class="flex items-center gap-2">
{% if service in state %}<span class="status status-success" title="In state file"></span>
{% else %}<span class="status status-neutral" title="Not in state file"></span>{% endif %}
{{ service }}
</a>
</li>
{% endfor %}
</ul>
</div>
<script>
function sidebarFilter() {
const q = (document.getElementById('sidebar-filter')?.value || '').toLowerCase();
const h = document.getElementById('sidebar-host-select')?.value || '';
let n = 0;
document.querySelectorAll('#sidebar-services li').forEach(li => {
const show = (!q || li.dataset.svc.includes(q)) && (!h || !li.dataset.h || li.dataset.h === h);
li.hidden = !show;
if (show) n++;
});
document.getElementById('sidebar-count').textContent = '(' + n + ')';
}
</script>


@@ -0,0 +1,8 @@
{% from "partials/components.html" import stat_card %}
{% from "partials/icons.html" import server, layers, circle_check, circle_x %}
<div id="stats-cards" class="grid grid-cols-2 md:grid-cols-4 gap-4 mb-6">
{{ stat_card("Hosts", hosts | length, icon=server()) }}
{{ stat_card("Services", services | length, icon=layers()) }}
{{ stat_card("Running", running_count, "success", circle_check()) }}
{{ stat_card("Stopped", stopped_count, icon=circle_x()) }}
</div>


@@ -0,0 +1,14 @@
{% from "partials/icons.html" import terminal %}
<!-- Shared Terminal Component -->
<div class="collapse collapse-arrow bg-base-100 shadow mb-4" id="terminal-collapse">
<input type="checkbox" id="terminal-toggle" />
<div class="collapse-title font-medium flex items-center gap-2">
{{ terminal() }} Terminal Output
<span id="terminal-spinner" class="loading loading-spinner loading-sm hidden"></span>
</div>
<div class="collapse-content">
<div id="terminal-container" class="bg-[#1a1a2e] rounded-lg h-[300px] border border-white/10 resize-y overflow-hidden">
<div id="terminal-output" class="h-full"></div>
</div>
</div>
</div>


@@ -0,0 +1,67 @@
{% extends "base.html" %}
{% from "partials/components.html" import collapse, action_btn %}
{% from "partials/icons.html" import play, square, rotate_cw, download, cloud_download, file_text, save, file_code, terminal, settings %}
{% block title %}{{ name }} - Compose Farm{% endblock %}
{% block content %}
<div class="max-w-5xl">
<div class="mb-6">
<h1 class="text-3xl font-bold rainbow-hover">{{ name }}</h1>
<div class="flex flex-wrap items-center gap-2 mt-2">
{% if current_host %}
<span class="badge badge-success">Running on {{ current_host }}</span>
{% else %}
<span class="badge badge-neutral">Not running</span>
{% endif %}
<span class="badge badge-outline">{{ hosts | join(', ') }}</span>
</div>
</div>
<!-- Action Buttons -->
<div class="flex flex-wrap gap-2 mb-6">
<!-- Lifecycle -->
{{ action_btn("Up", "/api/service/" ~ name ~ "/up", "primary", "Start service (docker compose up -d)", play()) }}
{{ action_btn("Down", "/api/service/" ~ name ~ "/down", "outline", "Stop service (docker compose down)", square()) }}
{{ action_btn("Restart", "/api/service/" ~ name ~ "/restart", "secondary", "Restart service (down + up)", rotate_cw()) }}
{{ action_btn("Update", "/api/service/" ~ name ~ "/update", "accent", "Update to latest (pull + build + down + up)", download()) }}
<div class="divider divider-horizontal mx-0"></div>
<!-- Other -->
{{ action_btn("Pull", "/api/service/" ~ name ~ "/pull", "outline", "Pull latest images (no restart)", cloud_download()) }}
{{ action_btn("Logs", "/api/service/" ~ name ~ "/logs", "outline", "Show recent logs", file_text()) }}
<button id="save-btn" class="btn btn-outline">{{ save() }} Save All</button>
</div>
{% call collapse("Compose File", badge=compose_path, icon=file_code()) %}
<div class="editor-wrapper yaml-wrapper">
<div id="compose-editor" class="yaml-editor" data-content="{{ compose_content | e }}" data-save-url="/api/service/{{ name }}/compose"></div>
</div>
{% endcall %}
{% call collapse(".env File", badge=env_path, icon=settings()) %}
<div class="editor-wrapper env-wrapper">
<div id="env-editor" class="env-editor" data-content="{{ env_content | e }}" data-save-url="/api/service/{{ name }}/env"></div>
</div>
{% endcall %}
{% include "partials/terminal.html" %}
<!-- Exec Terminal -->
{% if current_host %}
{% call collapse("Container Shell", id="exec-collapse", checked=True, icon=terminal()) %}
<div id="containers-list" class="mb-4"
hx-get="/api/service/{{ name }}/containers"
hx-trigger="load"
hx-target="this"
hx-select="unset"
hx-swap="innerHTML">
<span class="loading loading-spinner loading-sm"></span> Loading containers...
</div>
<div id="exec-terminal-container" class="bg-[#1a1a2e] rounded-lg h-[400px] border border-white/10 hidden">
<div id="exec-terminal" class="h-full"></div>
</div>
{% endcall %}
{% endif %}
</div>
{% endblock %}

236 src/compose_farm/web/ws.py Normal file
@@ -0,0 +1,236 @@
"""WebSocket handler for terminal streaming."""
from __future__ import annotations
import asyncio
import contextlib
import fcntl
import json
import os
import pty
import signal
import struct
import termios
from typing import TYPE_CHECKING, Any
import asyncssh
from fastapi import APIRouter, WebSocket, WebSocketDisconnect
from compose_farm.executor import is_local
from compose_farm.web.deps import get_config
from compose_farm.web.streaming import CRLF, DIM, GREEN, RED, RESET, tasks
if TYPE_CHECKING:
from compose_farm.config import Host
router = APIRouter()
def _parse_resize(msg: str) -> tuple[int, int] | None:
"""Parse a resize message, return (cols, rows) or None if not a resize."""
try:
data = json.loads(msg)
if data.get("type") == "resize":
return int(data["cols"]), int(data["rows"])
except (json.JSONDecodeError, KeyError, TypeError, ValueError):
pass
return None
def _resize_pty(
fd: int, cols: int, rows: int, proc: asyncio.subprocess.Process | None = None
) -> None:
"""Resize a local PTY and send SIGWINCH to the process."""
winsize = struct.pack("HHHH", rows, cols, 0, 0)
fcntl.ioctl(fd, termios.TIOCSWINSZ, winsize)
# Explicitly send SIGWINCH so docker exec forwards it to the container
if proc and proc.pid:
os.kill(proc.pid, signal.SIGWINCH)
async def _bridge_websocket_to_fd(
websocket: WebSocket,
master_fd: int,
proc: asyncio.subprocess.Process,
) -> None:
"""Bridge WebSocket to a local PTY file descriptor."""
    loop = asyncio.get_running_loop()
async def read_output() -> None:
while proc.returncode is None:
try:
data = await loop.run_in_executor(None, lambda: os.read(master_fd, 4096))
except BlockingIOError:
await asyncio.sleep(0.01)
continue
except OSError:
break
if not data:
break
await websocket.send_text(data.decode("utf-8", errors="replace"))
read_task = asyncio.create_task(read_output())
try:
while proc.returncode is None:
try:
msg = await asyncio.wait_for(websocket.receive_text(), timeout=0.1)
except TimeoutError:
continue
if size := _parse_resize(msg):
_resize_pty(master_fd, *size, proc)
else:
os.write(master_fd, msg.encode())
finally:
read_task.cancel()
os.close(master_fd)
if proc.returncode is None:
proc.terminate()
async def _bridge_websocket_to_ssh(
websocket: WebSocket,
proc: Any, # asyncssh.SSHClientProcess
) -> None:
"""Bridge WebSocket to an SSH process with PTY."""
assert proc.stdout is not None
assert proc.stdin is not None
async def read_stdout() -> None:
while proc.returncode is None:
data = await proc.stdout.read(4096)
if not data:
break
text = data if isinstance(data, str) else data.decode()
await websocket.send_text(text)
read_task = asyncio.create_task(read_stdout())
try:
while proc.returncode is None:
try:
msg = await asyncio.wait_for(websocket.receive_text(), timeout=0.1)
except TimeoutError:
continue
if size := _parse_resize(msg):
proc.change_terminal_size(*size)
else:
proc.stdin.write(msg)
finally:
read_task.cancel()
proc.terminate()
async def _run_local_exec(websocket: WebSocket, exec_cmd: str) -> None:
"""Run docker exec locally with PTY."""
master_fd, slave_fd = pty.openpty()
proc = await asyncio.create_subprocess_shell(
exec_cmd,
stdin=slave_fd,
stdout=slave_fd,
stderr=slave_fd,
close_fds=True,
)
os.close(slave_fd)
# Set non-blocking
flags = fcntl.fcntl(master_fd, fcntl.F_GETFL)
fcntl.fcntl(master_fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
await _bridge_websocket_to_fd(websocket, master_fd, proc)
async def _run_remote_exec(websocket: WebSocket, host: Host, exec_cmd: str) -> None:
"""Run docker exec on remote host via SSH with PTY."""
async with asyncssh.connect(
host.address,
port=host.port,
username=host.user,
known_hosts=None,
) as conn:
proc: asyncssh.SSHClientProcess[Any] = await conn.create_process(
exec_cmd,
term_type="xterm-256color",
term_size=(80, 24),
)
async with proc:
await _bridge_websocket_to_ssh(websocket, proc)
async def _run_exec_session(
websocket: WebSocket,
container: str,
host_name: str,
) -> None:
"""Run an interactive docker exec session over WebSocket."""
config = get_config()
host = config.hosts.get(host_name)
if not host:
await websocket.send_text(f"{RED}Host '{host_name}' not found{RESET}{CRLF}")
return
exec_cmd = f"docker exec -it {container} /bin/sh -c 'command -v bash >/dev/null && exec bash || exec sh'"
if is_local(host):
await _run_local_exec(websocket, exec_cmd)
else:
await _run_remote_exec(websocket, host, exec_cmd)
@router.websocket("/ws/exec/{service}/{container}/{host}")
async def exec_websocket(
websocket: WebSocket,
service: str, # noqa: ARG001
container: str,
host: str,
) -> None:
"""WebSocket endpoint for interactive container exec."""
await websocket.accept()
try:
await websocket.send_text(f"{DIM}[Connecting to {container} on {host}...]{RESET}{CRLF}")
await _run_exec_session(websocket, container, host)
await websocket.send_text(f"{CRLF}{DIM}[Disconnected]{RESET}{CRLF}")
except WebSocketDisconnect:
pass
except Exception as e:
with contextlib.suppress(Exception):
await websocket.send_text(f"{RED}Error: {e}{RESET}{CRLF}")
finally:
with contextlib.suppress(Exception):
await websocket.close()
@router.websocket("/ws/terminal/{task_id}")
async def terminal_websocket(websocket: WebSocket, task_id: str) -> None:
"""WebSocket endpoint for terminal streaming."""
await websocket.accept()
if task_id not in tasks:
await websocket.send_text(f"{RED}Error: Task not found{RESET}{CRLF}")
await websocket.close(code=4004)
return
task = tasks[task_id]
sent_count = 0
try:
while True:
# Send any new output
while sent_count < len(task["output"]):
await websocket.send_text(task["output"][sent_count])
sent_count += 1
if task["status"] in ("completed", "failed"):
status = "[Done]" if task["status"] == "completed" else "[Failed]"
color = GREEN if task["status"] == "completed" else RED
await websocket.send_text(f"{CRLF}{color}{status}{RESET}{CRLF}")
await websocket.close()
break
await asyncio.sleep(0.05)
except WebSocketDisconnect:
pass
finally:
tasks.pop(task_id, None)
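For reference, the exec WebSocket above multiplexes control and data over a single text channel: any frame that parses as `{"type": "resize", ...}` JSON is treated as a terminal-resize request, and everything else is forwarded as raw keystrokes. A minimal sketch of that dispatch, mirroring `_parse_resize` with hypothetical sample messages:

```python
import json

def classify(msg: str) -> str:
    """Classify an incoming frame the way _parse_resize does:
    a valid resize control frame, or raw terminal input."""
    try:
        data = json.loads(msg)
        if data.get("type") == "resize":
            return f"resize to {int(data['cols'])}x{int(data['rows'])}"
    except (json.JSONDecodeError, KeyError, TypeError, ValueError, AttributeError):
        pass
    return "raw input"

resize_msg = json.dumps({"type": "resize", "cols": 120, "rows": 30})
print(classify(resize_msg))   # resize to 120x30
print(classify("ls -la\r"))   # raw input
```

A client such as xterm.js would send the resize frame from its `onResize` callback and plain text from `onData`; the server never needs a framing protocol beyond "parses as resize JSON or not".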

426 tests/test_cli_lifecycle.py Normal file
@@ -0,0 +1,426 @@
"""Tests for CLI lifecycle commands (apply, down --orphaned)."""
from pathlib import Path
from unittest.mock import patch
import pytest
import typer
from compose_farm.cli.lifecycle import apply, down
from compose_farm.config import Config, Host
from compose_farm.executor import CommandResult
def _make_config(tmp_path: Path, services: dict[str, str] | None = None) -> Config:
"""Create a minimal config for testing."""
compose_dir = tmp_path / "compose"
compose_dir.mkdir()
svc_dict = services or {"svc1": "host1", "svc2": "host2"}
for svc in svc_dict:
svc_dir = compose_dir / svc
svc_dir.mkdir()
(svc_dir / "docker-compose.yml").write_text("services: {}\n")
config_path = tmp_path / "compose-farm.yaml"
config_path.write_text("")
return Config(
compose_dir=compose_dir,
hosts={"host1": Host(address="localhost"), "host2": Host(address="localhost")},
services=svc_dict,
config_path=config_path,
)
def _make_result(service: str, success: bool = True) -> CommandResult:
"""Create a command result."""
return CommandResult(
service=service,
exit_code=0 if success else 1,
success=success,
stdout="",
stderr="",
)
class TestApplyCommand:
"""Tests for the apply command."""
def test_apply_nothing_to_do(self, tmp_path: Path, capsys: pytest.CaptureFixture[str]) -> None:
"""When no migrations, orphans, or missing services, prints success message."""
cfg = _make_config(tmp_path)
with (
patch("compose_farm.cli.lifecycle.load_config_or_exit", return_value=cfg),
patch("compose_farm.cli.lifecycle.get_orphaned_services", return_value={}),
patch("compose_farm.cli.lifecycle.get_services_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_services_not_in_state", return_value=[]),
):
apply(dry_run=False, no_orphans=False, full=False, config=None)
captured = capsys.readouterr()
assert "Nothing to apply" in captured.out
def test_apply_dry_run_shows_preview(
self, tmp_path: Path, capsys: pytest.CaptureFixture[str]
) -> None:
"""Dry run shows what would be done without executing."""
cfg = _make_config(tmp_path)
with (
patch("compose_farm.cli.lifecycle.load_config_or_exit", return_value=cfg),
patch(
"compose_farm.cli.lifecycle.get_orphaned_services",
return_value={"old-svc": "host1"},
),
patch(
"compose_farm.cli.lifecycle.get_services_needing_migration",
return_value=["svc1"],
),
patch("compose_farm.cli.lifecycle.get_services_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle.get_service_host", return_value="host1"),
patch("compose_farm.cli.lifecycle.stop_orphaned_services") as mock_stop,
patch("compose_farm.cli.lifecycle.up_services") as mock_up,
):
apply(dry_run=True, no_orphans=False, full=False, config=None)
captured = capsys.readouterr()
assert "Services to migrate" in captured.out
assert "svc1" in captured.out
assert "Orphaned services to stop" in captured.out
assert "old-svc" in captured.out
assert "dry-run" in captured.out
# Should not have called the actual operations
mock_stop.assert_not_called()
mock_up.assert_not_called()
def test_apply_executes_migrations(self, tmp_path: Path) -> None:
"""Apply runs migrations when services need migration."""
cfg = _make_config(tmp_path)
mock_results = [_make_result("svc1")]
with (
patch("compose_farm.cli.lifecycle.load_config_or_exit", return_value=cfg),
patch("compose_farm.cli.lifecycle.get_orphaned_services", return_value={}),
patch(
"compose_farm.cli.lifecycle.get_services_needing_migration",
return_value=["svc1"],
),
patch("compose_farm.cli.lifecycle.get_services_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle.get_service_host", return_value="host1"),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
),
patch("compose_farm.cli.lifecycle.up_services") as mock_up,
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=False, config=None)
mock_up.assert_called_once()
call_args = mock_up.call_args
assert call_args[0][1] == ["svc1"] # services list
def test_apply_executes_orphan_cleanup(self, tmp_path: Path) -> None:
"""Apply stops orphaned services."""
cfg = _make_config(tmp_path)
mock_results = [_make_result("old-svc@host1")]
with (
patch("compose_farm.cli.lifecycle.load_config_or_exit", return_value=cfg),
patch(
"compose_farm.cli.lifecycle.get_orphaned_services",
return_value={"old-svc": "host1"},
),
patch("compose_farm.cli.lifecycle.get_services_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_services_not_in_state", return_value=[]),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
),
patch("compose_farm.cli.lifecycle.stop_orphaned_services") as mock_stop,
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=False, config=None)
mock_stop.assert_called_once_with(cfg)
def test_apply_no_orphans_skips_orphan_cleanup(
self, tmp_path: Path, capsys: pytest.CaptureFixture[str]
) -> None:
"""--no-orphans flag skips orphan cleanup."""
cfg = _make_config(tmp_path)
mock_results = [_make_result("svc1")]
with (
patch("compose_farm.cli.lifecycle.load_config_or_exit", return_value=cfg),
patch(
"compose_farm.cli.lifecycle.get_orphaned_services",
return_value={"old-svc": "host1"},
),
patch(
"compose_farm.cli.lifecycle.get_services_needing_migration",
return_value=["svc1"],
),
patch("compose_farm.cli.lifecycle.get_services_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle.get_service_host", return_value="host1"),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
),
patch("compose_farm.cli.lifecycle.up_services") as mock_up,
patch("compose_farm.cli.lifecycle.stop_orphaned_services") as mock_stop,
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=True, full=False, config=None)
# Should run migrations but not orphan cleanup
mock_up.assert_called_once()
mock_stop.assert_not_called()
# Orphans should not appear in output
captured = capsys.readouterr()
assert "old-svc" not in captured.out
def test_apply_no_orphans_nothing_to_do(
self, tmp_path: Path, capsys: pytest.CaptureFixture[str]
) -> None:
"""--no-orphans with only orphans means nothing to do."""
cfg = _make_config(tmp_path)
with (
patch("compose_farm.cli.lifecycle.load_config_or_exit", return_value=cfg),
patch(
"compose_farm.cli.lifecycle.get_orphaned_services",
return_value={"old-svc": "host1"},
),
patch("compose_farm.cli.lifecycle.get_services_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_services_not_in_state", return_value=[]),
):
apply(dry_run=False, no_orphans=True, full=False, config=None)
captured = capsys.readouterr()
assert "Nothing to apply" in captured.out
def test_apply_starts_missing_services(self, tmp_path: Path) -> None:
"""Apply starts services that are in config but not in state."""
cfg = _make_config(tmp_path)
mock_results = [_make_result("svc1")]
with (
patch("compose_farm.cli.lifecycle.load_config_or_exit", return_value=cfg),
patch("compose_farm.cli.lifecycle.get_orphaned_services", return_value={}),
patch("compose_farm.cli.lifecycle.get_services_needing_migration", return_value=[]),
patch(
"compose_farm.cli.lifecycle.get_services_not_in_state",
return_value=["svc1"],
),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
),
patch("compose_farm.cli.lifecycle.up_services") as mock_up,
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=False, config=None)
mock_up.assert_called_once()
call_args = mock_up.call_args
assert call_args[0][1] == ["svc1"]
def test_apply_dry_run_shows_missing_services(
self, tmp_path: Path, capsys: pytest.CaptureFixture[str]
) -> None:
"""Dry run shows services that would be started."""
cfg = _make_config(tmp_path)
with (
patch("compose_farm.cli.lifecycle.load_config_or_exit", return_value=cfg),
patch("compose_farm.cli.lifecycle.get_orphaned_services", return_value={}),
patch("compose_farm.cli.lifecycle.get_services_needing_migration", return_value=[]),
patch(
"compose_farm.cli.lifecycle.get_services_not_in_state",
return_value=["svc1"],
),
):
apply(dry_run=True, no_orphans=False, full=False, config=None)
captured = capsys.readouterr()
assert "Services to start" in captured.out
assert "svc1" in captured.out
assert "dry-run" in captured.out
def test_apply_full_refreshes_all_services(self, tmp_path: Path) -> None:
"""--full runs up on all services to pick up config changes."""
cfg = _make_config(tmp_path)
mock_results = [_make_result("svc1"), _make_result("svc2")]
with (
patch("compose_farm.cli.lifecycle.load_config_or_exit", return_value=cfg),
patch("compose_farm.cli.lifecycle.get_orphaned_services", return_value={}),
patch("compose_farm.cli.lifecycle.get_services_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_services_not_in_state", return_value=[]),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
),
patch("compose_farm.cli.lifecycle.up_services") as mock_up,
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=True, config=None)
mock_up.assert_called_once()
call_args = mock_up.call_args
# Should refresh all services in config
assert set(call_args[0][1]) == {"svc1", "svc2"}
def test_apply_full_dry_run_shows_refresh(
self, tmp_path: Path, capsys: pytest.CaptureFixture[str]
) -> None:
"""--full --dry-run shows services that would be refreshed."""
cfg = _make_config(tmp_path)
with (
patch("compose_farm.cli.lifecycle.load_config_or_exit", return_value=cfg),
patch("compose_farm.cli.lifecycle.get_orphaned_services", return_value={}),
patch("compose_farm.cli.lifecycle.get_services_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_services_not_in_state", return_value=[]),
):
apply(dry_run=True, no_orphans=False, full=True, config=None)
captured = capsys.readouterr()
assert "Services to refresh" in captured.out
assert "svc1" in captured.out
assert "svc2" in captured.out
assert "dry-run" in captured.out
def test_apply_full_excludes_already_handled_services(self, tmp_path: Path) -> None:
"""--full doesn't double-process services that are migrating or starting."""
cfg = _make_config(tmp_path, {"svc1": "host1", "svc2": "host2", "svc3": "host1"})
mock_results = [_make_result("svc1"), _make_result("svc3")]
with (
patch("compose_farm.cli.lifecycle.load_config_or_exit", return_value=cfg),
patch("compose_farm.cli.lifecycle.get_orphaned_services", return_value={}),
patch(
"compose_farm.cli.lifecycle.get_services_needing_migration",
return_value=["svc1"],
),
patch(
"compose_farm.cli.lifecycle.get_services_not_in_state",
return_value=["svc2"],
),
patch("compose_farm.cli.lifecycle.get_service_host", return_value="host2"),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
),
patch("compose_farm.cli.lifecycle.up_services") as mock_up,
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=True, config=None)
# up_services should be called 3 times: migrate, start, refresh
assert mock_up.call_count == 3
# Get the third call (refresh) and check it only has svc3
refresh_call = mock_up.call_args_list[2]
assert refresh_call[0][1] == ["svc3"]
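The exclusion behavior this test pins down is simple set logic: `--full` refreshes every configured service except those already handled by the migrate and start phases. A minimal sketch of that computation (hypothetical helper name; the real logic lives inside `apply` in `compose_farm.cli.lifecycle`):

```python
def services_to_refresh(
    all_services: list[str],
    migrating: list[str],
    starting: list[str],
) -> list[str]:
    """Services --full should refresh: everything in config except those
    already handled by the migrate or start phases (hypothetical sketch)."""
    handled = set(migrating) | set(starting)
    # Preserve config order while excluding already-handled services
    return [svc for svc in all_services if svc not in handled]
```

With `svc1` migrating and `svc2` missing from state, only `svc3` remains for the refresh pass, which is exactly what the third `up_services` call above asserts.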
class TestDownOrphaned:
"""Tests for down --orphaned flag."""
def test_down_orphaned_no_orphans(
self, tmp_path: Path, capsys: pytest.CaptureFixture[str]
) -> None:
"""When no orphans exist, prints success message."""
cfg = _make_config(tmp_path)
with (
patch("compose_farm.cli.lifecycle.load_config_or_exit", return_value=cfg),
patch("compose_farm.cli.lifecycle.get_orphaned_services", return_value={}),
):
down(
services=None,
all_services=False,
orphaned=True,
host=None,
config=None,
)
captured = capsys.readouterr()
assert "No orphaned services to stop" in captured.out
def test_down_orphaned_stops_services(self, tmp_path: Path) -> None:
"""--orphaned stops orphaned services."""
cfg = _make_config(tmp_path)
mock_results = [_make_result("old-svc@host1")]
with (
patch("compose_farm.cli.lifecycle.load_config_or_exit", return_value=cfg),
patch(
"compose_farm.cli.lifecycle.get_orphaned_services",
return_value={"old-svc": "host1"},
),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
),
patch("compose_farm.cli.lifecycle.stop_orphaned_services") as mock_stop,
patch("compose_farm.cli.lifecycle.report_results"),
):
down(
services=None,
all_services=False,
orphaned=True,
host=None,
config=None,
)
mock_stop.assert_called_once_with(cfg)
def test_down_orphaned_with_services_errors(self) -> None:
"""--orphaned cannot be combined with service arguments."""
with pytest.raises(typer.Exit) as exc_info:
down(
services=["svc1"],
all_services=False,
orphaned=True,
host=None,
config=None,
)
assert exc_info.value.exit_code == 1
def test_down_orphaned_with_all_errors(self) -> None:
"""--orphaned cannot be combined with --all."""
with pytest.raises(typer.Exit) as exc_info:
down(
services=None,
all_services=True,
orphaned=True,
host=None,
config=None,
)
assert exc_info.value.exit_code == 1
def test_down_orphaned_with_host_errors(self) -> None:
"""--orphaned cannot be combined with --host."""
with pytest.raises(typer.Exit) as exc_info:
down(
services=None,
all_services=False,
orphaned=True,
host="host1",
config=None,
)
assert exc_info.value.exit_code == 1

tests/test_cli_startup.py Normal file (+58 lines)

@@ -0,0 +1,58 @@
"""Test CLI startup performance."""
from __future__ import annotations
import shutil
import subprocess
import sys
import time
# Thresholds in seconds, per OS
if sys.platform == "win32":
CLI_STARTUP_THRESHOLD = 2.0
elif sys.platform == "darwin":
CLI_STARTUP_THRESHOLD = 0.35
else: # Linux
CLI_STARTUP_THRESHOLD = 0.25
def test_cli_startup_time() -> None:
"""Verify CLI startup time stays within acceptable bounds.
This test ensures we don't accidentally introduce slow imports
that degrade the user experience.
"""
cf_path = shutil.which("cf")
assert cf_path is not None, "cf command not found in PATH"
# Run up to 6 times; pass early if any run comes in under the threshold

times: list[float] = []
for _ in range(6):
start = time.perf_counter()
result = subprocess.run(
[cf_path, "--help"],
check=False,
capture_output=True,
text=True,
)
elapsed = time.perf_counter() - start
times.append(elapsed)
# Verify the command succeeded
assert result.returncode == 0, f"CLI failed: {result.stderr}"
# Pass early if under threshold
if elapsed < CLI_STARTUP_THRESHOLD:
print(f"\nCLI startup: {elapsed:.3f}s (threshold: {CLI_STARTUP_THRESHOLD}s)")
return
# All attempts exceeded threshold
best_time = min(times)
msg = (
f"\nCLI startup times: {[f'{t:.3f}s' for t in times]}\n"
f"Best: {best_time:.3f}s, Threshold: {CLI_STARTUP_THRESHOLD}s"
)
print(msg)
err_msg = f"CLI startup too slow!\n{msg}\nCheck for slow imports."
raise AssertionError(err_msg)


@@ -7,7 +7,6 @@ import pytest
import yaml
from typer.testing import CliRunner
import compose_farm.cli.config as config_cmd_module
from compose_farm.cli import app
from compose_farm.cli.config import (
_generate_template,
@@ -70,16 +69,9 @@ class TestGetConfigFile:
) -> None:
monkeypatch.chdir(tmp_path)
monkeypatch.delenv("CF_CONFIG", raising=False)
# Set XDG_CONFIG_HOME to a nonexistent path - config_search_paths() will
# now return paths that don't exist
monkeypatch.setenv("XDG_CONFIG_HOME", str(tmp_path / "nonexistent"))
# Monkeypatch _CONFIG_PATHS to avoid finding existing files
monkeypatch.setattr(
config_cmd_module,
"_CONFIG_PATHS",
[
tmp_path / "compose-farm.yaml",
tmp_path / "nonexistent" / "compose-farm" / "compose-farm.yaml",
],
)
result = _get_config_file(None)
assert result is None

tests/test_operations.py Normal file (+111 lines)

@@ -0,0 +1,111 @@
"""Tests for operations module."""
from __future__ import annotations
import inspect
from pathlib import Path
from unittest.mock import patch
import pytest
from compose_farm.cli import lifecycle
from compose_farm.config import Config, Host
from compose_farm.executor import CommandResult
from compose_farm.operations import _migrate_service
@pytest.fixture
def basic_config(tmp_path: Path) -> Config:
"""Create a basic test config."""
compose_dir = tmp_path / "compose"
service_dir = compose_dir / "test-service"
service_dir.mkdir(parents=True)
(service_dir / "docker-compose.yml").write_text("services: {}")
return Config(
compose_dir=compose_dir,
hosts={
"host1": Host(address="localhost"),
"host2": Host(address="localhost"),
},
services={"test-service": "host2"},
)
class TestMigrationCommands:
"""Tests for migration command sequence."""
@pytest.fixture
def config(self, tmp_path: Path) -> Config:
"""Create a test config."""
compose_dir = tmp_path / "compose"
service_dir = compose_dir / "test-service"
service_dir.mkdir(parents=True)
(service_dir / "docker-compose.yml").write_text("services: {}")
return Config(
compose_dir=compose_dir,
hosts={
"host1": Host(address="localhost"),
"host2": Host(address="localhost"),
},
services={"test-service": "host2"},
)
async def test_migration_uses_pull_ignore_buildable(self, config: Config) -> None:
"""Migration should use 'pull --ignore-buildable' to skip buildable images."""
commands_called: list[str] = []
async def mock_run_compose_step(
cfg: Config,
service: str,
command: str,
*,
raw: bool,
host: str | None = None,
) -> CommandResult:
commands_called.append(command)
return CommandResult(
service=service,
exit_code=0,
success=True,
)
with patch(
"compose_farm.operations._run_compose_step",
side_effect=mock_run_compose_step,
):
await _migrate_service(
config,
"test-service",
current_host="host1",
target_host="host2",
prefix="[test]",
raw=False,
)
# Migration should call pull with --ignore-buildable, then build, then down
assert "pull --ignore-buildable" in commands_called
assert "build" in commands_called
assert "down" in commands_called
# pull should come before build
pull_idx = commands_called.index("pull --ignore-buildable")
build_idx = commands_called.index("build")
assert pull_idx < build_idx
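The ordering this test enforces can be stated directly: pull what can be pulled (skipping buildable images), build the rest, and only then tear down on the old host. A hypothetical outline of that sequence, mirroring the assertion above (the real steps live in `compose_farm.operations._migrate_service`):

```python
# Hypothetical outline of the migration command order the test expects.
# Each entry is (which host runs it, the compose subcommand).
MIGRATION_STEPS: list[tuple[str, str]] = [
    ("target", "pull --ignore-buildable"),  # fetch pullable images first
    ("target", "build"),                    # build images that cannot be pulled
    ("current", "down"),                    # stop on the old host last
]

def assert_pull_before_build(commands: list[str]) -> None:
    """Mirror of the ordering assertion in the test above."""
    assert commands.index("pull --ignore-buildable") < commands.index("build")
```

Preparing images on the target before running `down` keeps the service's downtime limited to the final stop/start window.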
class TestUpdateCommandSequence:
"""Tests for update command sequence."""
def test_update_command_sequence_includes_build(self) -> None:
"""Update command should use pull --ignore-buildable and build."""
# This is a static check of the command sequence in lifecycle.py
# The actual command sequence is defined in the update function
source = inspect.getsource(lifecycle.update)
# Verify the command sequence includes pull --ignore-buildable
assert "pull --ignore-buildable" in source
# Verify build is included
assert '"build"' in source or "'build'" in source
# Verify the sequence is pull, build, down, up
assert "down" in source
assert "up -d" in source


@@ -6,7 +6,9 @@ import pytest
from compose_farm.config import Config, Host
from compose_farm.state import (
get_orphaned_services,
get_service_host,
get_services_not_in_state,
load_state,
remove_service,
save_state,
@@ -130,3 +132,110 @@ class TestRemoveService:
result = load_state(config)
assert result["plex"] == "nas01"
class TestGetOrphanedServices:
"""Tests for get_orphaned_services function."""
def test_no_orphans(self, config: Config) -> None:
"""Returns empty dict when all services in state are in config."""
state_file = config.get_state_path()
state_file.write_text("deployed:\n plex: nas01\n")
result = get_orphaned_services(config)
assert result == {}
def test_finds_orphaned_service(self, config: Config) -> None:
"""Returns services in state but not in config."""
state_file = config.get_state_path()
state_file.write_text("deployed:\n plex: nas01\n jellyfin: nas02\n")
result = get_orphaned_services(config)
# plex is in config, jellyfin is not
assert result == {"jellyfin": "nas02"}
def test_finds_orphaned_multi_host_service(self, config: Config) -> None:
"""Returns multi-host orphaned services with host list."""
state_file = config.get_state_path()
state_file.write_text("deployed:\n plex: nas01\n dozzle:\n - nas01\n - nas02\n")
result = get_orphaned_services(config)
assert result == {"dozzle": ["nas01", "nas02"]}
def test_empty_state(self, config: Config) -> None:
"""Returns empty dict when state is empty."""
result = get_orphaned_services(config)
assert result == {}
def test_all_orphaned(self, tmp_path: Path) -> None:
"""Returns all services when none are in config."""
config_path = tmp_path / "compose-farm.yaml"
config_path.write_text("")
cfg = Config(
compose_dir=tmp_path / "compose",
hosts={"nas01": Host(address="192.168.1.10")},
services={}, # No services in config
config_path=config_path,
)
state_file = cfg.get_state_path()
state_file.write_text("deployed:\n plex: nas01\n jellyfin: nas02\n")
result = get_orphaned_services(cfg)
assert result == {"plex": "nas01", "jellyfin": "nas02"}
class TestGetServicesNotInState:
"""Tests for get_services_not_in_state function."""
def test_all_in_state(self, config: Config) -> None:
"""Returns empty list when all services are in state."""
state_file = config.get_state_path()
state_file.write_text("deployed:\n plex: nas01\n")
result = get_services_not_in_state(config)
assert result == []
def test_finds_missing_service(self, tmp_path: Path) -> None:
"""Returns services in config but not in state."""
config_path = tmp_path / "compose-farm.yaml"
config_path.write_text("")
cfg = Config(
compose_dir=tmp_path / "compose",
hosts={"nas01": Host(address="192.168.1.10")},
services={"plex": "nas01", "jellyfin": "nas01"},
config_path=config_path,
)
state_file = cfg.get_state_path()
state_file.write_text("deployed:\n plex: nas01\n")
result = get_services_not_in_state(cfg)
assert result == ["jellyfin"]
def test_empty_state(self, tmp_path: Path) -> None:
"""Returns all services when state is empty."""
config_path = tmp_path / "compose-farm.yaml"
config_path.write_text("")
cfg = Config(
compose_dir=tmp_path / "compose",
hosts={"nas01": Host(address="192.168.1.10")},
services={"plex": "nas01", "jellyfin": "nas01"},
config_path=config_path,
)
result = get_services_not_in_state(cfg)
assert set(result) == {"plex", "jellyfin"}
def test_empty_config(self, config: Config) -> None:
"""Returns empty list when config has no services."""
# config fixture has plex: nas01, but we need empty config
config_path = config.config_path
config_path.write_text("")
cfg = Config(
compose_dir=config.compose_dir,
hosts={"nas01": Host(address="192.168.1.10")},
services={},
config_path=config_path,
)
result = get_services_not_in_state(cfg)
assert result == []
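This is the mirror image of the orphan check: services declared in the config but absent from the deployed state, i.e. the ones `apply` still needs to start. A sketch consistent with the tests (the real `get_services_not_in_state` takes a `Config` and loads the state itself):

```python
def get_services_not_in_state_sketch(
    configured: list[str],
    deployed: set[str],
) -> list[str]:
    """Services declared in config but absent from the state file.

    Hypothetical sketch of the behavior the tests above expect.
    """
    return [svc for svc in configured if svc not in deployed]
```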

tests/web/__init__.py Normal file (+1 line)

@@ -0,0 +1 @@
"""Web UI tests."""

tests/web/conftest.py Normal file (+87 lines)

@@ -0,0 +1,87 @@
"""Fixtures for web UI tests."""
from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING
import pytest
if TYPE_CHECKING:
from compose_farm.config import Config
@pytest.fixture
def compose_dir(tmp_path: Path) -> Path:
"""Create a temporary compose directory with sample services."""
compose_path = tmp_path / "compose"
compose_path.mkdir()
# Create a sample service
plex_dir = compose_path / "plex"
plex_dir.mkdir()
(plex_dir / "compose.yaml").write_text("""
services:
plex:
image: plexinc/pms-docker
container_name: plex
ports:
- "32400:32400"
""")
(plex_dir / ".env").write_text("PLEX_CLAIM=claim-xxx\n")
# Create another service
sonarr_dir = compose_path / "sonarr"
sonarr_dir.mkdir()
(sonarr_dir / "compose.yaml").write_text("""
services:
sonarr:
image: linuxserver/sonarr
""")
return compose_path
@pytest.fixture
def config_file(tmp_path: Path, compose_dir: Path) -> Path:
"""Create a temporary config file and state file."""
config_path = tmp_path / "compose-farm.yaml"
config_path.write_text(f"""
compose_dir: {compose_dir}
hosts:
server-1:
address: 192.168.1.10
user: docker
server-2:
address: 192.168.1.11
services:
plex: server-1
sonarr: server-2
""")
# State file must be alongside config file
state_path = tmp_path / "compose-farm-state.yaml"
state_path.write_text("""
deployed:
plex: server-1
""")
return config_path
@pytest.fixture
def mock_config(config_file: Path, monkeypatch: pytest.MonkeyPatch) -> Config:
"""Patch get_config to return a test config."""
from compose_farm.config import load_config
from compose_farm.web import deps as web_deps
from compose_farm.web.routes import api as web_api
config = load_config(config_file)
# Patch in all modules that import get_config
monkeypatch.setattr(web_deps, "get_config", lambda: config)
monkeypatch.setattr(web_api, "get_config", lambda: config)
return config

tests/web/test_helpers.py Normal file (+106 lines)

@@ -0,0 +1,106 @@
"""Tests for web API helper functions."""
from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING
import pytest
from fastapi import HTTPException
if TYPE_CHECKING:
from compose_farm.config import Config
class TestValidateYaml:
"""Tests for _validate_yaml helper."""
def test_valid_yaml(self) -> None:
from compose_farm.web.routes.api import _validate_yaml
# Should not raise
_validate_yaml("key: value")
_validate_yaml("list:\n - item1\n - item2")
_validate_yaml("")
def test_invalid_yaml(self) -> None:
from compose_farm.web.routes.api import _validate_yaml
with pytest.raises(HTTPException) as exc_info:
_validate_yaml("key: [unclosed")
assert exc_info.value.status_code == 400
assert "Invalid YAML" in exc_info.value.detail
class TestGetServiceComposePath:
"""Tests for _get_service_compose_path helper."""
def test_service_found(self, mock_config: Config) -> None:
from compose_farm.web.routes.api import _get_service_compose_path
path = _get_service_compose_path("plex")
assert isinstance(path, Path)
assert path.name == "compose.yaml"
assert path.parent.name == "plex"
def test_service_not_found(self, mock_config: Config) -> None:
from compose_farm.web.routes.api import _get_service_compose_path
with pytest.raises(HTTPException) as exc_info:
_get_service_compose_path("nonexistent")
assert exc_info.value.status_code == 404
assert "not found" in exc_info.value.detail
class TestRenderContainers:
"""Tests for container template rendering."""
def test_render_running_container(self, mock_config: Config) -> None:
from compose_farm.web.routes.api import _render_containers
containers = [{"Name": "plex", "State": "running"}]
html = _render_containers("plex", "server-1", containers)
assert "badge-success" in html
assert "plex" in html
assert "initExecTerminal" in html
def test_render_unknown_state(self, mock_config: Config) -> None:
from compose_farm.web.routes.api import _render_containers
containers = [{"Name": "plex", "State": "unknown"}]
html = _render_containers("plex", "server-1", containers)
assert "loading-spinner" in html
def test_render_other_state(self, mock_config: Config) -> None:
from compose_farm.web.routes.api import _render_containers
containers = [{"Name": "plex", "State": "exited"}]
html = _render_containers("plex", "server-1", containers)
assert "badge-warning" in html
assert "exited" in html
def test_render_with_header(self, mock_config: Config) -> None:
from compose_farm.web.routes.api import _render_containers
containers = [{"Name": "plex", "State": "running"}]
html = _render_containers("plex", "server-1", containers, show_header=True)
assert "server-1" in html
assert "font-semibold" in html
def test_render_multiple_containers(self, mock_config: Config) -> None:
from compose_farm.web.routes.api import _render_containers
containers = [
{"Name": "app-web-1", "State": "running"},
{"Name": "app-db-1", "State": "running"},
]
html = _render_containers("app", "server-1", containers)
assert "app-web-1" in html
assert "app-db-1" in html

uv.lock generated (+987 lines, diff suppressed because it is too large)