Compare commits


345 Commits

Author SHA1 Message Date
Bas Nijholt
6fdb43e1e9 Add self-healing: detect and stop stray containers (#128)
* Add self-healing: detect and stop rogue containers

Adds the ability to detect and stop "rogue" containers - stacks running
on hosts they shouldn't be on according to the config.

Changes:
- `cf refresh`: Now scans ALL hosts and warns about rogues/duplicates
- `cf apply`: Stops rogue containers before migrations (new phase)
- New `--no-rogues` flag to skip rogue detection

Implementation:
- Add StackDiscoveryResult for full host scanning results
- Add discover_stack_on_all_hosts() to check all hosts in parallel
- Add stop_rogue_stacks() to stop containers on unauthorized hosts
- Update tests to include new no_rogues parameter

* Update README.md

* fix: Update refresh tests for _discover_stacks_full return type

The function now returns a tuple (discovered, rogues, duplicates)
for rogue/duplicate detection. Update test mocks accordingly.

* Rename "rogue" terminology to "stray" for consistency

Terminology update across the codebase:
- rogue_hosts -> stray_hosts
- is_rogue -> is_stray
- stop_rogue_stacks -> stop_stray_stacks
- _discover_rogues -> _discover_strays
- --no-rogues -> --no-strays
- _report_rogue_stacks -> _report_stray_stacks

"Stray" better complements "orphaned" (both evoke lost things)
while clearly indicating the stack is running somewhere it
shouldn't be.

* Update README.md

* Move asyncio import to top level

* Fix remaining rogue -> stray in docstrings and README

* Refactor: Extract shared helpers to reduce duplication

1. Extract _stop_stacks_on_hosts helper in operations.py
   - Shared by stop_orphaned_stacks and stop_stray_stacks
   - Reduces ~50 lines of duplicated code

2. Refactor _discover_strays to reuse _discover_stacks_full
   - Removes duplicate discovery logic from lifecycle.py
   - Calls management._discover_stacks_full and merges duplicates

* Add PR review prompt

* Fix typos in PR review prompt

* Move import to top level (no in-function imports)

* Update README.md

* Remove obvious comments
2025-12-22 10:22:09 -08:00
Bas Nijholt
620e797671 fix: Add entrypoint to create passwd entry for non-root users (#127) 2025-12-22 07:31:59 -08:00
Bas Nijholt
031a2af6f3 fix: Correct SSH key volume mount path in docker-compose.yml (#126) 2025-12-22 06:55:59 -08:00
Bas Nijholt
f69eed7721 docs(readme): position as Dockge for multi-host (#123)
* docs(readme): position as Dockge for multi-host

- Reference Dockge (which we've used) instead of Portainer
- Move Portainer mention to "Your files" bullet as contrast
- Link to Dockge repo

* docs(readme): add agentless bullet, link Dockge

- Add "Agentless" bullet highlighting SSH-only approach
- Link to Dockge as contrast (they require agents for multi-host)
- Update NOTE to focus on agentless, CLI-first positioning
2025-12-21 23:28:26 -08:00
Bas Nijholt
5a1fd4e29f docs(readme): add value propositions and fix image URL (#122)
- Add bullet points highlighting key benefits after NOTE block
- Update NOTE to position as file-based Portainer alternative
- Fix hero image URL from http to https
- Add alt text to hero image for accessibility
2025-12-21 23:17:18 -08:00
Bas Nijholt
26dea691ca feat(docker): make container user configurable via CF_UID/CF_GID (#118)
* feat(docker): make container user configurable via CF_UID/CF_GID

Add support for running compose-farm containers as a non-root user
to preserve file ownership on mounted volumes. This prevents files
like compose-farm-state.yaml and web UI config edits from being
owned by root on NFS mounts.

Set CF_UID, CF_GID, and CF_HOME environment variables to run as
your user. Defaults to root (0:0) for backwards compatibility.

* docs: document non-root user configuration for Docker

- Add CF_UID/CF_GID/CF_HOME documentation to README and getting-started
- Add XDG config volume mount for backup/log persistence across restarts
- Update SSH volume examples to use CF_HOME variable

* fix(docker): allow non-root user access and add USER env for SSH

- Add `chmod 755 /root` to Dockerfile so non-root users can access
  the installed tool at /root/.local/share/uv/tools/compose-farm
- Add USER environment variable to docker-compose.yml for SSH to work
  when running as non-root (UID not in /etc/passwd)
- Update docs to include CF_USER in the setup instructions
- Support building from local source with SETUPTOOLS_SCM_PRETEND_VERSION

* fix(docker): revert local build changes, keep only chmod 755 /root

Remove the local source build logic that was added during testing.
The only required change is `chmod 755 /root` to allow non-root users
to access the installed tool.

* docs: add .envrc.example for direnv users

* docs: mention direnv option in README and getting-started
2025-12-21 22:19:40 -08:00
Bas Nijholt
56d64bfe7a fix(web): exclude orphaned stacks from running count (#119)
The dashboard showed "stopped: -1" when orphaned stacks existed because
running_count included stacks that were in state but had been removed from config.
Now only stacks that are both in config AND deployed are counted as running.
2025-12-21 21:59:05 -08:00
Bas Nijholt
5ddbdcdf9e docs(demos): update recordings and fix demo scripts (#115) 2025-12-21 19:17:16 -08:00
Bas Nijholt
dd16becad1 feat(web): add Repo command to command palette (#117)
Adds a new "Repo" command that opens the GitHub repository in a new tab,
similar to the existing "Docs" command.
2025-12-21 15:25:04 -08:00
Bas Nijholt
df683a223f fix(web): wait for terminal expand transition before scrolling (#116)
- Extracts generic `expandCollapse(toggle, scrollTarget)` function for reuse with any DaisyUI collapse
- Fixes scrolling when clicking action buttons (pull, logs, etc.) while terminal is collapsed - now waits for CSS transition before scrolling
- Fixes shell via command palette - expands Container Shell and scrolls to actual terminal (not collapse header)
- Fixes scroll position not resetting when navigating via command palette
2025-12-21 15:17:59 -08:00
Bas Nijholt
fdb00e7655 refactor(web): store backups in XDG config directory (#113)
* refactor(web): store backups in XDG config directory

Move file backups from `.backups/` alongside the file to
`~/.config/compose-farm/backups/` (respecting XDG_CONFIG_HOME).
The original file path is mirrored inside to avoid name collisions.

* docs(web): document automatic backup location

* refactor(paths): extract shared config_dir() function

* fix(web): use path anchor for Windows compatibility
2025-12-21 15:08:15 -08:00
Bas Nijholt
90657a025f docs: fix missing CLI options and improve docs-review prompt (#114)
* docs: fix missing CLI options and improve docs-review prompt

- Add missing --config option docs for cf ssh setup and cf ssh status
- Enhance .prompts/docs-review.md with:
  - Quick reference table mapping docs to source files
  - Runnable bash commands for quick checks
  - Specific code paths instead of vague references
  - Web UI documentation section
  - Common gotchas section
  - Ready-to-apply fix template format
  - Post-fix verification steps

* docs: add self-review step to docs-review prompt

* docs: make docs-review prompt discovery-based and less brittle

- Use discovery commands (git ls-files, grep, find) instead of hardcoded lists
- Add 'What This Prompt Is For' section clarifying manual vs automated checks
- Simplify checklist to 10 sections focused on judgment-based review
- Remove hardcoded file paths in favor of search patterns
- Make commands dynamically discover CLI structure

* docs: simplify docs-review prompt, avoid duplicating automated checks

- Remove checks already handled by CI (README help output, command table)
- Focus on judgment-based review: accuracy, completeness, clarity
- Reduce from 270 lines to 117 lines
- Highlight that docs/commands.md options tables are manually maintained
2025-12-21 15:07:37 -08:00
Bas Nijholt
7ae8ea0229 feat(web): add tooltips to sidebar header icons (#111)
Use daisyUI tooltip component with bottom positioning for the docs,
GitHub, and theme switcher icons in the sidebar header, matching the
tooltip style used elsewhere in the web UI.
2025-12-21 14:16:57 -08:00
Bas Nijholt
612242eea9 feat(web): add Open Website button and command for stacks with Traefik labels (#110)
* feat(web): add Open Website button and command for stacks with Traefik labels

Parse traefik.http.routers.*.rule labels to extract Host() rules and
display "Open Website" button(s) on stack pages. Also adds the command
to the command palette.

- Add extract_website_urls() function to compose.py
- Determine scheme (http/https) from entrypoint (websecure/web)
- Prefer HTTPS when same host has both protocols
- Support environment variable interpolation
- Add external_link icon from Lucide
- Add comprehensive tests for URL extraction

* refactor: move extract_website_urls to traefik.py and reuse existing parsing

Instead of duplicating the Traefik label parsing logic in compose.py,
reuse generate_traefik_config() with check_all=True to get the parsed
router configuration, then extract Host() rules from it.

- Move extract_website_urls from compose.py to traefik.py
- Reuse generate_traefik_config for label parsing
- Move tests from test_compose.py to test_traefik.py
- Update import in pages.py

* test: add comprehensive tests for extract_website_urls

Cover real-world patterns found in stacks:
- Multiple Host() in one rule with || operator
- Host() combined with PathPrefix (e.g., && PathPrefix(`/api`))
- Multiple services in one stack (like arr stack)
- Labels in list format (- key=value)
- No entrypoints (defaults to http)
- Multiple entrypoints including websecure
2025-12-21 14:16:46 -08:00
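A rough sketch of the idea behind extract_website_urls() in #110 above - not the actual implementation, which reuses generate_traefik_config(); the helper names and the simplified label handling below are assumptions:

```python
import re

def hosts_from_rule(rule: str) -> list[str]:
    """Pull hostnames out of a rule like Host(`a.example.com`) || Host(`b.example.com`)."""
    return re.findall(r"Host\(`([^`]+)`\)", rule)

def website_urls(labels: dict[str, str]) -> list[str]:
    """Build URLs from traefik router labels, preferring https for websecure entrypoints."""
    urls: list[str] = []
    for key, rule in labels.items():
        match = re.fullmatch(r"traefik\.http\.routers\.([^.]+)\.rule", key)
        if not match:
            continue
        router = match.group(1)
        entrypoints = labels.get(f"traefik.http.routers.{router}.entrypoints", "web")
        scheme = "https" if "websecure" in entrypoints else "http"
        urls.extend(f"{scheme}://{host}" for host in hosts_from_rule(rule))
    return urls

print(website_urls({
    "traefik.http.routers.app.rule": "Host(`app.example.com`) && PathPrefix(`/api`)",
    "traefik.http.routers.app.entrypoints": "websecure",
}))  # ['https://app.example.com']
```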
Bas Nijholt
ea650bff8a fix: Skip buildable images in pull command (#109)
* fix: Skip buildable images in pull command

Add --ignore-buildable flag to pull command, matching the behavior
of the update command. This prevents pull from failing when a stack
contains services with local build directives (no remote image).

* test: Fix flaky command palette close detection

Use state="hidden" instead of :not([open]) selector when waiting
for the command palette to close. The old approach failed because
wait_for_selector defaults to waiting for visibility, but a closed
<dialog> element is hidden by design.
2025-12-21 10:28:10 -08:00
renovate[bot]
140bca4fd6 ⬆️ Update actions/upload-pages-artifact action to v4 (#108)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-21 10:27:58 -08:00
renovate[bot]
6dad6be8da ⬆️ Update actions/checkout action to v6 (#107)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-21 10:27:51 -08:00
Bas Nijholt
d7f931e301 feat(web): improve container row layout for mobile (#106)
- Stack container name/status above buttons on mobile screens
- Use card-like background for visual separation
- Buttons align right on desktop, full width on mobile
2025-12-21 10:27:36 -08:00
Bas Nijholt
471936439e feat(web): add Edit Config command to command palette (#105)
- Added "Edit Config" command to the command palette (Cmd/Ctrl+K)
- Navigates to console page, focuses the Monaco editor, and scrolls to it
- Uses `#editor` URL hash to signal editor focus instead of terminal focus
2025-12-21 01:24:03 -08:00
Bas Nijholt
36e4bef46d feat(web): add shell command to command palette for services (#104)
- Add "Shell: {service}" commands to the command palette when on a stack page
- Allows quick shell access to containers via `Cmd+K` → type "shell" → select service
- Add `get_container_name()` helper in `compose.py` for consistent container name resolution (used by both api.py and pages.py)
2025-12-21 01:23:54 -08:00
Bas Nijholt
2cac0bf263 feat(web): add Pull All and Update All to command palette (#103)
The dashboard buttons for Pull All and Update All are now also
available in the command palette (Cmd/Ctrl+K) for keyboard access.
2025-12-21 01:00:57 -08:00
Bas Nijholt
3d07cbdff0 fix(web): show stderr in console shell sessions (#102)
- Remove `2>/dev/null` from shell command that was suppressing all stderr output
- Command errors like "command not found" are now properly displayed to users
2025-12-21 00:50:58 -08:00
Bas Nijholt
0f67c17281 test: parallel execution and timeout constants (#101)
- Enable `-n auto` for all test commands in justfile (parallel execution)
- Add redis stack to test fixtures (missing stack was causing test failure)
- Replace hardcoded timeouts with constants: `TIMEOUT` (10s) and `SHORT_TIMEOUT` (5s)
- Rename `test-unit` → `test-cli` and `test-browser` → `test-web`
- Skip CLI startup test when running in parallel mode (`-n auto`)
- Update test assertions for 5 stacks (was 4)
2025-12-21 00:48:52 -08:00
Bas Nijholt
bd22a1a55e fix: Reject unknown keys in config with Pydantic strict mode (#100)
Add extra="forbid" to Host and Config models so typos like
`username` instead of `user` raise an error instead of being
silently ignored. Also simplify _parse_hosts to pass dicts
directly to Pydantic instead of manual field extraction.
2025-12-21 00:19:18 -08:00
Bas Nijholt
cc54e89b33 feat: add justfile for development commands (#99)
- Adds a justfile with common development commands for easier workflow
- Commands: `install`, `test`, `test-unit`, `test-browser`, `lint`, `web`, `kill-web`, `doc`, `kill-doc`, `clean`
2025-12-20 23:24:30 -08:00
Bas Nijholt
f71e5cffd6 feat(web): add service commands to command palette with fuzzy matching (#95)
- Add service-level commands to the command palette when viewing a stack detail page
- Services are extracted from the compose file and exposed via a `data-services` attribute
- Commands are grouped by action (all Logs together, all Pull together, etc.) with services sorted alphabetically
- Service commands appear with a teal indicator to distinguish from stack-level commands (green)
- Implement word-boundary fuzzy matching for better filtering UX:
  - `rest plex` matches `Restart: plex-server`
  - `server` matches `plex-server` (hyphenated names split into words)
  - Query words must match the START of command words (prevents false positives like `r ba` matching `Logs: bazarr`)

Available service commands:
- `Restart: <service>` - Restart a specific service
- `Pull: <service>` - Pull image for a service
- `Logs: <service>` - View logs for a service
- `Stop: <service>` - Stop a service
- `Up: <service>` - Start a service
2025-12-20 23:23:53 -08:00
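A minimal sketch of the word-boundary matching described in #95 above. The real code lives in the web UI's JavaScript; this Python version with assumed helper names only illustrates the rule that every query word must match the start of a command word:

```python
import re

def split_words(text: str) -> list[str]:
    # Split on whitespace, ':' and '-' so "plex-server" yields ["plex", "server"].
    return [w for w in re.split(r"[\s:\-]+", text.lower()) if w]

def word_boundary_match(query: str, command: str) -> bool:
    """True when every query word matches the START of some command word."""
    words = split_words(command)
    return all(any(w.startswith(q) for w in words) for q in split_words(query))

assert word_boundary_match("rest plex", "Restart: plex-server")
assert word_boundary_match("server", "plex-server")
assert not word_boundary_match("r ba", "Logs: bazarr")  # no command word starts with "r"
```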
Bas Nijholt
0e32729763 fix(web): add tooltips to console page buttons (#98)
* fix(web): add tooltips to console page buttons

Add descriptive tooltips to Connect, Open, and Save buttons on the
console page, matching the tooltip style used on dashboard and stack
pages.

* fix(web): show platform-appropriate keyboard shortcuts

Detect Mac vs other platforms and display ⌘ or Ctrl accordingly for
keyboard shortcuts. The command palette FAB dynamically updates, and
tooltips use ⌘/Ctrl notation to cover both platforms.
2025-12-20 22:59:50 -08:00
Bas Nijholt
b0b501fa98 docs: update example services in documentation and tests (#96) 2025-12-20 22:45:13 -08:00
Bas Nijholt
7e00596046 docs: fix inaccuracies and add missing command documentation (#97)
- Add missing --service option docs for up, stop, restart, update, pull, ps, logs
- Add stop command to command overview table
- Add compose passthrough command documentation
- Add --all option and [STACKS] argument to refresh command
- Fix ServiceConfig reference to Host in architecture.md
- Update lifecycle.py description to include stop and compose commands
- Fix uv installation syntax in web-ui.md (--with web -> [web])
- Add missing cf ssh --help and cf web --help output blocks in README
2025-12-20 22:37:26 -08:00
Bas Nijholt
d1e4d9b05c docs: update documentation for new CLI features (#94) 2025-12-20 21:36:47 -08:00
Bas Nijholt
3fbae630f9 feat(cli): add compose passthrough command (#93)
Adds `cf compose <stack> <command> [args...]` to run any docker compose
command on a stack without needing dedicated wrappers. Useful for
commands like top, images, exec, run, config, etc.

Multi-host stacks require --host to specify which host to run on.
2025-12-20 21:26:05 -08:00
Bas Nijholt
3e3c919714 fix(web): service action buttons fixes and additions (#92)
* fix(web): use --service flag in service action endpoint

* feat(web): add Start button to service actions

* feat(web): add Pull button to service actions
2025-12-20 21:11:44 -08:00
Bas Nijholt
59b797a89d feat: add service-level commands with --service flag (#91)
Add support for targeting specific services within a stack:

CLI:
- New `stop` command for stopping services without removing containers
- Add `--service` / `-s` flag to: up, pull, restart, update, stop, logs, ps
- Service flag requires exactly one stack to be specified

Web API:
- Add `stop` to allowed stack commands
- New endpoint: POST /api/stack/{name}/service/{service}/{command}
- Supports: logs, pull, restart, up, stop

Web UI:
- Add action buttons to container rows: logs, restart, stop, shell
- Add rotate_ccw and scroll_text icons for new buttons
2025-12-20 20:56:48 -08:00
Bas Nijholt
7caf006e07 feat(web): add Rich logging for better error debugging (#90)
Add structured logging with Rich tracebacks to web UI components:
- Configure RichHandler in app.py for formatted output
- Log SSH/file operation failures in API routes with full tracebacks
- Log WebSocket exec/shell errors for connection issues
- Add warning logs for failed container state queries

Errors now show detailed tracebacks in container logs instead of
just returning 500 status codes.
2025-12-20 20:47:34 -08:00
Bas Nijholt
45040b75f1 feat(web): add Pull All and Update All buttons to dashboard (#89)
- Add "Pull All" and "Update All" buttons to dashboard for bulk operations
- Switch from native `title` attribute to DaisyUI tooltips for instant, styled tooltips
- Add tooltips to save buttons clarifying what they save
- Add tooltip to container shell button
- Fix tooltip z-index so they appear above sidebar
- Fix tooltip clipping by removing `overflow-y-auto` from main content
- Position container shell tooltip to the left to avoid clipping
2025-12-20 20:41:26 -08:00
Bas Nijholt
fa1c5c1044 docs: update theme to indigo with system preference support (#88)
Switch from teal to indigo primary color to match Zensical docs theme.
Add system preference detection and orange accent for dark mode.
2025-12-20 20:18:28 -08:00
Bas Nijholt
67e832f687 docs: clarify config file locations and update install URL (#86) 2025-12-20 20:12:06 -08:00
Bas Nijholt
da986fab6a fix: improve command palette theme filtering (#87)
- Normalize spaces after colons so "theme:dark" matches "theme: dark"
- Also handles multiple spaces like "theme:  dark"
2025-12-20 20:03:16 -08:00
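The normalization in #87 amounts to collapsing whitespace after colons before filtering. A small illustration (Python here for brevity; the actual filter is in the web UI's JavaScript, and the function name is hypothetical):

```python
import re

def normalize_query(query: str) -> str:
    """Make "theme:dark" and "theme:  dark" both filter like "theme: dark"."""
    return re.sub(r":\s*", ": ", query.strip())

assert normalize_query("theme:dark") == normalize_query("theme:  dark") == "theme: dark"
```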
Bas Nijholt
5dd6e2ca05 fix: improve theme picker usability in command palette (#85) 2025-12-20 20:00:05 -08:00
Bas Nijholt
16435065de fix: video autoplay for Safari and Chrome with instant navigation (#84) 2025-12-20 19:49:05 -08:00
Bas Nijholt
5921b5e405 docs: update web-workflow demo recording (#83) 2025-12-20 18:09:24 -08:00
Bas Nijholt
f0cd85b5f5 fix: prevent terminal reconnection to wrong page after navigation (#81) 2025-12-20 16:41:28 -08:00
Bas Nijholt
fe95443733 fix: Safari video autoplay on first page load (#82) 2025-12-20 16:41:04 -08:00
Bas Nijholt
8df9288156 docs: add Quick Demo GIFs to README (#80)
* docs: add Quick Demo GIFs to README

Add the same CLI and Web UI demo GIFs that appear on the docs homepage.

* docs: add Table of Contents header
2025-12-20 16:20:42 -08:00
Bas Nijholt
124bde7575 docs: improve Web UI workflow demo with comprehensive showcase (#78) 2025-12-20 16:14:33 -08:00
Bas Nijholt
350947ad12 Rename services to stacks terminology (#79) 2025-12-20 16:00:41 -08:00
Bas Nijholt
bb019bcae6 feat: add ty type checker alongside mypy (#77)
Add Astral's ty type checker (written in Rust, 10-100x faster than mypy)
as a second type checking layer. Both run in pre-commit and CI.

Fixed type issues caught by ty:
- config.py: explicit Host constructor to avoid dict unpacking issues
- executor.py: wrap subprocess.run in closure for asyncio.to_thread
- api.py: use getattr for Jinja TemplateModule macro access
- test files: fix playwright driver_path tuple handling, pytest rootpath typing
2025-12-20 15:43:51 -08:00
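The executor.py fix in #77 is only mentioned in passing; the general pattern it refers to looks roughly like this (a sketch, assuming the closure exists to give asyncio.to_thread a zero-argument, precisely typed callable):

```python
import asyncio
import subprocess

async def run_command(cmd: list[str]) -> subprocess.CompletedProcess[str]:
    # Wrapping subprocess.run in a closure gives asyncio.to_thread a
    # zero-argument callable with an exact return type, which strict
    # type checkers (mypy, ty) can verify.
    def _run() -> subprocess.CompletedProcess[str]:
        return subprocess.run(cmd, capture_output=True, text=True, check=False)

    return await asyncio.to_thread(_run)

print(asyncio.run(run_command(["echo", "hello"])).stdout)
```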
Bas Nijholt
6d50f90344 ci: run docs build on PRs (#76)
- Add pull_request trigger with same path filters
- Skip Pages setup and artifact upload on PRs (only build)
- Skip deploy job entirely on PRs
- Update concurrency to include ref for parallel PR builds
2025-12-20 15:16:25 -08:00
Bas Nijholt
474b7ca044 docs: add early Web UI links to homepage (#75) 2025-12-20 15:08:04 -08:00
Bas Nijholt
7555d8443b fix(docs): add Web UI to sidebar and fix video paths (#74)
- Add Web UI page to navigation in zensical.toml
- Use absolute paths for video assets in web-ui.md
- Add web-workflow demo video to homepage Quick Demo section
2025-12-20 15:05:10 -08:00
Bas Nijholt
de46c3ff0f feat: add web UI demo recording system (#69) 2025-12-20 15:00:03 -08:00
Bas Nijholt
fff064cf03 Clarify single-host vs multi-host docs (#73) 2025-12-20 14:15:43 -08:00
Bas Nijholt
187f83b61d feat: add service arguments to refresh command (#70) 2025-12-20 13:14:09 -08:00
Bas Nijholt
d2b9113b9d feat(web): add documentation link to sidebar and command palette (#72)
Adds a docs icon next to GitHub in the header and a "Docs" command
in the command palette (⌘K) that opens https://compose-farm.nijho.lt/
in a new tab.
2025-12-20 13:13:20 -08:00
Bas Nijholt
be77eb7c75 fix(docs): use absolute paths for video assets (#71)
Relative paths like `assets/install.webm` resolved incorrectly on
subpages (e.g., /getting-started/assets/install.webm instead of
/assets/install.webm), causing 404 errors for videos on those pages.
2025-12-20 12:51:34 -08:00
Bas Nijholt
81e1a482f4 fix(docs): use Nerd Font icon for emoji in quickstart demo (#68) 2025-12-20 12:36:29 -08:00
Bas Nijholt
435b014251 docs: move demo up and add Dockge comparison (#67) 2025-12-20 10:28:59 -08:00
Bas Nijholt
58585ac73c docs: fix inaccuracies and add missing documentation (#66) 2025-12-20 10:27:15 -08:00
Bas Nijholt
5a848ec416 fix(docs): fix video display on GitHub Pages (#65) 2025-12-20 10:14:51 -08:00
Bas Nijholt
b4595cb117 docs: add comprehensive Zensical-based documentation (#62) 2025-12-20 09:57:59 -08:00
Bas Nijholt
5f1c31b780 feat: show docker compose command before execution (#64) 2025-12-20 00:35:35 -08:00
Bas Nijholt
9974f87976 feat: add bootstrap script for one-liner installation (#63)
Adds a curl-able install script that installs uv (if needed) and
compose-farm as a uv tool. Updated README with the one-liner.
2025-12-19 23:54:00 -08:00
Bas Nijholt
8b16484ce2 feat(web): add theme switcher with 35 DaisyUI themes (#61) 2025-12-19 22:33:10 -08:00
Bas Nijholt
d75f9cca64 refactor(web): organize app.js into logical sections (#60)
Reorganize JavaScript into 8 clear sections for better maintainability:
- Constants (ANSI, theme, language map)
- State (all globals in one place)
- Utilities (createWebSocket, whenXtermReady, etc.)
- Terminal (all xterm.js functions together)
- Editor (all Monaco functions together)
- UI Helpers (dashboard refresh, sidebar filter)
- Command Palette (self-contained IIFE)
- Initialization (entry points and event handlers)

No functional changes - only reordering and section headers added.
2025-12-19 20:23:39 -08:00
Bas Nijholt
7ccb0734a2 refactor(web): consolidate JS patterns and use icon macros (#58) 2025-12-19 14:55:31 -08:00
Bas Nijholt
61a845fad8 test: add comprehensive browser tests for HTMX/JS functionality (#59) 2025-12-19 14:27:00 -08:00
Bas Nijholt
e7efae0153 refactor: remove dead code and reduce duplication (#57)
- Delete unused add_service_to_host/remove_service_from_host from state.py
  (42 lines of dead code never called anywhere)

- Extract _stream_output_lines helper in executor.py to deduplicate
  identical read_stream functions in _run_local_command and _run_ssh_command

- Simplify unique-list logic in compose.py using dict.fromkeys()
  instead of manual seen/unique set/list pattern

Total: -67 lines
2025-12-18 23:56:49 -08:00
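The dict.fromkeys() trick mentioned in #57 is an order-preserving de-duplication one-liner:

```python
# dict keys preserve insertion order, so this replaces the manual
# "seen set + unique list" pattern in a single expression.
items = ["traefik", "plex", "traefik", "redis", "plex"]
assert list(dict.fromkeys(items)) == ["traefik", "plex", "redis"]
```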
Bas Nijholt
b4ebe15dd1 refactor: simplify codebase with reduced abstractions (#56)
- Remove dead code: `run_host_operation` in cli/common.py (never called)
- Inline `_report_*` helpers in lifecycle.py (each called once)
- Merge `validate_host` into `validate_hosts` with flexible str|list param
- Merge `_report_no_config_found` and `_report_config_path_not_exists`
  into single `_report_missing_config` function
- Simplify `_get_editor` from 18 lines to 6 using walrus operator
- Extract `COMPOSE_FILENAMES` constant to avoid duplication in config.py
- Extract `_stream_subprocess` helper to reduce duplication in streaming.py

Net reduction: ~130 lines of code with no functionality changes.
2025-12-18 23:45:34 -08:00
Bas Nijholt
9f55dcdd6e refactor(web): Modernize JavaScript with cleaner patterns (#55) 2025-12-18 23:02:07 -08:00
Bas Nijholt
0694bbe56d feat(web): Show (local) label in sidebar host selector (#52) 2025-12-18 21:59:41 -08:00
Bas Nijholt
3045948d0a feat(web): Show (local) label in sidebar host selector (#50)
* feat(web): Show (local) label in sidebar host selector

Add local host detection to sidebar partial, matching the console page
behavior where the current machine is labeled with "(local)" in the
host dropdown.

* refactor: Extract get_local_host() helper to deps.py

DRY up the local host detection logic that was duplicated between
console and sidebar_partial routes.

* revert
2025-12-18 20:12:29 -08:00
Bas Nijholt
1fa17b4e07 feat(web): Auto-refresh dashboard and clean up HTMX inheritance (#49) 2025-12-18 20:07:31 -08:00
Bas Nijholt
cd25a1914c fix(web): Show exit code for stopped containers instead of loading spinner (#51)
One-shot containers (like CLI tools) were showing a perpetual loading
spinner because they weren't in `docker compose ps` output. Now we:
- Use `ps -a` to include stopped/exited containers
- Display exit code: neutral badge for clean exit (0), error badge for failures
- Show "created" state for containers that were never started
2025-12-18 20:03:12 -08:00
Bas Nijholt
a71200b199 feat(test): Add Playwright browser tests for web UI (#48) 2025-12-18 18:26:23 -08:00
Bas Nijholt
967d68b14a revert: Remove mobile rainbow glow adjustments
Reverts #46 and #47. The reduced background-size caused a green
tint at rest. The improvement in animation visibility wasn't
worth the trade-off.
2025-12-18 16:16:31 -08:00
Bas Nijholt
b7614aeab7 fix(web): Adjust mobile rainbow glow to avoid green edge (#47)
500% background-size showed too much of the gradient at rest,
revealing green (#bfff80) at the button edge. 650% shows ~15%
of the gradient, landing safely on white while still improving
color visibility during animation.
2025-12-18 16:11:58 -08:00
Bas Nijholt
d931784935 fix(web): Make rainbow glow animation more visible on mobile (#46)
The 900% background-size meant only ~11% of the gradient was visible
at any time. On smaller screens, the rainbow colors would flash by
too quickly during the intro animation, appearing mostly white.

Use a CSS variable for background-size and reduce it to 500% on
mobile (<768px), showing ~20% of the gradient for a more visible
rainbow effect.
2025-12-18 15:53:03 -08:00
Bas Nijholt
4755065229 feat(web): Add collapsible blocks to console terminal and editor (#44) 2025-12-18 15:52:36 -08:00
Bas Nijholt
e86bbf7681 fix(web): Make task-not-found message more general (#45) 2025-12-18 15:37:08 -08:00
Bas Nijholt
be136eb916 fix(web): Show friendlier message when task not found after restart
After a self-update, the browser tries to reconnect to the old task_id
but the in-memory task registry is empty (new container). Show a
helpful message instead of a scary "Error" message.
2025-12-18 15:34:07 -08:00
Bas Nijholt
78a223878f fix(web): Use nohup for self-updates to survive container death (#41) 2025-12-18 15:29:37 -08:00
Bas Nijholt
f5be23d626 fix(web): Ensure URL updates after HTMX navigation in command palette (#43)
* fix(web): Ensure URL updates after HTMX navigation in command palette

Use history.pushState() after HTMX swap completes to ensure
window.location.pathname is correct when rebuilding commands.

* docs: Add rule about unchecked checklists in PR descriptions
2025-12-18 15:22:10 -08:00
Bas Nijholt
3bdc483c2a feat(web): Add rainbow glow effect to command palette button (#42) 2025-12-18 15:13:49 -08:00
Bas Nijholt
3a3591a0f7 feat(web): Allow reconnection to running tasks after navigation (#38) 2025-12-18 14:27:06 -08:00
Bas Nijholt
7f8ea49d7f fix(web): Enable TTY for self-update SSH to show progress bars (#40)
* fix(web): Add PATH for self-update SSH command

Non-interactive SSH sessions don't source profile files, so `cf` isn't
found when installed in ~/.local/bin. Prepend common install locations
to PATH before running the remote command.

* fix(web): Enable TTY for self-update SSH to show progress bars
2025-12-18 14:19:21 -08:00
Bas Nijholt
1e67bde96c fix(web): Add PATH for self-update SSH command (#39)
Non-interactive SSH sessions don't source profile files, so `cf` isn't
found when installed in ~/.local/bin. Prepend common install locations
to PATH before running the remote command.
2025-12-18 14:17:03 -08:00
Bas Nijholt
d8353dbb7e fix: Skip socket paths in preflight volume checks (#37)
Socket paths like SSH_AUTH_SOCK are machine-local and shouldn't be
validated on remote hosts during preflight checks.
2025-12-18 13:59:06 -08:00
Bas Nijholt
2e6146a94b feat(ps): Add service filtering to ps command (#33) 2025-12-18 13:31:18 -08:00
Bas Nijholt
87849a8161 fix(web): Run self-updates via SSH to survive container restart (#35) 2025-12-18 13:10:30 -08:00
Bas Nijholt
c8bf792a9a refactor: Store SSH keys in subdirectory for cleaner volume mounting (#36)
* refactor: Store SSH keys in subdirectory for cleaner volume mounting

Change SSH key location from ~/.ssh/compose-farm (file) to
~/.ssh/compose-farm/id_ed25519 (file in directory).

This allows docker-compose to mount just the compose-farm directory
to /root/.ssh without exposing all host SSH keys to the container.

Also make host path the default option in docker-compose.yml with
clearer comments about the two options.

* docs: Update README for new SSH key directory structure

* docs: Clarify cf ssh setup must run inside container
2025-12-18 13:07:41 -08:00
Bas Nijholt
d37295fbee feat(web): Add distinct color for Dashboard/Console in command palette (#34)
Give Dashboard and Console a purple accent to visually distinguish
them from service navigation items in the Command K palette.
2025-12-18 12:38:28 -08:00
Bas Nijholt
266f541d35 fix(web): Auto-scroll Command K palette when navigating with arrow keys (#32)
When using arrow keys to navigate through the command palette list,
items outside the visible area now scroll into view automatically.
2025-12-18 12:30:29 -08:00
Bas Nijholt
aabdd550ba feat(cli): Add progress bar to ssh status host connectivity check (#31)
Use run_parallel_with_progress for visual feedback during host checks.
Results are now sorted alphabetically for consistent output.

Also adds code style rule to CLAUDE.md about keeping imports at top level.
2025-12-18 12:21:47 -08:00
Bas Nijholt
8ff60a1e3e refactor(ssh): Unify ssh_status to use run_command like check command (#29) 2025-12-18 12:17:47 -08:00
Bas Nijholt
2497bd727a feat(web): Navigate to dashboard for Apply/Refresh from command palette (#28)
When triggering Apply or Refresh from the command palette on a non-dashboard
page, navigate to the dashboard first and then execute the action, opening
the terminal output.
2025-12-18 12:12:50 -08:00
Bas Nijholt
e37d9d87ba feat(web): Add icons to Command K palette items (#27) 2025-12-18 12:08:55 -08:00
Bas Nijholt
80a1906d90 fix(web): Fix console page not initializing on HTMX navigation (#26)
* fix(web): Fix console page not initializing on HTMX navigation

Move inline script from {% block scripts %} to inside {% block content %}
so it's included in HTMX swaps. The script block was outside #main-content,
so hx-select="#main-content" was discarding it during navigation.

Also wrap script in IIFE to prevent let re-declaration errors when
navigating back to the console page.

* refactor(web): Simplify console script using var instead of IIFE
2025-12-18 12:05:30 -08:00
Bas Nijholt
282de12336 feat(cli): Add ssh subcommand for SSH key management (#22) 2025-12-18 11:58:33 -08:00
Bas Nijholt
2c5308aea3 fix(web): Add Console navigation to Command K palette (#25)
The Command K menu was missing an option to navigate to the Console page,
even though it's available in the sidebar.
2025-12-18 11:55:30 -08:00
Bas Nijholt
5057202938 refactor: DRY cleanup and message consistency (#24) 2025-12-18 11:45:32 -08:00
Bas Nijholt
5e1b9987dd fix(web): Set PTY as controlling terminal for local shell sessions (#23)
Local shell sessions weren't receiving SIGINT (Ctrl+C) because the PTY
wasn't set as the controlling terminal. Add preexec_fn that calls
setsid() and TIOCSCTTY to properly set up the terminal.
2025-12-18 11:12:37 -08:00
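A minimal sketch of the pattern #23 describes - spawning a shell on a PTY and making that PTY the controlling terminal via setsid() and TIOCSCTTY so Ctrl+C reaches the child as SIGINT. This is the generic recipe, not the actual web UI code:

```python
import fcntl
import os
import pty
import subprocess
import termios

def spawn_shell() -> tuple[subprocess.Popen[bytes], int]:
    """Start a shell whose PTY is set up as its controlling terminal."""
    master_fd, slave_fd = pty.openpty()

    def make_controlling_tty() -> None:
        os.setsid()                            # new session, detached from our terminal
        fcntl.ioctl(0, termios.TIOCSCTTY, 0)   # make the PTY slave (now stdin) the controlling tty

    proc = subprocess.Popen(
        ["/bin/bash"],
        stdin=slave_fd, stdout=slave_fd, stderr=slave_fd,
        preexec_fn=make_controlling_tty,
    )
    os.close(slave_fd)
    return proc, master_fd  # writing b"\x03" to master_fd now delivers SIGINT to the shell
```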
Bas Nijholt
d9c26f7f2c Merge pull request #21 from basnijholt/refactor/dry-cleanup
refactor: DRY cleanup - consolidate duplicate code patterns
2025-12-18 11:12:24 -08:00
Bas Nijholt
adfcd4bb31 style: Capitalize "Hint:" consistently 2025-12-18 11:05:53 -08:00
Bas Nijholt
95f7d9c3cf style(cli): Unify "not found" message format with color highlighting
- Services use [cyan] highlighting consistently
- Hosts use [magenta] highlighting consistently
- All use the same "X not found in config" pattern
2025-12-18 11:05:05 -08:00
Bas Nijholt
4c1674cfd8 style(cli): Unify error message format with ✗ prefix
All CLI error messages now consistently use the [red]✗[/] prefix
pattern instead of wrapping the entire message in [red]...[/red].
2025-12-18 11:04:28 -08:00
Bas Nijholt
f65ca8420e fix(web): Filter empty hosts from services_by_host
Preserve original behavior where only hosts with running services are
shown in the dashboard, rather than all configured hosts.
2025-12-18 11:00:01 -08:00
Bas Nijholt
85aff2c271 refactor(state): Move group_services_by_host to state.py
Consolidate duplicate service grouping logic from monitoring.py and
pages.py into a shared function in state.py.
2025-12-18 10:55:53 -08:00
Bas Nijholt
61ca24bb8e refactor(cli): Remove unused get_description parameter
All callers used the same pattern (r[0]), so hardcode it in the helper
and remove the parameter entirely.
2025-12-18 10:54:12 -08:00
Bas Nijholt
ed36588358 refactor(cli): Add validate_host and validate_hosts helpers
Extract common host validation patterns into reusable helpers.
Also simplifies validate_host_for_service to use the new validate_host
helper internally.
2025-12-18 10:49:57 -08:00
Bas Nijholt
80c8079a8c refactor(executor): Add ssh_connect_kwargs helper
Extract common asyncssh.connect parameters into a reusable
ssh_connect_kwargs() function. Used by executor.py, api.py, and ws.py.

Lines: 2608 → 2601 (-7)
2025-12-18 10:48:29 -08:00
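A hedged sketch of what a shared ssh_connect_kwargs() helper can look like; the commit doesn't list the actual options, so the ones below (keyword arguments that asyncssh.connect() accepts) are assumptions:

```python
from typing import Any

def ssh_connect_kwargs(host: str, user: str, key_path: str | None = None) -> dict[str, Any]:
    """Common asyncssh.connect() keyword arguments shared by executor, API, and WebSocket code."""
    kwargs: dict[str, Any] = {
        "host": host,
        "username": user,
        "known_hosts": None,  # skip host key verification (assumption)
    }
    if key_path:
        kwargs["client_keys"] = [key_path]
    return kwargs

# usage: conn = await asyncssh.connect(**ssh_connect_kwargs("nas", "root"))
```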
Bas Nijholt
763bedf9f6 refactor(cli): Extract config not found helpers
Consolidate repeated "config not found" and "path doesn't exist"
messages into _report_no_config_found() and _report_config_path_not_exists()
helper functions. Also unifies the UX to always show status of searched
paths.
2025-12-18 10:46:58 -08:00
Bas Nijholt
641f7e91a8 refactor(cli): Consolidate _report_*_errors() functions
Merge _report_mount_errors, _report_network_errors, and _report_device_errors
into a single _report_requirement_errors function that takes a category
parameter.

Lines: 2634 → 2608 (-26)
2025-12-18 10:43:49 -08:00
Bas Nijholt
4e8e925d59 refactor(cli): Add run_parallel_with_progress helper
Extract common async progress bar pattern into a reusable helper in
common.py. Updates _discover_services, _check_ssh_connectivity,
_check_service_requirements, _get_container_counts, and _snapshot_services
to use the new helper.

Lines: 2642 → 2634 (-8)
2025-12-18 10:42:45 -08:00
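A sketch of the kind of helper this commit describes - run one async worker per item and advance a Rich progress bar as each finishes. The signature is assumed, not copied from common.py:

```python
import asyncio
from collections.abc import Awaitable, Callable, Sequence
from typing import TypeVar

from rich.progress import Progress

T = TypeVar("T")
R = TypeVar("R")

async def run_parallel_with_progress(
    items: Sequence[T],
    worker: Callable[[T], Awaitable[R]],
    description: str,
) -> list[R]:
    """Run workers concurrently, ticking a progress bar as results come in."""
    with Progress() as progress:
        task = progress.add_task(description, total=len(items))

        async def tracked(item: T) -> R:
            result = await worker(item)
            progress.advance(task)
            return result

        return await asyncio.gather(*(tracked(item) for item in items))
```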
Bas Nijholt
d84858dcfb fix(docker): Add restart policy to web service (#19)
* fix(docker): Add restart policy to containers

* fix: Only add restart policy to web service
2025-12-18 10:39:09 -08:00
Bas Nijholt
3121ee04eb feat(web): Show ⌘K shortcut on command palette button (#20) 2025-12-18 10:38:57 -08:00
Bas Nijholt
a795132a04 refactor(cli): Move format_host to common.py
Consolidate duplicate _format_host() function from lifecycle.py and
management.py into a single format_host() function in common.py.

Lines: 2647 → 2642 (-5)
2025-12-18 10:38:52 -08:00
Bas Nijholt
a6e491575a feat(web): Add Console page with terminal and editor (#17) 2025-12-18 10:29:15 -08:00
Bas Nijholt
78bf90afd9 docs: Improve Releases section in CLAUDE.md 2025-12-18 10:04:56 -08:00
Bas Nijholt
76b60bdd96 feat(web): Add Console page with terminal and editor
Add a new Console page accessible from the sidebar that provides:
- Interactive terminal with full shell access to any configured host
- SSH agent forwarding for authentication to remote hosts
- Monaco editor for viewing/editing files on hosts
- Host selector dropdown with local host listed first
- Auto-loads compose-farm config file on page load

Changes:
- Add /console route and console.html template
- Add /ws/shell/{host} WebSocket endpoint for shell sessions
- Add /api/console/file GET/PUT endpoints for remote file operations
- Update sidebar to include Console navigation link
2025-12-18 10:02:54 -08:00
Bas Nijholt
98bfb1bf6d fix(executor): Disable SSH host key checking in raw mode (#18)
Add SSH options to match asyncssh behavior:
- StrictHostKeyChecking=no
- UserKnownHostsFile=/dev/null
- LogLevel=ERROR (suppress warnings)
- Use -tt to force TTY allocation without stdin TTY

Fixes "Host key verification failed" errors when running from web UI.
2025-12-18 09:59:22 -08:00
Bas Nijholt
3c1cc79684 refactor(docker): Use multi-stage build to reduce image size
Reduces image size from 880MB to 139MB (84% smaller) by:
- Building with uv in a separate stage
- Using python:3.14-alpine as runtime base (no uv overhead)
- Pre-compiling bytecode with --compile-bytecode
- Copying only the tool virtualenv and bin symlinks to runtime
2025-12-18 00:58:06 -08:00
Bas Nijholt
12bbcee374 feat(web): Handle invalid config gracefully with error banner (#16) 2025-12-18 00:40:19 -08:00
Bas Nijholt
6e73ae0157 feat(web): Add command palette with Cmd+K (#15) 2025-12-18 00:12:38 -08:00
Bas Nijholt
d90b951a8c feat(web): Vendor CDN assets at build time for offline use
Add a Hatch build hook that downloads JS/CSS dependencies during wheel
builds and rewrites base.html to use local paths. This allows the web UI
to work in environments without internet access.

- Add data-vendor attributes to base.html for declarative asset mapping
- Download htmx, tailwind, daisyui, and xterm.js during build
- Bundle LICENSES.txt with attribution for vendored dependencies
- Source HTML keeps CDN links for development; wheel has local paths
2025-12-17 23:45:06 -08:00
Bas Nijholt
14558131ed feat(web): Add search and host filter to sidebar and services list
- Add search input and host dropdown to sidebar for filtering services
- Add search input and host dropdown to Services by Host section
- Show dynamic service count in sidebar that updates with filter
- Multi-host services appear when any host is selected
2025-12-17 23:35:18 -08:00
Bas Nijholt
a422363337 Update uv.lock 2025-12-17 23:21:37 -08:00
Bas Nijholt
1278d0b3af fix(web): Remove config caching so changes are detected immediately
Config was cached with @lru_cache, causing the web UI to show stale
sync status after external config file edits.
2025-12-17 23:17:25 -08:00
Bas Nijholt
c8ab6271a8 feat(web): Add rainbow hover effect to Compose Farm headers
Animated gradient appears on hover for both sidebar and mobile navbar headers.
2025-12-17 23:09:57 -08:00
Bas Nijholt
957e828a5b feat(web): Add Lucide icons to web UI (#14) 2025-12-17 23:04:53 -08:00
Bas Nijholt
5afda8cbb2 Add web UI with FastAPI + HTMX + xterm.js (#13) 2025-12-17 22:52:40 -08:00
Bas Nijholt
1bbf324f1e Validate services exist in config with friendly error
Show a clean error message instead of a traceback when a service
is not found in config. Includes a hint about adding to config.
2025-12-17 11:04:48 -08:00
Bas Nijholt
1be5b987a2 Support "." as shorthand for current directory service name
Running `cf up .` now resolves to the current directory name,
allowing quick operations when inside a service directory.
2025-12-17 10:57:48 -08:00
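The resolution itself is tiny; a plausible sketch (function name hypothetical):

```python
from pathlib import Path

def resolve_service_name(name: str) -> str:
    """Treat "." as "the service named after the current directory"."""
    return Path.cwd().name if name == "." else name

# Inside /opt/compose/plex, `cf up .` behaves like `cf up plex`.
```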
Bas Nijholt
6b684b19f2 Run startup time test 6 times 2025-12-17 09:08:45 -08:00
Bas Nijholt
4a37982e30 Cleanup 2025-12-17 09:07:52 -08:00
Bas Nijholt
55cb44e0e7 Drop service discovery mention 2025-12-17 09:07:30 -08:00
Bas Nijholt
5c242d08bf Add cf apply to post 2025-12-17 09:06:59 -08:00
Bas Nijholt
5bf65d3849 Raise Linux CLI startup threshold to 0.25s for CI headroom 2025-12-17 09:05:53 -08:00
Bas Nijholt
21d5dfa175 Fix check-readme-commands hook to use uv run for CI compatibility 2025-12-17 08:58:45 -08:00
Bas Nijholt
e49ad29999 Use OS-specific thresholds for CLI startup test (Linux: 0.2s, macOS: 0.35s, Windows: 2s) 2025-12-17 08:57:50 -08:00
Bas Nijholt
cdbe74ed89 Return early from CLI startup test when under threshold 2025-12-17 08:56:35 -08:00
Bas Nijholt
129970379c Increase CLI startup threshold to 0.35s for macOS/Windows CI 2025-12-17 08:55:40 -08:00
Bas Nijholt
c5c47d14dd Add CLI startup time test to catch slow imports
Runs `cf --help` and fails if startup exceeds 200ms. Shows timing
info in CI logs on both pass and failure.
2025-12-17 08:53:32 -08:00
Bas Nijholt
95f19e7333 Add pre-commit hook to verify all CLI commands are documented in README
Extracts commands from the Typer app and checks each has a corresponding
--help section in the README. Runs when README.md or CLI files change.
2025-12-17 08:45:16 -08:00
Bas Nijholt
9c6edd3f18 refactor(docs): move reddit-post.md into docs folder 2025-12-17 08:45:16 -08:00
github-actions[bot]
bda9210354 Update README.md 2025-12-17 16:35:24 +00:00
Bas Nijholt
f57951e8dc Fix cf up -h output in README.md 2025-12-17 08:34:52 -08:00
basnijholt
ba8c04caf8 chore(docs): update TOC 2025-12-17 16:31:40 +00:00
Bas Nijholt
ff0658117d Add all --help outputs 2025-12-17 08:31:14 -08:00
Bas Nijholt
920b593d5f Fix mypy error: add type annotation for proc variable 2025-12-17 00:17:20 -08:00
Bas Nijholt
27d9b08ce2 Add -f shorthand for --full in apply command 2025-12-17 00:10:16 -08:00
Bas Nijholt
700cdacb4d Add 'a' alias for apply command (cf a = cf apply) 2025-12-17 00:09:45 -08:00
Bas Nijholt
3c7a532704 Add comments explaining lazy imports for startup performance 2025-12-17 00:08:28 -08:00
Bas Nijholt
6048f37ad5 Lazy import pydantic for faster CLI startup
- Create paths.py module with lightweight path utilities (no pydantic)
- Move Config imports to TYPE_CHECKING blocks in CLI modules
- Lazy import load_config only when needed

Combined with asyncssh lazy loading, cf --help now starts in ~150ms
instead of ~500ms (70% faster).
2025-12-17 00:07:15 -08:00
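The two techniques this commit combines, as a minimal sketch (module path and command body are assumptions): type-only imports behind TYPE_CHECKING, plus importing load_config inside the command so pydantic is only paid for when the config is actually read:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by type checkers; never imported at runtime,
    # so `cf --help` does not pull in pydantic.
    from compose_farm.config import Config  # import path assumed

def up(config_path: str) -> None:
    # Lazy import: load_config (and pydantic with it) loads only
    # when a command actually needs the config.
    from compose_farm.config import load_config

    config: Config = load_config(config_path)
    ...
```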
Bas Nijholt
f18952633f Lazy import asyncssh for faster CLI startup
Move asyncssh import inside _run_ssh_command() so it's only loaded
when actually executing SSH commands. This cuts CLI import time
from 414ms to 200ms (52% faster).

cf --help now starts in ~260ms instead of ~500ms.
2025-12-16 23:59:37 -08:00
Bas Nijholt
437257e631 Add cf config symlink command
Creates a symlink from ~/.config/compose-farm/compose-farm.yaml to a
local config file using absolute paths (avoids broken relative symlinks).

Usage:
  cf config symlink                     # Link to ./compose-farm.yaml
  cf config symlink /path/to/config.yaml  # Link to specific file

State files are automatically stored next to the actual config (not
the symlink) since config_path.resolve() follows symlinks.
2025-12-16 23:56:58 -08:00
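A rough sketch of what the command does under the hood (helper name and exact behavior are assumptions):

```python
from pathlib import Path

def symlink_config(target: str = "compose-farm.yaml") -> Path:
    """Link ~/.config/compose-farm/compose-farm.yaml to an absolute config path."""
    link = Path.home() / ".config" / "compose-farm" / "compose-farm.yaml"
    link.parent.mkdir(parents=True, exist_ok=True)
    resolved = Path(target).resolve()  # absolute target avoids broken relative symlinks
    if link.is_symlink() or link.exists():
        link.unlink()
    link.symlink_to(resolved)
    return resolved
```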
Bas Nijholt
c720170f26 Add --full flag to apply command
When --full is passed, apply also runs 'docker compose up' on all
services (not just missing/migrating ones) to pick up any config
changes (compose file, .env, etc).

- cf apply          # Fast: state reconciliation only
- cf apply --full   # Thorough: also refresh all running services
2025-12-16 23:54:19 -08:00
Bas Nijholt
d9c03d6509 Feature apply as the hero command in README
- Update intro and NOTE block to lead with cf apply
- Rewrite "How It Works" to show declarative workflow first
- Move apply to top of command table (bolded)
- Reorder examples to show apply as "the main command"
- Update automation bullet to highlight one-command reconciliation
2025-12-16 23:49:49 -08:00
Bas Nijholt
3b7066711f Merge pull request #12 from basnijholt/feature/orphaned-services
Add apply command and refactor CLI for clearer UX
2025-12-16 23:34:34 -08:00
Bas Nijholt
6a630c40a1 Update apply command description to include starting missing services 2025-12-16 23:32:27 -08:00
Bas Nijholt
9f9c042b66 Remove up --migrate flag in favor of apply
Simplifies CLI by having one clear reconciliation command:
- cf up <service>  = start specific services (auto-migrates if needed)
- cf apply         = full reconcile (stop orphans + migrate + start missing)

The --migrate flag was redundant with 'apply --no-orphans'.
2025-12-16 23:27:19 -08:00
github-actions[bot]
2a6d7d0b85 Update README.md 2025-12-17 07:21:38 +00:00
Bas Nijholt
6d813ccd84 Merge af9c760fb8 into affed2edcf 2025-12-17 07:21:23 +00:00
Bas Nijholt
af9c760fb8 Add missing service detection to apply command
Previously, apply only handled:
1. Stopping orphans (in state, not in config)
2. Migrating services (in state, wrong host)

Now it also handles:
3. Starting missing services (in config, not in state)

This fixes the case where a service was stopped as an orphan, then
re-added to config - apply would say "nothing to do" instead of
starting it.

Added get_services_not_in_state() to state.py and updated tests.
2025-12-16 23:21:09 -08:00
Bas Nijholt
90656b05e3 Add tests for apply command and down --orphaned flag
Tests cover:
- apply: nothing to do, dry-run preview, migrations, orphan cleanup, --no-orphans
- down --orphaned: no orphans, stops services, error on invalid combinations

Lifecycle.py coverage improved from 20% to 61%.
2025-12-16 23:15:46 -08:00
github-actions[bot]
d7a3d4e8c7 Update README.md 2025-12-17 07:10:52 +00:00
Bas Nijholt
35f0b8bf99 Merge be6b391121 into affed2edcf 2025-12-16 23:10:36 -08:00
Bas Nijholt
be6b391121 Refactor CLI commands for clearer UX
Separate "read state from reality" from "write config to reality":
- Rename `sync` to `refresh` (updates local state from running services)
- Add `apply` command (makes reality match config: migrate + stop orphans)
- Add `down --orphaned` flag (stops services removed from config)
- Modify `up --migrate` to only handle migrations (not orphans)

The new mental model:
- `refresh` = Reality → State (discover what's running)
- `apply` = Config → Reality (reconcile: migrate services + stop orphans)

Also extract private helper functions for reporting to match codebase style.
2025-12-16 23:06:42 -08:00
Bas Nijholt
7f56ba6a41 Add orphaned service detection and cleanup
When services are removed from config but still tracked in state,
`cf up --migrate` now stops them automatically. This makes the
config truly declarative - comment out a service, run migrate,
and it stops.

Changes:
- Add get_orphaned_services() to state.py for detecting orphans
- Add stop_orphaned_services() to operations.py for cleanup
- Update lifecycle.py to call stop_orphaned_services on --migrate
- Refactor _report_orphaned_services to use shared function
- Rename "missing_from_config" to "unmanaged" for clarity
- Add tests for get_orphaned_services
- Only remove from state on successful down (not on failure)
2025-12-16 22:53:26 -08:00
Bas Nijholt
4b3d7a861e Fix migration and update for services with buildable images
Use `pull --ignore-buildable` to skip images that have `build:` defined
in the compose file, preventing pull failures for locally-built images
like gitea-runner-custom. The build step then handles these images.
2025-12-16 19:42:24 -08:00
Bas Nijholt
affed2edcf Refactor operations.py into smaller helpers
- Add PreflightResult NamedTuple with .ok property
- Extract _run_compose_step to handle raw output and interrupts
- Extract _up_single_service for single-host migration logic
- Deduplicate pull/build handling in _migrate_service
- Add was_running check to only rollback if service was running
2025-12-16 17:08:25 -08:00
Bas Nijholt
34642e8b8e Rollback to old host when migration up fails
When up fails on the target host after migration has already stopped
the service on the old host, attempt to restart on the old host as a
rollback. This prevents services from being left down after a failed
migration attempt.
2025-12-16 16:59:09 -08:00
Bas Nijholt
4c8b6c5209 Add init-network hint when network is missing 2025-12-16 16:22:06 -08:00
Bas Nijholt
2b38ed28c0 Skip traefik regeneration when all services fail
Don't update traefik config if all services in the operation failed.
This prevents adding routes for services that aren't actually running.
2025-12-16 16:21:09 -08:00
Bas Nijholt
26b57895ce Clean up orphaned containers when migration up fails
If `up` fails after migration (when we've already run `down` on the
old host), run `down` on the target host to clean up any containers
that were created but couldn't start (e.g., due to missing devices).

This prevents orphaned containers from lingering on the failed host.
2025-12-16 16:16:09 -08:00
Bas Nijholt
367da13fae Fix path existence check for permission denied
Use stat instead of test -e to distinguish "no such file" from
"permission denied". If stat fails with permission denied, the
path exists (just not accessible), so report it as existing.

Fixes false "missing path" errors for directories with restricted
permissions like /mnt/data/immich/library.
2025-12-16 15:08:38 -08:00
Bas Nijholt
d6ecd42559 Consolidate service requirement checks into shared function
Move check_service_requirements() to operations.py as a public function
that verifies paths, networks, and devices exist on a target host. Both
CLI check command and pre-flight migration checks now use this shared
function, eliminating duplication. Also adds device checking to the
check command output.
2025-12-16 14:53:59 -08:00
Bas Nijholt
233c33fa52 Add device checking to pre-flight migration checks
Services with devices: mappings (e.g., /dev/dri for GPU acceleration)
now have those devices verified on the target host before migration.
This prevents the scenario where a service is stopped on the old host
but fails to start on the new host due to missing devices.

Adds parse_devices() to extract host device paths from compose files.
2025-12-16 14:35:52 -08:00
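A sketch of what parse_devices() has to do, assuming the common "HOST:CONTAINER[:PERMISSIONS]" string form of devices: entries (not the actual implementation):

```python
def parse_devices(compose: dict) -> list[str]:
    """Collect host-side device paths from every service's `devices:` list."""
    devices: list[str] = []
    for service in compose.get("services", {}).values():
        for entry in service.get("devices", []):
            # "/dev/dri:/dev/dri" -> "/dev/dri"
            devices.append(str(entry).split(":", 1)[0])
    return devices

compose = {"services": {"jellyfin": {"devices": ["/dev/dri:/dev/dri"]}}}
assert parse_devices(compose) == ["/dev/dri"]
```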
Bas Nijholt
43974c5743 Abort on first Ctrl+C during migrations
Detect when SSH subprocess is killed by signal (exit code < 0 or 255)
and treat it as an interrupt. This allows single Ctrl+C to abort the
entire operation instead of requiring two presses.
2025-12-16 14:31:33 -08:00
Bas Nijholt
cf94a62f37 docs: Clarify pull/build comments in migration 2025-12-16 14:26:48 -08:00
Bas Nijholt
81b4074827 Pre-build Dockerfile services during migration
After pulling images, also run build for services with Dockerfiles.
This ensures build-based services have their images ready before
stopping the old service, minimizing downtime.

If build fails, abort the migration and leave the service running
on the old host.

Extract _migrate_service helper to reduce function complexity.
2025-12-16 14:17:19 -08:00
Bas Nijholt
455657c8df Abort migration if pre-pull fails
If pulling images on the target host fails (e.g., rate limit),
abort the migration and leave the service running on the old host.
This prevents downtime when Docker Hub rate limits are hit.
2025-12-16 14:14:35 -08:00
Bas Nijholt
ee5a92788a Pre-pull images during migration to reduce downtime
When migrating a service to a new host, pull images on the target
host before stopping the service on the old host. This minimizes
downtime since images are cached when the up command runs.

Migration flow:
1. Pull images on new host (service still running on old)
2. Down on old host
3. Up on new host (fast, images already pulled)
2025-12-16 14:12:53 -08:00
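The three-step flow above, as a pseudocode-style sketch; run_compose is a stand-in for "run this docker compose command for the service on that host", not a real compose-farm function:

```python
import asyncio

async def run_compose(host: str, service: str, command: str) -> bool:
    """Stub for 'run `docker compose <command>` for <service> on <host>'."""
    print(f"[{host}] docker compose -p {service} {command}")
    return True

async def migrate(service: str, old_host: str, new_host: str) -> None:
    """Minimize downtime by caching images on the target before stopping the old copy."""
    # 1. Pull images on the new host while the service still serves traffic on the old one.
    if not await run_compose(new_host, service, "pull"):
        return  # abort and leave the service running on the old host
    # 2. Stop the service on the old host.
    await run_compose(old_host, service, "down")
    # 3. Start on the new host; fast, since the images are already cached.
    await run_compose(new_host, service, "up -d")

asyncio.run(migrate("plex", "old-box", "new-box"))
```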
Bas Nijholt
2ba396a419 docs: Move Compose Farm to first column in comparison table 2025-12-16 13:48:40 -08:00
Bas Nijholt
7144d58160 build: Include LICENSE file in package distribution 2025-12-16 13:37:15 -08:00
Bas Nijholt
279fa2e5ef Create LICENSE 2025-12-16 13:36:35 -08:00
Bas Nijholt
dbe0b8b597 docs: Add app.py to CLAUDE.md architecture diagram 2025-12-16 13:14:51 -08:00
Bas Nijholt
b7315d255a refactor: Split CLI into modular subpackage (#11) 2025-12-16 13:08:08 -08:00
renovate[bot]
f003d2931f ⬆️ Update actions/checkout action to v6 (#5)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-16 12:19:45 -08:00
renovate[bot]
6f7c557065 ⬆️ Update actions/setup-python action to v6 (#6)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-16 12:18:34 -08:00
renovate[bot]
ecb6ee46b1 ⬆️ Update astral-sh/setup-uv action to v7 (#8)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-16 12:18:28 -08:00
renovate[bot]
354967010f ⬆️ Update redis Docker tag to v8 (#9)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-16 12:18:22 -08:00
github-actions[bot]
57122f31a3 Update README.md 2025-12-16 20:01:03 +00:00
Bas Nijholt
cbbcec0d14 Add config subcommand for managing configuration files (#10) 2025-12-16 12:00:44 -08:00
Bas Nijholt
de38c35b8a docs: Add one-liner showing manual equivalent 2025-12-16 11:19:56 -08:00
github-actions[bot]
def996ddf4 Update README.md 2025-12-16 19:14:07 +00:00
Bas Nijholt
790e32e96b Fix test_load_config_not_found for CF_CONFIG env var 2025-12-16 11:13:44 -08:00
Bas Nijholt
fd75c4d87f Add CLI --help output to README 2025-12-16 11:12:43 -08:00
Bas Nijholt
411a99cbc4 Wait for PyPI propagation before Docker build
Also add Python 3.14 to classifiers.
2025-12-16 11:04:35 -08:00
Bas Nijholt
d2c6ab72b2 Add CF_CONFIG env var for simpler Docker workflow
Config search order is now:
1. --config CLI option
2. CF_CONFIG environment variable
3. ./compose-farm.yaml
4. ~/.config/compose-farm/compose-farm.yaml

Docker workflow simplified: mount compose_dir once, set CF_CONFIG
to config file within it. No more symlink issues or multiple mounts.
2025-12-16 10:12:55 -08:00
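The search order above, sketched as a resolver (function name hypothetical):

```python
import os
from pathlib import Path

def find_config(cli_option: str | None = None) -> Path | None:
    """Resolve the config file using the precedence listed above."""
    candidates = [
        cli_option,                                  # 1. --config CLI option
        os.environ.get("CF_CONFIG"),                 # 2. CF_CONFIG environment variable
        "./compose-farm.yaml",                       # 3. current directory
        "~/.config/compose-farm/compose-farm.yaml",  # 4. XDG config directory
    ]
    for candidate in candidates:
        if candidate and (path := Path(candidate).expanduser()).is_file():
            return path
    return None
```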
Bas Nijholt
3656584eda Friendly error when config path is a directory
Docker creates empty directories for missing file mounts,
causing confusing IsADirectoryError tracebacks. Now shows
a clear message explaining the likely cause.
2025-12-16 09:49:40 -08:00
Bas Nijholt
8be370098d Use env vars for docker-compose.yml mounts
- CF_CONFIG_DIR: config directory (default: ~/.config/compose-farm)
- CF_COMPOSE_DIR: compose directory (default: /opt/compose)

Mounts preserve paths so compose_dir in config works correctly.
2025-12-16 09:49:34 -08:00
Bas Nijholt
45057cb6df feat: Add docker-compose.yml for easier Docker usage
Example compose file that mounts SSH agent and config.
Users uncomment the compose_dir mount for their setup.
2025-12-16 09:40:18 -08:00
Bas Nijholt
3f24484d60 fix: Fix VERSION expansion in Dockerfile 2025-12-16 09:24:46 -08:00
Bas Nijholt
b6d50a22b4 fix: Wait for PyPI upload before building Docker image
Use workflow_run trigger to wait for "Upload Python Package" workflow
to complete successfully before building the Docker image. This ensures
the version is available on PyPI when uv tries to install it.
2025-12-16 09:21:35 -08:00
Bas Nijholt
8a658210e1 docs: Add Docker installation instructions with SSH agent 2025-12-16 09:16:43 -08:00
Bas Nijholt
583aaaa080 feat: Add Docker image and GitHub workflow
- Dockerfile using ghcr.io/astral-sh/uv:python3.14-alpine
- Installs compose-farm via uv tool install
- Includes openssh-client for remote host connections
- GitHub workflow builds and pushes to ghcr.io on release
- Supports manual workflow dispatch with version input
- Tags: semver (x.y.z, x.y, x) and latest
2025-12-16 09:11:09 -08:00
Bas Nijholt
22ca4f64e8 docs: Add command quick-reference table to Usage section 2025-12-16 08:30:15 -08:00
Bas Nijholt
32e798fcaa chore: Remove obsolete PLAN.md
The traefik-file feature described in this planning document has been
fully implemented. All open questions have been resolved.
2025-12-15 23:27:27 -08:00
Bas Nijholt
ced81c8b50 refactor: Make internal CLI symbols private
Rename module-internal type aliases, TypeVar, and constants with _ prefix:
- _T, _ServicesArg, _AllOption, _ConfigOption, _LogPathOption, _HostOption
- _MISSING_PATH_PREVIEW_LIMIT
- _DEFAULT_NETWORK_NAME, _DEFAULT_NETWORK_SUBNET, _DEFAULT_NETWORK_GATEWAY

These are only used within cli.py and should not be part of the public API.
2025-12-15 20:57:41 -08:00
Bas Nijholt
7ec4b71101 refactor: Remove unnecessary console aliasing in executor
Import console and err_console directly instead of aliasing to
_console and _err_console. Rename inner function variable to
'out' to avoid shadowing the module-level console import.
2025-12-15 20:36:39 -08:00
Bas Nijholt
94aa58d380 refactor: Make internal constants and classes private
Rename module-internal constants and classes with _ prefix:
- compose.py: SINGLE_PART, PUBLISHED_TARGET_PARTS, HOST_PUBLISHED_PARTS, MIN_VOLUME_PARTS
- logs.py: DIGEST_HEX_LENGTH
- traefik.py: LIST_VALUE_KEYS, MIN_ROUTER_PARTS, MIN_SERVICE_LABEL_PARTS,
  TraefikServiceSource, TRAEFIK_CONFIG_HEADER

These items are only used within their respective modules and should
not be part of the public API.
2025-12-15 20:33:48 -08:00
Bas Nijholt
f8d88e6f97 refactor: Remove run_compose_multi_host and rename report_preflight_failures to _report_preflight_failures
Eliminate the public run_compose_multi_host helper, which was a thin wrapper around the internal _run_sequential_commands_multi_host function, and mark the preflight failure reporting function as internal by prefixing it with an underscore.
Updated all internal calls accordingly.
2025-12-15 20:27:02 -08:00
Bas Nijholt
a95f6309b0 Remove dead code and make internal APIs public
Remove functions that were replaced by _with_progress variants in cli.py:
- discover_running_services, check_mounts_on_configured_hosts,
  check_networks_on_configured_hosts, _check_resources from operations.py
- snapshot_services from logs.py
- get_service_hosts from state.py

Make previously private functions public (remove underscore prefix):
- is_local in executor.py
- isoformat, collect_service_entries, load_existing_entries,
  merge_entries, write_toml in logs.py
- load_env, interpolate, parse_ports in compose.py

Update tests to use renamed public functions.
2025-12-15 20:19:28 -08:00
Bas Nijholt
502de018af docs: Add high availability row to comparison table 2025-12-15 19:51:57 -08:00
Bas Nijholt
a3e8daad33 docs: refine comparison table in README 2025-12-15 16:06:17 -08:00
Bas Nijholt
78a2f65c94 docs: Move comparison link after declarative setup line 2025-12-15 15:48:15 -08:00
Bas Nijholt
1689a6833a docs: Link to comparison section from Why Compose Farm 2025-12-15 15:46:26 -08:00
Bas Nijholt
6d2f32eadf docs: Add feature comparison table with emojis 2025-12-15 15:44:16 -08:00
Bas Nijholt
c549dd50c9 docs: Move comparison section to end, simplify format 2025-12-15 15:41:09 -08:00
Bas Nijholt
82312e9421 docs: add comparison with alternatives to README 2025-12-15 15:37:08 -08:00
Bas Nijholt
e13b367188 docs: Add shields to README 2025-12-15 15:31:30 -08:00
Bas Nijholt
d73049cc1b docs: Add declarative philosophy to Why Compose Farm 2025-12-15 15:17:04 -08:00
Bas Nijholt
4373b23cd3 docs: Simplify xkcd explanation, lead with simplicity 2025-12-15 14:54:29 -08:00
Bas Nijholt
73eb6ccf41 docs: Center xkcd image 2025-12-15 14:52:57 -08:00
Bas Nijholt
6ca48d0d56 docs: Add console.py to CLAUDE.md architecture 2025-12-15 14:52:40 -08:00
Bas Nijholt
b82599005e docs: Add xkcd reference and clarify this is not a new standard 2025-12-15 14:37:33 -08:00
Bas Nijholt
b044053674 docs: Emphasize zero changes required to compose files 2025-12-15 14:19:52 -08:00
Bas Nijholt
e4f03bcd94 docs: Clarify autokuma demonstrates multi-host feature 2025-12-15 14:14:47 -08:00
Bas Nijholt
ac3797912f Add AutoKuma labels to example services 2025-12-15 14:14:07 -08:00
Bas Nijholt
429a1f6e7e docs: Fix outdated .env instructions in examples README 2025-12-15 14:13:09 -08:00
Bas Nijholt
fab20e0796 Add header comment to generated traefik file-provider config
Includes repository link and explanation of what the file does.
Header is added automatically by render_traefik_config().
2025-12-15 14:11:49 -08:00
Bas Nijholt
1bc6baa0b0 Add realistic traefik file-provider example 2025-12-15 14:10:00 -08:00
Bas Nijholt
996e0748f8 style: Simplify compose-farm.yaml comments 2025-12-15 14:08:50 -08:00
Bas Nijholt
ca46fdfaa4 Replace trivial examples with real-world services
- traefik: Reverse proxy with Let's Encrypt DNS challenge
- mealie: Single container with resource limits
- uptime-kuma: Monitoring with Docker socket and user mapping
- paperless-ngx: Multi-container stack (Redis + SQLite)
- autokuma: Multi-host service (runs on all hosts)

Each example demonstrates dual Traefik routes:
- HTTPS (websecure): Custom domain with Let's Encrypt TLS
- HTTP (web): .local domain for LAN access without TLS

Includes compose-farm.yaml with multi-host config and
compose-farm-state.yaml showing deployed state.
2025-12-15 14:08:14 -08:00
Bas Nijholt
b480797e5b Add XDG_CONFIG_HOME support for config paths
Respect the XDG_CONFIG_HOME environment variable when looking for
config files and log paths. Falls back to ~/.config if not set.
2025-12-15 13:06:59 -08:00
Bas Nijholt
c47fdf847e Use _progress_bar helper for all progress bars
Standardize UI by using the same progress bar configuration
everywhere. Removes unused TaskProgressColumn import.
2025-12-15 13:03:50 -08:00
Bas Nijholt
3ca9562013 Consolidate console instances and progress bar patterns
- Add console.py module with shared Console instances
- Add _progress_bar() helper to reduce progress bar boilerplate
- Update cli.py, executor.py, operations.py to use shared console
2025-12-15 12:56:23 -08:00
Bas Nijholt
3104d5de28 Refactor state.py with context manager and cli.py with helper to reduce duplication 2025-12-15 11:10:44 -08:00
Bas Nijholt
fd141cbc8c Refactor executor and operations to eliminate code duplication 2025-12-15 11:07:58 -08:00
Bas Nijholt
aa0c15b6b3 Add project metadata to pyproject.toml
Add license, maintainers, keywords, classifiers, and project URLs
for better discoverability on PyPI.
2025-12-15 11:07:13 -08:00
Bas Nijholt
4630a3e551 Merge pull request #4 from basnijholt/feature/multi-host-services
Add multi-host service support
2025-12-15 10:58:36 -08:00
Bas Nijholt
b70d5c52f1 fix: Use strict=True in zip() for equal-length lists 2025-12-15 10:56:47 -08:00
Bas Nijholt
5d8635ba7b ci: Use prek in CI instead of separate ruff/mypy commands
prek is a faster, Rust-based alternative to pre-commit.
Also updates ruff-pre-commit to v0.14.9 to match project version.
2025-12-15 10:53:34 -08:00
Bas Nijholt
27dad9d9d5 style: Format cli.py 2025-12-15 10:51:51 -08:00
Bas Nijholt
abb4417b15 Add orphaned service detection in check command
Warns when services are in state but not in config. These are
services that were removed from config but may still be running.
Also refactors remote checks into helper function.
2025-12-15 10:43:10 -08:00
Bas Nijholt
388cca5591 Add summary output showing succeeded/failed service counts 2025-12-15 10:38:36 -08:00
Bas Nijholt
8aa019e25f fix: Move imports to top-level in test file 2025-12-15 10:29:29 -08:00
Bas Nijholt
e4061cfbde Fix down command for multi-host services
For multi-host services, result.service is in 'svc@host' format.
Extract base service name before removing from state.
2025-12-15 10:09:15 -08:00
Bas Nijholt
9a1f20e2d4 Add per-host control and partial state tracking
- Track partial success: if some hosts succeed, state reflects
  only the hosts that actually started
- Add --host flag to up/down: operate on a specific host only
  - `cf up autokuma --host nuc` starts only on nuc
  - `cf down autokuma --host nuc` stops only on nuc
- Add state helpers: add_service_to_host, remove_service_from_host
- Validate --host is allowed for the service's configured hosts
2025-12-15 09:51:49 -08:00
Bas Nijholt
3b45736729 Validate multi-host config edge cases
- Block 'all' as a host name (reserved keyword)
- Reject empty host lists
- Reject duplicate hosts in explicit lists
2025-12-15 09:46:29 -08:00
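For illustration (hypothetical host and service names), configs like these are now rejected:

```yaml
hosts:
  all: 192.168.1.10        # rejected: "all" is a reserved keyword
services:
  dozzle: []               # rejected: empty host list
  autokuma: [nuc, nuc]     # rejected: duplicate hosts
```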
Bas Nijholt
1d88fa450a docs: Explain why multi-host services are needed 2025-12-15 09:37:41 -08:00
Bas Nijholt
31ee6be163 Add multi-host service support
Allows services to run on multiple hosts using the "all" keyword
or an explicit list of hosts:

  services:
    autokuma: all              # Run on all configured hosts
    dozzle: [host1, host2]     # Run on specific hosts

Key changes:
- config.py: Added get_hosts() and is_multi_host() methods
- executor.py: Added run_compose_multi_host() for parallel execution
- operations.py: Added _up_multi_host_service() with pre-flight checks
- state.py: Track list of hosts for multi-host services
- cli.py: Updated discovery, stats, and check for multi-host
- README.md: Added documentation

Multi-host services:
- Run on all target hosts in parallel
- Skip migration logic (always run everywhere)
- Show [service@host] prefix in output
- Track all running hosts in state
2025-12-15 09:14:25 -08:00
Bas Nijholt
096a2ca5f4 sync: Remove extra spacing (transient bar already leaves one) 2025-12-14 20:41:38 -08:00
Bas Nijholt
fb04f6f64d sync: Add spacing before capturing progress bar 2025-12-14 20:39:02 -08:00
Bas Nijholt
d8e54aa347 check: Fix double newline spacing in output 2025-12-14 20:35:25 -08:00
Bas Nijholt
b2b6b421ba check: Add spacing before mounts/networks progress bar 2025-12-14 20:33:36 -08:00
Bas Nijholt
c6b35f02f0 check: Add spacing before SSH progress bar 2025-12-14 20:32:16 -08:00
Bas Nijholt
7e43b0a6b8 check: Fix spacing after transient progress bar 2025-12-14 20:31:20 -08:00
Bas Nijholt
2915b287ba check: Add SSH connectivity check as first remote step
- Check SSH connectivity to all remote hosts before mount/network checks
- Skip local hosts (no SSH needed)
- Show progress bar during SSH checks
- Report unreachable hosts with clear error messages
- Add newline spacing for better output formatting
2025-12-14 20:30:36 -08:00
Bas Nijholt
ae561db0c9 check: Add progress bar and parallelize mount/network checks
- Parallelize mount and network checking for all services
- Add Rich progress bar showing count, elapsed time, and service name
- Move all inline imports to top-level (contextlib, datetime, logs)
- Also sort state file entries alphabetically for consistency
2025-12-14 20:24:54 -08:00
Bas Nijholt
2d132747c4 sync: Enhance progress bars with count and elapsed time
Show "Discovering ━━━━ 32/65 • 0:00:05 • service-name" format with:
- M of N complete count
- Elapsed time
- Current service name
2025-12-14 20:15:39 -08:00
Bas Nijholt
2848163a04 sync: Add progress bars and parallelize operations
- Parallelize service discovery across all services
- Parallelize image digest capture
- Show transient Rich progress bars during both operations
- Significantly faster sync due to concurrent SSH connections
2025-12-14 20:13:42 -08:00
Bas Nijholt
76aa6e11d2 logs: Make --all and --host mutually exclusive
These options conflict conceptually - --all means all services across
all hosts, while --host means all services on a specific host.
2025-12-14 20:10:28 -08:00
Bas Nijholt
d377df15b4 logs: Add --host filter and contextual --tail default
- Add --host/-H option to filter logs to services on a specific host
- Default --tail to 20 lines when showing multiple services (--all, --host, or >1 service)
- Default to 100 lines for single service
- Add tests for contextual default and host filtering
2025-12-14 20:04:40 -08:00
Bas Nijholt
334c17cc28 logs: Use contextual default for --tail option
Default to 20 lines when --all is specified (quick overview),
100 lines otherwise (detailed view for specific services).
2025-12-14 19:59:12 -08:00
Bas Nijholt
f148b5bd3a docs: Add TrueNAS NFS root squash configuration guide
Explains how to set maproot_user/maproot_group to root/wheel
in TrueNAS to disable root squash, allowing Docker containers
running as root to write to NFS-mounted volumes.
2025-12-14 17:24:46 -08:00
Bas Nijholt
54af649d76 Make stats progress bar transient 2025-12-14 15:31:00 -08:00
Bas Nijholt
f6e5a5fa56 Add progress bar when querying hosts in stats --live 2025-12-14 15:30:37 -08:00
Bas Nijholt
01aa24d0db style: Add borders to stats summary table 2025-12-14 15:25:17 -08:00
Bas Nijholt
3e702ef72e Add stats command for overview of hosts and services
Shows hosts table (address, configured/running services) and summary
(total hosts, services, compose files on disk, pending migrations).
Use --live to query Docker for actual container counts.
2025-12-14 15:24:30 -08:00
Bas Nijholt
a31218f7e5 docs: Remove trivial progress counter detail 2025-12-14 15:16:45 -08:00
Bas Nijholt
5decb3ed95 docs: Add --migrate flag and hybrid SSH approach 2025-12-14 15:14:56 -08:00
Bas Nijholt
da61436fbb Use native ssh for raw mode, asyncssh for streaming
- Raw mode uses subprocess with `ssh -t` for proper TTY handling
- Progress bars now render correctly on remote hosts
- asyncssh still used for non-raw parallel streaming with prefixes
- Remove redundant header prints (operations.py handles them)
2025-12-14 15:12:48 -08:00
Bas Nijholt
b6025af0c8 Fix newline after raw output to prevent line mixing 2025-12-14 14:49:33 -08:00
Bas Nijholt
ab914677c4 Add progress counter [n/total] to up command 2025-12-14 14:48:48 -08:00
Bas Nijholt
c0b421f812 Add --migrate flag to up command
Automatically detects services where state differs from config
and migrates only those. Usage: cf up --migrate or cf up -m
2025-12-14 14:47:43 -08:00
Bas Nijholt
2a446c800f Always use raw output for up command
- Print service header before raw output (local and SSH)
- up command always uses raw=True since migrations are sequential
- Gives clean progress bar output without per-line prefixes
2025-12-14 14:44:53 -08:00
Bas Nijholt
dc541c0298 test: Skip shell-dependent tests on Windows/Mac 2025-12-14 14:28:31 -08:00
Bas Nijholt
4d9b8b5ba4 docs: Add TrueNAS NFS crossmnt workaround
Documents how to access child ZFS datasets over NFS by injecting
the crossmnt option into /etc/exports. Includes Python script and
setup instructions for cron-based persistence.
2025-12-14 14:11:10 -08:00
Bas Nijholt
566a07d3a4 Refactor: separate concerns into dedicated modules
- Extract compose.py from traefik.py for generic compose parsing
  (env loading, interpolation, ports, volumes, networks)
- Rename ssh.py to executor.py for clarity
- Extract operations.py from cli.py for business logic
  (up_services, discover_running_services, preflight checks)
- Update CLAUDE.md with new architecture diagram
- Add docs/dev/future-improvements.md for low-priority items

CLI is now a thin layer that delegates to operations module.
All 70 tests pass.
2025-12-14 12:49:24 -08:00
Bas Nijholt
921ce6f13a Add raw output mode for single-service operations
When operating on a single service, pass output directly to
stdout/stderr instead of prefixing each line with [service].
This enables proper handling of \r progress bars during
docker pull, up, etc.
2025-12-14 12:15:36 -08:00
Bas Nijholt
708e09a8cc Show target host when starting services 2025-12-14 12:09:07 -08:00
Bas Nijholt
04154b84f6 Add tests for network and path checking
- test_traefik: Tests for parse_external_networks()
- test_ssh: Tests for check_paths_exist() and check_networks_exist()
2025-12-14 12:08:35 -08:00
Bas Nijholt
2bc9b09e58 Add Docker network validation and init-network command
- check: Validates external networks exist on configured hosts
- up: Pre-flight check blocks if networks missing on target host
- init-network: Creates Docker network with consistent subnet/gateway
  across hosts (default: mynetwork 172.20.0.0/16)

Networks defined as `external: true` in compose files are now
checked before starting or migrating services.
2025-12-14 12:06:36 -08:00
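As a sketch, this is the kind of declaration that gets validated (network name matches the command's default):

```yaml
# in a stack's compose file
networks:
  mynetwork:
    external: true   # must already exist on the target host; init-network can create it
```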
Bas Nijholt
16d517dcd0 docs: Update README and CLAUDE.md for redesigned check command 2025-12-14 10:56:04 -08:00
Bas Nijholt
5e8d09b010 Redesign check command: unified validation + host compatibility
Merged check-mounts into check command. Now provides:
- Config validation (compose files exist)
- Traefik label validation
- Mount path validation (SSH-based)
- Host compatibility matrix when checking specific services

Usage:
  cf check              # Full validation of all services
  cf check --local      # Skip SSH mount checks (fast)
  cf check jellyfin     # Check service + show which hosts can run it

Removed standalone check-mounts command (merged into check).
2025-12-14 10:43:34 -08:00
Bas Nijholt
6fc3535449 Add pre-flight mount check before migration
When migrating a service to a new host, check that all required volume
mount paths exist on the target host BEFORE running down on the old host.
This prevents failed migrations where the service is stopped but can't
start on the new host due to missing NFS mounts.
2025-12-14 10:30:56 -08:00
Bas Nijholt
9158dba0ce Add check-mounts command to verify NFS paths exist
New command to verify volume mount paths exist on target hosts before
migration. Parses bind mounts from compose files and SSHs to hosts to
check each path exists.

- check_paths_exist() in ssh.py: batch check multiple paths efficiently
- parse_host_volumes() in traefik.py: extract bind mount paths from compose
- check-mounts command in cli.py: groups by host, reports missing paths

Usage: cf check-mounts plex jellyfin
       cf check-mounts --all
2025-12-14 10:25:26 -08:00
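For example, a bind mount like this (hypothetical paths) is what gets parsed and then checked over SSH on the target host:

```yaml
services:
  plex:
    volumes:
      - /mnt/media:/data   # check-mounts verifies /mnt/media exists on the target host
```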
Bas Nijholt
7b2c431ca3 fix: Change whoami example port to 18082 to avoid conflicts 2025-12-14 09:46:20 -08:00
Bas Nijholt
9deb460cfc Add Traefik example to examples directory
- traefik/docker-compose.yml: Traefik with docker and file providers
- whoami/docker-compose.yml: Test service with Traefik labels
- Updated compose-farm.yaml with traefik_file auto-regeneration
- Updated README.md with Traefik usage instructions
2025-12-14 09:44:03 -08:00
Bas Nijholt
2ce6f2473b docs: Add Traefik config options to example 2025-12-14 01:19:13 -08:00
Bas Nijholt
04d8444168 docs: Use consistent server-1/server-2 naming in example config 2025-12-14 01:18:50 -08:00
Bas Nijholt
b539c4ba76 docs: Update CLAUDE.md with all modules and commands 2025-12-14 01:17:30 -08:00
Bas Nijholt
473bc089c7 docs: Use consistent server-1/server-2 naming throughout 2025-12-14 01:15:46 -08:00
Bas Nijholt
50f405eb77 docs: Use uv tool install for CLI tools 2025-12-14 01:14:12 -08:00
Bas Nijholt
fd0d3bcbcf docs: Use clearer host names in NFS example 2025-12-14 01:13:58 -08:00
Bas Nijholt
f2e8ab0387 docs: Recommend uv for installation 2025-12-14 01:13:24 -08:00
Bas Nijholt
dfbf2748c7 docs: Reorganize README for better flow 2025-12-14 01:12:09 -08:00
Bas Nijholt
57b0ba5916 CSS for logo 2025-12-14 00:59:59 -08:00
Bas Nijholt
e668fb0faf Add logo 2025-12-14 00:58:58 -08:00
Bas Nijholt
2702203cb5 fix: Handle non-string address in getaddrinfo result 2025-12-14 00:55:11 -08:00
Bas Nijholt
27f17a2451 Remove unused PortMapping.protocol field 2025-12-14 00:52:47 -08:00
Bas Nijholt
98c2492d21 docs: Add cf alias and check command to README 2025-12-14 00:41:26 -08:00
Bas Nijholt
04339cbb9a Group CLI commands into Lifecycle, Monitoring, Configuration 2025-12-14 00:37:18 -08:00
Bas Nijholt
cdb3b1d257 Show friendly error when config file not found
Instead of a Python traceback, display a clean error message with
the red ✗ symbol when the config file cannot be found.
2025-12-14 00:31:36 -08:00
Bas Nijholt
0913769729 Fix check command to validate all services with check_all flag 2025-12-14 00:23:23 -08:00
Bas Nijholt
3a1d5b77b5 Add traefik port validation to check command 2025-12-14 00:19:17 -08:00
Bas Nijholt
e12002ce86 Add test for network_mode: service:X port lookup 2025-12-14 00:03:11 -08:00
Bas Nijholt
676a6fe72d Support network_mode: service:X for port lookup in traefik config 2025-12-14 00:02:07 -08:00
Bas Nijholt
f29f8938fe Add -h as alias for --help 2025-12-13 23:56:33 -08:00
Bas Nijholt
4c0e147786 Escape log output to prevent Rich markup errors 2025-12-13 23:55:44 -08:00
Bas Nijholt
cba61118de Add cf alias for compose-farm command 2025-12-13 23:54:00 -08:00
Bas Nijholt
32dc6b3665 Skip empty lines in streaming output 2025-12-13 23:50:35 -08:00
Bas Nijholt
7d98e664e9 Auto-detect local IPs to skip SSH when on target host 2025-12-13 23:48:28 -08:00
Bas Nijholt
6763403700 Fix duplicate prefix before traefik config message 2025-12-13 23:46:41 -08:00
Bas Nijholt
feb0e13bfd Add check command to find missing services 2025-12-13 23:43:47 -08:00
Bas Nijholt
b86f6d190f Add Rich styling to CLI output
- Service names in cyan, host names in magenta
- Success checkmarks, warning/error symbols
- Colored sync diff indicators (+/-/~)
- Unicode arrows for migrations
2025-12-13 23:40:07 -08:00
Bas Nijholt
5ed15b5445 docs: Add Docker Swarm overlay network notes 2025-12-13 23:16:09 -08:00
Bas Nijholt
761b6dd2d1 Rename state file to compose-farm-state.yaml (not hidden) 2025-12-13 23:01:40 -08:00
Bas Nijholt
e86c2b6d47 docs: Simplify Traefik port requirement note 2025-12-13 22:59:50 -08:00
basnijholt
9353b74c35 chore(docs): update TOC 2025-12-14 06:58:15 +00:00
Bas Nijholt
b7e8e0f3a9 docs: Add limitations and best practices section
Documents cross-host networking limitations:
- Docker DNS doesn't work across hosts
- Dependent services should stay in same compose file
- Ports must be published for cross-host communication
2025-12-13 22:58:01 -08:00
Bas Nijholt
b6c02587bc Rename traefik_host to traefik_service
Instead of specifying the host directly, specify the service name
that runs Traefik. The host is then looked up from the services
mapping, avoiding redundancy.
2025-12-13 22:43:33 -08:00
Bas Nijholt
d412c42ca4 Store state file alongside config file
State is now stored at .compose-farm-state.yaml in the same
directory as the config file. This allows multiple compose-farm
setups with independent state.

State functions now require a Config parameter to locate the
state file via config.get_state_path().
2025-12-13 22:38:11 -08:00
Bas Nijholt
13e0adbbb9 Add traefik_host config to skip local services
When traefik_host is set, services on that host are skipped in
file-provider generation since Traefik's docker provider handles
them directly. This allows running compose-farm from any host
while still generating correct file-provider config.
2025-12-13 22:34:20 -08:00
Bas Nijholt
68c41eb37c Improve missing ports warning message
Replace technical "L3 reachability" phrasing with actionable
guidance: "Add a ports: mapping for cross-host routing."
2025-12-13 22:29:20 -08:00
Bas Nijholt
8af088bb5d Add traefik_file config for auto-regeneration
When traefik_file is set in config, compose-farm automatically
regenerates the Traefik file-provider config after up, down,
restart, and update commands. Eliminates the need to manually
run traefik-file after service changes.
2025-12-13 22:24:29 -08:00
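A minimal config sketch (the path mirrors the example config later in this diff):

```yaml
# compose-farm.yaml
traefik_file: /opt/traefik/dynamic.d/compose-farm.yml   # regenerated after up/down/restart/update
```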
Bas Nijholt
1308eeca12 fix: Skip local services in traefik-file generation
Local services (localhost, local, 127.0.0.1) are handled by Traefik's
docker provider directly. Generating file-provider entries for them
creates conflicting routes with broken localhost URLs (since Traefik
runs in a container where localhost is isolated).

Now traefik-file only generates config for remote services.
2025-12-13 19:51:57 -08:00
Bas Nijholt
a66a68f395 docs: Clarify no merge commits to main rule 2025-12-13 19:44:30 -08:00
Bas Nijholt
6ea25c862e docs: Add traefik directory merging instructions 2025-12-13 19:41:16 -08:00
Bas Nijholt
280524b546 docs: Use GitHub admonition for TL;DR 2025-12-13 19:34:31 -08:00
Bas Nijholt
db9360771b docs: Shorten TL;DR 2025-12-13 19:33:28 -08:00
Bas Nijholt
c7590ed0b7 docs: Move TOC below TL;DR 2025-12-13 19:32:41 -08:00
Bas Nijholt
bb563b9d4b docs: Add TL;DR to README 2025-12-13 19:31:05 -08:00
Bas Nijholt
fe160ee116 fix: Move traefik import to top-level 2025-12-13 17:07:29 -08:00
Bas Nijholt
4c7f49414f docs: Update README for sync command and auto-migration
- Replace snapshot with sync command
- Add auto-migration documentation
- Update compose file naming convention
2025-12-13 16:55:07 -08:00
Bas Nijholt
bebe5b34ba Merge snapshot into sync command
The sync command now performs both operations:
- Discovers running services and updates state.yaml
- Captures image digests and updates dockerfarm-log.toml

Removes the standalone snapshot command to keep the API simple.
2025-12-13 16:53:49 -08:00
Bas Nijholt
5d21e64781 Add sync command to discover running services and update state
The sync command queries all hosts to find where services are actually
running and updates the state file to match reality. Supports --dry-run
to preview changes without modifying state. Useful for initial setup
or after manual changes.
2025-12-13 15:58:29 -08:00
Bas Nijholt
114c7b6eb6 Add check_service_running for discovering running services
Adds a helper function to check if a service has running containers
on a specific host by executing `docker compose ps --status running -q`.
2025-12-13 15:58:29 -08:00
Bas Nijholt
20e281a23e Add tests for state module
Tests cover load, save, get, set, and remove operations
for service deployment state tracking.
2025-12-13 15:58:29 -08:00
Bas Nijholt
ec33d28d6c Add auto-migration support to up/down commands
- up: Detects if service is deployed on a different host and
  automatically runs down on the old host before up on the new
- down: Removes service from state tracking after successful stop
- Enables seamless service migration by just changing the config
2025-12-13 15:58:29 -08:00
Bas Nijholt
a818b7726e Add run_compose_on_host for cross-host operations
Allows running compose commands on a specific host rather than
the configured host for a service. Used for migration when
stopping a service on the old host before starting on the new.
2025-12-13 15:58:29 -08:00
Bas Nijholt
cead3904bf Add state module for tracking deployed services
Tracks which host each service is deployed on in
~/.config/compose-farm/state.yaml. This enables automatic
migration when a service's host assignment changes.
2025-12-13 15:58:29 -08:00
Bas Nijholt
8f5e14d621 Fix pre-commit issues 2025-12-13 14:54:28 -08:00
Bas Nijholt
ea220058ec Support multiple compose filename conventions
Try compose.yaml, compose.yml, docker-compose.yml, and
docker-compose.yaml when locating compose files for a service.
2025-12-13 14:52:43 -08:00
170 changed files with 22677 additions and 1202 deletions

6
.envrc.example Normal file

@@ -0,0 +1,6 @@
# Run containers as current user (preserves file ownership on NFS mounts)
# Copy this file to .envrc and run: direnv allow
export CF_UID=$(id -u)
export CF_GID=$(id -g)
export CF_HOME=$HOME
export CF_USER=$USER

2
.gitattributes vendored Normal file

@@ -0,0 +1,2 @@
*.gif filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text

88
.github/check_readme_commands.py vendored Executable file

@@ -0,0 +1,88 @@
#!/usr/bin/env python3
"""Check that all CLI commands are documented in the README."""
from __future__ import annotations

import re
import sys
from pathlib import Path
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import typer

from compose_farm.cli import app


def get_all_commands(typer_app: typer.Typer, prefix: str = "cf") -> set[str]:
    """Extract all command names from a Typer app, including nested subcommands."""
    commands = set()
    # Get registered commands (skip hidden ones like aliases)
    for command in typer_app.registered_commands:
        if command.hidden:
            continue
        name = command.name
        if not name and command.callback:
            name = getattr(command.callback, "__name__", None)
        if name:
            commands.add(f"{prefix} {name}")
    # Get registered sub-apps (like 'config')
    for group in typer_app.registered_groups:
        sub_app = group.typer_instance
        sub_name = group.name
        if sub_app and sub_name:
            commands.add(f"{prefix} {sub_name}")
    # Don't recurse into subcommands - we only document the top-level subcommand
    return commands


def get_documented_commands(readme_path: Path) -> set[str]:
    """Extract commands documented in README from help output sections."""
    content = readme_path.read_text()
    # Match patterns like: <code>cf command --help</code>
    pattern = r"<code>(cf\s+[\w-]+)\s+--help</code>"
    matches = re.findall(pattern, content)
    return set(matches)


def main() -> int:
    """Check that all CLI commands are documented in the README."""
    readme_path = Path(__file__).parent.parent / "README.md"
    if not readme_path.exists():
        print(f"ERROR: README.md not found at {readme_path}")
        return 1
    cli_commands = get_all_commands(app)
    documented_commands = get_documented_commands(readme_path)
    # Also check for the main 'cf' help
    if "<code>cf --help</code>" in readme_path.read_text():
        documented_commands.add("cf")
        cli_commands.add("cf")
    missing = cli_commands - documented_commands
    extra = documented_commands - cli_commands
    if missing or extra:
        if missing:
            print("ERROR: Commands missing from README --help documentation:")
            for cmd in sorted(missing):
                print(f" - {cmd}")
        if extra:
            print("WARNING: Commands documented but not in CLI:")
            for cmd in sorted(extra):
                print(f" - {cmd}")
        return 1
    print(f"✓ All {len(cli_commands)} commands documented in README")
    return 0


if __name__ == "__main__":
    sys.exit(main())


@@ -16,10 +16,10 @@ jobs:
python-version: ["3.11", "3.12", "3.13"]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Install uv
uses: astral-sh/setup-uv@v6
uses: astral-sh/setup-uv@v7
- name: Set up Python ${{ matrix.python-version }}
run: uv python install ${{ matrix.python-version }}
@@ -27,8 +27,8 @@ jobs:
- name: Install dependencies
run: uv sync --all-extras --dev
- name: Run tests
run: uv run pytest
- name: Run tests (excluding browser tests)
run: uv run pytest -m "not browser"
- name: Upload coverage reports to Codecov
if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.13'
@@ -36,13 +36,33 @@ jobs:
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
browser-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- name: Install uv
uses: astral-sh/setup-uv@v7
- name: Set up Python
run: uv python install 3.13
- name: Install dependencies
run: uv sync --all-extras --dev
- name: Install Playwright browsers
run: uv run playwright install chromium --with-deps
- name: Run browser tests
run: uv run pytest -m browser -n auto -v
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Install uv
uses: astral-sh/setup-uv@v6
uses: astral-sh/setup-uv@v7
- name: Set up Python
run: uv python install 3.12
@@ -50,11 +70,5 @@ jobs:
- name: Install dependencies
run: uv sync --all-extras --dev
- name: Run ruff check
run: uv run ruff check .
- name: Run ruff format check
run: uv run ruff format --check .
- name: Run mypy
run: uv run mypy src/compose_farm
- name: Run pre-commit (via prek)
uses: j178/prek-action@v1

92
.github/workflows/docker.yml vendored Normal file

@@ -0,0 +1,92 @@
name: Build and Push Docker Image

on:
  workflow_run:
    workflows: ["Upload Python Package"]
    types: [completed]
  workflow_dispatch:
    inputs:
      version:
        description: 'Version to build (leave empty for latest)'
        required: false

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    # Only run if PyPI upload succeeded (or manual dispatch)
    if: ${{ github.event_name == 'workflow_dispatch' || github.event.workflow_run.conclusion == 'success' }}
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout
        uses: actions/checkout@v6
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract version
        id: version
        run: |
          if [ "${{ github.event_name }}" = "workflow_run" ]; then
            # Get version from the tag that triggered the release
            VERSION="${{ github.event.workflow_run.head_branch }}"
            # Strip 'v' prefix if present
            VERSION="${VERSION#v}"
          elif [ -n "${{ github.event.inputs.version }}" ]; then
            VERSION="${{ github.event.inputs.version }}"
          else
            VERSION=""
          fi
          echo "version=$VERSION" >> $GITHUB_OUTPUT
      - name: Wait for PyPI
        if: steps.version.outputs.version != ''
        run: |
          VERSION="${{ steps.version.outputs.version }}"
          echo "Waiting for compose-farm==$VERSION on PyPI..."
          for i in {1..30}; do
            if curl -sf "https://pypi.org/pypi/compose-farm/$VERSION/json" > /dev/null; then
              echo "✓ Version $VERSION available on PyPI"
              exit 0
            fi
            echo "Attempt $i: not yet available, waiting 10s..."
            sleep 10
          done
          echo "✗ Timeout waiting for PyPI"
          exit 1
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=semver,pattern={{version}},value=v${{ steps.version.outputs.version }}
            type=semver,pattern={{major}}.{{minor}},value=v${{ steps.version.outputs.version }}
            type=semver,pattern={{major}},value=v${{ steps.version.outputs.version }}
            type=raw,value=latest
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          build-args: |
            VERSION=${{ steps.version.outputs.version }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

66
.github/workflows/docs.yml vendored Normal file

@@ -0,0 +1,66 @@
name: Docs

on:
  push:
    branches: [main]
    paths:
      - "docs/**"
      - "zensical.toml"
      - ".github/workflows/docs.yml"
  pull_request:
    paths:
      - "docs/**"
      - "zensical.toml"
      - ".github/workflows/docs.yml"
  workflow_dispatch:

permissions:
  contents: read
  pages: write
  id-token: write

concurrency:
  group: "pages-${{ github.ref }}"
  cancel-in-progress: true

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
        with:
          lfs: true
      - name: Install uv
        uses: astral-sh/setup-uv@v4
      - name: Set up Python
        run: uv python install 3.12
      - name: Install Zensical
        run: uv tool install zensical
      - name: Build docs
        run: zensical build
      - name: Setup Pages
        if: github.event_name != 'pull_request'
        uses: actions/configure-pages@v5
      - name: Upload artifact
        if: github.event_name != 'pull_request'
        uses: actions/upload-pages-artifact@v4
        with:
          path: "./site"

  deploy:
    if: github.event_name != 'pull_request'
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4


@@ -13,9 +13,9 @@ jobs:
permissions:
id-token: write
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Install uv
uses: astral-sh/setup-uv@v6
uses: astral-sh/setup-uv@v7
- name: Build
run: uv build
- name: Publish package distributions to PyPI


@@ -11,16 +11,16 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out repository
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
persist-credentials: false
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v5
uses: actions/setup-python@v6
- name: Install uv
uses: astral-sh/setup-uv@v6
uses: astral-sh/setup-uv@v7
- name: Run markdown-code-runner
env:

3
.gitignore vendored

@@ -42,3 +42,6 @@ htmlcov/
compose-farm.yaml
!examples/compose-farm.yaml
coverage.xml
.env
homepage/
site/


@@ -1,4 +1,13 @@
repos:
- repo: local
hooks:
- id: check-readme-commands
name: Check README documents all CLI commands
entry: uv run python .github/check_readme_commands.py
language: system
files: ^(README\.md|src/compose_farm/cli/.*)$
pass_filenames: false
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0
hooks:
@@ -10,18 +19,24 @@ repos:
- id: debug-statements
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.8.4
rev: v0.14.9
hooks:
- id: ruff
args: [--fix]
- id: ruff-format
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.14.0
- repo: local
hooks:
- id: mypy
additional_dependencies:
- pydantic>=2.0.0
- typer>=0.9.0
- asyncssh>=2.14.0
- types-PyYAML
name: mypy (type checker)
entry: uv run mypy src tests
language: system
types: [python]
pass_filenames: false
- id: ty
name: ty (type checker)
entry: uv run ty check
language: system
types: [python]
pass_filenames: false

117
.prompts/docs-review.md Normal file

@@ -0,0 +1,117 @@
Review documentation for accuracy, completeness, and consistency. Focus on things that require judgment—automated checks handle the rest.
## What's Already Automated
Don't waste time on these—CI and pre-commit hooks handle them:
- **README help output**: `markdown-code-runner` regenerates `cf --help` blocks in CI
- **README command table**: Pre-commit hook verifies commands are listed
- **Linting/formatting**: Handled by pre-commit
## What This Review Is For
Focus on things that require judgment:
1. **Accuracy**: Does the documentation match what the code actually does?
2. **Completeness**: Are there undocumented features, options, or behaviors?
3. **Clarity**: Would a new user understand this? Are examples realistic?
4. **Consistency**: Do different docs contradict each other?
5. **Freshness**: Has the code changed in ways the docs don't reflect?
## Review Process
### 1. Check Recent Changes
```bash
# What changed recently that might need doc updates?
git log --oneline -20 | grep -iE "feat|fix|add|remove|change|option"
# What code files changed?
git diff --name-only HEAD~20 | grep "\.py$"
```
Look for new features, changed defaults, renamed options, or removed functionality.
### 2. Verify docs/commands.md Options Tables
The README auto-updates help output, but `docs/commands.md` has **manually maintained options tables**. These can drift.
For each command's options table, compare against `cf <command> --help`:
- Are all options listed?
- Are short flags correct?
- Are defaults accurate?
- Are descriptions accurate?
**Pay special attention to subcommands** (`cf config *`, `cf ssh *`)—these have their own options that are easy to miss.
### 3. Verify docs/configuration.md
Compare against Pydantic models in the source:
```bash
# Find the config models
grep -r "class.*BaseModel" src/ --include="*.py" -A 15
```
Check:
- All config keys documented
- Types and defaults match code
- Config file search order is accurate
- Example YAML would actually work
### 4. Verify docs/architecture.md
```bash
# What source files actually exist?
git ls-files "src/**/*.py"
```
Check:
- Listed files exist
- No files are missing from the list
- Descriptions match what the code does
### 5. Check Examples
For examples in any doc:
- Would the YAML/commands actually work?
- Are service names, paths, and options realistic?
- Do examples use current syntax (not deprecated options)?
### 6. Cross-Reference Consistency
The same info appears in multiple places. Check for conflicts:
- README.md vs docs/index.md
- docs/commands.md vs CLAUDE.md command tables
- Config examples across different docs
### 7. Self-Check This Prompt
This prompt can become outdated too. If you notice:
- New automated checks that should be listed above
- New doc files that need review guidelines
- Patterns that caused issues
Include prompt updates in your fixes.
## Output Format
Categorize findings:
1. **Critical**: Wrong info that would break user workflows
2. **Inaccuracy**: Technical errors (wrong defaults, paths, types)
3. **Missing**: Undocumented features or options
4. **Outdated**: Was true, no longer is
5. **Inconsistency**: Docs contradict each other
6. **Minor**: Typos, unclear wording
For each issue, provide a ready-to-apply fix:
```
### Issue: [Brief description]
- **File**: docs/commands.md:652
- **Problem**: `cf ssh setup` has `--config` option but it's not documented
- **Fix**: Add `--config, -c PATH` to the options table
- **Verify**: `cf ssh setup --help`
```

15
.prompts/pr-review.md Normal file

@@ -0,0 +1,15 @@
Review the pull request for:
- **Code cleanliness**: Is the implementation clean and well-structured?
- **DRY principle**: Does it avoid duplication?
- **Code reuse**: Are there parts that should be reused from other places?
- **Organization**: Is everything in the right place?
- **Consistency**: Is it in the same style as other parts of the codebase?
- **Simplicity**: Is it not over-engineered? Remember KISS and YAGNI. No dead code paths and NO defensive programming.
- **User experience**: Does it provide a good user experience?
- **PR**: Is the PR description and title clear and informative?
- **Tests**: Are there tests, and do they cover the changes adequately? Are they testing something meaningful or are they just trivial?
- **Live tests**: Test the changes in a REAL live environment to ensure they work as expected, use the config in `/opt/stacks/compose-farm.yaml`.
- **Rules**: Does the code follow the project's coding standards and guidelines as laid out in @CLAUDE.md?
Look at `git diff origin/main..HEAD` for the changes made in this pull request.

51
.prompts/update-demos.md Normal file
View File

@@ -0,0 +1,51 @@
Update demo recordings to match the current compose-farm.yaml configuration.
## Key Gotchas
1. **Never `git checkout` without asking** - check for uncommitted changes first
2. **Prefer `nas` stacks** - demos run locally on nas, SSH adds latency
3. **Terminal captures keyboard** - use `blur()` to release focus before command palette
4. **Clicking sidebar navigates away** - clicking h1 scrolls to top
5. **Buttons have icons, not text** - use `[data-tip="..."]` selectors
6. **`record.py` auto-restores config** - no manual cleanup needed after CLI demos
## Stacks Used in Demos
| Stack | CLI Demos | Web Demos | Notes |
|-------|-----------|-----------|-------|
| `audiobookshelf` | quickstart, migration, apply | - | Migrates nas→anton |
| `grocy` | update | navigation, stack, workflow, console | - |
| `immich` | logs, compose | shell | Multiple containers |
| `dozzle` | - | workflow | - |
## CLI Demos
**Files:** `docs/demos/cli/*.tape`
Check:
- `quickstart.tape`: `bat -r` line ranges match current config structure
- `migration.tape`: nvim keystrokes work, stack exists on nas
- `compose.tape`: exec commands produce meaningful output
Run: `python docs/demos/cli/record.py [demo]`
## Web Demos
**Files:** `docs/demos/web/demo_*.py`
Check:
- Stack names in demos still exist in config
- Selectors match current templates (grep for IDs in `templates/`)
- Shell demo uses command palette for ALL navigation
Run: `python docs/demos/web/record.py [demo]`
## Before Recording
```bash
# Check for uncommitted config changes
git -C /opt/stacks diff compose-farm.yaml
# Verify stacks are running
cf ps audiobookshelf grocy immich dozzle
```

138
CLAUDE.md

@@ -9,20 +9,86 @@
## Architecture
```
compose_farm/
├── config.py # Pydantic models, YAML loading
├── ssh.py # asyncssh execution, streaming
└── cli.py # Typer commands
src/compose_farm/
├── cli/ # CLI subpackage
│ ├── __init__.py # Imports modules to trigger command registration
│ ├── app.py # Shared Typer app instance, version callback
│ ├── common.py # Shared helpers, options, progress bar utilities
│ ├── config.py # Config subcommand (init, show, path, validate, edit, symlink)
│ ├── lifecycle.py # up, down, stop, pull, restart, update, apply, compose commands
│ ├── management.py # refresh, check, init-network, traefik-file commands
│ ├── monitoring.py # logs, ps, stats commands
│ ├── ssh.py # SSH key management (setup, status, keygen)
│ └── web.py # Web UI server command
├── config.py # Pydantic models, YAML loading
├── compose.py # Compose file parsing (.env, ports, volumes, networks)
├── console.py # Shared Rich console instances
├── executor.py # SSH/local command execution, streaming output
├── operations.py # Business logic (up, migrate, discover, preflight checks)
├── state.py # Deployment state tracking (which stack on which host)
├── logs.py # Image digest snapshots (dockerfarm-log.toml)
├── paths.py # Path utilities, config file discovery
├── ssh_keys.py # SSH key path constants and utilities
├── traefik.py # Traefik file-provider config generation from labels
└── web/ # Web UI (FastAPI + HTMX)
```
## Web UI Icons
Icons use [Lucide](https://lucide.dev/). Add new icons as macros in `web/templates/partials/icons.html` by copying SVG paths from their site. The `action_btn`, `stat_card`, and `collapse` macros in `components.html` accept an optional `icon` parameter.
## HTMX Patterns
- **Multi-element refresh**: Use custom events, not `hx-swap-oob`. Elements have `hx-trigger="cf:refresh from:body"` and JS calls `document.body.dispatchEvent(new CustomEvent('cf:refresh'))`. Simpler to debug/test.
- **SPA navigation**: Sidebar uses `hx-boost="true"` to AJAX-ify links.
- **Attribute inheritance**: Set `hx-target`/`hx-swap` on parent elements.
## Key Design Decisions
1. **asyncssh over Paramiko/Fabric**: Native async support, built-in streaming
2. **Parallel by default**: Multiple services run concurrently via `asyncio.gather`
3. **Streaming output**: Real-time stdout/stderr with `[service]` prefix
1. **Hybrid SSH approach**: asyncssh for parallel streaming with prefixes; native `ssh -t` for raw mode (progress bars)
2. **Parallel by default**: Multiple stacks run concurrently via `asyncio.gather`
3. **Streaming output**: Real-time stdout/stderr with `[stack]` prefix using Rich
4. **SSH key auth only**: Uses ssh-agent, no password handling (YAGNI)
5. **NFS assumption**: Compose files at same path on all hosts
6. **Local execution**: When host is `localhost`/`local`, skip SSH and run locally
6. **Local IP auto-detection**: Skips SSH when target host matches local machine's IP
7. **State tracking**: Tracks where stacks are deployed for auto-migration
8. **Pre-flight checks**: Verifies NFS mounts and Docker networks exist before starting/migrating
## Code Style
- **Imports at top level**: Never add imports inside functions unless they are explicitly marked with `# noqa: PLC0415` and a comment explaining it speeds up CLI startup. Heavy modules like `pydantic`, `yaml`, and `rich.table` are lazily imported to keep `cf --help` fast.
## Development Commands
Use `just` for common tasks. Run `just` to list available commands:
| Command | Description |
|---------|-------------|
| `just install` | Install dev dependencies |
| `just test` | Run all tests |
| `just test-cli` | Run CLI tests (parallel) |
| `just test-web` | Run web UI tests (parallel) |
| `just lint` | Lint, format, and type check |
| `just web` | Start web UI (port 9001) |
| `just doc` | Build and serve docs (port 9002) |
| `just clean` | Clean build artifacts |
## Testing
Run tests with `just test` or `uv run pytest`. Browser tests require Chromium (system-installed or via `playwright install chromium`):
```bash
# Unit tests only (parallel)
uv run pytest -m "not browser" -n auto
# Browser tests only (parallel)
uv run pytest -m browser -n auto
# All tests
uv run pytest
```
Browser tests are marked with `@pytest.mark.browser`. They use Playwright to test HTMX behavior, JavaScript functionality (sidebar filter, command palette, terminals), and content stability during navigation.
## Communication Notes
@@ -31,17 +97,53 @@ compose_farm/
## Git Safety
- Never amend commits.
- Never merge into a branch; prefer fast-forward or rebase as directed.
- **NEVER merge anything into main.** Always commit directly or use fast-forward/rebase.
- Never force push.
## Pull Requests
- Never include unchecked checklists (e.g., `- [ ] ...`) in PR descriptions. Either omit the checklist or use checked items.
- **NEVER run `gh pr merge`**. PRs are merged via the GitHub UI, not the CLI.
## Releases
Use `gh release create` to create releases. The tag is created automatically.
```bash
# Check current version
git tag --sort=-v:refname | head -1
# Create release (minor version bump: v0.21.1 -> v0.22.0)
gh release create v0.22.0 --title "v0.22.0" --notes "release notes here"
```
Versioning:
- **Patch** (v0.21.0 → v0.21.1): Bug fixes
- **Minor** (v0.21.1 → v0.22.0): New features, non-breaking changes
Write release notes manually describing what changed. Group by features and bug fixes.
## Commands Quick Reference
| Command | Docker Compose Equivalent |
|---------|--------------------------|
| `up` | `docker compose up -d` |
| `down` | `docker compose down` |
| `pull` | `docker compose pull` |
| `restart` | `down` + `up -d` |
| `update` | `pull` + `down` + `up -d` |
| `logs` | `docker compose logs` |
| `ps` | `docker compose ps` |
CLI available as `cf` or `compose-farm`.
| Command | Description |
|---------|-------------|
| `up` | Start stacks (`docker compose up -d`), auto-migrates if host changed |
| `down` | Stop stacks (`docker compose down`). Use `--orphaned` to stop stacks removed from config |
| `stop` | Stop services without removing containers (`docker compose stop`) |
| `pull` | Pull latest images |
| `restart` | `down` + `up -d` |
| `update` | `pull` + `build` + `down` + `up -d` |
| `apply` | Make reality match config: migrate stacks + stop orphans. Use `--dry-run` to preview |
| `compose` | Run any docker compose command on a stack (passthrough) |
| `logs` | Show stack logs |
| `ps` | Show status of all stacks |
| `stats` | Show overview (hosts, stacks, pending migrations; `--live` for container counts) |
| `refresh` | Update state from reality: discover running stacks, capture image digests |
| `check` | Validate config, traefik labels, mounts, networks; show host compatibility |
| `init-network` | Create Docker network on hosts with consistent subnet/gateway |
| `traefik-file` | Generate Traefik file-provider config from compose labels |
| `config` | Manage config files (init, show, path, validate, edit, symlink) |
| `ssh` | Manage SSH keys (setup, status, keygen) |
| `web` | Start web UI server |

28
Dockerfile Normal file

@@ -0,0 +1,28 @@
# syntax=docker/dockerfile:1
# Build stage - install with uv
FROM ghcr.io/astral-sh/uv:python3.14-alpine AS builder
ARG VERSION
RUN uv tool install --compile-bytecode "compose-farm[web]${VERSION:+==$VERSION}"
# Runtime stage - minimal image without uv
FROM python:3.14-alpine
# Install only runtime requirements
RUN apk add --no-cache openssh-client
# Copy installed tool virtualenv and bin symlinks from builder
COPY --from=builder /root/.local/share/uv/tools/compose-farm /root/.local/share/uv/tools/compose-farm
COPY --from=builder /usr/local/bin/cf /usr/local/bin/compose-farm /usr/local/bin/
# Allow non-root users to access the installed tool
# (required when running with user: "${CF_UID:-0}:${CF_GID:-0}")
RUN chmod 755 /root
# Allow non-root users to add passwd entries (required for SSH)
RUN chmod 666 /etc/passwd
# Entrypoint creates /etc/passwd entry for non-root UIDs (required for SSH)
ENTRYPOINT ["sh", "-c", "[ $(id -u) != 0 ] && echo ${USER:-u}:x:$(id -u):$(id -g)::${HOME:-/}:/bin/sh >> /etc/passwd; exec cf \"$@\"", "--"]
CMD ["--help"]

21
LICENSE Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 Bas Nijholt
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

35
PLAN.md

@@ -1,35 +0,0 @@
# Compose Farm Traefik Multihost Ingress Plan
## Goal
Generate a Traefik file-provider fragment from existing docker-compose Traefik labels (no config duplication) so a single front-door Traefik on 192.168.1.66 with wildcard `*.lab.mydomain.org` can route to services running on other hosts. Keep the current simplicity (SSH + docker compose); no Swarm/K8s.
## Requirements
- Traefik stays on main host; keep current `dynamic.yml` and Docker provider for local containers.
- Add a watched directory provider (any path works) and load a generated fragment (e.g., `compose-farm.generated.yml`).
- No edits to compose files: reuse existing `traefik.*` labels as the single source of truth; Compose Farm only reads them.
- Generator infers routing from labels and reachability from `ports:` mappings; prefer host-published ports so Traefik can reach services across hosts. Upstreams point to `<host address>:<published host port>`; warn if no published port is found.
- Only minimal data in `compose-farm.yaml`: hosts map and service→host mapping (already present).
- No new orchestration/discovery layers; respect KISS/YAGNI/DRY.
## Non-Goals
- No Swarm/Kubernetes adoption.
- No global Docker provider across hosts.
- No health checks/service discovery layer.
## Current State (Dec 2025)
- Compose Farm: Typer CLI wrapping `docker compose` over SSH; config in `compose-farm.yaml`; parallel by default; snapshot/log tooling present.
- Traefik: single instance on 192.168.1.66, wildcard `*.lab.mydomain.org`, Docker provider for local services, file provider via `dynamic.yml` already in use.
## Proposed Implementation Steps
1) Add generator command: `compose-farm traefik-file --output <path>`.
2) Resolve per-service host from `compose-farm.yaml`; read compose file at `{compose_dir}/{service}/docker-compose.yml`.
3) Parse `traefik.*` labels to build routers/services/middlewares as in compose; map container port to published host port (from `ports:`) to form upstream URLs with host address.
4) Emit file-provider YAML to the watched directory (recommended default: `/mnt/data/traefik/dynamic.d/compose-farm.generated.yml`, but user chooses via `--output`).
5) Warnings: if no published port is found, warn that cross-host reachability requires L3 reachability to container IPs.
6) Tests: label parsing, port mapping, YAML render; scenario with published port; scenario without published port.
7) Docs: update README/CLAUDE to describe directory provider flags and the generator workflow; note that compose files remain unchanged.
## Open Questions
- How to derive target host address: use `hosts.<name>.address` verbatim, or allow override per service? (Default: use host address.)
- Should we support multiple hosts/backends per service for LB/HA? (Start with single server.)
- Where to store generated file by default? (Default to user-specified `--output`; maybe fallback to `./compose-farm-traefik.yml`.)

1312
README.md

File diff suppressed because it is too large


@@ -3,23 +3,28 @@
compose_dir: /opt/compose
# Optional: Auto-regenerate Traefik file-provider config after up/down/restart/update
traefik_file: /opt/traefik/dynamic.d/compose-farm.yml
traefik_stack: traefik # Skip stacks on same host (docker provider handles them)
hosts:
# Full form with all options
nas01:
server-1:
address: 192.168.1.10
user: docker
port: 22
# Short form (just address, user defaults to current user)
nas02: 192.168.1.11
server-2: 192.168.1.11
# Local execution (no SSH)
local: localhost
services:
# Map service names to hosts
# Compose file expected at: {compose_dir}/{service}/docker-compose.yml
plex: nas01
jellyfin: nas02
sonarr: nas01
radarr: nas02
stacks:
# Map stack names to hosts
# Compose file expected at: {compose_dir}/{stack}/compose.yaml
traefik: server-1 # Traefik runs here
plex: server-2 # Stacks on other hosts get file-provider entries
jellyfin: server-2
grafana: server-1
nextcloud: local

68
docker-compose.yml Normal file

@@ -0,0 +1,68 @@
services:
  cf:
    image: ghcr.io/basnijholt/compose-farm:latest
    # Run as current user to preserve file ownership on mounted volumes
    # Set CF_UID=$(id -u) CF_GID=$(id -g) in your environment or .env file
    # Defaults to root (0:0) for backwards compatibility
    user: "${CF_UID:-0}:${CF_GID:-0}"
    volumes:
      - ${SSH_AUTH_SOCK}:/ssh-agent:ro
      # Compose directory (contains compose files AND compose-farm.yaml config)
      - ${CF_COMPOSE_DIR:-/opt/stacks}:${CF_COMPOSE_DIR:-/opt/stacks}
      # SSH keys for passwordless auth (generated by `cf ssh setup`)
      # Choose ONE option below (use the same option for both cf and web services):
      # Option 1: Host path (default) - keys at ~/.ssh/compose-farm/id_ed25519
      - ${CF_SSH_DIR:-~/.ssh/compose-farm}:${CF_HOME:-/root}/.ssh/compose-farm
      # Option 2: Named volume - managed by Docker, shared between services
      # - cf-ssh:${CF_HOME:-/root}/.ssh
    environment:
      - SSH_AUTH_SOCK=/ssh-agent
      # Config file path (state stored alongside it)
      - CF_CONFIG=${CF_COMPOSE_DIR:-/opt/stacks}/compose-farm.yaml
      # HOME must match the user running the container for SSH to find keys
      - HOME=${CF_HOME:-/root}
      # USER is required for SSH when running as non-root (UID not in /etc/passwd)
      - USER=${CF_USER:-root}

  web:
    image: ghcr.io/basnijholt/compose-farm:latest
    restart: unless-stopped
    command: web --host 0.0.0.0 --port 9000
    # Run as current user to preserve file ownership on mounted volumes
    user: "${CF_UID:-0}:${CF_GID:-0}"
    volumes:
      - ${SSH_AUTH_SOCK}:/ssh-agent:ro
      - ${CF_COMPOSE_DIR:-/opt/stacks}:${CF_COMPOSE_DIR:-/opt/stacks}
      # SSH keys - use the SAME option as cf service above
      # Option 1: Host path (default)
      - ${CF_SSH_DIR:-~/.ssh/compose-farm}:${CF_HOME:-/root}/.ssh/compose-farm
      # Option 2: Named volume
      # - cf-ssh:${CF_HOME:-/root}/.ssh
      # XDG config dir for backups and image digest logs (persists across restarts)
      - ${CF_XDG_CONFIG:-~/.config/compose-farm}:${CF_HOME:-/root}/.config/compose-farm
    environment:
      - SSH_AUTH_SOCK=/ssh-agent
      - CF_CONFIG=${CF_COMPOSE_DIR:-/opt/stacks}/compose-farm.yaml
      # Used to detect self-updates and run via SSH to survive container restart
      - CF_WEB_STACK=compose-farm
      # HOME must match the user running the container for SSH to find keys
      - HOME=${CF_HOME:-/root}
      # USER is required for SSH when running as non-root (UID not in /etc/passwd)
      - USER=${CF_USER:-root}
    labels:
      - traefik.enable=true
      - traefik.http.routers.compose-farm.rule=Host(`compose-farm.${DOMAIN}`)
      - traefik.http.routers.compose-farm.entrypoints=websecure
      - traefik.http.routers.compose-farm-local.rule=Host(`compose-farm.local`)
      - traefik.http.routers.compose-farm-local.entrypoints=web
      - traefik.http.services.compose-farm.loadbalancer.server.port=9000
    networks:
      - mynetwork

networks:
  mynetwork:
    external: true

volumes:
  cf-ssh:
    # Only used if Option 2 is selected above

1
docs/CNAME Normal file
View File

@@ -0,0 +1 @@
compose-farm.nijho.lt

345
docs/architecture.md Normal file
View File

@@ -0,0 +1,345 @@
---
icon: lucide/layers
---
# Architecture
This document explains how Compose Farm works under the hood.
## Design Philosophy
Compose Farm follows three core principles:
1. **KISS** - Keep it simple. It's a thin wrapper around `docker compose` over SSH.
2. **YAGNI** - No orchestration, no service discovery, no health checks until needed.
3. **Zero changes** - Your existing compose files work unchanged.
## High-Level Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ Compose Farm CLI │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────────────┐ │
│ │ Config │ │ State │ │Operations│ │ Executor │ │
│ │ Parser │ │ Tracker │ │ Logic │ │ (SSH/Local) │ │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────────┬─────────┘ │
└───────┼─────────────┼─────────────┼─────────────────┼───────────┘
│ │ │ │
▼ ▼ ▼ ▼
┌───────────────────────────────────────────────────────────────┐
│ SSH / Local │
└───────────────────────────────────────────────────────────────┘
│ │
▼ ▼
┌───────────────┐ ┌───────────────┐
│ Host: nuc │ │ Host: hp │
│ │ │ │
│ docker compose│ │ docker compose│
│ up -d │ │ up -d │
└───────────────┘ └───────────────┘
```
## Core Components
### Configuration (`src/compose_farm/config.py`)
Pydantic models for YAML configuration:
- **Config** - Root configuration with compose_dir, hosts, stacks
- **Host** - Host address, SSH user, and port
Key features:
- Validation with Pydantic
- Multi-host stack expansion (`all` → list of hosts)
- YAML loading with sensible defaults
### State Tracking (`src/compose_farm/state.py`)
Tracks deployment state in `compose-farm-state.yaml` (stored alongside the config file):
```yaml
deployed:
plex: nuc
grafana: nuc
```
Used for:
- Detecting migrations (stack moved to different host)
- Identifying orphans (stacks removed from config)
- `cf ps` status display
### Operations (`src/compose_farm/operations.py`)
Business logic for stack operations:
- **up** - Start stack, handle migration if needed
- **down** - Stop stack
- **preflight checks** - Verify mounts, networks exist before operations
- **discover** - Find running stacks on hosts
- **migrate** - Down on old host, up on new host
### Executor (`src/compose_farm/executor.py`)
SSH and local command execution:
- **Hybrid SSH approach**: asyncssh for parallel streaming, native `ssh -t` for raw mode
- **Parallel by default**: Multiple stacks via `asyncio.gather`
- **Streaming output**: Real-time stdout/stderr with `[stack]` prefix
- **Local detection**: Skips SSH when target matches local machine IP
### CLI (`src/compose_farm/cli/`)
Typer-based CLI with subcommand modules:
```
cli/
├── app.py # Shared Typer app, version callback
├── common.py # Shared helpers, options, progress utilities
├── config.py # config subcommand (init, show, path, validate, edit, symlink)
├── lifecycle.py # up, down, stop, pull, restart, update, apply, compose
├── management.py # refresh, check, init-network, traefik-file
├── monitoring.py # logs, ps, stats
├── ssh.py # SSH key management (setup, status, keygen)
└── web.py # Web UI server command
```
## Command Flow
### cf up plex
```
1. Load configuration
└─► Parse compose-farm.yaml
└─► Validate stack exists
2. Check state
└─► Load state.yaml
└─► Is plex already running?
└─► Is it on a different host? (migration needed)
3. Pre-flight checks
└─► SSH to target host
└─► Check compose file exists
└─► Check required mounts exist
└─► Check required networks exist
4. Execute migration (if needed)
└─► SSH to old host
└─► Run: docker compose down
5. Start stack
└─► SSH to target host
└─► cd /opt/compose/plex
└─► Run: docker compose up -d
6. Update state
└─► Write new state to state.yaml
7. Generate Traefik config (if configured)
└─► Regenerate traefik file-provider
```
### cf apply
```
1. Load configuration and state
2. Compute diff
├─► Orphans: in state, not in config
├─► Migrations: in both, different host
└─► Missing: in config, not in state
3. Stop orphans
└─► For each orphan: cf down
4. Migrate stacks
└─► For each migration: down old, up new
5. Start missing
└─► For each missing: cf up
6. Update state
```
## SSH Execution
### Parallel Streaming (asyncssh)
For most operations, Compose Farm uses asyncssh:
```python
async def run_command(host, command):
async with asyncssh.connect(host) as conn:
result = await conn.run(command)
return result.stdout, result.stderr
```
Multiple stacks run concurrently via `asyncio.gather`.
### Raw Mode (native ssh)
For commands needing PTY (progress bars, interactive):
```bash
ssh -t user@host "docker compose pull"
```
### Local Detection
When target host IP matches local machine:
```python
if is_local(host_address):
# Run locally, no SSH
subprocess.run(command)
else:
# SSH to remote
ssh.run(command)
```
## State Management
### State File
Location: `compose-farm-state.yaml` (stored alongside the config file)
```yaml
deployed:
plex: nuc
grafana: nuc
```
Image digests are stored separately in `dockerfarm-log.toml` (also in the config directory).
### State Transitions
```
Config Change State Change Action
─────────────────────────────────────────────────────
Add stack Missing cf up
Remove stack Orphaned cf down
Change host Migration down old, up new
No change No change none (or refresh)
```
### cf refresh
Syncs state with reality by querying Docker on each host:
```bash
docker ps --format '{{.Names}}'
```
Updates state.yaml to match what's actually running.
## Compose File Discovery
For each stack, Compose Farm looks for compose files in:
```
{compose_dir}/{stack}/
├── compose.yaml # preferred
├── compose.yml
├── docker-compose.yml
└── docker-compose.yaml
```
First match wins.
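A sketch of that lookup (illustrative only; the real check runs on the target host, typically over SSH):
```python
from pathlib import Path

COMPOSE_NAMES = ("compose.yaml", "compose.yml", "docker-compose.yml", "docker-compose.yaml")


def find_compose_file(compose_dir: Path, stack: str) -> Path | None:
    """Return the first matching compose file for a stack, or None if absent."""
    for name in COMPOSE_NAMES:
        candidate = compose_dir / stack / name
        if candidate.is_file():
            return candidate
    return None
```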
## Traefik Integration
### Label Extraction
Compose Farm parses Traefik labels from compose files:
```yaml
stacks:
plex:
labels:
- traefik.enable=true
- traefik.http.routers.plex.rule=Host(`plex.example.com`)
- traefik.http.services.plex.loadbalancer.server.port=32400
```
### File Provider Generation
Converts labels to Traefik file-provider YAML:
```yaml
http:
routers:
plex:
rule: Host(`plex.example.com`)
service: plex
services:
plex:
loadBalancer:
servers:
- url: http://192.168.1.10:32400
```
### Variable Resolution
Supports `${VAR}` and `${VAR:-default}` from:
1. Service's `.env` file
2. Current environment
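A sketch of that resolution order (a hypothetical helper; the real parser may differ):
```python
import os
import re

_VAR = re.compile(r"\$\{(?P<name>[A-Za-z_][A-Za-z0-9_]*)(?::-(?P<default>[^}]*))?\}")


def resolve(value: str, env_file: dict[str, str]) -> str:
    """Expand ${VAR} and ${VAR:-default}, preferring the stack's .env file."""
    def repl(match: re.Match[str]) -> str:
        name = match.group("name")
        if name in env_file:
            return env_file[name]
        if name in os.environ:
            return os.environ[name]
        return match.group("default") or ""
    return _VAR.sub(repl, value)


print(resolve("Host(`plex.${DOMAIN:-example.com}`)", {}))
# Host(`plex.example.com`)  (when DOMAIN is unset)
```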
## Error Handling
### Pre-flight Failures
Before any operation, Compose Farm checks:
- SSH connectivity
- Compose file existence
- Required mounts
- Required networks
If any check fails, the operation aborts with a clear error.
### Partial Failures
When operating on multiple stacks:
- Each stack is independent
- Failures are logged, but other stacks continue
- Exit code reflects overall success/failure
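One way to express this with `asyncio.gather` (a sketch under the assumption that each stack operation is an independent coroutine):
```python
import asyncio


async def run_all(stacks: list[str], op) -> int:
    """Run op(stack) for every stack; log failures but keep going."""
    results = await asyncio.gather(*(op(s) for s in stacks), return_exceptions=True)
    failures = [(s, r) for s, r in zip(stacks, results) if isinstance(r, Exception)]
    for stack, err in failures:
        print(f"[{stack}] failed: {err}")
    return 1 if failures else 0  # exit code reflects overall success/failure
```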
## Performance Considerations
### Parallel Execution
Services are started/stopped in parallel:
```python
await asyncio.gather(*[
up_stack(stack) for stack in stacks
])
```
### SSH Multiplexing
Repeated commands to the same host reuse the existing SSH connection.
### Caching
- Config is parsed once per command
- State is loaded once, written once
- Host discovery results are cached for the duration of a command
## Web UI Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Web UI │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │
│ │ FastAPI │ │ Jinja │ │ HTMX │ │
│ │ Backend │ │ Templates │ │ Dynamic Updates │ │
│ └─────────────┘ └─────────────┘ └─────────────────────┘ │
│ │
│ Pattern: Custom events, not hx-swap-oob │
│ Elements trigger on: cf:refresh from:body │
└─────────────────────────────────────────────────────────────┘
```
Icons use [Lucide](https://lucide.dev/). Add new icons as macros in `web/templates/partials/icons.html`.

3
docs/assets/apply.gif Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:01dabdd8f62773823ba2b8dc74f9931f1a1b88215117e6a080004096025491b0
size 901456

3
docs/assets/apply.webm Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:134c903a6b3acfb933617b33755b0cdb9bac2a59e5e35b64236e248a141d396d
size 206883

3
docs/assets/compose.gif Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d8b3cdb3486ec79b3ddb2f7571c13d54ac9aed182edfe708eff76a966a90cfc7
size 1132310

3
docs/assets/compose.webm Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a3c4d4a62f062f717df4e6752efced3caea29004dc90fe97fd7633e7f0ded9db
size 341057

3
docs/assets/install.gif Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6c1bb48cc2f364681515a4d8bd0c586d133f5a32789b7bb64524ad7d9ed0a8e9
size 543135

3
docs/assets/install.webm Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5f82d96137f039f21964c15c1550aa1b1f0bb2d52c04d012d253dbfbd6fad096
size 268086

3
docs/assets/logs.gif Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a4045b00d90928f42c7764b3c24751576cfb68a34c6e84d12b4e282d2e67378
size 146467

3
docs/assets/logs.webm Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f1b94416ed3740853f863e19bf45f26241a203fb0d7d187160a537f79aa544fa
size 60353

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:848d9c48fb7511da7996149277c038589fad1ee406ff2f30c28f777fc441d919
size 1183641

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e747ee71bb38b19946005d5a4def4d423dadeaaade452dec875c4cb2d24a5b77
size 407373

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d32c9a3eec06e57df085ad347e6bf61e323f8bd8322d0c540f0b9d4834196dfd
size 3589776

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6c54eda599389dac74c24c83527f95cd1399e653d7faf2972c2693d90e590597
size 1085344

3
docs/assets/update.gif Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:62f9b5ec71496197a3f1c3e3bca8967d603838804279ea7dbf00a70d3391ff6c
size 127123

3
docs/assets/update.webm Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac2b93d3630af87b44a135723c5d10e8287529bed17c28301b2802cd9593e9e8
size 98748

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7b50a7e9836c496c0989363d1440fa0a6ccdaa38ee16aae92b389b3cf3c3732f
size 2385110

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ccbb3d5366c7734377e12f98cca0b361028f5722124f1bb7efa231f6aeffc116
size 2208044

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:269993b52721ce70674d3aab2a4cd8c58aa621d4ba0739afedae661c90965b26
size 3678371

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0098b55bb6a52fa39f807a01fa352ce112bcb446e2a2acb963fb02d21b28c934
size 3088813

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4bf9d8c247d278799d1daea784fc662a22f12b1bd7883f808ef30f35025ebca6
size 4166443

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:02d5124217a94849bf2971d6d13d28da18c557195a81b9cca121fb7c07f0501b
size 3523244

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:412a0e68f8e52801cafbb9a703ca9577e7c14cc7c0e439160b9185961997f23c
size 4435697

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e600a1d3216b44497a889f91eac94d62ef7207b4ed0471465dcb72408caa28e
size 3764693

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3c07a283f4f70c4ab205b0f0acb5d6f55e3ced4c12caa7a8d5914ffe3548233a
size 5768166

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:562228841de976d70ee80999b930eadf3866a13ff2867d900279993744c44671
size 6667918

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:845746ac1cb101c3077d420c4f3fda3ca372492582dc123ac8a031a68ae9b6b1
size 12943150

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:189259558b5760c02583885168d7b0b47cf476cba81c7c028ec770f9d6033129
size 12415357

372
docs/best-practices.md Normal file
View File

@@ -0,0 +1,372 @@
---
icon: lucide/lightbulb
---
# Best Practices
Tips, limitations, and recommendations for using Compose Farm effectively.
## Limitations
### No Cross-Host Networking
Compose Farm moves containers between hosts but **does not provide cross-host networking**. Docker's internal DNS and networks don't span hosts.
**What breaks when you move a stack:**
| Feature | Works? | Why |
|---------|--------|-----|
| `http://redis:6379` | No | Docker DNS doesn't cross hosts |
| Docker network names | No | Networks are per-host |
| `DATABASE_URL=postgres://db:5432` | No | Container name won't resolve |
| Host IP addresses | Yes | Use `192.168.1.10:5432` |
### What Compose Farm Doesn't Do
- No overlay networking (use Swarm/Kubernetes)
- No service discovery across hosts
- No automatic dependency tracking between compose files
- No health checks or restart policies beyond Docker's
- No secrets management beyond Docker's
## Stack Organization
### Keep Dependencies Together
If services talk to each other, keep them in the same compose file on the same host:
```yaml
# /opt/compose/myapp/docker-compose.yml
services:
app:
image: myapp
depends_on:
- db
- redis
db:
image: postgres
redis:
image: redis
```
```yaml
# compose-farm.yaml
stacks:
myapp: nuc # All three containers stay together
```
### Separate Standalone Stacks
Stacks whose services don't talk to other containers can be anywhere:
```yaml
stacks:
# These can run on any host
plex: nuc
jellyfin: hp
homeassistant: nas
# These should stay together
myapp: nuc # includes app + db + redis
```
### Cross-Host Communication
If services MUST communicate across hosts, publish ports:
```yaml
# Instead of
DATABASE_URL=postgres://db:5432
# Use
DATABASE_URL=postgres://192.168.1.10:5432
```
```yaml
# And publish the port
services:
db:
ports:
- "5432:5432"
```
## Multi-Host Stacks
### When to Use `all`
Use `all` for stacks that need local access to each host:
```yaml
stacks:
# Need Docker socket
dozzle: all # Log viewer
portainer-agent: all # Portainer agents
autokuma: all # Auto-creates monitors
# Need host metrics
node-exporter: all # Prometheus metrics
promtail: all # Log shipping
```
### Host-Specific Lists
For stacks on specific hosts only:
```yaml
stacks:
# Only on compute nodes
gitlab-runner: [nuc, hp]
# Only on storage nodes
minio: [nas-1, nas-2]
```
## Migration Safety
### Pre-flight Checks
Before migrating, Compose Farm verifies:
- Compose file is accessible on new host
- Required mounts exist on new host
- Required networks exist on new host
### Data Considerations
**Compose Farm doesn't move data.** Ensure:
1. **Shared storage**: Data volumes on NFS/shared storage
2. **External databases**: Data in external DB, not container
3. **Backup first**: Always backup before migration
### Safe Migration Pattern
```bash
# 1. Preview changes
cf apply --dry-run
# 2. Verify target host can run the stack
cf check myservice
# 3. Apply changes
cf apply
```
## State Management
### When to Refresh
Run `cf refresh` after:
- Manual `docker compose` commands
- Container restarts
- Host reboots
- Any changes outside Compose Farm
```bash
cf refresh --dry-run # Preview
cf refresh # Sync
```
### State Conflicts
If state doesn't match reality:
```bash
# See what's actually running
cf refresh --dry-run
# Sync state
cf refresh
# Then apply config
cf apply
```
## Shared Storage
### NFS Best Practices
```bash
# Mount options for Docker compatibility
nas:/compose /opt/compose nfs rw,hard,intr,rsize=8192,wsize=8192 0 0
```
### Directory Ownership
Ensure consistent UID/GID across hosts:
```yaml
services:
myapp:
environment:
- PUID=1000
- PGID=1000
```
### Config vs Data
Keep config and data separate:
```
/opt/compose/ # Shared: compose files + config
├── plex/
│ ├── docker-compose.yml
│ └── config/ # Small config files OK
/mnt/data/ # Shared: large media files
├── movies/
├── tv/
└── music/
/opt/appdata/ # Local: per-host app data
├── plex/
└── grafana/
```
## Performance
### Parallel Operations
Compose Farm runs operations in parallel. For large deployments:
```bash
# Good: parallel by default
cf up --all
# Avoid when possible: sequential updates in a loop
for svc in plex grafana nextcloud; do
cf update $svc
done
```
### SSH Connection Reuse
SSH connections are reused within a command. For many operations:
```bash
# One command, one connection per host
cf update --all
# Multiple commands, multiple connections (slower)
cf update plex && cf update grafana && cf update nextcloud
```
## Traefik Setup
### Stack Placement
Put Traefik on a reliable host:
```yaml
stacks:
traefik: nuc # Primary host with good uptime
```
### Same-Host Stacks
Stacks on the same host as Traefik use Docker provider:
```yaml
traefik_stack: traefik
stacks:
traefik: nuc
portainer: nuc # Docker provider handles this
plex: hp # File provider handles this
```
### Middleware in Separate File
Define middlewares outside Compose Farm's generated file:
```yaml
# /opt/traefik/dynamic.d/middlewares.yml
http:
middlewares:
redirect-https:
redirectScheme:
scheme: https
```
## Backup Strategy
### What to Backup
| Item | Location | Method |
|------|----------|--------|
| Compose Farm config | `~/.config/compose-farm/` | Git or copy |
| Compose files | `/opt/compose/` | Git |
| State file | `~/.config/compose-farm/compose-farm-state.yaml` | Optional (can refresh) |
| App data | `/opt/appdata/` | Backup solution |
### Disaster Recovery
```bash
# Restore config
cp backup/compose-farm.yaml ~/.config/compose-farm/
# Refresh state from running containers
cf refresh
# Or start fresh
cf apply
```
## Troubleshooting
### Common Issues
**Stack won't start:**
```bash
cf check myservice # Verify mounts/networks
cf logs myservice # Check container logs
```
**Migration fails:**
```bash
cf check myservice # Verify new host is ready
cf init-network newhost # Create network if missing
```
**State out of sync:**
```bash
cf refresh --dry-run # See differences
cf refresh # Sync state
```
**SSH issues:**
```bash
cf ssh status # Check key status
cf ssh setup # Re-setup keys
```
## Security Considerations
### SSH Keys
- Use dedicated SSH key for Compose Farm
- Limit key to specific hosts if possible
- Don't store keys in Docker images
### Network Exposure
- Published ports are accessible from the network
- Use firewalls for sensitive services
- Consider VPN for cross-host communication
### Secrets
- Don't commit `.env` files with secrets
- Use Docker secrets or external secret management
- Avoid secrets in compose file labels
## Comparison: When to Use Alternatives
| Scenario | Solution |
|----------|----------|
| 2-10 hosts, static stacks | **Compose Farm** |
| Cross-host container networking | Docker Swarm |
| Auto-scaling, self-healing | Kubernetes |
| Infrastructure as code | Ansible + Compose Farm |
| High availability requirements | Kubernetes or Swarm |

777
docs/commands.md Normal file
View File

@@ -0,0 +1,777 @@
---
icon: lucide/terminal
---
# Commands Reference
The Compose Farm CLI is available as both `compose-farm` and the shorter alias `cf`.
## Command Overview
| Category | Command | Description |
|----------|---------|-------------|
| **Lifecycle** | `apply` | Make reality match config |
| | `up` | Start stacks |
| | `down` | Stop stacks |
| | `stop` | Stop services without removing containers |
| | `restart` | Restart stacks (down + up) |
| | `update` | Update stacks (pull + build + down + up) |
| | `pull` | Pull latest images |
| | `compose` | Run any docker compose command |
| **Monitoring** | `ps` | Show stack status |
| | `logs` | Show stack logs |
| | `stats` | Show overview statistics |
| **Configuration** | `check` | Validate config and mounts |
| | `refresh` | Sync state from reality |
| | `init-network` | Create Docker network |
| | `traefik-file` | Generate Traefik config |
| | `config` | Manage config files |
| | `ssh` | Manage SSH keys |
| **Server** | `web` | Start web UI |
## Global Options
```bash
cf --version, -v # Show version
cf --help, -h # Show help
```
---
## Lifecycle Commands
### cf apply
Make reality match your configuration. The primary reconciliation command.
<video autoplay loop muted playsinline>
<source src="/assets/apply.webm" type="video/webm">
</video>
```bash
cf apply [OPTIONS]
```
**Options:**
| Option | Description |
|--------|-------------|
| `--dry-run, -n` | Preview changes without executing |
| `--no-orphans` | Skip stopping orphaned stacks |
| `--full, -f` | Also refresh running stacks |
| `--config, -c PATH` | Path to config file |
**What it does:**
1. Stops orphaned stacks (in state but removed from config)
2. Migrates stacks on wrong host
3. Starts missing stacks (in config but not running)
**Examples:**
```bash
# Preview what would change
cf apply --dry-run
# Apply all changes
cf apply
# Only start/migrate, don't stop orphans
cf apply --no-orphans
# Also refresh all running stacks
cf apply --full
```
---
### cf up
Start stacks. Auto-migrates if host assignment changed.
```bash
cf up [OPTIONS] [STACKS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--all, -a` | Start all stacks |
| `--host, -H TEXT` | Filter to stacks on this host |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Start specific stacks
cf up plex grafana
# Start all stacks
cf up --all
# Start all stacks on a specific host
cf up --all --host nuc
# Start a specific service within a stack
cf up immich --service database
```
**Auto-migration:**
If you change a stack's host in config and run `cf up`:
1. Verifies mounts/networks exist on new host
2. Runs `down` on old host
3. Runs `up -d` on new host
4. Updates state
---
### cf down
Stop stacks.
```bash
cf down [OPTIONS] [STACKS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--all, -a` | Stop all stacks |
| `--orphaned` | Stop orphaned stacks only |
| `--host, -H TEXT` | Filter to stacks on this host |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Stop specific stacks
cf down plex
# Stop all stacks
cf down --all
# Stop stacks removed from config
cf down --orphaned
# Stop all stacks on a host
cf down --all --host nuc
```
---
### cf stop
Stop services without removing containers.
```bash
cf stop [OPTIONS] [STACKS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--all, -a` | Stop all stacks |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Stop specific stacks
cf stop plex
# Stop all stacks
cf stop --all
# Stop a specific service within a stack
cf stop immich --service database
```
---
### cf restart
Restart stacks (down + up). With `--service`, restarts just that service.
```bash
cf restart [OPTIONS] [STACKS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--all, -a` | Restart all stacks |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
cf restart plex
cf restart --all
# Restart a specific service
cf restart immich --service database
```
---
### cf update
Update stacks (pull + build + down + up). With `--service`, updates just that service.
<video autoplay loop muted playsinline>
<source src="/assets/update.webm" type="video/webm">
</video>
```bash
cf update [OPTIONS] [STACKS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--all, -a` | Update all stacks |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Update specific stack
cf update plex
# Update all stacks
cf update --all
# Update a specific service
cf update immich --service database
```
---
### cf pull
Pull latest images.
```bash
cf pull [OPTIONS] [STACKS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--all, -a` | Pull for all stacks |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
cf pull plex
cf pull --all
# Pull a specific service
cf pull immich --service database
```
---
### cf compose
Run any docker compose command on a stack. This is a passthrough to docker compose for commands not wrapped by cf.
<video autoplay loop muted playsinline>
<source src="/assets/compose.webm" type="video/webm">
</video>
```bash
cf compose [OPTIONS] STACK COMMAND [ARGS]...
```
**Arguments:**
| Argument | Description |
|----------|-------------|
| `STACK` | Stack to operate on (use `.` for current dir) |
| `COMMAND` | Docker compose command to run |
| `ARGS` | Additional arguments passed to docker compose |
**Options:**
| Option | Description |
|--------|-------------|
| `--host, -H TEXT` | Filter to stacks on this host (required for multi-host stacks) |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Show docker compose help
cf compose mystack --help
# View running processes
cf compose mystack top
# List images
cf compose mystack images
# Interactive shell
cf compose mystack exec web bash
# View parsed config
cf compose mystack config
# Use current directory as stack
cf compose . ps
```
---
## Monitoring Commands
### cf ps
Show status of stacks.
```bash
cf ps [OPTIONS] [STACKS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--all, -a` | Show all stacks (default) |
| `--host, -H TEXT` | Filter to stacks on this host |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Show all stacks
cf ps
# Show specific stacks
cf ps plex grafana
# Filter by host
cf ps --host nuc
# Show status of a specific service
cf ps immich --service database
```
---
### cf logs
Show stack logs.
<video autoplay loop muted playsinline>
<source src="/assets/logs.webm" type="video/webm">
</video>
```bash
cf logs [OPTIONS] [STACKS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--all, -a` | Show logs for all stacks |
| `--host, -H TEXT` | Filter to stacks on this host |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--follow, -f` | Follow logs (live stream) |
| `--tail, -n INTEGER` | Number of lines (default: 20 for --all, 100 otherwise) |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Show last 100 lines
cf logs plex
# Follow logs
cf logs -f plex
# Show last 50 lines of multiple stacks
cf logs -n 50 plex grafana
# Show last 20 lines of all stacks
cf logs --all
# Show logs for a specific service
cf logs immich --service database
```
---
### cf stats
Show overview statistics.
```bash
cf stats [OPTIONS]
```
**Options:**
| Option | Description |
|--------|-------------|
| `--live, -l` | Query Docker for live container counts |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Config/state overview
cf stats
# Include live container counts
cf stats --live
```
---
## Configuration Commands
### cf check
Validate configuration, mounts, and networks.
```bash
cf check [OPTIONS] [STACKS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--local` | Skip SSH-based checks (faster) |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Full validation with SSH
cf check
# Fast local-only validation
cf check --local
# Check specific stack and show host compatibility
cf check jellyfin
```
---
### cf refresh
Update local state from running stacks.
```bash
cf refresh [OPTIONS] [STACKS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--all, -a` | Refresh all stacks |
| `--dry-run, -n` | Show what would change |
| `--log-path, -l PATH` | Path to Dockerfarm TOML log |
| `--config, -c PATH` | Path to config file |
Without arguments, refreshes all stacks (same as `--all`). With stack names, refreshes only those stacks.
**Examples:**
```bash
# Sync state with reality (all stacks)
cf refresh
# Preview changes
cf refresh --dry-run
# Refresh specific stacks only
cf refresh plex sonarr
```
---
### cf init-network
Create Docker network on hosts with consistent settings.
```bash
cf init-network [OPTIONS] [HOSTS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--network, -n TEXT` | Network name (default: mynetwork) |
| `--subnet, -s TEXT` | Network subnet (default: 172.20.0.0/16) |
| `--gateway, -g TEXT` | Network gateway (default: 172.20.0.1) |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Create on all hosts
cf init-network
# Create on specific hosts
cf init-network nuc hp
# Custom network settings
cf init-network -n production -s 10.0.0.0/16 -g 10.0.0.1
```
---
### cf traefik-file
Generate Traefik file-provider config from compose labels.
```bash
cf traefik-file [OPTIONS] [STACKS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--all, -a` | Generate for all stacks |
| `--output, -o PATH` | Output file (stdout if omitted) |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Preview to stdout
cf traefik-file --all
# Write to file
cf traefik-file --all -o /opt/traefik/dynamic.d/cf.yml
# Specific stacks
cf traefik-file plex jellyfin -o /opt/traefik/cf.yml
```
---
### cf config
Manage configuration files.
```bash
cf config COMMAND
```
**Subcommands:**
| Command | Description |
|---------|-------------|
| `init` | Create new config with examples |
| `show` | Display config with highlighting |
| `path` | Print config file path |
| `validate` | Validate syntax and schema |
| `edit` | Open in $EDITOR |
| `symlink` | Create symlink from default location |
**Options by subcommand:**
| Subcommand | Options |
|------------|---------|
| `init` | `--path/-p PATH`, `--force/-f` |
| `show` | `--path/-p PATH`, `--raw/-r` |
| `edit` | `--path/-p PATH` |
| `path` | `--path/-p PATH` |
| `validate` | `--path/-p PATH` |
| `symlink` | `--force/-f` |
**Examples:**
```bash
# Create config at default location
cf config init
# Create config at custom path
cf config init --path /opt/compose-farm/config.yaml
# Show config with syntax highlighting
cf config show
# Show raw config (for copy-paste)
cf config show --raw
# Validate config
cf config validate
# Edit config in $EDITOR
cf config edit
# Print config path
cf config path
# Create symlink to local config
cf config symlink
# Create symlink to specific file
cf config symlink /opt/compose-farm/config.yaml
```
---
### cf ssh
Manage SSH keys for passwordless authentication.
```bash
cf ssh COMMAND
```
**Subcommands:**
| Command | Description |
|---------|-------------|
| `setup` | Generate key and copy to all hosts |
| `status` | Show SSH key status and host connectivity |
| `keygen` | Generate key without distributing |
**Options for `cf ssh setup`:**
| Option | Description |
|--------|-------------|
| `--config, -c PATH` | Path to config file |
| `--force, -f` | Regenerate key even if it exists |
**Options for `cf ssh status`:**
| Option | Description |
|--------|-------------|
| `--config, -c PATH` | Path to config file |
**Options for `cf ssh keygen`:**
| Option | Description |
|--------|-------------|
| `--force, -f` | Regenerate key even if it exists |
**Examples:**
```bash
# Set up SSH keys (generates and distributes)
cf ssh setup
# Check status and connectivity
cf ssh status
# Generate key only (don't distribute)
cf ssh keygen
```
---
## Server Commands
### cf web
Start the web UI server.
```bash
cf web [OPTIONS]
```
**Options:**
| Option | Description |
|--------|-------------|
| `--host, -H TEXT` | Host to bind to (default: 0.0.0.0) |
| `--port, -p INTEGER` | Port to listen on (default: 8000) |
| `--reload, -r` | Enable auto-reload for development |
**Note:** Requires web dependencies: `pip install compose-farm[web]`
**Examples:**
```bash
# Start on default port
cf web
# Start on custom port
cf web --port 3000
# Development mode with auto-reload
cf web --reload
```
---
## Common Patterns
### Daily Operations
```bash
# Morning: check status
cf ps
cf stats --live
# Update a specific stack
cf update plex
# View logs
cf logs -f plex
```
### Maintenance
```bash
# Update all stacks
cf update --all
# Refresh state after manual changes
cf refresh
```
### Migration
```bash
# Preview what would change
cf apply --dry-run
# Move a stack: edit config, then
cf up plex # auto-migrates
# Or reconcile everything
cf apply
```
### Troubleshooting
```bash
# Validate config
cf check --local
cf check
# Check specific stack
cf check jellyfin
# Sync state
cf refresh --dry-run
cf refresh
```

418
docs/configuration.md Normal file
View File

@@ -0,0 +1,418 @@
---
icon: lucide/settings
---
# Configuration Reference
Compose Farm uses a YAML configuration file to define hosts and stack assignments.
## Config File Location
Compose Farm looks for configuration in this order:
1. `-c` / `--config` flag (if provided)
2. `CF_CONFIG` environment variable
3. `./compose-farm.yaml` (current directory)
4. `$XDG_CONFIG_HOME/compose-farm/compose-farm.yaml` (defaults to `~/.config`)
Use `-c` / `--config` to specify a custom path:
```bash
cf ps -c /path/to/config.yaml
```
Or set the environment variable:
```bash
export CF_CONFIG=/path/to/config.yaml
```
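Taken together, the lookup behaves roughly like this (a sketch; the function name is illustrative and the real resolution logic may differ):
```python
import os
from pathlib import Path


def resolve_config_path(cli_flag: str | None) -> Path | None:
    """Sketch of the documented lookup order."""
    if cli_flag:                                    # 1. -c / --config flag
        return Path(cli_flag)
    if os.environ.get("CF_CONFIG"):                 # 2. CF_CONFIG environment variable
        return Path(os.environ["CF_CONFIG"])
    local = Path("compose-farm.yaml")               # 3. current directory
    if local.is_file():
        return local
    xdg = Path(os.environ.get("XDG_CONFIG_HOME", str(Path.home() / ".config")))
    candidate = xdg / "compose-farm" / "compose-farm.yaml"   # 4. XDG config dir
    return candidate if candidate.is_file() else None
```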
## Examples
### Single host (local-only)
```yaml
# Required: directory containing compose files
compose_dir: /opt/stacks
# Define local host
hosts:
local: localhost
# Map stacks to the local host
stacks:
plex: local
grafana: local
nextcloud: local
```
### Multi-host (full example)
```yaml
# Required: directory containing compose files (same path on all hosts)
compose_dir: /opt/compose
# Optional: auto-regenerate Traefik config
traefik_file: /opt/traefik/dynamic.d/compose-farm.yml
traefik_stack: traefik
# Define Docker hosts
hosts:
nuc:
address: 192.168.1.10
user: docker
hp:
address: 192.168.1.11
user: admin
# Map stacks to hosts
stacks:
# Single-host stacks
plex: nuc
grafana: nuc
nextcloud: hp
# Multi-host stacks
dozzle: all # Run on ALL hosts
node-exporter: [nuc, hp] # Run on specific hosts
```
## Settings Reference
### compose_dir (required)
Directory containing your compose stack folders. Must be the same path on all hosts.
```yaml
compose_dir: /opt/compose
```
**Directory structure:**
```
/opt/compose/
├── plex/
│ ├── docker-compose.yml # or compose.yaml
│ └── .env # optional environment file
├── grafana/
│ └── docker-compose.yml
└── ...
```
Supported compose file names (checked in order):
- `compose.yaml`
- `compose.yml`
- `docker-compose.yml`
- `docker-compose.yaml`
### traefik_file
Path to auto-generated Traefik file-provider config. When set, Compose Farm regenerates this file after `up`, `down`, `restart`, and `update` commands.
```yaml
traefik_file: /opt/traefik/dynamic.d/compose-farm.yml
```
### traefik_stack
Stack name running Traefik. Stacks on the same host are skipped in file-provider config (Traefik's docker provider handles them).
```yaml
traefik_stack: traefik
```
## Hosts Configuration
### Basic Host
```yaml
hosts:
myserver:
address: 192.168.1.10
```
### With SSH User
```yaml
hosts:
myserver:
address: 192.168.1.10
user: docker
```
If `user` is omitted, the current user is used.
### With Custom SSH Port
```yaml
hosts:
myserver:
address: 192.168.1.10
user: docker
port: 2222 # SSH port (default: 22)
```
### Localhost
For stacks running on the same machine where you invoke Compose Farm:
```yaml
hosts:
local: localhost
```
No SSH is used for localhost stacks.
### Multiple Hosts
```yaml
hosts:
nuc:
address: 192.168.1.10
user: docker
hp:
address: 192.168.1.11
user: admin
truenas:
address: 192.168.1.100
local: localhost
```
## Stacks Configuration
### Single-Host Stack
```yaml
stacks:
plex: nuc
grafana: nuc
nextcloud: hp
```
### Multi-Host Stack
For stacks that need to run on every host (e.g., log shippers, monitoring agents):
```yaml
stacks:
# Run on ALL configured hosts
dozzle: all
promtail: all
# Run on specific hosts
node-exporter: [nuc, hp, truenas]
```
**Common multi-host stacks:**
- **Dozzle** - Docker log viewer (needs local socket)
- **Promtail/Alloy** - Log shipping (needs local socket)
- **node-exporter** - Host metrics (needs /proc, /sys)
- **AutoKuma** - Uptime Kuma monitors (needs local socket)
### Stack Names
Stack names must match directory names in `compose_dir`:
```yaml
compose_dir: /opt/compose
stacks:
plex: nuc # expects /opt/compose/plex/docker-compose.yml
my-app: hp # expects /opt/compose/my-app/docker-compose.yml
```
## State File
Compose Farm tracks deployment state in `compose-farm-state.yaml`, stored alongside the config file.
For example, if your config is at `~/.config/compose-farm/compose-farm.yaml`, the state file will be at `~/.config/compose-farm/compose-farm-state.yaml`.
```yaml
deployed:
plex: nuc
grafana: nuc
```
This file records which stacks are deployed and on which host.
**Don't edit manually.** Use `cf refresh` to sync state with reality.
## Environment Variables
### In Compose Files
Your compose files can use `.env` files as usual:
```
/opt/compose/plex/
├── docker-compose.yml
└── .env
```
Compose Farm runs `docker compose` which handles `.env` automatically.
### In Traefik Labels
When generating Traefik config, Compose Farm resolves `${VAR}` and `${VAR:-default}` from:
1. The stack's `.env` file
2. Current environment
## Config Commands
### Initialize Config
```bash
cf config init
```
Creates a new config file with documented examples.
### Validate Config
```bash
cf config validate
```
Checks syntax and schema.
### Show Config
```bash
cf config show
```
Displays current config with syntax highlighting.
### Edit Config
```bash
cf config edit
```
Opens config in `$EDITOR`.
### Show Config Path
```bash
cf config path
```
Prints the config file location (useful for scripting).
### Create Symlink
```bash
cf config symlink # Link to ./compose-farm.yaml
cf config symlink /path/to/my-config.yaml # Link to specific file
```
Creates a symlink from the default location (`~/.config/compose-farm/compose-farm.yaml`) to your config file. Use `--force` to overwrite an existing symlink.
## Validation
### Local Validation
Fast validation without SSH:
```bash
cf check --local
```
Checks:
- Config syntax
- Stack-to-host mappings
- Compose file existence
### Full Validation
```bash
cf check
```
Additional SSH-based checks:
- Host connectivity
- Mount point existence
- Docker network existence
- Traefik label validation
### Stack-Specific Check
```bash
cf check jellyfin
```
Shows which hosts can run the stack (have required mounts/networks).
## Example Configurations
### Minimal
```yaml
compose_dir: /opt/compose
hosts:
server: 192.168.1.10
stacks:
myapp: server
```
### Home Lab
```yaml
compose_dir: /opt/compose
hosts:
nuc:
address: 192.168.1.10
user: docker
nas:
address: 192.168.1.100
user: admin
stacks:
# Media
plex: nuc
jellyfin: nuc
immich: nuc
# Infrastructure
traefik: nuc
portainer: nuc
# Monitoring (on all hosts)
dozzle: all
```
### Production
```yaml
compose_dir: /opt/compose
traefik_file: /opt/traefik/dynamic.d/cf.yml
traefik_stack: traefik
hosts:
web-1:
address: 10.0.1.10
user: deploy
web-2:
address: 10.0.1.11
user: deploy
db:
address: 10.0.1.20
user: deploy
stacks:
# Load balanced
api: [web-1, web-2]
# Single instance
postgres: db
redis: db
# Infrastructure
traefik: web-1
# Monitoring
promtail: all
```

17
docs/demos/README.md Normal file
View File

@@ -0,0 +1,17 @@
# Demo Recordings
Demo recording infrastructure for Compose Farm documentation.
## Structure
```
docs/demos/
├── cli/ # VHS-based CLI terminal recordings
└── web/ # Playwright-based web UI recordings
```
## Output
All recordings output to `docs/assets/` as WebM (primary) and GIF (fallback).
See subdirectory READMEs for usage.

33
docs/demos/cli/README.md Normal file
View File

@@ -0,0 +1,33 @@
# CLI Demo Recordings
VHS-based terminal demo recordings for Compose Farm CLI.
## Requirements
- [VHS](https://github.com/charmbracelet/vhs): `go install github.com/charmbracelet/vhs@latest`
## Usage
```bash
# Record all demos
python docs/demos/cli/record.py
# Record specific demos
python docs/demos/cli/record.py quickstart migration
```
## Demos
| Tape | Description |
|------|-------------|
| `install.tape` | Installing with `uv tool install` |
| `quickstart.tape` | `cf ps`, `cf up`, `cf logs` |
| `logs.tape` | Viewing logs |
| `compose.tape` | `cf compose` passthrough (--help, images, exec) |
| `update.tape` | `cf update` |
| `migration.tape` | Service migration |
| `apply.tape` | `cf apply` |
## Output
GIF and WebM files saved to `docs/assets/`.

39
docs/demos/cli/apply.tape Normal file
View File

@@ -0,0 +1,39 @@
# Apply Demo
# Shows cf apply previewing and reconciling state
Output docs/assets/apply.gif
Output docs/assets/apply.webm
Set Shell "bash"
Set FontSize 14
Set Width 900
Set Height 600
Set Theme "Catppuccin Mocha"
Set TypingSpeed 50ms
Type "# Preview what would change"
Enter
Sleep 500ms
Type "cf apply --dry-run"
Enter
Wait
Type "# Check current status"
Enter
Sleep 500ms
Type "cf stats"
Enter
Wait+Screen /Summary/
Sleep 2s
Type "# Apply the changes"
Enter
Sleep 500ms
Type "cf apply"
Enter
# Wait for shell prompt (command complete)
Wait
Sleep 4s

View File

@@ -0,0 +1,50 @@
# Compose Demo
# Shows that cf compose passes through ANY docker compose command
Output docs/assets/compose.gif
Output docs/assets/compose.webm
Set Shell "bash"
Set FontSize 14
Set Width 900
Set Height 550
Set Theme "Catppuccin Mocha"
Set TypingSpeed 50ms
Type "# cf compose runs ANY docker compose command on the right host"
Enter
Sleep 500ms
Type "# See ALL available compose commands"
Enter
Sleep 500ms
Type "cf compose immich --help"
Enter
Sleep 4s
Type "# Show images"
Enter
Sleep 500ms
Type "cf compose immich images"
Enter
Wait+Screen /immich/
Sleep 2s
Type "# Open shell in a container"
Enter
Sleep 500ms
Type "cf compose immich exec immich-machine-learning sh"
Enter
Wait+Screen /#/
Sleep 1s
Type "python3 --version"
Enter
Sleep 1s
Type "exit"
Enter
Sleep 500ms

View File

@@ -0,0 +1,42 @@
# Installation Demo
# Shows installing compose-farm with uv
Output docs/assets/install.gif
Output docs/assets/install.webm
Set Shell "bash"
Set FontSize 14
Set Width 900
Set Height 600
Set Theme "Catppuccin Mocha"
Set TypingSpeed 50ms
Env FORCE_COLOR "1"
Hide
Type "export PATH=$HOME/.local/bin:$PATH && uv tool uninstall compose-farm 2>/dev/null; clear"
Enter
Show
Type "# Install with uv (recommended)"
Enter
Sleep 500ms
Type "uv tool install compose-farm"
Enter
Wait+Screen /Installed|already installed/
Type "# Verify installation"
Enter
Sleep 500ms
Type "cf --version"
Enter
Wait+Screen /compose-farm/
Sleep 1s
Type "cf --help | less"
Enter
Sleep 2s
PageDown
Sleep 2s
Type "q"
Sleep 2s

21
docs/demos/cli/logs.tape Normal file
View File

@@ -0,0 +1,21 @@
# Logs Demo
# Shows viewing stack logs
Output docs/assets/logs.gif
Output docs/assets/logs.webm
Set Shell "bash"
Set FontSize 14
Set Width 900
Set Height 550
Set Theme "Catppuccin Mocha"
Set TypingSpeed 50ms
Type "# View recent logs"
Enter
Sleep 500ms
Type "cf logs immich --tail 20"
Enter
Wait+Screen /immich/
Sleep 2s

View File

@@ -0,0 +1,71 @@
# Migration Demo
# Shows automatic stack migration when host changes
Output docs/assets/migration.gif
Output docs/assets/migration.webm
Set Shell "bash"
Set FontSize 14
Set Width 1000
Set Height 600
Set Theme "Catppuccin Mocha"
Set TypingSpeed 50ms
Type "# Current status: audiobookshelf on 'nas'"
Enter
Sleep 500ms
Type "cf ps audiobookshelf"
Enter
Wait+Screen /PORTS/
Type "# Edit config to move it to 'anton'"
Enter
Sleep 1s
Type "nvim /opt/stacks/compose-farm.yaml"
Enter
Wait+Screen /stacks:/
# Search for audiobookshelf
Type "/audiobookshelf"
Enter
Sleep 1s
# Move to the host value (nas) and change it
Type "f:"
Sleep 500ms
Type "w"
Sleep 500ms
Type "ciw"
Sleep 500ms
Type "anton"
Escape
Sleep 1s
# Save and quit
Type ":wq"
Enter
Sleep 1s
Type "# Run up - automatically migrates!"
Enter
Sleep 500ms
Type "cf up audiobookshelf"
Enter
# Wait for migration phases: first the stop on old host
Wait+Screen /Migrating|down/
# Then wait for start on new host
Wait+Screen /Starting|up/
# Finally wait for completion
Wait
Type "# Verify: audiobookshelf now on 'anton'"
Enter
Sleep 500ms
Type "cf ps audiobookshelf"
Enter
Wait+Screen /PORTS/
Sleep 3s

View File

@@ -0,0 +1,91 @@
# Quick Start Demo
# Shows basic cf commands
Output docs/assets/quickstart.gif
Output docs/assets/quickstart.webm
Set Shell "bash"
Set FontSize 14
Set Width 900
Set Height 600
Set Theme "Catppuccin Mocha"
Set FontFamily "FiraCode Nerd Font"
Set TypingSpeed 50ms
Env BAT_PAGING "always"
Type "# Config is just: stack  host"
Enter
Sleep 500ms
Type "# First, define your hosts..."
Enter
Sleep 500ms
Type "bat -r 1:16 compose-farm.yaml"
Enter
Sleep 3s
Type "q"
Sleep 500ms
Type "# Then map each stack to a host"
Enter
Sleep 500ms
Type "bat -r 17:35 compose-farm.yaml"
Enter
Sleep 3s
Type "q"
Sleep 500ms
Type "# Check stack status"
Enter
Sleep 500ms
Type "cf ps immich"
Enter
Wait+Screen /PORTS/
Type "# Start a stack"
Enter
Sleep 500ms
Type "cf up immich"
Enter
Wait
Type "# View logs"
Enter
Sleep 500ms
Type "cf logs immich --tail 5"
Enter
Wait+Screen /immich/
Sleep 2s
Type "#  The magic: move between hosts (nas  anton)"
Enter
Sleep 500ms
Type "# Change host in config (using sed)"
Enter
Sleep 500ms
Type "sed -i 's/audiobookshelf: nas/audiobookshelf: anton/' compose-farm.yaml"
Enter
Sleep 500ms
Type "# Apply changes - auto-migrates!"
Enter
Sleep 500ms
Type "cf apply"
Enter
Sleep 15s
Type "# Verify: now on anton"
Enter
Sleep 500ms
Type "cf ps audiobookshelf"
Enter
Sleep 5s

134
docs/demos/cli/record.py Executable file
View File

@@ -0,0 +1,134 @@
#!/usr/bin/env python3
"""Record CLI demos using VHS."""
import shutil
import subprocess
import sys
from pathlib import Path
from rich.console import Console
from compose_farm.config import load_config
from compose_farm.state import load_state
console = Console()
SCRIPT_DIR = Path(__file__).parent
STACKS_DIR = Path("/opt/stacks")
CONFIG_FILE = STACKS_DIR / "compose-farm.yaml"
OUTPUT_DIR = SCRIPT_DIR.parent.parent / "assets"
DEMOS = ["install", "quickstart", "logs", "compose", "update", "migration", "apply"]
def _run(cmd: list[str], **kw) -> bool:
return subprocess.run(cmd, check=False, **kw).returncode == 0
def _set_config(host: str) -> None:
"""Set audiobookshelf host in config file."""
_run(["sed", "-i", f"s/audiobookshelf: .*/audiobookshelf: {host}/", str(CONFIG_FILE)])
def _get_hosts() -> tuple[str | None, str | None]:
"""Return (config_host, state_host) for audiobookshelf."""
config = load_config()
state = load_state(config)
return config.stacks.get("audiobookshelf"), state.get("audiobookshelf")
def _setup_state(demo: str) -> bool:
"""Set up required state for demo. Returns False on failure."""
if demo not in ("migration", "apply"):
return True
config_host, state_host = _get_hosts()
if demo == "migration":
# Migration needs audiobookshelf on nas in BOTH config and state
if config_host != "nas":
console.print("[yellow]Setting up: config → nas[/yellow]")
_set_config("nas")
if state_host != "nas":
console.print("[yellow]Setting up: state → nas[/yellow]")
if not _run(["cf", "apply"], cwd=STACKS_DIR):
return False
elif demo == "apply":
# Apply needs config=nas, state=anton (so there's something to apply)
if config_host != "nas":
console.print("[yellow]Setting up: config → nas[/yellow]")
_set_config("nas")
if state_host == "nas":
console.print("[yellow]Setting up: state → anton[/yellow]")
_set_config("anton")
if not _run(["cf", "apply"], cwd=STACKS_DIR):
return False
_set_config("nas")
return True
def _record(name: str, index: int, total: int) -> bool:
"""Record a single demo."""
console.print(f"[cyan][{index}/{total}][/cyan] [green]Recording:[/green] {name}")
if _run(["vhs", str(SCRIPT_DIR / f"{name}.tape")], cwd=STACKS_DIR):
console.print("[green] ✓ Done[/green]")
return True
console.print("[red] ✗ Failed[/red]")
return False
def _reset_after(demo: str, next_demo: str | None) -> None:
"""Reset state after demos that modify audiobookshelf."""
if demo not in ("quickstart", "migration"):
return
_set_config("nas")
if next_demo != "apply": # Let apply demo show the migration
_run(["cf", "apply"], cwd=STACKS_DIR)
def _restore_config(original: str) -> None:
"""Restore original config and sync state."""
console.print("[yellow]Restoring original config...[/yellow]")
CONFIG_FILE.write_text(original)
_run(["cf", "apply"], cwd=STACKS_DIR)
def _main() -> int:
if not shutil.which("vhs"):
console.print("[red]VHS not found. Install: brew install vhs[/red]")
return 1
if not _run(["git", "-C", str(STACKS_DIR), "diff", "--quiet", "compose-farm.yaml"]):
console.print("[red]compose-farm.yaml has uncommitted changes[/red]")
return 1
requested = [d for d in sys.argv[1:] if d in DEMOS]
if sys.argv[1:] and not requested:
console.print(f"[red]Unknown demo. Available: {', '.join(DEMOS)}[/red]")
return 1
demos = requested or DEMOS
# Save original config to restore after recording
original_config = CONFIG_FILE.read_text()
try:
for i, demo in enumerate(demos, 1):
if not _setup_state(demo):
return 1
if not _record(demo, i, len(demos)):
return 1
_reset_after(demo, demos[i] if i < len(demos) else None)
finally:
_restore_config(original_config)
# Move outputs
OUTPUT_DIR.mkdir(exist_ok=True)
for f in (STACKS_DIR / "docs/assets").glob("*.[gw]*"):
shutil.move(str(f), str(OUTPUT_DIR / f.name))
console.print(f"\n[green]Done![/green] Saved to {OUTPUT_DIR}")
return 0
if __name__ == "__main__":
sys.exit(_main())

View File

@@ -0,0 +1,32 @@
# Update Demo
# Shows updating stacks (pull + build + down + up)
Output docs/assets/update.gif
Output docs/assets/update.webm
Set Shell "bash"
Set FontSize 14
Set Width 900
Set Height 500
Set Theme "Catppuccin Mocha"
Set TypingSpeed 50ms
Type "# Update a single stack"
Enter
Sleep 500ms
Type "cf update grocy"
Enter
# Wait for command to complete (chain waits for longer timeout)
Wait+Screen /pull/
Wait+Screen /grocy/
Wait@60s
Type "# Check current status"
Enter
Sleep 500ms
Type "cf ps grocy"
Enter
Wait+Screen /PORTS/
Sleep 1s

45
docs/demos/web/README.md Normal file
View File

@@ -0,0 +1,45 @@
# Web UI Demo Recordings
Playwright-based demo recording for Compose Farm web UI.
## Requirements
- Chromium: `playwright install chromium`
- ffmpeg: `apt install ffmpeg` or `brew install ffmpeg`
## Usage
```bash
# Record all demos
python docs/demos/web/record.py
# Record specific demo
python docs/demos/web/record.py navigation
```
## Demos
| Demo | Description |
|------|-------------|
| `navigation` | Command palette fuzzy search and navigation |
| `stack` | Stack restart/logs via command palette |
| `themes` | Theme switching with arrow key preview |
| `workflow` | Full workflow: filter, navigate, logs, themes |
| `console` | Console terminal running cf commands |
| `shell` | Container shell exec with top |
## Output
WebM and GIF files saved to `docs/assets/web-{demo}.{webm,gif}`.
## Files
- `record.py` - Orchestration script
- `conftest.py` - Playwright fixtures, helper functions
- `demo_*.py` - Individual demo scripts
## Notes
- Uses real config at `/opt/stacks/compose-farm.yaml`
- Adjust `pause(page, ms)` calls to control timing
- Viewport: 1280x720

View File

@@ -0,0 +1 @@
"""Web UI demo recording scripts."""

224
docs/demos/web/conftest.py Normal file
View File

@@ -0,0 +1,224 @@
"""Shared fixtures for web UI demo recordings.
Based on tests/web/test_htmx_browser.py patterns for consistency.
"""
from __future__ import annotations
import os
import re
import shutil
import socket
import threading
import time
import urllib.request
from pathlib import Path
from typing import TYPE_CHECKING, Any
from unittest.mock import patch
import pytest
import uvicorn
from compose_farm.config import Config as CFConfig
from compose_farm.config import load_config
from compose_farm.state import load_state as _original_load_state
from compose_farm.web.app import create_app
from compose_farm.web.cdn import CDN_ASSETS, ensure_vendor_cache
if TYPE_CHECKING:
from collections.abc import Generator
from playwright.sync_api import BrowserContext, Page, Route
# Stacks to exclude from demo recordings (exact match)
DEMO_EXCLUDE_STACKS = {"arr"}
def _get_filtered_config() -> CFConfig:
"""Load config but filter out excluded stacks."""
config = load_config()
filtered_stacks = {
name: host for name, host in config.stacks.items() if name not in DEMO_EXCLUDE_STACKS
}
return CFConfig(
compose_dir=config.compose_dir,
hosts=config.hosts,
stacks=filtered_stacks,
traefik_file=config.traefik_file,
traefik_stack=config.traefik_stack,
config_path=config.config_path,
)
def _get_filtered_state(config: CFConfig) -> dict[str, str | list[str]]:
"""Load state but filter out excluded stacks."""
state = _original_load_state(config)
return {name: host for name, host in state.items() if name not in DEMO_EXCLUDE_STACKS}
@pytest.fixture(scope="session")
def vendor_cache(request: pytest.FixtureRequest) -> Path:
"""Download CDN assets once and cache to disk for faster recordings."""
cache_dir = Path(str(request.config.rootdir)) / ".pytest_cache" / "vendor"
return ensure_vendor_cache(cache_dir)
@pytest.fixture(scope="session")
def browser_type_launch_args() -> dict[str, str]:
"""Configure Playwright to use system Chromium if available."""
for name in ["chromium", "chromium-browser", "google-chrome", "chrome"]:
path = shutil.which(name)
if path:
return {"executable_path": path}
return {}
# Path to real compose-farm config
REAL_CONFIG_PATH = Path("/opt/stacks/compose-farm.yaml")
@pytest.fixture(scope="module")
def server_url() -> Generator[str, None, None]:
"""Start demo server using real config (with filtered stacks) and return URL."""
os.environ["CF_CONFIG"] = str(REAL_CONFIG_PATH)
# Patch at source module level so all callers get filtered versions
patches = [
# Patch load_state at source - all functions calling it get filtered state
patch("compose_farm.state.load_state", _get_filtered_state),
# Patch get_config where imported
patch("compose_farm.web.routes.pages.get_config", _get_filtered_config),
patch("compose_farm.web.routes.api.get_config", _get_filtered_config),
patch("compose_farm.web.routes.actions.get_config", _get_filtered_config),
patch("compose_farm.web.app.get_config", _get_filtered_config),
patch("compose_farm.web.ws.get_config", _get_filtered_config),
]
for p in patches:
p.start()
with socket.socket() as s:
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]
app = create_app()
uvicorn_config = uvicorn.Config(app, host="127.0.0.1", port=port, log_level="error")
server = uvicorn.Server(uvicorn_config)
thread = threading.Thread(target=server.run, daemon=True)
thread.start()
url = f"http://127.0.0.1:{port}"
server_ready = False
for _ in range(50):
try:
urllib.request.urlopen(url, timeout=0.5) # noqa: S310
server_ready = True
break
except Exception:
time.sleep(0.1)
if not server_ready:
msg = f"Demo server failed to start on {url}"
raise RuntimeError(msg)
yield url
server.should_exit = True
thread.join(timeout=2)
os.environ.pop("CF_CONFIG", None)
for p in patches:
p.stop()
@pytest.fixture(scope="module")
def recording_output_dir(tmp_path_factory: pytest.TempPathFactory) -> Path:
"""Directory for video recordings."""
return Path(tmp_path_factory.mktemp("recordings"))
@pytest.fixture
def recording_context(
browser: Any, # pytest-playwright's browser fixture
vendor_cache: Path,
recording_output_dir: Path,
) -> Generator[BrowserContext, None, None]:
"""Browser context with video recording enabled."""
context = browser.new_context(
viewport={"width": 1280, "height": 720},
record_video_dir=str(recording_output_dir),
record_video_size={"width": 1280, "height": 720},
)
# Set up CDN interception
cache = {url: (vendor_cache / f, ct) for url, (f, ct) in CDN_ASSETS.items()}
def handle_cdn(route: Route) -> None:
url = route.request.url
for url_prefix, (filepath, content_type) in cache.items():
if url.startswith(url_prefix):
route.fulfill(status=200, content_type=content_type, body=filepath.read_bytes())
return
route.abort("failed")
context.route(re.compile(r"https://(cdn\.jsdelivr\.net|unpkg\.com)/.*"), handle_cdn)
yield context
context.close()
@pytest.fixture
def recording_page(recording_context: BrowserContext) -> Generator[Page, None, None]:
"""Page with recording and slow motion enabled."""
page = recording_context.new_page()
yield page
page.close()
# Demo helper functions
def pause(page: Page, ms: int = 500) -> None:
"""Pause for visibility in recording."""
page.wait_for_timeout(ms)
def slow_type(page: Page, selector: str, text: str, delay: int = 100) -> None:
"""Type with visible delay between keystrokes."""
page.type(selector, text, delay=delay)
def open_command_palette(page: Page) -> None:
"""Open command palette with Ctrl+K."""
page.keyboard.press("Control+k")
page.wait_for_selector("#cmd-palette[open]", timeout=2000)
pause(page, 300)
def close_command_palette(page: Page) -> None:
"""Close command palette with Escape."""
page.keyboard.press("Escape")
page.wait_for_selector("#cmd-palette:not([open])", timeout=2000)
pause(page, 200)
def wait_for_sidebar(page: Page) -> None:
"""Wait for sidebar to load with stacks."""
page.wait_for_selector("#sidebar-stacks", timeout=5000)
pause(page, 300)
def navigate_to_stack(page: Page, stack: str) -> None:
"""Navigate to a stack page via sidebar click."""
page.locator("#sidebar-stacks a", has_text=stack).click()
page.wait_for_url(f"**/stack/{stack}", timeout=5000)
pause(page, 500)
def select_command(page: Page, command: str) -> None:
"""Filter and select a command from the palette."""
page.locator("#cmd-input").fill(command)
pause(page, 300)
page.keyboard.press("Enter")
pause(page, 200)


@@ -0,0 +1,77 @@
"""Demo: Console terminal.
Records a ~30 second demo showing:
- Navigating to Console page
- Running cf commands in the terminal
- Showing the Compose Farm config in Monaco editor
Run: pytest docs/demos/web/demo_console.py -v --no-cov
"""
from __future__ import annotations
from typing import TYPE_CHECKING
import pytest
from conftest import (
pause,
slow_type,
wait_for_sidebar,
)
if TYPE_CHECKING:
from playwright.sync_api import Page
@pytest.mark.browser # type: ignore[misc]
def test_demo_console(recording_page: Page, server_url: str) -> None:
"""Record console terminal demo."""
page = recording_page
# Start on dashboard
page.goto(server_url)
wait_for_sidebar(page)
pause(page, 800)
# Navigate to Console page via sidebar menu
page.locator(".menu a", has_text="Console").click()
page.wait_for_url("**/console", timeout=5000)
pause(page, 1000)
# Wait for terminal to be ready (auto-connects)
page.wait_for_selector("#console-terminal .xterm", timeout=10000)
pause(page, 1500)
# Run fastfetch first
slow_type(page, "#console-terminal .xterm-helper-textarea", "fastfetch", delay=80)
pause(page, 300)
page.keyboard.press("Enter")
pause(page, 2500) # Wait for output
# Type cf stats command
slow_type(page, "#console-terminal .xterm-helper-textarea", "cf stats", delay=80)
pause(page, 300)
page.keyboard.press("Enter")
pause(page, 3000) # Wait for output
# Type cf ps command
slow_type(page, "#console-terminal .xterm-helper-textarea", "cf ps grocy", delay=80)
pause(page, 300)
page.keyboard.press("Enter")
pause(page, 2500) # Wait for output
# Smoothly scroll down to show the Editor section with Compose Farm config
page.evaluate("""
const editor = document.getElementById('console-editor');
if (editor) {
editor.scrollIntoView({ behavior: 'smooth', block: 'center' });
}
""")
pause(page, 1200) # Wait for smooth scroll animation
# Wait for Monaco editor to load with config content
page.wait_for_selector("#console-editor .monaco-editor", timeout=10000)
pause(page, 2500) # Let viewer see the Compose Farm config file
# Final pause
pause(page, 800)


@@ -0,0 +1,74 @@
"""Demo: Command palette navigation.
Records a ~15 second demo showing:
- Opening command palette with Ctrl+K
- Fuzzy search filtering
- Arrow key navigation
- Stack and page navigation
Run: pytest docs/demos/web/demo_navigation.py -v --no-cov
"""
from __future__ import annotations
from typing import TYPE_CHECKING
import pytest
from conftest import (
open_command_palette,
pause,
slow_type,
wait_for_sidebar,
)
if TYPE_CHECKING:
from playwright.sync_api import Page
@pytest.mark.browser # type: ignore[misc]
def test_demo_navigation(recording_page: Page, server_url: str) -> None:
"""Record command palette navigation demo."""
page = recording_page
# Start on dashboard
page.goto(server_url)
wait_for_sidebar(page)
pause(page, 1000) # Let viewer see dashboard
# Open command palette with keyboard shortcut
open_command_palette(page)
pause(page, 500)
# Type partial stack name for fuzzy search
slow_type(page, "#cmd-input", "grocy", delay=120)
pause(page, 800)
# Arrow down to show selection movement
page.keyboard.press("ArrowDown")
pause(page, 400)
page.keyboard.press("ArrowUp")
pause(page, 400)
# Press Enter to navigate to stack
page.keyboard.press("Enter")
page.wait_for_url("**/stack/grocy", timeout=5000)
pause(page, 1500) # Show stack page
# Open palette again to navigate elsewhere
open_command_palette(page)
pause(page, 400)
# Navigate to another stack (immich) to show more navigation
slow_type(page, "#cmd-input", "imm", delay=120)
pause(page, 600)
page.keyboard.press("Enter")
page.wait_for_url("**/stack/immich", timeout=5000)
pause(page, 1200) # Show immich stack page
# Open palette one more time, navigate back to dashboard
open_command_palette(page)
slow_type(page, "#cmd-input", "dashb", delay=120)
pause(page, 500)
page.keyboard.press("Enter")
page.wait_for_url(server_url, timeout=5000)
pause(page, 1000) # Final dashboard view


@@ -0,0 +1,106 @@
"""Demo: Container shell exec via command palette.
Records a ~35 second demo showing:
- Navigating to immich stack (multiple containers)
- Using command palette with fuzzy matching ("sh mach") to open shell
- Running a command
- Using command palette to switch to server container shell
- Running another command
Run: pytest docs/demos/web/demo_shell.py -v --no-cov
"""
from __future__ import annotations
from typing import TYPE_CHECKING
import pytest
from conftest import (
open_command_palette,
pause,
slow_type,
wait_for_sidebar,
)
if TYPE_CHECKING:
from playwright.sync_api import Page
@pytest.mark.browser # type: ignore[misc]
def test_demo_shell(recording_page: Page, server_url: str) -> None:
"""Record container shell demo."""
page = recording_page
# Start on dashboard
page.goto(server_url)
wait_for_sidebar(page)
pause(page, 800)
# Navigate to immich via command palette (has multiple containers)
open_command_palette(page)
pause(page, 400)
slow_type(page, "#cmd-input", "immich", delay=100)
pause(page, 600)
page.keyboard.press("Enter")
page.wait_for_url("**/stack/immich", timeout=5000)
pause(page, 1500)
# Wait for containers list to load (so shell commands are available)
page.wait_for_selector("#containers-list button", timeout=10000)
pause(page, 800)
# Use command palette with fuzzy matching: "sh mach" -> "Shell: immich-machine-learning"
open_command_palette(page)
pause(page, 400)
slow_type(page, "#cmd-input", "sh mach", delay=100)
pause(page, 600)
page.keyboard.press("Enter")
pause(page, 1000)
# Wait for exec terminal to appear
page.wait_for_selector("#exec-terminal .xterm", timeout=10000)
# Smoothly scroll down to make the terminal visible
page.evaluate("""
const terminal = document.getElementById('exec-terminal');
if (terminal) {
terminal.scrollIntoView({ behavior: 'smooth', block: 'center' });
}
""")
pause(page, 1200)
# Run python version command
slow_type(page, "#exec-terminal .xterm-helper-textarea", "python3 --version", delay=60)
pause(page, 300)
page.keyboard.press("Enter")
pause(page, 1500)
# Blur the terminal to release focus (won't scroll)
page.evaluate("document.activeElement?.blur()")
pause(page, 500)
# Use command palette to switch to server container: "sh serv" -> "Shell: immich-server"
open_command_palette(page)
pause(page, 400)
slow_type(page, "#cmd-input", "sh serv", delay=100)
pause(page, 600)
page.keyboard.press("Enter")
pause(page, 1000)
# Wait for new terminal
page.wait_for_selector("#exec-terminal .xterm", timeout=10000)
# Scroll to terminal
page.evaluate("""
const terminal = document.getElementById('exec-terminal');
if (terminal) {
terminal.scrollIntoView({ behavior: 'smooth', block: 'center' });
}
""")
pause(page, 1200)
# Run ls command
slow_type(page, "#exec-terminal .xterm-helper-textarea", "ls /usr/src/app", delay=60)
pause(page, 300)
page.keyboard.press("Enter")
pause(page, 2000)


@@ -0,0 +1,101 @@
"""Demo: Stack actions.
Records a ~30 second demo showing:
- Navigating to a stack page
- Viewing compose file in Monaco editor
- Triggering Restart action via command palette
- Watching terminal output stream
- Triggering Logs action
Run: pytest docs/demos/web/demo_stack.py -v --no-cov
"""
from __future__ import annotations
from typing import TYPE_CHECKING
import pytest
from conftest import (
open_command_palette,
pause,
slow_type,
wait_for_sidebar,
)
if TYPE_CHECKING:
from playwright.sync_api import Page
@pytest.mark.browser # type: ignore[misc]
def test_demo_stack(recording_page: Page, server_url: str) -> None:
"""Record stack actions demo."""
page = recording_page
# Start on dashboard
page.goto(server_url)
wait_for_sidebar(page)
pause(page, 800)
# Navigate to grocy via command palette
open_command_palette(page)
pause(page, 400)
slow_type(page, "#cmd-input", "grocy", delay=100)
pause(page, 500)
page.keyboard.press("Enter")
page.wait_for_url("**/stack/grocy", timeout=5000)
pause(page, 1000) # Show stack page
# Click on Compose File collapse to show the Monaco editor
# The collapse uses a checkbox input, click it via the parent collapse div
compose_collapse = page.locator(".collapse", has_text="Compose File").first
compose_collapse.locator("input[type=checkbox]").click(force=True)
pause(page, 500)
# Wait for Monaco editor to load and show content
page.wait_for_selector("#compose-editor .monaco-editor", timeout=10000)
pause(page, 2000) # Let viewer see the compose file
# Smoothly scroll down to show more of the editor
page.evaluate("""
const editor = document.getElementById('compose-editor');
if (editor) {
editor.scrollIntoView({ behavior: 'smooth', block: 'center' });
}
""")
pause(page, 1200) # Wait for smooth scroll animation
# Close the compose file section
compose_collapse.locator("input[type=checkbox]").click(force=True)
pause(page, 500)
# Open command palette for stack actions
open_command_palette(page)
pause(page, 400)
# Filter to Restart action
slow_type(page, "#cmd-input", "restart", delay=120)
pause(page, 600)
# Execute Restart
page.keyboard.press("Enter")
pause(page, 300)
# Wait for terminal to expand and show output
page.wait_for_selector("#terminal-output .xterm", timeout=5000)
pause(page, 2500) # Let viewer see terminal streaming
# Open palette again for Logs
open_command_palette(page)
pause(page, 400)
# Filter to Logs action
slow_type(page, "#cmd-input", "logs", delay=120)
pause(page, 600)
# Execute Logs
page.keyboard.press("Enter")
pause(page, 300)
# Show log output
page.wait_for_selector("#terminal-output .xterm", timeout=5000)
pause(page, 2500) # Final view of logs


@@ -0,0 +1,81 @@
"""Demo: Theme switching.
Records a ~15 second demo showing:
- Opening theme picker via theme button
- Live theme preview on arrow navigation
- Selecting different themes
- Theme persistence
Run: pytest docs/demos/web/demo_themes.py -v --no-cov
"""
from __future__ import annotations
from typing import TYPE_CHECKING
import pytest
from conftest import (
pause,
slow_type,
wait_for_sidebar,
)
if TYPE_CHECKING:
from playwright.sync_api import Page
@pytest.mark.browser # type: ignore[misc]
def test_demo_themes(recording_page: Page, server_url: str) -> None:
"""Record theme switching demo."""
page = recording_page
# Start on dashboard
page.goto(server_url)
wait_for_sidebar(page)
pause(page, 1000) # Show initial theme
# Click theme button to open theme picker
page.locator("#theme-btn").click()
page.wait_for_selector("#cmd-palette[open]", timeout=2000)
pause(page, 600)
# Arrow through many themes to show live preview effect
for _ in range(12):
page.keyboard.press("ArrowDown")
pause(page, 350) # Show each preview
# Go back up through a few (land on valentine, not cyberpunk)
for _ in range(4):
page.keyboard.press("ArrowUp")
pause(page, 350)
# Select current theme with Enter
page.keyboard.press("Enter")
pause(page, 1000)
# Close palette with Escape
page.keyboard.press("Escape")
pause(page, 800)
# Open again and use search to find specific theme
page.locator("#theme-btn").click()
page.wait_for_selector("#cmd-palette[open]", timeout=2000)
pause(page, 400)
# Type to filter to a light theme (theme button pre-populates "theme:")
slow_type(page, "#cmd-input", "cup", delay=100)
pause(page, 500)
page.keyboard.press("Enter")
pause(page, 1000)
# Close and return to dark
page.keyboard.press("Escape")
pause(page, 500)
page.locator("#theme-btn").click()
page.wait_for_selector("#cmd-palette[open]", timeout=2000)
pause(page, 300)
slow_type(page, "#cmd-input", "dark", delay=100)
pause(page, 400)
page.keyboard.press("Enter")
pause(page, 800)


@@ -0,0 +1,189 @@
"""Demo: Full workflow.
Records a comprehensive demo (~60 seconds) combining all major features:
1. Console page: terminal with fastfetch, cf pull command
2. Editor showing Compose Farm YAML config
3. Command palette navigation to grocy stack
4. Stack actions: up, logs
5. Switch to dozzle stack via command palette, run update
6. Dashboard overview
7. Theme cycling via command palette
This demo is used on the homepage and Web UI page as the main showcase.
Run: pytest docs/demos/web/demo_workflow.py -v --no-cov
"""
from __future__ import annotations
from typing import TYPE_CHECKING
import pytest
from conftest import open_command_palette, pause, slow_type, wait_for_sidebar
if TYPE_CHECKING:
from playwright.sync_api import Page
def _demo_console_terminal(page: Page, server_url: str) -> None:
"""Demo part 1: Console page with terminal and editor."""
# Start on dashboard briefly
page.goto(server_url)
wait_for_sidebar(page)
pause(page, 800)
# Navigate to Console page via command palette
open_command_palette(page)
pause(page, 300)
slow_type(page, "#cmd-input", "cons", delay=100)
pause(page, 400)
page.keyboard.press("Enter")
page.wait_for_url("**/console", timeout=5000)
pause(page, 800)
# Wait for terminal to be ready
page.wait_for_selector("#console-terminal .xterm", timeout=10000)
pause(page, 1000)
# Run fastfetch first
slow_type(page, "#console-terminal .xterm-helper-textarea", "fastfetch", delay=60)
pause(page, 200)
page.keyboard.press("Enter")
pause(page, 2000) # Wait for output
# Run cf pull on a stack to show Compose Farm in action
slow_type(page, "#console-terminal .xterm-helper-textarea", "cf pull grocy", delay=60)
pause(page, 200)
page.keyboard.press("Enter")
pause(page, 3000) # Wait for pull output
def _demo_config_editor(page: Page) -> None:
"""Demo part 2: Show the Compose Farm config in editor."""
# Smoothly scroll down to show the Editor section
# Use JavaScript for smooth scrolling animation
page.evaluate("""
const editor = document.getElementById('console-editor');
if (editor) {
editor.scrollIntoView({ behavior: 'smooth', block: 'center' });
}
""")
pause(page, 1200) # Wait for smooth scroll animation
# Wait for Monaco editor to load with config content
page.wait_for_selector("#console-editor .monaco-editor", timeout=10000)
pause(page, 2000) # Let viewer see the Compose Farm config file
def _demo_stack_actions(page: Page) -> None:
"""Demo part 3: Navigate to stack and run actions."""
# Click on sidebar to take focus away from terminal, then use command palette
page.locator("#sidebar-stacks").click()
pause(page, 300)
# Navigate to grocy via command palette
open_command_palette(page)
pause(page, 300)
slow_type(page, "#cmd-input", "grocy", delay=100)
pause(page, 400)
page.keyboard.press("Enter")
page.wait_for_url("**/stack/grocy", timeout=5000)
pause(page, 1000)
# Open Compose File editor to show the compose.yaml
compose_collapse = page.locator(".collapse", has_text="Compose File").first
compose_collapse.locator("input[type=checkbox]").click(force=True)
pause(page, 500)
# Wait for Monaco editor to load and show content
page.wait_for_selector("#compose-editor .monaco-editor", timeout=10000)
pause(page, 2000) # Let viewer see the compose file
# Close the compose file section
compose_collapse.locator("input[type=checkbox]").click(force=True)
pause(page, 500)
# Run Up action via command palette
open_command_palette(page)
pause(page, 300)
slow_type(page, "#cmd-input", "up", delay=100)
pause(page, 400)
page.keyboard.press("Enter")
pause(page, 200)
# Wait for terminal output
page.wait_for_selector("#terminal-output .xterm", timeout=5000)
pause(page, 2500)
# Show logs
open_command_palette(page)
pause(page, 300)
slow_type(page, "#cmd-input", "logs", delay=100)
pause(page, 400)
page.keyboard.press("Enter")
pause(page, 200)
page.wait_for_selector("#terminal-output .xterm", timeout=5000)
pause(page, 2500)
# Switch to dozzle via command palette (on nas for lower latency)
open_command_palette(page)
pause(page, 300)
slow_type(page, "#cmd-input", "dozzle", delay=100)
pause(page, 400)
page.keyboard.press("Enter")
page.wait_for_url("**/stack/dozzle", timeout=5000)
pause(page, 1000)
# Run update action
open_command_palette(page)
pause(page, 300)
slow_type(page, "#cmd-input", "upda", delay=100)
pause(page, 400)
page.keyboard.press("Enter")
pause(page, 200)
page.wait_for_selector("#terminal-output .xterm", timeout=5000)
pause(page, 2500)
def _demo_dashboard_and_themes(page: Page, server_url: str) -> None:
"""Demo part 4: Dashboard and theme cycling."""
# Navigate to dashboard via command palette
open_command_palette(page)
pause(page, 300)
slow_type(page, "#cmd-input", "dash", delay=100)
pause(page, 400)
page.keyboard.press("Enter")
page.wait_for_url(server_url, timeout=5000)
pause(page, 800)
# Scroll to top of page to ensure dashboard is fully visible
page.evaluate("window.scrollTo(0, 0)")
pause(page, 600)
# Open theme picker and arrow down to Dracula (shows live preview)
page.locator("#theme-btn").click()
page.wait_for_selector("#cmd-palette[open]", timeout=2000)
pause(page, 400)
# Arrow down through themes with live preview until we reach Dracula
for _ in range(19):
page.keyboard.press("ArrowDown")
pause(page, 180)
# Select Dracula theme and end on it
pause(page, 400)
page.keyboard.press("Enter")
pause(page, 1500)
@pytest.mark.browser # type: ignore[misc]
def test_demo_workflow(recording_page: Page, server_url: str) -> None:
"""Record full workflow demo."""
page = recording_page
_demo_console_terminal(page, server_url)
_demo_config_editor(page)
_demo_stack_actions(page)
_demo_dashboard_and_themes(page, server_url)

docs/demos/web/record.py Executable file

@@ -0,0 +1,258 @@
#!/usr/bin/env python3
"""Record all web UI demos.
This script orchestrates recording of web UI demos using Playwright,
then converts the WebM recordings to GIF format.
Usage:
python docs/demos/web/record.py # Record all demos
python docs/demos/web/record.py navigation # Record specific demo
Requirements:
- Playwright with Chromium: playwright install chromium
- ffmpeg for GIF conversion: apt install ffmpeg / brew install ffmpeg
"""
from __future__ import annotations
import os
import re
import shutil
import subprocess
import sys
from pathlib import Path
from rich.console import Console
console = Console()
SCRIPT_DIR = Path(__file__).parent
REPO_DIR = SCRIPT_DIR.parent.parent.parent
OUTPUT_DIR = REPO_DIR / "docs" / "assets"
DEMOS = [
"navigation",
"stack",
"themes",
"workflow",
"console",
"shell",
]
# High-quality ffmpeg settings for VP8 encoding
# See: https://github.com/microsoft/playwright/issues/10855
# See: https://github.com/microsoft/playwright/issues/31424
#
# MAX_QUALITY: Lossless-like, largest files
# BALANCED_QUALITY: ~43% file size, nearly indistinguishable quality
MAX_QUALITY_ARGS = "-c:v vp8 -qmin 0 -qmax 0 -crf 0 -deadline best -speed 0 -b:v 0 -threads 0"
BALANCED_QUALITY_ARGS = "-c:v vp8 -qmin 0 -qmax 10 -crf 4 -deadline best -speed 0 -b:v 0 -threads 0"
# Choose which quality to use
VIDEO_QUALITY_ARGS = MAX_QUALITY_ARGS
def patch_playwright_video_quality() -> None:
"""Patch Playwright's videoRecorder.js to use high-quality encoding settings."""
from playwright._impl._driver import compute_driver_executable # noqa: PLC0415
# compute_driver_executable returns (node_path, cli_path)
result = compute_driver_executable()
node_path = result[0] if isinstance(result, tuple) else result
driver_path = Path(node_path).parent
video_recorder = driver_path / "package" / "lib" / "server" / "chromium" / "videoRecorder.js"
if not video_recorder.exists():
msg = f"videoRecorder.js not found at {video_recorder}"
raise FileNotFoundError(msg)
content = video_recorder.read_text()
# Check if already patched
if "deadline best" in content:
return # Already patched
# Pattern to match the ffmpeg args line
pattern = (
r"-c:v vp8 -qmin \d+ -qmax \d+ -crf \d+ -deadline \w+ -speed \d+ -b:v \w+ -threads \d+"
)
if not re.search(pattern, content):
msg = "Could not find ffmpeg args pattern in videoRecorder.js"
raise ValueError(msg)
# Replace with high-quality settings
new_content = re.sub(pattern, VIDEO_QUALITY_ARGS, content)
video_recorder.write_text(new_content)
console.print("[green]Patched Playwright for high-quality video recording[/green]")
def record_demo(name: str, index: int, total: int) -> Path | None:
"""Run a single demo and return the video path."""
console.print(f"[cyan][{index}/{total}][/cyan] [green]Recording:[/green] web-{name}")
demo_file = SCRIPT_DIR / f"demo_{name}.py"
if not demo_file.exists():
console.print(f"[red] Demo file not found: {demo_file}[/red]")
return None
# Create temp output dir for this recording
temp_dir = SCRIPT_DIR / ".recordings"
temp_dir.mkdir(exist_ok=True)
# Run pytest with video recording
# Set PYTHONPATH so conftest.py imports work
env = {**os.environ, "PYTHONPATH": str(SCRIPT_DIR)}
result = subprocess.run(
[
sys.executable,
"-m",
"pytest",
str(demo_file),
"-v",
"--no-cov",
"-x", # Stop on first failure
f"--basetemp={temp_dir}",
],
check=False,
cwd=REPO_DIR,
capture_output=True,
text=True,
env=env,
)
if result.returncode != 0:
console.print(f"[red] Failed to record {name}[/red]")
console.print(result.stdout)
console.print(result.stderr)
return None
# Find the recorded video
videos = list(temp_dir.rglob("*.webm"))
if not videos:
console.print(f"[red] No video found for {name}[/red]")
return None
# Use the most recent video
video = max(videos, key=lambda p: p.stat().st_mtime)
console.print(f"[green] Recorded: {video.name}[/green]")
return video
def convert_to_gif(webm_path: Path, output_name: str) -> Path:
"""Convert WebM to GIF using ffmpeg with palette optimization."""
gif_path = OUTPUT_DIR / f"{output_name}.gif"
palette_path = webm_path.parent / "palette.png"
# Two-pass approach for better quality
# Pass 1: Generate palette
subprocess.run(
[ # noqa: S607
"ffmpeg",
"-y",
"-i",
str(webm_path),
"-vf",
"fps=10,scale=1280:-1:flags=lanczos,palettegen=stats_mode=diff",
str(palette_path),
],
check=True,
capture_output=True,
)
# Pass 2: Generate GIF with palette
subprocess.run(
[ # noqa: S607
"ffmpeg",
"-y",
"-i",
str(webm_path),
"-i",
str(palette_path),
"-lavfi",
"fps=10,scale=1280:-1:flags=lanczos[x];[x][1:v]paletteuse=dither=bayer:bayer_scale=5:diff_mode=rectangle",
str(gif_path),
],
check=True,
capture_output=True,
)
palette_path.unlink(missing_ok=True)
return gif_path
def move_recording(video_path: Path, name: str) -> tuple[Path, Path]:
"""Move WebM and convert to GIF, returning both paths."""
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
output_name = f"web-{name}"
webm_dest = OUTPUT_DIR / f"{output_name}.webm"
shutil.copy2(video_path, webm_dest)
console.print(f"[blue] WebM: {webm_dest.relative_to(REPO_DIR)}[/blue]")
gif_path = convert_to_gif(video_path, output_name)
console.print(f"[blue] GIF: {gif_path.relative_to(REPO_DIR)}[/blue]")
return webm_dest, gif_path
def cleanup() -> None:
"""Clean up temporary recording files."""
temp_dir = SCRIPT_DIR / ".recordings"
if temp_dir.exists():
shutil.rmtree(temp_dir)
def main() -> int:
"""Record all web UI demos."""
console.print("[blue]Recording web UI demos...[/blue]")
console.print(f"Output directory: {OUTPUT_DIR}")
console.print()
# Patch Playwright for high-quality video recording
patch_playwright_video_quality()
# Determine which demos to record
if len(sys.argv) > 1:
demos_to_record = [d for d in sys.argv[1:] if d in DEMOS]
if not demos_to_record:
console.print(f"[red]Unknown demo(s). Available: {', '.join(DEMOS)}[/red]")
return 1
else:
demos_to_record = DEMOS
results: dict[str, tuple[Path | None, Path | None]] = {}
try:
for i, demo in enumerate(demos_to_record, 1):
video_path = record_demo(demo, i, len(demos_to_record))
if video_path:
webm, gif = move_recording(video_path, demo)
results[demo] = (webm, gif)
else:
results[demo] = (None, None)
console.print()
finally:
cleanup()
# Summary
console.print("[blue]=== Summary ===[/blue]")
success_count = sum(1 for w, _ in results.values() if w is not None)
console.print(f"Recorded: {success_count}/{len(demos_to_record)} demos")
console.print()
for demo, (webm, gif) in results.items(): # type: ignore[assignment]
status = "[green]OK[/green]" if webm else "[red]FAILED[/red]"
console.print(f" {demo}: {status}")
if webm:
console.print(f" {webm.relative_to(REPO_DIR)}")
if gif:
console.print(f" {gif.relative_to(REPO_DIR)}")
return 0 if success_count == len(demos_to_record) else 1
if __name__ == "__main__":
sys.exit(main())

docs/getting-started.md Normal file

@@ -0,0 +1,340 @@
---
icon: lucide/rocket
---
# Getting Started
This guide walks you through installing Compose Farm and setting up your first multi-host deployment.
## Prerequisites
Before you begin, ensure you have:
- **[uv](https://docs.astral.sh/uv/)** (recommended) or Python 3.11+
- **SSH key-based authentication** to your Docker hosts
- **Docker and Docker Compose** installed on all target hosts
- **Shared storage** for compose files (NFS, Syncthing, etc.)
## Installation
<video autoplay loop muted playsinline>
<source src="/assets/install.webm" type="video/webm">
</video>
### One-liner (recommended)
```bash
curl -fsSL https://compose-farm.nijho.lt/install | sh
```
This installs [uv](https://docs.astral.sh/uv/) if needed, then installs compose-farm.
### Using uv
If you already have [uv](https://docs.astral.sh/uv/) installed:
```bash
uv tool install compose-farm
```
### Using pip
If you already have Python 3.11+ installed:
```bash
pip install compose-farm
```
### Using Docker
```bash
docker run --rm \
-v $SSH_AUTH_SOCK:/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent \
-v ./compose-farm.yaml:/root/.config/compose-farm/compose-farm.yaml:ro \
ghcr.io/basnijholt/compose-farm up --all
```
**Running as non-root user** (recommended for NFS mounts):
By default, containers run as root. To preserve file ownership on mounted volumes, set these environment variables in your `.env` file:
```bash
# Add to .env file (one-time setup)
echo "CF_UID=$(id -u)" >> .env
echo "CF_GID=$(id -g)" >> .env
echo "CF_HOME=$HOME" >> .env
echo "CF_USER=$USER" >> .env
```
Or use [direnv](https://direnv.net/) to auto-set these variables when entering the directory:
```bash
cp .envrc.example .envrc && direnv allow
```
This ensures files like `compose-farm-state.yaml` and web UI edits are owned by your user instead of root. The `CF_USER` variable is required for SSH to work when running as a non-root user.
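These variables are typically consumed in the compose file via the `user:` field; the snippet below is only an illustrative sketch, not the project's actual `docker-compose.yml`:
```yaml
# Illustrative sketch only; the shipped docker-compose.yml may differ.
services:
  compose-farm:
    image: ghcr.io/basnijholt/compose-farm
    # Fall back to root (0:0) when CF_UID/CF_GID are unset
    user: "${CF_UID:-0}:${CF_GID:-0}"
    environment:
      - CF_USER=${CF_USER:-root}  # required for SSH when running as a non-root user
```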
### Verify Installation
```bash
cf --version
cf --help
```
## SSH Setup
Compose Farm uses SSH to run commands on remote hosts. You need passwordless SSH access.
### Option 1: SSH Agent (default)
If you already have SSH keys loaded in your agent:
```bash
# Verify keys are loaded
ssh-add -l
# Test connection
ssh user@192.168.1.10 "docker --version"
```
### Option 2: Dedicated Key (recommended for Docker)
For persistent access when running in Docker:
```bash
# Generate and distribute key to all hosts
cf ssh setup
# Check status
cf ssh status
```
This creates `~/.ssh/compose-farm/id_ed25519` and copies the public key to each host.
## Shared Storage Setup
Compose files must be accessible at the **same path** on all hosts. Common approaches:
### NFS Mount
```bash
# On each Docker host
sudo mount nas:/volume1/compose /opt/compose
# Or add to /etc/fstab
nas:/volume1/compose /opt/compose nfs defaults 0 0
```
### Directory Structure
```
/opt/compose/ # compose_dir in config
├── plex/
│ └── docker-compose.yml
├── grafana/
│ └── docker-compose.yml
├── nextcloud/
│ └── docker-compose.yml
└── jellyfin/
└── docker-compose.yml
```
## Configuration
### Create Config File
Create `compose-farm.yaml` in the directory where you'll run commands. For example, if your stacks are in `/opt/stacks`, place the config there too:
```bash
cd /opt/stacks
cf config init
```
Alternatively, use `~/.config/compose-farm/compose-farm.yaml` for a global config. You can also symlink a working directory config to the global location:
```bash
# Create config in your stacks directory, symlink to ~/.config
cf config symlink /opt/stacks/compose-farm.yaml
```
This way, `cf` commands work from anywhere while the config lives with your stacks.
#### Single host example
```yaml
# Where compose files are located (one folder per stack)
compose_dir: /opt/stacks
hosts:
local: localhost
stacks:
plex: local
grafana: local
nextcloud: local
```
#### Multi-host example
```yaml
# Where compose files are located (same path on all hosts)
compose_dir: /opt/compose
# Define your Docker hosts
hosts:
nuc:
address: 192.168.1.10
user: docker # SSH user
hp:
address: 192.168.1.11
# user defaults to current user
# Map stacks to hosts
stacks:
plex: nuc
grafana: nuc
nextcloud: hp
```
Each entry in `stacks:` maps to a folder under `compose_dir` that contains a compose file.
For cross-host HTTP routing, add Traefik labels and configure `traefik_file` (see [Traefik Integration](traefik.md)).
### Validate Configuration
```bash
cf check --local
```
This validates syntax without SSH connections. For full validation:
```bash
cf check
```
## First Commands
### Check Status
```bash
cf ps
```
Shows all configured stacks and their status.
### Start All Stacks
```bash
cf up --all
```
Starts all stacks on their assigned hosts.
### Start Specific Stacks
```bash
cf up plex grafana
```
### Apply Configuration
The most powerful command: `cf apply` reconciles what is actually running with what your config declares:
```bash
cf apply --dry-run # Preview changes
cf apply # Execute changes
```
This will:
1. Start stacks in config but not running
2. Migrate stacks on wrong host
3. Stop stacks removed from config
## Docker Network Setup
If your stacks use an external Docker network:
```bash
# Create network on all hosts
cf init-network
# Or specific hosts
cf init-network nuc hp
```
Default network: `mynetwork` with subnet `172.20.0.0/16`
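Stacks that use this network declare it as external in their compose files (the `myapp` service below is a placeholder):
```yaml
# In each stack's docker-compose.yml
networks:
  mynetwork:
    external: true

services:
  myapp:  # placeholder service
    image: nginx
    networks:
      - mynetwork
```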
## Example Workflow
### 1. Add a New Stack
Create the compose file:
```bash
# On any host (shared storage)
mkdir -p /opt/compose/gitea
cat > /opt/compose/gitea/docker-compose.yml << 'EOF'
services:
gitea:
image: docker.gitea.com/gitea:latest
container_name: gitea
environment:
- USER_UID=1000
- USER_GID=1000
volumes:
- /opt/config/gitea:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "3000:3000"
- "2222:22"
restart: unless-stopped
EOF
```
Add to config:
```yaml
stacks:
# ... existing stacks
gitea: nuc
```
Start the stack:
```bash
cf up gitea
```
### 2. Move a Stack to Another Host
Edit `compose-farm.yaml`:
```yaml
stacks:
plex: hp # Changed from nuc
```
Apply the change:
```bash
cf up plex
# Automatically: down on nuc, up on hp
```
Or use apply to reconcile everything:
```bash
cf apply
```
### 3. Update All Stacks
```bash
cf update --all
# Runs: pull + build + down + up for each stack
```
## Next Steps
- [Configuration Reference](configuration.md) - All config options
- [Commands Reference](commands.md) - Full CLI documentation
- [Traefik Integration](traefik.md) - Multi-host routing
- [Best Practices](best-practices.md) - Tips and limitations

docs/index.md Normal file

@@ -0,0 +1,167 @@
---
icon: lucide/server
---
# Compose Farm
A minimal CLI tool to run Docker Compose commands across multiple hosts via SSH.
## What is Compose Farm?
Compose Farm lets you manage Docker Compose stacks across multiple machines from a single command line. Think [Dockge](https://dockge.kuma.pet/) but with a CLI and web interface, designed for multi-host deployments.
Define which stacks run where in one YAML file, then use `cf apply` to make reality match your configuration.
It also works great on a single host with one folder per stack; just map stacks to `localhost`.
## Quick Demo
**CLI:**
<video autoplay loop muted playsinline>
<source src="/assets/quickstart.webm" type="video/webm">
</video>
**[Web UI](web-ui.md):**
<video autoplay loop muted playsinline>
<source src="/assets/web-workflow.webm" type="video/webm">
</video>
## Why Compose Farm?
| Problem | Compose Farm Solution |
|---------|----------------------|
| 100+ containers on one machine | Distribute across multiple hosts |
| Kubernetes too complex | Just SSH + docker compose |
| Swarm in maintenance mode | Zero infrastructure changes |
| Manual SSH for each host | Single command for all |
**It's a convenience wrapper, not a new paradigm.** Your existing `docker-compose.yml` files work unchanged.
## Quick Start
### Single host
No SSH, shared storage, or Traefik file-provider required.
```yaml
# compose-farm.yaml
compose_dir: /opt/stacks
hosts:
local: localhost
stacks:
plex: local
jellyfin: local
traefik: local
```
```bash
cf apply # Start/stop stacks to match config
```
### Multi-host
Requires SSH plus a shared `compose_dir` path on all hosts (NFS or sync).
```yaml
# compose-farm.yaml
compose_dir: /opt/compose
hosts:
server-1:
address: 192.168.1.10
server-2:
address: 192.168.1.11
stacks:
plex: server-1
jellyfin: server-2
grafana: server-1
```
```bash
cf apply # Stacks start, migrate, or stop as needed
```
Each entry in `stacks:` maps to a folder under `compose_dir` that contains a compose file.
For cross-host HTTP routing, add Traefik labels and configure `traefik_file` to generate file-provider config.
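The relevant keys in `compose-farm.yaml` look roughly like this (paths are illustrative):
```yaml
traefik_file: /opt/traefik/dynamic.d/compose-farm.yml  # generated file-provider config
traefik_stack: traefik  # stacks on Traefik's own host are skipped
```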
### Installation
```bash
uv tool install compose-farm
# or
pip install compose-farm
```
### Configuration
Create `compose-farm.yaml` in the directory where you'll run commands (e.g., `/opt/stacks`), or in `~/.config/compose-farm/`:
```yaml
compose_dir: /opt/compose
hosts:
nuc:
address: 192.168.1.10
user: docker
hp:
address: 192.168.1.11
stacks:
plex: nuc
grafana: nuc
nextcloud: hp
```
See [Configuration](configuration.md) for all options and the full search order.
### Usage
```bash
# Make reality match config
cf apply
# Start specific stacks
cf up plex grafana
# Check status
cf ps
# View logs
cf logs -f plex
```
## Key Features
- **Declarative configuration**: One YAML defines where everything runs
- **Auto-migration**: Change a host assignment, run `cf up`, stack moves automatically
<video autoplay loop muted playsinline>
<source src="/assets/migration.webm" type="video/webm">
</video>
- **Parallel execution**: Multiple stacks start/stop concurrently
- **State tracking**: Knows which stacks are running where
- **Traefik integration**: Generate file-provider config for cross-host routing
- **Zero changes**: Your compose files work as-is
## Requirements
- [uv](https://docs.astral.sh/uv/) (recommended) or Python 3.11+
- SSH key-based authentication to your Docker hosts
- Docker and Docker Compose on all target hosts
- Shared storage (compose files at same path on all hosts)
## Documentation
- [Getting Started](getting-started.md) - Installation and first steps
- [Configuration](configuration.md) - All configuration options
- [Commands](commands.md) - CLI reference
- [Web UI](web-ui.md) - Browser-based management interface
- [Architecture](architecture.md) - How it works under the hood
- [Traefik Integration](traefik.md) - Multi-host routing setup
- [Best Practices](best-practices.md) - Tips and limitations
## License
MIT

docs/install Normal file

@@ -0,0 +1,29 @@
#!/bin/sh
# Compose Farm bootstrap script
# Usage: curl -fsSL https://compose-farm.nijho.lt/install | sh
#
# This script installs uv (if needed) and then installs compose-farm as a uv tool.
set -e
if ! command -v uv >/dev/null 2>&1; then
echo "uv is not installed. Installing..."
curl -LsSf https://astral.sh/uv/install.sh | sh
echo "uv installation complete!"
echo ""
if [ -x ~/.local/bin/uv ]; then
~/.local/bin/uv tool install compose-farm
else
echo "Please restart your shell and run this script again"
echo ""
exit 0
fi
else
uv tool install compose-farm
fi
echo ""
echo "compose-farm is installed!"
echo "Run 'cf --help' to get started."
echo "If 'cf' is not found, restart your shell or run: source ~/.bashrc"


@@ -0,0 +1,21 @@
// Fix Safari video autoplay issues
(function() {
function initVideos() {
document.querySelectorAll('video[autoplay]').forEach(function(video) {
video.load();
video.play().catch(function() {});
});
}
// For initial page load (needed for Chrome)
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', initVideos);
} else {
initVideos();
}
// For MkDocs instant navigation (needed for Safari)
if (typeof document$ !== 'undefined') {
document$.subscribe(initVideos);
}
})();

docs/reddit-post.md Normal file

@@ -0,0 +1,79 @@
# Title options
- Multi-host Docker Compose without Kubernetes or file changes
- I built a CLI to run Docker Compose across hosts. Zero changes to your files.
- I made a CLI to run Docker Compose across multiple hosts without Kubernetes or Swarm
---
I've been running 100+ Docker Compose stacks on a single machine, and it kept running out of memory. I needed to spread stacks across multiple hosts, but:
- **Kubernetes** felt like overkill. I don't need pods, ingress controllers, or 10x more YAML.
- **Docker Swarm** is basically in maintenance mode.
- Both require rewriting my compose files.
So I built **Compose Farm**, a simple CLI that runs `docker compose` commands over SSH. No agents, no cluster setup, no changes to your existing compose files.
## How it works
One YAML file maps stacks to hosts:
```yaml
compose_dir: /opt/stacks
hosts:
nuc: 192.168.1.10
hp: 192.168.1.11
stacks:
plex: nuc
jellyfin: hp
grafana: nuc
nextcloud: nuc
```
Then just:
```bash
cf up plex # runs on nuc via SSH
cf apply # makes the running state match the config on all hosts (like Terraform apply)
cf up --all # starts everything on their assigned hosts
cf logs -f plex # streams logs
cf ps # shows status across all hosts
```
## Auto-migration
Change a stack's host in the config and run `cf up`. It stops the stack on the old host and starts it on the new one. No manual SSH needed.
```yaml
# Before
plex: nuc
# After (just change this)
plex: hp
```
```bash
cf up plex # migrates automatically
```
## Requirements
- SSH key auth to your hosts
- Same paths on all hosts (I use NFS from my NAS)
- That's it. No agents, no daemons.
## What it doesn't do
- No high availability (if a host goes down, stacks don't auto-migrate)
- No overlay networking (containers on different hosts can't talk via Docker DNS)
- No health checks or automatic restarts
It's a convenience wrapper around `docker compose` + SSH. If you need failover or cross-host container networking, you probably do need Swarm or Kubernetes.
## Links
- GitHub: https://github.com/basnijholt/compose-farm
- Install: `uv tool install compose-farm` or `pip install compose-farm`
Happy to answer questions or take feedback!

docs/traefik.md Normal file

@@ -0,0 +1,385 @@
---
icon: lucide/globe
---
# Traefik Integration
Compose Farm can generate Traefik file-provider configuration for routing traffic across multiple hosts.
## The Problem
When you run Traefik on one host but stacks on others, Traefik's docker provider can't see remote containers. The file provider bridges this gap.
```
Internet
┌─────────────────────────────────────────────────────────────┐
│ Host: nuc │
│ │
│ ┌─────────┐ │
│ │ Traefik │◄─── Docker provider sees local containers │
│ │ │ │
│ │ │◄─── File provider sees remote stacks │
│ └────┬────┘ (from compose-farm.yml) │
│ │ │
└───────┼─────────────────────────────────────────────────────┘
├────────────────────┐
│ │
▼ ▼
┌───────────────┐ ┌───────────────┐
│ Host: hp │ │ Host: nas │
│ │ │ │
│ plex:32400 │ │ jellyfin:8096 │
└───────────────┘ └───────────────┘
```
## How It Works
1. Your compose files have standard Traefik labels
2. Compose Farm reads labels and generates file-provider config
3. Traefik watches the generated file
4. Traffic routes to remote stacks via host IP + published port
## Setup
### Step 1: Configure Traefik File Provider
Add directory watching to your Traefik config:
```yaml
# traefik.yml or docker-compose.yml command
providers:
file:
directory: /opt/traefik/dynamic.d
watch: true
```
Or via command line:
```yaml
services:
traefik:
command:
- --providers.file.directory=/dynamic.d
- --providers.file.watch=true
volumes:
- /opt/traefik/dynamic.d:/dynamic.d:ro
```
### Step 2: Add Traefik Labels to Services
Your compose files use standard Traefik labels:
```yaml
# /opt/compose/plex/docker-compose.yml
services:
plex:
image: lscr.io/linuxserver/plex
ports:
- "32400:32400" # IMPORTANT: Must publish port!
labels:
- traefik.enable=true
- traefik.http.routers.plex.rule=Host(`plex.example.com`)
- traefik.http.routers.plex.entrypoints=websecure
- traefik.http.routers.plex.tls.certresolver=letsencrypt
- traefik.http.services.plex.loadbalancer.server.port=32400
```
**Important:** Services must publish ports for cross-host routing. Traefik connects via `host_ip:published_port`.
### Step 3: Generate File Provider Config
```bash
cf traefik-file --all -o /opt/traefik/dynamic.d/compose-farm.yml
```
This generates:
```yaml
# /opt/traefik/dynamic.d/compose-farm.yml
http:
routers:
plex:
rule: Host(`plex.example.com`)
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: plex
services:
plex:
loadBalancer:
servers:
- url: http://192.168.1.11:32400
```
## Auto-Regeneration
Configure automatic regeneration in `compose-farm.yaml`:
```yaml
compose_dir: /opt/compose
traefik_file: /opt/traefik/dynamic.d/compose-farm.yml
traefik_stack: traefik
hosts:
nuc:
address: 192.168.1.10
hp:
address: 192.168.1.11
stacks:
traefik: nuc # Traefik runs here
plex: hp # Routed via file-provider
grafana: hp
```
With `traefik_file` set, these commands auto-regenerate the config:
- `cf up`
- `cf down`
- `cf restart`
- `cf update`
- `cf apply`
### traefik_stack Option
When set, stacks on the **same host as Traefik** are skipped in file-provider output. Traefik's docker provider handles them directly.
```yaml
traefik_stack: traefik # traefik runs on nuc
stacks:
traefik: nuc # NOT in file-provider (docker provider)
portainer: nuc # NOT in file-provider (docker provider)
plex: hp # IN file-provider (cross-host)
```
## Label Syntax
### Routers
```yaml
labels:
# Basic router
- traefik.http.routers.myapp.rule=Host(`app.example.com`)
- traefik.http.routers.myapp.entrypoints=websecure
# With TLS
- traefik.http.routers.myapp.tls=true
- traefik.http.routers.myapp.tls.certresolver=letsencrypt
# With middleware
- traefik.http.routers.myapp.middlewares=auth@file
```
### Services
```yaml
labels:
# Load balancer port
- traefik.http.services.myapp.loadbalancer.server.port=8080
# Health check
- traefik.http.services.myapp.loadbalancer.healthcheck.path=/health
```
### Middlewares
Middlewares should be defined in a separate file (not generated by Compose Farm):
```yaml
# /opt/traefik/dynamic.d/middlewares.yml
http:
middlewares:
auth:
basicAuth:
users:
- "user:$apr1$..."
```
Reference in labels:
```yaml
labels:
- traefik.http.routers.myapp.middlewares=auth@file
```
## Variable Substitution
Labels can use environment variables:
```yaml
labels:
- traefik.http.routers.myapp.rule=Host(`${DOMAIN}`)
```
Compose Farm resolves variables from:
1. Stack's `.env` file
2. Current environment
```bash
# /opt/compose/myapp/.env
DOMAIN=app.example.com
```
## Port Resolution
Compose Farm determines the target URL from published ports:
```yaml
ports:
- "8080:80" # Uses 8080
- "192.168.1.11:8080:80" # Uses 8080 on specific IP
```
If no suitable port is found, a warning is shown.
## Complete Example
### compose-farm.yaml
```yaml
compose_dir: /opt/compose
traefik_file: /opt/traefik/dynamic.d/compose-farm.yml
traefik_stack: traefik
hosts:
nuc:
address: 192.168.1.10
hp:
address: 192.168.1.11
nas:
address: 192.168.1.100
stacks:
traefik: nuc
plex: hp
jellyfin: nas
grafana: nuc
nextcloud: nuc
```
### /opt/compose/plex/docker-compose.yml
```yaml
services:
plex:
image: lscr.io/linuxserver/plex
container_name: plex
ports:
- "32400:32400"
labels:
- traefik.enable=true
- traefik.http.routers.plex.rule=Host(`plex.example.com`)
- traefik.http.routers.plex.entrypoints=websecure
- traefik.http.routers.plex.tls.certresolver=letsencrypt
- traefik.http.services.plex.loadbalancer.server.port=32400
# ... other config
```
### Generated compose-farm.yml
```yaml
http:
routers:
plex:
rule: Host(`plex.example.com`)
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: plex
jellyfin:
rule: Host(`jellyfin.example.com`)
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: jellyfin
services:
plex:
loadBalancer:
servers:
- url: http://192.168.1.11:32400
jellyfin:
loadBalancer:
servers:
- url: http://192.168.1.100:8096
```
Note: `grafana` and `nextcloud` are NOT in the file because they're on the same host as Traefik (`nuc`).
## Combining with Existing Config
If you have existing Traefik dynamic config:
```bash
# Move existing config to directory
mkdir -p /opt/traefik/dynamic.d
mv /opt/traefik/dynamic.yml /opt/traefik/dynamic.d/manual.yml
# Generate Compose Farm config
cf traefik-file --all -o /opt/traefik/dynamic.d/compose-farm.yml
# Update Traefik to watch directory
# --providers.file.directory=/dynamic.d
```
Traefik merges all YAML files in the directory.
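The watched directory then contains both files, for example:
```
/opt/traefik/dynamic.d/
├── manual.yml        # your existing dynamic config
└── compose-farm.yml  # generated by cf traefik-file
```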
## Troubleshooting
### Stack Not Accessible
1. **Check port is published:**
```yaml
ports:
- "8080:80" # Must be published, not just exposed
```
2. **Check label syntax:**
```bash
cf check mystack
```
3. **Verify generated config:**
```bash
cf traefik-file mystack
```
4. **Check Traefik logs:**
```bash
docker logs traefik
```
### Config Not Regenerating
1. **Verify traefik_file is set:**
```bash
cf config show | grep traefik
```
2. **Check file permissions:**
```bash
ls -la /opt/traefik/dynamic.d/
```
3. **Manually regenerate:**
```bash
cf traefik-file --all -o /opt/traefik/dynamic.d/compose-farm.yml
```
### Variable Not Resolved
1. **Check .env file exists:**
```bash
cat /opt/compose/myservice/.env
```
2. **Test variable resolution:**
```bash
cd /opt/compose/myservice
docker compose config
```

docs/truenas-nested-nfs.md Normal file

@@ -0,0 +1,169 @@
# TrueNAS NFS: Accessing Child ZFS Datasets
When NFS-exporting a parent ZFS dataset on TrueNAS, child datasets appear as **empty directories** to NFS clients. This document explains the problem and provides a workaround.
## The Problem
TrueNAS structures storage as ZFS datasets. A common pattern is:
```
tank/data <- parent dataset (NFS exported)
tank/data/app1 <- child dataset
tank/data/app2 <- child dataset
```
When you create an NFS share for `tank/data`, clients mount it and see the `app1/` and `app2/` directories—but they're empty. This happens because each ZFS dataset is a separate filesystem, and NFS doesn't traverse into child filesystems by default.
## The Solution: `crossmnt`
The NFS `crossmnt` export option tells the server to allow clients to traverse into child filesystems. However, TrueNAS doesn't expose this option in the UI.
### Workaround Script
This Python script injects `crossmnt` into `/etc/exports`:
```python
#!/usr/bin/env python3
"""
Add crossmnt to TrueNAS NFS exports for child dataset visibility.
Usage: fix-nfs-crossmnt.py /mnt/pool/dataset
Setup:
1. scp fix-nfs-crossmnt.py root@truenas.local:/root/
2. chmod +x /root/fix-nfs-crossmnt.py
3. Test: /root/fix-nfs-crossmnt.py /mnt/pool/dataset
4. Add cron job: TrueNAS UI > System > Advanced > Cron Jobs
Command: /root/fix-nfs-crossmnt.py /mnt/pool/dataset
Schedule: */5 * * * *
"""
import re
import subprocess
import sys
from pathlib import Path
EXPORTS_FILE = Path("/etc/exports")
def main():
if len(sys.argv) != 2:
print(f"Usage: {sys.argv[0]} /mnt/pool/dataset", file=sys.stderr)
return 1
export_path = sys.argv[1]
content = EXPORTS_FILE.read_text()
if f'"{export_path}"' not in content:
print(f"ERROR: {export_path} not found in {EXPORTS_FILE}", file=sys.stderr)
return 1
lines = content.splitlines()
result = []
in_block = False
modified = False
for line in lines:
if f'"{export_path}"' in line:
in_block = True
elif line.startswith('"'):
in_block = False
if in_block and line[:1] in (" ", "\t") and "crossmnt" not in line:
line = re.sub(r"\)(\\\s*)?$", r",crossmnt)\1", line)
modified = True
result.append(line)
if not modified:
return 0 # Already applied
EXPORTS_FILE.write_text("\n".join(result) + "\n")
subprocess.run(["exportfs", "-ra"], check=True)
print(f"Added crossmnt to {export_path}")
return 0
if __name__ == "__main__":
sys.exit(main())
```
## Setup Instructions
### 1. Copy the script to TrueNAS
```bash
scp fix-nfs-crossmnt.py root@truenas.local:/root/
ssh root@truenas.local chmod +x /root/fix-nfs-crossmnt.py
```
### 2. Test manually
```bash
ssh root@truenas.local
# Run the script
/root/fix-nfs-crossmnt.py /mnt/tank/data
# Verify crossmnt was added
cat /etc/exports
```
You should see `,crossmnt` added to the client options:
```
"/mnt/tank/data"\
192.168.1.10(sec=sys,rw,no_subtree_check,crossmnt)\
192.168.1.11(sec=sys,rw,no_subtree_check,crossmnt)
```
### 3. Verify on NFS client
```bash
# Before: empty directory
ls /mnt/data/app1/
# (nothing)
# After: actual contents visible
ls /mnt/data/app1/
# config.yaml data/ logs/
```
### 4. Make it persistent
TrueNAS regenerates `/etc/exports` when you modify NFS shares in the UI. To survive this, set up a cron job:
1. Go to **TrueNAS UI → System → Advanced → Cron Jobs → Add**
2. Configure:
- **Description:** Fix NFS crossmnt
- **Command:** `/root/fix-nfs-crossmnt.py /mnt/tank/data`
- **Run As User:** root
- **Schedule:** `*/5 * * * *` (every 5 minutes)
- **Enabled:** checked
3. Save
The script is idempotent—it only modifies the file if `crossmnt` is missing, and skips the write entirely if already applied.
## How It Works
1. Parses `/etc/exports` to find the specified export block
2. Adds `,crossmnt` before the closing `)` on each client line
3. Writes the file only if changes were made
4. Runs `exportfs -ra` to reload the NFS configuration
## Why Not Use SMB Instead?
SMB handles child datasets seamlessly, but:
- NFS is simpler for Linux-to-Linux with matching UIDs
- SMB requires more complex permission mapping for Docker volumes
- Many existing setups already use NFS
## Related Links
- [TrueNAS Forum: Add crossmnt option to NFS exports](https://forums.truenas.com/t/add-crossmnt-option-to-nfs-exports/10573)
- [exports(5) man page](https://man7.org/linux/man-pages/man5/exports.5.html) - see `crossmnt` option
## Tested On
- TrueNAS SCALE 24.10


@@ -0,0 +1,65 @@
# TrueNAS NFS: Disabling Root Squash
When running Docker containers on NFS-mounted storage, containers that run as root will fail to write files unless root squash is disabled. This document explains the problem and solution.
## The Problem
By default, NFS uses "root squash" which maps the root user (UID 0) on clients to `nobody` on the server. This is a security feature to prevent remote root users from having root access to the NFS server's files.
However, many Docker containers run as root internally. When these containers try to write to NFS-mounted volumes, the writes fail with "Permission denied" because the NFS server sees them as `nobody`, not `root`.
Example error in container logs:
```
System.UnauthorizedAccessException: Access to the path '/data' is denied.
Error: EACCES: permission denied, mkdir '/app/data'
```
## The Solution
In TrueNAS, configure the NFS share to map remote root to local root:
### TrueNAS SCALE UI
1. Go to **Shares → NFS**
2. Edit your share
3. Under **Advanced Options**:
- **Maproot User**: `root`
- **Maproot Group**: `wheel`
4. Save
### Result in /etc/exports
```
"/mnt/pool/data"\
192.168.1.25(sec=sys,rw,no_root_squash,no_subtree_check)\
192.168.1.26(sec=sys,rw,no_root_squash,no_subtree_check)
```
The `no_root_squash` option means remote root is treated as root on the server.
## Why `wheel`?
On FreeBSD/TrueNAS, the root user's primary group is `wheel` (GID 0), not `root` like on Linux. So `root:wheel` = `0:0`.
## Security Considerations
Disabling root squash means any machine that can mount the NFS share has full root access to those files. This is acceptable when:
- The NFS clients are on a trusted private network
- Only known hosts (by IP) are allowed to mount the share
- The data isn't security-critical
For home lab setups with Docker containers, this is typically fine.
## Alternative: Run Containers as Non-Root
If you prefer to keep root squash enabled, you can run containers as a non-root user:
1. **LinuxServer.io images**: Set `PUID=1000` and `PGID=1000` environment variables
2. **Other images**: Add `user: "1000:1000"` to the compose service
However, not all containers support running as non-root (they may need to bind to privileged ports, create system directories, etc.).
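For illustration, those two approaches look roughly like this in a compose file (image names are placeholders):
```yaml
services:
  # LinuxServer.io-style image: map ownership via PUID/PGID
  app-lsio:
    image: lscr.io/linuxserver/someapp  # placeholder
    environment:
      - PUID=1000
      - PGID=1000

  # Generic image: run the container process as a specific UID:GID
  app-generic:
    image: someorg/someapp  # placeholder
    user: "1000:1000"
```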
## Tested On
- TrueNAS SCALE 24.10

docs/web-ui.md Normal file

@@ -0,0 +1,132 @@
---
icon: lucide/layout-dashboard
---
# Web UI
Compose Farm includes a web interface for managing stacks from your browser. Start it with:
```bash
cf web
```
Then open [http://localhost:8000](http://localhost:8000).
## Features
### Full Workflow
Console terminal, config editor, stack navigation, actions (up, logs, update), dashboard overview, and theme switching - all in one flow.
<video autoplay loop muted playsinline>
<source src="/assets/web-workflow.webm" type="video/webm">
</video>
### Stack Actions
Navigate to any stack and use the command palette to trigger actions like restart, pull, update, or view logs. Output streams in real-time via WebSocket.
<video autoplay loop muted playsinline>
<source src="/assets/web-stack.webm" type="video/webm">
</video>
### Theme Switching
35 themes available via the command palette. Type `theme:` to filter, then use arrow keys to preview themes live before selecting.
<video autoplay loop muted playsinline>
<source src="/assets/web-themes.webm" type="video/webm">
</video>
### Command Palette
Press `Ctrl+K` (or `Cmd+K` on macOS) to open the command palette. Use fuzzy search to quickly navigate, trigger actions, or change themes.
<video autoplay loop muted playsinline>
<source src="/assets/web-navigation.webm" type="video/webm">
</video>
## Pages
### Dashboard (`/`)
- Stack overview with status indicators
- Host statistics
- Pending operations (migrations, orphaned stacks)
- Quick actions via command palette
### Stack Detail (`/stack/{name}`)
- Compose file editor (Monaco)
- Environment file editor
- Action buttons: Up, Down, Restart, Update, Pull, Logs
- Container shell access (exec into running containers)
- Terminal output for running commands
Files are automatically backed up before saving to `~/.config/compose-farm/backups/`.
### Console (`/console`)
- Full shell access to any host
- File editor for remote files
- Monaco editor with syntax highlighting
<video autoplay loop muted playsinline>
<source src="/assets/web-console.webm" type="video/webm">
</video>
### Container Shell
Click the Shell button on any running container to exec into it directly from the browser.
<video autoplay loop muted playsinline>
<source src="/assets/web-shell.webm" type="video/webm">
</video>
## Keyboard Shortcuts
| Shortcut | Action |
|----------|--------|
| `Ctrl+K` / `Cmd+K` | Open command palette |
| `Ctrl+S` / `Cmd+S` | Save editors |
| `Escape` | Close command palette |
| `Arrow keys` | Navigate command list |
| `Enter` | Execute selected command |
## Starting the Server
```bash
# Default: http://0.0.0.0:8000
cf web
# Custom port
cf web --port 3000
# Development mode with auto-reload
cf web --reload
# Bind to specific interface
cf web --host 127.0.0.1
```
## Requirements
The web UI requires additional dependencies:
```bash
# If installed via pip
pip install compose-farm[web]
# If installed via uv
uv tool install 'compose-farm[web]'
```
## Architecture
The web UI uses:
- **FastAPI** - Backend API and WebSocket handling
- **HTMX** - Dynamic page updates without full reloads
- **DaisyUI + Tailwind** - Theming and styling
- **Monaco Editor** - Code editing for compose/env files
- **xterm.js** - Terminal emulation for logs and shell access

View File

@@ -1,42 +1,171 @@
# Compose Farm Examples
Real-world examples demonstrating compose-farm patterns for multi-host Docker deployments.
## Stacks
| Stack | Type | Demonstrates |
|---------|------|--------------|
| [traefik](traefik/) | Infrastructure | Reverse proxy, Let's Encrypt, file-provider |
| [mealie](mealie/) | Single container | Traefik labels, resource limits, environment vars |
| [uptime-kuma](uptime-kuma/) | Single container | Docker socket, user mapping, custom DNS |
| [paperless-ngx](paperless-ngx/) | Multi-container | Redis + App stack (SQLite) |
| [autokuma](autokuma/) | Multi-host | Demonstrates `all` keyword (runs on every host) |
## Key Patterns
### External Network
All stacks connect to a shared external network for inter-service communication:
```yaml
networks:
  mynetwork:
    external: true
```
Create it on each host with consistent settings:
```bash
compose-farm init-network --network mynetwork --subnet 172.20.0.0/16
```
### Traefik Labels (Dual Routes)
Stacks expose two routes for different access patterns:
1. **HTTPS route** (`websecure` entrypoint): For your custom domain with Let's Encrypt TLS
2. **HTTP route** (`web` entrypoint): For `.local` domains on your LAN (no TLS needed)
This pattern allows accessing stacks via:
- `https://mealie.example.com` - from anywhere, with TLS
- `http://mealie.local` - from your local network, no TLS overhead
```yaml
labels:
  # HTTPS route for custom domain (e.g., mealie.example.com)
  - traefik.enable=true
  - traefik.http.routers.myapp.rule=Host(`myapp.${DOMAIN}`)
  - traefik.http.routers.myapp.entrypoints=websecure
  - traefik.http.services.myapp.loadbalancer.server.port=8080
  # HTTP route for .local domain (e.g., myapp.local)
  - traefik.http.routers.myapp-local.rule=Host(`myapp.local`)
  - traefik.http.routers.myapp-local.entrypoints=web
```
> **Note:** `.local` domains require local DNS (e.g., Pi-hole, Technitium) to resolve to your Traefik host.
### Environment Variables
Each stack has a `.env` file for secrets and domain configuration.
Edit these files to set your domain and credentials:
```bash
# Example: set your domain
echo "DOMAIN=example.com" > mealie/.env
```
Variables like `${DOMAIN}` are substituted at runtime by Docker Compose.
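For instance, with `DOMAIN=example.com` in `mealie/.env`, a label written against the variable resolves to the concrete hostname when Compose evaluates the file:

```yaml
labels:
  # As written in the compose file:
  - traefik.http.routers.mealie.rule=Host(`mealie.${DOMAIN}`)
  # What Traefik effectively receives after substitution (DOMAIN=example.com):
  # - traefik.http.routers.mealie.rule=Host(`mealie.example.com`)
```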
### NFS Volume Mounts
All data is stored on shared NFS storage at `/mnt/data/`:
```yaml
volumes:
  - /mnt/data/myapp:/app/data
```
This allows stacks to migrate between hosts without data loss.
### Multi-Host Stacks
Stacks that need to run on every host (e.g., monitoring agents):
```yaml
# In compose-farm.yaml
stacks:
  autokuma: all  # Runs on every configured host
```
### Multi-Container Stacks
Database-backed apps with multiple services:
```yaml
services:
  redis:
    image: redis:7
  app:
    depends_on:
      - redis
```
> **NFS + PostgreSQL Warning:** PostgreSQL should NOT run on NFS storage due to
> fsync and file locking issues. Use SQLite (safe for single-writer on NFS) or
> keep PostgreSQL data on local volumes (non-migratable).
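If you do need PostgreSQL, a minimal sketch of keeping its data on a local named volume instead of NFS (names here are illustrative; note the database then cannot migrate with the stack):

```yaml
services:
  db:
    image: postgres:16
    networks:
      - mynetwork
    volumes:
      # Named volume lives on the host's local disk, not on NFS
      - app-db:/var/lib/postgresql/data
volumes:
  app-db: {}
```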
### AutoKuma Labels (Optional)
The autokuma example demonstrates compose-farm's **multi-host feature** - running the same stack on all hosts using the `all` keyword. AutoKuma itself is not part of compose-farm; it's just a good example because it needs to run on every host to monitor local Docker containers.
[AutoKuma](https://github.com/BigBoot/AutoKuma) automatically creates Uptime Kuma monitors from Docker labels:
```yaml
labels:
  - kuma.myapp.http.name=My App
  - kuma.myapp.http.url=https://myapp.${DOMAIN}
```
## Quick Start
```bash
cd examples
# 1. Create the shared network on all hosts
compose-farm init-network
# 2. Start Traefik first (the reverse proxy)
compose-farm up traefik
# 3. Start other stacks
compose-farm up mealie uptime-kuma
# 4. Check status
compose-farm ps
# 5. Generate Traefik file-provider config for cross-host routing
compose-farm traefik-file --all
# 6. View logs
compose-farm logs mealie
# 7. Stop everything
compose-farm down --all
```
## Configuration
The `compose-farm.yaml` shows a multi-host setup:
- **primary** (192.168.1.10): Runs Traefik and heavy stacks
- **secondary** (192.168.1.11): Runs lighter stacks
- **autokuma**: Runs on ALL hosts to monitor local containers
When Traefik runs on `primary` and a stack runs on `secondary`, compose-farm
automatically generates file-provider config so Traefik can route to it.
## Traefik File-Provider
When stacks run on hosts other than Traefik's, use `traefik-file` to generate routing config:
```bash
# Generate config for all stacks
compose-farm traefik-file --all -o traefik/dynamic.d/compose-farm.yml
# Or configure auto-generation in compose-farm.yaml:
traefik_file: /opt/stacks/traefik/dynamic.d/compose-farm.yml
traefik_stack: traefik
```
With `traefik_file` configured, compose-farm automatically regenerates the config after `up`, `down`, `restart`, and `update` commands.

4
examples/autokuma/.env Normal file
View File

@@ -0,0 +1,4 @@
# Copy to .env and fill in your values
DOMAIN=example.com
UPTIME_KUMA_USERNAME=admin
UPTIME_KUMA_PASSWORD=your-uptime-kuma-password

View File

@@ -0,0 +1,31 @@
# AutoKuma - Automatic Uptime Kuma monitor creation from Docker labels
# Demonstrates: Multi-host service (runs on ALL hosts)
#
# This service monitors Docker containers on each host and automatically
# creates Uptime Kuma monitors based on container labels.
#
# In compose-farm.yaml, configure as:
# autokuma: all
#
# This runs the same container on every host, so each host's local
# Docker socket is monitored.
name: autokuma
services:
autokuma:
image: ghcr.io/bigboot/autokuma:latest
container_name: autokuma
restart: unless-stopped
environment:
# Connect to your Uptime Kuma instance
AUTOKUMA__KUMA__URL: https://uptime.${DOMAIN}
AUTOKUMA__KUMA__USERNAME: ${UPTIME_KUMA_USERNAME}
AUTOKUMA__KUMA__PASSWORD: ${UPTIME_KUMA_PASSWORD}
# Tag for auto-created monitors
AUTOKUMA__TAG__NAME: autokuma
AUTOKUMA__TAG__COLOR: "#10B981"
volumes:
# Access local Docker socket to discover containers
- /var/run/docker.sock:/var/run/docker.sock:ro
# Custom DNS for resolving internal domains
dns:
- 192.168.1.1 # Your local DNS server

View File

@@ -0,0 +1,9 @@
deployed:
  autokuma:
    - primary
    - secondary
    - local
  mealie: secondary
  paperless-ngx: primary
  traefik: primary
  uptime-kuma: secondary

View File

@@ -1,11 +1,40 @@
# Example Compose Farm configuration
# Demonstrates a multi-host setup with NFS shared storage
#
# To test locally: Update the host addresses and run from the examples directory
compose_dir: /opt/stacks/compose-farm/examples
# Auto-regenerate Traefik file-provider config after up/down/restart/update
traefik_file: /opt/stacks/compose-farm/examples/traefik/dynamic.d/compose-farm.yml
traefik_stack: traefik  # Skip Traefik's host in file-provider (docker provider handles it)
hosts:
  # Primary server - runs Traefik and most stacks
  # Full form with all options
  primary:
    address: 192.168.1.10
    user: deploy
    port: 22
  # Secondary server - runs some stacks for load distribution
  # Short form (user defaults to current user, port defaults to 22)
  secondary: 192.168.1.11
  # Local execution (no SSH) - for testing or when running on the host itself
  local: localhost
stacks:
  # Infrastructure (runs on primary where Traefik is)
  traefik: primary
  # Multi-host stacks (runs on ALL hosts)
  # AutoKuma monitors Docker containers on each host
  autokuma: all
  # Primary server stacks
  paperless-ngx: primary
  # Secondary server stacks (distributed for performance)
  mealie: secondary
  uptime-kuma: secondary

View File

@@ -1,4 +0,0 @@
services:
hello:
image: hello-world
container_name: sdc-hello

2
examples/mealie/.env Normal file
View File

@@ -0,0 +1,2 @@
# Copy to .env and fill in your values
DOMAIN=example.com

View File

@@ -0,0 +1,47 @@
# Mealie - Recipe manager
# Simple single-container service with Traefik labels
#
# Demonstrates:
# - HTTPS route: mealie.${DOMAIN} (e.g., mealie.example.com) with Let's Encrypt
# - HTTP route: mealie.local for LAN access without TLS
# - External network, resource limits, environment variables
name: mealie
services:
mealie:
image: ghcr.io/mealie-recipes/mealie:latest
container_name: mealie
restart: unless-stopped
networks:
- mynetwork
ports:
- "9925:9000"
deploy:
resources:
limits:
memory: 1000M
volumes:
- /mnt/data/mealie:/app/data
environment:
ALLOW_SIGNUP: "false"
PUID: 1000
PGID: 1000
TZ: America/Los_Angeles
MAX_WORKERS: 1
WEB_CONCURRENCY: 1
BASE_URL: https://mealie.${DOMAIN}
labels:
# HTTPS route: mealie.example.com (requires DOMAIN in .env)
- traefik.enable=true
- traefik.http.routers.mealie.rule=Host(`mealie.${DOMAIN}`)
- traefik.http.routers.mealie.entrypoints=websecure
- traefik.http.services.mealie.loadbalancer.server.port=9000
# HTTP route: mealie.local (for LAN access, no TLS)
- traefik.http.routers.mealie-local.rule=Host(`mealie.local`)
- traefik.http.routers.mealie-local.entrypoints=web
# AutoKuma: automatically create Uptime Kuma monitor
- kuma.mealie.http.name=Mealie
- kuma.mealie.http.url=https://mealie.${DOMAIN}
networks:
mynetwork:
external: true

View File

@@ -1,6 +0,0 @@
services:
nginx:
image: nginx:alpine
container_name: sdc-nginx
ports:
- "8080:80"

View File

@@ -0,0 +1,3 @@
# Copy to .env and fill in your values
DOMAIN=example.com
PAPERLESS_SECRET_KEY=change-me-to-a-random-string

View File

@@ -0,0 +1,60 @@
# Paperless-ngx - Document management system
#
# Demonstrates:
# - HTTPS route: paperless.${DOMAIN} (e.g., paperless.example.com) with Let's Encrypt
# - HTTP route: paperless.local for LAN access without TLS
# - Multi-container stack (Redis + App with SQLite)
#
# NOTE: This example uses SQLite (the default) instead of PostgreSQL.
# PostgreSQL should NOT be used with NFS storage due to fsync/locking issues.
# If you need PostgreSQL, use local volumes for the database.
name: paperless-ngx
services:
redis:
image: redis:8
container_name: paperless-redis
restart: unless-stopped
networks:
- mynetwork
volumes:
- /mnt/data/paperless/redis:/data
paperless:
image: ghcr.io/paperless-ngx/paperless-ngx:latest
container_name: paperless
restart: unless-stopped
depends_on:
- redis
networks:
- mynetwork
ports:
- "8000:8000"
volumes:
# SQLite database stored here (safe on NFS for single-writer)
- /mnt/data/paperless/data:/usr/src/paperless/data
- /mnt/data/paperless/media:/usr/src/paperless/media
- /mnt/data/paperless/export:/usr/src/paperless/export
- /mnt/data/paperless/consume:/usr/src/paperless/consume
environment:
PAPERLESS_REDIS: redis://redis:6379
PAPERLESS_URL: https://paperless.${DOMAIN}
PAPERLESS_SECRET_KEY: ${PAPERLESS_SECRET_KEY}
USERMAP_UID: 1000
USERMAP_GID: 1000
labels:
# HTTPS route: paperless.example.com (requires DOMAIN in .env)
- traefik.enable=true
- traefik.http.routers.paperless.rule=Host(`paperless.${DOMAIN}`)
- traefik.http.routers.paperless.entrypoints=websecure
- traefik.http.services.paperless.loadbalancer.server.port=8000
- traefik.docker.network=mynetwork
# HTTP route: paperless.local (for LAN access, no TLS)
- traefik.http.routers.paperless-local.rule=Host(`paperless.local`)
- traefik.http.routers.paperless-local.entrypoints=web
# AutoKuma: automatically create Uptime Kuma monitor
- kuma.paperless.http.name=Paperless
- kuma.paperless.http.url=https://paperless.${DOMAIN}
networks:
mynetwork:
external: true

5
examples/traefik/.env Normal file
View File

@@ -0,0 +1,5 @@
# Copy to .env and fill in your values
DOMAIN=example.com
ACME_EMAIL=you@example.com
CF_API_EMAIL=you@example.com
CF_API_KEY=your-cloudflare-api-key

View File

@@ -0,0 +1,58 @@
# Traefik reverse proxy with Let's Encrypt and file-provider support
# This is the foundation service - other services route through it
#
# Entrypoints:
# - web (port 80): HTTP for .local domains (no TLS needed on LAN)
# - websecure (port 443): HTTPS with Let's Encrypt for custom domains
name: traefik
services:
traefik:
image: traefik:v3.2
container_name: traefik
command:
- --api.dashboard=true
- --providers.docker=true
- --providers.docker.exposedbydefault=false
- --providers.docker.network=mynetwork
# File provider for routing to services on other hosts
- --providers.file.directory=/dynamic.d
- --providers.file.watch=true
# HTTP entrypoint for .local domains (LAN access, no TLS)
- --entrypoints.web.address=:80
# HTTPS entrypoint for custom domains (with Let's Encrypt TLS)
- --entrypoints.websecure.address=:443
- --entrypoints.websecure.asDefault=true
- --entrypoints.websecure.http.tls.certresolver=letsencrypt
# Let's Encrypt DNS challenge (using Cloudflare as example)
- --certificatesresolvers.letsencrypt.acme.email=${ACME_EMAIL}
- --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
- --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
- --certificatesresolvers.letsencrypt.acme.dnschallenge.resolvers=1.1.1.1:53
environment:
# Cloudflare API token for DNS challenge
CF_API_EMAIL: ${CF_API_EMAIL}
CF_API_KEY: ${CF_API_KEY}
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "8080:8080" # Dashboard
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /mnt/data/traefik/letsencrypt:/letsencrypt
- ./dynamic.d:/dynamic.d:ro
networks:
- mynetwork
labels:
- traefik.enable=true
# Dashboard accessible at traefik.yourdomain.com
- traefik.http.routers.traefik.rule=Host(`traefik.${DOMAIN}`)
- traefik.http.routers.traefik.entrypoints=websecure
- traefik.http.routers.traefik.service=api@internal
# AutoKuma: automatically create Uptime Kuma monitor
- kuma.traefik.http.name=Traefik
- kuma.traefik.http.url=https://traefik.${DOMAIN}
networks:
mynetwork:
external: true

View File

@@ -0,0 +1,40 @@
# Auto-generated by compose-farm
# https://github.com/basnijholt/compose-farm
#
# This file routes traffic to services running on hosts other than Traefik's host.
# Services on Traefik's host use the Docker provider directly.
#
# Regenerate with: compose-farm traefik-file --all -o <this-file>
# Or configure traefik_file in compose-farm.yaml for automatic updates.
http:
routers:
mealie:
rule: Host(`mealie.example.com`)
entrypoints:
- websecure
service: mealie
mealie-local:
rule: Host(`mealie.local`)
entrypoints:
- web
service: mealie
uptime:
rule: Host(`uptime.example.com`)
entrypoints:
- websecure
service: uptime
uptime-local:
rule: Host(`uptime.local`)
entrypoints:
- web
service: uptime
services:
mealie:
loadbalancer:
servers:
- url: http://192.168.1.11:9925
uptime:
loadbalancer:
servers:
- url: http://192.168.1.11:3001

View File

@@ -0,0 +1,2 @@
# Copy to .env and fill in your values
DOMAIN=example.com

View File

@@ -0,0 +1,43 @@
# Uptime Kuma - Monitoring dashboard
#
# Demonstrates:
# - HTTPS route: uptime.${DOMAIN} (e.g., uptime.example.com) with Let's Encrypt
# - HTTP route: uptime.local for LAN access without TLS
# - Docker socket access, user mapping for NFS, custom DNS
name: uptime-kuma
services:
uptime-kuma:
image: louislam/uptime-kuma:2
container_name: uptime-kuma
restart: unless-stopped
# Run as non-root user (important for NFS volumes)
user: "1000:1000"
networks:
- mynetwork
ports:
- "3001:3001"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /mnt/data/uptime-kuma:/app/data
environment:
PUID: 1000
PGID: 1000
# Custom DNS for internal domain resolution
dns:
- 192.168.1.1 # Your local DNS server
labels:
# HTTPS route: uptime.example.com (requires DOMAIN in .env)
- traefik.enable=true
- traefik.http.routers.uptime.rule=Host(`uptime.${DOMAIN}`)
- traefik.http.routers.uptime.entrypoints=websecure
- traefik.http.services.uptime.loadbalancer.server.port=3001
# HTTP route: uptime.local (for LAN access, no TLS)
- traefik.http.routers.uptime-local.rule=Host(`uptime.local`)
- traefik.http.routers.uptime-local.entrypoints=web
# AutoKuma: automatically create Uptime Kuma monitor
- kuma.uptime.http.name=Uptime Kuma
- kuma.uptime.http.url=https://uptime.${DOMAIN}
networks:
mynetwork:
external: true

170
hatch_build.py Normal file
View File

@@ -0,0 +1,170 @@
"""Hatch build hook to vendor CDN assets for offline use.
During wheel builds, this hook:
1. Parses base.html to find elements with data-vendor attributes
2. Downloads each CDN asset to a temporary vendor directory
3. Rewrites base.html to use local /static/vendor/ paths
4. Fetches and bundles license information
5. Includes everything in the wheel via force_include
The source base.html keeps CDN links for development; only the
distributed wheel has vendored assets.
"""
from __future__ import annotations
import re
import shutil
import subprocess
import tempfile
from pathlib import Path
from typing import Any
from urllib.request import Request, urlopen
from hatchling.builders.hooks.plugin.interface import BuildHookInterface
# Matches elements with data-vendor attribute: extracts URL and target filename
# Example: <script src="https://..." data-vendor="htmx.js">
# Captures: (1) src/href, (2) URL, (3) attributes between, (4) vendor filename
VENDOR_PATTERN = re.compile(r'(src|href)="(https://[^"]+)"([^>]*?)data-vendor="([^"]+)"')
# License URLs for each package (GitHub raw URLs)
LICENSE_URLS: dict[str, tuple[str, str]] = {
"htmx": ("MIT", "https://raw.githubusercontent.com/bigskysoftware/htmx/master/LICENSE"),
"xterm": ("MIT", "https://raw.githubusercontent.com/xtermjs/xterm.js/master/LICENSE"),
"daisyui": ("MIT", "https://raw.githubusercontent.com/saadeghi/daisyui/master/LICENSE"),
"tailwindcss": (
"MIT",
"https://raw.githubusercontent.com/tailwindlabs/tailwindcss/master/LICENSE",
),
}
def _download(url: str) -> bytes:
"""Download a URL, trying urllib first then curl as fallback."""
# Try urllib first
try:
req = Request( # noqa: S310
url, headers={"User-Agent": "Mozilla/5.0 (compatible; compose-farm build)"}
)
with urlopen(req, timeout=30) as resp: # noqa: S310
return resp.read() # type: ignore[no-any-return]
except Exception: # noqa: S110
pass # Fall through to curl
# Fallback to curl (handles SSL proxies better)
result = subprocess.run(
["curl", "-fsSL", "--max-time", "30", url], # noqa: S607
capture_output=True,
check=True,
)
return bytes(result.stdout)
def _generate_licenses_file(temp_dir: Path) -> None:
"""Download and combine license files into LICENSES.txt."""
lines = [
"# Vendored Dependencies - License Information",
"",
"This file contains license information for JavaScript/CSS libraries",
"bundled with compose-farm for offline use.",
"",
"=" * 70,
"",
]
for pkg_name, (license_type, license_url) in LICENSE_URLS.items():
lines.append(f"## {pkg_name} ({license_type})")
lines.append(f"Source: {license_url}")
lines.append("")
lines.append(_download(license_url).decode("utf-8"))
lines.append("")
lines.append("=" * 70)
lines.append("")
(temp_dir / "LICENSES.txt").write_text("\n".join(lines))
class VendorAssetsHook(BuildHookInterface): # type: ignore[misc]
"""Hatch build hook that vendors CDN assets into the wheel."""
PLUGIN_NAME = "vendor-assets"
def initialize(
self,
_version: str,
build_data: dict[str, Any],
) -> None:
"""Download CDN assets and prepare them for inclusion in the wheel."""
# Only run for wheel builds
if self.target_name != "wheel":
return
# Paths
src_dir = Path(self.root) / "src" / "compose_farm"
base_html_path = src_dir / "web" / "templates" / "base.html"
if not base_html_path.exists():
return
# Create temp directory for vendored assets
temp_dir = Path(tempfile.mkdtemp(prefix="compose_farm_vendor_"))
vendor_dir = temp_dir / "vendor"
vendor_dir.mkdir()
# Read and parse base.html
html_content = base_html_path.read_text()
url_to_filename: dict[str, str] = {}
# Find all elements with data-vendor attribute and download them
for match in VENDOR_PATTERN.finditer(html_content):
url = match.group(2)
filename = match.group(4)
if url in url_to_filename:
continue
url_to_filename[url] = filename
content = _download(url)
(vendor_dir / filename).write_bytes(content)
if not url_to_filename:
return
# Generate LICENSES.txt
_generate_licenses_file(vendor_dir)
# Rewrite HTML to use local paths (remove data-vendor, update URL)
def replace_vendor_tag(match: re.Match[str]) -> str:
attr = match.group(1) # src or href
url = match.group(2)
between = match.group(3) # attributes between URL and data-vendor
filename = match.group(4)
if url in url_to_filename:
return f'{attr}="/static/vendor/{filename}"{between}'
return match.group(0)
modified_html = VENDOR_PATTERN.sub(replace_vendor_tag, html_content)
# Write modified base.html to temp
templates_dir = temp_dir / "templates"
templates_dir.mkdir()
(templates_dir / "base.html").write_text(modified_html)
# Add to force_include to override files in the wheel
force_include = build_data.setdefault("force_include", {})
force_include[str(vendor_dir)] = "compose_farm/web/static/vendor"
force_include[str(templates_dir / "base.html")] = "compose_farm/web/templates/base.html"
# Store temp_dir path for cleanup
self._temp_dir = temp_dir
def finalize(
self,
_version: str,
_build_data: dict[str, Any],
_artifact_path: str,
) -> None:
"""Clean up temporary directory after build."""
if hasattr(self, "_temp_dir") and self._temp_dir.exists():
shutil.rmtree(self._temp_dir, ignore_errors=True)

60
justfile Normal file
View File

@@ -0,0 +1,60 @@
# Compose Farm Development Commands
# Run `just` to see available commands
# Default: list available commands
default:
@just --list
# Install development dependencies
install:
uv sync --all-extras --dev
# Run all tests (parallel)
test:
uv run pytest -n auto
# Run CLI tests only (parallel, with coverage)
test-cli:
uv run pytest -m "not browser" -n auto
# Run web UI tests only (parallel)
test-web:
uv run pytest -m browser -n auto
# Lint, format, and type check
lint:
uv run ruff check --fix .
uv run ruff format .
uv run mypy src
uv run ty check src
# Start web UI in development mode with auto-reload
web:
uv run cf web --reload --port 9001
# Kill the web server
kill-web:
lsof -ti :9001 | xargs kill -9 2>/dev/null || true
# Build docs and serve locally
doc:
uvx zensical build
python -m http.server -d site 9002
# Kill the docs server
kill-doc:
lsof -ti :9002 | xargs kill -9 2>/dev/null || true
# Record CLI demos (all or specific: just record-cli quickstart)
record-cli *demos:
python docs/demos/cli/record.py {{demos}}
# Record web UI demos (all or specific: just record-web navigation)
record-web *demos:
python docs/demos/web/record.py {{demos}}
# Clean up build artifacts and caches
clean:
rm -rf .pytest_cache .mypy_cache .ruff_cache .coverage htmlcov dist build
find . -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
find . -type d -name "*.egg-info" -exec rm -rf {} + 2>/dev/null || true

View File

@@ -3,19 +3,68 @@ name = "compose-farm"
dynamic = ["version"]
description = "Compose Farm - run docker compose commands across multiple hosts"
readme = "README.md"
license = "MIT"
license-files = ["LICENSE"]
authors = [
{ name = "Bas Nijholt", email = "bas@nijho.lt" }
]
maintainers = [
{ name = "Bas Nijholt", email = "bas@nijho.lt" }
]
requires-python = ">=3.11"
keywords = [
"docker",
"docker-compose",
"ssh",
"devops",
"deployment",
"container",
"orchestration",
"multi-host",
"homelab",
"self-hosted",
]
classifiers = [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: System :: Systems Administration",
"Topic :: Utilities",
"Typing :: Typed",
]
dependencies = [
"typer>=0.9.0",
"pydantic>=2.0.0",
"asyncssh>=2.14.0",
"pyyaml>=6.0",
"rich>=13.0.0",
]
[project.optional-dependencies]
web = [
"fastapi[standard]>=0.109.0",
"jinja2>=3.1.0",
"websockets>=12.0",
]
[project.urls]
Homepage = "https://github.com/basnijholt/compose-farm"
Repository = "https://github.com/basnijholt/compose-farm"
Documentation = "https://github.com/basnijholt/compose-farm#readme"
Issues = "https://github.com/basnijholt/compose-farm/issues"
Changelog = "https://github.com/basnijholt/compose-farm/releases"
[project.scripts]
compose-farm = "compose_farm.cli:app"
cf = "compose_farm.cli:app"
[build-system]
requires = ["hatchling", "hatch-vcs"]
@@ -30,6 +79,9 @@ version-file = "src/compose_farm/_version.py"
[tool.hatch.build.targets.wheel]
packages = ["src/compose_farm"]
[tool.hatch.build.hooks.custom]
# Vendors CDN assets (JS/CSS) into the wheel for offline use
[tool.ruff]
target-version = "py311"
line-length = 100
@@ -59,7 +111,7 @@ ignore = [
]
[tool.ruff.lint.per-file-ignores]
"tests/*" = ["S101", "PLR2004", "S108", "D102", "D103"] # relaxed docstrings + asserts in tests
"tests/*" = ["S101", "PLR2004", "S108", "D102", "D103", "PLC0415", "ARG001", "ARG002", "TC003"] # relaxed for tests
[tool.ruff.lint.mccabe]
max-complexity = 18
@@ -77,6 +129,14 @@ ignore_missing_imports = true
module = "tests.*"
disallow_untyped_decorators = false
[[tool.mypy.overrides]]
module = "compose_farm.web.*"
disallow_untyped_decorators = false
[[tool.mypy.overrides]]
module = "docs.demos.web.*"
disallow_untyped_decorators = false
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
@@ -89,6 +149,9 @@ addopts = [
"--no-cov-on-fail",
"-v",
]
markers = [
"browser: marks tests as browser tests (deselect with '-m \"not browser\"')",
]
[tool.coverage.run]
omit = []
@@ -101,9 +164,19 @@ exclude_lines = [
'if __name__ == "__main__":',
]
[tool.ty.environment]
python-version = "3.11"
[tool.ty.src]
exclude = [
"hatch_build.py", # Build-time only, hatchling not in dev deps
"docs/demos/**", # Demo scripts with local conftest imports
]
[dependency-groups]
dev = [
"mypy>=1.19.0",
"ty>=0.0.1a13",
"pre-commit>=4.5.0",
"pytest>=9.0.2",
"pytest-asyncio>=1.3.0",
@@ -111,4 +184,15 @@ dev = [
"ruff>=0.14.8",
"types-pyyaml>=6.0.12.20250915",
"markdown-code-runner>=0.7.0",
# Web deps for type checking (these ship with inline types)
"fastapi>=0.109.0",
"uvicorn[standard]>=0.27.0",
"jinja2>=3.1.0",
"websockets>=12.0",
# For FastAPI TestClient
"httpx>=0.28.0",
# For browser tests (use system chromium via nix-shell -p chromium)
"pytest-playwright>=0.7.0",
# For parallel test execution
"pytest-xdist>=3.0.0",
]

View File

@@ -1,247 +0,0 @@
"""CLI interface using Typer."""
from __future__ import annotations
import asyncio
from pathlib import Path
from typing import TYPE_CHECKING, Annotated, TypeVar
import typer
import yaml
from . import __version__
from .config import Config, load_config
from .logs import snapshot_services
from .ssh import (
CommandResult,
run_on_services,
run_sequential_on_services,
)
if TYPE_CHECKING:
from collections.abc import Coroutine
T = TypeVar("T")
def _version_callback(value: bool) -> None:
"""Print version and exit."""
if value:
typer.echo(f"compose-farm {__version__}")
raise typer.Exit
app = typer.Typer(
name="compose-farm",
help="Compose Farm - run docker compose commands across multiple hosts",
no_args_is_help=True,
)
@app.callback()
def main(
version: Annotated[
bool,
typer.Option(
"--version",
"-v",
help="Show version and exit",
callback=_version_callback,
is_eager=True,
),
] = False,
) -> None:
"""Compose Farm - run docker compose commands across multiple hosts."""
def _get_services(
services: list[str],
all_services: bool,
config_path: Path | None,
) -> tuple[list[str], Config]:
"""Resolve service list and load config."""
config = load_config(config_path)
if all_services:
return list(config.services.keys()), config
if not services:
typer.echo("Error: Specify services or use --all", err=True)
raise typer.Exit(1)
return list(services), config
def _run_async(coro: Coroutine[None, None, T]) -> T:
"""Run async coroutine."""
return asyncio.run(coro)
def _report_results(results: list[CommandResult]) -> None:
"""Report command results and exit with appropriate code."""
failed = [r for r in results if not r.success]
if failed:
for r in failed:
typer.echo(f"[{r.service}] Failed with exit code {r.exit_code}", err=True)
raise typer.Exit(1)
ServicesArg = Annotated[
list[str] | None,
typer.Argument(help="Services to operate on"),
]
AllOption = Annotated[
bool,
typer.Option("--all", "-a", help="Run on all services"),
]
ConfigOption = Annotated[
Path | None,
typer.Option("--config", "-c", help="Path to config file"),
]
LogPathOption = Annotated[
Path | None,
typer.Option("--log-path", "-l", help="Path to Dockerfarm TOML log"),
]
@app.command()
def up(
services: ServicesArg = None,
all_services: AllOption = False,
config: ConfigOption = None,
) -> None:
"""Start services (docker compose up -d)."""
svc_list, cfg = _get_services(services or [], all_services, config)
results = _run_async(run_on_services(cfg, svc_list, "up -d"))
_report_results(results)
@app.command()
def down(
services: ServicesArg = None,
all_services: AllOption = False,
config: ConfigOption = None,
) -> None:
"""Stop services (docker compose down)."""
svc_list, cfg = _get_services(services or [], all_services, config)
results = _run_async(run_on_services(cfg, svc_list, "down"))
_report_results(results)
@app.command()
def pull(
services: ServicesArg = None,
all_services: AllOption = False,
config: ConfigOption = None,
) -> None:
"""Pull latest images (docker compose pull)."""
svc_list, cfg = _get_services(services or [], all_services, config)
results = _run_async(run_on_services(cfg, svc_list, "pull"))
_report_results(results)
@app.command()
def restart(
services: ServicesArg = None,
all_services: AllOption = False,
config: ConfigOption = None,
) -> None:
"""Restart services (down + up)."""
svc_list, cfg = _get_services(services or [], all_services, config)
results = _run_async(run_sequential_on_services(cfg, svc_list, ["down", "up -d"]))
_report_results(results)
@app.command()
def update(
services: ServicesArg = None,
all_services: AllOption = False,
config: ConfigOption = None,
) -> None:
"""Update services (pull + down + up)."""
svc_list, cfg = _get_services(services or [], all_services, config)
results = _run_async(run_sequential_on_services(cfg, svc_list, ["pull", "down", "up -d"]))
_report_results(results)
@app.command()
def logs(
services: ServicesArg = None,
all_services: AllOption = False,
follow: Annotated[bool, typer.Option("--follow", "-f", help="Follow logs")] = False,
tail: Annotated[int, typer.Option("--tail", "-n", help="Number of lines")] = 100,
config: ConfigOption = None,
) -> None:
"""Show service logs."""
svc_list, cfg = _get_services(services or [], all_services, config)
cmd = f"logs --tail {tail}"
if follow:
cmd += " -f"
results = _run_async(run_on_services(cfg, svc_list, cmd))
_report_results(results)
@app.command()
def ps(
config: ConfigOption = None,
) -> None:
"""Show status of all services."""
cfg = load_config(config)
results = _run_async(run_on_services(cfg, list(cfg.services.keys()), "ps"))
_report_results(results)
@app.command()
def snapshot(
services: ServicesArg = None,
all_services: AllOption = False,
log_path: LogPathOption = None,
config: ConfigOption = None,
) -> None:
"""Record current image digests into the Dockerfarm TOML log."""
svc_list, cfg = _get_services(services or [], all_services, config)
try:
path = _run_async(snapshot_services(cfg, svc_list, log_path=log_path))
except RuntimeError as exc: # pragma: no cover - error path
typer.echo(str(exc), err=True)
raise typer.Exit(1) from exc
typer.echo(f"Snapshot written to {path}")
@app.command("traefik-file")
def traefik_file(
services: ServicesArg = None,
all_services: AllOption = False,
output: Annotated[
Path | None,
typer.Option(
"--output",
"-o",
help="Write Traefik file-provider YAML to this path (stdout if omitted)",
),
] = None,
config: ConfigOption = None,
) -> None:
"""Generate a Traefik file-provider fragment from compose Traefik labels."""
from .traefik import generate_traefik_config
svc_list, cfg = _get_services(services or [], all_services, config)
try:
dynamic, warnings = generate_traefik_config(cfg, svc_list)
except (FileNotFoundError, ValueError) as exc:
typer.echo(str(exc), err=True)
raise typer.Exit(1) from exc
rendered = yaml.safe_dump(dynamic, sort_keys=False)
if output:
output.parent.mkdir(parents=True, exist_ok=True)
output.write_text(rendered)
typer.echo(f"Traefik config written to {output}")
else:
typer.echo(rendered)
for warning in warnings:
typer.echo(warning, err=True)
if __name__ == "__main__":
app()

Some files were not shown because too many files have changed in this diff.