Compare commits

...

43 Commits

Author SHA1 Message Date
Bas Nijholt
2a923e6e81 fix: Include field name in config validation error messages (#131)
Previously, Pydantic validation errors like "Extra inputs are not
permitted" didn't show which field caused the error. Now the error
message includes the field location (e.g., "unknown_key: Extra inputs
are not permitted").
2025-12-22 22:35:19 -08:00
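A minimal sketch of the kind of helper this commit implies, assuming Pydantic v2's `ValidationError.errors()` output; the function name is hypothetical, not the project's actual code:

```python
from pydantic import ValidationError


def format_validation_error(exc: ValidationError) -> list[str]:
    """Prefix each Pydantic message with its dotted field location."""
    return [
        f"{'.'.join(str(part) for part in err['loc']) or '<config>'}: {err['msg']}"
        for err in exc.errors()
    ]
```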
Bas Nijholt
5f2e081298 perf: Batch snapshot collection to 1 SSH call per host (#130)
## Summary

Optimize `cf refresh` SSH calls from O(stacks) to O(hosts):
- Discovery: 1 SSH call per host (unchanged)
- Snapshots: 1 SSH call per host (was 1 per stack)

For 50 stacks across 4 hosts: 54 → 8 SSH calls.

## Changes

**Performance:**
- Use `docker ps` + `docker image inspect` instead of `docker compose images` per stack
- Batch snapshot collection by host in `collect_stacks_entries_on_host()`

**Architecture:**
- Add `build_discovery_results()` to `operations.py` (business logic)
- Keep progress bar wrapper in `cli/management.py` (presentation)
- Remove dead code: `discover_all_stacks_on_all_hosts()`, `collect_all_stacks_entries()`
2025-12-22 22:19:32 -08:00
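A rough sketch of the O(hosts) batching described above; `run_ssh` and `collect_snapshots_per_host` are stand-ins for the project's executor, and the exact `docker ps` format string is an assumption:

```python
import asyncio


async def run_ssh(host: str, command: str) -> str:
    """Run a command on a remote host over plain `ssh` (simplified executor)."""
    proc = await asyncio.create_subprocess_exec(
        "ssh", host, command, stdout=asyncio.subprocess.PIPE
    )
    stdout, _ = await proc.communicate()
    return stdout.decode()


SNAPSHOT_CMD = (
    "docker ps --format "
    "'{{.Label \"com.docker.compose.project\"}} {{.Names}} {{.Image}}'"
)


async def collect_snapshots_per_host(hosts: list[str]) -> dict[str, str]:
    """One SSH round-trip per host instead of one per stack."""
    outputs = await asyncio.gather(*(run_ssh(host, SNAPSHOT_CMD) for host in hosts))
    return dict(zip(hosts, outputs))
```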
Bas Nijholt
6fbc7430cb perf: Optimize stray detection to use 1 SSH call per host (#129)
* perf: Optimize stray detection to use 1 SSH call per host

Previously, stray detection checked each stack on each host individually,
resulting in (stacks * hosts) SSH calls. For 50 stacks across 4 hosts,
this meant ~200 parallel SSH connections, causing "Connection lost" errors.

Now queries each host once for all running compose projects using:
  docker ps --format '{{.Label "com.docker.compose.project"}}' | sort -u

This reduces SSH calls from ~200 to just 4 (one per host).

Changes:
- Add get_running_stacks_on_host() in executor.py
- Add discover_all_stacks_on_all_hosts() in operations.py
- Update _discover_stacks_full() to use the batch approach

* Remove unused function and add tests

- Remove discover_stack_on_all_hosts() which is no longer used
- Add tests for get_running_stacks_on_host()
- Add tests for discover_all_stacks_on_all_hosts()
  - Verifies it returns correct StackDiscoveryResult
  - Verifies stray detection works
  - Verifies it makes only 1 call per host (not per stack)
2025-12-22 12:09:59 -08:00
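A hedged sketch of what such a per-host query might look like; the real `get_running_stacks_on_host()` lives in `executor.py` and goes through the project's SSH layer rather than a bare `subprocess` call:

```python
import subprocess


def get_running_stacks_on_host(host: str) -> set[str]:
    """Ask one host which compose projects are running - a single SSH call."""
    cmd = (
        "docker ps --format "
        "'{{.Label \"com.docker.compose.project\"}}' | sort -u"
    )
    result = subprocess.run(
        ["ssh", host, cmd], capture_output=True, text=True, check=True
    )
    return {line for line in result.stdout.splitlines() if line.strip()}
```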
Bas Nijholt
6fdb43e1e9 Add self-healing: detect and stop stray containers (#128)
* Add self-healing: detect and stop rogue containers

Adds the ability to detect and stop "rogue" containers: stacks running
on hosts they shouldn't be on, according to the config.

Changes:
- `cf refresh`: Now scans ALL hosts and warns about rogues/duplicates
- `cf apply`: Stops rogue containers before migrations (new phase)
- New `--no-rogues` flag to skip rogue detection

Implementation:
- Add StackDiscoveryResult for full host scanning results
- Add discover_stack_on_all_hosts() to check all hosts in parallel
- Add stop_rogue_stacks() to stop containers on unauthorized hosts
- Update tests to include new no_rogues parameter

* Update README.md

* fix: Update refresh tests for _discover_stacks_full return type

The function now returns a tuple (discovered, rogues, duplicates)
for rogue/duplicate detection. Update test mocks accordingly.

* Rename "rogue" terminology to "stray" for consistency

Terminology update across the codebase:
- rogue_hosts -> stray_hosts
- is_rogue -> is_stray
- stop_rogue_stacks -> stop_stray_stacks
- _discover_rogues -> _discover_strays
- --no-rogues -> --no-strays
- _report_rogue_stacks -> _report_stray_stacks

"Stray" better complements "orphaned" (both evoke lost things)
while clearly indicating the stack is running somewhere it
shouldn't be.

* Update README.md

* Move asyncio import to top level

* Fix remaining rogue -> stray in docstrings and README

* Refactor: Extract shared helpers to reduce duplication

1. Extract _stop_stacks_on_hosts helper in operations.py
   - Shared by stop_orphaned_stacks and stop_stray_stacks
   - Reduces ~50 lines of duplicated code

2. Refactor _discover_strays to reuse _discover_stacks_full
   - Removes duplicate discovery logic from lifecycle.py
   - Calls management._discover_stacks_full and merges duplicates

* Add PR review prompt

* Fix typos in PR review prompt

* Move import to top level (no in-function imports)

* Update README.md

* Remove obvious comments
2025-12-22 10:22:09 -08:00
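Conceptually, stray detection is a set comparison between what is running and what the config assigns. A simplified sketch (it ignores multi-host "all" stacks and duplicate handling, which the real code covers):

```python
def find_strays(
    running: dict[str, set[str]],   # host -> compose projects seen via `docker ps`
    desired: dict[str, str],        # stack -> host it should run on (per config)
) -> dict[str, set[str]]:
    """Stacks running on hosts the config does not assign them to."""
    return {
        host: {stack for stack in stacks if desired.get(stack) != host}
        for host, stacks in running.items()
    }


# Example: plex should be on server-1 but is also running on server-2.
strays = find_strays(
    running={"server-1": {"plex"}, "server-2": {"plex", "grafana"}},
    desired={"plex": "server-1", "grafana": "server-2"},
)
assert strays == {"server-1": set(), "server-2": {"plex"}}
```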
Bas Nijholt
620e797671 fix: Add entrypoint to create passwd entry for non-root users (#127) 2025-12-22 07:31:59 -08:00
Bas Nijholt
031a2af6f3 fix: Correct SSH key volume mount path in docker-compose.yml (#126) 2025-12-22 06:55:59 -08:00
Bas Nijholt
f69eed7721 docs(readme): position as Dockge for multi-host (#123)
* docs(readme): position as Dockge for multi-host

- Reference Dockge (which we've used) instead of Portainer
- Move Portainer mention to "Your files" bullet as contrast
- Link to Dockge repo

* docs(readme): add agentless bullet, link Dockge

- Add "Agentless" bullet highlighting SSH-only approach
- Link to Dockge as contrast (they require agents for multi-host)
- Update NOTE to focus on agentless, CLI-first positioning
2025-12-21 23:28:26 -08:00
Bas Nijholt
5a1fd4e29f docs(readme): add value propositions and fix image URL (#122)
- Add bullet points highlighting key benefits after NOTE block
- Update NOTE to position as file-based Portainer alternative
- Fix hero image URL from http to https
- Add alt text to hero image for accessibility
2025-12-21 23:17:18 -08:00
Bas Nijholt
26dea691ca feat(docker): make container user configurable via CF_UID/CF_GID (#118)
* feat(docker): make container user configurable via CF_UID/CF_GID

Add support for running compose-farm containers as a non-root user
to preserve file ownership on mounted volumes. This prevents files
like compose-farm-state.yaml and web UI config edits from being
owned by root on NFS mounts.

Set CF_UID, CF_GID, and CF_HOME environment variables to run as
your user. Defaults to root (0:0) for backwards compatibility.

* docs: document non-root user configuration for Docker

- Add CF_UID/CF_GID/CF_HOME documentation to README and getting-started
- Add XDG config volume mount for backup/log persistence across restarts
- Update SSH volume examples to use CF_HOME variable

* fix(docker): allow non-root user access and add USER env for SSH

- Add `chmod 755 /root` to Dockerfile so non-root users can access
  the installed tool at /root/.local/share/uv/tools/compose-farm
- Add USER environment variable to docker-compose.yml for SSH to work
  when running as non-root (UID not in /etc/passwd)
- Update docs to include CF_USER in the setup instructions
- Support building from local source with SETUPTOOLS_SCM_PRETEND_VERSION

* fix(docker): revert local build changes, keep only chmod 755 /root

Remove the local source build logic that was added during testing.
The only required change is `chmod 755 /root` to allow non-root users
to access the installed tool.

* docs: add .envrc.example for direnv users

* docs: mention direnv option in README and getting-started
2025-12-21 22:19:40 -08:00
Bas Nijholt
56d64bfe7a fix(web): exclude orphaned stacks from running count (#119)
The dashboard showed "stopped: -1" when orphaned stacks existed because
running_count included stacks that were still in state but had been removed
from config. Now only stacks that are both in config AND deployed are counted
as running.
2025-12-21 21:59:05 -08:00
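The fix boils down to a set intersection; a tiny illustration (the names here are made up):

```python
def running_count(configured: set[str], deployed: set[str]) -> int:
    """Only stacks that are both in config and deployed count as running."""
    return len(configured & deployed)


# An orphaned stack (deployed but removed from config) no longer inflates the count.
assert running_count({"plex", "grafana"}, {"plex", "old-stack"}) == 1
```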
Bas Nijholt
5ddbdcdf9e docs(demos): update recordings and fix demo scripts (#115) 2025-12-21 19:17:16 -08:00
Bas Nijholt
dd16becad1 feat(web): add Repo command to command palette (#117)
Adds a new "Repo" command that opens the GitHub repository in a new tab,
similar to the existing "Docs" command.
2025-12-21 15:25:04 -08:00
Bas Nijholt
df683a223f fix(web): wait for terminal expand transition before scrolling (#116)
- Extracts generic `expandCollapse(toggle, scrollTarget)` function for reuse with any DaisyUI collapse
- Fixes scrolling when clicking action buttons (pull, logs, etc.) while terminal is collapsed - now waits for CSS transition before scrolling
- Fixes shell via command palette - expands Container Shell and scrolls to actual terminal (not collapse header)
- Fixes scroll position not resetting when navigating via command palette
2025-12-21 15:17:59 -08:00
Bas Nijholt
fdb00e7655 refactor(web): store backups in XDG config directory (#113)
* refactor(web): store backups in XDG config directory

Move file backups from `.backups/` alongside the file to
`~/.config/compose-farm/backups/` (respecting XDG_CONFIG_HOME).
The original file path is mirrored inside to avoid name collisions.

* docs(web): document automatic backup location

* refactor(paths): extract shared config_dir() function

* fix(web): use path anchor for Windows compatibility
2025-12-21 15:08:15 -08:00
Bas Nijholt
90657a025f docs: fix missing CLI options and improve docs-review prompt (#114)
* docs: fix missing CLI options and improve docs-review prompt

- Add missing --config option docs for cf ssh setup and cf ssh status
- Enhance .prompts/docs-review.md with:
  - Quick reference table mapping docs to source files
  - Runnable bash commands for quick checks
  - Specific code paths instead of vague references
  - Web UI documentation section
  - Common gotchas section
  - Ready-to-apply fix template format
  - Post-fix verification steps

* docs: add self-review step to docs-review prompt

* docs: make docs-review prompt discovery-based and less brittle

- Use discovery commands (git ls-files, grep, find) instead of hardcoded lists
- Add 'What This Prompt Is For' section clarifying manual vs automated checks
- Simplify checklist to 10 sections focused on judgment-based review
- Remove hardcoded file paths in favor of search patterns
- Make commands dynamically discover CLI structure

* docs: simplify docs-review prompt, avoid duplicating automated checks

- Remove checks already handled by CI (README help output, command table)
- Focus on judgment-based review: accuracy, completeness, clarity
- Reduce from 270 lines to 117 lines
- Highlight that docs/commands.md options tables are manually maintained
2025-12-21 15:07:37 -08:00
Bas Nijholt
7ae8ea0229 feat(web): add tooltips to sidebar header icons (#111)
Use daisyUI tooltip component with bottom positioning for the docs,
GitHub, and theme switcher icons in the sidebar header, matching the
tooltip style used elsewhere in the web UI.
2025-12-21 14:16:57 -08:00
Bas Nijholt
612242eea9 feat(web): add Open Website button and command for stacks with Traefik labels (#110)
* feat(web): add Open Website button and command for stacks with Traefik labels

Parse traefik.http.routers.*.rule labels to extract Host() rules and
display "Open Website" button(s) on stack pages. Also adds the command
to the command palette.

- Add extract_website_urls() function to compose.py
- Determine scheme (http/https) from entrypoint (websecure/web)
- Prefer HTTPS when same host has both protocols
- Support environment variable interpolation
- Add external_link icon from Lucide
- Add comprehensive tests for URL extraction

* refactor: move extract_website_urls to traefik.py and reuse existing parsing

Instead of duplicating the Traefik label parsing logic in compose.py,
reuse generate_traefik_config() with check_all=True to get the parsed
router configuration, then extract Host() rules from it.

- Move extract_website_urls from compose.py to traefik.py
- Reuse generate_traefik_config for label parsing
- Move tests from test_compose.py to test_traefik.py
- Update import in pages.py

* test: add comprehensive tests for extract_website_urls

Cover real-world patterns found in stacks:
- Multiple Host() in one rule with || operator
- Host() combined with PathPrefix (e.g., && PathPrefix(`/api`))
- Multiple services in one stack (like arr stack)
- Labels in list format (- key=value)
- No entrypoints (defaults to http)
- Multiple entrypoints including websecure
2025-12-21 14:16:46 -08:00
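A simplified sketch of the label-parsing idea; the real `extract_website_urls()` reuses `generate_traefik_config()`, works from parsed compose data rather than a plain label dict, picks the scheme from the router's entrypoints, and interpolates environment variables — all omitted here:

```python
import re

HOST_RE = re.compile(r"Host\(`([^`]+)`\)")
RULE_RE = re.compile(r"^traefik\.http\.routers\.[^.]+\.rule$")


def extract_website_urls(labels: dict[str, str]) -> list[str]:
    """Pull Host() values out of Traefik router rules and build URLs.

    Simplified: the scheme is hard-coded to https in this sketch.
    """
    urls: list[str] = []
    for key, value in labels.items():
        if RULE_RE.match(key):
            for host in HOST_RE.findall(value):
                url = f"https://{host}"
                if url not in urls:
                    urls.append(url)
    return urls


labels = {
    "traefik.enable": "true",
    "traefik.http.routers.grocy.rule": "Host(`grocy.example.com`) || Host(`grocy.lan`)",
}
assert extract_website_urls(labels) == ["https://grocy.example.com", "https://grocy.lan"]
```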
Bas Nijholt
ea650bff8a fix: Skip buildable images in pull command (#109)
* fix: Skip buildable images in pull command

Add --ignore-buildable flag to pull command, matching the behavior
of the update command. This prevents pull from failing when a stack
contains services with local build directives (no remote image).

* test: Fix flaky command palette close detection

Use state="hidden" instead of :not([open]) selector when waiting
for the command palette to close. The old approach failed because
wait_for_selector defaults to waiting for visibility, but a closed
<dialog> element is hidden by design.
2025-12-21 10:28:10 -08:00
renovate[bot]
140bca4fd6 ⬆️ Update actions/upload-pages-artifact action to v4 (#108)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-21 10:27:58 -08:00
renovate[bot]
6dad6be8da ⬆️ Update actions/checkout action to v6 (#107)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-21 10:27:51 -08:00
Bas Nijholt
d7f931e301 feat(web): improve container row layout for mobile (#106)
- Stack container name/status above buttons on mobile screens
- Use card-like background for visual separation
- Buttons align right on desktop, full width on mobile
2025-12-21 10:27:36 -08:00
Bas Nijholt
471936439e feat(web): add Edit Config command to command palette (#105)
- Added "Edit Config" command to the command palette (Cmd/Ctrl+K)
- Navigates to console page, focuses the Monaco editor, and scrolls to it
- Uses `#editor` URL hash to signal editor focus instead of terminal focus
2025-12-21 01:24:03 -08:00
Bas Nijholt
36e4bef46d feat(web): add shell command to command palette for services (#104)
- Add "Shell: {service}" commands to the command palette when on a stack page
- Allows quick shell access to containers via `Cmd+K` → type "shell" → select service
- Add `get_container_name()` helper in `compose.py` for consistent container name resolution (used by both api.py and pages.py)
2025-12-21 01:23:54 -08:00
Bas Nijholt
2cac0bf263 feat(web): add Pull All and Update All to command palette (#103)
The dashboard buttons for Pull All and Update All are now also
available in the command palette (Cmd/Ctrl+K) for keyboard access.
2025-12-21 01:00:57 -08:00
Bas Nijholt
3d07cbdff0 fix(web): show stderr in console shell sessions (#102)
- Remove `2>/dev/null` from shell command that was suppressing all stderr output
- Command errors like "command not found" are now properly displayed to users
2025-12-21 00:50:58 -08:00
Bas Nijholt
0f67c17281 test: parallel execution and timeout constants (#101)
- Enable `-n auto` for all test commands in justfile (parallel execution)
- Add redis stack to test fixtures (missing stack was causing test failure)
- Replace hardcoded timeouts with constants: `TIMEOUT` (10s) and `SHORT_TIMEOUT` (5s)
- Rename `test-unit` → `test-cli` and `test-browser` → `test-web`
- Skip CLI startup test when running in parallel mode (`-n auto`)
- Update test assertions for 5 stacks (was 4)
2025-12-21 00:48:52 -08:00
Bas Nijholt
bd22a1a55e fix: Reject unknown keys in config with Pydantic strict mode (#100)
Add extra="forbid" to Host and Config models so typos like
`username` instead of `user` raise an error instead of being
silently ignored. Also simplify _parse_hosts to pass dicts
directly to Pydantic instead of manual field extraction.
2025-12-21 00:19:18 -08:00
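A minimal illustration of the `extra="forbid"` behavior, assuming Pydantic v2 (this `Host` is a stand-in, not the project's actual model):

```python
from pydantic import BaseModel, ConfigDict, ValidationError


class Host(BaseModel):
    """Sketch only; the real models live in the config module."""

    model_config = ConfigDict(extra="forbid")  # unknown keys now raise
    address: str
    user: str = "root"
    port: int = 22


try:
    # The typo `username` used to be silently ignored; now it is rejected.
    Host(address="192.168.1.10", username="bas")
except ValidationError as exc:
    print(exc)
```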
Bas Nijholt
cc54e89b33 feat: add justfile for development commands (#99)
- Adds a justfile with common development commands for easier workflow
- Commands: `install`, `test`, `test-unit`, `test-browser`, `lint`, `web`, `kill-web`, `doc`, `kill-doc`, `clean`
2025-12-20 23:24:30 -08:00
Bas Nijholt
f71e5cffd6 feat(web): add service commands to command palette with fuzzy matching (#95)
- Add service-level commands to the command palette when viewing a stack detail page
- Services are extracted from the compose file and exposed via a `data-services` attribute
- Commands are grouped by action (all Logs together, all Pull together, etc.) with services sorted alphabetically
- Service commands appear with a teal indicator to distinguish from stack-level commands (green)
- Implement word-boundary fuzzy matching for better filtering UX:
  - `rest plex` matches `Restart: plex-server`
  - `server` matches `plex-server` (hyphenated names split into words)
  - Query words must match the START of command words (prevents false positives like `r ba` matching `Logs: bazarr`)

Available service commands:
- `Restart: <service>` - Restart a specific service
- `Pull: <service>` - Pull image for a service
- `Logs: <service>` - View logs for a service
- `Stop: <service>` - Stop a service
- `Up: <service>` - Start a service
2025-12-20 23:23:53 -08:00
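The matcher itself is JavaScript in the web UI; a hedged Python rendering of the word-boundary rule described above:

```python
import re


def words(text: str) -> list[str]:
    """Split on whitespace, colons, and hyphens so `plex-server` yields two words."""
    return [w for w in re.split(r"[\s:\-]+", text.lower()) if w]


def fuzzy_match(query: str, command: str) -> bool:
    """Every query word must match the START of some command word."""
    command_words = words(command)
    return all(
        any(cw.startswith(qw) for cw in command_words) for qw in words(query)
    )


assert fuzzy_match("rest plex", "Restart: plex-server")
assert fuzzy_match("server", "Logs: plex-server")
assert not fuzzy_match("r ba", "Logs: bazarr")  # "r" starts no word in the command
```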
Bas Nijholt
0e32729763 fix(web): add tooltips to console page buttons (#98)
* fix(web): add tooltips to console page buttons

Add descriptive tooltips to Connect, Open, and Save buttons on the
console page, matching the tooltip style used on dashboard and stack
pages.

* fix(web): show platform-appropriate keyboard shortcuts

Detect Mac vs other platforms and display ⌘ or Ctrl accordingly for
keyboard shortcuts. The command palette FAB dynamically updates, and
tooltips use ⌘/Ctrl notation to cover both platforms.
2025-12-20 22:59:50 -08:00
Bas Nijholt
b0b501fa98 docs: update example services in documentation and tests (#96) 2025-12-20 22:45:13 -08:00
Bas Nijholt
7e00596046 docs: fix inaccuracies and add missing command documentation (#97)
- Add missing --service option docs for up, stop, restart, update, pull, ps, logs
- Add stop command to command overview table
- Add compose passthrough command documentation
- Add --all option and [STACKS] argument to refresh command
- Fix ServiceConfig reference to Host in architecture.md
- Update lifecycle.py description to include stop and compose commands
- Fix uv installation syntax in web-ui.md (--with web -> [web])
- Add missing cf ssh --help and cf web --help output blocks in README
2025-12-20 22:37:26 -08:00
Bas Nijholt
d1e4d9b05c docs: update documentation for new CLI features (#94) 2025-12-20 21:36:47 -08:00
Bas Nijholt
3fbae630f9 feat(cli): add compose passthrough command (#93)
Adds `cf compose <stack> <command> [args...]` to run any docker compose
command on a stack without needing dedicated wrappers. Useful for
commands like top, images, exec, run, config, etc.

Multi-host stacks require --host to specify which host to run on.
2025-12-20 21:26:05 -08:00
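Conceptually the passthrough just forwards the subcommand and its arguments; a sketch under the assumption that stacks live under a shared compose directory (the real command resolves the host first and runs the invocation over SSH):

```python
def build_compose_command(
    compose_dir: str, stack: str, command: str, args: list[str]
) -> str:
    """Build the remote `docker compose` invocation for a passthrough command."""
    extra = " ".join(args)
    return f"cd {compose_dir}/{stack} && docker compose {command} {extra}".rstrip()


assert (
    build_compose_command("/opt/stacks", "immich", "exec", ["web", "bash"])
    == "cd /opt/stacks/immich && docker compose exec web bash"
)
```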
Bas Nijholt
3e3c919714 fix(web): service action buttons fixes and additions (#92)
* fix(web): use --service flag in service action endpoint

* feat(web): add Start button to service actions

* feat(web): add Pull button to service actions
2025-12-20 21:11:44 -08:00
Bas Nijholt
59b797a89d feat: add service-level commands with --service flag (#91)
Add support for targeting specific services within a stack:

CLI:
- New `stop` command for stopping services without removing containers
- Add `--service` / `-s` flag to: up, pull, restart, update, stop, logs, ps
- Service flag requires exactly one stack to be specified

Web API:
- Add `stop` to allowed stack commands
- New endpoint: POST /api/stack/{name}/service/{service}/{command}
- Supports: logs, pull, restart, up, stop

Web UI:
- Add action buttons to container rows: logs, restart, stop, shell
- Add rotate_ccw and scroll_text icons for new buttons
2025-12-20 20:56:48 -08:00
Bas Nijholt
7caf006e07 feat(web): add Rich logging for better error debugging (#90)
Add structured logging with Rich tracebacks to web UI components:
- Configure RichHandler in app.py for formatted output
- Log SSH/file operation failures in API routes with full tracebacks
- Log WebSocket exec/shell errors for connection issues
- Add warning logs for failed container state queries

Errors now show detailed tracebacks in container logs instead of
just returning 500 status codes.
2025-12-20 20:47:34 -08:00
Bas Nijholt
45040b75f1 feat(web): add Pull All and Update All buttons to dashboard (#89)
- Add "Pull All" and "Update All" buttons to dashboard for bulk operations
- Switch from native `title` attribute to DaisyUI tooltips for instant, styled tooltips
- Add tooltips to save buttons clarifying what they save
- Add tooltip to container shell button
- Fix tooltip z-index so they appear above sidebar
- Fix tooltip clipping by removing `overflow-y-auto` from main content
- Position container shell tooltip to the left to avoid clipping
2025-12-20 20:41:26 -08:00
Bas Nijholt
fa1c5c1044 docs: update theme to indigo with system preference support (#88)
Switch from teal to indigo primary color to match Zensical docs theme.
Add system preference detection and orange accent for dark mode.
2025-12-20 20:18:28 -08:00
Bas Nijholt
67e832f687 docs: clarify config file locations and update install URL (#86) 2025-12-20 20:12:06 -08:00
Bas Nijholt
da986fab6a fix: improve command palette theme filtering (#87)
- Normalize spaces after colons so "theme:dark" matches "theme: dark"
- Also handles multiple spaces like "theme:  dark"
2025-12-20 20:03:16 -08:00
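The filtering happens in the web UI's JavaScript; the normalization rule reduces to one regex, shown here in Python for illustration:

```python
import re


def normalize(text: str) -> str:
    """Collapse whitespace after colons so 'theme:dark' matches 'theme:  dark'."""
    return re.sub(r":\s*", ":", text.lower())


assert normalize("Theme:  Dark") == normalize("theme:dark") == "theme:dark"
```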
Bas Nijholt
5dd6e2ca05 fix: improve theme picker usability in command palette (#85) 2025-12-20 20:00:05 -08:00
Bas Nijholt
16435065de fix: video autoplay for Safari and Chrome with instant navigation (#84) 2025-12-20 19:49:05 -08:00
104 changed files with 3571 additions and 1001 deletions

6
.envrc.example Normal file

@@ -0,0 +1,6 @@
# Run containers as current user (preserves file ownership on NFS mounts)
# Copy this file to .envrc and run: direnv allow
export CF_UID=$(id -u)
export CF_GID=$(id -g)
export CF_HOME=$HOME
export CF_USER=$USER


@@ -54,7 +54,7 @@ jobs:
run: uv run playwright install chromium --with-deps
- name: Run browser tests
run: uv run pytest -m browser -v --no-cov
run: uv run pytest -m browser -n auto -v
lint:
runs-on: ubuntu-latest


@@ -27,7 +27,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
with:
lfs: true
@@ -49,7 +49,7 @@ jobs:
- name: Upload artifact
if: github.event_name != 'pull_request'
uses: actions/upload-pages-artifact@v3
uses: actions/upload-pages-artifact@v4
with:
path: "./site"

1
.gitignore vendored

@@ -44,3 +44,4 @@ compose-farm.yaml
coverage.xml
.env
homepage/
site/


@@ -21,7 +21,7 @@ repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.14.9
hooks:
- id: ruff
- id: ruff-check
args: [--fix]
- id: ruff-format


@@ -1,94 +1,117 @@
Review all documentation in this repository for accuracy, completeness, and consistency. Cross-reference documentation against the actual codebase to identify issues.
Review documentation for accuracy, completeness, and consistency. Focus on things that require judgment—automated checks handle the rest.
## Scope
## What's Already Automated
Review all documentation files:
- docs/*.md (primary documentation)
- README.md (repository landing page)
- CLAUDE.md (development guidelines)
- examples/README.md (example configurations)
Don't waste time on these—CI and pre-commit hooks handle them:
## Review Checklist
- **README help output**: `markdown-code-runner` regenerates `cf --help` blocks in CI
- **README command table**: Pre-commit hook verifies commands are listed
- **Linting/formatting**: Handled by pre-commit
### 1. Command Documentation
## What This Review Is For
For each documented command, verify against the CLI source code:
Focus on things that require judgment:
- Command exists in codebase
- All options are documented with correct names, types, and defaults
- Short options (-x) match long options (--xxx)
- Examples would work as written
- Check for undocumented commands or options
1. **Accuracy**: Does the documentation match what the code actually does?
2. **Completeness**: Are there undocumented features, options, or behaviors?
3. **Clarity**: Would a new user understand this? Are examples realistic?
4. **Consistency**: Do different docs contradict each other?
5. **Freshness**: Has the code changed in ways the docs don't reflect?
Run `--help` for each command to verify.
## Review Process
### 2. Configuration Documentation
### 1. Check Recent Changes
Verify against Pydantic models in the config module:
```bash
# What changed recently that might need doc updates?
git log --oneline -20 | grep -iE "feat|fix|add|remove|change|option"
- All config keys are documented
- Types match Pydantic field types
- Required vs optional fields are correct
- Default values are accurate
- Config file search order matches code
- Example YAML is valid and uses current schema
# What code files changed?
git diff --name-only HEAD~20 | grep "\.py$"
```
### 3. Architecture Documentation
Look for new features, changed defaults, renamed options, or removed functionality.
Verify against actual directory structure:
### 2. Verify docs/commands.md Options Tables
- File paths match actual source code location
- All modules listed actually exist
- No modules are missing from the list
- Component descriptions match code functionality
- CLI module list includes all command files
The README auto-updates help output, but `docs/commands.md` has **manually maintained options tables**. These can drift.
### 4. State and Data Files
For each command's options table, compare against `cf <command> --help`:
- Are all options listed?
- Are short flags correct?
- Are defaults accurate?
- Are descriptions accurate?
Verify against state and path modules:
**Pay special attention to subcommands** (`cf config *`, `cf ssh *`)—these have their own options that are easy to miss.
- State file name and location are correct
- State file format matches actual structure
- Log file name and location are correct
- What triggers state/log updates is accurate
### 3. Verify docs/configuration.md
### 5. Installation Documentation
Compare against Pydantic models in the source:
Verify against pyproject.toml:
```bash
# Find the config models
grep -r "class.*BaseModel" src/ --include="*.py" -A 15
```
- Python version requirement matches requires-python
- Package name is correct
- Optional dependencies are documented
- CLI entry points are mentioned
- Installation methods work as documented
Check:
- All config keys documented
- Types and defaults match code
- Config file search order is accurate
- Example YAML would actually work
### 6. Feature Claims
### 4. Verify docs/architecture.md
For each claimed feature, verify it exists and works as described.
```bash
# What source files actually exist?
git ls-files "src/**/*.py"
```
### 7. Cross-Reference Consistency
Check:
- Listed files exist
- No files are missing from the list
- Descriptions match what the code does
Check for conflicts between documentation files:
### 5. Check Examples
- README vs docs/index.md (should be consistent)
- CLAUDE.md vs actual code structure
- Command tables match across files
- Config examples are consistent
For examples in any doc:
- Would the YAML/commands actually work?
- Are service names, paths, and options realistic?
- Do examples use current syntax (not deprecated options)?
### 6. Cross-Reference Consistency
The same info appears in multiple places. Check for conflicts:
- README.md vs docs/index.md
- docs/commands.md vs CLAUDE.md command tables
- Config examples across different docs
### 7. Self-Check This Prompt
This prompt can become outdated too. If you notice:
- New automated checks that should be listed above
- New doc files that need review guidelines
- Patterns that caused issues
Include prompt updates in your fixes.
## Output Format
Provide findings in these categories:
Categorize findings:
1. **Critical Issues**: Incorrect information that would cause user problems
2. **Inaccuracies**: Technical errors, wrong defaults, incorrect paths
3. **Missing Documentation**: Features/commands that exist but aren't documented
4. **Outdated Content**: Information that was once true but no longer is
5. **Inconsistencies**: Conflicts between different documentation files
6. **Minor Issues**: Typos, formatting, unclear wording
7. **Verified Accurate**: Sections confirmed to be correct
1. **Critical**: Wrong info that would break user workflows
2. **Inaccuracy**: Technical errors (wrong defaults, paths, types)
3. **Missing**: Undocumented features or options
4. **Outdated**: Was true, no longer is
5. **Inconsistency**: Docs contradict each other
6. **Minor**: Typos, unclear wording
For each issue, include:
- File path and line number (if applicable)
- What the documentation says
- What the code actually does
- Suggested fix
For each issue, provide a ready-to-apply fix:
```
### Issue: [Brief description]
- **File**: docs/commands.md:652
- **Problem**: `cf ssh setup` has `--config` option but it's not documented
- **Fix**: Add `--config, -c PATH` to the options table
- **Verify**: `cf ssh setup --help`
```

15
.prompts/pr-review.md Normal file

@@ -0,0 +1,15 @@
Review the pull request for:
- **Code cleanliness**: Is the implementation clean and well-structured?
- **DRY principle**: Does it avoid duplication?
- **Code reuse**: Are there parts that should be reused from other places?
- **Organization**: Is everything in the right place?
- **Consistency**: Is it in the same style as other parts of the codebase?
- **Simplicity**: Is it not over-engineered? Remember KISS and YAGNI. No dead code paths and NO defensive programming.
- **User experience**: Does it provide a good user experience?
- **PR**: Is the PR description and title clear and informative?
- **Tests**: Are there tests, and do they cover the changes adequately? Are they testing something meaningful or are they just trivial?
- **Live tests**: Test the changes in a REAL live environment to ensure they work as expected, use the config in `/opt/stacks/compose-farm.yaml`.
- **Rules**: Does the code follow the project's coding standards and guidelines as laid out in @CLAUDE.md?
Look at `git diff origin/main..HEAD` for the changes made in this pull request.

51
.prompts/update-demos.md Normal file

@@ -0,0 +1,51 @@
Update demo recordings to match the current compose-farm.yaml configuration.
## Key Gotchas
1. **Never `git checkout` without asking** - check for uncommitted changes first
2. **Prefer `nas` stacks** - demos run locally on nas, SSH adds latency
3. **Terminal captures keyboard** - use `blur()` to release focus before command palette
4. **Clicking sidebar navigates away** - clicking h1 scrolls to top
5. **Buttons have icons, not text** - use `[data-tip="..."]` selectors
6. **`record.py` auto-restores config** - no manual cleanup needed after CLI demos
## Stacks Used in Demos
| Stack | CLI Demos | Web Demos | Notes |
|-------|-----------|-----------|-------|
| `audiobookshelf` | quickstart, migration, apply | - | Migrates nas→anton |
| `grocy` | update | navigation, stack, workflow, console | - |
| `immich` | logs, compose | shell | Multiple containers |
| `dozzle` | - | workflow | - |
## CLI Demos
**Files:** `docs/demos/cli/*.tape`
Check:
- `quickstart.tape`: `bat -r` line ranges match current config structure
- `migration.tape`: nvim keystrokes work, stack exists on nas
- `compose.tape`: exec commands produce meaningful output
Run: `python docs/demos/cli/record.py [demo]`
## Web Demos
**Files:** `docs/demos/web/demo_*.py`
Check:
- Stack names in demos still exist in config
- Selectors match current templates (grep for IDs in `templates/`)
- Shell demo uses command palette for ALL navigation
Run: `python docs/demos/web/record.py [demo]`
## Before Recording
```bash
# Check for uncommitted config changes
git -C /opt/stacks diff compose-farm.yaml
# Verify stacks are running
cf ps audiobookshelf grocy immich dozzle
```


@@ -15,7 +15,7 @@ src/compose_farm/
│ ├── app.py # Shared Typer app instance, version callback
│ ├── common.py # Shared helpers, options, progress bar utilities
│ ├── config.py # Config subcommand (init, show, path, validate, edit, symlink)
│ ├── lifecycle.py # up, down, pull, restart, update, apply commands
│ ├── lifecycle.py # up, down, stop, pull, restart, update, apply, compose commands
│ ├── management.py # refresh, check, init-network, traefik-file commands
│ ├── monitoring.py # logs, ps, stats commands
│ ├── ssh.py # SSH key management (setup, status, keygen)
@@ -58,22 +58,37 @@ Icons use [Lucide](https://lucide.dev/). Add new icons as macros in `web/templat
- **Imports at top level**: Never add imports inside functions unless they are explicitly marked with `# noqa: PLC0415` and a comment explaining it speeds up CLI startup. Heavy modules like `pydantic`, `yaml`, and `rich.table` are lazily imported to keep `cf --help` fast.
## Development Commands
Use `just` for common tasks. Run `just` to list available commands:
| Command | Description |
|---------|-------------|
| `just install` | Install dev dependencies |
| `just test` | Run all tests |
| `just test-cli` | Run CLI tests (parallel) |
| `just test-web` | Run web UI tests (parallel) |
| `just lint` | Lint, format, and type check |
| `just web` | Start web UI (port 9001) |
| `just doc` | Build and serve docs (port 9002) |
| `just clean` | Clean build artifacts |
## Testing
Run tests with `uv run pytest`. Browser tests require Chromium (system-installed or via `playwright install chromium`):
Run tests with `just test` or `uv run pytest`. Browser tests require Chromium (system-installed or via `playwright install chromium`):
```bash
# Unit tests only (skip browser tests, can parallelize)
# Unit tests only (parallel)
uv run pytest -m "not browser" -n auto
# Browser tests only (run sequentially, no coverage)
uv run pytest -m browser --no-cov
# Browser tests only (parallel)
uv run pytest -m browser -n auto
# All tests
uv run pytest --no-cov
uv run pytest
```
Browser tests are marked with `@pytest.mark.browser`. They use Playwright to test HTMX behavior, JavaScript functionality (sidebar filter, command palette, terminals), and content stability during navigation. Run sequentially (no `-n`) to avoid resource contention.
Browser tests are marked with `@pytest.mark.browser`. They use Playwright to test HTMX behavior, JavaScript functionality (sidebar filter, command palette, terminals), and content stability during navigation.
## Communication Notes
@@ -116,10 +131,12 @@ CLI available as `cf` or `compose-farm`.
|---------|-------------|
| `up` | Start stacks (`docker compose up -d`), auto-migrates if host changed |
| `down` | Stop stacks (`docker compose down`). Use `--orphaned` to stop stacks removed from config |
| `stop` | Stop services without removing containers (`docker compose stop`) |
| `pull` | Pull latest images |
| `restart` | `down` + `up -d` |
| `update` | `pull` + `build` + `down` + `up -d` |
| `apply` | Make reality match config: migrate stacks + stop orphans. Use `--dry-run` to preview |
| `compose` | Run any docker compose command on a stack (passthrough) |
| `logs` | Show stack logs |
| `ps` | Show status of all stacks |
| `stats` | Show overview (hosts, stacks, pending migrations; `--live` for container counts) |


@@ -16,5 +16,13 @@ RUN apk add --no-cache openssh-client
COPY --from=builder /root/.local/share/uv/tools/compose-farm /root/.local/share/uv/tools/compose-farm
COPY --from=builder /usr/local/bin/cf /usr/local/bin/compose-farm /usr/local/bin/
ENTRYPOINT ["cf"]
# Allow non-root users to access the installed tool
# (required when running with user: "${CF_UID:-0}:${CF_GID:-0}")
RUN chmod 755 /root
# Allow non-root users to add passwd entries (required for SSH)
RUN chmod 666 /etc/passwd
# Entrypoint creates /etc/passwd entry for non-root UIDs (required for SSH)
ENTRYPOINT ["sh", "-c", "[ $(id -u) != 0 ] && echo ${USER:-u}:x:$(id -u):$(id -g)::${HOME:-/}:/bin/sh >> /etc/passwd; exec cf \"$@\"", "--"]
CMD ["--help"]

264
README.md

@@ -5,12 +5,19 @@
[![License](https://img.shields.io/github/license/basnijholt/compose-farm)](LICENSE)
[![GitHub stars](https://img.shields.io/github/stars/basnijholt/compose-farm)](https://github.com/basnijholt/compose-farm/stargazers)
<img src="http://files.nijho.lt/compose-farm.png" align="right" style="width: 300px;" />
<img src="https://files.nijho.lt/compose-farm.png" alt="Compose Farm logo" align="right" style="width: 300px;" />
A minimal CLI tool to run Docker Compose commands across multiple hosts via SSH.
> [!NOTE]
> Run `docker compose` commands across multiple hosts via SSH. One YAML maps stacks to hosts. Run `cf apply` and reality matches your config—stacks start, migrate, or stop as needed. No Kubernetes, no Swarm, no magic.
> Agentless multi-host Docker Compose. CLI-first with a web UI. Your files stay as plain folders—version-controllable, no lock-in. Run `cf apply` and reality matches your config.
**Why Compose Farm?**
- **Your files, your control** — Plain folders + YAML, not locked in Portainer. Version control everything.
- **Agentless** — Just SSH, no agents to deploy (unlike [Dockge](https://github.com/louislam/dockge)).
- **Zero changes required** — Existing compose files work as-is.
- **Grows with you** — Start single-host, scale to multi-host seamlessly.
- **Declarative** — Change config, run `cf apply`, reality matches.
## Quick Demo
@@ -155,7 +162,7 @@ If you need containers on different hosts to communicate seamlessly, you need Do
```bash
# One-liner (installs uv if needed)
curl -fsSL https://raw.githubusercontent.com/basnijholt/compose-farm/main/bootstrap.sh | sh
curl -fsSL https://compose-farm.nijho.lt/install | sh
# Or if you already have uv/pip
uv tool install compose-farm
@@ -177,6 +184,24 @@ docker run --rm \
ghcr.io/basnijholt/compose-farm up --all
```
**Running as non-root user** (recommended for NFS mounts):
By default, containers run as root. To preserve file ownership on mounted volumes
(e.g., `compose-farm-state.yaml`, config edits), set these environment variables:
```bash
# Add to .env file (one-time setup)
echo "CF_UID=$(id -u)" >> .env
echo "CF_GID=$(id -g)" >> .env
echo "CF_HOME=$HOME" >> .env
echo "CF_USER=$USER" >> .env
```
Or use [direnv](https://direnv.net/) (copies `.envrc.example` to `.envrc`):
```bash
cp .envrc.example .envrc && direnv allow
```
</details>
## SSH Authentication
@@ -216,13 +241,13 @@ When running in Docker, mount a volume to persist the SSH keys. Choose ONE optio
**Option 1: Host path (default)** - keys at `~/.ssh/compose-farm/id_ed25519`
```yaml
volumes:
- ~/.ssh/compose-farm:/root/.ssh
- ~/.ssh/compose-farm:${CF_HOME:-/root}/.ssh
```
**Option 2: Named volume** - managed by Docker
```yaml
volumes:
- cf-ssh:/root/.ssh
- cf-ssh:${CF_HOME:-/root}/.ssh
```
Run setup once after starting the container (while the SSH agent still works):
@@ -233,11 +258,13 @@ docker compose exec web cf ssh setup
The keys will persist across restarts.
**Note:** When running as non-root (with `CF_UID`/`CF_GID`), set `CF_HOME` to your home directory so SSH finds the keys at the correct path.
</details>
## Configuration
Create `~/.config/compose-farm/compose-farm.yaml` (or `./compose-farm.yaml` in your working directory):
Create `compose-farm.yaml` in the directory where you'll run commands (e.g., `/opt/stacks`). This keeps config near your stacks. Alternatively, use `~/.config/compose-farm/compose-farm.yaml` for a global config, or symlink from one to the other with `cf config symlink`.
### Single-host example
@@ -270,7 +297,7 @@ hosts:
stacks:
plex: server-1
jellyfin: server-2
sonarr: server-1
grafana: server-1
# Multi-host stacks (run on multiple/all hosts)
autokuma: all # Runs on ALL configured hosts
@@ -332,7 +359,8 @@ The CLI is available as both `compose-farm` and the shorter `cf` alias.
|---------|-------------|
| **`cf apply`** | **Make reality match config (start + migrate + stop orphans)** |
| `cf up <stack>` | Start stack (auto-migrates if host changed) |
| `cf down <stack>` | Stop stack |
| `cf down <stack>` | Stop and remove stack containers |
| `cf stop <stack>` | Stop stack without removing containers |
| `cf restart <stack>` | down + up |
| `cf update <stack>` | pull + build + down + up |
| `cf pull <stack>` | Pull latest images |
@@ -421,15 +449,6 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
│ copy it or customize the installation. │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Lifecycle ──────────────────────────────────────────────────────────────────╮
│ up Start stacks (docker compose up -d). Auto-migrates if host │
│ changed. │
│ down Stop stacks (docker compose down). │
│ pull Pull latest images (docker compose pull). │
│ restart Restart stacks (down + up). │
│ update Update stacks (pull + build + down + up). │
│ apply Make reality match config (start, migrate, stop as needed). │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Configuration ──────────────────────────────────────────────────────────────╮
│ traefik-file Generate a Traefik file-provider fragment from compose │
│ Traefik labels. │
@@ -439,8 +458,24 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
│ config Manage compose-farm configuration files. │
│ ssh Manage SSH keys for passwordless authentication. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Lifecycle ──────────────────────────────────────────────────────────────────╮
│ up Start stacks (docker compose up -d). Auto-migrates if host │
│ changed. │
│ down Stop stacks (docker compose down). │
│ stop Stop services without removing containers (docker compose │
│ stop). │
│ pull Pull latest images (docker compose pull). │
│ restart Restart stacks (down + up). With --service, restarts just │
│ that service. │
│ update Update stacks (pull + build + down + up). With --service, │
│ updates just that service. │
│ apply Make reality match config (start, migrate, stop │
│ strays/orphans as needed). │
│ compose Run any docker compose command on a stack. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Monitoring ─────────────────────────────────────────────────────────────────╮
│ logs Show stack logs.
│ logs Show stack logs. With --service, shows logs for just that
│ service. │
│ ps Show status of stacks. │
│ stats Show overview statistics for hosts and stacks. │
╰──────────────────────────────────────────────────────────────────────────────╯
@@ -479,10 +514,11 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks
│ --host -H TEXT Filter to stacks on this host
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
│ --all -a Run on all stacks │
│ --host -H TEXT Filter to stacks on this host │
│ --service -s TEXT Target a specific service within the stack
│ --config -c PATH Path to config file
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
@@ -528,6 +564,41 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
</details>
<details>
<summary>See the output of <code>cf stop --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf stop --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf stop [OPTIONS] [STACKS]...
Stop services without removing containers (docker compose stop).
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks │
│ --service -s TEXT Target a specific service within the stack │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
<details>
<summary>See the output of <code>cf pull --help</code></summary>
@@ -551,9 +622,10 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
│ --all -a Run on all stacks │
│ --service -s TEXT Target a specific service within the stack
│ --config -c PATH Path to config file
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
@@ -579,15 +651,16 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Usage: cf restart [OPTIONS] [STACKS]...
Restart stacks (down + up).
Restart stacks (down + up). With --service, restarts just that service.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
│ --all -a Run on all stacks │
│ --service -s TEXT Target a specific service within the stack
│ --config -c PATH Path to config file
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
@@ -613,15 +686,17 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Usage: cf update [OPTIONS] [STACKS]...
Update stacks (pull + build + down + up).
Update stacks (pull + build + down + up). With --service, updates just that
service.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
│ --all -a Run on all stacks │
│ --service -s TEXT Target a specific service within the stack
│ --config -c PATH Path to config file
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
@@ -647,22 +722,25 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Usage: cf apply [OPTIONS]
Make reality match config (start, migrate, stop as needed).
Make reality match config (start, migrate, stop strays/orphans as needed).
This is the "reconcile" command that ensures running stacks match your
config file. It will:
1. Stop orphaned stacks (in state but removed from config)
2. Migrate stacks on wrong host (host in state ≠ host in config)
3. Start missing stacks (in config but not in state)
2. Stop stray stacks (running on unauthorized hosts)
3. Migrate stacks on wrong host (host in state ≠ host in config)
4. Start missing stacks (in config but not in state)
Use --dry-run to preview changes before applying.
Use --no-orphans to only migrate/start without stopping orphaned stacks.
Use --no-orphans to skip stopping orphaned stacks.
Use --no-strays to skip stopping stray stacks.
Use --full to also run 'up' on all stacks (picks up compose/env changes).
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --dry-run -n Show what would change without executing │
│ --no-orphans Only migrate, don't stop orphaned stacks │
│ --no-strays Don't stop stray stacks (running on wrong host) │
│ --full -f Also run up on all stacks to apply config │
│ changes │
│ --config -c PATH Path to config file │
@@ -675,6 +753,53 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
</details>
<details>
<summary>See the output of <code>cf compose --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf compose --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf compose [OPTIONS] STACK COMMAND [ARGS]...
Run any docker compose command on a stack.
Passthrough to docker compose for commands not wrapped by cf.
Options after COMMAND are passed to docker compose, not cf.
Examples:
cf compose mystack --help - show docker compose help
cf compose mystack top - view running processes
cf compose mystack images - list images
cf compose mystack exec web bash - interactive shell
cf compose mystack config - view parsed config
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ * stack TEXT Stack to operate on (use '.' for current dir) │
│ [required] │
│ * command TEXT Docker compose command [required] │
│ args [ARGS]... Additional arguments │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --host -H TEXT Filter to stacks on this host │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
**Configuration**
<details>
@@ -890,6 +1015,26 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
<!-- cf ssh --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf ssh [OPTIONS] COMMAND [ARGS]...
Manage SSH keys for passwordless authentication.
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ───────────────────────────────────────────────────────────────────╮
│ keygen Generate SSH key (does not distribute to hosts). │
│ setup Generate SSH key and distribute to all configured hosts. │
│ status Show SSH key status and host connectivity. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
@@ -912,19 +1057,20 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Usage: cf logs [OPTIONS] [STACKS]...
Show stack logs.
Show stack logs. With --service, shows logs for just that service.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks
│ --host -H TEXT Filter to stacks on this host
│ --follow -f Follow logs
│ --tail -n INTEGER Number of lines (default: 20 for --all, 100
otherwise)
--config -c PATH Path to config file
│ --help -h Show this message and exit.
│ --all -a Run on all stacks │
│ --host -H TEXT Filter to stacks on this host │
│ --service -s TEXT Target a specific service within the stack
│ --follow -f Follow logs
--tail -n INTEGER Number of lines (default: 20 for --all, 100
otherwise)
│ --config -c PATH Path to config file
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
@@ -956,15 +1102,17 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Without arguments: shows all stacks (same as --all).
With stack names: shows only those stacks.
With --host: shows stacks on that host.
With --service: filters to a specific service within the stack.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks
│ --host -H TEXT Filter to stacks on this host
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
│ --all -a Run on all stacks │
│ --host -H TEXT Filter to stacks on this host │
│ --service -s TEXT Target a specific service within the stack
│ --config -c PATH Path to config file
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
@@ -1021,6 +1169,24 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
<!-- cf web --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf web [OPTIONS]
Start the web UI server.
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --host -H TEXT Host to bind to [default: 0.0.0.0] │
│ --port -p INTEGER Port to listen on [default: 8000] │
│ --reload -r Enable auto-reload for development │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>


@@ -26,5 +26,5 @@ stacks:
traefik: server-1 # Traefik runs here
plex: server-2 # Stacks on other hosts get file-provider entries
jellyfin: server-2
sonarr: server-1
radarr: local
grafana: server-1
nextcloud: local


@@ -1,6 +1,10 @@
services:
cf:
image: ghcr.io/basnijholt/compose-farm:latest
# Run as current user to preserve file ownership on mounted volumes
# Set CF_UID=$(id -u) CF_GID=$(id -g) in your environment or .env file
# Defaults to root (0:0) for backwards compatibility
user: "${CF_UID:-0}:${CF_GID:-0}"
volumes:
- ${SSH_AUTH_SOCK}:/ssh-agent:ro
# Compose directory (contains compose files AND compose-farm.yaml config)
@@ -8,31 +12,43 @@ services:
# SSH keys for passwordless auth (generated by `cf ssh setup`)
# Choose ONE option below (use the same option for both cf and web services):
# Option 1: Host path (default) - keys at ~/.ssh/compose-farm/id_ed25519
- ${CF_SSH_DIR:-~/.ssh/compose-farm}:/root/.ssh
- ${CF_SSH_DIR:-~/.ssh/compose-farm}:${CF_HOME:-/root}/.ssh/compose-farm
# Option 2: Named volume - managed by Docker, shared between services
# - cf-ssh:/root/.ssh
# - cf-ssh:${CF_HOME:-/root}/.ssh
environment:
- SSH_AUTH_SOCK=/ssh-agent
# Config file path (state stored alongside it)
- CF_CONFIG=${CF_COMPOSE_DIR:-/opt/stacks}/compose-farm.yaml
# HOME must match the user running the container for SSH to find keys
- HOME=${CF_HOME:-/root}
# USER is required for SSH when running as non-root (UID not in /etc/passwd)
- USER=${CF_USER:-root}
web:
image: ghcr.io/basnijholt/compose-farm:latest
restart: unless-stopped
command: web --host 0.0.0.0 --port 9000
# Run as current user to preserve file ownership on mounted volumes
user: "${CF_UID:-0}:${CF_GID:-0}"
volumes:
- ${SSH_AUTH_SOCK}:/ssh-agent:ro
- ${CF_COMPOSE_DIR:-/opt/stacks}:${CF_COMPOSE_DIR:-/opt/stacks}
# SSH keys - use the SAME option as cf service above
# Option 1: Host path (default)
- ${CF_SSH_DIR:-~/.ssh/compose-farm}:/root/.ssh
- ${CF_SSH_DIR:-~/.ssh/compose-farm}:${CF_HOME:-/root}/.ssh/compose-farm
# Option 2: Named volume
# - cf-ssh:/root/.ssh
# - cf-ssh:${CF_HOME:-/root}/.ssh
# XDG config dir for backups and image digest logs (persists across restarts)
- ${CF_XDG_CONFIG:-~/.config/compose-farm}:${CF_HOME:-/root}/.config/compose-farm
environment:
- SSH_AUTH_SOCK=/ssh-agent
- CF_CONFIG=${CF_COMPOSE_DIR:-/opt/stacks}/compose-farm.yaml
# Used to detect self-updates and run via SSH to survive container restart
- CF_WEB_STACK=compose-farm
# HOME must match the user running the container for SSH to find keys
- HOME=${CF_HOME:-/root}
# USER is required for SSH when running as non-root (UID not in /etc/passwd)
- USER=${CF_USER:-root}
labels:
- traefik.enable=true
- traefik.http.routers.compose-farm.rule=Host(`compose-farm.${DOMAIN}`)


@@ -47,8 +47,7 @@ Compose Farm follows three core principles:
Pydantic models for YAML configuration:
- **Config** - Root configuration with compose_dir, hosts, stacks
- **HostConfig** - Host address and SSH user
- **ServiceConfig** - Service-to-host mappings
- **Host** - Host address, SSH user, and port
Key features:
- Validation with Pydantic
@@ -62,7 +61,7 @@ Tracks deployment state in `compose-farm-state.yaml` (stored alongside the confi
```yaml
deployed:
plex: nuc
sonarr: nuc
grafana: nuc
```
Used for:
@@ -98,7 +97,7 @@ cli/
├── app.py # Shared Typer app, version callback
├── common.py # Shared helpers, options, progress utilities
├── config.py # config subcommand (init, show, path, validate, edit, symlink)
├── lifecycle.py # up, down, pull, restart, update, apply
├── lifecycle.py # up, down, stop, pull, restart, update, apply, compose
├── management.py # refresh, check, init-network, traefik-file
├── monitoring.py # logs, ps, stats
├── ssh.py # SSH key management (setup, status, keygen)
@@ -208,7 +207,7 @@ Location: `compose-farm-state.yaml` (stored alongside the config file)
```yaml
deployed:
plex: nuc
sonarr: nuc
grafana: nuc
```
Image digests are stored separately in `dockerfarm-log.toml` (also in the config directory).

(Binary demo assets updated via Git LFS: docs/assets/compose.gif and docs/assets/compose.webm are added as new pointer files, 1,132,310 and 341,057 bytes; 24 other LFS-tracked assets have updated pointers with new oid and size.)

@@ -221,7 +221,7 @@ Keep config and data separate:
/opt/appdata/ # Local: per-host app data
├── plex/
└── sonarr/
└── grafana/
```
## Performance
@@ -235,7 +235,7 @@ Compose Farm runs operations in parallel. For large deployments:
cf up --all
# Avoid: sequential updates when possible
for svc in plex sonarr radarr; do
for svc in plex grafana nextcloud; do
cf update $svc
done
```
@@ -249,7 +249,7 @@ SSH connections are reused within a command. For many operations:
cf update --all
# Multiple commands, multiple connections (slower)
cf update plex && cf update sonarr && cf update radarr
cf update plex && cf update grafana && cf update nextcloud
```
## Traefik Setup
@@ -297,7 +297,7 @@ http:
|------|----------|--------|
| Compose Farm config | `~/.config/compose-farm/` | Git or copy |
| Compose files | `/opt/compose/` | Git |
| State file | `~/.config/compose-farm/state.yaml` | Optional (can refresh) |
| State file | `~/.config/compose-farm/compose-farm-state.yaml` | Optional (can refresh) |
| App data | `/opt/appdata/` | Backup solution |
### Disaster Recovery


@@ -13,9 +13,11 @@ The Compose Farm CLI is available as both `compose-farm` and the shorter alias `
| **Lifecycle** | `apply` | Make reality match config |
| | `up` | Start stacks |
| | `down` | Stop stacks |
| | `stop` | Stop services without removing containers |
| | `restart` | Restart stacks (down + up) |
| | `update` | Update stacks (pull + build + down + up) |
| | `pull` | Pull latest images |
| | `compose` | Run any docker compose command |
| **Monitoring** | `ps` | Show stack status |
| | `logs` | Show stack logs |
| | `stats` | Show overview statistics |
@@ -43,7 +45,7 @@ cf --help, -h # Show help
Make reality match your configuration. The primary reconciliation command.
<video autoplay loop muted playsinline>
<source src="/assets/apply.webm#t=0.001" type="video/webm">
<source src="/assets/apply.webm" type="video/webm">
</video>
```bash
@@ -97,19 +99,23 @@ cf up [OPTIONS] [STACKS]...
|--------|-------------|
| `--all, -a` | Start all stacks |
| `--host, -H TEXT` | Filter to stacks on this host |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Start specific stacks
cf up plex sonarr
cf up plex grafana
# Start all stacks
cf up --all
# Start all stacks on a specific host
cf up --all --host nuc
# Start a specific service within a stack
cf up immich --service database
```
**Auto-migration:**
@@ -158,9 +164,40 @@ cf down --all --host nuc
---
### cf stop
Stop services without removing containers.
```bash
cf stop [OPTIONS] [STACKS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--all, -a` | Stop all stacks |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Stop specific stacks
cf stop plex
# Stop all stacks
cf stop --all
# Stop a specific service within a stack
cf stop immich --service database
```
---
### cf restart
Restart stacks (down + up).
Restart stacks (down + up). With `--service`, restarts just that service.
```bash
cf restart [OPTIONS] [STACKS]...
@@ -171,6 +208,7 @@ cf restart [OPTIONS] [STACKS]...
| Option | Description |
|--------|-------------|
| `--all, -a` | Restart all stacks |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |
**Examples:**
@@ -178,16 +216,19 @@ cf restart [OPTIONS] [STACKS]...
```bash
cf restart plex
cf restart --all
# Restart a specific service
cf restart immich --service database
```
---
### cf update
Update stacks (pull + build + down + up).
Update stacks (pull + build + down + up). With `--service`, updates just that service.
<video autoplay loop muted playsinline>
<source src="/assets/update.webm#t=0.001" type="video/webm">
<source src="/assets/update.webm" type="video/webm">
</video>
```bash
@@ -199,6 +240,7 @@ cf update [OPTIONS] [STACKS]...
| Option | Description |
|--------|-------------|
| `--all, -a` | Update all stacks |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |
**Examples:**
@@ -209,6 +251,9 @@ cf update plex
# Update all stacks
cf update --all
# Update a specific service
cf update immich --service database
```
---
@@ -226,6 +271,7 @@ cf pull [OPTIONS] [STACKS]...
| Option | Description |
|--------|-------------|
| `--all, -a` | Pull for all stacks |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |
**Examples:**
@@ -233,6 +279,60 @@ cf pull [OPTIONS] [STACKS]...
```bash
cf pull plex
cf pull --all
# Pull a specific service
cf pull immich --service database
```
---
### cf compose
Run any docker compose command on a stack. This is a passthrough to docker compose for commands not wrapped by cf.
<video autoplay loop muted playsinline>
<source src="/assets/compose.webm" type="video/webm">
</video>
```bash
cf compose [OPTIONS] STACK COMMAND [ARGS]...
```
**Arguments:**
| Argument | Description |
|----------|-------------|
| `STACK` | Stack to operate on (use `.` for current dir) |
| `COMMAND` | Docker compose command to run |
| `ARGS` | Additional arguments passed to docker compose |
**Options:**
| Option | Description |
|--------|-------------|
| `--host, -H TEXT` | Filter to stacks on this host (required for multi-host stacks) |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# Show docker compose help
cf compose mystack --help
# View running processes
cf compose mystack top
# List images
cf compose mystack images
# Interactive shell
cf compose mystack exec web bash
# View parsed config
cf compose mystack config
# Use current directory as stack
cf compose . ps
```
---
@@ -253,6 +353,7 @@ cf ps [OPTIONS] [STACKS]...
|--------|-------------|
| `--all, -a` | Show all stacks (default) |
| `--host, -H TEXT` | Filter to stacks on this host |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |
**Examples:**
@@ -262,10 +363,13 @@ cf ps [OPTIONS] [STACKS]...
cf ps
# Show specific stacks
cf ps plex sonarr
cf ps plex grafana
# Filter by host
cf ps --host nuc
# Show status of a specific service
cf ps immich --service database
```
---
@@ -275,7 +379,7 @@ cf ps --host nuc
Show stack logs.
<video autoplay loop muted playsinline>
<source src="/assets/logs.webm#t=0.001" type="video/webm">
<source src="/assets/logs.webm" type="video/webm">
</video>
```bash
@@ -288,6 +392,7 @@ cf logs [OPTIONS] [STACKS]...
|--------|-------------|
| `--all, -a` | Show logs for all stacks |
| `--host, -H TEXT` | Filter to stacks on this host |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--follow, -f` | Follow logs (live stream) |
| `--tail, -n INTEGER` | Number of lines (default: 20 for --all, 100 otherwise) |
| `--config, -c PATH` | Path to config file |
@@ -302,10 +407,13 @@ cf logs plex
cf logs -f plex
# Show last 50 lines of multiple stacks
cf logs -n 50 plex sonarr
cf logs -n 50 plex grafana
# Show last 20 lines of all stacks
cf logs --all
# Show logs for a specific service
cf logs immich --service database
```
---
@@ -374,25 +482,31 @@ cf check jellyfin
Update local state from running stacks.
```bash
cf refresh [OPTIONS]
cf refresh [OPTIONS] [STACKS]...
```
**Options:**
| Option | Description |
|--------|-------------|
| `--all, -a` | Refresh all stacks |
| `--dry-run, -n` | Show what would change |
| `--log-path, -l PATH` | Path to Dockerfarm TOML log |
| `--config, -c PATH` | Path to config file |
Without arguments, refreshes all stacks (same as `--all`). With stack names, refreshes only those stacks.
**Examples:**
```bash
# Sync state with reality
# Sync state with reality (all stacks)
cf refresh
# Preview changes
cf refresh --dry-run
# Refresh specific stacks only
cf refresh plex sonarr
```
---
@@ -539,7 +653,20 @@ cf ssh COMMAND
| `status` | Show SSH key status and host connectivity |
| `keygen` | Generate key without distributing |
**Options for `cf ssh setup` and `cf ssh keygen`:**
**Options for `cf ssh setup`:**
| Option | Description |
|--------|-------------|
| `--config, -c PATH` | Path to config file |
| `--force, -f` | Regenerate key even if it exists |
**Options for `cf ssh status`:**
| Option | Description |
|--------|-------------|
| `--config, -c PATH` | Path to config file |
**Options for `cf ssh keygen`:**
| Option | Description |
|--------|-------------|


@@ -42,8 +42,8 @@ hosts:
# Map stacks to the local host
stacks:
plex: local
sonarr: local
radarr: local
grafana: local
nextcloud: local
```
### Multi-host (full example)
@@ -69,8 +69,8 @@ hosts:
stacks:
# Single-host stacks
plex: nuc
sonarr: nuc
radarr: hp
grafana: nuc
nextcloud: hp
# Multi-host stacks
dozzle: all # Run on ALL hosts
@@ -94,7 +94,7 @@ compose_dir: /opt/compose
├── plex/
│ ├── docker-compose.yml # or compose.yaml
│ └── .env # optional environment file
├── sonarr/
├── grafana/
│ └── docker-compose.yml
└── ...
```
@@ -185,8 +185,8 @@ hosts:
```yaml
stacks:
plex: nuc
sonarr: nuc
radarr: hp
grafana: nuc
nextcloud: hp
```
### Multi-Host Stack
@@ -229,7 +229,7 @@ For example, if your config is at `~/.config/compose-farm/compose-farm.yaml`, th
```yaml
deployed:
plex: nuc
sonarr: nuc
grafana: nuc
```
This file records which stacks are deployed and on which host.
@@ -373,8 +373,8 @@ hosts:
stacks:
# Media
plex: nuc
sonarr: nuc
radarr: nuc
jellyfin: nuc
immich: nuc
# Infrastructure
traefik: nuc
@@ -388,7 +388,6 @@ stacks:
```yaml
compose_dir: /opt/compose
network: production
traefik_file: /opt/traefik/dynamic.d/cf.yml
traefik_stack: traefik


@@ -10,10 +10,10 @@ VHS-based terminal demo recordings for Compose Farm CLI.
```bash
# Record all demos
./docs/demos/cli/record.sh
python docs/demos/cli/record.py
# Record single demo
cd /opt/stacks && vhs docs/demos/cli/quickstart.tape
# Record specific demos
python docs/demos/cli/record.py quickstart migration
```
## Demos
@@ -23,6 +23,7 @@ cd /opt/stacks && vhs docs/demos/cli/quickstart.tape
| `install.tape` | Installing with `uv tool install` |
| `quickstart.tape` | `cf ps`, `cf up`, `cf logs` |
| `logs.tape` | Viewing logs |
| `compose.tape` | `cf compose` passthrough (--help, images, exec) |
| `update.tape` | `cf update` |
| `migration.tape` | Service migration |
| `apply.tape` | `cf apply` |


@@ -0,0 +1,50 @@
# Compose Demo
# Shows that cf compose passes through ANY docker compose command
Output docs/assets/compose.gif
Output docs/assets/compose.webm
Set Shell "bash"
Set FontSize 14
Set Width 900
Set Height 550
Set Theme "Catppuccin Mocha"
Set TypingSpeed 50ms
Type "# cf compose runs ANY docker compose command on the right host"
Enter
Sleep 500ms
Type "# See ALL available compose commands"
Enter
Sleep 500ms
Type "cf compose immich --help"
Enter
Sleep 4s
Type "# Show images"
Enter
Sleep 500ms
Type "cf compose immich images"
Enter
Wait+Screen /immich/
Sleep 2s
Type "# Open shell in a container"
Enter
Sleep 500ms
Type "cf compose immich exec immich-machine-learning sh"
Enter
Wait+Screen /#/
Sleep 1s
Type "python3 --version"
Enter
Sleep 1s
Type "exit"
Enter
Sleep 500ms


@@ -21,7 +21,7 @@ Type "# First, define your hosts..."
Enter
Sleep 500ms
Type "bat -r 1:11 compose-farm.yaml"
Type "bat -r 1:16 compose-farm.yaml"
Enter
Sleep 3s
Type "q"
@@ -31,7 +31,7 @@ Type "# Then map each stack to a host"
Enter
Sleep 500ms
Type "bat -r 13:30 compose-farm.yaml"
Type "bat -r 17:35 compose-farm.yaml"
Enter
Sleep 3s
Type "q"

docs/demos/cli/record.py Executable file

@@ -0,0 +1,134 @@
#!/usr/bin/env python3
"""Record CLI demos using VHS."""
import shutil
import subprocess
import sys
from pathlib import Path
from rich.console import Console
from compose_farm.config import load_config
from compose_farm.state import load_state
console = Console()
SCRIPT_DIR = Path(__file__).parent
STACKS_DIR = Path("/opt/stacks")
CONFIG_FILE = STACKS_DIR / "compose-farm.yaml"
OUTPUT_DIR = SCRIPT_DIR.parent.parent / "assets"
DEMOS = ["install", "quickstart", "logs", "compose", "update", "migration", "apply"]
def _run(cmd: list[str], **kw) -> bool:
return subprocess.run(cmd, check=False, **kw).returncode == 0
def _set_config(host: str) -> None:
"""Set audiobookshelf host in config file."""
_run(["sed", "-i", f"s/audiobookshelf: .*/audiobookshelf: {host}/", str(CONFIG_FILE)])
def _get_hosts() -> tuple[str | None, str | None]:
"""Return (config_host, state_host) for audiobookshelf."""
config = load_config()
state = load_state(config)
return config.stacks.get("audiobookshelf"), state.get("audiobookshelf")
def _setup_state(demo: str) -> bool:
"""Set up required state for demo. Returns False on failure."""
if demo not in ("migration", "apply"):
return True
config_host, state_host = _get_hosts()
if demo == "migration":
# Migration needs audiobookshelf on nas in BOTH config and state
if config_host != "nas":
console.print("[yellow]Setting up: config → nas[/yellow]")
_set_config("nas")
if state_host != "nas":
console.print("[yellow]Setting up: state → nas[/yellow]")
if not _run(["cf", "apply"], cwd=STACKS_DIR):
return False
elif demo == "apply":
# Apply needs config=nas, state=anton (so there's something to apply)
if config_host != "nas":
console.print("[yellow]Setting up: config → nas[/yellow]")
_set_config("nas")
if state_host == "nas":
console.print("[yellow]Setting up: state → anton[/yellow]")
_set_config("anton")
if not _run(["cf", "apply"], cwd=STACKS_DIR):
return False
_set_config("nas")
return True
def _record(name: str, index: int, total: int) -> bool:
"""Record a single demo."""
console.print(f"[cyan][{index}/{total}][/cyan] [green]Recording:[/green] {name}")
if _run(["vhs", str(SCRIPT_DIR / f"{name}.tape")], cwd=STACKS_DIR):
console.print("[green] ✓ Done[/green]")
return True
console.print("[red] ✗ Failed[/red]")
return False
def _reset_after(demo: str, next_demo: str | None) -> None:
"""Reset state after demos that modify audiobookshelf."""
if demo not in ("quickstart", "migration"):
return
_set_config("nas")
if next_demo != "apply": # Let apply demo show the migration
_run(["cf", "apply"], cwd=STACKS_DIR)
def _restore_config(original: str) -> None:
"""Restore original config and sync state."""
console.print("[yellow]Restoring original config...[/yellow]")
CONFIG_FILE.write_text(original)
_run(["cf", "apply"], cwd=STACKS_DIR)
def _main() -> int:
if not shutil.which("vhs"):
console.print("[red]VHS not found. Install: brew install vhs[/red]")
return 1
if not _run(["git", "-C", str(STACKS_DIR), "diff", "--quiet", "compose-farm.yaml"]):
console.print("[red]compose-farm.yaml has uncommitted changes[/red]")
return 1
demos = [d for d in sys.argv[1:] if d in DEMOS] or DEMOS
if sys.argv[1:] and not demos:
console.print(f"[red]Unknown demo. Available: {', '.join(DEMOS)}[/red]")
return 1
# Save original config to restore after recording
original_config = CONFIG_FILE.read_text()
try:
for i, demo in enumerate(demos, 1):
if not _setup_state(demo):
return 1
if not _record(demo, i, len(demos)):
return 1
_reset_after(demo, demos[i] if i < len(demos) else None)
finally:
_restore_config(original_config)
# Move outputs
OUTPUT_DIR.mkdir(exist_ok=True)
for f in (STACKS_DIR / "docs/assets").glob("*.[gw]*"):
shutil.move(str(f), str(OUTPUT_DIR / f.name))
console.print(f"\n[green]Done![/green] Saved to {OUTPUT_DIR}")
return 0
if __name__ == "__main__":
sys.exit(_main())
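
Usage matches the commands documented in the demos README above; a quick sketch (run on a host where compose-farm and VHS are set up, per the script's own checks):

```bash
# Record every demo, or only specific ones
python docs/demos/cli/record.py
python docs/demos/cli/record.py quickstart migration
```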


@@ -1,89 +0,0 @@
#!/usr/bin/env bash
# Record all VHS demos
# Run this on a Docker host with compose-farm configured
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
DEMOS_DIR="$(dirname "$SCRIPT_DIR")"
DOCS_DIR="$(dirname "$DEMOS_DIR")"
REPO_DIR="$(dirname "$DOCS_DIR")"
OUTPUT_DIR="$DOCS_DIR/assets"
# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[0;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color
# Check for VHS
if ! command -v vhs &> /dev/null; then
echo "VHS not found. Install with:"
echo " brew install vhs"
echo " # or"
echo " go install github.com/charmbracelet/vhs@latest"
exit 1
fi
# Ensure output directory exists
mkdir -p "$OUTPUT_DIR"
# Temp output dir (VHS runs from /opt/stacks, so relative paths go here)
TEMP_OUTPUT="/opt/stacks/docs/assets"
mkdir -p "$TEMP_OUTPUT"
# Change to /opt/stacks so cf commands use installed version (not editable install)
cd /opt/stacks
# Ensure compose-farm.yaml has no uncommitted changes (safety check)
if ! git diff --quiet compose-farm.yaml; then
echo -e "${RED}Error: compose-farm.yaml has uncommitted changes${NC}"
echo "Commit or stash your changes before recording demos"
exit 1
fi
echo -e "${BLUE}Recording VHS demos...${NC}"
echo "Output directory: $OUTPUT_DIR"
echo ""
# Function to record a tape
record_tape() {
local tape=$1
local name=$(basename "$tape" .tape)
echo -e "${GREEN}Recording:${NC} $name"
if vhs "$tape"; then
echo -e "${GREEN} ✓ Done${NC}"
else
echo -e "${RED} ✗ Failed${NC}"
return 1
fi
}
# Record demos in logical order
echo -e "${YELLOW}=== Phase 1: Basic demos ===${NC}"
record_tape "$SCRIPT_DIR/install.tape"
record_tape "$SCRIPT_DIR/quickstart.tape"
record_tape "$SCRIPT_DIR/logs.tape"
echo -e "${YELLOW}=== Phase 2: Update demo ===${NC}"
record_tape "$SCRIPT_DIR/update.tape"
echo -e "${YELLOW}=== Phase 3: Migration demo ===${NC}"
record_tape "$SCRIPT_DIR/migration.tape"
git -C /opt/stacks checkout compose-farm.yaml # Reset after migration
echo -e "${YELLOW}=== Phase 4: Apply demo ===${NC}"
record_tape "$SCRIPT_DIR/apply.tape"
# Move GIFs and WebMs from temp location to repo
echo ""
echo -e "${BLUE}Moving recordings to repo...${NC}"
mv "$TEMP_OUTPUT"/*.gif "$OUTPUT_DIR/" 2>/dev/null || true
mv "$TEMP_OUTPUT"/*.webm "$OUTPUT_DIR/" 2>/dev/null || true
rmdir "$TEMP_OUTPUT" 2>/dev/null || true
rmdir "$(dirname "$TEMP_OUTPUT")" 2>/dev/null || true
echo ""
echo -e "${GREEN}Done!${NC} Recordings saved to $OUTPUT_DIR/"
ls -la "$OUTPUT_DIR"/*.gif "$OUTPUT_DIR"/*.webm 2>/dev/null || echo "No recordings found (check for errors above)"


@@ -60,10 +60,14 @@ def test_demo_console(recording_page: Page, server_url: str) -> None:
page.keyboard.press("Enter")
pause(page, 2500) # Wait for output
# Scroll down to show the Editor section with Compose Farm config
editor_section = page.locator(".collapse", has_text="Editor").first
editor_section.scroll_into_view_if_needed()
pause(page, 800)
# Smoothly scroll down to show the Editor section with Compose Farm config
page.evaluate("""
const editor = document.getElementById('console-editor');
if (editor) {
editor.scrollIntoView({ behavior: 'smooth', block: 'center' });
}
""")
pause(page, 1200) # Wait for smooth scroll animation
# Wait for Monaco editor to load with config content
page.wait_for_selector("#console-editor .monaco-editor", timeout=10000)


@@ -1,9 +1,11 @@
"""Demo: Container shell exec.
"""Demo: Container shell exec via command palette.
Records a ~25 second demo showing:
- Navigating to a stack page
- Clicking Shell button on a container
- Running top command inside the container
Records a ~35 second demo showing:
- Navigating to immich stack (multiple containers)
- Using command palette with fuzzy matching ("sh mach") to open shell
- Running a command
- Using command palette to switch to server container shell
- Running another command
Run: pytest docs/demos/web/demo_shell.py -v --no-cov
"""
@@ -14,6 +16,7 @@ from typing import TYPE_CHECKING
import pytest
from conftest import (
open_command_palette,
pause,
slow_type,
wait_for_sidebar,
@@ -33,39 +36,71 @@ def test_demo_shell(recording_page: Page, server_url: str) -> None:
wait_for_sidebar(page)
pause(page, 800)
# Navigate to a stack with a running container (grocy)
page.locator("#sidebar-stacks a", has_text="grocy").click()
page.wait_for_url("**/stack/grocy", timeout=5000)
# Navigate to immich via command palette (has multiple containers)
open_command_palette(page)
pause(page, 400)
slow_type(page, "#cmd-input", "immich", delay=100)
pause(page, 600)
page.keyboard.press("Enter")
page.wait_for_url("**/stack/immich", timeout=5000)
pause(page, 1500)
# Wait for containers list to load (loaded via HTMX)
# Wait for containers list to load (so shell commands are available)
page.wait_for_selector("#containers-list button", timeout=10000)
pause(page, 800)
# Click Shell button on the first container
shell_btn = page.locator("#containers-list button", has_text="Shell").first
shell_btn.click()
# Use command palette with fuzzy matching: "sh mach" -> "Shell: immich-machine-learning"
open_command_palette(page)
pause(page, 400)
slow_type(page, "#cmd-input", "sh mach", delay=100)
pause(page, 600)
page.keyboard.press("Enter")
pause(page, 1000)
# Wait for exec terminal to appear
page.wait_for_selector("#exec-terminal .xterm", timeout=10000)
# Scroll down to make the terminal visible
page.locator("#exec-terminal").scroll_into_view_if_needed()
pause(page, 2000)
# Smoothly scroll down to make the terminal visible
page.evaluate("""
const terminal = document.getElementById('exec-terminal');
if (terminal) {
terminal.scrollIntoView({ behavior: 'smooth', block: 'center' });
}
""")
pause(page, 1200)
# Run top command
slow_type(page, "#exec-terminal .xterm-helper-textarea", "top", delay=100)
# Run python version command
slow_type(page, "#exec-terminal .xterm-helper-textarea", "python3 --version", delay=60)
pause(page, 300)
page.keyboard.press("Enter")
pause(page, 4000) # Let top run for a bit
pause(page, 1500)
# Press q to quit top
page.keyboard.press("q")
# Blur the terminal to release focus (won't scroll)
page.evaluate("document.activeElement?.blur()")
pause(page, 500)
# Use command palette to switch to server container: "sh serv" -> "Shell: immich-server"
open_command_palette(page)
pause(page, 400)
slow_type(page, "#cmd-input", "sh serv", delay=100)
pause(page, 600)
page.keyboard.press("Enter")
pause(page, 1000)
# Run another command to show it's interactive
slow_type(page, "#exec-terminal .xterm-helper-textarea", "ps aux | head", delay=60)
# Wait for new terminal
page.wait_for_selector("#exec-terminal .xterm", timeout=10000)
# Scroll to terminal
page.evaluate("""
const terminal = document.getElementById('exec-terminal');
if (terminal) {
terminal.scrollIntoView({ behavior: 'smooth', block: 'center' });
}
""")
pause(page, 1200)
# Run ls command
slow_type(page, "#exec-terminal .xterm-helper-textarea", "ls /usr/src/app", delay=60)
pause(page, 300)
page.keyboard.press("Enter")
pause(page, 2000)


@@ -55,9 +55,14 @@ def test_demo_stack(recording_page: Page, server_url: str) -> None:
page.wait_for_selector("#compose-editor .monaco-editor", timeout=10000)
pause(page, 2000) # Let viewer see the compose file
# Scroll down slightly to show more of the editor
page.locator("#compose-editor").scroll_into_view_if_needed()
pause(page, 1500)
# Smoothly scroll down to show more of the editor
page.evaluate("""
const editor = document.getElementById('compose-editor');
if (editor) {
editor.scrollIntoView({ behavior: 'smooth', block: 'center' });
}
""")
pause(page, 1200) # Wait for smooth scroll animation
# Close the compose file section
compose_collapse.locator("input[type=checkbox]").click(force=True)


@@ -63,7 +63,7 @@ def test_demo_themes(recording_page: Page, server_url: str) -> None:
pause(page, 400)
# Type to filter to a light theme (theme button pre-populates "theme:")
slow_type(page, "#cmd-input", " cup", delay=100)
slow_type(page, "#cmd-input", "cup", delay=100)
pause(page, 500)
page.keyboard.press("Enter")
pause(page, 1000)
@@ -75,7 +75,7 @@ def test_demo_themes(recording_page: Page, server_url: str) -> None:
page.wait_for_selector("#cmd-palette[open]", timeout=2000)
pause(page, 300)
slow_type(page, "#cmd-input", " dark", delay=100)
slow_type(page, "#cmd-input", "dark", delay=100)
pause(page, 400)
page.keyboard.press("Enter")
pause(page, 800)


@@ -5,7 +5,7 @@ Records a comprehensive demo (~60 seconds) combining all major features:
2. Editor showing Compose Farm YAML config
3. Command palette navigation to grocy stack
4. Stack actions: up, logs
5. Switch to mealie stack via command palette, run update
5. Switch to dozzle stack via command palette, run update
6. Dashboard overview
7. Theme cycling via command palette
@@ -126,13 +126,13 @@ def _demo_stack_actions(page: Page) -> None:
page.wait_for_selector("#terminal-output .xterm", timeout=5000)
pause(page, 2500)
# Switch to mealie via command palette
# Switch to dozzle via command palette (on nas for lower latency)
open_command_palette(page)
pause(page, 300)
slow_type(page, "#cmd-input", "mealie", delay=100)
slow_type(page, "#cmd-input", "dozzle", delay=100)
pause(page, 400)
page.keyboard.press("Enter")
page.wait_for_url("**/stack/mealie", timeout=5000)
page.wait_for_url("**/stack/dozzle", timeout=5000)
pause(page, 1000)
# Run update action
@@ -162,32 +162,20 @@ def _demo_dashboard_and_themes(page: Page, server_url: str) -> None:
page.evaluate("window.scrollTo(0, 0)")
pause(page, 600)
# Open theme picker and arrow down to Luxury (shows live preview)
# Theme order: light, dark, cupcake, bumblebee, emerald, corporate, synthwave,
# retro, cyberpunk, valentine, halloween, garden, forest, aqua, lofi, pastel,
# fantasy, wireframe, black, luxury (index 19)
# Open theme picker and arrow down to Dracula (shows live preview)
page.locator("#theme-btn").click()
page.wait_for_selector("#cmd-palette[open]", timeout=2000)
pause(page, 400)
# Arrow down through themes with live preview until we reach Luxury
# Arrow down through themes with live preview until we reach Dracula
for _ in range(19):
page.keyboard.press("ArrowDown")
pause(page, 180)
# Select Luxury theme
# Select Dracula theme and end on it
pause(page, 400)
page.keyboard.press("Enter")
pause(page, 1000)
# Return to dark theme
page.locator("#theme-btn").click()
page.wait_for_selector("#cmd-palette[open]", timeout=2000)
pause(page, 300)
slow_type(page, "#cmd-input", " dark", delay=80)
pause(page, 400)
page.keyboard.press("Enter")
pause(page, 1000)
pause(page, 1500)
@pytest.mark.browser # type: ignore[misc]


@@ -88,9 +88,9 @@ def patch_playwright_video_quality() -> None:
console.print("[green]Patched Playwright for high-quality video recording[/green]")
def record_demo(name: str) -> Path | None:
def record_demo(name: str, index: int, total: int) -> Path | None:
"""Run a single demo and return the video path."""
console.print(f"[green]Recording:[/green] web-{name}")
console.print(f"[cyan][{index}/{total}][/cyan] [green]Recording:[/green] web-{name}")
demo_file = SCRIPT_DIR / f"demo_{name}.py"
if not demo_file.exists():
@@ -227,9 +227,7 @@ def main() -> int:
try:
for i, demo in enumerate(demos_to_record, 1):
console.print(f"[yellow]=== Demo {i}/{len(demos_to_record)}: {demo} ===[/yellow]")
video_path = record_demo(demo)
video_path = record_demo(demo, i, len(demos_to_record))
if video_path:
webm, gif = move_recording(video_path, demo)
results[demo] = (webm, gif)


@@ -18,13 +18,13 @@ Before you begin, ensure you have:
## Installation
<video autoplay loop muted playsinline>
<source src="/assets/install.webm#t=0.001" type="video/webm">
<source src="/assets/install.webm" type="video/webm">
</video>
### One-liner (recommended)
```bash
curl -fsSL https://raw.githubusercontent.com/basnijholt/compose-farm/main/bootstrap.sh | sh
curl -fsSL https://compose-farm.nijho.lt/install | sh
```
This installs [uv](https://docs.astral.sh/uv/) if needed, then installs compose-farm.
@@ -54,6 +54,25 @@ docker run --rm \
ghcr.io/basnijholt/compose-farm up --all
```
**Running as non-root user** (recommended for NFS mounts):
By default, containers run as root. To preserve file ownership on mounted volumes, set these environment variables in your `.env` file:
```bash
# Add to .env file (one-time setup)
echo "CF_UID=$(id -u)" >> .env
echo "CF_GID=$(id -g)" >> .env
echo "CF_HOME=$HOME" >> .env
echo "CF_USER=$USER" >> .env
```
Or use [direnv](https://direnv.net/) to auto-set these variables when entering the directory:
```bash
cp .envrc.example .envrc && direnv allow
```
This ensures files like `compose-farm-state.yaml` and web UI edits are owned by your user instead of root. The `CF_USER` variable is required for SSH to work when running as a non-root user.
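
The `.envrc.example` referenced above is not shown in this diff; a hedged sketch of what such a file would export (assumed contents):

```bash
# .envrc (direnv) - exports the same variables as the .env setup above
export CF_UID=$(id -u)
export CF_GID=$(id -g)
export CF_HOME=$HOME
export CF_USER=$USER
```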
### Verify Installation
```bash
@@ -111,9 +130,9 @@ nas:/volume1/compose /opt/compose nfs defaults 0 0
/opt/compose/ # compose_dir in config
├── plex/
│ └── docker-compose.yml
├── sonarr/
├── grafana/
│ └── docker-compose.yml
├── radarr/
├── nextcloud/
│ └── docker-compose.yml
└── jellyfin/
└── docker-compose.yml
@@ -123,7 +142,21 @@ nas:/volume1/compose /opt/compose nfs defaults 0 0
### Create Config File
Create `~/.config/compose-farm/compose-farm.yaml`:
Create `compose-farm.yaml` in the directory where you'll run commands. For example, if your stacks are in `/opt/stacks`, place the config there too:
```bash
cd /opt/stacks
cf config init
```
Alternatively, use `~/.config/compose-farm/compose-farm.yaml` for a global config. You can also symlink a working directory config to the global location:
```bash
# Create config in your stacks directory, symlink to ~/.config
cf config symlink /opt/stacks/compose-farm.yaml
```
This way, `cf` commands work from anywhere while the config lives with your stacks.
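
To confirm which file `cf` resolves after the symlink, the `config path` subcommand (listed in the CLI layout) can be used; a quick sketch, with the expected result noted as an assumption:

```bash
cd /tmp                # anywhere outside the stacks directory
cf config path         # should print the config resolved via ~/.config/compose-farm/
```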
#### Single host example
@@ -136,8 +169,8 @@ hosts:
stacks:
plex: local
sonarr: local
radarr: local
grafana: local
nextcloud: local
```
#### Multi-host example
@@ -157,8 +190,8 @@ hosts:
# Map stacks to hosts
stacks:
plex: nuc
sonarr: nuc
radarr: hp
grafana: nuc
nextcloud: hp
```
Each entry in `stacks:` maps to a folder under `compose_dir` that contains a compose file.
@@ -197,7 +230,7 @@ Starts all stacks on their assigned hosts.
### Start Specific Stacks
```bash
cf up plex sonarr
cf up plex grafana
```
### Apply Configuration
@@ -236,19 +269,22 @@ Create the compose file:
```bash
# On any host (shared storage)
mkdir -p /opt/compose/prowlarr
cat > /opt/compose/prowlarr/docker-compose.yml << 'EOF'
mkdir -p /opt/compose/gitea
cat > /opt/compose/gitea/docker-compose.yml << 'EOF'
services:
prowlarr:
image: lscr.io/linuxserver/prowlarr:latest
container_name: prowlarr
gitea:
image: docker.gitea.com/gitea:latest
container_name: gitea
environment:
- PUID=1000
- PGID=1000
- USER_UID=1000
- USER_GID=1000
volumes:
- /opt/config/prowlarr:/config
- /opt/config/gitea:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "9696:9696"
- "3000:3000"
- "2222:22"
restart: unless-stopped
EOF
```
@@ -258,13 +294,13 @@ Add to config:
```yaml
stacks:
# ... existing stacks
prowlarr: nuc
gitea: nuc
```
Start the stack:
```bash
cf up prowlarr
cf up gitea
```
### 2. Move a Stack to Another Host


@@ -17,12 +17,12 @@ It also works great on a single host with one folder per stack; just map stacks
**CLI:**
<video autoplay loop muted playsinline>
<source src="/assets/quickstart.webm#t=0.001" type="video/webm">
<source src="/assets/quickstart.webm" type="video/webm">
</video>
**[Web UI](web-ui.md):**
<video autoplay loop muted playsinline>
<source src="/assets/web-workflow.webm#t=0.001" type="video/webm">
<source src="/assets/web-workflow.webm" type="video/webm">
</video>
## Why Compose Farm?
@@ -76,7 +76,7 @@ hosts:
stacks:
plex: server-1
jellyfin: server-2
sonarr: server-1
grafana: server-1
```
```bash
@@ -96,7 +96,7 @@ pip install compose-farm
### Configuration
Create `~/.config/compose-farm/compose-farm.yaml`:
Create `compose-farm.yaml` in the directory where you'll run commands (e.g., `/opt/stacks`), or in `~/.config/compose-farm/`:
```yaml
compose_dir: /opt/compose
@@ -110,10 +110,12 @@ hosts:
stacks:
plex: nuc
sonarr: nuc
radarr: hp
grafana: nuc
nextcloud: hp
```
See [Configuration](configuration.md) for all options and the full search order.
### Usage
```bash
@@ -121,7 +123,7 @@ stacks:
cf apply
# Start specific stacks
cf up plex sonarr
cf up plex grafana
# Check status
cf ps
@@ -136,7 +138,7 @@ cf logs -f plex
- **Auto-migration**: Change a host assignment, run `cf up`, stack moves automatically
<video autoplay loop muted playsinline>
<source src="/assets/migration.webm#t=0.001" type="video/webm">
<source src="/assets/migration.webm" type="video/webm">
</video>
- **Parallel execution**: Multiple stacks start/stop concurrently
- **State tracking**: Knows which stacks are running where

bootstrap.sh → docs/install Executable file → Normal file

@@ -1,6 +1,6 @@
#!/bin/sh
# Compose Farm bootstrap script
# Usage: curl -fsSL https://raw.githubusercontent.com/basnijholt/compose-farm/main/bootstrap.sh | sh
# Usage: curl -fsSL https://compose-farm.nijho.lt/install | sh
#
# This script installs uv (if needed) and then installs compose-farm as a uv tool.


@@ -0,0 +1,21 @@
// Fix Safari video autoplay issues
(function() {
function initVideos() {
document.querySelectorAll('video[autoplay]').forEach(function(video) {
video.load();
video.play().catch(function() {});
});
}
// For initial page load (needed for Chrome)
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', initVideos);
} else {
initVideos();
}
// For MkDocs instant navigation (needed for Safari)
if (typeof document$ !== 'undefined') {
document$.subscribe(initVideos);
}
})();


@@ -27,8 +27,8 @@ hosts:
stacks:
plex: nuc
jellyfin: hp
sonarr: nuc
radarr: nuc
grafana: nuc
nextcloud: nuc
```
Then just:


@@ -133,7 +133,7 @@ hosts:
stacks:
traefik: nuc # Traefik runs here
plex: hp # Routed via file-provider
sonarr: hp
grafana: hp
```
With `traefik_file` set, these commands auto-regenerate the config:
@@ -256,8 +256,8 @@ stacks:
traefik: nuc
plex: hp
jellyfin: nas
sonarr: nuc
radarr: nuc
grafana: nuc
nextcloud: nuc
```
### /opt/compose/plex/docker-compose.yml
@@ -309,7 +309,7 @@ http:
- url: http://192.168.1.100:8096
```
Note: `sonarr` and `radarr` are NOT in the file because they're on the same host as Traefik (`nuc`).
Note: `grafana` and `nextcloud` are NOT in the file because they're on the same host as Traefik (`nuc`).
## Combining with Existing Config


@@ -19,7 +19,7 @@ Then open [http://localhost:8000](http://localhost:8000).
Console terminal, config editor, stack navigation, actions (up, logs, update), dashboard overview, and theme switching - all in one flow.
<video autoplay loop muted playsinline>
<source src="/assets/web-workflow.webm#t=0.001" type="video/webm">
<source src="/assets/web-workflow.webm" type="video/webm">
</video>
### Stack Actions
@@ -27,7 +27,7 @@ Console terminal, config editor, stack navigation, actions (up, logs, update), d
Navigate to any stack and use the command palette to trigger actions like restart, pull, update, or view logs. Output streams in real-time via WebSocket.
<video autoplay loop muted playsinline>
<source src="/assets/web-stack.webm#t=0.001" type="video/webm">
<source src="/assets/web-stack.webm" type="video/webm">
</video>
### Theme Switching
@@ -35,7 +35,7 @@ Navigate to any stack and use the command palette to trigger actions like restar
35 themes available via the command palette. Type `theme:` to filter, then use arrow keys to preview themes live before selecting.
<video autoplay loop muted playsinline>
<source src="/assets/web-themes.webm#t=0.001" type="video/webm">
<source src="/assets/web-themes.webm" type="video/webm">
</video>
### Command Palette
@@ -43,7 +43,7 @@ Navigate to any stack and use the command palette to trigger actions like restar
Press `Ctrl+K` (or `Cmd+K` on macOS) to open the command palette. Use fuzzy search to quickly navigate, trigger actions, or change themes.
<video autoplay loop muted playsinline>
<source src="/assets/web-navigation.webm#t=0.001" type="video/webm">
<source src="/assets/web-navigation.webm" type="video/webm">
</video>
## Pages
@@ -63,6 +63,8 @@ Press `Ctrl+K` (or `Cmd+K` on macOS) to open the command palette. Use fuzzy sear
- Container shell access (exec into running containers)
- Terminal output for running commands
Files are automatically backed up before saving to `~/.config/compose-farm/backups/`.
### Console (`/console`)
- Full shell access to any host
@@ -70,7 +72,7 @@ Press `Ctrl+K` (or `Cmd+K` on macOS) to open the command palette. Use fuzzy sear
- Monaco editor with syntax highlighting
<video autoplay loop muted playsinline>
<source src="/assets/web-console.webm#t=0.001" type="video/webm">
<source src="/assets/web-console.webm" type="video/webm">
</video>
### Container Shell
@@ -78,7 +80,7 @@ Press `Ctrl+K` (or `Cmd+K` on macOS) to open the command palette. Use fuzzy sear
Click the Shell button on any running container to exec into it directly from the browser.
<video autoplay loop muted playsinline>
<source src="/assets/web-shell.webm#t=0.001" type="video/webm">
<source src="/assets/web-shell.webm" type="video/webm">
</video>
## Keyboard Shortcuts
@@ -116,7 +118,7 @@ The web UI requires additional dependencies:
pip install compose-farm[web]
# If installed via uv
uv tool install compose-farm --with web
uv tool install 'compose-farm[web]'
```
## Architecture

justfile Normal file

@@ -0,0 +1,60 @@
# Compose Farm Development Commands
# Run `just` to see available commands
# Default: list available commands
default:
@just --list
# Install development dependencies
install:
uv sync --all-extras --dev
# Run all tests (parallel)
test:
uv run pytest -n auto
# Run CLI tests only (parallel, with coverage)
test-cli:
uv run pytest -m "not browser" -n auto
# Run web UI tests only (parallel)
test-web:
uv run pytest -m browser -n auto
# Lint, format, and type check
lint:
uv run ruff check --fix .
uv run ruff format .
uv run mypy src
uv run ty check src
# Start web UI in development mode with auto-reload
web:
uv run cf web --reload --port 9001
# Kill the web server
kill-web:
lsof -ti :9001 | xargs kill -9 2>/dev/null || true
# Build docs and serve locally
doc:
uvx zensical build
python -m http.server -d site 9002
# Kill the docs server
kill-doc:
lsof -ti :9002 | xargs kill -9 2>/dev/null || true
# Record CLI demos (all or specific: just record-cli quickstart)
record-cli *demos:
python docs/demos/cli/record.py {{demos}}
# Record web UI demos (all or specific: just record-web navigation)
record-web *demos:
python docs/demos/web/record.py {{demos}}
# Clean up build artifacts and caches
clean:
rm -rf .pytest_cache .mypy_cache .ruff_cache .coverage htmlcov dist build
find . -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
find . -type d -name "*.egg-info" -exec rm -rf {} + 2>/dev/null || true
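
A typical development loop using these recipes (recipe names as defined above; assumes `just` and `uv` are installed):

```bash
just install                 # uv sync --all-extras --dev
just lint && just test-cli   # ruff/mypy/ty, then the non-browser test suite
just web                     # dev server with auto-reload on port 9001
just record-cli quickstart   # re-record a single CLI demo
```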


@@ -23,6 +23,7 @@ app = typer.Typer(
help="Compose Farm - run docker compose commands across multiple hosts",
no_args_is_help=True,
context_settings={"help_option_names": ["-h", "--help"]},
rich_markup_mode="rich",
)


@@ -59,6 +59,10 @@ HostOption = Annotated[
str | None,
typer.Option("--host", "-H", help="Filter to stacks on this host"),
]
ServiceOption = Annotated[
str | None,
typer.Option("--service", "-s", help="Target a specific service within the stack"),
]
# --- Constants (internal) ---
_MISSING_PATH_PREVIEW_LIMIT = 2


@@ -2,15 +2,20 @@
from __future__ import annotations
from typing import Annotated
from pathlib import Path
from typing import TYPE_CHECKING, Annotated
import typer
if TYPE_CHECKING:
from compose_farm.config import Config
from compose_farm.cli.app import app
from compose_farm.cli.common import (
AllOption,
ConfigOption,
HostOption,
ServiceOption,
StacksArg,
format_host,
get_stacks,
@@ -18,10 +23,17 @@ from compose_farm.cli.common import (
maybe_regenerate_traefik,
report_results,
run_async,
validate_host_for_stack,
validate_stacks,
)
from compose_farm.cli.management import _discover_stacks_full
from compose_farm.console import MSG_DRY_RUN, console, print_error, print_success
from compose_farm.executor import run_on_stacks, run_sequential_on_stacks
from compose_farm.operations import stop_orphaned_stacks, up_stacks
from compose_farm.executor import run_compose_on_host, run_on_stacks, run_sequential_on_stacks
from compose_farm.operations import (
stop_orphaned_stacks,
stop_stray_stacks,
up_stacks,
)
from compose_farm.state import (
get_orphaned_stacks,
get_stack_host,
@@ -36,11 +48,19 @@ def up(
stacks: StacksArg = None,
all_stacks: AllOption = False,
host: HostOption = None,
service: ServiceOption = None,
config: ConfigOption = None,
) -> None:
"""Start stacks (docker compose up -d). Auto-migrates if host changed."""
stack_list, cfg = get_stacks(stacks or [], all_stacks, config, host=host)
results = run_async(up_stacks(cfg, stack_list, raw=True))
if service:
if len(stack_list) != 1:
print_error("--service requires exactly one stack")
raise typer.Exit(1)
# For service-level up, use run_on_stacks directly (no migration logic)
results = run_async(run_on_stacks(cfg, stack_list, f"up -d {service}", raw=True))
else:
results = run_async(up_stacks(cfg, stack_list, raw=True))
maybe_regenerate_traefik(cfg, results)
report_results(results)
@@ -98,16 +118,39 @@ def down(
report_results(results)
@app.command(rich_help_panel="Lifecycle")
def stop(
stacks: StacksArg = None,
all_stacks: AllOption = False,
service: ServiceOption = None,
config: ConfigOption = None,
) -> None:
"""Stop services without removing containers (docker compose stop)."""
stack_list, cfg = get_stacks(stacks or [], all_stacks, config)
if service and len(stack_list) != 1:
print_error("--service requires exactly one stack")
raise typer.Exit(1)
cmd = f"stop {service}" if service else "stop"
raw = len(stack_list) == 1
results = run_async(run_on_stacks(cfg, stack_list, cmd, raw=raw))
report_results(results)
@app.command(rich_help_panel="Lifecycle")
def pull(
stacks: StacksArg = None,
all_stacks: AllOption = False,
service: ServiceOption = None,
config: ConfigOption = None,
) -> None:
"""Pull latest images (docker compose pull)."""
stack_list, cfg = get_stacks(stacks or [], all_stacks, config)
if service and len(stack_list) != 1:
print_error("--service requires exactly one stack")
raise typer.Exit(1)
cmd = f"pull --ignore-buildable {service}" if service else "pull --ignore-buildable"
raw = len(stack_list) == 1
results = run_async(run_on_stacks(cfg, stack_list, "pull", raw=raw))
results = run_async(run_on_stacks(cfg, stack_list, cmd, raw=raw))
report_results(results)
@@ -115,12 +158,21 @@ def pull(
def restart(
stacks: StacksArg = None,
all_stacks: AllOption = False,
service: ServiceOption = None,
config: ConfigOption = None,
) -> None:
"""Restart stacks (down + up)."""
"""Restart stacks (down + up). With --service, restarts just that service."""
stack_list, cfg = get_stacks(stacks or [], all_stacks, config)
raw = len(stack_list) == 1
results = run_async(run_sequential_on_stacks(cfg, stack_list, ["down", "up -d"], raw=raw))
if service:
if len(stack_list) != 1:
print_error("--service requires exactly one stack")
raise typer.Exit(1)
# For service-level restart, use docker compose restart (more efficient)
raw = True
results = run_async(run_on_stacks(cfg, stack_list, f"restart {service}", raw=raw))
else:
raw = len(stack_list) == 1
results = run_async(run_sequential_on_stacks(cfg, stack_list, ["down", "up -d"], raw=raw))
maybe_regenerate_traefik(cfg, results)
report_results(results)
@@ -129,22 +181,58 @@ def restart(
def update(
stacks: StacksArg = None,
all_stacks: AllOption = False,
service: ServiceOption = None,
config: ConfigOption = None,
) -> None:
"""Update stacks (pull + build + down + up)."""
"""Update stacks (pull + build + down + up). With --service, updates just that service."""
stack_list, cfg = get_stacks(stacks or [], all_stacks, config)
raw = len(stack_list) == 1
results = run_async(
run_sequential_on_stacks(
cfg, stack_list, ["pull --ignore-buildable", "build", "down", "up -d"], raw=raw
if service:
if len(stack_list) != 1:
print_error("--service requires exactly one stack")
raise typer.Exit(1)
# For service-level update: pull + build + stop + up (stop instead of down)
raw = True
results = run_async(
run_sequential_on_stacks(
cfg,
stack_list,
[
f"pull --ignore-buildable {service}",
f"build {service}",
f"stop {service}",
f"up -d {service}",
],
raw=raw,
)
)
else:
raw = len(stack_list) == 1
results = run_async(
run_sequential_on_stacks(
cfg, stack_list, ["pull --ignore-buildable", "build", "down", "up -d"], raw=raw
)
)
)
maybe_regenerate_traefik(cfg, results)
report_results(results)
def _discover_strays(cfg: Config) -> dict[str, list[str]]:
"""Discover stacks running on unauthorized hosts by scanning all hosts."""
_, strays, duplicates = _discover_stacks_full(cfg)
# Merge duplicates into strays (for single-host stacks on multiple hosts,
# keep correct host and stop others)
for stack, running_hosts in duplicates.items():
configured = cfg.get_hosts(stack)[0]
stray_hosts = [h for h in running_hosts if h != configured]
if stray_hosts:
strays[stack] = stray_hosts
return strays
@app.command(rich_help_panel="Lifecycle")
def apply( # noqa: PLR0912 (multi-phase reconciliation needs these branches)
def apply( # noqa: C901, PLR0912, PLR0915 (multi-phase reconciliation needs these branches)
dry_run: Annotated[
bool,
typer.Option("--dry-run", "-n", help="Show what would change without executing"),
@@ -153,23 +241,29 @@ def apply( # noqa: PLR0912 (multi-phase reconciliation needs these branches)
bool,
typer.Option("--no-orphans", help="Only migrate, don't stop orphaned stacks"),
] = False,
no_strays: Annotated[
bool,
typer.Option("--no-strays", help="Don't stop stray stacks (running on wrong host)"),
] = False,
full: Annotated[
bool,
typer.Option("--full", "-f", help="Also run up on all stacks to apply config changes"),
] = False,
config: ConfigOption = None,
) -> None:
"""Make reality match config (start, migrate, stop as needed).
"""Make reality match config (start, migrate, stop strays/orphans as needed).
This is the "reconcile" command that ensures running stacks match your
config file. It will:
1. Stop orphaned stacks (in state but removed from config)
2. Migrate stacks on wrong host (host in state ≠ host in config)
3. Start missing stacks (in config but not in state)
2. Stop stray stacks (running on unauthorized hosts)
3. Migrate stacks on wrong host (host in state ≠ host in config)
4. Start missing stacks (in config but not in state)
Use --dry-run to preview changes before applying.
Use --no-orphans to only migrate/start without stopping orphaned stacks.
Use --no-orphans to skip stopping orphaned stacks.
Use --no-strays to skip stopping stray stacks.
Use --full to also run 'up' on all stacks (picks up compose/env changes).
"""
cfg = load_config_or_exit(config)
@@ -177,16 +271,28 @@ def apply( # noqa: PLR0912 (multi-phase reconciliation needs these branches)
migrations = get_stacks_needing_migration(cfg)
missing = get_stacks_not_in_state(cfg)
strays: dict[str, list[str]] = {}
if not no_strays:
console.print("[dim]Scanning hosts for stray containers...[/]")
strays = _discover_strays(cfg)
# For --full: refresh all stacks not already being started/migrated
handled = set(migrations) | set(missing)
to_refresh = [stack for stack in cfg.stacks if stack not in handled] if full else []
has_orphans = bool(orphaned) and not no_orphans
has_strays = bool(strays)
has_migrations = bool(migrations)
has_missing = bool(missing)
has_refresh = bool(to_refresh)
if not has_orphans and not has_migrations and not has_missing and not has_refresh:
if (
not has_orphans
and not has_strays
and not has_migrations
and not has_missing
and not has_refresh
):
print_success("Nothing to apply - reality matches config")
return
@@ -195,6 +301,14 @@ def apply( # noqa: PLR0912 (multi-phase reconciliation needs these branches)
console.print(f"[yellow]Orphaned stacks to stop ({len(orphaned)}):[/]")
for svc, hosts in orphaned.items():
console.print(f" [cyan]{svc}[/] on [magenta]{format_host(hosts)}[/]")
if has_strays:
console.print(f"[red]Stray stacks to stop ({len(strays)}):[/]")
for stack, hosts in strays.items():
configured = cfg.get_hosts(stack)
console.print(
f" [cyan]{stack}[/] on [magenta]{', '.join(hosts)}[/] "
f"[dim](should be on {', '.join(configured)})[/]"
)
if has_migrations:
console.print(f"[cyan]Stacks to migrate ({len(migrations)}):[/]")
for stack in migrations:
@@ -223,21 +337,26 @@ def apply( # noqa: PLR0912 (multi-phase reconciliation needs these branches)
console.print("[yellow]Stopping orphaned stacks...[/]")
all_results.extend(run_async(stop_orphaned_stacks(cfg)))
# 2. Migrate stacks on wrong host
# 2. Stop stray stacks (running on unauthorized hosts)
if has_strays:
console.print("[red]Stopping stray stacks...[/]")
all_results.extend(run_async(stop_stray_stacks(cfg, strays)))
# 3. Migrate stacks on wrong host
if has_migrations:
console.print("[cyan]Migrating stacks...[/]")
migrate_results = run_async(up_stacks(cfg, migrations, raw=True))
all_results.extend(migrate_results)
maybe_regenerate_traefik(cfg, migrate_results)
# 3. Start missing stacks (reuse up_stacks which handles state updates)
# 4. Start missing stacks (reuse up_stacks which handles state updates)
if has_missing:
console.print("[green]Starting missing stacks...[/]")
start_results = run_async(up_stacks(cfg, missing, raw=True))
all_results.extend(start_results)
maybe_regenerate_traefik(cfg, start_results)
# 4. Refresh remaining stacks (--full: run up to apply config changes)
# 5. Refresh remaining stacks (--full: run up to apply config changes)
if has_refresh:
console.print("[blue]Refreshing stacks...[/]")
refresh_results = run_async(up_stacks(cfg, to_refresh, raw=True))
@@ -247,5 +366,62 @@ def apply( # noqa: PLR0912 (multi-phase reconciliation needs these branches)
report_results(all_results)
@app.command(
rich_help_panel="Lifecycle",
context_settings={"allow_interspersed_args": False},
)
def compose(
stack: Annotated[str, typer.Argument(help="Stack to operate on (use '.' for current dir)")],
command: Annotated[str, typer.Argument(help="Docker compose command")],
args: Annotated[list[str] | None, typer.Argument(help="Additional arguments")] = None,
host: HostOption = None,
config: ConfigOption = None,
) -> None:
"""Run any docker compose command on a stack.
Passthrough to docker compose for commands not wrapped by cf.
Options after COMMAND are passed to docker compose, not cf.
Examples:
cf compose mystack --help - show docker compose help
cf compose mystack top - view running processes
cf compose mystack images - list images
cf compose mystack exec web bash - interactive shell
cf compose mystack config - view parsed config
"""
cfg = load_config_or_exit(config)
# Resolve "." to current directory name
resolved_stack = Path.cwd().name if stack == "." else stack
validate_stacks(cfg, [resolved_stack])
# Handle multi-host stacks
hosts = cfg.get_hosts(resolved_stack)
if len(hosts) > 1:
if host is None:
print_error(
f"Stack [cyan]{resolved_stack}[/] runs on multiple hosts: {', '.join(hosts)}\n"
f"Use [bold]--host[/] to specify which host"
)
raise typer.Exit(1)
validate_host_for_stack(cfg, resolved_stack, host)
target_host = host
else:
target_host = hosts[0]
# Build the full compose command
full_cmd = command
if args:
full_cmd += " " + " ".join(args)
# Run with raw=True for proper TTY handling (progress bars, interactive)
result = run_async(run_compose_on_host(cfg, resolved_stack, target_host, full_cmd, raw=True))
print() # Ensure newline after raw output
if not result.success:
raise typer.Exit(result.exit_code)
# Alias: cf a = cf apply
app.command("a", hidden=True)(apply)

View File

@@ -37,22 +37,23 @@ from compose_farm.console import (
)
from compose_farm.executor import (
CommandResult,
get_running_stacks_on_host,
is_local,
run_command,
)
from compose_farm.logs import (
DEFAULT_LOG_PATH,
SnapshotEntry,
collect_stack_entries,
collect_stacks_entries_on_host,
isoformat,
load_existing_entries,
merge_entries,
write_toml,
)
from compose_farm.operations import (
build_discovery_results,
check_host_compatibility,
check_stack_requirements,
discover_stack_host,
)
from compose_farm.state import get_orphaned_stacks, load_state, save_state
from compose_farm.traefik import generate_traefik_config, render_traefik_config
@@ -60,38 +61,39 @@ from compose_farm.traefik import generate_traefik_config, render_traefik_config
# --- Sync helpers ---
def _discover_stacks(cfg: Config, stacks: list[str] | None = None) -> dict[str, str | list[str]]:
"""Discover running stacks with a progress bar."""
stack_list = stacks if stacks is not None else list(cfg.stacks)
results = run_parallel_with_progress(
"Discovering",
stack_list,
lambda s: discover_stack_host(cfg, s),
)
return {svc: host for svc, host in results if host is not None}
def _snapshot_stacks(
cfg: Config,
stacks: list[str],
discovered: dict[str, str | list[str]],
log_path: Path | None,
) -> Path:
"""Capture image digests with a progress bar."""
"""Capture image digests using batched SSH calls (1 per host).
Args:
cfg: Configuration
discovered: Dict mapping stack -> host(s) where it's running
log_path: Optional path to write the log file
Returns:
Path to the written log file.
"""
effective_log_path = log_path or DEFAULT_LOG_PATH
now_dt = datetime.now(UTC)
now_iso = isoformat(now_dt)
async def collect_stack(stack: str) -> tuple[str, list[SnapshotEntry]]:
try:
return stack, await collect_stack_entries(cfg, stack, now=now_dt)
except RuntimeError:
return stack, []
# Group stacks by host for batched SSH calls
stacks_by_host: dict[str, set[str]] = {}
for stack, hosts in discovered.items():
# Use first host for multi-host stacks (they use the same images)
host = hosts[0] if isinstance(hosts, list) else hosts
stacks_by_host.setdefault(host, set()).add(stack)
results = run_parallel_with_progress(
"Capturing",
stacks,
collect_stack,
)
# Collect entries with 1 SSH call per host (with progress bar)
async def collect_on_host(host: str) -> tuple[str, list[SnapshotEntry]]:
entries = await collect_stacks_entries_on_host(cfg, host, stacks_by_host[host], now=now_dt)
return host, entries
results = run_parallel_with_progress("Capturing", list(stacks_by_host.keys()), collect_on_host)
snapshot_entries = [entry for _, entries in results for entry in entries]
if not snapshot_entries:
@@ -147,6 +149,61 @@ def _report_sync_changes(
console.print(f" [red]-[/] [cyan]{stack}[/] (was on [magenta]{host_str}[/])")
def _discover_stacks_full(
cfg: Config,
stacks: list[str] | None = None,
) -> tuple[dict[str, str | list[str]], dict[str, list[str]], dict[str, list[str]]]:
"""Discover running stacks with full host scanning for stray detection.
Queries each host once for all running stacks (with progress bar),
then delegates to build_discovery_results for categorization.
"""
all_hosts = list(cfg.hosts.keys())
# Query each host for running stacks (with progress bar)
async def get_stacks_on_host(host: str) -> tuple[str, set[str]]:
running = await get_running_stacks_on_host(cfg, host)
return host, running
host_results = run_parallel_with_progress("Discovering", all_hosts, get_stacks_on_host)
running_on_host: dict[str, set[str]] = dict(host_results)
return build_discovery_results(cfg, running_on_host, stacks)
def _report_stray_stacks(
strays: dict[str, list[str]],
cfg: Config,
) -> None:
"""Report stacks running on unauthorized hosts."""
if strays:
console.print(f"\n[red]Stray stacks[/] (running on wrong host, {len(strays)}):")
console.print("[dim]Run [bold]cf apply[/bold] to stop them.[/]")
for stack in sorted(strays):
stray_hosts = strays[stack]
configured = cfg.get_hosts(stack)
console.print(
f" [red]![/] [cyan]{stack}[/] on [magenta]{', '.join(stray_hosts)}[/] "
f"[dim](should be on {', '.join(configured)})[/]"
)
def _report_duplicate_stacks(duplicates: dict[str, list[str]], cfg: Config) -> None:
"""Report single-host stacks running on multiple hosts."""
if duplicates:
console.print(
f"\n[yellow]Duplicate stacks[/] (running on multiple hosts, {len(duplicates)}):"
)
console.print("[dim]Run [bold]cf apply[/bold] to stop extras.[/]")
for stack in sorted(duplicates):
hosts = duplicates[stack]
configured = cfg.get_hosts(stack)[0]
console.print(
f" [yellow]![/] [cyan]{stack}[/] on [magenta]{', '.join(hosts)}[/] "
f"[dim](should only be on {configured})[/]"
)
# --- Check helpers ---
@@ -440,7 +497,7 @@ def refresh(
current_state = load_state(cfg)
discovered = _discover_stacks(cfg, stack_list)
discovered, strays, duplicates = _discover_stacks_full(cfg, stack_list)
# Calculate changes (only for the stacks we're refreshing)
added = [s for s in discovered if s not in current_state]
@@ -463,6 +520,9 @@ def refresh(
else:
print_success("State is already in sync.")
_report_stray_stacks(strays, cfg)
_report_duplicate_stacks(duplicates, cfg)
if dry_run:
console.print(f"\n{MSG_DRY_RUN}")
return
@@ -475,10 +535,10 @@ def refresh(
save_state(cfg, new_state)
print_success(f"State updated: {len(new_state)} stacks tracked.")
# Capture image digests for running stacks
# Capture image digests for running stacks (1 SSH call per host)
if discovered:
try:
path = _snapshot_stacks(cfg, list(discovered.keys()), log_path)
path = _snapshot_stacks(cfg, discovered, log_path)
print_success(f"Digests written to {path}")
except RuntimeError as exc:
print_warning(str(exc))

View File

@@ -14,6 +14,7 @@ from compose_farm.cli.common import (
AllOption,
ConfigOption,
HostOption,
ServiceOption,
StacksArg,
get_stacks,
load_config_or_exit,
@@ -21,7 +22,7 @@ from compose_farm.cli.common import (
run_async,
run_parallel_with_progress,
)
from compose_farm.console import console
from compose_farm.console import console, print_error
from compose_farm.executor import run_command, run_on_stacks
from compose_farm.state import get_stacks_needing_migration, group_stacks_by_host, load_state
@@ -118,6 +119,7 @@ def logs(
stacks: StacksArg = None,
all_stacks: AllOption = False,
host: HostOption = None,
service: ServiceOption = None,
follow: Annotated[bool, typer.Option("--follow", "-f", help="Follow logs")] = False,
tail: Annotated[
int | None,
@@ -125,8 +127,11 @@ def logs(
] = None,
config: ConfigOption = None,
) -> None:
"""Show stack logs."""
"""Show stack logs. With --service, shows logs for just that service."""
stack_list, cfg = get_stacks(stacks or [], all_stacks, config, host=host)
if service and len(stack_list) != 1:
print_error("--service requires exactly one stack")
raise typer.Exit(1)
# Default to fewer lines when showing multiple stacks
many_stacks = all_stacks or host is not None or len(stack_list) > 1
@@ -134,6 +139,8 @@ def logs(
cmd = f"logs --tail {effective_tail}"
if follow:
cmd += " -f"
if service:
cmd += f" {service}"
results = run_async(run_on_stacks(cfg, stack_list, cmd))
report_results(results)
@@ -143,6 +150,7 @@ def ps(
stacks: StacksArg = None,
all_stacks: AllOption = False,
host: HostOption = None,
service: ServiceOption = None,
config: ConfigOption = None,
) -> None:
"""Show status of stacks.
@@ -150,9 +158,14 @@ def ps(
Without arguments: shows all stacks (same as --all).
With stack names: shows only those stacks.
With --host: shows stacks on that host.
With --service: filters to a specific service within the stack.
"""
stack_list, cfg = get_stacks(stacks or [], all_stacks, config, host=host, default_all=True)
results = run_async(run_on_stacks(cfg, stack_list, "ps"))
if service and len(stack_list) != 1:
print_error("--service requires exactly one stack")
raise typer.Exit(1)
cmd = f"ps {service}" if service else "ps"
results = run_async(run_on_stacks(cfg, stack_list, cmd))
report_results(results)

View File

@@ -336,3 +336,18 @@ def get_ports_for_service(
if isinstance(ref_def, dict):
return _parse_ports(ref_def.get("ports"), env)
return _parse_ports(definition.get("ports"), env)
def get_container_name(
service_name: str,
service_def: dict[str, Any] | None,
project_name: str,
) -> str:
"""Get the container name for a service.
Uses container_name from compose if set, otherwise defaults to {project}-{service}-1.
This matches Docker Compose's default naming convention.
"""
if isinstance(service_def, dict) and service_def.get("container_name"):
return str(service_def["container_name"])
return f"{project_name}-{service_name}-1"

View File

@@ -4,6 +4,7 @@ from __future__ import annotations
import getpass
from pathlib import Path
from typing import Any
import yaml
from pydantic import BaseModel, Field, model_validator
@@ -14,7 +15,7 @@ from .paths import config_search_paths, find_config_path
COMPOSE_FILENAMES = ("compose.yaml", "compose.yml", "docker-compose.yml", "docker-compose.yaml")
class Host(BaseModel):
class Host(BaseModel, extra="forbid"):
"""SSH host configuration."""
address: str
@@ -22,7 +23,7 @@ class Host(BaseModel):
port: int = 22
class Config(BaseModel):
class Config(BaseModel, extra="forbid"):
"""Main configuration."""
compose_dir: Path = Path("/opt/compose")
@@ -113,7 +114,7 @@ class Config(BaseModel):
return found
def _parse_hosts(raw_hosts: dict[str, str | dict[str, str | int]]) -> dict[str, Host]:
def _parse_hosts(raw_hosts: dict[str, Any]) -> dict[str, Host]:
"""Parse hosts from config, handling both simple and full forms."""
hosts = {}
for name, value in raw_hosts.items():
@@ -122,11 +123,7 @@ def _parse_hosts(raw_hosts: dict[str, str | dict[str, str | int]]) -> dict[str,
hosts[name] = Host(address=value)
else:
# Full form: hostname: {address: ..., user: ..., port: ...}
hosts[name] = Host(
address=str(value.get("address", "")),
user=str(value["user"]) if "user" in value else getpass.getuser(),
port=int(value["port"]) if "port" in value else 22,
)
hosts[name] = Host(**value)
return hosts
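
Roughly, with `extra="forbid"` the full host form can be passed straight to the model and unknown keys fail loudly instead of being silently dropped; a small sketch (the `user` default here is a placeholder, the real model's default is not shown in this hunk):

```python
from pydantic import BaseModel, ValidationError


class Host(BaseModel, extra="forbid"):
    address: str
    user: str = "root"  # placeholder default for illustration only
    port: int = 22


print(Host(**{"address": "192.168.1.10", "port": 2222}))  # full form, valid
try:
    Host(**{"address": "192.168.1.10", "prt": 2222})  # typo in "port"
except ValidationError as exc:
    err = exc.errors()[0]
    print(err["loc"], err["msg"])  # ('prt',) Extra inputs are not permitted
```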

View File

@@ -497,6 +497,28 @@ async def check_stack_running(
return result.success and bool(result.stdout.strip())
async def get_running_stacks_on_host(
config: Config,
host_name: str,
) -> set[str]:
"""Get all running compose stacks on a host in a single SSH call.
Uses docker ps with the compose.project label to identify running stacks.
Much more efficient than checking each stack individually.
"""
host = config.hosts[host_name]
# Get unique project names from running containers
command = "docker ps --format '{{.Label \"com.docker.compose.project\"}}' | sort -u"
result = await run_command(host, command, stack=host_name, stream=False, prefix="")
if not result.success:
return set()
# Filter out empty lines and return as set
return {line.strip() for line in result.stdout.splitlines() if line.strip()}
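
As a rough illustration, the stdout of that `docker ps` pipeline is one project name per line, and the parsing collapses it to a set (sample output is made up):

```python
# Hypothetical output of:
#   docker ps --format '{{.Label "com.docker.compose.project"}}' | sort -u
sample_stdout = """\
jellyfin
paperless

traefik
"""

running = {line.strip() for line in sample_stdout.splitlines() if line.strip()}
print(running)  # {'jellyfin', 'paperless', 'traefik'}
```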
async def _batch_check_existence(
config: Config,
host_name: str,

View File

@@ -6,21 +6,22 @@ import json
import tomllib
from dataclasses import dataclass
from datetime import UTC, datetime
from typing import TYPE_CHECKING, Any
from typing import TYPE_CHECKING
from .executor import run_compose
from .executor import run_command
from .paths import xdg_config_home
if TYPE_CHECKING:
from collections.abc import Awaitable, Callable, Iterable
from collections.abc import Iterable
from pathlib import Path
from .config import Config
from .executor import CommandResult
# Separator used to split output sections
_SECTION_SEPARATOR = "---CF-SEP---"
DEFAULT_LOG_PATH = xdg_config_home() / "compose-farm" / "dockerfarm-log.toml"
_DIGEST_HEX_LENGTH = 64
@dataclass(frozen=True)
@@ -56,87 +57,97 @@ def _escape(value: str) -> str:
return value.replace("\\", "\\\\").replace('"', '\\"')
def _parse_images_output(raw: str) -> list[dict[str, Any]]:
"""Parse `docker compose images --format json` output.
Handles both a JSON array and newline-separated JSON objects for robustness.
"""
raw = raw.strip()
if not raw:
return []
def _parse_image_digests(image_json: str) -> dict[str, str]:
"""Parse docker image inspect JSON to build image tag -> digest map."""
if not image_json:
return {}
try:
parsed = json.loads(raw)
image_data = json.loads(image_json)
except json.JSONDecodeError:
objects = []
for line in raw.splitlines():
if not line.strip():
continue
objects.append(json.loads(line))
return objects
return {}
if isinstance(parsed, list):
return parsed
if isinstance(parsed, dict):
return [parsed]
return []
image_digests: dict[str, str] = {}
for img in image_data:
tags = img.get("RepoTags") or []
digests = img.get("RepoDigests") or []
digest = digests[0].split("@")[-1] if digests else img.get("Id", "")
for tag in tags:
image_digests[tag] = digest
if img.get("Id"):
image_digests[img["Id"]] = digest
return image_digests
def _extract_image_fields(record: dict[str, Any]) -> tuple[str, str]:
"""Extract image name and digest with fallbacks."""
image = record.get("Image") or record.get("Repository") or record.get("Name") or ""
tag = record.get("Tag") or record.get("Version")
if tag and ":" not in image.rsplit("/", 1)[-1]:
image = f"{image}:{tag}"
digest = (
record.get("Digest")
or record.get("Image ID")
or record.get("ImageID")
or record.get("ID")
or ""
)
if digest and not digest.startswith("sha256:") and len(digest) == _DIGEST_HEX_LENGTH:
digest = f"sha256:{digest}"
return image, digest
async def collect_stack_entries(
async def collect_stacks_entries_on_host(
config: Config,
stack: str,
host_name: str,
stacks: set[str],
*,
now: datetime,
run_compose_fn: Callable[..., Awaitable[CommandResult]] = run_compose,
) -> list[SnapshotEntry]:
"""Run `docker compose images` for a stack and normalize results."""
result = await run_compose_fn(config, stack, "images --format json", stream=False)
"""Collect image entries for stacks on one host using 2 docker commands.
Uses `docker ps` to get running containers + their compose project labels,
then `docker image inspect` to get digests for all unique images.
Much faster than running N `docker compose images` commands.
"""
if not stacks:
return []
host = config.hosts[host_name]
# Single SSH call with 2 docker commands:
# 1. Get project|image pairs from running containers
# 2. Get image info (including digests) for all unique images
command = (
f"docker ps --format '{{{{.Label \"com.docker.compose.project\"}}}}|{{{{.Image}}}}' && "
f"echo '{_SECTION_SEPARATOR}' && "
"docker image inspect $(docker ps --format '{{.Image}}' | sort -u) 2>/dev/null || true"
)
result = await run_command(host, command, host_name, stream=False, prefix="")
if not result.success:
msg = result.stderr or f"compose images exited with {result.exit_code}"
error = f"[{stack}] Unable to read images: {msg}"
raise RuntimeError(error)
return []
records = _parse_images_output(result.stdout)
# Use first host for snapshots (multi-host stacks use same images on all hosts)
host_name = config.get_hosts(stack)[0]
compose_path = config.get_compose_path(stack)
# Split output into two sections
parts = result.stdout.split(_SECTION_SEPARATOR)
if len(parts) != 2: # noqa: PLR2004
return []
entries: list[SnapshotEntry] = []
for record in records:
image, digest = _extract_image_fields(record)
if not digest:
container_lines, image_json = parts[0].strip(), parts[1].strip()
# Parse project|image pairs, filtering to only stacks we care about
stack_images: dict[str, set[str]] = {}
for line in container_lines.splitlines():
if "|" not in line:
continue
entries.append(
SnapshotEntry(
stack=stack,
host=host_name,
compose_file=compose_path,
image=image,
digest=digest,
captured_at=now,
)
)
project, image = line.split("|", 1)
if project in stacks:
stack_images.setdefault(project, set()).add(image)
if not stack_images:
return []
# Parse image inspect JSON to build image -> digest map
image_digests = _parse_image_digests(image_json)
# Build entries
entries: list[SnapshotEntry] = []
for stack, images in stack_images.items():
for image in images:
digest = image_digests.get(image, "")
if digest:
entries.append(
SnapshotEntry(
stack=stack,
host=host_name,
compose_file=config.get_compose_path(stack),
image=image,
digest=digest,
captured_at=now,
)
)
return entries
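
To see how that combined payload is taken apart, here is a self-contained sketch with an invented stdout for two stacks (project names, images, and digests are all made up):

```python
import json

SEP = "---CF-SEP---"

# Hypothetical combined stdout of the single SSH call above
raw = (
    "plex|lscr.io/linuxserver/plex:latest\n"
    "traefik|traefik:v3.1\n"
    f"{SEP}\n"
    + json.dumps(
        [
            {
                "RepoTags": ["lscr.io/linuxserver/plex:latest"],
                "RepoDigests": ["lscr.io/linuxserver/plex@sha256:aaa"],
                "Id": "sha256:111",
            },
            {
                "RepoTags": ["traefik:v3.1"],
                "RepoDigests": ["traefik@sha256:bbb"],
                "Id": "sha256:222",
            },
        ]
    )
)

containers, image_json = (part.strip() for part in raw.split(SEP))

# project|image pairs -> images per stack
images_by_stack: dict[str, set[str]] = {}
for line in containers.splitlines():
    project, image = line.split("|", 1)
    images_by_stack.setdefault(project, set()).add(image)

# docker image inspect JSON -> tag -> digest
digests: dict[str, str] = {}
for img in json.loads(image_json):
    digest = (img.get("RepoDigests") or [""])[0].split("@")[-1] or img.get("Id", "")
    for tag in img.get("RepoTags") or []:
        digests[tag] = digest

print(images_by_stack)          # {'plex': {...}, 'traefik': {...}}
print(digests["traefik:v3.1"])  # sha256:bbb
```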

View File

@@ -76,29 +76,37 @@ def get_stack_paths(cfg: Config, stack: str) -> list[str]:
return paths
async def discover_stack_host(cfg: Config, stack: str) -> tuple[str, str | list[str] | None]:
"""Discover where a stack is running.
class StackDiscoveryResult(NamedTuple):
"""Result of discovering where a stack is running across all hosts."""
For multi-host stacks, checks all assigned hosts in parallel.
For single-host, checks assigned host first, then others.
stack: str
configured_hosts: list[str] # From config (where it SHOULD run)
running_hosts: list[str] # From reality (where it IS running)
Returns (stack_name, host_or_hosts_or_none).
"""
assigned_hosts = cfg.get_hosts(stack)
@property
def is_multi_host(self) -> bool:
"""Check if this is a multi-host stack."""
return len(self.configured_hosts) > 1
if cfg.is_multi_host(stack):
# Check all assigned hosts in parallel
checks = await asyncio.gather(*[check_stack_running(cfg, stack, h) for h in assigned_hosts])
running = [h for h, is_running in zip(assigned_hosts, checks, strict=True) if is_running]
return stack, running if running else None
@property
def stray_hosts(self) -> list[str]:
"""Hosts where stack is running but shouldn't be."""
return [h for h in self.running_hosts if h not in self.configured_hosts]
# Single-host: check assigned host first, then others
if await check_stack_running(cfg, stack, assigned_hosts[0]):
return stack, assigned_hosts[0]
for host in cfg.hosts:
if host != assigned_hosts[0] and await check_stack_running(cfg, stack, host):
return stack, host
return stack, None
@property
def missing_hosts(self) -> list[str]:
"""Hosts where stack should be running but isn't."""
return [h for h in self.configured_hosts if h not in self.running_hosts]
@property
def is_stray(self) -> bool:
"""Stack is running on unauthorized host(s)."""
return len(self.stray_hosts) > 0
@property
def is_duplicate(self) -> bool:
"""Single-host stack running on multiple hosts."""
return not self.is_multi_host and len(self.running_hosts) > 1
async def check_stack_requirements(
@@ -359,26 +367,33 @@ async def check_host_compatibility(
return results
async def stop_orphaned_stacks(cfg: Config) -> list[CommandResult]:
"""Stop orphaned stacks (in state but not in config).
async def _stop_stacks_on_hosts(
cfg: Config,
stacks_to_hosts: dict[str, list[str]],
label: str = "",
) -> list[CommandResult]:
"""Stop stacks on specific hosts.
Runs docker compose down on each stack on its tracked host(s).
Only removes from state on successful stop.
Shared helper for stop_orphaned_stacks and stop_stray_stacks.
Args:
cfg: Config object.
stacks_to_hosts: Dict mapping stack name to list of hosts to stop on.
label: Optional label for success message (e.g., "stray", "orphaned").
Returns:
List of CommandResults for each stack@host.
Returns list of CommandResults for each stack@host.
"""
orphaned = get_orphaned_stacks(cfg)
if not orphaned:
if not stacks_to_hosts:
return []
results: list[CommandResult] = []
tasks: list[tuple[str, str, asyncio.Task[CommandResult]]] = []
suffix = f" ({label})" if label else ""
# Build list of (stack, host, task) for all orphaned stacks
for stack, hosts in orphaned.items():
host_list = hosts if isinstance(hosts, list) else [hosts]
for host in host_list:
# Skip hosts no longer in config
for stack, hosts in stacks_to_hosts.items():
for host in hosts:
if host not in cfg.hosts:
print_warning(f"{stack}@{host}: host no longer in config, skipping")
results.append(
@@ -393,30 +408,48 @@ async def stop_orphaned_stacks(cfg: Config) -> list[CommandResult]:
coro = run_compose_on_host(cfg, stack, host, "down")
tasks.append((stack, host, asyncio.create_task(coro)))
# Run all down commands in parallel
if tasks:
for stack, host, task in tasks:
try:
result = await task
results.append(result)
if result.success:
print_success(f"{stack}@{host}: stopped")
else:
print_error(f"{stack}@{host}: {result.stderr or 'failed'}")
except Exception as e:
print_error(f"{stack}@{host}: {e}")
results.append(
CommandResult(
stack=f"{stack}@{host}",
exit_code=1,
success=False,
stderr=str(e),
)
for stack, host, task in tasks:
try:
result = await task
results.append(result)
if result.success:
print_success(f"{stack}@{host}: stopped{suffix}")
else:
print_error(f"{stack}@{host}: {result.stderr or 'failed'}")
except Exception as e:
print_error(f"{stack}@{host}: {e}")
results.append(
CommandResult(
stack=f"{stack}@{host}",
exit_code=1,
success=False,
stderr=str(e),
)
)
return results
async def stop_orphaned_stacks(cfg: Config) -> list[CommandResult]:
"""Stop orphaned stacks (in state but not in config).
Runs docker compose down on each stack on its tracked host(s).
Only removes from state on successful stop.
Returns list of CommandResults for each stack@host.
"""
orphaned = get_orphaned_stacks(cfg)
if not orphaned:
return []
normalized: dict[str, list[str]] = {
stack: (hosts if isinstance(hosts, list) else [hosts]) for stack, hosts in orphaned.items()
}
results = await _stop_stacks_on_hosts(cfg, normalized)
# Remove from state only for stacks where ALL hosts succeeded
for stack, hosts in orphaned.items():
host_list = hosts if isinstance(hosts, list) else [hosts]
for stack in normalized:
all_succeeded = all(
r.success for r in results if r.stack.startswith(f"{stack}@") or r.stack == stack
)
@@ -424,3 +457,77 @@ async def stop_orphaned_stacks(cfg: Config) -> list[CommandResult]:
remove_stack(cfg, stack)
return results
async def stop_stray_stacks(
cfg: Config,
strays: dict[str, list[str]],
) -> list[CommandResult]:
"""Stop stacks running on unauthorized hosts.
Args:
cfg: Config object.
strays: Dict mapping stack name to list of stray hosts.
Returns:
List of CommandResults for each stack@host stopped.
"""
return await _stop_stacks_on_hosts(cfg, strays, label="stray")
def build_discovery_results(
cfg: Config,
running_on_host: dict[str, set[str]],
stacks: list[str] | None = None,
) -> tuple[dict[str, str | list[str]], dict[str, list[str]], dict[str, list[str]]]:
"""Build discovery results from per-host running stacks.
Takes the raw data of which stacks are running on which hosts and
categorizes them into discovered (running correctly), strays (wrong host),
and duplicates (single-host stack on multiple hosts).
Args:
cfg: Config object.
running_on_host: Dict mapping host -> set of running stack names.
stacks: Optional list of stacks to check. Defaults to all configured stacks.
Returns:
Tuple of (discovered, strays, duplicates):
- discovered: stack -> host(s) where running correctly
- strays: stack -> list of unauthorized hosts
- duplicates: stack -> list of all hosts (for single-host stacks on multiple)
"""
stack_list = stacks if stacks is not None else list(cfg.stacks)
all_hosts = list(running_on_host.keys())
# Build StackDiscoveryResult for each stack
results: list[StackDiscoveryResult] = [
StackDiscoveryResult(
stack=stack,
configured_hosts=cfg.get_hosts(stack),
running_hosts=[h for h in all_hosts if stack in running_on_host[h]],
)
for stack in stack_list
]
discovered: dict[str, str | list[str]] = {}
strays: dict[str, list[str]] = {}
duplicates: dict[str, list[str]] = {}
for result in results:
correct_hosts = [h for h in result.running_hosts if h in result.configured_hosts]
if correct_hosts:
if result.is_multi_host:
discovered[result.stack] = correct_hosts
else:
discovered[result.stack] = correct_hosts[0]
if result.is_stray:
strays[result.stack] = result.stray_hosts
if result.is_duplicate:
duplicates[result.stack] = result.running_hosts
return discovered, strays, duplicates
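
To make the categorization concrete, here is a self-contained sketch of the same bookkeeping with hard-coded hosts and stacks (all names invented):

```python
configured = {           # stack -> hosts it SHOULD run on
    "plex": ["nas"],
    "whoami": ["nas", "vm"],
    "pihole": ["vm"],
}
running_on_host = {      # host -> stacks actually running (e.g. from docker ps)
    "nas": {"plex", "whoami", "pihole"},   # pihole is a stray here
    "vm": {"whoami"},
}

discovered, strays, duplicates = {}, {}, {}
for stack, wanted in configured.items():
    running = [h for h, s in running_on_host.items() if stack in s]
    correct = [h for h in running if h in wanted]
    if correct:
        discovered[stack] = correct if len(wanted) > 1 else correct[0]
    stray = [h for h in running if h not in wanted]
    if stray:
        strays[stack] = stray
    if len(wanted) == 1 and len(running) > 1:
        duplicates[stack] = running

print(discovered)  # {'plex': 'nas', 'whoami': ['nas', 'vm']}
print(strays)      # {'pihole': ['nas']}
print(duplicates)  # {} -- no single-host stack is running twice here
```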

View File

@@ -11,9 +11,19 @@ def xdg_config_home() -> Path:
return Path(os.environ.get("XDG_CONFIG_HOME", Path.home() / ".config"))
def config_dir() -> Path:
"""Get the compose-farm config directory."""
return xdg_config_home() / "compose-farm"
def default_config_path() -> Path:
"""Get the default user config path."""
return xdg_config_home() / "compose-farm" / "compose-farm.yaml"
return config_dir() / "compose-farm.yaml"
def backup_dir() -> Path:
"""Get the backup directory for file edits."""
return config_dir() / "backups"
def config_search_paths() -> list[Path]:

View File

@@ -8,6 +8,7 @@ use host-published ports for cross-host reachability.
from __future__ import annotations
import re
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any
@@ -383,3 +384,53 @@ def render_traefik_config(dynamic: dict[str, Any]) -> str:
"""Render Traefik dynamic config as YAML with a header comment."""
body = yaml.safe_dump(dynamic, sort_keys=False)
return _TRAEFIK_CONFIG_HEADER + body
_HOST_RULE_PATTERN = re.compile(r"Host\(`([^`]+)`\)")
def extract_website_urls(config: Config, stack: str) -> list[str]:
"""Extract website URLs from Traefik labels in a stack's compose file.
Reuses generate_traefik_config to parse labels, then extracts Host() rules
from router configurations.
Returns a list of unique URLs, preferring HTTPS over HTTP.
"""
try:
dynamic, _ = generate_traefik_config(config, [stack], check_all=True)
except FileNotFoundError:
return []
routers = dynamic.get("http", {}).get("routers", {})
if not routers:
return []
# Track URLs with their scheme preference (https > http)
urls: dict[str, str] = {} # host -> scheme
for router_info in routers.values():
if not isinstance(router_info, dict):
continue
rule = router_info.get("rule", "")
entrypoints = router_info.get("entrypoints", [])
# entrypoints can be a list or string
if isinstance(entrypoints, list):
entrypoints_str = ",".join(entrypoints)
else:
entrypoints_str = str(entrypoints)
# Determine scheme from entrypoint
scheme = "https" if "websecure" in entrypoints_str else "http"
# Extract host(s) from rule
for match in _HOST_RULE_PATTERN.finditer(str(rule)):
host = match.group(1)
# Prefer https over http
if host not in urls or scheme == "https":
urls[host] = scheme
# Build URL list, sorted for consistency
return sorted(f"{scheme}://{host}" for host, scheme in urls.items())
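
A condensed sketch of the Host()-rule extraction and the HTTPS preference, using invented router entries in the same shape that generate_traefik_config produces:

```python
import re

HOST_RULE = re.compile(r"Host\(`([^`]+)`\)")

# Hypothetical routers dict (values are examples only)
routers = {
    "plex": {"rule": "Host(`plex.example.com`)", "entrypoints": ["websecure"]},
    "plex-insecure": {"rule": "Host(`plex.example.com`)", "entrypoints": "web"},
    "wiki": {"rule": "Host(`wiki.example.com`) && PathPrefix(`/docs`)", "entrypoints": ["web"]},
}

urls: dict[str, str] = {}
for info in routers.values():
    eps = info["entrypoints"]
    eps_str = ",".join(eps) if isinstance(eps, list) else str(eps)
    scheme = "https" if "websecure" in eps_str else "http"
    for host in HOST_RULE.findall(str(info["rule"])):
        if host not in urls or scheme == "https":
            urls[host] = scheme

print(sorted(f"{s}://{h}" for h, s in urls.items()))
# ['http://wiki.example.com', 'https://plex.example.com']
```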

View File

@@ -3,6 +3,7 @@
from __future__ import annotations
import asyncio
import logging
import sys
from contextlib import asynccontextmanager, suppress
from typing import TYPE_CHECKING
@@ -10,11 +11,22 @@ from typing import TYPE_CHECKING
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles
from pydantic import ValidationError
from rich.logging import RichHandler
from compose_farm.web.deps import STATIC_DIR, get_config
from compose_farm.web.routes import actions, api, pages
from compose_farm.web.streaming import TASK_TTL_SECONDS, cleanup_stale_tasks
# Configure logging with Rich handler for compose_farm.web modules
logging.basicConfig(
level=logging.INFO,
format="%(message)s",
datefmt="[%X]",
handlers=[RichHandler(rich_tracebacks=True, show_path=False)],
)
# Set our web modules to INFO level (uvicorn handles its own logging)
logging.getLogger("compose_farm.web").setLevel(logging.INFO)
if TYPE_CHECKING:
from collections.abc import AsyncGenerator

View File

@@ -38,7 +38,17 @@ def get_templates() -> Jinja2Templates:
def extract_config_error(exc: Exception) -> str:
"""Extract a user-friendly error message from a config exception."""
if isinstance(exc, ValidationError):
return "; ".join(err.get("msg", str(err)) for err in exc.errors())
parts = []
for err in exc.errors():
msg = err.get("msg", str(err))
loc = err.get("loc", ())
if loc:
# Format location as dot-separated path (e.g., "hosts.nas.port")
loc_str = ".".join(str(part) for part in loc)
parts.append(f"{loc_str}: {msg}")
else:
parts.append(msg)
return "; ".join(parts)
return str(exc)
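
Concretely, the per-field prefix turns a bare Pydantic message into something actionable; a sketch with hand-written error dicts in the shape ValidationError.errors() returns (field names invented):

```python
errors = [
    {"loc": ("hosts", "nas", "prt"), "msg": "Extra inputs are not permitted"},
    {"loc": (), "msg": "Invalid YAML"},
]

parts = []
for err in errors:
    loc = ".".join(str(p) for p in err.get("loc", ()))
    parts.append(f"{loc}: {err['msg']}" if loc else err["msg"])

print("; ".join(parts))
# hosts.nas.prt: Extra inputs are not permitted; Invalid YAML
```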

View File

@@ -33,12 +33,15 @@ def _start_task(coro_factory: Callable[[str], Coroutine[Any, Any, None]]) -> str
# Allowed stack commands
ALLOWED_COMMANDS = {"up", "down", "restart", "pull", "update", "logs"}
ALLOWED_COMMANDS = {"up", "down", "restart", "pull", "update", "logs", "stop"}
# Allowed service-level commands (no 'down' - use 'stop' for individual services)
ALLOWED_SERVICE_COMMANDS = {"logs", "pull", "restart", "up", "stop"}
@router.post("/stack/{name}/{command}")
async def stack_action(name: str, command: str) -> dict[str, Any]:
"""Run a compose command for a stack (up, down, restart, pull, update, logs)."""
"""Run a compose command for a stack (up, down, restart, pull, update, logs, stop)."""
if command not in ALLOWED_COMMANDS:
raise HTTPException(status_code=404, detail=f"Unknown command '{command}'")
@@ -50,6 +53,23 @@ async def stack_action(name: str, command: str) -> dict[str, Any]:
return {"task_id": task_id, "stack": name, "command": command}
@router.post("/stack/{name}/service/{service}/{command}")
async def service_action(name: str, service: str, command: str) -> dict[str, Any]:
"""Run a compose command for a specific service within a stack."""
if command not in ALLOWED_SERVICE_COMMANDS:
raise HTTPException(status_code=404, detail=f"Unknown command '{command}'")
config = get_config()
if name not in config.stacks:
raise HTTPException(status_code=404, detail=f"Stack '{name}' not found")
# Use --service flag to target specific service
task_id = _start_task(
lambda tid: run_compose_streaming(config, name, f"{command} --service {service}", tid)
)
return {"task_id": task_id, "stack": name, "service": service, "command": command}
@router.post("/apply")
async def apply_all() -> dict[str, Any]:
"""Run cf apply to reconcile all stacks."""
@@ -64,3 +84,19 @@ async def refresh_state() -> dict[str, Any]:
config = get_config()
task_id = _start_task(lambda tid: run_cli_streaming(config, ["refresh"], tid))
return {"task_id": task_id, "command": "refresh"}
@router.post("/pull-all")
async def pull_all() -> dict[str, Any]:
"""Pull latest images for all stacks."""
config = get_config()
task_id = _start_task(lambda tid: run_cli_streaming(config, ["pull", "--all"], tid))
return {"task_id": task_id, "command": "pull --all"}
@router.post("/update-all")
async def update_all() -> dict[str, Any]:
"""Update all stacks (pull + build + down + up)."""
config = get_config()
task_id = _start_task(lambda tid: run_cli_streaming(config, ["update", "--all"], tid))
return {"task_id": task_id, "command": "update --all"}

View File

@@ -5,6 +5,7 @@ from __future__ import annotations
import asyncio
import contextlib
import json
import logging
import shlex
from datetime import UTC, datetime
from pathlib import Path
@@ -18,11 +19,14 @@ import yaml
from fastapi import APIRouter, Body, HTTPException, Query
from fastapi.responses import HTMLResponse
from compose_farm.compose import get_container_name
from compose_farm.executor import is_local, run_compose_on_host, ssh_connect_kwargs
from compose_farm.paths import find_config_path
from compose_farm.paths import backup_dir, find_config_path
from compose_farm.state import load_state
from compose_farm.web.deps import get_config, get_templates
logger = logging.getLogger(__name__)
router = APIRouter(tags=["api"])
@@ -37,26 +41,30 @@ def _validate_yaml(content: str) -> None:
def _backup_file(file_path: Path) -> Path | None:
"""Create a timestamped backup of a file if it exists and content differs.
Backups are stored in a .backups directory alongside the file.
Backups are stored in XDG config dir under compose-farm/backups/.
The original file's absolute path is mirrored in the backup directory.
Returns the backup path if created, None if no backup was needed.
"""
if not file_path.exists():
return None
# Create backup directory
backup_dir = file_path.parent / ".backups"
backup_dir.mkdir(exist_ok=True)
# Create backup directory mirroring original path structure
# e.g., /opt/stacks/plex/compose.yaml -> ~/.config/compose-farm/backups/opt/stacks/plex/
# On Windows: C:\Users\foo\stacks -> backups/Users/foo/stacks
resolved = file_path.resolve()
file_backup_dir = backup_dir() / resolved.parent.relative_to(resolved.anchor)
file_backup_dir.mkdir(parents=True, exist_ok=True)
# Generate timestamped backup filename
timestamp = datetime.now(tz=UTC).strftime("%Y%m%d_%H%M%S")
backup_name = f"{file_path.name}.{timestamp}"
backup_path = backup_dir / backup_name
backup_path = file_backup_dir / backup_name
# Copy current content to backup
backup_path.write_text(file_path.read_text())
# Clean up old backups (keep last 200)
backups = sorted(backup_dir.glob(f"{file_path.name}.*"), reverse=True)
backups = sorted(file_backup_dir.glob(f"{file_path.name}.*"), reverse=True)
for old_backup in backups[200:]:
old_backup.unlink()
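
A minimal sketch of the path mirroring, assuming a Linux-style absolute path (both paths are examples, not the actual config):

```python
from pathlib import Path

# Mirror an absolute file path under a backups root, as _backup_file does
# with the XDG config dir.
backups_root = Path.home() / ".config" / "compose-farm" / "backups"
target = Path("/opt/stacks/plex/compose.yaml")

resolved = target.resolve()
mirrored_dir = backups_root / resolved.parent.relative_to(resolved.anchor)
print(mirrored_dir)
# e.g. /home/you/.config/compose-farm/backups/opt/stacks/plex
```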
@@ -113,14 +121,9 @@ def _get_compose_services(config: Any, stack: str, hosts: list[str]) -> list[dic
containers = []
for host in hosts:
for svc_name, svc_def in raw_services.items():
# Use container_name if set, otherwise default to {project}-{service}-1
if isinstance(svc_def, dict) and svc_def.get("container_name"):
container_name = svc_def["container_name"]
else:
container_name = f"{project_name}-{svc_name}-1"
containers.append(
{
"Name": container_name,
"Name": get_container_name(svc_name, svc_def, project_name),
"Service": svc_name,
"Host": host,
"State": "unknown", # Status requires Docker query
@@ -144,6 +147,12 @@ async def _get_container_states(
config, stack, host_name, "ps -a --format json", stream=False
)
if not result.success:
logger.warning(
"Failed to get container states for %s on %s: %s",
stack,
host_name,
result.stderr or result.stdout,
)
return containers
# Build state map: name -> (state, exit_code)
@@ -350,6 +359,7 @@ async def read_console_file(
except PermissionError:
raise HTTPException(status_code=403, detail=f"Permission denied: {path}") from None
except Exception as e:
logger.exception("Failed to read file %s from host %s", path, host)
raise HTTPException(status_code=500, detail=str(e)) from e
@@ -373,4 +383,5 @@ async def write_console_file(
except PermissionError:
raise HTTPException(status_code=403, detail=f"Permission denied: {path}") from None
except Exception as e:
logger.exception("Failed to write file %s to host %s", path, host)
raise HTTPException(status_code=500, detail=str(e)) from e

View File

@@ -7,6 +7,7 @@ from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse
from pydantic import ValidationError
from compose_farm.compose import get_container_name
from compose_farm.paths import find_config_path
from compose_farm.state import (
get_orphaned_stacks,
@@ -16,6 +17,7 @@ from compose_farm.state import (
group_running_stacks_by_host,
load_state,
)
from compose_farm.traefik import extract_website_urls
from compose_farm.web.deps import (
extract_config_error,
get_config,
@@ -89,8 +91,8 @@ async def index(request: Request) -> HTMLResponse:
# Get state
deployed = load_state(config)
# Stats
running_count = len(deployed)
# Stats (only count stacks that are both in config AND deployed)
running_count = sum(1 for stack in deployed if stack in config.stacks)
stopped_count = len(config.stacks) - running_count
# Pending operations
@@ -159,6 +161,29 @@ async def stack_detail(request: Request, name: str) -> HTMLResponse:
# Get state
current_host = get_stack_host(config, name)
# Get service names and container info from compose file
services: list[str] = []
containers: dict[str, dict[str, str]] = {}
shell_host = current_host[0] if isinstance(current_host, list) else current_host
if compose_content:
compose_data = yaml.safe_load(compose_content) or {}
raw_services = compose_data.get("services", {})
if isinstance(raw_services, dict):
services = list(raw_services.keys())
# Build container info for shell access (only if stack is running)
if shell_host:
project_name = compose_path.parent.name if compose_path else name
containers = {
svc: {
"container": get_container_name(svc, svc_def, project_name),
"host": shell_host,
}
for svc, svc_def in raw_services.items()
}
# Extract website URLs from Traefik labels
website_urls = extract_website_urls(config, name)
return templates.TemplateResponse(
"stack.html",
{
@@ -170,6 +195,9 @@ async def stack_detail(request: Request, name: str) -> HTMLResponse:
"compose_path": str(compose_path) if compose_path else None,
"env_content": env_content,
"env_path": str(env_path) if env_path else None,
"services": services,
"containers": containers,
"website_urls": website_urls,
},
)
@@ -222,7 +250,8 @@ async def stats_partial(request: Request) -> HTMLResponse:
templates = get_templates()
deployed = load_state(config)
running_count = len(deployed)
# Only count stacks that are both in config AND deployed
running_count = sum(1 for stack in deployed if stack in config.stacks)
stopped_count = len(config.stacks) - running_count
return templates.TemplateResponse(

View File

@@ -1,3 +1,9 @@
/* Tooltips - ensure they appear above sidebar and other elements */
.tooltip::before,
.tooltip::after {
z-index: 1000;
}
/* Sidebar inputs - remove focus outline (DaisyUI 5 uses outline + outline-offset) */
#sidebar .input:focus,
#sidebar .input:focus-within,

View File

@@ -57,6 +57,10 @@ const LANGUAGE_MAP = {
'env': 'plaintext'
};
// Detect Mac for keyboard shortcut display
const IS_MAC = navigator.platform.toUpperCase().indexOf('MAC') >= 0;
const MOD_KEY = IS_MAC ? '⌘' : 'Ctrl';
// ============================================================================
// STATE
// ============================================================================
@@ -219,7 +223,9 @@ function initExecTerminal(stack, container, host) {
return;
}
// Unhide the terminal container first, then expand/scroll
containerEl.classList.remove('hidden');
expandCollapse(document.getElementById('exec-collapse'), containerEl);
// Clean up existing (use wrapper's dispose to clean up ResizeObserver)
if (execWs) { execWs.close(); execWs = null; }
@@ -255,17 +261,42 @@ function initExecTerminal(stack, container, host) {
window.initExecTerminal = initExecTerminal;
/**
* Expand a collapse component and scroll to a target element
* @param {HTMLInputElement} toggle - The checkbox input that controls the collapse
* @param {HTMLElement} [scrollTarget] - Element to scroll to (defaults to collapse container)
*/
function expandCollapse(toggle, scrollTarget = null) {
if (!toggle) return;
// Find the parent collapse container
const collapse = toggle.closest('.collapse');
if (!collapse) return;
const target = scrollTarget || collapse;
const scrollToTarget = () => {
target.scrollIntoView({ behavior: 'smooth', block: 'start' });
};
if (!toggle.checked) {
// Collapsed - expand first, then scroll after transition
const onTransitionEnd = () => {
collapse.removeEventListener('transitionend', onTransitionEnd);
scrollToTarget();
};
collapse.addEventListener('transitionend', onTransitionEnd);
toggle.checked = true;
} else {
// Already expanded - just scroll
scrollToTarget();
}
}
/**
* Expand terminal collapse and scroll to it
*/
function expandTerminal() {
const toggle = document.getElementById('terminal-toggle');
if (toggle) toggle.checked = true;
const collapse = document.getElementById('terminal-collapse');
if (collapse) {
collapse.scrollIntoView({ behavior: 'smooth', block: 'start' });
}
expandCollapse(document.getElementById('terminal-toggle'));
}
/**
@@ -512,16 +543,23 @@ function playFabIntro() {
const THEMES = ['light', 'dark', 'cupcake', 'bumblebee', 'emerald', 'corporate', 'synthwave', 'retro', 'cyberpunk', 'valentine', 'halloween', 'garden', 'forest', 'aqua', 'lofi', 'pastel', 'fantasy', 'wireframe', 'black', 'luxury', 'dracula', 'cmyk', 'autumn', 'business', 'acid', 'lemonade', 'night', 'coffee', 'winter', 'dim', 'nord', 'sunset', 'caramellatte', 'abyss', 'silk'];
const THEME_KEY = 'cf_theme';
const colors = { stack: '#22c55e', action: '#eab308', nav: '#3b82f6', app: '#a855f7', theme: '#ec4899' };
const colors = { stack: '#22c55e', action: '#eab308', nav: '#3b82f6', app: '#a855f7', theme: '#ec4899', service: '#14b8a6' };
let commands = [];
let filtered = [];
let selected = 0;
let originalTheme = null; // Store theme when palette opens for preview/restore
const post = (url) => () => htmx.ajax('POST', url, {swap: 'none'});
const nav = (url) => () => {
const nav = (url, afterNav) => () => {
// Set hash before HTMX swap so inline scripts can read it
const hashIndex = url.indexOf('#');
if (hashIndex !== -1) {
window.location.hash = url.substring(hashIndex);
}
htmx.ajax('GET', url, {target: '#main-content', select: '#main-content', swap: 'outerHTML'}).then(() => {
history.pushState({}, '', url);
window.scrollTo(0, 0);
afterNav?.();
});
};
// Navigate to dashboard (if needed) and trigger action
@@ -529,6 +567,7 @@ function playFabIntro() {
if (window.location.pathname !== '/') {
await htmx.ajax('GET', '/', {target: '#main-content', select: '#main-content', swap: 'outerHTML'});
history.pushState({}, '', '/');
window.scrollTo(0, 0);
}
htmx.ajax('POST', `/api/${endpoint}`, {swap: 'none'});
};
@@ -564,10 +603,14 @@ function playFabIntro() {
const actions = [
cmd('action', 'Apply', 'Make reality match config', dashboardAction('apply'), icons.check),
cmd('action', 'Refresh', 'Update state from reality', dashboardAction('refresh'), icons.refresh_cw),
cmd('action', 'Pull All', 'Pull latest images for all stacks', dashboardAction('pull-all'), icons.cloud_download),
cmd('action', 'Update All', 'Update all stacks', dashboardAction('update-all'), icons.refresh_cw),
cmd('app', 'Theme', 'Change color theme', openThemePicker, icons.palette),
cmd('app', 'Dashboard', 'Go to dashboard', nav('/'), icons.home),
cmd('app', 'Console', 'Go to console', nav('/console'), icons.terminal),
cmd('app', 'Edit Config', 'Edit compose-farm.yaml', nav('/console#editor'), icons.file_code),
cmd('app', 'Docs', 'Open documentation', openExternal('https://compose-farm.nijho.lt/'), icons.book_open),
cmd('app', 'Repo', 'Open GitHub repository', openExternal('https://github.com/basnijholt/compose-farm'), icons.external_link),
];
// Add stack-specific actions if on a stack page
@@ -583,6 +626,50 @@ function playFabIntro() {
stackCmd('Update', 'Pull + restart', 'update', icons.refresh_cw),
stackCmd('Logs', 'View logs for', 'logs', icons.file_text),
);
// Add Open Website commands if website URLs are available
const websiteUrlsAttr = document.querySelector('[data-website-urls]')?.getAttribute('data-website-urls');
if (websiteUrlsAttr) {
const websiteUrls = JSON.parse(websiteUrlsAttr);
for (const url of websiteUrls) {
const displayUrl = url.replace(/^https?:\/\//, '');
const label = websiteUrls.length > 1 ? `Open: ${displayUrl}` : 'Open Website';
actions.unshift(cmd('stack', label, `Open ${displayUrl} in browser`, openExternal(url), icons.external_link));
}
}
// Add service-specific commands from data-services and data-containers attributes
// Grouped by action (all Logs together, all Pull together, etc.) with services sorted alphabetically
const servicesAttr = document.querySelector('[data-services]')?.getAttribute('data-services');
const containersAttr = document.querySelector('[data-containers]')?.getAttribute('data-containers');
if (servicesAttr) {
const services = servicesAttr.split(',').filter(s => s).sort();
// Parse container info for shell access: {service: {container, host}}
const containers = containersAttr ? JSON.parse(containersAttr) : {};
const svcCmd = (action, service, desc, endpoint, icon) =>
cmd('service', `${action}: ${service}`, desc, post(`/api/stack/${stack}/service/${service}/${endpoint}`), icon);
const svcActions = [
['Logs', 'View logs for service', 'logs', icons.file_text],
['Pull', 'Pull image for service', 'pull', icons.cloud_download],
['Restart', 'Restart service', 'restart', icons.rotate_cw],
['Stop', 'Stop service', 'stop', icons.square],
['Up', 'Start service', 'up', icons.play],
];
for (const [action, desc, endpoint, icon] of svcActions) {
for (const service of services) {
actions.push(svcCmd(action, service, desc, endpoint, icon));
}
}
// Add Shell commands if container info is available
for (const service of services) {
const info = containers[service];
if (info?.container && info?.host) {
actions.push(cmd('service', `Shell: ${service}`, 'Open interactive shell',
() => initExecTerminal(stack, info.container, info.host), icons.terminal));
}
}
}
}
// Add nav commands for all stacks from sidebar
@@ -601,8 +688,21 @@ function playFabIntro() {
}
function filter() {
const q = input.value.toLowerCase();
filtered = commands.filter(c => c.name.toLowerCase().includes(q));
// Fuzzy matching: all query words must match the START of a word in the command name
// Examples: "r ba" matches "Restart: bazarr" but NOT "Logs: bazarr"
const q = input.value.toLowerCase().trim();
// Split query into words and strip non-alphanumeric chars
const queryWords = q.split(/[^a-z0-9]+/).filter(w => w);
filtered = commands.filter(c => {
const name = c.name.toLowerCase();
// Split command name into words (split on non-alphanumeric)
const nameWords = name.split(/[^a-z0-9]+/).filter(w => w);
// Each query word must match the start of some word in the command name
return queryWords.every(qw =>
nameWords.some(nw => nw.startsWith(qw))
);
});
selected = Math.max(0, Math.min(selected, filtered.length - 1));
}
@@ -634,7 +734,7 @@ function playFabIntro() {
input.value = initialFilter;
filter();
// If opening theme picker, select current theme
if (initialFilter === 'theme:') {
if (initialFilter.startsWith('theme:')) {
const currentIdx = filtered.findIndex(c => c.themeId === originalTheme);
if (currentIdx >= 0) selected = currentIdx;
}
@@ -749,12 +849,26 @@ function initKeyboardShortcuts() {
});
}
/**
* Update keyboard shortcut display based on OS
* Replaces ⌘ with Ctrl on non-Mac platforms
*/
function updateShortcutKeys() {
// Update elements with class 'shortcut-key' that contain ⌘
document.querySelectorAll('.shortcut-key').forEach(el => {
if (el.textContent === '⌘') {
el.textContent = MOD_KEY;
}
});
}
/**
* Initialize page components
*/
function initPage() {
initMonacoEditors();
initSaveButton();
updateShortcutKeys();
}
/**

View File

@@ -39,7 +39,7 @@
<span class="font-semibold rainbow-hover">Compose Farm</span>
</header>
<main id="main-content" class="flex-1 p-6 overflow-y-auto">
<main id="main-content" class="flex-1 p-6">
{% block content %}{% endblock %}
</main>
</div>
@@ -51,15 +51,21 @@
<header class="p-4 border-b border-base-300">
<h2 class="text-lg font-semibold flex items-center gap-2">
<span class="rainbow-hover">Compose Farm</span>
<a href="https://compose-farm.nijho.lt/" target="_blank" title="Docs" class="opacity-50 hover:opacity-100 transition-opacity">
{{ book_open() }}
</a>
<a href="https://github.com/basnijholt/compose-farm" target="_blank" title="GitHub" class="opacity-50 hover:opacity-100 transition-opacity">
{{ github() }}
</a>
<button type="button" id="theme-btn" class="opacity-50 hover:opacity-100 transition-opacity cursor-pointer" title="Change theme (opens command palette)">
{{ palette() }}
</button>
<div class="tooltip tooltip-bottom" data-tip="Docs">
<a href="https://compose-farm.nijho.lt/" target="_blank" class="opacity-50 hover:opacity-100 transition-opacity">
{{ book_open() }}
</a>
</div>
<div class="tooltip tooltip-bottom" data-tip="GitHub">
<a href="https://github.com/basnijholt/compose-farm" target="_blank" class="opacity-50 hover:opacity-100 transition-opacity">
{{ github() }}
</a>
</div>
<div class="tooltip tooltip-bottom" data-tip="Change theme">
<button type="button" id="theme-btn" class="opacity-50 hover:opacity-100 transition-opacity cursor-pointer">
{{ palette() }}
</button>
</div>
</h2>
</header>
<nav class="flex-1 overflow-y-auto p-2" hx-get="/partials/sidebar" hx-trigger="load, cf:refresh from:body" hx-swap="innerHTML">

View File

@@ -15,7 +15,7 @@
<option value="{{ name }}">{{ name }}{% if name == local_host %} (local){% endif %}</option>
{% endfor %}
</select>
<button id="console-connect-btn" class="btn btn-sm btn-primary" onclick="connectConsole()">Connect</button>
<div class="tooltip" data-tip="Connect to host via SSH"><button id="console-connect-btn" class="btn btn-sm btn-primary" onclick="connectConsole()">Connect</button></div>
<span id="console-status" class="text-sm opacity-60"></span>
</div>
@@ -29,11 +29,11 @@
<div class="flex items-center justify-between mb-2">
<div class="flex items-center gap-4">
<input type="text" id="console-file-path" class="input input-sm input-bordered w-96" placeholder="Enter file path (e.g., ~/docker-compose.yaml)" value="{{ config_path }}">
<button class="btn btn-sm btn-outline" onclick="loadFile()">Open</button>
<div class="tooltip" data-tip="Load file from host"><button class="btn btn-sm btn-outline" onclick="loadFile()">Open</button></div>
</div>
<div class="flex items-center gap-2">
<span id="editor-status" class="text-sm opacity-60"></span>
<button id="console-save-btn" class="btn btn-sm btn-primary" onclick="saveFile()">{{ save() }} Save</button>
<div class="tooltip" data-tip="Save file to host (⌘/Ctrl+S)"><button id="console-save-btn" class="btn btn-sm btn-primary" onclick="saveFile()">{{ save() }} Save</button></div>
</div>
</div>
<div id="console-editor" class="resize-y overflow-hidden rounded-lg" style="height: 512px; min-height: 200px;"></div>
@@ -97,7 +97,10 @@ function connectConsole() {
consoleWs.onopen = () => {
statusEl.textContent = `Connected to ${host}`;
sendSize(term.cols, term.rows);
term.focus();
// Focus terminal unless #editor hash is present (command palette Edit Config)
if (window.location.hash !== '#editor') {
term.focus();
}
// Auto-load the default file once editor is ready
const pathInput = document.getElementById('console-file-path');
if (pathInput && pathInput.value) {
@@ -133,6 +136,14 @@ function initConsoleEditor() {
loadMonaco(() => {
consoleEditor = createEditor(editorEl, '', 'plaintext', { onSave: saveFile });
// Focus editor if #editor hash is present (command palette Edit Config)
if (window.location.hash === '#editor') {
// Small delay for Monaco to fully initialize before focusing
setTimeout(() => {
consoleEditor.focus();
editorEl.scrollIntoView({ behavior: 'smooth', block: 'center' });
}, 100);
}
});
}

View File

@@ -1,6 +1,6 @@
{% extends "base.html" %}
{% from "partials/components.html" import page_header, collapse, stat_card, table, action_btn %}
{% from "partials/icons.html" import check, refresh_cw, save, settings, server, database %}
{% from "partials/icons.html" import check, refresh_cw, save, settings, server, database, cloud_download, rotate_cw %}
{% block title %}Dashboard - Compose Farm{% endblock %}
{% block content %}
@@ -17,7 +17,9 @@
<div class="flex flex-wrap gap-2 mb-6">
{{ action_btn("Apply", "/api/apply", "primary", "Make reality match config", check()) }}
{{ action_btn("Refresh", "/api/refresh", "outline", "Update state from reality", refresh_cw()) }}
<button id="save-config-btn" class="btn btn-outline">{{ save() }} Save Config</button>
{{ action_btn("Pull All", "/api/pull-all", "outline", "Pull latest images for all stacks", cloud_download()) }}
{{ action_btn("Update All", "/api/update-all", "outline", "Update all stacks (pull + build + down + up)", rotate_cw()) }}
<div class="tooltip" data-tip="Save compose-farm.yaml config file"><button id="save-config-btn" class="btn btn-outline">{{ save() }} Save Config</button></div>
</div>
{% include "partials/terminal.html" %}

View File

@@ -1,4 +1,4 @@
{% from "partials/icons.html" import search, play, square, rotate_cw, cloud_download, refresh_cw, file_text, check, home, terminal, box, palette, book_open %}
{% from "partials/icons.html" import search, play, square, rotate_cw, cloud_download, refresh_cw, file_text, file_code, check, home, terminal, box, palette, book_open, external_link %}
<!-- Icons for command palette (referenced by JS) -->
<template id="cmd-icons">
@@ -14,6 +14,8 @@
<span data-icon="box">{{ box() }}</span>
<span data-icon="palette">{{ palette() }}</span>
<span data-icon="book_open">{{ book_open() }}</span>
<span data-icon="file_code">{{ file_code() }}</span>
<span data-icon="external_link">{{ external_link() }}</span>
</template>
<dialog id="cmd-palette" class="modal">
<div class="modal-box max-w-lg p-0">
@@ -30,8 +32,8 @@
</dialog>
<!-- Floating button to open command palette -->
<button id="cmd-fab" class="fixed bottom-6 right-6 z-50" title="Command Palette (⌘K)">
<button id="cmd-fab" class="fixed bottom-6 right-6 z-50" title="Command Palette (⌘/Ctrl+K)">
<div class="cmd-fab-inner">
<span>⌘ + K</span>
<span class="shortcut-key"></span><span class="shortcut-plus"> + </span><span class="shortcut-key">K</span>
</div>
</button>

View File

@@ -25,12 +25,13 @@
{# Action button with htmx #}
{% macro action_btn(label, url, style="outline", title=None, icon=None) %}
{% if title %}<div class="tooltip" data-tip="{{ title }}">{% endif %}
<button hx-post="{{ url }}"
hx-swap="none"
class="btn btn-{{ style }}"
{% if title %}title="{{ title }}"{% endif %}>
class="btn btn-{{ style }}">
{% if icon %}{{ icon }}{% endif %}{{ label }}
</button>
{% if title %}</div>{% endif %}
{% endmacro %}
{# Stat card for dashboard #}

View File

@@ -1,27 +1,68 @@
{# Container list for a stack on a single host #}
{% from "partials/icons.html" import terminal %}
{% from "partials/icons.html" import terminal, rotate_ccw, scroll_text, square, play, cloud_download %}
{% macro container_row(stack, container, host) %}
<div class="flex items-center gap-2 mb-2">
{% if container.State == "running" %}
<span class="badge badge-success">running</span>
{% elif container.State == "unknown" %}
<span class="badge badge-ghost"><span class="loading loading-spinner loading-xs"></span></span>
{% elif container.State == "exited" %}
{% if container.ExitCode == 0 %}
<span class="badge badge-neutral">exited (0)</span>
<div class="flex flex-col sm:flex-row sm:items-center gap-2 mb-3 p-2 bg-base-200 rounded-lg">
<div class="flex items-center gap-2 min-w-0">
{% if container.State == "running" %}
<span class="badge badge-success">running</span>
{% elif container.State == "unknown" %}
<span class="badge badge-ghost"><span class="loading loading-spinner loading-xs"></span></span>
{% elif container.State == "exited" %}
{% if container.ExitCode == 0 %}
<span class="badge badge-neutral">exited (0)</span>
{% else %}
<span class="badge badge-error">exited ({{ container.ExitCode }})</span>
{% endif %}
{% elif container.State == "created" %}
<span class="badge badge-neutral">created</span>
{% else %}
<span class="badge badge-error">exited ({{ container.ExitCode }})</span>
<span class="badge badge-warning">{{ container.State }}</span>
{% endif %}
{% elif container.State == "created" %}
<span class="badge badge-neutral">created</span>
{% else %}
<span class="badge badge-warning">{{ container.State }}</span>
{% endif %}
<code class="text-sm flex-1">{{ container.Name }}</code>
<button class="btn btn-sm btn-outline"
onclick="initExecTerminal('{{ stack }}', '{{ container.Name }}', '{{ host }}')">
{{ terminal() }} Shell
</button>
<code class="text-sm truncate">{{ container.Name }}</code>
</div>
<div class="join sm:ml-auto shrink-0">
<div class="tooltip tooltip-top" data-tip="View logs">
<button class="btn btn-sm btn-outline join-item"
hx-post="/api/stack/{{ stack }}/service/{{ container.Service }}/logs"
hx-swap="none">
{{ scroll_text() }}
</button>
</div>
<div class="tooltip tooltip-top" data-tip="Restart service">
<button class="btn btn-sm btn-outline join-item"
hx-post="/api/stack/{{ stack }}/service/{{ container.Service }}/restart"
hx-swap="none">
{{ rotate_ccw() }}
</button>
</div>
<div class="tooltip tooltip-top" data-tip="Pull image">
<button class="btn btn-sm btn-outline join-item"
hx-post="/api/stack/{{ stack }}/service/{{ container.Service }}/pull"
hx-swap="none">
{{ cloud_download() }}
</button>
</div>
<div class="tooltip tooltip-top" data-tip="Start service">
<button class="btn btn-sm btn-outline join-item"
hx-post="/api/stack/{{ stack }}/service/{{ container.Service }}/up"
hx-swap="none">
{{ play() }}
</button>
</div>
<div class="tooltip tooltip-top" data-tip="Stop service">
<button class="btn btn-sm btn-outline join-item"
hx-post="/api/stack/{{ stack }}/service/{{ container.Service }}/stop"
hx-swap="none">
{{ square() }}
</button>
</div>
<div class="tooltip tooltip-top" data-tip="Open shell">
<button class="btn btn-sm btn-outline join-item"
onclick="initExecTerminal('{{ stack }}', '{{ container.Name }}', '{{ host }}')">
{{ terminal() }}
</button>
</div>
</div>
</div>
{% endmacro %}
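
The action buttons in this macro POST to per-service endpoints (`/api/stack/{stack}/service/{service}/logs`, `/restart`, `/pull`, `/up`, `/stop`) with `hx-swap="none"`, so htmx discards the response body and only the status code matters. As a rough illustration only (the project's actual handlers are not shown in this diff; a FastAPI-style app is assumed based on the WebSocket handlers that appear later), one such route could be as small as:

```python
# Hypothetical sketch, not the project's implementation: acknowledge the
# htmx POST with 204 No Content, since hx-swap="none" ignores the body.
from fastapi import APIRouter, Response

router = APIRouter()

@router.post("/api/stack/{stack}/service/{service}/restart")
async def restart_service(stack: str, service: str) -> Response:
    # The real handler presumably queues the compose restart and streams
    # output through the existing task/streaming machinery; that part is
    # omitted because it is not visible in this diff.
    return Response(status_code=204)
```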

View File

@@ -43,6 +43,18 @@
</svg>
{% endmacro %}
{% macro rotate_ccw(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M3 12a9 9 0 1 0 9-9 9.75 9.75 0 0 0-6.74 2.74L3 8"/><path d="M3 3v5h5"/>
</svg>
{% endmacro %}
{% macro scroll_text(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M15 12h-5"/><path d="M15 8h-5"/><path d="M19 17V5a2 2 0 0 0-2-2H4"/><path d="M8 21h12a2 2 0 0 0 2-2v-1a1 1 0 0 0-1-1H11a1 1 0 0 0-1 1v1a2 2 0 1 1-4 0V5a2 2 0 1 0-4 0v2a1 1 0 0 0 1 1h3"/>
</svg>
{% endmacro %}
{% macro download(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M21 15v4a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2v-4"/><polyline points="7 10 12 15 17 10"/><line x1="12" x2="12" y1="15" y2="3"/>
@@ -158,3 +170,9 @@
<circle cx="13.5" cy="6.5" r="0.5" fill="currentColor"/><circle cx="17.5" cy="10.5" r="0.5" fill="currentColor"/><circle cx="8.5" cy="7.5" r="0.5" fill="currentColor"/><circle cx="6.5" cy="12.5" r="0.5" fill="currentColor"/><path d="M12 2C6.5 2 2 6.5 2 12s4.5 10 10 10c.926 0 1.648-.746 1.648-1.688 0-.437-.18-.835-.437-1.125-.29-.289-.438-.652-.438-1.125a1.64 1.64 0 0 1 1.668-1.668h1.996c3.051 0 5.555-2.503 5.555-5.555C21.965 6.012 17.461 2 12 2z"/>
</svg>
{% endmacro %}
{% macro external_link(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M15 3h6v6"/><path d="M10 14 21 3"/><path d="M18 13v6a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V8a2 2 0 0 1 2-2h6"/>
</svg>
{% endmacro %}

View File

@@ -1,10 +1,10 @@
{% extends "base.html" %}
{% from "partials/components.html" import collapse, action_btn %}
{% from "partials/icons.html" import play, square, rotate_cw, download, cloud_download, file_text, save, file_code, terminal, settings %}
{% from "partials/icons.html" import play, square, rotate_cw, download, cloud_download, file_text, save, file_code, terminal, settings, external_link %}
{% block title %}{{ name }} - Compose Farm{% endblock %}
{% block content %}
<div class="max-w-5xl">
<div class="max-w-5xl" data-services="{{ services | join(',') }}" data-containers='{{ containers | tojson }}' data-website-urls='{{ website_urls | tojson }}'>
<div class="mb-6">
<h1 class="text-3xl font-bold rainbow-hover">{{ name }}</h1>
<div class="flex flex-wrap items-center gap-2 mt-2">
@@ -30,7 +30,20 @@
<!-- Other -->
{{ action_btn("Pull", "/api/stack/" ~ name ~ "/pull", "outline", "Pull latest images (no restart)", cloud_download()) }}
{{ action_btn("Logs", "/api/stack/" ~ name ~ "/logs", "outline", "Show recent logs", file_text()) }}
<button id="save-btn" class="btn btn-outline">{{ save() }} Save All</button>
<div class="tooltip" data-tip="Save compose and .env files"><button id="save-btn" class="btn btn-outline">{{ save() }} Save All</button></div>
{% if website_urls %}
<div class="divider divider-horizontal mx-0"></div>
<!-- Open Website -->
{% for url in website_urls %}
<div class="tooltip" data-tip="Open {{ url }}">
<a href="{{ url }}" target="_blank" rel="noopener noreferrer" class="btn btn-outline">
{{ external_link() }} {% if website_urls | length > 1 %}{{ url | replace('https://', '') | replace('http://', '') }}{% else %}Open Website{% endif %}
</a>
</div>
{% endfor %}
{% endif %}
</div>
{% call collapse("Compose File", badge=compose_path, icon=file_code()) %}

View File

@@ -6,6 +6,7 @@ import asyncio
import contextlib
import fcntl
import json
import logging
import os
import pty
import shlex
@@ -21,6 +22,8 @@ from compose_farm.executor import is_local, ssh_connect_kwargs
from compose_farm.web.deps import get_config
from compose_farm.web.streaming import CRLF, DIM, GREEN, RED, RESET, tasks
logger = logging.getLogger(__name__)
# Shell command to prefer bash over sh
SHELL_FALLBACK = "command -v bash >/dev/null && exec bash || exec sh"
@@ -34,7 +37,7 @@ def _parse_resize(msg: str) -> tuple[int, int] | None:
"""Parse a resize message, return (cols, rows) or None if not a resize."""
try:
data = json.loads(msg)
if data.get("type") == "resize":
if isinstance(data, dict) and data.get("type") == "resize":
return int(data["cols"]), int(data["rows"])
except (json.JSONDecodeError, KeyError, TypeError, ValueError):
pass
@@ -214,6 +217,7 @@ async def exec_websocket(
except WebSocketDisconnect:
pass
except Exception as e:
logger.exception("WebSocket exec error for %s on %s", container, host)
with contextlib.suppress(Exception):
await websocket.send_text(f"{RED}Error: {e}{RESET}{CRLF}")
finally:
@@ -232,8 +236,8 @@ async def _run_shell_session(
await websocket.send_text(f"{RED}Host '{host_name}' not found{RESET}{CRLF}")
return
# Start interactive shell in home directory (avoid login shell to prevent job control warnings)
shell_cmd = "cd ~ && exec bash -i 2>/dev/null || exec sh -i"
# Start interactive shell in home directory
shell_cmd = "cd ~ && exec bash -i || exec sh -i"
if is_local(host):
# Local: use argv list with shell -c to interpret the command
@@ -258,6 +262,7 @@ async def shell_websocket(
except WebSocketDisconnect:
pass
except Exception as e:
logger.exception("WebSocket shell error for host %s", host)
with contextlib.suppress(Exception):
await websocket.send_text(f"{RED}Error: {e}{RESET}{CRLF}")
finally:
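
Two of the changes above are small but easy to miss: the `isinstance(data, dict)` guard in `_parse_resize` and the `logger.exception(...)` calls in the WebSocket error paths. The guard matters because `json.loads` can return strings, numbers, lists, or `None`, none of which have `.get()`; before the change those inputs raised `AttributeError`, which the surrounding `except` clause does not catch. A self-contained sketch of the guarded parser (the trailing `return None` is assumed from the declared return type, since the hunk is truncated):

```python
import json

def _parse_resize(msg: str) -> tuple[int, int] | None:
    """Parse a resize message, return (cols, rows) or None if not a resize."""
    try:
        data = json.loads(msg)
        # Guard against non-dict JSON (strings, numbers, lists, null): those
        # have no .get() and would raise AttributeError, which is not in the
        # except tuple below.
        if isinstance(data, dict) and data.get("type") == "resize":
            return int(data["cols"]), int(data["rows"])
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        pass
    return None

assert _parse_resize('{"type": "resize", "cols": 120, "rows": 40}') == (120, 40)
assert _parse_resize('"plain text"') is None  # raised AttributeError before the guard
assert _parse_resize("not json") is None
```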

View File

@@ -58,8 +58,9 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.get_orphaned_stacks", return_value={}),
patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
):
apply(dry_run=False, no_orphans=False, full=False, config=None)
apply(dry_run=False, no_orphans=False, no_strays=False, full=False, config=None)
captured = capsys.readouterr()
assert "Nothing to apply" in captured.out
@@ -82,10 +83,11 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stack_host", return_value="host1"),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch("compose_farm.cli.lifecycle.stop_orphaned_stacks") as mock_stop,
patch("compose_farm.cli.lifecycle.up_stacks") as mock_up,
):
apply(dry_run=True, no_orphans=False, full=False, config=None)
apply(dry_run=True, no_orphans=False, no_strays=False, full=False, config=None)
captured = capsys.readouterr()
assert "Stacks to migrate" in captured.out
@@ -112,6 +114,7 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stack_host", return_value="host1"),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -120,7 +123,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=False, config=None)
apply(dry_run=False, no_orphans=False, no_strays=False, full=False, config=None)
mock_up.assert_called_once()
call_args = mock_up.call_args
@@ -139,6 +142,7 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -146,7 +150,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.stop_orphaned_stacks") as mock_stop,
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=False, config=None)
apply(dry_run=False, no_orphans=False, no_strays=False, full=False, config=None)
mock_stop.assert_called_once_with(cfg)
@@ -169,6 +173,7 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stack_host", return_value="host1"),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -178,7 +183,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=True, full=False, config=None)
apply(dry_run=False, no_orphans=True, no_strays=False, full=False, config=None)
# Should run migrations but not orphan cleanup
mock_up.assert_called_once()
@@ -202,8 +207,9 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
):
apply(dry_run=False, no_orphans=True, full=False, config=None)
apply(dry_run=False, no_orphans=True, no_strays=False, full=False, config=None)
captured = capsys.readouterr()
assert "Nothing to apply" in captured.out
@@ -221,6 +227,7 @@ class TestApplyCommand:
"compose_farm.cli.lifecycle.get_stacks_not_in_state",
return_value=["svc1"],
),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -229,7 +236,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=False, config=None)
apply(dry_run=False, no_orphans=False, no_strays=False, full=False, config=None)
mock_up.assert_called_once()
call_args = mock_up.call_args
@@ -249,8 +256,9 @@ class TestApplyCommand:
"compose_farm.cli.lifecycle.get_stacks_not_in_state",
return_value=["svc1"],
),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
):
apply(dry_run=True, no_orphans=False, full=False, config=None)
apply(dry_run=True, no_orphans=False, no_strays=False, full=False, config=None)
captured = capsys.readouterr()
assert "Stacks to start" in captured.out
@@ -267,6 +275,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.get_orphaned_stacks", return_value={}),
patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -275,7 +284,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=True, config=None)
apply(dry_run=False, no_orphans=False, no_strays=False, full=True, config=None)
mock_up.assert_called_once()
call_args = mock_up.call_args
@@ -293,8 +302,9 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.get_orphaned_stacks", return_value={}),
patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
):
apply(dry_run=True, no_orphans=False, full=True, config=None)
apply(dry_run=True, no_orphans=False, no_strays=False, full=True, config=None)
captured = capsys.readouterr()
assert "Stacks to refresh" in captured.out
@@ -319,6 +329,7 @@ class TestApplyCommand:
return_value=["svc2"],
),
patch("compose_farm.cli.lifecycle.get_stack_host", return_value="host2"),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -327,7 +338,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=True, config=None)
apply(dry_run=False, no_orphans=False, no_strays=False, full=True, config=None)
# up_stacks should be called 3 times: migrate, start, refresh
assert mock_up.call_count == 3

View File

@@ -2,11 +2,14 @@
from __future__ import annotations
import os
import shutil
import subprocess
import sys
import time
import pytest
# Thresholds in seconds, per OS
if sys.platform == "win32":
CLI_STARTUP_THRESHOLD = 2.0
@@ -16,6 +19,10 @@ else: # Linux
CLI_STARTUP_THRESHOLD = 0.25
@pytest.mark.skipif(
"PYTEST_XDIST_WORKER" in os.environ,
reason="Skip in parallel mode due to resource contention",
)
def test_cli_startup_time() -> None:
"""Verify CLI startup time stays within acceptable bounds.

tests/test_compose.py (new file, 60 lines)
View File

@@ -0,0 +1,60 @@
"""Tests for compose file parsing utilities."""
from __future__ import annotations
import pytest
from compose_farm.compose import get_container_name
class TestGetContainerName:
"""Test get_container_name helper function."""
def test_explicit_container_name(self) -> None:
"""Uses container_name from service definition when set."""
service_def = {"image": "nginx", "container_name": "my-custom-name"}
result = get_container_name("web", service_def, "myproject")
assert result == "my-custom-name"
def test_default_naming_pattern(self) -> None:
"""Falls back to {project}-{service}-1 pattern."""
service_def = {"image": "nginx"}
result = get_container_name("web", service_def, "myproject")
assert result == "myproject-web-1"
def test_none_service_def(self) -> None:
"""Handles None service definition gracefully."""
result = get_container_name("web", None, "myproject")
assert result == "myproject-web-1"
def test_empty_service_def(self) -> None:
"""Handles empty service definition."""
result = get_container_name("web", {}, "myproject")
assert result == "myproject-web-1"
def test_container_name_none_value(self) -> None:
"""Handles container_name set to None."""
service_def = {"image": "nginx", "container_name": None}
result = get_container_name("web", service_def, "myproject")
assert result == "myproject-web-1"
def test_container_name_empty_string(self) -> None:
"""Handles container_name set to empty string."""
service_def = {"image": "nginx", "container_name": ""}
result = get_container_name("web", service_def, "myproject")
assert result == "myproject-web-1"
@pytest.mark.parametrize(
("service_name", "project_name", "expected"),
[
("redis", "plex", "plex-redis-1"),
("plex-server", "media", "media-plex-server-1"),
("db", "my-app", "my-app-db-1"),
],
)
def test_various_naming_combinations(
self, service_name: str, project_name: str, expected: str
) -> None:
"""Test various service/project name combinations."""
result = get_container_name(service_name, {"image": "test"}, project_name)
assert result == expected
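
For readers skimming these tests, the contract they pin down is simple: an explicit, non-empty `container_name` wins; otherwise the Compose default `<project>-<service>-1` is used. A minimal sketch consistent with the tests (the real `get_container_name` in `compose_farm.compose` may differ in details):

```python
# Sketch only; behavior inferred from the tests above.
def get_container_name_sketch(
    service_name: str,
    service_def: dict[str, object] | None,
    project_name: str,
) -> str:
    # An explicit, truthy container_name takes precedence; None, "", or a
    # missing key falls through to the default naming pattern.
    if service_def and service_def.get("container_name"):
        return str(service_def["container_name"])
    return f"{project_name}-{service_name}-1"

assert get_container_name_sketch("web", {"container_name": "my-custom-name"}, "myproject") == "my-custom-name"
assert get_container_name_sketch("web", None, "myproject") == "myproject-web-1"
```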

View File

@@ -11,6 +11,7 @@ from compose_farm.executor import (
_run_local_command,
check_networks_exist,
check_paths_exist,
get_running_stacks_on_host,
is_local,
run_command,
run_compose,
@@ -239,3 +240,31 @@ class TestCheckNetworksExist:
result = await check_networks_exist(config, "local", [])
assert result == {}
@linux_only
class TestGetRunningStacksOnHost:
"""Tests for get_running_stacks_on_host function (requires Docker)."""
async def test_returns_set_of_stacks(self, tmp_path: Path) -> None:
"""Function returns a set of stack names."""
config = Config(
compose_dir=tmp_path,
hosts={"local": Host(address="localhost")},
stacks={},
)
result = await get_running_stacks_on_host(config, "local")
assert isinstance(result, set)
async def test_filters_empty_lines(self, tmp_path: Path) -> None:
"""Empty project names are filtered out."""
config = Config(
compose_dir=tmp_path,
hosts={"local": Host(address="localhost")},
stacks={},
)
# Result should not contain empty strings
result = await get_running_stacks_on_host(config, "local")
assert "" not in result

View File

@@ -10,8 +10,8 @@ import pytest
from compose_farm.config import Config, Host
from compose_farm.executor import CommandResult
from compose_farm.logs import (
_parse_images_output,
collect_stack_entries,
_SECTION_SEPARATOR,
collect_stacks_entries_on_host,
isoformat,
load_existing_entries,
merge_entries,
@@ -19,74 +19,252 @@ from compose_farm.logs import (
)
def test_parse_images_output_handles_list_and_lines() -> None:
data = [
{"Service": "svc", "Image": "redis", "Digest": "sha256:abc"},
{"Service": "svc", "Image": "db", "Digest": "sha256:def"},
def _make_mock_output(
project_images: dict[str, list[str]], image_info: list[dict[str, object]]
) -> str:
"""Build mock output matching the 2-docker-command format."""
# Section 1: project|image pairs from docker ps
ps_lines = [
f"{project}|{image}" for project, images in project_images.items() for image in images
]
as_array = _parse_images_output(json.dumps(data))
assert len(as_array) == 2
as_lines = _parse_images_output("\n".join(json.dumps(item) for item in data))
assert len(as_lines) == 2
# Section 2: JSON array from docker image inspect
image_json = json.dumps(image_info)
return f"{chr(10).join(ps_lines)}\n{_SECTION_SEPARATOR}\n{image_json}"
@pytest.mark.asyncio
async def test_snapshot_preserves_first_seen(tmp_path: Path) -> None:
compose_dir = tmp_path / "compose"
compose_dir.mkdir()
stack_dir = compose_dir / "svc"
stack_dir.mkdir()
(stack_dir / "docker-compose.yml").write_text("services: {}\n")
class TestCollectStacksEntriesOnHost:
"""Tests for collect_stacks_entries_on_host (2 docker commands per host)."""
config = Config(
compose_dir=compose_dir,
hosts={"local": Host(address="localhost")},
stacks={"svc": "local"},
)
@pytest.fixture
def config_with_stacks(self, tmp_path: Path) -> Config:
"""Create a config with multiple stacks."""
compose_dir = tmp_path / "compose"
compose_dir.mkdir()
for stack in ["plex", "jellyfin", "sonarr"]:
stack_dir = compose_dir / stack
stack_dir.mkdir()
(stack_dir / "docker-compose.yml").write_text("services: {}\n")
sample_output = json.dumps([{"Service": "svc", "Image": "redis", "Digest": "sha256:abc"}])
async def fake_run_compose(
_cfg: Config, stack: str, compose_cmd: str, *, stream: bool = True
) -> CommandResult:
assert compose_cmd == "images --format json"
assert stream is False or stream is True
return CommandResult(
stack=stack,
exit_code=0,
success=True,
stdout=sample_output,
stderr="",
return Config(
compose_dir=compose_dir,
hosts={"host1": Host(address="localhost"), "host2": Host(address="localhost")},
stacks={"plex": "host1", "jellyfin": "host1", "sonarr": "host2"},
)
log_path = tmp_path / "dockerfarm-log.toml"
@pytest.mark.asyncio
async def test_single_ssh_call(
self, config_with_stacks: Config, monkeypatch: pytest.MonkeyPatch
) -> None:
"""Verify only 1 SSH call is made regardless of stack count."""
call_count = {"count": 0}
# First snapshot
first_time = datetime(2025, 1, 1, tzinfo=UTC)
first_entries = await collect_stack_entries(
config, "svc", now=first_time, run_compose_fn=fake_run_compose
)
first_iso = isoformat(first_time)
merged = merge_entries([], first_entries, now_iso=first_iso)
meta = {"generated_at": first_iso, "compose_dir": str(config.compose_dir)}
write_toml(log_path, meta=meta, entries=merged)
async def mock_run_command(
host: Host, command: str, stack: str, *, stream: bool, prefix: str
) -> CommandResult:
call_count["count"] += 1
output = _make_mock_output(
{"plex": ["plex:latest"], "jellyfin": ["jellyfin:latest"]},
[
{
"RepoTags": ["plex:latest"],
"Id": "sha256:aaa",
"RepoDigests": ["plex@sha256:aaa"],
},
{
"RepoTags": ["jellyfin:latest"],
"Id": "sha256:bbb",
"RepoDigests": ["jellyfin@sha256:bbb"],
},
],
)
return CommandResult(stack=stack, exit_code=0, success=True, stdout=output)
after_first = tomllib.loads(log_path.read_text())
first_seen = after_first["entries"][0]["first_seen"]
monkeypatch.setattr("compose_farm.logs.run_command", mock_run_command)
# Second snapshot
second_time = datetime(2025, 2, 1, tzinfo=UTC)
second_entries = await collect_stack_entries(
config, "svc", now=second_time, run_compose_fn=fake_run_compose
)
second_iso = isoformat(second_time)
existing = load_existing_entries(log_path)
merged = merge_entries(existing, second_entries, now_iso=second_iso)
meta = {"generated_at": second_iso, "compose_dir": str(config.compose_dir)}
write_toml(log_path, meta=meta, entries=merged)
now = datetime(2025, 1, 1, tzinfo=UTC)
entries = await collect_stacks_entries_on_host(
config_with_stacks, "host1", {"plex", "jellyfin"}, now=now
)
after_second = tomllib.loads(log_path.read_text())
entry = after_second["entries"][0]
assert entry["first_seen"] == first_seen
assert entry["last_seen"].startswith("2025-02-01")
assert call_count["count"] == 1
assert len(entries) == 2
@pytest.mark.asyncio
async def test_filters_to_requested_stacks(
self, config_with_stacks: Config, monkeypatch: pytest.MonkeyPatch
) -> None:
"""Only return entries for stacks we asked for, even if others are running."""
async def mock_run_command(
host: Host, command: str, stack: str, *, stream: bool, prefix: str
) -> CommandResult:
# Docker ps shows 3 stacks, but we only want plex
output = _make_mock_output(
{
"plex": ["plex:latest"],
"jellyfin": ["jellyfin:latest"],
"other": ["other:latest"],
},
[
{
"RepoTags": ["plex:latest"],
"Id": "sha256:aaa",
"RepoDigests": ["plex@sha256:aaa"],
},
{
"RepoTags": ["jellyfin:latest"],
"Id": "sha256:bbb",
"RepoDigests": ["j@sha256:bbb"],
},
{
"RepoTags": ["other:latest"],
"Id": "sha256:ccc",
"RepoDigests": ["o@sha256:ccc"],
},
],
)
return CommandResult(stack=stack, exit_code=0, success=True, stdout=output)
monkeypatch.setattr("compose_farm.logs.run_command", mock_run_command)
now = datetime(2025, 1, 1, tzinfo=UTC)
entries = await collect_stacks_entries_on_host(
config_with_stacks, "host1", {"plex"}, now=now
)
assert len(entries) == 1
assert entries[0].stack == "plex"
@pytest.mark.asyncio
async def test_multiple_images_per_stack(
self, config_with_stacks: Config, monkeypatch: pytest.MonkeyPatch
) -> None:
"""Stack with multiple containers/images returns multiple entries."""
async def mock_run_command(
host: Host, command: str, stack: str, *, stream: bool, prefix: str
) -> CommandResult:
output = _make_mock_output(
{"plex": ["plex:latest", "redis:7"]},
[
{
"RepoTags": ["plex:latest"],
"Id": "sha256:aaa",
"RepoDigests": ["p@sha256:aaa"],
},
{"RepoTags": ["redis:7"], "Id": "sha256:bbb", "RepoDigests": ["r@sha256:bbb"]},
],
)
return CommandResult(stack=stack, exit_code=0, success=True, stdout=output)
monkeypatch.setattr("compose_farm.logs.run_command", mock_run_command)
now = datetime(2025, 1, 1, tzinfo=UTC)
entries = await collect_stacks_entries_on_host(
config_with_stacks, "host1", {"plex"}, now=now
)
assert len(entries) == 2
images = {e.image for e in entries}
assert images == {"plex:latest", "redis:7"}
@pytest.mark.asyncio
async def test_empty_stacks_returns_empty(self, config_with_stacks: Config) -> None:
"""Empty stack set returns empty entries without making SSH call."""
now = datetime(2025, 1, 1, tzinfo=UTC)
entries = await collect_stacks_entries_on_host(config_with_stacks, "host1", set(), now=now)
assert entries == []
@pytest.mark.asyncio
async def test_ssh_failure_returns_empty(
self, config_with_stacks: Config, monkeypatch: pytest.MonkeyPatch
) -> None:
"""SSH failure returns empty list instead of raising."""
async def mock_run_command(
host: Host, command: str, stack: str, *, stream: bool, prefix: str
) -> CommandResult:
return CommandResult(stack=stack, exit_code=1, success=False, stdout="", stderr="error")
monkeypatch.setattr("compose_farm.logs.run_command", mock_run_command)
now = datetime(2025, 1, 1, tzinfo=UTC)
entries = await collect_stacks_entries_on_host(
config_with_stacks, "host1", {"plex"}, now=now
)
assert entries == []
class TestSnapshotMerging:
"""Tests for merge_entries preserving first_seen."""
@pytest.fixture
def config(self, tmp_path: Path) -> Config:
compose_dir = tmp_path / "compose"
compose_dir.mkdir()
stack_dir = compose_dir / "svc"
stack_dir.mkdir()
(stack_dir / "docker-compose.yml").write_text("services: {}\n")
return Config(
compose_dir=compose_dir,
hosts={"local": Host(address="localhost")},
stacks={"svc": "local"},
)
@pytest.mark.asyncio
async def test_preserves_first_seen(
self, tmp_path: Path, config: Config, monkeypatch: pytest.MonkeyPatch
) -> None:
"""Repeated snapshots preserve first_seen timestamp."""
async def mock_run_command(
host: Host, command: str, stack: str, *, stream: bool, prefix: str
) -> CommandResult:
output = _make_mock_output(
{"svc": ["redis:latest"]},
[
{
"RepoTags": ["redis:latest"],
"Id": "sha256:abc",
"RepoDigests": ["r@sha256:abc"],
}
],
)
return CommandResult(stack=stack, exit_code=0, success=True, stdout=output)
monkeypatch.setattr("compose_farm.logs.run_command", mock_run_command)
log_path = tmp_path / "dockerfarm-log.toml"
# First snapshot
first_time = datetime(2025, 1, 1, tzinfo=UTC)
first_entries = await collect_stacks_entries_on_host(
config, "local", {"svc"}, now=first_time
)
first_iso = isoformat(first_time)
merged = merge_entries([], first_entries, now_iso=first_iso)
meta = {"generated_at": first_iso, "compose_dir": str(config.compose_dir)}
write_toml(log_path, meta=meta, entries=merged)
after_first = tomllib.loads(log_path.read_text())
first_seen = after_first["entries"][0]["first_seen"]
# Second snapshot
second_time = datetime(2025, 2, 1, tzinfo=UTC)
second_entries = await collect_stacks_entries_on_host(
config, "local", {"svc"}, now=second_time
)
second_iso = isoformat(second_time)
existing = load_existing_entries(log_path)
merged = merge_entries(existing, second_entries, now_iso=second_iso)
meta = {"generated_at": second_iso, "compose_dir": str(config.compose_dir)}
write_toml(log_path, meta=meta, entries=merged)
after_second = tomllib.loads(log_path.read_text())
entry = after_second["entries"][0]
assert entry["first_seen"] == first_seen
assert entry["last_seen"].startswith("2025-02-01")

View File

@@ -11,7 +11,10 @@ import pytest
from compose_farm.cli import lifecycle
from compose_farm.config import Config, Host
from compose_farm.executor import CommandResult
from compose_farm.operations import _migrate_stack
from compose_farm.operations import (
_migrate_stack,
build_discovery_results,
)
@pytest.fixture
@@ -109,3 +112,83 @@ class TestUpdateCommandSequence:
# Verify the sequence is pull, build, down, up
assert "down" in source
assert "up -d" in source
class TestBuildDiscoveryResults:
"""Tests for build_discovery_results function."""
@pytest.fixture
def config(self, tmp_path: Path) -> Config:
"""Create a test config with multiple stacks."""
compose_dir = tmp_path / "compose"
for stack in ["plex", "jellyfin", "sonarr"]:
(compose_dir / stack).mkdir(parents=True)
(compose_dir / stack / "docker-compose.yml").write_text("services: {}")
return Config(
compose_dir=compose_dir,
hosts={
"host1": Host(address="localhost"),
"host2": Host(address="localhost"),
},
stacks={"plex": "host1", "jellyfin": "host1", "sonarr": "host2"},
)
def test_discovers_correctly_running_stacks(self, config: Config) -> None:
"""Stacks running on correct hosts are discovered."""
running_on_host = {
"host1": {"plex", "jellyfin"},
"host2": {"sonarr"},
}
discovered, strays, duplicates = build_discovery_results(config, running_on_host)
assert discovered == {"plex": "host1", "jellyfin": "host1", "sonarr": "host2"}
assert strays == {}
assert duplicates == {}
def test_detects_stray_stacks(self, config: Config) -> None:
"""Stacks running on wrong hosts are marked as strays."""
running_on_host = {
"host1": set(),
"host2": {"plex"}, # plex should be on host1
}
discovered, strays, _duplicates = build_discovery_results(config, running_on_host)
assert "plex" not in discovered
assert strays == {"plex": ["host2"]}
def test_detects_duplicates(self, config: Config) -> None:
"""Single-host stacks running on multiple hosts are duplicates."""
running_on_host = {
"host1": {"plex"},
"host2": {"plex"}, # plex running on both hosts
}
discovered, strays, duplicates = build_discovery_results(
config, running_on_host, stacks=["plex"]
)
# plex is correctly running on host1
assert discovered == {"plex": "host1"}
# plex is also a stray on host2
assert strays == {"plex": ["host2"]}
# plex is a duplicate (single-host stack on multiple hosts)
assert duplicates == {"plex": ["host1", "host2"]}
def test_filters_to_requested_stacks(self, config: Config) -> None:
"""Only returns results for requested stacks."""
running_on_host = {
"host1": {"plex", "jellyfin"},
"host2": {"sonarr"},
}
discovered, _strays, _duplicates = build_discovery_results(
config, running_on_host, stacks=["plex"]
)
# Only plex should be in results
assert discovered == {"plex": "host1"}
assert "jellyfin" not in discovered
assert "sonarr" not in discovered

View File

@@ -211,8 +211,8 @@ class TestRefreshCommand:
return_value=existing_state,
),
patch(
"compose_farm.cli.management._discover_stacks",
return_value={"plex": "nas02"}, # plex moved to nas02
"compose_farm.cli.management._discover_stacks_full",
return_value=({"plex": "nas02"}, {}, {}), # plex moved to nas02
),
patch("compose_farm.cli.management._snapshot_stacks"),
patch("compose_farm.cli.management.save_state") as mock_save,
@@ -247,8 +247,12 @@ class TestRefreshCommand:
return_value=existing_state,
),
patch(
"compose_farm.cli.management._discover_stacks",
return_value={"plex": "nas01", "grafana": "nas02"}, # jellyfin not running
"compose_farm.cli.management._discover_stacks_full",
return_value=(
{"plex": "nas01", "grafana": "nas02"},
{},
{},
), # jellyfin not running
),
patch("compose_farm.cli.management._snapshot_stacks"),
patch("compose_farm.cli.management.save_state") as mock_save,
@@ -281,8 +285,8 @@ class TestRefreshCommand:
return_value=existing_state,
),
patch(
"compose_farm.cli.management._discover_stacks",
return_value={"plex": "nas01"}, # only plex running
"compose_farm.cli.management._discover_stacks_full",
return_value=({"plex": "nas01"}, {}, {}), # only plex running
),
patch("compose_farm.cli.management._snapshot_stacks"),
patch("compose_farm.cli.management.save_state") as mock_save,
@@ -315,8 +319,8 @@ class TestRefreshCommand:
return_value=existing_state,
),
patch(
"compose_farm.cli.management._discover_stacks",
return_value={"plex": "nas01"}, # jellyfin not running
"compose_farm.cli.management._discover_stacks_full",
return_value=({"plex": "nas01"}, {}, {}), # jellyfin not running
),
patch("compose_farm.cli.management._snapshot_stacks"),
patch("compose_farm.cli.management.save_state") as mock_save,
@@ -350,8 +354,8 @@ class TestRefreshCommand:
return_value=existing_state,
),
patch(
"compose_farm.cli.management._discover_stacks",
return_value={"plex": "nas02"}, # would change
"compose_farm.cli.management._discover_stacks_full",
return_value=({"plex": "nas02"}, {}, {}), # would change
),
patch("compose_farm.cli.management.save_state") as mock_save,
):

View File

@@ -6,7 +6,7 @@ import yaml
from compose_farm.compose import parse_external_networks
from compose_farm.config import Config, Host
from compose_farm.traefik import generate_traefik_config
from compose_farm.traefik import extract_website_urls, generate_traefik_config
def _write_compose(path: Path, data: dict[str, object]) -> None:
@@ -212,22 +212,22 @@ def test_generate_follows_network_mode_service_for_ports(tmp_path: Path) -> None
"image": "gluetun",
"ports": ["5080:5080", "9696:9696"],
},
"qbittorrent": {
"image": "qbittorrent",
"syncthing": {
"image": "syncthing",
"network_mode": "service:vpn",
"labels": [
"traefik.enable=true",
"traefik.http.routers.torrent.rule=Host(`torrent.example.com`)",
"traefik.http.services.torrent.loadbalancer.server.port=5080",
"traefik.http.routers.sync.rule=Host(`sync.example.com`)",
"traefik.http.services.sync.loadbalancer.server.port=5080",
],
},
"prowlarr": {
"image": "prowlarr",
"searxng": {
"image": "searxng",
"network_mode": "service:vpn",
"labels": [
"traefik.enable=true",
"traefik.http.routers.prowlarr.rule=Host(`prowlarr.example.com`)",
"traefik.http.services.prowlarr.loadbalancer.server.port=9696",
"traefik.http.routers.searxng.rule=Host(`searxng.example.com`)",
"traefik.http.services.searxng.loadbalancer.server.port=9696",
],
},
}
@@ -238,10 +238,10 @@ def test_generate_follows_network_mode_service_for_ports(tmp_path: Path) -> None
assert warnings == []
# Both services should get their ports from the vpn service
torrent_servers = dynamic["http"]["services"]["torrent"]["loadbalancer"]["servers"]
assert torrent_servers == [{"url": "http://192.168.1.10:5080"}]
prowlarr_servers = dynamic["http"]["services"]["prowlarr"]["loadbalancer"]["servers"]
assert prowlarr_servers == [{"url": "http://192.168.1.10:9696"}]
sync_servers = dynamic["http"]["services"]["sync"]["loadbalancer"]["servers"]
assert sync_servers == [{"url": "http://192.168.1.10:5080"}]
searxng_servers = dynamic["http"]["services"]["searxng"]["loadbalancer"]["servers"]
assert searxng_servers == [{"url": "http://192.168.1.10:9696"}]
def test_parse_external_networks_single(tmp_path: Path) -> None:
@@ -336,3 +336,330 @@ def test_parse_external_networks_missing_compose(tmp_path: Path) -> None:
networks = parse_external_networks(cfg, "app")
assert networks == []
class TestExtractWebsiteUrls:
"""Test extract_website_urls function."""
def _create_config(self, tmp_path: Path) -> Config:
"""Create a test config."""
return Config(
compose_dir=tmp_path,
hosts={"nas": Host(address="192.168.1.10")},
stacks={"mystack": "nas"},
)
def test_extract_https_url(self, tmp_path: Path) -> None:
"""Extracts HTTPS URL from websecure entrypoint."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.example.com`)",
"traefik.http.routers.web.entrypoints": "websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.example.com"]
def test_extract_http_url(self, tmp_path: Path) -> None:
"""Extracts HTTP URL from web entrypoint."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.local`)",
"traefik.http.routers.web.entrypoints": "web",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["http://app.local"]
def test_extract_multiple_urls(self, tmp_path: Path) -> None:
"""Extracts multiple URLs from different routers."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.example.com`)",
"traefik.http.routers.web.entrypoints": "websecure",
"traefik.http.routers.web-local.rule": "Host(`app.local`)",
"traefik.http.routers.web-local.entrypoints": "web",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["http://app.local", "https://app.example.com"]
def test_https_preferred_over_http(self, tmp_path: Path) -> None:
"""HTTPS is preferred when same host has both."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
# Same host with different entrypoints
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web-http.rule": "Host(`app.example.com`)",
"traefik.http.routers.web-http.entrypoints": "web",
"traefik.http.routers.web-https.rule": "Host(`app.example.com`)",
"traefik.http.routers.web-https.entrypoints": "websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.example.com"]
def test_traefik_disabled(self, tmp_path: Path) -> None:
"""Returns empty list when traefik.enable is false."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "false",
"traefik.http.routers.web.rule": "Host(`app.example.com`)",
"traefik.http.routers.web.entrypoints": "websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == []
def test_no_traefik_labels(self, tmp_path: Path) -> None:
"""Returns empty list when no traefik labels."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
}
}
}
compose_file.write_text(yaml.dump(compose_data))
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == []
def test_compose_file_not_exists(self, tmp_path: Path) -> None:
"""Returns empty list when compose file doesn't exist."""
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == []
def test_env_variable_interpolation(self, tmp_path: Path) -> None:
"""Interpolates environment variables in host rule."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
env_file = stack_dir / ".env"
env_file.write_text("DOMAIN=example.com\n")
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.${DOMAIN}`)",
"traefik.http.routers.web.entrypoints": "websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.example.com"]
def test_multiple_hosts_in_one_rule_with_or(self, tmp_path: Path) -> None:
"""Extracts multiple hosts from a single rule with || operator."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.example.com`) || Host(`app.backup.com`)",
"traefik.http.routers.web.entrypoints": "websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.backup.com", "https://app.example.com"]
def test_host_with_path_prefix(self, tmp_path: Path) -> None:
"""Extracts host from rule that includes PathPrefix."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.example.com`) && PathPrefix(`/api`)",
"traefik.http.routers.web.entrypoints": "websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.example.com"]
def test_multiple_services_in_stack(self, tmp_path: Path) -> None:
"""Extracts URLs from multiple services in one stack (like arr stack)."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"radarr": {
"image": "radarr",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.radarr.rule": "Host(`radarr.example.com`)",
"traefik.http.routers.radarr.entrypoints": "websecure",
},
},
"sonarr": {
"image": "sonarr",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.sonarr.rule": "Host(`sonarr.example.com`)",
"traefik.http.routers.sonarr.entrypoints": "websecure",
},
},
}
}
compose_file.write_text(yaml.dump(compose_data))
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://radarr.example.com", "https://sonarr.example.com"]
def test_labels_in_list_format(self, tmp_path: Path) -> None:
"""Handles labels in list format (- key=value)."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": [
"traefik.enable=true",
"traefik.http.routers.web.rule=Host(`app.example.com`)",
"traefik.http.routers.web.entrypoints=websecure",
],
}
}
}
compose_file.write_text(yaml.dump(compose_data))
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.example.com"]
def test_no_entrypoints_defaults_to_http(self, tmp_path: Path) -> None:
"""When no entrypoints specified, defaults to http."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.example.com`)",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["http://app.example.com"]
def test_multiple_entrypoints_with_websecure(self, tmp_path: Path) -> None:
"""When entrypoints includes websecure, use https."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.example.com`)",
"traefik.http.routers.web.entrypoints": "web,websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.example.com"]
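
Most of the variation in these cases comes down to pulling hostnames out of Traefik router rules and then choosing a scheme from the entrypoints. As a narrow illustration of the first half only (a sketch, not the project's `extract_website_urls`, which also parses both label formats, interpolates `.env` variables, prefers https per host, and sorts the result):

```python
import re

_HOST_RE = re.compile(r"Host\(`([^`]+)`\)")

def hosts_in_rule(rule: str) -> list[str]:
    """Return every backtick-quoted Host() value in a Traefik rule string."""
    return _HOST_RE.findall(rule)

assert hosts_in_rule("Host(`app.example.com`) && PathPrefix(`/api`)") == ["app.example.com"]
assert hosts_in_rule("Host(`app.example.com`) || Host(`app.backup.com`)") == [
    "app.example.com",
    "app.backup.com",
]
```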

View File

@@ -31,12 +31,21 @@ services:
(plex_dir / ".env").write_text("PLEX_CLAIM=claim-xxx\n")
# Create another stack
sonarr_dir = compose_path / "sonarr"
sonarr_dir.mkdir()
(sonarr_dir / "compose.yaml").write_text("""
grafana_dir = compose_path / "grafana"
grafana_dir.mkdir()
(grafana_dir / "compose.yaml").write_text("""
services:
sonarr:
image: linuxserver/sonarr
grafana:
image: grafana/grafana
""")
# Create a single-service stack for testing service commands
redis_dir = compose_path / "redis"
redis_dir.mkdir()
(redis_dir / "compose.yaml").write_text("""
services:
redis:
image: redis:alpine
""")
return compose_path
@@ -58,7 +67,8 @@ hosts:
stacks:
plex: server-1
sonarr: server-2
grafana: server-2
redis: server-1
""")
# State file must be alongside config file

View File

@@ -2,53 +2,65 @@
from pathlib import Path
import pytest
from compose_farm.web.routes.api import _backup_file, _save_with_backup
def test_backup_creates_timestamped_file(tmp_path: Path) -> None:
"""Test that backup creates file in .backups with correct content."""
test_file = tmp_path / "test.yaml"
@pytest.fixture
def xdg_backup_dir(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> Path:
"""Set XDG_CONFIG_HOME to tmp_path and return the backup directory path."""
monkeypatch.setenv("XDG_CONFIG_HOME", str(tmp_path))
return tmp_path / "compose-farm" / "backups"
def test_backup_creates_timestamped_file(tmp_path: Path, xdg_backup_dir: Path) -> None:
"""Test that backup creates file in XDG backup dir with correct content."""
test_file = tmp_path / "stacks" / "test.yaml"
test_file.parent.mkdir(parents=True)
test_file.write_text("original content")
backup_path = _backup_file(test_file)
assert backup_path is not None
assert backup_path.parent.name == ".backups"
assert backup_path.is_relative_to(xdg_backup_dir)
assert backup_path.name.startswith("test.yaml.")
assert backup_path.read_text() == "original content"
def test_backup_returns_none_for_nonexistent_file(tmp_path: Path) -> None:
def test_backup_returns_none_for_nonexistent_file(tmp_path: Path, xdg_backup_dir: Path) -> None:
"""Test that backup returns None if file doesn't exist."""
assert _backup_file(tmp_path / "nonexistent.yaml") is None
def test_save_creates_new_file(tmp_path: Path) -> None:
def test_save_creates_new_file(tmp_path: Path, xdg_backup_dir: Path) -> None:
"""Test that save creates new file without backup."""
test_file = tmp_path / "new.yaml"
assert _save_with_backup(test_file, "content") is True
assert test_file.read_text() == "content"
assert not (tmp_path / ".backups").exists()
assert not xdg_backup_dir.exists()
def test_save_skips_unchanged_content(tmp_path: Path) -> None:
def test_save_skips_unchanged_content(tmp_path: Path, xdg_backup_dir: Path) -> None:
"""Test that save returns False and creates no backup if unchanged."""
test_file = tmp_path / "test.yaml"
test_file.write_text("same")
assert _save_with_backup(test_file, "same") is False
assert not (tmp_path / ".backups").exists()
assert not xdg_backup_dir.exists()
def test_save_creates_backup_before_overwrite(tmp_path: Path) -> None:
def test_save_creates_backup_before_overwrite(tmp_path: Path, xdg_backup_dir: Path) -> None:
"""Test that save backs up original before overwriting."""
test_file = tmp_path / "test.yaml"
test_file = tmp_path / "stacks" / "test.yaml"
test_file.parent.mkdir(parents=True)
test_file.write_text("original")
assert _save_with_backup(test_file, "new") is True
assert test_file.read_text() == "new"
backups = list((tmp_path / ".backups").glob("test.yaml.*"))
# Find backup in XDG dir
backups = list(xdg_backup_dir.rglob("test.yaml.*"))
assert len(backups) == 1
assert backups[0].read_text() == "original"
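
The switch visible in these tests is from a `.backups` directory next to the edited file to an XDG-based location derived from `XDG_CONFIG_HOME`. A rough sketch of that resolution (assumptions: the exact directory layout and timestamp format are mine; the real `_backup_file` in `compose_farm.web.routes.api` may nest backups differently, which is presumably why the test uses `rglob`):

```python
import os
import shutil
import time
from pathlib import Path

def backup_file_sketch(path: Path) -> Path | None:
    """Copy *path* into the XDG backup dir with a timestamp suffix; None if it doesn't exist."""
    if not path.exists():
        return None
    xdg = Path(os.environ.get("XDG_CONFIG_HOME", Path.home() / ".config"))
    backup_dir = xdg / "compose-farm" / "backups"
    backup_dir.mkdir(parents=True, exist_ok=True)
    backup_path = backup_dir / f"{path.name}.{time.strftime('%Y%m%d-%H%M%S')}"
    shutil.copy2(path, backup_path)
    return backup_path
```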

Some files were not shown because too many files have changed in this diff.