Compare commits


24 Commits

Author SHA1 Message Date
Bas Nijholt
9c72e0937a web: Add clear button to sidebar filter (#168) 2026-01-18 00:11:58 +01:00
Bas Nijholt
74cc2f3245 fix: add COLUMNS and _TYPER_FORCE_DISABLE_TERMINAL for consistent output (#167) 2026-01-16 22:07:41 +01:00
Bas Nijholt
940bd9585a fix: Detect stale NFS mounts in path existence check (#166) 2026-01-15 12:53:21 +01:00
Bas Nijholt
dd60af61a8 docs: Add Plausible analytics (#165) 2026-01-12 17:23:08 +01:00
Bas Nijholt
2f3720949b Fix compose file resolution on remote hosts (#164) 2026-01-11 00:22:55 +01:00
Bas Nijholt
1e3b1d71ed Drop CF_LOCAL_HOST; limit web-stack inference to containers (#163)
* config: Add local_host and web_stack options

Allow configuring local_host and web_stack in compose-farm.yaml instead
of requiring environment variables. This makes it easier to deploy the
web UI with just a config file mount.

- local_host: specifies which host is "local" for Glances connectivity
- web_stack: identifies the web UI stack for self-update detection

Environment variables (CF_LOCAL_HOST, CF_WEB_STACK) still work as
fallback for backwards compatibility.

Closes #152

* docs: Clarify glances_stack is used by CLI and web UI

* config: Env vars override config, add docs

- Change precedence: environment variables now override config values
  (follows 12-factor app pattern)
- Document all CF_* environment variables in configuration.md
- Update example-config.yaml to mention env var overrides

* config: Consolidate env vars, prefer config options

- Update docker-compose.yml to comment out CF_WEB_STACK and CF_LOCAL_HOST
  (now prefer setting in compose-farm.yaml)
- Update init-env to comment out CF_LOCAL_HOST (can be set in config)
- Update docker-deployment.md with new "Config option" column
- Simplify troubleshooting to prefer config over env vars

* config: Generate CF_LOCAL_HOST with config alternative note

Instead of commenting out CF_LOCAL_HOST, generate it normally but add
a note in the comment that it can also be set as 'local_host' in config.

* config: Extend local_host to all web UI operations

When running the web UI in a Docker container, is_local() can't detect
which host the container is on due to different network namespaces.

Previously local_host/CF_LOCAL_HOST only affected Glances connectivity.
Now it also affects:
- Container exec/shell (runs locally instead of via SSH)
- File editing (uses local filesystem instead of SSH)

Added is_local_host() helper that checks CF_LOCAL_HOST/config.local_host
first, then falls back to is_local() detection.

* refactor: DRY get_web_stack helper, add tests

- Move get_web_stack to deps.py to avoid duplication in streaming.py
  and actions.py
- Add tests for config.local_host and config.web_stack parsing
- Add tests for is_local_host, get_web_stack, and get_local_host helpers
- Tests verify env var precedence over config values

* glances: rely on CF_WEB_STACK for container mode

Restore docker-compose env defaults and document local_host scope.

* web: ignore local_host outside container

Document container-only behavior and adjust tests.

* web: infer local host from web_stack

Drop local_host config option and update docs/tests.

* Remove CF_LOCAL_HOST override

* refactor: move web_stack helpers to Config class

- Add get_web_stack() and get_local_host_from_web_stack() as Config methods
- Remove duplicate _get_local_host_from_web_stack() from glances.py and deps.py
- Update deps.py get_web_stack() to delegate to Config method
- Add comprehensive tests for the new Config methods

* config: remove web_stack config option

The web_stack config option was redundant since:
- In Docker, CF_WEB_STACK env var is always set
- Outside Docker, the container-specific behavior is disabled anyway

Simplify by only using the CF_WEB_STACK environment variable.

* refactor: remove get_web_stack wrapper from deps

Callers now use config.get_web_stack() directly instead of
going through a pointless wrapper function.

* prompts: add rule to identify pointless wrapper functions
2026-01-10 10:48:35 +01:00
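
The net effect of this series: only the `CF_WEB_STACK` environment variable remains, set in the web UI container's environment as in the shipped docker-compose.yml (the shell form below is just an illustration):

```bash
# Only consulted when the web UI runs inside a container; ignored otherwise.
export CF_WEB_STACK=compose-farm
```
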
Bas Nijholt
c159549a9e web: Fix Glances connection for local host in container rows endpoint (#161) 2026-01-08 13:04:14 +01:00
Bas Nijholt
d65f4cf7f4 cli: Add --containers flag to stats command (#159)
* fix: Ignore _version.py in type checkers

The _version.py file is generated at build time by hatchling,
so mypy and ty can't resolve it during development.

* Update README.md

* cli: Respect --host flag in stats summary and add tests

- Fix --host filter to work in non-containers mode (was ignored)
- Filter hosts table, pending migrations, and --live queries by host
- Add tests for stats --containers functionality

* refactor: Remove redundant _format_bytes wrappers

Use format_bytes directly from glances module instead of wrapper
functions that add no value.

* Fix stats --host filtering

* refactor: Move validate_hosts to top-level imports
2026-01-08 00:05:30 +01:00
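
A rough usage sketch of the new flag, based on the help text later in this diff (the host name is illustrative):

```bash
cf stats                          # config/state overview
cf stats --live                   # also query Docker for live container counts
cf stats --containers --host nas  # per-container stats via Glances, one host only
```
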
Bas Nijholt
7ce2067fcb cli: Default init-env output to current directory (#160)
Previously `cf config init-env` created the .env file next to the
compose-farm.yaml config file. This was unintuitive when working in
stack subdirectories - users expected the file in their current
directory.

Now the default is to create .env in the current working directory,
which matches typical CLI tool behavior. Use `-o /path/to/.env` to
specify a different location.
2026-01-07 23:26:00 +01:00
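
In practice (paths are illustrative):

```bash
cd /opt/compose/plex
cf config init-env                 # writes ./.env in the current directory
cf config init-env -o ~/farm/.env  # or choose a different location explicitly
```
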
Bas Nijholt
f32057aa7b cli: Add list command with ls alias (#158) 2026-01-07 15:59:30 +01:00
Bas Nijholt
c3e3aeb538 ci: Only tag as :latest when building the latest release (#157) 2026-01-07 15:30:24 +01:00
Bas Nijholt
009f3b1403 web: Fix theme resetting to first theme in list (#156) 2026-01-07 15:01:34 +01:00
Bas Nijholt
51f74eab42 examples: Add CoreDNS for *.local domain resolution (#155)
* examples: Add CoreDNS for *.local domain resolution

Adds a CoreDNS example that resolves *.local to the Traefik host,
making the .local routes in all examples work out of the box.

Also removes the redundant Multi-Container Stacks section from
README since paperless-ngx already demonstrates this pattern.

* examples: Add coredns .env file
2026-01-07 12:29:53 +01:00
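
A quick way to sanity-check the example once deployed might be the following; the record name and resolver address are placeholders, and `dig` is assumed to be available:

```bash
# Should print the Traefik host's IP if CoreDNS is answering for *.local
dig +short plex.local @192.168.1.10
```
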
Bas Nijholt
4acf797128 examples: Update paperless-ngx to use PostgreSQL (#153)
Match the real-world setup with Redis + PostgreSQL + App.
Remove NFS + PostgreSQL warning since it works fine in practice.
2026-01-07 03:23:34 -08:00
Andi Powers-Holmes
d167da9d63 Fix external network name parsing (#152)
* fix: external network name parsing

Compose network definitions may have a "name" field defining the actual network name,
which may differ from the key used in the compose file, e.g. when overriding the default
compose network or when the network name contains special characters that are not valid YAML keys.

Fix: check for a "name" field on the definition and use it if present; otherwise fall back to the key.

* tests: Add test for external network name field parsing

Covers the case where a network definition has a "name" field that
differs from the YAML key (e.g., default key with name: compose-net).

---------

Co-authored-by: Bas Nijholt <bas@nijho.lt>
2026-01-07 02:48:35 -08:00
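
A minimal sketch of the case this fix covers (service, image, and network names are illustrative):

```bash
cat > compose.yaml <<'EOF'
services:
  app:
    image: nginx
networks:
  default:
    external: true
    name: compose-net   # actual network name; differs from the "default" key
EOF
docker compose config   # rendered config keeps name: compose-net for the default network
```
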
Bas Nijholt
a5eac339db compose: Quote arguments with shlex to preserve spaces (#151) 2026-01-06 15:37:55 +01:00
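
The kind of invocation this protects, roughly (stack and service names are illustrative):

```bash
# The quoted argument with spaces now survives the SSH round-trip intact
cf compose mystack exec web sh -c "echo hello world"
```
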
Bas Nijholt
9f3813eb72 docs: Add missing source files to architecture docs (#150) 2026-01-06 13:07:20 +01:00
Bas Nijholt
b9ae0ad4d5 docs: Add missing options, aliases, and config settings (#149)
- Add --pull and --build options to cf up (from #146)
- Add --no-strays option to cf apply
- Add command aliases section (a, l, r, u, p, s, c, rf, ck, tf)
- Add cf config init-env subcommand documentation
- Add glances_stack config option (from #124)
- Add Host Resource Monitoring section to architecture docs
2026-01-06 11:06:24 +01:00
Bas Nijholt
ca2a4dd6d9 cli: Add short command aliases (#148)
* cli: Add short command aliases

Add single and two-letter aliases for frequently used commands:

- a  → apply
- l  → logs
- r  → restart
- u  → update
- p  → pull
- s  → stats
- c  → compose
- rf → refresh
- ck → check
- tf → traefik-file

Aliases are hidden from --help to keep output clean.

* docs: Document command aliases in README
2026-01-05 18:46:57 +01:00
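
For example, these pairs are equivalent:

```bash
cf u --all    # cf update --all
cf ck         # cf check
cf l -f plex  # cf logs -f plex
```
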
Bas Nijholt
fafdce5736 docs: Clarify Docker Compose vs Compose Farm commands (#147)
* docs: Clarify Docker Compose vs Compose Farm commands

Split the Usage section into two tables:
- Docker Compose Commands: wrappers with multi-host additions
- Compose Farm Commands: orchestration Docker Compose can't do

Also update the `update` command docstring to clarify it's
a shorthand for `up --pull --build`.

* chore(docs): update TOC

* docs: Add command type distinction to commands.md

Explain that commands are either Docker Compose wrappers with
multi-host superpowers, or Compose Farm originals for orchestration.
Also update `update` description to clarify it's a shorthand.

* Update README.md
2026-01-05 18:37:41 +01:00
Bas Nijholt
6436becff9 up: Add --pull and --build flags for Docker Compose parity (#146)
* up: Add --pull and --build flags for Docker Compose parity

Add `--pull` and `--build` options to `cf up` to match Docker Compose
naming conventions. This allows users to pull images or rebuild before
starting without using the separate `update` command.

- `cf up --pull` adds `--pull always` to the compose command
- `cf up --build` adds `--build` to the compose command
- Both flags work together: `cf up --pull --build`

The `update` command remains unchanged as a convenience wrapper.

* Update README.md

* up: Run stacks in parallel when no migration needed

Refactor up_stacks to categorize stacks and run them appropriately:
- Simple stacks (no migration): run in parallel via asyncio.gather
- Multi-host stacks: run in parallel
- Migration stacks: run sequentially for clear output and rollback

This makes `cf up --all` as fast as `cf update --all` for typical use.

* refactor: DRY up command building with build_up_cmd helper

Consolidate all 'up -d' command construction into a single helper
function. Now used by up, update, and operations module.

Added tests for the helper function.

* update: Delegate to up --pull --build

Simplify update command to just call up with pull=True and build=True.
This removes duplication and ensures consistent behavior.
2026-01-05 15:55:00 +01:00
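
Roughly, per the messages above (stack name is illustrative):

```bash
cf up mystack --pull         # pull images first (--pull always)
cf up mystack --build        # rebuild first
cf up --all --pull --build   # what `cf update --all` now delegates to
```
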
Bas Nijholt
3460d8a3ea restart: Match Docker Compose semantics (#145)
* restart: Match Docker Compose semantics

Change `cf restart` from doing `down + up` to using `docker compose
restart`, matching the Docker Compose command behavior.

This provides command naming parity with Docker Compose. Users who want
the old behavior can use `cf down mystack && cf up mystack`.

- Update restart implementation to use `docker compose restart`
- Remove traefik regeneration from restart (no longer recreates containers)
- Update all documentation and help text
- Remove restart from self-update SSH handling (no longer involves down)

* web: Clarify Update tooltip uses 'recreate' not 'restart'

Avoid confusion now that 'restart' means something different.

* web: Fix Update All tooltip to use 'recreates'
2026-01-05 14:29:03 +01:00
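
Before/after sketch, per the message above (stack name is illustrative):

```bash
cf restart mystack                # now: docker compose restart, containers kept in place
cf down mystack && cf up mystack  # old down + up behavior, done explicitly
```
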
Bas Nijholt
8dabc27272 update: Only restart containers when images change (#143)
* update: Only restart containers when images change

Use `up -d --pull always --build` instead of separate pull/build/down/up
steps. This avoids unnecessary container restarts when images haven't
changed.

* Update README.md

* docs: Update update command description across all docs

Reflect new behavior: only recreates containers if images changed.

* Update README.md

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-01-05 10:06:45 +01:00
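
Per stack, this is roughly the single compose invocation that `update` now delegates to:

```bash
docker compose up -d --pull always --build
```
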
Bas Nijholt
5e08f1d712 web: Exclude web stack from Update All button (#142) 2026-01-04 19:56:41 +01:00
58 changed files with 1663 additions and 684 deletions


@@ -68,16 +68,35 @@ jobs:
echo "✗ Timeout waiting for PyPI"
exit 1
- name: Check if latest release
id: latest
run: |
VERSION="${{ steps.version.outputs.version }}"
# Get latest release tag from GitHub (strip 'v' prefix)
LATEST=$(gh release view --json tagName -q '.tagName' | sed 's/^v//')
echo "Building version: $VERSION"
echo "Latest release: $LATEST"
if [ "$VERSION" = "$LATEST" ]; then
echo "is_latest=true" >> $GITHUB_OUTPUT
echo "✓ This is the latest release, will tag as :latest"
else
echo "is_latest=false" >> $GITHUB_OUTPUT
echo "⚠ This is NOT the latest release, skipping :latest tag"
fi
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# Only tag as 'latest' if this is the latest release (prevents re-runs of old releases from overwriting)
tags: |
type=semver,pattern={{version}},value=v${{ steps.version.outputs.version }}
type=semver,pattern={{major}}.{{minor}},value=v${{ steps.version.outputs.version }}
type=semver,pattern={{major}},value=v${{ steps.version.outputs.version }}
type=raw,value=latest
type=raw,value=latest,enable=${{ steps.latest.outputs.is_latest }}
- name: Build and push
uses: docker/build-push-action@v6


@@ -26,7 +26,9 @@ jobs:
env:
TERM: dumb
NO_COLOR: 1
TERMINAL_WIDTH: 90
COLUMNS: 90 # POSIX terminal width for Rich
TERMINAL_WIDTH: 90 # Typer MAX_WIDTH for help panels
_TYPER_FORCE_DISABLE_TERMINAL: 1 # Prevent Typer forcing terminal mode in CI
run: |
uvx --with . markdown-code-runner README.md
sed -i 's/[[:space:]]*$//' README.md


@@ -59,18 +59,20 @@ Check:
- Config file search order is accurate
- Example YAML would actually work
### 4. Verify docs/architecture.md
### 4. Verify docs/architecture.md and CLAUDE.md
```bash
# What source files actually exist?
git ls-files "src/**/*.py"
```
Check:
Check **both** `docs/architecture.md` and `CLAUDE.md` (Architecture section):
- Listed files exist
- No files are missing from the list
- Descriptions match what the code does
Both files have architecture listings that can drift independently.
### 5. Check Examples
For examples in any doc:


@@ -6,6 +6,7 @@ Review the pull request for:
- **Organization**: Is everything in the right place?
- **Consistency**: Is it in the same style as other parts of the codebase?
- **Simplicity**: Is it not over-engineered? Remember KISS and YAGNI. No dead code paths and NO defensive programming.
- **No pointless wrappers**: Identify functions/methods that just call another function and return its result. Callers should call the underlying function directly instead of going through unnecessary indirection.
- **User experience**: Does it provide a good user experience?
- **PR**: Is the PR description and title clear and informative?
- **Tests**: Are there tests, and do they cover the changes adequately? Are they testing something meaningful or are they just trivial?


@@ -17,18 +17,20 @@ src/compose_farm/
│ ├── config.py # Config subcommand (init, show, path, validate, edit, symlink)
│ ├── lifecycle.py # up, down, stop, pull, restart, update, apply, compose commands
│ ├── management.py # refresh, check, init-network, traefik-file commands
│ ├── monitoring.py # logs, ps, stats commands
│ ├── monitoring.py # logs, ps, stats, list commands
│ ├── ssh.py # SSH key management (setup, status, keygen)
│ └── web.py # Web UI server command
├── config.py # Pydantic models, YAML loading
├── compose.py # Compose file parsing (.env, ports, volumes, networks)
├── config.py # Pydantic models, YAML loading
├── console.py # Shared Rich console instances
├── executor.py # SSH/local command execution, streaming output
├── operations.py # Business logic (up, migrate, discover, preflight checks)
├── state.py # Deployment state tracking (which stack on which host)
├── glances.py # Glances API integration for host resource stats
├── logs.py # Image digest snapshots (dockerfarm-log.toml)
├── operations.py # Business logic (up, migrate, discover, preflight checks)
├── paths.py # Path utilities, config file discovery
├── registry.py # Container registry client for update checking
├── ssh_keys.py # SSH key path constants and utilities
├── state.py # Deployment state tracking (which stack on which host)
├── traefik.py # Traefik file-provider config generation from labels
└── web/ # Web UI (FastAPI + HTMX)
```
@@ -100,6 +102,17 @@ Browser tests are marked with `@pytest.mark.browser`. They use Playwright to tes
- **NEVER merge anything into main.** Always commit directly or use fast-forward/rebase.
- Never force push.
## SSH Agent in Remote Sessions
When pushing to GitHub via SSH fails with "Permission denied (publickey)", fix the SSH agent socket:
```bash
# Find and set the correct SSH agent socket
SSH_AUTH_SOCK=$(ls -t ~/.ssh/agent/s.*.sshd.* 2>/dev/null | head -1) git push origin branch-name
```
This is needed because the SSH agent socket path changes between sessions.
## Pull Requests
- Never include unchecked checklists (e.g., `- [ ] ...`) in PR descriptions. Either omit the checklist or use checked items.
@@ -137,13 +150,14 @@ CLI available as `cf` or `compose-farm`.
| `down` | Stop stacks (`docker compose down`). Use `--orphaned` to stop stacks removed from config |
| `stop` | Stop services without removing containers (`docker compose stop`) |
| `pull` | Pull latest images |
| `restart` | `down` + `up -d` |
| `update` | `pull` + `build` + `down` + `up -d` |
| `restart` | Restart running containers (`docker compose restart`) |
| `update` | Pull, build, recreate only if changed (`up -d --pull always --build`) |
| `apply` | Make reality match config: migrate stacks + stop orphans. Use `--dry-run` to preview |
| `compose` | Run any docker compose command on a stack (passthrough) |
| `logs` | Show stack logs |
| `ps` | Show status of all stacks |
| `stats` | Show overview (hosts, stacks, pending migrations; `--live` for container counts) |
| `list` | List stacks and hosts (`--simple` for scripting, `--host` to filter) |
| `refresh` | Update state from reality: discover running stacks, capture image digests |
| `check` | Validate config, traefik labels, mounts, networks; show host compatibility |
| `init-network` | Create Docker network on hosts with consistent subnet/gateway |

README.md (524 changed lines)

@@ -51,6 +51,9 @@ A minimal CLI tool to run Docker Compose commands across multiple hosts via SSH.
- [Multi-Host Stacks](#multi-host-stacks)
- [Config Command](#config-command)
- [Usage](#usage)
- [Docker Compose Commands](#docker-compose-commands)
- [Compose Farm Commands](#compose-farm-commands)
- [Aliases](#aliases)
- [CLI `--help` Output](#cli---help-output)
- [Auto-Migration](#auto-migration)
- [Traefik Multihost Ingress (File Provider)](#traefik-multihost-ingress-file-provider)
@@ -363,24 +366,49 @@ Use `cf config init` to get started with a fully documented template.
The CLI is available as both `compose-farm` and the shorter `cf` alias.
### Docker Compose Commands
These wrap `docker compose` with multi-host superpowers:
| Command | Wraps | Compose Farm Additions |
|---------|-------|------------------------|
| `cf up` | `up -d` | `--all`, `--host`, parallel execution, auto-migration |
| `cf down` | `down` | `--all`, `--host`, `--orphaned`, state tracking |
| `cf stop` | `stop` | `--all`, `--service` |
| `cf restart` | `restart` | `--all`, `--service` |
| `cf pull` | `pull` | `--all`, `--service`, parallel execution |
| `cf logs` | `logs` | `--all`, `--host`, multi-stack output |
| `cf ps` | `ps` | `--all`, `--host`, unified cross-host view |
| `cf compose` | any | passthrough for commands not listed above |
### Compose Farm Commands
Multi-host orchestration that Docker Compose can't do:
| Command | Description |
|---------|-------------|
| **`cf apply`** | **Make reality match config (start + migrate + stop orphans)** |
| `cf up <stack>` | Start stack (auto-migrates if host changed) |
| `cf down <stack>` | Stop and remove stack containers |
| `cf stop <stack>` | Stop stack without removing containers |
| `cf restart <stack>` | down + up |
| `cf update <stack>` | pull + build + down + up |
| `cf pull <stack>` | Pull latest images |
| `cf logs -f <stack>` | Follow logs |
| `cf ps` | Show status of all stacks |
| `cf refresh` | Update state from running stacks |
| **`cf apply`** | **Reconcile: start missing, migrate moved, stop orphans** |
| `cf update` | Shorthand for `up --pull --build` |
| `cf refresh` | Sync state from what's actually running |
| `cf check` | Validate config, mounts, networks |
| `cf init-network` | Create Docker network on hosts |
| `cf init-network` | Create Docker network on all hosts |
| `cf traefik-file` | Generate Traefik file-provider config |
| `cf config <cmd>` | Manage config files (init, show, path, validate, edit, symlink) |
| `cf config` | Manage config files (init, show, validate, edit, symlink) |
| `cf ssh` | Manage SSH keys (setup, status, keygen) |
| `cf list` | List all stacks and their assigned hosts |
All commands support `--all` to operate on all stacks.
### Aliases
Short aliases for frequently used commands:
| Alias | Command | Alias | Command |
|-------|---------|-------|---------|
| `cf a` | `apply` | `cf s` | `stats` |
| `cf l` | `logs` | `cf ls` | `list` |
| `cf r` | `restart` | `cf rf` | `refresh` |
| `cf u` | `update` | `cf ck` | `check` |
| `cf p` | `pull` | `cf tf` | `traefik-file` |
| `cf c` | `compose` | | |
Each command replaces: look up host → SSH → find compose file → run `ssh host "cd /opt/compose/plex && docker compose up -d"`.
@@ -400,10 +428,10 @@ cf down --orphaned # stop stacks removed from config
# Pull latest images
cf pull --all
# Restart (down + up)
# Restart running containers
cf restart plex
# Update (pull + build + down + up) - the end-to-end update command
# Update (pull + build, only recreates containers if images changed)
cf update --all
# Update state from reality (discovers running stacks + captures digests)
@@ -450,46 +478,41 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Compose Farm - run docker compose commands across multiple hosts
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --version -v Show version and exit │
│ --install-completion Install completion for the current shell. │
│ --show-completion Show completion for the current shell, to │
copy it or customize the installation. │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Configuration ──────────────────────────────────────────────────────────────╮
│ traefik-file Generate a Traefik file-provider fragment from compose
Traefik labels.
refresh Update local state from running stacks.
check Validate configuration, traefik labels, mounts, and networks.
init-network Create Docker network on hosts with consistent settings.
config Manage compose-farm configuration files.
│ ssh Manage SSH keys for passwordless authentication. │
──────────────────────────────────────────────────────────────────────────────╯
╭─ Lifecycle ──────────────────────────────────────────────────────────────────╮
up Start stacks (docker compose up -d). Auto-migrates if host
changed.
down Stop stacks (docker compose down).
stop Stop services without removing containers (docker compose │
stop).
pull Pull latest images (docker compose pull).
restart Restart stacks (down + up). With --service, restarts just
that service.
│ update Update stacks (pull + build + down + up). With --service, │
│ updates just that service. │
apply Make reality match config (start, migrate, stop
strays/orphans as needed).
compose Run any docker compose command on a stack.
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Monitoring ─────────────────────────────────────────────────────────────────
│ logs Show stack logs. With --service, shows logs for just that │
service.
│ ps Show status of stacks. │
│ stats Show overview statistics for hosts and stacks. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Server ─────────────────────────────────────────────────────────────────────╮
│ web Start the web UI server. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --version -v Show version and exit
│ --install-completion Install completion for the current shell.
│ --show-completion Show completion for the current shell, to copy it or
│ customize the installation.
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Configuration ────────────────────────────────────────────────────────────────────────
│ traefik-file Generate a Traefik file-provider fragment from compose Traefik labels.
refresh Update local state from running stacks.
check Validate configuration, traefik labels, mounts, and networks.
init-network Create Docker network on hosts with consistent settings.
config Manage compose-farm configuration files.
ssh Manage SSH keys for passwordless authentication.
╰────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Lifecycle ────────────────────────────────────────────────────────────────────────────
│ up Start stacks (docker compose up -d). Auto-migrates if host changed. │
down Stop stacks (docker compose down).
stop Stop services without removing containers (docker compose stop).
pull Pull latest images (docker compose pull).
restart Restart running containers (docker compose restart).
update Update stacks (pull + build + up). Shorthand for 'up --pull --build'.
apply Make reality match config (start, migrate, stop strays/orphans as
needed).
compose Run any docker compose command on a stack.
╰────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Monitoring ───────────────────────────────────────────────────────────────────────────╮
logs Show stack logs. With --service, shows logs for just that service.
ps Show status of stacks.
stats Show overview statistics for hosts and stacks.
│ list List all stacks and their assigned hosts. │
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Server ───────────────────────────────────────────────────────────────────────────────╮
web Start the web UI server.
╰────────────────────────────────────────────────────────────────────────────────────────╯
```
@@ -518,16 +541,18 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Start stacks (docker compose up -d). Auto-migrates if host changed.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks │
│ --host -H TEXT Filter to stacks on this host │
│ --service -s TEXT Target a specific service within the stack │
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────
│ stacks [STACKS]... Stacks to operate on
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --all -a Run on all stacks
│ --host -H TEXT Filter to stacks on this host
│ --service -s TEXT Target a specific service within the stack
│ --pull Pull images before starting (--pull always)
│ --build Build images before starting
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰────────────────────────────────────────────────────────────────────────────────────────╯
```
@@ -554,17 +579,16 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Stop stacks (docker compose down).
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks │
│ --orphaned Stop orphaned stacks (in state but removed from │
config)
│ --host -H TEXT Filter to stacks on this host
│ --config -c PATH Path to config file
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────
│ stacks [STACKS]... Stacks to operate on
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --all -a Run on all stacks
│ --orphaned Stop orphaned stacks (in state but removed from config)
--host -H TEXT Filter to stacks on this host
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────╯
```
@@ -591,15 +615,15 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Stop services without removing containers (docker compose stop).
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks │
│ --service -s TEXT Target a specific service within the stack │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────
│ stacks [STACKS]... Stacks to operate on
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --all -a Run on all stacks
│ --service -s TEXT Target a specific service within the stack
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────
```
@@ -626,15 +650,15 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Pull latest images (docker compose pull).
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks │
│ --service -s TEXT Target a specific service within the stack │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────
│ stacks [STACKS]... Stacks to operate on
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --all -a Run on all stacks
│ --service -s TEXT Target a specific service within the stack
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────
```
@@ -659,17 +683,17 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Usage: cf restart [OPTIONS] [STACKS]...
Restart stacks (down + up). With --service, restarts just that service.
Restart running containers (docker compose restart).
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks │
│ --service -s TEXT Target a specific service within the stack │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────
│ stacks [STACKS]... Stacks to operate on
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --all -a Run on all stacks
│ --service -s TEXT Target a specific service within the stack
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────
```
@@ -694,18 +718,17 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Usage: cf update [OPTIONS] [STACKS]...
Update stacks (pull + build + down + up). With --service, updates just that
service.
Update stacks (pull + build + up). Shorthand for 'up --pull --build'.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks │
│ --service -s TEXT Target a specific service within the stack │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────
│ stacks [STACKS]... Stacks to operate on
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --all -a Run on all stacks
│ --service -s TEXT Target a specific service within the stack
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────
```
@@ -745,15 +768,14 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Use --no-strays to skip stopping stray stacks.
Use --full to also run 'up' on all stacks (picks up compose/env changes).
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --dry-run -n Show what would change without executing │
│ --no-orphans Only migrate, don't stop orphaned stacks │
│ --no-strays Don't stop stray stacks (running on wrong host) │
│ --full -f Also run up on all stacks to apply config │
changes
│ --config -c PATH Path to config file
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --dry-run -n Show what would change without executing
│ --no-orphans Only migrate, don't stop orphaned stacks
│ --no-strays Don't stop stray stacks (running on wrong host)
│ --full -f Also run up on all stacks to apply config changes
--config -c PATH Path to config file
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────╯
```
@@ -790,17 +812,16 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
cf compose mystack exec web bash - interactive shell
cf compose mystack config - view parsed config
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ * stack TEXT Stack to operate on (use '.' for current dir) │
[required]
* command TEXT Docker compose command [required]
│ args [ARGS]... Additional arguments │
──────────────────────────────────────────────────────────────────────────────
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --host -H TEXT Filter to stacks on this host
│ --config -c PATH Path to config file
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────
│ * stack TEXT Stack to operate on (use '.' for current dir) [required]
* command TEXT Docker compose command [required]
args [ARGS]... Additional arguments
╰────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --host -H TEXT Filter to stacks on this host │
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────╯
```
@@ -829,16 +850,16 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Generate a Traefik file-provider fragment from compose Traefik labels.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks │
│ --output -o PATH Write Traefik file-provider YAML to this path │
(stdout if omitted)
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────
│ stacks [STACKS]... Stacks to operate on
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --all -a Run on all stacks
│ --output -o PATH Write Traefik file-provider YAML to this path (stdout if
omitted)
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────
```
@@ -874,16 +895,16 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Use 'cf apply' to make reality match your config (stop orphans, migrate).
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks │
│ --config -c PATH Path to config file │
│ --log-path -l PATH Path to Dockerfarm TOML log │
│ --dry-run -n Show what would change without writing │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────
│ stacks [STACKS]... Stacks to operate on
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --all -a Run on all stacks
│ --config -c PATH Path to config file
│ --log-path -l PATH Path to Dockerfarm TOML log
│ --dry-run -n Show what would change without writing
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────
```
@@ -916,14 +937,14 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Use --local to skip SSH-based checks for faster validation.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --local Skip SSH-based checks (faster) │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────
│ stacks [STACKS]... Stacks to operate on
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --local Skip SSH-based checks (faster)
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────
```
@@ -955,16 +976,16 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
communication. Uses the same subnet/gateway on all hosts to ensure
consistent networking.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ hosts [HOSTS]... Hosts to create network on (default: all) │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --network -n TEXT Network name [default: mynetwork] │
│ --subnet -s TEXT Network subnet [default: 172.20.0.0/16] │
│ --gateway -g TEXT Network gateway [default: 172.20.0.1] │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────
│ hosts [HOSTS]... Hosts to create network on (default: all)
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --network -n TEXT Network name [default: mynetwork]
│ --subnet -s TEXT Network subnet [default: 172.20.0.0/16]
│ --gateway -g TEXT Network gateway [default: 172.20.0.1]
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────
```
@@ -992,19 +1013,18 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Manage compose-farm configuration files.
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ───────────────────────────────────────────────────────────────────╮
│ init Create a new config file with documented example. │
│ edit Open the config file in your default editor. │
│ show Display the config file location and contents. │
│ path Print the config file path (useful for scripting). │
│ validate Validate the config file syntax and schema. │
│ symlink Create a symlink from the default config location to a config │
file.
│ init-env Generate a .env file for Docker deployment. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Commands ─────────────────────────────────────────────────────────────────────────────
│ init Create a new config file with documented example.
│ edit Open the config file in your default editor.
│ show Display the config file location and contents.
│ path Print the config file path (useful for scripting).
│ validate Validate the config file syntax and schema.
│ symlink Create a symlink from the default config location to a config file.
init-env Generate a .env file for Docker deployment.
╰────────────────────────────────────────────────────────────────────────────────────────╯
```
@@ -1032,14 +1052,14 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Manage SSH keys for passwordless authentication.
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ───────────────────────────────────────────────────────────────────╮
│ keygen Generate SSH key (does not distribute to hosts). │
│ setup Generate SSH key and distribute to all configured hosts. │
│ status Show SSH key status and host connectivity. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Commands ─────────────────────────────────────────────────────────────────────────────
│ keygen Generate SSH key (does not distribute to hosts).
│ setup Generate SSH key and distribute to all configured hosts.
│ status Show SSH key status and host connectivity.
╰────────────────────────────────────────────────────────────────────────────────────────
```
@@ -1068,19 +1088,18 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Show stack logs. With --service, shows logs for just that service.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks │
│ --host -H TEXT Filter to stacks on this host │
│ --service -s TEXT Target a specific service within the stack │
│ --follow -f Follow logs │
│ --tail -n INTEGER Number of lines (default: 20 for --all, 100 │
otherwise)
│ --config -c PATH Path to config file
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────
│ stacks [STACKS]... Stacks to operate on
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --all -a Run on all stacks
│ --host -H TEXT Filter to stacks on this host
│ --service -s TEXT Target a specific service within the stack
│ --follow -f Follow logs
│ --tail -n INTEGER Number of lines (default: 20 for --all, 100 otherwise)
--config -c PATH Path to config file
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────╯
```
@@ -1113,16 +1132,16 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
With --host: shows stacks on that host.
With --service: filters to a specific service within the stack.
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
│ stacks [STACKS]... Stacks to operate on │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --all -a Run on all stacks │
│ --host -H TEXT Filter to stacks on this host │
│ --service -s TEXT Target a specific service within the stack │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Arguments ────────────────────────────────────────────────────────────────────────────
│ stacks [STACKS]... Stacks to operate on
╰────────────────────────────────────────────────────────────────────────────────────────
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --all -a Run on all stacks
│ --host -H TEXT Filter to stacks on this host
│ --service -s TEXT Target a specific service within the stack
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────
```
@@ -1150,14 +1169,49 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Show overview statistics for hosts and stacks.
Without --live: Shows config/state info (hosts, stacks, pending migrations).
Without flags: Shows config/state info (hosts, stacks, pending migrations).
With --live: Also queries Docker on each host for container counts.
With --containers: Shows per-container resource stats (requires Glances).
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --live -l Query Docker for live container stats │
│ --config -c PATH Path to config file
│ --help -h Show this message and exit.
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --live -l Query Docker for live container stats
│ --containers -C Show per-container resource stats (requires Glances)
│ --host -H TEXT Filter to stacks on this host
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰────────────────────────────────────────────────────────────────────────────────────────╯
```
<!-- OUTPUT:END -->
</details>
<details>
<summary>See the output of <code>cf list --help</code></summary>
<!-- CODE:BASH:START -->
<!-- echo '```yaml' -->
<!-- export NO_COLOR=1 -->
<!-- export TERM=dumb -->
<!-- export TERMINAL_WIDTH=90 -->
<!-- cf list --help -->
<!-- echo '```' -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
```yaml
Usage: cf list [OPTIONS]
List all stacks and their assigned hosts.
╭─ Options ──────────────────────────────────────────────────────────────────────────────╮
│ --host -H TEXT Filter to stacks on this host │
│ --simple -s Plain output (one stack per line, for scripting) │
│ --config -c PATH Path to config file │
│ --help -h Show this message and exit. │
╰────────────────────────────────────────────────────────────────────────────────────────╯
```
@@ -1186,12 +1240,12 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Start the web UI server.
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --host -H TEXT Host to bind to [default: 0.0.0.0] │
│ --port -p INTEGER Port to listen on [default: 8000] │
│ --reload -r Enable auto-reload for development │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Options ──────────────────────────────────────────────────────────────────────────────
│ --host -H TEXT Host to bind to [default: 0.0.0.0]
│ --port -p INTEGER Port to listen on [default: 8000]
│ --reload -r Enable auto-reload for development
│ --help -h Show this message and exit.
╰────────────────────────────────────────────────────────────────────────────────────────
```
@@ -1283,12 +1337,12 @@ published ports.
**Auto-regeneration**
To automatically regenerate the Traefik config after `up`, `down`, `restart`, or `update`,
To automatically regenerate the Traefik config after `up`, `down`, or `update`,
add `traefik_file` to your config:
```yaml
compose_dir: /opt/compose
traefik_file: /opt/traefik/dynamic.d/compose-farm.yml # auto-regenerate on up/down/restart/update
traefik_file: /opt/traefik/dynamic.d/compose-farm.yml # auto-regenerate on up/down/update
traefik_stack: traefik # skip stacks on same host (docker provider handles them)
hosts:
@@ -1364,13 +1418,7 @@ glances_stack: glances # Enables resource stats in web UI
3. Deploy: `cf up glances`
4. **(Docker web UI only)** If running the web UI in a Docker container, set `CF_LOCAL_HOST` to your local hostname in `.env`:
```bash
echo "CF_LOCAL_HOST=nas" >> .env # Replace 'nas' with your local host name
```
This tells the web UI to reach the local Glances via container name instead of IP (required due to Docker network isolation).
4. **(Docker web UI only)** The web UI container infers the local host from `CF_WEB_STACK` and reaches Glances via the container name to avoid Docker network isolation issues.
The web UI dashboard will now show a "Host Resources" section with live stats from all hosts. Hosts where Glances is unreachable show an error indicator.


@@ -3,7 +3,7 @@
compose_dir: /opt/compose
# Optional: Auto-regenerate Traefik file-provider config after up/down/restart/update
# Optional: Auto-regenerate Traefik file-provider config after up/down/update
traefik_file: /opt/traefik/dynamic.d/compose-farm.yml
traefik_stack: traefik # Skip stacks on same host (docker provider handles them)


@@ -47,8 +47,6 @@ services:
- CF_CONFIG=${CF_COMPOSE_DIR:-/opt/stacks}/compose-farm.yaml
# Used to detect self-updates and run via SSH to survive container restart
- CF_WEB_STACK=compose-farm
# Local host for Glances (use container name instead of IP to avoid Docker network issues)
- CF_LOCAL_HOST=${CF_LOCAL_HOST:-}
# HOME must match the user running the container for SSH to find keys
- HOME=${CF_HOME:-/root}
# USER is required for SSH when running as non-root (UID not in /etc/passwd)


@@ -96,7 +96,7 @@ Typer-based CLI with subcommand modules:
cli/
├── app.py # Shared Typer app, version callback
├── common.py # Shared helpers, options, progress utilities
├── config.py # config subcommand (init, show, path, validate, edit, symlink)
├── config.py # config subcommand (init, init-env, show, path, validate, edit, symlink)
├── lifecycle.py # up, down, stop, pull, restart, update, apply, compose
├── management.py # refresh, check, init-network, traefik-file
├── monitoring.py # logs, ps, stats
@@ -343,3 +343,19 @@ For repeated connections to the same host, SSH reuses connections.
```
Icons use [Lucide](https://lucide.dev/). Add new icons as macros in `web/templates/partials/icons.html`.
### Host Resource Monitoring (`src/compose_farm/glances.py`)
Integration with [Glances](https://nicolargo.github.io/glances/) for real-time host stats:
- Fetches CPU, memory, and load from Glances REST API on each host
- Used by web UI dashboard to display host resource usage
- Requires `glances_stack` config option pointing to a Glances stack running on all hosts
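
A rough sketch of the kind of request this integration makes, assuming Glances v4's REST API on its default port (host name is illustrative):

```bash
curl -s http://nas:61208/api/4/cpu    # similarly /api/4/mem and /api/4/load
```
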
### Container Registry Client (`src/compose_farm/registry.py`)
OCI Distribution API client for checking image updates:
- Parses image references (registry, namespace, name, tag, digest)
- Fetches available tags from Docker Hub, GHCR, and other registries
- Compares semantic versions to find newer releases
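
For illustration, the underlying OCI Distribution API exchange looks roughly like this against Docker Hub (requires curl and jq; the module's own requests and auth handling may differ):

```bash
# Anonymous pull token, then the standard /v2/<name>/tags/list endpoint
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/alpine:pull" | jq -r .token)
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://registry-1.docker.io/v2/library/alpine/tags/list"
```
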


@@ -8,19 +8,22 @@ The Compose Farm CLI is available as both `compose-farm` and the shorter alias `
## Command Overview
Commands are either **Docker Compose wrappers** (`up`, `down`, `stop`, `restart`, `pull`, `logs`, `ps`, `compose`) with multi-host superpowers, or **Compose Farm originals** (`apply`, `update`, `refresh`, `check`) for orchestration Docker Compose can't do.
| Category | Command | Description |
|----------|---------|-------------|
| **Lifecycle** | `apply` | Make reality match config |
| | `up` | Start stacks |
| | `down` | Stop stacks |
| | `stop` | Stop services without removing containers |
| | `restart` | Restart stacks (down + up) |
| | `update` | Update stacks (pull + build + down + up) |
| | `restart` | Restart running containers |
| | `update` | Shorthand for `up --pull --build` |
| | `pull` | Pull latest images |
| | `compose` | Run any docker compose command |
| **Monitoring** | `ps` | Show stack status |
| | `logs` | Show stack logs |
| | `stats` | Show overview statistics |
| | `list` | List stacks and hosts |
| **Configuration** | `check` | Validate config and mounts |
| | `refresh` | Sync state from reality |
| | `init-network` | Create Docker network |
@@ -36,6 +39,19 @@ cf --version, -v # Show version
cf --help, -h # Show help
```
## Command Aliases
Short aliases for frequently used commands:
| Alias | Command | Alias | Command |
|-------|---------|-------|---------|
| `cf a` | `apply` | `cf s` | `stats` |
| `cf l` | `logs` | `cf ls` | `list` |
| `cf r` | `restart` | `cf rf` | `refresh` |
| `cf u` | `update` | `cf ck` | `check` |
| `cf p` | `pull` | `cf tf` | `traefik-file` |
| `cf c` | `compose` | | |
---
## Lifecycle Commands
@@ -58,14 +74,16 @@ cf apply [OPTIONS]
|--------|-------------|
| `--dry-run, -n` | Preview changes without executing |
| `--no-orphans` | Skip stopping orphaned stacks |
| `--full, -f` | Also refresh running stacks |
| `--no-strays` | Skip stopping stray stacks (running on wrong host) |
| `--full, -f` | Also run up on all stacks (applies compose/env changes, triggers migrations) |
| `--config, -c PATH` | Path to config file |
**What it does:**
1. Stops orphaned stacks (in state but removed from config)
2. Migrates stacks on wrong host
3. Starts missing stacks (in config but not running)
2. Stops stray stacks (running on unauthorized hosts)
3. Migrates stacks on wrong host
4. Starts missing stacks (in config but not running)
**Examples:**
@@ -79,7 +97,10 @@ cf apply
# Only start/migrate, don't stop orphans
cf apply --no-orphans
# Also refresh all running stacks
# Don't stop stray stacks
cf apply --no-strays
# Also run up on all stacks (applies compose/env changes, triggers migrations)
cf apply --full
```
@@ -100,6 +121,8 @@ cf up [OPTIONS] [STACKS]...
| `--all, -a` | Start all stacks |
| `--host, -H TEXT` | Filter to stacks on this host |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--pull` | Pull images before starting (`--pull always`) |
| `--build` | Build images before starting |
| `--config, -c PATH` | Path to config file |
**Examples:**
@@ -197,7 +220,7 @@ cf stop immich --service database
### cf restart
Restart stacks (down + up). With `--service`, restarts just that service.
Restart running containers (`docker compose restart`). With `--service`, restarts just that service.
```bash
cf restart [OPTIONS] [STACKS]...
@@ -225,7 +248,7 @@ cf restart immich --service database
### cf update
Update stacks (pull + build + down + up). With `--service`, updates just that service.
Update stacks (pull + build + up). Shorthand for `up --pull --build`. With `--service`, updates just that service.
<video autoplay loop muted playsinline>
<source src="/assets/update.webm" type="video/webm">
@@ -445,6 +468,40 @@ cf stats --live
---
### cf list
List all stacks and their assigned hosts.
```bash
cf list [OPTIONS]
```
**Options:**
| Option | Description |
|--------|-------------|
| `--host, -H TEXT` | Filter to stacks on this host |
| `--simple, -s` | Plain output for scripting (one stack per line) |
| `--config, -c PATH` | Path to config file |
**Examples:**
```bash
# List all stacks
cf list
# Filter by host
cf list --host nas
# Plain output for scripting
cf list --simple
# Combine: list stack names on a specific host
cf list --host nuc --simple
```
---
## Configuration Commands
### cf check
@@ -587,6 +644,7 @@ cf config COMMAND
| Command | Description |
|---------|-------------|
| `init` | Create new config with examples |
| `init-env` | Generate .env file for Docker deployment |
| `show` | Display config with highlighting |
| `path` | Print config file path |
| `validate` | Validate syntax and schema |
@@ -598,6 +656,7 @@ cf config COMMAND
| Subcommand | Options |
|------------|---------|
| `init` | `--path/-p PATH`, `--force/-f` |
| `init-env` | `--path/-p PATH`, `--output/-o PATH`, `--force/-f` |
| `show` | `--path/-p PATH`, `--raw/-r` |
| `edit` | `--path/-p PATH` |
| `path` | `--path/-p PATH` |
@@ -633,6 +692,12 @@ cf config symlink
# Create symlink to specific file
cf config symlink /opt/compose-farm/config.yaml
# Generate .env file in current directory
cf config init-env
# Generate .env at specific path
cf config init-env -o /opt/stacks/.env
```
---

View File

@@ -107,7 +107,7 @@ Supported compose file names (checked in order):
### traefik_file
Path to auto-generated Traefik file-provider config. When set, Compose Farm regenerates this file after `up`, `down`, `restart`, and `update` commands.
Path to auto-generated Traefik file-provider config. When set, Compose Farm regenerates this file after `up`, `down`, and `update` commands.
```yaml
traefik_file: /opt/traefik/dynamic.d/compose-farm.yml
@@ -121,6 +121,16 @@ Stack name running Traefik. Stacks on the same host are skipped in file-provider
traefik_stack: traefik
```
### glances_stack
Stack name running [Glances](https://nicolargo.github.io/glances/) for host resource monitoring. When set, the CLI (`cf stats --containers`) and web UI display CPU, memory, and container stats for all hosts.
```yaml
glances_stack: glances
```
The Glances stack should run on all hosts and expose port 61208. See the README for full setup instructions.
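A quick way to verify that every host actually exposes the Glances port is a plain TCP check. A minimal sketch, with placeholder host names and addresses standing in for your `hosts:` section:

```python
import socket

HOSTS = {"nas": "192.168.1.10", "nuc": "192.168.1.11"}  # placeholders
GLANCES_PORT = 61208

for name, address in HOSTS.items():
    try:
        with socket.create_connection((address, GLANCES_PORT), timeout=2):
            print(f"{name}: Glances reachable on {address}:{GLANCES_PORT}")
    except OSError as exc:
        print(f"{name}: not reachable ({exc})")
```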
## Hosts Configuration
### Basic Host
@@ -257,6 +267,25 @@ When generating Traefik config, Compose Farm resolves `${VAR}` and `${VAR:-defau
1. The stack's `.env` file
2. Current environment
### Compose Farm Environment Variables
These environment variables configure Compose Farm itself:
| Variable | Description |
|----------|-------------|
| `CF_CONFIG` | Path to config file |
| `CF_WEB_STACK` | Web UI stack name (Docker only, enables self-update detection and local host inference) |
**Docker deployment variables** (used in docker-compose.yml):
| Variable | Description | Generated by |
|----------|-------------|--------------|
| `CF_COMPOSE_DIR` | Compose files directory | `cf config init-env` |
| `CF_UID` / `CF_GID` | User/group ID for containers | `cf config init-env` |
| `CF_HOME` / `CF_USER` | Home directory and username | `cf config init-env` |
| `CF_SSH_DIR` | SSH keys volume mount | Manual |
| `CF_XDG_CONFIG` | Config backup volume mount | Manual |
## Config Commands
### Initialize Config

View File

@@ -1,5 +1,5 @@
# Update Demo
# Shows updating stacks (pull + build + down + up)
# Shows updating stacks (only recreates containers if images changed)
Output docs/assets/update.gif
Output docs/assets/update.webm

View File

@@ -17,14 +17,13 @@ curl -O https://raw.githubusercontent.com/basnijholt/compose-farm/main/docker-co
**2. Generate `.env` file:**
```bash
cf config init-env -o .env
cf config init-env
```
This auto-detects settings from your `compose-farm.yaml`:
- `DOMAIN` from existing traefik labels
- `CF_COMPOSE_DIR` from config
- `CF_UID/GID/HOME/USER` from current user
- `CF_LOCAL_HOST` by matching local IPs to config hosts
Review the output and edit if needed.
@@ -59,17 +58,12 @@ $EDITOR .env
| `DOMAIN` | Extracted from traefik labels in your stacks |
| `CF_COMPOSE_DIR` | From `compose_dir` in your config |
| `CF_UID/GID/HOME/USER` | From current user (for NFS compatibility) |
| `CF_LOCAL_HOST` | By matching local IPs to configured hosts |
If auto-detection fails for any value, edit the `.env` file manually.
### Glances Monitoring
To show host CPU/memory stats in the dashboard, deploy [Glances](https://nicolargo.github.io/glances/) on your hosts. If `CF_LOCAL_HOST` wasn't detected correctly, set it to your local hostname:
```bash
CF_LOCAL_HOST=nas # Replace with your local host name
```
To show host CPU/memory stats in the dashboard, deploy [Glances](https://nicolargo.github.io/glances/) on your hosts. When running the web UI container, Compose Farm infers the local host from `CF_WEB_STACK` and uses the Glances container name for that host.
See [Host Resource Monitoring](https://github.com/basnijholt/compose-farm#host-resource-monitoring-glances) in the README.
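The inference rule is small: if `CF_WEB_STACK` names a stack that is pinned to exactly one host, that host is treated as local. A simplified sketch of the check (the real logic lives on the `Config` class shown later in this diff):

```python
import os


def infer_local_host(stacks: dict[str, str | list[str]]) -> str | None:
    """Return the host the web UI stack runs on, or None outside container mode."""
    web_stack = os.environ.get("CF_WEB_STACK")
    if not web_stack or web_stack not in stacks:
        return None
    hosts = stacks[web_stack]
    host_list = hosts if isinstance(hosts, list) else [hosts]
    return host_list[0] if len(host_list) == 1 else None


print(infer_local_host({"compose-farm": "nas", "traefik": "nas"}))  # -> "nas"
```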
@@ -85,15 +79,6 @@ Regenerate keys:
docker compose run --rm cf ssh setup
```
### Glances shows error for local host
Add your local hostname to `.env`:
```bash
echo "CF_LOCAL_HOST=nas" >> .env
docker compose restart web
```
### Files created as root
Add the non-root variables above and restart.
@@ -111,6 +96,6 @@ For advanced users, here's the complete reference:
| `CF_UID` / `CF_GID` | User/group ID | `0` (root) |
| `CF_HOME` | Home directory | `/root` |
| `CF_USER` | Username for SSH | `root` |
| `CF_LOCAL_HOST` | Local hostname for Glances | *(auto-detect)* |
| `CF_WEB_STACK` | Web UI stack name (enables self-update, local host inference) | *(none)* |
| `CF_SSH_DIR` | SSH keys directory | `~/.ssh/compose-farm` |
| `CF_XDG_CONFIG` | Config/backup directory | `~/.config/compose-farm` |

View File

@@ -329,7 +329,7 @@ cf apply
```bash
cf update --all
# Runs: pull + build + down + up for each stack
# Only recreates containers if images changed
```
## Next Steps

View File

@@ -0,0 +1,6 @@
<!-- Privacy-friendly analytics by Plausible -->
<script async src="https://plausible.nijho.lt/js/pa-NRX7MolONWKTUREJpAjkB.js"></script>
<script>
window.plausible=window.plausible||function(){(plausible.q=plausible.q||[]).push(arguments)},plausible.init=plausible.init||function(i){plausible.o=i||{}};
plausible.init()
</script>

View File

@@ -139,7 +139,6 @@ stacks:
With `traefik_file` set, these commands auto-regenerate the config:
- `cf up`
- `cf down`
- `cf restart`
- `cf update`
- `cf apply`

View File

@@ -7,9 +7,10 @@ Real-world examples demonstrating compose-farm patterns for multi-host Docker de
| Stack | Type | Demonstrates |
|---------|------|--------------|
| [traefik](traefik/) | Infrastructure | Reverse proxy, Let's Encrypt, file-provider |
| [coredns](coredns/) | Infrastructure | Wildcard DNS for `*.local` domains |
| [mealie](mealie/) | Single container | Traefik labels, resource limits, environment vars |
| [uptime-kuma](uptime-kuma/) | Single container | Docker socket, user mapping, custom DNS |
| [paperless-ngx](paperless-ngx/) | Multi-container | Redis + App stack (SQLite) |
| [paperless-ngx](paperless-ngx/) | Multi-container | Redis + PostgreSQL + App stack |
| [autokuma](autokuma/) | Multi-host | Demonstrates `all` keyword (runs on every host) |
## Key Patterns
@@ -53,7 +54,8 @@ labels:
- traefik.http.routers.myapp-local.entrypoints=web
```
> **Note:** `.local` domains require local DNS (e.g., Pi-hole, Technitium) to resolve to your Traefik host.
> **Note:** `.local` domains require local DNS to resolve to your Traefik host.
> The [coredns](coredns/) example provides this - edit `Corefile` to set your Traefik IP.
### Environment Variables
@@ -88,23 +90,6 @@ stacks:
autokuma: all # Runs on every configured host
```
### Multi-Container Stacks
Database-backed apps with multiple services:
```yaml
services:
redis:
image: redis:7
app:
depends_on:
- redis
```
> **NFS + PostgreSQL Warning:** PostgreSQL should NOT run on NFS storage due to
> fsync and file locking issues. Use SQLite (safe for single-writer on NFS) or
> keep PostgreSQL data on local volumes (non-migratable).
### AutoKuma Labels (Optional)
The autokuma example demonstrates compose-farm's **multi-host feature** - running the same stack on all hosts using the `all` keyword. AutoKuma itself is not part of compose-farm; it's just a good example because it needs to run on every host to monitor local Docker containers.
@@ -125,8 +110,8 @@ cd examples
# 1. Create the shared network on all hosts
compose-farm init-network
# 2. Start Traefik first (the reverse proxy)
compose-farm up traefik
# 2. Start infrastructure (reverse proxy + DNS)
compose-farm up traefik coredns
# 3. Start other stacks
compose-farm up mealie uptime-kuma
@@ -168,4 +153,4 @@ traefik_file: /opt/stacks/traefik/dynamic.d/compose-farm.yml
traefik_stack: traefik
```
With `traefik_file` configured, compose-farm automatically regenerates the config after `up`, `down`, `restart`, and `update` commands.
With `traefik_file` configured, compose-farm automatically regenerates the config after `up`, `down`, and `update` commands.

View File

@@ -3,6 +3,7 @@ deployed:
- primary
- secondary
- local
coredns: primary
mealie: secondary
paperless-ngx: primary
traefik: primary

View File

@@ -5,7 +5,7 @@
compose_dir: /opt/stacks/compose-farm/examples
# Auto-regenerate Traefik file-provider config after up/down/restart/update
# Auto-regenerate Traefik file-provider config after up/down/update
traefik_file: /opt/stacks/compose-farm/examples/traefik/dynamic.d/compose-farm.yml
traefik_stack: traefik # Skip Traefik's host in file-provider (docker provider handles it)
@@ -27,6 +27,7 @@ hosts:
stacks:
# Infrastructure (runs on primary where Traefik is)
traefik: primary
coredns: primary # DNS for *.local resolution
# Multi-host stacks (runs on ALL hosts)
# AutoKuma monitors Docker containers on each host

examples/coredns/.env
View File

@@ -0,0 +1,2 @@
# CoreDNS doesn't need environment variables
# The Traefik IP is configured in the Corefile

examples/coredns/Corefile
View File

@@ -0,0 +1,22 @@
# CoreDNS configuration for .local domain resolution
#
# Resolves *.local to the Traefik host IP (where your reverse proxy runs).
# All other queries are forwarded to upstream DNS.
# Handle .local domains - resolve everything to Traefik's host
local {
template IN A {
answer "{{ .Name }} 60 IN A 192.168.1.10"
}
template IN AAAA {
# Return empty for AAAA to avoid delays on IPv4-only networks
rcode NOERROR
}
}
# Forward everything else to upstream DNS
. {
forward . 1.1.1.1 8.8.8.8
cache 300
errors
}

View File

@@ -0,0 +1,27 @@
# CoreDNS - DNS server for .local domain resolution
#
# Demonstrates:
# - Wildcard DNS for *.local domains
# - Config file mounting from stack directory
# - UDP/TCP port exposure
#
# This enables all the .local routes in the examples to work.
# Point your devices/router DNS to this server's IP.
name: coredns
services:
coredns:
image: coredns/coredns:latest
container_name: coredns
restart: unless-stopped
networks:
- mynetwork
ports:
- "53:53/udp"
- "53:53/tcp"
volumes:
- ./Corefile:/root/Corefile:ro
command: -conf /root/Corefile
networks:
mynetwork:
external: true

View File

@@ -1,3 +1,4 @@
# Copy to .env and fill in your values
DOMAIN=example.com
PAPERLESS_SECRET_KEY=change-me-to-a-random-string
POSTGRES_PASSWORD=change-me-to-a-secure-password
PAPERLESS_SECRET_KEY=change-me-to-a-long-random-string

View File

@@ -1,44 +1,57 @@
# Paperless-ngx - Document management system
#
# Demonstrates:
# - HTTPS route: paperless.${DOMAIN} (e.g., paperless.example.com) with Let's Encrypt
# - HTTP route: paperless.local for LAN access without TLS
# - Multi-container stack (Redis + App with SQLite)
#
# NOTE: This example uses SQLite (the default) instead of PostgreSQL.
# PostgreSQL should NOT be used with NFS storage due to fsync/locking issues.
# If you need PostgreSQL, use local volumes for the database.
# - HTTPS route: paperless.${DOMAIN} with Let's Encrypt
# - HTTP route: paperless.local for LAN access
# - Multi-container stack (Redis + PostgreSQL + App)
# - Separate env_file for app-specific settings
name: paperless-ngx
services:
redis:
image: redis:8
broker:
image: redis:7
container_name: paperless-redis
restart: unless-stopped
networks:
- mynetwork
volumes:
- /mnt/data/paperless/redis:/data
- /mnt/data/paperless/redisdata:/data
db:
image: postgres:16
container_name: paperless-db
restart: unless-stopped
networks:
- mynetwork
volumes:
- /mnt/data/paperless/pgdata:/var/lib/postgresql/data
environment:
POSTGRES_DB: paperless
POSTGRES_USER: paperless
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
paperless:
image: ghcr.io/paperless-ngx/paperless-ngx:latest
container_name: paperless
restart: unless-stopped
depends_on:
- redis
- db
- broker
networks:
- mynetwork
ports:
- "8000:8000"
volumes:
# SQLite database stored here (safe on NFS for single-writer)
- /mnt/data/paperless/data:/usr/src/paperless/data
- /mnt/data/paperless/media:/usr/src/paperless/media
- /mnt/data/paperless/export:/usr/src/paperless/export
- /mnt/data/paperless/consume:/usr/src/paperless/consume
environment:
PAPERLESS_REDIS: redis://redis:6379
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
PAPERLESS_URL: https://paperless.${DOMAIN}
PAPERLESS_SECRET_KEY: ${PAPERLESS_SECRET_KEY}
PAPERLESS_TIME_ZONE: America/Los_Angeles
PAPERLESS_OCR_LANGUAGE: eng
USERMAP_UID: 1000
USERMAP_GID: 1000
labels:

View File

@@ -124,6 +124,10 @@ python_version = "3.11"
strict = true
plugins = ["pydantic.mypy"]
[[tool.mypy.overrides]]
module = "compose_farm._version"
ignore_missing_imports = true
[[tool.mypy.overrides]]
module = "asyncssh.*"
ignore_missing_imports = true
@@ -174,8 +178,12 @@ python-version = "3.11"
exclude = [
"hatch_build.py", # Build-time only, hatchling not in dev deps
"docs/demos/**", # Demo scripts with local conftest imports
"src/compose_farm/_version.py", # Generated at build time
]
[tool.ty.rules]
unresolved-import = "ignore" # _version.py is generated at build time
[dependency-groups]
dev = [
"mypy>=1.19.0",

View File

@@ -4,7 +4,6 @@ from __future__ import annotations
import asyncio
import contextlib
import os
from pathlib import Path
from typing import TYPE_CHECKING, Annotated, TypeVar
@@ -69,21 +68,6 @@ ServiceOption = Annotated[
_MISSING_PATH_PREVIEW_LIMIT = 2
_STATS_PREVIEW_LIMIT = 3 # Max number of pending migrations to show by name
# Environment variable to identify the web stack (for self-update ordering)
CF_WEB_STACK = os.environ.get("CF_WEB_STACK", "")
def sort_web_stack_last(stacks: list[str]) -> list[str]:
"""Move the web stack to the end of the list.
When updating all stacks, the web UI stack (compose-farm) should be updated
last. Otherwise, the container restarts mid-process and cancels remaining
updates. The CF_WEB_STACK env var identifies the web stack.
"""
if CF_WEB_STACK and CF_WEB_STACK in stacks:
return [s for s in stacks if s != CF_WEB_STACK] + [CF_WEB_STACK]
return stacks
def format_host(host: str | list[str]) -> str:
"""Format a host value for display."""

View File

@@ -326,23 +326,13 @@ def _detect_domain(cfg: Config) -> str | None:
return None
def _detect_local_host(cfg: Config) -> str | None:
"""Find which config host matches local machine's IPs."""
from compose_farm.executor import is_local # noqa: PLC0415
for name, host in cfg.hosts.items():
if is_local(host):
return name
return None
@config_app.command("init-env")
def config_init_env(
path: _PathOption = None,
output: Annotated[
Path | None,
typer.Option(
"--output", "-o", help="Output .env file path. Defaults to .env in config directory."
"--output", "-o", help="Output .env file path. Defaults to .env in current directory."
),
] = None,
force: _ForceOption = False,
@@ -350,21 +340,21 @@ def config_init_env(
"""Generate a .env file for Docker deployment.
Reads the compose-farm.yaml config and auto-detects settings:
- CF_COMPOSE_DIR from compose_dir
- CF_LOCAL_HOST by detecting which config host matches local IPs
- CF_UID/GID/HOME/USER from current user
- DOMAIN from traefik labels in stacks (if found)
Example::
cf config init-env # Create .env next to config
cf config init-env -o .env # Create .env in current directory
cf config init-env # Create .env in current directory
cf config init-env -o /path/to/.env # Create .env at specific path
"""
config_file, cfg = _load_config_with_path(path)
# Determine output path
env_path = output.expanduser().resolve() if output else config_file.parent / ".env"
# Determine output path (default: current directory)
env_path = output.expanduser().resolve() if output else Path.cwd() / ".env"
if env_path.exists() and not force:
console.print(f"[yellow].env file already exists:[/] {env_path}")
@@ -378,7 +368,6 @@ def config_init_env(
home = os.environ.get("HOME", "/root")
user = os.environ.get("USER", "root")
compose_dir = str(cfg.compose_dir)
local_host = _detect_local_host(cfg)
domain = _detect_domain(cfg)
# Generate .env content
@@ -398,9 +387,6 @@ def config_init_env(
f"CF_HOME={home}",
f"CF_USER={user}",
"",
"# Local hostname for Glances integration",
f"CF_LOCAL_HOST={local_host or '# auto-detect failed - set manually'}",
"",
]
env_path.write_text("\n".join(lines), encoding="utf-8")
@@ -411,7 +397,6 @@ def config_init_env(
console.print(f" DOMAIN: {domain or '[yellow]example.com[/] (edit this)'}")
console.print(f" CF_COMPOSE_DIR: {compose_dir}")
console.print(f" CF_UID/GID: {uid}:{gid}")
console.print(f" CF_LOCAL_HOST: {local_host or '[yellow]not detected[/] (set manually)'}")
console.print()
console.print("[dim]Review and edit as needed:[/dim]")
console.print(f" [cyan]$EDITOR {env_path}[/cyan]")

View File

@@ -2,6 +2,7 @@
from __future__ import annotations
import shlex
from pathlib import Path
from typing import TYPE_CHECKING, Annotated
@@ -23,14 +24,14 @@ from compose_farm.cli.common import (
maybe_regenerate_traefik,
report_results,
run_async,
sort_web_stack_last,
validate_host_for_stack,
validate_stacks,
)
from compose_farm.cli.management import _discover_stacks_full
from compose_farm.console import MSG_DRY_RUN, console, print_error, print_success
from compose_farm.executor import run_compose_on_host, run_on_stacks, run_sequential_on_stacks
from compose_farm.executor import run_compose_on_host, run_on_stacks
from compose_farm.operations import (
build_up_cmd,
stop_orphaned_stacks,
stop_stray_stacks,
up_stacks,
@@ -50,6 +51,14 @@ def up(
all_stacks: AllOption = False,
host: HostOption = None,
service: ServiceOption = None,
pull: Annotated[
bool,
typer.Option("--pull", help="Pull images before starting (--pull always)"),
] = False,
build: Annotated[
bool,
typer.Option("--build", help="Build images before starting"),
] = False,
config: ConfigOption = None,
) -> None:
"""Start stacks (docker compose up -d). Auto-migrates if host changed."""
@@ -59,9 +68,13 @@ def up(
print_error("--service requires exactly one stack")
raise typer.Exit(1)
# For service-level up, use run_on_stacks directly (no migration logic)
results = run_async(run_on_stacks(cfg, stack_list, f"up -d {service}", raw=True))
results = run_async(
run_on_stacks(
cfg, stack_list, build_up_cmd(pull=pull, build=build, service=service), raw=True
)
)
else:
results = run_async(up_stacks(cfg, stack_list, raw=True))
results = run_async(up_stacks(cfg, stack_list, raw=True, pull=pull, build=build))
maybe_regenerate_traefik(cfg, results)
report_results(results)
@@ -162,21 +175,17 @@ def restart(
service: ServiceOption = None,
config: ConfigOption = None,
) -> None:
"""Restart stacks (down + up). With --service, restarts just that service."""
"""Restart running containers (docker compose restart)."""
stack_list, cfg = get_stacks(stacks or [], all_stacks, config)
if service:
if len(stack_list) != 1:
print_error("--service requires exactly one stack")
raise typer.Exit(1)
# For service-level restart, use docker compose restart (more efficient)
raw = True
results = run_async(run_on_stacks(cfg, stack_list, f"restart {service}", raw=raw))
cmd = f"restart {service}"
else:
# Sort web stack last to avoid self-restart canceling remaining restarts
stack_list = sort_web_stack_last(stack_list)
raw = len(stack_list) == 1
results = run_async(run_sequential_on_stacks(cfg, stack_list, ["down", "up -d"], raw=raw))
maybe_regenerate_traefik(cfg, results)
cmd = "restart"
raw = len(stack_list) == 1
results = run_async(run_on_stacks(cfg, stack_list, cmd, raw=raw))
report_results(results)
@@ -187,38 +196,8 @@ def update(
service: ServiceOption = None,
config: ConfigOption = None,
) -> None:
"""Update stacks (pull + build + down + up). With --service, updates just that service."""
stack_list, cfg = get_stacks(stacks or [], all_stacks, config)
if service:
if len(stack_list) != 1:
print_error("--service requires exactly one stack")
raise typer.Exit(1)
# For service-level update: pull + build + stop + up (stop instead of down)
raw = True
results = run_async(
run_sequential_on_stacks(
cfg,
stack_list,
[
f"pull --ignore-buildable {service}",
f"build {service}",
f"stop {service}",
f"up -d {service}",
],
raw=raw,
)
)
else:
# Sort web stack last to avoid self-restart canceling remaining updates
stack_list = sort_web_stack_last(stack_list)
raw = len(stack_list) == 1
results = run_async(
run_sequential_on_stacks(
cfg, stack_list, ["pull --ignore-buildable", "build", "down", "up -d"], raw=raw
)
)
maybe_regenerate_traefik(cfg, results)
report_results(results)
"""Update stacks (pull + build + up). Shorthand for 'up --pull --build'."""
up(stacks=stacks, all_stacks=all_stacks, service=service, pull=True, build=True, config=config)
def _discover_strays(cfg: Config) -> dict[str, list[str]]:
@@ -333,11 +312,6 @@ def apply( # noqa: C901, PLR0912, PLR0915 (multi-phase reconciliation needs the
console.print(f"\n{MSG_DRY_RUN}")
return
# Sort web stack last in each phase to avoid self-restart canceling remaining work
migrations = sort_web_stack_last(migrations)
missing = sort_web_stack_last(missing)
to_refresh = sort_web_stack_last(to_refresh)
# Execute changes
console.print()
all_results = []
@@ -420,10 +394,10 @@ def compose(
else:
target_host = hosts[0]
# Build the full compose command
# Build the full compose command (quote args to preserve spaces)
full_cmd = command
if args:
full_cmd += " " + " ".join(args)
full_cmd += " " + " ".join(shlex.quote(arg) for arg in args)
# Run with raw=True for proper TTY handling (progress bars, interactive)
result = run_async(run_compose_on_host(cfg, resolved_stack, target_host, full_cmd, raw=True))
@@ -433,5 +407,9 @@ def compose(
raise typer.Exit(result.exit_code)
# Alias: cf a = cf apply
app.command("a", hidden=True)(apply)
# Aliases (hidden from help, shown in --help as "Aliases: ...")
app.command("a", hidden=True)(apply) # cf a = cf apply
app.command("r", hidden=True)(restart) # cf r = cf restart
app.command("u", hidden=True)(update) # cf u = cf update
app.command("p", hidden=True)(pull) # cf p = cf pull
app.command("c", hidden=True)(compose) # cf c = cf compose

View File

@@ -659,3 +659,9 @@ def init_network(
failed = [r for r in results if not r.success]
if failed:
raise typer.Exit(1)
# Aliases (hidden from help)
app.command("rf", hidden=True)(refresh) # cf rf = cf refresh
app.command("ck", hidden=True)(check) # cf ck = cf check
app.command("tf", hidden=True)(traefik_file) # cf tf = cf traefik-file

View File

@@ -21,17 +21,22 @@ from compose_farm.cli.common import (
report_results,
run_async,
run_parallel_with_progress,
validate_hosts,
)
from compose_farm.console import console, print_error
from compose_farm.console import console, print_error, print_warning
from compose_farm.executor import run_command, run_on_stacks
from compose_farm.state import get_stacks_needing_migration, group_stacks_by_host, load_state
if TYPE_CHECKING:
from collections.abc import Callable
from compose_farm.config import Config
from compose_farm.glances import ContainerStats
def _get_container_counts(cfg: Config) -> dict[str, int]:
"""Get container counts from all hosts with a progress bar."""
def _get_container_counts(cfg: Config, hosts: list[str] | None = None) -> dict[str, int]:
"""Get container counts from hosts with a progress bar."""
host_list = hosts if hosts is not None else list(cfg.hosts.keys())
async def get_count(host_name: str) -> tuple[str, int]:
host = cfg.hosts[host_name]
@@ -44,7 +49,7 @@ def _get_container_counts(cfg: Config) -> dict[str, int]:
results = run_parallel_with_progress(
"Querying hosts",
list(cfg.hosts.keys()),
host_list,
get_count,
)
return dict(results)
@@ -67,7 +72,7 @@ def _build_host_table(
if show_containers:
table.add_column("Containers", justify="right")
for host_name in sorted(cfg.hosts.keys()):
for host_name in sorted(stacks_by_host.keys()):
host = cfg.hosts[host_name]
configured = len(stacks_by_host[host_name])
running = len(running_by_host[host_name])
@@ -86,19 +91,46 @@ def _build_host_table(
return table
def _state_includes_host(host_value: str | list[str], host_name: str) -> bool:
"""Check whether a state entry includes the given host."""
if isinstance(host_value, list):
return host_name in host_value
return host_value == host_name
def _build_summary_table(
cfg: Config, state: dict[str, str | list[str]], pending: list[str]
cfg: Config,
state: dict[str, str | list[str]],
pending: list[str],
*,
host_filter: str | None = None,
) -> Table:
"""Build the summary table."""
on_disk = cfg.discover_compose_dirs()
if host_filter:
stacks_configured = [stack for stack in cfg.stacks if host_filter in cfg.get_hosts(stack)]
stacks_configured_set = set(stacks_configured)
state = {
stack: hosts
for stack, hosts in state.items()
if _state_includes_host(hosts, host_filter)
}
on_disk = {stack for stack in on_disk if stack in stacks_configured_set}
total_hosts = 1
stacks_configured_count = len(stacks_configured)
stacks_tracked_count = len(state)
else:
total_hosts = len(cfg.hosts)
stacks_configured_count = len(cfg.stacks)
stacks_tracked_count = len(state)
table = Table(title="Summary", show_header=False)
table.add_column("Label", style="dim")
table.add_column("Value", style="bold")
table.add_row("Total hosts", str(len(cfg.hosts)))
table.add_row("Stacks (configured)", str(len(cfg.stacks)))
table.add_row("Stacks (tracked)", str(len(state)))
table.add_row("Total hosts", str(total_hosts))
table.add_row("Stacks (configured)", str(stacks_configured_count))
table.add_row("Stacks (tracked)", str(stacks_tracked_count))
table.add_row("Compose files on disk", str(len(on_disk)))
if pending:
@@ -111,6 +143,81 @@ def _build_summary_table(
return table
def _format_network(rx: int, tx: int, fmt: Callable[[int], str]) -> str:
"""Format network I/O."""
return f"[dim]↓[/]{fmt(rx)} [dim]↑[/]{fmt(tx)}"
def _cpu_style(percent: float) -> str:
"""Rich style for CPU percentage."""
if percent > 80: # noqa: PLR2004
return "red"
if percent > 50: # noqa: PLR2004
return "yellow"
return "green"
def _mem_style(percent: float) -> str:
"""Rich style for memory percentage."""
if percent > 90: # noqa: PLR2004
return "red"
if percent > 70: # noqa: PLR2004
return "yellow"
return "green"
def _status_style(status: str) -> str:
"""Rich style for container status."""
s = status.lower()
if s == "running":
return "green"
if s == "exited":
return "red"
if s == "paused":
return "yellow"
return "dim"
def _build_containers_table(
containers: list[ContainerStats],
host_filter: str | None = None,
) -> Table:
"""Build Rich table for container stats."""
from compose_farm.glances import format_bytes # noqa: PLC0415
table = Table(title="Containers", show_header=True, header_style="bold cyan")
table.add_column("Stack", style="cyan")
table.add_column("Service", style="dim")
table.add_column("Host", style="magenta")
table.add_column("Image")
table.add_column("Status")
table.add_column("Uptime", justify="right")
table.add_column("CPU%", justify="right")
table.add_column("Memory", justify="right")
table.add_column("Net I/O", justify="right")
if host_filter:
containers = [c for c in containers if c.host == host_filter]
# Sort by stack, then service
containers = sorted(containers, key=lambda c: (c.stack.lower(), c.service.lower()))
for c in containers:
table.add_row(
c.stack or c.name,
c.service or c.name,
c.host,
c.image,
f"[{_status_style(c.status)}]{c.status}[/]",
c.uptime or "[dim]-[/]",
f"[{_cpu_style(c.cpu_percent)}]{c.cpu_percent:.1f}%[/]",
f"[{_mem_style(c.memory_percent)}]{format_bytes(c.memory_usage)}[/]",
_format_network(c.network_rx, c.network_tx, format_bytes),
)
return table
# --- Command functions ---
@@ -175,24 +282,66 @@ def stats(
bool,
typer.Option("--live", "-l", help="Query Docker for live container stats"),
] = False,
containers: Annotated[
bool,
typer.Option(
"--containers", "-C", help="Show per-container resource stats (requires Glances)"
),
] = False,
host: HostOption = None,
config: ConfigOption = None,
) -> None:
"""Show overview statistics for hosts and stacks.
Without --live: Shows config/state info (hosts, stacks, pending migrations).
Without flags: Shows config/state info (hosts, stacks, pending migrations).
With --live: Also queries Docker on each host for container counts.
With --containers: Shows per-container resource stats (requires Glances).
"""
cfg = load_config_or_exit(config)
host_filter = None
if host:
validate_hosts(cfg, host)
host_filter = host
# Handle --containers mode
if containers:
if not cfg.glances_stack:
print_error("Glances not configured")
console.print("[dim]Add 'glances_stack: glances' to compose-farm.yaml[/]")
raise typer.Exit(1)
from compose_farm.glances import fetch_all_container_stats # noqa: PLC0415
host_list = [host_filter] if host_filter else None
container_list = run_async(fetch_all_container_stats(cfg, hosts=host_list))
if not container_list:
print_warning("No containers found")
raise typer.Exit(0)
console.print(_build_containers_table(container_list, host_filter=host_filter))
return
# Validate and filter by host if specified
if host_filter:
all_hosts = [host_filter]
selected_hosts = {host_filter: cfg.hosts[host_filter]}
else:
all_hosts = list(cfg.hosts.keys())
selected_hosts = cfg.hosts
state = load_state(cfg)
pending = get_stacks_needing_migration(cfg)
all_hosts = list(cfg.hosts.keys())
stacks_by_host = group_stacks_by_host(cfg.stacks, cfg.hosts, all_hosts)
running_by_host = group_stacks_by_host(state, cfg.hosts, all_hosts)
# Filter pending migrations to selected host(s)
if host_filter:
pending = [stack for stack in pending if host_filter in cfg.get_hosts(stack)]
stacks_by_host = group_stacks_by_host(cfg.stacks, selected_hosts, all_hosts)
running_by_host = group_stacks_by_host(state, selected_hosts, all_hosts)
container_counts: dict[str, int] = {}
if live:
container_counts = _get_container_counts(cfg)
container_counts = _get_container_counts(cfg, all_hosts)
host_table = _build_host_table(
cfg, stacks_by_host, running_by_host, container_counts, show_containers=live
@@ -200,4 +349,46 @@ def stats(
console.print(host_table)
console.print()
console.print(_build_summary_table(cfg, state, pending))
console.print(_build_summary_table(cfg, state, pending, host_filter=host_filter))
@app.command("list", rich_help_panel="Monitoring")
def list_(
host: HostOption = None,
simple: Annotated[
bool,
typer.Option("--simple", "-s", help="Plain output (one stack per line, for scripting)"),
] = False,
config: ConfigOption = None,
) -> None:
"""List all stacks and their assigned hosts."""
cfg = load_config_or_exit(config)
stacks: list[tuple[str, str | list[str]]] = list(cfg.stacks.items())
if host:
stacks = [(s, h) for s, h in stacks if str(h) == host or host in str(h).split(",")]
if simple:
for stack, _ in sorted(stacks):
console.print(stack)
else:
# Assign colors to hosts for visual grouping
host_colors = ["magenta", "cyan", "green", "yellow", "blue", "red"]
unique_hosts = sorted({str(h) for _, h in stacks})
host_color_map = {h: host_colors[i % len(host_colors)] for i, h in enumerate(unique_hosts)}
table = Table(title="Stacks", show_header=True, header_style="bold cyan")
table.add_column("Stack")
table.add_column("Host")
for stack, host_val in sorted(stacks):
color = host_color_map.get(str(host_val), "white")
table.add_row(f"[{color}]{stack}[/]", f"[{color}]{host_val}[/]")
console.print(table)
# Aliases (hidden from help)
app.command("l", hidden=True)(logs) # cf l = cf logs
app.command("ls", hidden=True)(list_) # cf ls = cf list
app.command("s", hidden=True)(stats) # cf s = cf stats

View File

@@ -280,8 +280,11 @@ def parse_external_networks(config: Config, stack: str) -> list[str]:
return []
external_networks: list[str] = []
for name, definition in networks.items():
for key, definition in networks.items():
if isinstance(definition, dict) and definition.get("external") is True:
# Networks may have a "name" field, which may differ from the key.
# Use it if present, else fall back to the key.
name = str(definition.get("name", key))
external_networks.append(name)
return external_networks

View File

@@ -3,6 +3,7 @@
from __future__ import annotations
import getpass
import os
from pathlib import Path
from typing import Any
@@ -96,9 +97,17 @@ class Config(BaseModel, extra="forbid"):
host_names = self.get_hosts(stack)
return self.hosts[host_names[0]]
def get_stack_dir(self, stack: str) -> Path:
"""Get stack directory path."""
return self.compose_dir / stack
def get_compose_path(self, stack: str) -> Path:
"""Get compose file path for a stack (tries compose.yaml first)."""
stack_dir = self.compose_dir / stack
"""Get compose file path for a stack (tries compose.yaml first).
Note: This checks local filesystem. For remote execution, use
get_stack_dir() and let docker compose find the file.
"""
stack_dir = self.get_stack_dir(stack)
for filename in COMPOSE_FILENAMES:
candidate = stack_dir / filename
if candidate.exists():
@@ -116,6 +125,31 @@ class Config(BaseModel, extra="forbid"):
found.add(subdir.name)
return found
def get_web_stack(self) -> str:
"""Get web stack name from CF_WEB_STACK environment variable."""
return os.environ.get("CF_WEB_STACK", "")
def get_local_host_from_web_stack(self) -> str | None:
"""Resolve the local host from the web stack configuration (container only).
When running in the web UI container (CF_WEB_STACK is set), this returns
the host that the web stack runs on. This is used for:
- Glances connectivity (use container name instead of IP)
- Container exec (local docker exec vs SSH)
- File read/write (local filesystem vs SSH)
Returns None if not in container mode or web stack is not configured.
"""
if os.environ.get("CF_WEB_STACK") is None:
return None
web_stack = self.get_web_stack()
if not web_stack or web_stack not in self.stacks:
return None
host_names = self.get_hosts(web_stack)
if len(host_names) != 1:
return None
return host_names[0]
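# Example: with CF_WEB_STACK=compose-farm and "compose-farm: nas" in stacks,
# get_local_host_from_web_stack() returns "nas". It returns None when the env
# var is unset, the stack is missing from config, or it maps to multiple hosts.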
def _parse_hosts(raw_hosts: dict[str, Any]) -> dict[str, Host]:
"""Parse hosts from config, handling both simple and full forms."""

View File

@@ -76,7 +76,7 @@ stacks:
# traefik_file: (optional) Auto-generate Traefik file-provider config
# ------------------------------------------------------------------------------
# When set, compose-farm automatically regenerates this file after
# up/down/restart/update commands. Traefik watches this file for changes.
# up/down/update commands. Traefik watches this file for changes.
#
# traefik_file: /opt/compose/traefik/dynamic.d/compose-farm.yml
@@ -87,3 +87,13 @@ stacks:
# skipped (they're handled by Traefik's Docker provider directly).
#
# traefik_stack: traefik
# ------------------------------------------------------------------------------
# glances_stack: (optional) Stack/container name for Glances
# ------------------------------------------------------------------------------
# When set, enables host resource monitoring via the Glances API. Used by:
# - CLI: `cf stats --containers` shows container stats from all hosts
# - Web UI: displays host resource graphs and container metrics
# This should be the name of the container running Glances on the same Docker network.
#
# glances_stack: glances

View File

@@ -58,22 +58,12 @@ _compose_labels_cache = TTLCache(ttl_seconds=30.0)
def _print_compose_command(
host_name: str,
compose_dir: str,
compose_path: str,
stack: str,
compose_cmd: str,
) -> None:
"""Print the docker compose command being executed.
Shows the host and a simplified command with relative path from compose_dir.
"""
# Show relative path from compose_dir for cleaner output
if compose_path.startswith(compose_dir):
rel_path = compose_path[len(compose_dir) :].lstrip("/")
else:
rel_path = compose_path
"""Print the docker compose command being executed."""
console.print(
f"[dim][magenta]{host_name}[/magenta]: docker compose -f {rel_path} {compose_cmd}[/dim]"
f"[dim][magenta]{host_name}[/magenta]: ({stack}) docker compose {compose_cmd}[/dim]"
)
@@ -362,11 +352,12 @@ async def run_compose(
"""Run a docker compose command for a stack."""
host_name = config.get_hosts(stack)[0]
host = config.hosts[host_name]
compose_path = config.get_compose_path(stack)
stack_dir = config.get_stack_dir(stack)
_print_compose_command(host_name, str(config.compose_dir), str(compose_path), compose_cmd)
_print_compose_command(host_name, stack, compose_cmd)
command = f"docker compose -f {compose_path} {compose_cmd}"
# Use cd to let docker compose find the compose file on the remote host
command = f'cd "{stack_dir}" && docker compose {compose_cmd}'
return await run_command(host, command, stack, stream=stream, raw=raw, prefix=prefix)
@@ -385,11 +376,12 @@ async def run_compose_on_host(
Used for migration - running 'down' on the old host before 'up' on new host.
"""
host = config.hosts[host_name]
compose_path = config.get_compose_path(stack)
stack_dir = config.get_stack_dir(stack)
_print_compose_command(host_name, str(config.compose_dir), str(compose_path), compose_cmd)
_print_compose_command(host_name, stack, compose_cmd)
command = f"docker compose -f {compose_path} {compose_cmd}"
# Use cd to let docker compose find the compose file on the remote host
command = f'cd "{stack_dir}" && docker compose {compose_cmd}'
return await run_command(host, command, stack, stream=stream, raw=raw, prefix=prefix)
@@ -441,14 +433,15 @@ async def _run_sequential_stack_commands_multi_host(
For multi-host stacks, prefix defaults to stack@host format.
"""
host_names = config.get_hosts(stack)
compose_path = config.get_compose_path(stack)
stack_dir = config.get_stack_dir(stack)
final_results: list[CommandResult] = []
for cmd in commands:
command = f"docker compose -f {compose_path} {cmd}"
# Use cd to let docker compose find the compose file on the remote host
command = f'cd "{stack_dir}" && docker compose {cmd}'
tasks = []
for host_name in host_names:
_print_compose_command(host_name, str(config.compose_dir), str(compose_path), cmd)
_print_compose_command(host_name, stack, cmd)
host = config.hosts[host_name]
# For multi-host stacks, always use stack@host prefix to distinguish output
label = f"{stack}@{host_name}" if len(host_names) > 1 else stack
@@ -525,10 +518,11 @@ async def check_stack_running(
) -> bool:
"""Check if a stack has running containers on a specific host."""
host = config.hosts[host_name]
compose_path = config.get_compose_path(stack)
stack_dir = config.get_stack_dir(stack)
# Use ps --status running to check for running containers
command = f"docker compose -f {compose_path} ps --status running -q"
# Use cd to let docker compose find the compose file on the remote host
command = f'cd "{stack_dir}" && docker compose ps --status running -q'
result = await run_command(host, command, stack, stream=False)
# If command succeeded and has output, containers are running
@@ -637,18 +631,28 @@ async def check_paths_exist(
host_name: str,
paths: list[str],
) -> dict[str, bool]:
"""Check if multiple paths exist on a specific host.
"""Check if multiple paths exist and are accessible on a specific host.
Returns a dict mapping path -> exists.
Handles permission denied as "exists" (path is there, just not accessible).
Uses timeout to detect stale NFS mounts that would hang.
"""
# Only report missing if stat says "No such file", otherwise assume exists
# (handles permission denied correctly - path exists, just not accessible)
# Use timeout to detect stale NFS mounts (which hang on access)
# - First try ls with timeout to check accessibility
# - If ls succeeds: path exists and is accessible
# - If ls fails/times out: use stat (also with timeout) to distinguish
# "no such file" from "permission denied" or stale NFS
# - Timeout (exit code 124) is treated as inaccessible (stale NFS mount)
return await _batch_check_existence(
config,
host_name,
paths,
lambda esc: f"stat '{esc}' 2>&1 | grep -q 'No such file' && echo 'N:{esc}' || echo 'Y:{esc}'",
lambda esc: (
f"OUT=$(timeout 2 stat '{esc}' 2>&1); RC=$?; "
f"if [ $RC -eq 124 ]; then echo 'N:{esc}'; "
f"elif echo \"$OUT\" | grep -q 'No such file'; then echo 'N:{esc}'; "
f"else echo 'Y:{esc}'; fi"
),
"mount-check",
)

View File

@@ -16,26 +16,31 @@ if TYPE_CHECKING:
DEFAULT_GLANCES_PORT = 61208
def format_bytes(bytes_val: int) -> str:
"""Format bytes to human readable string (e.g., 1.5 GiB)."""
import humanize # noqa: PLC0415
return humanize.naturalsize(bytes_val, binary=True, format="%.1f")
def _get_glances_address(
host_name: str,
host: Host,
glances_container: str | None,
local_host: str | None = None,
) -> str:
"""Get the address to use for Glances API requests.
When running in a Docker container (CF_WEB_STACK set), the local host's Glances
may not be reachable via its LAN IP due to Docker network isolation. In this case,
we use the Glances container name for the local host.
Set CF_LOCAL_HOST=<hostname> to explicitly specify which host is local.
may not be reachable via its LAN IP due to Docker network isolation. In this
case, we use the Glances container name for the local host.
"""
# Only use container name when running inside a Docker container
# CF_WEB_STACK indicates we're running in the web UI container.
in_container = os.environ.get("CF_WEB_STACK") is not None
if not in_container or not glances_container:
return host.address
# CF_LOCAL_HOST explicitly tells us which host to reach via container name
explicit_local = os.environ.get("CF_LOCAL_HOST")
if explicit_local and host_name == explicit_local:
if local_host and host_name == local_host:
return glances_container
# Fall back to is_local detection (may not work in container)
@@ -145,8 +150,13 @@ async def fetch_all_host_stats(
) -> dict[str, HostStats]:
"""Fetch stats from all hosts in parallel."""
glances_container = config.glances_stack
local_host = config.get_local_host_from_web_stack()
tasks = [
fetch_host_stats(name, _get_glances_address(name, host, glances_container), port)
fetch_host_stats(
name,
_get_glances_address(name, host, glances_container, local_host),
port,
)
for name, host in config.hosts.items()
]
results = await asyncio.gather(*tasks)
@@ -244,11 +254,14 @@ async def fetch_container_stats(
async def fetch_all_container_stats(
config: Config,
port: int = DEFAULT_GLANCES_PORT,
hosts: list[str] | None = None,
) -> list[ContainerStats]:
"""Fetch container stats from all hosts in parallel, enriched with compose labels."""
from .executor import get_container_compose_labels # noqa: PLC0415
glances_container = config.glances_stack
host_names = hosts if hosts is not None else list(config.hosts.keys())
local_host = config.get_local_host_from_web_stack()
async def fetch_host_data(
host_name: str,
@@ -269,8 +282,17 @@ async def fetch_all_container_stats(
return containers
tasks = [
fetch_host_data(name, _get_glances_address(name, host, glances_container))
for name, host in config.hosts.items()
fetch_host_data(
name,
_get_glances_address(
name,
config.hosts[name],
glances_container,
local_host,
),
)
for name in host_names
if name in config.hosts
]
results = await asyncio.gather(*tasks)
# Flatten list of lists

View File

@@ -185,18 +185,38 @@ def _report_preflight_failures(
print_error(f" missing device: {dev}")
def build_up_cmd(
*,
pull: bool = False,
build: bool = False,
service: str | None = None,
) -> str:
"""Build compose 'up' subcommand with optional flags."""
parts = ["up", "-d"]
if pull:
parts.append("--pull always")
if build:
parts.append("--build")
if service:
parts.append(service)
return " ".join(parts)
async def _up_multi_host_stack(
cfg: Config,
stack: str,
prefix: str,
*,
raw: bool = False,
pull: bool = False,
build: bool = False,
) -> list[CommandResult]:
"""Start a multi-host stack on all configured hosts."""
host_names = cfg.get_hosts(stack)
results: list[CommandResult] = []
compose_path = cfg.get_compose_path(stack)
command = f"docker compose -f {compose_path} up -d"
stack_dir = cfg.get_stack_dir(stack)
# Use cd to let docker compose find the compose file on the remote host
command = f'cd "{stack_dir}" && docker compose {build_up_cmd(pull=pull, build=build)}'
# Pre-flight checks on all hosts
for host_name in host_names:
@@ -269,6 +289,8 @@ async def _up_single_stack(
prefix: str,
*,
raw: bool,
pull: bool = False,
build: bool = False,
) -> CommandResult:
"""Start a single-host stack with migration support."""
target_host = cfg.get_hosts(stack)[0]
@@ -297,7 +319,7 @@ async def _up_single_stack(
# Start on target host
console.print(f"{prefix} Starting on [magenta]{target_host}[/]...")
up_result = await _run_compose_step(cfg, stack, "up -d", raw=raw)
up_result = await _run_compose_step(cfg, stack, build_up_cmd(pull=pull, build=build), raw=raw)
# Update state on success, or rollback on failure
if up_result.success:
@@ -316,24 +338,101 @@ async def _up_single_stack(
return up_result
async def _up_stack_simple(
cfg: Config,
stack: str,
*,
raw: bool = False,
pull: bool = False,
build: bool = False,
) -> CommandResult:
"""Start a single-host stack without migration (parallel-safe)."""
target_host = cfg.get_hosts(stack)[0]
# Pre-flight check
preflight = await check_stack_requirements(cfg, stack, target_host)
if not preflight.ok:
_report_preflight_failures(stack, target_host, preflight)
return CommandResult(stack=stack, exit_code=1, success=False)
# Run with streaming for parallel output
result = await run_compose(cfg, stack, build_up_cmd(pull=pull, build=build), raw=raw)
if raw:
print()
if result.interrupted:
raise OperationInterruptedError
# Update state on success
if result.success:
set_stack_host(cfg, stack, target_host)
return result
async def up_stacks(
cfg: Config,
stacks: list[str],
*,
raw: bool = False,
pull: bool = False,
build: bool = False,
) -> list[CommandResult]:
"""Start stacks with automatic migration if host changed."""
"""Start stacks with automatic migration if host changed.
Stacks without migration run in parallel. Migration stacks run sequentially.
"""
# Categorize stacks
multi_host: list[str] = []
needs_migration: list[str] = []
simple: list[str] = []
for stack in stacks:
if cfg.is_multi_host(stack):
multi_host.append(stack)
else:
target = cfg.get_hosts(stack)[0]
current = get_stack_host(cfg, stack)
if current and current != target:
needs_migration.append(stack)
else:
simple.append(stack)
results: list[CommandResult] = []
total = len(stacks)
try:
for idx, stack in enumerate(stacks, 1):
prefix = f"[dim][{idx}/{total}][/] [cyan]\\[{stack}][/]"
# Simple stacks: run in parallel (no migration needed)
if simple:
use_raw = raw and len(simple) == 1
simple_results = await asyncio.gather(
*[
_up_stack_simple(cfg, stack, raw=use_raw, pull=pull, build=build)
for stack in simple
]
)
results.extend(simple_results)
# Multi-host stacks: run in parallel
if multi_host:
multi_results = await asyncio.gather(
*[
_up_multi_host_stack(
cfg, stack, f"[cyan]\\[{stack}][/]", raw=raw, pull=pull, build=build
)
for stack in multi_host
]
)
for result_list in multi_results:
results.extend(result_list)
# Migration stacks: run sequentially for clear output and rollback
if needs_migration:
total = len(needs_migration)
for idx, stack in enumerate(needs_migration, 1):
prefix = f"[dim][{idx}/{total}][/] [cyan]\\[{stack}][/]"
results.append(
await _up_single_stack(cfg, stack, prefix, raw=raw, pull=pull, build=build)
)
if cfg.is_multi_host(stack):
results.extend(await _up_multi_host_stack(cfg, stack, prefix, raw=raw))
else:
results.append(await _up_single_stack(cfg, stack, prefix, raw=raw))
except OperationInterruptedError:
raise KeyboardInterrupt from None

View File

@@ -15,7 +15,7 @@ from pydantic import ValidationError
from compose_farm.executor import is_local
if TYPE_CHECKING:
from compose_farm.config import Config
from compose_farm.config import Config, Host
# Paths
WEB_DIR = Path(__file__).parent
@@ -52,8 +52,35 @@ def extract_config_error(exc: Exception) -> str:
return str(exc)
def is_local_host(host_name: str, host: Host, config: Config) -> bool:
"""Check if a host should be treated as local.
When running in a Docker container, is_local() may not work correctly because
the container has different network IPs. This function first checks if the
host matches the web stack host (container only), then falls back to is_local().
This affects:
- Container exec (local docker exec vs SSH)
- File read/write (local filesystem vs SSH)
- Shell sessions (local shell vs SSH)
"""
local_host = config.get_local_host_from_web_stack()
if local_host and host_name == local_host:
return True
return is_local(host)
def get_local_host(config: Config) -> str | None:
"""Find the local host name from config, if any."""
"""Find the local host name from config, if any.
First checks the web stack host (container only), then falls back to is_local()
detection.
"""
# Web stack host takes precedence in container mode
local_host = config.get_local_host_from_web_stack()
if local_host and local_host in config.hosts:
return local_host
# Fall back to auto-detection
for name, host in config.hosts.items():
if is_local(host):
return name

View File

@@ -96,7 +96,16 @@ async def pull_all() -> dict[str, Any]:
@router.post("/update-all")
async def update_all() -> dict[str, Any]:
"""Update all stacks (pull + build + down + up)."""
"""Update all stacks, excluding the web stack. Only recreates if images changed.
The web stack is excluded to prevent the UI from shutting down mid-operation.
Use 'cf update <web-stack>' manually to update the web UI.
"""
config = get_config()
task_id = _start_task(lambda tid: run_cli_streaming(config, ["update", "--all"], tid))
return {"task_id": task_id, "command": "update --all"}
# Get all stacks except the web stack to avoid self-shutdown
web_stack = config.get_web_stack()
stacks = [s for s in config.stacks if s != web_stack]
if not stacks:
return {"task_id": "", "command": "update (no stacks)", "skipped": True}
task_id = _start_task(lambda tid: run_cli_streaming(config, ["update", *stacks], tid))
return {"task_id": task_id, "command": f"update {' '.join(stacks)}"}

View File

@@ -20,11 +20,11 @@ from fastapi import APIRouter, Body, HTTPException, Query
from fastapi.responses import HTMLResponse
from compose_farm.compose import extract_services, get_container_name, load_compose_data_for_stack
from compose_farm.executor import is_local, run_compose_on_host, ssh_connect_kwargs
from compose_farm.executor import run_compose_on_host, ssh_connect_kwargs
from compose_farm.glances import fetch_all_host_stats
from compose_farm.paths import backup_dir, find_config_path
from compose_farm.state import load_state
from compose_farm.web.deps import get_config, get_templates
from compose_farm.web.deps import get_config, get_templates, is_local_host
logger = logging.getLogger(__name__)
@@ -344,10 +344,11 @@ async def read_console_file(
path: Annotated[str, Query(description="File path")],
) -> dict[str, Any]:
"""Read a file from a host for the console editor."""
config = get_config()
host_config = _get_console_host(host, path)
try:
if is_local(host_config):
if is_local_host(host, host_config, config):
content = await _read_file_local(path)
else:
content = await _read_file_remote(host_config, path)
@@ -368,10 +369,11 @@ async def write_console_file(
content: Annotated[str, Body(media_type="text/plain")],
) -> dict[str, Any]:
"""Write a file to a host from the console editor."""
config = get_config()
host_config = _get_console_host(host, path)
try:
if is_local(host_config):
if is_local_host(host, host_config, config):
saved = await _write_file_local(path, content)
msg = f"Saved: {path}" if saved else "No changes to save"
else:

View File

@@ -7,12 +7,11 @@ import re
from typing import TYPE_CHECKING
from urllib.parse import quote
import humanize
from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse, JSONResponse
from compose_farm.executor import TTLCache
from compose_farm.glances import ContainerStats, fetch_all_container_stats
from compose_farm.glances import ContainerStats, fetch_all_container_stats, format_bytes
from compose_farm.registry import DOCKER_HUB_ALIASES, ImageRef
from compose_farm.web.deps import get_config, get_templates
@@ -32,11 +31,6 @@ MIN_NAME_PARTS = 2
_DASH_HTML = '<span class="text-xs opacity-50">-</span>'
def _format_bytes(bytes_val: int) -> str:
"""Format bytes to human readable string."""
return humanize.naturalsize(bytes_val, binary=True, format="%.1f")
def _parse_image(image: str) -> tuple[str, str]:
"""Parse image string into (name, tag)."""
# Handle registry prefix (e.g., ghcr.io/user/repo:tag)
@@ -177,8 +171,8 @@ def _render_row(c: ContainerStats, idx: int | str) -> str:
f'<td data-sort="{c.status.lower()}"><span class="{_status_class(c.status)}">{c.status}</span></td>'
f'<td data-sort="{uptime_sec}" class="text-xs text-right font-mono">{c.uptime or "-"}</td>'
f'<td data-sort="{cpu}" class="text-right font-mono"><div class="flex flex-col items-end gap-0.5"><div class="w-12 h-2 bg-base-300 rounded-full overflow-hidden"><div class="h-full {cpu_class}" style="width: {min(cpu, 100)}%"></div></div><span class="text-xs">{cpu:.0f}%</span></div></td>'
f'<td data-sort="{c.memory_usage}" class="text-right font-mono"><div class="flex flex-col items-end gap-0.5"><div class="w-12 h-2 bg-base-300 rounded-full overflow-hidden"><div class="h-full {mem_class}" style="width: {min(mem, 100)}%"></div></div><span class="text-xs">{_format_bytes(c.memory_usage)}</span></div></td>'
f'<td data-sort="{c.network_rx + c.network_tx}" class="text-xs text-right font-mono">↓{_format_bytes(c.network_rx)}{_format_bytes(c.network_tx)}</td>'
f'<td data-sort="{c.memory_usage}" class="text-right font-mono"><div class="flex flex-col items-end gap-0.5"><div class="w-12 h-2 bg-base-300 rounded-full overflow-hidden"><div class="h-full {mem_class}" style="width: {min(mem, 100)}%"></div></div><span class="text-xs">{format_bytes(c.memory_usage)}</span></div></td>'
f'<td data-sort="{c.network_rx + c.network_tx}" class="text-xs text-right font-mono">↓{format_bytes(c.network_rx)}{format_bytes(c.network_tx)}</td>'
"</tr>"
)
@@ -250,7 +244,7 @@ async def get_containers_rows_by_host(host_name: str) -> HTMLResponse:
import time # noqa: PLC0415
from compose_farm.executor import get_container_compose_labels # noqa: PLC0415
from compose_farm.glances import fetch_container_stats # noqa: PLC0415
from compose_farm.glances import _get_glances_address, fetch_container_stats # noqa: PLC0415
logger = logging.getLogger(__name__)
config = get_config()
@@ -259,9 +253,10 @@ async def get_containers_rows_by_host(host_name: str) -> HTMLResponse:
return HTMLResponse("")
host = config.hosts[host_name]
glances_address = _get_glances_address(host_name, host, config.glances_stack)
t0 = time.monotonic()
containers, error = await fetch_container_stats(host_name, host.address)
containers, error = await fetch_container_stats(host_name, glances_address)
t1 = time.monotonic()
fetch_ms = (t1 - t0) * 1000
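The shared format_bytes helper presumably mirrors the _format_bytes implementation removed above; a sketch under that assumption, matching the expectations in the test file below:

import humanize

def format_bytes(bytes_val: int) -> str:
    """Format a byte count as a human-readable binary-unit string (sketch)."""
    return humanize.naturalsize(bytes_val, binary=True, format="%.1f")

# format_bytes(500) -> "500 Bytes"; format_bytes(1024) -> "1.0 KiB"; format_bytes(1024**3) -> "1.0 GiB"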

View File

@@ -494,7 +494,9 @@ function refreshDashboard() {
* Filter sidebar stacks by name and host
*/
function sidebarFilter() {
const q = (document.getElementById('sidebar-filter')?.value || '').toLowerCase();
const input = document.getElementById('sidebar-filter');
const clearBtn = document.getElementById('sidebar-filter-clear');
const q = (input?.value || '').toLowerCase();
const h = document.getElementById('sidebar-host-select')?.value || '';
let n = 0;
document.querySelectorAll('#sidebar-stacks li').forEach(li => {
@@ -503,9 +505,26 @@ function sidebarFilter() {
if (show) n++;
});
document.getElementById('sidebar-count').textContent = '(' + n + ')';
// Show/hide clear button based on input value
if (clearBtn) {
clearBtn.classList.toggle('hidden', !q);
}
}
window.sidebarFilter = sidebarFilter;
/**
* Clear sidebar filter input and refresh list
*/
function clearSidebarFilter() {
const input = document.getElementById('sidebar-filter');
if (input) {
input.value = '';
input.focus();
}
sidebarFilter();
}
window.clearSidebarFilter = clearSidebarFilter;
// Play intro animation on command palette button
function playFabIntro() {
const fab = document.getElementById('cmd-fab');
@@ -551,7 +570,6 @@ function playFabIntro() {
let commands = [];
let filtered = [];
let selected = 0;
let originalTheme = null; // Store theme when palette opens for preview/restore
const post = (url) => () => htmx.ajax('POST', url, {swap: 'none'});
const nav = (url, afterNav) => () => {
@@ -575,20 +593,21 @@ function playFabIntro() {
}
htmx.ajax('POST', `/api/${endpoint}`, {swap: 'none'});
};
// Get saved theme from localStorage (source of truth)
const getSavedTheme = () => localStorage.getItem(THEME_KEY) || 'dark';
// Apply theme and save to localStorage
const setTheme = (theme) => () => {
document.documentElement.setAttribute('data-theme', theme);
localStorage.setItem(THEME_KEY, theme);
};
// Preview theme without saving (for hover)
// Preview theme without saving (for hover); skips undefined/empty theme values.
const previewTheme = (theme) => {
document.documentElement.setAttribute('data-theme', theme);
if (theme) document.documentElement.setAttribute('data-theme', theme);
};
// Restore original theme (when closing without selection)
// Restore theme from localStorage (source of truth)
const restoreTheme = () => {
if (originalTheme) {
document.documentElement.setAttribute('data-theme', originalTheme);
}
document.documentElement.setAttribute('data-theme', getSavedTheme());
};
// Generate color swatch HTML for a theme
const themeSwatch = (theme) => `<span class="flex gap-0.5" data-theme="${theme}"><span class="w-2 h-4 rounded-l bg-primary"></span><span class="w-2 h-4 bg-secondary"></span><span class="w-2 h-4 bg-accent"></span><span class="w-2 h-4 rounded-r bg-neutral"></span></span>`;
@@ -608,7 +627,7 @@ function playFabIntro() {
cmd('action', 'Apply', 'Make reality match config', dashboardAction('apply'), icons.check),
cmd('action', 'Refresh', 'Update state from reality', dashboardAction('refresh'), icons.refresh_cw),
cmd('action', 'Pull All', 'Pull latest images for all stacks', dashboardAction('pull-all'), icons.cloud_download),
cmd('action', 'Update All', 'Update all stacks', dashboardAction('update-all'), icons.refresh_cw),
cmd('action', 'Update All', 'Update all stacks except web', dashboardAction('update-all'), icons.refresh_cw),
cmd('app', 'Theme', 'Change color theme', openThemePicker, icons.palette),
cmd('app', 'Dashboard', 'Go to dashboard', nav('/'), icons.home),
cmd('app', 'Live Stats', 'View all containers across hosts', nav('/live-stats'), icons.box),
@@ -628,7 +647,7 @@ function playFabIntro() {
stackCmd('Down', 'Stop', 'down', icons.square),
stackCmd('Restart', 'Restart', 'restart', icons.rotate_cw),
stackCmd('Pull', 'Pull', 'pull', icons.cloud_download),
stackCmd('Update', 'Pull + restart', 'update', icons.refresh_cw),
stackCmd('Update', 'Pull + recreate', 'update', icons.refresh_cw),
stackCmd('Logs', 'View logs for', 'logs', icons.file_text),
);
@@ -721,26 +740,24 @@ function playFabIntro() {
// Scroll selected item into view
const sel = list.querySelector(`[data-idx="${selected}"]`);
if (sel) sel.scrollIntoView({ block: 'nearest' });
// Preview theme if selected item is a theme command
// Preview theme if selected item is a theme command, otherwise restore saved
const selectedCmd = filtered[selected];
if (selectedCmd?.themeId) {
previewTheme(selectedCmd.themeId);
} else if (originalTheme) {
// Restore original when navigating away from theme commands
previewTheme(originalTheme);
} else {
restoreTheme();
}
}
function open(initialFilter = '') {
// Store original theme for preview/restore
originalTheme = document.documentElement.getAttribute('data-theme') || 'dark';
buildCommands();
selected = 0;
input.value = initialFilter;
filter();
// If opening theme picker, select current theme
if (initialFilter.startsWith('theme:')) {
const currentIdx = filtered.findIndex(c => c.themeId === originalTheme);
const savedTheme = getSavedTheme();
const currentIdx = filtered.findIndex(c => c.themeId === savedTheme);
if (currentIdx >= 0) selected = currentIdx;
}
render();
@@ -751,10 +768,6 @@ function playFabIntro() {
function exec() {
const cmd = filtered[selected];
if (cmd) {
if (cmd.themeId) {
// Theme command commits the previewed choice.
originalTheme = null;
}
dialog.close();
cmd.action();
}
@@ -794,19 +807,14 @@ function playFabIntro() {
if (a) previewTheme(a.dataset.themeId);
});
// Mouse leaving list restores to selected item's theme (or original)
// Mouse leaving list restores to selected item's theme (or saved)
list.addEventListener('mouseleave', () => {
const cmd = filtered[selected];
previewTheme(cmd?.themeId || originalTheme);
previewTheme(cmd?.themeId || getSavedTheme());
});
// Restore theme when dialog closes without selection (Escape, backdrop click)
dialog.addEventListener('close', () => {
if (originalTheme) {
restoreTheme();
originalTheme = null;
}
});
// Restore theme from localStorage when dialog closes
dialog.addEventListener('close', restoreTheme);
// FAB click to open
if (fab) fab.addEventListener('click', () => open());

View File

@@ -13,8 +13,6 @@ from compose_farm.ssh_keys import get_ssh_auth_sock
if TYPE_CHECKING:
from compose_farm.config import Config
# Environment variable to identify the web stack (for self-update detection)
CF_WEB_STACK = os.environ.get("CF_WEB_STACK", "")
# ANSI escape codes for terminal output
RED = "\x1b[31m"
@@ -95,16 +93,17 @@ async def run_cli_streaming(
tasks[task_id]["completed_at"] = time.time()
def _is_self_update(stack: str, command: str) -> bool:
def _is_self_update(config: Config, stack: str, command: str) -> bool:
"""Check if this is a self-update (updating the web stack itself).
Self-updates need special handling because running 'down' on the container
we're running in would kill the process before 'up' can execute.
"""
if not CF_WEB_STACK or stack != CF_WEB_STACK:
web_stack = config.get_web_stack()
if not web_stack or stack != web_stack:
return False
# Commands that involve 'down' need SSH: update, restart, down
return command in ("update", "restart", "down")
# Commands that involve 'down' need SSH: update, down
return command in ("update", "down")
async def _run_cli_via_ssh(
@@ -114,7 +113,8 @@ async def _run_cli_via_ssh(
) -> None:
"""Run a cf CLI command via SSH for self-updates (survives container restart)."""
try:
host = config.get_host(CF_WEB_STACK)
web_stack = config.get_web_stack()
host = config.get_host(web_stack)
cf_cmd = f"cf {' '.join(args)} --config={config.config_path}"
# Include task_id to prevent collision with concurrent updates
log_file = f"/tmp/cf-self-update-{task_id}.log" # noqa: S108
@@ -170,7 +170,7 @@ async def run_compose_streaming(
cli_args = [cli_cmd, stack, *extra_args]
# Use SSH for self-updates to survive container restart
if _is_self_update(stack, cli_cmd):
if _is_self_update(config, stack, cli_cmd):
await _run_cli_via_ssh(config, cli_args, task_id)
else:
await run_cli_streaming(config, cli_args, task_id)
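A self-contained illustration of how the routing above behaves, assuming _is_self_update is importable from compose_farm.web.streaming and that Config.get_web_stack() reads CF_WEB_STACK:

import os
from pathlib import Path

from compose_farm.config import Config, Host
from compose_farm.web.streaming import _is_self_update

os.environ["CF_WEB_STACK"] = "compose-farm"
cfg = Config(
    compose_dir=Path("/opt/compose"),
    hosts={"nas": Host(address="192.168.1.6")},
    stacks={"compose-farm": "nas"},
)
assert _is_self_update(cfg, "compose-farm", "update")       # involves down -> run via SSH
assert not _is_self_update(cfg, "compose-farm", "restart")  # restart no longer recreates, streams normally
assert not _is_self_update(cfg, "plex", "update")           # not the web stack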

View File

@@ -18,7 +18,7 @@
{{ action_btn("Apply", "/api/apply", "primary", "Make reality match config", check()) }}
{{ action_btn("Refresh", "/api/refresh", "outline", "Update state from reality", refresh_cw()) }}
{{ action_btn("Pull All", "/api/pull-all", "outline", "Pull latest images for all stacks", cloud_download()) }}
{{ action_btn("Update All", "/api/update-all", "outline", "Update all stacks (pull + build + down + up)", rotate_cw()) }}
{{ action_btn("Update All", "/api/update-all", "outline", "Update all stacks except web (only recreates if changed)", rotate_cw()) }}
<div class="tooltip" data-tip="Save compose-farm.yaml config file"><button id="save-config-btn" class="btn btn-outline">{{ save() }} Save Config</button></div>
</div>

View File

@@ -159,6 +159,12 @@
</svg>
{% endmacro %}
{% macro x(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M18 6 6 18"/><path d="m6 6 12 12"/>
</svg>
{% endmacro %}
{% macro alert_triangle(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="m21.73 18-8-14a2 2 0 0 0-3.48 0l-8 14A2 2 0 0 0 4 21h16a2 2 0 0 0 1.73-3"/><path d="M12 9v4"/><path d="M12 17h.01"/>

View File

@@ -1,4 +1,4 @@
{% from "partials/icons.html" import home, search, terminal, box %}
{% from "partials/icons.html" import home, search, terminal, box, x %}
<!-- Navigation Links -->
<div class="mb-4">
<ul class="menu" hx-boost="true" hx-target="#main-content" hx-select="#main-content" hx-swap="outerHTML">
@@ -13,7 +13,7 @@
<h4 class="text-xs uppercase tracking-wide text-base-content/60 px-3 py-1">Stacks <span class="opacity-50" id="sidebar-count">({{ stacks | length }})</span></h4>
<div class="px-2 mb-2 flex flex-col gap-1">
<label class="input input-xs flex items-center gap-2 bg-base-200">
{{ search(14) }}<input type="text" id="sidebar-filter" placeholder="Filter..." onkeyup="sidebarFilter()" />
{{ search(14) }}<input type="text" id="sidebar-filter" placeholder="Filter..." onkeyup="sidebarFilter()" /><button type="button" id="sidebar-filter-clear" class="hidden opacity-50 hover:opacity-100 cursor-pointer" onclick="clearSidebarFilter()">{{ x(12) }}</button>
</label>
<select id="sidebar-host-select" class="select select-xs bg-base-200 w-full" onchange="sidebarFilter()">
<option value="">All hosts</option>

View File

@@ -22,8 +22,8 @@
<!-- Lifecycle -->
{{ action_btn("Up", "/api/stack/" ~ name ~ "/up", "primary", "Start stack (docker compose up -d)", play()) }}
{{ action_btn("Down", "/api/stack/" ~ name ~ "/down", "outline", "Stop stack (docker compose down)", square()) }}
{{ action_btn("Restart", "/api/stack/" ~ name ~ "/restart", "secondary", "Restart stack (down + up)", rotate_cw()) }}
{{ action_btn("Update", "/api/stack/" ~ name ~ "/update", "accent", "Update to latest (pull + build + down + up)", download()) }}
{{ action_btn("Restart", "/api/stack/" ~ name ~ "/restart", "secondary", "Restart running containers", rotate_cw()) }}
{{ action_btn("Update", "/api/stack/" ~ name ~ "/update", "accent", "Update to latest (only recreates if changed)", download()) }}
<div class="divider divider-horizontal mx-0"></div>

View File

@@ -18,8 +18,8 @@ from typing import TYPE_CHECKING, Any
import asyncssh
from fastapi import APIRouter, WebSocket, WebSocketDisconnect
from compose_farm.executor import is_local, ssh_connect_kwargs
from compose_farm.web.deps import get_config
from compose_farm.executor import ssh_connect_kwargs
from compose_farm.web.deps import get_config, is_local_host
from compose_farm.web.streaming import CRLF, DIM, GREEN, RED, RESET, tasks
logger = logging.getLogger(__name__)
@@ -188,7 +188,7 @@ async def _run_exec_session(
await websocket.send_text(f"{RED}Host '{host_name}' not found{RESET}{CRLF}")
return
if is_local(host):
if is_local_host(host_name, host, config):
# Local: use argv list (no shell interpretation)
argv = ["docker", "exec", "-it", container, "/bin/sh", "-c", SHELL_FALLBACK]
await _run_local_exec(websocket, argv)
@@ -239,7 +239,7 @@ async def _run_shell_session(
# Start interactive shell in home directory
shell_cmd = "cd ~ && exec bash -i || exec sh -i"
if is_local(host):
if is_local_host(host_name, host, config):
# Local: use argv list with shell -c to interpret the command
argv = ["/bin/sh", "-c", shell_cmd]
await _run_local_exec(websocket, argv)

View File

@@ -437,71 +437,3 @@ class TestDownOrphaned:
)
assert exc_info.value.exit_code == 1
class TestSortWebStackLast:
"""Tests for the sort_web_stack_last helper."""
def test_no_web_stack_env(self) -> None:
"""When CF_WEB_STACK is not set, list is unchanged."""
from compose_farm.cli import common
original = common.CF_WEB_STACK
try:
common.CF_WEB_STACK = ""
stacks = ["a", "b", "c"]
result = common.sort_web_stack_last(stacks)
assert result == ["a", "b", "c"]
finally:
common.CF_WEB_STACK = original
def test_web_stack_not_in_list(self) -> None:
"""When web stack is not in list, list is unchanged."""
from compose_farm.cli import common
original = common.CF_WEB_STACK
try:
common.CF_WEB_STACK = "webstack"
stacks = ["a", "b", "c"]
result = common.sort_web_stack_last(stacks)
assert result == ["a", "b", "c"]
finally:
common.CF_WEB_STACK = original
def test_web_stack_moved_to_end(self) -> None:
"""When web stack is in list, it's moved to the end."""
from compose_farm.cli import common
original = common.CF_WEB_STACK
try:
common.CF_WEB_STACK = "webstack"
stacks = ["a", "webstack", "b", "c"]
result = common.sort_web_stack_last(stacks)
assert result == ["a", "b", "c", "webstack"]
finally:
common.CF_WEB_STACK = original
def test_web_stack_already_last(self) -> None:
"""When web stack is already last, list is unchanged."""
from compose_farm.cli import common
original = common.CF_WEB_STACK
try:
common.CF_WEB_STACK = "webstack"
stacks = ["a", "b", "webstack"]
result = common.sort_web_stack_last(stacks)
assert result == ["a", "b", "webstack"]
finally:
common.CF_WEB_STACK = original
def test_empty_list(self) -> None:
"""Empty list returns empty list."""
from compose_farm.cli import common
original = common.CF_WEB_STACK
try:
common.CF_WEB_STACK = "webstack"
result = common.sort_web_stack_last([])
assert result == []
finally:
common.CF_WEB_STACK = original

View File

@@ -0,0 +1,168 @@
"""Tests for CLI monitoring commands (stats)."""
from pathlib import Path
from unittest.mock import patch
import pytest
import typer
from compose_farm.cli.monitoring import _build_summary_table, stats
from compose_farm.config import Config, Host
from compose_farm.glances import ContainerStats
def _make_config(tmp_path: Path, glances_stack: str | None = None) -> Config:
"""Create a minimal config for testing."""
config_path = tmp_path / "compose-farm.yaml"
config_path.write_text("")
return Config(
compose_dir=tmp_path / "compose",
hosts={"host1": Host(address="localhost")},
stacks={"svc1": "host1"},
config_path=config_path,
glances_stack=glances_stack,
)
class TestStatsCommand:
"""Tests for the stats command."""
def test_stats_containers_requires_glances_config(
self, tmp_path: Path, capsys: pytest.CaptureFixture[str]
) -> None:
"""--containers fails if glances_stack is not configured."""
cfg = _make_config(tmp_path, glances_stack=None)
with (
patch("compose_farm.cli.monitoring.load_config_or_exit", return_value=cfg),
pytest.raises(typer.Exit) as exc_info,
):
stats(live=False, containers=True, host=None, config=None)
assert exc_info.value.exit_code == 1
captured = capsys.readouterr()
assert "Glances not configured" in captured.err
def test_stats_containers_success(
self, tmp_path: Path, capsys: pytest.CaptureFixture[str]
) -> None:
"""--containers fetches and displays container stats."""
cfg = _make_config(tmp_path, glances_stack="glances")
mock_containers = [
ContainerStats(
name="nginx",
host="host1",
status="running",
image="nginx:latest",
cpu_percent=10.5,
memory_usage=100 * 1024 * 1024,
memory_limit=1024 * 1024 * 1024,
memory_percent=10.0,
network_rx=1000,
network_tx=2000,
uptime="1h",
ports="80->80",
engine="docker",
stack="web",
service="nginx",
)
]
async def mock_fetch_async(
cfg: Config, hosts: list[str] | None = None
) -> list[ContainerStats]:
return mock_containers
with (
patch("compose_farm.cli.monitoring.load_config_or_exit", return_value=cfg),
patch(
"compose_farm.glances.fetch_all_container_stats", side_effect=mock_fetch_async
) as mock_fetch,
):
stats(live=False, containers=True, host=None, config=None)
mock_fetch.assert_called_once_with(cfg, hosts=None)
captured = capsys.readouterr()
# Verify table output
assert "nginx" in captured.out
assert "host1" in captured.out
assert "runni" in captured.out
assert "10.5%" in captured.out
def test_stats_containers_empty(
self, tmp_path: Path, capsys: pytest.CaptureFixture[str]
) -> None:
"""--containers handles empty result gracefully."""
cfg = _make_config(tmp_path, glances_stack="glances")
async def mock_fetch_empty(
cfg: Config, hosts: list[str] | None = None
) -> list[ContainerStats]:
return []
with (
patch("compose_farm.cli.monitoring.load_config_or_exit", return_value=cfg),
patch("compose_farm.glances.fetch_all_container_stats", side_effect=mock_fetch_empty),
):
with pytest.raises(typer.Exit) as exc_info:
stats(live=False, containers=True, host=None, config=None)
assert exc_info.value.exit_code == 0
captured = capsys.readouterr()
assert "No containers found" in captured.err
def test_stats_containers_host_filter(self, tmp_path: Path) -> None:
"""--host limits container queries in --containers mode."""
cfg = _make_config(tmp_path, glances_stack="glances")
async def mock_fetch_async(
cfg: Config, hosts: list[str] | None = None
) -> list[ContainerStats]:
return []
with (
patch("compose_farm.cli.monitoring.load_config_or_exit", return_value=cfg),
patch(
"compose_farm.glances.fetch_all_container_stats", side_effect=mock_fetch_async
) as mock_fetch,
pytest.raises(typer.Exit),
):
stats(live=False, containers=True, host="host1", config=None)
mock_fetch.assert_called_once_with(cfg, hosts=["host1"])
def test_stats_summary_respects_host_filter(self, tmp_path: Path) -> None:
"""--host filters summary counts to the selected host."""
compose_dir = tmp_path / "compose"
for name in ("svc1", "svc2", "svc3"):
stack_dir = compose_dir / name
stack_dir.mkdir(parents=True)
(stack_dir / "compose.yaml").write_text("services: {}\n")
config_path = tmp_path / "compose-farm.yaml"
config_path.write_text("")
cfg = Config(
compose_dir=compose_dir,
hosts={
"host1": Host(address="localhost"),
"host2": Host(address="127.0.0.2"),
},
stacks={"svc1": "host1", "svc2": "host2", "svc3": "host1"},
config_path=config_path,
)
state: dict[str, str | list[str]] = {"svc1": "host1", "svc2": "host2"}
table = _build_summary_table(cfg, state, pending=[], host_filter="host1")
labels = table.columns[0]._cells
values = table.columns[1]._cells
summary = dict(zip(labels, values, strict=True))
assert summary["Total hosts"] == "1"
assert summary["Stacks (configured)"] == "2"
assert summary["Stacks (tracked)"] == "1"
assert summary["Compose files on disk"] == "2"

View File

@@ -78,6 +78,76 @@ class TestConfig:
# Defaults to compose.yaml when no file exists
assert path == Path("/opt/compose/plex/compose.yaml")
def test_get_web_stack_returns_env_var(self, monkeypatch: pytest.MonkeyPatch) -> None:
"""get_web_stack returns CF_WEB_STACK env var."""
monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
config = Config(
compose_dir=Path("/opt/compose"),
hosts={"nas": Host(address="192.168.1.6")},
stacks={"compose-farm": "nas"},
)
assert config.get_web_stack() == "compose-farm"
def test_get_web_stack_returns_empty_when_not_set(
self, monkeypatch: pytest.MonkeyPatch
) -> None:
"""get_web_stack returns empty string when env var not set."""
monkeypatch.delenv("CF_WEB_STACK", raising=False)
config = Config(
compose_dir=Path("/opt/compose"),
hosts={"nas": Host(address="192.168.1.6")},
stacks={"compose-farm": "nas"},
)
assert config.get_web_stack() == ""
def test_get_local_host_from_web_stack_returns_host(
self, monkeypatch: pytest.MonkeyPatch
) -> None:
"""get_local_host_from_web_stack returns the web stack host in container."""
monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
config = Config(
compose_dir=Path("/opt/compose"),
hosts={"nas": Host(address="192.168.1.6"), "nuc": Host(address="192.168.1.2")},
stacks={"compose-farm": "nas"},
)
assert config.get_local_host_from_web_stack() == "nas"
def test_get_local_host_from_web_stack_returns_none_outside_container(
self, monkeypatch: pytest.MonkeyPatch
) -> None:
"""get_local_host_from_web_stack returns None when not in container."""
monkeypatch.delenv("CF_WEB_STACK", raising=False)
config = Config(
compose_dir=Path("/opt/compose"),
hosts={"nas": Host(address="192.168.1.6")},
stacks={"compose-farm": "nas"},
)
assert config.get_local_host_from_web_stack() is None
def test_get_local_host_from_web_stack_returns_none_for_unknown_stack(
self, monkeypatch: pytest.MonkeyPatch
) -> None:
"""get_local_host_from_web_stack returns None if web stack not in stacks."""
monkeypatch.setenv("CF_WEB_STACK", "unknown-stack")
config = Config(
compose_dir=Path("/opt/compose"),
hosts={"nas": Host(address="192.168.1.6")},
stacks={"plex": "nas"},
)
assert config.get_local_host_from_web_stack() is None
def test_get_local_host_from_web_stack_returns_none_for_multi_host(
self, monkeypatch: pytest.MonkeyPatch
) -> None:
"""get_local_host_from_web_stack returns None if web stack runs on multiple hosts."""
monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
config = Config(
compose_dir=Path("/opt/compose"),
hosts={"nas": Host(address="192.168.1.6"), "nuc": Host(address="192.168.1.2")},
stacks={"compose-farm": ["nas", "nuc"]},
)
assert config.get_local_host_from_web_stack() is None
class TestLoadConfig:
"""Tests for load_config function."""

View File

@@ -10,7 +10,6 @@ from typer.testing import CliRunner
from compose_farm.cli import app
from compose_farm.cli.config import (
_detect_domain,
_detect_local_host,
_generate_template,
_get_config_file,
_get_editor,
@@ -233,35 +232,6 @@ class TestConfigValidate:
assert "Config file not found" in output or "not found" in output.lower()
class TestDetectLocalHost:
"""Tests for _detect_local_host function."""
def test_detects_localhost(self) -> None:
cfg = Config(
compose_dir=Path("/opt/compose"),
hosts={
"local": Host(address="localhost"),
"remote": Host(address="192.168.1.100"),
},
stacks={"test": "local"},
)
result = _detect_local_host(cfg)
assert result == "local"
def test_returns_none_for_remote_only(self) -> None:
cfg = Config(
compose_dir=Path("/opt/compose"),
hosts={
"remote1": Host(address="192.168.1.100"),
"remote2": Host(address="192.168.1.200"),
},
stacks={"test": "remote1"},
)
result = _detect_local_host(cfg)
# Remote IPs won't match local machine
assert result is None or result in cfg.hosts
class TestDetectDomain:
"""Tests for _detect_domain function."""
@@ -370,7 +340,7 @@ class TestConfigInitEnv:
assert "Aborted" in result.stdout
assert env_file.read_text() == "KEEP_THIS=true"
def test_init_env_defaults_to_config_dir(
def test_init_env_defaults_to_current_dir(
self,
runner: CliRunner,
tmp_path: Path,
@@ -378,12 +348,20 @@ class TestConfigInitEnv:
monkeypatch: pytest.MonkeyPatch,
) -> None:
monkeypatch.delenv("CF_CONFIG", raising=False)
config_file = tmp_path / "compose-farm.yaml"
config_dir = tmp_path / "config"
config_dir.mkdir()
config_file = config_dir / "compose-farm.yaml"
config_file.write_text(yaml.dump(valid_config_data))
# Create a separate working directory
work_dir = tmp_path / "workdir"
work_dir.mkdir()
monkeypatch.chdir(work_dir)
result = runner.invoke(app, ["config", "init-env", "-p", str(config_file)])
assert result.exit_code == 0
# Should create .env in same directory as config
env_file = tmp_path / ".env"
# Should create .env in current directory, not config directory
env_file = work_dir / ".env"
assert env_file.exists()
assert not (config_dir / ".env").exists()

View File

@@ -7,10 +7,9 @@ import pytest
from fastapi.testclient import TestClient
from compose_farm.config import Config, Host
from compose_farm.glances import ContainerStats
from compose_farm.glances import ContainerStats, format_bytes
from compose_farm.web.app import create_app
from compose_farm.web.routes.containers import (
_format_bytes,
_infer_stack_service,
_parse_image,
_parse_uptime_seconds,
@@ -23,25 +22,25 @@ GB = MB * 1024
class TestFormatBytes:
"""Tests for _format_bytes function (uses humanize library)."""
"""Tests for format_bytes function (uses humanize library)."""
def test_bytes(self) -> None:
assert _format_bytes(500) == "500 Bytes"
assert _format_bytes(0) == "0 Bytes"
assert format_bytes(500) == "500 Bytes"
assert format_bytes(0) == "0 Bytes"
def test_kilobytes(self) -> None:
assert _format_bytes(KB) == "1.0 KiB"
assert _format_bytes(KB * 5) == "5.0 KiB"
assert _format_bytes(KB + 512) == "1.5 KiB"
assert format_bytes(KB) == "1.0 KiB"
assert format_bytes(KB * 5) == "5.0 KiB"
assert format_bytes(KB + 512) == "1.5 KiB"
def test_megabytes(self) -> None:
assert _format_bytes(MB) == "1.0 MiB"
assert _format_bytes(MB * 100) == "100.0 MiB"
assert _format_bytes(MB * 512) == "512.0 MiB"
assert format_bytes(MB) == "1.0 MiB"
assert format_bytes(MB * 100) == "100.0 MiB"
assert format_bytes(MB * 512) == "512.0 MiB"
def test_gigabytes(self) -> None:
assert _format_bytes(GB) == "1.0 GiB"
assert _format_bytes(GB * 2) == "2.0 GiB"
assert format_bytes(GB) == "1.0 GiB"
assert format_bytes(GB * 2) == "2.0 GiB"
class TestParseImage:

View File

@@ -2,6 +2,7 @@
import sys
from pathlib import Path
from unittest.mock import AsyncMock, patch
import pytest
@@ -11,10 +12,12 @@ from compose_farm.executor import (
_run_local_command,
check_networks_exist,
check_paths_exist,
check_stack_running,
get_running_stacks_on_host,
is_local,
run_command,
run_compose,
run_compose_on_host,
run_on_stacks,
)
@@ -106,6 +109,108 @@ class TestRunCompose:
# Command may fail due to no docker, but structure is correct
assert result.stack == "test-service"
async def test_run_compose_uses_cd_pattern(self, tmp_path: Path) -> None:
"""Verify run_compose uses 'cd <dir> && docker compose' pattern."""
config = Config(
compose_dir=tmp_path,
hosts={"remote": Host(address="192.168.1.100")},
stacks={"mystack": "remote"},
)
mock_result = CommandResult(stack="mystack", exit_code=0, success=True)
with patch("compose_farm.executor.run_command", new_callable=AsyncMock) as mock_run:
mock_run.return_value = mock_result
await run_compose(config, "mystack", "up -d", stream=False)
# Verify the command uses cd pattern with quoted path
mock_run.assert_called_once()
call_args = mock_run.call_args
command = call_args[0][1] # Second positional arg is command
assert command == f'cd "{tmp_path}/mystack" && docker compose up -d'
async def test_run_compose_works_without_local_compose_file(self, tmp_path: Path) -> None:
"""Verify compose works even when compose file doesn't exist locally.
This is the bug from issue #162 - when running cf from a machine without
NFS mounts, the compose file doesn't exist locally but should still work
on the remote host.
"""
config = Config(
compose_dir=tmp_path, # No compose files exist here
hosts={"remote": Host(address="192.168.1.100")},
stacks={"mystack": "remote"},
)
# Verify no compose file exists locally
assert not (tmp_path / "mystack" / "compose.yaml").exists()
assert not (tmp_path / "mystack" / "compose.yml").exists()
mock_result = CommandResult(stack="mystack", exit_code=0, success=True)
with patch("compose_farm.executor.run_command", new_callable=AsyncMock) as mock_run:
mock_run.return_value = mock_result
result = await run_compose(config, "mystack", "ps", stream=False)
# Should succeed - docker compose on remote will find the file
assert result.success
# Command should use cd pattern, not -f with a specific file
command = mock_run.call_args[0][1]
assert "cd " in command
assert " && docker compose " in command
assert "-f " not in command # Should NOT use -f flag
async def test_run_compose_on_host_uses_cd_pattern(self, tmp_path: Path) -> None:
"""Verify run_compose_on_host uses 'cd <dir> && docker compose' pattern."""
config = Config(
compose_dir=tmp_path,
hosts={"host1": Host(address="192.168.1.1")},
stacks={"mystack": "host1"},
)
mock_result = CommandResult(stack="mystack", exit_code=0, success=True)
with patch("compose_farm.executor.run_command", new_callable=AsyncMock) as mock_run:
mock_run.return_value = mock_result
await run_compose_on_host(config, "mystack", "host1", "down", stream=False)
command = mock_run.call_args[0][1]
assert command == f'cd "{tmp_path}/mystack" && docker compose down'
async def test_check_stack_running_uses_cd_pattern(self, tmp_path: Path) -> None:
"""Verify check_stack_running uses 'cd <dir> && docker compose' pattern."""
config = Config(
compose_dir=tmp_path,
hosts={"host1": Host(address="192.168.1.1")},
stacks={"mystack": "host1"},
)
mock_result = CommandResult(stack="mystack", exit_code=0, success=True, stdout="abc123\n")
with patch("compose_farm.executor.run_command", new_callable=AsyncMock) as mock_run:
mock_run.return_value = mock_result
result = await check_stack_running(config, "mystack", "host1")
assert result is True
command = mock_run.call_args[0][1]
assert command == f'cd "{tmp_path}/mystack" && docker compose ps --status running -q'
async def test_run_compose_quotes_paths_with_spaces(self, tmp_path: Path) -> None:
"""Verify paths with spaces are properly quoted."""
compose_dir = tmp_path / "my compose dir"
compose_dir.mkdir()
config = Config(
compose_dir=compose_dir,
hosts={"remote": Host(address="192.168.1.100")},
stacks={"my-stack": "remote"},
)
mock_result = CommandResult(stack="my-stack", exit_code=0, success=True)
with patch("compose_farm.executor.run_command", new_callable=AsyncMock) as mock_run:
mock_run.return_value = mock_result
await run_compose(config, "my-stack", "up -d", stream=False)
command = mock_run.call_args[0][1]
# Path should be quoted to handle spaces
assert f'cd "{compose_dir}/my-stack"' in command
class TestRunOnStacks:
"""Tests for parallel stack execution."""

View File

@@ -356,7 +356,6 @@ class TestGetGlancesAddress:
def test_returns_host_address_outside_container(self, monkeypatch: pytest.MonkeyPatch) -> None:
"""Without CF_WEB_STACK, always return host address."""
monkeypatch.delenv("CF_WEB_STACK", raising=False)
monkeypatch.delenv("CF_LOCAL_HOST", raising=False)
host = Host(address="192.168.1.6")
result = _get_glances_address("nas", host, "glances")
assert result == "192.168.1.6"
@@ -366,33 +365,29 @@ class TestGetGlancesAddress:
) -> None:
"""In container without glances_stack config, return host address."""
monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
monkeypatch.delenv("CF_LOCAL_HOST", raising=False)
host = Host(address="192.168.1.6")
result = _get_glances_address("nas", host, None)
assert result == "192.168.1.6"
def test_returns_container_name_for_explicit_local_host(
def test_returns_container_name_for_web_stack_host(
self, monkeypatch: pytest.MonkeyPatch
) -> None:
"""CF_LOCAL_HOST explicitly marks which host uses container name."""
"""Local host uses container name in container mode."""
monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
monkeypatch.setenv("CF_LOCAL_HOST", "nas")
host = Host(address="192.168.1.6")
result = _get_glances_address("nas", host, "glances")
result = _get_glances_address("nas", host, "glances", local_host="nas")
assert result == "glances"
def test_returns_host_address_for_non_local_host(self, monkeypatch: pytest.MonkeyPatch) -> None:
"""Non-local hosts use their IP address even in container mode."""
monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
monkeypatch.setenv("CF_LOCAL_HOST", "nas")
host = Host(address="192.168.1.2")
result = _get_glances_address("nuc", host, "glances")
result = _get_glances_address("nuc", host, "glances", local_host="nas")
assert result == "192.168.1.2"
def test_fallback_to_is_local_detection(self, monkeypatch: pytest.MonkeyPatch) -> None:
"""Without CF_LOCAL_HOST, falls back to is_local detection."""
"""Without explicit local host, falls back to is_local detection."""
monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
monkeypatch.delenv("CF_LOCAL_HOST", raising=False)
# Use localhost which should be detected as local
host = Host(address="localhost")
result = _get_glances_address("local", host, "glances")
@@ -403,7 +398,6 @@ class TestGetGlancesAddress:
) -> None:
"""Remote hosts always use their IP, even in container mode."""
monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
monkeypatch.delenv("CF_LOCAL_HOST", raising=False)
host = Host(address="192.168.1.100")
result = _get_glances_address("remote", host, "glances")
assert result == "192.168.1.100"

View File

@@ -14,6 +14,7 @@ from compose_farm.executor import CommandResult
from compose_farm.operations import (
_migrate_stack,
build_discovery_results,
build_up_cmd,
)
@@ -95,23 +96,47 @@ class TestMigrationCommands:
assert pull_idx < build_idx
class TestBuildUpCmd:
"""Tests for build_up_cmd helper."""
def test_basic(self) -> None:
"""Basic up command without flags."""
assert build_up_cmd() == "up -d"
def test_with_pull(self) -> None:
"""Up command with pull flag."""
assert build_up_cmd(pull=True) == "up -d --pull always"
def test_with_build(self) -> None:
"""Up command with build flag."""
assert build_up_cmd(build=True) == "up -d --build"
def test_with_pull_and_build(self) -> None:
"""Up command with both flags."""
assert build_up_cmd(pull=True, build=True) == "up -d --pull always --build"
def test_with_service(self) -> None:
"""Up command targeting a specific service."""
assert build_up_cmd(service="web") == "up -d web"
def test_with_all_options(self) -> None:
"""Up command with all options."""
assert (
build_up_cmd(pull=True, build=True, service="web") == "up -d --pull always --build web"
)
class TestUpdateCommandSequence:
"""Tests for update command sequence."""
def test_update_command_sequence_includes_build(self) -> None:
"""Update command should use pull --ignore-buildable and build."""
# This is a static check of the command sequence in lifecycle.py
# The actual command sequence is defined in the update function
def test_update_delegates_to_up_with_pull_and_build(self) -> None:
"""Update command should delegate to up with pull=True and build=True."""
source = inspect.getsource(lifecycle.update)
# Verify the command sequence includes pull --ignore-buildable
assert "pull --ignore-buildable" in source
# Verify build is included
assert '"build"' in source or "'build'" in source
# Verify the sequence is pull, build, down, up
assert "down" in source
assert "up -d" in source
# Verify update calls up with pull=True and build=True
assert "up(" in source
assert "pull=True" in source
assert "build=True" in source
class TestBuildDiscoveryResults:
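A minimal re-implementation matching the TestBuildUpCmd expectations in this hunk, for illustration only (the real helper lives in compose_farm.operations):

def build_up_cmd(*, pull: bool = False, build: bool = False, service: str | None = None) -> str:
    """Build the docker compose 'up' argument string (sketch)."""
    parts = ["up", "-d"]
    if pull:
        parts += ["--pull", "always"]  # refresh images before recreating
    if build:
        parts.append("--build")        # rebuild locally built images
    if service:
        parts.append(service)          # target a single service
    return " ".join(parts)

# build_up_cmd(pull=True, build=True, service="web") -> "up -d --pull always --build web"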

View File

@@ -338,6 +338,26 @@ def test_parse_external_networks_missing_compose(tmp_path: Path) -> None:
assert networks == []
def test_parse_external_networks_with_name_field(tmp_path: Path) -> None:
"""Network with 'name' field uses actual name, not key."""
cfg = Config(
compose_dir=tmp_path,
hosts={"host1": Host(address="192.168.1.10")},
stacks={"app": "host1"},
)
compose_path = tmp_path / "app" / "compose.yaml"
_write_compose(
compose_path,
{
"services": {"app": {"image": "nginx"}},
"networks": {"default": {"name": "compose-net", "external": True}},
},
)
networks = parse_external_networks(cfg, "app")
assert networks == ["compose-net"]
class TestExtractWebsiteUrls:
"""Test extract_website_urls function."""

View File

@@ -101,6 +101,83 @@ class TestGetStackComposePath:
assert "not found" in exc_info.value.detail
class TestIsLocalHost:
"""Tests for is_local_host helper."""
def test_returns_true_when_web_stack_host_matches(
self, monkeypatch: pytest.MonkeyPatch
) -> None:
"""is_local_host returns True when host matches web stack host."""
from compose_farm.config import Config, Host
from compose_farm.web.deps import is_local_host
monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
config = Config(
hosts={"nas": Host(address="10.99.99.1"), "nuc": Host(address="10.99.99.2")},
stacks={"compose-farm": "nas"},
)
host = config.hosts["nas"]
assert is_local_host("nas", host, config) is True
def test_returns_false_when_web_stack_host_differs(
self, monkeypatch: pytest.MonkeyPatch
) -> None:
"""is_local_host returns False when host does not match web stack host."""
from compose_farm.config import Config, Host
from compose_farm.web.deps import is_local_host
monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
config = Config(
hosts={"nas": Host(address="10.99.99.1"), "nuc": Host(address="10.99.99.2")},
stacks={"compose-farm": "nas"},
)
host = config.hosts["nuc"]
# nuc is not local, and not matching the web stack host
assert is_local_host("nuc", host, config) is False
class TestGetLocalHost:
"""Tests for get_local_host helper."""
def test_returns_web_stack_host(self, monkeypatch: pytest.MonkeyPatch) -> None:
"""get_local_host returns the web stack host when in container."""
from compose_farm.config import Config, Host
from compose_farm.web.deps import get_local_host
monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
config = Config(
hosts={"nas": Host(address="10.99.99.1"), "nuc": Host(address="10.99.99.2")},
stacks={"compose-farm": "nas"},
)
assert get_local_host(config) == "nas"
def test_ignores_unknown_web_stack(self, monkeypatch: pytest.MonkeyPatch) -> None:
"""get_local_host ignores web stack if it's not in stacks."""
from compose_farm.config import Config, Host
from compose_farm.web.deps import get_local_host
monkeypatch.setenv("CF_WEB_STACK", "unknown-stack")
# Use address that won't match local machine to avoid is_local() fallback
config = Config(
hosts={"nas": Host(address="10.99.99.1")},
stacks={"test": "nas"},
)
# Should fall back to auto-detection (which won't match anything here)
assert get_local_host(config) is None
def test_returns_none_outside_container(self, monkeypatch: pytest.MonkeyPatch) -> None:
"""get_local_host returns None when CF_WEB_STACK not set."""
from compose_farm.config import Config, Host
from compose_farm.web.deps import get_local_host
monkeypatch.delenv("CF_WEB_STACK", raising=False)
config = Config(
hosts={"nas": Host(address="10.99.99.1")},
stacks={"compose-farm": "nas"},
)
assert get_local_host(config) is None
class TestRenderContainers:
"""Tests for container template rendering."""

View File

@@ -26,6 +26,7 @@ nav = [
]
[project.theme]
custom_dir = "docs/overrides"
language = "en"
features = [
@@ -81,6 +82,9 @@ repo = "lucide/github"
[project.extra]
generator = false
[project.extra.analytics]
provider = "custom"
[[project.extra.social]]
icon = "fontawesome/brands/github"
link = "https://github.com/basnijholt/compose-farm"