Compare commits

..

1 Commit

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Bas Nijholt | `43733143ba` | fix(web): ensure sidebar header tooltips appear above scrollbar<br>Add relative positioning and z-index to sidebar header so tooltips render above the nav element's scrollbar. | 2025-12-21 14:18:56 -08:00 |
82 changed files with 418 additions and 1825 deletions

View File

@@ -1,6 +0,0 @@
# Run containers as current user (preserves file ownership on NFS mounts)
# Copy this file to .envrc and run: direnv allow
export CF_UID=$(id -u)
export CF_GID=$(id -g)
export CF_HOME=$HOME
export CF_USER=$USER

View File

@@ -1,117 +1,118 @@
Review documentation for accuracy, completeness, and consistency. Focus on things that require judgment—automated checks handle the rest.
Review all documentation in this repository for accuracy, completeness, and consistency. Cross-reference documentation against the actual codebase to identify issues.
## What's Already Automated
## Scope
Don't waste time on these—CI and pre-commit hooks handle them:
Review all documentation files:
- docs/*.md (primary documentation)
- README.md (repository landing page)
- CLAUDE.md (development guidelines)
- examples/README.md (example configurations)
- **README help output**: `markdown-code-runner` regenerates `cf --help` blocks in CI
- **README command table**: Pre-commit hook verifies commands are listed
- **Linting/formatting**: Handled by pre-commit
## What This Review Is For
Focus on things that require judgment:
1. **Accuracy**: Does the documentation match what the code actually does?
2. **Completeness**: Are there undocumented features, options, or behaviors?
3. **Clarity**: Would a new user understand this? Are examples realistic?
4. **Consistency**: Do different docs contradict each other?
5. **Freshness**: Has the code changed in ways the docs don't reflect?
## Review Checklist
### 1. Command Documentation
For each documented command, verify against the CLI source code:
- Command exists in codebase
- All options are documented with correct names, types, and defaults
- Short options (-x) match long options (--xxx)
- Examples would work as written
- Check for undocumented commands or options
## Review Process
Run `--help` for each command to verify.
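A sketch of that pass in bulk (the command names below are illustrative; take the real list from the docs' command table):

```shell
# Sketch: dump --help for every documented command so the output can be
# compared against the docs. Command names here are illustrative examples.
for cmd in up down logs ps apply update; do
  if command -v cf >/dev/null 2>&1; then
    cf "$cmd" --help
  else
    echo "skip (cf not on PATH): $cmd"
  fi
done
```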
### 1. Check Recent Changes
### 2. Configuration Documentation
```bash
# What changed recently that might need doc updates?
git log --oneline -20 | grep -iE "feat|fix|add|remove|change|option"

# What code files changed?
git diff --name-only HEAD~20 | grep "\.py$"
```
Verify against Pydantic models in the config module:
- All config keys are documented
- Types match Pydantic field types
- Required vs optional fields are correct
- Default values are accurate
- Config file search order matches code
- Example YAML is valid and uses current schema
Look for new features, changed defaults, renamed options, or removed functionality.
### 3. Architecture Documentation
### 2. Verify docs/commands.md Options Tables
Verify against actual directory structure:
The README auto-updates help output, but `docs/commands.md` has **manually maintained options tables**. These can drift.
- File paths match actual source code location
- All modules listed actually exist
- No modules are missing from the list
- Component descriptions match code functionality
- CLI module list includes all command files
For each command's options table, compare against `cf <command> --help`:
- Are all options listed?
- Are short flags correct?
- Are defaults accurate?
- Are descriptions accurate?
### 4. State and Data Files
**Pay special attention to subcommands** (`cf config *`, `cf ssh *`)—these have their own options that are easy to miss.
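A sketch of that extra sweep (group/subcommand names are assumed from the docs; adjust to the real CLI):

```shell
# Sketch: subcommand groups need their own --help pass; these names are
# assumed from the docs (cf config *, cf ssh *).
for sub in "config init" "config show" "ssh setup" "ssh status"; do
  if command -v cf >/dev/null 2>&1; then
    cf $sub --help   # word-splitting on $sub is intentional here
  else
    echo "would check: cf $sub --help"
  fi
done
```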
Verify against state and path modules:
### 3. Verify docs/configuration.md
- State file name and location are correct
- State file format matches actual structure
- Log file name and location are correct
- What triggers state/log updates is accurate
Compare against Pydantic models in the source:
### 5. Installation Documentation
```bash
# Find the config models
grep -r "class.*BaseModel" src/ --include="*.py" -A 15
```
Verify against pyproject.toml:
Check:
- All config keys documented
- Types and defaults match code
- Config file search order is accurate
- Example YAML would actually work
- Python version requirement matches requires-python
- Package name is correct
- Optional dependencies are documented
- CLI entry points are mentioned
- Installation methods work as documented
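For the Python version check, something like this works (the sample file and the `>=3.11` value are placeholders; point `grep` at the repository's real pyproject.toml):

```shell
# Sketch: extract requires-python so it can be compared with the installation
# docs. The sample file and the ">=3.11" value below are placeholders.
cat > /tmp/pyproject-sample.toml <<'EOF'
[project]
name = "compose-farm"
requires-python = ">=3.11"
EOF
grep -E '^requires-python' /tmp/pyproject-sample.toml
```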
### 4. Verify docs/architecture.md
### 6. Feature Claims
```bash
# What source files actually exist?
git ls-files "src/**/*.py"
```
For each claimed feature, verify it exists and works as described.
Check:
- Listed files exist
- No files are missing from the list
- Descriptions match what the code does
### 7. Cross-Reference Consistency
### 5. Check Examples
Check for conflicts between documentation files:
For examples in any doc:
- Would the YAML/commands actually work?
- Are service names, paths, and options realistic?
- Do examples use current syntax (not deprecated options)?
- README vs docs/index.md (should be consistent)
- CLAUDE.md vs actual code structure
- Command tables match across files
- Config examples are consistent
### 6. Cross-Reference Consistency
### 8. Recent Changes Check
The same info appears in multiple places. Check for conflicts:
- README.md vs docs/index.md
- docs/commands.md vs CLAUDE.md command tables
- Config examples across different docs
Before starting the review:
### 7. Self-Check This Prompt
- Run `git log --oneline -20` to see recent commits
- Look for commits with `feat:`, `fix:`, or that mention new options/commands
- Cross-reference these against the documentation to catch undocumented features
This prompt can become outdated too. If you notice:
- New automated checks that should be listed above
- New doc files that need review guidelines
- Patterns that caused issues
Include prompt updates in your fixes.
### 9. Auto-Generated Content
For README.md or docs with `<!-- CODE:BASH:START -->` blocks:
- Run `uv run markdown-code-runner <file>` to regenerate outputs
- Check for missing `<!-- OUTPUT:START -->` markers (blocks that never ran)
- Verify help output matches current CLI behavior
### 10. CLI Options Completeness
For each command, run `cf <command> --help` and verify:
- Every option shown in help is documented
- Short flags (-x) are listed alongside long flags (--xxx)
- Default values in help match documented defaults
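One way to make that comparison mechanical (a sketch; the help text below is a stand-in for real `cf <command> --help` output):

```shell
# Sketch: normalize the long flags out of help text so they can be diffed
# against the options table in docs/commands.md. Sample text stands in for
# real --help output.
help_text='--dry-run  -n        Show what would change
--no-orphans         Only migrate
--config   -c PATH   Path to config file'
printf '%s\n' "$help_text" | grep -oE -- '--[a-z-]+' | sort -u
# prints: --config, --dry-run, --no-orphans (one per line)
```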
## Output Format
Categorize findings:
Provide findings in these categories:
1. **Critical**: Wrong info that would break user workflows
2. **Inaccuracy**: Technical errors (wrong defaults, paths, types)
3. **Missing**: Undocumented features or options
4. **Outdated**: Was true, no longer is
5. **Inconsistency**: Docs contradict each other
6. **Minor**: Typos, unclear wording
1. **Critical Issues**: Incorrect information that would cause user problems
2. **Inaccuracies**: Technical errors, wrong defaults, incorrect paths
3. **Missing Documentation**: Features/commands that exist but aren't documented
4. **Outdated Content**: Information that was once true but no longer is
5. **Inconsistencies**: Conflicts between different documentation files
6. **Minor Issues**: Typos, formatting, unclear wording
7. **Verified Accurate**: Sections confirmed to be correct
For each issue, provide a ready-to-apply fix:
```
### Issue: [Brief description]
- **File**: docs/commands.md:652
- **Problem**: `cf ssh setup` has `--config` option but it's not documented
- **Fix**: Add `--config, -c PATH` to the options table
- **Verify**: `cf ssh setup --help`
```
For each issue, include:
- File path and line number (if applicable)
- What the documentation says
- What the code actually does
- Suggested fix

View File

@@ -1,15 +0,0 @@
Review the pull request for:
- **Code cleanliness**: Is the implementation clean and well-structured?
- **DRY principle**: Does it avoid duplication?
- **Code reuse**: Are there parts that should be reused from other places?
- **Organization**: Is everything in the right place?
- **Consistency**: Is it in the same style as other parts of the codebase?
- **Simplicity**: Is it not over-engineered? Remember KISS and YAGNI. No dead code paths and NO defensive programming.
- **User experience**: Does it provide a good user experience?
- **PR**: Is the PR description and title clear and informative?
- **Tests**: Are there tests, and do they cover the changes adequately? Are they testing something meaningful or are they just trivial?
- **Live tests**: Test the changes in a REAL live environment to ensure they work as expected, use the config in `/opt/stacks/compose-farm.yaml`.
- **Rules**: Does the code follow the project's coding standards and guidelines as laid out in @CLAUDE.md?
Look at `git diff origin/main..HEAD` for the changes made in this pull request.
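A guarded sketch of that diff invocation, starting with a stat overview (degrades gracefully outside a checkout that has an `origin/main` ref):

```shell
# Sketch: stat overview first; the full `git diff origin/main..HEAD` would
# follow. Guarded so it degrades outside a suitable git checkout.
git diff --stat origin/main..HEAD 2>/dev/null || echo "no origin/main..HEAD range available"
```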

View File

@@ -1,51 +0,0 @@
Update demo recordings to match the current compose-farm.yaml configuration.
## Key Gotchas
1. **Never `git checkout` without asking** - check for uncommitted changes first
2. **Prefer `nas` stacks** - demos run locally on nas, SSH adds latency
3. **Terminal captures keyboard** - use `blur()` to release focus before command palette
4. **Clicking sidebar navigates away** - clicking h1 scrolls to top
5. **Buttons have icons, not text** - use `[data-tip="..."]` selectors
6. **`record.py` auto-restores config** - no manual cleanup needed after CLI demos
## Stacks Used in Demos
| Stack | CLI Demos | Web Demos | Notes |
|-------|-----------|-----------|-------|
| `audiobookshelf` | quickstart, migration, apply | - | Migrates nas→anton |
| `grocy` | update | navigation, stack, workflow, console | - |
| `immich` | logs, compose | shell | Multiple containers |
| `dozzle` | - | workflow | - |
## CLI Demos
**Files:** `docs/demos/cli/*.tape`
Check:
- `quickstart.tape`: `bat -r` line ranges match current config structure
- `migration.tape`: nvim keystrokes work, stack exists on nas
- `compose.tape`: exec commands produce meaningful output
Run: `python docs/demos/cli/record.py [demo]`
## Web Demos
**Files:** `docs/demos/web/demo_*.py`
Check:
- Stack names in demos still exist in config
- Selectors match current templates (grep for IDs in `templates/`)
- Shell demo uses command palette for ALL navigation
Run: `python docs/demos/web/record.py [demo]`
## Before Recording
```bash
# Check for uncommitted config changes
git -C /opt/stacks diff compose-farm.yaml
# Verify stacks are running
cf ps audiobookshelf grocy immich dozzle
```
```

View File

@@ -16,13 +16,5 @@ RUN apk add --no-cache openssh-client
COPY --from=builder /root/.local/share/uv/tools/compose-farm /root/.local/share/uv/tools/compose-farm
COPY --from=builder /usr/local/bin/cf /usr/local/bin/compose-farm /usr/local/bin/
# Allow non-root users to access the installed tool
# (required when running with user: "${CF_UID:-0}:${CF_GID:-0}")
RUN chmod 755 /root
# Allow non-root users to add passwd entries (required for SSH)
RUN chmod 666 /etc/passwd
# Entrypoint creates /etc/passwd entry for non-root UIDs (required for SSH)
ENTRYPOINT ["sh", "-c", "[ $(id -u) != 0 ] && echo ${USER:-u}:x:$(id -u):$(id -g)::${HOME:-/}:/bin/sh >> /etc/passwd; exec cf \"$@\"", "--"]
ENTRYPOINT ["cf"]
CMD ["--help"]

View File

@@ -5,19 +5,12 @@
[![License](https://img.shields.io/github/license/basnijholt/compose-farm)](LICENSE)
[![GitHub stars](https://img.shields.io/github/stars/basnijholt/compose-farm)](https://github.com/basnijholt/compose-farm/stargazers)
<img src="https://files.nijho.lt/compose-farm.png" alt="Compose Farm logo" align="right" style="width: 300px;" />
<img src="http://files.nijho.lt/compose-farm.png" align="right" style="width: 300px;" />
A minimal CLI tool to run Docker Compose commands across multiple hosts via SSH.
> [!NOTE]
> Agentless multi-host Docker Compose. CLI-first with a web UI. Your files stay as plain folders—version-controllable, no lock-in. Run `cf apply` and reality matches your config.
**Why Compose Farm?**
- **Your files, your control** — Plain folders + YAML, not locked in Portainer. Version control everything.
- **Agentless** — Just SSH, no agents to deploy (unlike [Dockge](https://github.com/louislam/dockge)).
- **Zero changes required** — Existing compose files work as-is.
- **Grows with you** — Start single-host, scale to multi-host seamlessly.
- **Declarative** — Change config, run `cf apply`, reality matches.
> Run `docker compose` commands across multiple hosts via SSH. One YAML maps stacks to hosts. Run `cf apply` and reality matches your config—stacks start, migrate, or stop as needed. No Kubernetes, no Swarm, no magic.
## Quick Demo
@@ -184,24 +177,6 @@ docker run --rm \
ghcr.io/basnijholt/compose-farm up --all
```
**Running as non-root user** (recommended for NFS mounts):
By default, containers run as root. To preserve file ownership on mounted volumes
(e.g., `compose-farm-state.yaml`, config edits), set these environment variables:
```bash
# Add to .env file (one-time setup)
echo "CF_UID=$(id -u)" >> .env
echo "CF_GID=$(id -g)" >> .env
echo "CF_HOME=$HOME" >> .env
echo "CF_USER=$USER" >> .env
```
Or use [direnv](https://direnv.net/) (copies `.envrc.example` to `.envrc`):
```bash
cp .envrc.example .envrc && direnv allow
```
</details>
## SSH Authentication
@@ -241,13 +216,13 @@ When running in Docker, mount a volume to persist the SSH keys. Choose ONE optio
**Option 1: Host path (default)** - keys at `~/.ssh/compose-farm/id_ed25519`
```yaml
volumes:
- ~/.ssh/compose-farm:${CF_HOME:-/root}/.ssh
- ~/.ssh/compose-farm:/root/.ssh
```
**Option 2: Named volume** - managed by Docker
```yaml
volumes:
- cf-ssh:${CF_HOME:-/root}/.ssh
- cf-ssh:/root/.ssh
```
Run setup once after starting the container (while the SSH agent still works):
@@ -258,8 +233,6 @@ docker compose exec web cf ssh setup
The keys will persist across restarts.
**Note:** When running as non-root (with `CF_UID`/`CF_GID`), set `CF_HOME` to your home directory so SSH finds the keys at the correct path.
</details>
## Configuration
@@ -342,18 +315,14 @@ When you run `cf up autokuma`, it starts the stack on all hosts in parallel. Mul
Compose Farm includes a `config` subcommand to help manage configuration files:
```bash
cf config init # Create a new config file with documented example
cf config init --discover # Auto-detect compose files and interactively create config
cf config show # Display current config with syntax highlighting
cf config path # Print the config file path (useful for scripting)
cf config validate # Validate config syntax and schema
cf config edit # Open config in $EDITOR
cf config example --list # List available example templates
cf config example whoami # Generate sample stack files
cf config example full # Generate complete Traefik + whoami setup
cf config init # Create a new config file with documented example
cf config show # Display current config with syntax highlighting
cf config path # Print the config file path (useful for scripting)
cf config validate # Validate config syntax and schema
cf config edit # Open config in $EDITOR
```
Use `cf config init` to get started with a template, or `cf config init --discover` if you already have compose files.
Use `cf config init` to get started with a fully documented template.
## Usage
@@ -453,15 +422,6 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
│ copy it or customize the installation. │
│ --help -h Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Configuration ──────────────────────────────────────────────────────────────╮
│ traefik-file Generate a Traefik file-provider fragment from compose │
│ Traefik labels. │
│ refresh Update local state from running stacks. │
│ check Validate configuration, traefik labels, mounts, and networks. │
│ init-network Create Docker network on hosts with consistent settings. │
│ config Manage compose-farm configuration files. │
│ ssh Manage SSH keys for passwordless authentication. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Lifecycle ──────────────────────────────────────────────────────────────────╮
│ up Start stacks (docker compose up -d). Auto-migrates if host │
│ changed. │
@@ -473,10 +433,18 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
│ that service. │
│ update Update stacks (pull + build + down + up). With --service, │
│ updates just that service. │
│ apply Make reality match config (start, migrate, stop
│ strays/orphans as needed). │
│ apply Make reality match config (start, migrate, stop as needed).
│ compose Run any docker compose command on a stack. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Configuration ──────────────────────────────────────────────────────────────╮
│ traefik-file Generate a Traefik file-provider fragment from compose │
│ Traefik labels. │
│ refresh Update local state from running stacks. │
│ check Validate configuration, traefik labels, mounts, and networks. │
│ init-network Create Docker network on hosts with consistent settings. │
│ config Manage compose-farm configuration files. │
│ ssh Manage SSH keys for passwordless authentication. │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Monitoring ─────────────────────────────────────────────────────────────────╮
│ logs Show stack logs. With --service, shows logs for just that │
│ service. │
@@ -726,25 +694,22 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
Usage: cf apply [OPTIONS]
Make reality match config (start, migrate, stop strays/orphans as needed).
Make reality match config (start, migrate, stop as needed).
This is the "reconcile" command that ensures running stacks match your
config file. It will:
1. Stop orphaned stacks (in state but removed from config)
2. Stop stray stacks (running on unauthorized hosts)
3. Migrate stacks on wrong host (host in state ≠ host in config)
4. Start missing stacks (in config but not in state)
2. Migrate stacks on wrong host (host in state ≠ host in config)
3. Start missing stacks (in config but not in state)
Use --dry-run to preview changes before applying.
Use --no-orphans to skip stopping orphaned stacks.
Use --no-strays to skip stopping stray stacks.
Use --no-orphans to only migrate/start without stopping orphaned stacks.
Use --full to also run 'up' on all stacks (picks up compose/env changes).
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --dry-run -n Show what would change without executing │
│ --no-orphans Only migrate, don't stop orphaned stacks │
│ --no-strays Don't stop stray stacks (running on wrong host) │
│ --full -f Also run up on all stacks to apply config │
│ changes │
│ --config -c PATH Path to config file │
@@ -999,7 +964,6 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
│ validate Validate the config file syntax and schema. │
│ symlink Create a symlink from the default config location to a config │
│ file. │
│ example Generate example stack files from built-in templates. │
╰──────────────────────────────────────────────────────────────────────────────╯
```

View File

@@ -1,10 +1,6 @@
services:
cf:
image: ghcr.io/basnijholt/compose-farm:latest
# Run as current user to preserve file ownership on mounted volumes
# Set CF_UID=$(id -u) CF_GID=$(id -g) in your environment or .env file
# Defaults to root (0:0) for backwards compatibility
user: "${CF_UID:-0}:${CF_GID:-0}"
volumes:
- ${SSH_AUTH_SOCK}:/ssh-agent:ro
# Compose directory (contains compose files AND compose-farm.yaml config)
@@ -12,43 +8,31 @@ services:
# SSH keys for passwordless auth (generated by `cf ssh setup`)
# Choose ONE option below (use the same option for both cf and web services):
# Option 1: Host path (default) - keys at ~/.ssh/compose-farm/id_ed25519
- ${CF_SSH_DIR:-~/.ssh/compose-farm}:${CF_HOME:-/root}/.ssh/compose-farm
- ${CF_SSH_DIR:-~/.ssh/compose-farm}:/root/.ssh
# Option 2: Named volume - managed by Docker, shared between services
# - cf-ssh:${CF_HOME:-/root}/.ssh
# - cf-ssh:/root/.ssh
environment:
- SSH_AUTH_SOCK=/ssh-agent
# Config file path (state stored alongside it)
- CF_CONFIG=${CF_COMPOSE_DIR:-/opt/stacks}/compose-farm.yaml
# HOME must match the user running the container for SSH to find keys
- HOME=${CF_HOME:-/root}
# USER is required for SSH when running as non-root (UID not in /etc/passwd)
- USER=${CF_USER:-root}
web:
image: ghcr.io/basnijholt/compose-farm:latest
restart: unless-stopped
command: web --host 0.0.0.0 --port 9000
# Run as current user to preserve file ownership on mounted volumes
user: "${CF_UID:-0}:${CF_GID:-0}"
volumes:
- ${SSH_AUTH_SOCK}:/ssh-agent:ro
- ${CF_COMPOSE_DIR:-/opt/stacks}:${CF_COMPOSE_DIR:-/opt/stacks}
# SSH keys - use the SAME option as cf service above
# Option 1: Host path (default)
- ${CF_SSH_DIR:-~/.ssh/compose-farm}:${CF_HOME:-/root}/.ssh/compose-farm
- ${CF_SSH_DIR:-~/.ssh/compose-farm}:/root/.ssh
# Option 2: Named volume
# - cf-ssh:${CF_HOME:-/root}/.ssh
# XDG config dir for backups and image digest logs (persists across restarts)
- ${CF_XDG_CONFIG:-~/.config/compose-farm}:${CF_HOME:-/root}/.config/compose-farm
# - cf-ssh:/root/.ssh
environment:
- SSH_AUTH_SOCK=/ssh-agent
- CF_CONFIG=${CF_COMPOSE_DIR:-/opt/stacks}/compose-farm.yaml
# Used to detect self-updates and run via SSH to survive container restart
- CF_WEB_STACK=compose-farm
# HOME must match the user running the container for SSH to find keys
- HOME=${CF_HOME:-/root}
# USER is required for SSH when running as non-root (UID not in /etc/passwd)
- USER=${CF_USER:-root}
labels:
- traefik.enable=true
- traefik.http.routers.compose-farm.rule=Host(`compose-farm.${DOMAIN}`)

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:01dabdd8f62773823ba2b8dc74f9931f1a1b88215117e6a080004096025491b0
size 901456
oid sha256:bb1372a59a4ed1ac74d3864d7a84dd5311fce4cb6c6a00bf3a574bc2f98d5595
size 895927

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:134c903a6b3acfb933617b33755b0cdb9bac2a59e5e35b64236e248a141d396d
size 206883
oid sha256:f339a85f3d930db5a020c9f77e106edc5f44ea7dee6f68557106721493c24ef8
size 205907

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d8b3cdb3486ec79b3ddb2f7571c13d54ac9aed182edfe708eff76a966a90cfc7
size 1132310

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a3c4d4a62f062f717df4e6752efced3caea29004dc90fe97fd7633e7f0ded9db
size 341057

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:611d6fef767a8e0755367bf0c008dad016f38fa8b3be2362825ef7ef6ec2ec1a
size 2444902

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1e851879acc99234628abce0f8dadeeaf500effe4f78bebc63c4b17a0ae092f1
size 900800

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6c1bb48cc2f364681515a4d8bd0c586d133f5a32789b7bb64524ad7d9ed0a8e9
size 543135
oid sha256:388aa49a1269145698f9763452aaf6b9c6232ea9229abe1dae304df558e29695
size 403442

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5f82d96137f039f21964c15c1550aa1b1f0bb2d52c04d012d253dbfbd6fad096
size 268086
oid sha256:9b8bf4dcb8ee67270d4a88124b4dd4abe0dab518e73812ee73f7c66d77f146e2
size 228025

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a4045b00d90928f42c7764b3c24751576cfb68a34c6e84d12b4e282d2e67378
size 146467
oid sha256:16b9a28137dfae25488e2094de85766a039457f5dca20c2d84ac72e3967c10b9
size 164237

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f1b94416ed3740853f863e19bf45f26241a203fb0d7d187160a537f79aa544fa
size 60353
oid sha256:e0fbe697a1f8256ce3b9a6a64c7019d42769134df9b5b964e5abe98a29e918fd
size 68242

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:848d9c48fb7511da7996149277c038589fad1ee406ff2f30c28f777fc441d919
size 1183641
oid sha256:629b8c80b98eb996b75439745676fd99a83f391ca25f778a71bd59173f814c2f
size 1194931

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e747ee71bb38b19946005d5a4def4d423dadeaaade452dec875c4cb2d24a5b77
size 407373
oid sha256:33fd46f2d8538cc43be4cb553b3af9d8b412f282ee354b6373e2793fe41c799b
size 405057

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d32c9a3eec06e57df085ad347e6bf61e323f8bd8322d0c540f0b9d4834196dfd
size 3589776
oid sha256:ccd96e33faba5f297999917d89834b29d58bd2a8929eea8d62875e3d8830bd5c
size 3198466

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6c54eda599389dac74c24c83527f95cd1399e653d7faf2972c2693d90e590597
size 1085344
oid sha256:979a1a21303bbf284b3510981066ef05c41c1035b34392fecc7bee472116e6db
size 967564

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:62f9b5ec71496197a3f1c3e3bca8967d603838804279ea7dbf00a70d3391ff6c
size 127123
oid sha256:2067f4967a93b7ee3a8db7750c435f41b1fccd2919f3443da4b848c20cc54f23
size 124559

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac2b93d3630af87b44a135723c5d10e8287529bed17c28301b2802cd9593e9e8
size 98748
oid sha256:5471bd94e6d1b9d415547fa44de6021fdad2e1cc5b8b295680e217104aa749d6
size 98149

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7b50a7e9836c496c0989363d1440fa0a6ccdaa38ee16aae92b389b3cf3c3732f
size 2385110
oid sha256:dac5660cfe6574857ec055fac7822f25b7c5fcb10a836b19c86142515e2fbf75
size 1816075

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ccbb3d5366c7734377e12f98cca0b361028f5722124f1bb7efa231f6aeffc116
size 2208044
oid sha256:d4efec8ef5a99f2cb31d55cd71cdbf0bb8dd0cd6281571886b7c1f8b41c3f9da
size 1660764

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:269993b52721ce70674d3aab2a4cd8c58aa621d4ba0739afedae661c90965b26
size 3678371
oid sha256:9348dd36e79192344476d61fbbffdb122a96ecc5829fbece1818590cfc521521
size 3373003

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0098b55bb6a52fa39f807a01fa352ce112bcb446e2a2acb963fb02d21b28c934
size 3088813
oid sha256:bebbf8151434ba37bf5e46566a4e8b57812944281926f579d056bdc835ca26aa
size 2729799

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4bf9d8c247d278799d1daea784fc662a22f12b1bd7883f808ef30f35025ebca6
size 4166443
oid sha256:3712afff6fcde00eb951264bb24d4301deb085d082b4e95ed4c1893a571938ee
size 1528294

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:02d5124217a94849bf2971d6d13d28da18c557195a81b9cca121fb7c07f0501b
size 3523244
oid sha256:0b218d400836a50661c9cdcce2d2b1e285cc5fe592cb42f58aae41f3e7d60684
size 1327413

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:412a0e68f8e52801cafbb9a703ca9577e7c14cc7c0e439160b9185961997f23c
size 4435697
oid sha256:6a232ddc1b9ddd9bf6b5d99c05153e1094be56f1952f02636ca498eb7484e096
size 3808675

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e600a1d3216b44497a889f91eac94d62ef7207b4ed0471465dcb72408caa28e
size 3764693
oid sha256:5a7c9f5f6d47074a6af135190fda6d0a1936cd7a0b04b3aa04ea7d99167a9e05
size 3333014

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3c07a283f4f70c4ab205b0f0acb5d6f55e3ced4c12caa7a8d5914ffe3548233a
size 5768166
oid sha256:66f4547ed2e83b302d795875588d9a085af76071a480f1096f2bb64344b80c42
size 5428670

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:562228841de976d70ee80999b930eadf3866a13ff2867d900279993744c44671
size 6667918
oid sha256:75c8cdeefbbdcab2a240821d3410539f2a2cbe0a015897f4135404c80c3ac32c
size 6578366

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:845746ac1cb101c3077d420c4f3fda3ca372492582dc123ac8a031a68ae9b6b1
size 12943150
oid sha256:ff2e3ca5a46397efcd5f3a595e7d3c179266cc4f3f5f528b428f5ef2a423028e
size 12649149

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:189259558b5760c02583885168d7b0b47cf476cba81c7c028ec770f9d6033129
size 12415357
oid sha256:2d739c5f77ddd9d90b609e31df620b35988081b7341fe225eb717d71a87caa88
size 12284953

View File

@@ -290,10 +290,6 @@ cf pull immich --service database
Run any docker compose command on a stack. This is a passthrough to docker compose for commands not wrapped by cf.
<video autoplay loop muted playsinline>
<source src="/assets/compose.webm" type="video/webm">
</video>
```bash
cf compose [OPTIONS] STACK COMMAND [ARGS]...
```
@@ -578,10 +574,6 @@ cf traefik-file plex jellyfin -o /opt/traefik/cf.yml
Manage configuration files.
<video autoplay loop muted playsinline>
<source src="/assets/config-example.webm" type="video/webm">
</video>
```bash
cf config COMMAND
```
@@ -596,19 +588,17 @@ cf config COMMAND
| `validate` | Validate syntax and schema |
| `edit` | Open in $EDITOR |
| `symlink` | Create symlink from default location |
| `example` | Generate example stack files |
**Options by subcommand:**
| Subcommand | Options |
|------------|---------|
| `init` | `--path/-p PATH`, `--force/-f`, `--discover/-d` |
| `init` | `--path/-p PATH`, `--force/-f` |
| `show` | `--path/-p PATH`, `--raw/-r` |
| `edit` | `--path/-p PATH` |
| `path` | `--path/-p PATH` |
| `validate` | `--path/-p PATH` |
| `symlink` | `--force/-f` |
| `example` | `--list/-l`, `--output/-o PATH`, `--force/-f` |
**Examples:**
@@ -616,9 +606,6 @@ cf config COMMAND
# Create config at default location
cf config init
# Auto-discover compose files and interactively create config
cf config init --discover
# Create config at custom path
cf config init --path /opt/compose-farm/config.yaml
@@ -642,18 +629,6 @@ cf config symlink
# Create symlink to specific file
cf config symlink /opt/compose-farm/config.yaml
# List available example templates
cf config example --list
# Generate a sample stack (whoami, nginx, postgres)
cf config example whoami
# Generate complete Traefik + whoami setup
cf config example full
# Generate example in specific directory
cf config example nginx --output /opt/compose
```
---
@@ -674,20 +649,7 @@ cf ssh COMMAND
| `status` | Show SSH key status and host connectivity |
| `keygen` | Generate key without distributing |
**Options for `cf ssh setup`:**
| Option | Description |
|--------|-------------|
| `--config, -c PATH` | Path to config file |
| `--force, -f` | Regenerate key even if it exists |
**Options for `cf ssh status`:**
| Option | Description |
|--------|-------------|
| `--config, -c PATH` | Path to config file |
**Options for `cf ssh keygen`:**
**Options for `cf ssh setup` and `cf ssh keygen`:**
| Option | Description |
|--------|-------------|

View File

@@ -10,10 +10,10 @@ VHS-based terminal demo recordings for Compose Farm CLI.
```bash
# Record all demos
python docs/demos/cli/record.py
./docs/demos/cli/record.sh
# Record specific demos
python docs/demos/cli/record.py quickstart migration
# Record single demo
cd /opt/stacks && vhs docs/demos/cli/quickstart.tape
```
## Demos
@@ -23,11 +23,9 @@ python docs/demos/cli/record.py quickstart migration
| `install.tape` | Installing with `uv tool install` |
| `quickstart.tape` | `cf ps`, `cf up`, `cf logs` |
| `logs.tape` | Viewing logs |
| `compose.tape` | `cf compose` passthrough (--help, images, exec) |
| `update.tape` | `cf update` |
| `migration.tape` | Service migration |
| `apply.tape` | `cf apply` |
| `config-example.tape` | `cf config example` - generate example stacks |
## Output

View File

@@ -1,50 +0,0 @@
# Compose Demo
# Shows that cf compose passes through ANY docker compose command
Output docs/assets/compose.gif
Output docs/assets/compose.webm
Set Shell "bash"
Set FontSize 14
Set Width 900
Set Height 550
Set Theme "Catppuccin Mocha"
Set TypingSpeed 50ms
Type "# cf compose runs ANY docker compose command on the right host"
Enter
Sleep 500ms
Type "# See ALL available compose commands"
Enter
Sleep 500ms
Type "cf compose immich --help"
Enter
Sleep 4s
Type "# Show images"
Enter
Sleep 500ms
Type "cf compose immich images"
Enter
Wait+Screen /immich/
Sleep 2s
Type "# Open shell in a container"
Enter
Sleep 500ms
Type "cf compose immich exec immich-machine-learning sh"
Enter
Wait+Screen /#/
Sleep 1s
Type "python3 --version"
Enter
Sleep 1s
Type "exit"
Enter
Sleep 500ms


@@ -1,110 +0,0 @@
# Config Example Demo
# Shows cf config example command
Output docs/assets/config-example.gif
Output docs/assets/config-example.webm
Set Shell "bash"
Set FontSize 14
Set Width 900
Set Height 600
Set Theme "Catppuccin Mocha"
Set FontFamily "FiraCode Nerd Font"
Set TypingSpeed 50ms
Env BAT_PAGING "always"
Type "# Generate example stacks with cf config example"
Enter
Sleep 500ms
Type "# List available templates"
Enter
Sleep 500ms
Type "cf config example --list"
Enter
Wait+Screen /Usage:/
Sleep 2s
Type "# Create a directory for our stacks"
Enter
Sleep 500ms
Type "mkdir -p ~/compose && cd ~/compose"
Enter
Wait
Sleep 500ms
Type "# Generate the full Traefik + whoami setup"
Enter
Sleep 500ms
Type "cf config example full"
Enter
Wait
Sleep 2s
Type "# See what was created"
Enter
Sleep 500ms
Type "tree ."
Enter
Wait
Sleep 2s
Type "# View the generated config"
Enter
Sleep 500ms
Type "bat compose-farm.yaml"
Enter
Sleep 3s
Type "q"
Sleep 500ms
Type "# View the traefik compose file"
Enter
Sleep 500ms
Type "bat traefik/compose.yaml"
Enter
Sleep 3s
Type "q"
Sleep 500ms
Type "# Validate the config"
Enter
Sleep 500ms
Type "cf check --local"
Enter
Wait
Sleep 2s
Type "# Create the Docker network"
Enter
Sleep 500ms
Type "cf init-network"
Enter
Wait
Sleep 1s
Type "# Deploy traefik and whoami"
Enter
Sleep 500ms
Type "cf up traefik whoami"
Enter
Wait
Sleep 3s
Type "# Verify it's running"
Enter
Sleep 500ms
Type "cf ps"
Enter
Wait
Sleep 2s


@@ -21,7 +21,7 @@ Type "# First, define your hosts..."
Enter
Sleep 500ms
Type "bat -r 1:16 compose-farm.yaml"
Type "bat -r 1:11 compose-farm.yaml"
Enter
Sleep 3s
Type "q"
@@ -31,7 +31,7 @@ Type "# Then map each stack to a host"
Enter
Sleep 500ms
Type "bat -r 17:35 compose-farm.yaml"
Type "bat -r 13:30 compose-farm.yaml"
Enter
Sleep 3s
Type "q"


@@ -1,134 +0,0 @@
#!/usr/bin/env python3
"""Record CLI demos using VHS."""
import shutil
import subprocess
import sys
from pathlib import Path
from rich.console import Console
from compose_farm.config import load_config
from compose_farm.state import load_state
console = Console()
SCRIPT_DIR = Path(__file__).parent
STACKS_DIR = Path("/opt/stacks")
CONFIG_FILE = STACKS_DIR / "compose-farm.yaml"
OUTPUT_DIR = SCRIPT_DIR.parent.parent / "assets"
DEMOS = ["install", "quickstart", "logs", "compose", "update", "migration", "apply"]
def _run(cmd: list[str], **kw) -> bool:
return subprocess.run(cmd, check=False, **kw).returncode == 0
def _set_config(host: str) -> None:
"""Set audiobookshelf host in config file."""
_run(["sed", "-i", f"s/audiobookshelf: .*/audiobookshelf: {host}/", str(CONFIG_FILE)])
def _get_hosts() -> tuple[str | None, str | None]:
"""Return (config_host, state_host) for audiobookshelf."""
config = load_config()
state = load_state(config)
return config.stacks.get("audiobookshelf"), state.get("audiobookshelf")
def _setup_state(demo: str) -> bool:
"""Set up required state for demo. Returns False on failure."""
if demo not in ("migration", "apply"):
return True
config_host, state_host = _get_hosts()
if demo == "migration":
# Migration needs audiobookshelf on nas in BOTH config and state
if config_host != "nas":
console.print("[yellow]Setting up: config → nas[/yellow]")
_set_config("nas")
if state_host != "nas":
console.print("[yellow]Setting up: state → nas[/yellow]")
if not _run(["cf", "apply"], cwd=STACKS_DIR):
return False
elif demo == "apply":
# Apply needs config=nas, state=anton (so there's something to apply)
if config_host != "nas":
console.print("[yellow]Setting up: config → nas[/yellow]")
_set_config("nas")
if state_host == "nas":
console.print("[yellow]Setting up: state → anton[/yellow]")
_set_config("anton")
if not _run(["cf", "apply"], cwd=STACKS_DIR):
return False
_set_config("nas")
return True
def _record(name: str, index: int, total: int) -> bool:
"""Record a single demo."""
console.print(f"[cyan][{index}/{total}][/cyan] [green]Recording:[/green] {name}")
if _run(["vhs", str(SCRIPT_DIR / f"{name}.tape")], cwd=STACKS_DIR):
console.print("[green] ✓ Done[/green]")
return True
console.print("[red] ✗ Failed[/red]")
return False
def _reset_after(demo: str, next_demo: str | None) -> None:
"""Reset state after demos that modify audiobookshelf."""
if demo not in ("quickstart", "migration"):
return
_set_config("nas")
if next_demo != "apply": # Let apply demo show the migration
_run(["cf", "apply"], cwd=STACKS_DIR)
def _restore_config(original: str) -> None:
"""Restore original config and sync state."""
console.print("[yellow]Restoring original config...[/yellow]")
CONFIG_FILE.write_text(original)
_run(["cf", "apply"], cwd=STACKS_DIR)
def _main() -> int:
if not shutil.which("vhs"):
console.print("[red]VHS not found. Install: brew install vhs[/red]")
return 1
if not _run(["git", "-C", str(STACKS_DIR), "diff", "--quiet", "compose-farm.yaml"]):
console.print("[red]compose-farm.yaml has uncommitted changes[/red]")
return 1
demos = [d for d in sys.argv[1:] if d in DEMOS] or DEMOS
if sys.argv[1:] and not demos:
console.print(f"[red]Unknown demo. Available: {', '.join(DEMOS)}[/red]")
return 1
# Save original config to restore after recording
original_config = CONFIG_FILE.read_text()
try:
for i, demo in enumerate(demos, 1):
if not _setup_state(demo):
return 1
if not _record(demo, i, len(demos)):
return 1
_reset_after(demo, demos[i] if i < len(demos) else None)
finally:
_restore_config(original_config)
# Move outputs
OUTPUT_DIR.mkdir(exist_ok=True)
for f in (STACKS_DIR / "docs/assets").glob("*.[gw]*"):
shutil.move(str(f), str(OUTPUT_DIR / f.name))
console.print(f"\n[green]Done![/green] Saved to {OUTPUT_DIR}")
return 0
if __name__ == "__main__":
sys.exit(_main())

docs/demos/cli/record.sh Executable file

@@ -0,0 +1,89 @@
#!/usr/bin/env bash
# Record all VHS demos
# Run this on a Docker host with compose-farm configured
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
DEMOS_DIR="$(dirname "$SCRIPT_DIR")"
DOCS_DIR="$(dirname "$DEMOS_DIR")"
REPO_DIR="$(dirname "$DOCS_DIR")"
OUTPUT_DIR="$DOCS_DIR/assets"
# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[0;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color
# Check for VHS
if ! command -v vhs &> /dev/null; then
echo "VHS not found. Install with:"
echo " brew install vhs"
echo " # or"
echo " go install github.com/charmbracelet/vhs@latest"
exit 1
fi
# Ensure output directory exists
mkdir -p "$OUTPUT_DIR"
# Temp output dir (VHS runs from /opt/stacks, so relative paths go here)
TEMP_OUTPUT="/opt/stacks/docs/assets"
mkdir -p "$TEMP_OUTPUT"
# Change to /opt/stacks so cf commands use installed version (not editable install)
cd /opt/stacks
# Ensure compose-farm.yaml has no uncommitted changes (safety check)
if ! git diff --quiet compose-farm.yaml; then
echo -e "${RED}Error: compose-farm.yaml has uncommitted changes${NC}"
echo "Commit or stash your changes before recording demos"
exit 1
fi
echo -e "${BLUE}Recording VHS demos...${NC}"
echo "Output directory: $OUTPUT_DIR"
echo ""
# Function to record a tape
record_tape() {
local tape=$1
local name=$(basename "$tape" .tape)
echo -e "${GREEN}Recording:${NC} $name"
if vhs "$tape"; then
echo -e "${GREEN} ✓ Done${NC}"
else
echo -e "${RED} ✗ Failed${NC}"
return 1
fi
}
# Record demos in logical order
echo -e "${YELLOW}=== Phase 1: Basic demos ===${NC}"
record_tape "$SCRIPT_DIR/install.tape"
record_tape "$SCRIPT_DIR/quickstart.tape"
record_tape "$SCRIPT_DIR/logs.tape"
echo -e "${YELLOW}=== Phase 2: Update demo ===${NC}"
record_tape "$SCRIPT_DIR/update.tape"
echo -e "${YELLOW}=== Phase 3: Migration demo ===${NC}"
record_tape "$SCRIPT_DIR/migration.tape"
git -C /opt/stacks checkout compose-farm.yaml # Reset after migration
echo -e "${YELLOW}=== Phase 4: Apply demo ===${NC}"
record_tape "$SCRIPT_DIR/apply.tape"
# Move GIFs and WebMs from temp location to repo
echo ""
echo -e "${BLUE}Moving recordings to repo...${NC}"
mv "$TEMP_OUTPUT"/*.gif "$OUTPUT_DIR/" 2>/dev/null || true
mv "$TEMP_OUTPUT"/*.webm "$OUTPUT_DIR/" 2>/dev/null || true
rmdir "$TEMP_OUTPUT" 2>/dev/null || true
rmdir "$(dirname "$TEMP_OUTPUT")" 2>/dev/null || true
echo ""
echo -e "${GREEN}Done!${NC} Recordings saved to $OUTPUT_DIR/"
ls -la "$OUTPUT_DIR"/*.gif "$OUTPUT_DIR"/*.webm 2>/dev/null || echo "No recordings found (check for errors above)"


@@ -60,14 +60,10 @@ def test_demo_console(recording_page: Page, server_url: str) -> None:
page.keyboard.press("Enter")
pause(page, 2500) # Wait for output
# Smoothly scroll down to show the Editor section with Compose Farm config
page.evaluate("""
const editor = document.getElementById('console-editor');
if (editor) {
editor.scrollIntoView({ behavior: 'smooth', block: 'center' });
}
""")
pause(page, 1200) # Wait for smooth scroll animation
# Scroll down to show the Editor section with Compose Farm config
editor_section = page.locator(".collapse", has_text="Editor").first
editor_section.scroll_into_view_if_needed()
pause(page, 800)
# Wait for Monaco editor to load with config content
page.wait_for_selector("#console-editor .monaco-editor", timeout=10000)


@@ -1,11 +1,9 @@
"""Demo: Container shell exec via command palette.
"""Demo: Container shell exec.
Records a ~35 second demo showing:
- Navigating to immich stack (multiple containers)
- Using command palette with fuzzy matching ("sh mach") to open shell
- Running a command
- Using command palette to switch to server container shell
- Running another command
Records a ~25 second demo showing:
- Navigating to a stack page
- Clicking Shell button on a container
- Running top command inside the container
Run: pytest docs/demos/web/demo_shell.py -v --no-cov
"""
@@ -16,7 +14,6 @@ from typing import TYPE_CHECKING
import pytest
from conftest import (
open_command_palette,
pause,
slow_type,
wait_for_sidebar,
@@ -36,71 +33,39 @@ def test_demo_shell(recording_page: Page, server_url: str) -> None:
wait_for_sidebar(page)
pause(page, 800)
# Navigate to immich via command palette (has multiple containers)
open_command_palette(page)
pause(page, 400)
slow_type(page, "#cmd-input", "immich", delay=100)
pause(page, 600)
page.keyboard.press("Enter")
page.wait_for_url("**/stack/immich", timeout=5000)
# Navigate to a stack with a running container (grocy)
page.locator("#sidebar-stacks a", has_text="grocy").click()
page.wait_for_url("**/stack/grocy", timeout=5000)
pause(page, 1500)
# Wait for containers list to load (so shell commands are available)
# Wait for containers list to load (loaded via HTMX)
page.wait_for_selector("#containers-list button", timeout=10000)
pause(page, 800)
# Use command palette with fuzzy matching: "sh mach" -> "Shell: immich-machine-learning"
open_command_palette(page)
pause(page, 400)
slow_type(page, "#cmd-input", "sh mach", delay=100)
pause(page, 600)
page.keyboard.press("Enter")
# Click Shell button on the first container
shell_btn = page.locator("#containers-list button", has_text="Shell").first
shell_btn.click()
pause(page, 1000)
# Wait for exec terminal to appear
page.wait_for_selector("#exec-terminal .xterm", timeout=10000)
# Smoothly scroll down to make the terminal visible
page.evaluate("""
const terminal = document.getElementById('exec-terminal');
if (terminal) {
terminal.scrollIntoView({ behavior: 'smooth', block: 'center' });
}
""")
pause(page, 1200)
# Scroll down to make the terminal visible
page.locator("#exec-terminal").scroll_into_view_if_needed()
pause(page, 2000)
# Run python version command
slow_type(page, "#exec-terminal .xterm-helper-textarea", "python3 --version", delay=60)
# Run top command
slow_type(page, "#exec-terminal .xterm-helper-textarea", "top", delay=100)
pause(page, 300)
page.keyboard.press("Enter")
pause(page, 1500)
pause(page, 4000) # Let top run for a bit
# Blur the terminal to release focus (won't scroll)
page.evaluate("document.activeElement?.blur()")
pause(page, 500)
# Use command palette to switch to server container: "sh serv" -> "Shell: immich-server"
open_command_palette(page)
pause(page, 400)
slow_type(page, "#cmd-input", "sh serv", delay=100)
pause(page, 600)
page.keyboard.press("Enter")
# Press q to quit top
page.keyboard.press("q")
pause(page, 1000)
# Wait for new terminal
page.wait_for_selector("#exec-terminal .xterm", timeout=10000)
# Scroll to terminal
page.evaluate("""
const terminal = document.getElementById('exec-terminal');
if (terminal) {
terminal.scrollIntoView({ behavior: 'smooth', block: 'center' });
}
""")
pause(page, 1200)
# Run ls command
slow_type(page, "#exec-terminal .xterm-helper-textarea", "ls /usr/src/app", delay=60)
# Run another command to show it's interactive
slow_type(page, "#exec-terminal .xterm-helper-textarea", "ps aux | head", delay=60)
pause(page, 300)
page.keyboard.press("Enter")
pause(page, 2000)


@@ -55,14 +55,9 @@ def test_demo_stack(recording_page: Page, server_url: str) -> None:
page.wait_for_selector("#compose-editor .monaco-editor", timeout=10000)
pause(page, 2000) # Let viewer see the compose file
# Smoothly scroll down to show more of the editor
page.evaluate("""
const editor = document.getElementById('compose-editor');
if (editor) {
editor.scrollIntoView({ behavior: 'smooth', block: 'center' });
}
""")
pause(page, 1200) # Wait for smooth scroll animation
# Scroll down slightly to show more of the editor
page.locator("#compose-editor").scroll_into_view_if_needed()
pause(page, 1500)
# Close the compose file section
compose_collapse.locator("input[type=checkbox]").click(force=True)


@@ -5,7 +5,7 @@ Records a comprehensive demo (~60 seconds) combining all major features:
2. Editor showing Compose Farm YAML config
3. Command palette navigation to grocy stack
4. Stack actions: up, logs
5. Switch to dozzle stack via command palette, run update
5. Switch to mealie stack via command palette, run update
6. Dashboard overview
7. Theme cycling via command palette
@@ -126,13 +126,13 @@ def _demo_stack_actions(page: Page) -> None:
page.wait_for_selector("#terminal-output .xterm", timeout=5000)
pause(page, 2500)
# Switch to dozzle via command palette (on nas for lower latency)
# Switch to mealie via command palette
open_command_palette(page)
pause(page, 300)
slow_type(page, "#cmd-input", "dozzle", delay=100)
slow_type(page, "#cmd-input", "mealie", delay=100)
pause(page, 400)
page.keyboard.press("Enter")
page.wait_for_url("**/stack/dozzle", timeout=5000)
page.wait_for_url("**/stack/mealie", timeout=5000)
pause(page, 1000)
# Run update action
@@ -162,20 +162,32 @@ def _demo_dashboard_and_themes(page: Page, server_url: str) -> None:
page.evaluate("window.scrollTo(0, 0)")
pause(page, 600)
# Open theme picker and arrow down to Dracula (shows live preview)
# Open theme picker and arrow down to Luxury (shows live preview)
# Theme order: light, dark, cupcake, bumblebee, emerald, corporate, synthwave,
# retro, cyberpunk, valentine, halloween, garden, forest, aqua, lofi, pastel,
# fantasy, wireframe, black, luxury (index 19)
page.locator("#theme-btn").click()
page.wait_for_selector("#cmd-palette[open]", timeout=2000)
pause(page, 400)
# Arrow down through themes with live preview until we reach Dracula
# Arrow down through themes with live preview until we reach Luxury
for _ in range(19):
page.keyboard.press("ArrowDown")
pause(page, 180)
# Select Dracula theme and end on it
# Select Luxury theme
pause(page, 400)
page.keyboard.press("Enter")
pause(page, 1500)
pause(page, 1000)
# Return to dark theme
page.locator("#theme-btn").click()
page.wait_for_selector("#cmd-palette[open]", timeout=2000)
pause(page, 300)
slow_type(page, "#cmd-input", " dark", delay=80)
pause(page, 400)
page.keyboard.press("Enter")
pause(page, 1000)
@pytest.mark.browser # type: ignore[misc]


@@ -88,9 +88,9 @@ def patch_playwright_video_quality() -> None:
console.print("[green]Patched Playwright for high-quality video recording[/green]")
def record_demo(name: str, index: int, total: int) -> Path | None:
def record_demo(name: str) -> Path | None:
"""Run a single demo and return the video path."""
console.print(f"[cyan][{index}/{total}][/cyan] [green]Recording:[/green] web-{name}")
console.print(f"[green]Recording:[/green] web-{name}")
demo_file = SCRIPT_DIR / f"demo_{name}.py"
if not demo_file.exists():
@@ -227,7 +227,9 @@ def main() -> int:
try:
for i, demo in enumerate(demos_to_record, 1):
video_path = record_demo(demo, i, len(demos_to_record))
console.print(f"[yellow]=== Demo {i}/{len(demos_to_record)}: {demo} ===[/yellow]")
video_path = record_demo(demo)
if video_path:
webm, gif = move_recording(video_path, demo)
results[demo] = (webm, gif)


@@ -54,25 +54,6 @@ docker run --rm \
ghcr.io/basnijholt/compose-farm up --all
```
**Running as non-root user** (recommended for NFS mounts):
By default, containers run as root. To preserve file ownership on mounted volumes, set these environment variables in your `.env` file:
```bash
# Add to .env file (one-time setup)
echo "CF_UID=$(id -u)" >> .env
echo "CF_GID=$(id -g)" >> .env
echo "CF_HOME=$HOME" >> .env
echo "CF_USER=$USER" >> .env
```
Or use [direnv](https://direnv.net/) to auto-set these variables when entering the directory:
```bash
cp .envrc.example .envrc && direnv allow
```
This ensures files like `compose-farm-state.yaml` and web UI edits are owned by your user instead of root. The `CF_USER` variable is required for SSH to work when running as a non-root user.
### Verify Installation
```bash
@@ -149,24 +130,6 @@ cd /opt/stacks
cf config init
```
**Already have compose files?** Use `--discover` to auto-detect them and interactively build your config:
```bash
cf config init --discover
```
This scans for directories containing compose files, lets you select which stacks to include, and generates a ready-to-use config.
**Starting fresh?** Generate example stacks to learn from:
```bash
# List available examples
cf config example --list
# Generate a complete Traefik + whoami setup
cf config example full
```
Alternatively, use `~/.config/compose-farm/compose-farm.yaml` for a global config. You can also symlink a working directory config to the global location:
```bash


@@ -63,8 +63,6 @@ Press `Ctrl+K` (or `Cmd+K` on macOS) to open the command palette. Use fuzzy sear
- Container shell access (exec into running containers)
- Terminal output for running commands
Files are automatically backed up before saving to `~/.config/compose-farm/backups/`.
### Console (`/console`)
- Full shell access to any host


@@ -45,14 +45,6 @@ doc:
kill-doc:
lsof -ti :9002 | xargs kill -9 2>/dev/null || true
# Record CLI demos (all or specific: just record-cli quickstart)
record-cli *demos:
python docs/demos/cli/record.py {{demos}}
# Record web UI demos (all or specific: just record-web navigation)
record-web *demos:
python docs/demos/web/record.py {{demos}}
# Clean up build artifacts and caches
clean:
rm -rf .pytest_cache .mypy_cache .ruff_cache .coverage htmlcov dist build


@@ -37,12 +37,6 @@ _RawOption = Annotated[
bool,
typer.Option("--raw", "-r", help="Output raw file contents (for copy-paste)."),
]
_DiscoverOption = Annotated[
bool,
typer.Option(
"--discover", "-d", help="Auto-detect compose files and interactively select stacks."
),
]
def _get_editor() -> str:
@@ -74,117 +68,6 @@ def _get_config_file(path: Path | None) -> Path | None:
return config_path.resolve() if config_path else None
def _generate_discovered_config(
compose_dir: Path,
hostname: str,
host_address: str,
selected_stacks: list[str],
) -> str:
"""Generate config YAML from discovered stacks."""
import yaml # noqa: PLC0415
config_data = {
"compose_dir": str(compose_dir),
"hosts": {hostname: host_address},
"stacks": dict.fromkeys(selected_stacks, hostname),
}
header = """\
# Compose Farm configuration
# Documentation: https://github.com/basnijholt/compose-farm
#
# Generated by: cf config init --discover
"""
return header + yaml.dump(config_data, default_flow_style=False, sort_keys=False)
def _interactive_stack_selection(stacks: list[str]) -> list[str]:
"""Interactively select stacks to include."""
from rich.prompt import Confirm, Prompt # noqa: PLC0415
console.print("\n[bold]Found stacks:[/bold]")
for stack in stacks:
console.print(f" [cyan]{stack}[/cyan]")
console.print()
# Fast path: include all
if Confirm.ask(f"Include all {len(stacks)} stacks?", default=True):
return stacks
# Let user specify which to exclude
console.print(
"\n[dim]Enter stack names to exclude (comma-separated), or press Enter to select individually:[/dim]"
)
exclude_input = Prompt.ask("Exclude", default="")
if exclude_input.strip():
exclude = {s.strip() for s in exclude_input.split(",")}
return [s for s in stacks if s not in exclude]
# Fall back to individual selection
console.print()
return [
stack for stack in stacks if Confirm.ask(f" Include [cyan]{stack}[/cyan]?", default=True)
]
def _run_discovery_flow() -> str | None:
"""Run the interactive discovery flow and return generated config content."""
import socket # noqa: PLC0415
from rich.prompt import Prompt # noqa: PLC0415
console.print("[bold]Compose Farm Config Discovery[/bold]")
console.print("[dim]This will scan for compose files and generate a config.[/dim]\n")
# Step 1: Get compose directory
default_dir = Path.cwd()
compose_dir_str = Prompt.ask(
"Compose directory",
default=str(default_dir),
)
compose_dir = Path(compose_dir_str).expanduser().resolve()
if not compose_dir.exists():
print_error(f"Directory does not exist: {compose_dir}")
return None
if not compose_dir.is_dir():
print_error(f"Path is not a directory: {compose_dir}")
return None
# Step 2: Discover stacks
from compose_farm.config import discover_compose_dirs # noqa: PLC0415
console.print(f"\n[dim]Scanning {compose_dir}...[/dim]")
stacks = discover_compose_dirs(compose_dir)
if not stacks:
print_error(f"No compose files found in {compose_dir}")
console.print("[dim]Each stack should be in a subdirectory with a compose.yaml file.[/dim]")
return None
console.print(f"[green]Found {len(stacks)} stack(s)[/green]")
# Step 3: Interactive selection
selected_stacks = _interactive_stack_selection(stacks)
if not selected_stacks:
console.print("\n[yellow]No stacks selected.[/yellow]")
return None
# Step 4: Get hostname and address
default_hostname = socket.gethostname()
hostname = Prompt.ask("\nHost name", default=default_hostname)
host_address = Prompt.ask("Host address", default="localhost")
# Step 5: Generate config
console.print(f"\n[dim]Generating config with {len(selected_stacks)} stack(s)...[/dim]")
return _generate_discovered_config(compose_dir, hostname, host_address, selected_stacks)
def _report_missing_config(explicit_path: Path | None = None) -> None:
"""Report that a config file was not found."""
console.print("[yellow]Config file not found.[/yellow]")
@@ -202,15 +85,11 @@ def _report_missing_config(explicit_path: Path | None = None) -> None:
def config_init(
path: _PathOption = None,
force: _ForceOption = False,
discover: _DiscoverOption = False,
) -> None:
"""Create a new config file with documented example.
The generated config file serves as a template showing all available
options with explanatory comments.
Use --discover to auto-detect compose files and interactively select
which stacks to include.
"""
target_path = (path.expanduser().resolve() if path else None) or default_config_path()
@@ -222,17 +101,11 @@ def config_init(
console.print("[dim]Aborted.[/dim]")
raise typer.Exit(0)
if discover:
template_content = _run_discovery_flow()
if template_content is None:
raise typer.Exit(0)
else:
template_content = _generate_template()
# Create parent directories
target_path.parent.mkdir(parents=True, exist_ok=True)
# Write config file
# Generate and write template
template_content = _generate_template()
target_path.write_text(template_content, encoding="utf-8")
print_success(f"Config file created at: {target_path}")
@@ -420,115 +293,5 @@ def config_symlink(
console.print(f" -> {target_path}")
_ListOption = Annotated[
bool,
typer.Option("--list", "-l", help="List available example templates."),
]
@config_app.command("example")
def config_example(
name: Annotated[
str | None,
typer.Argument(help="Example template name (e.g., whoami, full)"),
] = None,
output_dir: Annotated[
Path | None,
typer.Option("--output", "-o", help="Output directory. Defaults to current directory."),
] = None,
list_examples: _ListOption = False,
force: _ForceOption = False,
) -> None:
"""Generate example stack files from built-in templates.
Examples:
cf config example --list # List available examples
cf config example whoami # Generate whoami stack in ./whoami/
cf config example full # Generate complete Traefik + whoami setup
cf config example nginx -o /opt/compose # Generate in specific directory
"""
from compose_farm.examples import ( # noqa: PLC0415
EXAMPLES,
SINGLE_STACK_EXAMPLES,
list_example_files,
)
# List mode
if list_examples:
console.print("[bold]Available example templates:[/bold]\n")
console.print("[dim]Single stack examples:[/dim]")
for example_name, description in SINGLE_STACK_EXAMPLES.items():
console.print(f" [cyan]{example_name}[/cyan] - {description}")
console.print()
console.print("[dim]Complete setup:[/dim]")
console.print(f" [cyan]full[/cyan] - {EXAMPLES['full']}")
console.print()
console.print("[dim]Usage: cf config example <name>[/dim]")
return
# Interactive selection if no name provided
if name is None:
from rich.prompt import Prompt # noqa: PLC0415
console.print("[bold]Available example templates:[/bold]\n")
example_names = list(EXAMPLES.keys())
for i, (example_name, description) in enumerate(EXAMPLES.items(), 1):
console.print(f" [{i}] [cyan]{example_name}[/cyan] - {description}")
console.print()
choice = Prompt.ask(
"Select example",
choices=[str(i) for i in range(1, len(example_names) + 1)] + example_names,
default="1",
)
# Handle numeric or name input
name = example_names[int(choice) - 1] if choice.isdigit() else choice
# Validate example name
if name not in EXAMPLES:
print_error(f"Unknown example: {name}")
console.print(f"Available examples: {', '.join(EXAMPLES.keys())}")
raise typer.Exit(1)
# Determine output directory
base_dir = (output_dir or Path.cwd()).expanduser().resolve()
# For 'full' example, use current dir; for single stacks, create subdir
target_dir = base_dir if name == "full" else base_dir / name
# Check for existing files
files = list_example_files(name)
existing_files = [f for f, _ in files if (target_dir / f).exists()]
if existing_files and not force:
console.print(f"[yellow]Files already exist in:[/yellow] {target_dir}")
console.print(f"[dim] {len(existing_files)} file(s) would be overwritten[/dim]")
if not typer.confirm("Overwrite existing files?"):
console.print("[dim]Aborted.[/dim]")
raise typer.Exit(0)
# Create directories and copy files
for rel_path, content in files:
file_path = target_dir / rel_path
file_path.parent.mkdir(parents=True, exist_ok=True)
file_path.write_text(content, encoding="utf-8")
console.print(f" [green]Created[/green] {file_path}")
print_success(f"Example '{name}' created at: {target_dir}")
# Show appropriate next steps
if name == "full":
console.print("\n[dim]Next steps:[/dim]")
console.print(f" 1. Edit [cyan]{target_dir}/compose-farm.yaml[/cyan] with your host IP")
console.print(" 2. Edit [cyan].env[/cyan] files with your domain")
console.print(" 3. Create Docker network: [cyan]docker network create mynetwork[/cyan]")
console.print(" 4. Deploy: [cyan]cf up traefik whoami[/cyan]")
else:
console.print("\n[dim]Next steps:[/dim]")
console.print(f" 1. Edit [cyan]{target_dir}/.env[/cyan] with your settings")
console.print(f" 2. Add to compose-farm.yaml: [cyan]{name}: <hostname>[/cyan]")
console.print(f" 3. Deploy with: [cyan]cf up {name}[/cyan]")
# Register config subcommand on the shared app
app.add_typer(config_app, name="config", rich_help_panel="Configuration")


@@ -3,13 +3,10 @@
from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING, Annotated
from typing import Annotated
import typer
if TYPE_CHECKING:
from compose_farm.config import Config
from compose_farm.cli.app import app
from compose_farm.cli.common import (
AllOption,
@@ -26,14 +23,9 @@ from compose_farm.cli.common import (
validate_host_for_stack,
validate_stacks,
)
from compose_farm.cli.management import _discover_stacks_full
from compose_farm.console import MSG_DRY_RUN, console, print_error, print_success
from compose_farm.executor import run_compose_on_host, run_on_stacks, run_sequential_on_stacks
from compose_farm.operations import (
stop_orphaned_stacks,
stop_stray_stacks,
up_stacks,
)
from compose_farm.operations import stop_orphaned_stacks, up_stacks
from compose_farm.state import (
get_orphaned_stacks,
get_stack_host,
@@ -216,23 +208,8 @@ def update(
report_results(results)
def _discover_strays(cfg: Config) -> dict[str, list[str]]:
"""Discover stacks running on unauthorized hosts by scanning all hosts."""
_, strays, duplicates = _discover_stacks_full(cfg)
# Merge duplicates into strays (for single-host stacks on multiple hosts,
# keep correct host and stop others)
for stack, running_hosts in duplicates.items():
configured = cfg.get_hosts(stack)[0]
stray_hosts = [h for h in running_hosts if h != configured]
if stray_hosts:
strays[stack] = stray_hosts
return strays
@app.command(rich_help_panel="Lifecycle")
def apply( # noqa: C901, PLR0912, PLR0915 (multi-phase reconciliation needs these branches)
def apply( # noqa: PLR0912 (multi-phase reconciliation needs these branches)
dry_run: Annotated[
bool,
typer.Option("--dry-run", "-n", help="Show what would change without executing"),
@@ -241,29 +218,23 @@ def apply( # noqa: C901, PLR0912, PLR0915 (multi-phase reconciliation needs the
bool,
typer.Option("--no-orphans", help="Only migrate, don't stop orphaned stacks"),
] = False,
no_strays: Annotated[
bool,
typer.Option("--no-strays", help="Don't stop stray stacks (running on wrong host)"),
] = False,
full: Annotated[
bool,
typer.Option("--full", "-f", help="Also run up on all stacks to apply config changes"),
] = False,
config: ConfigOption = None,
) -> None:
"""Make reality match config (start, migrate, stop strays/orphans as needed).
"""Make reality match config (start, migrate, stop as needed).
This is the "reconcile" command that ensures running stacks match your
config file. It will:
1. Stop orphaned stacks (in state but removed from config)
2. Stop stray stacks (running on unauthorized hosts)
3. Migrate stacks on wrong host (host in state ≠ host in config)
4. Start missing stacks (in config but not in state)
2. Migrate stacks on wrong host (host in state ≠ host in config)
3. Start missing stacks (in config but not in state)
Use --dry-run to preview changes before applying.
Use --no-orphans to skip stopping orphaned stacks.
Use --no-strays to skip stopping stray stacks.
Use --no-orphans to only migrate/start without stopping orphaned stacks.
Use --full to also run 'up' on all stacks (picks up compose/env changes).
"""
cfg = load_config_or_exit(config)
@@ -271,28 +242,16 @@ def apply( # noqa: C901, PLR0912, PLR0915 (multi-phase reconciliation needs the
migrations = get_stacks_needing_migration(cfg)
missing = get_stacks_not_in_state(cfg)
strays: dict[str, list[str]] = {}
if not no_strays:
console.print("[dim]Scanning hosts for stray containers...[/]")
strays = _discover_strays(cfg)
# For --full: refresh all stacks not already being started/migrated
handled = set(migrations) | set(missing)
to_refresh = [stack for stack in cfg.stacks if stack not in handled] if full else []
has_orphans = bool(orphaned) and not no_orphans
has_strays = bool(strays)
has_migrations = bool(migrations)
has_missing = bool(missing)
has_refresh = bool(to_refresh)
if (
not has_orphans
and not has_strays
and not has_migrations
and not has_missing
and not has_refresh
):
if not has_orphans and not has_migrations and not has_missing and not has_refresh:
print_success("Nothing to apply - reality matches config")
return
@@ -301,14 +260,6 @@ def apply( # noqa: C901, PLR0912, PLR0915 (multi-phase reconciliation needs the
console.print(f"[yellow]Orphaned stacks to stop ({len(orphaned)}):[/]")
for svc, hosts in orphaned.items():
console.print(f" [cyan]{svc}[/] on [magenta]{format_host(hosts)}[/]")
if has_strays:
console.print(f"[red]Stray stacks to stop ({len(strays)}):[/]")
for stack, hosts in strays.items():
configured = cfg.get_hosts(stack)
console.print(
f" [cyan]{stack}[/] on [magenta]{', '.join(hosts)}[/] "
f"[dim](should be on {', '.join(configured)})[/]"
)
if has_migrations:
console.print(f"[cyan]Stacks to migrate ({len(migrations)}):[/]")
for stack in migrations:
@@ -337,26 +288,21 @@ def apply( # noqa: C901, PLR0912, PLR0915 (multi-phase reconciliation needs the
console.print("[yellow]Stopping orphaned stacks...[/]")
all_results.extend(run_async(stop_orphaned_stacks(cfg)))
# 2. Stop stray stacks (running on unauthorized hosts)
if has_strays:
console.print("[red]Stopping stray stacks...[/]")
all_results.extend(run_async(stop_stray_stacks(cfg, strays)))
# 3. Migrate stacks on wrong host
# 2. Migrate stacks on wrong host
if has_migrations:
console.print("[cyan]Migrating stacks...[/]")
migrate_results = run_async(up_stacks(cfg, migrations, raw=True))
all_results.extend(migrate_results)
maybe_regenerate_traefik(cfg, migrate_results)
# 4. Start missing stacks (reuse up_stacks which handles state updates)
# 3. Start missing stacks (reuse up_stacks which handles state updates)
if has_missing:
console.print("[green]Starting missing stacks...[/]")
start_results = run_async(up_stacks(cfg, missing, raw=True))
all_results.extend(start_results)
maybe_regenerate_traefik(cfg, start_results)
# 5. Refresh remaining stacks (--full: run up to apply config changes)
# 4. Refresh remaining stacks (--full: run up to apply config changes)
if has_refresh:
console.print("[blue]Refreshing stacks...[/]")
refresh_results = run_async(up_stacks(cfg, to_refresh, raw=True))


@@ -50,11 +50,9 @@ from compose_farm.logs import (
write_toml,
)
from compose_farm.operations import (
StackDiscoveryResult,
check_host_compatibility,
check_stack_requirements,
discover_stack_host,
discover_stack_on_all_hosts,
)
from compose_farm.state import get_orphaned_stacks, load_state, save_state
from compose_farm.traefik import generate_traefik_config, render_traefik_config
@@ -149,80 +147,6 @@ def _report_sync_changes(
console.print(f" [red]-[/] [cyan]{stack}[/] (was on [magenta]{host_str}[/])")
def _discover_stacks_full(
cfg: Config,
stacks: list[str] | None = None,
) -> tuple[dict[str, str | list[str]], dict[str, list[str]], dict[str, list[str]]]:
"""Discover running stacks with full host scanning for stray detection.
Returns:
Tuple of (discovered, strays, duplicates):
- discovered: stack -> host(s) where running correctly
- strays: stack -> list of unauthorized hosts
- duplicates: stack -> list of all hosts (for single-host stacks on multiple)
"""
stack_list = stacks if stacks is not None else list(cfg.stacks)
results: list[StackDiscoveryResult] = run_parallel_with_progress(
"Discovering",
stack_list,
lambda s: discover_stack_on_all_hosts(cfg, s),
)
discovered: dict[str, str | list[str]] = {}
strays: dict[str, list[str]] = {}
duplicates: dict[str, list[str]] = {}
for result in results:
correct_hosts = [h for h in result.running_hosts if h in result.configured_hosts]
if correct_hosts:
if result.is_multi_host:
discovered[result.stack] = correct_hosts
else:
discovered[result.stack] = correct_hosts[0]
if result.is_stray:
strays[result.stack] = result.stray_hosts
if result.is_duplicate:
duplicates[result.stack] = result.running_hosts
return discovered, strays, duplicates
def _report_stray_stacks(
strays: dict[str, list[str]],
cfg: Config,
) -> None:
"""Report stacks running on unauthorized hosts."""
if strays:
console.print(f"\n[red]Stray stacks[/] (running on wrong host, {len(strays)}):")
console.print("[dim]Run [bold]cf apply[/bold] to stop them.[/]")
for stack in sorted(strays):
stray_hosts = strays[stack]
configured = cfg.get_hosts(stack)
console.print(
f" [red]![/] [cyan]{stack}[/] on [magenta]{', '.join(stray_hosts)}[/] "
f"[dim](should be on {', '.join(configured)})[/]"
)
def _report_duplicate_stacks(duplicates: dict[str, list[str]], cfg: Config) -> None:
"""Report single-host stacks running on multiple hosts."""
if duplicates:
console.print(
f"\n[yellow]Duplicate stacks[/] (running on multiple hosts, {len(duplicates)}):"
)
console.print("[dim]Run [bold]cf apply[/bold] to stop extras.[/]")
for stack in sorted(duplicates):
hosts = duplicates[stack]
configured = cfg.get_hosts(stack)[0]
console.print(
f" [yellow]![/] [cyan]{stack}[/] on [magenta]{', '.join(hosts)}[/] "
f"[dim](should only be on {configured})[/]"
)
# --- Check helpers ---
@@ -516,7 +440,7 @@ def refresh(
current_state = load_state(cfg)
discovered, strays, duplicates = _discover_stacks_full(cfg, stack_list)
discovered = _discover_stacks(cfg, stack_list)
# Calculate changes (only for the stacks we're refreshing)
added = [s for s in discovered if s not in current_state]
@@ -539,9 +463,6 @@ def refresh(
else:
print_success("State is already in sync.")
_report_stray_stacks(strays, cfg)
_report_duplicate_stacks(duplicates, cfg)
if dry_run:
console.print(f"\n{MSG_DRY_RUN}")
return


@@ -15,17 +15,6 @@ from .paths import config_search_paths, find_config_path
COMPOSE_FILENAMES = ("compose.yaml", "compose.yml", "docker-compose.yml", "docker-compose.yaml")
def discover_compose_dirs(compose_dir: Path) -> list[str]:
"""Find all directories in compose_dir that contain a compose file."""
if not compose_dir.exists():
return []
return sorted(
subdir.name
for subdir in compose_dir.iterdir()
if subdir.is_dir() and any((subdir / f).exists() for f in COMPOSE_FILENAMES)
)
class Host(BaseModel, extra="forbid"):
"""SSH host configuration."""
@@ -116,7 +105,13 @@ class Config(BaseModel, extra="forbid"):
def discover_compose_dirs(self) -> set[str]:
"""Find all directories in compose_dir that contain a compose file."""
return set(discover_compose_dirs(self.compose_dir))
found: set[str] = set()
if not self.compose_dir.exists():
return found
for subdir in self.compose_dir.iterdir():
if subdir.is_dir() and any((subdir / f).exists() for f in COMPOSE_FILENAMES):
found.add(subdir.name)
return found
def _parse_hosts(raw_hosts: dict[str, Any]) -> dict[str, Host]:


@@ -1,41 +0,0 @@
"""Example stack templates for compose-farm."""
from __future__ import annotations
from importlib import resources
from pathlib import Path
# All available examples: name -> description
# "full" is special: multi-stack setup with config file
EXAMPLES = {
"whoami": "Simple HTTP service that returns hostname (great for testing Traefik)",
"nginx": "Basic nginx web server with static files",
"postgres": "PostgreSQL database with persistent volume",
"full": "Complete setup with Traefik + whoami (includes compose-farm.yaml)",
}
# Examples that are single stacks (everything except "full")
SINGLE_STACK_EXAMPLES = {k: v for k, v in EXAMPLES.items() if k != "full"}
def list_example_files(name: str) -> list[tuple[str, str]]:
"""List files in an example template, returning (relative_path, content) tuples."""
if name not in EXAMPLES:
msg = f"Unknown example: {name}. Available: {', '.join(EXAMPLES.keys())}"
raise ValueError(msg)
example_dir = resources.files("compose_farm.examples") / name
example_path = Path(str(example_dir))
files: list[tuple[str, str]] = []
def walk_dir(directory: Path, prefix: str = "") -> None:
for item in sorted(directory.iterdir()):
rel_path = f"{prefix}{item.name}" if prefix else item.name
if item.is_file():
content = item.read_text(encoding="utf-8")
files.append((rel_path, content))
elif item.is_dir() and not item.name.startswith("__"):
walk_dir(item, f"{rel_path}/")
walk_dir(example_path)
return files


@@ -1,41 +0,0 @@
# Compose Farm Full Example
A complete starter setup with Traefik reverse proxy and a test service.
## Quick Start
1. **Create the Docker network** (once per host):
```bash
docker network create --subnet=172.20.0.0/16 --gateway=172.20.0.1 mynetwork
```
2. **Create data directory for Traefik**:
```bash
mkdir -p /mnt/data/traefik
```
3. **Edit configuration**:
- Update `compose-farm.yaml` with your host IP
- Update `.env` files with your domain
4. **Start the stacks**:
```bash
cf up traefik whoami
```
5. **Test**:
- Dashboard: http://localhost:8080
- Whoami: Add `whoami.example.com` to /etc/hosts pointing to your host
## Files
```
full/
├── compose-farm.yaml # Compose Farm config
├── traefik/
│ ├── compose.yaml # Traefik reverse proxy
│ └── .env
└── whoami/
├── compose.yaml # Test HTTP service
└── .env
```


@@ -1,17 +0,0 @@
# Compose Farm configuration
# Edit the host address to match your setup
compose_dir: .
hosts:
local: localhost # For remote hosts, use: myhost: 192.168.1.100
stacks:
traefik: local
whoami: local
nginx: local
postgres: local
# Traefik file-provider integration (optional)
# traefik_file: ./traefik/dynamic.d/compose-farm.yml
traefik_stack: traefik


@@ -1,2 +0,0 @@
# Environment variables for nginx stack
DOMAIN=example.com


@@ -1,26 +0,0 @@
# Nginx - Basic web server
services:
nginx:
image: nginx:alpine
container_name: cf-nginx
user: "1000:1000"
networks:
- mynetwork
volumes:
- /mnt/data/nginx/html:/usr/share/nginx/html:ro
ports:
- "9082:80" # Use 80:80 or 8080:80 in production
restart: unless-stopped
labels:
- traefik.enable=true
- traefik.http.routers.nginx.rule=Host(`nginx.${DOMAIN}`)
- traefik.http.routers.nginx.entrypoints=websecure
- traefik.http.routers.nginx-local.rule=Host(`nginx.local`)
- traefik.http.routers.nginx-local.entrypoints=web
- traefik.http.services.nginx.loadbalancer.server.port=80
- kuma.nginx.http.name=Nginx
- kuma.nginx.http.url=https://nginx.${DOMAIN}
networks:
mynetwork:
external: true


@@ -1,5 +0,0 @@
# Environment variables for postgres stack
# IMPORTANT: Change these values before deploying!
POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme
POSTGRES_DB=myapp


@@ -1,26 +0,0 @@
# PostgreSQL - Database with persistent storage
services:
postgres:
image: postgres:16-alpine
container_name: cf-postgres
networks:
- mynetwork
environment:
- POSTGRES_USER=${POSTGRES_USER:-postgres}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}
- POSTGRES_DB=${POSTGRES_DB:-postgres}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- /mnt/data/postgres:/var/lib/postgresql/data
ports:
- "5433:5432" # Use 5432:5432 in production
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
interval: 30s
timeout: 10s
retries: 3
networks:
mynetwork:
external: true


@@ -1 +0,0 @@
DOMAIN=example.com


@@ -1,37 +0,0 @@
# Traefik - Reverse proxy and load balancer
services:
traefik:
image: traefik:v2.11
container_name: cf-traefik
networks:
- mynetwork
ports:
- "9080:80" # HTTP (use 80:80 in production)
- "9443:443" # HTTPS (use 443:443 in production)
- "9081:8080" # Dashboard - remove in production
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /mnt/data/traefik:/etc/traefik
command:
- --api.dashboard=true
- --api.insecure=true
- --providers.docker=true
- --providers.docker.exposedbydefault=false
- --providers.docker.network=mynetwork
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --log.level=INFO
labels:
- traefik.enable=true
- traefik.http.routers.traefik.rule=Host(`traefik.${DOMAIN}`)
- traefik.http.routers.traefik.entrypoints=websecure
- traefik.http.routers.traefik-local.rule=Host(`traefik.local`)
- traefik.http.routers.traefik-local.entrypoints=web
- traefik.http.services.traefik.loadbalancer.server.port=8080
- kuma.traefik.http.name=Traefik
- kuma.traefik.http.url=https://traefik.${DOMAIN}
restart: unless-stopped
networks:
mynetwork:
external: true


@@ -1 +0,0 @@
DOMAIN=example.com


@@ -1,23 +0,0 @@
# Whoami - Test service for Traefik routing
services:
whoami:
image: traefik/whoami:latest
container_name: cf-whoami
networks:
- mynetwork
ports:
- "9000:80"
restart: unless-stopped
labels:
- traefik.enable=true
- traefik.http.routers.whoami.rule=Host(`whoami.${DOMAIN}`)
- traefik.http.routers.whoami.entrypoints=websecure
- traefik.http.routers.whoami-local.rule=Host(`whoami.local`)
- traefik.http.routers.whoami-local.entrypoints=web
- traefik.http.services.whoami.loadbalancer.server.port=80
- kuma.whoami.http.name=Whoami
- kuma.whoami.http.url=https://whoami.${DOMAIN}
networks:
mynetwork:
external: true


@@ -1,2 +0,0 @@
# Environment variables for nginx stack
DOMAIN=example.com


@@ -1,26 +0,0 @@
# Nginx - Basic web server
services:
nginx:
image: nginx:alpine
container_name: cf-nginx
user: "1000:1000"
networks:
- mynetwork
volumes:
- /mnt/data/nginx/html:/usr/share/nginx/html:ro
ports:
- "9082:80"
restart: unless-stopped
labels:
- traefik.enable=true
- traefik.http.routers.nginx.rule=Host(`nginx.${DOMAIN}`)
- traefik.http.routers.nginx.entrypoints=websecure
- traefik.http.routers.nginx-local.rule=Host(`nginx.local`)
- traefik.http.routers.nginx-local.entrypoints=web
- traefik.http.services.nginx.loadbalancer.server.port=80
- kuma.nginx.http.name=Nginx
- kuma.nginx.http.url=https://nginx.${DOMAIN}
networks:
mynetwork:
external: true


@@ -1,5 +0,0 @@
# Environment variables for postgres stack
# IMPORTANT: Change these values before deploying!
POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme
POSTGRES_DB=myapp


@@ -1,26 +0,0 @@
# PostgreSQL - Database with persistent storage
services:
postgres:
image: postgres:16-alpine
container_name: cf-postgres
networks:
- mynetwork
environment:
- POSTGRES_USER=${POSTGRES_USER:-postgres}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}
- POSTGRES_DB=${POSTGRES_DB:-postgres}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- /mnt/data/postgres:/var/lib/postgresql/data
ports:
- "5433:5432"
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
interval: 30s
timeout: 10s
retries: 3
networks:
mynetwork:
external: true


@@ -1,2 +0,0 @@
# Environment variables for whoami stack
DOMAIN=example.com


@@ -1,24 +0,0 @@
# Whoami - Simple HTTP service for testing
# Returns the container hostname - useful for testing load balancers and Traefik
services:
whoami:
image: traefik/whoami:latest
container_name: cf-whoami
networks:
- mynetwork
ports:
- "9000:80"
restart: unless-stopped
labels:
- traefik.enable=true
- traefik.http.routers.whoami.rule=Host(`whoami.${DOMAIN}`)
- traefik.http.routers.whoami.entrypoints=websecure
- traefik.http.routers.whoami-local.rule=Host(`whoami.local`)
- traefik.http.routers.whoami-local.entrypoints=web
- traefik.http.services.whoami.loadbalancer.server.port=80
- kuma.whoami.http.name=Whoami
- kuma.whoami.http.url=https://whoami.${DOMAIN}
networks:
mynetwork:
external: true


@@ -101,58 +101,6 @@ async def discover_stack_host(cfg: Config, stack: str) -> tuple[str, str | list[
return stack, None
class StackDiscoveryResult(NamedTuple):
"""Result of discovering where a stack is running across all hosts."""
stack: str
configured_hosts: list[str] # From config (where it SHOULD run)
running_hosts: list[str] # From reality (where it IS running)
@property
def is_multi_host(self) -> bool:
"""Check if this is a multi-host stack."""
return len(self.configured_hosts) > 1
@property
def stray_hosts(self) -> list[str]:
"""Hosts where stack is running but shouldn't be."""
return [h for h in self.running_hosts if h not in self.configured_hosts]
@property
def missing_hosts(self) -> list[str]:
"""Hosts where stack should be running but isn't."""
return [h for h in self.configured_hosts if h not in self.running_hosts]
@property
def is_stray(self) -> bool:
"""Stack is running on unauthorized host(s)."""
return len(self.stray_hosts) > 0
@property
def is_duplicate(self) -> bool:
"""Single-host stack running on multiple hosts."""
return not self.is_multi_host and len(self.running_hosts) > 1
async def discover_stack_on_all_hosts(cfg: Config, stack: str) -> StackDiscoveryResult:
"""Discover where a stack is running across ALL hosts.
Unlike discover_stack_host(), this checks every host in parallel
to detect strays and duplicates.
"""
configured_hosts = cfg.get_hosts(stack)
all_hosts = list(cfg.hosts.keys())
checks = await asyncio.gather(*[check_stack_running(cfg, stack, h) for h in all_hosts])
running_hosts = [h for h, is_running in zip(all_hosts, checks, strict=True) if is_running]
return StackDiscoveryResult(
stack=stack,
configured_hosts=configured_hosts,
running_hosts=running_hosts,
)
async def check_stack_requirements(
cfg: Config,
stack: str,
@@ -411,33 +359,26 @@ async def check_host_compatibility(
return results
async def _stop_stacks_on_hosts(
cfg: Config,
stacks_to_hosts: dict[str, list[str]],
label: str = "",
) -> list[CommandResult]:
"""Stop stacks on specific hosts.
async def stop_orphaned_stacks(cfg: Config) -> list[CommandResult]:
"""Stop orphaned stacks (in state but not in config).
Shared helper for stop_orphaned_stacks and stop_stray_stacks.
Args:
cfg: Config object.
stacks_to_hosts: Dict mapping stack name to list of hosts to stop on.
label: Optional label for success message (e.g., "stray", "orphaned").
Returns:
List of CommandResults for each stack@host.
Runs docker compose down on each stack on its tracked host(s).
Only removes from state on successful stop.
Returns list of CommandResults for each stack@host.
"""
if not stacks_to_hosts:
orphaned = get_orphaned_stacks(cfg)
if not orphaned:
return []
results: list[CommandResult] = []
tasks: list[tuple[str, str, asyncio.Task[CommandResult]]] = []
suffix = f" ({label})" if label else ""
for stack, hosts in stacks_to_hosts.items():
for host in hosts:
# Build list of (stack, host, task) for all orphaned stacks
for stack, hosts in orphaned.items():
host_list = hosts if isinstance(hosts, list) else [hosts]
for host in host_list:
# Skip hosts no longer in config
if host not in cfg.hosts:
print_warning(f"{stack}@{host}: host no longer in config, skipping")
results.append(
@@ -452,48 +393,30 @@ async def _stop_stacks_on_hosts(
coro = run_compose_on_host(cfg, stack, host, "down")
tasks.append((stack, host, asyncio.create_task(coro)))
for stack, host, task in tasks:
try:
result = await task
results.append(result)
if result.success:
print_success(f"{stack}@{host}: stopped{suffix}")
else:
print_error(f"{stack}@{host}: {result.stderr or 'failed'}")
except Exception as e:
print_error(f"{stack}@{host}: {e}")
results.append(
CommandResult(
stack=f"{stack}@{host}",
exit_code=1,
success=False,
stderr=str(e),
# Run all down commands in parallel
if tasks:
for stack, host, task in tasks:
try:
result = await task
results.append(result)
if result.success:
print_success(f"{stack}@{host}: stopped")
else:
print_error(f"{stack}@{host}: {result.stderr or 'failed'}")
except Exception as e:
print_error(f"{stack}@{host}: {e}")
results.append(
CommandResult(
stack=f"{stack}@{host}",
exit_code=1,
success=False,
stderr=str(e),
)
)
)
return results
async def stop_orphaned_stacks(cfg: Config) -> list[CommandResult]:
"""Stop orphaned stacks (in state but not in config).
Runs docker compose down on each stack on its tracked host(s).
Only removes from state on successful stop.
Returns list of CommandResults for each stack@host.
"""
orphaned = get_orphaned_stacks(cfg)
if not orphaned:
return []
normalized: dict[str, list[str]] = {
stack: (hosts if isinstance(hosts, list) else [hosts]) for stack, hosts in orphaned.items()
}
results = await _stop_stacks_on_hosts(cfg, normalized)
# Remove from state only for stacks where ALL hosts succeeded
for stack in normalized:
for stack, hosts in orphaned.items():
host_list = hosts if isinstance(hosts, list) else [hosts]
all_succeeded = all(
r.success for r in results if r.stack.startswith(f"{stack}@") or r.stack == stack
)
@@ -501,20 +424,3 @@ async def stop_orphaned_stacks(cfg: Config) -> list[CommandResult]:
remove_stack(cfg, stack)
return results
async def stop_stray_stacks(
cfg: Config,
strays: dict[str, list[str]],
) -> list[CommandResult]:
"""Stop stacks running on unauthorized hosts.
Args:
cfg: Config object.
strays: Dict mapping stack name to list of stray hosts.
Returns:
List of CommandResults for each stack@host stopped.
"""
return await _stop_stacks_on_hosts(cfg, strays, label="stray")


@@ -11,19 +11,9 @@ def xdg_config_home() -> Path:
return Path(os.environ.get("XDG_CONFIG_HOME", Path.home() / ".config"))
def config_dir() -> Path:
"""Get the compose-farm config directory."""
return xdg_config_home() / "compose-farm"
def default_config_path() -> Path:
"""Get the default user config path."""
return config_dir() / "compose-farm.yaml"
def backup_dir() -> Path:
"""Get the backup directory for file edits."""
return config_dir() / "backups"
return xdg_config_home() / "compose-farm" / "compose-farm.yaml"
def config_search_paths() -> list[Path]:


@@ -21,7 +21,7 @@ from fastapi.responses import HTMLResponse
from compose_farm.compose import get_container_name
from compose_farm.executor import is_local, run_compose_on_host, ssh_connect_kwargs
from compose_farm.paths import backup_dir, find_config_path
from compose_farm.paths import find_config_path
from compose_farm.state import load_state
from compose_farm.web.deps import get_config, get_templates
@@ -41,30 +41,26 @@ def _validate_yaml(content: str) -> None:
def _backup_file(file_path: Path) -> Path | None:
"""Create a timestamped backup of a file if it exists and content differs.
Backups are stored in XDG config dir under compose-farm/backups/.
The original file's absolute path is mirrored in the backup directory.
Backups are stored in a .backups directory alongside the file.
Returns the backup path if created, None if no backup was needed.
"""
if not file_path.exists():
return None
# Create backup directory mirroring original path structure
# e.g., /opt/stacks/plex/compose.yaml -> ~/.config/compose-farm/backups/opt/stacks/plex/
# On Windows: C:\Users\foo\stacks -> backups/Users/foo/stacks
resolved = file_path.resolve()
file_backup_dir = backup_dir() / resolved.parent.relative_to(resolved.anchor)
file_backup_dir.mkdir(parents=True, exist_ok=True)
# Create backup directory
backup_dir = file_path.parent / ".backups"
backup_dir.mkdir(exist_ok=True)
# Generate timestamped backup filename
timestamp = datetime.now(tz=UTC).strftime("%Y%m%d_%H%M%S")
backup_name = f"{file_path.name}.{timestamp}"
backup_path = file_backup_dir / backup_name
backup_path = backup_dir / backup_name
# Copy current content to backup
backup_path.write_text(file_path.read_text())
# Clean up old backups (keep last 200)
backups = sorted(file_backup_dir.glob(f"{file_path.name}.*"), reverse=True)
backups = sorted(backup_dir.glob(f"{file_path.name}.*"), reverse=True)
for old_backup in backups[200:]:
old_backup.unlink()


@@ -91,8 +91,8 @@ async def index(request: Request) -> HTMLResponse:
# Get state
deployed = load_state(config)
# Stats (only count stacks that are both in config AND deployed)
running_count = sum(1 for stack in deployed if stack in config.stacks)
# Stats
running_count = len(deployed)
stopped_count = len(config.stacks) - running_count
# Pending operations
@@ -250,8 +250,7 @@ async def stats_partial(request: Request) -> HTMLResponse:
templates = get_templates()
deployed = load_state(config)
# Only count stacks that are both in config AND deployed
running_count = sum(1 for stack in deployed if stack in config.stacks)
running_count = len(deployed)
stopped_count = len(config.stacks) - running_count
return templates.TemplateResponse(


@@ -223,9 +223,7 @@ function initExecTerminal(stack, container, host) {
return;
}
// Unhide the terminal container first, then expand/scroll
containerEl.classList.remove('hidden');
expandCollapse(document.getElementById('exec-collapse'), containerEl);
// Clean up existing (use wrapper's dispose to clean up ResizeObserver)
if (execWs) { execWs.close(); execWs = null; }
@@ -261,42 +259,17 @@ function initExecTerminal(stack, container, host) {
window.initExecTerminal = initExecTerminal;
/**
* Expand a collapse component and scroll to a target element
* @param {HTMLInputElement} toggle - The checkbox input that controls the collapse
* @param {HTMLElement} [scrollTarget] - Element to scroll to (defaults to collapse container)
*/
function expandCollapse(toggle, scrollTarget = null) {
if (!toggle) return;
// Find the parent collapse container
const collapse = toggle.closest('.collapse');
if (!collapse) return;
const target = scrollTarget || collapse;
const scrollToTarget = () => {
target.scrollIntoView({ behavior: 'smooth', block: 'start' });
};
if (!toggle.checked) {
// Collapsed - expand first, then scroll after transition
const onTransitionEnd = () => {
collapse.removeEventListener('transitionend', onTransitionEnd);
scrollToTarget();
};
collapse.addEventListener('transitionend', onTransitionEnd);
toggle.checked = true;
} else {
// Already expanded - just scroll
scrollToTarget();
}
}
/**
* Expand terminal collapse and scroll to it
*/
function expandTerminal() {
expandCollapse(document.getElementById('terminal-toggle'));
const toggle = document.getElementById('terminal-toggle');
if (toggle) toggle.checked = true;
const collapse = document.getElementById('terminal-collapse');
if (collapse) {
collapse.scrollIntoView({ behavior: 'smooth', block: 'start' });
}
}
/**
@@ -558,7 +531,6 @@ function playFabIntro() {
}
htmx.ajax('GET', url, {target: '#main-content', select: '#main-content', swap: 'outerHTML'}).then(() => {
history.pushState({}, '', url);
window.scrollTo(0, 0);
afterNav?.();
});
};
@@ -567,7 +539,6 @@ function playFabIntro() {
if (window.location.pathname !== '/') {
await htmx.ajax('GET', '/', {target: '#main-content', select: '#main-content', swap: 'outerHTML'});
history.pushState({}, '', '/');
window.scrollTo(0, 0);
}
htmx.ajax('POST', `/api/${endpoint}`, {swap: 'none'});
};
@@ -610,7 +581,6 @@ function playFabIntro() {
cmd('app', 'Console', 'Go to console', nav('/console'), icons.terminal),
cmd('app', 'Edit Config', 'Edit compose-farm.yaml', nav('/console#editor'), icons.file_code),
cmd('app', 'Docs', 'Open documentation', openExternal('https://compose-farm.nijho.lt/'), icons.book_open),
cmd('app', 'Repo', 'Open GitHub repository', openExternal('https://github.com/basnijholt/compose-farm'), icons.external_link),
];
// Add stack-specific actions if on a stack page


@@ -48,7 +48,7 @@
<div class="drawer-side">
<label for="drawer-toggle" class="drawer-overlay" aria-label="close sidebar"></label>
<aside id="sidebar" class="w-64 bg-base-100 border-r border-base-300 flex flex-col min-h-screen">
<header class="p-4 border-b border-base-300">
<header class="p-4 border-b border-base-300 relative z-10">
<h2 class="text-lg font-semibold flex items-center gap-2">
<span class="rainbow-hover">Compose Farm</span>
<div class="tooltip tooltip-bottom" data-tip="Docs">


@@ -37,7 +37,7 @@ def _parse_resize(msg: str) -> tuple[int, int] | None:
"""Parse a resize message, return (cols, rows) or None if not a resize."""
try:
data = json.loads(msg)
if isinstance(data, dict) and data.get("type") == "resize":
if data.get("type") == "resize":
return int(data["cols"]), int(data["rows"])
except (json.JSONDecodeError, KeyError, TypeError, ValueError):
pass


@@ -58,9 +58,8 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.get_orphaned_stacks", return_value={}),
patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
):
apply(dry_run=False, no_orphans=False, no_strays=False, full=False, config=None)
apply(dry_run=False, no_orphans=False, full=False, config=None)
captured = capsys.readouterr()
assert "Nothing to apply" in captured.out
@@ -83,11 +82,10 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stack_host", return_value="host1"),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch("compose_farm.cli.lifecycle.stop_orphaned_stacks") as mock_stop,
patch("compose_farm.cli.lifecycle.up_stacks") as mock_up,
):
apply(dry_run=True, no_orphans=False, no_strays=False, full=False, config=None)
apply(dry_run=True, no_orphans=False, full=False, config=None)
captured = capsys.readouterr()
assert "Stacks to migrate" in captured.out
@@ -114,7 +112,6 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stack_host", return_value="host1"),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -123,7 +120,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, no_strays=False, full=False, config=None)
apply(dry_run=False, no_orphans=False, full=False, config=None)
mock_up.assert_called_once()
call_args = mock_up.call_args
@@ -142,7 +139,6 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -150,7 +146,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.stop_orphaned_stacks") as mock_stop,
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, no_strays=False, full=False, config=None)
apply(dry_run=False, no_orphans=False, full=False, config=None)
mock_stop.assert_called_once_with(cfg)
@@ -173,7 +169,6 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stack_host", return_value="host1"),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -183,7 +178,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=True, no_strays=False, full=False, config=None)
apply(dry_run=False, no_orphans=True, full=False, config=None)
# Should run migrations but not orphan cleanup
mock_up.assert_called_once()
@@ -207,9 +202,8 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
):
apply(dry_run=False, no_orphans=True, no_strays=False, full=False, config=None)
apply(dry_run=False, no_orphans=True, full=False, config=None)
captured = capsys.readouterr()
assert "Nothing to apply" in captured.out
@@ -227,7 +221,6 @@ class TestApplyCommand:
"compose_farm.cli.lifecycle.get_stacks_not_in_state",
return_value=["svc1"],
),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -236,7 +229,7 @@ class TestApplyCommand:
             patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
             patch("compose_farm.cli.lifecycle.report_results"),
         ):
-            apply(dry_run=False, no_orphans=False, no_strays=False, full=False, config=None)
+            apply(dry_run=False, no_orphans=False, full=False, config=None)
         mock_up.assert_called_once()
         call_args = mock_up.call_args
@@ -256,9 +249,8 @@ class TestApplyCommand:
                 "compose_farm.cli.lifecycle.get_stacks_not_in_state",
                 return_value=["svc1"],
             ),
-            patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
         ):
-            apply(dry_run=True, no_orphans=False, no_strays=False, full=False, config=None)
+            apply(dry_run=True, no_orphans=False, full=False, config=None)
         captured = capsys.readouterr()
         assert "Stacks to start" in captured.out
@@ -275,7 +267,6 @@ class TestApplyCommand:
             patch("compose_farm.cli.lifecycle.get_orphaned_stacks", return_value={}),
             patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
             patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
-            patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
             patch(
                 "compose_farm.cli.lifecycle.run_async",
                 return_value=mock_results,
@@ -284,7 +275,7 @@ class TestApplyCommand:
             patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
             patch("compose_farm.cli.lifecycle.report_results"),
         ):
-            apply(dry_run=False, no_orphans=False, no_strays=False, full=True, config=None)
+            apply(dry_run=False, no_orphans=False, full=True, config=None)
         mock_up.assert_called_once()
         call_args = mock_up.call_args
@@ -302,9 +293,8 @@ class TestApplyCommand:
             patch("compose_farm.cli.lifecycle.get_orphaned_stacks", return_value={}),
             patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
             patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
-            patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
         ):
-            apply(dry_run=True, no_orphans=False, no_strays=False, full=True, config=None)
+            apply(dry_run=True, no_orphans=False, full=True, config=None)
         captured = capsys.readouterr()
         assert "Stacks to refresh" in captured.out
@@ -329,7 +319,6 @@ class TestApplyCommand:
                 return_value=["svc2"],
             ),
             patch("compose_farm.cli.lifecycle.get_stack_host", return_value="host2"),
-            patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
            patch(
                 "compose_farm.cli.lifecycle.run_async",
                 return_value=mock_results,
@@ -338,7 +327,7 @@ class TestApplyCommand:
             patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
             patch("compose_farm.cli.lifecycle.report_results"),
         ):
-            apply(dry_run=False, no_orphans=False, no_strays=False, full=True, config=None)
+            apply(dry_run=False, no_orphans=False, full=True, config=None)
         # up_stacks should be called 3 times: migrate, start, refresh
         assert mock_up.call_count == 3


@@ -228,99 +228,3 @@ class TestConfigValidate:
         # Error goes to stderr
         output = result.stdout + (result.stderr or "")
         assert "Config file not found" in output or "not found" in output.lower()
-class TestConfigExample:
-    """Tests for cf config example command."""
-    def test_example_list(self, runner: CliRunner) -> None:
-        result = runner.invoke(app, ["config", "example", "--list"])
-        assert result.exit_code == 0
-        assert "whoami" in result.stdout
-        assert "nginx" in result.stdout
-        assert "postgres" in result.stdout
-        assert "full" in result.stdout
-    def test_example_whoami(self, runner: CliRunner, tmp_path: Path) -> None:
-        result = runner.invoke(app, ["config", "example", "whoami", "-o", str(tmp_path)])
-        assert result.exit_code == 0
-        assert "Example 'whoami' created" in result.stdout
-        assert (tmp_path / "whoami" / "compose.yaml").exists()
-        assert (tmp_path / "whoami" / ".env").exists()
-    def test_example_full(self, runner: CliRunner, tmp_path: Path) -> None:
-        result = runner.invoke(app, ["config", "example", "full", "-o", str(tmp_path)])
-        assert result.exit_code == 0
-        assert "Example 'full' created" in result.stdout
-        assert (tmp_path / "compose-farm.yaml").exists()
-        assert (tmp_path / "traefik" / "compose.yaml").exists()
-        assert (tmp_path / "whoami" / "compose.yaml").exists()
-        assert (tmp_path / "nginx" / "compose.yaml").exists()
-        assert (tmp_path / "postgres" / "compose.yaml").exists()
-    def test_example_unknown(self, runner: CliRunner, tmp_path: Path) -> None:
-        result = runner.invoke(app, ["config", "example", "unknown", "-o", str(tmp_path)])
-        assert result.exit_code == 1
-        output = result.stdout + (result.stderr or "")
-        assert "Unknown example" in output
-    def test_example_force_overwrites(self, runner: CliRunner, tmp_path: Path) -> None:
-        # Create first time
-        runner.invoke(app, ["config", "example", "whoami", "-o", str(tmp_path)])
-        # Overwrite with force
-        result = runner.invoke(app, ["config", "example", "whoami", "-o", str(tmp_path), "-f"])
-        assert result.exit_code == 0
-    def test_example_prompts_on_existing(self, runner: CliRunner, tmp_path: Path) -> None:
-        # Create first time
-        runner.invoke(app, ["config", "example", "whoami", "-o", str(tmp_path)])
-        # Try again without force, decline
-        result = runner.invoke(
-            app, ["config", "example", "whoami", "-o", str(tmp_path)], input="n\n"
-        )
-        assert result.exit_code == 0
-        assert "Aborted" in result.stdout
-class TestExamplesModule:
-    """Tests for the examples module."""
-    def test_list_example_files_whoami(self) -> None:
-        from compose_farm.examples import list_example_files
-        files = list_example_files("whoami")
-        file_names = [f for f, _ in files]
-        assert ".env" in file_names
-        assert "compose.yaml" in file_names
-    def test_list_example_files_full(self) -> None:
-        from compose_farm.examples import list_example_files
-        files = list_example_files("full")
-        file_names = [f for f, _ in files]
-        assert "compose-farm.yaml" in file_names
-        assert "traefik/compose.yaml" in file_names
-        assert "whoami/compose.yaml" in file_names
-    def test_list_example_files_unknown(self) -> None:
-        from compose_farm.examples import list_example_files
-        with pytest.raises(ValueError, match="Unknown example"):
-            list_example_files("unknown")
-    def test_examples_dict(self) -> None:
-        from compose_farm.examples import EXAMPLES, SINGLE_STACK_EXAMPLES
-        assert "whoami" in EXAMPLES
-        assert "full" in EXAMPLES
-        assert "full" not in SINGLE_STACK_EXAMPLES
-        assert "whoami" in SINGLE_STACK_EXAMPLES
-class TestConfigInitDiscover:
-    """Tests for cf config init --discover."""
-    def test_discover_option_exists(self, runner: CliRunner) -> None:
-        result = runner.invoke(app, ["config", "init", "--help"])
-        assert "--discover" in result.stdout
-        assert "-d" in result.stdout


@@ -211,8 +211,8 @@ class TestRefreshCommand:
                 return_value=existing_state,
             ),
             patch(
-                "compose_farm.cli.management._discover_stacks_full",
-                return_value=({"plex": "nas02"}, {}, {}), # plex moved to nas02
+                "compose_farm.cli.management._discover_stacks",
+                return_value={"plex": "nas02"}, # plex moved to nas02
             ),
             patch("compose_farm.cli.management._snapshot_stacks"),
             patch("compose_farm.cli.management.save_state") as mock_save,
@@ -247,12 +247,8 @@ class TestRefreshCommand:
                 return_value=existing_state,
             ),
             patch(
-                "compose_farm.cli.management._discover_stacks_full",
-                return_value=(
-                    {"plex": "nas01", "grafana": "nas02"},
-                    {},
-                    {},
-                ), # jellyfin not running
+                "compose_farm.cli.management._discover_stacks",
+                return_value={"plex": "nas01", "grafana": "nas02"}, # jellyfin not running
             ),
             patch("compose_farm.cli.management._snapshot_stacks"),
             patch("compose_farm.cli.management.save_state") as mock_save,
@@ -285,8 +281,8 @@ class TestRefreshCommand:
                 return_value=existing_state,
             ),
             patch(
-                "compose_farm.cli.management._discover_stacks_full",
-                return_value=({"plex": "nas01"}, {}, {}), # only plex running
+                "compose_farm.cli.management._discover_stacks",
+                return_value={"plex": "nas01"}, # only plex running
             ),
             patch("compose_farm.cli.management._snapshot_stacks"),
             patch("compose_farm.cli.management.save_state") as mock_save,
@@ -319,8 +315,8 @@ class TestRefreshCommand:
                 return_value=existing_state,
             ),
             patch(
-                "compose_farm.cli.management._discover_stacks_full",
-                return_value=({"plex": "nas01"}, {}, {}), # jellyfin not running
+                "compose_farm.cli.management._discover_stacks",
+                return_value={"plex": "nas01"}, # jellyfin not running
             ),
             patch("compose_farm.cli.management._snapshot_stacks"),
             patch("compose_farm.cli.management.save_state") as mock_save,
@@ -354,8 +350,8 @@ class TestRefreshCommand:
                 return_value=existing_state,
             ),
             patch(
-                "compose_farm.cli.management._discover_stacks_full",
-                return_value=({"plex": "nas02"}, {}, {}), # would change
+                "compose_farm.cli.management._discover_stacks",
+                return_value={"plex": "nas02"}, # would change
             ),
             patch("compose_farm.cli.management.save_state") as mock_save,
         ):


@@ -2,65 +2,53 @@
 from pathlib import Path
 import pytest
 from compose_farm.web.routes.api import _backup_file, _save_with_backup
-@pytest.fixture
-def xdg_backup_dir(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> Path:
-    """Set XDG_CONFIG_HOME to tmp_path and return the backup directory path."""
-    monkeypatch.setenv("XDG_CONFIG_HOME", str(tmp_path))
-    return tmp_path / "compose-farm" / "backups"
-def test_backup_creates_timestamped_file(tmp_path: Path, xdg_backup_dir: Path) -> None:
-    """Test that backup creates file in XDG backup dir with correct content."""
-    test_file = tmp_path / "stacks" / "test.yaml"
-    test_file.parent.mkdir(parents=True)
+def test_backup_creates_timestamped_file(tmp_path: Path) -> None:
+    """Test that backup creates file in .backups with correct content."""
+    test_file = tmp_path / "test.yaml"
     test_file.write_text("original content")
     backup_path = _backup_file(test_file)
     assert backup_path is not None
-    assert backup_path.is_relative_to(xdg_backup_dir)
+    assert backup_path.parent.name == ".backups"
     assert backup_path.name.startswith("test.yaml.")
     assert backup_path.read_text() == "original content"
-def test_backup_returns_none_for_nonexistent_file(tmp_path: Path, xdg_backup_dir: Path) -> None:
+def test_backup_returns_none_for_nonexistent_file(tmp_path: Path) -> None:
     """Test that backup returns None if file doesn't exist."""
     assert _backup_file(tmp_path / "nonexistent.yaml") is None
-def test_save_creates_new_file(tmp_path: Path, xdg_backup_dir: Path) -> None:
+def test_save_creates_new_file(tmp_path: Path) -> None:
     """Test that save creates new file without backup."""
     test_file = tmp_path / "new.yaml"
     assert _save_with_backup(test_file, "content") is True
     assert test_file.read_text() == "content"
-    assert not xdg_backup_dir.exists()
+    assert not (tmp_path / ".backups").exists()
-def test_save_skips_unchanged_content(tmp_path: Path, xdg_backup_dir: Path) -> None:
+def test_save_skips_unchanged_content(tmp_path: Path) -> None:
     """Test that save returns False and creates no backup if unchanged."""
     test_file = tmp_path / "test.yaml"
     test_file.write_text("same")
     assert _save_with_backup(test_file, "same") is False
-    assert not xdg_backup_dir.exists()
+    assert not (tmp_path / ".backups").exists()
-def test_save_creates_backup_before_overwrite(tmp_path: Path, xdg_backup_dir: Path) -> None:
+def test_save_creates_backup_before_overwrite(tmp_path: Path) -> None:
     """Test that save backs up original before overwriting."""
-    test_file = tmp_path / "stacks" / "test.yaml"
-    test_file.parent.mkdir(parents=True)
+    test_file = tmp_path / "test.yaml"
     test_file.write_text("original")
     assert _save_with_backup(test_file, "new") is True
     assert test_file.read_text() == "new"
-    # Find backup in XDG dir
-    backups = list(xdg_backup_dir.rglob("test.yaml.*"))
+    backups = list((tmp_path / ".backups").glob("test.yaml.*"))
    assert len(backups) == 1
     assert backups[0].read_text() == "original"