Mirror of https://github.com/basnijholt/compose-farm.git, synced 2026-02-03 14:13:26 +00:00.
Compare commits
44 Commits
SHA1: 6fdb43e1e9, 620e797671, 031a2af6f3, f69eed7721, 5a1fd4e29f, 26dea691ca, 56d64bfe7a, 5ddbdcdf9e,
dd16becad1, df683a223f, fdb00e7655, 90657a025f, 7ae8ea0229, 612242eea9, ea650bff8a, 140bca4fd6,
6dad6be8da, d7f931e301, 471936439e, 36e4bef46d, 2cac0bf263, 3d07cbdff0, 0f67c17281, bd22a1a55e,
cc54e89b33, f71e5cffd6, 0e32729763, b0b501fa98, 7e00596046, d1e4d9b05c, 3fbae630f9, 3e3c919714,
59b797a89d, 7caf006e07, 45040b75f1, fa1c5c1044, 67e832f687, da986fab6a, 5dd6e2ca05, 16435065de,
5921b5e405, f0cd85b5f5, fe95443733, 8df9288156
.envrc.example (new file, +6)

@@ -0,0 +1,6 @@
# Run containers as current user (preserves file ownership on NFS mounts)
# Copy this file to .envrc and run: direnv allow
export CF_UID=$(id -u)
export CF_GID=$(id -g)
export CF_HOME=$HOME
export CF_USER=$USER
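The file's comments describe a one-time direnv setup; a minimal sketch of that flow, assuming direnv is installed and hooked into your shell:

```bash
# One-time setup sketch for .envrc.example (assumes direnv is installed and hooked
# into the shell; `direnv allow` is the step the file's comment refers to).
cp .envrc.example .envrc
direnv allow
direnv exec . env | grep '^CF_'   # should list CF_UID, CF_GID, CF_HOME, CF_USER
```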
.github/workflows/ci.yml (vendored, 2 lines changed)

@@ -54,7 +54,7 @@ jobs:
        run: uv run playwright install chromium --with-deps

      - name: Run browser tests
-       run: uv run pytest -m browser -v --no-cov
+       run: uv run pytest -m browser -n auto -v

  lint:
    runs-on: ubuntu-latest
.github/workflows/docs.yml (vendored, 4 lines changed)

@@ -27,7 +27,7 @@ jobs:
  build:
    runs-on: ubuntu-latest
    steps:
-     - uses: actions/checkout@v4
+     - uses: actions/checkout@v6
        with:
          lfs: true

@@ -49,7 +49,7 @@ jobs:

      - name: Upload artifact
        if: github.event_name != 'pull_request'
-       uses: actions/upload-pages-artifact@v3
+       uses: actions/upload-pages-artifact@v4
        with:
          path: "./site"
.gitignore (vendored, +1)

@@ -44,3 +44,4 @@ compose-farm.yaml
 coverage.xml
 .env
 homepage/
+site/
@@ -1,94 +1,117 @@
-Review all documentation in this repository for accuracy, completeness, and consistency. Cross-reference documentation against the actual codebase to identify issues.
+Review documentation for accuracy, completeness, and consistency. Focus on things that require judgment—automated checks handle the rest.

-## Scope
+## What's Already Automated

-Review all documentation files:
-- docs/*.md (primary documentation)
-- README.md (repository landing page)
-- CLAUDE.md (development guidelines)
-- examples/README.md (example configurations)
+Don't waste time on these—CI and pre-commit hooks handle them:

-## Review Checklist
+- **README help output**: `markdown-code-runner` regenerates `cf --help` blocks in CI
+- **README command table**: Pre-commit hook verifies commands are listed
+- **Linting/formatting**: Handled by pre-commit

-### 1. Command Documentation
+## What This Review Is For

-For each documented command, verify against the CLI source code:
+Focus on things that require judgment:

-- Command exists in codebase
-- All options are documented with correct names, types, and defaults
-- Short options (-x) match long options (--xxx)
-- Examples would work as written
-- Check for undocumented commands or options
+1. **Accuracy**: Does the documentation match what the code actually does?
+2. **Completeness**: Are there undocumented features, options, or behaviors?
+3. **Clarity**: Would a new user understand this? Are examples realistic?
+4. **Consistency**: Do different docs contradict each other?
+5. **Freshness**: Has the code changed in ways the docs don't reflect?

-Run `--help` for each command to verify.
+## Review Process

-### 2. Configuration Documentation
+### 1. Check Recent Changes

-Verify against Pydantic models in the config module:
+```bash
+# What changed recently that might need doc updates?
+git log --oneline -20 | grep -iE "feat|fix|add|remove|change|option"

-- All config keys are documented
-- Types match Pydantic field types
-- Required vs optional fields are correct
-- Default values are accurate
-- Config file search order matches code
-- Example YAML is valid and uses current schema
+# What code files changed?
+git diff --name-only HEAD~20 | grep "\.py$"
+```

-### 3. Architecture Documentation
+Look for new features, changed defaults, renamed options, or removed functionality.

-Verify against actual directory structure:
+### 2. Verify docs/commands.md Options Tables

-- File paths match actual source code location
-- All modules listed actually exist
-- No modules are missing from the list
-- Component descriptions match code functionality
-- CLI module list includes all command files
+The README auto-updates help output, but `docs/commands.md` has **manually maintained options tables**. These can drift.

-### 4. State and Data Files
+For each command's options table, compare against `cf <command> --help`:
+- Are all options listed?
+- Are short flags correct?
+- Are defaults accurate?
+- Are descriptions accurate?

-Verify against state and path modules:
+**Pay special attention to subcommands** (`cf config *`, `cf ssh *`)—these have their own options that are easy to miss.

-- State file name and location are correct
-- State file format matches actual structure
-- Log file name and location are correct
-- What triggers state/log updates is accurate
+### 3. Verify docs/configuration.md

-### 5. Installation Documentation
+Compare against Pydantic models in the source:

-Verify against pyproject.toml:
+```bash
+# Find the config models
+grep -r "class.*BaseModel" src/ --include="*.py" -A 15
+```

-- Python version requirement matches requires-python
-- Package name is correct
-- Optional dependencies are documented
-- CLI entry points are mentioned
-- Installation methods work as documented
+Check:
+- All config keys documented
+- Types and defaults match code
+- Config file search order is accurate
+- Example YAML would actually work

-### 6. Feature Claims
+### 4. Verify docs/architecture.md

-For each claimed feature, verify it exists and works as described.
+```bash
+# What source files actually exist?
+git ls-files "src/**/*.py"
+```

-### 7. Cross-Reference Consistency
+Check:
+- Listed files exist
+- No files are missing from the list
+- Descriptions match what the code does

-Check for conflicts between documentation files:
+### 5. Check Examples

-- README vs docs/index.md (should be consistent)
-- CLAUDE.md vs actual code structure
-- Command tables match across files
-- Config examples are consistent
+For examples in any doc:
+- Would the YAML/commands actually work?
+- Are service names, paths, and options realistic?
+- Do examples use current syntax (not deprecated options)?

+### 6. Cross-Reference Consistency
+
+The same info appears in multiple places. Check for conflicts:
+- README.md vs docs/index.md
+- docs/commands.md vs CLAUDE.md command tables
+- Config examples across different docs
+
+### 7. Self-Check This Prompt
+
+This prompt can become outdated too. If you notice:
+- New automated checks that should be listed above
+- New doc files that need review guidelines
+- Patterns that caused issues
+
+Include prompt updates in your fixes.

 ## Output Format

-Provide findings in these categories:
+Categorize findings:

-1. **Critical Issues**: Incorrect information that would cause user problems
-2. **Inaccuracies**: Technical errors, wrong defaults, incorrect paths
-3. **Missing Documentation**: Features/commands that exist but aren't documented
-4. **Outdated Content**: Information that was once true but no longer is
-5. **Inconsistencies**: Conflicts between different documentation files
-6. **Minor Issues**: Typos, formatting, unclear wording
-7. **Verified Accurate**: Sections confirmed to be correct
+1. **Critical**: Wrong info that would break user workflows
+2. **Inaccuracy**: Technical errors (wrong defaults, paths, types)
+3. **Missing**: Undocumented features or options
+4. **Outdated**: Was true, no longer is
+5. **Inconsistency**: Docs contradict each other
+6. **Minor**: Typos, unclear wording

-For each issue, include:
-- File path and line number (if applicable)
-- What the documentation says
-- What the code actually does
-- Suggested fix
+For each issue, provide a ready-to-apply fix:

+```
+### Issue: [Brief description]
+
+- **File**: docs/commands.md:652
+- **Problem**: `cf ssh setup` has `--config` option but it's not documented
+- **Fix**: Add `--config, -c PATH` to the options table
+- **Verify**: `cf ssh setup --help`
+```
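Section 2 of the rewritten prompt compares the manually maintained options tables against `cf <command> --help`; a rough, hypothetical way to script that drift check (the temp paths and regex are assumptions, not part of the prompt):

```bash
# Heuristic drift check for one command's options table (illustrative only).
cf up --help | grep -oE '\-\-[a-z-]+' | sort -u > /tmp/help-opts
grep -oE '\-\-[a-z-]+' docs/commands.md | sort -u > /tmp/doc-opts
comm -23 /tmp/help-opts /tmp/doc-opts   # options cf reports but the docs never mention
```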
.prompts/pr-review.md (new file, +15)

@@ -0,0 +1,15 @@
Review the pull request for:

- **Code cleanliness**: Is the implementation clean and well-structured?
- **DRY principle**: Does it avoid duplication?
- **Code reuse**: Are there parts that should be reused from other places?
- **Organization**: Is everything in the right place?
- **Consistency**: Is it in the same style as other parts of the codebase?
- **Simplicity**: Is it not over-engineered? Remember KISS and YAGNI. No dead code paths and NO defensive programming.
- **User experience**: Does it provide a good user experience?
- **PR**: Is the PR description and title clear and informative?
- **Tests**: Are there tests, and do they cover the changes adequately? Are they testing something meaningful or are they just trivial?
- **Live tests**: Test the changes in a REAL live environment to ensure they work as expected, use the config in `/opt/stacks/compose-farm.yaml`.
- **Rules**: Does the code follow the project's coding standards and guidelines as laid out in @CLAUDE.md?

Look at `git diff origin/main..HEAD` for the changes made in this pull request.
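The prompt's entry point is the branch diff; a small sketch of how a reviewer might start from it (branch and remote names are the repo defaults, adjust as needed):

```bash
# Review entry point sketch; origin/main is assumed to be the comparison base.
git fetch origin main
git diff --stat origin/main..HEAD   # which files the PR touches
git diff origin/main..HEAD          # the full diff the prompt refers to
```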
.prompts/update-demos.md (new file, +51)

@@ -0,0 +1,51 @@
Update demo recordings to match the current compose-farm.yaml configuration.

## Key Gotchas

1. **Never `git checkout` without asking** - check for uncommitted changes first
2. **Prefer `nas` stacks** - demos run locally on nas, SSH adds latency
3. **Terminal captures keyboard** - use `blur()` to release focus before command palette
4. **Clicking sidebar navigates away** - clicking h1 scrolls to top
5. **Buttons have icons, not text** - use `[data-tip="..."]` selectors
6. **`record.py` auto-restores config** - no manual cleanup needed after CLI demos

## Stacks Used in Demos

| Stack | CLI Demos | Web Demos | Notes |
|-------|-----------|-----------|-------|
| `audiobookshelf` | quickstart, migration, apply | - | Migrates nas→anton |
| `grocy` | update | navigation, stack, workflow, console | - |
| `immich` | logs, compose | shell | Multiple containers |
| `dozzle` | - | workflow | - |

## CLI Demos

**Files:** `docs/demos/cli/*.tape`

Check:
- `quickstart.tape`: `bat -r` line ranges match current config structure
- `migration.tape`: nvim keystrokes work, stack exists on nas
- `compose.tape`: exec commands produce meaningful output

Run: `python docs/demos/cli/record.py [demo]`

## Web Demos

**Files:** `docs/demos/web/demo_*.py`

Check:
- Stack names in demos still exist in config
- Selectors match current templates (grep for IDs in `templates/`)
- Shell demo uses command palette for ALL navigation

Run: `python docs/demos/web/record.py [demo]`

## Before Recording

```bash
# Check for uncommitted config changes
git -C /opt/stacks diff compose-farm.yaml

# Verify stacks are running
cf ps audiobookshelf grocy immich dozzle
```
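Putting the prompt's steps together, a sketch of one re-record pass (the demo names below are examples from the stacks table, not an exhaustive list):

```bash
# Re-record pass sketch; demo names are illustrative examples.
git -C /opt/stacks diff compose-farm.yaml    # confirm no uncommitted config changes
cf ps audiobookshelf grocy immich dozzle     # stacks the demos rely on
python docs/demos/cli/record.py quickstart   # one CLI demo
python docs/demos/web/record.py navigation   # one web demo
```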
CLAUDE.md (31 lines changed)

@@ -15,7 +15,7 @@ src/compose_farm/
 │ ├── app.py # Shared Typer app instance, version callback
 │ ├── common.py # Shared helpers, options, progress bar utilities
 │ ├── config.py # Config subcommand (init, show, path, validate, edit, symlink)
-│ ├── lifecycle.py # up, down, pull, restart, update, apply commands
+│ ├── lifecycle.py # up, down, stop, pull, restart, update, apply, compose commands
 │ ├── management.py # refresh, check, init-network, traefik-file commands
 │ ├── monitoring.py # logs, ps, stats commands
 │ ├── ssh.py # SSH key management (setup, status, keygen)

@@ -58,22 +58,37 @@ Icons use [Lucide](https://lucide.dev/). Add new icons as macros in `web/templat
 - **Imports at top level**: Never add imports inside functions unless they are explicitly marked with `# noqa: PLC0415` and a comment explaining it speeds up CLI startup. Heavy modules like `pydantic`, `yaml`, and `rich.table` are lazily imported to keep `cf --help` fast.

+## Development Commands
+
+Use `just` for common tasks. Run `just` to list available commands:
+
+| Command | Description |
+|---------|-------------|
+| `just install` | Install dev dependencies |
+| `just test` | Run all tests |
+| `just test-cli` | Run CLI tests (parallel) |
+| `just test-web` | Run web UI tests (parallel) |
+| `just lint` | Lint, format, and type check |
+| `just web` | Start web UI (port 9001) |
+| `just doc` | Build and serve docs (port 9002) |
+| `just clean` | Clean build artifacts |
+
 ## Testing

-Run tests with `uv run pytest`. Browser tests require Chromium (system-installed or via `playwright install chromium`):
+Run tests with `just test` or `uv run pytest`. Browser tests require Chromium (system-installed or via `playwright install chromium`):

 ```bash
-# Unit tests only (skip browser tests, can parallelize)
+# Unit tests only (parallel)
 uv run pytest -m "not browser" -n auto

-# Browser tests only (run sequentially, no coverage)
-uv run pytest -m browser --no-cov
+# Browser tests only (parallel)
+uv run pytest -m browser -n auto

 # All tests
-uv run pytest --no-cov
+uv run pytest
 ```

-Browser tests are marked with `@pytest.mark.browser`. They use Playwright to test HTMX behavior, JavaScript functionality (sidebar filter, command palette, terminals), and content stability during navigation. Run sequentially (no `-n`) to avoid resource contention.
+Browser tests are marked with `@pytest.mark.browser`. They use Playwright to test HTMX behavior, JavaScript functionality (sidebar filter, command palette, terminals), and content stability during navigation.

 ## Communication Notes

@@ -116,10 +131,12 @@ CLI available as `cf` or `compose-farm`.
 |---------|-------------|
 | `up` | Start stacks (`docker compose up -d`), auto-migrates if host changed |
 | `down` | Stop stacks (`docker compose down`). Use `--orphaned` to stop stacks removed from config |
+| `stop` | Stop services without removing containers (`docker compose stop`) |
 | `pull` | Pull latest images |
 | `restart` | `down` + `up -d` |
 | `update` | `pull` + `build` + `down` + `up -d` |
 | `apply` | Make reality match config: migrate stacks + stop orphans. Use `--dry-run` to preview |
+| `compose` | Run any docker compose command on a stack (passthrough) |
 | `logs` | Show stack logs |
 | `ps` | Show status of all stacks |
 | `stats` | Show overview (hosts, stacks, pending migrations; `--live` for container counts) |
Dockerfile (10 lines changed)

@@ -16,5 +16,13 @@ RUN apk add --no-cache openssh-client
 COPY --from=builder /root/.local/share/uv/tools/compose-farm /root/.local/share/uv/tools/compose-farm
 COPY --from=builder /usr/local/bin/cf /usr/local/bin/compose-farm /usr/local/bin/

-ENTRYPOINT ["cf"]
+# Allow non-root users to access the installed tool
+# (required when running with user: "${CF_UID:-0}:${CF_GID:-0}")
+RUN chmod 755 /root
+
+# Allow non-root users to add passwd entries (required for SSH)
+RUN chmod 666 /etc/passwd
+
+# Entrypoint creates /etc/passwd entry for non-root UIDs (required for SSH)
+ENTRYPOINT ["sh", "-c", "[ $(id -u) != 0 ] && echo ${USER:-u}:x:$(id -u):$(id -g)::${HOME:-/}:/bin/sh >> /etc/passwd; exec cf \"$@\"", "--"]
 CMD ["--help"]
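A sketch of what the new ENTRYPOINT amounts to when the container starts with a non-root UID (assumes the published image includes these changes; the UID, USER, and HOME values are made up for illustration):

```bash
# Illustrative only: the entrypoint appends a passwd entry for the non-root user,
# which works because the Dockerfile now runs `chmod 666 /etc/passwd`.
docker run --rm --user 1000:1000 -e USER=alice -e HOME=/home/alice \
  --entrypoint sh ghcr.io/basnijholt/compose-farm:latest -c \
  'echo ${USER:-u}:x:$(id -u):$(id -g)::${HOME:-/}:/bin/sh >> /etc/passwd && tail -1 /etc/passwd'
```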
README.md (276 lines changed)
@@ -5,12 +5,31 @@
 [](LICENSE)
 [](https://github.com/basnijholt/compose-farm/stargazers)

-<img src="http://files.nijho.lt/compose-farm.png" align="right" style="width: 300px;" />
+<img src="https://files.nijho.lt/compose-farm.png" alt="Compose Farm logo" align="right" style="width: 300px;" />

 A minimal CLI tool to run Docker Compose commands across multiple hosts via SSH.

 > [!NOTE]
-> Run `docker compose` commands across multiple hosts via SSH. One YAML maps stacks to hosts. Run `cf apply` and reality matches your config—stacks start, migrate, or stop as needed. No Kubernetes, no Swarm, no magic.
+> Agentless multi-host Docker Compose. CLI-first with a web UI. Your files stay as plain folders—version-controllable, no lock-in. Run `cf apply` and reality matches your config.

+**Why Compose Farm?**
+- **Your files, your control** — Plain folders + YAML, not locked in Portainer. Version control everything.
+- **Agentless** — Just SSH, no agents to deploy (unlike [Dockge](https://github.com/louislam/dockge)).
+- **Zero changes required** — Existing compose files work as-is.
+- **Grows with you** — Start single-host, scale to multi-host seamlessly.
+- **Declarative** — Change config, run `cf apply`, reality matches.
+
+## Quick Demo
+
+**CLI:**
+
+**Web UI:**
+
 ## Table of Contents

 <!-- START doctoc generated TOC please keep comment here to allow auto update -->
 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
@@ -143,7 +162,7 @@ If you need containers on different hosts to communicate seamlessly, you need Do

 ```bash
 # One-liner (installs uv if needed)
-curl -fsSL https://raw.githubusercontent.com/basnijholt/compose-farm/main/bootstrap.sh | sh
+curl -fsSL https://compose-farm.nijho.lt/install | sh

 # Or if you already have uv/pip
 uv tool install compose-farm

@@ -165,6 +184,24 @@ docker run --rm \
   ghcr.io/basnijholt/compose-farm up --all
 ```

+**Running as non-root user** (recommended for NFS mounts):
+
+By default, containers run as root. To preserve file ownership on mounted volumes
+(e.g., `compose-farm-state.yaml`, config edits), set these environment variables:
+
+```bash
+# Add to .env file (one-time setup)
+echo "CF_UID=$(id -u)" >> .env
+echo "CF_GID=$(id -g)" >> .env
+echo "CF_HOME=$HOME" >> .env
+echo "CF_USER=$USER" >> .env
+```
+
+Or use [direnv](https://direnv.net/) (copies `.envrc.example` to `.envrc`):
+```bash
+cp .envrc.example .envrc && direnv allow
+```
+
 </details>
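A quick way to confirm the mapping took effect before relying on it for NFS-mounted files (a sketch that assumes the repo's example docker-compose.yml with its `cf` service and the `.env` created above):

```bash
# Sanity-check sketch: the container should report your UID/GID, not 0.
set -a; . ./.env; set +a                      # load CF_UID/CF_GID/CF_HOME/CF_USER
docker compose run --rm --entrypoint id cf    # expect your uid/gid in the output
```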
## SSH Authentication
|
||||
@@ -204,13 +241,13 @@ When running in Docker, mount a volume to persist the SSH keys. Choose ONE optio
|
||||
**Option 1: Host path (default)** - keys at `~/.ssh/compose-farm/id_ed25519`
|
||||
```yaml
|
||||
volumes:
|
||||
- ~/.ssh/compose-farm:/root/.ssh
|
||||
- ~/.ssh/compose-farm:${CF_HOME:-/root}/.ssh
|
||||
```
|
||||
|
||||
**Option 2: Named volume** - managed by Docker
|
||||
```yaml
|
||||
volumes:
|
||||
- cf-ssh:/root/.ssh
|
||||
- cf-ssh:${CF_HOME:-/root}/.ssh
|
||||
```
|
||||
|
||||
Run setup once after starting the container (while the SSH agent still works):
|
||||
@@ -221,11 +258,13 @@ docker compose exec web cf ssh setup
|
||||
|
||||
The keys will persist across restarts.
|
||||
|
||||
**Note:** When running as non-root (with `CF_UID`/`CF_GID`), set `CF_HOME` to your home directory so SSH finds the keys at the correct path.
|
||||
|
||||
</details>
|
||||
|
||||
## Configuration
|
||||
|
||||
Create `~/.config/compose-farm/compose-farm.yaml` (or `./compose-farm.yaml` in your working directory):
|
||||
Create `compose-farm.yaml` in the directory where you'll run commands (e.g., `/opt/stacks`). This keeps config near your stacks. Alternatively, use `~/.config/compose-farm/compose-farm.yaml` for a global config, or symlink from one to the other with `cf config symlink`.
|
||||
|
||||
### Single-host example
|
||||
|
||||
@@ -258,7 +297,7 @@ hosts:
|
||||
stacks:
|
||||
plex: server-1
|
||||
jellyfin: server-2
|
||||
sonarr: server-1
|
||||
grafana: server-1
|
||||
|
||||
# Multi-host stacks (run on multiple/all hosts)
|
||||
autokuma: all # Runs on ALL configured hosts
|
||||
@@ -320,7 +359,8 @@ The CLI is available as both `compose-farm` and the shorter `cf` alias.
|
||||
|---------|-------------|
|
||||
| **`cf apply`** | **Make reality match config (start + migrate + stop orphans)** |
|
||||
| `cf up <stack>` | Start stack (auto-migrates if host changed) |
|
||||
| `cf down <stack>` | Stop stack |
|
||||
| `cf down <stack>` | Stop and remove stack containers |
|
||||
| `cf stop <stack>` | Stop stack without removing containers |
|
||||
| `cf restart <stack>` | down + up |
|
||||
| `cf update <stack>` | pull + build + down + up |
|
||||
| `cf pull <stack>` | Pull latest images |
|
||||
@@ -409,15 +449,6 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
|
||||
│ copy it or customize the installation. │
|
||||
│ --help -h Show this message and exit. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
╭─ Lifecycle ──────────────────────────────────────────────────────────────────╮
|
||||
│ up Start stacks (docker compose up -d). Auto-migrates if host │
|
||||
│ changed. │
|
||||
│ down Stop stacks (docker compose down). │
|
||||
│ pull Pull latest images (docker compose pull). │
|
||||
│ restart Restart stacks (down + up). │
|
||||
│ update Update stacks (pull + build + down + up). │
|
||||
│ apply Make reality match config (start, migrate, stop as needed). │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
╭─ Configuration ──────────────────────────────────────────────────────────────╮
|
||||
│ traefik-file Generate a Traefik file-provider fragment from compose │
|
||||
│ Traefik labels. │
|
||||
@@ -427,8 +458,24 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
|
||||
│ config Manage compose-farm configuration files. │
|
||||
│ ssh Manage SSH keys for passwordless authentication. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
╭─ Lifecycle ──────────────────────────────────────────────────────────────────╮
|
||||
│ up Start stacks (docker compose up -d). Auto-migrates if host │
|
||||
│ changed. │
|
||||
│ down Stop stacks (docker compose down). │
|
||||
│ stop Stop services without removing containers (docker compose │
|
||||
│ stop). │
|
||||
│ pull Pull latest images (docker compose pull). │
|
||||
│ restart Restart stacks (down + up). With --service, restarts just │
|
||||
│ that service. │
|
||||
│ update Update stacks (pull + build + down + up). With --service, │
|
||||
│ updates just that service. │
|
||||
│ apply Make reality match config (start, migrate, stop │
|
||||
│ strays/orphans as needed). │
|
||||
│ compose Run any docker compose command on a stack. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
╭─ Monitoring ─────────────────────────────────────────────────────────────────╮
|
||||
│ logs Show stack logs. │
|
||||
│ logs Show stack logs. With --service, shows logs for just that │
|
||||
│ service. │
|
||||
│ ps Show status of stacks. │
|
||||
│ stats Show overview statistics for hosts and stacks. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
@@ -467,10 +514,11 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
|
||||
│ stacks [STACKS]... Stacks to operate on │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
╭─ Options ────────────────────────────────────────────────────────────────────╮
|
||||
│ --all -a Run on all stacks │
|
||||
│ --host -H TEXT Filter to stacks on this host │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
│ --all -a Run on all stacks │
|
||||
│ --host -H TEXT Filter to stacks on this host │
|
||||
│ --service -s TEXT Target a specific service within the stack │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
|
||||
```
|
||||
@@ -516,6 +564,41 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>See the output of <code>cf stop --help</code></summary>
|
||||
|
||||
<!-- CODE:BASH:START -->
|
||||
<!-- echo '```yaml' -->
|
||||
<!-- export NO_COLOR=1 -->
|
||||
<!-- export TERM=dumb -->
|
||||
<!-- export TERMINAL_WIDTH=90 -->
|
||||
<!-- cf stop --help -->
|
||||
<!-- echo '```' -->
|
||||
<!-- CODE:END -->
|
||||
<!-- OUTPUT:START -->
|
||||
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
|
||||
```yaml
|
||||
|
||||
Usage: cf stop [OPTIONS] [STACKS]...
|
||||
|
||||
Stop services without removing containers (docker compose stop).
|
||||
|
||||
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
|
||||
│ stacks [STACKS]... Stacks to operate on │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
╭─ Options ────────────────────────────────────────────────────────────────────╮
|
||||
│ --all -a Run on all stacks │
|
||||
│ --service -s TEXT Target a specific service within the stack │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
|
||||
```
|
||||
|
||||
<!-- OUTPUT:END -->
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>See the output of <code>cf pull --help</code></summary>
|
||||
|
||||
@@ -539,9 +622,10 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
|
||||
│ stacks [STACKS]... Stacks to operate on │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
╭─ Options ────────────────────────────────────────────────────────────────────╮
|
||||
│ --all -a Run on all stacks │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
│ --all -a Run on all stacks │
|
||||
│ --service -s TEXT Target a specific service within the stack │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
|
||||
```
|
||||
@@ -567,15 +651,16 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
|
||||
|
||||
Usage: cf restart [OPTIONS] [STACKS]...
|
||||
|
||||
Restart stacks (down + up).
|
||||
Restart stacks (down + up). With --service, restarts just that service.
|
||||
|
||||
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
|
||||
│ stacks [STACKS]... Stacks to operate on │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
╭─ Options ────────────────────────────────────────────────────────────────────╮
|
||||
│ --all -a Run on all stacks │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
│ --all -a Run on all stacks │
|
||||
│ --service -s TEXT Target a specific service within the stack │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
|
||||
```
|
||||
@@ -601,15 +686,17 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
|
||||
|
||||
Usage: cf update [OPTIONS] [STACKS]...
|
||||
|
||||
Update stacks (pull + build + down + up).
|
||||
Update stacks (pull + build + down + up). With --service, updates just that
|
||||
service.
|
||||
|
||||
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
|
||||
│ stacks [STACKS]... Stacks to operate on │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
╭─ Options ────────────────────────────────────────────────────────────────────╮
|
||||
│ --all -a Run on all stacks │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
│ --all -a Run on all stacks │
|
||||
│ --service -s TEXT Target a specific service within the stack │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
|
||||
```
|
||||
@@ -635,22 +722,25 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
|
||||
|
||||
Usage: cf apply [OPTIONS]
|
||||
|
||||
Make reality match config (start, migrate, stop as needed).
|
||||
Make reality match config (start, migrate, stop strays/orphans as needed).
|
||||
|
||||
This is the "reconcile" command that ensures running stacks match your
|
||||
config file. It will:
|
||||
|
||||
1. Stop orphaned stacks (in state but removed from config)
|
||||
2. Migrate stacks on wrong host (host in state ≠ host in config)
|
||||
3. Start missing stacks (in config but not in state)
|
||||
2. Stop stray stacks (running on unauthorized hosts)
|
||||
3. Migrate stacks on wrong host (host in state ≠ host in config)
|
||||
4. Start missing stacks (in config but not in state)
|
||||
|
||||
Use --dry-run to preview changes before applying.
|
||||
Use --no-orphans to only migrate/start without stopping orphaned stacks.
|
||||
Use --no-orphans to skip stopping orphaned stacks.
|
||||
Use --no-strays to skip stopping stray stacks.
|
||||
Use --full to also run 'up' on all stacks (picks up compose/env changes).
|
||||
|
||||
╭─ Options ────────────────────────────────────────────────────────────────────╮
|
||||
│ --dry-run -n Show what would change without executing │
|
||||
│ --no-orphans Only migrate, don't stop orphaned stacks │
|
||||
│ --no-strays Don't stop stray stacks (running on wrong host) │
|
||||
│ --full -f Also run up on all stacks to apply config │
|
||||
│ changes │
|
||||
│ --config -c PATH Path to config file │
|
||||
@@ -663,6 +753,53 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>See the output of <code>cf compose --help</code></summary>
|
||||
|
||||
<!-- CODE:BASH:START -->
|
||||
<!-- echo '```yaml' -->
|
||||
<!-- export NO_COLOR=1 -->
|
||||
<!-- export TERM=dumb -->
|
||||
<!-- export TERMINAL_WIDTH=90 -->
|
||||
<!-- cf compose --help -->
|
||||
<!-- echo '```' -->
|
||||
<!-- CODE:END -->
|
||||
<!-- OUTPUT:START -->
|
||||
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
|
||||
```yaml
|
||||
|
||||
Usage: cf compose [OPTIONS] STACK COMMAND [ARGS]...
|
||||
|
||||
Run any docker compose command on a stack.
|
||||
|
||||
Passthrough to docker compose for commands not wrapped by cf.
|
||||
Options after COMMAND are passed to docker compose, not cf.
|
||||
|
||||
Examples:
|
||||
cf compose mystack --help - show docker compose help
|
||||
cf compose mystack top - view running processes
|
||||
cf compose mystack images - list images
|
||||
cf compose mystack exec web bash - interactive shell
|
||||
cf compose mystack config - view parsed config
|
||||
|
||||
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
|
||||
│ * stack TEXT Stack to operate on (use '.' for current dir) │
|
||||
│ [required] │
|
||||
│ * command TEXT Docker compose command [required] │
|
||||
│ args [ARGS]... Additional arguments │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
╭─ Options ────────────────────────────────────────────────────────────────────╮
|
||||
│ --host -H TEXT Filter to stacks on this host │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
|
||||
```
|
||||
|
||||
<!-- OUTPUT:END -->
|
||||
|
||||
</details>
|
||||
|
||||
**Configuration**
|
||||
|
||||
<details>
|
||||
@@ -878,6 +1015,26 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
|
||||
<!-- cf ssh --help -->
|
||||
<!-- echo '```' -->
|
||||
<!-- CODE:END -->
|
||||
<!-- OUTPUT:START -->
|
||||
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
|
||||
```yaml
|
||||
|
||||
Usage: cf ssh [OPTIONS] COMMAND [ARGS]...
|
||||
|
||||
Manage SSH keys for passwordless authentication.
|
||||
|
||||
╭─ Options ────────────────────────────────────────────────────────────────────╮
|
||||
│ --help -h Show this message and exit. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
╭─ Commands ───────────────────────────────────────────────────────────────────╮
|
||||
│ keygen Generate SSH key (does not distribute to hosts). │
|
||||
│ setup Generate SSH key and distribute to all configured hosts. │
|
||||
│ status Show SSH key status and host connectivity. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
|
||||
```
|
||||
|
||||
<!-- OUTPUT:END -->
|
||||
|
||||
</details>
|
||||
|
||||
@@ -900,19 +1057,20 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
|
||||
|
||||
Usage: cf logs [OPTIONS] [STACKS]...
|
||||
|
||||
Show stack logs.
|
||||
Show stack logs. With --service, shows logs for just that service.
|
||||
|
||||
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
|
||||
│ stacks [STACKS]... Stacks to operate on │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
╭─ Options ────────────────────────────────────────────────────────────────────╮
|
||||
│ --all -a Run on all stacks │
|
||||
│ --host -H TEXT Filter to stacks on this host │
|
||||
│ --follow -f Follow logs │
|
||||
│ --tail -n INTEGER Number of lines (default: 20 for --all, 100 │
|
||||
│ otherwise) │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
│ --all -a Run on all stacks │
|
||||
│ --host -H TEXT Filter to stacks on this host │
|
||||
│ --service -s TEXT Target a specific service within the stack │
|
||||
│ --follow -f Follow logs │
|
||||
│ --tail -n INTEGER Number of lines (default: 20 for --all, 100 │
|
||||
│ otherwise) │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
|
||||
```
|
||||
@@ -944,15 +1102,17 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
|
||||
Without arguments: shows all stacks (same as --all).
|
||||
With stack names: shows only those stacks.
|
||||
With --host: shows stacks on that host.
|
||||
With --service: filters to a specific service within the stack.
|
||||
|
||||
╭─ Arguments ──────────────────────────────────────────────────────────────────╮
|
||||
│ stacks [STACKS]... Stacks to operate on │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
╭─ Options ────────────────────────────────────────────────────────────────────╮
|
||||
│ --all -a Run on all stacks │
|
||||
│ --host -H TEXT Filter to stacks on this host │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
│ --all -a Run on all stacks │
|
||||
│ --host -H TEXT Filter to stacks on this host │
|
||||
│ --service -s TEXT Target a specific service within the stack │
|
||||
│ --config -c PATH Path to config file │
|
||||
│ --help -h Show this message and exit. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
|
||||
```
|
||||
@@ -1009,6 +1169,24 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
|
||||
<!-- cf web --help -->
|
||||
<!-- echo '```' -->
|
||||
<!-- CODE:END -->
|
||||
<!-- OUTPUT:START -->
|
||||
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
|
||||
```yaml
|
||||
|
||||
Usage: cf web [OPTIONS]
|
||||
|
||||
Start the web UI server.
|
||||
|
||||
╭─ Options ────────────────────────────────────────────────────────────────────╮
|
||||
│ --host -H TEXT Host to bind to [default: 0.0.0.0] │
|
||||
│ --port -p INTEGER Port to listen on [default: 8000] │
|
||||
│ --reload -r Enable auto-reload for development │
|
||||
│ --help -h Show this message and exit. │
|
||||
╰──────────────────────────────────────────────────────────────────────────────╯
|
||||
|
||||
```
|
||||
|
||||
<!-- OUTPUT:END -->
|
||||
|
||||
</details>
|
||||
|
||||
|
||||
@@ -26,5 +26,5 @@ stacks:
traefik: server-1 # Traefik runs here
plex: server-2 # Stacks on other hosts get file-provider entries
jellyfin: server-2
sonarr: server-1
radarr: local
grafana: server-1
nextcloud: local
@@ -1,6 +1,10 @@
 services:
   cf:
     image: ghcr.io/basnijholt/compose-farm:latest
+    # Run as current user to preserve file ownership on mounted volumes
+    # Set CF_UID=$(id -u) CF_GID=$(id -g) in your environment or .env file
+    # Defaults to root (0:0) for backwards compatibility
+    user: "${CF_UID:-0}:${CF_GID:-0}"
     volumes:
       - ${SSH_AUTH_SOCK}:/ssh-agent:ro
       # Compose directory (contains compose files AND compose-farm.yaml config)

@@ -8,31 +12,43 @@ services:
       # SSH keys for passwordless auth (generated by `cf ssh setup`)
       # Choose ONE option below (use the same option for both cf and web services):
       # Option 1: Host path (default) - keys at ~/.ssh/compose-farm/id_ed25519
-      - ${CF_SSH_DIR:-~/.ssh/compose-farm}:/root/.ssh
+      - ${CF_SSH_DIR:-~/.ssh/compose-farm}:${CF_HOME:-/root}/.ssh/compose-farm
       # Option 2: Named volume - managed by Docker, shared between services
-      # - cf-ssh:/root/.ssh
+      # - cf-ssh:${CF_HOME:-/root}/.ssh
     environment:
       - SSH_AUTH_SOCK=/ssh-agent
       # Config file path (state stored alongside it)
       - CF_CONFIG=${CF_COMPOSE_DIR:-/opt/stacks}/compose-farm.yaml
+      # HOME must match the user running the container for SSH to find keys
+      - HOME=${CF_HOME:-/root}
+      # USER is required for SSH when running as non-root (UID not in /etc/passwd)
+      - USER=${CF_USER:-root}

   web:
     image: ghcr.io/basnijholt/compose-farm:latest
     restart: unless-stopped
     command: web --host 0.0.0.0 --port 9000
+    # Run as current user to preserve file ownership on mounted volumes
+    user: "${CF_UID:-0}:${CF_GID:-0}"
     volumes:
       - ${SSH_AUTH_SOCK}:/ssh-agent:ro
       - ${CF_COMPOSE_DIR:-/opt/stacks}:${CF_COMPOSE_DIR:-/opt/stacks}
       # SSH keys - use the SAME option as cf service above
       # Option 1: Host path (default)
-      - ${CF_SSH_DIR:-~/.ssh/compose-farm}:/root/.ssh
+      - ${CF_SSH_DIR:-~/.ssh/compose-farm}:${CF_HOME:-/root}/.ssh/compose-farm
       # Option 2: Named volume
-      # - cf-ssh:/root/.ssh
+      # - cf-ssh:${CF_HOME:-/root}/.ssh
+      # XDG config dir for backups and image digest logs (persists across restarts)
+      - ${CF_XDG_CONFIG:-~/.config/compose-farm}:${CF_HOME:-/root}/.config/compose-farm
     environment:
       - SSH_AUTH_SOCK=/ssh-agent
       - CF_CONFIG=${CF_COMPOSE_DIR:-/opt/stacks}/compose-farm.yaml
       # Used to detect self-updates and run via SSH to survive container restart
       - CF_WEB_STACK=compose-farm
+      # HOME must match the user running the container for SSH to find keys
+      - HOME=${CF_HOME:-/root}
+      # USER is required for SSH when running as non-root (UID not in /etc/passwd)
+      - USER=${CF_USER:-root}
     labels:
       - traefik.enable=true
       - traefik.http.routers.compose-farm.rule=Host(`compose-farm.${DOMAIN}`)
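A sketch of bringing this example up with the non-root mapping it documents (assumes an SSH agent at `$SSH_AUTH_SOCK` and the CF_* variables from the README's `.env` snippet):

```bash
# Bring up the web UI as the current user, then do the one-time key setup.
export CF_UID=$(id -u) CF_GID=$(id -g) CF_HOME=$HOME CF_USER=$USER
docker compose up -d web
docker compose exec web cf ssh setup   # distributes keys, as described in the README
```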
@@ -47,8 +47,7 @@ Compose Farm follows three core principles:
 Pydantic models for YAML configuration:

 - **Config** - Root configuration with compose_dir, hosts, stacks
-- **HostConfig** - Host address and SSH user
-- **ServiceConfig** - Service-to-host mappings
+- **Host** - Host address, SSH user, and port

 Key features:
 - Validation with Pydantic

@@ -62,7 +61,7 @@ Tracks deployment state in `compose-farm-state.yaml` (stored alongside the confi
 ```yaml
 deployed:
   plex: nuc
-  sonarr: nuc
+  grafana: nuc
 ```

 Used for:

@@ -98,7 +97,7 @@ cli/
 ├── app.py # Shared Typer app, version callback
 ├── common.py # Shared helpers, options, progress utilities
 ├── config.py # config subcommand (init, show, path, validate, edit, symlink)
-├── lifecycle.py # up, down, pull, restart, update, apply
+├── lifecycle.py # up, down, stop, pull, restart, update, apply, compose
 ├── management.py # refresh, check, init-network, traefik-file
 ├── monitoring.py # logs, ps, stats
 ├── ssh.py # SSH key management (setup, status, keygen)

@@ -208,7 +207,7 @@ Location: `compose-farm-state.yaml` (stored alongside the config file)
 ```yaml
 deployed:
   plex: nuc
-  sonarr: nuc
+  grafana: nuc
 ```

 Image digests are stored separately in `dockerfarm-log.toml` (also in the config directory).
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:bb1372a59a4ed1ac74d3864d7a84dd5311fce4cb6c6a00bf3a574bc2f98d5595
|
||||
size 895927
|
||||
oid sha256:01dabdd8f62773823ba2b8dc74f9931f1a1b88215117e6a080004096025491b0
|
||||
size 901456
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:f339a85f3d930db5a020c9f77e106edc5f44ea7dee6f68557106721493c24ef8
|
||||
size 205907
|
||||
oid sha256:134c903a6b3acfb933617b33755b0cdb9bac2a59e5e35b64236e248a141d396d
|
||||
size 206883
|
||||
|
||||
3
docs/assets/compose.gif
Normal file
3
docs/assets/compose.gif
Normal file
@@ -0,0 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:d8b3cdb3486ec79b3ddb2f7571c13d54ac9aed182edfe708eff76a966a90cfc7
|
||||
size 1132310
|
||||
3
docs/assets/compose.webm
Normal file
3
docs/assets/compose.webm
Normal file
@@ -0,0 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:a3c4d4a62f062f717df4e6752efced3caea29004dc90fe97fd7633e7f0ded9db
|
||||
size 341057
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:388aa49a1269145698f9763452aaf6b9c6232ea9229abe1dae304df558e29695
|
||||
size 403442
|
||||
oid sha256:6c1bb48cc2f364681515a4d8bd0c586d133f5a32789b7bb64524ad7d9ed0a8e9
|
||||
size 543135
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:9b8bf4dcb8ee67270d4a88124b4dd4abe0dab518e73812ee73f7c66d77f146e2
|
||||
size 228025
|
||||
oid sha256:5f82d96137f039f21964c15c1550aa1b1f0bb2d52c04d012d253dbfbd6fad096
|
||||
size 268086
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:16b9a28137dfae25488e2094de85766a039457f5dca20c2d84ac72e3967c10b9
|
||||
size 164237
|
||||
oid sha256:2a4045b00d90928f42c7764b3c24751576cfb68a34c6e84d12b4e282d2e67378
|
||||
size 146467
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:e0fbe697a1f8256ce3b9a6a64c7019d42769134df9b5b964e5abe98a29e918fd
|
||||
size 68242
|
||||
oid sha256:f1b94416ed3740853f863e19bf45f26241a203fb0d7d187160a537f79aa544fa
|
||||
size 60353
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:629b8c80b98eb996b75439745676fd99a83f391ca25f778a71bd59173f814c2f
|
||||
size 1194931
|
||||
oid sha256:848d9c48fb7511da7996149277c038589fad1ee406ff2f30c28f777fc441d919
|
||||
size 1183641
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:33fd46f2d8538cc43be4cb553b3af9d8b412f282ee354b6373e2793fe41c799b
|
||||
size 405057
|
||||
oid sha256:e747ee71bb38b19946005d5a4def4d423dadeaaade452dec875c4cb2d24a5b77
|
||||
size 407373
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:ccd96e33faba5f297999917d89834b29d58bd2a8929eea8d62875e3d8830bd5c
|
||||
size 3198466
|
||||
oid sha256:d32c9a3eec06e57df085ad347e6bf61e323f8bd8322d0c540f0b9d4834196dfd
|
||||
size 3589776
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:979a1a21303bbf284b3510981066ef05c41c1035b34392fecc7bee472116e6db
|
||||
size 967564
|
||||
oid sha256:6c54eda599389dac74c24c83527f95cd1399e653d7faf2972c2693d90e590597
|
||||
size 1085344
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:2067f4967a93b7ee3a8db7750c435f41b1fccd2919f3443da4b848c20cc54f23
|
||||
size 124559
|
||||
oid sha256:62f9b5ec71496197a3f1c3e3bca8967d603838804279ea7dbf00a70d3391ff6c
|
||||
size 127123
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:5471bd94e6d1b9d415547fa44de6021fdad2e1cc5b8b295680e217104aa749d6
|
||||
size 98149
|
||||
oid sha256:ac2b93d3630af87b44a135723c5d10e8287529bed17c28301b2802cd9593e9e8
|
||||
size 98748
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:dac5660cfe6574857ec055fac7822f25b7c5fcb10a836b19c86142515e2fbf75
|
||||
size 1816075
|
||||
oid sha256:7b50a7e9836c496c0989363d1440fa0a6ccdaa38ee16aae92b389b3cf3c3732f
|
||||
size 2385110
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:d4efec8ef5a99f2cb31d55cd71cdbf0bb8dd0cd6281571886b7c1f8b41c3f9da
|
||||
size 1660764
|
||||
oid sha256:ccbb3d5366c7734377e12f98cca0b361028f5722124f1bb7efa231f6aeffc116
|
||||
size 2208044
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:9348dd36e79192344476d61fbbffdb122a96ecc5829fbece1818590cfc521521
|
||||
size 3373003
|
||||
oid sha256:269993b52721ce70674d3aab2a4cd8c58aa621d4ba0739afedae661c90965b26
|
||||
size 3678371
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:bebbf8151434ba37bf5e46566a4e8b57812944281926f579d056bdc835ca26aa
|
||||
size 2729799
|
||||
oid sha256:0098b55bb6a52fa39f807a01fa352ce112bcb446e2a2acb963fb02d21b28c934
|
||||
size 3088813
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:3712afff6fcde00eb951264bb24d4301deb085d082b4e95ed4c1893a571938ee
|
||||
size 1528294
|
||||
oid sha256:4bf9d8c247d278799d1daea784fc662a22f12b1bd7883f808ef30f35025ebca6
|
||||
size 4166443
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:0b218d400836a50661c9cdcce2d2b1e285cc5fe592cb42f58aae41f3e7d60684
|
||||
size 1327413
|
||||
oid sha256:02d5124217a94849bf2971d6d13d28da18c557195a81b9cca121fb7c07f0501b
|
||||
size 3523244
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:6a232ddc1b9ddd9bf6b5d99c05153e1094be56f1952f02636ca498eb7484e096
|
||||
size 3808675
|
||||
oid sha256:412a0e68f8e52801cafbb9a703ca9577e7c14cc7c0e439160b9185961997f23c
|
||||
size 4435697
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:5a7c9f5f6d47074a6af135190fda6d0a1936cd7a0b04b3aa04ea7d99167a9e05
|
||||
size 3333014
|
||||
oid sha256:0e600a1d3216b44497a889f91eac94d62ef7207b4ed0471465dcb72408caa28e
|
||||
size 3764693
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:66f4547ed2e83b302d795875588d9a085af76071a480f1096f2bb64344b80c42
|
||||
size 5428670
|
||||
oid sha256:3c07a283f4f70c4ab205b0f0acb5d6f55e3ced4c12caa7a8d5914ffe3548233a
|
||||
size 5768166
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:75c8cdeefbbdcab2a240821d3410539f2a2cbe0a015897f4135404c80c3ac32c
|
||||
size 6578366
|
||||
oid sha256:562228841de976d70ee80999b930eadf3866a13ff2867d900279993744c44671
|
||||
size 6667918
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:92ed41854fe852ae54aa5f05de8ceaf35c3ad8ef82b3034e67edf758d1acdf50
|
||||
size 13593713
|
||||
oid sha256:845746ac1cb101c3077d420c4f3fda3ca372492582dc123ac8a031a68ae9b6b1
|
||||
size 12943150
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:8507c61df25981dbe7e5bd2f9ed16c9a0befbca218947cad29f6679c77a695a7
|
||||
size 12451891
|
||||
oid sha256:189259558b5760c02583885168d7b0b47cf476cba81c7c028ec770f9d6033129
|
||||
size 12415357
|
||||
|
||||
@@ -221,7 +221,7 @@ Keep config and data separate:

 /opt/appdata/ # Local: per-host app data
 ├── plex/
-└── sonarr/
+└── grafana/
 ```

 ## Performance

@@ -235,7 +235,7 @@ Compose Farm runs operations in parallel. For large deployments:
 cf up --all

 # Avoid: sequential updates when possible
-for svc in plex sonarr radarr; do
+for svc in plex grafana nextcloud; do
   cf update $svc
 done
 ```

@@ -249,7 +249,7 @@ SSH connections are reused within a command. For many operations:
 cf update --all

 # Multiple commands, multiple connections (slower)
-cf update plex && cf update sonarr && cf update radarr
+cf update plex && cf update grafana && cf update nextcloud
 ```

 ## Traefik Setup

@@ -297,7 +297,7 @@ http:
 |------|----------|--------|
 | Compose Farm config | `~/.config/compose-farm/` | Git or copy |
 | Compose files | `/opt/compose/` | Git |
-| State file | `~/.config/compose-farm/state.yaml` | Optional (can refresh) |
+| State file | `~/.config/compose-farm/compose-farm-state.yaml` | Optional (can refresh) |
 | App data | `/opt/appdata/` | Backup solution |

 ### Disaster Recovery
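A sketch of the backup split the table above describes (paths are the table's defaults; the destination directory is an assumption):

```bash
# Compose files and config are cheap to version or copy; app data needs real backups.
git -C /opt/compose add -A && git -C /opt/compose commit -m "backup compose files"
cp -r ~/.config/compose-farm /backup/compose-farm-config   # config + state file
# /opt/appdata/ is left to your regular backup solution; state can be rebuilt with `cf refresh`
```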
docs/commands.md (143 lines changed)
@@ -13,9 +13,11 @@ The Compose Farm CLI is available as both `compose-farm` and the shorter alias `
 | **Lifecycle** | `apply` | Make reality match config |
 | | `up` | Start stacks |
 | | `down` | Stop stacks |
+| | `stop` | Stop services without removing containers |
 | | `restart` | Restart stacks (down + up) |
 | | `update` | Update stacks (pull + build + down + up) |
 | | `pull` | Pull latest images |
+| | `compose` | Run any docker compose command |
 | **Monitoring** | `ps` | Show stack status |
 | | `logs` | Show stack logs |
 | | `stats` | Show overview statistics |
@@ -97,19 +99,23 @@ cf up [OPTIONS] [STACKS]...
 |--------|-------------|
 | `--all, -a` | Start all stacks |
 | `--host, -H TEXT` | Filter to stacks on this host |
+| `--service, -s TEXT` | Target a specific service within the stack |
 | `--config, -c PATH` | Path to config file |

 **Examples:**

 ```bash
 # Start specific stacks
-cf up plex sonarr
+cf up plex grafana

 # Start all stacks
 cf up --all

 # Start all stacks on a specific host
 cf up --all --host nuc
+
+# Start a specific service within a stack
+cf up immich --service database
 ```

 **Auto-migration:**
@@ -158,9 +164,40 @@ cf down --all --host nuc

 ---

+### cf stop
+
+Stop services without removing containers.
+
+```bash
+cf stop [OPTIONS] [STACKS]...
+```
+
+**Options:**
+
+| Option | Description |
+|--------|-------------|
+| `--all, -a` | Stop all stacks |
+| `--service, -s TEXT` | Target a specific service within the stack |
+| `--config, -c PATH` | Path to config file |
+
+**Examples:**
+
+```bash
+# Stop specific stacks
+cf stop plex
+
+# Stop all stacks
+cf stop --all
+
+# Stop a specific service within a stack
+cf stop immich --service database
+```
+
+---
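The difference from `cf down` is what happens to the containers; a short sketch contrasting the two (the stack name is illustrative):

```bash
cf stop plex    # docker compose stop: containers remain, just stopped
cf up plex      # restarts the existing containers
cf down plex    # docker compose down: containers are removed
```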
### cf restart
|
||||
|
||||
Restart stacks (down + up).
|
||||
Restart stacks (down + up). With `--service`, restarts just that service.
|
||||
|
||||
```bash
|
||||
cf restart [OPTIONS] [STACKS]...
|
||||
@@ -171,6 +208,7 @@ cf restart [OPTIONS] [STACKS]...
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--all, -a` | Restart all stacks |
|
||||
| `--service, -s TEXT` | Target a specific service within the stack |
|
||||
| `--config, -c PATH` | Path to config file |
|
||||
|
||||
**Examples:**
|
||||
@@ -178,13 +216,16 @@ cf restart [OPTIONS] [STACKS]...
|
||||
```bash
|
||||
cf restart plex
|
||||
cf restart --all
|
||||
|
||||
# Restart a specific service
|
||||
cf restart immich --service database
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### cf update
|
||||
|
||||
Update stacks (pull + build + down + up).
|
||||
Update stacks (pull + build + down + up). With `--service`, updates just that service.
|
||||
|
||||
<video autoplay loop muted playsinline>
|
||||
<source src="/assets/update.webm" type="video/webm">
|
||||
@@ -199,6 +240,7 @@ cf update [OPTIONS] [STACKS]...
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--all, -a` | Update all stacks |
|
||||
| `--service, -s TEXT` | Target a specific service within the stack |
|
||||
| `--config, -c PATH` | Path to config file |
|
||||
|
||||
**Examples:**
|
||||
@@ -209,6 +251,9 @@ cf update plex
|
||||
|
||||
# Update all stacks
|
||||
cf update --all
|
||||
|
||||
# Update a specific service
|
||||
cf update immich --service database
|
||||
```
|
||||
|
||||
---
@@ -226,6 +271,7 @@ cf pull [OPTIONS] [STACKS]...
| Option | Description |
|--------|-------------|
| `--all, -a` | Pull for all stacks |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |

**Examples:**
@@ -233,6 +279,60 @@ cf pull [OPTIONS] [STACKS]...
```bash
cf pull plex
cf pull --all

# Pull a specific service
cf pull immich --service database
```

---

### cf compose

Run any docker compose command on a stack. This is a passthrough to docker compose for commands not wrapped by cf.

<video autoplay loop muted playsinline>
<source src="/assets/compose.webm" type="video/webm">
</video>

```bash
cf compose [OPTIONS] STACK COMMAND [ARGS]...
```

**Arguments:**

| Argument | Description |
|----------|-------------|
| `STACK` | Stack to operate on (use `.` for current dir) |
| `COMMAND` | Docker compose command to run |
| `ARGS` | Additional arguments passed to docker compose |

**Options:**

| Option | Description |
|--------|-------------|
| `--host, -H TEXT` | Filter to stacks on this host (required for multi-host stacks) |
| `--config, -c PATH` | Path to config file |

**Examples:**

```bash
# Show docker compose help
cf compose mystack --help

# View running processes
cf compose mystack top

# List images
cf compose mystack images

# Interactive shell
cf compose mystack exec web bash

# View parsed config
cf compose mystack config

# Use current directory as stack
cf compose . ps
```

---
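Because everything after the compose command is forwarded verbatim, docker compose's own flags and service names can be appended too. An illustrative sketch (the `--since`/`--tail` flags and the `web` service name belong to `docker compose logs`, not to cf, and are assumptions for the example):

```bash
# Tail recent logs of a single service through the passthrough
cf compose mystack logs --since 10m --tail 50 web
```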
@@ -253,6 +353,7 @@ cf ps [OPTIONS] [STACKS]...
|--------|-------------|
| `--all, -a` | Show all stacks (default) |
| `--host, -H TEXT` | Filter to stacks on this host |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--config, -c PATH` | Path to config file |

**Examples:**
@@ -262,10 +363,13 @@ cf ps [OPTIONS] [STACKS]...
cf ps

# Show specific stacks
cf ps plex sonarr
cf ps plex grafana

# Filter by host
cf ps --host nuc

# Show status of a specific service
cf ps immich --service database
```

---
@@ -288,6 +392,7 @@ cf logs [OPTIONS] [STACKS]...
|--------|-------------|
| `--all, -a` | Show logs for all stacks |
| `--host, -H TEXT` | Filter to stacks on this host |
| `--service, -s TEXT` | Target a specific service within the stack |
| `--follow, -f` | Follow logs (live stream) |
| `--tail, -n INTEGER` | Number of lines (default: 20 for --all, 100 otherwise) |
| `--config, -c PATH` | Path to config file |
@@ -302,10 +407,13 @@ cf logs plex
cf logs -f plex

# Show last 50 lines of multiple stacks
cf logs -n 50 plex sonarr
cf logs -n 50 plex grafana

# Show last 20 lines of all stacks
cf logs --all

# Show logs for a specific service
cf logs immich --service database
```

---
@@ -374,25 +482,31 @@ cf check jellyfin
Update local state from running stacks.

```bash
cf refresh [OPTIONS]
cf refresh [OPTIONS] [STACKS]...
```

**Options:**

| Option | Description |
|--------|-------------|
| `--all, -a` | Refresh all stacks |
| `--dry-run, -n` | Show what would change |
| `--log-path, -l PATH` | Path to Dockerfarm TOML log |
| `--config, -c PATH` | Path to config file |

Without arguments, refreshes all stacks (same as `--all`). With stack names, refreshes only those stacks.

**Examples:**

```bash
# Sync state with reality
# Sync state with reality (all stacks)
cf refresh

# Preview changes
cf refresh --dry-run

# Refresh specific stacks only
cf refresh plex sonarr
```

---
@@ -539,7 +653,20 @@ cf ssh COMMAND
| `status` | Show SSH key status and host connectivity |
| `keygen` | Generate key without distributing |

**Options for `cf ssh setup` and `cf ssh keygen`:**
**Options for `cf ssh setup`:**

| Option | Description |
|--------|-------------|
| `--config, -c PATH` | Path to config file |
| `--force, -f` | Regenerate key even if it exists |

**Options for `cf ssh status`:**

| Option | Description |
|--------|-------------|
| `--config, -c PATH` | Path to config file |

**Options for `cf ssh keygen`:**

| Option | Description |
|--------|-------------|

@@ -42,8 +42,8 @@ hosts:
# Map stacks to the local host
stacks:
  plex: local
  sonarr: local
  radarr: local
  grafana: local
  nextcloud: local
```

### Multi-host (full example)
@@ -69,8 +69,8 @@ hosts:
stacks:
  # Single-host stacks
  plex: nuc
  sonarr: nuc
  radarr: hp
  grafana: nuc
  nextcloud: hp

  # Multi-host stacks
  dozzle: all # Run on ALL hosts
@@ -94,7 +94,7 @@ compose_dir: /opt/compose
├── plex/
│   ├── docker-compose.yml # or compose.yaml
│   └── .env # optional environment file
├── sonarr/
├── grafana/
│   └── docker-compose.yml
└── ...
```
@@ -185,8 +185,8 @@ hosts:
```yaml
stacks:
  plex: nuc
  sonarr: nuc
  radarr: hp
  grafana: nuc
  nextcloud: hp
```

### Multi-Host Stack
@@ -229,7 +229,7 @@ For example, if your config is at `~/.config/compose-farm/compose-farm.yaml`, th
```yaml
deployed:
  plex: nuc
  sonarr: nuc
  grafana: nuc
```

This file records which stacks are deployed and on which host.
@@ -373,8 +373,8 @@ hosts:
stacks:
  # Media
  plex: nuc
  sonarr: nuc
  radarr: nuc
  jellyfin: nuc
  immich: nuc

  # Infrastructure
  traefik: nuc
@@ -388,7 +388,6 @@ stacks:

```yaml
compose_dir: /opt/compose
network: production
traefik_file: /opt/traefik/dynamic.d/cf.yml
traefik_stack: traefik

@@ -10,10 +10,10 @@ VHS-based terminal demo recordings for Compose Farm CLI.

```bash
# Record all demos
./docs/demos/cli/record.sh
python docs/demos/cli/record.py

# Record single demo
cd /opt/stacks && vhs docs/demos/cli/quickstart.tape
# Record specific demos
python docs/demos/cli/record.py quickstart migration
```

## Demos
@@ -23,6 +23,7 @@ cd /opt/stacks && vhs docs/demos/cli/quickstart.tape
| `install.tape` | Installing with `uv tool install` |
| `quickstart.tape` | `cf ps`, `cf up`, `cf logs` |
| `logs.tape` | Viewing logs |
| `compose.tape` | `cf compose` passthrough (--help, images, exec) |
| `update.tape` | `cf update` |
| `migration.tape` | Service migration |
| `apply.tape` | `cf apply` |

50
docs/demos/cli/compose.tape
Normal file
@@ -0,0 +1,50 @@
|
||||
# Compose Demo
|
||||
# Shows that cf compose passes through ANY docker compose command
|
||||
|
||||
Output docs/assets/compose.gif
|
||||
Output docs/assets/compose.webm
|
||||
|
||||
Set Shell "bash"
|
||||
Set FontSize 14
|
||||
Set Width 900
|
||||
Set Height 550
|
||||
Set Theme "Catppuccin Mocha"
|
||||
Set TypingSpeed 50ms
|
||||
|
||||
Type "# cf compose runs ANY docker compose command on the right host"
|
||||
Enter
|
||||
Sleep 500ms
|
||||
|
||||
Type "# See ALL available compose commands"
|
||||
Enter
|
||||
Sleep 500ms
|
||||
|
||||
Type "cf compose immich --help"
|
||||
Enter
|
||||
Sleep 4s
|
||||
|
||||
Type "# Show images"
|
||||
Enter
|
||||
Sleep 500ms
|
||||
|
||||
Type "cf compose immich images"
|
||||
Enter
|
||||
Wait+Screen /immich/
|
||||
Sleep 2s
|
||||
|
||||
Type "# Open shell in a container"
|
||||
Enter
|
||||
Sleep 500ms
|
||||
|
||||
Type "cf compose immich exec immich-machine-learning sh"
|
||||
Enter
|
||||
Wait+Screen /#/
|
||||
Sleep 1s
|
||||
|
||||
Type "python3 --version"
|
||||
Enter
|
||||
Sleep 1s
|
||||
|
||||
Type "exit"
|
||||
Enter
|
||||
Sleep 500ms
|
||||
@@ -21,7 +21,7 @@ Type "# First, define your hosts..."
|
||||
Enter
|
||||
Sleep 500ms
|
||||
|
||||
Type "bat -r 1:11 compose-farm.yaml"
|
||||
Type "bat -r 1:16 compose-farm.yaml"
|
||||
Enter
|
||||
Sleep 3s
|
||||
Type "q"
|
||||
@@ -31,7 +31,7 @@ Type "# Then map each stack to a host"
|
||||
Enter
|
||||
Sleep 500ms
|
||||
|
||||
Type "bat -r 13:30 compose-farm.yaml"
|
||||
Type "bat -r 17:35 compose-farm.yaml"
|
||||
Enter
|
||||
Sleep 3s
|
||||
Type "q"
|
||||
|
||||
134
docs/demos/cli/record.py
Executable file
@@ -0,0 +1,134 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Record CLI demos using VHS."""
|
||||
|
||||
import shutil
|
||||
import subprocess
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
from rich.console import Console
|
||||
|
||||
from compose_farm.config import load_config
|
||||
from compose_farm.state import load_state
|
||||
|
||||
console = Console()
|
||||
SCRIPT_DIR = Path(__file__).parent
|
||||
STACKS_DIR = Path("/opt/stacks")
|
||||
CONFIG_FILE = STACKS_DIR / "compose-farm.yaml"
|
||||
OUTPUT_DIR = SCRIPT_DIR.parent.parent / "assets"
|
||||
|
||||
DEMOS = ["install", "quickstart", "logs", "compose", "update", "migration", "apply"]
|
||||
|
||||
|
||||
def _run(cmd: list[str], **kw) -> bool:
|
||||
return subprocess.run(cmd, check=False, **kw).returncode == 0
|
||||
|
||||
|
||||
def _set_config(host: str) -> None:
|
||||
"""Set audiobookshelf host in config file."""
|
||||
_run(["sed", "-i", f"s/audiobookshelf: .*/audiobookshelf: {host}/", str(CONFIG_FILE)])
|
||||
|
||||
|
||||
def _get_hosts() -> tuple[str | None, str | None]:
|
||||
"""Return (config_host, state_host) for audiobookshelf."""
|
||||
config = load_config()
|
||||
state = load_state(config)
|
||||
return config.stacks.get("audiobookshelf"), state.get("audiobookshelf")
|
||||
|
||||
|
||||
def _setup_state(demo: str) -> bool:
|
||||
"""Set up required state for demo. Returns False on failure."""
|
||||
if demo not in ("migration", "apply"):
|
||||
return True
|
||||
|
||||
config_host, state_host = _get_hosts()
|
||||
|
||||
if demo == "migration":
|
||||
# Migration needs audiobookshelf on nas in BOTH config and state
|
||||
if config_host != "nas":
|
||||
console.print("[yellow]Setting up: config → nas[/yellow]")
|
||||
_set_config("nas")
|
||||
if state_host != "nas":
|
||||
console.print("[yellow]Setting up: state → nas[/yellow]")
|
||||
if not _run(["cf", "apply"], cwd=STACKS_DIR):
|
||||
return False
|
||||
|
||||
elif demo == "apply":
|
||||
# Apply needs config=nas, state=anton (so there's something to apply)
|
||||
if config_host != "nas":
|
||||
console.print("[yellow]Setting up: config → nas[/yellow]")
|
||||
_set_config("nas")
|
||||
if state_host == "nas":
|
||||
console.print("[yellow]Setting up: state → anton[/yellow]")
|
||||
_set_config("anton")
|
||||
if not _run(["cf", "apply"], cwd=STACKS_DIR):
|
||||
return False
|
||||
_set_config("nas")
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def _record(name: str, index: int, total: int) -> bool:
|
||||
"""Record a single demo."""
|
||||
console.print(f"[cyan][{index}/{total}][/cyan] [green]Recording:[/green] {name}")
|
||||
if _run(["vhs", str(SCRIPT_DIR / f"{name}.tape")], cwd=STACKS_DIR):
|
||||
console.print("[green] ✓ Done[/green]")
|
||||
return True
|
||||
console.print("[red] ✗ Failed[/red]")
|
||||
return False
|
||||
|
||||
|
||||
def _reset_after(demo: str, next_demo: str | None) -> None:
|
||||
"""Reset state after demos that modify audiobookshelf."""
|
||||
if demo not in ("quickstart", "migration"):
|
||||
return
|
||||
_set_config("nas")
|
||||
if next_demo != "apply": # Let apply demo show the migration
|
||||
_run(["cf", "apply"], cwd=STACKS_DIR)
|
||||
|
||||
|
||||
def _restore_config(original: str) -> None:
|
||||
"""Restore original config and sync state."""
|
||||
console.print("[yellow]Restoring original config...[/yellow]")
|
||||
CONFIG_FILE.write_text(original)
|
||||
_run(["cf", "apply"], cwd=STACKS_DIR)
|
||||
|
||||
|
||||
def _main() -> int:
|
||||
if not shutil.which("vhs"):
|
||||
console.print("[red]VHS not found. Install: brew install vhs[/red]")
|
||||
return 1
|
||||
|
||||
if not _run(["git", "-C", str(STACKS_DIR), "diff", "--quiet", "compose-farm.yaml"]):
|
||||
console.print("[red]compose-farm.yaml has uncommitted changes[/red]")
|
||||
return 1
|
||||
|
||||
demos = [d for d in sys.argv[1:] if d in DEMOS] or DEMOS
|
||||
if sys.argv[1:] and not demos:
|
||||
console.print(f"[red]Unknown demo. Available: {', '.join(DEMOS)}[/red]")
|
||||
return 1
|
||||
|
||||
# Save original config to restore after recording
|
||||
original_config = CONFIG_FILE.read_text()
|
||||
|
||||
try:
|
||||
for i, demo in enumerate(demos, 1):
|
||||
if not _setup_state(demo):
|
||||
return 1
|
||||
if not _record(demo, i, len(demos)):
|
||||
return 1
|
||||
_reset_after(demo, demos[i] if i < len(demos) else None)
|
||||
finally:
|
||||
_restore_config(original_config)
|
||||
|
||||
# Move outputs
|
||||
OUTPUT_DIR.mkdir(exist_ok=True)
|
||||
for f in (STACKS_DIR / "docs/assets").glob("*.[gw]*"):
|
||||
shutil.move(str(f), str(OUTPUT_DIR / f.name))
|
||||
|
||||
console.print(f"\n[green]Done![/green] Saved to {OUTPUT_DIR}")
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(_main())
|
||||
@@ -1,89 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
# Record all VHS demos
|
||||
# Run this on a Docker host with compose-farm configured
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
DEMOS_DIR="$(dirname "$SCRIPT_DIR")"
|
||||
DOCS_DIR="$(dirname "$DEMOS_DIR")"
|
||||
REPO_DIR="$(dirname "$DOCS_DIR")"
|
||||
OUTPUT_DIR="$DOCS_DIR/assets"
|
||||
|
||||
# Colors
|
||||
GREEN='\033[0;32m'
|
||||
BLUE='\033[0;34m'
|
||||
YELLOW='\033[0;33m'
|
||||
RED='\033[0;31m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Check for VHS
|
||||
if ! command -v vhs &> /dev/null; then
|
||||
echo "VHS not found. Install with:"
|
||||
echo " brew install vhs"
|
||||
echo " # or"
|
||||
echo " go install github.com/charmbracelet/vhs@latest"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Ensure output directory exists
|
||||
mkdir -p "$OUTPUT_DIR"
|
||||
|
||||
# Temp output dir (VHS runs from /opt/stacks, so relative paths go here)
|
||||
TEMP_OUTPUT="/opt/stacks/docs/assets"
|
||||
mkdir -p "$TEMP_OUTPUT"
|
||||
|
||||
# Change to /opt/stacks so cf commands use installed version (not editable install)
|
||||
cd /opt/stacks
|
||||
|
||||
# Ensure compose-farm.yaml has no uncommitted changes (safety check)
|
||||
if ! git diff --quiet compose-farm.yaml; then
|
||||
echo -e "${RED}Error: compose-farm.yaml has uncommitted changes${NC}"
|
||||
echo "Commit or stash your changes before recording demos"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo -e "${BLUE}Recording VHS demos...${NC}"
|
||||
echo "Output directory: $OUTPUT_DIR"
|
||||
echo ""
|
||||
|
||||
# Function to record a tape
|
||||
record_tape() {
|
||||
local tape=$1
|
||||
local name=$(basename "$tape" .tape)
|
||||
echo -e "${GREEN}Recording:${NC} $name"
|
||||
if vhs "$tape"; then
|
||||
echo -e "${GREEN} ✓ Done${NC}"
|
||||
else
|
||||
echo -e "${RED} ✗ Failed${NC}"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Record demos in logical order
|
||||
echo -e "${YELLOW}=== Phase 1: Basic demos ===${NC}"
|
||||
record_tape "$SCRIPT_DIR/install.tape"
|
||||
record_tape "$SCRIPT_DIR/quickstart.tape"
|
||||
record_tape "$SCRIPT_DIR/logs.tape"
|
||||
|
||||
echo -e "${YELLOW}=== Phase 2: Update demo ===${NC}"
|
||||
record_tape "$SCRIPT_DIR/update.tape"
|
||||
|
||||
echo -e "${YELLOW}=== Phase 3: Migration demo ===${NC}"
|
||||
record_tape "$SCRIPT_DIR/migration.tape"
|
||||
git -C /opt/stacks checkout compose-farm.yaml # Reset after migration
|
||||
|
||||
echo -e "${YELLOW}=== Phase 4: Apply demo ===${NC}"
|
||||
record_tape "$SCRIPT_DIR/apply.tape"
|
||||
|
||||
# Move GIFs and WebMs from temp location to repo
|
||||
echo ""
|
||||
echo -e "${BLUE}Moving recordings to repo...${NC}"
|
||||
mv "$TEMP_OUTPUT"/*.gif "$OUTPUT_DIR/" 2>/dev/null || true
|
||||
mv "$TEMP_OUTPUT"/*.webm "$OUTPUT_DIR/" 2>/dev/null || true
|
||||
rmdir "$TEMP_OUTPUT" 2>/dev/null || true
|
||||
rmdir "$(dirname "$TEMP_OUTPUT")" 2>/dev/null || true
|
||||
|
||||
echo ""
|
||||
echo -e "${GREEN}Done!${NC} Recordings saved to $OUTPUT_DIR/"
|
||||
ls -la "$OUTPUT_DIR"/*.gif "$OUTPUT_DIR"/*.webm 2>/dev/null || echo "No recordings found (check for errors above)"
|
||||
@@ -60,10 +60,14 @@ def test_demo_console(recording_page: Page, server_url: str) -> None:
|
||||
page.keyboard.press("Enter")
|
||||
pause(page, 2500) # Wait for output
|
||||
|
||||
# Scroll down to show the Editor section with Compose Farm config
|
||||
editor_section = page.locator(".collapse", has_text="Editor").first
|
||||
editor_section.scroll_into_view_if_needed()
|
||||
pause(page, 800)
|
||||
# Smoothly scroll down to show the Editor section with Compose Farm config
|
||||
page.evaluate("""
|
||||
const editor = document.getElementById('console-editor');
|
||||
if (editor) {
|
||||
editor.scrollIntoView({ behavior: 'smooth', block: 'center' });
|
||||
}
|
||||
""")
|
||||
pause(page, 1200) # Wait for smooth scroll animation
|
||||
|
||||
# Wait for Monaco editor to load with config content
|
||||
page.wait_for_selector("#console-editor .monaco-editor", timeout=10000)
|
||||
|
||||
@@ -1,9 +1,11 @@
|
||||
"""Demo: Container shell exec.
|
||||
"""Demo: Container shell exec via command palette.
|
||||
|
||||
Records a ~25 second demo showing:
|
||||
- Navigating to a stack page
|
||||
- Clicking Shell button on a container
|
||||
- Running top command inside the container
|
||||
Records a ~35 second demo showing:
|
||||
- Navigating to immich stack (multiple containers)
|
||||
- Using command palette with fuzzy matching ("sh mach") to open shell
|
||||
- Running a command
|
||||
- Using command palette to switch to server container shell
|
||||
- Running another command
|
||||
|
||||
Run: pytest docs/demos/web/demo_shell.py -v --no-cov
|
||||
"""
|
||||
@@ -14,6 +16,7 @@ from typing import TYPE_CHECKING
|
||||
|
||||
import pytest
|
||||
from conftest import (
|
||||
open_command_palette,
|
||||
pause,
|
||||
slow_type,
|
||||
wait_for_sidebar,
|
||||
@@ -33,39 +36,71 @@ def test_demo_shell(recording_page: Page, server_url: str) -> None:
|
||||
wait_for_sidebar(page)
|
||||
pause(page, 800)
|
||||
|
||||
# Navigate to a stack with a running container (grocy)
|
||||
page.locator("#sidebar-stacks a", has_text="grocy").click()
|
||||
page.wait_for_url("**/stack/grocy", timeout=5000)
|
||||
# Navigate to immich via command palette (has multiple containers)
|
||||
open_command_palette(page)
|
||||
pause(page, 400)
|
||||
slow_type(page, "#cmd-input", "immich", delay=100)
|
||||
pause(page, 600)
|
||||
page.keyboard.press("Enter")
|
||||
page.wait_for_url("**/stack/immich", timeout=5000)
|
||||
pause(page, 1500)
|
||||
|
||||
# Wait for containers list to load (loaded via HTMX)
|
||||
# Wait for containers list to load (so shell commands are available)
|
||||
page.wait_for_selector("#containers-list button", timeout=10000)
|
||||
pause(page, 800)
|
||||
|
||||
# Click Shell button on the first container
|
||||
shell_btn = page.locator("#containers-list button", has_text="Shell").first
|
||||
shell_btn.click()
|
||||
# Use command palette with fuzzy matching: "sh mach" -> "Shell: immich-machine-learning"
|
||||
open_command_palette(page)
|
||||
pause(page, 400)
|
||||
slow_type(page, "#cmd-input", "sh mach", delay=100)
|
||||
pause(page, 600)
|
||||
page.keyboard.press("Enter")
|
||||
pause(page, 1000)
|
||||
|
||||
# Wait for exec terminal to appear
|
||||
page.wait_for_selector("#exec-terminal .xterm", timeout=10000)
|
||||
|
||||
# Scroll down to make the terminal visible
|
||||
page.locator("#exec-terminal").scroll_into_view_if_needed()
|
||||
pause(page, 2000)
|
||||
# Smoothly scroll down to make the terminal visible
|
||||
page.evaluate("""
|
||||
const terminal = document.getElementById('exec-terminal');
|
||||
if (terminal) {
|
||||
terminal.scrollIntoView({ behavior: 'smooth', block: 'center' });
|
||||
}
|
||||
""")
|
||||
pause(page, 1200)
|
||||
|
||||
# Run top command
|
||||
slow_type(page, "#exec-terminal .xterm-helper-textarea", "top", delay=100)
|
||||
# Run python version command
|
||||
slow_type(page, "#exec-terminal .xterm-helper-textarea", "python3 --version", delay=60)
|
||||
pause(page, 300)
|
||||
page.keyboard.press("Enter")
|
||||
pause(page, 4000) # Let top run for a bit
|
||||
pause(page, 1500)
|
||||
|
||||
# Press q to quit top
|
||||
page.keyboard.press("q")
|
||||
# Blur the terminal to release focus (won't scroll)
|
||||
page.evaluate("document.activeElement?.blur()")
|
||||
pause(page, 500)
|
||||
|
||||
# Use command palette to switch to server container: "sh serv" -> "Shell: immich-server"
|
||||
open_command_palette(page)
|
||||
pause(page, 400)
|
||||
slow_type(page, "#cmd-input", "sh serv", delay=100)
|
||||
pause(page, 600)
|
||||
page.keyboard.press("Enter")
|
||||
pause(page, 1000)
|
||||
|
||||
# Run another command to show it's interactive
|
||||
slow_type(page, "#exec-terminal .xterm-helper-textarea", "ps aux | head", delay=60)
|
||||
# Wait for new terminal
|
||||
page.wait_for_selector("#exec-terminal .xterm", timeout=10000)
|
||||
|
||||
# Scroll to terminal
|
||||
page.evaluate("""
|
||||
const terminal = document.getElementById('exec-terminal');
|
||||
if (terminal) {
|
||||
terminal.scrollIntoView({ behavior: 'smooth', block: 'center' });
|
||||
}
|
||||
""")
|
||||
pause(page, 1200)
|
||||
|
||||
# Run ls command
|
||||
slow_type(page, "#exec-terminal .xterm-helper-textarea", "ls /usr/src/app", delay=60)
|
||||
pause(page, 300)
|
||||
page.keyboard.press("Enter")
|
||||
pause(page, 2000)
|
||||
|
||||
@@ -55,9 +55,14 @@ def test_demo_stack(recording_page: Page, server_url: str) -> None:
|
||||
page.wait_for_selector("#compose-editor .monaco-editor", timeout=10000)
|
||||
pause(page, 2000) # Let viewer see the compose file
|
||||
|
||||
# Scroll down slightly to show more of the editor
|
||||
page.locator("#compose-editor").scroll_into_view_if_needed()
|
||||
pause(page, 1500)
|
||||
# Smoothly scroll down to show more of the editor
|
||||
page.evaluate("""
|
||||
const editor = document.getElementById('compose-editor');
|
||||
if (editor) {
|
||||
editor.scrollIntoView({ behavior: 'smooth', block: 'center' });
|
||||
}
|
||||
""")
|
||||
pause(page, 1200) # Wait for smooth scroll animation
|
||||
|
||||
# Close the compose file section
|
||||
compose_collapse.locator("input[type=checkbox]").click(force=True)
|
||||
|
||||
@@ -63,7 +63,7 @@ def test_demo_themes(recording_page: Page, server_url: str) -> None:
|
||||
pause(page, 400)
|
||||
|
||||
# Type to filter to a light theme (theme button pre-populates "theme:")
|
||||
slow_type(page, "#cmd-input", " cup", delay=100)
|
||||
slow_type(page, "#cmd-input", "cup", delay=100)
|
||||
pause(page, 500)
|
||||
page.keyboard.press("Enter")
|
||||
pause(page, 1000)
|
||||
@@ -75,7 +75,7 @@ def test_demo_themes(recording_page: Page, server_url: str) -> None:
|
||||
page.wait_for_selector("#cmd-palette[open]", timeout=2000)
|
||||
pause(page, 300)
|
||||
|
||||
slow_type(page, "#cmd-input", " dark", delay=100)
|
||||
slow_type(page, "#cmd-input", "dark", delay=100)
|
||||
pause(page, 400)
|
||||
page.keyboard.press("Enter")
|
||||
pause(page, 800)
|
||||
|
||||
@@ -5,7 +5,7 @@ Records a comprehensive demo (~60 seconds) combining all major features:
|
||||
2. Editor showing Compose Farm YAML config
|
||||
3. Command palette navigation to grocy stack
|
||||
4. Stack actions: up, logs
|
||||
5. Switch to mealie stack via command palette, run update
|
||||
5. Switch to dozzle stack via command palette, run update
|
||||
6. Dashboard overview
|
||||
7. Theme cycling via command palette
|
||||
|
||||
@@ -126,13 +126,13 @@ def _demo_stack_actions(page: Page) -> None:
|
||||
page.wait_for_selector("#terminal-output .xterm", timeout=5000)
|
||||
pause(page, 2500)
|
||||
|
||||
# Switch to mealie via command palette
|
||||
# Switch to dozzle via command palette (on nas for lower latency)
|
||||
open_command_palette(page)
|
||||
pause(page, 300)
|
||||
slow_type(page, "#cmd-input", "mealie", delay=100)
|
||||
slow_type(page, "#cmd-input", "dozzle", delay=100)
|
||||
pause(page, 400)
|
||||
page.keyboard.press("Enter")
|
||||
page.wait_for_url("**/stack/mealie", timeout=5000)
|
||||
page.wait_for_url("**/stack/dozzle", timeout=5000)
|
||||
pause(page, 1000)
|
||||
|
||||
# Run update action
|
||||
@@ -162,32 +162,20 @@ def _demo_dashboard_and_themes(page: Page, server_url: str) -> None:
|
||||
page.evaluate("window.scrollTo(0, 0)")
|
||||
pause(page, 600)
|
||||
|
||||
# Open theme picker and arrow down to Luxury (shows live preview)
|
||||
# Theme order: light, dark, cupcake, bumblebee, emerald, corporate, synthwave,
|
||||
# retro, cyberpunk, valentine, halloween, garden, forest, aqua, lofi, pastel,
|
||||
# fantasy, wireframe, black, luxury (index 19)
|
||||
# Open theme picker and arrow down to Dracula (shows live preview)
|
||||
page.locator("#theme-btn").click()
|
||||
page.wait_for_selector("#cmd-palette[open]", timeout=2000)
|
||||
pause(page, 400)
|
||||
|
||||
# Arrow down through themes with live preview until we reach Luxury
|
||||
# Arrow down through themes with live preview until we reach Dracula
|
||||
for _ in range(19):
|
||||
page.keyboard.press("ArrowDown")
|
||||
pause(page, 180)
|
||||
|
||||
# Select Luxury theme
|
||||
# Select Dracula theme and end on it
|
||||
pause(page, 400)
|
||||
page.keyboard.press("Enter")
|
||||
pause(page, 1000)
|
||||
|
||||
# Return to dark theme
|
||||
page.locator("#theme-btn").click()
|
||||
page.wait_for_selector("#cmd-palette[open]", timeout=2000)
|
||||
pause(page, 300)
|
||||
slow_type(page, "#cmd-input", " dark", delay=80)
|
||||
pause(page, 400)
|
||||
page.keyboard.press("Enter")
|
||||
pause(page, 1000)
|
||||
pause(page, 1500)
|
||||
|
||||
|
||||
@pytest.mark.browser # type: ignore[misc]
|
||||
|
||||
@@ -88,9 +88,9 @@ def patch_playwright_video_quality() -> None:
|
||||
console.print("[green]Patched Playwright for high-quality video recording[/green]")
|
||||
|
||||
|
||||
def record_demo(name: str) -> Path | None:
|
||||
def record_demo(name: str, index: int, total: int) -> Path | None:
|
||||
"""Run a single demo and return the video path."""
|
||||
console.print(f"[green]Recording:[/green] web-{name}")
|
||||
console.print(f"[cyan][{index}/{total}][/cyan] [green]Recording:[/green] web-{name}")
|
||||
|
||||
demo_file = SCRIPT_DIR / f"demo_{name}.py"
|
||||
if not demo_file.exists():
|
||||
@@ -227,9 +227,7 @@ def main() -> int:
|
||||
|
||||
try:
|
||||
for i, demo in enumerate(demos_to_record, 1):
|
||||
console.print(f"[yellow]=== Demo {i}/{len(demos_to_record)}: {demo} ===[/yellow]")
|
||||
|
||||
video_path = record_demo(demo)
|
||||
video_path = record_demo(demo, i, len(demos_to_record))
|
||||
if video_path:
|
||||
webm, gif = move_recording(video_path, demo)
|
||||
results[demo] = (webm, gif)
|
||||
|
||||
@@ -24,7 +24,7 @@ Before you begin, ensure you have:
### One-liner (recommended)

```bash
curl -fsSL https://raw.githubusercontent.com/basnijholt/compose-farm/main/bootstrap.sh | sh
curl -fsSL https://compose-farm.nijho.lt/install | sh
```

This installs [uv](https://docs.astral.sh/uv/) if needed, then installs compose-farm.
@@ -54,6 +54,25 @@ docker run --rm \
  ghcr.io/basnijholt/compose-farm up --all
```

**Running as non-root user** (recommended for NFS mounts):

By default, containers run as root. To preserve file ownership on mounted volumes, set these environment variables in your `.env` file:

```bash
# Add to .env file (one-time setup)
echo "CF_UID=$(id -u)" >> .env
echo "CF_GID=$(id -g)" >> .env
echo "CF_HOME=$HOME" >> .env
echo "CF_USER=$USER" >> .env
```

Or use [direnv](https://direnv.net/) to auto-set these variables when entering the directory:
```bash
cp .envrc.example .envrc && direnv allow
```

This ensures files like `compose-farm-state.yaml` and web UI edits are owned by your user instead of root. The `CF_USER` variable is required for SSH to work when running as a non-root user.

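If direnv isn't an option, the same four variables from `.envrc.example` can simply be exported once from your shell profile; a minimal sketch:

```bash
# e.g. in ~/.bashrc or ~/.zshrc (alternative to the .env / direnv setups above)
export CF_UID=$(id -u)
export CF_GID=$(id -g)
export CF_HOME=$HOME
export CF_USER=$USER
```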
### Verify Installation

```bash
@@ -111,9 +130,9 @@ nas:/volume1/compose /opt/compose nfs defaults 0 0
/opt/compose/ # compose_dir in config
├── plex/
│   └── docker-compose.yml
├── sonarr/
├── grafana/
│   └── docker-compose.yml
├── radarr/
├── nextcloud/
│   └── docker-compose.yml
└── jellyfin/
    └── docker-compose.yml
@@ -123,7 +142,21 @@ nas:/volume1/compose /opt/compose nfs defaults 0 0

### Create Config File

Create `~/.config/compose-farm/compose-farm.yaml`:
Create `compose-farm.yaml` in the directory where you'll run commands. For example, if your stacks are in `/opt/stacks`, place the config there too:

```bash
cd /opt/stacks
cf config init
```

Alternatively, use `~/.config/compose-farm/compose-farm.yaml` for a global config. You can also symlink a working directory config to the global location:

```bash
# Create config in your stacks directory, symlink to ~/.config
cf config symlink /opt/stacks/compose-farm.yaml
```

This way, `cf` commands work from anywhere while the config lives with your stacks.

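To confirm the global path resolves to your working copy after the symlink step, a plain directory listing is enough (output abbreviated; paths assumed from the example above):

```bash
ls -l ~/.config/compose-farm/compose-farm.yaml
# ... compose-farm.yaml -> /opt/stacks/compose-farm.yaml
```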
#### Single host example

@@ -136,8 +169,8 @@ hosts:

stacks:
  plex: local
  sonarr: local
  radarr: local
  grafana: local
  nextcloud: local
```

#### Multi-host example
@@ -157,8 +190,8 @@ hosts:
# Map stacks to hosts
stacks:
  plex: nuc
  sonarr: nuc
  radarr: hp
  grafana: nuc
  nextcloud: hp
```

Each entry in `stacks:` maps to a folder under `compose_dir` that contains a compose file.
@@ -197,7 +230,7 @@ Starts all stacks on their assigned hosts.
### Start Specific Stacks

```bash
cf up plex sonarr
cf up plex grafana
```

### Apply Configuration
@@ -236,19 +269,22 @@ Create the compose file:

```bash
# On any host (shared storage)
mkdir -p /opt/compose/prowlarr
cat > /opt/compose/prowlarr/docker-compose.yml << 'EOF'
mkdir -p /opt/compose/gitea
cat > /opt/compose/gitea/docker-compose.yml << 'EOF'
services:
  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest
    container_name: prowlarr
  gitea:
    image: docker.gitea.com/gitea:latest
    container_name: gitea
    environment:
      - PUID=1000
      - PGID=1000
      - USER_UID=1000
      - USER_GID=1000
    volumes:
      - /opt/config/prowlarr:/config
      - /opt/config/gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "9696:9696"
      - "3000:3000"
      - "2222:22"
    restart: unless-stopped
EOF
```
@@ -258,13 +294,13 @@ Add to config:

```yaml
stacks:
  # ... existing stacks
  prowlarr: nuc
  gitea: nuc
```

Start the stack:

```bash
cf up prowlarr
cf up gitea
```
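Once it is up, the usual monitoring commands confirm the new stack landed on its assigned host, for example:

```bash
# Check status and recent output of the freshly added stack
cf ps gitea
cf logs -n 20 gitea
```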
### 2. Move a Stack to Another Host

@@ -76,7 +76,7 @@ hosts:
|
||||
stacks:
|
||||
plex: server-1
|
||||
jellyfin: server-2
|
||||
sonarr: server-1
|
||||
grafana: server-1
|
||||
```
|
||||
|
||||
```bash
|
||||
@@ -96,7 +96,7 @@ pip install compose-farm
|
||||
|
||||
### Configuration
|
||||
|
||||
Create `~/.config/compose-farm/compose-farm.yaml`:
|
||||
Create `compose-farm.yaml` in the directory where you'll run commands (e.g., `/opt/stacks`), or in `~/.config/compose-farm/`:
|
||||
|
||||
```yaml
|
||||
compose_dir: /opt/compose
|
||||
@@ -110,10 +110,12 @@ hosts:
|
||||
|
||||
stacks:
|
||||
plex: nuc
|
||||
sonarr: nuc
|
||||
radarr: hp
|
||||
grafana: nuc
|
||||
nextcloud: hp
|
||||
```
|
||||
|
||||
See [Configuration](configuration.md) for all options and the full search order.
|
||||
|
||||
### Usage
|
||||
|
||||
```bash
|
||||
@@ -121,7 +123,7 @@ stacks:
|
||||
cf apply
|
||||
|
||||
# Start specific stacks
|
||||
cf up plex sonarr
|
||||
cf up plex grafana
|
||||
|
||||
# Check status
|
||||
cf ps
|
||||
|
||||
2
bootstrap.sh → docs/install
Executable file → Normal file
@@ -1,6 +1,6 @@
|
||||
#!/bin/sh
|
||||
# Compose Farm bootstrap script
|
||||
# Usage: curl -fsSL https://raw.githubusercontent.com/basnijholt/compose-farm/main/bootstrap.sh | sh
|
||||
# Usage: curl -fsSL https://compose-farm.nijho.lt/install | sh
|
||||
#
|
||||
# This script installs uv (if needed) and then installs compose-farm as a uv tool.
|
||||
|
||||
21
docs/javascripts/video-fix.js
Normal file
@@ -0,0 +1,21 @@
|
||||
// Fix Safari video autoplay issues
|
||||
(function() {
|
||||
function initVideos() {
|
||||
document.querySelectorAll('video[autoplay]').forEach(function(video) {
|
||||
video.load();
|
||||
video.play().catch(function() {});
|
||||
});
|
||||
}
|
||||
|
||||
// For initial page load (needed for Chrome)
|
||||
if (document.readyState === 'loading') {
|
||||
document.addEventListener('DOMContentLoaded', initVideos);
|
||||
} else {
|
||||
initVideos();
|
||||
}
|
||||
|
||||
// For MkDocs instant navigation (needed for Safari)
|
||||
if (typeof document$ !== 'undefined') {
|
||||
document$.subscribe(initVideos);
|
||||
}
|
||||
})();
|
||||
@@ -27,8 +27,8 @@ hosts:
|
||||
stacks:
|
||||
plex: nuc
|
||||
jellyfin: hp
|
||||
sonarr: nuc
|
||||
radarr: nuc
|
||||
grafana: nuc
|
||||
nextcloud: nuc
|
||||
```
|
||||
|
||||
Then just:
|
||||
|
||||
@@ -133,7 +133,7 @@ hosts:
|
||||
stacks:
|
||||
traefik: nuc # Traefik runs here
|
||||
plex: hp # Routed via file-provider
|
||||
sonarr: hp
|
||||
grafana: hp
|
||||
```
|
||||
|
||||
With `traefik_file` set, these commands auto-regenerate the config:
|
||||
@@ -256,8 +256,8 @@ stacks:
|
||||
traefik: nuc
|
||||
plex: hp
|
||||
jellyfin: nas
|
||||
sonarr: nuc
|
||||
radarr: nuc
|
||||
grafana: nuc
|
||||
nextcloud: nuc
|
||||
```
|
||||
|
||||
### /opt/compose/plex/docker-compose.yml
|
||||
@@ -309,7 +309,7 @@ http:
|
||||
- url: http://192.168.1.100:8096
|
||||
```
|
||||
|
||||
Note: `sonarr` and `radarr` are NOT in the file because they're on the same host as Traefik (`nuc`).
|
||||
Note: `grafana` and `nextcloud` are NOT in the file because they're on the same host as Traefik (`nuc`).
|
||||
|
||||
## Combining with Existing Config
|
||||
|
||||
|
||||
@@ -63,6 +63,8 @@ Press `Ctrl+K` (or `Cmd+K` on macOS) to open the command palette. Use fuzzy sear
- Container shell access (exec into running containers)
- Terminal output for running commands

Files are automatically backed up before saving to `~/.config/compose-farm/backups/`.

### Console (`/console`)

- Full shell access to any host
@@ -116,7 +118,7 @@ The web UI requires additional dependencies:
pip install compose-farm[web]

# If installed via uv
uv tool install compose-farm --with web
uv tool install 'compose-farm[web]'
```

## Architecture

60
justfile
Normal file
@@ -0,0 +1,60 @@
# Compose Farm Development Commands
# Run `just` to see available commands

# Default: list available commands
default:
    @just --list

# Install development dependencies
install:
    uv sync --all-extras --dev

# Run all tests (parallel)
test:
    uv run pytest -n auto

# Run CLI tests only (parallel, with coverage)
test-cli:
    uv run pytest -m "not browser" -n auto

# Run web UI tests only (parallel)
test-web:
    uv run pytest -m browser -n auto

# Lint, format, and type check
lint:
    uv run ruff check --fix .
    uv run ruff format .
    uv run mypy src
    uv run ty check src

# Start web UI in development mode with auto-reload
web:
    uv run cf web --reload --port 9001

# Kill the web server
kill-web:
    lsof -ti :9001 | xargs kill -9 2>/dev/null || true

# Build docs and serve locally
doc:
    uvx zensical build
    python -m http.server -d site 9002

# Kill the docs server
kill-doc:
    lsof -ti :9002 | xargs kill -9 2>/dev/null || true

# Record CLI demos (all or specific: just record-cli quickstart)
record-cli *demos:
    python docs/demos/cli/record.py {{demos}}

# Record web UI demos (all or specific: just record-web navigation)
record-web *demos:
    python docs/demos/web/record.py {{demos}}

# Clean up build artifacts and caches
clean:
    rm -rf .pytest_cache .mypy_cache .ruff_cache .coverage htmlcov dist build
    find . -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
    find . -type d -name "*.egg-info" -exec rm -rf {} + 2>/dev/null || true
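A typical development loop with these recipes (names taken from the justfile above) looks like:

```bash
just install     # sync dev dependencies
just web         # web UI on :9001 with auto-reload
just test-web    # browser tests
just kill-web    # free port 9001 again
```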
@@ -23,6 +23,7 @@ app = typer.Typer(
|
||||
help="Compose Farm - run docker compose commands across multiple hosts",
|
||||
no_args_is_help=True,
|
||||
context_settings={"help_option_names": ["-h", "--help"]},
|
||||
rich_markup_mode="rich",
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -59,6 +59,10 @@ HostOption = Annotated[
|
||||
str | None,
|
||||
typer.Option("--host", "-H", help="Filter to stacks on this host"),
|
||||
]
|
||||
ServiceOption = Annotated[
|
||||
str | None,
|
||||
typer.Option("--service", "-s", help="Target a specific service within the stack"),
|
||||
]
|
||||
|
||||
# --- Constants (internal) ---
|
||||
_MISSING_PATH_PREVIEW_LIMIT = 2
|
||||
|
||||
@@ -2,15 +2,20 @@
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from typing import Annotated
|
||||
from pathlib import Path
|
||||
from typing import TYPE_CHECKING, Annotated
|
||||
|
||||
import typer
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from compose_farm.config import Config
|
||||
|
||||
from compose_farm.cli.app import app
|
||||
from compose_farm.cli.common import (
|
||||
AllOption,
|
||||
ConfigOption,
|
||||
HostOption,
|
||||
ServiceOption,
|
||||
StacksArg,
|
||||
format_host,
|
||||
get_stacks,
|
||||
@@ -18,10 +23,17 @@ from compose_farm.cli.common import (
|
||||
maybe_regenerate_traefik,
|
||||
report_results,
|
||||
run_async,
|
||||
validate_host_for_stack,
|
||||
validate_stacks,
|
||||
)
|
||||
from compose_farm.cli.management import _discover_stacks_full
|
||||
from compose_farm.console import MSG_DRY_RUN, console, print_error, print_success
|
||||
from compose_farm.executor import run_on_stacks, run_sequential_on_stacks
|
||||
from compose_farm.operations import stop_orphaned_stacks, up_stacks
|
||||
from compose_farm.executor import run_compose_on_host, run_on_stacks, run_sequential_on_stacks
|
||||
from compose_farm.operations import (
|
||||
stop_orphaned_stacks,
|
||||
stop_stray_stacks,
|
||||
up_stacks,
|
||||
)
|
||||
from compose_farm.state import (
|
||||
get_orphaned_stacks,
|
||||
get_stack_host,
|
||||
@@ -36,11 +48,19 @@ def up(
|
||||
stacks: StacksArg = None,
|
||||
all_stacks: AllOption = False,
|
||||
host: HostOption = None,
|
||||
service: ServiceOption = None,
|
||||
config: ConfigOption = None,
|
||||
) -> None:
|
||||
"""Start stacks (docker compose up -d). Auto-migrates if host changed."""
|
||||
stack_list, cfg = get_stacks(stacks or [], all_stacks, config, host=host)
|
||||
results = run_async(up_stacks(cfg, stack_list, raw=True))
|
||||
if service:
|
||||
if len(stack_list) != 1:
|
||||
print_error("--service requires exactly one stack")
|
||||
raise typer.Exit(1)
|
||||
# For service-level up, use run_on_stacks directly (no migration logic)
|
||||
results = run_async(run_on_stacks(cfg, stack_list, f"up -d {service}", raw=True))
|
||||
else:
|
||||
results = run_async(up_stacks(cfg, stack_list, raw=True))
|
||||
maybe_regenerate_traefik(cfg, results)
|
||||
report_results(results)
|
||||
|
||||
@@ -98,16 +118,39 @@ def down(
|
||||
report_results(results)
|
||||
|
||||
|
||||
@app.command(rich_help_panel="Lifecycle")
|
||||
def stop(
|
||||
stacks: StacksArg = None,
|
||||
all_stacks: AllOption = False,
|
||||
service: ServiceOption = None,
|
||||
config: ConfigOption = None,
|
||||
) -> None:
|
||||
"""Stop services without removing containers (docker compose stop)."""
|
||||
stack_list, cfg = get_stacks(stacks or [], all_stacks, config)
|
||||
if service and len(stack_list) != 1:
|
||||
print_error("--service requires exactly one stack")
|
||||
raise typer.Exit(1)
|
||||
cmd = f"stop {service}" if service else "stop"
|
||||
raw = len(stack_list) == 1
|
||||
results = run_async(run_on_stacks(cfg, stack_list, cmd, raw=raw))
|
||||
report_results(results)
|
||||
|
||||
|
||||
@app.command(rich_help_panel="Lifecycle")
|
||||
def pull(
|
||||
stacks: StacksArg = None,
|
||||
all_stacks: AllOption = False,
|
||||
service: ServiceOption = None,
|
||||
config: ConfigOption = None,
|
||||
) -> None:
|
||||
"""Pull latest images (docker compose pull)."""
|
||||
stack_list, cfg = get_stacks(stacks or [], all_stacks, config)
|
||||
if service and len(stack_list) != 1:
|
||||
print_error("--service requires exactly one stack")
|
||||
raise typer.Exit(1)
|
||||
cmd = f"pull --ignore-buildable {service}" if service else "pull --ignore-buildable"
|
||||
raw = len(stack_list) == 1
|
||||
results = run_async(run_on_stacks(cfg, stack_list, "pull", raw=raw))
|
||||
results = run_async(run_on_stacks(cfg, stack_list, cmd, raw=raw))
|
||||
report_results(results)
|
||||
|
||||
|
||||
@@ -115,12 +158,21 @@ def pull(
|
||||
def restart(
|
||||
stacks: StacksArg = None,
|
||||
all_stacks: AllOption = False,
|
||||
service: ServiceOption = None,
|
||||
config: ConfigOption = None,
|
||||
) -> None:
|
||||
"""Restart stacks (down + up)."""
|
||||
"""Restart stacks (down + up). With --service, restarts just that service."""
|
||||
stack_list, cfg = get_stacks(stacks or [], all_stacks, config)
|
||||
raw = len(stack_list) == 1
|
||||
results = run_async(run_sequential_on_stacks(cfg, stack_list, ["down", "up -d"], raw=raw))
|
||||
if service:
|
||||
if len(stack_list) != 1:
|
||||
print_error("--service requires exactly one stack")
|
||||
raise typer.Exit(1)
|
||||
# For service-level restart, use docker compose restart (more efficient)
|
||||
raw = True
|
||||
results = run_async(run_on_stacks(cfg, stack_list, f"restart {service}", raw=raw))
|
||||
else:
|
||||
raw = len(stack_list) == 1
|
||||
results = run_async(run_sequential_on_stacks(cfg, stack_list, ["down", "up -d"], raw=raw))
|
||||
maybe_regenerate_traefik(cfg, results)
|
||||
report_results(results)
|
||||
|
||||
@@ -129,22 +181,58 @@ def restart(
|
||||
def update(
|
||||
stacks: StacksArg = None,
|
||||
all_stacks: AllOption = False,
|
||||
service: ServiceOption = None,
|
||||
config: ConfigOption = None,
|
||||
) -> None:
|
||||
"""Update stacks (pull + build + down + up)."""
|
||||
"""Update stacks (pull + build + down + up). With --service, updates just that service."""
|
||||
stack_list, cfg = get_stacks(stacks or [], all_stacks, config)
|
||||
raw = len(stack_list) == 1
|
||||
results = run_async(
|
||||
run_sequential_on_stacks(
|
||||
cfg, stack_list, ["pull --ignore-buildable", "build", "down", "up -d"], raw=raw
|
||||
if service:
|
||||
if len(stack_list) != 1:
|
||||
print_error("--service requires exactly one stack")
|
||||
raise typer.Exit(1)
|
||||
# For service-level update: pull + build + stop + up (stop instead of down)
|
||||
raw = True
|
||||
results = run_async(
|
||||
run_sequential_on_stacks(
|
||||
cfg,
|
||||
stack_list,
|
||||
[
|
||||
f"pull --ignore-buildable {service}",
|
||||
f"build {service}",
|
||||
f"stop {service}",
|
||||
f"up -d {service}",
|
||||
],
|
||||
raw=raw,
|
||||
)
|
||||
)
|
||||
else:
|
||||
raw = len(stack_list) == 1
|
||||
results = run_async(
|
||||
run_sequential_on_stacks(
|
||||
cfg, stack_list, ["pull --ignore-buildable", "build", "down", "up -d"], raw=raw
|
||||
)
|
||||
)
|
||||
)
|
||||
maybe_regenerate_traefik(cfg, results)
|
||||
report_results(results)
|
||||
|
||||
|
||||
def _discover_strays(cfg: Config) -> dict[str, list[str]]:
|
||||
"""Discover stacks running on unauthorized hosts by scanning all hosts."""
|
||||
_, strays, duplicates = _discover_stacks_full(cfg)
|
||||
|
||||
# Merge duplicates into strays (for single-host stacks on multiple hosts,
|
||||
# keep correct host and stop others)
|
||||
for stack, running_hosts in duplicates.items():
|
||||
configured = cfg.get_hosts(stack)[0]
|
||||
stray_hosts = [h for h in running_hosts if h != configured]
|
||||
if stray_hosts:
|
||||
strays[stack] = stray_hosts
|
||||
|
||||
return strays
|
||||
|
||||
|
||||
@app.command(rich_help_panel="Lifecycle")
|
||||
def apply( # noqa: PLR0912 (multi-phase reconciliation needs these branches)
|
||||
def apply( # noqa: C901, PLR0912, PLR0915 (multi-phase reconciliation needs these branches)
|
||||
dry_run: Annotated[
|
||||
bool,
|
||||
typer.Option("--dry-run", "-n", help="Show what would change without executing"),
|
||||
@@ -153,23 +241,29 @@ def apply( # noqa: PLR0912 (multi-phase reconciliation needs these branches)
|
||||
bool,
|
||||
typer.Option("--no-orphans", help="Only migrate, don't stop orphaned stacks"),
|
||||
] = False,
|
||||
no_strays: Annotated[
|
||||
bool,
|
||||
typer.Option("--no-strays", help="Don't stop stray stacks (running on wrong host)"),
|
||||
] = False,
|
||||
full: Annotated[
|
||||
bool,
|
||||
typer.Option("--full", "-f", help="Also run up on all stacks to apply config changes"),
|
||||
] = False,
|
||||
config: ConfigOption = None,
|
||||
) -> None:
|
||||
"""Make reality match config (start, migrate, stop as needed).
|
||||
"""Make reality match config (start, migrate, stop strays/orphans as needed).
|
||||
|
||||
This is the "reconcile" command that ensures running stacks match your
|
||||
config file. It will:
|
||||
|
||||
1. Stop orphaned stacks (in state but removed from config)
|
||||
2. Migrate stacks on wrong host (host in state ≠ host in config)
|
||||
3. Start missing stacks (in config but not in state)
|
||||
2. Stop stray stacks (running on unauthorized hosts)
|
||||
3. Migrate stacks on wrong host (host in state ≠ host in config)
|
||||
4. Start missing stacks (in config but not in state)
|
||||
|
||||
Use --dry-run to preview changes before applying.
|
||||
Use --no-orphans to only migrate/start without stopping orphaned stacks.
|
||||
Use --no-orphans to skip stopping orphaned stacks.
|
||||
Use --no-strays to skip stopping stray stacks.
|
||||
Use --full to also run 'up' on all stacks (picks up compose/env changes).
|
||||
"""
|
||||
cfg = load_config_or_exit(config)
|
||||
@@ -177,16 +271,28 @@ def apply( # noqa: PLR0912 (multi-phase reconciliation needs these branches)
|
||||
migrations = get_stacks_needing_migration(cfg)
|
||||
missing = get_stacks_not_in_state(cfg)
|
||||
|
||||
strays: dict[str, list[str]] = {}
|
||||
if not no_strays:
|
||||
console.print("[dim]Scanning hosts for stray containers...[/]")
|
||||
strays = _discover_strays(cfg)
|
||||
|
||||
# For --full: refresh all stacks not already being started/migrated
|
||||
handled = set(migrations) | set(missing)
|
||||
to_refresh = [stack for stack in cfg.stacks if stack not in handled] if full else []
|
||||
|
||||
has_orphans = bool(orphaned) and not no_orphans
|
||||
has_strays = bool(strays)
|
||||
has_migrations = bool(migrations)
|
||||
has_missing = bool(missing)
|
||||
has_refresh = bool(to_refresh)
|
||||
|
||||
if not has_orphans and not has_migrations and not has_missing and not has_refresh:
|
||||
if (
|
||||
not has_orphans
|
||||
and not has_strays
|
||||
and not has_migrations
|
||||
and not has_missing
|
||||
and not has_refresh
|
||||
):
|
||||
print_success("Nothing to apply - reality matches config")
|
||||
return
|
||||
|
||||
@@ -195,6 +301,14 @@ def apply( # noqa: PLR0912 (multi-phase reconciliation needs these branches)
|
||||
console.print(f"[yellow]Orphaned stacks to stop ({len(orphaned)}):[/]")
|
||||
for svc, hosts in orphaned.items():
|
||||
console.print(f" [cyan]{svc}[/] on [magenta]{format_host(hosts)}[/]")
|
||||
if has_strays:
|
||||
console.print(f"[red]Stray stacks to stop ({len(strays)}):[/]")
|
||||
for stack, hosts in strays.items():
|
||||
configured = cfg.get_hosts(stack)
|
||||
console.print(
|
||||
f" [cyan]{stack}[/] on [magenta]{', '.join(hosts)}[/] "
|
||||
f"[dim](should be on {', '.join(configured)})[/]"
|
||||
)
|
||||
if has_migrations:
|
||||
console.print(f"[cyan]Stacks to migrate ({len(migrations)}):[/]")
|
||||
for stack in migrations:
|
||||
@@ -223,21 +337,26 @@ def apply( # noqa: PLR0912 (multi-phase reconciliation needs these branches)
|
||||
console.print("[yellow]Stopping orphaned stacks...[/]")
|
||||
all_results.extend(run_async(stop_orphaned_stacks(cfg)))
|
||||
|
||||
# 2. Migrate stacks on wrong host
|
||||
# 2. Stop stray stacks (running on unauthorized hosts)
|
||||
if has_strays:
|
||||
console.print("[red]Stopping stray stacks...[/]")
|
||||
all_results.extend(run_async(stop_stray_stacks(cfg, strays)))
|
||||
|
||||
# 3. Migrate stacks on wrong host
|
||||
if has_migrations:
|
||||
console.print("[cyan]Migrating stacks...[/]")
|
||||
migrate_results = run_async(up_stacks(cfg, migrations, raw=True))
|
||||
all_results.extend(migrate_results)
|
||||
maybe_regenerate_traefik(cfg, migrate_results)
|
||||
|
||||
# 3. Start missing stacks (reuse up_stacks which handles state updates)
|
||||
# 4. Start missing stacks (reuse up_stacks which handles state updates)
|
||||
if has_missing:
|
||||
console.print("[green]Starting missing stacks...[/]")
|
||||
start_results = run_async(up_stacks(cfg, missing, raw=True))
|
||||
all_results.extend(start_results)
|
||||
maybe_regenerate_traefik(cfg, start_results)
|
||||
|
||||
# 4. Refresh remaining stacks (--full: run up to apply config changes)
|
||||
# 5. Refresh remaining stacks (--full: run up to apply config changes)
|
||||
if has_refresh:
|
||||
console.print("[blue]Refreshing stacks...[/]")
|
||||
refresh_results = run_async(up_stacks(cfg, to_refresh, raw=True))
|
||||
@@ -247,5 +366,62 @@ def apply( # noqa: PLR0912 (multi-phase reconciliation needs these branches)
|
||||
report_results(all_results)
|
||||
|
||||
|
||||
@app.command(
|
||||
rich_help_panel="Lifecycle",
|
||||
context_settings={"allow_interspersed_args": False},
|
||||
)
|
||||
def compose(
|
||||
stack: Annotated[str, typer.Argument(help="Stack to operate on (use '.' for current dir)")],
|
||||
command: Annotated[str, typer.Argument(help="Docker compose command")],
|
||||
args: Annotated[list[str] | None, typer.Argument(help="Additional arguments")] = None,
|
||||
host: HostOption = None,
|
||||
config: ConfigOption = None,
|
||||
) -> None:
|
||||
"""Run any docker compose command on a stack.
|
||||
|
||||
Passthrough to docker compose for commands not wrapped by cf.
|
||||
Options after COMMAND are passed to docker compose, not cf.
|
||||
|
||||
Examples:
|
||||
cf compose mystack --help - show docker compose help
|
||||
cf compose mystack top - view running processes
|
||||
cf compose mystack images - list images
|
||||
cf compose mystack exec web bash - interactive shell
|
||||
cf compose mystack config - view parsed config
|
||||
|
||||
"""
|
||||
cfg = load_config_or_exit(config)
|
||||
|
||||
# Resolve "." to current directory name
|
||||
resolved_stack = Path.cwd().name if stack == "." else stack
|
||||
validate_stacks(cfg, [resolved_stack])
|
||||
|
||||
# Handle multi-host stacks
|
||||
hosts = cfg.get_hosts(resolved_stack)
|
||||
if len(hosts) > 1:
|
||||
if host is None:
|
||||
print_error(
|
||||
f"Stack [cyan]{resolved_stack}[/] runs on multiple hosts: {', '.join(hosts)}\n"
|
||||
f"Use [bold]--host[/] to specify which host"
|
||||
)
|
||||
raise typer.Exit(1)
|
||||
validate_host_for_stack(cfg, resolved_stack, host)
|
||||
target_host = host
|
||||
else:
|
||||
target_host = hosts[0]
|
||||
|
||||
# Build the full compose command
|
||||
full_cmd = command
|
||||
if args:
|
||||
full_cmd += " " + " ".join(args)
|
||||
|
||||
# Run with raw=True for proper TTY handling (progress bars, interactive)
|
||||
result = run_async(run_compose_on_host(cfg, resolved_stack, target_host, full_cmd, raw=True))
|
||||
print() # Ensure newline after raw output
|
||||
|
||||
if not result.success:
|
||||
raise typer.Exit(result.exit_code)
|
||||
|
||||
|
||||
# Alias: cf a = cf apply
|
||||
app.command("a", hidden=True)(apply)
|
||||
|
||||
@@ -50,9 +50,11 @@ from compose_farm.logs import (
|
||||
write_toml,
|
||||
)
|
||||
from compose_farm.operations import (
|
||||
StackDiscoveryResult,
|
||||
check_host_compatibility,
|
||||
check_stack_requirements,
|
||||
discover_stack_host,
|
||||
discover_stack_on_all_hosts,
|
||||
)
|
||||
from compose_farm.state import get_orphaned_stacks, load_state, save_state
|
||||
from compose_farm.traefik import generate_traefik_config, render_traefik_config
|
||||
@@ -147,6 +149,80 @@ def _report_sync_changes(
        console.print(f" [red]-[/] [cyan]{stack}[/] (was on [magenta]{host_str}[/])")


def _discover_stacks_full(
    cfg: Config,
    stacks: list[str] | None = None,
) -> tuple[dict[str, str | list[str]], dict[str, list[str]], dict[str, list[str]]]:
    """Discover running stacks with full host scanning for stray detection.

    Returns:
        Tuple of (discovered, strays, duplicates):
        - discovered: stack -> host(s) where running correctly
        - strays: stack -> list of unauthorized hosts
        - duplicates: stack -> list of all hosts (for single-host stacks on multiple)

    """
    stack_list = stacks if stacks is not None else list(cfg.stacks)
    results: list[StackDiscoveryResult] = run_parallel_with_progress(
        "Discovering",
        stack_list,
        lambda s: discover_stack_on_all_hosts(cfg, s),
    )

    discovered: dict[str, str | list[str]] = {}
    strays: dict[str, list[str]] = {}
    duplicates: dict[str, list[str]] = {}

    for result in results:
        correct_hosts = [h for h in result.running_hosts if h in result.configured_hosts]
        if correct_hosts:
            if result.is_multi_host:
                discovered[result.stack] = correct_hosts
            else:
                discovered[result.stack] = correct_hosts[0]

        if result.is_stray:
            strays[result.stack] = result.stray_hosts

        if result.is_duplicate:
            duplicates[result.stack] = result.running_hosts

    return discovered, strays, duplicates


def _report_stray_stacks(
    strays: dict[str, list[str]],
    cfg: Config,
) -> None:
    """Report stacks running on unauthorized hosts."""
    if strays:
        console.print(f"\n[red]Stray stacks[/] (running on wrong host, {len(strays)}):")
        console.print("[dim]Run [bold]cf apply[/bold] to stop them.[/]")
        for stack in sorted(strays):
            stray_hosts = strays[stack]
            configured = cfg.get_hosts(stack)
            console.print(
                f" [red]![/] [cyan]{stack}[/] on [magenta]{', '.join(stray_hosts)}[/] "
                f"[dim](should be on {', '.join(configured)})[/]"
            )


def _report_duplicate_stacks(duplicates: dict[str, list[str]], cfg: Config) -> None:
    """Report single-host stacks running on multiple hosts."""
    if duplicates:
        console.print(
            f"\n[yellow]Duplicate stacks[/] (running on multiple hosts, {len(duplicates)}):"
        )
        console.print("[dim]Run [bold]cf apply[/bold] to stop extras.[/]")
        for stack in sorted(duplicates):
            hosts = duplicates[stack]
            configured = cfg.get_hosts(stack)[0]
            console.print(
                f" [yellow]![/] [cyan]{stack}[/] on [magenta]{', '.join(hosts)}[/] "
                f"[dim](should only be on {configured})[/]"
            )


# --- Check helpers ---
@@ -440,7 +516,7 @@ def refresh(

    current_state = load_state(cfg)

    discovered = _discover_stacks(cfg, stack_list)
    discovered, strays, duplicates = _discover_stacks_full(cfg, stack_list)

    # Calculate changes (only for the stacks we're refreshing)
    added = [s for s in discovered if s not in current_state]
@@ -463,6 +539,9 @@ def refresh(
    else:
        print_success("State is already in sync.")

    _report_stray_stacks(strays, cfg)
    _report_duplicate_stacks(duplicates, cfg)

    if dry_run:
        console.print(f"\n{MSG_DRY_RUN}")
        return
@@ -14,6 +14,7 @@ from compose_farm.cli.common import (
    AllOption,
    ConfigOption,
    HostOption,
    ServiceOption,
    StacksArg,
    get_stacks,
    load_config_or_exit,
@@ -21,7 +22,7 @@ from compose_farm.cli.common import (
    run_async,
    run_parallel_with_progress,
)
from compose_farm.console import console
from compose_farm.console import console, print_error
from compose_farm.executor import run_command, run_on_stacks
from compose_farm.state import get_stacks_needing_migration, group_stacks_by_host, load_state

@@ -118,6 +119,7 @@ def logs(
    stacks: StacksArg = None,
    all_stacks: AllOption = False,
    host: HostOption = None,
    service: ServiceOption = None,
    follow: Annotated[bool, typer.Option("--follow", "-f", help="Follow logs")] = False,
    tail: Annotated[
        int | None,
@@ -125,8 +127,11 @@
    ] = None,
    config: ConfigOption = None,
) -> None:
    """Show stack logs."""
    """Show stack logs. With --service, shows logs for just that service."""
    stack_list, cfg = get_stacks(stacks or [], all_stacks, config, host=host)
    if service and len(stack_list) != 1:
        print_error("--service requires exactly one stack")
        raise typer.Exit(1)

    # Default to fewer lines when showing multiple stacks
    many_stacks = all_stacks or host is not None or len(stack_list) > 1
@@ -134,6 +139,8 @@ def logs(
    cmd = f"logs --tail {effective_tail}"
    if follow:
        cmd += " -f"
    if service:
        cmd += f" {service}"
    results = run_async(run_on_stacks(cfg, stack_list, cmd))
    report_results(results)

@@ -143,6 +150,7 @@ def ps(
    stacks: StacksArg = None,
    all_stacks: AllOption = False,
    host: HostOption = None,
    service: ServiceOption = None,
    config: ConfigOption = None,
) -> None:
    """Show status of stacks.
@@ -150,9 +158,14 @@ def ps(
    Without arguments: shows all stacks (same as --all).
    With stack names: shows only those stacks.
    With --host: shows stacks on that host.
    With --service: filters to a specific service within the stack.
    """
    stack_list, cfg = get_stacks(stacks or [], all_stacks, config, host=host, default_all=True)
    results = run_async(run_on_stacks(cfg, stack_list, "ps"))
    if service and len(stack_list) != 1:
        print_error("--service requires exactly one stack")
        raise typer.Exit(1)
    cmd = f"ps {service}" if service else "ps"
    results = run_async(run_on_stacks(cfg, stack_list, cmd))
    report_results(results)
@@ -336,3 +336,18 @@ def get_ports_for_service(
    if isinstance(ref_def, dict):
        return _parse_ports(ref_def.get("ports"), env)
    return _parse_ports(definition.get("ports"), env)


def get_container_name(
    service_name: str,
    service_def: dict[str, Any] | None,
    project_name: str,
) -> str:
    """Get the container name for a service.

    Uses container_name from compose if set, otherwise defaults to {project}-{service}-1.
    This matches Docker Compose's default naming convention.
    """
    if isinstance(service_def, dict) and service_def.get("container_name"):
        return str(service_def["container_name"])
    return f"{project_name}-{service_name}-1"
@@ -4,6 +4,7 @@ from __future__ import annotations

import getpass
from pathlib import Path
from typing import Any

import yaml
from pydantic import BaseModel, Field, model_validator
@@ -14,7 +15,7 @@ from .paths import config_search_paths, find_config_path
COMPOSE_FILENAMES = ("compose.yaml", "compose.yml", "docker-compose.yml", "docker-compose.yaml")


class Host(BaseModel):
class Host(BaseModel, extra="forbid"):
    """SSH host configuration."""

    address: str
@@ -22,7 +23,7 @@ class Host(BaseModel):
    port: int = 22


class Config(BaseModel):
class Config(BaseModel, extra="forbid"):
    """Main configuration."""

    compose_dir: Path = Path("/opt/compose")
@@ -113,7 +114,7 @@ class Config(BaseModel):
    return found


def _parse_hosts(raw_hosts: dict[str, str | dict[str, str | int]]) -> dict[str, Host]:
def _parse_hosts(raw_hosts: dict[str, Any]) -> dict[str, Host]:
    """Parse hosts from config, handling both simple and full forms."""
    hosts = {}
    for name, value in raw_hosts.items():
@@ -122,11 +123,7 @@ def _parse_hosts(raw_hosts: dict[str, str | dict[str, str | int]]) -> dict[str,
            hosts[name] = Host(address=value)
        else:
            # Full form: hostname: {address: ..., user: ..., port: ...}
            hosts[name] = Host(
                address=str(value.get("address", "")),
                user=str(value["user"]) if "user" in value else getpass.getuser(),
                port=int(value["port"]) if "port" in value else 22,
            )
            hosts[name] = Host(**value)
    return hosts
@@ -101,6 +101,58 @@ async def discover_stack_host(cfg: Config, stack: str) -> tuple[str, str | list[
    return stack, None


class StackDiscoveryResult(NamedTuple):
    """Result of discovering where a stack is running across all hosts."""

    stack: str
    configured_hosts: list[str]  # From config (where it SHOULD run)
    running_hosts: list[str]  # From reality (where it IS running)

    @property
    def is_multi_host(self) -> bool:
        """Check if this is a multi-host stack."""
        return len(self.configured_hosts) > 1

    @property
    def stray_hosts(self) -> list[str]:
        """Hosts where stack is running but shouldn't be."""
        return [h for h in self.running_hosts if h not in self.configured_hosts]

    @property
    def missing_hosts(self) -> list[str]:
        """Hosts where stack should be running but isn't."""
        return [h for h in self.configured_hosts if h not in self.running_hosts]

    @property
    def is_stray(self) -> bool:
        """Stack is running on unauthorized host(s)."""
        return len(self.stray_hosts) > 0

    @property
    def is_duplicate(self) -> bool:
        """Single-host stack running on multiple hosts."""
        return not self.is_multi_host and len(self.running_hosts) > 1


async def discover_stack_on_all_hosts(cfg: Config, stack: str) -> StackDiscoveryResult:
    """Discover where a stack is running across ALL hosts.

    Unlike discover_stack_host(), this checks every host in parallel
    to detect strays and duplicates.
    """
    configured_hosts = cfg.get_hosts(stack)
    all_hosts = list(cfg.hosts.keys())

    checks = await asyncio.gather(*[check_stack_running(cfg, stack, h) for h in all_hosts])
    running_hosts = [h for h, is_running in zip(all_hosts, checks, strict=True) if is_running]

    return StackDiscoveryResult(
        stack=stack,
        configured_hosts=configured_hosts,
        running_hosts=running_hosts,
    )
async def check_stack_requirements(
|
||||
cfg: Config,
|
||||
stack: str,
|
||||
@@ -359,26 +411,33 @@ async def check_host_compatibility(
|
||||
return results
|
||||
|
||||
|
||||
async def stop_orphaned_stacks(cfg: Config) -> list[CommandResult]:
|
||||
"""Stop orphaned stacks (in state but not in config).
|
||||
async def _stop_stacks_on_hosts(
|
||||
cfg: Config,
|
||||
stacks_to_hosts: dict[str, list[str]],
|
||||
label: str = "",
|
||||
) -> list[CommandResult]:
|
||||
"""Stop stacks on specific hosts.
|
||||
|
||||
Runs docker compose down on each stack on its tracked host(s).
|
||||
Only removes from state on successful stop.
|
||||
Shared helper for stop_orphaned_stacks and stop_stray_stacks.
|
||||
|
||||
Args:
|
||||
cfg: Config object.
|
||||
stacks_to_hosts: Dict mapping stack name to list of hosts to stop on.
|
||||
label: Optional label for success message (e.g., "stray", "orphaned").
|
||||
|
||||
Returns:
|
||||
List of CommandResults for each stack@host.
|
||||
|
||||
Returns list of CommandResults for each stack@host.
|
||||
"""
|
||||
orphaned = get_orphaned_stacks(cfg)
|
||||
if not orphaned:
|
||||
if not stacks_to_hosts:
|
||||
return []
|
||||
|
||||
results: list[CommandResult] = []
|
||||
tasks: list[tuple[str, str, asyncio.Task[CommandResult]]] = []
|
||||
suffix = f" ({label})" if label else ""
|
||||
|
||||
# Build list of (stack, host, task) for all orphaned stacks
|
||||
for stack, hosts in orphaned.items():
|
||||
host_list = hosts if isinstance(hosts, list) else [hosts]
|
||||
for host in host_list:
|
||||
# Skip hosts no longer in config
|
||||
for stack, hosts in stacks_to_hosts.items():
|
||||
for host in hosts:
|
||||
if host not in cfg.hosts:
|
||||
print_warning(f"{stack}@{host}: host no longer in config, skipping")
|
||||
results.append(
|
||||
@@ -393,30 +452,48 @@ async def stop_orphaned_stacks(cfg: Config) -> list[CommandResult]:
|
||||
coro = run_compose_on_host(cfg, stack, host, "down")
|
||||
tasks.append((stack, host, asyncio.create_task(coro)))
|
||||
|
||||
# Run all down commands in parallel
|
||||
if tasks:
|
||||
for stack, host, task in tasks:
|
||||
try:
|
||||
result = await task
|
||||
results.append(result)
|
||||
if result.success:
|
||||
print_success(f"{stack}@{host}: stopped")
|
||||
else:
|
||||
print_error(f"{stack}@{host}: {result.stderr or 'failed'}")
|
||||
except Exception as e:
|
||||
print_error(f"{stack}@{host}: {e}")
|
||||
results.append(
|
||||
CommandResult(
|
||||
stack=f"{stack}@{host}",
|
||||
exit_code=1,
|
||||
success=False,
|
||||
stderr=str(e),
|
||||
)
|
||||
for stack, host, task in tasks:
|
||||
try:
|
||||
result = await task
|
||||
results.append(result)
|
||||
if result.success:
|
||||
print_success(f"{stack}@{host}: stopped{suffix}")
|
||||
else:
|
||||
print_error(f"{stack}@{host}: {result.stderr or 'failed'}")
|
||||
except Exception as e:
|
||||
print_error(f"{stack}@{host}: {e}")
|
||||
results.append(
|
||||
CommandResult(
|
||||
stack=f"{stack}@{host}",
|
||||
exit_code=1,
|
||||
success=False,
|
||||
stderr=str(e),
|
||||
)
|
||||
)
|
||||
|
||||
return results
|
||||
|
||||
|
||||
async def stop_orphaned_stacks(cfg: Config) -> list[CommandResult]:
|
||||
"""Stop orphaned stacks (in state but not in config).
|
||||
|
||||
Runs docker compose down on each stack on its tracked host(s).
|
||||
Only removes from state on successful stop.
|
||||
|
||||
Returns list of CommandResults for each stack@host.
|
||||
"""
|
||||
orphaned = get_orphaned_stacks(cfg)
|
||||
if not orphaned:
|
||||
return []
|
||||
|
||||
normalized: dict[str, list[str]] = {
|
||||
stack: (hosts if isinstance(hosts, list) else [hosts]) for stack, hosts in orphaned.items()
|
||||
}
|
||||
|
||||
results = await _stop_stacks_on_hosts(cfg, normalized)
|
||||
|
||||
# Remove from state only for stacks where ALL hosts succeeded
|
||||
for stack, hosts in orphaned.items():
|
||||
host_list = hosts if isinstance(hosts, list) else [hosts]
|
||||
for stack in normalized:
|
||||
all_succeeded = all(
|
||||
r.success for r in results if r.stack.startswith(f"{stack}@") or r.stack == stack
|
||||
)
|
||||
@@ -424,3 +501,20 @@ async def stop_orphaned_stacks(cfg: Config) -> list[CommandResult]:
|
||||
remove_stack(cfg, stack)
|
||||
|
||||
return results
|
||||
|
||||
|
||||
async def stop_stray_stacks(
|
||||
cfg: Config,
|
||||
strays: dict[str, list[str]],
|
||||
) -> list[CommandResult]:
|
||||
"""Stop stacks running on unauthorized hosts.
|
||||
|
||||
Args:
|
||||
cfg: Config object.
|
||||
strays: Dict mapping stack name to list of stray hosts.
|
||||
|
||||
Returns:
|
||||
List of CommandResults for each stack@host stopped.
|
||||
|
||||
"""
|
||||
return await _stop_stacks_on_hosts(cfg, strays, label="stray")
|
||||
|
||||
@@ -11,9 +11,19 @@ def xdg_config_home() -> Path:
    return Path(os.environ.get("XDG_CONFIG_HOME", Path.home() / ".config"))


def config_dir() -> Path:
    """Get the compose-farm config directory."""
    return xdg_config_home() / "compose-farm"


def default_config_path() -> Path:
    """Get the default user config path."""
    return xdg_config_home() / "compose-farm" / "compose-farm.yaml"
    return config_dir() / "compose-farm.yaml"


def backup_dir() -> Path:
    """Get the backup directory for file edits."""
    return config_dir() / "backups"


def config_search_paths() -> list[Path]:
@@ -8,6 +8,7 @@ use host-published ports for cross-host reachability.

from __future__ import annotations

import re
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any

@@ -383,3 +384,53 @@ def render_traefik_config(dynamic: dict[str, Any]) -> str:
    """Render Traefik dynamic config as YAML with a header comment."""
    body = yaml.safe_dump(dynamic, sort_keys=False)
    return _TRAEFIK_CONFIG_HEADER + body


_HOST_RULE_PATTERN = re.compile(r"Host\(`([^`]+)`\)")


def extract_website_urls(config: Config, stack: str) -> list[str]:
    """Extract website URLs from Traefik labels in a stack's compose file.

    Reuses generate_traefik_config to parse labels, then extracts Host() rules
    from router configurations.

    Returns a list of unique URLs, preferring HTTPS over HTTP.
    """
    try:
        dynamic, _ = generate_traefik_config(config, [stack], check_all=True)
    except FileNotFoundError:
        return []

    routers = dynamic.get("http", {}).get("routers", {})
    if not routers:
        return []

    # Track URLs with their scheme preference (https > http)
    urls: dict[str, str] = {}  # host -> scheme

    for router_info in routers.values():
        if not isinstance(router_info, dict):
            continue

        rule = router_info.get("rule", "")
        entrypoints = router_info.get("entrypoints", [])

        # entrypoints can be a list or string
        if isinstance(entrypoints, list):
            entrypoints_str = ",".join(entrypoints)
        else:
            entrypoints_str = str(entrypoints)

        # Determine scheme from entrypoint
        scheme = "https" if "websecure" in entrypoints_str else "http"

        # Extract host(s) from rule
        for match in _HOST_RULE_PATTERN.finditer(str(rule)):
            host = match.group(1)
            # Prefer https over http
            if host not in urls or scheme == "https":
                urls[host] = scheme

    # Build URL list, sorted for consistency
    return sorted(f"{scheme}://{host}" for host, scheme in urls.items())
@@ -3,6 +3,7 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import logging
|
||||
import sys
|
||||
from contextlib import asynccontextmanager, suppress
|
||||
from typing import TYPE_CHECKING
|
||||
@@ -10,11 +11,22 @@ from typing import TYPE_CHECKING
|
||||
from fastapi import FastAPI
|
||||
from fastapi.staticfiles import StaticFiles
|
||||
from pydantic import ValidationError
|
||||
from rich.logging import RichHandler
|
||||
|
||||
from compose_farm.web.deps import STATIC_DIR, get_config
|
||||
from compose_farm.web.routes import actions, api, pages
|
||||
from compose_farm.web.streaming import TASK_TTL_SECONDS, cleanup_stale_tasks
|
||||
|
||||
# Configure logging with Rich handler for compose_farm.web modules
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format="%(message)s",
|
||||
datefmt="[%X]",
|
||||
handlers=[RichHandler(rich_tracebacks=True, show_path=False)],
|
||||
)
|
||||
# Set our web modules to INFO level (uvicorn handles its own logging)
|
||||
logging.getLogger("compose_farm.web").setLevel(logging.INFO)
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from collections.abc import AsyncGenerator
|
||||
|
||||
|
||||
@@ -33,12 +33,15 @@ def _start_task(coro_factory: Callable[[str], Coroutine[Any, Any, None]]) -> str
|
||||
|
||||
|
||||
# Allowed stack commands
|
||||
ALLOWED_COMMANDS = {"up", "down", "restart", "pull", "update", "logs"}
|
||||
ALLOWED_COMMANDS = {"up", "down", "restart", "pull", "update", "logs", "stop"}
|
||||
|
||||
# Allowed service-level commands (no 'down' - use 'stop' for individual services)
|
||||
ALLOWED_SERVICE_COMMANDS = {"logs", "pull", "restart", "up", "stop"}
|
||||
|
||||
|
||||
@router.post("/stack/{name}/{command}")
|
||||
async def stack_action(name: str, command: str) -> dict[str, Any]:
|
||||
"""Run a compose command for a stack (up, down, restart, pull, update, logs)."""
|
||||
"""Run a compose command for a stack (up, down, restart, pull, update, logs, stop)."""
|
||||
if command not in ALLOWED_COMMANDS:
|
||||
raise HTTPException(status_code=404, detail=f"Unknown command '{command}'")
|
||||
|
||||
@@ -50,6 +53,23 @@ async def stack_action(name: str, command: str) -> dict[str, Any]:
|
||||
return {"task_id": task_id, "stack": name, "command": command}
|
||||
|
||||
|
||||
@router.post("/stack/{name}/service/{service}/{command}")
|
||||
async def service_action(name: str, service: str, command: str) -> dict[str, Any]:
|
||||
"""Run a compose command for a specific service within a stack."""
|
||||
if command not in ALLOWED_SERVICE_COMMANDS:
|
||||
raise HTTPException(status_code=404, detail=f"Unknown command '{command}'")
|
||||
|
||||
config = get_config()
|
||||
if name not in config.stacks:
|
||||
raise HTTPException(status_code=404, detail=f"Stack '{name}' not found")
|
||||
|
||||
# Use --service flag to target specific service
|
||||
task_id = _start_task(
|
||||
lambda tid: run_compose_streaming(config, name, f"{command} --service {service}", tid)
|
||||
)
|
||||
return {"task_id": task_id, "stack": name, "service": service, "command": command}
|
||||
|
||||
|
||||
@router.post("/apply")
|
||||
async def apply_all() -> dict[str, Any]:
|
||||
"""Run cf apply to reconcile all stacks."""
|
||||
@@ -64,3 +84,19 @@ async def refresh_state() -> dict[str, Any]:
|
||||
config = get_config()
|
||||
task_id = _start_task(lambda tid: run_cli_streaming(config, ["refresh"], tid))
|
||||
return {"task_id": task_id, "command": "refresh"}
|
||||
|
||||
|
||||
@router.post("/pull-all")
|
||||
async def pull_all() -> dict[str, Any]:
|
||||
"""Pull latest images for all stacks."""
|
||||
config = get_config()
|
||||
task_id = _start_task(lambda tid: run_cli_streaming(config, ["pull", "--all"], tid))
|
||||
return {"task_id": task_id, "command": "pull --all"}
|
||||
|
||||
|
||||
@router.post("/update-all")
|
||||
async def update_all() -> dict[str, Any]:
|
||||
"""Update all stacks (pull + build + down + up)."""
|
||||
config = get_config()
|
||||
task_id = _start_task(lambda tid: run_cli_streaming(config, ["update", "--all"], tid))
|
||||
return {"task_id": task_id, "command": "update --all"}
|
||||
|
||||
@@ -5,6 +5,7 @@ from __future__ import annotations
|
||||
import asyncio
|
||||
import contextlib
|
||||
import json
|
||||
import logging
|
||||
import shlex
|
||||
from datetime import UTC, datetime
|
||||
from pathlib import Path
|
||||
@@ -18,11 +19,14 @@ import yaml
|
||||
from fastapi import APIRouter, Body, HTTPException, Query
|
||||
from fastapi.responses import HTMLResponse
|
||||
|
||||
from compose_farm.compose import get_container_name
|
||||
from compose_farm.executor import is_local, run_compose_on_host, ssh_connect_kwargs
|
||||
from compose_farm.paths import find_config_path
|
||||
from compose_farm.paths import backup_dir, find_config_path
|
||||
from compose_farm.state import load_state
|
||||
from compose_farm.web.deps import get_config, get_templates
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
router = APIRouter(tags=["api"])
|
||||
|
||||
|
||||
@@ -37,26 +41,30 @@ def _validate_yaml(content: str) -> None:
|
||||
def _backup_file(file_path: Path) -> Path | None:
|
||||
"""Create a timestamped backup of a file if it exists and content differs.
|
||||
|
||||
Backups are stored in a .backups directory alongside the file.
|
||||
Backups are stored in XDG config dir under compose-farm/backups/.
|
||||
The original file's absolute path is mirrored in the backup directory.
|
||||
Returns the backup path if created, None if no backup was needed.
|
||||
"""
|
||||
if not file_path.exists():
|
||||
return None
|
||||
|
||||
# Create backup directory
|
||||
backup_dir = file_path.parent / ".backups"
|
||||
backup_dir.mkdir(exist_ok=True)
|
||||
# Create backup directory mirroring original path structure
|
||||
# e.g., /opt/stacks/plex/compose.yaml -> ~/.config/compose-farm/backups/opt/stacks/plex/
|
||||
# On Windows: C:\Users\foo\stacks -> backups/Users/foo/stacks
|
||||
resolved = file_path.resolve()
|
||||
file_backup_dir = backup_dir() / resolved.parent.relative_to(resolved.anchor)
|
||||
file_backup_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Generate timestamped backup filename
|
||||
timestamp = datetime.now(tz=UTC).strftime("%Y%m%d_%H%M%S")
|
||||
backup_name = f"{file_path.name}.{timestamp}"
|
||||
backup_path = backup_dir / backup_name
|
||||
backup_path = file_backup_dir / backup_name
|
||||
|
||||
# Copy current content to backup
|
||||
backup_path.write_text(file_path.read_text())
|
||||
|
||||
# Clean up old backups (keep last 200)
|
||||
backups = sorted(backup_dir.glob(f"{file_path.name}.*"), reverse=True)
|
||||
backups = sorted(file_backup_dir.glob(f"{file_path.name}.*"), reverse=True)
|
||||
for old_backup in backups[200:]:
|
||||
old_backup.unlink()
|
||||
|
||||
@@ -113,14 +121,9 @@ def _get_compose_services(config: Any, stack: str, hosts: list[str]) -> list[dic
|
||||
containers = []
|
||||
for host in hosts:
|
||||
for svc_name, svc_def in raw_services.items():
|
||||
# Use container_name if set, otherwise default to {project}-{service}-1
|
||||
if isinstance(svc_def, dict) and svc_def.get("container_name"):
|
||||
container_name = svc_def["container_name"]
|
||||
else:
|
||||
container_name = f"{project_name}-{svc_name}-1"
|
||||
containers.append(
|
||||
{
|
||||
"Name": container_name,
|
||||
"Name": get_container_name(svc_name, svc_def, project_name),
|
||||
"Service": svc_name,
|
||||
"Host": host,
|
||||
"State": "unknown", # Status requires Docker query
|
||||
@@ -144,6 +147,12 @@ async def _get_container_states(
|
||||
config, stack, host_name, "ps -a --format json", stream=False
|
||||
)
|
||||
if not result.success:
|
||||
logger.warning(
|
||||
"Failed to get container states for %s on %s: %s",
|
||||
stack,
|
||||
host_name,
|
||||
result.stderr or result.stdout,
|
||||
)
|
||||
return containers
|
||||
|
||||
# Build state map: name -> (state, exit_code)
|
||||
@@ -350,6 +359,7 @@ async def read_console_file(
|
||||
except PermissionError:
|
||||
raise HTTPException(status_code=403, detail=f"Permission denied: {path}") from None
|
||||
except Exception as e:
|
||||
logger.exception("Failed to read file %s from host %s", path, host)
|
||||
raise HTTPException(status_code=500, detail=str(e)) from e
|
||||
|
||||
|
||||
@@ -373,4 +383,5 @@ async def write_console_file(
|
||||
except PermissionError:
|
||||
raise HTTPException(status_code=403, detail=f"Permission denied: {path}") from None
|
||||
except Exception as e:
|
||||
logger.exception("Failed to write file %s to host %s", path, host)
|
||||
raise HTTPException(status_code=500, detail=str(e)) from e
|
||||
|
||||
@@ -7,6 +7,7 @@ from fastapi import APIRouter, Request
|
||||
from fastapi.responses import HTMLResponse
|
||||
from pydantic import ValidationError
|
||||
|
||||
from compose_farm.compose import get_container_name
|
||||
from compose_farm.paths import find_config_path
|
||||
from compose_farm.state import (
|
||||
get_orphaned_stacks,
|
||||
@@ -16,6 +17,7 @@ from compose_farm.state import (
|
||||
group_running_stacks_by_host,
|
||||
load_state,
|
||||
)
|
||||
from compose_farm.traefik import extract_website_urls
|
||||
from compose_farm.web.deps import (
|
||||
extract_config_error,
|
||||
get_config,
|
||||
@@ -89,8 +91,8 @@ async def index(request: Request) -> HTMLResponse:
|
||||
# Get state
|
||||
deployed = load_state(config)
|
||||
|
||||
# Stats
|
||||
running_count = len(deployed)
|
||||
# Stats (only count stacks that are both in config AND deployed)
|
||||
running_count = sum(1 for stack in deployed if stack in config.stacks)
|
||||
stopped_count = len(config.stacks) - running_count
|
||||
|
||||
# Pending operations
|
||||
@@ -159,6 +161,29 @@ async def stack_detail(request: Request, name: str) -> HTMLResponse:
|
||||
# Get state
|
||||
current_host = get_stack_host(config, name)
|
||||
|
||||
# Get service names and container info from compose file
|
||||
services: list[str] = []
|
||||
containers: dict[str, dict[str, str]] = {}
|
||||
shell_host = current_host[0] if isinstance(current_host, list) else current_host
|
||||
if compose_content:
|
||||
compose_data = yaml.safe_load(compose_content) or {}
|
||||
raw_services = compose_data.get("services", {})
|
||||
if isinstance(raw_services, dict):
|
||||
services = list(raw_services.keys())
|
||||
# Build container info for shell access (only if stack is running)
|
||||
if shell_host:
|
||||
project_name = compose_path.parent.name if compose_path else name
|
||||
containers = {
|
||||
svc: {
|
||||
"container": get_container_name(svc, svc_def, project_name),
|
||||
"host": shell_host,
|
||||
}
|
||||
for svc, svc_def in raw_services.items()
|
||||
}
|
||||
|
||||
# Extract website URLs from Traefik labels
|
||||
website_urls = extract_website_urls(config, name)
|
||||
|
||||
return templates.TemplateResponse(
|
||||
"stack.html",
|
||||
{
|
||||
@@ -170,6 +195,9 @@ async def stack_detail(request: Request, name: str) -> HTMLResponse:
|
||||
"compose_path": str(compose_path) if compose_path else None,
|
||||
"env_content": env_content,
|
||||
"env_path": str(env_path) if env_path else None,
|
||||
"services": services,
|
||||
"containers": containers,
|
||||
"website_urls": website_urls,
|
||||
},
|
||||
)
|
||||
|
||||
@@ -222,7 +250,8 @@ async def stats_partial(request: Request) -> HTMLResponse:
|
||||
templates = get_templates()
|
||||
|
||||
deployed = load_state(config)
|
||||
running_count = len(deployed)
|
||||
# Only count stacks that are both in config AND deployed
|
||||
running_count = sum(1 for stack in deployed if stack in config.stacks)
|
||||
stopped_count = len(config.stacks) - running_count
|
||||
|
||||
return templates.TemplateResponse(
|
||||
|
||||
@@ -1,3 +1,9 @@
|
||||
/* Tooltips - ensure they appear above sidebar and other elements */
|
||||
.tooltip::before,
|
||||
.tooltip::after {
|
||||
z-index: 1000;
|
||||
}
|
||||
|
||||
/* Sidebar inputs - remove focus outline (DaisyUI 5 uses outline + outline-offset) */
|
||||
#sidebar .input:focus,
|
||||
#sidebar .input:focus-within,
|
||||
|
||||
@@ -57,6 +57,10 @@ const LANGUAGE_MAP = {
|
||||
'env': 'plaintext'
|
||||
};
|
||||
|
||||
// Detect Mac for keyboard shortcut display
|
||||
const IS_MAC = navigator.platform.toUpperCase().indexOf('MAC') >= 0;
|
||||
const MOD_KEY = IS_MAC ? '⌘' : 'Ctrl';
|
||||
|
||||
// ============================================================================
|
||||
// STATE
|
||||
// ============================================================================
|
||||
@@ -219,7 +223,9 @@ function initExecTerminal(stack, container, host) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Unhide the terminal container first, then expand/scroll
|
||||
containerEl.classList.remove('hidden');
|
||||
expandCollapse(document.getElementById('exec-collapse'), containerEl);
|
||||
|
||||
// Clean up existing (use wrapper's dispose to clean up ResizeObserver)
|
||||
if (execWs) { execWs.close(); execWs = null; }
|
||||
@@ -255,17 +261,42 @@ function initExecTerminal(stack, container, host) {
|
||||
|
||||
window.initExecTerminal = initExecTerminal;
|
||||
|
||||
/**
|
||||
* Expand a collapse component and scroll to a target element
|
||||
* @param {HTMLInputElement} toggle - The checkbox input that controls the collapse
|
||||
* @param {HTMLElement} [scrollTarget] - Element to scroll to (defaults to collapse container)
|
||||
*/
|
||||
function expandCollapse(toggle, scrollTarget = null) {
|
||||
if (!toggle) return;
|
||||
|
||||
// Find the parent collapse container
|
||||
const collapse = toggle.closest('.collapse');
|
||||
if (!collapse) return;
|
||||
|
||||
const target = scrollTarget || collapse;
|
||||
const scrollToTarget = () => {
|
||||
target.scrollIntoView({ behavior: 'smooth', block: 'start' });
|
||||
};
|
||||
|
||||
if (!toggle.checked) {
|
||||
// Collapsed - expand first, then scroll after transition
|
||||
const onTransitionEnd = () => {
|
||||
collapse.removeEventListener('transitionend', onTransitionEnd);
|
||||
scrollToTarget();
|
||||
};
|
||||
collapse.addEventListener('transitionend', onTransitionEnd);
|
||||
toggle.checked = true;
|
||||
} else {
|
||||
// Already expanded - just scroll
|
||||
scrollToTarget();
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Expand terminal collapse and scroll to it
|
||||
*/
|
||||
function expandTerminal() {
|
||||
const toggle = document.getElementById('terminal-toggle');
|
||||
if (toggle) toggle.checked = true;
|
||||
|
||||
const collapse = document.getElementById('terminal-collapse');
|
||||
if (collapse) {
|
||||
collapse.scrollIntoView({ behavior: 'smooth', block: 'start' });
|
||||
}
|
||||
expandCollapse(document.getElementById('terminal-toggle'));
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -512,16 +543,23 @@ function playFabIntro() {
|
||||
const THEMES = ['light', 'dark', 'cupcake', 'bumblebee', 'emerald', 'corporate', 'synthwave', 'retro', 'cyberpunk', 'valentine', 'halloween', 'garden', 'forest', 'aqua', 'lofi', 'pastel', 'fantasy', 'wireframe', 'black', 'luxury', 'dracula', 'cmyk', 'autumn', 'business', 'acid', 'lemonade', 'night', 'coffee', 'winter', 'dim', 'nord', 'sunset', 'caramellatte', 'abyss', 'silk'];
|
||||
const THEME_KEY = 'cf_theme';
|
||||
|
||||
const colors = { stack: '#22c55e', action: '#eab308', nav: '#3b82f6', app: '#a855f7', theme: '#ec4899' };
|
||||
const colors = { stack: '#22c55e', action: '#eab308', nav: '#3b82f6', app: '#a855f7', theme: '#ec4899', service: '#14b8a6' };
|
||||
let commands = [];
|
||||
let filtered = [];
|
||||
let selected = 0;
|
||||
let originalTheme = null; // Store theme when palette opens for preview/restore
|
||||
|
||||
const post = (url) => () => htmx.ajax('POST', url, {swap: 'none'});
|
||||
const nav = (url) => () => {
|
||||
const nav = (url, afterNav) => () => {
|
||||
// Set hash before HTMX swap so inline scripts can read it
|
||||
const hashIndex = url.indexOf('#');
|
||||
if (hashIndex !== -1) {
|
||||
window.location.hash = url.substring(hashIndex);
|
||||
}
|
||||
htmx.ajax('GET', url, {target: '#main-content', select: '#main-content', swap: 'outerHTML'}).then(() => {
|
||||
history.pushState({}, '', url);
|
||||
window.scrollTo(0, 0);
|
||||
afterNav?.();
|
||||
});
|
||||
};
|
||||
// Navigate to dashboard (if needed) and trigger action
|
||||
@@ -529,6 +567,7 @@ function playFabIntro() {
|
||||
if (window.location.pathname !== '/') {
|
||||
await htmx.ajax('GET', '/', {target: '#main-content', select: '#main-content', swap: 'outerHTML'});
|
||||
history.pushState({}, '', '/');
|
||||
window.scrollTo(0, 0);
|
||||
}
|
||||
htmx.ajax('POST', `/api/${endpoint}`, {swap: 'none'});
|
||||
};
|
||||
@@ -564,10 +603,14 @@ function playFabIntro() {
|
||||
const actions = [
|
||||
cmd('action', 'Apply', 'Make reality match config', dashboardAction('apply'), icons.check),
|
||||
cmd('action', 'Refresh', 'Update state from reality', dashboardAction('refresh'), icons.refresh_cw),
|
||||
cmd('action', 'Pull All', 'Pull latest images for all stacks', dashboardAction('pull-all'), icons.cloud_download),
|
||||
cmd('action', 'Update All', 'Update all stacks', dashboardAction('update-all'), icons.refresh_cw),
|
||||
cmd('app', 'Theme', 'Change color theme', openThemePicker, icons.palette),
|
||||
cmd('app', 'Dashboard', 'Go to dashboard', nav('/'), icons.home),
|
||||
cmd('app', 'Console', 'Go to console', nav('/console'), icons.terminal),
|
||||
cmd('app', 'Edit Config', 'Edit compose-farm.yaml', nav('/console#editor'), icons.file_code),
|
||||
cmd('app', 'Docs', 'Open documentation', openExternal('https://compose-farm.nijho.lt/'), icons.book_open),
|
||||
cmd('app', 'Repo', 'Open GitHub repository', openExternal('https://github.com/basnijholt/compose-farm'), icons.external_link),
|
||||
];
|
||||
|
||||
// Add stack-specific actions if on a stack page
|
||||
@@ -583,6 +626,50 @@ function playFabIntro() {
|
||||
stackCmd('Update', 'Pull + restart', 'update', icons.refresh_cw),
|
||||
stackCmd('Logs', 'View logs for', 'logs', icons.file_text),
|
||||
);
|
||||
|
||||
// Add Open Website commands if website URLs are available
|
||||
const websiteUrlsAttr = document.querySelector('[data-website-urls]')?.getAttribute('data-website-urls');
|
||||
if (websiteUrlsAttr) {
|
||||
const websiteUrls = JSON.parse(websiteUrlsAttr);
|
||||
for (const url of websiteUrls) {
|
||||
const displayUrl = url.replace(/^https?:\/\//, '');
|
||||
const label = websiteUrls.length > 1 ? `Open: ${displayUrl}` : 'Open Website';
|
||||
actions.unshift(cmd('stack', label, `Open ${displayUrl} in browser`, openExternal(url), icons.external_link));
|
||||
}
|
||||
}
|
||||
|
||||
// Add service-specific commands from data-services and data-containers attributes
|
||||
// Grouped by action (all Logs together, all Pull together, etc.) with services sorted alphabetically
|
||||
const servicesAttr = document.querySelector('[data-services]')?.getAttribute('data-services');
|
||||
const containersAttr = document.querySelector('[data-containers]')?.getAttribute('data-containers');
|
||||
if (servicesAttr) {
|
||||
const services = servicesAttr.split(',').filter(s => s).sort();
|
||||
// Parse container info for shell access: {service: {container, host}}
|
||||
const containers = containersAttr ? JSON.parse(containersAttr) : {};
|
||||
|
||||
const svcCmd = (action, service, desc, endpoint, icon) =>
|
||||
cmd('service', `${action}: ${service}`, desc, post(`/api/stack/${stack}/service/${service}/${endpoint}`), icon);
|
||||
const svcActions = [
|
||||
['Logs', 'View logs for service', 'logs', icons.file_text],
|
||||
['Pull', 'Pull image for service', 'pull', icons.cloud_download],
|
||||
['Restart', 'Restart service', 'restart', icons.rotate_cw],
|
||||
['Stop', 'Stop service', 'stop', icons.square],
|
||||
['Up', 'Start service', 'up', icons.play],
|
||||
];
|
||||
for (const [action, desc, endpoint, icon] of svcActions) {
|
||||
for (const service of services) {
|
||||
actions.push(svcCmd(action, service, desc, endpoint, icon));
|
||||
}
|
||||
}
|
||||
// Add Shell commands if container info is available
|
||||
for (const service of services) {
|
||||
const info = containers[service];
|
||||
if (info?.container && info?.host) {
|
||||
actions.push(cmd('service', `Shell: ${service}`, 'Open interactive shell',
|
||||
() => initExecTerminal(stack, info.container, info.host), icons.terminal));
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Add nav commands for all stacks from sidebar
|
||||
@@ -601,8 +688,21 @@ function playFabIntro() {
|
||||
}
|
||||
|
||||
function filter() {
|
||||
const q = input.value.toLowerCase();
|
||||
filtered = commands.filter(c => c.name.toLowerCase().includes(q));
|
||||
// Fuzzy matching: all query words must match the START of a word in the command name
|
||||
// Examples: "r ba" matches "Restart: bazarr" but NOT "Logs: bazarr"
|
||||
const q = input.value.toLowerCase().trim();
|
||||
// Split query into words and strip non-alphanumeric chars
|
||||
const queryWords = q.split(/[^a-z0-9]+/).filter(w => w);
|
||||
|
||||
filtered = commands.filter(c => {
|
||||
const name = c.name.toLowerCase();
|
||||
// Split command name into words (split on non-alphanumeric)
|
||||
const nameWords = name.split(/[^a-z0-9]+/).filter(w => w);
|
||||
// Each query word must match the start of some word in the command name
|
||||
return queryWords.every(qw =>
|
||||
nameWords.some(nw => nw.startsWith(qw))
|
||||
);
|
||||
});
|
||||
selected = Math.max(0, Math.min(selected, filtered.length - 1));
|
||||
}
|
||||
|
||||
@@ -634,7 +734,7 @@ function playFabIntro() {
|
||||
input.value = initialFilter;
|
||||
filter();
|
||||
// If opening theme picker, select current theme
|
||||
if (initialFilter === 'theme:') {
|
||||
if (initialFilter.startsWith('theme:')) {
|
||||
const currentIdx = filtered.findIndex(c => c.themeId === originalTheme);
|
||||
if (currentIdx >= 0) selected = currentIdx;
|
||||
}
|
||||
@@ -749,19 +849,38 @@ function initKeyboardShortcuts() {
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Update keyboard shortcut display based on OS
|
||||
* Replaces ⌘ with Ctrl on non-Mac platforms
|
||||
*/
|
||||
function updateShortcutKeys() {
|
||||
// Update elements with class 'shortcut-key' that contain ⌘
|
||||
document.querySelectorAll('.shortcut-key').forEach(el => {
|
||||
if (el.textContent === '⌘') {
|
||||
el.textContent = MOD_KEY;
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize page components
|
||||
*/
|
||||
function initPage() {
|
||||
initMonacoEditors();
|
||||
initSaveButton();
|
||||
updateShortcutKeys();
|
||||
}
|
||||
|
||||
/**
|
||||
* Attempt to reconnect to an active task from localStorage
|
||||
* @param {string} [path] - Optional path to use for task key lookup.
|
||||
* If not provided, uses current window.location.pathname.
|
||||
* This is important for HTMX navigation where pushState
|
||||
* hasn't happened yet when htmx:afterSwap fires.
|
||||
*/
|
||||
function tryReconnectToTask() {
|
||||
const taskId = localStorage.getItem(getTaskKey());
|
||||
function tryReconnectToTask(path) {
|
||||
const taskKey = TASK_KEY_PREFIX + (path || window.location.pathname);
|
||||
const taskId = localStorage.getItem(taskKey);
|
||||
if (!taskId) return;
|
||||
|
||||
whenXtermReady(() => {
|
||||
@@ -784,8 +903,12 @@ document.addEventListener('DOMContentLoaded', function() {
|
||||
document.body.addEventListener('htmx:afterSwap', function(evt) {
|
||||
if (evt.detail.target.id === 'main-content') {
|
||||
initPage();
|
||||
// Try to reconnect when navigating back to dashboard
|
||||
tryReconnectToTask();
|
||||
// Try to reconnect to task for the TARGET page, not current URL.
|
||||
// When using command palette navigation (htmx.ajax + manual pushState),
|
||||
// window.location.pathname still reflects the OLD page at this point.
|
||||
// Use pathInfo.requestPath to get the correct target path.
|
||||
const targetPath = evt.detail.pathInfo?.requestPath?.split('?')[0] || window.location.pathname;
|
||||
tryReconnectToTask(targetPath);
|
||||
}
|
||||
});
|
||||
|
||||
|
||||
@@ -39,7 +39,7 @@
|
||||
<span class="font-semibold rainbow-hover">Compose Farm</span>
|
||||
</header>
|
||||
|
||||
<main id="main-content" class="flex-1 p-6 overflow-y-auto">
|
||||
<main id="main-content" class="flex-1 p-6">
|
||||
{% block content %}{% endblock %}
|
||||
</main>
|
||||
</div>
|
||||
@@ -51,15 +51,21 @@
|
||||
<header class="p-4 border-b border-base-300">
|
||||
<h2 class="text-lg font-semibold flex items-center gap-2">
|
||||
<span class="rainbow-hover">Compose Farm</span>
|
||||
<a href="https://compose-farm.nijho.lt/" target="_blank" title="Docs" class="opacity-50 hover:opacity-100 transition-opacity">
|
||||
{{ book_open() }}
|
||||
</a>
|
||||
<a href="https://github.com/basnijholt/compose-farm" target="_blank" title="GitHub" class="opacity-50 hover:opacity-100 transition-opacity">
|
||||
{{ github() }}
|
||||
</a>
|
||||
<button type="button" id="theme-btn" class="opacity-50 hover:opacity-100 transition-opacity cursor-pointer" title="Change theme (opens command palette)">
|
||||
{{ palette() }}
|
||||
</button>
|
||||
<div class="tooltip tooltip-bottom" data-tip="Docs">
|
||||
<a href="https://compose-farm.nijho.lt/" target="_blank" class="opacity-50 hover:opacity-100 transition-opacity">
|
||||
{{ book_open() }}
|
||||
</a>
|
||||
</div>
|
||||
<div class="tooltip tooltip-bottom" data-tip="GitHub">
|
||||
<a href="https://github.com/basnijholt/compose-farm" target="_blank" class="opacity-50 hover:opacity-100 transition-opacity">
|
||||
{{ github() }}
|
||||
</a>
|
||||
</div>
|
||||
<div class="tooltip tooltip-bottom" data-tip="Change theme">
|
||||
<button type="button" id="theme-btn" class="opacity-50 hover:opacity-100 transition-opacity cursor-pointer">
|
||||
{{ palette() }}
|
||||
</button>
|
||||
</div>
|
||||
</h2>
|
||||
</header>
|
||||
<nav class="flex-1 overflow-y-auto p-2" hx-get="/partials/sidebar" hx-trigger="load, cf:refresh from:body" hx-swap="innerHTML">
|
||||
|
||||
@@ -15,7 +15,7 @@
|
||||
<option value="{{ name }}">{{ name }}{% if name == local_host %} (local){% endif %}</option>
|
||||
{% endfor %}
|
||||
</select>
|
||||
<button id="console-connect-btn" class="btn btn-sm btn-primary" onclick="connectConsole()">Connect</button>
|
||||
<div class="tooltip" data-tip="Connect to host via SSH"><button id="console-connect-btn" class="btn btn-sm btn-primary" onclick="connectConsole()">Connect</button></div>
|
||||
<span id="console-status" class="text-sm opacity-60"></span>
|
||||
</div>
|
||||
|
||||
@@ -29,11 +29,11 @@
|
||||
<div class="flex items-center justify-between mb-2">
|
||||
<div class="flex items-center gap-4">
|
||||
<input type="text" id="console-file-path" class="input input-sm input-bordered w-96" placeholder="Enter file path (e.g., ~/docker-compose.yaml)" value="{{ config_path }}">
|
||||
<button class="btn btn-sm btn-outline" onclick="loadFile()">Open</button>
|
||||
<div class="tooltip" data-tip="Load file from host"><button class="btn btn-sm btn-outline" onclick="loadFile()">Open</button></div>
|
||||
</div>
|
||||
<div class="flex items-center gap-2">
|
||||
<span id="editor-status" class="text-sm opacity-60"></span>
|
||||
<button id="console-save-btn" class="btn btn-sm btn-primary" onclick="saveFile()">{{ save() }} Save</button>
|
||||
<div class="tooltip" data-tip="Save file to host (⌘/Ctrl+S)"><button id="console-save-btn" class="btn btn-sm btn-primary" onclick="saveFile()">{{ save() }} Save</button></div>
|
||||
</div>
|
||||
</div>
|
||||
<div id="console-editor" class="resize-y overflow-hidden rounded-lg" style="height: 512px; min-height: 200px;"></div>
|
||||
@@ -97,7 +97,10 @@ function connectConsole() {
|
||||
consoleWs.onopen = () => {
|
||||
statusEl.textContent = `Connected to ${host}`;
|
||||
sendSize(term.cols, term.rows);
|
||||
term.focus();
|
||||
// Focus terminal unless #editor hash is present (command palette Edit Config)
|
||||
if (window.location.hash !== '#editor') {
|
||||
term.focus();
|
||||
}
|
||||
// Auto-load the default file once editor is ready
|
||||
const pathInput = document.getElementById('console-file-path');
|
||||
if (pathInput && pathInput.value) {
|
||||
@@ -133,6 +136,14 @@ function initConsoleEditor() {
|
||||
|
||||
loadMonaco(() => {
|
||||
consoleEditor = createEditor(editorEl, '', 'plaintext', { onSave: saveFile });
|
||||
// Focus editor if #editor hash is present (command palette Edit Config)
|
||||
if (window.location.hash === '#editor') {
|
||||
// Small delay for Monaco to fully initialize before focusing
|
||||
setTimeout(() => {
|
||||
consoleEditor.focus();
|
||||
editorEl.scrollIntoView({ behavior: 'smooth', block: 'center' });
|
||||
}, 100);
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
{% extends "base.html" %}
|
||||
{% from "partials/components.html" import page_header, collapse, stat_card, table, action_btn %}
|
||||
{% from "partials/icons.html" import check, refresh_cw, save, settings, server, database %}
|
||||
{% from "partials/icons.html" import check, refresh_cw, save, settings, server, database, cloud_download, rotate_cw %}
|
||||
{% block title %}Dashboard - Compose Farm{% endblock %}
|
||||
|
||||
{% block content %}
|
||||
@@ -17,7 +17,9 @@
|
||||
<div class="flex flex-wrap gap-2 mb-6">
|
||||
{{ action_btn("Apply", "/api/apply", "primary", "Make reality match config", check()) }}
|
||||
{{ action_btn("Refresh", "/api/refresh", "outline", "Update state from reality", refresh_cw()) }}
|
||||
<button id="save-config-btn" class="btn btn-outline">{{ save() }} Save Config</button>
|
||||
{{ action_btn("Pull All", "/api/pull-all", "outline", "Pull latest images for all stacks", cloud_download()) }}
|
||||
{{ action_btn("Update All", "/api/update-all", "outline", "Update all stacks (pull + build + down + up)", rotate_cw()) }}
|
||||
<div class="tooltip" data-tip="Save compose-farm.yaml config file"><button id="save-config-btn" class="btn btn-outline">{{ save() }} Save Config</button></div>
|
||||
</div>
|
||||
|
||||
{% include "partials/terminal.html" %}
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
{% from "partials/icons.html" import search, play, square, rotate_cw, cloud_download, refresh_cw, file_text, check, home, terminal, box, palette, book_open %}
|
||||
{% from "partials/icons.html" import search, play, square, rotate_cw, cloud_download, refresh_cw, file_text, file_code, check, home, terminal, box, palette, book_open, external_link %}
|
||||
|
||||
<!-- Icons for command palette (referenced by JS) -->
|
||||
<template id="cmd-icons">
|
||||
@@ -14,6 +14,8 @@
|
||||
<span data-icon="box">{{ box() }}</span>
|
||||
<span data-icon="palette">{{ palette() }}</span>
|
||||
<span data-icon="book_open">{{ book_open() }}</span>
|
||||
<span data-icon="file_code">{{ file_code() }}</span>
|
||||
<span data-icon="external_link">{{ external_link() }}</span>
|
||||
</template>
|
||||
<dialog id="cmd-palette" class="modal">
|
||||
<div class="modal-box max-w-lg p-0">
|
||||
@@ -30,8 +32,8 @@
|
||||
</dialog>
|
||||
|
||||
<!-- Floating button to open command palette -->
|
||||
<button id="cmd-fab" class="fixed bottom-6 right-6 z-50" title="Command Palette (⌘K)">
|
||||
<button id="cmd-fab" class="fixed bottom-6 right-6 z-50" title="Command Palette (⌘/Ctrl+K)">
|
||||
<div class="cmd-fab-inner">
|
||||
<span>⌘ + K</span>
|
||||
<span class="shortcut-key">⌘</span><span class="shortcut-plus"> + </span><span class="shortcut-key">K</span>
|
||||
</div>
|
||||
</button>
|
||||
|
||||
@@ -25,12 +25,13 @@
|
||||
|
||||
{# Action button with htmx #}
|
||||
{% macro action_btn(label, url, style="outline", title=None, icon=None) %}
|
||||
{% if title %}<div class="tooltip" data-tip="{{ title }}">{% endif %}
|
||||
<button hx-post="{{ url }}"
|
||||
hx-swap="none"
|
||||
class="btn btn-{{ style }}"
|
||||
{% if title %}title="{{ title }}"{% endif %}>
|
||||
class="btn btn-{{ style }}">
|
||||
{% if icon %}{{ icon }}{% endif %}{{ label }}
|
||||
</button>
|
||||
{% if title %}</div>{% endif %}
|
||||
{% endmacro %}
|
||||
|
||||
{# Stat card for dashboard #}
|
||||
|
||||
@@ -1,27 +1,68 @@
|
||||
{# Container list for a stack on a single host #}
|
||||
{% from "partials/icons.html" import terminal %}
|
||||
{% from "partials/icons.html" import terminal, rotate_ccw, scroll_text, square, play, cloud_download %}
|
||||
{% macro container_row(stack, container, host) %}
|
||||
<div class="flex items-center gap-2 mb-2">
|
||||
{% if container.State == "running" %}
|
||||
<span class="badge badge-success">running</span>
|
||||
{% elif container.State == "unknown" %}
|
||||
<span class="badge badge-ghost"><span class="loading loading-spinner loading-xs"></span></span>
|
||||
{% elif container.State == "exited" %}
|
||||
{% if container.ExitCode == 0 %}
|
||||
<span class="badge badge-neutral">exited (0)</span>
|
||||
<div class="flex flex-col sm:flex-row sm:items-center gap-2 mb-3 p-2 bg-base-200 rounded-lg">
|
||||
<div class="flex items-center gap-2 min-w-0">
|
||||
{% if container.State == "running" %}
|
||||
<span class="badge badge-success">running</span>
|
||||
{% elif container.State == "unknown" %}
|
||||
<span class="badge badge-ghost"><span class="loading loading-spinner loading-xs"></span></span>
|
||||
{% elif container.State == "exited" %}
|
||||
{% if container.ExitCode == 0 %}
|
||||
<span class="badge badge-neutral">exited (0)</span>
|
||||
{% else %}
|
||||
<span class="badge badge-error">exited ({{ container.ExitCode }})</span>
|
||||
{% endif %}
|
||||
{% elif container.State == "created" %}
|
||||
<span class="badge badge-neutral">created</span>
|
||||
{% else %}
|
||||
<span class="badge badge-error">exited ({{ container.ExitCode }})</span>
|
||||
<span class="badge badge-warning">{{ container.State }}</span>
|
||||
{% endif %}
|
||||
{% elif container.State == "created" %}
|
||||
<span class="badge badge-neutral">created</span>
|
||||
{% else %}
|
||||
<span class="badge badge-warning">{{ container.State }}</span>
|
||||
{% endif %}
|
||||
<code class="text-sm flex-1">{{ container.Name }}</code>
|
||||
<button class="btn btn-sm btn-outline"
|
||||
onclick="initExecTerminal('{{ stack }}', '{{ container.Name }}', '{{ host }}')">
|
||||
{{ terminal() }} Shell
|
||||
</button>
|
||||
<code class="text-sm truncate">{{ container.Name }}</code>
|
||||
</div>
|
||||
<div class="join sm:ml-auto shrink-0">
|
||||
<div class="tooltip tooltip-top" data-tip="View logs">
|
||||
<button class="btn btn-sm btn-outline join-item"
|
||||
hx-post="/api/stack/{{ stack }}/service/{{ container.Service }}/logs"
|
||||
hx-swap="none">
|
||||
{{ scroll_text() }}
|
||||
</button>
|
||||
</div>
|
||||
<div class="tooltip tooltip-top" data-tip="Restart service">
|
||||
<button class="btn btn-sm btn-outline join-item"
|
||||
hx-post="/api/stack/{{ stack }}/service/{{ container.Service }}/restart"
|
||||
hx-swap="none">
|
||||
{{ rotate_ccw() }}
|
||||
</button>
|
||||
</div>
|
||||
<div class="tooltip tooltip-top" data-tip="Pull image">
|
||||
<button class="btn btn-sm btn-outline join-item"
|
||||
hx-post="/api/stack/{{ stack }}/service/{{ container.Service }}/pull"
|
||||
hx-swap="none">
|
||||
{{ cloud_download() }}
|
||||
</button>
|
||||
</div>
|
||||
<div class="tooltip tooltip-top" data-tip="Start service">
|
||||
<button class="btn btn-sm btn-outline join-item"
|
||||
hx-post="/api/stack/{{ stack }}/service/{{ container.Service }}/up"
|
||||
hx-swap="none">
|
||||
{{ play() }}
|
||||
</button>
|
||||
</div>
|
||||
<div class="tooltip tooltip-top" data-tip="Stop service">
|
||||
<button class="btn btn-sm btn-outline join-item"
|
||||
hx-post="/api/stack/{{ stack }}/service/{{ container.Service }}/stop"
|
||||
hx-swap="none">
|
||||
{{ square() }}
|
||||
</button>
|
||||
</div>
|
||||
<div class="tooltip tooltip-top" data-tip="Open shell">
|
||||
<button class="btn btn-sm btn-outline join-item"
|
||||
onclick="initExecTerminal('{{ stack }}', '{{ container.Name }}', '{{ host }}')">
|
||||
{{ terminal() }}
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
{% endmacro %}
|
||||
|
||||
|
||||
@@ -43,6 +43,18 @@
|
||||
</svg>
|
||||
{% endmacro %}
|
||||
|
||||
{% macro rotate_ccw(size=16) %}
|
||||
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
|
||||
<path d="M3 12a9 9 0 1 0 9-9 9.75 9.75 0 0 0-6.74 2.74L3 8"/><path d="M3 3v5h5"/>
|
||||
</svg>
|
||||
{% endmacro %}
|
||||
|
||||
{% macro scroll_text(size=16) %}
|
||||
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
|
||||
<path d="M15 12h-5"/><path d="M15 8h-5"/><path d="M19 17V5a2 2 0 0 0-2-2H4"/><path d="M8 21h12a2 2 0 0 0 2-2v-1a1 1 0 0 0-1-1H11a1 1 0 0 0-1 1v1a2 2 0 1 1-4 0V5a2 2 0 1 0-4 0v2a1 1 0 0 0 1 1h3"/>
|
||||
</svg>
|
||||
{% endmacro %}
|
||||
|
||||
{% macro download(size=16) %}
|
||||
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
|
||||
<path d="M21 15v4a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2v-4"/><polyline points="7 10 12 15 17 10"/><line x1="12" x2="12" y1="15" y2="3"/>
|
||||
@@ -158,3 +170,9 @@
|
||||
<circle cx="13.5" cy="6.5" r="0.5" fill="currentColor"/><circle cx="17.5" cy="10.5" r="0.5" fill="currentColor"/><circle cx="8.5" cy="7.5" r="0.5" fill="currentColor"/><circle cx="6.5" cy="12.5" r="0.5" fill="currentColor"/><path d="M12 2C6.5 2 2 6.5 2 12s4.5 10 10 10c.926 0 1.648-.746 1.648-1.688 0-.437-.18-.835-.437-1.125-.29-.289-.438-.652-.438-1.125a1.64 1.64 0 0 1 1.668-1.668h1.996c3.051 0 5.555-2.503 5.555-5.555C21.965 6.012 17.461 2 12 2z"/>
|
||||
</svg>
|
||||
{% endmacro %}
|
||||
|
||||
{% macro external_link(size=16) %}
|
||||
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
|
||||
<path d="M15 3h6v6"/><path d="M10 14 21 3"/><path d="M18 13v6a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V8a2 2 0 0 1 2-2h6"/>
|
||||
</svg>
|
||||
{% endmacro %}
|
||||
|
||||
@@ -1,10 +1,10 @@
|
||||
{% extends "base.html" %}
|
||||
{% from "partials/components.html" import collapse, action_btn %}
|
||||
{% from "partials/icons.html" import play, square, rotate_cw, download, cloud_download, file_text, save, file_code, terminal, settings %}
|
||||
{% from "partials/icons.html" import play, square, rotate_cw, download, cloud_download, file_text, save, file_code, terminal, settings, external_link %}
|
||||
{% block title %}{{ name }} - Compose Farm{% endblock %}
|
||||
|
||||
{% block content %}
|
||||
<div class="max-w-5xl">
|
||||
<div class="max-w-5xl" data-services="{{ services | join(',') }}" data-containers='{{ containers | tojson }}' data-website-urls='{{ website_urls | tojson }}'>
|
||||
<div class="mb-6">
|
||||
<h1 class="text-3xl font-bold rainbow-hover">{{ name }}</h1>
|
||||
<div class="flex flex-wrap items-center gap-2 mt-2">
|
||||
@@ -30,7 +30,20 @@
|
||||
<!-- Other -->
|
||||
{{ action_btn("Pull", "/api/stack/" ~ name ~ "/pull", "outline", "Pull latest images (no restart)", cloud_download()) }}
|
||||
{{ action_btn("Logs", "/api/stack/" ~ name ~ "/logs", "outline", "Show recent logs", file_text()) }}
|
||||
<button id="save-btn" class="btn btn-outline">{{ save() }} Save All</button>
|
||||
<div class="tooltip" data-tip="Save compose and .env files"><button id="save-btn" class="btn btn-outline">{{ save() }} Save All</button></div>
|
||||
|
||||
{% if website_urls %}
|
||||
<div class="divider divider-horizontal mx-0"></div>
|
||||
|
||||
<!-- Open Website -->
|
||||
{% for url in website_urls %}
|
||||
<div class="tooltip" data-tip="Open {{ url }}">
|
||||
<a href="{{ url }}" target="_blank" rel="noopener noreferrer" class="btn btn-outline">
|
||||
{{ external_link() }} {% if website_urls | length > 1 %}{{ url | replace('https://', '') | replace('http://', '') }}{% else %}Open Website{% endif %}
|
||||
</a>
|
||||
</div>
|
||||
{% endfor %}
|
||||
{% endif %}
|
||||
</div>
|
||||
|
||||
{% call collapse("Compose File", badge=compose_path, icon=file_code()) %}
|
||||
|
||||
@@ -6,6 +6,7 @@ import asyncio
import contextlib
import fcntl
import json
import logging
import os
import pty
import shlex
@@ -21,6 +22,8 @@ from compose_farm.executor import is_local, ssh_connect_kwargs
from compose_farm.web.deps import get_config
from compose_farm.web.streaming import CRLF, DIM, GREEN, RED, RESET, tasks

logger = logging.getLogger(__name__)

# Shell command to prefer bash over sh
SHELL_FALLBACK = "command -v bash >/dev/null && exec bash || exec sh"

@@ -34,7 +37,7 @@ def _parse_resize(msg: str) -> tuple[int, int] | None:
"""Parse a resize message, return (cols, rows) or None if not a resize."""
try:
data = json.loads(msg)
if data.get("type") == "resize":
if isinstance(data, dict) and data.get("type") == "resize":
return int(data["cols"]), int(data["rows"])
except (json.JSONDecodeError, KeyError, TypeError, ValueError):
pass
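For reference, a resize message of the kind `_parse_resize` accepts would look like the sketch below; the payload shape is inferred from the parsing code above, and the exact frontend message is an assumption.

```python
# Assumed payload shape, inferred from _parse_resize above.
import json

msg = json.dumps({"type": "resize", "cols": 120, "rows": 30})
# _parse_resize(msg) would return (120, 30); any non-resize or malformed message returns None.
```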
@@ -214,6 +217,7 @@ async def exec_websocket(
except WebSocketDisconnect:
pass
except Exception as e:
logger.exception("WebSocket exec error for %s on %s", container, host)
with contextlib.suppress(Exception):
await websocket.send_text(f"{RED}Error: {e}{RESET}{CRLF}")
finally:
@@ -232,8 +236,8 @@ async def _run_shell_session(
await websocket.send_text(f"{RED}Host '{host_name}' not found{RESET}{CRLF}")
return

# Start interactive shell in home directory (avoid login shell to prevent job control warnings)
shell_cmd = "cd ~ && exec bash -i 2>/dev/null || exec sh -i"
# Start interactive shell in home directory
shell_cmd = "cd ~ && exec bash -i || exec sh -i"

if is_local(host):
# Local: use argv list with shell -c to interpret the command
@@ -258,6 +262,7 @@ async def shell_websocket(
except WebSocketDisconnect:
pass
except Exception as e:
logger.exception("WebSocket shell error for host %s", host)
with contextlib.suppress(Exception):
await websocket.send_text(f"{RED}Error: {e}{RESET}{CRLF}")
finally:

@@ -58,8 +58,9 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.get_orphaned_stacks", return_value={}),
patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
):
apply(dry_run=False, no_orphans=False, full=False, config=None)
apply(dry_run=False, no_orphans=False, no_strays=False, full=False, config=None)

captured = capsys.readouterr()
assert "Nothing to apply" in captured.out
@@ -82,10 +83,11 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stack_host", return_value="host1"),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch("compose_farm.cli.lifecycle.stop_orphaned_stacks") as mock_stop,
patch("compose_farm.cli.lifecycle.up_stacks") as mock_up,
):
apply(dry_run=True, no_orphans=False, full=False, config=None)
apply(dry_run=True, no_orphans=False, no_strays=False, full=False, config=None)

captured = capsys.readouterr()
assert "Stacks to migrate" in captured.out
@@ -112,6 +114,7 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stack_host", return_value="host1"),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -120,7 +123,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=False, config=None)
apply(dry_run=False, no_orphans=False, no_strays=False, full=False, config=None)

mock_up.assert_called_once()
call_args = mock_up.call_args
@@ -139,6 +142,7 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -146,7 +150,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.stop_orphaned_stacks") as mock_stop,
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=False, config=None)
apply(dry_run=False, no_orphans=False, no_strays=False, full=False, config=None)

mock_stop.assert_called_once_with(cfg)

@@ -169,6 +173,7 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stack_host", return_value="host1"),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -178,7 +183,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=True, full=False, config=None)
apply(dry_run=False, no_orphans=True, no_strays=False, full=False, config=None)

# Should run migrations but not orphan cleanup
mock_up.assert_called_once()
@@ -202,8 +207,9 @@ class TestApplyCommand:
),
patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
):
apply(dry_run=False, no_orphans=True, full=False, config=None)
apply(dry_run=False, no_orphans=True, no_strays=False, full=False, config=None)

captured = capsys.readouterr()
assert "Nothing to apply" in captured.out
@@ -221,6 +227,7 @@ class TestApplyCommand:
"compose_farm.cli.lifecycle.get_stacks_not_in_state",
return_value=["svc1"],
),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -229,7 +236,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=False, config=None)
apply(dry_run=False, no_orphans=False, no_strays=False, full=False, config=None)

mock_up.assert_called_once()
call_args = mock_up.call_args
@@ -249,8 +256,9 @@ class TestApplyCommand:
"compose_farm.cli.lifecycle.get_stacks_not_in_state",
return_value=["svc1"],
),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
):
apply(dry_run=True, no_orphans=False, full=False, config=None)
apply(dry_run=True, no_orphans=False, no_strays=False, full=False, config=None)

captured = capsys.readouterr()
assert "Stacks to start" in captured.out
@@ -267,6 +275,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.get_orphaned_stacks", return_value={}),
patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -275,7 +284,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=True, config=None)
apply(dry_run=False, no_orphans=False, no_strays=False, full=True, config=None)

mock_up.assert_called_once()
call_args = mock_up.call_args
@@ -293,8 +302,9 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.get_orphaned_stacks", return_value={}),
patch("compose_farm.cli.lifecycle.get_stacks_needing_migration", return_value=[]),
patch("compose_farm.cli.lifecycle.get_stacks_not_in_state", return_value=[]),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
):
apply(dry_run=True, no_orphans=False, full=True, config=None)
apply(dry_run=True, no_orphans=False, no_strays=False, full=True, config=None)

captured = capsys.readouterr()
assert "Stacks to refresh" in captured.out
@@ -319,6 +329,7 @@ class TestApplyCommand:
return_value=["svc2"],
),
patch("compose_farm.cli.lifecycle.get_stack_host", return_value="host2"),
patch("compose_farm.cli.lifecycle._discover_strays", return_value={}),
patch(
"compose_farm.cli.lifecycle.run_async",
return_value=mock_results,
@@ -327,7 +338,7 @@ class TestApplyCommand:
patch("compose_farm.cli.lifecycle.maybe_regenerate_traefik"),
patch("compose_farm.cli.lifecycle.report_results"),
):
apply(dry_run=False, no_orphans=False, full=True, config=None)
apply(dry_run=False, no_orphans=False, no_strays=False, full=True, config=None)

# up_stacks should be called 3 times: migrate, start, refresh
assert mock_up.call_count == 3

@@ -2,11 +2,14 @@

from __future__ import annotations

import os
import shutil
import subprocess
import sys
import time

import pytest

# Thresholds in seconds, per OS
if sys.platform == "win32":
CLI_STARTUP_THRESHOLD = 2.0
@@ -16,6 +19,10 @@ else: # Linux
CLI_STARTUP_THRESHOLD = 0.25


@pytest.mark.skipif(
"PYTEST_XDIST_WORKER" in os.environ,
reason="Skip in parallel mode due to resource contention",
)
def test_cli_startup_time() -> None:
"""Verify CLI startup time stays within acceptable bounds.

60
tests/test_compose.py
Normal file
@@ -0,0 +1,60 @@
"""Tests for compose file parsing utilities."""

from __future__ import annotations

import pytest

from compose_farm.compose import get_container_name


class TestGetContainerName:
"""Test get_container_name helper function."""

def test_explicit_container_name(self) -> None:
"""Uses container_name from service definition when set."""
service_def = {"image": "nginx", "container_name": "my-custom-name"}
result = get_container_name("web", service_def, "myproject")
assert result == "my-custom-name"

def test_default_naming_pattern(self) -> None:
"""Falls back to {project}-{service}-1 pattern."""
service_def = {"image": "nginx"}
result = get_container_name("web", service_def, "myproject")
assert result == "myproject-web-1"

def test_none_service_def(self) -> None:
"""Handles None service definition gracefully."""
result = get_container_name("web", None, "myproject")
assert result == "myproject-web-1"

def test_empty_service_def(self) -> None:
"""Handles empty service definition."""
result = get_container_name("web", {}, "myproject")
assert result == "myproject-web-1"

def test_container_name_none_value(self) -> None:
"""Handles container_name set to None."""
service_def = {"image": "nginx", "container_name": None}
result = get_container_name("web", service_def, "myproject")
assert result == "myproject-web-1"

def test_container_name_empty_string(self) -> None:
"""Handles container_name set to empty string."""
service_def = {"image": "nginx", "container_name": ""}
result = get_container_name("web", service_def, "myproject")
assert result == "myproject-web-1"

@pytest.mark.parametrize(
("service_name", "project_name", "expected"),
[
("redis", "plex", "plex-redis-1"),
("plex-server", "media", "media-plex-server-1"),
("db", "my-app", "my-app-db-1"),
],
)
def test_various_naming_combinations(
self, service_name: str, project_name: str, expected: str
) -> None:
"""Test various service/project name combinations."""
result = get_container_name(service_name, {"image": "test"}, project_name)
assert result == expected
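The naming rule these tests pin down can be summarised in a short sketch; this is an illustration only, not the project's actual implementation of `compose_farm.compose.get_container_name`: an explicit, truthy `container_name` wins, and anything missing or falsy falls back to Compose's default `<project>-<service>-1` pattern.

```python
# Minimal sketch of the contract exercised by TestGetContainerName
# (assumed, for illustration; the real helper lives in compose_farm.compose).
def get_container_name_sketch(
    service_name: str,
    service_def: dict | None,
    project_name: str,
) -> str:
    explicit = (service_def or {}).get("container_name")
    if explicit:  # None and "" fall through to the default pattern
        return str(explicit)
    return f"{project_name}-{service_name}-1"
```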
@@ -211,8 +211,8 @@ class TestRefreshCommand:
return_value=existing_state,
),
patch(
"compose_farm.cli.management._discover_stacks",
return_value={"plex": "nas02"}, # plex moved to nas02
"compose_farm.cli.management._discover_stacks_full",
return_value=({"plex": "nas02"}, {}, {}), # plex moved to nas02
),
patch("compose_farm.cli.management._snapshot_stacks"),
patch("compose_farm.cli.management.save_state") as mock_save,
@@ -247,8 +251,12 @@ class TestRefreshCommand:
return_value=existing_state,
),
patch(
"compose_farm.cli.management._discover_stacks",
return_value={"plex": "nas01", "grafana": "nas02"}, # jellyfin not running
"compose_farm.cli.management._discover_stacks_full",
return_value=(
{"plex": "nas01", "grafana": "nas02"},
{},
{},
), # jellyfin not running
),
patch("compose_farm.cli.management._snapshot_stacks"),
patch("compose_farm.cli.management.save_state") as mock_save,
@@ -281,8 +285,8 @@ class TestRefreshCommand:
return_value=existing_state,
),
patch(
"compose_farm.cli.management._discover_stacks",
return_value={"plex": "nas01"}, # only plex running
"compose_farm.cli.management._discover_stacks_full",
return_value=({"plex": "nas01"}, {}, {}), # only plex running
),
patch("compose_farm.cli.management._snapshot_stacks"),
patch("compose_farm.cli.management.save_state") as mock_save,
@@ -315,8 +319,8 @@ class TestRefreshCommand:
return_value=existing_state,
),
patch(
"compose_farm.cli.management._discover_stacks",
return_value={"plex": "nas01"}, # jellyfin not running
"compose_farm.cli.management._discover_stacks_full",
return_value=({"plex": "nas01"}, {}, {}), # jellyfin not running
),
patch("compose_farm.cli.management._snapshot_stacks"),
patch("compose_farm.cli.management.save_state") as mock_save,
@@ -350,8 +354,8 @@ class TestRefreshCommand:
return_value=existing_state,
),
patch(
"compose_farm.cli.management._discover_stacks",
return_value={"plex": "nas02"}, # would change
"compose_farm.cli.management._discover_stacks_full",
return_value=({"plex": "nas02"}, {}, {}), # would change
),
patch("compose_farm.cli.management.save_state") as mock_save,
):

@@ -6,7 +6,7 @@ import yaml

from compose_farm.compose import parse_external_networks
from compose_farm.config import Config, Host
from compose_farm.traefik import generate_traefik_config
from compose_farm.traefik import extract_website_urls, generate_traefik_config


def _write_compose(path: Path, data: dict[str, object]) -> None:
@@ -212,22 +212,22 @@ def test_generate_follows_network_mode_service_for_ports(tmp_path: Path) -> None
"image": "gluetun",
"ports": ["5080:5080", "9696:9696"],
},
"qbittorrent": {
"image": "qbittorrent",
"syncthing": {
"image": "syncthing",
"network_mode": "service:vpn",
"labels": [
"traefik.enable=true",
"traefik.http.routers.torrent.rule=Host(`torrent.example.com`)",
"traefik.http.services.torrent.loadbalancer.server.port=5080",
"traefik.http.routers.sync.rule=Host(`sync.example.com`)",
"traefik.http.services.sync.loadbalancer.server.port=5080",
],
},
"prowlarr": {
"image": "prowlarr",
"searxng": {
"image": "searxng",
"network_mode": "service:vpn",
"labels": [
"traefik.enable=true",
"traefik.http.routers.prowlarr.rule=Host(`prowlarr.example.com`)",
"traefik.http.services.prowlarr.loadbalancer.server.port=9696",
"traefik.http.routers.searxng.rule=Host(`searxng.example.com`)",
"traefik.http.services.searxng.loadbalancer.server.port=9696",
],
},
}
@@ -238,10 +238,10 @@ def test_generate_follows_network_mode_service_for_ports(tmp_path: Path) -> None

assert warnings == []
# Both services should get their ports from the vpn service
torrent_servers = dynamic["http"]["services"]["torrent"]["loadbalancer"]["servers"]
assert torrent_servers == [{"url": "http://192.168.1.10:5080"}]
prowlarr_servers = dynamic["http"]["services"]["prowlarr"]["loadbalancer"]["servers"]
assert prowlarr_servers == [{"url": "http://192.168.1.10:9696"}]
sync_servers = dynamic["http"]["services"]["sync"]["loadbalancer"]["servers"]
assert sync_servers == [{"url": "http://192.168.1.10:5080"}]
searxng_servers = dynamic["http"]["services"]["searxng"]["loadbalancer"]["servers"]
assert searxng_servers == [{"url": "http://192.168.1.10:9696"}]


def test_parse_external_networks_single(tmp_path: Path) -> None:
@@ -336,3 +336,330 @@ def test_parse_external_networks_missing_compose(tmp_path: Path) -> None:

networks = parse_external_networks(cfg, "app")
assert networks == []


class TestExtractWebsiteUrls:
"""Test extract_website_urls function."""

def _create_config(self, tmp_path: Path) -> Config:
"""Create a test config."""
return Config(
compose_dir=tmp_path,
hosts={"nas": Host(address="192.168.1.10")},
stacks={"mystack": "nas"},
)

def test_extract_https_url(self, tmp_path: Path) -> None:
"""Extracts HTTPS URL from websecure entrypoint."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.example.com`)",
"traefik.http.routers.web.entrypoints": "websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))

config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.example.com"]

def test_extract_http_url(self, tmp_path: Path) -> None:
"""Extracts HTTP URL from web entrypoint."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.local`)",
"traefik.http.routers.web.entrypoints": "web",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))

config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["http://app.local"]

def test_extract_multiple_urls(self, tmp_path: Path) -> None:
"""Extracts multiple URLs from different routers."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.example.com`)",
"traefik.http.routers.web.entrypoints": "websecure",
"traefik.http.routers.web-local.rule": "Host(`app.local`)",
"traefik.http.routers.web-local.entrypoints": "web",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))

config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["http://app.local", "https://app.example.com"]

def test_https_preferred_over_http(self, tmp_path: Path) -> None:
"""HTTPS is preferred when same host has both."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
# Same host with different entrypoints
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web-http.rule": "Host(`app.example.com`)",
"traefik.http.routers.web-http.entrypoints": "web",
"traefik.http.routers.web-https.rule": "Host(`app.example.com`)",
"traefik.http.routers.web-https.entrypoints": "websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))

config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.example.com"]

def test_traefik_disabled(self, tmp_path: Path) -> None:
"""Returns empty list when traefik.enable is false."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "false",
"traefik.http.routers.web.rule": "Host(`app.example.com`)",
"traefik.http.routers.web.entrypoints": "websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))

config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == []

def test_no_traefik_labels(self, tmp_path: Path) -> None:
"""Returns empty list when no traefik labels."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
}
}
}
compose_file.write_text(yaml.dump(compose_data))

config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == []

def test_compose_file_not_exists(self, tmp_path: Path) -> None:
"""Returns empty list when compose file doesn't exist."""
config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == []

def test_env_variable_interpolation(self, tmp_path: Path) -> None:
"""Interpolates environment variables in host rule."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
env_file = stack_dir / ".env"

env_file.write_text("DOMAIN=example.com\n")
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.${DOMAIN}`)",
"traefik.http.routers.web.entrypoints": "websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))

config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.example.com"]

def test_multiple_hosts_in_one_rule_with_or(self, tmp_path: Path) -> None:
"""Extracts multiple hosts from a single rule with || operator."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.example.com`) || Host(`app.backup.com`)",
"traefik.http.routers.web.entrypoints": "websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))

config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.backup.com", "https://app.example.com"]

def test_host_with_path_prefix(self, tmp_path: Path) -> None:
"""Extracts host from rule that includes PathPrefix."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.example.com`) && PathPrefix(`/api`)",
"traefik.http.routers.web.entrypoints": "websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))

config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.example.com"]

def test_multiple_services_in_stack(self, tmp_path: Path) -> None:
"""Extracts URLs from multiple services in one stack (like arr stack)."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"radarr": {
"image": "radarr",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.radarr.rule": "Host(`radarr.example.com`)",
"traefik.http.routers.radarr.entrypoints": "websecure",
},
},
"sonarr": {
"image": "sonarr",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.sonarr.rule": "Host(`sonarr.example.com`)",
"traefik.http.routers.sonarr.entrypoints": "websecure",
},
},
}
}
compose_file.write_text(yaml.dump(compose_data))

config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://radarr.example.com", "https://sonarr.example.com"]

def test_labels_in_list_format(self, tmp_path: Path) -> None:
"""Handles labels in list format (- key=value)."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": [
"traefik.enable=true",
"traefik.http.routers.web.rule=Host(`app.example.com`)",
"traefik.http.routers.web.entrypoints=websecure",
],
}
}
}
compose_file.write_text(yaml.dump(compose_data))

config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.example.com"]

def test_no_entrypoints_defaults_to_http(self, tmp_path: Path) -> None:
"""When no entrypoints specified, defaults to http."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.example.com`)",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))

config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["http://app.example.com"]

def test_multiple_entrypoints_with_websecure(self, tmp_path: Path) -> None:
"""When entrypoints includes websecure, use https."""
stack_dir = tmp_path / "mystack"
stack_dir.mkdir()
compose_file = stack_dir / "compose.yaml"
compose_data = {
"services": {
"web": {
"image": "nginx",
"labels": {
"traefik.enable": "true",
"traefik.http.routers.web.rule": "Host(`app.example.com`)",
"traefik.http.routers.web.entrypoints": "web,websecure",
},
}
}
}
compose_file.write_text(yaml.dump(compose_data))

config = self._create_config(tmp_path)
urls = extract_website_urls(config, "mystack")
assert urls == ["https://app.example.com"]

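Taken together, the tests above describe the mapping from Traefik labels to clickable URLs. A rough sketch of that logic follows, written only to illustrate the behaviour the tests pin down; the helper name, the regex, and the omission of `.env` interpolation are assumptions, not the real `extract_website_urls`.

```python
# Illustration only: reproduces the label-to-URL behaviour the tests describe.
import re
from pathlib import Path

import yaml

_HOST_RE = re.compile(r"Host\(`([^`]+)`\)")


def extract_website_urls_sketch(compose_file: Path) -> list[str]:
    if not compose_file.exists():
        return []
    data = yaml.safe_load(compose_file.read_text()) or {}
    urls: set[str] = set()
    for service in (data.get("services") or {}).values():
        labels = service.get("labels") or {}
        if isinstance(labels, list):  # "- key=value" list form
            labels = dict(item.split("=", 1) for item in labels)
        if str(labels.get("traefik.enable", "true")).lower() == "false":
            continue
        for key, rule in labels.items():
            if not (key.startswith("traefik.http.routers.") and key.endswith(".rule")):
                continue
            router = key.split(".")[3]
            entrypoints = labels.get(f"traefik.http.routers.{router}.entrypoints", "")
            scheme = "https" if "websecure" in entrypoints else "http"
            for host in _HOST_RE.findall(rule):
                urls.add(f"{scheme}://{host}")
    # Prefer HTTPS when the same host is reachable on both entrypoints
    for host in {u.split("://", 1)[1] for u in urls}:
        if f"https://{host}" in urls:
            urls.discard(f"http://{host}")
    return sorted(urls)
```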
@@ -31,12 +31,21 @@ services:
(plex_dir / ".env").write_text("PLEX_CLAIM=claim-xxx\n")

# Create another stack
sonarr_dir = compose_path / "sonarr"
sonarr_dir.mkdir()
(sonarr_dir / "compose.yaml").write_text("""
grafana_dir = compose_path / "grafana"
grafana_dir.mkdir()
(grafana_dir / "compose.yaml").write_text("""
services:
sonarr:
image: linuxserver/sonarr
grafana:
image: grafana/grafana
""")

# Create a single-service stack for testing service commands
redis_dir = compose_path / "redis"
redis_dir.mkdir()
(redis_dir / "compose.yaml").write_text("""
services:
redis:
image: redis:alpine
""")

return compose_path
@@ -58,7 +67,8 @@ hosts:

stacks:
plex: server-1
sonarr: server-2
grafana: server-2
redis: server-1
""")

# State file must be alongside config file

@@ -2,53 +2,65 @@

from pathlib import Path

import pytest

from compose_farm.web.routes.api import _backup_file, _save_with_backup


def test_backup_creates_timestamped_file(tmp_path: Path) -> None:
"""Test that backup creates file in .backups with correct content."""
test_file = tmp_path / "test.yaml"
@pytest.fixture
def xdg_backup_dir(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> Path:
"""Set XDG_CONFIG_HOME to tmp_path and return the backup directory path."""
monkeypatch.setenv("XDG_CONFIG_HOME", str(tmp_path))
return tmp_path / "compose-farm" / "backups"


def test_backup_creates_timestamped_file(tmp_path: Path, xdg_backup_dir: Path) -> None:
"""Test that backup creates file in XDG backup dir with correct content."""
test_file = tmp_path / "stacks" / "test.yaml"
test_file.parent.mkdir(parents=True)
test_file.write_text("original content")

backup_path = _backup_file(test_file)

assert backup_path is not None
assert backup_path.parent.name == ".backups"
assert backup_path.is_relative_to(xdg_backup_dir)
assert backup_path.name.startswith("test.yaml.")
assert backup_path.read_text() == "original content"


def test_backup_returns_none_for_nonexistent_file(tmp_path: Path) -> None:
def test_backup_returns_none_for_nonexistent_file(tmp_path: Path, xdg_backup_dir: Path) -> None:
"""Test that backup returns None if file doesn't exist."""
assert _backup_file(tmp_path / "nonexistent.yaml") is None


def test_save_creates_new_file(tmp_path: Path) -> None:
def test_save_creates_new_file(tmp_path: Path, xdg_backup_dir: Path) -> None:
"""Test that save creates new file without backup."""
test_file = tmp_path / "new.yaml"

assert _save_with_backup(test_file, "content") is True
assert test_file.read_text() == "content"
assert not (tmp_path / ".backups").exists()
assert not xdg_backup_dir.exists()


def test_save_skips_unchanged_content(tmp_path: Path) -> None:
def test_save_skips_unchanged_content(tmp_path: Path, xdg_backup_dir: Path) -> None:
"""Test that save returns False and creates no backup if unchanged."""
test_file = tmp_path / "test.yaml"
test_file.write_text("same")

assert _save_with_backup(test_file, "same") is False
assert not (tmp_path / ".backups").exists()
assert not xdg_backup_dir.exists()


def test_save_creates_backup_before_overwrite(tmp_path: Path) -> None:
def test_save_creates_backup_before_overwrite(tmp_path: Path, xdg_backup_dir: Path) -> None:
"""Test that save backs up original before overwriting."""
test_file = tmp_path / "test.yaml"
test_file = tmp_path / "stacks" / "test.yaml"
test_file.parent.mkdir(parents=True)
test_file.write_text("original")

assert _save_with_backup(test_file, "new") is True
assert test_file.read_text() == "new"

backups = list((tmp_path / ".backups").glob("test.yaml.*"))
# Find backup in XDG dir
backups = list(xdg_backup_dir.rglob("test.yaml.*"))
assert len(backups) == 1
assert backups[0].read_text() == "original"

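For orientation, the save/backup contract these tests encode could be sketched roughly as follows; this is an assumed illustration, not the project's code — the real `_backup_file`/`_save_with_backup` live in `compose_farm.web.routes.api` and resolve the backup directory from XDG paths.

```python
# Illustration of the behaviour under test, not the project's implementation.
from datetime import datetime
from pathlib import Path


def save_with_backup_sketch(path: Path, content: str, backup_dir: Path) -> bool:
    if path.exists():
        if path.read_text() == content:
            return False  # unchanged: no write, no backup
        backup_dir.mkdir(parents=True, exist_ok=True)
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        (backup_dir / f"{path.name}.{stamp}").write_text(path.read_text())
    path.write_text(content)
    return True
```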
File diff suppressed because it is too large
@@ -84,6 +84,15 @@ class TestPageTemplatesRender:
assert response.status_code == 200
assert "test-service" in response.text

def test_stack_detail_has_containers_data(self, client: TestClient) -> None:
"""Test stack detail page includes data-containers for command palette shell."""
response = client.get("/stack/test-service")
assert response.status_code == 200
# Should have data-containers attribute with JSON
assert "data-containers=" in response.text
# Container name should follow {project}-{service}-1 pattern
assert "test-service-app-1" in response.text


class TestPartialTemplatesRender:
"""Test that partial templates render without missing variables."""

@@ -11,6 +11,7 @@ copyright = "Copyright © 2025 Bas Nijholt"
|
||||
repo_url = "https://github.com/basnijholt/compose-farm"
|
||||
repo_name = "GitHub"
|
||||
edit_uri = "edit/main/docs"
|
||||
extra_javascript = ["javascripts/video-fix.js"]
|
||||
|
||||
nav = [
|
||||
{ "Home" = "index.md" },
|
||||
@@ -48,16 +49,25 @@ features = [
|
||||
]
|
||||
|
||||
[[project.theme.palette]]
|
||||
media = "(prefers-color-scheme)"
|
||||
toggle.icon = "lucide/sun-moon"
|
||||
toggle.name = "Switch to light mode"
|
||||
|
||||
[[project.theme.palette]]
|
||||
media = "(prefers-color-scheme: light)"
|
||||
scheme = "default"
|
||||
primary = "teal"
|
||||
primary = "indigo"
|
||||
accent = "indigo"
|
||||
toggle.icon = "lucide/sun"
|
||||
toggle.name = "Switch to dark mode"
|
||||
|
||||
[[project.theme.palette]]
|
||||
media = "(prefers-color-scheme: dark)"
|
||||
scheme = "slate"
|
||||
primary = "teal"
|
||||
toggle.icon = "lucide/moon"
|
||||
toggle.name = "Switch to light mode"
|
||||
primary = "indigo"
|
||||
accent = "orange"
|
||||
toggle.icon = "lucide/moon-star"
|
||||
toggle.name = "Switch to system preference"
|
||||
|
||||
[project.theme.font]
|
||||
text = "Inter"
|
||||
@@ -67,6 +77,9 @@ code = "JetBrains Mono"
|
||||
logo = "lucide/server"
|
||||
repo = "lucide/github"
|
||||
|
||||
[project.extra]
|
||||
generator = false
|
||||
|
||||
[[project.extra.social]]
|
||||
icon = "fontawesome/brands/github"
|
||||
link = "https://github.com/basnijholt/compose-farm"
|
||||
|
||||