Mirror of https://github.com/basnijholt/compose-farm.git, synced 2026-02-03 14:13:26 +00:00

# Compare commits

11 commits:

- 6436becff9
- 3460d8a3ea
- 8dabc27272
- 5e08f1d712
- 8302f1d97a
- eac9338352
- 667931dc80
- 5890221528
- c8fc3c2496
- ffb7a32402
- beb1630fcf
**.github/workflows/ci.yml** (vendored, 2 changes)

@@ -12,7 +12,7 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        os: [ubuntu-latest, macos-latest, windows-latest]
+        os: [ubuntu-latest, macos-latest]
         python-version: ["3.11", "3.12", "3.13"]
 
     steps:
**.prompts/duplication-audit.md** (new file, 79 lines)

@@ -0,0 +1,79 @@

# Duplication audit and generalization prompt

You are a coding agent working inside a repository. Your job is to find duplicated
functionality (not just identical code) and propose a minimal, safe generalization.
Keep it simple and avoid adding features.

## First steps

- Read project-specific instructions (AGENTS.md, CONTRIBUTING.md, or similar) and follow them.
- If instructions mention tooling or style (e.g., preferred search tools), use those.
- Ask a brief clarification if the request is ambiguous (for example: report only vs refactor).

## Objective

Identify and consolidate duplicated functionality across the codebase. Duplication includes:

- Multiple functions that parse or validate the same data in slightly different ways
- Repeated file reads or config parsing
- Similar command building or subprocess execution paths
- Near-identical error handling or logging patterns
- Repeated data transforms that can become a shared helper

The goal is to propose a general, reusable abstraction that reduces duplication while
preserving behavior. Keep changes minimal and easy to review.

## Search strategy

1) Map the hot paths
- Scan entry points (CLI, web handlers, tasks, jobs) to see what they do repeatedly.
- Look for cross-module patterns: same steps, different files.

2) Find duplicate operations
- Use fast search tools (prefer `rg`) to find repeated keywords and patterns.
- Check for repeated YAML/JSON parsing, env interpolation, file IO, command building,
  data validation, or response formatting.

3) Validate duplication is real
- Confirm the functional intent matches (not just similar code).
- Note any subtle differences that must be preserved.

4) Propose a minimal generalization
- Suggest a shared helper, utility, or wrapper.
- Avoid over-engineering. If only two call sites exist, keep the helper small.
- Prefer pure functions and centralized IO if that already exists.

## Deliverables

Provide a concise report with:

1) Findings
- List duplicated behaviors with file references and a short description of the
  shared functionality.
- Explain why these are functionally the same (or nearly the same).

2) Proposed generalizations
- For each duplication, propose a shared helper and where it should live.
- Outline any behavior differences that need to be parameterized.

3) Impact and risk
- Note any behavior risks, test needs, or migration steps.

If the user asked you to implement changes:
- Make only the minimal edits needed to dedupe behavior.
- Keep the public API stable unless explicitly requested.
- Add small comments only when the logic is non-obvious.
- Summarize what changed and why.

## Output format

- Start with a short summary of the top 1-3 duplications.
- Then provide a list of findings, ordered by impact.
- Include a small proposed refactor plan (step-by-step, no more than 5 steps).
- End with any questions or assumptions.

## Guardrails

- Do not add new features or change behavior beyond deduplication.
- Avoid deep refactors without explicit request.
- Preserve existing style conventions and import rules.
- If a duplication is better left alone (e.g., clarity, single usage), say so.
**CLAUDE.md** (10 changes)

@@ -110,6 +110,10 @@ Browser tests are marked with `@pytest.mark.browser`. They use Playwright to tes
 Use `gh release create` to create releases. The tag is created automatically.
 
 ```bash
+# IMPORTANT: Ensure you're on latest origin/main before releasing!
+git fetch origin
+git checkout origin/main
+
 # Check current version
 git tag --sort=-v:refname | head -1

@@ -133,8 +137,8 @@ CLI available as `cf` or `compose-farm`.
 | `down` | Stop stacks (`docker compose down`). Use `--orphaned` to stop stacks removed from config |
 | `stop` | Stop services without removing containers (`docker compose stop`) |
 | `pull` | Pull latest images |
-| `restart` | `down` + `up -d` |
-| `update` | `pull` + `build` + `down` + `up -d` |
+| `restart` | Restart running containers (`docker compose restart`) |
+| `update` | Pull, build, recreate only if changed (`up -d --pull always --build`) |
 | `apply` | Make reality match config: migrate stacks + stop orphans. Use `--dry-run` to preview |
 | `compose` | Run any docker compose command on a stack (passthrough) |
 | `logs` | Show stack logs |

@@ -144,6 +148,6 @@ CLI available as `cf` or `compose-farm`.
 | `check` | Validate config, traefik labels, mounts, networks; show host compatibility |
 | `init-network` | Create Docker network on hosts with consistent subnet/gateway |
 | `traefik-file` | Generate Traefik file-provider config from compose labels |
-| `config` | Manage config files (init, show, path, validate, edit, symlink) |
+| `config` | Manage config files (init, init-env, show, path, validate, edit, symlink) |
 | `ssh` | Manage SSH keys (setup, status, keygen) |
 | `web` | Start web UI server |
**README.md** (34 changes)

@@ -369,8 +369,8 @@ The CLI is available as both `compose-farm` and the shorter `cf` alias.
 | `cf up <stack>` | Start stack (auto-migrates if host changed) |
 | `cf down <stack>` | Stop and remove stack containers |
 | `cf stop <stack>` | Stop stack without removing containers |
-| `cf restart <stack>` | down + up |
-| `cf update <stack>` | pull + build + down + up |
+| `cf restart <stack>` | Restart running containers |
+| `cf update <stack>` | Pull, build, recreate only if changed |
 | `cf pull <stack>` | Pull latest images |
 | `cf logs -f <stack>` | Follow logs |
 | `cf ps` | Show status of all stacks |

@@ -400,10 +400,10 @@ cf down --orphaned  # stop stacks removed from config
 
 # Pull latest images
 cf pull --all
 
-# Restart (down + up)
+# Restart running containers
 cf restart plex
 
-# Update (pull + build + down + up) - the end-to-end update command
+# Update (pull + build, only recreates containers if images changed)
 cf update --all
 
 # Update state from reality (discovers running stacks + captures digests)

@@ -473,10 +473,8 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
 │ stop      Stop services without removing containers (docker compose         │
 │           stop).                                                             │
 │ pull      Pull latest images (docker compose pull).                          │
-│ restart   Restart stacks (down + up). With --service, restarts just         │
-│           that service.                                                      │
-│ update    Update stacks (pull + build + down + up). With --service,         │
-│           updates just that service.                                         │
+│ restart   Restart running containers (docker compose restart).              │
+│ update    Update stacks. Only recreates containers if images changed.       │
 │ apply     Make reality match config (start, migrate, stop                   │
 │           strays/orphans as needed).                                         │
 │ compose   Run any docker compose command on a stack.                         │

@@ -525,6 +523,8 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
 │ --all      -a          Run on all stacks                                     │
 │ --host     -H  TEXT    Filter to stacks on this host                         │
 │ --service  -s  TEXT    Target a specific service within the stack            │
+│ --pull                 Pull images before starting (--pull always)           │
+│ --build                Build images before starting                          │
 │ --config   -c  PATH    Path to config file                                   │
 │ --help     -h          Show this message and exit.                           │
 ╰──────────────────────────────────────────────────────────────────────────────╯

@@ -659,7 +659,7 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
 
 Usage: cf restart [OPTIONS] [STACKS]...
 
-Restart stacks (down + up). With --service, restarts just that service.
+Restart running containers (docker compose restart).
 
 ╭─ Arguments ──────────────────────────────────────────────────────────────────╮
 │ stacks  [STACKS]...  Stacks to operate on                                     │

@@ -694,8 +694,7 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
 
 Usage: cf update [OPTIONS] [STACKS]...
 
-Update stacks (pull + build + down + up). With --service, updates just that
-service.
+Update stacks. Only recreates containers if images changed.
 
 ╭─ Arguments ──────────────────────────────────────────────────────────────────╮
 │ stacks  [STACKS]...  Stacks to operate on                                     │

@@ -1003,6 +1002,7 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
 │ validate   Validate the config file syntax and schema.                       │
 │ symlink    Create a symlink from the default config location to a config     │
 │            file.                                                              │
+│ init-env   Generate a .env file for Docker deployment.                       │
 ╰──────────────────────────────────────────────────────────────────────────────╯
 
 ```

@@ -1282,12 +1282,12 @@ published ports.
 
 **Auto-regeneration**
 
-To automatically regenerate the Traefik config after `up`, `down`, `restart`, or `update`,
+To automatically regenerate the Traefik config after `up`, `down`, or `update`,
 add `traefik_file` to your config:
 
 ```yaml
 compose_dir: /opt/compose
-traefik_file: /opt/traefik/dynamic.d/compose-farm.yml  # auto-regenerate on up/down/restart/update
+traefik_file: /opt/traefik/dynamic.d/compose-farm.yml  # auto-regenerate on up/down/update
 traefik_stack: traefik  # skip stacks on same host (docker provider handles them)
 
 hosts:

@@ -1363,6 +1363,14 @@ glances_stack: glances  # Enables resource stats in web UI
 
 3. Deploy: `cf up glances`
 
+4. **(Docker web UI only)** If running the web UI in a Docker container, set `CF_LOCAL_HOST` to your local hostname in `.env`:
+
+   ```bash
+   echo "CF_LOCAL_HOST=nas" >> .env  # Replace 'nas' with your local host name
+   ```
+
+   This tells the web UI to reach the local Glances via container name instead of IP (required due to Docker network isolation).
+
 The web UI dashboard will now show a "Host Resources" section with live stats from all hosts. Hosts where Glances is unreachable show an error indicator.
 
 **Live Stats Page**
@@ -3,7 +3,7 @@
 
 compose_dir: /opt/compose
 
-# Optional: Auto-regenerate Traefik file-provider config after up/down/restart/update
+# Optional: Auto-regenerate Traefik file-provider config after up/down/update
 traefik_file: /opt/traefik/dynamic.d/compose-farm.yml
 traefik_stack: traefik  # Skip stacks on same host (docker provider handles them)
 

@@ -47,6 +47,8 @@ services:
       - CF_CONFIG=${CF_COMPOSE_DIR:-/opt/stacks}/compose-farm.yaml
       # Used to detect self-updates and run via SSH to survive container restart
       - CF_WEB_STACK=compose-farm
+      # Local host for Glances (use container name instead of IP to avoid Docker network issues)
+      - CF_LOCAL_HOST=${CF_LOCAL_HOST:-}
      # HOME must match the user running the container for SSH to find keys
       - HOME=${CF_HOME:-/root}
       # USER is required for SSH when running as non-root (UID not in /etc/passwd)

@@ -14,8 +14,8 @@ The Compose Farm CLI is available as both `compose-farm` and the shorter alias `
 | | `up` | Start stacks |
 | | `down` | Stop stacks |
 | | `stop` | Stop services without removing containers |
-| | `restart` | Restart stacks (down + up) |
-| | `update` | Update stacks (pull + build + down + up) |
+| | `restart` | Restart running containers |
+| | `update` | Update stacks (only recreates if images changed) |
 | | `pull` | Pull latest images |
 | | `compose` | Run any docker compose command |
 | **Monitoring** | `ps` | Show stack status |

@@ -197,7 +197,7 @@ cf stop immich --service database
 
 ### cf restart
 
-Restart stacks (down + up). With `--service`, restarts just that service.
+Restart running containers (`docker compose restart`). With `--service`, restarts just that service.
 
 ```bash
 cf restart [OPTIONS] [STACKS]...

@@ -225,7 +225,7 @@ cf restart immich --service database
 
 ### cf update
 
-Update stacks (pull + build + down + up). With `--service`, updates just that service.
+Update stacks. Only recreates containers if images changed. With `--service`, updates just that service.
 
 <video autoplay loop muted playsinline>
   <source src="/assets/update.webm" type="video/webm">

@@ -107,7 +107,7 @@ Supported compose file names (checked in order):
 
 ### traefik_file
 
-Path to auto-generated Traefik file-provider config. When set, Compose Farm regenerates this file after `up`, `down`, `restart`, and `update` commands.
+Path to auto-generated Traefik file-provider config. When set, Compose Farm regenerates this file after `up`, `down`, and `update` commands.
 
 ```yaml
 traefik_file: /opt/traefik/dynamic.d/compose-farm.yml

@@ -1,5 +1,5 @@
 # Update Demo
-# Shows updating stacks (pull + build + down + up)
+# Shows updating stacks (only recreates containers if images changed)
 
 Output docs/assets/update.gif
 Output docs/assets/update.webm
**docs/docker-deployment.md** (new file, 116 lines)

@@ -0,0 +1,116 @@

---
icon: lucide/container
---

# Docker Deployment

Run the Compose Farm web UI in Docker.

## Quick Start

**1. Get the compose file:**

```bash
curl -O https://raw.githubusercontent.com/basnijholt/compose-farm/main/docker-compose.yml
```

**2. Generate `.env` file:**

```bash
cf config init-env -o .env
```

This auto-detects settings from your `compose-farm.yaml`:

- `DOMAIN` from existing traefik labels
- `CF_COMPOSE_DIR` from config
- `CF_UID/GID/HOME/USER` from current user
- `CF_LOCAL_HOST` by matching local IPs to config hosts

Review the output and edit if needed.

**3. Set up SSH keys:**

```bash
docker compose run --rm cf ssh setup
```

**4. Start the web UI:**

```bash
docker compose up -d web
```

Open `http://localhost:9000` (or `https://compose-farm.example.com` if using Traefik).

---

## Configuration

The `cf config init-env` command auto-detects most settings. After running it, review the generated `.env` file and edit if needed:

```bash
$EDITOR .env
```

### What init-env detects

| Variable | How it's detected |
|----------|-------------------|
| `DOMAIN` | Extracted from traefik labels in your stacks |
| `CF_COMPOSE_DIR` | From `compose_dir` in your config |
| `CF_UID/GID/HOME/USER` | From current user (for NFS compatibility) |
| `CF_LOCAL_HOST` | By matching local IPs to configured hosts |

If auto-detection fails for any value, edit the `.env` file manually.
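A freshly generated file has roughly this shape, judging by the `init-env` generator added in this same changeset (values below are placeholders; yours come from the auto-detection above):

```
# Generated by: cf config init-env
# From config: /opt/stacks/compose-farm.yaml

# Domain for Traefik labels
DOMAIN=example.com

# Compose files location
CF_COMPOSE_DIR=/opt/stacks

# Run as current user (recommended for NFS)
CF_UID=1000
CF_GID=1000
CF_HOME=/home/you
CF_USER=you

# Local hostname for Glances integration
CF_LOCAL_HOST=nas
```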
### Glances Monitoring

To show host CPU/memory stats in the dashboard, deploy [Glances](https://nicolargo.github.io/glances/) on your hosts. If `CF_LOCAL_HOST` wasn't detected correctly, set it to your local hostname:

```bash
CF_LOCAL_HOST=nas  # Replace with your local host name
```

See [Host Resource Monitoring](https://github.com/basnijholt/compose-farm#host-resource-monitoring-glances) in the README.

---

## Troubleshooting

### SSH "Permission denied" or "Host key verification failed"

Regenerate keys:

```bash
docker compose run --rm cf ssh setup
```

### Glances shows error for local host

Add your local hostname to `.env`:

```bash
echo "CF_LOCAL_HOST=nas" >> .env
docker compose restart web
```

### Files created as root

Add the non-root variables above and restart.

---

## All Environment Variables

For advanced users, here's the complete reference:

| Variable | Description | Default |
|----------|-------------|---------|
| `DOMAIN` | Domain for Traefik labels | *(required)* |
| `CF_COMPOSE_DIR` | Compose files directory | `/opt/stacks` |
| `CF_UID` / `CF_GID` | User/group ID | `0` (root) |
| `CF_HOME` | Home directory | `/root` |
| `CF_USER` | Username for SSH | `root` |
| `CF_LOCAL_HOST` | Local hostname for Glances | *(auto-detect)* |
| `CF_SSH_DIR` | SSH keys directory | `~/.ssh/compose-farm` |
| `CF_XDG_CONFIG` | Config/backup directory | `~/.config/compose-farm` |
@@ -329,7 +329,7 @@ cf apply
 
 ```bash
 cf update --all
-# Runs: pull + build + down + up for each stack
+# Only recreates containers if images changed
 ```
 
 ## Next Steps

@@ -139,7 +139,6 @@ stacks:
 With `traefik_file` set, these commands auto-regenerate the config:
 - `cf up`
 - `cf down`
-- `cf restart`
 - `cf update`
 - `cf apply`
 

@@ -168,4 +168,4 @@ traefik_file: /opt/stacks/traefik/dynamic.d/compose-farm.yml
 traefik_stack: traefik
 ```
 
-With `traefik_file` configured, compose-farm automatically regenerates the config after `up`, `down`, `restart`, and `update` commands.
+With `traefik_file` configured, compose-farm automatically regenerates the config after `up`, `down`, and `update` commands.

@@ -5,7 +5,7 @@
 
 compose_dir: /opt/stacks/compose-farm/examples
 
-# Auto-regenerate Traefik file-provider config after up/down/restart/update
+# Auto-regenerate Traefik file-provider config after up/down/update
 traefik_file: /opt/stacks/compose-farm/examples/traefik/dynamic.d/compose-farm.yml
 traefik_stack: traefik  # Skip Traefik's host in file-provider (docker provider handles it)
 
@@ -1,7 +1,7 @@
 """Hatch build hook to vendor CDN assets for offline use.
 
 During wheel builds, this hook:
-1. Parses base.html to find elements with data-vendor attributes
+1. Reads vendor-assets.json to find assets marked for vendoring
 2. Downloads each CDN asset to a temporary vendor directory
 3. Rewrites base.html to use local /static/vendor/ paths
 4. Fetches and bundles license information

@@ -13,6 +13,7 @@ distributed wheel has vendored assets.
 
 from __future__ import annotations
 
+import json
 import re
 import shutil
 import subprocess

@@ -23,22 +24,6 @@ from urllib.request import Request, urlopen
 
 from hatchling.builders.hooks.plugin.interface import BuildHookInterface
 
-# Matches elements with data-vendor attribute: extracts URL and target filename
-# Example: <script src="https://..." data-vendor="htmx.js">
-# Captures: (1) src/href, (2) URL, (3) attributes between, (4) vendor filename
-VENDOR_PATTERN = re.compile(r'(src|href)="(https://[^"]+)"([^>]*?)data-vendor="([^"]+)"')
-
-# License URLs for each package (GitHub raw URLs)
-LICENSE_URLS: dict[str, tuple[str, str]] = {
-    "htmx": ("MIT", "https://raw.githubusercontent.com/bigskysoftware/htmx/master/LICENSE"),
-    "xterm": ("MIT", "https://raw.githubusercontent.com/xtermjs/xterm.js/master/LICENSE"),
-    "daisyui": ("MIT", "https://raw.githubusercontent.com/saadeghi/daisyui/master/LICENSE"),
-    "tailwindcss": (
-        "MIT",
-        "https://raw.githubusercontent.com/tailwindlabs/tailwindcss/master/LICENSE",
-    ),
-}
-
 
 def _download(url: str) -> bytes:
     """Download a URL, trying urllib first then curl as fallback."""

@@ -61,7 +46,14 @@ def _download(url: str) -> bytes:
     return bytes(result.stdout)
 
 
-def _generate_licenses_file(temp_dir: Path) -> None:
+def _load_vendor_assets(root: Path) -> dict[str, Any]:
+    """Load vendor-assets.json from the web module."""
+    json_path = root / "src" / "compose_farm" / "web" / "vendor-assets.json"
+    with json_path.open() as f:
+        return json.load(f)
+
+
+def _generate_licenses_file(temp_dir: Path, licenses: dict[str, dict[str, str]]) -> None:
     """Download and combine license files into LICENSES.txt."""
     lines = [
         "# Vendored Dependencies - License Information",

@@ -73,7 +65,9 @@ def _generate_licenses_file(temp_dir: Path) -> None:
         "",
     ]
 
-    for pkg_name, (license_type, license_url) in LICENSE_URLS.items():
+    for pkg_name, license_info in licenses.items():
+        license_type = license_info["type"]
+        license_url = license_info["url"]
         lines.append(f"## {pkg_name} ({license_type})")
         lines.append(f"Source: {license_url}")
         lines.append("")

@@ -107,44 +101,57 @@ class VendorAssetsHook(BuildHookInterface):  # type: ignore[misc]
         if not base_html_path.exists():
             return
 
+        # Load vendor assets configuration
+        vendor_config = _load_vendor_assets(Path(self.root))
+        assets_to_vendor = vendor_config["assets"]
+
+        if not assets_to_vendor:
+            return
+
         # Create temp directory for vendored assets
         temp_dir = Path(tempfile.mkdtemp(prefix="compose_farm_vendor_"))
         vendor_dir = temp_dir / "vendor"
         vendor_dir.mkdir()
 
-        # Read and parse base.html
+        # Read base.html
         html_content = base_html_path.read_text()
 
         # Build URL to filename mapping and download assets
         url_to_filename: dict[str, str] = {}
-
-        # Find all elements with data-vendor attribute and download them
-        for match in VENDOR_PATTERN.finditer(html_content):
-            url = match.group(2)
-            filename = match.group(4)
-
-            if url in url_to_filename:
-                continue
-
+        for asset in assets_to_vendor:
+            url = asset["url"]
+            filename = asset["filename"]
             url_to_filename[url] = filename
+            filepath = vendor_dir / filename
+            filepath.parent.mkdir(parents=True, exist_ok=True)
             content = _download(url)
-            (vendor_dir / filename).write_bytes(content)
+            filepath.write_bytes(content)
 
-        if not url_to_filename:
-            return
+        # Generate LICENSES.txt from the JSON config
+        _generate_licenses_file(vendor_dir, vendor_config["licenses"])
 
-        # Generate LICENSES.txt
-        _generate_licenses_file(vendor_dir)
+        # Rewrite HTML: replace CDN URLs with local paths and remove data-vendor attributes
+        # Pattern matches: src="URL" ... data-vendor="filename" or href="URL" ... data-vendor="filename"
+        vendor_pattern = re.compile(r'(src|href)="(https://[^"]+)"([^>]*?)data-vendor="([^"]+)"')
 
-        # Rewrite HTML to use local paths (remove data-vendor, update URL)
         def replace_vendor_tag(match: re.Match[str]) -> str:
             attr = match.group(1)  # src or href
             url = match.group(2)
             between = match.group(3)  # attributes between URL and data-vendor
             filename = match.group(4)
             if url in url_to_filename:
                 filename = url_to_filename[url]
                 return f'{attr}="/static/vendor/{filename}"{between}'
             return match.group(0)
 
-        modified_html = VENDOR_PATTERN.sub(replace_vendor_tag, html_content)
+        modified_html = vendor_pattern.sub(replace_vendor_tag, html_content)
 
+        # Inject vendored mode flag for JavaScript to detect
+        # Insert right after <head> tag so it's available early
+        modified_html = modified_html.replace(
+            "<head>",
+            "<head>\n    <script>window.CF_VENDORED=true;</script>",
+            1,  # Only replace first occurrence
+        )
+
         # Write modified base.html to temp
         templates_dir = temp_dir / "templates"
@@ -30,7 +30,8 @@ classifiers = [
     "Intended Audience :: Developers",
     "Intended Audience :: System Administrators",
     "License :: OSI Approved :: MIT License",
-    "Operating System :: OS Independent",
+    "Operating System :: MacOS",
+    "Operating System :: POSIX :: Linux",
     "Programming Language :: Python :: 3",
     "Programming Language :: Python :: 3.11",
     "Programming Language :: Python :: 3.12",

@@ -46,6 +47,7 @@ dependencies = [
     "asyncssh>=2.14.0",
     "pyyaml>=6.0",
     "rich>=13.0.0",
+    "python-dotenv>=1.0.0",
 ]
 
 [project.optional-dependencies]
@@ -142,6 +142,9 @@ def load_config_or_exit(config_path: Path | None) -> Config:
     except FileNotFoundError as e:
         print_error(str(e))
         raise typer.Exit(1) from e
+    except Exception as e:
+        print_error(f"Invalid config: {e}")
+        raise typer.Exit(1) from e
 
 
 def get_stacks(
@@ -3,13 +3,12 @@
 
 from __future__ import annotations
 
 import os
-import platform
 import shlex
 import shutil
 import subprocess
 from importlib import resources
 from pathlib import Path
-from typing import Annotated
+from typing import TYPE_CHECKING, Annotated
 
 import typer
 

@@ -17,6 +16,9 @@ from compose_farm.cli.app import app
 from compose_farm.console import MSG_CONFIG_NOT_FOUND, console, print_error, print_success
 from compose_farm.paths import config_search_paths, default_config_path, find_config_path
 
+if TYPE_CHECKING:
+    from compose_farm.config import Config
+
 config_app = typer.Typer(
     name="config",
     help="Manage compose-farm configuration files.",

@@ -43,8 +45,6 @@ def _get_editor() -> str:
     """Get the user's preferred editor ($EDITOR > $VISUAL > platform default)."""
     if editor := os.environ.get("EDITOR") or os.environ.get("VISUAL"):
         return editor
-    if platform.system() == "Windows":
-        return "notepad"
     return next((e for e in ("nano", "vim", "vi") if shutil.which(e)), "vi")

@@ -68,6 +68,22 @@ def _get_config_file(path: Path | None) -> Path | None:
     return config_path.resolve() if config_path else None
 
 
+def _load_config_with_path(path: Path | None) -> tuple[Path, Config]:
+    """Load config and return both the resolved path and Config object.
+
+    Exits with error if config not found or invalid.
+    """
+    from compose_farm.cli.common import load_config_or_exit  # noqa: PLC0415
+
+    config_file = _get_config_file(path)
+    if config_file is None:
+        print_error(MSG_CONFIG_NOT_FOUND)
+        raise typer.Exit(1)
+
+    cfg = load_config_or_exit(config_file)
+    return config_file, cfg
+
+
 def _report_missing_config(explicit_path: Path | None = None) -> None:
     """Report that a config file was not found."""
     console.print("[yellow]Config file not found.[/yellow]")

@@ -135,7 +151,7 @@ def config_edit(
     console.print(f"[dim]Opening {config_file} with {editor}...[/dim]")
 
     try:
-        editor_cmd = shlex.split(editor, posix=os.name != "nt")
+        editor_cmd = shlex.split(editor)
     except ValueError as e:
         print_error("Invalid editor command. Check [bold]$EDITOR[/]/[bold]$VISUAL[/]")
         raise typer.Exit(1) from e

@@ -207,23 +223,7 @@ def config_validate(
     path: _PathOption = None,
 ) -> None:
     """Validate the config file syntax and schema."""
-    config_file = _get_config_file(path)
-
-    if config_file is None:
-        print_error(MSG_CONFIG_NOT_FOUND)
-        raise typer.Exit(1)
-
-    # Lazy import: pydantic adds ~50ms to startup, only load when actually needed
-    from compose_farm.config import load_config  # noqa: PLC0415
-
-    try:
-        cfg = load_config(config_file)
-    except FileNotFoundError as e:
-        print_error(str(e))
-        raise typer.Exit(1) from e
-    except Exception as e:
-        print_error(f"Invalid config: {e}")
-        raise typer.Exit(1) from e
+    config_file, cfg = _load_config_with_path(path)
 
     print_success(f"Valid config: {config_file}")
     console.print(f"  Hosts: {len(cfg.hosts)}")

@@ -293,5 +293,129 @@ def config_symlink(
     console.print(f"  -> {target_path}")
 
 
+def _detect_domain(cfg: Config) -> str | None:
+    """Try to detect DOMAIN from traefik Host() rules in existing stacks.
+
+    Uses extract_website_urls from traefik module to get interpolated
+    URLs, then extracts the domain from the first valid URL.
+    Skips local domains (.local, localhost, etc.).
+    """
+    from urllib.parse import urlparse  # noqa: PLC0415
+
+    from compose_farm.traefik import extract_website_urls  # noqa: PLC0415
+
+    max_stacks_to_check = 10
+    min_domain_parts = 2
+    subdomain_parts = 4
+    skip_tlds = {"local", "localhost", "internal", "lan", "home"}
+
+    for stack_name in list(cfg.stacks.keys())[:max_stacks_to_check]:
+        urls = extract_website_urls(cfg, stack_name)
+        for url in urls:
+            host = urlparse(url).netloc
+            parts = host.split(".")
+            # Skip local/internal domains
+            if parts[-1].lower() in skip_tlds:
+                continue
+            if len(parts) >= subdomain_parts:
+                # e.g., "app.lab.nijho.lt" -> "lab.nijho.lt"
+                return ".".join(parts[-3:])
+            if len(parts) >= min_domain_parts:
+                # e.g., "app.example.com" -> "example.com"
+                return ".".join(parts[-2:])
+    return None
+
+
+def _detect_local_host(cfg: Config) -> str | None:
+    """Find which config host matches local machine's IPs."""
+    from compose_farm.executor import is_local  # noqa: PLC0415
+
+    for name, host in cfg.hosts.items():
+        if is_local(host):
+            return name
+    return None
+
+
+@config_app.command("init-env")
+def config_init_env(
+    path: _PathOption = None,
+    output: Annotated[
+        Path | None,
+        typer.Option(
+            "--output", "-o", help="Output .env file path. Defaults to .env in config directory."
+        ),
+    ] = None,
+    force: _ForceOption = False,
+) -> None:
+    """Generate a .env file for Docker deployment.
+
+    Reads the compose-farm.yaml config and auto-detects settings:
+    - CF_COMPOSE_DIR from compose_dir
+    - CF_LOCAL_HOST by detecting which config host matches local IPs
+    - CF_UID/GID/HOME/USER from current user
+    - DOMAIN from traefik labels in stacks (if found)
+
+    Example::
+
+        cf config init-env           # Create .env next to config
+        cf config init-env -o .env   # Create .env in current directory
+
+    """
+    config_file, cfg = _load_config_with_path(path)
+
+    # Determine output path
+    env_path = output.expanduser().resolve() if output else config_file.parent / ".env"
+
+    if env_path.exists() and not force:
+        console.print(f"[yellow].env file already exists:[/] {env_path}")
+        if not typer.confirm("Overwrite?"):
+            console.print("[dim]Aborted.[/dim]")
+            raise typer.Exit(0)
+
+    # Auto-detect values
+    uid = os.getuid()
+    gid = os.getgid()
+    home = os.environ.get("HOME", "/root")
+    user = os.environ.get("USER", "root")
+    compose_dir = str(cfg.compose_dir)
+    local_host = _detect_local_host(cfg)
+    domain = _detect_domain(cfg)
+
+    # Generate .env content
+    lines = [
+        "# Generated by: cf config init-env",
+        f"# From config: {config_file}",
+        "",
+        "# Domain for Traefik labels",
+        f"DOMAIN={domain or 'example.com'}",
+        "",
+        "# Compose files location",
+        f"CF_COMPOSE_DIR={compose_dir}",
+        "",
+        "# Run as current user (recommended for NFS)",
+        f"CF_UID={uid}",
+        f"CF_GID={gid}",
+        f"CF_HOME={home}",
+        f"CF_USER={user}",
+        "",
+        "# Local hostname for Glances integration",
+        f"CF_LOCAL_HOST={local_host or '# auto-detect failed - set manually'}",
+        "",
+    ]
+
+    env_path.write_text("\n".join(lines), encoding="utf-8")
+
+    print_success(f"Created .env file: {env_path}")
+    console.print()
+    console.print("[dim]Detected settings:[/dim]")
+    console.print(f"  DOMAIN: {domain or '[yellow]example.com[/] (edit this)'}")
+    console.print(f"  CF_COMPOSE_DIR: {compose_dir}")
+    console.print(f"  CF_UID/GID: {uid}:{gid}")
+    console.print(f"  CF_LOCAL_HOST: {local_host or '[yellow]not detected[/] (set manually)'}")
+    console.print()
+    console.print("[dim]Review and edit as needed:[/dim]")
+    console.print(f"  [cyan]$EDITOR {env_path}[/cyan]")
+
+
 # Register config subcommand on the shared app
 app.add_typer(config_app, name="config", rich_help_panel="Configuration")
@@ -28,8 +28,9 @@ from compose_farm.cli.common import (
 )
 from compose_farm.cli.management import _discover_stacks_full
 from compose_farm.console import MSG_DRY_RUN, console, print_error, print_success
-from compose_farm.executor import run_compose_on_host, run_on_stacks, run_sequential_on_stacks
+from compose_farm.executor import run_compose_on_host, run_on_stacks
 from compose_farm.operations import (
+    build_up_cmd,
     stop_orphaned_stacks,
     stop_stray_stacks,
     up_stacks,

@@ -49,6 +50,14 @@ def up(
     all_stacks: AllOption = False,
     host: HostOption = None,
     service: ServiceOption = None,
+    pull: Annotated[
+        bool,
+        typer.Option("--pull", help="Pull images before starting (--pull always)"),
+    ] = False,
+    build: Annotated[
+        bool,
+        typer.Option("--build", help="Build images before starting"),
+    ] = False,
     config: ConfigOption = None,
 ) -> None:
     """Start stacks (docker compose up -d). Auto-migrates if host changed."""

@@ -58,9 +67,13 @@ def up(
             print_error("--service requires exactly one stack")
             raise typer.Exit(1)
         # For service-level up, use run_on_stacks directly (no migration logic)
-        results = run_async(run_on_stacks(cfg, stack_list, f"up -d {service}", raw=True))
+        results = run_async(
+            run_on_stacks(
+                cfg, stack_list, build_up_cmd(pull=pull, build=build, service=service), raw=True
+            )
+        )
     else:
-        results = run_async(up_stacks(cfg, stack_list, raw=True))
+        results = run_async(up_stacks(cfg, stack_list, raw=True, pull=pull, build=build))
     maybe_regenerate_traefik(cfg, results)
     report_results(results)

@@ -161,19 +174,17 @@ def restart(
     service: ServiceOption = None,
     config: ConfigOption = None,
 ) -> None:
-    """Restart stacks (down + up). With --service, restarts just that service."""
+    """Restart running containers (docker compose restart)."""
     stack_list, cfg = get_stacks(stacks or [], all_stacks, config)
     if service:
         if len(stack_list) != 1:
             print_error("--service requires exactly one stack")
             raise typer.Exit(1)
-        # For service-level restart, use docker compose restart (more efficient)
-        raw = True
-        results = run_async(run_on_stacks(cfg, stack_list, f"restart {service}", raw=raw))
+        cmd = f"restart {service}"
     else:
-        raw = len(stack_list) == 1
-        results = run_async(run_sequential_on_stacks(cfg, stack_list, ["down", "up -d"], raw=raw))
-    maybe_regenerate_traefik(cfg, results)
+        cmd = "restart"
+    raw = len(stack_list) == 1
+    results = run_async(run_on_stacks(cfg, stack_list, cmd, raw=raw))
     report_results(results)

@@ -184,36 +195,9 @@ def update(
     service: ServiceOption = None,
     config: ConfigOption = None,
 ) -> None:
-    """Update stacks (pull + build + down + up). With --service, updates just that service."""
-    stack_list, cfg = get_stacks(stacks or [], all_stacks, config)
-    if service:
-        if len(stack_list) != 1:
-            print_error("--service requires exactly one stack")
-            raise typer.Exit(1)
-        # For service-level update: pull + build + stop + up (stop instead of down)
-        raw = True
-        results = run_async(
-            run_sequential_on_stacks(
-                cfg,
-                stack_list,
-                [
-                    f"pull --ignore-buildable {service}",
-                    f"build {service}",
-                    f"stop {service}",
-                    f"up -d {service}",
-                ],
-                raw=raw,
-            )
-        )
-    else:
-        raw = len(stack_list) == 1
-        results = run_async(
-            run_sequential_on_stacks(
-                cfg, stack_list, ["pull --ignore-buildable", "build", "down", "up -d"], raw=raw
-            )
-        )
-    maybe_regenerate_traefik(cfg, results)
-    report_results(results)
+    """Update stacks. Only recreates containers if images changed."""
+    # update is just up --pull --build
+    up(stacks=stacks, all_stacks=all_stacks, service=service, pull=True, build=True, config=config)
 
 
 def _discover_strays(cfg: Config) -> dict[str, list[str]]:
@@ -56,7 +56,6 @@ from compose_farm.operations import (
     check_stack_requirements,
 )
 from compose_farm.state import get_orphaned_stacks, load_state, save_state
-from compose_farm.traefik import generate_traefik_config, render_traefik_config
 
 # --- Sync helpers ---

@@ -328,6 +327,8 @@ def _report_orphaned_stacks(cfg: Config) -> bool:
 
 def _report_traefik_status(cfg: Config, stacks: list[str]) -> None:
     """Check and report traefik label status."""
+    from compose_farm.traefik import generate_traefik_config  # noqa: PLC0415
+
     try:
         _, warnings = generate_traefik_config(cfg, stacks, check_all=True)
     except (FileNotFoundError, ValueError):

@@ -447,6 +448,11 @@ def traefik_file(
     config: ConfigOption = None,
 ) -> None:
     """Generate a Traefik file-provider fragment from compose Traefik labels."""
+    from compose_farm.traefik import (  # noqa: PLC0415
+        generate_traefik_config,
+        render_traefik_config,
+    )
+
     stack_list, cfg = get_stacks(stacks or [], all_stacks, config)
     try:
         dynamic, warnings = generate_traefik_config(cfg, stack_list)

@@ -13,6 +13,7 @@ from pathlib import Path
 from typing import TYPE_CHECKING, Any
 
 import yaml
+from dotenv import dotenv_values
 
 if TYPE_CHECKING:
     from .config import Config
@@ -40,25 +41,37 @@ def _load_env(compose_path: Path) -> dict[str, str]:
 
     Reads from .env file in the same directory as compose file,
     then overlays current environment variables.
     """
-    env: dict[str, str] = {}
     env_path = compose_path.parent / ".env"
-    if env_path.exists():
-        for line in env_path.read_text().splitlines():
-            stripped = line.strip()
-            if not stripped or stripped.startswith("#") or "=" not in stripped:
-                continue
-            key, value = stripped.split("=", 1)
-            key = key.strip()
-            value = value.strip()
-            if (value.startswith('"') and value.endswith('"')) or (
-                value.startswith("'") and value.endswith("'")
-            ):
-                value = value[1:-1]
-            env[key] = value
+    env: dict[str, str] = {k: v for k, v in dotenv_values(env_path).items() if v is not None}
     env.update({k: v for k, v in os.environ.items() if isinstance(v, str)})
     return env
 
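The switch to `python-dotenv` keeps the old precedence: the `.env` file supplies defaults and the process environment wins. A minimal sketch of that behavior (hypothetical key names, same two lines as the new implementation):

```python
# Illustrative only: .env provides defaults, os.environ overrides.
import os
from pathlib import Path

from dotenv import dotenv_values

env_path = Path("/tmp/demo.env")
env_path.write_text("PORT=8080\nNAME=from-file\n")

env = {k: v for k, v in dotenv_values(env_path).items() if v is not None}
os.environ["NAME"] = "from-process"
env.update({k: v for k, v in os.environ.items() if isinstance(v, str)})

assert env["PORT"] == "8080"          # only defined in the file
assert env["NAME"] == "from-process"  # process environment wins
```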
+def parse_compose_data(content: str) -> dict[str, Any]:
+    """Parse compose YAML content into a dict."""
+    compose_data = yaml.safe_load(content) or {}
+    return compose_data if isinstance(compose_data, dict) else {}
+
+
+def load_compose_data(compose_path: Path) -> dict[str, Any]:
+    """Load compose YAML from a file path."""
+    return parse_compose_data(compose_path.read_text())
+
+
+def load_compose_data_for_stack(config: Config, stack: str) -> tuple[Path, dict[str, Any]]:
+    """Load compose YAML for a stack, returning (path, data)."""
+    compose_path = config.get_compose_path(stack)
+    if not compose_path.exists():
+        return compose_path, {}
+    return compose_path, load_compose_data(compose_path)
+
+
+def extract_services(compose_data: dict[str, Any]) -> dict[str, Any]:
+    """Extract services mapping from compose data."""
+    raw_services = compose_data.get("services", {})
+    return raw_services if isinstance(raw_services, dict) else {}
+
+
 def _interpolate(value: str, env: dict[str, str]) -> str:
     """Perform ${VAR} and ${VAR:-default} interpolation."""
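The body of `_interpolate` falls outside this hunk. As a rough, self-contained sketch of what `${VAR}` / `${VAR:-default}` substitution entails (illustrative only, not compose-farm's actual implementation):

```python
# Illustrative sketch of compose-style interpolation, not the project's code.
import re

_VAR = re.compile(r"\$\{(?P<name>[A-Za-z_][A-Za-z0-9_]*)(?::-(?P<default>[^}]*))?\}")

def interpolate(value: str, env: dict[str, str]) -> str:
    """Replace ${VAR} and ${VAR:-default}, falling back to the default or ''."""
    def repl(m: re.Match[str]) -> str:
        return env.get(m.group("name"), m.group("default") or "")
    return _VAR.sub(repl, value)

assert interpolate("${DATA_DIR:-/opt/data}/media", {}) == "/opt/data/media"
assert interpolate("${PORT}", {"PORT": "8080"}) == "8080"
```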
@@ -185,16 +198,15 @@ def parse_host_volumes(config: Config, stack: str) -> list[str]:
 
     Returns a list of absolute host paths used as volume mounts.
     Skips named volumes and resolves relative paths.
     """
-    compose_path = config.get_compose_path(stack)
+    compose_path, compose_data = load_compose_data_for_stack(config, stack)
     if not compose_path.exists():
         return []
 
-    env = _load_env(compose_path)
-    compose_data = yaml.safe_load(compose_path.read_text()) or {}
-    raw_services = compose_data.get("services", {})
-    if not isinstance(raw_services, dict):
+    raw_services = extract_services(compose_data)
+    if not raw_services:
         return []
 
+    env = _load_env(compose_path)
     paths: list[str] = []
     compose_dir = compose_path.parent
 

@@ -221,16 +233,15 @@ def parse_devices(config: Config, stack: str) -> list[str]:
 
     Returns a list of host device paths (e.g., /dev/dri, /dev/dri/renderD128).
     """
-    compose_path = config.get_compose_path(stack)
+    compose_path, compose_data = load_compose_data_for_stack(config, stack)
     if not compose_path.exists():
         return []
 
-    env = _load_env(compose_path)
-    compose_data = yaml.safe_load(compose_path.read_text()) or {}
-    raw_services = compose_data.get("services", {})
-    if not isinstance(raw_services, dict):
+    raw_services = extract_services(compose_data)
+    if not raw_services:
         return []
 
+    env = _load_env(compose_path)
     devices: list[str] = []
     for definition in raw_services.values():
         if not isinstance(definition, dict):

@@ -260,11 +271,10 @@ def parse_external_networks(config: Config, stack: str) -> list[str]:
 
     Returns a list of network names marked as external: true.
     """
-    compose_path = config.get_compose_path(stack)
+    compose_path, compose_data = load_compose_data_for_stack(config, stack)
     if not compose_path.exists():
         return []
 
-    compose_data = yaml.safe_load(compose_path.read_text()) or {}
     networks = compose_data.get("networks", {})
     if not isinstance(networks, dict):
         return []

@@ -285,15 +295,14 @@ def load_compose_services(
 
     Returns (services_dict, env_dict, host_address).
     """
-    compose_path = config.get_compose_path(stack)
+    compose_path, compose_data = load_compose_data_for_stack(config, stack)
     if not compose_path.exists():
         message = f"[{stack}] Compose file not found: {compose_path}"
         raise FileNotFoundError(message)
 
     env = _load_env(compose_path)
-    compose_data = yaml.safe_load(compose_path.read_text()) or {}
-    raw_services = compose_data.get("services", {})
-    if not isinstance(raw_services, dict):
+    raw_services = extract_services(compose_data)
+    if not raw_services:
         return {}, env, config.get_host(stack).address
     return raw_services, env, config.get_host(stack).address

@@ -76,7 +76,7 @@ stacks:
 # traefik_file: (optional) Auto-generate Traefik file-provider config
 # ------------------------------------------------------------------------------
 # When set, compose-farm automatically regenerates this file after
-# up/down/restart/update commands. Traefik watches this file for changes.
+# up/down/update commands. Traefik watches this file for changes.
 #
 # traefik_file: /opt/compose/traefik/dynamic.d/compose-farm.yml
@@ -3,16 +3,48 @@
 
 from __future__ import annotations
 
 import asyncio
+import os
 from dataclasses import dataclass
 from typing import TYPE_CHECKING, Any
 
+from .executor import is_local
+
 if TYPE_CHECKING:
-    from .config import Config
+    from .config import Config, Host
 
 # Default Glances REST API port
 DEFAULT_GLANCES_PORT = 61208
 
 
+def _get_glances_address(
+    host_name: str,
+    host: Host,
+    glances_container: str | None,
+) -> str:
+    """Get the address to use for Glances API requests.
+
+    When running in a Docker container (CF_WEB_STACK set), the local host's Glances
+    may not be reachable via its LAN IP due to Docker network isolation. In this case,
+    we use the Glances container name for the local host.
+    Set CF_LOCAL_HOST=<hostname> to explicitly specify which host is local.
+    """
+    # Only use container name when running inside a Docker container
+    in_container = os.environ.get("CF_WEB_STACK") is not None
+    if not in_container or not glances_container:
+        return host.address
+
+    # CF_LOCAL_HOST explicitly tells us which host to reach via container name
+    explicit_local = os.environ.get("CF_LOCAL_HOST")
+    if explicit_local and host_name == explicit_local:
+        return glances_container
+
+    # Fall back to is_local detection (may not work in container)
+    if is_local(host):
+        return glances_container
+
+    return host.address
+
+
 @dataclass
 class HostStats:
     """Resource statistics for a host."""
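A quick illustration of the resolution order, assuming this module imports as `compose_farm.monitoring` and that only the `.address` attribute of `Host` matters here (the `SimpleNamespace` is a stand-in for the real model):

```python
# Illustrative only; module path and Host stand-in are assumptions.
import os
from types import SimpleNamespace

from compose_farm.monitoring import _get_glances_address

host = SimpleNamespace(address="192.168.1.10")  # stand-in for the real Host model

# Outside a container (CF_WEB_STACK unset): always the configured address.
os.environ.pop("CF_WEB_STACK", None)
assert _get_glances_address("nas", host, "glances") == "192.168.1.10"

# Inside the web container with CF_LOCAL_HOST=nas: the container name wins.
os.environ["CF_WEB_STACK"] = "compose-farm"
os.environ["CF_LOCAL_HOST"] = "nas"
assert _get_glances_address("nas", host, "glances") == "glances"
```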
@@ -112,7 +144,11 @@ async def fetch_all_host_stats(
     port: int = DEFAULT_GLANCES_PORT,
 ) -> dict[str, HostStats]:
     """Fetch stats from all hosts in parallel."""
-    tasks = [fetch_host_stats(name, host.address, port) for name, host in config.hosts.items()]
+    glances_container = config.glances_stack
+    tasks = [
+        fetch_host_stats(name, _get_glances_address(name, host, glances_container), port)
+        for name, host in config.hosts.items()
+    ]
     results = await asyncio.gather(*tasks)
     return {stats.host: stats for stats in results}

@@ -212,6 +248,8 @@ async def fetch_all_container_stats(
     """Fetch container stats from all hosts in parallel, enriched with compose labels."""
     from .executor import get_container_compose_labels  # noqa: PLC0415
 
+    glances_container = config.glances_stack
+
     async def fetch_host_data(
         host_name: str,
         host_address: str,

@@ -230,7 +268,10 @@ async def fetch_all_container_stats(
             c.stack, c.service = labels.get(c.name, ("", ""))
         return containers
 
-    tasks = [fetch_host_data(name, host.address) for name, host in config.hosts.items()]
+    tasks = [
+        fetch_host_data(name, _get_glances_address(name, host, glances_container))
+        for name, host in config.hosts.items()
+    ]
     results = await asyncio.gather(*tasks)
     # Flatten list of lists
     return [container for host_containers in results for container in host_containers]
@@ -185,18 +185,37 @@ def _report_preflight_failures(
         print_error(f"  missing device: {dev}")
 
 
+def build_up_cmd(
+    *,
+    pull: bool = False,
+    build: bool = False,
+    service: str | None = None,
+) -> str:
+    """Build compose 'up' subcommand with optional flags."""
+    parts = ["up", "-d"]
+    if pull:
+        parts.append("--pull always")
+    if build:
+        parts.append("--build")
+    if service:
+        parts.append(service)
+    return " ".join(parts)
+
+
 async def _up_multi_host_stack(
     cfg: Config,
     stack: str,
     prefix: str,
     *,
     raw: bool = False,
+    pull: bool = False,
+    build: bool = False,
 ) -> list[CommandResult]:
     """Start a multi-host stack on all configured hosts."""
     host_names = cfg.get_hosts(stack)
     results: list[CommandResult] = []
     compose_path = cfg.get_compose_path(stack)
-    command = f"docker compose -f {compose_path} up -d"
+    command = f"docker compose -f {compose_path} {build_up_cmd(pull=pull, build=build)}"
 
     # Pre-flight checks on all hosts
     for host_name in host_names:
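`build_up_cmd` is plain string assembly, so its behavior is easy to sanity-check; the import path below is the one the CLI itself uses (`from compose_farm.operations import build_up_cmd`):

```python
from compose_farm.operations import build_up_cmd

# Flag combinations map directly onto docker compose arguments:
assert build_up_cmd() == "up -d"
assert build_up_cmd(pull=True) == "up -d --pull always"
assert build_up_cmd(pull=True, build=True) == "up -d --pull always --build"
assert build_up_cmd(service="db") == "up -d db"  # service name goes last
```

This is why `cf update` can collapse to `cf up --pull --build`: the update path now produces `up -d --pull always --build`, letting docker compose decide whether containers actually need recreating.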
@@ -269,6 +288,8 @@ async def _up_single_stack(
     prefix: str,
     *,
     raw: bool,
+    pull: bool = False,
+    build: bool = False,
 ) -> CommandResult:
     """Start a single-host stack with migration support."""
     target_host = cfg.get_hosts(stack)[0]

@@ -297,7 +318,7 @@ async def _up_single_stack(
 
     # Start on target host
     console.print(f"{prefix} Starting on [magenta]{target_host}[/]...")
-    up_result = await _run_compose_step(cfg, stack, "up -d", raw=raw)
+    up_result = await _run_compose_step(cfg, stack, build_up_cmd(pull=pull, build=build), raw=raw)
 
     # Update state on success, or rollback on failure
     if up_result.success:

@@ -316,24 +337,101 @@ async def _up_single_stack(
     return up_result
 
 
+async def _up_stack_simple(
+    cfg: Config,
+    stack: str,
+    *,
+    raw: bool = False,
+    pull: bool = False,
+    build: bool = False,
+) -> CommandResult:
+    """Start a single-host stack without migration (parallel-safe)."""
+    target_host = cfg.get_hosts(stack)[0]
+
+    # Pre-flight check
+    preflight = await check_stack_requirements(cfg, stack, target_host)
+    if not preflight.ok:
+        _report_preflight_failures(stack, target_host, preflight)
+        return CommandResult(stack=stack, exit_code=1, success=False)
+
+    # Run with streaming for parallel output
+    result = await run_compose(cfg, stack, build_up_cmd(pull=pull, build=build), raw=raw)
+    if raw:
+        print()
+    if result.interrupted:
+        raise OperationInterruptedError
+
+    # Update state on success
+    if result.success:
+        set_stack_host(cfg, stack, target_host)
+
+    return result
+
+
 async def up_stacks(
     cfg: Config,
     stacks: list[str],
     *,
     raw: bool = False,
+    pull: bool = False,
+    build: bool = False,
 ) -> list[CommandResult]:
-    """Start stacks with automatic migration if host changed."""
+    """Start stacks with automatic migration if host changed.
+
+    Stacks without migration run in parallel. Migration stacks run sequentially.
+    """
+    # Categorize stacks
+    multi_host: list[str] = []
+    needs_migration: list[str] = []
+    simple: list[str] = []
+
+    for stack in stacks:
+        if cfg.is_multi_host(stack):
+            multi_host.append(stack)
+        else:
+            target = cfg.get_hosts(stack)[0]
+            current = get_stack_host(cfg, stack)
+            if current and current != target:
+                needs_migration.append(stack)
+            else:
+                simple.append(stack)
+
     results: list[CommandResult] = []
-    total = len(stacks)
 
     try:
-        for idx, stack in enumerate(stacks, 1):
-            prefix = f"[dim][{idx}/{total}][/] [cyan]\\[{stack}][/]"
+        # Simple stacks: run in parallel (no migration needed)
+        if simple:
+            use_raw = raw and len(simple) == 1
+            simple_results = await asyncio.gather(
+                *[
+                    _up_stack_simple(cfg, stack, raw=use_raw, pull=pull, build=build)
+                    for stack in simple
+                ]
+            )
+            results.extend(simple_results)
+
+        # Multi-host stacks: run in parallel
+        if multi_host:
+            multi_results = await asyncio.gather(
+                *[
+                    _up_multi_host_stack(
+                        cfg, stack, f"[cyan]\\[{stack}][/]", raw=raw, pull=pull, build=build
+                    )
+                    for stack in multi_host
+                ]
+            )
+            for result_list in multi_results:
+                results.extend(result_list)
+
+        # Migration stacks: run sequentially for clear output and rollback
+        if needs_migration:
+            total = len(needs_migration)
+            for idx, stack in enumerate(needs_migration, 1):
+                prefix = f"[dim][{idx}/{total}][/] [cyan]\\[{stack}][/]"
+                results.append(
+                    await _up_single_stack(cfg, stack, prefix, raw=raw, pull=pull, build=build)
+                )
 
-            if cfg.is_multi_host(stack):
-                results.extend(await _up_multi_host_stack(cfg, stack, prefix, raw=raw))
-            else:
-                results.append(await _up_single_stack(cfg, stack, prefix, raw=raw))
     except OperationInterruptedError:
         raise KeyboardInterrupt from None
@@ -4,7 +4,6 @@ from __future__ import annotations
 
 import asyncio
 import logging
-import sys
 from contextlib import asynccontextmanager, suppress
 from typing import TYPE_CHECKING, Any, cast
 

@@ -17,6 +16,7 @@ from rich.logging import RichHandler
 from compose_farm.web.deps import STATIC_DIR, get_config
 from compose_farm.web.routes import actions, api, containers, pages
 from compose_farm.web.streaming import TASK_TTL_SECONDS, cleanup_stale_tasks
+from compose_farm.web.ws import router as ws_router
 
 # Configure logging with Rich handler for compose_farm.web modules
 logging.basicConfig(

@@ -76,10 +76,6 @@ def create_app() -> FastAPI:
     app.include_router(api.router, prefix="/api")
     app.include_router(actions.router, prefix="/api")
 
-    # WebSocket routes use Unix-only modules (fcntl, pty)
-    if sys.platform != "win32":
-        from compose_farm.web.ws import router as ws_router  # noqa: PLC0415
-
-        app.include_router(ws_router)
+    app.include_router(ws_router)
 
     return app
@@ -1,78 +1,39 @@
"""CDN asset definitions and caching for tests and demo recordings.

This module provides a single source of truth for CDN asset URLs used in
browser tests and demo recordings. Assets are intercepted and served from
a local cache to eliminate network variability.
This module provides CDN asset URLs used in browser tests and demo recordings.
Assets are intercepted and served from a local cache to eliminate network
variability.

Note: The canonical list of CDN assets for production is in base.html
(with data-vendor attributes). This module includes those plus dynamically
loaded assets (like Monaco editor modules loaded by app.js).
The canonical list of CDN assets is in vendor-assets.json. This module loads
that file and provides the CDN_ASSETS dict for test caching.
"""

from __future__ import annotations

import json
import subprocess
from typing import TYPE_CHECKING
from pathlib import Path


def _load_cdn_assets() -> dict[str, tuple[str, str]]:
    """Load CDN assets from vendor-assets.json.

    Returns:
        Dict mapping URL to (filename, content_type) tuple.

    """
    json_path = Path(__file__).parent / "vendor-assets.json"
    with json_path.open() as f:
        config = json.load(f)

    return {asset["url"]: (asset["filename"], asset["content_type"]) for asset in config["assets"]}

if TYPE_CHECKING:
    from pathlib import Path

# CDN assets to cache locally for tests/demos
# Format: URL -> (local_filename, content_type)
#
# If tests fail with "Uncached CDN request", add the URL here.
CDN_ASSETS: dict[str, tuple[str, str]] = {
    # From base.html (data-vendor attributes)
    "https://cdn.jsdelivr.net/npm/daisyui@5/themes.css": ("daisyui-themes.css", "text/css"),
    "https://cdn.jsdelivr.net/npm/daisyui@5": ("daisyui.css", "text/css"),
    "https://cdn.jsdelivr.net/npm/@tailwindcss/browser@4": (
        "tailwind.js",
        "application/javascript",
    ),
    "https://cdn.jsdelivr.net/npm/@xterm/xterm@5.5.0/css/xterm.css": ("xterm.css", "text/css"),
    "https://unpkg.com/htmx.org@2.0.4": ("htmx.js", "application/javascript"),
    "https://cdn.jsdelivr.net/npm/@xterm/xterm@5.5.0/lib/xterm.js": (
        "xterm.js",
        "application/javascript",
    ),
    "https://cdn.jsdelivr.net/npm/@xterm/addon-fit@0.10.0/lib/addon-fit.js": (
        "xterm-fit.js",
        "application/javascript",
    ),
    "https://unpkg.com/idiomorph/dist/idiomorph.min.js": (
        "idiomorph.js",
        "application/javascript",
    ),
    "https://unpkg.com/idiomorph/dist/idiomorph-ext.min.js": (
        "idiomorph-ext.js",
        "application/javascript",
    ),
    # Monaco editor - dynamically loaded by app.js
    "https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/loader.js": (
        "monaco-loader.js",
        "application/javascript",
    ),
    "https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/editor/editor.main.js": (
        "monaco-editor-main.js",
        "application/javascript",
    ),
    "https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/editor/editor.main.css": (
        "monaco-editor-main.css",
        "text/css",
    ),
    "https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/base/worker/workerMain.js": (
        "monaco-workerMain.js",
        "application/javascript",
    ),
    "https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/basic-languages/yaml/yaml.js": (
        "monaco-yaml.js",
        "application/javascript",
    ),
    "https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/base/browser/ui/codicons/codicon/codicon.ttf": (
        "monaco-codicon.ttf",
        "font/ttf",
    ),
}
# If tests fail with "Uncached CDN request", add the URL to vendor-assets.json.
CDN_ASSETS: dict[str, tuple[str, str]] = _load_cdn_assets()


def download_url(url: str) -> bytes | None:

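To make the new data flow concrete, a small consumer sketch: looking up the local filename and content type for a cached CDN URL. The dict shape comes from _load_cdn_assets above, and the expected values are taken straight from vendor-assets.json:

from compose_farm.web.cdn import CDN_ASSETS

url = "https://unpkg.com/htmx.org@2.0.4"
filename, content_type = CDN_ASSETS[url]
assert filename == "htmx.js"
assert content_type == "application/javascript"
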
@@ -107,6 +68,7 @@ def ensure_vendor_cache(cache_dir: Path) -> Path:
        filepath = cache_dir / filename
        if filepath.exists():
            continue
        filepath.parent.mkdir(parents=True, exist_ok=True)
        content = download_url(url)
        if not content:
            msg = f"Failed to download {url} - check network/curl"

@@ -3,6 +3,7 @@
from __future__ import annotations

import asyncio
import os
import uuid
from typing import TYPE_CHECKING, Any

@@ -14,6 +15,9 @@ if TYPE_CHECKING:
from compose_farm.web.deps import get_config
from compose_farm.web.streaming import run_cli_streaming, run_compose_streaming, tasks

# Environment variable to identify the web stack (for exclusion from bulk updates)
CF_WEB_STACK = os.environ.get("CF_WEB_STACK", "")

router = APIRouter(tags=["actions"])

# Store task references to prevent garbage collection
@@ -96,7 +100,15 @@ async def pull_all() -> dict[str, Any]:

@router.post("/update-all")
async def update_all() -> dict[str, Any]:
    """Update all stacks (pull + build + down + up)."""
    """Update all stacks, excluding the web stack. Only recreates if images changed.

    The web stack is excluded to prevent the UI from shutting down mid-operation.
    Use 'cf update <web-stack>' manually to update the web UI.
    """
    config = get_config()
    task_id = _start_task(lambda tid: run_cli_streaming(config, ["update", "--all"], tid))
    return {"task_id": task_id, "command": "update --all"}
    # Get all stacks except the web stack to avoid self-shutdown
    stacks = [s for s in config.stacks if s != CF_WEB_STACK]
    if not stacks:
        return {"task_id": "", "command": "update (no stacks)", "skipped": True}
    task_id = _start_task(lambda tid: run_cli_streaming(config, ["update", *stacks], tid))
    return {"task_id": task_id, "command": f"update {' '.join(stacks)}"}

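For orientation, a hedged sketch of what a caller now sees from this endpoint; httpx and the base URL are assumptions for illustration, not project facts:

import httpx

resp = httpx.post("http://localhost:8000/api/update-all")  # base URL assumed
data = resp.json()
if data.get("skipped"):
    print("Nothing to update:", data["command"])  # only the web stack exists
else:
    print("Started task", data["task_id"], "->", data["command"])
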
@@ -19,7 +19,7 @@ import yaml
from fastapi import APIRouter, Body, HTTPException, Query
from fastapi.responses import HTMLResponse

from compose_farm.compose import get_container_name
from compose_farm.compose import extract_services, get_container_name, load_compose_data_for_stack
from compose_farm.executor import is_local, run_compose_on_host, ssh_connect_kwargs
from compose_farm.glances import fetch_all_host_stats
from compose_farm.paths import backup_dir, find_config_path
@@ -51,7 +51,6 @@ def _backup_file(file_path: Path) -> Path | None:

    # Create backup directory mirroring original path structure
    # e.g., /opt/stacks/plex/compose.yaml -> ~/.config/compose-farm/backups/opt/stacks/plex/
    # On Windows: C:\Users\foo\stacks -> backups/Users/foo/stacks
    resolved = file_path.resolve()
    file_backup_dir = backup_dir() / resolved.parent.relative_to(resolved.anchor)
    file_backup_dir.mkdir(parents=True, exist_ok=True)
@@ -107,13 +106,11 @@ def _get_compose_services(config: Any, stack: str, hosts: list[str]) -> list[dic

    Returns one entry per container per host for multi-host stacks.
    """
    compose_path = config.get_compose_path(stack)
    if not compose_path or not compose_path.exists():
    compose_path, compose_data = load_compose_data_for_stack(config, stack)
    if not compose_path.exists():
        return []

    compose_data = yaml.safe_load(compose_path.read_text()) or {}
    raw_services = compose_data.get("services", {})
    if not isinstance(raw_services, dict):
    raw_services = extract_services(compose_data)
    if not raw_services:
        return []

    # Project name is the directory name (docker compose default)

@@ -7,7 +7,7 @@ from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse
from pydantic import ValidationError

from compose_farm.compose import get_container_name
from compose_farm.compose import extract_services, get_container_name, parse_compose_data
from compose_farm.paths import find_config_path
from compose_farm.state import (
    get_orphaned_stacks,
@@ -166,9 +166,9 @@ async def stack_detail(request: Request, name: str) -> HTMLResponse:
    containers: dict[str, dict[str, str]] = {}
    shell_host = current_host[0] if isinstance(current_host, list) else current_host
    if compose_content:
        compose_data = yaml.safe_load(compose_content) or {}
        raw_services = compose_data.get("services", {})
        if isinstance(raw_services, dict):
        compose_data = parse_compose_data(compose_content)
        raw_services = extract_services(compose_data)
        if raw_services:
            services = list(raw_services.keys())
            # Build container info for shell access (only if stack is running)
            if shell_host:

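Both routes now delegate the YAML handling to shared helpers in compose_farm.compose. A rough sketch of what those helpers plausibly do, inferred only from the call sites above; the real implementations may normalize more cases:

from typing import Any

import yaml

def parse_compose_data(content: str) -> dict[str, Any]:
    # Replaces the repeated `yaml.safe_load(...) or {}` pattern.
    return yaml.safe_load(content) or {}

def extract_services(compose_data: dict[str, Any]) -> dict[str, Any]:
    # Replaces the repeated `.get("services", {})` plus isinstance checks.
    services = compose_data.get("services", {})
    return services if isinstance(services, dict) else {}
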
@@ -332,10 +332,14 @@ function loadMonaco(callback) {
    monacoLoading = true;

    // Load the Monaco loader script
    // Use local paths when running from vendored wheel, CDN otherwise
    const monacoBase = window.CF_VENDORED
        ? '/static/vendor/monaco'
        : 'https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs';
    const script = document.createElement('script');
    script.src = 'https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/loader.js';
    script.src = monacoBase + '/loader.js';
    script.onload = function() {
        require.config({ paths: { vs: 'https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs' }});
        require.config({ paths: { vs: monacoBase }});
        require(['vs/editor/editor.main'], function() {
            monacoLoaded = true;
            monacoLoading = false;
@@ -604,7 +608,7 @@ function playFabIntro() {
    cmd('action', 'Apply', 'Make reality match config', dashboardAction('apply'), icons.check),
    cmd('action', 'Refresh', 'Update state from reality', dashboardAction('refresh'), icons.refresh_cw),
    cmd('action', 'Pull All', 'Pull latest images for all stacks', dashboardAction('pull-all'), icons.cloud_download),
    cmd('action', 'Update All', 'Update all stacks', dashboardAction('update-all'), icons.refresh_cw),
    cmd('action', 'Update All', 'Update all stacks except web', dashboardAction('update-all'), icons.refresh_cw),
    cmd('app', 'Theme', 'Change color theme', openThemePicker, icons.palette),
    cmd('app', 'Dashboard', 'Go to dashboard', nav('/'), icons.home),
    cmd('app', 'Live Stats', 'View all containers across hosts', nav('/live-stats'), icons.box),
@@ -624,7 +628,7 @@ function playFabIntro() {
    stackCmd('Down', 'Stop', 'down', icons.square),
    stackCmd('Restart', 'Restart', 'restart', icons.rotate_cw),
    stackCmd('Pull', 'Pull', 'pull', icons.cloud_download),
    stackCmd('Update', 'Pull + restart', 'update', icons.refresh_cw),
    stackCmd('Update', 'Pull + recreate', 'update', icons.refresh_cw),
    stackCmd('Logs', 'View logs for', 'logs', icons.file_text),
);

@@ -103,8 +103,8 @@ def _is_self_update(stack: str, command: str) -> bool:
    """
    if not CF_WEB_STACK or stack != CF_WEB_STACK:
        return False
    # Commands that involve 'down' need SSH: update, restart, down
    return command in ("update", "restart", "down")
    # Commands that involve 'down' need SSH: update, down
    return command in ("update", "down")


async def _run_cli_via_ssh(

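A sketch of the call site this guard implies; the wrapper and its signatures are assumptions, not copied from the source. The idea: when a command would take down the stack that serves the web UI itself, route the run through SSH so the process outlives its own container:

def start_stack_task(config, stack: str, command: str, task_id: str):
    if _is_self_update(stack, command):
        # `down` would kill the web UI's own container, so run the CLI on the
        # host over SSH where it can outlive us.
        return _run_cli_via_ssh(config, stack, command, task_id)
    return run_cli_streaming(config, [command, stack], task_id)

Restart dropped out of the guard because, per the stack-page diffs below, restart no longer recreates containers with down + up.
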
@@ -97,8 +97,8 @@

<!-- Scripts - HTMX first -->
<script src="https://unpkg.com/htmx.org@2.0.4" data-vendor="htmx.js"></script>
<script src="https://unpkg.com/idiomorph/dist/idiomorph.min.js"></script>
<script src="https://unpkg.com/idiomorph/dist/idiomorph-ext.min.js"></script>
<script src="https://unpkg.com/idiomorph/dist/idiomorph.min.js" data-vendor="idiomorph.js"></script>
<script src="https://unpkg.com/idiomorph/dist/idiomorph-ext.min.js" data-vendor="idiomorph-ext.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@xterm/xterm@5.5.0/lib/xterm.js" data-vendor="xterm.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@xterm/addon-fit@0.10.0/lib/addon-fit.js" data-vendor="xterm-fit.js"></script>
<script src="/static/app.js"></script>

@@ -18,7 +18,7 @@
{{ action_btn("Apply", "/api/apply", "primary", "Make reality match config", check()) }}
{{ action_btn("Refresh", "/api/refresh", "outline", "Update state from reality", refresh_cw()) }}
{{ action_btn("Pull All", "/api/pull-all", "outline", "Pull latest images for all stacks", cloud_download()) }}
{{ action_btn("Update All", "/api/update-all", "outline", "Update all stacks (pull + build + down + up)", rotate_cw()) }}
{{ action_btn("Update All", "/api/update-all", "outline", "Update all stacks except web (only recreates if changed)", rotate_cw()) }}
<div class="tooltip" data-tip="Save compose-farm.yaml config file"><button id="save-config-btn" class="btn btn-outline">{{ save() }} Save Config</button></div>
</div>

@@ -22,8 +22,8 @@
<!-- Lifecycle -->
{{ action_btn("Up", "/api/stack/" ~ name ~ "/up", "primary", "Start stack (docker compose up -d)", play()) }}
{{ action_btn("Down", "/api/stack/" ~ name ~ "/down", "outline", "Stop stack (docker compose down)", square()) }}
{{ action_btn("Restart", "/api/stack/" ~ name ~ "/restart", "secondary", "Restart stack (down + up)", rotate_cw()) }}
{{ action_btn("Update", "/api/stack/" ~ name ~ "/update", "accent", "Update to latest (pull + build + down + up)", download()) }}
{{ action_btn("Restart", "/api/stack/" ~ name ~ "/restart", "secondary", "Restart running containers", rotate_cw()) }}
{{ action_btn("Update", "/api/stack/" ~ name ~ "/update", "accent", "Update to latest (only recreates if changed)", download()) }}

<div class="divider divider-horizontal mx-0"></div>

122
src/compose_farm/web/vendor-assets.json
Normal file
@@ -0,0 +1,122 @@
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$comment": "CDN assets vendored into production builds and cached for tests",
  "assets": [
    {
      "url": "https://cdn.jsdelivr.net/npm/daisyui@5",
      "filename": "daisyui.css",
      "content_type": "text/css",
      "package": "daisyui"
    },
    {
      "url": "https://cdn.jsdelivr.net/npm/daisyui@5/themes.css",
      "filename": "daisyui-themes.css",
      "content_type": "text/css",
      "package": "daisyui"
    },
    {
      "url": "https://cdn.jsdelivr.net/npm/@tailwindcss/browser@4",
      "filename": "tailwind.js",
      "content_type": "application/javascript",
      "package": "tailwindcss"
    },
    {
      "url": "https://cdn.jsdelivr.net/npm/@xterm/xterm@5.5.0/css/xterm.css",
      "filename": "xterm.css",
      "content_type": "text/css",
      "package": "xterm"
    },
    {
      "url": "https://unpkg.com/htmx.org@2.0.4",
      "filename": "htmx.js",
      "content_type": "application/javascript",
      "package": "htmx"
    },
    {
      "url": "https://cdn.jsdelivr.net/npm/@xterm/xterm@5.5.0/lib/xterm.js",
      "filename": "xterm.js",
      "content_type": "application/javascript",
      "package": "xterm"
    },
    {
      "url": "https://cdn.jsdelivr.net/npm/@xterm/addon-fit@0.10.0/lib/addon-fit.js",
      "filename": "xterm-fit.js",
      "content_type": "application/javascript",
      "package": "xterm"
    },
    {
      "url": "https://unpkg.com/idiomorph/dist/idiomorph.min.js",
      "filename": "idiomorph.js",
      "content_type": "application/javascript",
      "package": "idiomorph"
    },
    {
      "url": "https://unpkg.com/idiomorph/dist/idiomorph-ext.min.js",
      "filename": "idiomorph-ext.js",
      "content_type": "application/javascript",
      "package": "idiomorph"
    },
    {
      "url": "https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/loader.js",
      "filename": "monaco/loader.js",
      "content_type": "application/javascript",
      "package": "monaco-editor"
    },
    {
      "url": "https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/editor/editor.main.js",
      "filename": "monaco/editor/editor.main.js",
      "content_type": "application/javascript",
      "package": "monaco-editor"
    },
    {
      "url": "https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/editor/editor.main.css",
      "filename": "monaco/editor/editor.main.css",
      "content_type": "text/css",
      "package": "monaco-editor"
    },
    {
      "url": "https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/base/worker/workerMain.js",
      "filename": "monaco/base/worker/workerMain.js",
      "content_type": "application/javascript",
      "package": "monaco-editor"
    },
    {
      "url": "https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/basic-languages/yaml/yaml.js",
      "filename": "monaco/basic-languages/yaml/yaml.js",
      "content_type": "application/javascript",
      "package": "monaco-editor"
    },
    {
      "url": "https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/base/browser/ui/codicons/codicon/codicon.ttf",
      "filename": "monaco/base/browser/ui/codicons/codicon/codicon.ttf",
      "content_type": "font/ttf",
      "package": "monaco-editor"
    }
  ],
  "licenses": {
    "htmx": {
      "type": "MIT",
      "url": "https://raw.githubusercontent.com/bigskysoftware/htmx/master/LICENSE"
    },
    "idiomorph": {
      "type": "BSD-2-Clause",
      "url": "https://raw.githubusercontent.com/bigskysoftware/idiomorph/main/LICENSE"
    },
    "xterm": {
      "type": "MIT",
      "url": "https://raw.githubusercontent.com/xtermjs/xterm.js/master/LICENSE"
    },
    "daisyui": {
      "type": "MIT",
      "url": "https://raw.githubusercontent.com/saadeghi/daisyui/master/LICENSE"
    },
    "tailwindcss": {
      "type": "MIT",
      "url": "https://raw.githubusercontent.com/tailwindlabs/tailwindcss/master/LICENSE"
    },
    "monaco-editor": {
      "type": "MIT",
      "url": "https://raw.githubusercontent.com/microsoft/monaco-editor/main/LICENSE.txt"
    }
  }
}

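Since both the test cache and production vendoring read this file, a small sketch of a vendoring step built on the module's own helpers; the destination layout is an assumption:

from pathlib import Path

from compose_farm.web.cdn import CDN_ASSETS, download_url

def vendor_assets(dest: Path) -> None:
    # Materialize every asset listed in vendor-assets.json under a static dir.
    for url, (filename, _content_type) in CDN_ASSETS.items():
        out = dest / filename
        out.parent.mkdir(parents=True, exist_ok=True)  # monaco/... paths nest
        if out.exists():
            continue
        content = download_url(url)
        if content is None:
            raise RuntimeError(f"Failed to download {url}")
        out.write_bytes(content)
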
@@ -11,9 +11,7 @@ import time
import pytest

# Thresholds in seconds, per OS
if sys.platform == "win32":
    CLI_STARTUP_THRESHOLD = 2.0
elif sys.platform == "darwin":
if sys.platform == "darwin":
    CLI_STARTUP_THRESHOLD = 0.35
else:  # Linux
    CLI_STARTUP_THRESHOLD = 0.25

@@ -9,10 +9,13 @@ from typer.testing import CliRunner

from compose_farm.cli import app
from compose_farm.cli.config import (
    _detect_domain,
    _detect_local_host,
    _generate_template,
    _get_config_file,
    _get_editor,
)
from compose_farm.config import Config, Host


@pytest.fixture
@@ -228,3 +231,159 @@ class TestConfigValidate:
        # Error goes to stderr
        output = result.stdout + (result.stderr or "")
        assert "Config file not found" in output or "not found" in output.lower()


class TestDetectLocalHost:
    """Tests for _detect_local_host function."""

    def test_detects_localhost(self) -> None:
        cfg = Config(
            compose_dir=Path("/opt/compose"),
            hosts={
                "local": Host(address="localhost"),
                "remote": Host(address="192.168.1.100"),
            },
            stacks={"test": "local"},
        )
        result = _detect_local_host(cfg)
        assert result == "local"

    def test_returns_none_for_remote_only(self) -> None:
        cfg = Config(
            compose_dir=Path("/opt/compose"),
            hosts={
                "remote1": Host(address="192.168.1.100"),
                "remote2": Host(address="192.168.1.200"),
            },
            stacks={"test": "remote1"},
        )
        result = _detect_local_host(cfg)
        # Remote IPs won't match local machine
        assert result is None or result in cfg.hosts


class TestDetectDomain:
    """Tests for _detect_domain function."""

    def test_returns_none_for_empty_stacks(self) -> None:
        cfg = Config(
            compose_dir=Path("/opt/compose"),
            hosts={"nas": Host(address="192.168.1.6")},
            stacks={},
        )
        result = _detect_domain(cfg)
        assert result is None

    def test_skips_local_domains(self, tmp_path: Path) -> None:
        # Create a minimal compose file with .local domain
        stack_dir = tmp_path / "test"
        stack_dir.mkdir()
        compose = stack_dir / "compose.yaml"
        compose.write_text(
            """
name: test
services:
  web:
    image: nginx
    labels:
      - "traefik.http.routers.test-local.rule=Host(`test.local`)"
"""
        )
        cfg = Config(
            compose_dir=tmp_path,
            hosts={"nas": Host(address="192.168.1.6")},
            stacks={"test": "nas"},
        )
        result = _detect_domain(cfg)
        # .local should be skipped
        assert result is None


class TestConfigInitEnv:
    """Tests for cf config init-env command."""

    def test_init_env_creates_file(
        self,
        runner: CliRunner,
        tmp_path: Path,
        valid_config_data: dict[str, Any],
        monkeypatch: pytest.MonkeyPatch,
    ) -> None:
        monkeypatch.delenv("CF_CONFIG", raising=False)
        config_file = tmp_path / "compose-farm.yaml"
        config_file.write_text(yaml.dump(valid_config_data))
        env_file = tmp_path / ".env"

        result = runner.invoke(
            app, ["config", "init-env", "-p", str(config_file), "-o", str(env_file)]
        )

        assert result.exit_code == 0
        assert env_file.exists()
        content = env_file.read_text()
        assert "CF_COMPOSE_DIR=/opt/compose" in content
        assert "CF_UID=" in content
        assert "CF_GID=" in content

    def test_init_env_force_overwrites(
        self,
        runner: CliRunner,
        tmp_path: Path,
        valid_config_data: dict[str, Any],
        monkeypatch: pytest.MonkeyPatch,
    ) -> None:
        monkeypatch.delenv("CF_CONFIG", raising=False)
        config_file = tmp_path / "compose-farm.yaml"
        config_file.write_text(yaml.dump(valid_config_data))
        env_file = tmp_path / ".env"
        env_file.write_text("OLD_CONTENT=true")

        result = runner.invoke(
            app, ["config", "init-env", "-p", str(config_file), "-o", str(env_file), "-f"]
        )

        assert result.exit_code == 0
        content = env_file.read_text()
        assert "OLD_CONTENT" not in content
        assert "CF_COMPOSE_DIR" in content

    def test_init_env_prompts_on_existing(
        self,
        runner: CliRunner,
        tmp_path: Path,
        valid_config_data: dict[str, Any],
        monkeypatch: pytest.MonkeyPatch,
    ) -> None:
        monkeypatch.delenv("CF_CONFIG", raising=False)
        config_file = tmp_path / "compose-farm.yaml"
        config_file.write_text(yaml.dump(valid_config_data))
        env_file = tmp_path / ".env"
        env_file.write_text("KEEP_THIS=true")

        result = runner.invoke(
            app,
            ["config", "init-env", "-p", str(config_file), "-o", str(env_file)],
            input="n\n",
        )

        assert result.exit_code == 0
        assert "Aborted" in result.stdout
        assert env_file.read_text() == "KEEP_THIS=true"

    def test_init_env_defaults_to_config_dir(
        self,
        runner: CliRunner,
        tmp_path: Path,
        valid_config_data: dict[str, Any],
        monkeypatch: pytest.MonkeyPatch,
    ) -> None:
        monkeypatch.delenv("CF_CONFIG", raising=False)
        config_file = tmp_path / "compose-farm.yaml"
        config_file.write_text(yaml.dump(valid_config_data))

        result = runner.invoke(app, ["config", "init-env", "-p", str(config_file)])

        assert result.exit_code == 0
        # Should create .env in same directory as config
        env_file = tmp_path / ".env"
        assert env_file.exists()

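The tests above exercise every option of the new command. For orientation, the same invocation driven programmatically, using the tests' own CliRunner pattern; the paths are placeholders:

from typer.testing import CliRunner

from compose_farm.cli import app

runner = CliRunner()
# -p points at the config file, -o at the output .env, -f overwrites without asking.
result = runner.invoke(app, ["config", "init-env", "-p", "compose-farm.yaml", "-o", ".env", "-f"])
assert result.exit_code == 0
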
@@ -11,6 +11,7 @@ from compose_farm.glances import (
    DEFAULT_GLANCES_PORT,
    ContainerStats,
    HostStats,
    _get_glances_address,
    fetch_all_container_stats,
    fetch_all_host_stats,
    fetch_container_stats,
@@ -347,3 +348,62 @@ class TestFetchAllContainerStats:
        hosts = {c.host for c in containers}
        assert "nas" in hosts
        assert "nuc" in hosts


class TestGetGlancesAddress:
    """Tests for _get_glances_address function."""

    def test_returns_host_address_outside_container(self, monkeypatch: pytest.MonkeyPatch) -> None:
        """Without CF_WEB_STACK, always return host address."""
        monkeypatch.delenv("CF_WEB_STACK", raising=False)
        monkeypatch.delenv("CF_LOCAL_HOST", raising=False)
        host = Host(address="192.168.1.6")
        result = _get_glances_address("nas", host, "glances")
        assert result == "192.168.1.6"

    def test_returns_host_address_without_glances_container(
        self, monkeypatch: pytest.MonkeyPatch
    ) -> None:
        """In container without glances_stack config, return host address."""
        monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
        monkeypatch.delenv("CF_LOCAL_HOST", raising=False)
        host = Host(address="192.168.1.6")
        result = _get_glances_address("nas", host, None)
        assert result == "192.168.1.6"

    def test_returns_container_name_for_explicit_local_host(
        self, monkeypatch: pytest.MonkeyPatch
    ) -> None:
        """CF_LOCAL_HOST explicitly marks which host uses container name."""
        monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
        monkeypatch.setenv("CF_LOCAL_HOST", "nas")
        host = Host(address="192.168.1.6")
        result = _get_glances_address("nas", host, "glances")
        assert result == "glances"

    def test_returns_host_address_for_non_local_host(self, monkeypatch: pytest.MonkeyPatch) -> None:
        """Non-local hosts use their IP address even in container mode."""
        monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
        monkeypatch.setenv("CF_LOCAL_HOST", "nas")
        host = Host(address="192.168.1.2")
        result = _get_glances_address("nuc", host, "glances")
        assert result == "192.168.1.2"

    def test_fallback_to_is_local_detection(self, monkeypatch: pytest.MonkeyPatch) -> None:
        """Without CF_LOCAL_HOST, falls back to is_local detection."""
        monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
        monkeypatch.delenv("CF_LOCAL_HOST", raising=False)
        # Use localhost which should be detected as local
        host = Host(address="localhost")
        result = _get_glances_address("local", host, "glances")
        assert result == "glances"

    def test_remote_host_not_affected_by_container_mode(
        self, monkeypatch: pytest.MonkeyPatch
    ) -> None:
        """Remote hosts always use their IP, even in container mode."""
        monkeypatch.setenv("CF_WEB_STACK", "compose-farm")
        monkeypatch.delenv("CF_LOCAL_HOST", raising=False)
        host = Host(address="192.168.1.100")
        result = _get_glances_address("remote", host, "glances")
        assert result == "192.168.1.100"

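Read together, the six tests pin down a resolution order. A sketch of that logic, reconstructed from the tests rather than copied from the shipped function; is_local is the executor helper imported elsewhere in this change:

import os

from compose_farm.config import Host
from compose_farm.executor import is_local

def _glances_address_sketch(host_name: str, host: Host, glances_stack: str | None) -> str:
    # Outside a container (no CF_WEB_STACK), or with no glances stack
    # configured, always talk to the host's own address.
    if not os.environ.get("CF_WEB_STACK") or not glances_stack:
        return host.address
    # CF_LOCAL_HOST explicitly names the host whose glances runs next to us.
    local = os.environ.get("CF_LOCAL_HOST")
    if local is not None:
        return glances_stack if host_name == local else host.address
    # Fallback: detect locality from the address itself.
    return glances_stack if is_local(host.address) else host.address
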
@@ -14,6 +14,7 @@ from compose_farm.executor import CommandResult
from compose_farm.operations import (
    _migrate_stack,
    build_discovery_results,
    build_up_cmd,
)


@@ -95,23 +96,47 @@ class TestMigrationCommands:
        assert pull_idx < build_idx


class TestBuildUpCmd:
    """Tests for build_up_cmd helper."""

    def test_basic(self) -> None:
        """Basic up command without flags."""
        assert build_up_cmd() == "up -d"

    def test_with_pull(self) -> None:
        """Up command with pull flag."""
        assert build_up_cmd(pull=True) == "up -d --pull always"

    def test_with_build(self) -> None:
        """Up command with build flag."""
        assert build_up_cmd(build=True) == "up -d --build"

    def test_with_pull_and_build(self) -> None:
        """Up command with both flags."""
        assert build_up_cmd(pull=True, build=True) == "up -d --pull always --build"

    def test_with_service(self) -> None:
        """Up command targeting a specific service."""
        assert build_up_cmd(service="web") == "up -d web"

    def test_with_all_options(self) -> None:
        """Up command with all options."""
        assert (
            build_up_cmd(pull=True, build=True, service="web") == "up -d --pull always --build web"
        )

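The assertions above fully pin down the helper's output. One implementation consistent with all six of them (a sketch, not necessarily the shipped code):

def build_up_cmd(*, pull: bool = False, build: bool = False, service: str | None = None) -> str:
    # Compose the `docker compose up` argument string the tests expect.
    parts = ["up", "-d"]
    if pull:
        parts += ["--pull", "always"]
    if build:
        parts.append("--build")
    if service:
        parts.append(service)
    return " ".join(parts)
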
class TestUpdateCommandSequence:
    """Tests for update command sequence."""

    def test_update_command_sequence_includes_build(self) -> None:
        """Update command should use pull --ignore-buildable and build."""
        # This is a static check of the command sequence in lifecycle.py
        # The actual command sequence is defined in the update function

    def test_update_delegates_to_up_with_pull_and_build(self) -> None:
        """Update command should delegate to up with pull=True and build=True."""
        source = inspect.getsource(lifecycle.update)

        # Verify the command sequence includes pull --ignore-buildable
        assert "pull --ignore-buildable" in source
        # Verify build is included
        assert '"build"' in source or "'build'" in source
        # Verify the sequence is pull, build, down, up
        assert "down" in source
        assert "up -d" in source
        # Verify update calls up with pull=True and build=True
        assert "up(" in source
        assert "pull=True" in source
        assert "build=True" in source


class TestBuildDiscoveryResults:

@@ -4,7 +4,7 @@ Run with: uv run pytest tests/web/test_htmx_browser.py -v --no-cov

CDN assets are cached locally (in .pytest_cache/vendor/) to eliminate network
variability. If a test fails with "Uncached CDN request", add the URL to
compose_farm.web.cdn.CDN_ASSETS.
src/compose_farm/web/vendor-assets.json.
"""

from __future__ import annotations
@@ -90,7 +90,7 @@ def page(page: Page, vendor_cache: Path) -> Page:
            return
        # Uncached CDN request - abort with helpful error
        route.abort("failed")
        msg = f"Uncached CDN request: {url}\n\nAdd this URL to CDN_ASSETS in tests/web/test_htmx_browser.py"
        msg = f"Uncached CDN request: {url}\n\nAdd this URL to src/compose_farm/web/vendor-assets.json"
        raise RuntimeError(msg)

    page.route(re.compile(r"https://(cdn\.jsdelivr\.net|unpkg\.com)/.*"), handle_cdn)

2
uv.lock
generated
@@ -234,6 +234,7 @@ source = { editable = "." }
dependencies = [
    { name = "asyncssh" },
    { name = "pydantic" },
    { name = "python-dotenv" },
    { name = "pyyaml" },
    { name = "rich" },
    { name = "typer" },
@@ -274,6 +275,7 @@ requires-dist = [
    { name = "humanize", marker = "extra == 'web'", specifier = ">=4.0.0" },
    { name = "jinja2", marker = "extra == 'web'", specifier = ">=3.1.0" },
    { name = "pydantic", specifier = ">=2.0.0" },
    { name = "python-dotenv", specifier = ">=1.0.0" },
    { name = "pyyaml", specifier = ">=6.0" },
    { name = "rich", specifier = ">=13.0.0" },
    { name = "typer", specifier = ">=0.9.0" },

@@ -16,6 +16,7 @@ extra_javascript = ["javascripts/video-fix.js"]
nav = [
    { "Home" = "index.md" },
    { "Getting Started" = "getting-started.md" },
    { "Docker Deployment" = "docker-deployment.md" },
    { "Configuration" = "configuration.md" },
    { "Commands" = "commands.md" },
    { "Web UI" = "web-ui.md" },