Compare commits


7 Commits
init ... v1.8.0

Author SHA1 Message Date
Bas Nijholt
2af48b2642 feat(web): add Glances integration for host resource stats (#124) 2025-12-28 08:37:57 +01:00
Bas Nijholt
f69993eac8 web: Rename command palette entry to "GitHub Repo" (#134)
Makes the entry searchable by typing "github" in the command palette.
2025-12-28 07:06:32 +01:00
Bas Nijholt
9bdcd143cf Prioritize dedicated SSH key over agent (#133) 2025-12-24 22:34:53 -08:00
Bas Nijholt
9230e12eb0 fix: Make SSH agent socket optional in docker-compose.yml (#132) 2025-12-24 12:22:01 -08:00
Bas Nijholt
2a923e6e81 fix: Include field name in config validation error messages (#131)
Previously, Pydantic validation errors like "Extra inputs are not
permitted" didn't show which field caused the error. Now the error
message includes the field location (e.g., "unknown_key: Extra inputs
are not permitted").
2025-12-22 22:35:19 -08:00
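The fix in #131 follows Pydantic v2's error structure, where each error carries a `loc` tuple identifying the offending field. A minimal sketch (not compose-farm's actual code; `FarmConfig` is a hypothetical stand-in model):

```python
# Hedged sketch: Pydantic v2 validation errors carry a `loc` tuple; joining it
# into the message surfaces the offending field, as the commit describes.
from pydantic import BaseModel, ConfigDict, ValidationError


class FarmConfig(BaseModel):  # hypothetical stand-in model
    model_config = ConfigDict(extra="forbid")
    compose_dir: str


def format_errors(exc: ValidationError) -> list[str]:
    """Prefix each error message with its dotted field location."""
    return [
        ".".join(str(part) for part in err["loc"]) + ": " + err["msg"]
        for err in exc.errors()
    ]


try:
    FarmConfig(compose_dir="/opt/stacks", unknown_key=1)
except ValidationError as exc:
    for line in format_errors(exc):
        print(line)  # e.g. "unknown_key: Extra inputs are not permitted"
```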
Bas Nijholt
5f2e081298 perf: Batch snapshot collection to 1 SSH call per host (#130)
## Summary

Optimize `cf refresh` SSH calls from O(stacks) to O(hosts):
- Discovery: 1 SSH call per host (unchanged)
- Snapshots: 1 SSH call per host (was 1 per stack)

For 50 stacks across 4 hosts: 54 → 8 SSH calls.

## Changes

**Performance:**
- Use `docker ps` + `docker image inspect` instead of `docker compose images` per stack
- Batch snapshot collection by host in `collect_stacks_entries_on_host()`

**Architecture:**
- Add `build_discovery_results()` to `operations.py` (business logic)
- Keep progress bar wrapper in `cli/management.py` (presentation)
- Remove dead code: `discover_all_stacks_on_all_hosts()`, `collect_all_stacks_entries()`
2025-12-22 22:19:32 -08:00
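The call-count arithmetic in the summary above checks out (assuming, as stated, that discovery was already one call per host in both versions):

```python
# Verify the "54 -> 8 SSH calls" claim for 50 stacks across 4 hosts.
stacks, hosts = 50, 4
before = hosts + stacks  # 1 discovery call per host + 1 snapshot call per stack
after = hosts + hosts    # 1 discovery call per host + 1 snapshot call per host
assert (before, after) == (54, 8)
```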
Bas Nijholt
6fbc7430cb perf: Optimize stray detection to use 1 SSH call per host (#129)
* perf: Optimize stray detection to use 1 SSH call per host

Previously, stray detection checked each stack on each host individually,
resulting in (stacks * hosts) SSH calls. For 50 stacks across 4 hosts,
this meant ~200 parallel SSH connections, causing "Connection lost" errors.

Now queries each host once for all running compose projects using:
  docker ps --format '{{.Label "com.docker.compose.project"}}' | sort -u

This reduces SSH calls from ~200 to just 4 (one per host).

Changes:
- Add get_running_stacks_on_host() in executor.py
- Add discover_all_stacks_on_all_hosts() in operations.py
- Update _discover_stacks_full() to use the batch approach

* Remove unused function and add tests

- Remove discover_stack_on_all_hosts() which is no longer used
- Add tests for get_running_stacks_on_host()
- Add tests for discover_all_stacks_on_all_hosts()
  - Verifies it returns correct StackDiscoveryResult
  - Verifies stray detection works
  - Verifies it makes only 1 call per host (not per stack)
2025-12-22 12:09:59 -08:00
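The one-query-per-host approach described in #129 can be sketched as follows. This is a hedged illustration, not the project's code: the real implementation lives in `executor.py`/`operations.py`, and the wrapper and helper names here are hypothetical; only the `docker ps` command line is taken verbatim from the commit message.

```python
# Hedged sketch of batched stray detection: one SSH call per host lists every
# running compose project, instead of one call per stack per host.
import subprocess

# The command from the commit message: print each container's compose project
# label, deduplicated.
LIST_PROJECTS_CMD = (
    "docker ps --format '{{.Label \"com.docker.compose.project\"}}' | sort -u"
)


def parse_projects(output: str) -> set[str]:
    """Parse command output into a set of non-empty project names."""
    return {line.strip() for line in output.splitlines() if line.strip()}


def get_running_stacks_on_host(host: str) -> set[str]:
    """One SSH round-trip per host (hypothetical wrapper, not project code)."""
    out = subprocess.run(
        ["ssh", host, LIST_PROJECTS_CMD],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_projects(out)


def find_strays(assigned: dict[str, set[str]]) -> dict[str, set[str]]:
    """Map host -> stacks running there but not assigned to it in the config."""
    return {
        host: get_running_stacks_on_host(host) - stacks
        for host, stacks in assigned.items()
    }
```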
68 changed files with 3729 additions and 1131 deletions

.gitignore
View File

@@ -37,6 +37,7 @@ ENV/
 .coverage
 .pytest_cache/
 htmlcov/
+.code/
 # Local config (don't commit real configs)
 compose-farm.yaml
@@ -45,3 +46,4 @@ coverage.xml
 .env
 homepage/
 site/
+.playwright-mcp/

View File

@@ -21,7 +21,7 @@ repos:
   - repo: https://github.com/astral-sh/ruff-pre-commit
     rev: v0.14.9
     hooks:
-      - id: ruff
+      - id: ruff-check
         args: [--fix]
       - id: ruff-format

View File

@@ -43,8 +43,8 @@ A minimal CLI tool to run Docker Compose commands across multiple hosts via SSH.
 - [What Compose Farm doesn't do](#what-compose-farm-doesnt-do)
 - [Installation](#installation)
 - [SSH Authentication](#ssh-authentication)
-  - [SSH Agent (default)](#ssh-agent-default)
-  - [Dedicated SSH Key (recommended for Docker/Web UI)](#dedicated-ssh-key-recommended-for-dockerweb-ui)
+  - [SSH Agent](#ssh-agent)
+  - [Dedicated SSH Key (default for Docker)](#dedicated-ssh-key-default-for-docker)
 - [Configuration](#configuration)
 - [Single-host example](#single-host-example)
 - [Multi-host example](#multi-host-example)
@@ -54,6 +54,7 @@ A minimal CLI tool to run Docker Compose commands across multiple hosts via SSH.
 - [CLI `--help` Output](#cli---help-output)
 - [Auto-Migration](#auto-migration)
 - [Traefik Multihost Ingress (File Provider)](#traefik-multihost-ingress-file-provider)
+- [Host Resource Monitoring (Glances)](#host-resource-monitoring-glances)
 - [Comparison with Alternatives](#comparison-with-alternatives)
 - [License](#license)
@@ -208,9 +209,9 @@ cp .envrc.example .envrc && direnv allow
 Compose Farm uses SSH to run commands on remote hosts. There are two authentication methods:
-### SSH Agent (default)
+### SSH Agent
-Works out of the box if you have an SSH agent running with your keys loaded:
+Works out of the box when running locally if you have an SSH agent running with your keys loaded:
 ```bash
 # Verify your agent has keys
@@ -220,9 +221,9 @@ ssh-add -l
 cf up --all
 ```
-### Dedicated SSH Key (recommended for Docker/Web UI)
+### Dedicated SSH Key (default for Docker)
-When running compose-farm in Docker, the SSH agent connection can be lost (e.g., after container restart). The `cf ssh` command sets up a dedicated key that persists:
+When running in Docker, SSH agent sockets are ephemeral and can be lost after container restarts. The `cf ssh` command sets up a dedicated key that persists:
 ```bash
 # Generate key and copy to all configured hosts
@@ -250,6 +251,13 @@ volumes:
   - cf-ssh:${CF_HOME:-/root}/.ssh
 ```
+**Option 3: SSH agent forwarding** - if you prefer using your host's ssh-agent
+
+```yaml
+volumes:
+  - ${SSH_AUTH_SOCK}:/ssh-agent:ro
+```
+Note: Requires `SSH_AUTH_SOCK` environment variable to be set. The socket path is ephemeral and changes across sessions.
 Run setup once after starting the container (while the SSH agent still works):
 ```bash
@@ -342,18 +350,14 @@ When you run `cf up autokuma`, it starts the stack on all hosts in parallel. Mul
 Compose Farm includes a `config` subcommand to help manage configuration files:
 ```bash
-cf config init              # Create a new config file with documented example
-cf config init --discover   # Auto-detect compose files and interactively create config
-cf config show              # Display current config with syntax highlighting
-cf config path              # Print the config file path (useful for scripting)
-cf config validate          # Validate config syntax and schema
-cf config edit              # Open config in $EDITOR
-cf config example --list    # List available example templates
-cf config example whoami    # Generate sample stack files
-cf config example full      # Generate complete Traefik + whoami setup
+cf config init      # Create a new config file with documented example
+cf config show      # Display current config with syntax highlighting
+cf config path      # Print the config file path (useful for scripting)
+cf config validate  # Validate config syntax and schema
+cf config edit      # Open config in $EDITOR
 ```
-Use `cf config init` to get started with a template, or `cf config init --discover` if you already have compose files.
+Use `cf config init` to get started with a fully documented template.
## Usage
@@ -999,7 +1003,6 @@ Full `--help` output for each command. See the [Usage](#usage) table above for a
 │ validate Validate the config file syntax and schema. │
 │ symlink Create a symlink from the default config location to a config │
 │ file. │
-│ example Generate example stack files from built-in templates. │
 ╰──────────────────────────────────────────────────────────────────────────────╯
 ```
@@ -1323,6 +1326,52 @@ Update your Traefik config to use directory watching instead of a single file:
 - --providers.file.watch=true
 ```
+## Host Resource Monitoring (Glances)
+
+The web UI can display real-time CPU, memory, and load stats for all configured hosts. This uses [Glances](https://nicolargo.github.io/glances/), a cross-platform system monitoring tool with a REST API.
+
+**Setup**
+
+1. Deploy a Glances stack that runs on all hosts:
+
+```yaml
+# glances/compose.yaml
+name: glances
+services:
+  glances:
+    image: nicolargo/glances:latest
+    container_name: glances
+    restart: unless-stopped
+    pid: host
+    ports:
+      - "61208:61208"
+    environment:
+      - GLANCES_OPT=-w # Enable web server mode
+    volumes:
+      - /var/run/docker.sock:/var/run/docker.sock:ro
+```
+
+2. Add it to your config as a multi-host stack:
+
+```yaml
+# compose-farm.yaml
+stacks:
+  glances: all # Runs on every host
+glances_stack: glances # Enables resource stats in web UI
+```
+
+3. Deploy: `cf up glances`
+
+The web UI dashboard will now show a "Host Resources" section with live stats from all hosts. Hosts where Glances is unreachable show an error indicator.
+
+**Live Stats Page**
+
+With Glances configured, a Live Stats page (`/live-stats`) shows all running containers across all hosts:
+
+- **Columns**: Stack, Service, Host, Image, Status, Uptime, CPU, Memory, Net I/O
+- **Features**: Sorting, filtering, live updates (no SSH required—uses Glances REST API)
## Comparison with Alternatives
There are many ways to run containers on multiple hosts. Here is where Compose Farm sits:

View File

@@ -6,7 +6,6 @@ services:
     # Defaults to root (0:0) for backwards compatibility
     user: "${CF_UID:-0}:${CF_GID:-0}"
     volumes:
-      - ${SSH_AUTH_SOCK}:/ssh-agent:ro
       # Compose directory (contains compose files AND compose-farm.yaml config)
       - ${CF_COMPOSE_DIR:-/opt/stacks}:${CF_COMPOSE_DIR:-/opt/stacks}
       # SSH keys for passwordless auth (generated by `cf ssh setup`)
@@ -15,6 +14,8 @@ services:
       - ${CF_SSH_DIR:-~/.ssh/compose-farm}:${CF_HOME:-/root}/.ssh/compose-farm
       # Option 2: Named volume - managed by Docker, shared between services
       # - cf-ssh:${CF_HOME:-/root}/.ssh
+      # Option 3: SSH agent forwarding (uncomment if using ssh-agent)
+      # - ${SSH_AUTH_SOCK}:/ssh-agent:ro
     environment:
       - SSH_AUTH_SOCK=/ssh-agent
       # Config file path (state stored alongside it)
@@ -31,13 +32,14 @@ services:
     # Run as current user to preserve file ownership on mounted volumes
     user: "${CF_UID:-0}:${CF_GID:-0}"
     volumes:
-      - ${SSH_AUTH_SOCK}:/ssh-agent:ro
       - ${CF_COMPOSE_DIR:-/opt/stacks}:${CF_COMPOSE_DIR:-/opt/stacks}
       # SSH keys - use the SAME option as cf service above
       # Option 1: Host path (default)
       - ${CF_SSH_DIR:-~/.ssh/compose-farm}:${CF_HOME:-/root}/.ssh/compose-farm
       # Option 2: Named volume
       # - cf-ssh:${CF_HOME:-/root}/.ssh
+      # Option 3: SSH agent forwarding (uncomment if using ssh-agent)
+      # - ${SSH_AUTH_SOCK}:/ssh-agent:ro
       # XDG config dir for backups and image digest logs (persists across restarts)
       - ${CF_XDG_CONFIG:-~/.config/compose-farm}:${CF_HOME:-/root}/.config/compose-farm
     environment:

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:611d6fef767a8e0755367bf0c008dad016f38fa8b3be2362825ef7ef6ec2ec1a
size 2444902

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1e851879acc99234628abce0f8dadeeaf500effe4f78bebc63c4b17a0ae092f1
size 900800

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4135888689a10c5ae2904825d98f2a6d215c174a4bd823e25761f619590f04ff
size 3990104

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:87739cd6f6576a81100392d8d1e59d3e776fecc8f0721a31332df89e7fc8593d
size 5814274

View File

@@ -578,10 +578,6 @@ cf traefik-file plex jellyfin -o /opt/traefik/cf.yml
 Manage configuration files.
-<video autoplay loop muted playsinline>
-  <source src="/assets/config-example.webm" type="video/webm">
-</video>
 ```bash
 cf config COMMAND
 ```
@@ -596,19 +592,17 @@ cf config COMMAND
 | `validate` | Validate syntax and schema |
 | `edit` | Open in $EDITOR |
 | `symlink` | Create symlink from default location |
-| `example` | Generate example stack files |
 **Options by subcommand:**
 | Subcommand | Options |
 |------------|---------|
-| `init` | `--path/-p PATH`, `--force/-f`, `--discover/-d` |
+| `init` | `--path/-p PATH`, `--force/-f` |
 | `show` | `--path/-p PATH`, `--raw/-r` |
 | `edit` | `--path/-p PATH` |
 | `path` | `--path/-p PATH` |
 | `validate` | `--path/-p PATH` |
 | `symlink` | `--force/-f` |
-| `example` | `--list/-l`, `--output/-o PATH`, `--force/-f` |
 **Examples:**
**Examples:**
@@ -616,9 +610,6 @@ cf config COMMAND
 # Create config at default location
 cf config init
-
-# Auto-discover compose files and interactively create config
-cf config init --discover
 # Create config at custom path
 cf config init --path /opt/compose-farm/config.yaml
@@ -642,18 +633,6 @@ cf config symlink
 # Create symlink to specific file
 cf config symlink /opt/compose-farm/config.yaml
-
-# List available example templates
-cf config example --list
-
-# Generate a sample stack (whoami, nginx, postgres)
-cf config example whoami
-
-# Generate complete Traefik + whoami setup
-cf config example full
-
-# Generate example in specific directory
-cf config example nginx --output /opt/compose
 ```
---

View File

@@ -27,7 +27,6 @@ python docs/demos/cli/record.py quickstart migration
 | `update.tape` | `cf update` |
 | `migration.tape` | Service migration |
 | `apply.tape` | `cf apply` |
-| `config-example.tape` | `cf config example` - generate example stacks |
 ## Output

View File

@@ -1,110 +0,0 @@
# Config Example Demo
# Shows cf config example command
Output docs/assets/config-example.gif
Output docs/assets/config-example.webm
Set Shell "bash"
Set FontSize 14
Set Width 900
Set Height 600
Set Theme "Catppuccin Mocha"
Set FontFamily "FiraCode Nerd Font"
Set TypingSpeed 50ms
Env BAT_PAGING "always"
Type "# Generate example stacks with cf config example"
Enter
Sleep 500ms
Type "# List available templates"
Enter
Sleep 500ms
Type "cf config example --list"
Enter
Wait+Screen /Usage:/
Sleep 2s
Type "# Create a directory for our stacks"
Enter
Sleep 500ms
Type "mkdir -p ~/compose && cd ~/compose"
Enter
Wait
Sleep 500ms
Type "# Generate the full Traefik + whoami setup"
Enter
Sleep 500ms
Type "cf config example full"
Enter
Wait
Sleep 2s
Type "# See what was created"
Enter
Sleep 500ms
Type "tree ."
Enter
Wait
Sleep 2s
Type "# View the generated config"
Enter
Sleep 500ms
Type "bat compose-farm.yaml"
Enter
Sleep 3s
Type "q"
Sleep 500ms
Type "# View the traefik compose file"
Enter
Sleep 500ms
Type "bat traefik/compose.yaml"
Enter
Sleep 3s
Type "q"
Sleep 500ms
Type "# Validate the config"
Enter
Sleep 500ms
Type "cf check --local"
Enter
Wait
Sleep 2s
Type "# Create the Docker network"
Enter
Sleep 500ms
Type "cf init-network"
Enter
Wait
Sleep 1s
Type "# Deploy traefik and whoami"
Enter
Sleep 500ms
Type "cf up traefik whoami"
Enter
Wait
Sleep 3s
Type "# Verify it's running"
Enter
Sleep 500ms
Type "cf ps"
Enter
Wait
Sleep 2s

View File

@@ -21,24 +21,37 @@ import uvicorn
from compose_farm.config import Config as CFConfig
from compose_farm.config import load_config
from compose_farm.executor import (
get_container_compose_labels as _original_get_compose_labels,
)
from compose_farm.glances import ContainerStats
from compose_farm.glances import fetch_container_stats as _original_fetch_container_stats
from compose_farm.state import load_state as _original_load_state
from compose_farm.web.app import create_app
from compose_farm.web.cdn import CDN_ASSETS, ensure_vendor_cache
# NOTE: Do NOT import create_app here - it must be imported AFTER patches are applied
# to ensure the patched get_config is used by all route modules
if TYPE_CHECKING:
from collections.abc import Generator
from playwright.sync_api import BrowserContext, Page, Route
# Stacks to exclude from demo recordings (exact match)
DEMO_EXCLUDE_STACKS = {"arr"}
# Substrings to exclude from demo recordings (case-insensitive)
DEMO_EXCLUDE_PATTERNS = {"arr", "vpn", "tash"}
def _should_exclude(name: str) -> bool:
"""Check if a stack/container name should be excluded from demo."""
name_lower = name.lower()
return any(pattern in name_lower for pattern in DEMO_EXCLUDE_PATTERNS)
def _get_filtered_config() -> CFConfig:
"""Load config but filter out excluded stacks."""
config = load_config()
filtered_stacks = {
name: host for name, host in config.stacks.items() if name not in DEMO_EXCLUDE_STACKS
name: host for name, host in config.stacks.items() if not _should_exclude(name)
}
return CFConfig(
compose_dir=config.compose_dir,
@@ -46,6 +59,7 @@ def _get_filtered_config() -> CFConfig:
stacks=filtered_stacks,
traefik_file=config.traefik_file,
traefik_stack=config.traefik_stack,
glances_stack=config.glances_stack,
config_path=config.config_path,
)
@@ -53,7 +67,37 @@ def _get_filtered_config() -> CFConfig:
def _get_filtered_state(config: CFConfig) -> dict[str, str | list[str]]:
"""Load state but filter out excluded stacks."""
state = _original_load_state(config)
return {name: host for name, host in state.items() if name not in DEMO_EXCLUDE_STACKS}
return {name: host for name, host in state.items() if not _should_exclude(name)}
async def _filtered_fetch_container_stats(
host_name: str,
host_address: str,
port: int = 61208,
request_timeout: float = 10.0,
) -> tuple[list[ContainerStats] | None, str | None]:
"""Fetch container stats but filter out excluded containers."""
containers, error = await _original_fetch_container_stats(
host_name, host_address, port, request_timeout
)
if containers:
# Filter by container name (stack is empty at this point)
containers = [c for c in containers if not _should_exclude(c.name)]
return containers, error
async def _filtered_get_compose_labels(
config: CFConfig,
host_name: str,
) -> dict[str, tuple[str, str]]:
"""Get compose labels but filter out excluded stacks."""
labels = await _original_get_compose_labels(config, host_name)
# Filter out containers whose stack (project) name should be excluded
return {
name: (stack, service)
for name, (stack, service) in labels.items()
if not _should_exclude(stack)
}
@pytest.fixture(scope="session")
@@ -84,19 +128,23 @@ def server_url() -> Generator[str, None, None]:
# Patch at source module level so all callers get filtered versions
patches = [
# Patch load_state at source - all functions calling it get filtered state
# Patch load_config at source - get_config() calls this internally
patch("compose_farm.config.load_config", _get_filtered_config),
# Patch load_state at source and where imported
patch("compose_farm.state.load_state", _get_filtered_state),
# Patch get_config where imported
patch("compose_farm.web.routes.pages.get_config", _get_filtered_config),
patch("compose_farm.web.routes.api.get_config", _get_filtered_config),
patch("compose_farm.web.routes.actions.get_config", _get_filtered_config),
patch("compose_farm.web.app.get_config", _get_filtered_config),
patch("compose_farm.web.ws.get_config", _get_filtered_config),
patch("compose_farm.web.routes.pages.load_state", _get_filtered_state),
# Patch container fetch to filter out excluded containers (Live Stats page)
patch("compose_farm.glances.fetch_container_stats", _filtered_fetch_container_stats),
# Patch compose labels to filter out excluded stacks
patch("compose_farm.executor.get_container_compose_labels", _filtered_get_compose_labels),
]
for p in patches:
p.start()
# Import create_app AFTER patches are started so route modules see patched get_config
from compose_farm.web.app import create_app # noqa: PLC0415
with socket.socket() as s:
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]
@@ -160,6 +208,7 @@ def recording_context(
if url.startswith(url_prefix):
route.fulfill(status=200, content_type=content_type, body=filepath.read_bytes())
return
print(f"UNCACHED CDN request: {url}")
route.abort("failed")
context.route(re.compile(r"https://(cdn\.jsdelivr\.net|unpkg\.com)/.*"), handle_cdn)
@@ -176,6 +225,35 @@ def recording_page(recording_context: BrowserContext) -> Generator[Page, None, N
page.close()
@pytest.fixture
def wide_recording_context(
browser: Any, # pytest-playwright's browser fixture
recording_output_dir: Path,
) -> Generator[BrowserContext, None, None]:
"""Browser context with wider viewport for demos needing more horizontal space.
NOTE: This fixture does NOT use CDN interception (unlike recording_context).
CDN interception was causing inline scripts from containers.html to be
removed from the DOM, likely due to Tailwind's browser plugin behavior.
"""
context = browser.new_context(
viewport={"width": 1920, "height": 1080},
record_video_dir=str(recording_output_dir),
record_video_size={"width": 1920, "height": 1080},
)
yield context
context.close()
@pytest.fixture
def wide_recording_page(wide_recording_context: BrowserContext) -> Generator[Page, None, None]:
"""Page with wider viewport for demos needing more horizontal space."""
page = wide_recording_context.new_page()
yield page
page.close()
# Demo helper functions

View File

@@ -0,0 +1,85 @@
"""Demo: Live Stats page.
Records a ~20 second demo showing:
- Navigating to Live Stats via command palette
- Container table with real-time stats
- Filtering containers
- Sorting by different columns
- Auto-refresh countdown
Run: pytest docs/demos/web/demo_live_stats.py -v --no-cov
"""
from __future__ import annotations
from typing import TYPE_CHECKING
import pytest
from conftest import (
open_command_palette,
pause,
slow_type,
wait_for_sidebar,
)
if TYPE_CHECKING:
from playwright.sync_api import Page
@pytest.mark.browser # type: ignore[misc]
def test_demo_live_stats(wide_recording_page: Page, server_url: str) -> None:
"""Record Live Stats page demo."""
page = wide_recording_page
# Start on dashboard
page.goto(server_url)
wait_for_sidebar(page)
pause(page, 1000)
# Navigate to Live Stats via command palette
open_command_palette(page)
pause(page, 400)
slow_type(page, "#cmd-input", "live", delay=100)
pause(page, 500)
page.keyboard.press("Enter")
page.wait_for_url("**/live-stats", timeout=5000)
# Wait for containers to load (may take ~10s on first load due to SSH)
page.wait_for_selector("#container-rows tr:not(:has(.loading))", timeout=30000)
pause(page, 2000) # Let viewer see the full table with timer
# Demonstrate filtering
slow_type(page, "#filter-input", "grocy", delay=100)
pause(page, 1500) # Show filtered results
# Clear filter
page.fill("#filter-input", "")
pause(page, 1000)
# Sort by memory (click header)
page.click("th:has-text('Mem')")
pause(page, 1500)
# Sort by CPU
page.click("th:has-text('CPU')")
pause(page, 1500)
# Sort by host
page.click("th:has-text('Host')")
pause(page, 1500)
# Watch auto-refresh timer count down
pause(page, 3500) # Wait for refresh to happen
# Hover on action menu to show pause behavior
action_btn = page.locator('button[onclick^="openActionMenu"]').first
action_btn.scroll_into_view_if_needed()
action_btn.hover()
pause(page, 2000) # Show paused state (timer shows ⏸) and action menu
# Move away to close menu and resume refresh
page.locator("h2").first.hover() # Move to header
pause(page, 3500) # Watch countdown resume and refresh happen
# Final pause
pause(page, 1000)

View File

@@ -37,6 +37,7 @@ DEMOS = [
     "workflow",
     "console",
     "shell",
+    "live_stats",
 ]
# High-quality ffmpeg settings for VP8 encoding

View File

@@ -149,24 +149,6 @@ cd /opt/stacks
 cf config init
 ```
-**Already have compose files?** Use `--discover` to auto-detect them and interactively build your config:
-
-```bash
-cf config init --discover
-```
-
-This scans for directories containing compose files, lets you select which stacks to include, and generates a ready-to-use config.
-
-**Starting fresh?** Generate example stacks to learn from:
-
-```bash
-# List available examples
-cf config example --list
-
-# Generate a complete Traefik + whoami setup
-cf config example full
-```
 Alternatively, use `~/.config/compose-farm/compose-farm.yaml` for a global config. You can also symlink a working directory config to the global location:
 ```bash

View File

@@ -51,10 +51,32 @@ Press `Ctrl+K` (or `Cmd+K` on macOS) to open the command palette. Use fuzzy sear
 ### Dashboard (`/`)
 - Stack overview with status indicators
-- Host statistics
+- Host statistics (CPU, memory, disk, load via Glances)
 - Pending operations (migrations, orphaned stacks)
 - Quick actions via command palette
+### Live Stats (`/live-stats`)
+
+Real-time container monitoring across all hosts, powered by [Glances](https://nicolargo.github.io/glances/).
+
+- **Live metrics**: CPU, memory, network I/O for every container
+- **Auto-refresh**: Updates every 3 seconds (pauses when dropdown menus are open)
+- **Filtering**: Type to filter containers by name, stack, host, or image
+- **Sorting**: Click column headers to sort by any metric
+- **Update detection**: Shows when container images have updates available
+
+<video autoplay loop muted playsinline>
+  <source src="/assets/web-live_stats.webm" type="video/webm">
+</video>
+
+#### Requirements
+
+Live Stats requires Glances to be deployed on all hosts:
+
+1. Add `glances_stack: glances` to your `compose-farm.yaml`
+2. Deploy a Glances stack that runs on all hosts (see [example](https://github.com/basnijholt/compose-farm/tree/main/examples/glances))
+3. Glances must expose its REST API on port 61208
 ### Stack Detail (`/stack/{name}`)
 - Compose file editor (Monaco)

View File

@@ -53,6 +53,7 @@ web = [
     "fastapi[standard]>=0.109.0",
     "jinja2>=3.1.0",
     "websockets>=12.0",
+    "humanize>=4.0.0",
 ]
[project.urls]

View File

@@ -37,12 +37,6 @@ _RawOption = Annotated[
bool,
typer.Option("--raw", "-r", help="Output raw file contents (for copy-paste)."),
]
_DiscoverOption = Annotated[
bool,
typer.Option(
"--discover", "-d", help="Auto-detect compose files and interactively select stacks."
),
]
def _get_editor() -> str:
@@ -74,117 +68,6 @@ def _get_config_file(path: Path | None) -> Path | None:
return config_path.resolve() if config_path else None
def _generate_discovered_config(
compose_dir: Path,
hostname: str,
host_address: str,
selected_stacks: list[str],
) -> str:
"""Generate config YAML from discovered stacks."""
import yaml # noqa: PLC0415
config_data = {
"compose_dir": str(compose_dir),
"hosts": {hostname: host_address},
"stacks": dict.fromkeys(selected_stacks, hostname),
}
header = """\
# Compose Farm configuration
# Documentation: https://github.com/basnijholt/compose-farm
#
# Generated by: cf config init --discover
"""
return header + yaml.dump(config_data, default_flow_style=False, sort_keys=False)
def _interactive_stack_selection(stacks: list[str]) -> list[str]:
"""Interactively select stacks to include."""
from rich.prompt import Confirm, Prompt # noqa: PLC0415
console.print("\n[bold]Found stacks:[/bold]")
for stack in stacks:
console.print(f" [cyan]{stack}[/cyan]")
console.print()
# Fast path: include all
if Confirm.ask(f"Include all {len(stacks)} stacks?", default=True):
return stacks
# Let user specify which to exclude
console.print(
"\n[dim]Enter stack names to exclude (comma-separated), or press Enter to select individually:[/dim]"
)
exclude_input = Prompt.ask("Exclude", default="")
if exclude_input.strip():
exclude = {s.strip() for s in exclude_input.split(",")}
return [s for s in stacks if s not in exclude]
# Fall back to individual selection
console.print()
return [
stack for stack in stacks if Confirm.ask(f" Include [cyan]{stack}[/cyan]?", default=True)
]
def _run_discovery_flow() -> str | None:
"""Run the interactive discovery flow and return generated config content."""
import socket # noqa: PLC0415
from rich.prompt import Prompt # noqa: PLC0415
console.print("[bold]Compose Farm Config Discovery[/bold]")
console.print("[dim]This will scan for compose files and generate a config.[/dim]\n")
# Step 1: Get compose directory
default_dir = Path.cwd()
compose_dir_str = Prompt.ask(
"Compose directory",
default=str(default_dir),
)
compose_dir = Path(compose_dir_str).expanduser().resolve()
if not compose_dir.exists():
print_error(f"Directory does not exist: {compose_dir}")
return None
if not compose_dir.is_dir():
print_error(f"Path is not a directory: {compose_dir}")
return None
# Step 2: Discover stacks
from compose_farm.config import discover_compose_dirs # noqa: PLC0415
console.print(f"\n[dim]Scanning {compose_dir}...[/dim]")
stacks = discover_compose_dirs(compose_dir)
if not stacks:
print_error(f"No compose files found in {compose_dir}")
console.print("[dim]Each stack should be in a subdirectory with a compose.yaml file.[/dim]")
return None
console.print(f"[green]Found {len(stacks)} stack(s)[/green]")
# Step 3: Interactive selection
selected_stacks = _interactive_stack_selection(stacks)
if not selected_stacks:
console.print("\n[yellow]No stacks selected.[/yellow]")
return None
# Step 4: Get hostname and address
default_hostname = socket.gethostname()
hostname = Prompt.ask("\nHost name", default=default_hostname)
host_address = Prompt.ask("Host address", default="localhost")
# Step 5: Generate config
console.print(f"\n[dim]Generating config with {len(selected_stacks)} stack(s)...[/dim]")
return _generate_discovered_config(compose_dir, hostname, host_address, selected_stacks)
def _report_missing_config(explicit_path: Path | None = None) -> None:
"""Report that a config file was not found."""
console.print("[yellow]Config file not found.[/yellow]")
@@ -202,15 +85,11 @@ def _report_missing_config(explicit_path: Path | None = None) -> None:
def config_init(
path: _PathOption = None,
force: _ForceOption = False,
discover: _DiscoverOption = False,
) -> None:
"""Create a new config file with documented example.
The generated config file serves as a template showing all available
options with explanatory comments.
Use --discover to auto-detect compose files and interactively select
which stacks to include.
"""
target_path = (path.expanduser().resolve() if path else None) or default_config_path()
@@ -222,17 +101,11 @@ def config_init(
console.print("[dim]Aborted.[/dim]")
raise typer.Exit(0)
if discover:
template_content = _run_discovery_flow()
if template_content is None:
raise typer.Exit(0)
else:
template_content = _generate_template()
# Create parent directories
target_path.parent.mkdir(parents=True, exist_ok=True)
# Write config file
# Generate and write template
template_content = _generate_template()
target_path.write_text(template_content, encoding="utf-8")
print_success(f"Config file created at: {target_path}")
@@ -420,115 +293,5 @@ def config_symlink(
console.print(f" -> {target_path}")
_ListOption = Annotated[
bool,
typer.Option("--list", "-l", help="List available example templates."),
]
@config_app.command("example")
def config_example(
name: Annotated[
str | None,
typer.Argument(help="Example template name (e.g., whoami, full)"),
] = None,
output_dir: Annotated[
Path | None,
typer.Option("--output", "-o", help="Output directory. Defaults to current directory."),
] = None,
list_examples: _ListOption = False,
force: _ForceOption = False,
) -> None:
"""Generate example stack files from built-in templates.
Examples:
cf config example --list # List available examples
cf config example whoami # Generate whoami stack in ./whoami/
cf config example full # Generate complete Traefik + whoami setup
cf config example nginx -o /opt/compose # Generate in specific directory
"""
from compose_farm.examples import ( # noqa: PLC0415
EXAMPLES,
SINGLE_STACK_EXAMPLES,
list_example_files,
)
# List mode
if list_examples:
console.print("[bold]Available example templates:[/bold]\n")
console.print("[dim]Single stack examples:[/dim]")
for example_name, description in SINGLE_STACK_EXAMPLES.items():
console.print(f" [cyan]{example_name}[/cyan] - {description}")
console.print()
console.print("[dim]Complete setup:[/dim]")
console.print(f" [cyan]full[/cyan] - {EXAMPLES['full']}")
console.print()
console.print("[dim]Usage: cf config example <name>[/dim]")
return
# Interactive selection if no name provided
if name is None:
from rich.prompt import Prompt # noqa: PLC0415
console.print("[bold]Available example templates:[/bold]\n")
example_names = list(EXAMPLES.keys())
for i, (example_name, description) in enumerate(EXAMPLES.items(), 1):
console.print(f" [{i}] [cyan]{example_name}[/cyan] - {description}")
console.print()
choice = Prompt.ask(
"Select example",
choices=[str(i) for i in range(1, len(example_names) + 1)] + example_names,
default="1",
)
# Handle numeric or name input
name = example_names[int(choice) - 1] if choice.isdigit() else choice
# Validate example name
if name not in EXAMPLES:
print_error(f"Unknown example: {name}")
console.print(f"Available examples: {', '.join(EXAMPLES.keys())}")
raise typer.Exit(1)
# Determine output directory
base_dir = (output_dir or Path.cwd()).expanduser().resolve()
# For 'full' example, use current dir; for single stacks, create subdir
target_dir = base_dir if name == "full" else base_dir / name
# Check for existing files
files = list_example_files(name)
existing_files = [f for f, _ in files if (target_dir / f).exists()]
if existing_files and not force:
console.print(f"[yellow]Files already exist in:[/yellow] {target_dir}")
console.print(f"[dim] {len(existing_files)} file(s) would be overwritten[/dim]")
if not typer.confirm("Overwrite existing files?"):
console.print("[dim]Aborted.[/dim]")
raise typer.Exit(0)
# Create directories and copy files
for rel_path, content in files:
file_path = target_dir / rel_path
file_path.parent.mkdir(parents=True, exist_ok=True)
file_path.write_text(content, encoding="utf-8")
console.print(f" [green]Created[/green] {file_path}")
print_success(f"Example '{name}' created at: {target_dir}")
# Show appropriate next steps
if name == "full":
console.print("\n[dim]Next steps:[/dim]")
console.print(f" 1. Edit [cyan]{target_dir}/compose-farm.yaml[/cyan] with your host IP")
console.print(" 2. Edit [cyan].env[/cyan] files with your domain")
console.print(" 3. Create Docker network: [cyan]docker network create mynetwork[/cyan]")
console.print(" 4. Deploy: [cyan]cf up traefik whoami[/cyan]")
else:
console.print("\n[dim]Next steps:[/dim]")
console.print(f" 1. Edit [cyan]{target_dir}/.env[/cyan] with your settings")
console.print(f" 2. Add to compose-farm.yaml: [cyan]{name}: <hostname>[/cyan]")
console.print(f" 3. Deploy with: [cyan]cf up {name}[/cyan]")
# Register config subcommand on the shared app
app.add_typer(config_app, name="config", rich_help_panel="Configuration")


@@ -37,24 +37,23 @@ from compose_farm.console import (
)
from compose_farm.executor import (
CommandResult,
get_running_stacks_on_host,
is_local,
run_command,
)
from compose_farm.logs import (
DEFAULT_LOG_PATH,
SnapshotEntry,
collect_stack_entries,
collect_stacks_entries_on_host,
isoformat,
load_existing_entries,
merge_entries,
write_toml,
)
from compose_farm.operations import (
StackDiscoveryResult,
build_discovery_results,
check_host_compatibility,
check_stack_requirements,
discover_stack_host,
discover_stack_on_all_hosts,
)
from compose_farm.state import get_orphaned_stacks, load_state, save_state
from compose_farm.traefik import generate_traefik_config, render_traefik_config
@@ -62,38 +61,39 @@ from compose_farm.traefik import generate_traefik_config, render_traefik_config
# --- Sync helpers ---
def _discover_stacks(cfg: Config, stacks: list[str] | None = None) -> dict[str, str | list[str]]:
"""Discover running stacks with a progress bar."""
stack_list = stacks if stacks is not None else list(cfg.stacks)
results = run_parallel_with_progress(
"Discovering",
stack_list,
lambda s: discover_stack_host(cfg, s),
)
return {svc: host for svc, host in results if host is not None}
def _snapshot_stacks(
cfg: Config,
stacks: list[str],
discovered: dict[str, str | list[str]],
log_path: Path | None,
) -> Path:
"""Capture image digests with a progress bar."""
"""Capture image digests using batched SSH calls (1 per host).
Args:
cfg: Configuration
discovered: Dict mapping stack -> host(s) where it's running
log_path: Optional path to write the log file
Returns:
Path to the written log file.
"""
effective_log_path = log_path or DEFAULT_LOG_PATH
now_dt = datetime.now(UTC)
now_iso = isoformat(now_dt)
async def collect_stack(stack: str) -> tuple[str, list[SnapshotEntry]]:
try:
return stack, await collect_stack_entries(cfg, stack, now=now_dt)
except RuntimeError:
return stack, []
# Group stacks by host for batched SSH calls
stacks_by_host: dict[str, set[str]] = {}
for stack, hosts in discovered.items():
# Use first host for multi-host stacks (they use the same images)
host = hosts[0] if isinstance(hosts, list) else hosts
stacks_by_host.setdefault(host, set()).add(stack)
results = run_parallel_with_progress(
"Capturing",
stacks,
collect_stack,
)
# Collect entries with 1 SSH call per host (with progress bar)
async def collect_on_host(host: str) -> tuple[str, list[SnapshotEntry]]:
entries = await collect_stacks_entries_on_host(cfg, host, stacks_by_host[host], now=now_dt)
return host, entries
results = run_parallel_with_progress("Capturing", list(stacks_by_host.keys()), collect_on_host)
snapshot_entries = [entry for _, entries in results for entry in entries]
if not snapshot_entries:
@@ -155,39 +155,20 @@ def _discover_stacks_full(
) -> tuple[dict[str, str | list[str]], dict[str, list[str]], dict[str, list[str]]]:
"""Discover running stacks with full host scanning for stray detection.
Returns:
Tuple of (discovered, strays, duplicates):
- discovered: stack -> host(s) where running correctly
- strays: stack -> list of unauthorized hosts
- duplicates: stack -> list of all hosts (for single-host stacks on multiple)
Queries each host once for all running stacks (with progress bar),
then delegates to build_discovery_results for categorization.
"""
stack_list = stacks if stacks is not None else list(cfg.stacks)
results: list[StackDiscoveryResult] = run_parallel_with_progress(
"Discovering",
stack_list,
lambda s: discover_stack_on_all_hosts(cfg, s),
)
all_hosts = list(cfg.hosts.keys())
discovered: dict[str, str | list[str]] = {}
strays: dict[str, list[str]] = {}
duplicates: dict[str, list[str]] = {}
# Query each host for running stacks (with progress bar)
async def get_stacks_on_host(host: str) -> tuple[str, set[str]]:
running = await get_running_stacks_on_host(cfg, host)
return host, running
for result in results:
correct_hosts = [h for h in result.running_hosts if h in result.configured_hosts]
if correct_hosts:
if result.is_multi_host:
discovered[result.stack] = correct_hosts
else:
discovered[result.stack] = correct_hosts[0]
host_results = run_parallel_with_progress("Discovering", all_hosts, get_stacks_on_host)
running_on_host: dict[str, set[str]] = dict(host_results)
if result.is_stray:
strays[result.stack] = result.stray_hosts
if result.is_duplicate:
duplicates[result.stack] = result.running_hosts
return discovered, strays, duplicates
return build_discovery_results(cfg, running_on_host, stacks)
def _report_stray_stacks(
@@ -554,10 +535,10 @@ def refresh(
save_state(cfg, new_state)
print_success(f"State updated: {len(new_state)} stacks tracked.")
# Capture image digests for running stacks
# Capture image digests for running stacks (1 SSH call per host)
if discovered:
try:
path = _snapshot_stacks(cfg, list(discovered.keys()), log_path)
path = _snapshot_stacks(cfg, discovered, log_path)
print_success(f"Digests written to {path}")
except RuntimeError as exc:
print_warning(str(exc))


@@ -15,17 +15,6 @@ from .paths import config_search_paths, find_config_path
COMPOSE_FILENAMES = ("compose.yaml", "compose.yml", "docker-compose.yml", "docker-compose.yaml")
def discover_compose_dirs(compose_dir: Path) -> list[str]:
"""Find all directories in compose_dir that contain a compose file."""
if not compose_dir.exists():
return []
return sorted(
subdir.name
for subdir in compose_dir.iterdir()
if subdir.is_dir() and any((subdir / f).exists() for f in COMPOSE_FILENAMES)
)
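The directory scan in `discover_compose_dirs` can be exercised against a throwaway layout (directory names here are made up):

```python
import tempfile
from pathlib import Path

COMPOSE_FILENAMES = ("compose.yaml", "compose.yml", "docker-compose.yml", "docker-compose.yaml")

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "nginx").mkdir()
    (root / "nginx" / "compose.yaml").touch()
    (root / "notes").mkdir()      # no compose file, so it is ignored
    (root / "README.md").touch()  # plain files are ignored too
    found = sorted(
        subdir.name
        for subdir in root.iterdir()
        if subdir.is_dir() and any((subdir / f).exists() for f in COMPOSE_FILENAMES)
    )
```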
class Host(BaseModel, extra="forbid"):
"""SSH host configuration."""
@@ -42,6 +31,9 @@ class Config(BaseModel, extra="forbid"):
stacks: dict[str, str | list[str]] # stack_name -> host_name or list of hosts
traefik_file: Path | None = None # Auto-regenerate traefik config after up/down
traefik_stack: str | None = None # Stack name for Traefik (skip its host in file-provider)
glances_stack: str | None = (
None # Stack name for Glances (enables host resource stats in web UI)
)
config_path: Path = Path() # Set by load_config()
def get_state_path(self) -> Path:
@@ -116,7 +108,13 @@ class Config(BaseModel, extra="forbid"):
def discover_compose_dirs(self) -> set[str]:
"""Find all directories in compose_dir that contain a compose file."""
return set(discover_compose_dirs(self.compose_dir))
found: set[str] = set()
if not self.compose_dir.exists():
return found
for subdir in self.compose_dir.iterdir():
if subdir.is_dir() and any((subdir / f).exists() for f in COMPOSE_FILENAMES):
found.add(subdir.name)
return found
def _parse_hosts(raw_hosts: dict[str, Any]) -> dict[str, Host]:


@@ -1,41 +0,0 @@
"""Example stack templates for compose-farm."""
from __future__ import annotations
from importlib import resources
from pathlib import Path
# All available examples: name -> description
# "full" is special: multi-stack setup with config file
EXAMPLES = {
"whoami": "Simple HTTP service that returns hostname (great for testing Traefik)",
"nginx": "Basic nginx web server with static files",
"postgres": "PostgreSQL database with persistent volume",
"full": "Complete setup with Traefik + whoami (includes compose-farm.yaml)",
}
# Examples that are single stacks (everything except "full")
SINGLE_STACK_EXAMPLES = {k: v for k, v in EXAMPLES.items() if k != "full"}
def list_example_files(name: str) -> list[tuple[str, str]]:
"""List files in an example template, returning (relative_path, content) tuples."""
if name not in EXAMPLES:
msg = f"Unknown example: {name}. Available: {', '.join(EXAMPLES.keys())}"
raise ValueError(msg)
example_dir = resources.files("compose_farm.examples") / name
example_path = Path(str(example_dir))
files: list[tuple[str, str]] = []
def walk_dir(directory: Path, prefix: str = "") -> None:
for item in sorted(directory.iterdir()):
rel_path = f"{prefix}{item.name}" if prefix else item.name
if item.is_file():
content = item.read_text(encoding="utf-8")
files.append((rel_path, content))
elif item.is_dir() and not item.name.startswith("__"):
walk_dir(item, f"{rel_path}/")
walk_dir(example_path)
return files
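The recursive walk in `list_example_files` builds slash-joined relative paths and skips dunder directories. A standalone sketch of just that traversal (file names invented for the demo):

```python
import tempfile
from pathlib import Path

def walk(directory: Path, prefix: str = "") -> list[str]:
    out: list[str] = []
    for item in sorted(directory.iterdir()):
        rel = f"{prefix}{item.name}" if prefix else item.name
        if item.is_file():
            out.append(rel)
        elif item.is_dir() and not item.name.startswith("__"):
            out.extend(walk(item, f"{rel}/"))  # recurse with a trailing-slash prefix
    return out

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "traefik").mkdir()
    (root / "traefik" / "compose.yaml").touch()
    (root / "__pycache__").mkdir()  # skipped: starts with "__"
    (root / "compose-farm.yaml").touch()
    result = walk(root)
```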


@@ -1,41 +0,0 @@
# Compose Farm Full Example
A complete starter setup with Traefik reverse proxy and a test service.
## Quick Start
1. **Create the Docker network** (once per host):
```bash
docker network create --subnet=172.20.0.0/16 --gateway=172.20.0.1 mynetwork
```
2. **Create data directory for Traefik**:
```bash
mkdir -p /mnt/data/traefik
```
3. **Edit configuration**:
- Update `compose-farm.yaml` with your host IP
- Update `.env` files with your domain
4. **Start the stacks**:
```bash
cf up traefik whoami
```
5. **Test**:
- Dashboard: http://localhost:8080
- Whoami: Add `whoami.example.com` to /etc/hosts pointing to your host
## Files
```
full/
├── compose-farm.yaml # Compose Farm config
├── traefik/
│ ├── compose.yaml # Traefik reverse proxy
│ └── .env
└── whoami/
├── compose.yaml # Test HTTP service
└── .env
```


@@ -1,17 +0,0 @@
# Compose Farm configuration
# Edit the host address to match your setup
compose_dir: .
hosts:
local: localhost # For remote hosts, use: myhost: 192.168.1.100
stacks:
traefik: local
whoami: local
nginx: local
postgres: local
# Traefik file-provider integration (optional)
# traefik_file: ./traefik/dynamic.d/compose-farm.yml
traefik_stack: traefik


@@ -1,2 +0,0 @@
# Environment variables for nginx stack
DOMAIN=example.com


@@ -1,26 +0,0 @@
# Nginx - Basic web server
services:
nginx:
image: nginx:alpine
container_name: cf-nginx
user: "1000:1000"
networks:
- mynetwork
volumes:
- /mnt/data/nginx/html:/usr/share/nginx/html:ro
ports:
- "9082:80" # Use 80:80 or 8080:80 in production
restart: unless-stopped
labels:
- traefik.enable=true
- traefik.http.routers.nginx.rule=Host(`nginx.${DOMAIN}`)
- traefik.http.routers.nginx.entrypoints=websecure
- traefik.http.routers.nginx-local.rule=Host(`nginx.local`)
- traefik.http.routers.nginx-local.entrypoints=web
- traefik.http.services.nginx.loadbalancer.server.port=80
- kuma.nginx.http.name=Nginx
- kuma.nginx.http.url=https://nginx.${DOMAIN}
networks:
mynetwork:
external: true


@@ -1,5 +0,0 @@
# Environment variables for postgres stack
# IMPORTANT: Change these values before deploying!
POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme
POSTGRES_DB=myapp


@@ -1,26 +0,0 @@
# PostgreSQL - Database with persistent storage
services:
postgres:
image: postgres:16-alpine
container_name: cf-postgres
networks:
- mynetwork
environment:
- POSTGRES_USER=${POSTGRES_USER:-postgres}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}
- POSTGRES_DB=${POSTGRES_DB:-postgres}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- /mnt/data/postgres:/var/lib/postgresql/data
ports:
- "5433:5432" # Use 5432:5432 in production
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
interval: 30s
timeout: 10s
retries: 3
networks:
mynetwork:
external: true


@@ -1 +0,0 @@
DOMAIN=example.com


@@ -1,37 +0,0 @@
# Traefik - Reverse proxy and load balancer
services:
traefik:
image: traefik:v2.11
container_name: cf-traefik
networks:
- mynetwork
ports:
- "9080:80" # HTTP (use 80:80 in production)
- "9443:443" # HTTPS (use 443:443 in production)
- "9081:8080" # Dashboard - remove in production
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /mnt/data/traefik:/etc/traefik
command:
- --api.dashboard=true
- --api.insecure=true
- --providers.docker=true
- --providers.docker.exposedbydefault=false
- --providers.docker.network=mynetwork
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --log.level=INFO
labels:
- traefik.enable=true
- traefik.http.routers.traefik.rule=Host(`traefik.${DOMAIN}`)
- traefik.http.routers.traefik.entrypoints=websecure
- traefik.http.routers.traefik-local.rule=Host(`traefik.local`)
- traefik.http.routers.traefik-local.entrypoints=web
- traefik.http.services.traefik.loadbalancer.server.port=8080
- kuma.traefik.http.name=Traefik
- kuma.traefik.http.url=https://traefik.${DOMAIN}
restart: unless-stopped
networks:
mynetwork:
external: true


@@ -1 +0,0 @@
DOMAIN=example.com


@@ -1,23 +0,0 @@
# Whoami - Test service for Traefik routing
services:
whoami:
image: traefik/whoami:latest
container_name: cf-whoami
networks:
- mynetwork
ports:
- "9000:80"
restart: unless-stopped
labels:
- traefik.enable=true
- traefik.http.routers.whoami.rule=Host(`whoami.${DOMAIN}`)
- traefik.http.routers.whoami.entrypoints=websecure
- traefik.http.routers.whoami-local.rule=Host(`whoami.local`)
- traefik.http.routers.whoami-local.entrypoints=web
- traefik.http.services.whoami.loadbalancer.server.port=80
- kuma.whoami.http.name=Whoami
- kuma.whoami.http.url=https://whoami.${DOMAIN}
networks:
mynetwork:
external: true


@@ -1,2 +0,0 @@
# Environment variables for nginx stack
DOMAIN=example.com


@@ -1,26 +0,0 @@
# Nginx - Basic web server
services:
nginx:
image: nginx:alpine
container_name: cf-nginx
user: "1000:1000"
networks:
- mynetwork
volumes:
- /mnt/data/nginx/html:/usr/share/nginx/html:ro
ports:
- "9082:80"
restart: unless-stopped
labels:
- traefik.enable=true
- traefik.http.routers.nginx.rule=Host(`nginx.${DOMAIN}`)
- traefik.http.routers.nginx.entrypoints=websecure
- traefik.http.routers.nginx-local.rule=Host(`nginx.local`)
- traefik.http.routers.nginx-local.entrypoints=web
- traefik.http.services.nginx.loadbalancer.server.port=80
- kuma.nginx.http.name=Nginx
- kuma.nginx.http.url=https://nginx.${DOMAIN}
networks:
mynetwork:
external: true


@@ -1,5 +0,0 @@
# Environment variables for postgres stack
# IMPORTANT: Change these values before deploying!
POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme
POSTGRES_DB=myapp


@@ -1,26 +0,0 @@
# PostgreSQL - Database with persistent storage
services:
postgres:
image: postgres:16-alpine
container_name: cf-postgres
networks:
- mynetwork
environment:
- POSTGRES_USER=${POSTGRES_USER:-postgres}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}
- POSTGRES_DB=${POSTGRES_DB:-postgres}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- /mnt/data/postgres:/var/lib/postgresql/data
ports:
- "5433:5432"
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
interval: 30s
timeout: 10s
retries: 3
networks:
mynetwork:
external: true


@@ -1,2 +0,0 @@
# Environment variables for whoami stack
DOMAIN=example.com


@@ -1,24 +0,0 @@
# Whoami - Simple HTTP service for testing
# Returns the container hostname - useful for testing load balancers and Traefik
services:
whoami:
image: traefik/whoami:latest
container_name: cf-whoami
networks:
- mynetwork
ports:
- "9000:80"
restart: unless-stopped
labels:
- traefik.enable=true
- traefik.http.routers.whoami.rule=Host(`whoami.${DOMAIN}`)
- traefik.http.routers.whoami.entrypoints=websecure
- traefik.http.routers.whoami-local.rule=Host(`whoami.local`)
- traefik.http.routers.whoami-local.entrypoints=web
- traefik.http.services.whoami.loadbalancer.server.port=80
- kuma.whoami.http.name=Whoami
- kuma.whoami.http.url=https://whoami.${DOMAIN}
networks:
mynetwork:
external: true


@@ -5,6 +5,7 @@ from __future__ import annotations
import asyncio
import socket
import subprocess
import time
from dataclasses import dataclass
from functools import lru_cache
from typing import TYPE_CHECKING, Any
@@ -23,6 +24,38 @@ LOCAL_ADDRESSES = frozenset({"local", "localhost", "127.0.0.1", "::1"})
_DEFAULT_SSH_PORT = 22
class TTLCache:
"""Simple TTL cache for async function results."""
def __init__(self, ttl_seconds: float = 30.0) -> None:
"""Initialize cache with default TTL in seconds."""
# Cache stores: key -> (timestamp, value, item_ttl)
self._cache: dict[str, tuple[float, Any, float]] = {}
self._default_ttl = ttl_seconds
def get(self, key: str) -> Any | None:
"""Get value if exists and not expired."""
if key in self._cache:
timestamp, value, item_ttl = self._cache[key]
if time.monotonic() - timestamp < item_ttl:
return value
del self._cache[key]
return None
def set(self, key: str, value: Any, ttl_seconds: float | None = None) -> None:
"""Set value with current timestamp and optional custom TTL."""
ttl = ttl_seconds if ttl_seconds is not None else self._default_ttl
self._cache[key] = (time.monotonic(), value, ttl)
def clear(self) -> None:
"""Clear all cached values."""
self._cache.clear()
# Cache compose labels per host for 30 seconds
_compose_labels_cache = TTLCache(ttl_seconds=30.0)
def _print_compose_command(
host_name: str,
compose_dir: str,
@@ -158,15 +191,20 @@ def ssh_connect_kwargs(host: Host) -> dict[str, Any]:
"port": host.port,
"username": host.user,
"known_hosts": None,
"gss_auth": False, # Disable GSSAPI - causes multi-second delays
}
# Add SSH agent path (auto-detect forwarded agent if needed)
agent_path = get_ssh_auth_sock()
if agent_path:
kwargs["agent_path"] = agent_path
# Add key file fallback for when SSH agent is unavailable
# Add key file fallback (prioritized over agent if present)
key_path = get_key_path()
agent_path = get_ssh_auth_sock()
if key_path:
# If dedicated key exists, force use of it and ignore agent
# This avoids issues with stale/broken forwarded agents in Docker
kwargs["client_keys"] = [str(key_path)]
elif agent_path:
# Fallback to agent if no dedicated key
kwargs["agent_path"] = agent_path
return kwargs
@@ -497,6 +535,72 @@ async def check_stack_running(
return result.success and bool(result.stdout.strip())
async def get_running_stacks_on_host(
config: Config,
host_name: str,
) -> set[str]:
"""Get all running compose stacks on a host in a single SSH call.
Uses docker ps with the com.docker.compose.project label to identify running stacks.
This is much more efficient than checking each stack individually.
"""
host = config.hosts[host_name]
# Get unique project names from running containers
command = "docker ps --format '{{.Label \"com.docker.compose.project\"}}' | sort -u"
result = await run_command(host, command, stack=host_name, stream=False, prefix="")
if not result.success:
return set()
# Filter out empty lines and return as set
return {line.strip() for line in result.stdout.splitlines() if line.strip()}
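The filtering step above is easy to trace with a sample payload (the stdout below is invented; containers started outside compose emit blank lines):

```python
# Hypothetical stdout of:
#   docker ps --format '{{.Label "com.docker.compose.project"}}' | sort -u
raw_stdout = "\nnginx\npostgres\ntraefik\n"

# Same filtering as get_running_stacks_on_host: drop blanks, dedupe via set
running = {line.strip() for line in raw_stdout.splitlines() if line.strip()}
```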
async def get_container_compose_labels(
config: Config,
host_name: str,
) -> dict[str, tuple[str, str]]:
"""Get compose labels for all containers on a host.
Returns dict of container_name -> (project, service).
Includes all containers (-a flag) since Glances shows stopped containers too.
Falls back to an empty dict on timeout or error (5-second timeout).
Results are cached for 30 seconds to reduce SSH overhead.
"""
# Check cache first
cached: dict[str, tuple[str, str]] | None = _compose_labels_cache.get(host_name)
if cached is not None:
return cached
host = config.hosts[host_name]
cmd = (
"docker ps -a --format "
'\'{{.Names}}\t{{.Label "com.docker.compose.project"}}\t'
'{{.Label "com.docker.compose.service"}}\''
)
try:
async with asyncio.timeout(5.0):
result = await run_command(host, cmd, stack=host_name, stream=False, prefix="")
except TimeoutError:
return {}
except Exception:
return {}
labels: dict[str, tuple[str, str]] = {}
if result.success:
for line in result.stdout.splitlines():
parts = line.strip().split("\t")
if len(parts) >= 3: # noqa: PLR2004
name, project, service = parts[0], parts[1], parts[2]
labels[name] = (project or "", service or "")
# Cache the result
_compose_labels_cache.set(host_name, labels)
return labels
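The tab-separated parsing above can be worked through by hand (container names are made up; the last line simulates a container started outside compose, whose label columns are empty):

```python
# Hypothetical `docker ps -a` output: name<TAB>project<TAB>service per line
raw = "cf-nginx\tnginx\tnginx\ncf-traefik\ttraefik\ttraefik\nadhoc\t\t\n"

labels: dict[str, tuple[str, str]] = {}
for line in raw.splitlines():
    parts = line.strip().split("\t")  # strip() also drops trailing empty tabs
    if len(parts) >= 3:
        name, project, service = parts[0], parts[1], parts[2]
        labels[name] = (project or "", service or "")
```

Note that "adhoc" is skipped entirely: stripping the line removes its trailing tabs, so the split yields fewer than three parts and the label lookup later falls back to `("", "")`.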
async def _batch_check_existence(
config: Config,
host_name: str,

src/compose_farm/glances.py (new file)

@@ -0,0 +1,236 @@
"""Glances API client for host resource monitoring."""
from __future__ import annotations
import asyncio
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from .config import Config
# Default Glances REST API port
DEFAULT_GLANCES_PORT = 61208
@dataclass
class HostStats:
"""Resource statistics for a host."""
host: str
cpu_percent: float
mem_percent: float
swap_percent: float
load: float
disk_percent: float
net_rx_rate: float = 0.0 # bytes/sec
net_tx_rate: float = 0.0 # bytes/sec
error: str | None = None
@classmethod
def from_error(cls, host: str, error: str) -> HostStats:
"""Create a HostStats with an error."""
return cls(
host=host,
cpu_percent=0,
mem_percent=0,
swap_percent=0,
load=0,
disk_percent=0,
net_rx_rate=0,
net_tx_rate=0,
error=error,
)
async def fetch_host_stats(
host_name: str,
host_address: str,
port: int = DEFAULT_GLANCES_PORT,
request_timeout: float = 10.0,
) -> HostStats:
"""Fetch stats from a single host's Glances API."""
import httpx # noqa: PLC0415
base_url = f"http://{host_address}:{port}/api/4"
try:
async with httpx.AsyncClient(timeout=request_timeout) as client:
# Fetch quicklook stats (CPU, mem, load)
response = await client.get(f"{base_url}/quicklook")
if not response.is_success:
return HostStats.from_error(host_name, f"HTTP {response.status_code}")
data = response.json()
# Fetch filesystem stats for disk usage (root fs or max across all)
disk_percent = 0.0
try:
fs_response = await client.get(f"{base_url}/fs")
if fs_response.is_success:
fs_data = fs_response.json()
root = next((fs for fs in fs_data if fs.get("mnt_point") == "/"), None)
disk_percent = (
root.get("percent", 0)
if root
else max((fs.get("percent", 0) for fs in fs_data), default=0)
)
except httpx.HTTPError:
pass # Disk stats are optional
# Fetch network stats for rate (sum across non-loopback interfaces)
net_rx_rate, net_tx_rate = 0.0, 0.0
try:
net_response = await client.get(f"{base_url}/network")
if net_response.is_success:
for iface in net_response.json():
if not iface.get("interface_name", "").startswith("lo"):
net_rx_rate += iface.get("bytes_recv_rate_per_sec") or 0
net_tx_rate += iface.get("bytes_sent_rate_per_sec") or 0
except httpx.HTTPError:
pass # Network stats are optional
return HostStats(
host=host_name,
cpu_percent=data.get("cpu", 0),
mem_percent=data.get("mem", 0),
swap_percent=data.get("swap", 0),
load=data.get("load", 0),
disk_percent=disk_percent,
net_rx_rate=net_rx_rate,
net_tx_rate=net_tx_rate,
)
except httpx.TimeoutException:
return HostStats.from_error(host_name, "timeout")
except httpx.HTTPError as e:
return HostStats.from_error(host_name, str(e))
except Exception as e:
return HostStats.from_error(host_name, str(e))
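The root-filesystem-else-fullest-mount rule for `disk_percent` is worth spelling out; the payloads below mimic the `/api/4/fs` shape used above but the mount points and numbers are invented:

```python
with_root = [
    {"mnt_point": "/", "percent": 42.0},
    {"mnt_point": "/mnt/data", "percent": 91.5},
]
without_root = [
    {"mnt_point": "/mnt/a", "percent": 10.0},
    {"mnt_point": "/mnt/b", "percent": 70.0},
]

def pick_disk_percent(fs_data: list[dict]) -> float:
    # Prefer the root filesystem; otherwise report the fullest mount
    root = next((fs for fs in fs_data if fs.get("mnt_point") == "/"), None)
    if root:
        return root.get("percent", 0)
    return max((fs.get("percent", 0) for fs in fs_data), default=0)

disk_root = pick_disk_percent(with_root)
disk_fallback = pick_disk_percent(without_root)
```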
async def fetch_all_host_stats(
config: Config,
port: int = DEFAULT_GLANCES_PORT,
) -> dict[str, HostStats]:
"""Fetch stats from all hosts in parallel."""
tasks = [fetch_host_stats(name, host.address, port) for name, host in config.hosts.items()]
results = await asyncio.gather(*tasks)
return {stats.host: stats for stats in results}
@dataclass
class ContainerStats:
"""Container statistics from Glances."""
name: str
host: str
status: str
image: str
cpu_percent: float
memory_usage: int # bytes
memory_limit: int # bytes
memory_percent: float
network_rx: int # cumulative bytes received
network_tx: int # cumulative bytes sent
uptime: str
ports: str
engine: str # docker, podman, etc.
stack: str = "" # compose project name (from docker labels)
service: str = "" # compose service name (from docker labels)
def _parse_container(data: dict[str, Any], host_name: str) -> ContainerStats:
"""Parse container data from Glances API response."""
# Image can be a list or string
image = data.get("image", ["unknown"])
if isinstance(image, list):
image = image[0] if image else "unknown"
# Calculate memory percent
mem_usage = data.get("memory_usage", 0) or 0
mem_limit = data.get("memory_limit", 1) or 1 # Avoid division by zero
mem_percent = (mem_usage / mem_limit) * 100 if mem_limit > 0 else 0
# Network stats
network = data.get("network", {}) or {}
network_rx = network.get("cumulative_rx", 0) or 0
network_tx = network.get("cumulative_tx", 0) or 0
return ContainerStats(
name=data.get("name", "unknown"),
host=host_name,
status=data.get("status", "unknown"),
image=image,
cpu_percent=data.get("cpu_percent", 0) or 0,
memory_usage=mem_usage,
memory_limit=mem_limit,
memory_percent=mem_percent,
network_rx=network_rx,
network_tx=network_tx,
uptime=data.get("uptime", ""),
ports=data.get("ports", "") or "",
engine=data.get("engine", "docker"),
)
async def fetch_container_stats(
host_name: str,
host_address: str,
port: int = DEFAULT_GLANCES_PORT,
request_timeout: float = 10.0,
) -> tuple[list[ContainerStats] | None, str | None]:
"""Fetch container stats from a single host's Glances API.
Returns:
(containers, error_message)
- Success: ([...], None)
- Failure: (None, "error message")
"""
import httpx # noqa: PLC0415
url = f"http://{host_address}:{port}/api/4/containers"
try:
async with httpx.AsyncClient(timeout=request_timeout) as client:
response = await client.get(url)
if not response.is_success:
return None, f"HTTP {response.status_code}: {response.reason_phrase}"
data = response.json()
return [_parse_container(c, host_name) for c in data], None
except httpx.ConnectError:
return None, "Connection refused (Glances offline?)"
except httpx.TimeoutException:
return None, "Connection timed out"
except Exception as e:
return None, str(e)
async def fetch_all_container_stats(
config: Config,
port: int = DEFAULT_GLANCES_PORT,
) -> list[ContainerStats]:
"""Fetch container stats from all hosts in parallel, enriched with compose labels."""
from .executor import get_container_compose_labels # noqa: PLC0415
async def fetch_host_data(
host_name: str,
host_address: str,
) -> list[ContainerStats]:
# Fetch Glances stats and compose labels in parallel
stats_task = fetch_container_stats(host_name, host_address, port)
labels_task = get_container_compose_labels(config, host_name)
(containers, _), labels = await asyncio.gather(stats_task, labels_task)
if containers is None:
# Skip failed hosts in aggregate view
return []
# Enrich containers with compose labels (mutate in place)
for c in containers:
c.stack, c.service = labels.get(c.name, ("", ""))
return containers
tasks = [fetch_host_data(name, host.address) for name, host in config.hosts.items()]
results = await asyncio.gather(*tasks)
# Flatten list of lists
return [container for host_containers in results for container in host_containers]


@@ -6,21 +6,22 @@ import json
import tomllib
from dataclasses import dataclass
from datetime import UTC, datetime
from typing import TYPE_CHECKING, Any
from typing import TYPE_CHECKING
from .executor import run_compose
from .executor import run_command
from .paths import xdg_config_home
if TYPE_CHECKING:
from collections.abc import Awaitable, Callable, Iterable
from collections.abc import Iterable
from pathlib import Path
from .config import Config
from .executor import CommandResult
# Separator used to split output sections
_SECTION_SEPARATOR = "---CF-SEP---"
DEFAULT_LOG_PATH = xdg_config_home() / "compose-farm" / "dockerfarm-log.toml"
_DIGEST_HEX_LENGTH = 64
@dataclass(frozen=True)
@@ -56,87 +57,97 @@ def _escape(value: str) -> str:
return value.replace("\\", "\\\\").replace('"', '\\"')
def _parse_images_output(raw: str) -> list[dict[str, Any]]:
"""Parse `docker compose images --format json` output.
Handles both a JSON array and newline-separated JSON objects for robustness.
"""
raw = raw.strip()
if not raw:
return []
def _parse_image_digests(image_json: str) -> dict[str, str]:
"""Parse docker image inspect JSON to build image tag -> digest map."""
if not image_json:
return {}
try:
parsed = json.loads(raw)
image_data = json.loads(image_json)
except json.JSONDecodeError:
objects = []
for line in raw.splitlines():
if not line.strip():
continue
objects.append(json.loads(line))
return objects
return {}
if isinstance(parsed, list):
return parsed
if isinstance(parsed, dict):
return [parsed]
return []
image_digests: dict[str, str] = {}
for img in image_data:
tags = img.get("RepoTags") or []
digests = img.get("RepoDigests") or []
digest = digests[0].split("@")[-1] if digests else img.get("Id", "")
for tag in tags:
image_digests[tag] = digest
if img.get("Id"):
image_digests[img["Id"]] = digest
return image_digests
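Worked through by hand, the digest mapping above behaves like this (the inspect payload is made up, but follows the `RepoTags`/`RepoDigests`/`Id` shape the code reads):

```python
import json

# Hypothetical `docker image inspect` output for one image
raw = json.dumps([
    {
        "Id": "sha256:aaa111",
        "RepoTags": ["nginx:alpine"],
        "RepoDigests": ["nginx@sha256:bbb222"],
    }
])

image_digests: dict[str, str] = {}
for img in json.loads(raw):
    tags = img.get("RepoTags") or []
    repo_digests = img.get("RepoDigests") or []
    # Prefer the registry digest; fall back to the local image ID
    digest = repo_digests[0].split("@")[-1] if repo_digests else img.get("Id", "")
    for tag in tags:
        image_digests[tag] = digest
    if img.get("Id"):
        image_digests[img["Id"]] = digest
```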
def _extract_image_fields(record: dict[str, Any]) -> tuple[str, str]:
"""Extract image name and digest with fallbacks."""
image = record.get("Image") or record.get("Repository") or record.get("Name") or ""
tag = record.get("Tag") or record.get("Version")
if tag and ":" not in image.rsplit("/", 1)[-1]:
image = f"{image}:{tag}"
digest = (
record.get("Digest")
or record.get("Image ID")
or record.get("ImageID")
or record.get("ID")
or ""
)
if digest and not digest.startswith("sha256:") and len(digest) == _DIGEST_HEX_LENGTH:
digest = f"sha256:{digest}"
return image, digest
async def collect_stack_entries(
async def collect_stacks_entries_on_host(
config: Config,
stack: str,
host_name: str,
stacks: set[str],
*,
now: datetime,
run_compose_fn: Callable[..., Awaitable[CommandResult]] = run_compose,
) -> list[SnapshotEntry]:
"""Run `docker compose images` for a stack and normalize results."""
result = await run_compose_fn(config, stack, "images --format json", stream=False)
"""Collect image entries for stacks on one host using 2 docker commands.
Uses `docker ps` to get running containers + their compose project labels,
then `docker image inspect` to get digests for all unique images.
This is much faster than running N `docker compose images` commands (one per stack).
"""
if not stacks:
return []
host = config.hosts[host_name]
# Single SSH call with 2 docker commands:
# 1. Get project|image pairs from running containers
# 2. Get image info (including digests) for all unique images
command = (
f"docker ps --format '{{{{.Label \"com.docker.compose.project\"}}}}|{{{{.Image}}}}' && "
f"echo '{_SECTION_SEPARATOR}' && "
"docker image inspect $(docker ps --format '{{.Image}}' | sort -u) 2>/dev/null || true"
)
result = await run_command(host, command, host_name, stream=False, prefix="")
if not result.success:
msg = result.stderr or f"compose images exited with {result.exit_code}"
error = f"[{stack}] Unable to read images: {msg}"
raise RuntimeError(error)
return []
records = _parse_images_output(result.stdout)
# Use first host for snapshots (multi-host stacks use same images on all hosts)
host_name = config.get_hosts(stack)[0]
compose_path = config.get_compose_path(stack)
# Split output into two sections
parts = result.stdout.split(_SECTION_SEPARATOR)
if len(parts) != 2: # noqa: PLR2004
return []
entries: list[SnapshotEntry] = []
for record in records:
image, digest = _extract_image_fields(record)
if not digest:
container_lines, image_json = parts[0].strip(), parts[1].strip()
# Parse project|image pairs, filtering to only stacks we care about
stack_images: dict[str, set[str]] = {}
for line in container_lines.splitlines():
if "|" not in line:
continue
entries.append(
SnapshotEntry(
stack=stack,
host=host_name,
compose_file=compose_path,
image=image,
digest=digest,
captured_at=now,
)
)
project, image = line.split("|", 1)
if project in stacks:
stack_images.setdefault(project, set()).add(image)
if not stack_images:
return []
# Parse image inspect JSON to build image -> digest map
image_digests = _parse_image_digests(image_json)
# Build entries
entries: list[SnapshotEntry] = []
for stack, images in stack_images.items():
for image in images:
digest = image_digests.get(image, "")
if digest:
entries.append(
SnapshotEntry(
stack=stack,
host=host_name,
compose_file=config.get_compose_path(stack),
image=image,
digest=digest,
captured_at=now,
)
)
return entries
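The two-section parsing can be exercised in isolation. This sketch uses a made-up separator and fabricated `docker` output, and assumes `_parse_image_digests` reads the `RepoTags`/`RepoDigests` fields that `docker image inspect` emits (one plausible shape, not the project's actual helper):

```python
import json

SECTION_SEPARATOR = "---SECTION---"  # stand-in for _SECTION_SEPARATOR

# Fabricated output of the batched SSH command: `docker ps` lines,
# the separator, then `docker image inspect` JSON.
raw = (
    "plex|linuxserver/plex:latest\n"
    "grafana|grafana/grafana:11.2.0\n"
    f"{SECTION_SEPARATOR}\n"
    + json.dumps([
        {"RepoTags": ["linuxserver/plex:latest"],
         "RepoDigests": ["linuxserver/plex@sha256:abc123"]},
        {"RepoTags": ["grafana/grafana:11.2.0"],
         "RepoDigests": ["grafana/grafana@sha256:def456"]},
    ])
)

container_lines, image_json = (part.strip() for part in raw.split(SECTION_SEPARATOR))

# project -> set of images, filtered to the stacks we manage
stacks = {"plex", "grafana"}
stack_images: dict[str, set[str]] = {}
for line in container_lines.splitlines():
    if "|" not in line:
        continue
    project, image = line.split("|", 1)
    if project in stacks:
        stack_images.setdefault(project, set()).add(image)

# image tag -> digest, pulled from the inspect JSON
image_digests = {
    tag: rec["RepoDigests"][0].split("@", 1)[1]
    for rec in json.loads(image_json)
    for tag in rec.get("RepoTags", [])
}
```

This is why one SSH round-trip suffices per host: both sections arrive in a single stdout stream and are split client-side.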

View File

@@ -76,31 +76,6 @@ def get_stack_paths(cfg: Config, stack: str) -> list[str]:
return paths
async def discover_stack_host(cfg: Config, stack: str) -> tuple[str, str | list[str] | None]:
"""Discover where a stack is running.
For multi-host stacks, checks all assigned hosts in parallel.
For single-host, checks assigned host first, then others.
Returns (stack_name, host_or_hosts_or_none).
"""
assigned_hosts = cfg.get_hosts(stack)
if cfg.is_multi_host(stack):
# Check all assigned hosts in parallel
checks = await asyncio.gather(*[check_stack_running(cfg, stack, h) for h in assigned_hosts])
running = [h for h, is_running in zip(assigned_hosts, checks, strict=True) if is_running]
return stack, running if running else None
# Single-host: check assigned host first, then others
if await check_stack_running(cfg, stack, assigned_hosts[0]):
return stack, assigned_hosts[0]
for host in cfg.hosts:
if host != assigned_hosts[0] and await check_stack_running(cfg, stack, host):
return stack, host
return stack, None
class StackDiscoveryResult(NamedTuple):
"""Result of discovering where a stack is running across all hosts."""
@@ -134,25 +109,6 @@ class StackDiscoveryResult(NamedTuple):
return not self.is_multi_host and len(self.running_hosts) > 1
async def discover_stack_on_all_hosts(cfg: Config, stack: str) -> StackDiscoveryResult:
"""Discover where a stack is running across ALL hosts.
Unlike discover_stack_host(), this checks every host in parallel
to detect strays and duplicates.
"""
configured_hosts = cfg.get_hosts(stack)
all_hosts = list(cfg.hosts.keys())
checks = await asyncio.gather(*[check_stack_running(cfg, stack, h) for h in all_hosts])
running_hosts = [h for h, is_running in zip(all_hosts, checks, strict=True) if is_running]
return StackDiscoveryResult(
stack=stack,
configured_hosts=configured_hosts,
running_hosts=running_hosts,
)
async def check_stack_requirements(
cfg: Config,
stack: str,
@@ -518,3 +474,60 @@ async def stop_stray_stacks(
"""
return await _stop_stacks_on_hosts(cfg, strays, label="stray")
def build_discovery_results(
cfg: Config,
running_on_host: dict[str, set[str]],
stacks: list[str] | None = None,
) -> tuple[dict[str, str | list[str]], dict[str, list[str]], dict[str, list[str]]]:
"""Build discovery results from per-host running stacks.
Takes the raw data of which stacks are running on which hosts and
categorizes them into discovered (running correctly), strays (wrong host),
and duplicates (single-host stack on multiple hosts).
Args:
cfg: Config object.
running_on_host: Dict mapping host -> set of running stack names.
stacks: Optional list of stacks to check. Defaults to all configured stacks.
Returns:
Tuple of (discovered, strays, duplicates):
- discovered: stack -> host(s) where running correctly
- strays: stack -> list of unauthorized hosts
- duplicates: stack -> list of all hosts (for single-host stacks on multiple)
"""
stack_list = stacks if stacks is not None else list(cfg.stacks)
all_hosts = list(running_on_host.keys())
# Build StackDiscoveryResult for each stack
results: list[StackDiscoveryResult] = [
StackDiscoveryResult(
stack=stack,
configured_hosts=cfg.get_hosts(stack),
running_hosts=[h for h in all_hosts if stack in running_on_host[h]],
)
for stack in stack_list
]
discovered: dict[str, str | list[str]] = {}
strays: dict[str, list[str]] = {}
duplicates: dict[str, list[str]] = {}
for result in results:
correct_hosts = [h for h in result.running_hosts if h in result.configured_hosts]
if correct_hosts:
if result.is_multi_host:
discovered[result.stack] = correct_hosts
else:
discovered[result.stack] = correct_hosts[0]
if result.is_stray:
strays[result.stack] = result.stray_hosts
if result.is_duplicate:
duplicates[result.stack] = result.running_hosts
return discovered, strays, duplicates
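The categorization rules in `build_discovery_results` can be illustrated with plain dicts, no `Config` needed (host and stack names below are invented):

```python
# Which stacks each host reports as running (hypothetical data).
running_on_host = {
    "nas": {"plex", "grafana"},
    "mini": {"grafana", "adguard"},  # grafana also running here
}
# Where each single-host stack is configured to run.
configured = {
    "plex": ["nas"],      # running where configured
    "grafana": ["nas"],   # also running on mini -> stray + duplicate
    "adguard": ["nas"],   # only running on mini -> stray, not discovered
}

discovered, strays, duplicates = {}, {}, {}
for stack, conf_hosts in configured.items():
    running = [h for h in running_on_host if stack in running_on_host[h]]
    correct = [h for h in running if h in conf_hosts]
    if correct:
        discovered[stack] = correct[0]  # single-host: one name, not a list
    stray_hosts = [h for h in running if h not in conf_hosts]
    if stray_hosts:
        strays[stack] = stray_hosts
    if len(running) > 1:  # single-host stack running on multiple hosts
        duplicates[stack] = running
```

Note a stack can appear in more than one bucket: `grafana` is discovered on its configured host, stray on `mini`, and a duplicate overall.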

View File

@@ -0,0 +1,220 @@
"""Container registry API client for tag discovery."""
from __future__ import annotations
import re
from dataclasses import dataclass, field
from typing import TYPE_CHECKING
if TYPE_CHECKING:
import httpx
# Image reference pattern: [registry/][namespace/]name[:tag][@digest]
IMAGE_PATTERN = re.compile(
r"^(?:(?P<registry>[^/]+\.[^/]+)/)?(?:(?P<namespace>[^/:@]+)/)?(?P<name>[^/:@]+)(?::(?P<tag>[^@]+))?(?:@(?P<digest>.+))?$"
)
# Docker Hub aliases
DOCKER_HUB_ALIASES = frozenset(
{"docker.io", "index.docker.io", "registry.hub.docker.com", "registry-1.docker.io"}
)
# Token endpoints per registry: (url, extra_params)
TOKEN_ENDPOINTS: dict[str, tuple[str, dict[str, str]]] = {
"docker.io": ("https://auth.docker.io/token", {"service": "registry.docker.io"}),
"ghcr.io": ("https://ghcr.io/token", {}),
}
# Registry URL overrides (Docker Hub uses a different host for API)
REGISTRY_URLS: dict[str, str] = {
"docker.io": "https://registry-1.docker.io",
}
HTTP_OK = 200
MANIFEST_ACCEPT = (
"application/vnd.docker.distribution.manifest.v2+json, "
"application/vnd.oci.image.manifest.v1+json, "
"application/vnd.oci.image.index.v1+json"
)
@dataclass(frozen=True)
class ImageRef:
"""Parsed container image reference."""
registry: str
namespace: str
name: str
tag: str
digest: str | None = None
@property
def full_name(self) -> str:
"""Full image name with namespace."""
return f"{self.namespace}/{self.name}" if self.namespace else self.name
@property
def display_name(self) -> str:
"""Display name (omits docker.io/library for official images)."""
if self.registry in DOCKER_HUB_ALIASES:
if self.namespace == "library":
return self.name
return self.full_name
return f"{self.registry}/{self.full_name}"
@classmethod
def parse(cls, image: str) -> ImageRef:
"""Parse image string into components."""
match = IMAGE_PATTERN.match(image)
if not match:
return cls("docker.io", "library", image.split(":")[0].split("@")[0], "latest")
groups = match.groupdict()
registry = groups.get("registry") or "docker.io"
namespace = groups.get("namespace") or ""
name = groups.get("name") or image
tag = groups.get("tag") or "latest"
digest = groups.get("digest")
# Docker Hub official images have implicit "library" namespace
if registry in DOCKER_HUB_ALIASES and not namespace:
namespace = "library"
return cls(registry, namespace, name, tag, digest)
@dataclass
class TagCheckResult:
"""Result of checking tags for an image."""
image: ImageRef
current_digest: str
available_updates: list[str] = field(default_factory=list)
error: str | None = None
class RegistryClient:
"""Unified OCI Distribution API client."""
def __init__(self, registry: str) -> None:
"""Initialize for a specific registry."""
self.registry = registry.lower()
# Normalize Docker Hub aliases
if self.registry in DOCKER_HUB_ALIASES:
self.registry = "docker.io"
self.registry_url = REGISTRY_URLS.get(self.registry, f"https://{self.registry}")
self._token_cache: dict[str, str] = {}
async def _get_token(self, image: ImageRef, client: httpx.AsyncClient) -> str | None:
"""Get auth token for the registry (cached per image)."""
cache_key = image.full_name
if cache_key in self._token_cache:
return self._token_cache[cache_key]
endpoint = TOKEN_ENDPOINTS.get(self.registry)
if not endpoint:
return None # No auth needed or unknown registry
url, extra_params = endpoint
params = {"scope": f"repository:{image.full_name}:pull", **extra_params}
resp = await client.get(url, params=params)
if resp.status_code == HTTP_OK:
token: str | None = resp.json().get("token")
if token:
self._token_cache[cache_key] = token
return token
return None
async def get_tags(self, image: ImageRef, client: httpx.AsyncClient) -> list[str]:
"""Fetch available tags for an image."""
headers = {}
token = await self._get_token(image, client)
if token:
headers["Authorization"] = f"Bearer {token}"
url = f"{self.registry_url}/v2/{image.full_name}/tags/list"
resp = await client.get(url, headers=headers)
if resp.status_code != HTTP_OK:
return []
tags: list[str] = resp.json().get("tags", [])
return tags
async def get_digest(self, image: ImageRef, tag: str, client: httpx.AsyncClient) -> str | None:
"""Get digest for a specific tag."""
headers = {"Accept": MANIFEST_ACCEPT}
token = await self._get_token(image, client)
if token:
headers["Authorization"] = f"Bearer {token}"
url = f"{self.registry_url}/v2/{image.full_name}/manifests/{tag}"
resp = await client.head(url, headers=headers)
if resp.status_code == HTTP_OK:
digest: str | None = resp.headers.get("docker-content-digest")
return digest
return None
def _parse_version(tag: str) -> tuple[int, ...] | None:
"""Parse version string into comparable tuple."""
tag = tag.lstrip("vV")
parts = tag.split(".")
try:
return tuple(int(p) for p in parts)
except ValueError:
return None
def _find_updates(current_tag: str, tags: list[str]) -> list[str]:
"""Find tags newer than current based on version comparison."""
current_version = _parse_version(current_tag)
if current_version is None:
return []
updates = []
for tag in tags:
tag_version = _parse_version(tag)
if tag_version and tag_version > current_version:
updates.append(tag)
updates.sort(key=lambda t: _parse_version(t) or (), reverse=True)
return updates
async def check_image_updates(
image_str: str,
client: httpx.AsyncClient,
) -> TagCheckResult:
"""Check if newer versions are available for an image.
Args:
image_str: Image string like "nginx:1.25" or "ghcr.io/user/repo:tag"
client: httpx async client
Returns:
TagCheckResult with available updates
"""
image = ImageRef.parse(image_str)
registry_client = RegistryClient(image.registry)
try:
tags = await registry_client.get_tags(image, client)
updates = _find_updates(image.tag, tags)
current_digest = await registry_client.get_digest(image, image.tag, client) or ""
return TagCheckResult(
image=image,
current_digest=current_digest,
available_updates=updates,
)
except Exception as e:
return TagCheckResult(
image=image,
current_digest="",
error=str(e),
)
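The parsing and version-comparison pieces above can be sanity-checked standalone. This copies `IMAGE_PATTERN` verbatim and re-implements the idea behind `_parse_version`/`_find_updates` (a sketch, not the module's exact code):

```python
import re

# Verbatim copy of IMAGE_PATTERN from registry.py.
IMAGE_PATTERN = re.compile(
    r"^(?:(?P<registry>[^/]+\.[^/]+)/)?(?:(?P<namespace>[^/:@]+)/)?"
    r"(?P<name>[^/:@]+)(?::(?P<tag>[^@]+))?(?:@(?P<digest>.+))?$"
)

ghcr = IMAGE_PATTERN.match("ghcr.io/user/repo:1.2.3")
hub = IMAGE_PATTERN.match("nginx:1.25")  # no dot before '/', so no registry group

def parse_version(tag: str):
    """'v1.25.3' -> (1, 25, 3); non-numeric tags like 'latest' -> None."""
    try:
        return tuple(int(p) for p in tag.lstrip("vV").split("."))
    except ValueError:
        return None

def find_updates(current: str, tags: list[str]) -> list[str]:
    """Tags strictly newer than `current` by tuple comparison, newest first."""
    cur = parse_version(current)
    if cur is None:
        return []
    newer = [t for t in tags if (v := parse_version(t)) and v > cur]
    return sorted(newer, key=lambda t: parse_version(t) or (), reverse=True)
```

Tuple comparison gives the intended semantics for patch releases: `(1, 25, 3) > (1, 25)` is true, so `1.25.3` counts as an update over `1.25`, while `latest` is simply skipped.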

View File

@@ -6,15 +6,16 @@ import asyncio
import logging
import sys
from contextlib import asynccontextmanager, suppress
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Any, cast
from fastapi import FastAPI
from fastapi.middleware.gzip import GZipMiddleware
from fastapi.staticfiles import StaticFiles
from pydantic import ValidationError
from rich.logging import RichHandler
from compose_farm.web.deps import STATIC_DIR, get_config
from compose_farm.web.routes import actions, api, pages
from compose_farm.web.routes import actions, api, containers, pages
from compose_farm.web.streaming import TASK_TTL_SECONDS, cleanup_stale_tasks
# Configure logging with Rich handler for compose_farm.web modules
@@ -64,10 +65,14 @@ def create_app() -> FastAPI:
lifespan=lifespan,
)
# Enable Gzip compression for faster transfers over slow networks
app.add_middleware(cast("Any", GZipMiddleware), minimum_size=1000)
# Mount static files
app.mount("/static", StaticFiles(directory=str(STATIC_DIR)), name="static")
app.include_router(pages.router)
app.include_router(containers.router)
app.include_router(api.router, prefix="/api")
app.include_router(actions.router, prefix="/api")

View File

@@ -39,6 +39,14 @@ CDN_ASSETS: dict[str, tuple[str, str]] = {
"xterm-fit.js",
"application/javascript",
),
"https://unpkg.com/idiomorph/dist/idiomorph.min.js": (
"idiomorph.js",
"application/javascript",
),
"https://unpkg.com/idiomorph/dist/idiomorph-ext.min.js": (
"idiomorph-ext.js",
"application/javascript",
),
# Monaco editor - dynamically loaded by app.js
"https://cdn.jsdelivr.net/npm/monaco-editor@0.52.2/min/vs/loader.js": (
"monaco-loader.js",

View File

@@ -38,7 +38,17 @@ def get_templates() -> Jinja2Templates:
def extract_config_error(exc: Exception) -> str:
"""Extract a user-friendly error message from a config exception."""
if isinstance(exc, ValidationError):
return "; ".join(err.get("msg", str(err)) for err in exc.errors())
parts = []
for err in exc.errors():
msg = err.get("msg", str(err))
loc = err.get("loc", ())
if loc:
# Format location as dot-separated path (e.g., "hosts.nas.port")
loc_str = ".".join(str(part) for part in loc)
parts.append(f"{loc_str}: {msg}")
else:
parts.append(msg)
return "; ".join(parts)
return str(exc)
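The field-location formatting can be demonstrated without Pydantic by feeding dicts shaped like `ValidationError.errors()` output (field names below are invented):

```python
# Sample dicts mimicking pydantic's ValidationError.errors() shape.
errors = [
    {"loc": ("hosts", "nas", "port"), "msg": "Input should be a valid integer"},
    {"loc": ("unknown_key",), "msg": "Extra inputs are not permitted"},
    {"msg": "value error"},  # error with no location attached
]

parts = []
for err in errors:
    msg = err.get("msg", str(err))
    loc = err.get("loc", ())
    # Dot-join the location path, as extract_config_error does above
    parts.append(f"{'.'.join(str(p) for p in loc)}: {msg}" if loc else msg)
message = "; ".join(parts)
```

The result names the offending field (`unknown_key: Extra inputs are not permitted`) instead of a bare message, which is the whole point of #131.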

View File

@@ -1,5 +1,5 @@
"""Web routes."""
from compose_farm.web.routes import actions, api, pages
from compose_farm.web.routes import actions, api, containers, pages
__all__ = ["actions", "api", "pages"]
__all__ = ["actions", "api", "containers", "pages"]

View File

@@ -21,6 +21,7 @@ from fastapi.responses import HTMLResponse
from compose_farm.compose import get_container_name
from compose_farm.executor import is_local, run_compose_on_host, ssh_connect_kwargs
from compose_farm.glances import fetch_all_host_stats
from compose_farm.paths import backup_dir, find_config_path
from compose_farm.state import load_state
from compose_farm.web.deps import get_config, get_templates
@@ -385,3 +386,19 @@ async def write_console_file(
except Exception as e:
logger.exception("Failed to write file %s to host %s", path, host)
raise HTTPException(status_code=500, detail=str(e)) from e
@router.get("/glances", response_class=HTMLResponse)
async def get_glances_stats() -> HTMLResponse:
"""Get resource stats from Glances for all hosts."""
config = get_config()
if not config.glances_stack:
return HTMLResponse("") # Glances not configured
stats = await fetch_all_host_stats(config)
templates = get_templates()
template = templates.env.get_template("partials/glances.html")
html = template.render(stats=stats)
return HTMLResponse(html)

View File

@@ -0,0 +1,370 @@
"""Container dashboard routes using Glances API."""
from __future__ import annotations
import html
import re
from typing import TYPE_CHECKING
from urllib.parse import quote
import humanize
from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse, JSONResponse
from compose_farm.executor import TTLCache
from compose_farm.glances import ContainerStats, fetch_all_container_stats
from compose_farm.registry import DOCKER_HUB_ALIASES, ImageRef
from compose_farm.web.deps import get_config, get_templates
router = APIRouter(tags=["containers"])
if TYPE_CHECKING:
from compose_farm.registry import TagCheckResult
# Cache registry update checks for 5 minutes (300 seconds)
# Registry calls are slow and often rate-limited
_update_check_cache = TTLCache(ttl_seconds=300.0)
# Minimum parts needed to infer stack/service from container name
MIN_NAME_PARTS = 2
# HTML for "no update info" dash
_DASH_HTML = '<span class="text-xs opacity-50">-</span>'
def _format_bytes(bytes_val: int) -> str:
"""Format bytes to human readable string."""
return humanize.naturalsize(bytes_val, binary=True, format="%.1f")
def _parse_image(image: str) -> tuple[str, str]:
"""Parse image string into (name, tag)."""
# Handle registry prefix (e.g., ghcr.io/user/repo:tag)
if ":" in image:
# Find last colon that's not part of port
parts = image.rsplit(":", 1)
if "/" in parts[-1]:
# The "tag" contains a slash, so it's probably a port
return image, "latest"
return parts[0], parts[1]
return image, "latest"
def _infer_stack_service(name: str) -> tuple[str, str]:
"""Fallback: infer stack and service from container name.
Used when compose labels are not available.
Docker Compose naming conventions:
- Default: {project}_{service}_{instance} or {project}-{service}-{instance}
- Custom: {container_name} from compose file
"""
# Try underscore separator first (older compose)
if "_" in name:
parts = name.split("_")
if len(parts) >= MIN_NAME_PARTS:
return parts[0], parts[1]
# Try hyphen separator (newer compose)
if "-" in name:
parts = name.split("-")
if len(parts) >= MIN_NAME_PARTS:
return parts[0], "-".join(parts[1:-1]) if len(parts) > MIN_NAME_PARTS else parts[1]
# Fallback: use name as both stack and service
return name, name
@router.get("/live-stats", response_class=HTMLResponse)
async def containers_page(request: Request) -> HTMLResponse:
"""Container dashboard page."""
config = get_config()
templates = get_templates()
# Check if Glances is configured
glances_enabled = config.glances_stack is not None
return templates.TemplateResponse(
"containers.html",
{
"request": request,
"glances_enabled": glances_enabled,
"hosts": sorted(config.hosts.keys()) if glances_enabled else [],
},
)
_STATUS_CLASSES = {
"running": "badge badge-success badge-sm",
"exited": "badge badge-error badge-sm",
"paused": "badge badge-warning badge-sm",
}
def _status_class(status: str) -> str:
"""Get CSS class for status badge."""
return _STATUS_CLASSES.get(status.lower(), "badge badge-ghost badge-sm")
def _progress_class(percent: float) -> str:
"""Get CSS class for progress bar color."""
if percent > 80: # noqa: PLR2004
return "bg-error"
if percent > 50: # noqa: PLR2004
return "bg-warning"
return "bg-success"
def _render_update_cell(image: str, tag: str) -> str:
"""Render update check cell with client-side batch updates."""
encoded_image = quote(image, safe="")
encoded_tag = quote(tag, safe="")
cached_html = _update_check_cache.get(f"{image}:{tag}")
inner = cached_html if cached_html is not None else _DASH_HTML
return (
f"""<td class="update-cell" data-image="{encoded_image}" data-tag="{encoded_tag}">"""
f"{inner}</td>"
)
def _image_web_url(image: str) -> str | None:
"""Return a human-friendly registry URL for an image (without tag)."""
ref = ImageRef.parse(image)
if ref.registry in DOCKER_HUB_ALIASES:
if ref.namespace == "library":
return f"https://hub.docker.com/_/{ref.name}"
return f"https://hub.docker.com/r/{ref.namespace}/{ref.name}"
return f"https://{ref.registry}/{ref.full_name}"
def _render_row(c: ContainerStats, idx: int | str) -> str:
"""Render a single container as an HTML table row."""
image_name, tag = _parse_image(c.image)
stack = c.stack if c.stack else _infer_stack_service(c.name)[0]
service = c.service if c.service else _infer_stack_service(c.name)[1]
cpu = c.cpu_percent
mem = c.memory_percent
cpu_class = _progress_class(cpu)
mem_class = _progress_class(mem)
# Highlight rows with high resource usage
high_cpu = cpu > 80 # noqa: PLR2004
high_mem = mem > 90 # noqa: PLR2004
row_class = "high-usage" if (high_cpu or high_mem) else ""
uptime_sec = _parse_uptime_seconds(c.uptime)
actions = _render_actions(stack)
update_cell = _render_update_cell(image_name, tag)
image_label = f"{image_name}:{tag}"
image_url = _image_web_url(image_name)
if image_url:
image_html = (
f'<a href="{image_url}" target="_blank" rel="noopener noreferrer" '
f'class="link link-hover">'
f'<code class="text-xs bg-base-200 px-1 rounded">{image_label}</code></a>'
)
else:
image_html = f'<code class="text-xs bg-base-200 px-1 rounded">{image_label}</code>'
# Render as single line to avoid whitespace nodes in DOM
row_id = f"c-{c.host}-{c.name}"
class_attr = f' class="{row_class}"' if row_class else ""
return (
f'<tr id="{row_id}" data-host="{c.host}"{class_attr}><td class="text-xs opacity-50">{idx}</td>'
f'<td data-sort="{stack.lower()}"><a href="/stack/{stack}" class="link link-hover link-primary" hx-boost="true">{stack}</a></td>'
f'<td data-sort="{service.lower()}" class="text-xs opacity-70">{service}</td>'
f"<td>{actions}</td>"
f'<td data-sort="{c.host.lower()}"><span class="badge badge-outline badge-xs">{c.host}</span></td>'
f'<td data-sort="{c.image.lower()}">{image_html}</td>'
f"{update_cell}"
f'<td data-sort="{c.status.lower()}"><span class="{_status_class(c.status)}">{c.status}</span></td>'
f'<td data-sort="{uptime_sec}" class="text-xs text-right font-mono">{c.uptime or "-"}</td>'
f'<td data-sort="{cpu}" class="text-right font-mono"><div class="flex flex-col items-end gap-0.5"><div class="w-12 h-2 bg-base-300 rounded-full overflow-hidden"><div class="h-full {cpu_class}" style="width: {min(cpu, 100)}%"></div></div><span class="text-xs">{cpu:.0f}%</span></div></td>'
f'<td data-sort="{c.memory_usage}" class="text-right font-mono"><div class="flex flex-col items-end gap-0.5"><div class="w-12 h-2 bg-base-300 rounded-full overflow-hidden"><div class="h-full {mem_class}" style="width: {min(mem, 100)}%"></div></div><span class="text-xs">{_format_bytes(c.memory_usage)}</span></div></td>'
f'<td data-sort="{c.network_rx + c.network_tx}" class="text-xs text-right font-mono">↓{_format_bytes(c.network_rx)} ↑{_format_bytes(c.network_tx)}</td>'
"</tr>"
)
def _render_actions(stack: str) -> str:
"""Render actions dropdown for a container row."""
return f"""<button class="btn btn-circle btn-ghost btn-xs" onclick="openActionMenu(event, '{stack}')" aria-label="Actions for {stack}">
<svg class="h-4 w-4"><use href="#icon-menu" /></svg>
</button>"""
def _parse_uptime_seconds(uptime: str) -> int:
"""Parse uptime string to seconds for sorting."""
if not uptime:
return 0
uptime = uptime.lower().strip()
# Handle "a/an" as 1
uptime = uptime.replace("an ", "1 ").replace("a ", "1 ")
total = 0
multipliers = {
"second": 1,
"minute": 60,
"hour": 3600,
"day": 86400,
"week": 604800,
"month": 2592000,
"year": 31536000,
}
for match in re.finditer(r"(\d+)\s*(\w+)", uptime):
num = int(match.group(1))
unit = match.group(2).rstrip("s") # Remove plural 's'
total += num * multipliers.get(unit, 0)
return total
@router.get("/api/containers/rows", response_class=HTMLResponse)
async def get_containers_rows() -> HTMLResponse:
"""Get container table rows as HTML for HTMX.
Each cell has data-sort attribute for instant client-side sorting.
"""
config = get_config()
if not config.glances_stack:
return HTMLResponse(
'<tr><td colspan="12" class="text-center text-error">Glances not configured</td></tr>'
)
containers = await fetch_all_container_stats(config)
if not containers:
return HTMLResponse(
'<tr><td colspan="12" class="text-center py-4 opacity-60">No containers found</td></tr>'
)
rows = "\n".join(_render_row(c, i + 1) for i, c in enumerate(containers))
return HTMLResponse(rows)
@router.get("/api/containers/rows/{host_name}", response_class=HTMLResponse)
async def get_containers_rows_by_host(host_name: str) -> HTMLResponse:
"""Get container rows for a specific host.
Returns immediately with Glances data. Stack/service are inferred from
container names for instant display (no SSH wait).
"""
import logging # noqa: PLC0415
import time # noqa: PLC0415
from compose_farm.executor import get_container_compose_labels # noqa: PLC0415
from compose_farm.glances import fetch_container_stats # noqa: PLC0415
logger = logging.getLogger(__name__)
config = get_config()
if host_name not in config.hosts:
return HTMLResponse("")
host = config.hosts[host_name]
t0 = time.monotonic()
containers, error = await fetch_container_stats(host_name, host.address)
t1 = time.monotonic()
fetch_ms = (t1 - t0) * 1000
if containers is None:
logger.error(
"Failed to fetch stats for %s in %.1fms: %s",
host_name,
fetch_ms,
error,
)
return HTMLResponse(
f'<tr class="text-error"><td colspan="12" class="text-center py-2">Error: {error}</td></tr>'
)
if not containers:
return HTMLResponse("") # No rows for this host
labels = await get_container_compose_labels(config, host_name)
for c in containers:
stack, service = labels.get(c.name, ("", ""))
if not stack or not service:
stack, service = _infer_stack_service(c.name)
c.stack, c.service = stack, service
# Only show containers from stacks in config (filters out orphaned/unknown stacks)
containers = [c for c in containers if not c.stack or c.stack in config.stacks]
# Use placeholder index (will be renumbered by JS after all hosts load)
rows = "\n".join(_render_row(c, "-") for c in containers)
t2 = time.monotonic()
render_ms = (t2 - t1) * 1000
logger.info(
"Loaded %d rows for %s in %.1fms (fetch) + %.1fms (render)",
len(containers),
host_name,
fetch_ms,
render_ms,
)
return HTMLResponse(rows)
def _render_update_badge(result: TagCheckResult) -> str:
if result.error:
return _DASH_HTML
if result.available_updates:
updates = result.available_updates
count = len(updates)
title = f"Newer: {', '.join(updates[:3])}" + ("..." if count > 3 else "") # noqa: PLR2004
tip = html.escape(title, quote=True)
return (
f'<span class="tooltip" data-tip="{tip}">'
f'<span class="badge badge-warning badge-xs cursor-help">{count} new</span>'
"</span>"
)
return '<span class="tooltip" data-tip="Up to date"><span class="text-success text-xs">✓</span></span>'
@router.post("/api/containers/check-updates", response_class=JSONResponse)
async def check_container_updates_batch(request: Request) -> JSONResponse:
"""Batch update checks for a list of images.
Payload: {"items": [{"image": "...", "tag": "..."}, ...]}
Returns: {"results": [{"image": "...", "tag": "...", "html": "..."}, ...]}
"""
import httpx # noqa: PLC0415
payload = await request.json()
items = payload.get("items", []) if isinstance(payload, dict) else []
if not items:
return JSONResponse({"results": []})
results = []
from compose_farm.registry import check_image_updates # noqa: PLC0415
async with httpx.AsyncClient(timeout=10.0) as client:
for item in items:
image = item.get("image", "")
tag = item.get("tag", "")
full_image = f"{image}:{tag}"
if not image or not tag:
results.append({"image": image, "tag": tag, "html": _DASH_HTML})
continue
# NOTE: Tag-based checks cannot detect digest changes for moving tags
# like "latest". A future improvement could compare remote vs local
# digests using dockerfarm-log.toml (from `cf refresh`) or a per-host
# digest lookup.
cached_html: str | None = _update_check_cache.get(full_image)
if cached_html is not None:
results.append({"image": image, "tag": tag, "html": cached_html})
continue
try:
result = await check_image_updates(full_image, client)
html = _render_update_badge(result)
_update_check_cache.set(full_image, html)
except Exception:
_update_check_cache.set(full_image, _DASH_HTML, ttl_seconds=60.0)
html = _DASH_HTML
results.append({"image": image, "tag": tag, "html": html})
return JSONResponse({"results": results})
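The uptime-to-seconds conversion used for client-side sorting can be exercised standalone (same multipliers as `_parse_uptime_seconds` above; a sketch, condensed into one expression):

```python
import re

# Seconds per unit, matching the multipliers table in containers.py.
MULTIPLIERS = {
    "second": 1, "minute": 60, "hour": 3600, "day": 86400,
    "week": 604800, "month": 2592000, "year": 31536000,
}

def parse_uptime_seconds(uptime: str) -> int:
    """'2 hours' -> 7200; 'an hour' -> 3600; unknown units count as 0."""
    if not uptime:
        return 0
    # Glances reports "an hour" / "a minute"; normalize the article to 1
    uptime = uptime.lower().strip().replace("an ", "1 ").replace("a ", "1 ")
    return sum(
        int(m.group(1)) * MULTIPLIERS.get(m.group(2).rstrip("s"), 0)
        for m in re.finditer(r"(\d+)\s*(\w+)", uptime)
    )
```

Storing this integer in `data-sort` lets the table sort "3 days" above "an hour" numerically instead of lexicographically.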

View File

@@ -9,7 +9,6 @@
// ANSI escape codes for terminal output
const ANSI = {
RED: '\x1b[31m',
GREEN: '\x1b[32m',
DIM: '\x1b[2m',
RESET: '\x1b[0m',
CRLF: '\r\n'
@@ -122,7 +121,6 @@ function whenXtermReady(callback, maxAttempts = 20) {
};
tryInit(maxAttempts);
}
window.whenXtermReady = whenXtermReady;
// ============================================================================
// TERMINAL
@@ -209,8 +207,6 @@ function initTerminal(elementId, taskId) {
return { term, ws };
}
window.initTerminal = initTerminal;
/**
* Initialize an interactive exec terminal
*/
@@ -432,7 +428,7 @@ function initMonacoEditors() {
* Save all editors
*/
async function saveAllEditors() {
const saveBtn = document.getElementById('save-btn') || document.getElementById('save-config-btn');
const saveBtn = getSaveButton();
const results = [];
for (const [id, editor] of Object.entries(editors)) {
@@ -468,12 +464,16 @@ async function saveAllEditors() {
* Initialize save button handler
*/
function initSaveButton() {
const saveBtn = document.getElementById('save-btn') || document.getElementById('save-config-btn');
const saveBtn = getSaveButton();
if (!saveBtn) return;
saveBtn.onclick = saveAllEditors;
}
function getSaveButton() {
return document.getElementById('save-btn') || document.getElementById('save-config-btn');
}
// ============================================================================
// UI HELPERS
// ============================================================================
@@ -607,10 +607,11 @@ function playFabIntro() {
cmd('action', 'Update All', 'Update all stacks', dashboardAction('update-all'), icons.refresh_cw),
cmd('app', 'Theme', 'Change color theme', openThemePicker, icons.palette),
cmd('app', 'Dashboard', 'Go to dashboard', nav('/'), icons.home),
cmd('app', 'Live Stats', 'View all containers across hosts', nav('/live-stats'), icons.box),
cmd('app', 'Console', 'Go to console', nav('/console'), icons.terminal),
cmd('app', 'Edit Config', 'Edit compose-farm.yaml', nav('/console#editor'), icons.file_code),
cmd('app', 'Docs', 'Open documentation', openExternal('https://compose-farm.nijho.lt/'), icons.book_open),
cmd('app', 'Repo', 'Open GitHub repository', openExternal('https://github.com/basnijholt/compose-farm'), icons.external_link),
cmd('app', 'GitHub Repo', 'Open GitHub repository', openExternal('https://github.com/basnijholt/compose-farm'), icons.external_link),
];
// Add stack-specific actions if on a stack page
@@ -743,11 +744,6 @@ function playFabIntro() {
input.focus();
}
function close() {
dialog.close();
restoreTheme();
}
function exec() {
const cmd = filtered[selected];
if (cmd) {
@@ -869,6 +865,119 @@ function initPage() {
initMonacoEditors();
initSaveButton();
updateShortcutKeys();
initLiveStats();
initSharedActionMenu();
maybeRunStackAction();
}
function navigateToStack(stack, action = null) {
const url = action ? `/stack/${stack}?action=${action}` : `/stack/${stack}`;
window.location.href = url;
}
/**
* Initialize shared action menu for container rows
*/
function initSharedActionMenu() {
const menuEl = document.getElementById('shared-action-menu');
if (!menuEl) return;
if (menuEl.dataset.bound === '1') return;
menuEl.dataset.bound = '1';
let hoverTimeout = null;
function showMenuForButton(btn, stack) {
menuEl.dataset.stack = stack;
// Position menu relative to button
const rect = btn.getBoundingClientRect();
menuEl.classList.remove('hidden');
menuEl.style.visibility = 'hidden';
const menuRect = menuEl.getBoundingClientRect();
const left = rect.right - menuRect.width + window.scrollX;
const top = rect.bottom + window.scrollY;
menuEl.style.top = `${top}px`;
menuEl.style.left = `${left}px`;
menuEl.style.visibility = '';
if (typeof liveStats !== 'undefined') liveStats.dropdownOpen = true;
}
function closeMenu() {
menuEl.classList.add('hidden');
if (typeof liveStats !== 'undefined') liveStats.dropdownOpen = false;
menuEl.dataset.stack = '';
}
function scheduleClose() {
if (hoverTimeout) clearTimeout(hoverTimeout);
hoverTimeout = setTimeout(closeMenu, 100);
}
function cancelClose() {
if (hoverTimeout) {
clearTimeout(hoverTimeout);
hoverTimeout = null;
}
}
// Button hover: show menu (event delegation on tbody)
const tbody = document.getElementById('container-rows');
if (tbody) {
tbody.addEventListener('mouseenter', (e) => {
const btn = e.target.closest('button[onclick^="openActionMenu"]');
if (!btn) return;
// Extract stack from onclick attribute
const match = btn.getAttribute('onclick')?.match(/openActionMenu\(event,\s*'([^']+)'\)/);
if (!match) return;
cancelClose();
showMenuForButton(btn, match[1]);
}, true);
tbody.addEventListener('mouseleave', (e) => {
const btn = e.target.closest('button[onclick^="openActionMenu"]');
if (btn) scheduleClose();
}, true);
}
// Keep menu open while hovering over it
menuEl.addEventListener('mouseenter', cancelClose);
menuEl.addEventListener('mouseleave', scheduleClose);
// Click action in menu
menuEl.addEventListener('click', (e) => {
const link = e.target.closest('a[data-action]');
const stack = menuEl.dataset.stack;
if (!link || !stack) return;
e.preventDefault();
navigateToStack(stack, link.dataset.action);
closeMenu();
});
// Also support click on button (for touch/accessibility)
window.openActionMenu = function(event, stack) {
event.stopPropagation();
showMenuForButton(event.currentTarget, stack);
};
// Close on outside click
document.body.addEventListener('click', (e) => {
if (!menuEl.classList.contains('hidden') &&
!menuEl.contains(e.target) &&
!e.target.closest('button[onclick^="openActionMenu"]')) {
closeMenu();
}
});
// Close on Escape
document.body.addEventListener('keydown', (e) => {
if (e.key === 'Escape') closeMenu();
});
}
/**
@@ -889,6 +998,30 @@ function tryReconnectToTask(path) {
});
}
function maybeRunStackAction() {
const params = new URLSearchParams(window.location.search);
const stackEl = document.querySelector('[data-stack-name]');
const stackName = stackEl?.dataset?.stackName;
if (!stackName) return;
const action = params.get('action');
if (!action) return;
const button = document.querySelector(`button[hx-post="/api/stack/${stackName}/${action}"]`);
if (!button) return;
params.delete('action');
const newQuery = params.toString();
const newUrl = newQuery ? `${window.location.pathname}?${newQuery}` : window.location.pathname;
history.replaceState({}, '', newUrl);
if (window.htmx) {
htmx.trigger(button, 'click');
} else {
button.click();
}
}
// Initialize on page load
document.addEventListener('DOMContentLoaded', function() {
initPage();
@@ -930,3 +1063,443 @@ document.body.addEventListener('htmx:afterRequest', function(evt) {
// Not valid JSON, ignore
}
});
// ============================================================================
// LIVE STATS PAGE
// ============================================================================
// State persists across SPA navigation (intervals must be cleared on re-init)
let liveStats = {
sortCol: 9,
sortAsc: false,
lastUpdate: 0,
dropdownOpen: false,
scrolling: false,
scrollTimer: null,
loadingHosts: new Set(),
eventsBound: false,
intervals: [],
updateCheckTimes: new Map(),
autoRefresh: true
};
const REFRESH_INTERVAL = 5000;
const UPDATE_CHECK_TTL = 120000;
const NUMERIC_COLS = new Set([8, 9, 10, 11]); // uptime, cpu, mem, net
function filterTable() {
const textFilter = document.getElementById('filter-input')?.value.toLowerCase() || '';
const hostFilter = document.getElementById('host-filter')?.value || '';
const rows = document.querySelectorAll('#container-rows tr');
let visible = 0;
let total = 0;
rows.forEach(row => {
// Skip loading/empty/error rows (they have colspan)
if (row.cells[0]?.colSpan > 1) return;
total++;
const matchesText = !textFilter || row.textContent.toLowerCase().includes(textFilter);
const matchesHost = !hostFilter || row.dataset.host === hostFilter;
const show = matchesText && matchesHost;
row.style.display = show ? '' : 'none';
if (show) visible++;
});
const countEl = document.getElementById('container-count');
if (countEl) {
const isFiltering = textFilter || hostFilter;
countEl.textContent = total > 0
? (isFiltering ? `${visible} of ${total} containers` : `${total} containers`)
: '';
}
}
window.filterTable = filterTable;
function sortTable(col) {
if (liveStats.sortCol === col) {
liveStats.sortAsc = !liveStats.sortAsc;
} else {
liveStats.sortCol = col;
liveStats.sortAsc = false;
}
updateSortIndicators();
doSort();
}
window.sortTable = sortTable;
function updateSortIndicators() {
document.querySelectorAll('thead th').forEach((th, i) => {
const span = th.querySelector('.sort-indicator');
if (span) {
span.textContent = (i === liveStats.sortCol) ? (liveStats.sortAsc ? '↑' : '↓') : '';
span.style.opacity = (i === liveStats.sortCol) ? '1' : '0.3';
}
});
}
function doSort() {
const tbody = document.getElementById('container-rows');
if (!tbody) return;
const rows = Array.from(tbody.querySelectorAll('tr'));
if (rows.length === 0) return;
if (rows.length === 1 && rows[0].cells[0]?.colSpan > 1) return; // Empty state row
const isNumeric = NUMERIC_COLS.has(liveStats.sortCol);
rows.sort((a, b) => {
// Pin placeholders/empty rows to the bottom
const aLoading = a.classList.contains('loading-row') || a.classList.contains('host-empty') || a.cells[0]?.colSpan > 1;
const bLoading = b.classList.contains('loading-row') || b.classList.contains('host-empty') || b.cells[0]?.colSpan > 1;
if (aLoading && !bLoading) return 1;
if (!aLoading && bLoading) return -1;
if (aLoading && bLoading) return 0;
const aVal = a.cells[liveStats.sortCol]?.dataset?.sort ?? '';
const bVal = b.cells[liveStats.sortCol]?.dataset?.sort ?? '';
const cmp = isNumeric ? Number(aVal) - Number(bVal) : aVal.localeCompare(bVal);
return liveStats.sortAsc ? cmp : -cmp;
});
let index = 1;
const fragment = document.createDocumentFragment();
rows.forEach((row) => {
if (row.cells.length > 1) {
row.cells[0].textContent = index++;
}
fragment.appendChild(row);
});
tbody.appendChild(fragment);
}
function isLoading() {
return liveStats.loadingHosts.size > 0;
}
function getLiveStatsHosts() {
const tbody = document.getElementById('container-rows');
if (!tbody) return [];
const dataHosts = tbody.dataset.hosts || '';
return dataHosts.split(',').map(h => h.trim()).filter(Boolean);
}
function buildHostRow(host, message, className) {
return (
`<tr class="${className}" data-host="${host}">` +
`<td colspan="12" class="text-center py-2">` +
`<span class="text-sm opacity-60">${message}</span>` +
`</td></tr>`
);
}
async function checkUpdatesForHost(host) {
// Update checks always run: they only touch the small update cells, so they are not disruptive
const last = liveStats.updateCheckTimes.get(host) || 0;
if (Date.now() - last < UPDATE_CHECK_TTL) return;
const cells = Array.from(
document.querySelectorAll(`tr[data-host="${host}"] td.update-cell[data-image][data-tag]`)
);
if (cells.length === 0) return;
const items = [];
const seen = new Set();
cells.forEach(cell => {
const image = decodeURIComponent(cell.dataset.image || '');
const tag = decodeURIComponent(cell.dataset.tag || '');
const key = `${image}:${tag}`;
if (!image || seen.has(key)) return;
seen.add(key);
items.push({ image, tag });
});
if (items.length === 0) return;
try {
const response = await fetch('/api/containers/check-updates', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ items })
});
if (!response.ok) return;
const data = await response.json();
const results = Array.isArray(data?.results) ? data.results : [];
const htmlMap = new Map();
results.forEach(result => {
const key = `${result.image}:${result.tag}`;
htmlMap.set(key, result.html);
});
cells.forEach(cell => {
const image = decodeURIComponent(cell.dataset.image || '');
const tag = decodeURIComponent(cell.dataset.tag || '');
const key = `${image}:${tag}`;
const html = htmlMap.get(key);
if (html && cell.innerHTML !== html) {
cell.innerHTML = html;
}
});
liveStats.updateCheckTimes.set(host, Date.now());
} catch (e) {
console.error('Update check failed:', e);
}
}
function replaceHostRows(host, html) {
const tbody = document.getElementById('container-rows');
if (!tbody) return;
// Remove loading indicator for this host if present
const loadingRow = tbody.querySelector(`tr.loading-row[data-host="${host}"]`);
if (loadingRow) loadingRow.remove();
const template = document.createElement('template');
template.innerHTML = html.trim();
let newRows = Array.from(template.content.children).filter(el => el.tagName === 'TR');
if (newRows.length === 0) {
// Only show empty message if we don't have any rows for this host
const existing = tbody.querySelector(`tr[data-host="${host}"]:not(.loading-row)`);
if (!existing) {
template.innerHTML = buildHostRow(host, `No containers on ${host}`, 'host-empty');
newRows = Array.from(template.content.children);
}
}
// Track which IDs we've seen in this update
const newIds = new Set();
newRows.forEach(newRow => {
const id = newRow.id;
if (id) newIds.add(id);
if (id) {
const existing = document.getElementById(id);
if (existing) {
// Morph in place if Idiomorph is available, otherwise replace
if (typeof Idiomorph !== 'undefined') {
Idiomorph.morph(existing, newRow);
} else {
existing.replaceWith(newRow);
}
// Re-process HTMX if needed (though inner content usually carries attributes)
const morphedRow = document.getElementById(id);
if (window.htmx) htmx.process(morphedRow);
// Trigger refresh animation
if (morphedRow) {
morphedRow.classList.add('row-updated');
setTimeout(() => morphedRow.classList.remove('row-updated'), 500);
}
} else {
// New row - append (will be sorted later)
tbody.appendChild(newRow);
if (window.htmx) htmx.process(newRow);
// Animate new rows too
newRow.classList.add('row-updated');
setTimeout(() => newRow.classList.remove('row-updated'), 500);
}
} else {
// Rows without an ID (error/empty messages): append as-is.
// Stale generic rows are cleaned up in the orphan pass below.
tbody.appendChild(newRow);
}
});
// Remove orphaned rows for this host (rows that exist in DOM but not in new response)
// Be careful not to remove rows that were just added (if they lack IDs)
const currentHostRows = Array.from(tbody.querySelectorAll(`tr[data-host="${host}"]`));
currentHostRows.forEach(row => {
// Data rows always carry IDs, so orphan detection relies on ID matching.
if (row.id && !newIds.has(row.id)) {
row.remove();
}
// Also remove old empty/error messages if we now have data
if (!row.id && newRows.length > 0 && newRows[0].id) {
row.remove();
}
});
liveStats.loadingHosts.delete(host);
checkUpdatesForHost(host);
scheduleRowUpdate();
}
async function loadHostRows(host) {
liveStats.loadingHosts.add(host);
try {
const response = await fetch(`/api/containers/rows/${encodeURIComponent(host)}`);
const html = response.ok ? await response.text() : '';
replaceHostRows(host, html);
} catch (e) {
console.error(`Failed to load ${host}:`, e);
const msg = e.message || String(e);
// Fallback to simpler error display if replaceHostRows fails (e.g. Idiomorph missing)
try {
replaceHostRows(host, buildHostRow(host, `Error: ${msg}`, 'text-error'));
} catch (err2) {
// Last resort: find row and force innerHTML
const tbody = document.getElementById('container-rows');
const row = tbody?.querySelector(`tr[data-host="${host}"]`);
if (row) row.innerHTML = `<td colspan="12" class="text-center text-error">Error: ${msg}</td>`;
}
} finally {
liveStats.loadingHosts.delete(host);
}
}
function refreshLiveStats() {
if (liveStats.dropdownOpen || liveStats.scrolling) return;
const hosts = getLiveStatsHosts();
if (hosts.length === 0) return;
liveStats.lastUpdate = Date.now();
hosts.forEach(loadHostRows);
}
window.refreshLiveStats = refreshLiveStats;
function toggleAutoRefresh() {
liveStats.autoRefresh = !liveStats.autoRefresh;
const timer = document.getElementById('refresh-timer');
if (timer) {
timer.classList.toggle('btn-error', !liveStats.autoRefresh);
timer.classList.toggle('btn-outline', liveStats.autoRefresh);
}
if (liveStats.autoRefresh) {
// Re-enabling: trigger immediate refresh
refreshLiveStats();
} else {
// Disabling: ensure update checks run for current data
const hosts = getLiveStatsHosts();
hosts.forEach(host => checkUpdatesForHost(host));
}
}
window.toggleAutoRefresh = toggleAutoRefresh;
function initLiveStats() {
if (!document.getElementById('refresh-timer')) return;
// Clear previous intervals (important for SPA navigation)
liveStats.intervals.forEach(clearInterval);
liveStats.intervals = [];
liveStats.lastUpdate = Date.now();
liveStats.dropdownOpen = false;
liveStats.scrolling = false;
if (liveStats.scrollTimer) clearTimeout(liveStats.scrollTimer);
liveStats.scrollTimer = null;
liveStats.loadingHosts.clear();
liveStats.updateCheckTimes = new Map();
liveStats.autoRefresh = true;
if (!liveStats.eventsBound) {
liveStats.eventsBound = true;
// Dropdown pauses refresh
document.body.addEventListener('click', e => {
liveStats.dropdownOpen = !!e.target.closest('.dropdown');
});
document.body.addEventListener('focusin', e => {
if (e.target.closest('.dropdown')) liveStats.dropdownOpen = true;
});
document.body.addEventListener('focusout', () => {
setTimeout(() => {
liveStats.dropdownOpen = !!document.activeElement?.closest('.dropdown');
}, 150);
});
document.body.addEventListener('keydown', e => {
if (e.key === 'Escape') liveStats.dropdownOpen = false;
});
// Pause refresh while scrolling (helps on slow mobile browsers)
window.addEventListener('scroll', () => {
liveStats.scrolling = true;
if (liveStats.scrollTimer) clearTimeout(liveStats.scrollTimer);
liveStats.scrollTimer = setTimeout(() => {
liveStats.scrolling = false;
}, 200);
}, { passive: true });
}
// Auto-refresh every 5 seconds (skip if disabled, loading, or dropdown open)
liveStats.intervals.push(setInterval(() => {
if (!liveStats.autoRefresh) return;
if (liveStats.dropdownOpen || liveStats.scrolling || isLoading()) return;
refreshLiveStats();
}, REFRESH_INTERVAL));
// Timer display (updates every 100ms)
liveStats.intervals.push(setInterval(() => {
const timer = document.getElementById('refresh-timer');
if (!timer) {
liveStats.intervals.forEach(clearInterval);
return;
}
const loading = isLoading();
const paused = liveStats.dropdownOpen || liveStats.scrolling;
const elapsed = Date.now() - liveStats.lastUpdate;
window.refreshPaused = paused || loading || !liveStats.autoRefresh;
// Update refresh timer button
let text;
if (!liveStats.autoRefresh) {
text = 'OFF';
} else if (paused) {
text = '❚❚';
} else {
const remaining = Math.max(0, REFRESH_INTERVAL - elapsed);
text = loading ? '↻ …' : `${Math.ceil(remaining / 1000)}s`;
}
if (timer.textContent !== text) {
timer.textContent = text;
}
// Update "last updated" display
const lastUpdatedEl = document.getElementById('last-updated');
if (lastUpdatedEl) {
const secs = Math.floor(elapsed / 1000);
const updatedText = secs < 5 ? 'Updated just now' : `Updated ${secs}s ago`;
if (lastUpdatedEl.textContent !== updatedText) {
lastUpdatedEl.textContent = updatedText;
}
}
}, 100));
updateSortIndicators();
refreshLiveStats();
}
function scheduleRowUpdate() {
// Sort and filter immediately to prevent flicker
doSort();
filterTable();
}
// ============================================================================
// STACKS BY HOST FILTER
// ============================================================================
function sbhFilter() {
const query = (document.getElementById('sbh-filter')?.value || '').toLowerCase();
const hostFilter = document.getElementById('sbh-host-select')?.value || '';
document.querySelectorAll('.sbh-group').forEach(group => {
if (hostFilter && group.dataset.h !== hostFilter) {
group.hidden = true;
return;
}
let visibleCount = 0;
group.querySelectorAll('li[data-s]').forEach(li => {
const show = !query || li.dataset.s.includes(query);
li.hidden = !show;
if (show) visibleCount++;
});
group.hidden = visibleCount === 0;
});
}
window.sbhFilter = sbhFilter;
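`checkUpdatesForHost` above collapses duplicate image:tag pairs before POSTing one batch to `/api/containers/check-updates`. The same dedupe step can be sketched in Python (hypothetical helper name, not part of the codebase):

```python
def dedupe_update_items(pairs):
    """Collapse (image, tag) pairs so each unique image:tag is checked once.

    Mirrors the JS loop: empty images and repeated keys are skipped,
    and first-seen order is preserved.
    """
    items, seen = [], set()
    for image, tag in pairs:
        key = f"{image}:{tag}"
        if not image or key in seen:
            continue
        seen.add(key)
        items.append({"image": image, "tag": tag})
    return items
```

Deduplicating client-side keeps the server-side registry lookups at one per unique image:tag rather than one per container row.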


@@ -26,6 +26,23 @@
</script>
</head>
<body class="min-h-screen bg-base-200">
<svg style="display: none">
<symbol id="icon-menu" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<circle cx="12" cy="5" r="1" /><circle cx="12" cy="12" r="1" /><circle cx="12" cy="19" r="1" />
</symbol>
<symbol id="icon-restart" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M4 4v5h.582m15.356 2A8.001 8.001 0 004.582 9m0 0H9m11 11v-5h-.581m0 0a8.003 8.003 0 01-15.357-2m15.357 2H15" />
</symbol>
<symbol id="icon-pull" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M4 16v1a3 3 0 003 3h10a3 3 0 003-3v-1m-4-4l-4 4m0 0l-4-4m4 4V4" />
</symbol>
<symbol id="icon-update" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M4 16v1a3 3 0 003 3h10a3 3 0 003-3v-1m-4-8l-4-4m0 0L8 8m4-4v12" />
</symbol>
<symbol id="icon-logs" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M9 12h6m-6 4h6m2 5H7a2 2 0 01-2-2V5a2 2 0 012-2h5.586a1 1 0 01.707.293l5.414 5.414a1 1 0 01.293.707V19a2 2 0 01-2 2z" />
</symbol>
</svg>
<div class="drawer lg:drawer-open">
<input id="drawer-toggle" type="checkbox" class="drawer-toggle" />
@@ -80,6 +97,8 @@
<!-- Scripts - HTMX first -->
<script src="https://unpkg.com/htmx.org@2.0.4" data-vendor="htmx.js"></script>
<script src="https://unpkg.com/idiomorph/dist/idiomorph.min.js"></script>
<script src="https://unpkg.com/idiomorph/dist/idiomorph-ext.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@xterm/xterm@5.5.0/lib/xterm.js" data-vendor="xterm.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@xterm/addon-fit@0.10.0/lib/addon-fit.js" data-vendor="xterm-fit.js"></script>
<script src="/static/app.js"></script>


@@ -0,0 +1,97 @@
{% extends "base.html" %}
{% from "partials/components.html" import page_header %}
{% from "partials/icons.html" import refresh_cw %}
{% block title %}Live Stats - Compose Farm{% endblock %}
{% block content %}
<div class="max-w-7xl">
{{ page_header("Live Stats", "All running containers across hosts") }}
{% if not glances_enabled %}
<div class="alert alert-warning mb-6">
<svg xmlns="http://www.w3.org/2000/svg" class="stroke-current shrink-0 h-6 w-6" fill="none" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 9v2m0 4h.01m-6.938 4h13.856c1.54 0 2.502-1.667 1.732-3L13.732 4c-.77-1.333-2.694-1.333-3.464 0L3.34 16c-.77 1.333.192 3 1.732 3z" /></svg>
<div>
<h3 class="font-bold">Glances not configured</h3>
<div class="text-xs">Add <code class="bg-base-300 px-1 rounded">glances_stack: glances</code> to your config and deploy Glances on all hosts.</div>
</div>
</div>
{% else %}
<!-- Action Bar -->
<div class="flex flex-wrap items-center gap-4 mb-6">
<div class="tooltip" data-tip="Refresh now">
<button class="btn btn-outline btn-sm" type="button" onclick="refreshLiveStats()">
{{ refresh_cw() }} Refresh
</button>
</div>
<div class="tooltip" data-tip="Click to toggle auto-refresh">
<button class="btn btn-outline btn-sm font-mono w-20 justify-center"
id="refresh-timer" onclick="toggleAutoRefresh()"></button>
</div>
<input type="text" id="filter-input" placeholder="Filter containers..."
class="input input-bordered input-sm w-64" onkeyup="filterTable()">
<select id="host-filter" class="select select-bordered select-sm" onchange="filterTable()">
<option value="">All hosts</option>
{% for host in hosts %}<option value="{{ host }}">{{ host }}</option>{% endfor %}
</select>
<span id="container-count" class="text-sm text-base-content/60"></span>
<span id="last-updated" class="text-sm text-base-content/40 ml-auto"></span>
</div>
<!-- Container Table -->
<div class="card bg-base-100 shadow overflow-x-auto">
<table class="table table-zebra table-sm w-full">
<thead class="sticky top-0 bg-base-200">
<tr>
<th class="w-8">#</th>
<th class="cursor-pointer" onclick="sortTable(1)">Stack<span class="sort-indicator"></span></th>
<th class="cursor-pointer" onclick="sortTable(2)">Service<span class="sort-indicator"></span></th>
<th></th>
<th class="cursor-pointer" onclick="sortTable(4)">Host<span class="sort-indicator"></span></th>
<th class="cursor-pointer" onclick="sortTable(5)">Image<span class="sort-indicator"></span></th>
<th class="w-16">Update</th>
<th class="cursor-pointer" onclick="sortTable(7)">Status<span class="sort-indicator"></span></th>
<th class="cursor-pointer text-right" onclick="sortTable(8)">Uptime<span class="sort-indicator"></span></th>
<th class="cursor-pointer text-right" onclick="sortTable(9)">CPU<span class="sort-indicator"></span></th>
<th class="cursor-pointer text-right" onclick="sortTable(10)">Mem<span class="sort-indicator"></span></th>
<th class="cursor-pointer text-right" onclick="sortTable(11)">Net I/O<span class="sort-indicator"></span></th>
</tr>
</thead>
<tbody id="container-rows" data-hosts="{{ hosts | join(',') }}">
{% for host in hosts %}
<tr class="loading-row" data-host="{{ host }}">
<td colspan="12" class="text-center py-2">
<span class="loading loading-spinner loading-xs"></span>
<span class="text-sm opacity-60">Loading {{ host }}...</span>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
{% endif %}
<!-- Shared Action Menu -->
<ul id="shared-action-menu" class="menu menu-sm bg-base-200 rounded-box shadow-lg w-36 absolute z-50 p-2 hidden">
<li><a data-action="restart"><svg class="h-4 w-4"><use href="#icon-restart" /></svg>Restart</a></li>
<li><a data-action="pull"><svg class="h-4 w-4"><use href="#icon-pull" /></svg>Pull</a></li>
<li><a data-action="update"><svg class="h-4 w-4"><use href="#icon-update" /></svg>Update</a></li>
<li><a data-action="logs"><svg class="h-4 w-4"><use href="#icon-logs" /></svg>Logs</a></li>
</ul>
</div>
{% endblock %}
{% block scripts %}
{% if glances_enabled %}
<style>
.sort-indicator { display: inline-block; width: 1em; text-align: center; opacity: 0.5; }
.high-usage { background-color: oklch(var(--er) / 0.15) !important; }
/* Refresh animation */
@keyframes row-pulse {
0% { background-color: oklch(var(--p) / 0.2); }
100% { background-color: transparent; }
}
.row-updated { animation: row-pulse 0.5s ease-out; }
</style>
{% endif %}
{% endblock %}
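The column order in this table must stay in sync with `NUMERIC_COLS` in app.js: columns 8-11 (Uptime, CPU, Mem, Net I/O) sort on numeric `data-sort` values, the rest lexically. A rough Python sketch of that comparator (illustrative only; the real logic lives in `doSort`):

```python
NUMERIC_COLS = {8, 9, 10, 11}  # uptime, cpu, mem, net (mirrors app.js)

def compare_cells(col, a, b):
    """Return a negative/zero/positive value like the JS comparator."""
    if col in NUMERIC_COLS:
        # data-sort carries numbers serialized as strings; missing values sort as 0
        av = float(a) if a else 0.0
        bv = float(b) if b else 0.0
        return (av > bv) - (av < bv)
    # text columns compare lexically (localeCompare in the JS version)
    return (a > b) - (a < b)
```

Sorting on `data-sort` rather than the rendered text is what lets "1.5 GiB" order correctly against "900 MiB".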


@@ -53,6 +53,13 @@
{% include "partials/stacks_by_host.html" %}
</div>
<!-- Host Resources (Glances) -->
<div id="glances-stats"
hx-get="/api/glances"
hx-trigger="load, cf:refresh from:body, every 30s"
hx-swap="innerHTML">
</div>
<!-- Hosts Configuration -->
{% call collapse("Hosts (" ~ (hosts | length) ~ ")", icon=server()) %}
{% call table() %}


@@ -0,0 +1,66 @@
{# Glances resource stats display #}
{% from "partials/icons.html" import cpu, memory_stick, gauge, server, activity, hard_drive, arrow_down_up, refresh_cw %}
{% macro progress_bar(percent, color="primary") %}
<div class="flex items-center gap-2 min-w-32">
<progress class="progress progress-{{ color }} flex-1" value="{{ percent }}" max="100"></progress>
<span class="text-xs w-10 text-right">{{ "%.1f"|format(percent) }}%</span>
</div>
{% endmacro %}
{% macro format_rate(bytes_per_sec) %}
{%- if bytes_per_sec >= 1048576 -%}
{{ "%.1f"|format(bytes_per_sec / 1048576) }} MB/s
{%- elif bytes_per_sec >= 1024 -%}
{{ "%.1f"|format(bytes_per_sec / 1024) }} KB/s
{%- else -%}
{{ "%.0f"|format(bytes_per_sec) }} B/s
{%- endif -%}
{% endmacro %}
{% macro host_row(host_stats) %}
<tr>
<td class="font-medium">{{ server(14) }} {{ host_stats.host }}</td>
{% if host_stats.error %}
<td colspan="5" class="text-error text-xs">{{ host_stats.error }}</td>
{% else %}
<td>{{ progress_bar(host_stats.cpu_percent, "info") }}</td>
<td>{{ progress_bar(host_stats.mem_percent, "success") }}</td>
<td>{{ progress_bar(host_stats.disk_percent, "warning") }}</td>
<td class="text-xs font-mono">↓{{ format_rate(host_stats.net_rx_rate) }} ↑{{ format_rate(host_stats.net_tx_rate) }}</td>
<td class="text-sm">{{ "%.1f"|format(host_stats.load) }}</td>
{% endif %}
</tr>
{% endmacro %}
<div class="card bg-base-100 shadow mt-4 mb-4">
<div class="card-body p-4">
<div class="flex items-center justify-between">
<h2 class="card-title text-base gap-2">{{ activity(18) }} Host Resources</h2>
<button class="btn btn-ghost btn-xs opacity-50 hover:opacity-100"
hx-get="/api/glances" hx-target="#glances-stats" hx-swap="innerHTML"
title="Refresh">
{{ refresh_cw(14) }}
</button>
</div>
<div class="overflow-x-auto">
<table class="table table-sm">
<thead>
<tr>
<th>Host</th>
<th>{{ cpu(14) }} CPU</th>
<th>{{ memory_stick(14) }} Memory</th>
<th>{{ hard_drive(14) }} Disk</th>
<th>{{ arrow_down_up(14) }} Net</th>
<th>{{ gauge(14) }} Load</th>
</tr>
</thead>
<tbody>
{% for host_stats in stats.values() %}
{{ host_row(host_stats) }}
{% endfor %}
</tbody>
</table>
</div>
</div>
</div>
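The `format_rate` macro picks units at powers-of-1024 thresholds. For reference, the same logic in Python (hypothetical helper name, matching the macro's thresholds and labels):

```python
def format_rate(bytes_per_sec):
    """Human-readable transfer rate, matching the Jinja macro's thresholds."""
    if bytes_per_sec >= 1048576:  # 1 MiB/s
        return f"{bytes_per_sec / 1048576:.1f} MB/s"
    if bytes_per_sec >= 1024:  # 1 KiB/s
        return f"{bytes_per_sec / 1024:.1f} KB/s"
    return f"{bytes_per_sec:.0f} B/s"
```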


@@ -176,3 +176,46 @@
<path d="M15 3h6v6"/><path d="M10 14 21 3"/><path d="M18 13v6a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V8a2 2 0 0 1 2-2h6"/>
</svg>
{% endmacro %}
{# Resource monitoring icons #}
{% macro cpu(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<rect width="16" height="16" x="4" y="4" rx="2"/><rect width="6" height="6" x="9" y="9" rx="1"/><path d="M15 2v2"/><path d="M15 20v2"/><path d="M2 15h2"/><path d="M2 9h2"/><path d="M20 15h2"/><path d="M20 9h2"/><path d="M9 2v2"/><path d="M9 20v2"/>
</svg>
{% endmacro %}
{% macro memory_stick(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M6 19v-3"/><path d="M10 19v-3"/><path d="M14 19v-3"/><path d="M18 19v-3"/><path d="M8 11V9"/><path d="M16 11V9"/><path d="M12 11V9"/><path d="M2 15h20"/><path d="M2 7a2 2 0 0 1 2-2h16a2 2 0 0 1 2 2v1.1a2 2 0 0 0 0 3.837V17a2 2 0 0 1-2 2H4a2 2 0 0 1-2-2v-5.1a2 2 0 0 0 0-3.837z"/>
</svg>
{% endmacro %}
{% macro gauge(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="m12 14 4-4"/><path d="M3.34 19a10 10 0 1 1 17.32 0"/>
</svg>
{% endmacro %}
{% macro activity(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M22 12h-2.48a2 2 0 0 0-1.93 1.46l-2.35 8.36a.25.25 0 0 1-.48 0L9.24 2.18a.25.25 0 0 0-.48 0l-2.35 8.36A2 2 0 0 1 4.49 12H2"/>
</svg>
{% endmacro %}
{% macro arrow_down_up(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="m3 16 4 4 4-4"/><path d="M7 20V4"/><path d="m21 8-4-4-4 4"/><path d="M17 4v16"/>
</svg>
{% endmacro %}
{% macro hard_drive(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<line x1="22" x2="2" y1="12" y2="12"/><path d="M5.45 5.11 2 12v6a2 2 0 0 0 2 2h16a2 2 0 0 0 2-2v-6l-3.45-6.89A2 2 0 0 0 16.76 4H7.24a2 2 0 0 0-1.79 1.11z"/><line x1="6" x2="6.01" y1="16" y2="16"/><line x1="10" x2="10.01" y1="16" y2="16"/>
</svg>
{% endmacro %}
{% macro box(size=16) %}
<svg xmlns="http://www.w3.org/2000/svg" width="{{ size }}" height="{{ size }}" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M21 8a2 2 0 0 0-1-1.73l-7-4a2 2 0 0 0-2 0l-7 4A2 2 0 0 0 3 8v8a2 2 0 0 0 1 1.73l7 4a2 2 0 0 0 2 0l7-4A2 2 0 0 0 21 16Z"/><path d="m3.3 7 8.7 5 8.7-5"/><path d="M12 22V12"/>
</svg>
{% endmacro %}


@@ -1,8 +1,9 @@
{% from "partials/icons.html" import home, search, terminal %}
{% from "partials/icons.html" import home, search, terminal, box %}
<!-- Navigation Links -->
<div class="mb-4">
<ul class="menu" hx-boost="true" hx-target="#main-content" hx-select="#main-content" hx-swap="outerHTML">
<li><a href="/" class="font-semibold">{{ home() }} Dashboard</a></li>
<li><a href="/live-stats" class="font-semibold">{{ box() }} Live Stats</a></li>
<li><a href="/console" class="font-semibold">{{ terminal() }} Console</a></li>
</ul>
</div>


@@ -20,20 +20,4 @@
{% else %}
<p class="text-base-content/60 italic">No stacks currently running.</p>
{% endfor %}
<script>
function sbhFilter() {
const q = (document.getElementById('sbh-filter')?.value || '').toLowerCase();
const h = document.getElementById('sbh-host-select')?.value || '';
document.querySelectorAll('.sbh-group').forEach(g => {
if (h && g.dataset.h !== h) { g.hidden = true; return; }
let n = 0;
g.querySelectorAll('li[data-s]').forEach(li => {
const show = !q || li.dataset.s.includes(q);
li.hidden = !show;
if (show) n++;
});
g.hidden = !n;
});
}
</script>
{% endcall %}


@@ -4,7 +4,7 @@
{% block title %}{{ name }} - Compose Farm{% endblock %}
{% block content %}
<div class="max-w-5xl" data-services="{{ services | join(',') }}" data-containers='{{ containers | tojson }}' data-website-urls='{{ website_urls | tojson }}'>
<div class="max-w-5xl" data-stack-name="{{ name }}" data-services="{{ services | join(',') }}" data-containers='{{ containers | tojson }}' data-website-urls='{{ website_urls | tojson }}'>
<div class="mb-6">
<h1 class="text-3xl font-bold rainbow-hover">{{ name }}</h1>
<div class="flex flex-wrap items-center gap-2 mt-2">


@@ -228,99 +228,3 @@ class TestConfigValidate:
# Error goes to stderr
output = result.stdout + (result.stderr or "")
assert "Config file not found" in output or "not found" in output.lower()
class TestConfigExample:
"""Tests for cf config example command."""
def test_example_list(self, runner: CliRunner) -> None:
result = runner.invoke(app, ["config", "example", "--list"])
assert result.exit_code == 0
assert "whoami" in result.stdout
assert "nginx" in result.stdout
assert "postgres" in result.stdout
assert "full" in result.stdout
def test_example_whoami(self, runner: CliRunner, tmp_path: Path) -> None:
result = runner.invoke(app, ["config", "example", "whoami", "-o", str(tmp_path)])
assert result.exit_code == 0
assert "Example 'whoami' created" in result.stdout
assert (tmp_path / "whoami" / "compose.yaml").exists()
assert (tmp_path / "whoami" / ".env").exists()
def test_example_full(self, runner: CliRunner, tmp_path: Path) -> None:
result = runner.invoke(app, ["config", "example", "full", "-o", str(tmp_path)])
assert result.exit_code == 0
assert "Example 'full' created" in result.stdout
assert (tmp_path / "compose-farm.yaml").exists()
assert (tmp_path / "traefik" / "compose.yaml").exists()
assert (tmp_path / "whoami" / "compose.yaml").exists()
assert (tmp_path / "nginx" / "compose.yaml").exists()
assert (tmp_path / "postgres" / "compose.yaml").exists()
def test_example_unknown(self, runner: CliRunner, tmp_path: Path) -> None:
result = runner.invoke(app, ["config", "example", "unknown", "-o", str(tmp_path)])
assert result.exit_code == 1
output = result.stdout + (result.stderr or "")
assert "Unknown example" in output
def test_example_force_overwrites(self, runner: CliRunner, tmp_path: Path) -> None:
# Create first time
runner.invoke(app, ["config", "example", "whoami", "-o", str(tmp_path)])
# Overwrite with force
result = runner.invoke(app, ["config", "example", "whoami", "-o", str(tmp_path), "-f"])
assert result.exit_code == 0
def test_example_prompts_on_existing(self, runner: CliRunner, tmp_path: Path) -> None:
# Create first time
runner.invoke(app, ["config", "example", "whoami", "-o", str(tmp_path)])
# Try again without force, decline
result = runner.invoke(
app, ["config", "example", "whoami", "-o", str(tmp_path)], input="n\n"
)
assert result.exit_code == 0
assert "Aborted" in result.stdout
class TestExamplesModule:
"""Tests for the examples module."""
def test_list_example_files_whoami(self) -> None:
from compose_farm.examples import list_example_files
files = list_example_files("whoami")
file_names = [f for f, _ in files]
assert ".env" in file_names
assert "compose.yaml" in file_names
def test_list_example_files_full(self) -> None:
from compose_farm.examples import list_example_files
files = list_example_files("full")
file_names = [f for f, _ in files]
assert "compose-farm.yaml" in file_names
assert "traefik/compose.yaml" in file_names
assert "whoami/compose.yaml" in file_names
def test_list_example_files_unknown(self) -> None:
from compose_farm.examples import list_example_files
with pytest.raises(ValueError, match="Unknown example"):
list_example_files("unknown")
def test_examples_dict(self) -> None:
from compose_farm.examples import EXAMPLES, SINGLE_STACK_EXAMPLES
assert "whoami" in EXAMPLES
assert "full" in EXAMPLES
assert "full" not in SINGLE_STACK_EXAMPLES
assert "whoami" in SINGLE_STACK_EXAMPLES
class TestConfigInitDiscover:
"""Tests for cf config init --discover."""
def test_discover_option_exists(self, runner: CliRunner) -> None:
result = runner.invoke(app, ["config", "init", "--help"])
assert "--discover" in result.stdout
assert "-d" in result.stdout

tests/test_containers.py (new file, 269 lines)

@@ -0,0 +1,269 @@
"""Tests for Containers page routes."""
from pathlib import Path
from unittest.mock import AsyncMock, patch
import pytest
from fastapi.testclient import TestClient
from compose_farm.config import Config, Host
from compose_farm.glances import ContainerStats
from compose_farm.web.app import create_app
from compose_farm.web.routes.containers import (
_format_bytes,
_infer_stack_service,
_parse_image,
_parse_uptime_seconds,
)
# Byte size constants for tests
KB = 1024
MB = KB * 1024
GB = MB * 1024
class TestFormatBytes:
"""Tests for _format_bytes function (uses humanize library)."""
def test_bytes(self) -> None:
assert _format_bytes(500) == "500 Bytes"
assert _format_bytes(0) == "0 Bytes"
def test_kilobytes(self) -> None:
assert _format_bytes(KB) == "1.0 KiB"
assert _format_bytes(KB * 5) == "5.0 KiB"
assert _format_bytes(KB + 512) == "1.5 KiB"
def test_megabytes(self) -> None:
assert _format_bytes(MB) == "1.0 MiB"
assert _format_bytes(MB * 100) == "100.0 MiB"
assert _format_bytes(MB * 512) == "512.0 MiB"
def test_gigabytes(self) -> None:
assert _format_bytes(GB) == "1.0 GiB"
assert _format_bytes(GB * 2) == "2.0 GiB"
class TestParseImage:
"""Tests for _parse_image function."""
def test_simple_image_with_tag(self) -> None:
assert _parse_image("nginx:latest") == ("nginx", "latest")
assert _parse_image("redis:7") == ("redis", "7")
def test_image_without_tag(self) -> None:
assert _parse_image("nginx") == ("nginx", "latest")
def test_registry_image(self) -> None:
assert _parse_image("ghcr.io/user/repo:v1.0") == ("ghcr.io/user/repo", "v1.0")
assert _parse_image("docker.io/library/nginx:alpine") == (
"docker.io/library/nginx",
"alpine",
)
def test_image_with_port_in_registry(self) -> None:
# Registry with port should not be confused with tag
assert _parse_image("localhost:5000/myimage") == ("localhost:5000/myimage", "latest")
class TestParseUptimeSeconds:
"""Tests for _parse_uptime_seconds function."""
def test_seconds(self) -> None:
assert _parse_uptime_seconds("17 seconds") == 17
assert _parse_uptime_seconds("1 second") == 1
def test_minutes(self) -> None:
assert _parse_uptime_seconds("5 minutes") == 300
assert _parse_uptime_seconds("1 minute") == 60
def test_hours(self) -> None:
assert _parse_uptime_seconds("2 hours") == 7200
assert _parse_uptime_seconds("an hour") == 3600
assert _parse_uptime_seconds("1 hour") == 3600
def test_days(self) -> None:
assert _parse_uptime_seconds("3 days") == 259200
assert _parse_uptime_seconds("a day") == 86400
def test_empty(self) -> None:
assert _parse_uptime_seconds("") == 0
assert _parse_uptime_seconds("-") == 0
class TestInferStackService:
"""Tests for _infer_stack_service function."""
def test_underscore_separator(self) -> None:
assert _infer_stack_service("mystack_web_1") == ("mystack", "web")
assert _infer_stack_service("app_db_1") == ("app", "db")
def test_hyphen_separator(self) -> None:
assert _infer_stack_service("mystack-web-1") == ("mystack", "web")
assert _infer_stack_service("compose-farm-api-1") == ("compose", "farm-api")
def test_simple_name(self) -> None:
# No separator - use name for both
assert _infer_stack_service("nginx") == ("nginx", "nginx")
assert _infer_stack_service("traefik") == ("traefik", "traefik")
def test_single_part_with_separator(self) -> None:
# Edge case: separator with empty second part
assert _infer_stack_service("single_") == ("single", "")
class TestContainersPage:
"""Tests for containers page endpoint."""
@pytest.fixture
def client(self) -> TestClient:
app = create_app()
return TestClient(app)
@pytest.fixture
def mock_config(self) -> Config:
return Config(
compose_dir=Path("/opt/compose"),
hosts={
"nas": Host(address="192.168.1.6"),
"nuc": Host(address="192.168.1.2"),
},
stacks={"test": "nas"},
glances_stack="glances",
)
def test_containers_page_without_glances(self, client: TestClient) -> None:
"""Test containers page shows warning when Glances not configured."""
with patch("compose_farm.web.routes.containers.get_config") as mock:
mock.return_value = Config(
compose_dir=Path("/opt/compose"),
hosts={"nas": Host(address="192.168.1.6")},
stacks={"test": "nas"},
glances_stack=None,
)
response = client.get("/live-stats")
assert response.status_code == 200
assert "Glances not configured" in response.text
def test_containers_page_with_glances(self, client: TestClient, mock_config: Config) -> None:
"""Test containers page loads when Glances is configured."""
with patch("compose_farm.web.routes.containers.get_config") as mock:
mock.return_value = mock_config
response = client.get("/live-stats")
assert response.status_code == 200
assert "Live Stats" in response.text
assert "container-rows" in response.text
class TestContainersRowsAPI:
"""Tests for containers rows HTML endpoint."""
@pytest.fixture
def client(self) -> TestClient:
app = create_app()
return TestClient(app)
def test_rows_without_glances(self, client: TestClient) -> None:
"""Test rows endpoint returns error when Glances not configured."""
with patch("compose_farm.web.routes.containers.get_config") as mock:
mock.return_value = Config(
compose_dir=Path("/opt/compose"),
hosts={"nas": Host(address="192.168.1.6")},
stacks={"test": "nas"},
glances_stack=None,
)
response = client.get("/api/containers/rows")
assert response.status_code == 200
assert "Glances not configured" in response.text
def test_rows_returns_html(self, client: TestClient) -> None:
"""Test rows endpoint returns HTML table rows."""
mock_containers = [
ContainerStats(
name="nginx",
host="nas",
status="running",
image="nginx:latest",
cpu_percent=5.5,
memory_usage=104857600,
memory_limit=1073741824,
memory_percent=9.77,
network_rx=1000,
network_tx=500,
uptime="2 hours",
ports="80->80/tcp",
engine="docker",
stack="web",
service="nginx",
),
]
with (
patch("compose_farm.web.routes.containers.get_config") as mock_config,
patch(
"compose_farm.web.routes.containers.fetch_all_container_stats",
new_callable=AsyncMock,
) as mock_fetch,
):
mock_config.return_value = Config(
compose_dir=Path("/opt/compose"),
hosts={"nas": Host(address="192.168.1.6")},
stacks={"test": "nas"},
glances_stack="glances",
)
mock_fetch.return_value = mock_containers
response = client.get("/api/containers/rows")
assert response.status_code == 200
assert "<tr " in response.text # <tr id="..."> has attributes
assert "nginx" in response.text
assert "running" in response.text
def test_rows_have_data_sort_attributes(self, client: TestClient) -> None:
"""Test rows have data-sort attributes for client-side sorting."""
mock_containers = [
ContainerStats(
name="alpha",
host="nas",
status="running",
image="nginx:latest",
cpu_percent=10.0,
memory_usage=100,
memory_limit=1000,
memory_percent=10.0,
network_rx=100,
network_tx=100,
uptime="1 hour",
ports="",
engine="docker",
stack="alpha",
service="web",
),
]
with (
patch("compose_farm.web.routes.containers.get_config") as mock_config,
patch(
"compose_farm.web.routes.containers.fetch_all_container_stats",
new_callable=AsyncMock,
) as mock_fetch,
):
mock_config.return_value = Config(
compose_dir=Path("/opt/compose"),
hosts={"nas": Host(address="192.168.1.6")},
stacks={"test": "nas"},
glances_stack="glances",
)
mock_fetch.return_value = mock_containers
response = client.get("/api/containers/rows")
assert response.status_code == 200
# Check that cells have data-sort attributes
assert 'data-sort="alpha"' in response.text # stack
assert 'data-sort="web"' in response.text # service
assert 'data-sort="3600"' in response.text # uptime (1 hour = 3600s)
assert 'data-sort="10' in response.text # cpu


@@ -11,6 +11,7 @@ from compose_farm.executor import (
_run_local_command,
check_networks_exist,
check_paths_exist,
get_running_stacks_on_host,
is_local,
run_command,
run_compose,
@@ -239,3 +240,31 @@ class TestCheckNetworksExist:
result = await check_networks_exist(config, "local", [])
assert result == {}
@linux_only
class TestGetRunningStacksOnHost:
"""Tests for get_running_stacks_on_host function (requires Docker)."""
async def test_returns_set_of_stacks(self, tmp_path: Path) -> None:
"""Function returns a set of stack names."""
config = Config(
compose_dir=tmp_path,
hosts={"local": Host(address="localhost")},
stacks={},
)
result = await get_running_stacks_on_host(config, "local")
assert isinstance(result, set)
async def test_filters_empty_lines(self, tmp_path: Path) -> None:
"""Empty project names are filtered out."""
config = Config(
compose_dir=tmp_path,
hosts={"local": Host(address="localhost")},
stacks={},
)
# Result should not contain empty strings
result = await get_running_stacks_on_host(config, "local")
assert "" not in result

tests/test_glances.py Normal file

@@ -0,0 +1,349 @@
"""Tests for Glances integration."""
from pathlib import Path
from unittest.mock import AsyncMock, patch
import httpx
import pytest
from compose_farm.config import Config, Host
from compose_farm.glances import (
DEFAULT_GLANCES_PORT,
ContainerStats,
HostStats,
fetch_all_container_stats,
fetch_all_host_stats,
fetch_container_stats,
fetch_host_stats,
)
class TestHostStats:
"""Tests for HostStats dataclass."""
def test_host_stats_creation(self) -> None:
stats = HostStats(
host="nas",
cpu_percent=25.5,
mem_percent=50.0,
swap_percent=10.0,
load=2.5,
disk_percent=75.0,
)
assert stats.host == "nas"
assert stats.cpu_percent == 25.5
assert stats.mem_percent == 50.0
assert stats.disk_percent == 75.0
assert stats.error is None
def test_host_stats_from_error(self) -> None:
stats = HostStats.from_error("nas", "Connection refused")
assert stats.host == "nas"
assert stats.cpu_percent == 0
assert stats.mem_percent == 0
assert stats.error == "Connection refused"
class TestFetchHostStats:
"""Tests for fetch_host_stats function."""
@pytest.mark.asyncio
async def test_fetch_host_stats_success(self) -> None:
quicklook_response = httpx.Response(
200,
json={
"cpu": 25.5,
"mem": 50.0,
"swap": 5.0,
"load": 2.5,
},
)
fs_response = httpx.Response(
200,
json=[
{"mnt_point": "/", "percent": 65.0},
{"mnt_point": "/mnt/data", "percent": 80.0},
],
)
async def mock_get(url: str) -> httpx.Response:
if "quicklook" in url:
return quicklook_response
return fs_response
with patch("httpx.AsyncClient") as mock_client:
mock_client.return_value.__aenter__ = AsyncMock(return_value=mock_client.return_value)
mock_client.return_value.__aexit__ = AsyncMock(return_value=None)
mock_client.return_value.get = AsyncMock(side_effect=mock_get)
stats = await fetch_host_stats("nas", "192.168.1.6")
assert stats.host == "nas"
assert stats.cpu_percent == 25.5
assert stats.mem_percent == 50.0
assert stats.swap_percent == 5.0
assert stats.load == 2.5
assert stats.disk_percent == 65.0 # Root filesystem
assert stats.error is None
@pytest.mark.asyncio
async def test_fetch_host_stats_http_error(self) -> None:
mock_response = httpx.Response(500)
with patch("httpx.AsyncClient") as mock_client:
mock_client.return_value.__aenter__ = AsyncMock(return_value=mock_client.return_value)
mock_client.return_value.__aexit__ = AsyncMock(return_value=None)
mock_client.return_value.get = AsyncMock(return_value=mock_response)
stats = await fetch_host_stats("nas", "192.168.1.6")
assert stats.host == "nas"
assert stats.error == "HTTP 500"
assert stats.cpu_percent == 0
@pytest.mark.asyncio
async def test_fetch_host_stats_timeout(self) -> None:
with patch("httpx.AsyncClient") as mock_client:
mock_client.return_value.__aenter__ = AsyncMock(return_value=mock_client.return_value)
mock_client.return_value.__aexit__ = AsyncMock(return_value=None)
mock_client.return_value.get = AsyncMock(side_effect=httpx.TimeoutException("timeout"))
stats = await fetch_host_stats("nas", "192.168.1.6")
assert stats.host == "nas"
assert stats.error == "timeout"
@pytest.mark.asyncio
async def test_fetch_host_stats_connection_error(self) -> None:
with patch("httpx.AsyncClient") as mock_client:
mock_client.return_value.__aenter__ = AsyncMock(return_value=mock_client.return_value)
mock_client.return_value.__aexit__ = AsyncMock(return_value=None)
mock_client.return_value.get = AsyncMock(
side_effect=httpx.ConnectError("Connection refused")
)
stats = await fetch_host_stats("nas", "192.168.1.6")
assert stats.host == "nas"
assert stats.error is not None
assert "Connection refused" in stats.error
class TestFetchAllHostStats:
"""Tests for fetch_all_host_stats function."""
@pytest.mark.asyncio
async def test_fetch_all_host_stats(self) -> None:
config = Config(
compose_dir=Path("/opt/compose"),
hosts={
"nas": Host(address="192.168.1.6"),
"nuc": Host(address="192.168.1.2"),
},
stacks={"test": "nas"},
)
quicklook_response = httpx.Response(
200,
json={
"cpu": 25.5,
"mem": 50.0,
"swap": 5.0,
"load": 2.5,
},
)
fs_response = httpx.Response(
200,
json=[{"mnt_point": "/", "percent": 70.0}],
)
async def mock_get(url: str) -> httpx.Response:
if "quicklook" in url:
return quicklook_response
return fs_response
with patch("httpx.AsyncClient") as mock_client:
mock_client.return_value.__aenter__ = AsyncMock(return_value=mock_client.return_value)
mock_client.return_value.__aexit__ = AsyncMock(return_value=None)
mock_client.return_value.get = AsyncMock(side_effect=mock_get)
stats = await fetch_all_host_stats(config)
assert "nas" in stats
assert "nuc" in stats
assert stats["nas"].cpu_percent == 25.5
assert stats["nuc"].cpu_percent == 25.5
assert stats["nas"].disk_percent == 70.0
class TestDefaultPort:
"""Tests for default Glances port constant."""
def test_default_port(self) -> None:
assert DEFAULT_GLANCES_PORT == 61208
class TestContainerStats:
"""Tests for ContainerStats dataclass."""
def test_container_stats_creation(self) -> None:
stats = ContainerStats(
name="nginx",
host="nas",
status="running",
image="nginx:latest",
cpu_percent=5.5,
memory_usage=104857600, # 100MB
memory_limit=1073741824, # 1GB
memory_percent=9.77,
network_rx=1000000,
network_tx=500000,
uptime="2 hours",
ports="80->80/tcp",
engine="docker",
)
assert stats.name == "nginx"
assert stats.host == "nas"
assert stats.cpu_percent == 5.5
class TestFetchContainerStats:
"""Tests for fetch_container_stats function."""
@pytest.mark.asyncio
async def test_fetch_container_stats_success(self) -> None:
mock_response = httpx.Response(
200,
json=[
{
"name": "nginx",
"status": "running",
"image": ["nginx:latest"],
"cpu_percent": 5.5,
"memory_usage": 104857600,
"memory_limit": 1073741824,
"network": {"cumulative_rx": 1000, "cumulative_tx": 500},
"uptime": "2 hours",
"ports": "80->80/tcp",
"engine": "docker",
},
{
"name": "redis",
"status": "running",
"image": ["redis:7"],
"cpu_percent": 1.2,
"memory_usage": 52428800,
"memory_limit": 1073741824,
"network": {},
"uptime": "3 hours",
"ports": "",
"engine": "docker",
},
],
)
with patch("httpx.AsyncClient") as mock_client:
mock_client.return_value.__aenter__ = AsyncMock(return_value=mock_client.return_value)
mock_client.return_value.__aexit__ = AsyncMock(return_value=None)
mock_client.return_value.get = AsyncMock(return_value=mock_response)
containers, error = await fetch_container_stats("nas", "192.168.1.6")
assert error is None
assert containers is not None
assert len(containers) == 2
assert containers[0].name == "nginx"
assert containers[0].host == "nas"
assert containers[0].cpu_percent == 5.5
assert containers[1].name == "redis"
@pytest.mark.asyncio
async def test_fetch_container_stats_empty_on_error(self) -> None:
with patch("httpx.AsyncClient") as mock_client:
mock_client.return_value.__aenter__ = AsyncMock(return_value=mock_client.return_value)
mock_client.return_value.__aexit__ = AsyncMock(return_value=None)
mock_client.return_value.get = AsyncMock(side_effect=httpx.TimeoutException("timeout"))
containers, error = await fetch_container_stats("nas", "192.168.1.6")
assert containers is None
assert error == "Connection timed out"
@pytest.mark.asyncio
async def test_fetch_container_stats_handles_string_image(self) -> None:
"""Test that image field works as string (not just list)."""
mock_response = httpx.Response(
200,
json=[
{
"name": "test",
"status": "running",
"image": "myimage:v1", # String instead of list
"cpu_percent": 0,
"memory_usage": 0,
"memory_limit": 1,
"network": {},
"uptime": "",
"ports": "",
"engine": "docker",
},
],
)
with patch("httpx.AsyncClient") as mock_client:
mock_client.return_value.__aenter__ = AsyncMock(return_value=mock_client.return_value)
mock_client.return_value.__aexit__ = AsyncMock(return_value=None)
mock_client.return_value.get = AsyncMock(return_value=mock_response)
containers, error = await fetch_container_stats("nas", "192.168.1.6")
assert error is None
assert containers is not None
assert len(containers) == 1
assert containers[0].image == "myimage:v1"
class TestFetchAllContainerStats:
"""Tests for fetch_all_container_stats function."""
@pytest.mark.asyncio
async def test_fetch_all_container_stats(self) -> None:
config = Config(
compose_dir=Path("/opt/compose"),
hosts={
"nas": Host(address="192.168.1.6"),
"nuc": Host(address="192.168.1.2"),
},
stacks={"test": "nas"},
)
mock_response = httpx.Response(
200,
json=[
{
"name": "nginx",
"status": "running",
"image": ["nginx:latest"],
"cpu_percent": 5.5,
"memory_usage": 104857600,
"memory_limit": 1073741824,
"network": {},
"uptime": "2 hours",
"ports": "",
"engine": "docker",
},
],
)
with patch("httpx.AsyncClient") as mock_client:
mock_client.return_value.__aenter__ = AsyncMock(return_value=mock_client.return_value)
mock_client.return_value.__aexit__ = AsyncMock(return_value=None)
mock_client.return_value.get = AsyncMock(return_value=mock_response)
containers = await fetch_all_container_stats(config)
# 2 hosts x 1 container each = 2 containers
assert len(containers) == 2
hosts = {c.host for c in containers}
assert "nas" in hosts
assert "nuc" in hosts


@@ -10,8 +10,8 @@ import pytest
from compose_farm.config import Config, Host
from compose_farm.executor import CommandResult
from compose_farm.logs import (
-    _parse_images_output,
-    collect_stack_entries,
+    _SECTION_SEPARATOR,
+    collect_stacks_entries_on_host,
isoformat,
load_existing_entries,
merge_entries,
@@ -19,74 +19,252 @@ from compose_farm.logs import (
)
-def test_parse_images_output_handles_list_and_lines() -> None:
-    data = [
-        {"Service": "svc", "Image": "redis", "Digest": "sha256:abc"},
-        {"Service": "svc", "Image": "db", "Digest": "sha256:def"},
-    ]
-    as_array = _parse_images_output(json.dumps(data))
-    assert len(as_array) == 2
-    as_lines = _parse_images_output("\n".join(json.dumps(item) for item in data))
-    assert len(as_lines) == 2
-
-
-@pytest.mark.asyncio
-async def test_snapshot_preserves_first_seen(tmp_path: Path) -> None:
-    compose_dir = tmp_path / "compose"
-    compose_dir.mkdir()
-    stack_dir = compose_dir / "svc"
-    stack_dir.mkdir()
-    (stack_dir / "docker-compose.yml").write_text("services: {}\n")
-    config = Config(
-        compose_dir=compose_dir,
-        hosts={"local": Host(address="localhost")},
-        stacks={"svc": "local"},
-    )
-    sample_output = json.dumps([{"Service": "svc", "Image": "redis", "Digest": "sha256:abc"}])
-
-    async def fake_run_compose(
-        _cfg: Config, stack: str, compose_cmd: str, *, stream: bool = True
-    ) -> CommandResult:
-        assert compose_cmd == "images --format json"
-        assert stream is False or stream is True
-        return CommandResult(
-            stack=stack,
-            exit_code=0,
-            success=True,
-            stdout=sample_output,
-            stderr="",
-        )
-
-    log_path = tmp_path / "dockerfarm-log.toml"
-    # First snapshot
-    first_time = datetime(2025, 1, 1, tzinfo=UTC)
-    first_entries = await collect_stack_entries(
-        config, "svc", now=first_time, run_compose_fn=fake_run_compose
-    )
-    first_iso = isoformat(first_time)
-    merged = merge_entries([], first_entries, now_iso=first_iso)
-    meta = {"generated_at": first_iso, "compose_dir": str(config.compose_dir)}
-    write_toml(log_path, meta=meta, entries=merged)
-    after_first = tomllib.loads(log_path.read_text())
-    first_seen = after_first["entries"][0]["first_seen"]
-    # Second snapshot
-    second_time = datetime(2025, 2, 1, tzinfo=UTC)
-    second_entries = await collect_stack_entries(
-        config, "svc", now=second_time, run_compose_fn=fake_run_compose
-    )
-    second_iso = isoformat(second_time)
-    existing = load_existing_entries(log_path)
-    merged = merge_entries(existing, second_entries, now_iso=second_iso)
-    meta = {"generated_at": second_iso, "compose_dir": str(config.compose_dir)}
-    write_toml(log_path, meta=meta, entries=merged)
-    after_second = tomllib.loads(log_path.read_text())
-    entry = after_second["entries"][0]
-    assert entry["first_seen"] == first_seen
-    assert entry["last_seen"].startswith("2025-02-01")
+def _make_mock_output(
+    project_images: dict[str, list[str]], image_info: list[dict[str, object]]
+) -> str:
+    """Build mock output matching the 2-docker-command format."""
+    # Section 1: project|image pairs from docker ps
+    ps_lines = [
+        f"{project}|{image}" for project, images in project_images.items() for image in images
+    ]
+    # Section 2: JSON array from docker image inspect
+    image_json = json.dumps(image_info)
+    return f"{chr(10).join(ps_lines)}\n{_SECTION_SEPARATOR}\n{image_json}"
+
+
+class TestCollectStacksEntriesOnHost:
+    """Tests for collect_stacks_entries_on_host (2 docker commands per host)."""
+
+    @pytest.fixture
+    def config_with_stacks(self, tmp_path: Path) -> Config:
+        """Create a config with multiple stacks."""
+        compose_dir = tmp_path / "compose"
+        compose_dir.mkdir()
+        for stack in ["plex", "jellyfin", "sonarr"]:
+            stack_dir = compose_dir / stack
+            stack_dir.mkdir()
+            (stack_dir / "docker-compose.yml").write_text("services: {}\n")
+        return Config(
+            compose_dir=compose_dir,
+            hosts={"host1": Host(address="localhost"), "host2": Host(address="localhost")},
+            stacks={"plex": "host1", "jellyfin": "host1", "sonarr": "host2"},
+        )
+
+    @pytest.mark.asyncio
+    async def test_single_ssh_call(
+        self, config_with_stacks: Config, monkeypatch: pytest.MonkeyPatch
+    ) -> None:
+        """Verify only 1 SSH call is made regardless of stack count."""
+        call_count = {"count": 0}
+
+        async def mock_run_command(
+            host: Host, command: str, stack: str, *, stream: bool, prefix: str
+        ) -> CommandResult:
+            call_count["count"] += 1
+            output = _make_mock_output(
+                {"plex": ["plex:latest"], "jellyfin": ["jellyfin:latest"]},
+                [
+                    {
+                        "RepoTags": ["plex:latest"],
+                        "Id": "sha256:aaa",
+                        "RepoDigests": ["plex@sha256:aaa"],
+                    },
+                    {
+                        "RepoTags": ["jellyfin:latest"],
+                        "Id": "sha256:bbb",
+                        "RepoDigests": ["jellyfin@sha256:bbb"],
+                    },
+                ],
+            )
+            return CommandResult(stack=stack, exit_code=0, success=True, stdout=output)
+
+        monkeypatch.setattr("compose_farm.logs.run_command", mock_run_command)
+        now = datetime(2025, 1, 1, tzinfo=UTC)
+        entries = await collect_stacks_entries_on_host(
+            config_with_stacks, "host1", {"plex", "jellyfin"}, now=now
+        )
+        assert call_count["count"] == 1
+        assert len(entries) == 2
@pytest.mark.asyncio
async def test_filters_to_requested_stacks(
self, config_with_stacks: Config, monkeypatch: pytest.MonkeyPatch
) -> None:
"""Only return entries for stacks we asked for, even if others are running."""
async def mock_run_command(
host: Host, command: str, stack: str, *, stream: bool, prefix: str
) -> CommandResult:
# Docker ps shows 3 stacks, but we only want plex
output = _make_mock_output(
{
"plex": ["plex:latest"],
"jellyfin": ["jellyfin:latest"],
"other": ["other:latest"],
},
[
{
"RepoTags": ["plex:latest"],
"Id": "sha256:aaa",
"RepoDigests": ["plex@sha256:aaa"],
},
{
"RepoTags": ["jellyfin:latest"],
"Id": "sha256:bbb",
"RepoDigests": ["j@sha256:bbb"],
},
{
"RepoTags": ["other:latest"],
"Id": "sha256:ccc",
"RepoDigests": ["o@sha256:ccc"],
},
],
)
return CommandResult(stack=stack, exit_code=0, success=True, stdout=output)
monkeypatch.setattr("compose_farm.logs.run_command", mock_run_command)
now = datetime(2025, 1, 1, tzinfo=UTC)
entries = await collect_stacks_entries_on_host(
config_with_stacks, "host1", {"plex"}, now=now
)
assert len(entries) == 1
assert entries[0].stack == "plex"
@pytest.mark.asyncio
async def test_multiple_images_per_stack(
self, config_with_stacks: Config, monkeypatch: pytest.MonkeyPatch
) -> None:
"""Stack with multiple containers/images returns multiple entries."""
async def mock_run_command(
host: Host, command: str, stack: str, *, stream: bool, prefix: str
) -> CommandResult:
output = _make_mock_output(
{"plex": ["plex:latest", "redis:7"]},
[
{
"RepoTags": ["plex:latest"],
"Id": "sha256:aaa",
"RepoDigests": ["p@sha256:aaa"],
},
{"RepoTags": ["redis:7"], "Id": "sha256:bbb", "RepoDigests": ["r@sha256:bbb"]},
],
)
return CommandResult(stack=stack, exit_code=0, success=True, stdout=output)
monkeypatch.setattr("compose_farm.logs.run_command", mock_run_command)
now = datetime(2025, 1, 1, tzinfo=UTC)
entries = await collect_stacks_entries_on_host(
config_with_stacks, "host1", {"plex"}, now=now
)
assert len(entries) == 2
images = {e.image for e in entries}
assert images == {"plex:latest", "redis:7"}
@pytest.mark.asyncio
async def test_empty_stacks_returns_empty(self, config_with_stacks: Config) -> None:
"""Empty stack set returns empty entries without making SSH call."""
now = datetime(2025, 1, 1, tzinfo=UTC)
entries = await collect_stacks_entries_on_host(config_with_stacks, "host1", set(), now=now)
assert entries == []
@pytest.mark.asyncio
async def test_ssh_failure_returns_empty(
self, config_with_stacks: Config, monkeypatch: pytest.MonkeyPatch
) -> None:
"""SSH failure returns empty list instead of raising."""
async def mock_run_command(
host: Host, command: str, stack: str, *, stream: bool, prefix: str
) -> CommandResult:
return CommandResult(stack=stack, exit_code=1, success=False, stdout="", stderr="error")
monkeypatch.setattr("compose_farm.logs.run_command", mock_run_command)
now = datetime(2025, 1, 1, tzinfo=UTC)
entries = await collect_stacks_entries_on_host(
config_with_stacks, "host1", {"plex"}, now=now
)
assert entries == []
class TestSnapshotMerging:
"""Tests for merge_entries preserving first_seen."""
@pytest.fixture
def config(self, tmp_path: Path) -> Config:
compose_dir = tmp_path / "compose"
compose_dir.mkdir()
stack_dir = compose_dir / "svc"
stack_dir.mkdir()
(stack_dir / "docker-compose.yml").write_text("services: {}\n")
return Config(
compose_dir=compose_dir,
hosts={"local": Host(address="localhost")},
stacks={"svc": "local"},
)
@pytest.mark.asyncio
async def test_preserves_first_seen(
self, tmp_path: Path, config: Config, monkeypatch: pytest.MonkeyPatch
) -> None:
"""Repeated snapshots preserve first_seen timestamp."""
async def mock_run_command(
host: Host, command: str, stack: str, *, stream: bool, prefix: str
) -> CommandResult:
output = _make_mock_output(
{"svc": ["redis:latest"]},
[
{
"RepoTags": ["redis:latest"],
"Id": "sha256:abc",
"RepoDigests": ["r@sha256:abc"],
}
],
)
return CommandResult(stack=stack, exit_code=0, success=True, stdout=output)
monkeypatch.setattr("compose_farm.logs.run_command", mock_run_command)
log_path = tmp_path / "dockerfarm-log.toml"
# First snapshot
first_time = datetime(2025, 1, 1, tzinfo=UTC)
first_entries = await collect_stacks_entries_on_host(
config, "local", {"svc"}, now=first_time
)
first_iso = isoformat(first_time)
merged = merge_entries([], first_entries, now_iso=first_iso)
meta = {"generated_at": first_iso, "compose_dir": str(config.compose_dir)}
write_toml(log_path, meta=meta, entries=merged)
after_first = tomllib.loads(log_path.read_text())
first_seen = after_first["entries"][0]["first_seen"]
# Second snapshot
second_time = datetime(2025, 2, 1, tzinfo=UTC)
second_entries = await collect_stacks_entries_on_host(
config, "local", {"svc"}, now=second_time
)
second_iso = isoformat(second_time)
existing = load_existing_entries(log_path)
merged = merge_entries(existing, second_entries, now_iso=second_iso)
meta = {"generated_at": second_iso, "compose_dir": str(config.compose_dir)}
write_toml(log_path, meta=meta, entries=merged)
after_second = tomllib.loads(log_path.read_text())
entry = after_second["entries"][0]
assert entry["first_seen"] == first_seen
assert entry["last_seen"].startswith("2025-02-01")
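The `_make_mock_output` helper above fixes the wire format: `project|image` lines from `docker ps`, a separator line, then a JSON array from `docker image inspect`. A sketch of the parsing side, assuming a hypothetical separator value (the real constant `_SECTION_SEPARATOR` lives in `compose_farm.logs`):

```python
import json

# Hypothetical marker; the real value is whatever compose_farm.logs defines.
_SECTION_SEPARATOR = "---CF-SECTION---"

def parse_host_snapshot(output: str) -> tuple[dict[str, list[str]], list[dict]]:
    """Split batched host output into {project: [images]} and image-inspect data."""
    ps_section, _, inspect_section = output.partition(f"\n{_SECTION_SEPARATOR}\n")
    project_images: dict[str, list[str]] = {}
    for line in ps_section.splitlines():
        if not line.strip():
            continue
        project, _, image = line.partition("|")
        project_images.setdefault(project, []).append(image)
    image_info = json.loads(inspect_section) if inspect_section.strip() else []
    return project_images, image_info
```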


@@ -11,7 +11,10 @@ import pytest
from compose_farm.cli import lifecycle
from compose_farm.config import Config, Host
from compose_farm.executor import CommandResult
-from compose_farm.operations import _migrate_stack
+from compose_farm.operations import (
+    _migrate_stack,
+    build_discovery_results,
+)
@pytest.fixture
@@ -109,3 +112,83 @@ class TestUpdateCommandSequence:
# Verify the sequence is pull, build, down, up
assert "down" in source
assert "up -d" in source
class TestBuildDiscoveryResults:
"""Tests for build_discovery_results function."""
@pytest.fixture
def config(self, tmp_path: Path) -> Config:
"""Create a test config with multiple stacks."""
compose_dir = tmp_path / "compose"
for stack in ["plex", "jellyfin", "sonarr"]:
(compose_dir / stack).mkdir(parents=True)
(compose_dir / stack / "docker-compose.yml").write_text("services: {}")
return Config(
compose_dir=compose_dir,
hosts={
"host1": Host(address="localhost"),
"host2": Host(address="localhost"),
},
stacks={"plex": "host1", "jellyfin": "host1", "sonarr": "host2"},
)
def test_discovers_correctly_running_stacks(self, config: Config) -> None:
"""Stacks running on correct hosts are discovered."""
running_on_host = {
"host1": {"plex", "jellyfin"},
"host2": {"sonarr"},
}
discovered, strays, duplicates = build_discovery_results(config, running_on_host)
assert discovered == {"plex": "host1", "jellyfin": "host1", "sonarr": "host2"}
assert strays == {}
assert duplicates == {}
def test_detects_stray_stacks(self, config: Config) -> None:
"""Stacks running on wrong hosts are marked as strays."""
running_on_host = {
"host1": set(),
"host2": {"plex"}, # plex should be on host1
}
discovered, strays, _duplicates = build_discovery_results(config, running_on_host)
assert "plex" not in discovered
assert strays == {"plex": ["host2"]}
def test_detects_duplicates(self, config: Config) -> None:
"""Single-host stacks running on multiple hosts are duplicates."""
running_on_host = {
"host1": {"plex"},
"host2": {"plex"}, # plex running on both hosts
}
discovered, strays, duplicates = build_discovery_results(
config, running_on_host, stacks=["plex"]
)
# plex is correctly running on host1
assert discovered == {"plex": "host1"}
# plex is also a stray on host2
assert strays == {"plex": ["host2"]}
# plex is a duplicate (single-host stack on multiple hosts)
assert duplicates == {"plex": ["host1", "host2"]}
def test_filters_to_requested_stacks(self, config: Config) -> None:
"""Only returns results for requested stacks."""
running_on_host = {
"host1": {"plex", "jellyfin"},
"host2": {"sonarr"},
}
discovered, _strays, _duplicates = build_discovery_results(
config, running_on_host, stacks=["plex"]
)
# Only plex should be in results
assert discovered == {"plex": "host1"}
assert "jellyfin" not in discovered
assert "sonarr" not in discovered

tests/test_registry.py Normal file

@@ -0,0 +1,182 @@
"""Tests for registry module."""
from compose_farm.registry import (
DOCKER_HUB_ALIASES,
ImageRef,
RegistryClient,
TagCheckResult,
_find_updates,
_parse_version,
)
class TestImageRef:
"""Tests for ImageRef parsing."""
def test_parse_simple_image(self) -> None:
"""Test parsing simple image name."""
ref = ImageRef.parse("nginx")
assert ref.registry == "docker.io"
assert ref.namespace == "library"
assert ref.name == "nginx"
assert ref.tag == "latest"
def test_parse_image_with_tag(self) -> None:
"""Test parsing image with tag."""
ref = ImageRef.parse("nginx:1.25")
assert ref.registry == "docker.io"
assert ref.namespace == "library"
assert ref.name == "nginx"
assert ref.tag == "1.25"
def test_parse_image_with_namespace(self) -> None:
"""Test parsing image with namespace."""
ref = ImageRef.parse("linuxserver/jellyfin:latest")
assert ref.registry == "docker.io"
assert ref.namespace == "linuxserver"
assert ref.name == "jellyfin"
assert ref.tag == "latest"
def test_parse_ghcr_image(self) -> None:
"""Test parsing GitHub Container Registry image."""
ref = ImageRef.parse("ghcr.io/user/repo:v1.0.0")
assert ref.registry == "ghcr.io"
assert ref.namespace == "user"
assert ref.name == "repo"
assert ref.tag == "v1.0.0"
def test_parse_image_with_digest(self) -> None:
"""Test parsing image with digest."""
ref = ImageRef.parse("nginx:latest@sha256:abc123")
assert ref.registry == "docker.io"
assert ref.name == "nginx"
assert ref.tag == "latest"
assert ref.digest == "sha256:abc123"
def test_full_name_with_namespace(self) -> None:
"""Test full_name property with namespace."""
ref = ImageRef.parse("linuxserver/jellyfin")
assert ref.full_name == "linuxserver/jellyfin"
def test_full_name_without_namespace(self) -> None:
"""Test full_name property for official images."""
ref = ImageRef.parse("nginx")
assert ref.full_name == "library/nginx"
def test_display_name_official_image(self) -> None:
"""Test display_name for official Docker Hub images."""
ref = ImageRef.parse("nginx:latest")
assert ref.display_name == "nginx"
def test_display_name_hub_with_namespace(self) -> None:
"""Test display_name for Docker Hub images with namespace."""
ref = ImageRef.parse("linuxserver/jellyfin")
assert ref.display_name == "linuxserver/jellyfin"
def test_display_name_other_registry(self) -> None:
"""Test display_name for other registries."""
ref = ImageRef.parse("ghcr.io/user/repo")
assert ref.display_name == "ghcr.io/user/repo"
class TestParseVersion:
"""Tests for version parsing."""
def test_parse_semver(self) -> None:
"""Test parsing semantic version."""
assert _parse_version("1.2.3") == (1, 2, 3)
def test_parse_version_with_v_prefix(self) -> None:
"""Test parsing version with v prefix."""
assert _parse_version("v1.2.3") == (1, 2, 3)
assert _parse_version("V1.2.3") == (1, 2, 3)
def test_parse_two_part_version(self) -> None:
"""Test parsing two-part version."""
assert _parse_version("1.25") == (1, 25)
def test_parse_single_number(self) -> None:
"""Test parsing single number version."""
assert _parse_version("7") == (7,)
def test_parse_invalid_version(self) -> None:
"""Test parsing non-version tags."""
assert _parse_version("latest") is None
assert _parse_version("stable") is None
assert _parse_version("alpine") is None
class TestFindUpdates:
"""Tests for finding available updates."""
def test_find_updates_with_newer_versions(self) -> None:
"""Test finding newer versions."""
current = "1.0.0"
tags = ["0.9.0", "1.0.0", "1.1.0", "2.0.0"]
updates = _find_updates(current, tags)
assert updates == ["2.0.0", "1.1.0"]
def test_find_updates_no_newer(self) -> None:
"""Test when already on latest."""
current = "2.0.0"
tags = ["1.0.0", "1.5.0", "2.0.0"]
updates = _find_updates(current, tags)
assert updates == []
def test_find_updates_non_version_tag(self) -> None:
"""Test with non-version current tag."""
current = "latest"
tags = ["1.0.0", "2.0.0"]
updates = _find_updates(current, tags)
# Can't determine updates for non-version tags
assert updates == []
class TestRegistryClient:
"""Tests for unified registry client."""
def test_docker_hub_normalization(self) -> None:
"""Test Docker Hub aliases are normalized."""
for alias in DOCKER_HUB_ALIASES:
client = RegistryClient(alias)
assert client.registry == "docker.io"
assert client.registry_url == "https://registry-1.docker.io"
def test_ghcr_client(self) -> None:
"""Test GitHub Container Registry client."""
client = RegistryClient("ghcr.io")
assert client.registry == "ghcr.io"
assert client.registry_url == "https://ghcr.io"
def test_generic_registry(self) -> None:
"""Test generic registry client."""
client = RegistryClient("quay.io")
assert client.registry == "quay.io"
assert client.registry_url == "https://quay.io"
class TestTagCheckResult:
"""Tests for TagCheckResult."""
def test_create_result(self) -> None:
"""Test creating a result."""
ref = ImageRef.parse("nginx:1.25")
result = TagCheckResult(
image=ref,
current_digest="sha256:abc",
available_updates=["1.26", "1.27"],
)
assert result.image.name == "nginx"
assert result.available_updates == ["1.26", "1.27"]
assert result.error is None
def test_result_with_error(self) -> None:
"""Test result with error."""
ref = ImageRef.parse("nginx")
result = TagCheckResult(
image=ref,
current_digest="",
error="Connection refused",
)
assert result.error == "Connection refused"
assert result.available_updates == []


@@ -219,7 +219,7 @@ class TestSshConnectKwargs:
assert result["client_keys"] == [str(key_path)]
def test_includes_both_agent_and_key(self, tmp_path: Path) -> None:
"""Include both agent_path and client_keys when both available."""
"""Prioritize client_keys over agent_path when both available."""
host = Host(address="example.com")
key_path = tmp_path / "compose-farm"
@@ -229,7 +229,8 @@ class TestSshConnectKwargs:
):
result = ssh_connect_kwargs(host)
assert result["agent_path"] == "/tmp/agent.sock"
# Agent should be ignored in favor of the dedicated key
assert "agent_path" not in result
assert result["client_keys"] == [str(key_path)]
def test_custom_port(self) -> None:


@@ -7,11 +7,58 @@ from typing import TYPE_CHECKING
import pytest
from fastapi import HTTPException
from pydantic import ValidationError
if TYPE_CHECKING:
from compose_farm.config import Config
class TestExtractConfigError:
"""Tests for extract_config_error helper."""
def test_validation_error_with_location(self) -> None:
from compose_farm.config import Config, Host
from compose_farm.web.deps import extract_config_error
# Trigger a validation error with an extra field
with pytest.raises(ValidationError) as exc_info:
Config(
hosts={"server": Host(address="192.168.1.1")},
stacks={"app": "server"},
unknown_field="bad", # type: ignore[call-arg]
)
msg = extract_config_error(exc_info.value)
assert "unknown_field" in msg
assert "Extra inputs are not permitted" in msg
def test_validation_error_nested_location(self) -> None:
from compose_farm.config import Host
from compose_farm.web.deps import extract_config_error
# Trigger a validation error with a nested extra field
with pytest.raises(ValidationError) as exc_info:
Host(address="192.168.1.1", bad_key="value") # type: ignore[call-arg]
msg = extract_config_error(exc_info.value)
assert "bad_key" in msg
assert "Extra inputs are not permitted" in msg
def test_regular_exception(self) -> None:
from compose_farm.web.deps import extract_config_error
exc = ValueError("Something went wrong")
msg = extract_config_error(exc)
assert msg == "Something went wrong"
def test_file_not_found_exception(self) -> None:
from compose_farm.web.deps import extract_config_error
exc = FileNotFoundError("Config file not found")
msg = extract_config_error(exc)
assert msg == "Config file not found"
class TestValidateYaml:
"""Tests for _validate_yaml helper."""


@@ -134,6 +134,13 @@ def test_config(tmp_path_factory: pytest.TempPathFactory) -> Path:
else:
(svc / "compose.yaml").write_text(f"services:\n {name}:\n image: test/{name}\n")
+ # Create glances stack (required for containers page)
+ glances_dir = compose_dir / "glances"
+ glances_dir.mkdir()
+ (glances_dir / "compose.yaml").write_text(
+ "services:\n glances:\n image: nicolargo/glances\n"
+ )
# Create config with multiple hosts
config = tmp / "compose-farm.yaml"
config.write_text(f"""
@@ -151,6 +158,8 @@ stacks:
nextcloud: server-2
jellyfin: server-2
redis: server-1
+ glances: all
+ glances_stack: glances
""")
# Create state (plex and nextcloud running, grafana and jellyfin not started)
@@ -245,7 +254,7 @@ class TestHTMXSidebarLoading:
# Verify actual stacks from test config appear
stacks = page.locator("#sidebar-stacks li")
- assert stacks.count() == 5 # plex, grafana, nextcloud, jellyfin, redis
+ assert stacks.count() == 6 # plex, grafana, nextcloud, jellyfin, redis, glances
# Check specific stacks are present
content = page.locator("#sidebar-stacks").inner_text()
@@ -348,7 +357,7 @@ class TestDashboardContent:
# From test config: 2 hosts, 5 stacks, 2 running (plex, nextcloud)
assert "2" in stats # hosts count
assert "5" in stats # stacks count
assert "6" in stats # stacks count
def test_pending_shows_not_started_stacks(self, page: Page, server_url: str) -> None:
"""Pending operations shows grafana and jellyfin as not started."""
@@ -476,9 +485,9 @@ class TestSidebarFilter:
page.goto(server_url)
page.wait_for_selector("#sidebar-stacks", timeout=TIMEOUT)
- # Initially all 4 stacks visible
+ # Initially all 6 stacks visible
visible_items = page.locator("#sidebar-stacks li:not([hidden])")
- assert visible_items.count() == 5
+ assert visible_items.count() == 6
# Type in filter to match only "plex"
self._filter_sidebar(page, "plex")
@@ -493,9 +502,9 @@ class TestSidebarFilter:
page.goto(server_url)
page.wait_for_selector("#sidebar-stacks", timeout=TIMEOUT)
- # Initial count should be (5)
+ # Initial count should be (6)
count_badge = page.locator("#sidebar-count")
assert "(5)" in count_badge.inner_text()
assert "(6)" in count_badge.inner_text()
# Filter to show only stacks containing "x" (plex, nextcloud)
self._filter_sidebar(page, "x")
@@ -524,13 +533,14 @@ class TestSidebarFilter:
# Select server-1 from dropdown
page.locator("#sidebar-host-select").select_option("server-1")
- # Only plex, grafana, and redis (server-1 stacks) should be visible
+ # plex, grafana, redis (server-1), and glances (all) should be visible
visible = page.locator("#sidebar-stacks li:not([hidden])")
- assert visible.count() == 3
+ assert visible.count() == 4
content = visible.all_inner_texts()
assert any("plex" in s for s in content)
assert any("grafana" in s for s in content)
assert any("glances" in s for s in content)
assert not any("nextcloud" in s for s in content)
assert not any("jellyfin" in s for s in content)
@@ -562,7 +572,7 @@ class TestSidebarFilter:
self._filter_sidebar(page, "")
# All stacks visible again
- assert page.locator("#sidebar-stacks li:not([hidden])").count() == 5
+ assert page.locator("#sidebar-stacks li:not([hidden])").count() == 6
class TestCommandPalette:
@@ -884,7 +894,7 @@ class TestContentStability:
# Remember sidebar state
initial_count = page.locator("#sidebar-stacks li").count()
- assert initial_count == 5
+ assert initial_count == 6
# Navigate away
page.locator("#sidebar-stacks a", has_text="plex").click()
@@ -2329,3 +2339,227 @@ class TestTerminalNavigationIsolation:
# Terminal should still be collapsed (no task to reconnect to)
terminal_toggle = page.locator("#terminal-toggle")
assert not terminal_toggle.is_checked(), "Terminal should remain collapsed after navigation"
class TestContainersPagePause:
"""Test containers page auto-refresh pause mechanism.
The containers page auto-refreshes every 3 seconds. When a user opens
an action dropdown, refresh should pause to prevent the dropdown from
closing unexpectedly.
"""
# Mock HTML for container rows with action dropdowns
MOCK_ROWS_HTML = """
<tr>
<td>1</td>
<td data-sort="plex"><a href="/stack/plex" class="link">plex</a></td>
<td data-sort="server">server</td>
<td><div class="dropdown dropdown-end">
<label tabindex="0" class="btn btn-circle btn-ghost btn-xs"><svg class="h-4 w-4"></svg></label>
<ul tabindex="0" class="dropdown-content menu menu-sm bg-base-200 rounded-box shadow-lg w-36 z-50 p-2">
<li><a hx-post="/api/stack/plex/restart">Restart</a></li>
</ul>
</div></td>
<td data-sort="nas"><span class="badge">nas</span></td>
<td data-sort="nginx:latest"><code>nginx:latest</code></td>
<td data-sort="running"><span class="badge badge-success">running</span></td>
<td data-sort="3600">1 hour</td>
<td data-sort="5"><progress class="progress" value="5" max="100"></progress><span>5%</span></td>
<td data-sort="104857600"><progress class="progress" value="10" max="100"></progress><span>100MB</span></td>
<td data-sort="1000">↓1KB ↑1KB</td>
</tr>
<tr>
<td>2</td>
<td data-sort="redis"><a href="/stack/redis" class="link">redis</a></td>
<td data-sort="redis">redis</td>
<td><div class="dropdown dropdown-end">
<label tabindex="0" class="btn btn-circle btn-ghost btn-xs"><svg class="h-4 w-4"></svg></label>
<ul tabindex="0" class="dropdown-content menu menu-sm bg-base-200 rounded-box shadow-lg w-36 z-50 p-2">
<li><a hx-post="/api/stack/redis/restart">Restart</a></li>
</ul>
</div></td>
<td data-sort="nas"><span class="badge">nas</span></td>
<td data-sort="redis:7"><code>redis:7</code></td>
<td data-sort="running"><span class="badge badge-success">running</span></td>
<td data-sort="7200">2 hours</td>
<td data-sort="1"><progress class="progress" value="1" max="100"></progress><span>1%</span></td>
<td data-sort="52428800"><progress class="progress" value="5" max="100"></progress><span>50MB</span></td>
<td data-sort="500">↓500B ↑500B</td>
</tr>
"""
def test_dropdown_pauses_refresh(self, page: Page, server_url: str) -> None:
"""Opening action dropdown pauses auto-refresh.
Bug: focusin event triggers pause, but focusout fires shortly after
when focus moves within the dropdown, causing refresh to resume
while dropdown is still visually open.
"""
# Mock container rows and update checks
page.route(
"**/api/containers/rows/*",
lambda route: route.fulfill(
status=200,
content_type="text/html",
body=self.MOCK_ROWS_HTML,
),
)
page.route(
"**/api/containers/check-updates",
lambda route: route.fulfill(
status=200,
content_type="application/json",
body='{"results": []}',
),
)
page.goto(f"{server_url}/live-stats")
# Wait for container rows to load
page.wait_for_function(
"document.querySelectorAll('#container-rows tr:not(.loading-row)').length > 0",
timeout=TIMEOUT,
)
# Wait for timer to start
page.wait_for_function(
"document.getElementById('refresh-timer')?.textContent?.includes('')",
timeout=TIMEOUT,
)
# Click on a dropdown to open it
dropdown_label = page.locator(".dropdown label").first
dropdown_label.click()
# Wait a moment for focusin to trigger
page.wait_for_timeout(200)
# Verify pause is engaged
timer_text = page.locator("#refresh-timer").inner_text()
assert timer_text == "❚❚", (
f"Refresh should be paused after clicking dropdown. timer='{timer_text}'"
)
assert "❚❚" in timer_text, f"Timer should show pause icon, got '{timer_text}'"
def test_refresh_stays_paused_while_dropdown_open(self, page: Page, server_url: str) -> None:
"""Refresh remains paused for duration dropdown is open (>5s refresh interval).
This is the critical test for the pause bug: refresh should stay paused
for longer than the 3-second refresh interval while dropdown is open.
"""
# Mock container rows and update checks
page.route(
"**/api/containers/rows/*",
lambda route: route.fulfill(
status=200,
content_type="text/html",
body=self.MOCK_ROWS_HTML,
),
)
page.route(
"**/api/containers/check-updates",
lambda route: route.fulfill(
status=200,
content_type="application/json",
body='{"results": []}',
),
)
page.goto(f"{server_url}/live-stats")
# Wait for container rows to load
page.wait_for_function(
"document.querySelectorAll('#container-rows tr:not(.loading-row)').length > 0",
timeout=TIMEOUT,
)
# Wait for timer to start
page.wait_for_function(
"document.getElementById('refresh-timer')?.textContent?.includes('')",
timeout=TIMEOUT,
)
# Record a marker in the first row to detect if refresh happened
page.evaluate("""
const firstRow = document.querySelector('#container-rows tr');
if (firstRow) firstRow.dataset.testMarker = 'original';
""")
# Click dropdown to pause
dropdown_label = page.locator(".dropdown label").first
dropdown_label.click()
page.wait_for_timeout(200)
# Confirm paused
assert page.locator("#refresh-timer").inner_text() == "❚❚"
# Wait longer than the refresh interval
page.wait_for_timeout(6000)
# Check if still paused
timer_text = page.locator("#refresh-timer").inner_text()
# Check if the row was replaced (marker would be gone)
marker = page.evaluate("""
document.querySelector('#container-rows tr')?.dataset?.testMarker
""")
assert timer_text == "❚❚", f"Refresh should still be paused after 6s. timer='{timer_text}'"
assert marker == "original", (
"Table was refreshed while dropdown was open - pause mechanism failed"
)
def test_refresh_resumes_after_dropdown_closes(self, page: Page, server_url: str) -> None:
"""Refresh resumes after dropdown is closed."""
# Mock container rows and update checks
page.route(
"**/api/containers/rows/*",
lambda route: route.fulfill(
status=200,
content_type="text/html",
body=self.MOCK_ROWS_HTML,
),
)
page.route(
"**/api/containers/check-updates",
lambda route: route.fulfill(
status=200,
content_type="application/json",
body='{"results": []}',
),
)
page.goto(f"{server_url}/live-stats")
# Wait for container rows to load
page.wait_for_function(
"document.querySelectorAll('#container-rows tr:not(.loading-row)').length > 0",
timeout=TIMEOUT,
)
# Wait for timer to start
page.wait_for_function(
"document.getElementById('refresh-timer')?.textContent?.includes('')",
timeout=TIMEOUT,
)
# Click dropdown to pause
dropdown_label = page.locator(".dropdown label").first
dropdown_label.click()
page.wait_for_timeout(200)
assert page.locator("#refresh-timer").inner_text() == "❚❚"
# Close dropdown by pressing Escape or clicking elsewhere
page.keyboard.press("Escape")
page.wait_for_timeout(300) # Wait for focusout timeout (150ms) + buffer
# Verify refresh resumed
timer_text = page.locator("#refresh-timer").inner_text()
assert timer_text != "❚❚", (
f"Refresh should resume after closing dropdown. timer='{timer_text}'"
)
assert "" in timer_text, f"Timer should show countdown, got '{timer_text}'"

uv.lock (generated)

@@ -242,6 +242,7 @@ dependencies = [
[package.optional-dependencies]
web = [
{ name = "fastapi", extra = ["standard"] },
{ name = "humanize" },
{ name = "jinja2" },
{ name = "websockets" },
]
@@ -270,6 +271,7 @@ dev = [
requires-dist = [
{ name = "asyncssh", specifier = ">=2.14.0" },
{ name = "fastapi", extras = ["standard"], marker = "extra == 'web'", specifier = ">=0.109.0" },
{ name = "humanize", marker = "extra == 'web'", specifier = ">=4.0.0" },
{ name = "jinja2", marker = "extra == 'web'", specifier = ">=3.1.0" },
{ name = "pydantic", specifier = ">=2.0.0" },
{ name = "pyyaml", specifier = ">=6.0" },
@@ -781,6 +783,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517, upload-time = "2024-12-06T15:37:21.509Z" },
]
[[package]]
name = "humanize"
version = "4.15.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ba/66/a3921783d54be8a6870ac4ccffcd15c4dc0dd7fcce51c6d63b8c63935276/humanize-4.15.0.tar.gz", hash = "sha256:1dd098483eb1c7ee8e32eb2e99ad1910baefa4b75c3aff3a82f4d78688993b10", size = 83599, upload-time = "2025-12-20T20:16:13.19Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c5/7b/bca5613a0c3b542420cf92bd5e5fb8ebd5435ce1011a091f66bb7693285e/humanize-4.15.0-py3-none-any.whl", hash = "sha256:b1186eb9f5a9749cd9cb8565aee77919dd7c8d076161cf44d70e59e3301e1769", size = 132203, upload-time = "2025-12-20T20:16:11.67Z" },
]
[[package]]
name = "identify"
version = "2.6.15"