Mirror of https://github.com/basnijholt/compose-farm.git (synced 2026-02-09 00:39:54 +00:00)
Compare commits (29 commits):

feb0e13bfd, b86f6d190f, 5ed15b5445, 761b6dd2d1, e86c2b6d47,
9353b74c35, b7e8e0f3a9, b6c02587bc, d412c42ca4, 13e0adbbb9,
68c41eb37c, 8af088bb5d, 1308eeca12, a66a68f395, 6ea25c862e,
280524b546, db9360771b, c7590ed0b7, bb563b9d4b, fe160ee116,
4c7f49414f, bebe5b34ba, 5d21e64781, 114c7b6eb6, 20e281a23e,
ec33d28d6c, a818b7726e, cead3904bf, 8f5e14d621
@@ -31,7 +31,7 @@ compose_farm/

## Git Safety

- Never amend commits.
- Never merge into a branch; prefer fast-forward or rebase as directed.
- **NEVER merge anything into main.** Always commit directly or use fast-forward/rebase.
- Never force push.

## Commands Quick Reference

README.md
@@ -1,23 +1,30 @@
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [Compose Farm](#compose-farm)
  - [Why Compose Farm?](#why-compose-farm)
  - [Key Assumption: Shared Storage](#key-assumption-shared-storage)
  - [Installation](#installation)
  - [Configuration](#configuration)
  - [Usage](#usage)
  - [Traefik Multihost Ingress (File Provider)](#traefik-multihost-ingress-file-provider)
  - [Requirements](#requirements)
  - [How It Works](#how-it-works)
  - [License](#license)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

# Compose Farm

A minimal CLI tool to run Docker Compose commands across multiple hosts via SSH.

> [!NOTE]
> Run `docker compose` commands across multiple hosts via SSH. One YAML maps services to hosts. Change the mapping, run `up`, and it auto-migrates. No Kubernetes, no Swarm, no magic.

<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [Why Compose Farm?](#why-compose-farm)
- [Key Assumption: Shared Storage](#key-assumption-shared-storage)
- [Limitations & Best Practices](#limitations--best-practices)
  - [What breaks when you move a service](#what-breaks-when-you-move-a-service)
  - [Best practices](#best-practices)
  - [What Compose Farm doesn't do](#what-compose-farm-doesnt-do)
- [Installation](#installation)
- [Configuration](#configuration)
- [Usage](#usage)
  - [Auto-Migration](#auto-migration)
- [Traefik Multihost Ingress (File Provider)](#traefik-multihost-ingress-file-provider)
- [Requirements](#requirements)
- [How It Works](#how-it-works)
- [License](#license)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

## Why Compose Farm?

I run 100+ Docker Compose stacks on an LXC container that frequently runs out of memory. I needed a way to distribute services across multiple machines without the complexity of:
@@ -44,6 +51,37 @@ nas:/volume1/compose → /opt/compose (on nas03)

Compose Farm simply runs `docker compose -f /opt/compose/{service}/docker-compose.yml` on the appropriate host—it doesn't copy or sync files.
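
For a remote service this boils down to a plain compose invocation on the target host. A minimal sketch of the effective command (host and service names illustrative; compose-farm drives the connection through asyncssh rather than the `ssh` binary):

```bash
# Roughly what `compose-farm up plex` executes when plex is assigned to nas03
ssh nas03 "docker compose -f /opt/compose/plex/docker-compose.yml up -d"
```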

## Limitations & Best Practices

Compose Farm moves containers between hosts but **does not provide cross-host networking**. Docker's internal DNS and networks don't span hosts.

### What breaks when you move a service

- **Docker DNS** - `http://redis:6379` won't resolve from another host
- **Docker networks** - Containers can't reach each other via network names
- **Environment variables** - `DATABASE_URL=postgres://db:5432` stops working

### Best practices

1. **Keep dependent services together** - If an app needs a database, redis, or worker, keep them in the same compose file on the same host

2. **Only migrate standalone services** - Services that don't talk to other containers (or only talk to external APIs) are safe to move

3. **Expose ports for cross-host communication** - If services must communicate across hosts, publish ports and use IP addresses instead of container names:

   ```yaml
   # Instead of: DATABASE_URL=postgres://db:5432
   # Use:        DATABASE_URL=postgres://192.168.1.66:5432
   ```

   This includes Traefik routing—containers need published ports for the file-provider to reach them.

### What Compose Farm doesn't do

- No overlay networking (use Docker Swarm or Kubernetes for that)
- No service discovery across hosts
- No automatic dependency tracking between compose files

If you need containers on different hosts to communicate seamlessly, you need Docker Swarm, Kubernetes, or a service mesh—which adds the complexity Compose Farm is designed to avoid.

## Installation

```bash
@@ -75,12 +113,12 @@ services:
  radarr: local  # Runs on the machine where you invoke compose-farm
```

Compose files are expected at `{compose_dir}/{service}/docker-compose.yml`.
Compose files are expected at `{compose_dir}/{service}/compose.yaml` (also supports `compose.yml`, `docker-compose.yml`, `docker-compose.yaml`).
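
For illustration, a layout compose-farm would pick up (service names assumed; the first matching filename in the order above wins):

```bash
# Example compose_dir layout (illustrative)
$ tree /opt/compose
/opt/compose
├── plex
│   └── compose.yaml           # preferred name, found first
└── sonarr
    └── docker-compose.yml     # legacy name, still supported
```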

## Usage

```bash
# Start services
# Start services (auto-migrates if host changed in config)
compose-farm up plex jellyfin
compose-farm up --all

@@ -96,9 +134,9 @@ compose-farm restart plex
# Update (pull + down + up) - the end-to-end update command
compose-farm update --all

# Capture image digests to a TOML log (per service or all)
compose-farm snapshot plex
compose-farm snapshot --all   # writes ~/.config/compose-farm/dockerfarm-log.toml
# Sync state with reality (discovers running services + captures image digests)
compose-farm sync             # updates state.yaml and dockerfarm-log.toml
compose-farm sync --dry-run   # preview without writing

# View logs
compose-farm logs plex
@@ -108,6 +146,23 @@ compose-farm logs -f plex  # follow
compose-farm ps
```

### Auto-Migration

When you change a service's host assignment in config and run `up`, Compose Farm automatically:

1. Runs `down` on the old host
2. Runs `up -d` on the new host
3. Updates state tracking

```yaml
# Before: plex runs on nas01
services:
  plex: nas01

# After: change to nas02, then run `compose-farm up plex`
services:
  plex: nas02  # Compose Farm will migrate automatically
```
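
Under the hood the migration is just two compose invocations on different hosts plus a state update. A rough sketch in plain shell (illustrative; compose-farm runs these over its own SSH connections and then records the new host in its state file):

```bash
# Step 1: tear down on the old host
ssh nas01 "docker compose -f /opt/compose/plex/docker-compose.yml down"
# Step 2: start on the new host (shared storage means the same compose file is visible there)
ssh nas02 "docker compose -f /opt/compose/plex/docker-compose.yml up -d"
```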

## Traefik Multihost Ingress (File Provider)

If you run a single Traefik instance on one “front‑door” host and want it to route to
@@ -156,12 +211,58 @@ providers:
**Generate the fragment**

```bash
compose-farm traefik-file --output /mnt/data/traefik/dynamic.d/compose-farm.generated.yml
compose-farm traefik-file --all --output /mnt/data/traefik/dynamic.d/compose-farm.yml
```

Re‑run this after changing Traefik labels, moving a service to another host, or changing
published ports.

**Auto-regeneration**

To automatically regenerate the Traefik config after `up`, `down`, `restart`, or `update`,
add `traefik_file` to your config:

```yaml
compose_dir: /opt/compose
traefik_file: /opt/traefik/dynamic.d/compose-farm.yml  # auto-regenerate on up/down/restart/update
traefik_service: traefik  # skip services on same host (docker provider handles them)

hosts:
  # ...
services:
  traefik: nas01  # Traefik runs here
  plex: nas02     # Services on other hosts get file-provider entries
  # ...
```

The `traefik_service` option specifies which service runs Traefik. Services on the same host
are skipped in the file-provider config since Traefik's docker provider handles them directly.

Now `compose-farm up plex` will update the Traefik config automatically—no separate
`traefik-file` command needed.

**Combining with existing config**

If you already have a `dynamic.yml` with manual routes, middlewares, etc., move it into the
directory and Traefik will merge all files:

```bash
mkdir -p /opt/traefik/dynamic.d
mv /opt/traefik/dynamic.yml /opt/traefik/dynamic.d/manual.yml
compose-farm traefik-file --all -o /opt/traefik/dynamic.d/compose-farm.yml
```

Update your Traefik config to use directory watching instead of a single file:

```yaml
# Before
- --providers.file.filename=/dynamic.yml

# After
- --providers.file.directory=/dynamic.d
- --providers.file.watch=true
```

## Requirements

- Python 3.11+

docs/dev/docker-swarm-network.md (new file)
@@ -0,0 +1,90 @@
# Docker Swarm Overlay Networks with Compose Farm

Notes from testing Docker Swarm's attachable overlay networks as a way to get cross-host container networking while still using `docker compose`.

## The Idea

Docker Swarm overlay networks can be made "attachable", allowing regular `docker compose` containers (not just swarm services) to join them. This would give us:

- Cross-host Docker DNS (containers find each other by name)
- No need to publish ports for inter-container communication
- Keep using `docker compose up` instead of `docker stack deploy`

## Setup Steps

```bash
# On manager node
docker swarm init --advertise-addr <manager-ip>

# On worker nodes (use token from init output)
docker swarm join --token <token> <manager-ip>:2377

# Create attachable overlay network (on manager)
docker network create --driver overlay --attachable my-network
```

In compose files, add the network:

```yaml
networks:
  my-network:
    external: true
```
## Required Ports

Docker Swarm requires these ports open **bidirectionally** between all nodes:

| Port | Protocol  | Purpose                         |
|------|-----------|---------------------------------|
| 2377 | TCP       | Cluster management              |
| 7946 | TCP + UDP | Node communication              |
| 4789 | UDP       | Overlay network traffic (VXLAN) |
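
A quick way to probe the TCP ports from another node (assumes netcat is installed; the UDP ports can't be verified reliably this way):

```bash
# Run from a peer node; a refused/timed-out connection means the port is blocked
nc -zv 192.168.1.167 2377   # cluster management
nc -zv 192.168.1.167 7946   # node communication (TCP half)
```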

## Test Results (2024-12-13)

- docker-debian (192.168.1.66) as manager
- dev-lxc (192.168.1.167) as worker

### What worked

- Swarm init and join
- Overlay network creation
- Nodes showed as Ready

### What failed

- Container on dev-lxc couldn't attach to overlay network
- Error: `attaching to network failed... context deadline exceeded`
- Cause: Port 7946 blocked from docker-debian → dev-lxc

### Root cause

Firewall on dev-lxc wasn't configured to allow swarm ports. Opening these ports requires sudo access on each node.
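
For example, with ufw the rules would look roughly like this (assuming ufw is the firewall in use; adapt for firewalld/iptables):

```bash
# Allow the swarm ports from the table above (run with sudo on every node)
sudo ufw allow 2377/tcp   # cluster management
sudo ufw allow 7946/tcp   # node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # VXLAN overlay traffic
```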

## Conclusion

Docker Swarm overlay networks are **not plug-and-play**. Requirements:

1. Swarm init/join on all nodes
2. Firewall rules on all nodes (needs sudo/root)
3. All nodes must have bidirectional connectivity on 3 ports

For a simpler alternative, consider:

- **Tailscale**: VPN mesh, containers use host's Tailscale IP
- **Host networking + published ports**: What compose-farm does today
- **Keep dependent services together**: Avoid cross-host networking entirely

## Future Work

If we decide to support overlay networks:

1. Add a `compose-farm network create` command that:
   - Initializes swarm if needed
   - Creates attachable overlay network
   - Documents required firewall rules

2. Add network config to compose-farm.yaml:

   ```yaml
   overlay_network: compose-farm-net
   ```

3. Auto-inject network into compose files (or document manual setup)
pyproject.toml

@@ -12,6 +12,7 @@ dependencies = [
    "pydantic>=2.0.0",
    "asyncssh>=2.14.0",
    "pyyaml>=6.0",
    "rich>=13.0.0",
]

[project.scripts]
src/compose_farm/cli.py

@@ -8,21 +8,47 @@ from typing import TYPE_CHECKING, Annotated, TypeVar

import typer
import yaml
from rich.console import Console

from . import __version__
from .config import Config, load_config
from .logs import snapshot_services
from .ssh import (
    CommandResult,
    check_service_running,
    run_compose,
    run_compose_on_host,
    run_on_services,
    run_sequential_on_services,
)
from .state import get_service_host, load_state, remove_service, save_state, set_service_host
from .traefik import generate_traefik_config

if TYPE_CHECKING:
    from collections.abc import Coroutine

T = TypeVar("T")

console = Console(highlight=False)
err_console = Console(stderr=True, highlight=False)


def _maybe_regenerate_traefik(cfg: Config) -> None:
    """Regenerate traefik config if traefik_file is configured."""
    if cfg.traefik_file is None:
        return

    try:
        dynamic, warnings = generate_traefik_config(cfg, list(cfg.services.keys()))
        cfg.traefik_file.parent.mkdir(parents=True, exist_ok=True)
        cfg.traefik_file.write_text(yaml.safe_dump(dynamic, sort_keys=False))
        console.print(f"[green]✓[/] Traefik config updated: {cfg.traefik_file}")
        for warning in warnings:
            err_console.print(f"[yellow]![/] {warning}")
    except (FileNotFoundError, ValueError) as exc:
        err_console.print(f"[yellow]![/] Failed to update traefik config: {exc}")


def _version_callback(value: bool) -> None:
    """Print version and exit."""
    if value:
@@ -64,7 +90,7 @@ def _get_services(
    if all_services:
        return list(config.services.keys()), config
    if not services:
        typer.echo("Error: Specify services or use --all", err=True)
        err_console.print("[red]✗[/] Specify services or use --all")
        raise typer.Exit(1)
    return list(services), config

@@ -79,7 +105,9 @@ def _report_results(results: list[CommandResult]) -> None:
    failed = [r for r in results if not r.success]
    if failed:
        for r in failed:
            typer.echo(f"[{r.service}] Failed with exit code {r.exit_code}", err=True)
            err_console.print(
                f"[cyan]\\[{r.service}][/] [red]Failed with exit code {r.exit_code}[/]"
            )
        raise typer.Exit(1)
@@ -101,15 +129,55 @@ LogPathOption = Annotated[
]


async def _up_with_migration(
    cfg: Config,
    services: list[str],
) -> list[CommandResult]:
    """Start services with automatic migration if host changed."""
    results: list[CommandResult] = []

    for service in services:
        target_host = cfg.services[service]
        current_host = get_service_host(cfg, service)

        # If service is deployed elsewhere, migrate it
        if current_host and current_host != target_host:
            if current_host in cfg.hosts:
                console.print(
                    f"[cyan]\\[{service}][/] Migrating from "
                    f"[magenta]{current_host}[/] → [magenta]{target_host}[/]..."
                )
                down_result = await run_compose_on_host(cfg, service, current_host, "down")
                if not down_result.success:
                    results.append(down_result)
                    continue
            else:
                err_console.print(
                    f"[cyan]\\[{service}][/] [yellow]![/] was on "
                    f"[magenta]{current_host}[/] (not in config), skipping down"
                )

        # Start on target host
        up_result = await run_compose(cfg, service, "up -d")
        results.append(up_result)

        # Update state on success
        if up_result.success:
            set_service_host(cfg, service, target_host)

    return results


@app.command()
def up(
    services: ServicesArg = None,
    all_services: AllOption = False,
    config: ConfigOption = None,
) -> None:
    """Start services (docker compose up -d)."""
    """Start services (docker compose up -d). Auto-migrates if host changed."""
    svc_list, cfg = _get_services(services or [], all_services, config)
    results = _run_async(run_on_services(cfg, svc_list, "up -d"))
    results = _run_async(_up_with_migration(cfg, svc_list))
    _maybe_regenerate_traefik(cfg)
    _report_results(results)
@@ -122,6 +190,13 @@ def down(
    """Stop services (docker compose down)."""
    svc_list, cfg = _get_services(services or [], all_services, config)
    results = _run_async(run_on_services(cfg, svc_list, "down"))

    # Remove from state on success
    for result in results:
        if result.success:
            remove_service(cfg, result.service)

    _maybe_regenerate_traefik(cfg)
    _report_results(results)


@@ -146,6 +221,7 @@ def restart(
    """Restart services (down + up)."""
    svc_list, cfg = _get_services(services or [], all_services, config)
    results = _run_async(run_sequential_on_services(cfg, svc_list, ["down", "up -d"]))
    _maybe_regenerate_traefik(cfg)
    _report_results(results)


@@ -158,6 +234,7 @@ def update(
    """Update services (pull + down + up)."""
    svc_list, cfg = _get_services(services or [], all_services, config)
    results = _run_async(run_sequential_on_services(cfg, svc_list, ["pull", "down", "up -d"]))
    _maybe_regenerate_traefik(cfg)
    _report_results(results)


@@ -188,24 +265,6 @@ def ps(
    _report_results(results)


@app.command()
def snapshot(
    services: ServicesArg = None,
    all_services: AllOption = False,
    log_path: LogPathOption = None,
    config: ConfigOption = None,
) -> None:
    """Record current image digests into the Dockerfarm TOML log."""
    svc_list, cfg = _get_services(services or [], all_services, config)
    try:
        path = _run_async(snapshot_services(cfg, svc_list, log_path=log_path))
    except RuntimeError as exc:  # pragma: no cover - error path
        typer.echo(str(exc), err=True)
        raise typer.Exit(1) from exc

    typer.echo(f"Snapshot written to {path}")


@app.command("traefik-file")
def traefik_file(
    services: ServicesArg = None,
@@ -221,13 +280,11 @@ def traefik_file(
    config: ConfigOption = None,
) -> None:
    """Generate a Traefik file-provider fragment from compose Traefik labels."""
    from .traefik import generate_traefik_config

    svc_list, cfg = _get_services(services or [], all_services, config)
    try:
        dynamic, warnings = generate_traefik_config(cfg, svc_list)
    except (FileNotFoundError, ValueError) as exc:
        typer.echo(str(exc), err=True)
        err_console.print(f"[red]✗[/] {exc}")
        raise typer.Exit(1) from exc

    rendered = yaml.safe_dump(dynamic, sort_keys=False)
@@ -235,12 +292,151 @@ def traefik_file(
    if output:
        output.parent.mkdir(parents=True, exist_ok=True)
        output.write_text(rendered)
        typer.echo(f"Traefik config written to {output}")
        console.print(f"[green]✓[/] Traefik config written to {output}")
    else:
        typer.echo(rendered)
        console.print(rendered)

    for warning in warnings:
        typer.echo(warning, err=True)
        err_console.print(f"[yellow]![/] {warning}")


async def _discover_running_services(cfg: Config) -> dict[str, str]:
    """Discover which services are running on which hosts.

    Returns a dict mapping service names to host names for running services.
    """
    discovered: dict[str, str] = {}

    for service, assigned_host in cfg.services.items():
        # Check assigned host first (most common case)
        if await check_service_running(cfg, service, assigned_host):
            discovered[service] = assigned_host
            continue

        # Check other hosts in case service was migrated but state is stale
        for host_name in cfg.hosts:
            if host_name == assigned_host:
                continue
            if await check_service_running(cfg, service, host_name):
                discovered[service] = host_name
                break

    return discovered


def _report_sync_changes(
    added: list[str],
    removed: list[str],
    changed: list[tuple[str, str, str]],
    discovered: dict[str, str],
    current_state: dict[str, str],
) -> None:
    """Report sync changes to the user."""
    if added:
        console.print(f"\nNew services found ({len(added)}):")
        for service in sorted(added):
            console.print(f"  [green]+[/] [cyan]{service}[/] on [magenta]{discovered[service]}[/]")

    if changed:
        console.print(f"\nServices on different hosts ({len(changed)}):")
        for service, old_host, new_host in sorted(changed):
            console.print(
                f"  [yellow]~[/] [cyan]{service}[/]: "
                f"[magenta]{old_host}[/] → [magenta]{new_host}[/]"
            )

    if removed:
        console.print(f"\nServices no longer running ({len(removed)}):")
        for service in sorted(removed):
            console.print(
                f"  [red]-[/] [cyan]{service}[/] (was on [magenta]{current_state[service]}[/])"
            )


@app.command()
def sync(
    config: ConfigOption = None,
    log_path: LogPathOption = None,
    dry_run: Annotated[
        bool,
        typer.Option("--dry-run", "-n", help="Show what would be synced without writing"),
    ] = False,
) -> None:
    """Sync local state with running services.

    Discovers which services are running on which hosts, updates the state
    file, and captures image digests. Combines service discovery with
    image snapshot into a single command.
    """
    cfg = load_config(config)
    current_state = load_state(cfg)

    console.print("Discovering running services...")
    discovered = _run_async(_discover_running_services(cfg))

    # Calculate changes
    added = [s for s in discovered if s not in current_state]
    removed = [s for s in current_state if s not in discovered]
    changed = [
        (s, current_state[s], discovered[s])
        for s in discovered
        if s in current_state and current_state[s] != discovered[s]
    ]

    # Report state changes
    state_changed = bool(added or removed or changed)
    if state_changed:
        _report_sync_changes(added, removed, changed, discovered, current_state)
    else:
        console.print("[green]✓[/] State is already in sync.")

    if dry_run:
        console.print("\n[dim](dry-run: no changes made)[/]")
        return

    # Update state file
    if state_changed:
        save_state(cfg, discovered)
        console.print(f"\n[green]✓[/] State updated: {len(discovered)} services tracked.")

    # Capture image digests for running services
    if discovered:
        console.print("\nCapturing image digests...")
        try:
            path = _run_async(snapshot_services(cfg, list(discovered.keys()), log_path=log_path))
            console.print(f"[green]✓[/] Digests written to {path}")
        except RuntimeError as exc:
            err_console.print(f"[yellow]![/] {exc}")


@app.command()
def check(
    config: ConfigOption = None,
) -> None:
    """Check for compose directories not in config (and vice versa)."""
    cfg = load_config(config)
    configured = set(cfg.services.keys())
    on_disk = cfg.discover_compose_dirs()

    missing_from_config = sorted(on_disk - configured)
    missing_from_disk = sorted(configured - on_disk)

    if missing_from_config:
        console.print(f"\n[yellow]Not in config[/] ({len(missing_from_config)}):")
        for name in missing_from_config:
            console.print(f"  [yellow]+[/] [cyan]{name}[/]")

    if missing_from_disk:
        console.print(f"\n[red]No compose file found[/] ({len(missing_from_disk)}):")
        for name in missing_from_disk:
            console.print(f"  [red]-[/] [cyan]{name}[/]")

    if not missing_from_config and not missing_from_disk:
        console.print("[green]✓[/] All compose directories are in config.")
    elif missing_from_config:
        console.print(f"\n[dim]To add missing services, append to {cfg.config_path}:[/]")
        for name in missing_from_config:
            console.print(f"[dim]  {name}: docker-debian[/]")


if __name__ == "__main__":
src/compose_farm/config.py

@@ -23,6 +23,13 @@ class Config(BaseModel):
    compose_dir: Path = Path("/opt/compose")
    hosts: dict[str, Host]
    services: dict[str, str]  # service_name -> host_name
    traefik_file: Path | None = None  # Auto-regenerate traefik config after up/down
    traefik_service: str | None = None  # Service name for Traefik (skip its host in file-provider)
    config_path: Path = Path()  # Set by load_config()

    def get_state_path(self) -> Path:
        """Get the state file path (stored alongside config)."""
        return self.config_path.parent / "compose-farm-state.yaml"

    @model_validator(mode="after")
    def validate_service_hosts(self) -> Config:
@@ -46,13 +53,37 @@ class Config(BaseModel):
        Tries compose.yaml first, then docker-compose.yml.
        """
        service_dir = self.compose_dir / service
        for filename in ("compose.yaml", "compose.yml", "docker-compose.yml", "docker-compose.yaml"):
        for filename in (
            "compose.yaml",
            "compose.yml",
            "docker-compose.yml",
            "docker-compose.yaml",
        ):
            candidate = service_dir / filename
            if candidate.exists():
                return candidate
        # Default to compose.yaml if none exist (will error later)
        return service_dir / "compose.yaml"

    def discover_compose_dirs(self) -> set[str]:
        """Find all directories in compose_dir that contain a compose file."""
        compose_filenames = {
            "compose.yaml",
            "compose.yml",
            "docker-compose.yml",
            "docker-compose.yaml",
        }
        found: set[str] = set()
        if not self.compose_dir.exists():
            return found
        for subdir in self.compose_dir.iterdir():
            if subdir.is_dir():
                for filename in compose_filenames:
                    if (subdir / filename).exists():
                        found.add(subdir.name)
                        break
        return found


def _parse_hosts(raw_hosts: dict[str, str | dict[str, str | int]]) -> dict[str, Host]:
    """Parse hosts from config, handling both simple and full forms."""
@@ -98,5 +129,6 @@ def load_config(path: Path | None = None) -> Config:

    # Parse hosts with flexible format support
    raw["hosts"] = _parse_hosts(raw.get("hosts", {}))
    raw["config_path"] = config_path.resolve()

    return Config(**raw)
src/compose_farm/ssh.py

@@ -3,15 +3,18 @@
from __future__ import annotations

import asyncio
import sys
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any

import asyncssh
from rich.console import Console

if TYPE_CHECKING:
    from .config import Config, Host

_console = Console(highlight=False)
_err_console = Console(stderr=True, highlight=False)

LOCAL_ADDRESSES = frozenset({"local", "localhost", "127.0.0.1", "::1"})


@@ -53,12 +56,12 @@ async def _run_local_command(
        *,
        is_stderr: bool = False,
    ) -> None:
        output = sys.stderr if is_stderr else sys.stdout
        console = _err_console if is_stderr else _console
        while True:
            line = await reader.readline()
            if not line:
                break
            print(f"[{prefix}] {line.decode()}", end="", file=output, flush=True)
            console.print(f"[cyan]\\[{prefix}][/] {line.decode()}", end="")

    await asyncio.gather(
        read_stream(proc.stdout, service),
@@ -80,7 +83,7 @@ async def _run_local_command(
            stderr=stderr_data.decode() if stderr_data else "",
        )
    except OSError as e:
        print(f"[{service}] Local error: {e}", file=sys.stderr)
        _err_console.print(f"[cyan]\\[{service}][/] [red]Local error:[/] {e}")
        return CommandResult(service=service, exit_code=1, success=False)


@@ -111,9 +114,9 @@ async def _run_ssh_command(
        *,
        is_stderr: bool = False,
    ) -> None:
        output = sys.stderr if is_stderr else sys.stdout
        console = _err_console if is_stderr else _console
        async for line in reader:
            print(f"[{prefix}] {line}", end="", file=output, flush=True)
            console.print(f"[cyan]\\[{prefix}][/] {line}", end="")

    await asyncio.gather(
        read_stream(proc.stdout, service),
@@ -135,7 +138,7 @@ async def _run_ssh_command(
            stderr=stderr_data,
        )
    except (OSError, asyncssh.Error) as e:
        print(f"[{service}] SSH error: {e}", file=sys.stderr)
        _err_console.print(f"[cyan]\\[{service}][/] [red]SSH error:[/] {e}")
        return CommandResult(service=service, exit_code=1, success=False)


@@ -167,6 +170,25 @@ async def run_compose(
    return await run_command(host, command, service, stream=stream)


async def run_compose_on_host(
    config: Config,
    service: str,
    host_name: str,
    compose_cmd: str,
    *,
    stream: bool = True,
) -> CommandResult:
    """Run a docker compose command for a service on a specific host.

    Used for migration - running 'down' on the old host before 'up' on new host.
    """
    host = config.hosts[host_name]
    compose_path = config.get_compose_path(service)

    command = f"docker compose -f {compose_path} {compose_cmd}"
    return await run_command(host, command, service, stream=stream)


async def run_on_services(
    config: Config,
    services: list[str],
@@ -206,3 +228,20 @@ async def run_sequential_on_services(
        run_sequential_commands(config, service, commands, stream=stream) for service in services
    ]
    return await asyncio.gather(*tasks)


async def check_service_running(
    config: Config,
    service: str,
    host_name: str,
) -> bool:
    """Check if a service has running containers on a specific host."""
    host = config.hosts[host_name]
    compose_path = config.get_compose_path(service)

    # Use ps --status running to check for running containers
    command = f"docker compose -f {compose_path} ps --status running -q"
    result = await run_command(host, command, service, stream=False)

    # If command succeeded and has output, containers are running
    return result.success and bool(result.stdout.strip())

src/compose_farm/state.py (new file)
@@ -0,0 +1,53 @@
"""State tracking for deployed services."""

from __future__ import annotations

from typing import TYPE_CHECKING, Any

import yaml

if TYPE_CHECKING:
    from .config import Config


def load_state(config: Config) -> dict[str, str]:
    """Load the current deployment state.

    Returns a dict mapping service names to host names.
    """
    state_path = config.get_state_path()
    if not state_path.exists():
        return {}

    with state_path.open() as f:
        data: dict[str, Any] = yaml.safe_load(f) or {}

    deployed: dict[str, str] = data.get("deployed", {})
    return deployed


def save_state(config: Config, deployed: dict[str, str]) -> None:
    """Save the deployment state."""
    state_path = config.get_state_path()
    with state_path.open("w") as f:
        yaml.safe_dump({"deployed": deployed}, f, sort_keys=False)


def get_service_host(config: Config, service: str) -> str | None:
    """Get the host where a service is currently deployed."""
    state = load_state(config)
    return state.get(service)


def set_service_host(config: Config, service: str, host: str) -> None:
    """Record that a service is deployed on a host."""
    state = load_state(config)
    state[service] = host
    save_state(config, state)


def remove_service(config: Config, service: str) -> None:
    """Remove a service from the state (after down)."""
    state = load_state(config)
    state.pop(service, None)
    save_state(config, state)
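
For reference, the on-disk format `save_state` produces is a single `deployed` mapping; the file lives next to the config as `compose-farm-state.yaml` (example values illustrative):

```bash
# Illustrative contents of the state file after two services are up
$ cat compose-farm-state.yaml
deployed:
  plex: nas01
  jellyfin: nas02
```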

src/compose_farm/traefik.py

@@ -15,6 +15,8 @@
from typing import TYPE_CHECKING, Any

import yaml

from .ssh import LOCAL_ADDRESSES

if TYPE_CHECKING:
    from pathlib import Path

@@ -289,8 +291,8 @@ def _finalize_http_services(
        if published_port is None:
            warnings.append(
                f"[{source.stack}/{source.compose_service}] "
                f"No host-published port found for Traefik service '{traefik_service}'. "
                "Traefik will require L3 reachability to container IPs."
                f"No published port found for Traefik service '{traefik_service}'. "
                "Add a ports: mapping (e.g., '8080:8080') for cross-host routing."
            )
            continue

@@ -459,8 +461,21 @@ def generate_traefik_config(
    warnings: list[str] = []
    sources: dict[str, TraefikServiceSource] = {}

    # Determine Traefik's host from service assignment
    traefik_host = None
    if config.traefik_service:
        traefik_host = config.services.get(config.traefik_service)

    for stack in services:
        raw_services, env, host_address = _load_stack(config, stack)
        stack_host = config.services.get(stack)

        # Skip services on Traefik's host - docker provider handles them directly
        if host_address.lower() in LOCAL_ADDRESSES:
            continue
        if traefik_host and stack_host == traefik_host:
            continue

        for compose_service, definition in raw_services.items():
            if not isinstance(definition, dict):
                continue
tests/test_config.py

@@ -75,7 +75,8 @@ class TestConfig:
            services={"plex": "nas01"},
        )
        path = config.get_compose_path("plex")
        assert path == Path("/opt/compose/plex/docker-compose.yml")
        # Defaults to compose.yaml when no file exists
        assert path == Path("/opt/compose/plex/compose.yaml")


class TestLoadConfig:

tests/test_state.py (new file)
@@ -0,0 +1,132 @@
"""Tests for state module."""

from pathlib import Path

import pytest

from compose_farm.config import Config, Host
from compose_farm.state import (
    get_service_host,
    load_state,
    remove_service,
    save_state,
    set_service_host,
)


@pytest.fixture
def config(tmp_path: Path) -> Config:
    """Create a config with a temporary config path for state storage."""
    config_path = tmp_path / "compose-farm.yaml"
    config_path.write_text("")  # Create empty file
    return Config(
        compose_dir=tmp_path / "compose",
        hosts={"nas01": Host(address="192.168.1.10")},
        services={"plex": "nas01"},
        config_path=config_path,
    )


class TestLoadState:
    """Tests for load_state function."""

    def test_load_state_empty(self, config: Config) -> None:
        """Returns empty dict when state file doesn't exist."""
        result = load_state(config)
        assert result == {}

    def test_load_state_with_data(self, config: Config) -> None:
        """Loads existing state from file."""
        state_file = config.get_state_path()
        state_file.write_text("deployed:\n plex: nas01\n jellyfin: nas02\n")

        result = load_state(config)
        assert result == {"plex": "nas01", "jellyfin": "nas02"}

    def test_load_state_empty_file(self, config: Config) -> None:
        """Returns empty dict for empty file."""
        state_file = config.get_state_path()
        state_file.write_text("")

        result = load_state(config)
        assert result == {}


class TestSaveState:
    """Tests for save_state function."""

    def test_save_state(self, config: Config) -> None:
        """Saves state to file."""
        save_state(config, {"plex": "nas01", "jellyfin": "nas02"})

        state_file = config.get_state_path()
        assert state_file.exists()
        content = state_file.read_text()
        assert "plex: nas01" in content
        assert "jellyfin: nas02" in content


class TestGetServiceHost:
    """Tests for get_service_host function."""

    def test_get_existing_service(self, config: Config) -> None:
        """Returns host for existing service."""
        state_file = config.get_state_path()
        state_file.write_text("deployed:\n plex: nas01\n")

        host = get_service_host(config, "plex")
        assert host == "nas01"

    def test_get_nonexistent_service(self, config: Config) -> None:
        """Returns None for service not in state."""
        state_file = config.get_state_path()
        state_file.write_text("deployed:\n plex: nas01\n")

        host = get_service_host(config, "unknown")
        assert host is None


class TestSetServiceHost:
    """Tests for set_service_host function."""

    def test_set_new_service(self, config: Config) -> None:
        """Adds new service to state."""
        set_service_host(config, "plex", "nas01")

        result = load_state(config)
        assert result["plex"] == "nas01"

    def test_update_existing_service(self, config: Config) -> None:
        """Updates host for existing service."""
        state_file = config.get_state_path()
        state_file.write_text("deployed:\n plex: nas01\n")

        set_service_host(config, "plex", "nas02")

        result = load_state(config)
        assert result["plex"] == "nas02"


class TestRemoveService:
    """Tests for remove_service function."""

    def test_remove_existing_service(self, config: Config) -> None:
        """Removes service from state."""
        state_file = config.get_state_path()
        state_file.write_text("deployed:\n plex: nas01\n jellyfin: nas02\n")

        remove_service(config, "plex")

        result = load_state(config)
        assert "plex" not in result
        assert result["jellyfin"] == "nas02"

    def test_remove_nonexistent_service(self, config: Config) -> None:
        """Removing nonexistent service doesn't error."""
        state_file = config.get_state_path()
        state_file.write_text("deployed:\n plex: nas01\n")

        remove_service(config, "unknown")  # Should not raise

        result = load_state(config)
        assert result["plex"] == "nas01"

tests/test_sync.py (new file)
@@ -0,0 +1,174 @@
"""Tests for sync command and related functions."""

from pathlib import Path
from typing import Any
from unittest.mock import AsyncMock, patch

import pytest

from compose_farm import cli as cli_module
from compose_farm import ssh as ssh_module
from compose_farm import state as state_module
from compose_farm.config import Config, Host
from compose_farm.ssh import CommandResult, check_service_running


@pytest.fixture
def mock_config(tmp_path: Path) -> Config:
    """Create a mock config for testing."""
    compose_dir = tmp_path / "stacks"
    compose_dir.mkdir()

    # Create service directories with compose files
    for service in ["plex", "jellyfin", "sonarr"]:
        svc_dir = compose_dir / service
        svc_dir.mkdir()
        (svc_dir / "compose.yaml").write_text(f"# {service} compose file\n")

    return Config(
        compose_dir=compose_dir,
        hosts={
            "nas01": Host(address="192.168.1.10", user="admin", port=22),
            "nas02": Host(address="192.168.1.11", user="admin", port=22),
        },
        services={
            "plex": "nas01",
            "jellyfin": "nas01",
            "sonarr": "nas02",
        },
    )


@pytest.fixture
def state_dir(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> Path:
    """Create a temporary state directory and patch _get_state_path."""
    state_path = tmp_path / ".config" / "compose-farm"
    state_path.mkdir(parents=True)

    def mock_get_state_path() -> Path:
        return state_path / "state.yaml"

    monkeypatch.setattr(state_module, "_get_state_path", mock_get_state_path)
    return state_path


class TestCheckServiceRunning:
    """Tests for check_service_running function."""

    @pytest.mark.asyncio
    async def test_service_running(self, mock_config: Config) -> None:
        """Returns True when service has running containers."""
        with patch.object(ssh_module, "run_command", new_callable=AsyncMock) as mock_run:
            mock_run.return_value = CommandResult(
                service="plex",
                exit_code=0,
                success=True,
                stdout="abc123\ndef456\n",
            )
            result = await check_service_running(mock_config, "plex", "nas01")
            assert result is True

    @pytest.mark.asyncio
    async def test_service_not_running(self, mock_config: Config) -> None:
        """Returns False when service has no running containers."""
        with patch.object(ssh_module, "run_command", new_callable=AsyncMock) as mock_run:
            mock_run.return_value = CommandResult(
                service="plex",
                exit_code=0,
                success=True,
                stdout="",
            )
            result = await check_service_running(mock_config, "plex", "nas01")
            assert result is False

    @pytest.mark.asyncio
    async def test_command_failed(self, mock_config: Config) -> None:
        """Returns False when command fails."""
        with patch.object(ssh_module, "run_command", new_callable=AsyncMock) as mock_run:
            mock_run.return_value = CommandResult(
                service="plex",
                exit_code=1,
                success=False,
            )
            result = await check_service_running(mock_config, "plex", "nas01")
            assert result is False


class TestDiscoverRunningServices:
    """Tests for _discover_running_services function."""

    @pytest.mark.asyncio
    async def test_discovers_on_assigned_host(self, mock_config: Config) -> None:
        """Discovers service running on its assigned host."""
        with patch.object(
            cli_module, "check_service_running", new_callable=AsyncMock
        ) as mock_check:
            # plex running on nas01, jellyfin not running, sonarr on nas02
            async def check_side_effect(_cfg: Any, service: str, host: str) -> bool:
                return (service == "plex" and host == "nas01") or (
                    service == "sonarr" and host == "nas02"
                )

            mock_check.side_effect = check_side_effect

            result = await cli_module._discover_running_services(mock_config)
            assert result == {"plex": "nas01", "sonarr": "nas02"}

    @pytest.mark.asyncio
    async def test_discovers_on_different_host(self, mock_config: Config) -> None:
        """Discovers service running on non-assigned host (after migration)."""
        with patch.object(
            cli_module, "check_service_running", new_callable=AsyncMock
        ) as mock_check:
            # plex migrated to nas02
            async def check_side_effect(_cfg: Any, service: str, host: str) -> bool:
                return service == "plex" and host == "nas02"

            mock_check.side_effect = check_side_effect

            result = await cli_module._discover_running_services(mock_config)
            assert result == {"plex": "nas02"}


class TestReportSyncChanges:
    """Tests for _report_sync_changes function."""

    def test_reports_added(self, capsys: pytest.CaptureFixture[str]) -> None:
        """Reports newly discovered services."""
        cli_module._report_sync_changes(
            added=["plex", "jellyfin"],
            removed=[],
            changed=[],
            discovered={"plex": "nas01", "jellyfin": "nas02"},
            current_state={},
        )
        captured = capsys.readouterr()
        assert "New services found (2)" in captured.out
        assert "+ plex on nas01" in captured.out
        assert "+ jellyfin on nas02" in captured.out

    def test_reports_removed(self, capsys: pytest.CaptureFixture[str]) -> None:
        """Reports services that are no longer running."""
        cli_module._report_sync_changes(
            added=[],
            removed=["sonarr"],
            changed=[],
            discovered={},
            current_state={"sonarr": "nas01"},
        )
        captured = capsys.readouterr()
        assert "Services no longer running (1)" in captured.out
        assert "- sonarr (was on nas01)" in captured.out

    def test_reports_changed(self, capsys: pytest.CaptureFixture[str]) -> None:
        """Reports services that moved to a different host."""
        cli_module._report_sync_changes(
            added=[],
            removed=[],
            changed=[("plex", "nas01", "nas02")],
            discovered={"plex": "nas02"},
            current_state={"plex": "nas01"},
        )
        captured = capsys.readouterr()
        assert "Services on different hosts (1)" in captured.out
        assert "~ plex: nas01 → nas02" in captured.out
tests/test_traefik.py

@@ -76,7 +76,7 @@ def test_generate_traefik_config_without_published_port_warns(tmp_path: Path) -> None:
    dynamic, warnings = generate_traefik_config(cfg, ["app"])

    assert dynamic["http"]["routers"]["app"]["rule"] == "Host(`app.lab.mydomain.org`)"
    assert any("No host-published port found" in warning for warning in warnings)
    assert any("No published port found" in warning for warning in warnings)


def test_generate_interpolates_env_and_infers_router_service(tmp_path: Path) -> None:

uv.lock (generated)
@@ -131,6 +131,7 @@ dependencies = [
    { name = "asyncssh" },
    { name = "pydantic" },
    { name = "pyyaml" },
    { name = "rich" },
    { name = "typer" },
]

@@ -151,6 +152,7 @@ requires-dist = [
    { name = "asyncssh", specifier = ">=2.14.0" },
    { name = "pydantic", specifier = ">=2.0.0" },
    { name = "pyyaml", specifier = ">=6.0" },
    { name = "rich", specifier = ">=13.0.0" },
    { name = "typer", specifier = ">=0.9.0" },
]