Compare commits

...

10 Commits

Author SHA1 Message Date
grandwizard28
0d97f543df fix(alerts-downtime): capture load-time GETs before navigation
Flow 1 registered cap.mark() AFTER page.goto() and then called
page.waitForResponse(/api/v2/rules) — but against a fast local backend
the GET /api/v2/rules response arrived during page.goto, before the
waiter could register, and the test timed out at 30s.

installCapture's page.on('response') listener runs from before the
navigation, so moving mark() above page.goto() and relying on
dumpSince's 500ms drain is enough. No lost precision.
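The buffering that makes this safe can be sketched as follows. This is an illustrative model with hypothetical names mirroring installCapture/mark/dumpSince, not the real helper: the response listener fills a buffer from before navigation, so a mark() taken before page.goto() still covers load-time GETs.

```python
class Capture:
    def __init__(self):
        self.events = []  # appended to by a page.on('response') listener

    def on_response(self, url):
        self.events.append(url)

    def mark(self):
        # Cursor into the buffer; dump_since drains everything after it.
        return len(self.events)

    def dump_since(self, cursor):
        return self.events[cursor:]

cap = Capture()
cursor = cap.mark()               # the fix: mark BEFORE page.goto()
cap.on_response("/api/v2/rules")  # load-time GET arriving during goto
assert "/api/v2/rules" in cap.dump_since(cursor)
```

Because the cursor is taken before navigation, nothing that arrives during page.goto can slip past it, which is why no waitForResponse waiter is needed at all.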

One site only; the same pattern exists in later flows (via per-action
waitForResponse) and may surface similar races — those are left for a
follow-up once the backend-side 2095 migration lands on main (current
frontend still calls PATCH /api/v1/rules/:id which the spec's assertion
doesn't match anyway).
2026-04-21 00:48:14 +05:30
grandwizard28
be7099b2b4 feat(tests/e2e): surface seeder_url to Playwright via globalSetup
- bootstrap/setup.py: test_setup now depends on the seeder fixture and
  writes seeder_url into .signoz-backend.json alongside base_url.
- bootstrap/run.py: test_e2e exports SIGNOZ_E2E_SEEDER_URL to the
  subprocessed yarn test so Playwright specs can reach the seeder
  directly in the one-command path.
- global.setup.ts: if .signoz-backend.json carries seeder_url, populate
  process.env.SIGNOZ_E2E_SEEDER_URL. Remains optional — staging mode
  leaves it unset.
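The intended precedence can be sketched in Python for illustration (the real logic lives in global.setup.ts; that an already-populated env var wins is an assumption, mirroring how base_url is handled):

```python
import json
import tempfile

def inject_seeder_url(state_path: str, env: dict) -> None:
    # Populate only when unset; staging mode (no state file or no
    # seeder_url key) is a no-op.
    if env.get("SIGNOZ_E2E_SEEDER_URL"):
        return
    try:
        with open(state_path) as f:
            state = json.load(f)
    except FileNotFoundError:
        return
    if state.get("seeder_url"):
        env["SIGNOZ_E2E_SEEDER_URL"] = state["seeder_url"]

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"base_url": "http://localhost:8080",
               "seeder_url": "http://localhost:9000"}, f)

env = {}
inject_seeder_url(f.name, env)
assert env["SIGNOZ_E2E_SEEDER_URL"] == "http://localhost:9000"

env2 = {"SIGNOZ_E2E_SEEDER_URL": "http://staging:9000"}
inject_seeder_url(f.name, env2)
assert env2["SIGNOZ_E2E_SEEDER_URL"] == "http://staging:9000"  # unchanged
```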

Playwright specs that want per-test telemetry can:
  await fetch(process.env.SIGNOZ_E2E_SEEDER_URL + '/telemetry/traces', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify([...])
  });
and await a truncate via DELETE on teardown.
2026-04-21 00:47:58 +05:30
grandwizard28
ab6e8291fe feat(fixtures/seeder): HTTP seeder container for fine-grained telemetry seeding
Adds a sibling container alongside signoz/clickhouse/postgres that exposes
HTTP endpoints for direct-ClickHouse telemetry seeding, so Playwright
tests can shape per-test data without going through OTel or the SigNoz
ingestion path.

tests/fixtures/seeder/:
- Dockerfile: python:3.13-slim + the shared fixtures/ tree so the
  container can import fixtures.traces and reuse the exact insert path
  used by pytest.
- server.py: FastAPI app with GET /healthz, POST /telemetry/traces
  (accepts a JSON list matching Traces.from_dict input; auto-tags each
  inserted row with resource seeder=true), DELETE /telemetry/traces
  (truncates all traces tables).
- requirements.txt: fastapi, uvicorn, clickhouse-connect, numpy plus
  sqlalchemy/pytest/testcontainers because fixtures/{__init__,types,
  traces}.py import them at module load.
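The auto-tagging rule the POST endpoint applies can be sketched like this. The payload shape is hypothetical (the real input matches Traces.from_dict, whose field names are not reproduced here); only the rule itself is the point: every accepted span gets resource attribute seeder=true before insert, so seeded rows stay distinguishable from organically ingested ones.

```python
def tag_seeded(spans: list) -> list:
    # Attach seeder=true to each span's resource attributes, creating the
    # attribute dict when absent. Key names here are illustrative.
    for span in spans:
        span.setdefault("resources", {})["seeder"] = "true"
    return spans

payload = [
    {"name": "GET /cart", "resources": {"service.name": "frontend"}},
    {"name": "checkout"},  # no resources key yet
]
tagged = tag_seeded(payload)
assert all(s["resources"]["seeder"] == "true" for s in tagged)
```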

tests/fixtures/seeder/__init__.py: pytest fixture (`seeder`, package-
scoped) that builds the image via docker-py (testcontainers DockerImage
had multi-segment dockerfile issues), starts the container on the
shared network wired to ClickHouse via env vars, and waits for
/healthz. Cache key + restore follow the dev.wrap pattern other
fixtures use for --reuse.

tests/.dockerignore: exclude .venv, caches, e2e node_modules, and test
outputs so the build context is small and deterministic.

tests/conftest.py: register fixtures.seeder as a pytest plugin.

Currently traces-only — logs + metrics follow the same pattern.
2026-04-21 00:47:43 +05:30
grandwizard28
0839c532bc refactor(fixtures/traces): extract insert + truncate helpers
Pull the ClickHouse insert path out of the insert_traces pytest fixture
into a plain module-level function insert_traces_to_clickhouse(conn,
traces), and move the per-table TRUNCATE loop into
truncate_traces_tables(conn, cluster). The fixture becomes a thin wrapper
over both — zero
behavioural change.

Lets the HTTP seeder container (tests/fixtures/seeder/) reuse the exact
same insert + truncate code the pytest fixture uses, so the two stay in
sync as the trace schema evolves.
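The extracted shape looks roughly like this (table names are placeholders and the stub stands in for a clickhouse-connect client; only the function names come from the commit):

```python
TRACE_TABLES = ("trace_table_a", "trace_table_b")  # hypothetical names

def truncate_traces_tables(conn, cluster):
    # Plain module-level function: callable from the pytest fixture and
    # from the HTTP seeder container alike.
    for table in TRACE_TABLES:
        conn.command(f"TRUNCATE TABLE {table} ON CLUSTER '{cluster}'")

class StubConn:
    """Records issued SQL instead of talking to ClickHouse."""
    def __init__(self):
        self.commands = []
    def command(self, sql):
        self.commands.append(sql)

conn = StubConn()
truncate_traces_tables(conn, "cluster")
assert len(conn.commands) == len(TRACE_TABLES)
```

The fixture then only acquires the connection and delegates to these functions, which is what keeps pytest and the seeder on the same code path.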
2026-04-21 00:47:23 +05:30
grandwizard28
5ef206a666 feat(tests/e2e): alerts-downtime regression suite (platform-pod/issues/2095)
Import the 34-step regression suite originally developed on
platform-pod/issues/2095-frontend. Targets the alerts and planned-downtime
frontend flows after their migration to generated OpenAPI clients and
generated react-query hooks.

- specs/alerts-downtime/: SUITE.md (the stable spec), README.md (scope +
  open observations from the original runs), results-schema.md (legacy
  per-run artifact shape, retained for context).
- tests/alerts-downtime/alerts-downtime.spec.ts: 881-line Playwright spec
  covering 6 flows — alert CRUD/toggle, alert detail 404, planned
  downtime CRUD, notification channel routing, anomaly alerts.

Integration with the shared suite:
- Uses baseURL + storageState from tests/e2e/playwright.config.ts (no
  separate config). page.goto calls use relative paths; SIGNOZ_E2E_*
  env vars from the pytest bootstrap drive auth.
- test.describe.configure({ mode: 'serial' }) at the top of the describe:
  the flows mutate shared tenant state, so parallel runs cause cross-
  flow interference (documented in the original 2095 config).
- Per-run artifacts (network captures + screenshots) land in
  tests/e2e/tests/alerts-downtime/run-spec-<ts>/ by default — gitignored.

Historical per-run artifacts (~7.5MB of screenshots across run-1 through
run-7) are not imported; they lived at e2e/2095/run-*/ on the original
branch and remain there if needed.
2026-04-20 23:34:12 +05:30
grandwizard28
fce92115a9 fix(tests/fixtures/signoz.py): anchor Docker build context to repo root
Previously used path="../../" which resolved to the repo root only when
pytest's cwd was tests/integration/. After hoisting the pytest project
to tests/, that same relative path pointed one level above the repo
root and the build failed with:

  Cannot locate specified Dockerfile: cmd/enterprise/Dockerfile.with-web.integration

Anchor the build context to an absolute path computed from __file__ so
the fixture works regardless of pytest cwd.
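The anchoring trick can be sketched as follows, assuming the fixture file sits two directories below the repo root at tests/fixtures/signoz.py (the real code would pass __file__ rather than a literal path):

```python
from pathlib import PurePosixPath

def repo_root_from(fixture_file: str) -> str:
    # <repo>/tests/fixtures/signoz.py: parents[0] is tests/fixtures,
    # parents[1] is tests, parents[2] is <repo>. Independent of cwd.
    return str(PurePosixPath(fixture_file).parents[2])

assert repo_root_from("/repo/tests/fixtures/signoz.py") == "/repo"
```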
2026-04-20 21:45:27 +05:30
grandwizard28
9743002edf docs(tests): describe pytest-master workflow and shared fixture layout
- tests/README.md (new): top-level map of the shared pytest project,
  fixture-ownership rule (shared vs per-tree), and common commands.
- tests/e2e/README.md: lead with the one-command pytest run and the
  warm-backend dev loop; keep the staging fallback as option 2.
- tests/e2e/CLAUDE.md: updated commands so agent contexts reflect the
  pytest-driven lifecycle.
- tests/e2e/.env.example: drop unused SIGNOZ_E2E_ENV_TYPE; note the file
  is only needed for staging mode.
2026-04-20 21:06:32 +05:30
grandwizard28
0efde7b5ce feat(tests/e2e): pytest-driven backend bring-up, seeding, and playwright runner
Wire the Playwright suite into the shared pytest fixture graph so the
backend + its seeded state are provisioned locally instead of pointing
at remote staging.

Python side (owns lifecycle):
- tests/fixtures/dashboards.py — generic create/list/upsert_dashboard
  helpers (shared infra; testdata stays per-tree).
- tests/e2e/conftest.py — e2e-scoped pytest fixtures: seed_dashboards
  (idempotent upsert from tests/e2e/testdata/dashboards/*.json),
  seed_alert_rules (from tests/e2e/testdata/alerts/*.json, via existing
  create_alert_rule), seed_e2e_telemetry (fresh traces/logs across a
  few synthetic services so /home and Services pages have data).
- tests/e2e/src/bootstrap/setup.py — test_setup depends on the fixture
  graph and persists backend coordinates to tests/e2e/.signoz-backend.json;
  test_teardown is the --teardown target.
- tests/e2e/src/bootstrap/run.py — test_e2e: one-command entrypoint that
  brings up the backend + seeds, then subprocesses yarn test and asserts
  Playwright exits 0.
- tests/conftest.py — register fixtures.dashboards plugin.
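The idempotent seeding rule behind seed_dashboards can be sketched like this. Keying on title is an assumption, and a dict stands in for the backend; the real create/list/upsert_dashboard helpers go through the SigNoz API.

```python
def upsert_dashboard(store: dict, payload: dict) -> str:
    # Create when the title is new, update in place otherwise, so
    # repeated runs against a warm backend never duplicate dashboards.
    title = payload["title"]
    if title in store:
        store[title].update(payload)
        return "updated"
    store[title] = dict(payload)
    return "created"

store = {}
assert upsert_dashboard(store, {"title": "Services", "panels": 3}) == "created"
assert upsert_dashboard(store, {"title": "Services", "panels": 4}) == "updated"
assert store["Services"]["panels"] == 4  # re-seed updated, didn't duplicate
```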

Playwright side (just reads):
- tests/e2e/global.setup.ts — loads .signoz-backend.json and injects
  SIGNOZ_E2E_BASE_URL/USERNAME/PASSWORD. No-op when env is already
  populated (staging mode, or pytest-driven runs where env is pre-set).
- playwright.config.ts registers globalSetup.
- package.json gains test:staging; existing scripts unchanged.

Testdata layout: tests/e2e/testdata/{dashboards,alerts,channels}/*.json
— per-tree (integration has its own tests/integration/testdata/).
2026-04-20 21:03:52 +05:30
grandwizard28
8bdaecbe25 feat(tests/e2e): import Playwright suite from signoz-e2e
Relocate the standalone signoz-e2e repository into tests/e2e/ as a
sibling of tests/integration/. The suite still points at remote
staging by default; subsequent commits wire it to the shared pytest
fixture graph so the backend can be provisioned locally.

Excluded from the import: .git, .github (CI migration deferred),
.auth, node_modules, test-results, playwright-report.
2026-04-20 20:40:02 +05:30
grandwizard28
deb90abd9c refactor(tests): hoist pytest project to tests/ root for shared fixtures
Lift pyproject.toml, uv.lock, conftest.py, and fixtures/ up from
tests/integration/ so the pytest project becomes shared infrastructure
rather than integration's private property. A sibling tests/e2e/ can
reuse the same fixture graph (containers, auth, seeding) without
duplicating plugins.

Also:
- Merge tests/integration/src/querier/util.py into tests/fixtures/querier.py
  (response assertions and corrupt-metadata generators belong with the
  other querier helpers).
- Use --import-mode=importlib + pythonpath=["."] in pyproject so
  same-basename tests across src/*/ do not collide at the now-wider
  rootdir.
- Broaden python_files to "*/src/**/**.py" so future test trees under
  tests/e2e/src/ get discovered.
- Update Makefile py-* targets and integrationci.yaml to cd into tests/
  and reference integration/src/... paths.
2026-04-20 20:39:16 +05:30
101 changed files with 14615 additions and 454 deletions

View File

@@ -25,11 +25,11 @@ jobs:
uses: astral-sh/setup-uv@v4
- name: install
run: |
cd tests/integration && uv sync
cd tests && uv sync
- name: fmt
run: |
make py-fmt
git diff --exit-code -- tests/integration/
git diff --exit-code -- tests/
- name: lint
run: |
make py-lint
@@ -79,7 +79,7 @@ jobs:
uses: astral-sh/setup-uv@v4
- name: install
run: |
cd tests/integration && uv sync
cd tests && uv sync
- name: webdriver
run: |
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
@@ -99,10 +99,10 @@ jobs:
google-chrome-stable --version
- name: run
run: |
cd tests/integration && \
cd tests && \
uv run pytest \
--basetemp=./tmp/ \
src/${{matrix.src}} \
integration/src/${{matrix.src}} \
--sqlstore-provider ${{matrix.sqlstore-provider}} \
--sqlite-mode ${{matrix.sqlite-mode}} \
--postgres-version ${{matrix.postgres-version}} \

View File

@@ -201,26 +201,26 @@ docker-buildx-enterprise: go-build-enterprise js-build
# python commands
##############################################################
.PHONY: py-fmt
py-fmt: ## Run black for integration tests
@cd tests/integration && uv run black .
py-fmt: ## Run black across the shared tests project
@cd tests && uv run black .
.PHONY: py-lint
py-lint: ## Run lint for integration tests
@cd tests/integration && uv run isort .
@cd tests/integration && uv run autoflake .
@cd tests/integration && uv run pylint .
py-lint: ## Run lint across the shared tests project
@cd tests && uv run isort .
@cd tests && uv run autoflake .
@cd tests && uv run pylint .
.PHONY: py-test-setup
py-test-setup: ## Runs integration tests
@cd tests/integration && uv run pytest --basetemp=./tmp/ -vv --reuse --capture=no src/bootstrap/setup.py::test_setup
py-test-setup: ## Bring up the shared SigNoz backend used by integration and e2e tests
@cd tests && uv run pytest --basetemp=./tmp/ -vv --reuse --capture=no integration/src/bootstrap/setup.py::test_setup
.PHONY: py-test-teardown
py-test-teardown: ## Runs integration tests with teardown
@cd tests/integration && uv run pytest --basetemp=./tmp/ -vv --teardown --capture=no src/bootstrap/setup.py::test_teardown
py-test-teardown: ## Tear down the shared SigNoz backend
@cd tests && uv run pytest --basetemp=./tmp/ -vv --teardown --capture=no integration/src/bootstrap/setup.py::test_teardown
.PHONY: py-test
py-test: ## Runs integration tests
@cd tests/integration && uv run pytest --basetemp=./tmp/ -vv --capture=no src/
@cd tests && uv run pytest --basetemp=./tmp/ -vv --capture=no integration/src/
.PHONY: py-clean
py-clean: ## Clear all pycache and pytest cache from tests directory recursively

tests/.dockerignore Normal file
View File

@@ -0,0 +1,20 @@
# Build context for tests/fixtures/seeder/Dockerfile. Keep the context lean —
# the seeder image only needs fixtures/ to be importable.
.venv
.pytest_cache
tmp
**/__pycache__
**/*.pyc
# e2e Playwright outputs and deps
e2e/node_modules
e2e/test-results
e2e/playwright-report
e2e/.auth
e2e/.playwright-cli
e2e/tests/alerts-downtime/run-spec-*
# Integration-side outputs (if any stale dirs remain)
integration/tmp
integration/testdata

tests/README.md Normal file
View File

@@ -0,0 +1,49 @@
# SigNoz Tests
Shared pytest project with two test trees that reuse the same fixture graph.
```
tests/
pyproject.toml Shared uv/pytest project (rootdir for both trees)
conftest.py Registers pytest_plugins from fixtures/
fixtures/ Shared Python fixtures: container bring-up, auth,
telemetry inserts, API-seeding helpers
integration/
src/ Backend integration tests (pytest)
testdata/ Integration-specific JSON/YAML
e2e/
src/
bootstrap/setup.py Brings backend up + seeds; writes .signoz-backend.json
bootstrap/run.py One-command entrypoint: subprocesses `yarn test`
conftest.py e2e-scoped fixtures (seed_dashboards, seed_e2e_telemetry)
tests/ Playwright specs (TS)
testdata/ e2e-specific JSON (dashboards, alerts, channels)
playwright.config.ts baseURL reads from env injected by global.setup.ts
global.setup.ts Reads .signoz-backend.json, sets env vars
```
## Fixture ownership
- **Shared** (`tests/fixtures/`): anything that could be useful across trees —
container bring-up, auth, direct telemetry inserts, API helpers.
- **Per-tree** (`tests/<tree>/conftest.py`): fixtures whose payloads are
tree-specific (e.g. e2e dashboard JSONs live in `tests/e2e/testdata/`,
loaded by `seed_dashboards` declared in `tests/e2e/conftest.py`).
Testdata follows the same rule — JSON/YAML lives next to the tests that own it.
## Common commands
```bash
# From signoz/:
make py-test # Run all integration tests
make py-test-setup # Warm up backend (for iterative dev)
make py-test-teardown # Free containers
# From signoz/tests/:
uv sync # First-time Python deps
uv run pytest integration/src/ # Integration suite
uv run pytest --with-web e2e/src/bootstrap/run.py::test_e2e # Full e2e run
```
See `e2e/README.md` for the e2e-specific workflow.

View File

@@ -23,6 +23,8 @@ pytest_plugins = [
"fixtures.notification_channel",
"fixtures.alerts",
"fixtures.cloudintegrations",
"fixtures.dashboards",
"fixtures.seeder",
]

View File

@@ -0,0 +1,59 @@
---
name: playwright-test-generator
description: Use this agent when you need to create automated browser tests using Playwright. Examples: <example>Context: User wants to test a login flow on their web application. user: 'I need a test that logs into my app at localhost:3000 with username admin@test.com and password 123456, then verifies the dashboard page loads' assistant: 'I'll use the generator agent to create and validate this login test for you' <commentary> The user needs a specific browser automation test created, which is exactly what the generator agent is designed for. </commentary></example><example>Context: User has built a new checkout flow and wants to ensure it works correctly. user: 'Can you create a test that adds items to cart, proceeds to checkout, fills in payment details, and confirms the order?' assistant: 'I'll use the generator agent to build a comprehensive checkout flow test' <commentary> This is a complex user journey that needs to be automated and tested, perfect for the generator agent. </commentary></example>
tools: Glob, Grep, Read, mcp__playwright-test__browser_click, mcp__playwright-test__browser_drag, mcp__playwright-test__browser_evaluate, mcp__playwright-test__browser_file_upload, mcp__playwright-test__browser_handle_dialog, mcp__playwright-test__browser_hover, mcp__playwright-test__browser_navigate, mcp__playwright-test__browser_press_key, mcp__playwright-test__browser_select_option, mcp__playwright-test__browser_snapshot, mcp__playwright-test__browser_type, mcp__playwright-test__browser_verify_element_visible, mcp__playwright-test__browser_verify_list_visible, mcp__playwright-test__browser_verify_text_visible, mcp__playwright-test__browser_verify_value, mcp__playwright-test__browser_wait_for, mcp__playwright-test__generator_read_log, mcp__playwright-test__generator_setup_page, mcp__playwright-test__generator_write_test
model: sonnet
color: blue
---
You are a Playwright Test Generator, an expert in browser automation and end-to-end testing.
Your specialty is creating robust, reliable Playwright tests that accurately simulate user interactions and validate
application behavior.
# For each test you generate
- Obtain the test plan with all the steps and verification specification
- Run the `generator_setup_page` tool to set up the page for the scenario
- For each step and verification in the scenario, do the following:
- Use Playwright tool to manually execute it in real-time.
- Use the step description as the intent for each Playwright tool call.
- Retrieve generator log via `generator_read_log`
- Immediately after reading the test log, invoke `generator_write_test` with the generated source code
- The file should contain a single test
- The file name must be an fs-friendly version of the scenario name
- The test must be placed in a describe block matching the top-level test plan item
- The test title must match the scenario name
- Include a comment with the step text before each step execution. Do not duplicate comments if a step requires
multiple actions.
- Always use best practices from the log when generating tests.
<example-generation>
For following plan:
```markdown file=specs/plan.md
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
#### 1.2 Add Multiple Todos
...
```
Following file is generated:
```ts file=add-valid-todo.spec.ts
// spec: specs/plan.md
// seed: tests/seed.spec.ts
test.describe('Adding New Todos', () => {
test('Add Valid Todo', async ({ page }) => {
// 1. Click in the "What needs to be done?" input field
await page.click(...);
...
});
});
```
</example-generation>

View File

@@ -0,0 +1,45 @@
---
name: playwright-test-healer
description: Use this agent when you need to debug and fix failing Playwright tests. Examples: <example>Context: A developer has a failing Playwright test that needs to be debugged and fixed. user: 'The login test is failing, can you fix it?' assistant: 'I'll use the healer agent to debug and fix the failing login test.' <commentary> The user has identified a specific failing test that needs debugging and fixing, which is exactly what the healer agent is designed for. </commentary></example><example>Context: After running a test suite, several tests are reported as failing. user: 'Test user-registration.spec.ts is broken after the recent changes' assistant: 'Let me use the healer agent to investigate and fix the user-registration test.' <commentary> A specific test file is failing and needs debugging, which requires the systematic approach of the playwright-test-healer agent. </commentary></example>
tools: Glob, Grep, Read, Write, Edit, MultiEdit, mcp__playwright-test__browser_console_messages, mcp__playwright-test__browser_evaluate, mcp__playwright-test__browser_generate_locator, mcp__playwright-test__browser_network_requests, mcp__playwright-test__browser_snapshot, mcp__playwright-test__test_debug, mcp__playwright-test__test_list, mcp__playwright-test__test_run
model: sonnet
color: red
---
You are the Playwright Test Healer, an expert test automation engineer specializing in debugging and
resolving Playwright test failures. Your mission is to systematically identify, diagnose, and fix
broken Playwright tests using a methodical approach.
Your workflow:
1. **Initial Execution**: Run all tests using the `test_run` tool to identify failing tests
2. **Debug failed tests**: For each failing test, run `test_debug`.
3. **Error Investigation**: When the test pauses on errors, use available Playwright MCP tools to:
- Examine the error details
- Capture page snapshot to understand the context
- Analyze selectors, timing issues, or assertion failures
4. **Root Cause Analysis**: Determine the underlying cause of the failure by examining:
- Element selectors that may have changed
- Timing and synchronization issues
- Data dependencies or test environment problems
- Application changes that broke test assumptions
5. **Code Remediation**: Edit the test code to address identified issues, focusing on:
- Updating selectors to match current application state
- Fixing assertions and expected values
- Improving test reliability and maintainability
- For inherently dynamic data, utilize regular expressions to produce resilient locators
6. **Verification**: Restart the test after each fix to validate the changes
7. **Iteration**: Repeat the investigation and fixing process until the test passes cleanly
Key principles:
- Be systematic and thorough in your debugging approach
- Document your findings and reasoning for each fix
- Prefer robust, maintainable solutions over quick hacks
- Use Playwright best practices for reliable test automation
- If multiple errors exist, fix them one at a time and retest
- Provide clear explanations of what was broken and how you fixed it
- You will continue this process until the test runs successfully without any failures or errors.
- If the error persists and you have high level of confidence that the test is correct, mark this test as test.fixme()
so that it is skipped during the execution. Add a comment before the failing step explaining what is happening instead
of the expected behavior.
- Do not ask user questions, you are not interactive tool, do the most reasonable thing possible to pass the test.
- Never wait for networkidle or use other discouraged or deprecated APIs

View File

@@ -0,0 +1,99 @@
---
name: playwright-test-planner
description: Use this agent when you need to create comprehensive test plan for a web application or website. Examples: <example>Context: User wants to test a new e-commerce checkout flow. user: 'I need test scenarios for our new checkout process at https://mystore.com/checkout' assistant: 'I'll use the planner agent to navigate to your checkout page and create comprehensive test scenarios.' <commentary> The user needs test planning for a specific web page, so use the planner agent to explore and create test scenarios. </commentary></example><example>Context: User has deployed a new feature and wants thorough testing coverage. user: 'Can you help me test our new user dashboard at https://app.example.com/dashboard?' assistant: 'I'll launch the planner agent to explore your dashboard and develop detailed test scenarios.' <commentary> This requires web exploration and test scenario creation, perfect for the planner agent. </commentary></example>
tools: Glob, Grep, Read, Write, WebFetch, mcp__playwright-test__browser_click, mcp__playwright-test__browser_close, mcp__playwright-test__browser_console_messages, mcp__playwright-test__browser_drag, mcp__playwright-test__browser_evaluate, mcp__playwright-test__browser_file_upload, mcp__playwright-test__browser_handle_dialog, mcp__playwright-test__browser_hover, mcp__playwright-test__browser_navigate, mcp__playwright-test__browser_navigate_back, mcp__playwright-test__browser_network_requests, mcp__playwright-test__browser_press_key, mcp__playwright-test__browser_select_option, mcp__playwright-test__browser_snapshot, mcp__playwright-test__browser_take_screenshot, mcp__playwright-test__browser_type, mcp__playwright-test__browser_wait_for, mcp__playwright-test__planner_setup_page
model: sonnet
color: green
---
You are an expert web test planner with extensive experience in quality assurance, user experience testing, and test
scenario design. Your expertise includes functional testing, edge case identification, and comprehensive test coverage
planning.
You will:
1. **Inspect Source Component Structure**
- For the feature under test, fetch the relevant source files from `https://github.com/SigNoz/signoz/` to understand the component hierarchy, props, state, and any conditional rendering paths
- Use `WebFetch` to retrieve raw file contents from GitHub (e.g. `https://raw.githubusercontent.com/SigNoz/signoz/main/frontend/src/pages/<Feature>/index.tsx`)
- Browse the directory listing at `https://github.com/SigNoz/signoz/tree/main/frontend/src/` to discover the correct paths if uncertain
- Identify all interactive sub-components, loading/error states, permission guards, and feature flags exposed in the source — these reveal test scenarios not always visible from the UI alone
2. **Navigate and Explore**
- Invoke the `planner_setup_page` tool once to set up the page before using any other tools
- Explore the browser snapshot
- Do not take screenshots unless absolutely necessary
- Use browser_* tools to navigate and discover interface
- Thoroughly explore the interface, identifying all interactive elements, forms, navigation paths, and functionality
3. **Analyze User Flows**
- Map out the primary user journeys and identify critical paths through the application
- Consider different user types and their typical behaviors
4. **Design Comprehensive Scenarios**
Create detailed test scenarios that cover:
- Happy path scenarios (normal user behavior)
- Edge cases and boundary conditions
- Error handling and validation
5. **Structure Test Plans**
Each scenario must include:
- Clear, descriptive title
- Detailed step-by-step instructions
- Expected outcomes where appropriate
- Assumptions about starting state (always assume blank/fresh state)
- Success criteria and failure conditions
6. **Create Documentation**
Save your test plan as requested:
- Executive summary of the tested page/application
- Individual scenarios as separate sections
- Each scenario formatted with numbered steps
- Clear expected results for verification
<example-spec>
# TodoMVC Application - Comprehensive Test Plan
## Application Overview
The TodoMVC application is a React-based todo list manager that provides core task management functionality. The
application features:
- **Task Management**: Add, edit, complete, and delete individual todos
- **Bulk Operations**: Mark all todos as complete/incomplete and clear all completed todos
- **Filtering**: View todos by All, Active, or Completed status
- **URL Routing**: Support for direct navigation to filtered views via URLs
- **Counter Display**: Real-time count of active (incomplete) todos
- **Persistence**: State maintained during session (browser refresh behavior not tested)
## Test Scenarios
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
2. Type "Buy groceries"
3. Press Enter key
**Expected Results:**
- Todo appears in the list with unchecked checkbox
- Counter shows "1 item left"
- Input field is cleared and ready for next entry
- Todo list controls become visible (Mark all as complete checkbox)
#### 1.2
...
</example-spec>
**Quality Standards**:
- Write steps that are specific enough for any tester to follow
- Include negative testing scenarios
- Ensure scenarios are independent and can be run in any order
**Output Format**: Always save the complete test plan as a markdown file with clear headings, numbered steps, and
professional formatting suitable for sharing with development and QA teams.

View File

@@ -0,0 +1,278 @@
---
name: playwright-cli
description: Automates browser interactions for web testing, form filling, screenshots, and data extraction. Use when the user needs to navigate websites, interact with web pages, fill forms, take screenshots, test web applications, or extract information from web pages.
allowed-tools: Bash(playwright-cli:*)
---
# Browser Automation with playwright-cli
## Quick start
```bash
# open new browser
playwright-cli open
# navigate to a page
playwright-cli goto https://playwright.dev
# interact with the page using refs from the snapshot
playwright-cli click e15
playwright-cli type "page.click"
playwright-cli press Enter
# take a screenshot (rarely used, as snapshot is more common)
playwright-cli screenshot
# close the browser
playwright-cli close
```
## Commands
### Core
```bash
playwright-cli open
# open and navigate right away
playwright-cli open https://example.com/
playwright-cli goto https://playwright.dev
playwright-cli type "search query"
playwright-cli click e3
playwright-cli dblclick e7
playwright-cli fill e5 "user@example.com"
playwright-cli drag e2 e8
playwright-cli hover e4
playwright-cli select e9 "option-value"
playwright-cli upload ./document.pdf
playwright-cli check e12
playwright-cli uncheck e12
playwright-cli snapshot
playwright-cli snapshot --filename=after-click.yaml
playwright-cli eval "document.title"
playwright-cli eval "el => el.textContent" e5
playwright-cli dialog-accept
playwright-cli dialog-accept "confirmation text"
playwright-cli dialog-dismiss
playwright-cli resize 1920 1080
playwright-cli close
```
### Navigation
```bash
playwright-cli go-back
playwright-cli go-forward
playwright-cli reload
```
### Keyboard
```bash
playwright-cli press Enter
playwright-cli press ArrowDown
playwright-cli keydown Shift
playwright-cli keyup Shift
```
### Mouse
```bash
playwright-cli mousemove 150 300
playwright-cli mousedown
playwright-cli mousedown right
playwright-cli mouseup
playwright-cli mouseup right
playwright-cli mousewheel 0 100
```
### Save as
```bash
playwright-cli screenshot
playwright-cli screenshot e5
playwright-cli screenshot --filename=page.png
playwright-cli pdf --filename=page.pdf
```
### Tabs
```bash
playwright-cli tab-list
playwright-cli tab-new
playwright-cli tab-new https://example.com/page
playwright-cli tab-close
playwright-cli tab-close 2
playwright-cli tab-select 0
```
### Storage
```bash
playwright-cli state-save
playwright-cli state-save auth.json
playwright-cli state-load auth.json
# Cookies
playwright-cli cookie-list
playwright-cli cookie-list --domain=example.com
playwright-cli cookie-get session_id
playwright-cli cookie-set session_id abc123
playwright-cli cookie-set session_id abc123 --domain=example.com --httpOnly --secure
playwright-cli cookie-delete session_id
playwright-cli cookie-clear
# LocalStorage
playwright-cli localstorage-list
playwright-cli localstorage-get theme
playwright-cli localstorage-set theme dark
playwright-cli localstorage-delete theme
playwright-cli localstorage-clear
# SessionStorage
playwright-cli sessionstorage-list
playwright-cli sessionstorage-get step
playwright-cli sessionstorage-set step 3
playwright-cli sessionstorage-delete step
playwright-cli sessionstorage-clear
```
### Network
```bash
playwright-cli route "**/*.jpg" --status=404
playwright-cli route "https://api.example.com/**" --body='{"mock": true}'
playwright-cli route-list
playwright-cli unroute "**/*.jpg"
playwright-cli unroute
```
### DevTools
```bash
playwright-cli console
playwright-cli console warning
playwright-cli network
playwright-cli run-code "async page => await page.context().grantPermissions(['geolocation'])"
playwright-cli tracing-start
playwright-cli tracing-stop
playwright-cli video-start
playwright-cli video-stop video.webm
```
## Open parameters
```bash
# Use specific browser when creating session
playwright-cli open --browser=chrome
playwright-cli open --browser=firefox
playwright-cli open --browser=webkit
playwright-cli open --browser=msedge
# Connect to browser via extension
playwright-cli open --extension
# Use persistent profile (by default profile is in-memory)
playwright-cli open --persistent
# Use persistent profile with custom directory
playwright-cli open --profile=/path/to/profile
# Start with config file
playwright-cli open --config=my-config.json
# Close the browser
playwright-cli close
# Delete user data for the default session
playwright-cli delete-data
```
## Snapshots
After each command, playwright-cli provides a snapshot of the current browser state.
```bash
> playwright-cli goto https://example.com
### Page
- Page URL: https://example.com/
- Page Title: Example Domain
### Snapshot
[Snapshot](.playwright-cli/page-2026-02-14T19-22-42-679Z.yml)
```
You can also take a snapshot on demand with the `playwright-cli snapshot` command.
If `--filename` is not provided, a new snapshot file is created with a timestamped name. Default to automatic file naming; pass `--filename=` only when the artifact is part of the workflow result.
## Browser Sessions
```bash
# create new browser session named "mysession" with persistent profile
playwright-cli -s=mysession open example.com --persistent
# same with manually specified profile directory (use when requested explicitly)
playwright-cli -s=mysession open example.com --profile=/path/to/profile
playwright-cli -s=mysession click e6
playwright-cli -s=mysession close # stop a named browser
playwright-cli -s=mysession delete-data # delete user data for persistent session
playwright-cli list
# Close all browsers
playwright-cli close-all
# Forcefully kill all browser processes
playwright-cli kill-all
```
## Local installation
In some cases, you may want to install playwright-cli locally. If running the globally available `playwright-cli` binary fails, run commands through `npx playwright-cli` instead. For example:
```bash
npx playwright-cli open https://example.com
npx playwright-cli click e1
```
## Example: Form submission
```bash
playwright-cli open https://example.com/form
playwright-cli snapshot
playwright-cli fill e1 "user@example.com"
playwright-cli fill e2 "password123"
playwright-cli click e3
playwright-cli snapshot
playwright-cli close
```
## Example: Multi-tab workflow
```bash
playwright-cli open https://example.com
playwright-cli tab-new https://example.com/other
playwright-cli tab-list
playwright-cli tab-select 0
playwright-cli snapshot
playwright-cli close
```
## Example: Debugging with DevTools
```bash
playwright-cli open https://example.com
playwright-cli click e4
playwright-cli fill e7 "test"
playwright-cli console
playwright-cli network
playwright-cli close
```
```bash
playwright-cli open https://example.com
playwright-cli tracing-start
playwright-cli click e4
playwright-cli fill e7 "test"
playwright-cli tracing-stop
playwright-cli close
```
## Specific tasks
* **Request mocking** [references/request-mocking.md](references/request-mocking.md)
* **Running Playwright code** [references/running-code.md](references/running-code.md)
* **Browser session management** [references/session-management.md](references/session-management.md)
* **Storage state (cookies, localStorage)** [references/storage-state.md](references/storage-state.md)
* **Test generation** [references/test-generation.md](references/test-generation.md)
* **Tracing** [references/tracing.md](references/tracing.md)
* **Video recording** [references/video-recording.md](references/video-recording.md)

# Request Mocking
Intercept, mock, modify, and block network requests.
## CLI Route Commands
```bash
# Mock with custom status
playwright-cli route "**/*.jpg" --status=404
# Mock with JSON body
playwright-cli route "**/api/users" --body='[{"id":1,"name":"Alice"}]' --content-type=application/json
# Mock with custom headers
playwright-cli route "**/api/data" --body='{"ok":true}' --header="X-Custom: value"
# Remove headers from requests
playwright-cli route "**/*" --remove-header=cookie,authorization
# List active routes
playwright-cli route-list
# Remove a route or all routes
playwright-cli unroute "**/*.jpg"
playwright-cli unroute
```
## URL Patterns
```
**/api/users - Exact path match
**/api/*/details - Wildcard in path
**/*.{png,jpg,jpeg} - Match file extensions
**/search?q=* - Match query parameters
```
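As a rough mental model for how these patterns match, a glob can be translated to a regular expression: `**` matches across `/`, `*` stops at `/`, and `{a,b}` is alternation. The sketch below is illustrative only — it is not playwright-cli's actual matcher and ignores edge cases such as commas outside braces:

```typescript
// Simplified glob-to-regex sketch; illustrative, not the CLI's real implementation.
function globToRegExp(glob: string): RegExp {
  let re = '';
  for (let i = 0; i < glob.length; i++) {
    const c = glob[i];
    if (c === '*') {
      if (glob[i + 1] === '*') { re += '.*'; i++; } // '**' matches across '/'
      else { re += '[^/]*'; }                       // '*' stops at '/'
    } else if (c === '?') re += '.';
    else if (c === '{') re += '(';
    else if (c === '}') re += ')';
    else if (c === ',') re += '|';                  // naive: assumes ',' appears only inside '{}'
    else re += c.replace(/[.+^$()|[\]\\]/g, '\\$&'); // escape regex metacharacters
  }
  return new RegExp('^' + re + '$');
}

console.log(globToRegExp('**/api/users').test('https://api.example.com/api/users'));     // true
console.log(globToRegExp('**/api/users').test('https://api.example.com/api/users/42'));  // false
console.log(globToRegExp('**/*.{png,jpg,jpeg}').test('https://cdn.example.com/a.jpeg')); // true
```

Note that patterns match against the full URL, which is why a leading `**` is needed to absorb the scheme and host.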
## Advanced Mocking with run-code
For conditional responses, request body inspection, response modification, or delays:
### Conditional Response Based on Request
```bash
playwright-cli run-code "async page => {
await page.route('**/api/login', route => {
const body = route.request().postDataJSON();
if (body.username === 'admin') {
route.fulfill({ body: JSON.stringify({ token: 'mock-token' }) });
} else {
route.fulfill({ status: 401, body: JSON.stringify({ error: 'Invalid' }) });
}
});
}"
```
### Modify Real Response
```bash
playwright-cli run-code "async page => {
await page.route('**/api/user', async route => {
const response = await route.fetch();
const json = await response.json();
json.isPremium = true;
await route.fulfill({ response, json });
});
}"
```
### Simulate Network Failures
```bash
playwright-cli run-code "async page => {
await page.route('**/api/offline', route => route.abort('internetdisconnected'));
}"
# Options: connectionrefused, timedout, connectionreset, internetdisconnected
```
### Delayed Response
```bash
playwright-cli run-code "async page => {
await page.route('**/api/slow', async route => {
await new Promise(r => setTimeout(r, 3000));
route.fulfill({ body: JSON.stringify({ data: 'loaded' }) });
});
}"
```

# Running Custom Playwright Code
Use `run-code` to execute arbitrary Playwright code for advanced scenarios not covered by CLI commands.
## Syntax
```bash
playwright-cli run-code "async page => {
// Your Playwright code here
// Access page.context() for browser context operations
}"
```
## Geolocation
```bash
# Grant geolocation permission and set location
playwright-cli run-code "async page => {
await page.context().grantPermissions(['geolocation']);
await page.context().setGeolocation({ latitude: 37.7749, longitude: -122.4194 });
}"
# Set location to London
playwright-cli run-code "async page => {
await page.context().grantPermissions(['geolocation']);
await page.context().setGeolocation({ latitude: 51.5074, longitude: -0.1278 });
}"
# Clear geolocation override
playwright-cli run-code "async page => {
await page.context().clearPermissions();
}"
```
## Permissions
```bash
# Grant multiple permissions
playwright-cli run-code "async page => {
await page.context().grantPermissions([
'geolocation',
'notifications',
'camera',
'microphone'
]);
}"
# Grant permissions for specific origin
playwright-cli run-code "async page => {
await page.context().grantPermissions(['clipboard-read'], {
origin: 'https://example.com'
});
}"
```
## Media Emulation
```bash
# Emulate dark color scheme
playwright-cli run-code "async page => {
await page.emulateMedia({ colorScheme: 'dark' });
}"
# Emulate light color scheme
playwright-cli run-code "async page => {
await page.emulateMedia({ colorScheme: 'light' });
}"
# Emulate reduced motion
playwright-cli run-code "async page => {
await page.emulateMedia({ reducedMotion: 'reduce' });
}"
# Emulate print media
playwright-cli run-code "async page => {
await page.emulateMedia({ media: 'print' });
}"
```
## Wait Strategies
```bash
# Wait for network idle
playwright-cli run-code "async page => {
await page.waitForLoadState('networkidle');
}"
# Wait for specific element
playwright-cli run-code "async page => {
await page.waitForSelector('.loading', { state: 'hidden' });
}"
# Wait for function to return true
playwright-cli run-code "async page => {
await page.waitForFunction(() => window.appReady === true);
}"
# Wait with timeout
playwright-cli run-code "async page => {
await page.waitForSelector('.result', { timeout: 10000 });
}"
```
## Frames and Iframes
```bash
# Work with iframe
playwright-cli run-code "async page => {
const frame = page.locator('iframe#my-iframe').contentFrame();
await frame.locator('button').click();
}"
# Get all frames
playwright-cli run-code "async page => {
const frames = page.frames();
return frames.map(f => f.url());
}"
```
## File Downloads
```bash
# Handle file download
playwright-cli run-code "async page => {
const [download] = await Promise.all([
page.waitForEvent('download'),
page.click('a.download-link')
]);
await download.saveAs('./downloaded-file.pdf');
return download.suggestedFilename();
}"
```
## Clipboard
```bash
# Read clipboard (requires permission)
playwright-cli run-code "async page => {
await page.context().grantPermissions(['clipboard-read']);
return await page.evaluate(() => navigator.clipboard.readText());
}"
# Write to clipboard
playwright-cli run-code "async page => {
await page.evaluate(text => navigator.clipboard.writeText(text), 'Hello clipboard!');
}"
```
## Page Information
```bash
# Get page title
playwright-cli run-code "async page => {
return await page.title();
}"
# Get current URL
playwright-cli run-code "async page => {
return page.url();
}"
# Get page content
playwright-cli run-code "async page => {
return await page.content();
}"
# Get viewport size
playwright-cli run-code "async page => {
return page.viewportSize();
}"
```
## JavaScript Execution
```bash
# Execute JavaScript and return result
playwright-cli run-code "async page => {
return await page.evaluate(() => {
return {
userAgent: navigator.userAgent,
language: navigator.language,
cookiesEnabled: navigator.cookieEnabled
};
});
}"
# Pass arguments to evaluate
playwright-cli run-code "async page => {
const multiplier = 5;
return await page.evaluate(m => document.querySelectorAll('li').length * m, multiplier);
}"
```
## Error Handling
```bash
# Try-catch in run-code
playwright-cli run-code "async page => {
try {
await page.click('.maybe-missing', { timeout: 1000 });
return 'clicked';
} catch (e) {
return 'element not found';
}
}"
```
## Complex Workflows
```bash
# Login and save state
playwright-cli run-code "async page => {
await page.goto('https://example.com/login');
await page.fill('input[name=email]', 'user@example.com');
await page.fill('input[name=password]', 'secret');
await page.click('button[type=submit]');
await page.waitForURL('**/dashboard');
await page.context().storageState({ path: 'auth.json' });
return 'Login successful';
}"
# Scrape data from multiple pages
playwright-cli run-code "async page => {
const results = [];
for (let i = 1; i <= 3; i++) {
await page.goto(\`https://example.com/page/\${i}\`);
const items = await page.locator('.item').allTextContents();
results.push(...items);
}
return results;
}"
```

# Browser Session Management
Run multiple isolated browser sessions concurrently with state persistence.
## Named Browser Sessions
Use the `-s` flag to isolate browser contexts:
```bash
# Browser 1: Authentication flow
playwright-cli -s=auth open https://app.example.com/login
# Browser 2: Public browsing (separate cookies, storage)
playwright-cli -s=public open https://example.com
# Commands are isolated by browser session
playwright-cli -s=auth fill e1 "user@example.com"
playwright-cli -s=public snapshot
```
## Browser Session Isolation Properties
Each browser session has independent:
- Cookies
- LocalStorage / SessionStorage
- IndexedDB
- Cache
- Browsing history
- Open tabs
## Browser Session Commands
```bash
# List all browser sessions
playwright-cli list
# Stop a browser session (close the browser)
playwright-cli close # stop the default browser
playwright-cli -s=mysession close # stop a named browser
# Stop all browser sessions
playwright-cli close-all
# Forcefully kill all daemon processes (for stale/zombie processes)
playwright-cli kill-all
# Delete browser session user data (profile directory)
playwright-cli delete-data # delete default browser data
playwright-cli -s=mysession delete-data # delete named browser data
```
## Environment Variable
Set a default browser session name via environment variable:
```bash
export PLAYWRIGHT_CLI_SESSION="mysession"
playwright-cli open example.com # Uses "mysession" automatically
```
## Common Patterns
### Concurrent Scraping
```bash
#!/bin/bash
# Scrape multiple sites concurrently
# Start all browsers
playwright-cli -s=site1 open https://site1.com &
playwright-cli -s=site2 open https://site2.com &
playwright-cli -s=site3 open https://site3.com &
wait
# Take snapshots from each
playwright-cli -s=site1 snapshot
playwright-cli -s=site2 snapshot
playwright-cli -s=site3 snapshot
# Cleanup
playwright-cli close-all
```
### A/B Testing Sessions
```bash
# Test different user experiences
playwright-cli -s=variant-a open "https://app.com?variant=a"
playwright-cli -s=variant-b open "https://app.com?variant=b"
# Compare
playwright-cli -s=variant-a screenshot
playwright-cli -s=variant-b screenshot
```
### Persistent Profile
By default, the browser profile is kept in memory only. Use the `--persistent` flag on `open` to persist the profile to disk:
```bash
# Use persistent profile (auto-generated location)
playwright-cli open https://example.com --persistent
# Use persistent profile with custom directory
playwright-cli open https://example.com --profile=/path/to/profile
```
## Default Browser Session
When `-s` is omitted, commands use the default browser session:
```bash
# These use the same default browser session
playwright-cli open https://example.com
playwright-cli snapshot
playwright-cli close # Stops default browser
```
## Browser Session Configuration
Configure a browser session with specific settings when opening:
```bash
# Open with config file
playwright-cli open https://example.com --config=.playwright/my-cli.json
# Open with specific browser
playwright-cli open https://example.com --browser=firefox
# Open in headed mode
playwright-cli open https://example.com --headed
# Open with persistent profile
playwright-cli open https://example.com --persistent
```
## Best Practices
### 1. Name Browser Sessions Semantically
```bash
# GOOD: Clear purpose
playwright-cli -s=github-auth open https://github.com
playwright-cli -s=docs-scrape open https://docs.example.com
# AVOID: Generic names
playwright-cli -s=s1 open https://github.com
```
### 2. Always Clean Up
```bash
# Stop browsers when done
playwright-cli -s=auth close
playwright-cli -s=scrape close
# Or stop all at once
playwright-cli close-all
# If browsers become unresponsive or zombie processes remain
playwright-cli kill-all
```
### 3. Delete Stale Browser Data
```bash
# Remove old browser data to free disk space
playwright-cli -s=oldsession delete-data
```

# Storage Management
Manage cookies, localStorage, sessionStorage, and browser storage state.
## Storage State
Save and restore complete browser state including cookies and storage.
### Save Storage State
```bash
# Save to auto-generated filename (storage-state-{timestamp}.json)
playwright-cli state-save
# Save to specific filename
playwright-cli state-save my-auth-state.json
```
### Restore Storage State
```bash
# Load storage state from file
playwright-cli state-load my-auth-state.json
# Reload page to apply cookies
playwright-cli open https://example.com
```
### Storage State File Format
The saved file contains:
```json
{
"cookies": [
{
"name": "session_id",
"value": "abc123",
"domain": "example.com",
"path": "/",
"expires": 1735689600,
"httpOnly": true,
"secure": true,
"sameSite": "Lax"
}
],
"origins": [
{
"origin": "https://example.com",
"localStorage": [
{ "name": "theme", "value": "dark" },
{ "name": "user_id", "value": "12345" }
]
}
]
}
```
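When post-processing a saved state in a script, the shape above can be typed directly. The following sketch — with types inferred from the example file, not an official schema — flags cookies whose `expires` timestamp has already passed, a common reason a restored session still lands on the login page:

```typescript
// Types inferred from the example state file above; illustrative, not an official schema.
interface Cookie {
  name: string; value: string; domain: string; path: string;
  expires: number; // Unix seconds; -1 means session cookie
  httpOnly: boolean; secure: boolean; sameSite: 'Strict' | 'Lax' | 'None';
}
interface StorageState {
  cookies: Cookie[];
  origins: { origin: string; localStorage: { name: string; value: string }[] }[];
}

// Names of cookies in a saved state that have already expired.
function expiredCookies(state: StorageState, nowSeconds = Date.now() / 1000): string[] {
  return state.cookies
    .filter((c) => c.expires !== -1 && c.expires < nowSeconds)
    .map((c) => c.name);
}
```

Load the file with `JSON.parse(fs.readFileSync('auth.json', 'utf8'))` and pass the result in.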
## Cookies
### List All Cookies
```bash
playwright-cli cookie-list
```
### Filter Cookies by Domain
```bash
playwright-cli cookie-list --domain=example.com
```
### Filter Cookies by Path
```bash
playwright-cli cookie-list --path=/api
```
### Get Specific Cookie
```bash
playwright-cli cookie-get session_id
```
### Set a Cookie
```bash
# Basic cookie
playwright-cli cookie-set session abc123
# Cookie with options
playwright-cli cookie-set session abc123 --domain=example.com --path=/ --httpOnly --secure --sameSite=Lax
# Cookie with expiration (Unix timestamp)
playwright-cli cookie-set remember_me token123 --expires=1735689600
```
### Delete a Cookie
```bash
playwright-cli cookie-delete session_id
```
### Clear All Cookies
```bash
playwright-cli cookie-clear
```
### Advanced: Multiple Cookies or Custom Options
For complex scenarios like adding multiple cookies at once, use `run-code`:
```bash
playwright-cli run-code "async page => {
await page.context().addCookies([
{ name: 'session_id', value: 'sess_abc123', domain: 'example.com', path: '/', httpOnly: true },
{ name: 'preferences', value: JSON.stringify({ theme: 'dark' }), domain: 'example.com', path: '/' }
]);
}"
```
## Local Storage
### List All localStorage Items
```bash
playwright-cli localstorage-list
```
### Get Single Value
```bash
playwright-cli localstorage-get token
```
### Set Value
```bash
playwright-cli localstorage-set theme dark
```
### Set JSON Value
```bash
playwright-cli localstorage-set user_settings '{"theme":"dark","language":"en"}'
```
### Delete Single Item
```bash
playwright-cli localstorage-delete token
```
### Clear All localStorage
```bash
playwright-cli localstorage-clear
```
### Advanced: Multiple Operations
For complex scenarios like setting multiple values at once, use `run-code`:
```bash
playwright-cli run-code "async page => {
await page.evaluate(() => {
localStorage.setItem('token', 'jwt_abc123');
localStorage.setItem('user_id', '12345');
localStorage.setItem('expires_at', String(Date.now() + 3600000));
});
}"
```
## Session Storage
### List All sessionStorage Items
```bash
playwright-cli sessionstorage-list
```
### Get Single Value
```bash
playwright-cli sessionstorage-get form_data
```
### Set Value
```bash
playwright-cli sessionstorage-set step 3
```
### Delete Single Item
```bash
playwright-cli sessionstorage-delete step
```
### Clear sessionStorage
```bash
playwright-cli sessionstorage-clear
```
## IndexedDB
### List Databases
```bash
playwright-cli run-code "async page => {
return await page.evaluate(async () => {
const databases = await indexedDB.databases();
return databases;
});
}"
```
### Delete Database
```bash
playwright-cli run-code "async page => {
await page.evaluate(() => {
indexedDB.deleteDatabase('myDatabase');
});
}"
```
## Common Patterns
### Authentication State Reuse
```bash
# Step 1: Login and save state
playwright-cli open https://app.example.com/login
playwright-cli snapshot
playwright-cli fill e1 "user@example.com"
playwright-cli fill e2 "password123"
playwright-cli click e3
# Save the authenticated state
playwright-cli state-save auth.json
# Step 2: Later, restore state and skip login
playwright-cli state-load auth.json
playwright-cli open https://app.example.com/dashboard
# Already logged in!
```
### Save and Restore Roundtrip
```bash
# Set up authentication state
playwright-cli open https://example.com
playwright-cli run-code "async page => { await page.evaluate(() => { document.cookie = 'session=abc123'; localStorage.setItem('user', 'john'); }); }"
# Save state to file
playwright-cli state-save my-session.json
# ... later, in a new session ...
# Restore state
playwright-cli state-load my-session.json
playwright-cli open https://example.com
# Cookies and localStorage are restored!
```
## Security Notes
- Never commit storage state files containing auth tokens
- Add `*.auth-state.json` to `.gitignore`
- Delete state files after automation completes
- Use environment variables for sensitive data
- By default, sessions run in-memory mode which is safer for sensitive operations

# Test Generation
Generate Playwright test code automatically as you interact with the browser.
## How It Works
Every action you perform with `playwright-cli` generates corresponding Playwright TypeScript code.
This code appears in the output and can be copied directly into your test files.
## Example Workflow
```bash
# Start a session
playwright-cli open https://example.com/login
# Take a snapshot to see elements
playwright-cli snapshot
# Output shows: e1 [textbox "Email"], e2 [textbox "Password"], e3 [button "Sign In"]
# Fill form fields - generates code automatically
playwright-cli fill e1 "user@example.com"
# Ran Playwright code:
# await page.getByRole('textbox', { name: 'Email' }).fill('user@example.com');
playwright-cli fill e2 "password123"
# Ran Playwright code:
# await page.getByRole('textbox', { name: 'Password' }).fill('password123');
playwright-cli click e3
# Ran Playwright code:
# await page.getByRole('button', { name: 'Sign In' }).click();
```
## Building a Test File
Collect the generated code into a Playwright test:
```typescript
import { test, expect } from '@playwright/test';
test('login flow', async ({ page }) => {
// Generated code from playwright-cli session:
await page.goto('https://example.com/login');
await page.getByRole('textbox', { name: 'Email' }).fill('user@example.com');
await page.getByRole('textbox', { name: 'Password' }).fill('password123');
await page.getByRole('button', { name: 'Sign In' }).click();
// Add assertions
await expect(page).toHaveURL(/.*dashboard/);
});
```
## Best Practices
### 1. Use Semantic Locators
The generated code uses role-based locators when possible, which are more resilient:
```typescript
// Generated (good - semantic)
await page.getByRole('button', { name: 'Submit' }).click();
// Avoid (fragile - CSS selectors)
await page.locator('#submit-btn').click();
```
### 2. Explore Before Recording
Take snapshots to understand the page structure before recording actions:
```bash
playwright-cli open https://example.com
playwright-cli snapshot
# Review the element structure
playwright-cli click e5
```
### 3. Add Assertions Manually
Generated code captures actions but not assertions. Add expectations in your test:
```typescript
// Generated action
await page.getByRole('button', { name: 'Submit' }).click();
// Manual assertion
await expect(page.getByText('Success')).toBeVisible();
```

# Tracing
Capture detailed execution traces for debugging and analysis. Traces include DOM snapshots, screenshots, network activity, and console logs.
## Basic Usage
```bash
# Start trace recording
playwright-cli tracing-start
# Perform actions
playwright-cli open https://example.com
playwright-cli click e1
playwright-cli fill e2 "test"
# Stop trace recording
playwright-cli tracing-stop
```
## Trace Output Files
When you start tracing, Playwright creates a `traces/` directory with several files:
### `trace-{timestamp}.trace`
**Action log** - The main trace file containing:
- Every action performed (clicks, fills, navigations)
- DOM snapshots before and after each action
- Screenshots at each step
- Timing information
- Console messages
- Source locations
### `trace-{timestamp}.network`
**Network log** - Complete network activity:
- All HTTP requests and responses
- Request headers and bodies
- Response headers and bodies
- Timing (DNS, connect, TLS, TTFB, download)
- Resource sizes
- Failed requests and errors
### `resources/`
**Resources directory** - Cached resources:
- Images, fonts, stylesheets, scripts
- Response bodies for replay
- Assets needed to reconstruct page state
## What Traces Capture
| Category | Details |
|----------|---------|
| **Actions** | Clicks, fills, hovers, keyboard input, navigations |
| **DOM** | Full DOM snapshot before/after each action |
| **Screenshots** | Visual state at each step |
| **Network** | All requests, responses, headers, bodies, timing |
| **Console** | All console.log, warn, error messages |
| **Timing** | Precise timing for each operation |
## Use Cases
### Debugging Failed Actions
```bash
playwright-cli tracing-start
playwright-cli open https://app.example.com
# This click fails - why?
playwright-cli click e5
playwright-cli tracing-stop
# Open trace to see DOM state when click was attempted
```
### Analyzing Performance
```bash
playwright-cli tracing-start
playwright-cli open https://slow-site.com
playwright-cli tracing-stop
# View network waterfall to identify slow resources
```
### Capturing Evidence
```bash
# Record a complete user flow for documentation
playwright-cli tracing-start
playwright-cli open https://app.example.com/checkout
playwright-cli fill e1 "4111111111111111"
playwright-cli fill e2 "12/25"
playwright-cli fill e3 "123"
playwright-cli click e4
playwright-cli tracing-stop
# Trace shows exact sequence of events
```
## Trace vs Video vs Screenshot
| Feature | Trace | Video | Screenshot |
|---------|-------|-------|------------|
| **Format** | .trace file | .webm video | .png/.jpeg image |
| **DOM inspection** | Yes | No | No |
| **Network details** | Yes | No | No |
| **Step-by-step replay** | Yes | Continuous | Single frame |
| **File size** | Medium | Large | Small |
| **Best for** | Debugging | Demos | Quick capture |
## Best Practices
### 1. Start Tracing Before the Problem
```bash
# Trace the entire flow, not just the failing step
playwright-cli tracing-start
playwright-cli open https://example.com
# ... all steps leading to the issue ...
playwright-cli tracing-stop
```
### 2. Clean Up Old Traces
Traces can consume significant disk space:
```bash
# Remove traces older than 7 days
find .playwright-cli/traces -mtime +7 -delete
```
## Limitations
- Traces add overhead to automation
- Large traces can consume significant disk space
- Some dynamic content may not replay perfectly

# Video Recording
Capture browser automation sessions as video for debugging, documentation, or verification. Produces WebM (VP8/VP9 codec).
## Basic Recording
```bash
# Start recording
playwright-cli video-start
# Perform actions
playwright-cli open https://example.com
playwright-cli snapshot
playwright-cli click e1
playwright-cli fill e2 "test input"
# Stop and save
playwright-cli video-stop demo.webm
```
## Best Practices
### 1. Use Descriptive Filenames
```bash
# Include context in filename
playwright-cli video-stop recordings/login-flow-2024-01-15.webm
playwright-cli video-stop recordings/checkout-test-run-42.webm
```
## Tracing vs Video
| Feature | Video | Tracing |
|---------|-------|---------|
| Output | WebM file | Trace file (viewable in Trace Viewer) |
| Shows | Visual recording | DOM snapshots, network, console, actions |
| Use case | Demos, documentation | Debugging, analysis |
| Size | Larger | Smaller |
## Limitations
- Recording adds slight overhead to automation
- Large recordings can consume significant disk space

tests/e2e/.cursorrules
# SigNoz E2E Testing - Cursor Rules
## Project Overview
This is a Playwright-based E2E testing framework for the SigNoz frontend application. The project follows a test-plan-first approach: comprehensive test plans are written before automated tests are generated.
## Directory Structure
```
signoz-e2e/
├── examples/ # Test plan templates and examples
│ └── example-test-plan.md
├── specs/ # Test plans in markdown format
│ └── [feature]/
│ └── [feature]-test-plan.md
├── tests/ # Playwright test files
│ ├── seed.spec.ts # Reference test for patterns
│ └── [feature]/
│ └── [feature].spec.ts
├── utils/ # Shared utilities
│ └── login.util.ts
├── playwright.config.ts # Playwright configuration
└── .env # Environment variables
```
## Writing Test Plans
### Test Plan Structure
Test plans MUST follow this structure (see `examples/example-test-plan.md`):
```markdown
# [Feature Name] - Test Plan
## Application Overview
[Describe the feature/module being tested with key functionality and user flows]
## User Role Permissions (Optional)
- **@admin**: [Admin capabilities]
- **@editor**: [Editor capabilities]
- **@viewer**: [Viewer capabilities]
## Test Scenarios
### 1. [Main Scenario Category] **[Role Tag]**
**Seed:** `tests/seed.spec.ts`
#### 1.1 [Specific Test Case] **[Role Tag]**
**Pre-conditions:**
- [List any setup needed]
**Steps:**
1. [Action step]
2. [Action step]
...
**Expected Results:**
- [Expected outcome]
- [Expected state]
...
**Data:** (Optional)
- Input field: "test value"
- Select option: "option name"
### 2. [Another Scenario Category]
...
## Edge Cases
[Document edge cases and error scenarios]
## Notes
- [Special considerations]
- [Known limitations]
```
### Test Plan Best Practices
1. **Be Specific**: Include exact UI element names, button labels, and navigation paths
2. **Role-Based**: Tag scenarios with appropriate role tags (@admin, @editor, @viewer)
3. **Seed Reference**: Always reference `tests/seed.spec.ts` as the seed test
4. **Data-Driven**: Include specific test data values in the plan
5. **Comprehensive**: Cover happy paths, edge cases, and error scenarios
6. **Actionable Steps**: Write steps that can be directly translated to Playwright code
7. **Expected Results**: Be explicit about what should happen after each action
### Creating Test Plans Manually
1. Use `examples/example-test-plan.md` as template
2. Explore the feature thoroughly in the application
3. Document all user flows and interactions
4. Include validation of metadata (created by, created on, etc.)
5. Consider different user roles and permissions
6. Document error states and validation messages
7. Save to `specs/[feature]/[feature]-test-plan.md`
### Creating Test Plans with Playwright Agents
```bash
# Initialize agents first (if not done)
npx playwright init-agents --loop=vscode
# Use the planner agent
@🎭 planner @tests/seed.spec.ts
Create a comprehensive test plan for: [feature name]
Save to: specs/[feature]/[feature]-test-plan.md
```
The planner will:
- Explore the application using the seed test for context
- Generate a detailed test plan following the template
- Include all scenarios, edge cases, and validation checks
## Writing Playwright Tests
### Test File Structure
Every test file MUST follow this structure:
```typescript
// spec: specs/[feature]/[feature]-test-plan.md
// seed: tests/seed.spec.ts
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../utils/login.util';
test.describe('[Feature Name]', () => {
test.beforeEach(async ({ page }) => {
// Login to the application
await ensureLoggedIn(page);
// Navigate to the feature (specific to each feature)
// Example navigation steps here
});
test(
'[Test Name]',
{
tag: '@viewer', // or @editor, @admin
},
async ({ page }) => {
// Test implementation with numbered comments
// 1. [Step description]
// 2. [Step description]
...
},
);
});
```
### Test Implementation Best Practices
1. **Always Reference**: Include spec and seed references at the top of the file
2. **Use ensureLoggedIn**: Always use `ensureLoggedIn(page)` for authentication
```typescript
await ensureLoggedIn(page);
```
3. **Role-Based Tags**: Tag every test with appropriate role
```typescript
test(
'Test Name',
{ tag: '@admin' }, // @admin, @editor, or @viewer
async ({ page }) => { ... }
);
```
4. **Semantic Locators**: Prefer Playwright's built-in locators (in order of preference):
```typescript
// ✅ GOOD
page.getByRole('button', { name: 'Submit' })
page.getByLabel('Email')
page.getByPlaceholder('Enter email...')
page.getByText('Welcome')
page.getByTestId('login-button')
// ❌ AVOID
page.locator('.btn-submit') // CSS selectors
page.locator('#email') // ID selectors
```
5. **Step Comments**: Add numbered comments matching test plan steps
```typescript
// 1. Click "New routing policy" button
await page.getByRole('button', { name: 'plus New routing policy' }).click();
// 2. Fill in routing policy name
await page.getByRole('textbox', { name: 'e.g. Base routing policy...' }).fill('My Policy');
```
6. **Explicit Waits**: Wait for elements to be visible before interacting
```typescript
await expect(page.getByRole('dialog', { name: 'Create policy' })).toBeVisible();
```
7. **Assertions**: Use explicit assertions for all validations
```typescript
await expect(page.getByText('Success message')).toBeVisible();
await expect(field).toHaveValue('expected value');
await expect(element).toBeHidden();
```
8. **Unique Test Data**: Use timestamps or unique identifiers for test data
```typescript
const uniqueName = `Test Policy ${Date.now()}`;
```
9. **Navigation in beforeEach**: Put common navigation steps in beforeEach
```typescript
test.beforeEach(async ({ page }) => {
  await ensureLoggedIn(page);
  await page.locator('svg.lucide-bell-dot').click();
  await page.getByRole('tab', { name: 'Configuration' }).click();
});
```
10. **Handle Async Operations**: Add appropriate waits for async operations
```typescript
await page.getByRole('button', { name: 'Save' }).click();
await expect(page.getByText('Saved successfully')).toBeVisible();
```
### Writing Tests Manually
1. Read the test plan in `specs/[feature]/[feature]-test-plan.md`
2. Use `tests/seed.spec.ts` as reference for patterns
3. Create test file at `tests/[feature]/[feature].spec.ts`
4. Implement each scenario from the test plan
5. Use Playwright codegen for complex interactions: `yarn codegen`
6. Run tests to verify: `yarn test:ui`
### Writing Tests with Playwright Agents
```bash
# Use the generator agent
@🎭 generator @specs/[feature]/[feature]-test-plan.md @tests/seed.spec.ts
Generate Playwright tests from the test plan
Save to: tests/[feature]/[feature].spec.ts
```
The generator will:
- Read the test plan
- Use seed.spec.ts for patterns and context
- Generate properly structured test files
- Use semantic locators and best practices
- Include proper role tags and assertions
### Fixing Failing Tests with Playwright Agents
```bash
# Use the healer agent
@🎭 healer @tests/[feature]/[failing-test].spec.ts
Fix the failing test: [test name]
Error: [paste error message from test output]
```
The healer will:
- Replay the failing steps
- Update locators if UI elements changed
- Add proper waits for timing issues
- Re-run until the test passes
## Locator Strategies
### Priority Order (use in this order):
1. **Role-based** (Preferred): `getByRole(role, { name })`
2. **Label**: `getByLabel(text)`
3. **Placeholder**: `getByPlaceholder(text)`
4. **Text**: `getByText(text)`
5. **Test ID**: `getByTestId(id)`
6. **CSS/XPath** (Last Resort): `locator(selector)`
### Common Patterns
```typescript
// Buttons
page.getByRole('button', { name: 'Submit' })
page.getByRole('button', { name: 'plus New policy' }) // with icon
// Text inputs
page.getByRole('textbox', { name: 'Email' })
page.getByPlaceholder('Enter email...')
// Dropdowns
page.locator('.ant-select').click() // If no semantic alternative
page.locator('.ant-select-item').first().click()
// Tables
page.getByRole('table')
page.getByRole('row')
page.getByRole('cell')
// Dialogs/Modals
page.getByRole('dialog', { name: 'Create policy' })
// Tabs
page.getByRole('tab', { name: 'Configuration' })
// Headings
page.getByRole('heading', { name: 'Dashboard' })
// Links
page.getByRole('link', { name: 'Learn more' })
```
## Role-Based Testing
### Role Hierarchy
- **@viewer**: Read-only access (view, search, filter)
- **@editor**: Create and edit access (all @viewer tests + create/edit)
- **@admin**: Full access (all @viewer + @editor tests + delete/admin operations)
### Tagging Guidelines
```typescript
// Viewer tests - read-only operations
test('View dashboard', { tag: '@viewer' }, async ({ page }) => { ... });
test('Search policies', { tag: '@viewer' }, async ({ page }) => { ... });
// Editor tests - create/edit operations
test('Create policy', { tag: '@editor' }, async ({ page }) => { ... });
test('Edit policy', { tag: '@editor' }, async ({ page }) => { ... });
// Admin tests - delete/admin operations
test('Delete policy', { tag: '@admin' }, async ({ page }) => { ... });
test('Manage users', { tag: '@admin' }, async ({ page }) => { ... });
```
### Running Tests by Role
```bash
# Set role in .env file
SIGNOZ_USER_ROLE=Admin # Runs all tests
SIGNOZ_USER_ROLE=Editor # Runs @editor and @viewer tests
SIGNOZ_USER_ROLE=Viewer # Runs @viewer tests only
# Or pass directly
SIGNOZ_USER_ROLE=Admin yarn test
```
## Environment Configuration
### Required Environment Variables
Create `.env` file with:
```bash
SIGNOZ_E2E_BASE_URL=https://app.us.staging.signoz.cloud
SIGNOZ_E2E_USERNAME=your-email@example.com
SIGNOZ_E2E_PASSWORD=your-password
SIGNOZ_USER_ROLE=Admin # or Editor, Viewer
```
### Using Environment Variables in Tests
```typescript
// Already handled in playwright.config.ts
// baseURL from SIGNOZ_E2E_BASE_URL
// credentials from SIGNOZ_E2E_USERNAME and SIGNOZ_E2E_PASSWORD
```
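For reference, a minimal sketch of how the config can resolve `baseURL` from these variables. This is an illustration only; everything beyond the env var names is an assumption, not the actual `playwright.config.ts`:

```typescript
// Sketch only; the real playwright.config.ts may differ. Illustrates how the
// variables above can feed Playwright's `use.baseURL`.
interface E2EEnv {
  SIGNOZ_E2E_BASE_URL?: string;
  SIGNOZ_E2E_USERNAME?: string;
  SIGNOZ_E2E_PASSWORD?: string;
}

function resolveBaseURL(env: E2EEnv, fallback: string): string {
  // A set SIGNOZ_E2E_BASE_URL selects staging mode; otherwise the fallback
  // (e.g. coordinates written by the pytest bootstrap) is used.
  return env.SIGNOZ_E2E_BASE_URL ?? fallback;
}
```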
## Common Patterns & Utilities
### Login Pattern
```typescript
import { ensureLoggedIn } from '../../utils/login.util';

test.beforeEach(async ({ page }) => {
  await ensureLoggedIn(page);
});
```
### Navigation Pattern
```typescript
// Navigate through sidebar
await page.locator('svg.lucide-bell-dot').click(); // Alerts icon
await page.getByRole('tab', { name: 'Configuration' }).click();
await page.getByRole('tab', { name: 'Routing Policies' }).click();
// Navigate by URL
await page.goto('/alerts?tab=Configuration');
```
### Form Filling Pattern
```typescript
// Fill form fields
await page.getByRole('textbox', { name: 'Name' }).fill('Test Name');
await page.getByRole('textbox', { name: 'Description' }).fill('Test Description');
// Select dropdown
await page.locator('.ant-select').click();
await page.locator('.ant-select-item').first().click();
// Submit form
await page.getByRole('button', { name: 'Save' }).click();
```
### Search Pattern
```typescript
const searchBox = page.getByRole('textbox', { name: 'Search...' });
await searchBox.fill('search term');
await page.keyboard.press('Enter');
await expect(page.getByText('search term')).toBeVisible();
```
### CRUD Operations Pattern
```typescript
// Create
await page.getByRole('button', { name: 'New Item' }).click();
// ... fill form ...
await page.getByRole('button', { name: 'Save' }).click();
await expect(page.getByText('Created successfully')).toBeVisible();
// Read/View
await page.getByText('Item Name').click();
await expect(page.getByText('Item Details')).toBeVisible();
// Update
await page.getByTestId('edit-item').click();
// ... update form ...
await page.getByRole('button', { name: 'Save' }).click();
await expect(page.getByText('Updated successfully')).toBeVisible();
// Delete
await page.getByTestId('delete-item').click();
await page.getByRole('button', { name: 'Delete' }).click();
await expect(page.getByText('Deleted successfully')).toBeVisible();
```
### Expandable Details Pattern
```typescript
// Expand to view details
const expandButton = page.getByRole('tab', { name: 'right' }).first();
await expandButton.click();
// No fixed delay needed; the web-first assertions below auto-wait
// Verify details
await expect(page.getByText('Created by')).toBeVisible();
await expect(page.getByText('Created on')).toBeVisible();
```
## Testing Best Practices
### 1. Test Independence
- Each test should be independent and not rely on other tests
- Use `beforeEach` for common setup
- Clean up test data if necessary
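Unique per-test data is what makes independence cheap on a shared backend. A tiny helper sketch (hypothetical; `uniqueName` is not an existing repo utility):

```typescript
// Hypothetical helper (not an existing repo utility): per-test unique names so
// parallel workers and re-runs never collide on shared backend state.
function uniqueName(prefix: string, now: number = Date.now()): string {
  const suffix = Math.random().toString(36).slice(2, 8); // short random tail
  return `${prefix}-${now}-${suffix}`;
}

// Usage inside a test:
// const policyName = uniqueName('routing-policy');
```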
### 2. Stable Selectors
- Always prefer semantic locators over CSS selectors
- Use test IDs when semantic locators are not available
- Avoid fragile selectors like `.class-name:nth-child(3)`
### 3. Explicit Waits
```typescript
// ✅ GOOD - Wait for element to be visible
await expect(page.getByRole('dialog')).toBeVisible();
await page.getByRole('button', { name: 'Submit' }).click();
// ❌ BAD - Arbitrary timeout
await page.waitForTimeout(5000);
```
### 4. Assertions
- Always add assertions to verify expected outcomes
- Use appropriate assertion methods
- Check for both positive and negative cases
```typescript
// Visibility assertions
await expect(element).toBeVisible();
await expect(element).toBeHidden();
// Value assertions
await expect(input).toHaveValue('expected');
await expect(element).toHaveText('expected');
// State assertions
await expect(button).toBeEnabled();
await expect(button).toBeDisabled();
```
### 5. Error Handling
```typescript
// Verify error messages
await expect(page.getByText('Error: Invalid input')).toBeVisible();
// Verify validation
await expect(page.getByText('Please provide a name')).toBeVisible();
```
### 6. Test Data Management
```typescript
// Use unique identifiers
const uniqueName = `Test ${Date.now()}`;
const uniqueId = `test-${Math.random().toString(36).substring(7)}`;
// Clean up after tests if needed
test.afterEach(async ({ page }) => {
  // Delete created test data
});
```
## Debugging Tests
### Debug Commands
```bash
# UI Mode - interactive debugging
yarn test:ui
# Debug Mode - step through tests
yarn test:debug
# Headed Mode - see browser
yarn test:headed
# Run specific test
yarn test tests/[feature]/[test].spec.ts
# Run with specific role
SIGNOZ_USER_ROLE=Admin yarn test:ui
```
### Using Playwright Inspector
```typescript
// Add breakpoint in test
await page.pause(); // Test will pause here
```
### Viewing Test Reports
```bash
# Open HTML report
yarn report
# View JSON results
cat test-results/results.json
```
## Complete Workflow
### Creating New Test Suite
1. **Plan First**
```bash
# Use planner agent (recommended)
@🎭 planner @tests/seed.spec.ts
Create test plan for: [feature name]
Save to: specs/[feature]/[feature]-test-plan.md
# OR create manually using template
cp examples/example-test-plan.md specs/[feature]/[feature]-test-plan.md
# Edit the plan
```
2. **Generate Tests**
```bash
# Use generator agent (recommended)
@🎭 generator @specs/[feature]/[feature]-test-plan.md @tests/seed.spec.ts
Generate tests and save to: tests/[feature]/[feature].spec.ts
# OR write tests manually using seed.spec.ts as reference
```
3. **Run Tests**
```bash
yarn test:ui # Interactive mode
yarn test # Headless mode
```
4. **Fix Failures**
```bash
# Use healer agent (recommended)
@🎭 healer @tests/[feature]/[failing-test].spec.ts
Fix the failing test: [test name]
Error: [paste error]
# OR debug manually
yarn test:debug tests/[feature]/[failing-test].spec.ts
```
5. **Verify & Commit**
```bash
# Run all tests
yarn test
# Check types
yarn typecheck
# Lint code
yarn lint:fix
```
## Code Quality
### Linting
```bash
# Check for issues
yarn lint
# Auto-fix issues
yarn lint:fix
```
### Type Checking
```bash
# Verify TypeScript types
yarn typecheck
```
### Before Committing
1. Run `yarn typecheck`
2. Run `yarn lint:fix`
3. Run `yarn test` to ensure all tests pass
4. Update test plan if feature changed
## Anti-Patterns to Avoid
### ❌ DON'T
```typescript
// Don't use arbitrary timeouts
await page.waitForTimeout(5000);
// Don't use fragile CSS selectors
await page.locator('.css-12345').click();
// Don't skip error handling
// (missing validation checks)
// Don't hardcode URLs
await page.goto('https://hardcoded-url.com');
// Don't use test.only in committed code
test.only('My test', async ({ page }) => { ... });
// Don't write tests without tags
test('Create item', async ({ page }) => { ... }); // Missing role tag!
```
### ✅ DO
```typescript
// Use explicit waits
await expect(page.getByRole('button', { name: 'Submit' })).toBeVisible();
// Use semantic locators
await page.getByRole('button', { name: 'Submit' }).click();
// Add proper assertions
await expect(page.getByText('Success')).toBeVisible();
// Use baseURL from config
await page.goto('/dashboard'); // Uses baseURL from config
// Remove test.only before committing
test('Create item', { tag: '@editor' }, async ({ page }) => { ... });
// Always tag tests with roles
test('Create item', { tag: '@editor' }, async ({ page }) => { ... });
```
## Playwright Agents Quick Reference
### When to Use Each Agent
- **🎭 Planner**: When you need to create a comprehensive test plan for a new feature or update an existing one
- **🎭 Generator**: When you have a test plan and need to generate Playwright tests from it
- **🎭 Healer**: When tests are failing due to locator issues, timing problems, or UI changes
### Agent Commands
```bash
# Planner
@🎭 planner @tests/seed.spec.ts
Create test plan for: [feature]
Save to: specs/[feature]/test-plan.md
# Generator
@🎭 generator @specs/[feature]/test-plan.md @tests/seed.spec.ts
Generate tests
Save to: tests/[feature]/[feature].spec.ts
# Healer
@🎭 healer @tests/[feature]/[test].spec.ts
Fix failing test: [test name]
Error: [error message]
```
## Additional Resources
- Playwright Documentation: https://playwright.dev
- Playwright Agents: https://playwright.dev/docs/test-agents
- Playwright Best Practices: https://playwright.dev/docs/best-practices
- TypeScript Handbook: https://www.typescriptlang.org/docs/
## Summary Checklist
When creating test plans:
- [ ] Use template from `examples/example-test-plan.md`
- [ ] Include Application Overview section
- [ ] Add role-based permissions if applicable
- [ ] Write specific, actionable steps
- [ ] Include expected results for each scenario
- [ ] Add edge cases and error scenarios
- [ ] Save to `specs/[feature]/[feature]-test-plan.md`
When writing tests:
- [ ] Reference test plan and seed in comments
- [ ] Import and use `ensureLoggedIn` utility
- [ ] Add `beforeEach` with navigation
- [ ] Tag every test with role (@admin, @editor, @viewer)
- [ ] Use semantic locators (getByRole, getByLabel, etc.)
- [ ] Add numbered comments matching test plan steps
- [ ] Include explicit assertions
- [ ] Use unique test data with timestamps
- [ ] Run tests and verify they pass
- [ ] Run linting and type checking
When debugging:
- [ ] Use `yarn test:ui` for interactive debugging
- [ ] Use `yarn test:debug` for step-through debugging
- [ ] Check test reports with `yarn report`
- [ ] Use healer agent for fixing locator/timing issues
- [ ] Verify tests pass in all browsers if needed

tests/e2e/.env.example Normal file

@@ -0,0 +1,15 @@
# Copy this to .env and fill in your values.
#
# Local mode is the default: `cd tests && uv run pytest ... e2e/src/bootstrap/setup.py::test_setup`
# brings up a containerized backend and writes .signoz-backend.json, which
# global.setup.ts consumes — you only need this file for staging mode.
# Staging override (set BASE_URL to opt out of local backend bring-up)
SIGNOZ_E2E_BASE_URL=https://app.us.staging.signoz.cloud
# Test credentials (only needed when SIGNOZ_E2E_BASE_URL is set, i.e. staging mode)
SIGNOZ_E2E_USERNAME=
SIGNOZ_E2E_PASSWORD=
# Role of the user - Admin/Editor/Viewer
SIGNOZ_USER_ROLE=

tests/e2e/.eslintignore Normal file

@@ -0,0 +1,38 @@
# Dependencies
node_modules/
# Build outputs
dist/
build/
# Test results
test-results/
playwright-report/
coverage/
# Environment files
.env
.env.local
.env.production
# Editor files
.vscode/
.idea/
*.swp
*.swo
# OS files
.DS_Store
Thumbs.db
# Logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Runtime data
pids
*.pid
*.seed
*.pid.lock

tests/e2e/.eslintrc.js Normal file

@@ -0,0 +1,68 @@
module.exports = {
  parser: '@typescript-eslint/parser',
  parserOptions: {
    ecmaVersion: 2022,
    sourceType: 'module',
  },
  extends: [
    'eslint:recommended',
    'plugin:@typescript-eslint/recommended',
    'plugin:playwright/recommended',
  ],
  env: {
    node: true,
    es2022: true,
  },
  rules: {
    // Code Quality
    '@typescript-eslint/no-unused-vars': 'error',
    '@typescript-eslint/no-explicit-any': 'warn',
    'prefer-const': 'error',
    'no-var': 'error',
    // Formatting Rules (ESLint handles formatting)
    'semi': ['error', 'always'],
    'quotes': ['error', 'single', { avoidEscape: true }],
    'comma-dangle': ['error', 'always-multiline'],
    'indent': ['error', 2, { SwitchCase: 1 }],
    'object-curly-spacing': ['error', 'always'],
    'array-bracket-spacing': ['error', 'never'],
    'space-before-function-paren': ['error', {
      anonymous: 'always',
      named: 'never',
      asyncArrow: 'always',
    }],
    'keyword-spacing': 'error',
    'space-infix-ops': 'error',
    'eol-last': 'error',
    'no-trailing-spaces': 'error',
    'no-multiple-empty-lines': ['error', { max: 2, maxEOF: 1 }],
    // Playwright-specific (enhanced)
    'playwright/expect-expect': 'error',
    'playwright/no-conditional-in-test': 'error',
    'playwright/no-page-pause': 'error',
    'playwright/no-wait-for-timeout': 'warn',
    'playwright/prefer-web-first-assertions': 'error',
    // Console usage
    'no-console': ['warn', { allow: ['warn', 'error'] }],
  },
  overrides: [
    {
      // Config files can use console and have relaxed formatting
      files: ['*.config.{js,ts}', 'playwright.config.ts'],
      rules: {
        'no-console': 'off',
        '@typescript-eslint/no-explicit-any': 'off',
      },
    },
    {
      // Test files specific rules
      files: ['**/*.spec.ts', '**/*.test.ts'],
      rules: {
        '@typescript-eslint/no-explicit-any': 'off', // Page objects often need any
      },
    },
  ],
};

tests/e2e/.gitignore vendored Normal file

@@ -0,0 +1,24 @@
node_modules/
/test-results/
/playwright-report/
/playwright/.cache/
.env
.env.local
dist/
*.log
yarn-error.log
.yarn/cache
.yarn/install-state.gz
.vscode/
# playwright-cli artifacts (snapshots, screenshots, videos, traces)
.playwright-cli/
# saved auth session (generated by tests/auth.setup.ts)
.auth/
# backend coordinates written by the pytest bootstrap (src/bootstrap/setup.py)
.signoz-backend.json
# per-run artifacts from the alerts-downtime regression suite
tests/alerts-downtime/run-spec-*/

tests/e2e/.mcp.json Normal file

@@ -0,0 +1,12 @@
{
  "mcpServers": {
    "playwright-test": {
      "command": "npx",
      "args": [
        "playwright",
        "run-test-mcp-server",
        "--headless"
      ]
    }
  }
}

tests/e2e/.prettierignore Normal file

@@ -0,0 +1,31 @@
# Dependencies
node_modules/
# Generated test outputs
playwright-report/
test-results/
playwright/.cache/
# Build outputs
dist/
# Environment files
.env
.env.local
.env*.local
# Lock files
yarn.lock
package-lock.json
pnpm-lock.yaml
# Logs
*.log
yarn-error.log
# IDE
.vscode/
.idea/
# Other
.DS_Store


@@ -0,0 +1,6 @@
{
  "useTabs": false,
  "tabWidth": 2,
  "singleQuote": true,
  "trailingComma": "all"
}

tests/e2e/CLAUDE.md Normal file

@@ -0,0 +1,187 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Overview
Playwright-based E2E suite for the SigNoz frontend, wired into the shared pytest project at `signoz/tests/`. Pytest fixtures (under `tests/fixtures/`) bring up the backend (ClickHouse + Postgres + migrator + SigNoz-with-web) and seed dashboards/alerts/telemetry before Playwright runs. Tests follow a test-plan-first workflow: write a markdown plan in `specs/`, then generate tests in `tests/`.
## Commands
```bash
# One-command local run (pytest owns lifecycle; shells out to Playwright):
cd signoz/tests && uv run pytest --basetemp=./tmp/ -vv --with-web \
e2e/src/bootstrap/run.py::test_e2e
# Warm the backend for iterative dev (keeps containers under --reuse):
cd signoz/tests && uv run pytest --basetemp=./tmp/ -vv --reuse --with-web \
e2e/src/bootstrap/setup.py::test_setup
# Then, from signoz/tests/e2e:
yarn install && yarn install:browsers # first time only
yarn test # headless
yarn test:ui # interactive
yarn test:headed # headed
yarn test:debug # step-through
yarn test tests/roles/roles-listing.spec.ts # single file
# Staging fallback (skips all pytest lifecycle, hits remote env):
yarn test:staging
# Teardown the warm backend:
cd signoz/tests && uv run pytest --basetemp=./tmp/ -vv --teardown \
e2e/src/bootstrap/setup.py::test_teardown
# Role-filtered runs (auto-set by global.setup.ts when backend is up):
SIGNOZ_USER_ROLE=Admin yarn test
SIGNOZ_USER_ROLE=Editor yarn test
SIGNOZ_USER_ROLE=Viewer yarn test
# Code quality (run before committing)
yarn typecheck
yarn lint:fix
# Reports
yarn report # Open HTML report
yarn codegen # Generate test code interactively
```
## Environment Variables
```bash
SIGNOZ_E2E_BASE_URL=https://app.us.staging.signoz.cloud
SIGNOZ_E2E_USERNAME=your-email@example.com
SIGNOZ_E2E_PASSWORD=your-password
SIGNOZ_USER_ROLE=Admin # Admin | Editor | Viewer
```
## Architecture
```
specs/[feature]/[feature]-test-plan.md # Markdown test plans (source of truth)
tests/[feature]/[feature].spec.ts # Generated/implemented Playwright tests
tests/seed.spec.ts # Reference patterns — always cite as context
utils/login.util.ts # ensureLoggedIn() shared auth helper
examples/example-test-plan.md # Template for new test plans
```
### Role Hierarchy
- `@viewer` — read-only tests (run for all roles)
- `@editor` — create/edit tests (run for Editor and Admin)
- `@admin` — delete/admin tests (run for Admin only)
`playwright.config.ts` automatically sets the grep filter based on `SIGNOZ_USER_ROLE`.
## Test File Structure
Every test file must follow this pattern:
```typescript
// spec: specs/[feature]/[feature]-test-plan.md
// seed: tests/seed.spec.ts
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../utils/login.util';
test.describe('[Feature Name]', () => {
  test.beforeEach(async ({ page }) => {
    await ensureLoggedIn(page);
    // navigate to feature
  });

  test('[Test Name]', { tag: '@viewer' }, async ({ page }) => {
    // 1. Step description
    // 2. Step description
  });
});
```
## Locator Priority
Use in this order:
1. `getByRole('button', { name: 'Submit' })`
2. `getByLabel('Email')`
3. `getByPlaceholder('...')`
4. `getByText('...')`
5. `getByTestId('...')`
6. `locator('.ant-select')` — last resort (e.g., Ant Design dropdowns have no semantic alternative)
## Key Patterns
**Unique test data:** ``const name = `Test ${Date.now()}`;``
**Explicit waits over timeouts:**
```typescript
// ✅ DO
await expect(page.getByRole('dialog')).toBeVisible();
// ❌ DON'T
await page.waitForTimeout(5000);
```
**Never commit `test.only` or untagged tests.**
## Playwright Agents
Two `--loop` targets are configured:
| Loop | Agents location | Use when |
|------|----------------|----------|
| `--loop=claude` | `.claude/agents/` | Working in Claude Code (this tool) |
| `--loop=vscode` | `.github/chatmodes/` | Working in VS Code Copilot |
**Re-run after every Playwright upgrade** to pick up improved prompts and new tools:
```bash
npx playwright init-agents --loop=claude
npx playwright init-agents --loop=vscode
```
Claude Code agents (in `.claude/agents/`):
- **playwright-test-planner** — explores the app and writes `specs/[feature]/test-plan.md`
- **playwright-test-generator** — reads a test plan, executes steps live, writes `tests/[feature]/[feature].spec.ts`
- **playwright-test-healer** — runs failing tests, debugs, patches locators/waits until green
These agents use MCP (`run-test-mcp-server --headless`) for bounded, structured test generation sessions.
## CLI vs MCP: When to Use What
**Use the Playwright subagents (MCP)** for the structured plan → generate → heal workflow. Each session is bounded so the MCP token overhead (~4x vs CLI) is acceptable.
**Use `playwright-cli` directly** for all other browser work — quick locator checks, exploring the app, debugging outside a structured session. It saves snapshots/screenshots to disk (`.playwright-cli/`) instead of streaming them into the context window, giving ~4x token savings vs MCP.
```bash
# Open the app and take a snapshot (element refs saved to .playwright-cli/*.yml)
playwright-cli open https://app.us.staging.signoz.cloud
# Get compact element refs (e1, e2, ...) without sending the DOM into context
playwright-cli snapshot
# Interact using refs from the snapshot
playwright-cli fill e5 "search term"
playwright-cli click e12
playwright-cli press Enter
# Each action also outputs the equivalent Playwright code — copy straight into tests
# e.g.: await page.getByRole('button', { name: 'Submit' }).click();
# Take a screenshot (saved to disk, not injected into context)
playwright-cli screenshot
# Check console errors and network requests
playwright-cli console
playwright-cli network
# Save and restore auth state (skip login in subsequent sessions)
playwright-cli state-save .playwright-cli/auth.json
playwright-cli state-load .playwright-cli/auth.json
playwright-cli close
```
For running and debugging actual test files, use these (they're faster for simple cases):
```bash
yarn test tests/[feature]/[feature].spec.ts # run single spec
yarn test:debug tests/[feature]/[feature].spec.ts # step-through
yarn test:ui # interactive UI
yarn codegen # record a flow interactively
```

tests/e2e/README.md Normal file

@@ -0,0 +1,304 @@
# SigNoz E2E
Playwright tests for the SigNoz frontend. Lives alongside `tests/integration/` and reuses its pytest fixture graph to bring up a containerized backend, register an admin, and seed dashboards + telemetry before Playwright runs.
## Two ways to run
### 1. One-command (local backend, recommended)
Pytest owns the lifecycle. It provisions containers, registers the admin, seeds dashboards/alerts/telemetry, writes backend coordinates to `.signoz-backend.json`, then shells out to `yarn test`:
```bash
cd signoz/tests
uv sync # first time only
uv run pytest --basetemp=./tmp/ -vv --with-web \
e2e/src/bootstrap/run.py::test_e2e
```
For iterative Playwright dev, bring the backend up once (`--reuse` keeps containers warm) and drive Playwright directly:
```bash
cd signoz/tests
uv run pytest --basetemp=./tmp/ -vv --reuse --with-web \
e2e/src/bootstrap/setup.py::test_setup
cd e2e && yarn install && yarn install:browsers # first time
yarn test:ui # iterate
```
Teardown when done:
```bash
cd signoz/tests
uv run pytest --basetemp=./tmp/ -vv --teardown \
e2e/src/bootstrap/setup.py::test_teardown
```
### 2. Staging fallback
Point `SIGNOZ_E2E_BASE_URL` at a remote env (e.g. staging) — `global.setup.ts` becomes a no-op and Playwright hits the URL directly:
```bash
cp .env.example .env # fill SIGNOZ_E2E_USERNAME/PASSWORD
yarn test:staging
```
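The mode switch in `global.setup.ts` can be pictured roughly like this. All function and field names here are assumptions for illustration, not the actual implementation; only the env var names and the `.signoz-backend.json` file come from the suite:

```typescript
// Sketch (names assumed, not the real global.setup.ts): a set
// SIGNOZ_E2E_BASE_URL short-circuits the local backend coordinates.
interface BackendCoords {
  base_url: string;
  seeder_url?: string; // optional; only present in local mode
}

function applyCoords(
  env: Record<string, string | undefined>,
  coords?: BackendCoords,
): string | undefined {
  if (env.SIGNOZ_E2E_BASE_URL) return env.SIGNOZ_E2E_BASE_URL; // staging: no-op
  if (!coords) return undefined; // no backend bootstrapped yet
  env.SIGNOZ_E2E_BASE_URL = coords.base_url;
  if (coords.seeder_url) env.SIGNOZ_E2E_SEEDER_URL = coords.seeder_url;
  return coords.base_url;
}
```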
## Setup details
```bash
# Install dependencies (local deps for Playwright)
yarn install
# Install Playwright browsers
yarn install:browsers
# Copy .env.example to .env (only needed for staging mode)
cp .env.example .env
```
### Playwright CLI Setup (token-efficient browser automation)
`@playwright/cli` is a standalone CLI tool for AI coding agents. It saves snapshots and screenshots to disk instead of streaming them into the LLM context — ~4x fewer tokens than MCP for open-ended browser sessions.
```bash
yarn install:cli
# Equivalent to:
# npm install -g @playwright/cli@latest
# playwright-cli install --skills
```
This initializes the workspace and installs the `SKILL.md` to `.claude/skills/playwright-cli/` so Claude Code can use `playwright-cli` commands directly.
> **Note:** Re-run `yarn install:cli` after upgrading `@playwright/cli` to pick up new commands and skill definitions.
### Playwright Agents Setup
Initialize Playwright agents for your AI tool:
```bash
# For Claude Code
npx playwright init-agents --loop=claude
# For VS Code Copilot
npx playwright init-agents --loop=vscode
```
> **Note:** Re-run these commands after every Playwright version upgrade to pick up improved agent prompts and new tools.
This creates three agents:
1. **Planner** - Explores the app and creates comprehensive test plans in `specs/`
2. **Generator** - Generates Playwright tests from test plans in `specs/`
3. **Healer** - Fixes failing tests by updating locators and adding proper waits
The MCP server (`.mcp.json`) runs in `--headless` mode to avoid spawning a visible browser window during agent sessions.
The agents are configured to work with your `seed.spec.ts` file for context and patterns.
## Running Tests
```bash
# Run all tests
yarn test
# Run in UI mode (interactive)
yarn test:ui
# Run in headed mode (see browser)
yarn test:headed
# Debug mode
yarn test:debug
# Run specific browser
yarn test:chromium
yarn test:firefox
yarn test:webkit
# View HTML report
yarn report
# Generate tests with Codegen
yarn codegen
# Linting and formatting
yarn lint
yarn lint:fix
# Type checking
yarn typecheck
```
## Using Playwright Agents
The agent invocation syntax differs by AI tool:
### Claude Code
```
use playwright-test-planner to create a test plan for [feature] at [url]
save to: specs/[feature]/[feature]-test-plan.md
use playwright-test-generator with specs/[feature]/[feature]-test-plan.md and tests/seed.spec.ts
save to: tests/[feature]/[feature].spec.ts
use playwright-test-healer to fix failing tests in tests/[feature]/[feature].spec.ts
error: [paste error message]
```
### VS Code Copilot
```
@🎭 planner @tests/seed.spec.ts
Create a test plan for [feature]. Save to: specs/[feature]/[feature]-test-plan.md
@🎭 generator @specs/[feature]/[feature]-test-plan.md @tests/seed.spec.ts
Generate tests. Save to: tests/[feature]/[feature].spec.ts
@🎭 healer @tests/[feature]/[test].spec.ts
Fix the failing test: [test name]. Error: [paste error message]
```
### What each agent does
| Agent | Input | Output |
|-------|-------|--------|
| Planner | App URL + seed test | `specs/[feature]/test-plan.md` |
| Generator | Test plan + seed test | `tests/[feature]/[feature].spec.ts` (validated live) |
| Healer | Failing `.spec.ts` + error | Patched test, or `test.fixme()` with explanation |
## Environment Variables
| Variable | Description | Example |
|----------|-------------|---------|
| `SIGNOZ_E2E_BASE_URL` | Base URL of the application | `https://app.us.staging.signoz.cloud` |
| `SIGNOZ_E2E_USERNAME` | Test user email | `test@example.com` |
| `SIGNOZ_E2E_PASSWORD` | Test user password | `your-password` |
| `SIGNOZ_USER_ROLE` | Role of the user for testing | `Admin`, `Editor`, or `Viewer` |
## Workflow Example
### Complete Test Creation Flow (Claude Code)
```bash
# 1. Create test plan
# In Claude Code, say:
# "use playwright-test-planner to create a test plan for the routing policies feature at https://app.us.staging.signoz.cloud/alerts
# save to: specs/alerts/routing-policies-test-plan.md"
# 2. Review and edit the generated plan in specs/alerts/routing-policies-test-plan.md
# 3. Generate tests from the plan
# "use playwright-test-generator with specs/alerts/routing-policies-test-plan.md and tests/seed.spec.ts
# save to: tests/alerts/routing-policies.spec.ts"
# 4. Run the tests
yarn test:ui
# 5. If any test fails, heal it
# "use playwright-test-healer to fix failing tests in tests/alerts/routing-policies.spec.ts
# error: <paste error output>"
# 6. Re-run to verify
yarn test
```
### Role-Based Test Execution
This project uses Playwright's tag system to run tests based on user roles. Tests are automatically filtered by the `SIGNOZ_USER_ROLE` environment variable.
#### How It Works
When you run tests, Playwright automatically filters which tests execute based on your role:
- **Admin**: Runs ALL tests (has access to everything)
- **Editor**: Runs Editor + Viewer tests (cannot run Admin-only features)
- **Viewer**: Runs only Viewer tests (read-only access)
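The role-to-tag expansion above can be sketched as a small mapping. This is an illustration of the filtering behaviour, not the actual `playwright.config.ts` code:

```typescript
// Sketch (the real playwright.config.ts may differ): SIGNOZ_USER_ROLE expands
// to the set of tags Playwright greps for.
const ROLE_TAGS: Record<string, string[]> = {
  Admin: ['@admin', '@editor', '@viewer'],
  Editor: ['@editor', '@viewer'],
  Viewer: ['@viewer'],
};

function grepForRole(role: string | undefined): RegExp {
  // Unknown or unset roles fall back to the most restrictive filter.
  const tags = ROLE_TAGS[role ?? 'Viewer'] ?? ROLE_TAGS.Viewer;
  return new RegExp(tags.join('|'));
}
```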
#### Tagging Tests
When writing tests, tag them based on required permissions:
```typescript
// Admin-only test
test(
  'Delete Organization',
  { tag: '@admin' },
  async ({ page }) => {
    // Only admins can delete organizations
  }
);

// Editor-level test
test(
  'Create Dashboard',
  { tag: '@editor' },
  async ({ page }) => {
    // Editors and Admins can create dashboards
  }
);

// Viewer-level test
test(
  'View Dashboard',
  { tag: '@viewer' },
  async ({ page }) => {
    // All users can view dashboards
  }
);
```
#### Running Tests
```bash
# Run tests as different roles
SIGNOZ_USER_ROLE=Admin yarn test # Runs @admin, @editor, @viewer tests
SIGNOZ_USER_ROLE=Editor yarn test # Runs @editor, @viewer tests only
SIGNOZ_USER_ROLE=Viewer yarn test # Runs @viewer tests only
# Set role in .env file for persistent testing
echo "SIGNOZ_USER_ROLE=Admin" >> .env
yarn test
```
## Best Practices
1. **Start with Seed Test** - Always reference `seed.spec.ts` for patterns
2. **Review Generated Plans** - Edit test plans before generating tests
3. **Use Semantic Locators** - Prefer `getByRole`, `getByLabel` over CSS selectors
4. **Keep Plans Updated** - Update `specs/` when features change
5. **Let Healer Work** - The healer can fix most locator and timing issues
6. **Write Descriptive Tests** - Use clear test names and comments
## Troubleshooting
### Tests Won't Run
- Check `.env` has correct credentials
- Verify `baseURL` is accessible
- Run `yarn test:debug` for detailed output
### Locators Failing
- Use the healer agent to fix them
- Or use Playwright Inspector: `yarn test:debug`
- Check if UI elements have changed
### Authentication Issues
- Verify `ensureLoggedIn()` function works
- Check credentials in `.env`
- Run seed test independently: `yarn test tests/seed.spec.ts`
## Resources
- [Playwright Documentation](https://playwright.dev)
- [Playwright Agents](https://playwright.dev/docs/test-agents)
- [Playwright Best Practices](https://playwright.dev/docs/best-practices)
- [TypeScript Handbook](https://www.typescriptlang.org/docs/)
## Contributing
When adding new tests:
1. Create a test plan in `specs/` first
2. Use agents to generate tests
3. Review and refine generated code
4. Ensure tests follow existing patterns
5. Add proper documentation

tests/e2e/conftest.py Normal file

@@ -0,0 +1,145 @@
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Callable, Dict, List

import pytest

from fixtures import types
from fixtures.auth import USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD
from fixtures.dashboards import upsert_dashboard
from fixtures.logger import setup_logger
from fixtures.logs import Logs
from fixtures.metrics import Metrics
from fixtures.traces import (
    TraceIdGenerator,
    Traces,
    TracesKind,
    TracesStatusCode,
)

logger = setup_logger(__name__)

TESTDATA_ROOT = Path(__file__).resolve().parent / "testdata"


def _load_json_dir(path: Path) -> List[Dict]:
    if not path.exists():
        return []
    return [json.loads(f.read_text()) for f in sorted(path.glob("*.json"))]


@pytest.fixture(name="seed_dashboards", scope="function")
def seed_dashboards(
    signoz: types.SigNoz,
    create_user_admin: types.Operation,  # pylint: disable=unused-argument
    get_token: Callable[[str, str], str],
) -> List[str]:
    token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
    ids: List[str] = []
    for payload in _load_json_dir(TESTDATA_ROOT / "dashboards"):
        dashboard_id = upsert_dashboard(signoz, token, payload)
        logger.info(
            "seeded dashboard: %s",
            {"title": payload.get("title"), "id": dashboard_id},
        )
        ids.append(dashboard_id)
    return ids


@pytest.fixture(name="seed_alert_rules", scope="function")
def seed_alert_rules(
    create_alert_rule: Callable[[Dict], str],
) -> List[str]:
    ids: List[str] = []
    for payload in _load_json_dir(TESTDATA_ROOT / "alerts"):
        ids.append(create_alert_rule(payload))
    return ids


@pytest.fixture(name="seed_e2e_telemetry", scope="function")
def seed_e2e_telemetry(
    insert_traces: Callable[[List[Traces]], None],
    insert_logs: Callable[[List[Logs]], None],
    insert_metrics: Callable[[List[Metrics]], None],
) -> None:
    """
    Emit a small, fresh slice of telemetry across a few synthetic services so the
    Services table has rows and /home ingestion banners pass their freshness check.
    Re-runs on each pytest invocation (function scope) — the signoz container is
    reused under --reuse, but telemetry freshness is re-established every time.
    """
    now = datetime.now(tz=timezone.utc).replace(microsecond=0)
    services = ["checkout-service", "orders-service", "payment-service"]
    traces_out: List[Traces] = []
    logs_out: List[Logs] = []
    for service in services:
        trace_id = TraceIdGenerator.trace_id()
        parent_span_id = TraceIdGenerator.span_id()
        child_span_id = TraceIdGenerator.span_id()
        traces_out.extend(
            [
                Traces(
                    timestamp=now - timedelta(seconds=30),
                    duration=timedelta(milliseconds=120),
                    trace_id=trace_id,
                    span_id=parent_span_id,
                    parent_span_id="",
                    name=f"GET /{service}/healthz",
                    kind=TracesKind.SPAN_KIND_SERVER,
                    status_code=TracesStatusCode.STATUS_CODE_OK,
                    status_message="",
                    resources={
                        "service.name": service,
                        "deployment.environment": "e2e",
                        "host.name": f"{service}-host-01",
                    },
                    attributes={
                        "http.request.method": "GET",
                        "http.response.status_code": "200",
                        "http.route": "/healthz",
                    },
                ),
                Traces(
                    timestamp=now - timedelta(seconds=29),
                    duration=timedelta(milliseconds=45),
                    trace_id=trace_id,
                    span_id=child_span_id,
                    parent_span_id=parent_span_id,
                    name=f"SELECT {service}_db.status",
                    kind=TracesKind.SPAN_KIND_CLIENT,
                    status_code=TracesStatusCode.STATUS_CODE_OK,
                    status_message="",
                    resources={
                        "service.name": service,
                        "deployment.environment": "e2e",
                        "host.name": f"{service}-host-01",
                    },
                    attributes={
                        "db.system": "postgresql",
                        "db.statement": f"SELECT 1 FROM {service}_db.status",
                    },
                ),
            ]
        )
        logs_out.append(
            Logs(
                timestamp=now - timedelta(seconds=15),
                body=f"{service} ready",
                severity_text="INFO",
                resources={
                    "service.name": service,
                    "deployment.environment": "e2e",
                    "host.name": f"{service}-host-01",
                },
                attributes={"ready": "true"},
            )
        )
    insert_traces(traces_out)
    insert_logs(logs_out)
    # Metrics bypassed for now — insert_metrics needs metric specs beyond a simple
    # smoke payload. Traces + logs alone light up Services and Logs-ingestion-active
    # banners.
    _ = insert_metrics  # keep dependency declared so the fixture graph wires up correctly


@@ -0,0 +1,76 @@
# Example Feature - Test Plan Template
## Application Overview
[Describe the feature/module being tested. Include key functionality, user flows, and important business logic.]
Example:
> The Routing Policies feature allows users to create, edit, and manage alert routing configurations. Users can define rules that determine how alerts are routed to different channels based on conditions like severity, labels, or alert names.
## Test Scenarios
### 1. [Main Scenario Category]
**Seed:** `tests/seed.spec.ts`
#### 1.1 [Specific Test Case]
**Pre-conditions:**
- User is logged in (handled by seed test)
- [Any other specific setup needed]
**Steps:**
1. Navigate to [specific page/section]
2. Click on [element description]
3. Fill in [field] with "[test data]"
4. Click [button/action]
5. Verify [expected outcome]
**Expected Results:**
- [Expected UI change or behavior]
- [Expected data state]
- [Expected navigation or feedback]
**Data:**
- Input field: "test value"
- Select option: "option name"
#### 1.2 [Another Test Case]
**Steps:**
1. ...
**Expected Results:**
- ...
### 2. [Another Scenario Category]
#### 2.1 [Test Case]
**Steps:**
1. ...
**Expected Results:**
- ...
## Edge Cases
### 3. Error Handling
#### 3.1 Invalid Input
**Steps:**
1. Enter invalid data
2. Attempt to submit
**Expected Results:**
- Error message displayed
- Form not submitted
- User remains on page
## Notes
- [Any special considerations]
- [Known limitations]
- [Areas requiring manual verification]

tests/e2e/global.setup.ts Normal file

@@ -0,0 +1,36 @@
import fs from 'fs';
import path from 'path';

/**
 * Loads backend coordinates written by the pytest test_setup / test_e2e entry
 * points (at tests/e2e/.signoz-backend.json) and exports them as env vars for
 * the Playwright projects.
 *
 * If SIGNOZ_E2E_BASE_URL is already set (staging fallback, or the pytest-run
 * case where env is injected directly), this is a no-op.
 */
export default async function globalSetup(): Promise<void> {
  if (process.env.SIGNOZ_E2E_BASE_URL) return;

  const endpointsPath = path.resolve(__dirname, '.signoz-backend.json');
  if (!fs.existsSync(endpointsPath)) {
    throw new Error(
      'No .signoz-backend.json. Bring the backend up first:\n' +
        '  cd signoz/tests && uv run pytest --basetemp=./tmp/ --reuse --with-web e2e/src/bootstrap/setup.py::test_setup',
    );
  }

  const endpoints = JSON.parse(fs.readFileSync(endpointsPath, 'utf8')) as {
    base_url: string;
    admin_email: string;
    admin_password: string;
    seeder_url?: string;
  };

  process.env.SIGNOZ_E2E_BASE_URL = endpoints.base_url;
  process.env.SIGNOZ_E2E_USERNAME = endpoints.admin_email;
  process.env.SIGNOZ_E2E_PASSWORD = endpoints.admin_password;
  if (endpoints.seeder_url) {
    process.env.SIGNOZ_E2E_SEEDER_URL = endpoints.seeder_url;
  }
}
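Downstream, a spec can consume the exported variable to seed telemetry per test. A minimal sketch: the `POST /telemetry/traces` route comes from the seeder's described surface, but the helper names, error handling, and payload handling here are illustrative assumptions.

```typescript
// Hypothetical helper: resolve a seeder endpoint from the env var exported by
// global.setup.ts. Returns undefined in staging mode, where no seeder exists.
export function seederEndpoint(
  path: string,
  env: Record<string, string | undefined> = process.env,
): string | undefined {
  const base = env.SIGNOZ_E2E_SEEDER_URL;
  if (!base) return undefined;
  return base.replace(/\/+$/, '') + path;
}

// Usage inside a spec. The span payload shape is defined by the seeder, so in
// practice lift it from a working request rather than hand-building it here.
export async function seedTraces(spans: unknown[]): Promise<void> {
  const url = seederEndpoint('/telemetry/traces');
  if (!url) return; // staging mode: rely on pre-existing tenant data
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(spans),
  });
  if (!res.ok) throw new Error(`seeder returned ${res.status}`);
}
```

A matching teardown would issue the DELETE-based truncate the seeder exposes; its exact route isn't pinned down here, so it is omitted.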

tests/e2e/package.json Normal file

@@ -0,0 +1,45 @@
{
  "name": "signoz-frontend-automation",
  "version": "1.0.0",
  "description": "E2E tests for SigNoz frontend with Playwright",
  "main": "index.js",
  "scripts": {
    "test": "playwright test",
    "test:staging": "SIGNOZ_E2E_BASE_URL=https://app.us.staging.signoz.cloud playwright test",
    "test:ui": "playwright test --ui",
    "test:headed": "playwright test --headed",
    "test:debug": "playwright test --debug",
    "test:chromium": "playwright test --project=chromium",
    "test:firefox": "playwright test --project=firefox",
    "test:webkit": "playwright test --project=webkit",
    "report": "playwright show-report",
    "codegen": "playwright codegen",
    "install:browsers": "playwright install",
    "install:cli": "npm install -g @playwright/cli@latest && playwright-cli install --skills",
    "lint": "eslint . --ext .ts,.js",
    "lint:fix": "eslint . --ext .ts,.js --fix",
    "typecheck": "tsc --noEmit"
  },
  "keywords": [
    "playwright",
    "e2e",
    "testing",
    "signoz"
  ],
  "author": "",
  "license": "MIT",
  "devDependencies": {
    "@playwright/test": "^1.57.0-alpha-2025-10-09",
    "@types/node": "^20.0.0",
    "@typescript-eslint/eslint-plugin": "^6.0.0",
    "@typescript-eslint/parser": "^6.0.0",
    "dotenv": "^16.0.0",
    "eslint": "^9.26.0",
    "eslint-plugin-playwright": "^0.16.0",
    "typescript": "^5.0.0"
  },
  "engines": {
    "node": ">=18.0.0",
    "yarn": ">=1.22.0"
  }
}


@@ -0,0 +1,11 @@
{
  "browser": {
    "browserName": "chromium",
    "launchOptions": { "headless": true }
  },
  "timeouts": {
    "action": 5000,
    "navigation": 30000
  },
  "outputDir": ".playwright-cli"
}


@@ -0,0 +1,101 @@
import { defineConfig, devices } from '@playwright/test';
import dotenv from 'dotenv';
import path from 'path';

const authFile = path.join(__dirname, '.auth/user.json');

// Load environment variables
dotenv.config({ path: path.resolve(__dirname, '.env') });

// Function to get grep pattern based on user role
function getRoleGrepPattern(): string | undefined {
  const userRole = process.env.SIGNOZ_USER_ROLE;
  if (!userRole) {
    console.log('SIGNOZ_USER_ROLE not set, running all tests');
    return undefined;
  }
  switch (userRole.toLowerCase()) {
    case 'admin':
      return '@admin|@editor|@viewer'; // Admin can run all tests
    case 'editor':
      return '@editor|@viewer'; // Editor can run editor and viewer tests
    case 'viewer':
      return '@viewer'; // Viewer can only run viewer tests
    default:
      console.warn(`Unknown role: ${userRole}, running all tests`);
      return undefined;
  }
}

// Computed once so the role is resolved (and logged) a single time
const roleGrepPattern = getRoleGrepPattern();

export default defineConfig({
  testDir: './tests',
  // Pulls backend coordinates from .signoz-backend.json (written by the pytest
  // bootstrap) and sets SIGNOZ_E2E_BASE_URL/USERNAME/PASSWORD before the suite
  // runs. A no-op when SIGNOZ_E2E_BASE_URL is already set (staging mode, or
  // when pytest shelled out to `yarn test` with env pre-injected).
  globalSetup: require.resolve('./global.setup.ts'),
  // Filter tests based on user role
  grep: roleGrepPattern ? new RegExp(roleGrepPattern) : undefined,
  // Run tests in parallel
  fullyParallel: true,
  // Fail the build on CI if you accidentally left test.only
  forbidOnly: !!process.env.CI,
  // Retry on CI only
  retries: process.env.CI ? 2 : 0,
  // Workers
  workers: process.env.CI ? 2 : undefined,
  // Reporter
  reporter: [
    ['html'],
    ['json', { outputFile: 'test-results/results.json' }],
    ['list'],
  ],
  // Shared settings
  use: {
    baseURL:
      process.env.SIGNOZ_E2E_BASE_URL || 'https://app.us.staging.signoz.cloud',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
    colorScheme: 'dark',
    locale: 'en-US',
    viewport: { width: 1280, height: 720 },
  },
  // Configure projects for multiple browsers
  projects: [
    // Login once and save session — all browser projects depend on this.
    // grep is overridden so it always runs regardless of SIGNOZ_USER_ROLE.
    {
      name: 'setup',
      testMatch: /auth\.setup\.ts/,
      grep: /.*/,
    },
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'], storageState: authFile },
      dependencies: ['setup'],
    },
    {
      name: 'firefox',
      use: { ...devices['Desktop Firefox'], storageState: authFile },
      dependencies: ['setup'],
    },
    {
      name: 'webkit',
      use: { ...devices['Desktop Safari'], storageState: authFile },
      dependencies: ['setup'],
    },
  ],
});


@@ -0,0 +1,39 @@
# Alerts + Planned Downtime regression suite
Playwright suite originally developed for `platform-pod/issues/2095-frontend` (migration of alerts and planned-downtime flows to generated OpenAPI clients + generated react-query hooks). 34 steps across 6 flows; it mutates shared tenant state, so it runs serially regardless of the config-level `fullyParallel`.
## Files
- [`SUITE.md`](./SUITE.md) — **stable spec.** What the suite tests and how to execute a run. Read this before driving a browser or modifying the spec.
- [`results-schema.md`](./results-schema.md) — shape of the per-run `results.json` diffable artifact used while this suite was still driven interactively.
- Spec lives at `tests/e2e/tests/alerts-downtime/alerts-downtime.spec.ts`.
## Running
Same lifecycle as the rest of the e2e suite — the pytest bootstrap provisions the backend and sets env vars, Playwright reads `.signoz-backend.json` via `global.setup.ts`.
```bash
# Local: provision the backend, then run the suite against it
cd signoz/tests
uv run pytest --basetemp=./tmp/ -vv --reuse --with-web \
  e2e/src/bootstrap/setup.py::test_setup
cd e2e
yarn test tests/alerts-downtime/
# Staging (requires SIGNOZ_E2E_USERNAME/PASSWORD in .env)
yarn test:staging tests/alerts-downtime/
```
Runtime is ~1m30s sequentially. Artifacts (network captures + screenshots) land in `tests/e2e/tests/alerts-downtime/run-spec-<ts>/` or `$RUN_OUTPUT_DIR` — gitignored.
## Open observations from the original regression runs
These are FE behaviours the 2095 runs surfaced that are not yet fixed. They should be resolved or formally acknowledged as working-as-intended before the suite is considered fully green.
1. **Error-surface inconsistency** (alert-delete modal vs downtime-delete toast). BE returns the same 409 shape for both; only FE render differs.
2. **`PlannedDowntimeForm.tsx:250`** `console.error`s the `validateFields` rejection — pre-existing nit.
3. **`.alert-details__header` selector** no longer matches after the v2 overview refactor; use `.alert-header__input` or `[data-testid=alert-name-input]`. SUITE.md step 3.2 needs amending.
4. **V2 Save Alert Rule button enables without a metric** — toggling Routing Policies and skipping the metric selection lets the user click Save, which fails with 400 `invalid_input`. The error modal surfaces correctly, but the client-side gate should require a metric.
5. **`/api/v2/rules/test` bypasses threshold evaluation by design** — the test endpoint uses `WithSendUnmatched()`, so threshold matching does not gate the sample. `Footer.tsx:94-99` empty-result branch is only reachable via a zero-data query (nonexistent metric / empty filter), not an unsatisfiable threshold. SUITE.md step 2.12 should be amended to use a zero-data query instead of `target: 1e18`.
6. **Detection-method toggle is one-way (anomaly → threshold has no return path).** `CreateAlertRule/index.tsx:49-61` routes to the classic form only for `ANOMALY_BASED_ALERT` (or forced classic). V2 has no anomaly tab (`CreateAlertV2/AlertCondition/AlertCondition.tsx:43-51` comments it out), so users have no UI control to switch back without re-routing through `/alerts/type-selection`. SUITE.md step 6.6 should describe the asymmetric behavior, and the FE should either preserve detection-method tabs in V2 or add a "Switch back" affordance.
7. **v5 builder spec rejects function `namedArgs`.** Posting `functions: [{name:"anomaly", namedArgs:{z_score_threshold:3}}]` to `/api/v2/rules` returns 400 `unknown field "namedArgs"`. The FE's `prepareQueryRangePayloadV5.ts:186-204` converts `namedArgs → args:[{name,value}]` before posting; any direct-fetch test must do the same conversion.
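The `namedArgs → args` conversion item 7 describes might look like this in a direct-fetch test (an illustrative reimplementation; `prepareQueryRangePayloadV5.ts` is the authoritative source and its actual types may differ):

```typescript
type QueryFunctionArg = { name: string; value: unknown };

type QueryFunction = {
  name: string;
  namedArgs?: Record<string, unknown>;
  args?: QueryFunctionArg[];
};

// Convert { namedArgs: { k: v } } into { args: [{ name: k, value: v }] },
// dropping the namedArgs key the v5 builder spec rejects.
export function toV5Function(fn: QueryFunction): QueryFunction {
  const { namedArgs, ...rest } = fn;
  if (!namedArgs) return fn;
  return {
    ...rest,
    args: Object.entries(namedArgs).map(([name, value]) => ({ name, value })),
  };
}
```

Applied to the anomaly example above, `{name: "anomaly", namedArgs: {z_score_threshold: 3}}` becomes `{name: "anomaly", args: [{name: "z_score_threshold", value: 3}]}`.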


@@ -0,0 +1,246 @@
# SUITE — platform-pod/issues/2095-frontend regression sweep
This file is the **stable specification** of the test suite. It doesn't record results — those live in per-run directories (`run-1/`, `run-2/`, …). Re-running the suite = following this file and producing a new `run-N/` from it.
## What this suite verifies
The branch `platform-pod/issues/2095-frontend` migrated the Alerts and Planned Downtime frontend flows to the generated OpenAPI clients and generated react-query hooks. Legacy adapters under `frontend/src/api/alerts/*` and `frontend/src/api/plannedDowntime/*` were deleted; ad-hoc types (`GettableAlert`) retired in favor of generated `RuletypesRuleDTO` / `RuletypesPlannedMaintenanceDTO`. The suite also exercises four real behavior fixes bundled with the refactor:
| Commit | Claim |
|---|---|
| `3db3a8626` | Null rules no longer cause infinite spinner |
| `684ef010d` | Alert detail maps HTTP 404 to `<AlertNotFound>` |
| `dc27a72bf` | Downtime ID kept as string (no number coercion) |
| `ddb0cb66e` | `convertToApiError` on save/delete paths, errors surface via `showErrorModal` |
## Environment
- Target: `baseURL` from `tests/e2e/playwright.config.ts` (set by `global.setup.ts` from the pytest-provisioned `.signoz-backend.json`, or `SIGNOZ_E2E_BASE_URL` in staging mode).
- Credentials: `SIGNOZ_E2E_USERNAME` / `SIGNOZ_E2E_PASSWORD` (pre-populated by the pytest bootstrap; only needed in `.env` for staging mode). Historical runs against `full-mastodon.us.staging.signoz.cloud` used `LOGIN_USERNAME` / `LOGIN_PASSWORD`.
- Driver: `tests/e2e/tests/alerts-downtime/alerts-downtime.spec.ts` via Playwright. Historical runs 1–6 were driven interactively via Playwright MCP.
- Prerequisite: the staging tenant must have at least one notification channel provisioned (Flow 5 selects `#staging-alerts`; if that exact channel isn't available, pick any channel and note the substitution in the run report).
## Naming conventions for destructive actions
Derive once per run: `E2E_TAG = e2e-2095-<unix-ts>`. Every entity created gets this prefix. Never delete anything that doesn't start with the current run's `E2E_TAG`.
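The convention above can be sketched as a pair of helpers (names are ours, not part of the suite):

```typescript
// Derive the run tag once per run from a seconds-resolution Unix timestamp.
export const makeE2eTag = (unixTs: number): string => `e2e-2095-${unixTs}`;

// Guard for destructive actions: only entities carrying the current run's
// tag prefix may be deleted.
export const safeToDelete = (entityName: string, runTag: string): boolean =>
  entityName.startsWith(runTag);

// e.g. at the top of a run:
export const E2E_TAG = makeE2eTag(Math.floor(Date.now() / 1000));
```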
## Suite structure
Six flows, 47 numbered steps total. Flow order: 1 depends on 2 (needs a rule to toggle/delete); best practice is `Flow 2 step 1-3 → Flow 1 → Flow 3 → Flow 4 → Flow 5 → Flow 6`. Each flow's detailed steps are below; full narrative with rationale lives in `run-1/0X-*.md` for reference.
Steps 2.8–2.10 were added after run-1 to cover the labels/annotations round-trip exposed by `types/api/alerts/convert.ts`. They are expected to appear as "new scenarios added since last run" in run-2's `DIFF_vs_run-1.md`.
Steps 2.11–2.13 were added after run-4 to cover the `/api/v2/rules/test` (Test Notification) endpoint and the V2 footer button's success / empty-result / validation-gated states (`CreateAlertV2/Footer/Footer.tsx:81-219`). They are expected to appear as "new scenarios added since last run" in run-5's `DIFF_vs_run-3.md`.
Flow 6 (anomaly alerts, steps 6.1–6.8) was added after run-5 to cover the anomaly-rule creation path, which always routes through the classic `FormAlertRules` on this branch (see `CreateAlertRule/index.tsx:49-61`; `CreateAlertV2` is never rendered when `alertType === ANOMALY_BASED_ALERT`). Expected to appear as "new scenarios added since last run" in the first full run after run-5.
Step 6.9 was added during run-6 to extend Flow 6 with `/api/v2/rules/test` coverage for the anomaly DTO (the classic anomaly form has no Test Notification button, so 6.9 is a direct-fetch API-contract probe). Expected to appear as "new scenarios added since last run" in the first run after run-6.
### Flow 1 — Alerts list, toggle, delete
Files exercised: `container/ListAlertRules/{ListAlert,DeleteAlert,ToggleAlertState}.tsx`.
| Step | Action | Expected | Regression-fix tie-in |
|---|---|---|---|
| 1.1 | Navigate `/alerts?tab=AlertRules` on a tenant with zero rules | Empty state (`No Alert rules yet.`) renders; no infinite spinner | `3db3a8626` |
| 1.2 | Open action menu (ellipsis) on a row | Menu shows Disable, Edit, Edit in New Tab, Clone, Delete | — |
| 1.3 | Click Disable | Status cell flips `OK` → `Disabled`; menu label flips to `Enable` | `patchRuleByID` roundtrip |
| 1.4 | Click Enable | Status flips back to `OK`; menu label back to `Disable` | same |
| 1.5 | Click Delete (no confirm modal for rules) | Row disappears from table immediately | `deleteRuleByID`; optimistic `setData` at `DeleteAlert.tsx:38` |
| 1.6 | Hard-refresh list | Row is gone | BE delete actually persisted |
### Flow 2 — Alert create, edit, clone
Files exercised: `container/CreateAlertV2/Footer/Footer.tsx`, `container/FormAlertRules/index.tsx`, `pages/AlertDetails/hooks.tsx::useAlertRuleDuplicate / useAlertRuleUpdate`, `types/api/alerts/convert.ts::toPostableRuleDTO` (labels / annotations mapping).
| Step | Action | Expected |
|---|---|---|
| 2.1 | Click `New Alert Rule` → v2 metric-based form opens | Form renders; Save disabled |
| 2.2 | Fill name `${E2E_TAG}-create`, pick first metric (e.g. `app.ads.ad_requests`), threshold default 0 | Chart plots; Save still disabled (channel/routing required) |
| 2.3 | Enable `Use Routing Policies` toggle (or select the test channel if configured) | Save enables |
| 2.4 | Click Save Alert Rule | Redirect to list; row `${E2E_TAG}-create` appears with status `OK` |
| 2.5 | Click row → overview loads → rename to `${E2E_TAG}-renamed` via header input → Save | Redirect to list; row name updated |
| 2.6 | Open menu → Clone | New rule `${E2E_TAG}-renamed - Copy` created; navigation lands on its overview URL |
| 2.7 | Visit History tab from the cloned overview | History widgets (TOTAL TRIGGERED / AVG RESOLUTION TIME / TOP CONTRIBUTORS) render; no spinner |
| 2.8 | On a fresh `/alerts/new` form, after 2.1–2.3: add two labels via the labels input — `env=prod`, `severity=warn`; fill annotations `description=${E2E_TAG}-desc`, `summary=${E2E_TAG}-summary`; name `${E2E_TAG}-labels`. Save; capture POST body to `network/02_step2.8_POST_rules.json` | POST body `labels` object contains exactly `{env: "prod", severity: "warn"}`; `annotations` carries the description/summary. Row `${E2E_TAG}-labels` appears in list |
| 2.9 | Click the `${E2E_TAG}-labels` row → open its Edit form | Labels input shows both chips; annotations inputs rehydrate with the saved description/summary |
| 2.10 | In the Edit form: clear the **value** of the `severity` label (leave the key, blank the value); Save; capture PUT body to `network/02_step2.10_PUT_rules.json` | Document the resulting PUT body's `labels` shape. Two behaviors possible and both are acceptable to observe — note whichever occurs: (a) `severity` key retained with `""` value (UI passes empty strings through; `stripUndefinedLabels` only filters `undefined`), or (b) `severity` key absent (UI coerces blank to `undefined` and our helper drops it — this is the **silent ignore** surface) |
| 2.11 | On a fresh `/alerts/new` V2 form, fill name `${E2E_TAG}-test-notif`, pick a metric known to produce non-zero values on staging (e.g. `app.currency_counter` — the one run-3 used for 2.4), leave threshold default (target `0`, op `>`), enable `Use Routing Policies`. Click `Test Notification`. Capture request + response to `network/02_step2.11_POST_rules_test.json` | `POST /api/v2/rules/test` fires with a full `RuletypesPostableRuleDTO` body (same shape as the create POST in `run-3/network/02_step2.4_POST_rules.json`: top-level `alert`, `condition`, `annotations`, `labels`, `notificationSettings.usePolicy: true`, `evaluation`). Response 200 with `data.alertCount > 0`. Toast `Test notification sent successfully` (green). Nothing persisted — the rule does **not** appear in the list. Button icon is `<Loader>` during the request, reverts to `<Send>` on completion. Covers `Footer.tsx:94-102` success branch |
| 2.12 | Same form as 2.11, but before clicking Test Notification change the threshold target to `1e18` (or any value the metric can't exceed) with op `>`. Click `Test Notification`; capture to `network/02_step2.12_POST_rules_test.json` | `POST /api/v2/rules/test` returns 200 with `data.alertCount === 0`. Toast `No alerts found during the evaluation. This happens when rule condition is unsatisfied. You may adjust the rule threshold and retry.` (red). No persisted rule. Covers the empty-result branch at `Footer.tsx:94-99` |
| 2.13 | On a fresh V2 form with name empty and no metric picked, inspect the `Test Notification` button. Hover to read the tooltip. Then fill name + metric + routing; re-inspect | While validation fails: button has `disabled` attribute, Ant tooltip shows the message from `validateCreateAlertState` (e.g. name-required or channel-required — whichever the current branch surfaces). No `POST /api/v2/rules/test` fires while disabled. Once validation passes, the button becomes enabled (no tooltip). Covers `Footer.tsx:169-170,204,210-211` and reinforces 2.3's gating contract |
Notes on **silent ignore** for future sessions: `convert.ts::stripUndefinedLabels` drops entries whose value is `undefined` (not entries whose value is `""`). The UI rarely produces an `undefined`-valued label directly — most inputs produce strings, possibly empty. Step 2.10 is the closest black-box probe. Authoritative coverage of the `undefined` filter lives (or will live) in a Jest test for `convert.ts`; if that test is missing, flag it as an observation rather than a step failure.
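For orientation, the filter semantics that note describes can be reconstructed roughly as follows (illustrative only; the real code lives in `types/api/alerts/convert.ts`):

```typescript
// Drops entries whose value is undefined; empty strings pass through.
// Reconstruction of the described stripUndefinedLabels behavior, not the
// actual implementation.
export function stripUndefinedLabels(
  labels: Record<string, string | undefined>,
): Record<string, string> {
  return Object.fromEntries(
    Object.entries(labels).filter(([, value]) => value !== undefined),
  ) as Record<string, string>;
}
```

Under these semantics, step 2.10's outcome (a) corresponds to the UI emitting `severity: ""` (kept), and outcome (b) to it emitting `severity: undefined` (dropped).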
### Flow 3 — Alert details and 404
Files exercised: `pages/AlertDetails/{AlertDetails,AlertHeader/AlertHeader,hooks}.tsx`, `pages/AlertDetails/AlertNotFound/AlertNotFound.tsx`.
| Step | Action | Expected | Regression-fix tie-in |
|---|---|---|---|
| 3.1 | On a valid rule's `/alerts/overview?ruleId=<id>` | Breadcrumb `Alert Rules/<id>`, AlertHeader, Overview+History tabs, chart | — |
| 3.2 | DOM probe: `.alert-details__header`, `.ant-breadcrumb`, tab count | All present (tab count ≥ 2) | — |
| 3.3 | Click History tab | History page loads with the three widgets | — |
| 3.4 | Visit `/alerts/overview?ruleId=00000000-0000-0000-0000-000000000000` | `<AlertNotFound>` renders; document title = `Alert Not Found`; two reason panels | `684ef010d` |
| 3.5 | Visit `/alerts/overview` with no query params | Same `<AlertNotFound>` render (`!isValidRuleId` branch) | `AlertDetails.tsx:107` |
| 3.6 | Delete a rule, then revisit its overview URL | `<AlertNotFound>` render (HTTP 404 → error branch) | `684ef010d` |
### Flow 4 — Planned Downtime CRUD
Files exercised: `container/PlannedDowntime/*`.
| Step | Action | Expected | Regression-fix tie-in |
|---|---|---|---|
| 4.1a | Deep-link to `/alerts?tab=Configuration&subTab=planned-downtime` in a fresh tab | List renders | **Known FAIL (run 1)** — infinite spinner; no `GET /downtime_schedules` fires |
| 4.1b | From `/alerts?tab=AlertRules`, click `Configuration` tab → Planned Downtime sub-tab | List renders | — |
| 4.2 | Click `New downtime`, fill name `${E2E_TAG}-downtime-once`, start = today, end = tomorrow, timezone UTC, no alerts selected, `Does not repeat`, submit | Row appears with `24 hours` duration chip and start-time caption | `b608ac6f8`, `739c868ef` |
| 4.3 | Probe validation: submit with `Ends on` empty | Inline red error `Please enter Ends on` | client-side validation |
| 4.4 | Click pencil → edit name to `${E2E_TAG}-downtime-edited` → `Update downtime schedule` | Row name updates; `24 hours` duration preserved | `dc27a72bf` |
| 4.5 | Click trash → `Delete Schedule` modal → confirm | Row removed; list returns to `No data` | — |
### Flow 5 — Classic experience + cascade-delete error paths
Files exercised: `container/FormAlertRules/index.tsx` (classic form), `DeleteAlert.tsx`, `PlannedDowntimeForm.tsx`, `PlannedDowntimeDeleteModal.tsx`, `api/ErrorResponseHandlerForGeneratedAPIs.ts`.
| Step | Action | Expected | Regression-fix tie-in |
|---|---|---|---|
| 5.1 | `/alerts/new` → click `Switch to Classic Experience` | URL gains `showClassicCreateAlertsPage=true&ruleType=threshold_rule`; classic `FormAlertRules` renders with 3-step layout | — |
| 5.2 | Fill: first metric, name `${E2E_TAG}-classic`, select channel from `Notification Channels` dropdown | `Create Rule` enables | — |
| 5.3 | Click `Create Rule`, confirm `Save Changes` modal OK | Redirect to list; row `${E2E_TAG}-classic` with severity `warning` | `createRule` generated client |
| 5.4 | Create downtime `${E2E_TAG}-downtime-linked` (see Flow 4 steps for mechanics) and attach the classic alert in `Silence Alerts` | Downtime row shows the `24 hours` chip; `${E2E_TAG}-downtime-linked` created | — |
| 5.5 | Alert Rules tab → ellipsis on the classic alert → Delete | **Error modal** surfaces: `already_exists` code, message `"cannot delete rule because it is referenced by a planned maintenance, remove the rule from the planned maintenance first"`. Row remains. | `ddb0cb66e` → `showErrorModal` |
| 5.6 | Configuration → trash on the downtime → `Delete Schedule` | **Error notification (toast)**: same `already_exists` code, message `"cannot delete planned maintenance because it is referenced by associated rules, remove the rules from the planned maintenance first"`. Network DELETE returns 409. Row remains. | `ddb0cb66e` |
### Flow 6 — Anomaly alerts
Files exercised: `pages/AlertTypeSelection/AlertTypeSelection.tsx`, `container/CreateAlertRule/SelectAlertType/index.tsx`, `container/CreateAlertRule/index.tsx:49-61` (classic-vs-V2 dispatch), `container/FormAlertRules/index.tsx` (with `ruleType=anomaly_rule`), `container/FormAlertRules/RuleOptions.tsx:100-124,381-390` (anomaly-specific inputs — z-score, seasonality), `ee/query-service/rules/anomaly.go` (BE evaluation).
Anomaly alerts use the **classic** form exclusively on this branch. The V2 anomaly tab in `CreateAlertV2/AlertCondition/AlertCondition.tsx:43-51` is commented out; `buildCreateAnomalyAlertRulePayload` in `CreateAlertV2/Footer/utils.tsx:291` is still a TODO stub. The classic flow serializes anomaly as a query function: `functions: [{ name: "anomaly", namedArgs: { z_score_threshold: <n> } }]` with `ruleType: "anomaly_rule"`, `alertType: "METRIC_BASED_ALERT"` (see `FormAlertRules/index.tsx:216-233`).
**Prerequisite:** tenant must have the `anomaly_detection` feature flag active (`constants/features.ts:9`). Without it, the type-selection card is hidden but direct-URL access still works. Step 6.1 gates on the flag; if it's off, mark 6.1 as `blocked` in the run and skip to 6.2 via direct URL.
| Step | Action | Expected |
|---|---|---|
| 6.1 | Navigate `/alerts/type-selection`. Observe alert-type cards rendered | With `anomaly_detection` flag on: five cards including an **Anomaly** card with a `Beta` tag (`data-testid="alert-type-card-ANOMALY_BASED_ALERT"`). With flag off: only four cards, no Anomaly entry. |
| 6.2 | Click the Anomaly card (or if flag off, direct-URL to `/alerts/new?ruleType=anomaly_rule&alertType=METRIC_BASED_ALERT`) | Page lands on `/alerts/new?ruleType=anomaly_rule&alertType=METRIC_BASED_ALERT`. Classic `FormAlertRules` renders (not `CreateAlertV2`), top-of-page detection-method tabs show `Threshold Alert` and `Anomaly Detection Alert` with the anomaly tab selected. |
| 6.3 | On the anomaly form: pick a metric with data on staging (e.g. `app.currency_counter`), fill name `${E2E_TAG}-anomaly`, leave z-score deviation at default (`3`), pick a channel from `Notification Channels` | `Create Rule` enables. The query-builder chart preview renders with the overlay/anomaly-band UI specific to `ruleType=anomaly_rule` (`FormAlertRules/ChartPreview/index.tsx:338`). |
| 6.4 | Click `Create Rule` → confirm the `Save Changes` modal OK. Capture POST body to `network/06_step6.4_POST_rules.json` | POST `/api/v2/rules` => 201. Body has `ruleType: "anomaly_rule"`, `alertType: "METRIC_BASED_ALERT"`, and `condition.compositeQuery.builder.queryData[0].functions` contains `{ name: "anomaly", namedArgs: { z_score_threshold: 3 } }`. Redirect to list; row `${E2E_TAG}-anomaly` appears (rule type visible in the row or accessible via detail). |
| 6.5 | Click the row → overview loads → click Edit. Change the z-score deviation from `3` to `5`. Save. Capture PUT body to `network/06_step6.5_PUT_rules.json` | PUT `/api/v2/rules/<id>` body's `functions[anomaly].namedArgs.z_score_threshold` is `5` (serialized via `FormAlertRules/index.tsx:216-233`'s z-score branch). List reflects the update after redirect. |
| 6.6 | On a fresh `/alerts/new?ruleType=anomaly_rule&alertType=METRIC_BASED_ALERT`, click the `Threshold Alert` detection-method tab | URL updates to `ruleType=threshold_rule` (`FormAlertRules/index.tsx:245-259` useEffect). The anomaly function is **removed** from `currentQuery.builder.queryData[*].functions` (`FormAlertRules/index.tsx:234-240`). Switching back to `Anomaly Detection Alert` re-adds the anomaly function with the current `z_score_threshold`. No network call fires on tab toggles. |
| 6.7 | From the list: ellipsis on the `${E2E_TAG}-anomaly` row → Delete. Capture DELETE to `network/06_step6.7_DELETE_rules.json` | DELETE `/api/v2/rules/<id>` => 204 (same path as threshold delete). Row disappears; hard-refresh confirms. |
| 6.8 | Visit the deleted anomaly rule's overview URL (`/alerts/overview?ruleId=<id>`) | `<AlertNotFound>` renders — same 404 branch as Flow 3.6. Confirms the detail page's AlertNotFound path works for anomaly rules too, not just threshold. |
| 6.9 | `POST /api/v2/rules/test` with the anomaly DTO from 6.4 (no rule needs to exist; this is a fire-and-forget evaluation). Capture to `network/06_step6.9_POST_rules_test.json`. | 200 with `data.alertCount >= 1` and `message: "notification sent"`. Confirms `/api/v2/rules/test` accepts the anomaly rule shape (`ruleType: "anomaly_rule"`, `condition.algorithm`, `condition.seasonality`, `functions[anomaly]`) and returns the same envelope shape as a threshold test. **No UI driver** for this step — the classic anomaly form has no Test Notification button (the button from Flow 2.11–2.13 is V2-only); `6.9` is an API-contract probe via direct fetch. The BE-side observation from run-5 (`SendUnmatched` bypasses threshold evaluation) applies here too — `alertCount: 0` is reachable only via a zero-data query, not via threshold tweaking. |
Notes for future runs:
- **FE serialization is z-score-only** on this branch. The `seasonality` / `algorithm` fields surfaced by `AnomalyThreshold.tsx` (V2) are not sent by the classic form's POST body. If a future run observes `seasonality` or `algorithm` keys in `06_step6.4_POST_rules.json`, flag as new coverage.
- **BE evaluation** lives in `ee/query-service/rules/anomaly.go` (daily provider by default; hourly/weekly selected by the `seasonality` field when sent). Not directly probed by UI regression — BE-unit tests at `ee/query-service/rules/anomaly_test.go` are the authoritative source.
- The **Test Notification** button from Flow 2.11–2.13 is V2-only and does **not** render on the classic anomaly form. Step 6.9 probes the same endpoint by direct fetch with the anomaly DTO; if the V2 anomaly tab is uncommented in a future branch, add a UI-driven sibling step (e.g. `6.9b`) that clicks the V2 footer button.
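Since step 6.9 has no UI driver, the probe is a plain fetch against the backend. A minimal sketch, assuming Node 18+ `fetch` and a bearer-token auth header (the auth shape and everything in the DTO beyond the fields the step asserts on are assumptions — in practice, replay the captured `06_step6.4_POST_rules.json` body instead of this hand-built skeleton):

```typescript
// Hand-built skeleton of the anomaly DTO. Only the fields the 6.9
// assertions mention are modeled; a real run should replay the 6.4 capture.
type AnomalyRuleDTO = {
  alert: string;
  ruleType: 'anomaly_rule';
  alertType: 'METRIC_BASED_ALERT';
  condition: {
    algorithm?: string;   // not sent by the classic form on this branch
    seasonality?: string; // ditto — see "FE serialization is z-score-only"
    compositeQuery: {
      builder: {
        queryData: Array<{
          functions: Array<{ name: string; namedArgs: Record<string, number> }>;
        }>;
      };
    };
  };
};

function buildAnomalyTestPayload(tag: string, zScore: number): AnomalyRuleDTO {
  return {
    alert: `${tag}-anomaly`,
    ruleType: 'anomaly_rule',
    alertType: 'METRIC_BASED_ALERT',
    condition: {
      compositeQuery: {
        builder: {
          queryData: [
            { functions: [{ name: 'anomaly', namedArgs: { z_score_threshold: zScore } }] },
          ],
        },
      },
    },
  };
}

// Fire-and-forget evaluation probe. The Authorization header shape is an
// assumption; use whatever credential the session already carries.
async function probeRulesTest(baseUrl: string, token: string, dto: AnomalyRuleDTO) {
  const res = await fetch(`${baseUrl}/api/v2/rules/test`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
    body: JSON.stringify(dto),
  });
  return { status: res.status, body: await res.json() };
}
```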
## Cleanup
End-of-run: every entity named `${E2E_TAG}*` must be removed. Alerts: menu → Delete. Downtime: trash → confirm. Verify no residue remains before wrapping up.
**Known blocker for Flow 5 cleanup (run 1)**: after creating the downtime-linked entities, the UI's "edit downtime → remove silenced alert → delete downtime → delete alert" path does not actually clear the server-side association. See `run-1/05-classic-and-cascade-delete.md` "Cleanup residue" for the hypothesis. If this remains true in run-N, document it as cleanup residue and move on.
## How to execute a new run
Follow this procedure if you (a future LLM session) are asked to re-run the suite.
1. Artifacts land at `tests/e2e/tests/alerts-downtime/run-spec-<ts>/` (or `$RUN_OUTPUT_DIR` if set). Derive a new `E2E_TAG` from `date +%s` — the spec does this at module load.
2. Log in to staging via Playwright MCP (reuse existing session cookie if available).
3. Walk Flows 1–5 step-by-step. For each step:
- Take a screenshot with filename `<flow>_step<stepId>_<slug>.png` and save to `run-<N>/screenshots/`.
- Record observed state (DOM probe results, notification text, network response codes) in the per-flow markdown log.
- Mark each step as `pass` / `fail` / `blocked` / `skipped`.
4. Write per-flow logs to `run-<N>/0X-*.md` with the same structure as `run-1/` (one table per flow, one row per step, screenshot and network-capture links per row).
5. Write `run-<N>/results.json` conforming to the schema in `results-schema.md`. This is the diffable artifact.
6. Write `run-<N>/RUN_REPORT.md` — narrative summary.
7. Write `run-<N>/DIFF_vs_run-<N-1>.md` comparing the two runs. Use the comparison template below.
### Capturing API request/response bodies per step
Every step that triggers a backend call (create, update, delete, toggle, list refetch, etc.) must also save the request + response payload. Do **not** embed the bodies inline in the markdown — dump them to files under `run-<N>/network/`, one JSON per step.
Filename pattern: `<flow>_step<stepId>_<METHOD>_<endpoint-slug>.json`
Examples:
- `02_step2.4_POST_rules.json` — create alert (Flow 2 step 2.4)
- `04_step4.4_PUT_downtime_schedules.json` — update downtime
- `05_step5.5_DELETE_rules.json` — the BE rejection when deleting a referenced alert
- `05_step5.6_DELETE_downtime_schedules.json` — the HTTP 409 on the referenced-downtime delete
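These filenames can be generated mechanically rather than typed by hand. A sketch — the slugging rules (dropping the versioned `/api` prefix, mapping `/` to `_`) are an assumption inferred from the examples, not spec'd anywhere:

```typescript
// Builds network-capture filenames per the pattern
//   <flow>_step<stepId>_<METHOD>_<endpoint-slug>.json
// `suffix` handles the multi-call case ('a', 'b', ...).
function captureFilename(
  flow: string,     // e.g. '02'
  stepId: string,   // e.g. '2.4'
  method: string,   // e.g. 'POST'
  endpoint: string, // e.g. '/api/v2/rules'
  suffix?: string,
): string {
  const slug = endpoint
    .replace(/^\/api\/v\d+\//, '') // drop the versioned API prefix
    .replace(/\//g, '_');          // flatten remaining path segments
  const tail = suffix ? `_${suffix}` : '';
  return `${flow}_step${stepId}_${method.toUpperCase()}_${slug}${tail}.json`;
}
```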
File shape:
```jsonc
{
  "request": {
    "method": "POST",
    "url": "/api/v1/rules",
    "headers": { /* verbatim */ },
    "body": { /* parsed JSON or null */ }
  },
  "response": {
    "status": 200,
    "headers": { /* verbatim */ },
    "body": { /* parsed JSON or text */ }
  },
  "step": "2.4",
  "flow": "flow-2"
}
```
Capture bodies verbatim — no redaction. Capture mechanics via Playwright MCP: after the action that fires the call, use `browser_network_requests` with `requestBody: true` and the endpoint regex as `filter` to capture the request. For response bodies, the simplest path is `page.on('response', ...)` before the action fires — alternatively, re-issue the call via `browser_evaluate` + `fetch` and save both sides.
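Whichever capture path is used, assembling the dump file is a pure transformation into the shape above. A sketch, assuming a flat capture record as input (the `Capture` field names are hypothetical — map them from whatever your listener or MCP tool actually yields):

```typescript
// Assembles one network-dump file in the documented shape. Keys mirror the
// "File shape" jsonc exactly; bodies pass through verbatim, no redaction.
type Capture = {
  method: string;
  url: string;
  requestHeaders: Record<string, string>;
  requestBody: unknown;  // parsed JSON or null
  status: number;
  responseHeaders: Record<string, string>;
  responseBody: unknown; // parsed JSON or text
};

function toNetworkDump(step: string, flow: string, c: Capture) {
  return {
    request: {
      method: c.method,
      url: c.url,
      headers: c.requestHeaders,
      body: c.requestBody,
    },
    response: {
      status: c.status,
      headers: c.responseHeaders,
      body: c.responseBody,
    },
    step,
    flow,
  };
}

// Wiring sketch: register the page.on('response') listener BEFORE the action
// fires, build the dump from the matched pair, then persist it with
// fs.writeFileSync(path, JSON.stringify(dump, null, 2)).
```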
Rules:
- **Only capture calls we triggered**: don't dump unrelated polling, telemetry, or preference reads.
- **One file per call**; if a step triggers multiple calls (e.g. delete then list refetch), append `_a`, `_b` to the filename.
- **Reference the file from `results.json`** via the `network` field — see `results-schema.md`.
For `GET` refetches that fire automatically after a mutation, capture is encouraged when:
- The response shape might change between runs (API contract drift).
- The step's assertion depends on the response (e.g. "new row appears" → the list-refetch response tells you whether the row is really there).
## Diff template (run-N vs run-(N-1))
```markdown
# Diff — run-<N> vs run-<N-1>
Dates: <old date> → <new date>
Session tags: <old> → <new>
## Results delta
| Scenario | run-<N-1> | run-<N> | Delta |
|---|---|---|---|
| 1.1 Empty list renders | pass | pass | same |
| 1.5 Row delete | pass | pass | same |
| ... | ... | ... | ... |
| 4.1a Deep-link spinner | FAIL | pass | **FIXED** |
| 5.6 Downtime delete error | pass | FAIL | **REGRESSED** |
## Newly passing (regressions fixed)
- <list of IDs that went fail → pass>
## Newly failing (regressions introduced)
- <list of IDs that went pass → fail>
## New scenarios added since last run
- <list>
## Retired / removed scenarios
- <list>
## Cleanup residue delta
- <what entities remained after each run, whether cleanup path still blocked>
## Notes for the next run
- <observations that might matter next time>
```
Produce the diff by reading both runs' `results.json` and narrating the deltas. Screenshots aren't image-diffed automatically — only flag visible differences that were observed during live driving.
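The results-delta computation can be sketched directly: step ids are the join key, which is exactly why the schema forbids renaming them — a renamed id would silently show up as one retired step plus one new step instead of a delta. A minimal sketch, assuming only the `flows[].steps[]` portion of `results.json`:

```typescript
// Computes the run-N vs run-(N-1) deltas from two parsed results.json files.
type Step = { id: string; result: 'pass' | 'fail' | 'blocked' | 'skipped' };
type Run = { flows: Array<{ steps: Step[] }> };

function stepMap(run: Run): Map<string, string> {
  const m = new Map<string, string>();
  for (const f of run.flows) for (const s of f.steps) m.set(s.id, s.result);
  return m;
}

function diffRuns(prev: Run, next: Run) {
  const a = stepMap(prev);
  const b = stepMap(next);
  const fixed: string[] = [];     // fail -> pass ("Newly passing")
  const regressed: string[] = []; // pass -> fail ("Newly failing")
  const added: string[] = [];     // ids present only in the new run
  const retired: string[] = [];   // ids present only in the old run
  for (const [id, res] of b) {
    if (!a.has(id)) added.push(id);
    else if (a.get(id) === 'fail' && res === 'pass') fixed.push(id);
    else if (a.get(id) === 'pass' && res === 'fail') regressed.push(id);
  }
  for (const id of a.keys()) if (!b.has(id)) retired.push(id);
  return { fixed, regressed, added, retired };
}
```

The output maps one-to-one onto the diff template's "Newly passing" / "Newly failing" / "New scenarios" / "Retired" sections; the narrative around it is still written by hand.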
## Schema for `results.json`
See `results-schema.md` for the exact shape. Keep step `id` values stable across runs (e.g. `1.1`, `2.7`, `4.1a`) so diffs are straightforward.
## Updating this spec
If a flow gains or loses steps on develop, update this file AND `results-schema.md` before the next run so the diff comparison remains meaningful. Never rename existing step IDs — only append new ones or mark old ones as `retired`.

# results.json schema
Each run produces one `run-<N>/results.json` conforming to this shape. This is the diffable artifact — it's what lets a future run produce a meaningful `DIFF_vs_run-<N-1>.md`.
## Shape
```jsonc
{
  "run": 1,             // integer, matches dir name (run-1 → 1)
  "date": "2026-04-19", // ISO date of run
  "sessionTags": [      // array, one per independent E2E_TAG used during the run
    "e2e-2095-1776624456",
    "e2e-2095-1776626343"
  ],
  "target": "https://full-mastodon.us.staging.signoz.cloud",
  "flows": [
    {
      "id": "flow-1",
      "name": "Alerts list, toggle, delete",
      "steps": [
        {
          "id": "1.1",      // matches SUITE.md step id
          "name": "Empty list renders",
          "result": "pass", // pass | fail | blocked | skipped
          "screenshot": "screenshots/01_step1_rules-list-empty.png",
          "network": [      // array of captured req/resp dumps for this step; [] if none
            "network/01_step1.1_GET_rules.json"
          ],
          "notes": "Empty state copy visible; no spinner"
        },
        {
          "id": "1.5",
          "name": "Delete row",
          "result": "pass",
          "screenshot": "screenshots/01_step5_deleted.png",
          "network": [
            "network/01_step1.5_DELETE_rules.json", // the delete itself
            "network/01_step1.5_GET_rules_b.json"   // auto-refetch that fired after
          ],
          "notes": "Row disappeared. DELETE returned 204; refetch confirms row is gone."
        }
        // ...
      ]
    }
    // ... flows 2-5
  ],
  "observations": [ // free-form list of things noticed that aren't step-level pass/fail
    {
      "severity": "info", // info | nit | bug | regression
      "title": "Direct-URL spinner on /alerts?tab=Configuration&subTab=planned-downtime",
      "where": "Flow 4 step 4.1a",
      "detail": "Page hangs on ant-spin; no GET /downtime_schedules fires until user navigates via tab click."
    },
    { "severity": "nit", "title": "PlannedDowntimeForm.handleOk logs validateFields rejection", "where": "PlannedDowntimeForm.tsx:250", "detail": "..." }
  ],
  "cleanupResidue": [ // entities the suite couldn't clean up
    { "type": "rule", "name": "e2e-2095-1776626343-classic", "reason": "referenced by downtime whose cleanup path is blocked" },
    { "type": "downtime", "name": "e2e-2095-1776626343-downtime-linked", "id": "019da737-880e-796a-adc3-acf0b2161cc5", "reason": "edit drawer fails to hydrate alertRules so unlinking via UI has no effect" }
  ],
  "counts": {
    "steps": 29,
    "pass": 28,
    "fail": 1,
    "blocked": 0,
    "skipped": 0,
    "screenshots": 39
  }
}
```
## Rules for a re-run
1. **Step ids never change.** If a flow changes, append new steps (e.g. `1.7`) or mark old ones `retired`. This preserves diffability.
2. **`result` values are from a fixed set.** `pass | fail | blocked | skipped`. Do not invent new values.
3. **Screenshots must use the same filename pattern** as run-1 so a file-level diff remains tractable. Pattern: `<flow>_step<stepId>_<slug>.png`.
4. **Network captures follow a parallel filename pattern.** `<flow>_step<stepId>_<METHOD>_<endpoint-slug>.json` under `run-<N>/network/`. Multiple calls per step get `_a`, `_b` suffixes. Bodies are captured verbatim (no redaction). See `SUITE.md` "Capturing API request/response bodies per step".
5. **Observations** are separate from step results. Even if all 29 steps pass, regressions can surface as observations (e.g. a new console error not tied to a scenario).
6. **The diff between runs is computed from `results.json`.** Always fill in every field for every step in SUITE.md, even if you didn't learn anything new — a null/missing result on a future run will look like a step was skipped. Across runs, diffing request/response payloads between corresponding `network/*.json` files is a first-class signal for API contract drift.

# SigNoz Routing Policies - Comprehensive Test Plan
## Application Overview
The SigNoz Routing Policies feature is an alert routing and notification management system that allows users to define rules for routing alerts to specific notification channels based on alert attributes and conditions. The feature is located under **Alerts > Configuration > Routing Policies** and provides:
- **Policy Management**: Create, view, edit, and delete routing policies
- **Expression-Based Routing**: Define complex routing conditions using expressions
- **Notification Channel Integration**: Route alerts to configured notification channels
- **Search and Pagination**: Efficiently manage multiple policies
- **Real-time Validation**: Validate expressions and policy configurations
## User Role Permissions
- **@admin**: Can execute all test scenarios (create, edit, delete, view)
- **@editor**: Can execute editor and viewer test scenarios (create, edit, view)
- **@viewer**: Can only execute viewer test scenarios (view only)
## Test Scenarios
### 1. Navigation and Page Layout **[@viewer]**
**Seed:** `tests/seed.spec.ts`
#### 1.1 Navigate to Routing Policies and Verify Page Layout **[@viewer]**
**Steps:**
1. Login to SigNoz application
2. Click on "Alerts" in the main navigation
3. Click on "Configuration" tab
4. Click on "Routing Policies" sub-tab
5. Observe page layout and components
**Expected Results:**
- Successfully navigates to Routing Policies page
- Page displays "Routing Policies" heading
- Page shows "Create and manage routing policies" description
- Header contains "Routing Policies" title
- Search functionality is prominently displayed with placeholder "Search for a routing policy..."
- "New routing policy" button with plus icon is visible and clickable
- Policy list displays in expandable table format
- Each policy shows name and management icons (edit/delete)
- Existing policies are displayed in a table format (if any exist)
- Pagination controls are present at bottom
- Page maintains consistent SigNoz design system
### 2. Creating New Routing Policies **[@admin]**
**Seed:** `tests/seed.spec.ts`
#### 2.1 Create Routing Policies with Basic and Complex Expressions **[@admin]**
**Steps:**
1. Navigate to Routing Policies page
2. Click "New routing policy" button
3. Fill in routing policy name: "Critical Payment Alerts"
4. Fill in description: "Route critical payment service alerts to Slack"
5. Enter expression: `service.name == "payment" && severity == "critical"`
6. Select notification channel from dropdown
7. Click "Save Routing Policy"
8. Ensure that the success notification shows
**Expected Results:**
- "Create routing policy" dialog opens for each policy creation
- All form fields are properly labeled and accessible
- Policy name field accepts alphanumeric characters and special characters
- Description field accepts multi-line text
- Expression field provides syntax guidance
- Notification channels dropdown shows available channels
- Save button is enabled when required fields are filled
- Policy is created successfully and appears in the policies list
- Success message or confirmation is displayed for each creation
- Multiple notification channels can be selected
- Policies save successfully with complex conditions
#### 2.2 Create Policy with Empty Required Fields **[@admin]**
**Steps:**
1. Click "New routing policy" button
2. Leave name field empty
3. Enter valid expression and select notification channel
4. Attempt to save
**Expected Results:**
- Validation prevents saving
- Required field indicators are shown
- Error messages specify which fields are required
- Form submission is blocked until required fields are filled
#### 2.3 Cancel Policy Creation **[@admin]**
**Steps:**
1. Click "New routing policy" button
2. Fill in some form fields
3. Click "Cancel" button
**Expected Results:**
- Dialog closes without saving
- No new policy is created
- Returns to main routing policies list
- No error messages or side effects
### 3. Viewing and Managing Existing Policies
**Seed:** `tests/seed.spec.ts`
#### 3.1 View Policy Details **[@viewer]**
**Steps:**
1. Navigate to Routing Policies page with existing policies
2. Click on a routing policy row to expand it
**Expected Results:**
- Policy details expand below the policy name
- Shows "Created by" with user email
- Shows "Created on" with formatted timestamp
- Shows "Updated by" with user email
- Shows "Updated on" with formatted timestamp
- Displays "Expression" with the routing condition
- Shows "Description" text
- Lists associated "Channels" with channel names
- All information is properly formatted and readable
#### 3.2 Verify Policy Metadata Accuracy **[@admin]**
**Steps:**
1. Create a new policy and note creation time
2. View the policy details
3. Verify metadata matches creation details
**Expected Results:**
- Created by shows current user
- Created on shows accurate timestamp
- Updated fields match created fields for new policies
- All metadata is consistent and accurate
### 4. Search and Filter Functionality
**Seed:** `tests/seed.spec.ts`
#### 4.1 Search Policies by Name **[@viewer]**
**Steps:**
1. Navigate to Routing Policies page with multiple policies
2. Enter a policy name in the search box
3. Observe filtered results
**Expected Results:**
- Search filters policies in real-time or after pressing Enter
- Only matching policies are displayed
- Search is case-insensitive
- Partial matches are supported
- Clear search shows all policies again
#### 4.2 Search with No Results **[@viewer]**
**Steps:**
1. Enter a search term that matches no policies
2. Observe the results
**Expected Results:**
- "No results" or similar message is displayed
- Empty state is user-friendly
- Search can be cleared to return to full list
- No errors or broken UI elements
### 5. Pagination and Navigation
**Seed:** `tests/seed.spec.ts`
#### 5.1 Setup Pagination Test Data **[@admin]**
**Pre-requisite Steps:**
1. Create 6 routing policies to ensure pagination is triggered:
- Policy 1: "Test Policy Alpha"
- Policy 2: "Test Policy Beta"
- Policy 3: "Test Policy Gamma"
- Policy 4: "Test Policy Delta"
- Policy 5: "Test Policy Epsilon"
- Policy 6: "Test Policy Zeta"
#### 5.2 Navigate Between Pages **[@viewer]**
**Steps:**
1. Navigate to routing policies with more than one page of results
2. Click "Next Page" button
3. Click "Previous Page" button
4. Click specific page numbers
**Expected Results:**
- Pagination controls function correctly
- Page content updates appropriately
- Page numbers reflect current page
- Previous/Next buttons enable/disable appropriately
### 6. Notification Channel Integration
**Seed:** `tests/seed.spec.ts`
#### 6.1 Select Notification Channels **[@admin]**
**Steps:**
1. Create new routing policy
2. Open notification channels dropdown
3. Select single channel
4. Save policy
5. Verify channel association
**Expected Results:**
- Notification channels dropdown shows available channels
- Channels can be selected successfully
- Selected channels are displayed in policy details
- Channel names are accurately displayed
- Channel selection persists after saving
#### 6.2 Multiple Channel Selection **[@admin]**
**Steps:**
1. Create policy with multiple notification channels
2. Select 2-3 different channels
3. Save and verify
**Expected Results:**
- Multiple channels can be selected (if supported)
- All selected channels are saved with policy
- Policy details show all associated channels
- Channel list is properly formatted
#### 6.3 No Channel Selection **[@admin]**
**Steps:**
1. Create policy without selecting notification channels
2. Attempt to save
**Expected Results:**
- Validation handles missing channels appropriately
- Either requires at least one channel or allows empty
- Clear guidance on channel requirements
- Consistent behavior across the application
### 7. Policy Management Operations
**Seed:** `tests/seed.spec.ts`
#### 7.1 Edit Existing Policy **[@admin]**
**Steps:**
1. Locate existing routing policy
2. Click edit button/option
3. Modify policy name, description, and expression
4. Update notification channels
5. Save changes
**Expected Results:**
- Edit functionality is accessible
- All fields can be modified
- Changes are saved successfully
- Updated information appears in policy list
- Metadata shows correct "Updated by" and "Updated on"
#### 7.2 Delete Routing Policy **[@admin]**
**Steps:**
1. Locate existing routing policy
2. Click delete button/option
3. Confirm deletion when prompted
**Expected Results:**
- Delete option is available
- Confirmation dialog prevents accidental deletion
- Policy is removed from list after confirmation
- No errors or broken references remain
- Other policies remain unaffected
#### 7.3 Delete Policy Confirmation **[@admin]**
**Steps:**
1. Attempt to delete a policy
2. Cancel the deletion when prompted
3. Verify policy remains
**Expected Results:**
- Cancellation prevents deletion
- Policy remains in list unchanged
- No side effects from cancelled deletion
- User can retry deletion if needed
### 8. Error Handling and Edge Cases
**Seed:** `tests/seed.spec.ts`
#### 8.1 Network Error Handling **[@admin]**
**Steps:**
1. Simulate network disconnection
2. Attempt to create/save routing policy
3. Restore network connection
**Expected Results:**
- Appropriate error messages for network issues
- Form data is preserved during network errors
- User can retry operation after network restoration
- No data corruption or partial saves
#### 8.2 Large Expression Handling **[@admin]**
**Steps:**
1. Create policy with very long expression (500+ characters)
2. Test policy with complex nested conditions
3. Save and verify
**Expected Results:**
- Long expressions are handled correctly
- UI accommodates long text without breaking
- Performance remains acceptable
- Expression validation works with complex conditions
#### 8.3 Special Character Handling **[@admin]**
**Steps:**
1. Test policy names with special characters: !@#$%^&\*()
2. Test expressions with escaped quotes and special characters
3. Test descriptions with unicode characters
**Expected Results:**
- Special characters are properly handled
- String escaping works correctly
- Unicode text is supported
- No XSS or injection vulnerabilities
### 9. Performance and Scalability
**Seed:** `tests/seed.spec.ts`
#### 9.1 Concurrent User Actions **[@admin]**
**Steps:**
1. Have multiple users create/edit policies simultaneously
2. Verify data consistency
3. Test for race conditions
**Expected Results:**
- No data corruption from concurrent edits
- Proper conflict resolution mechanisms
- Users see updated data appropriately
- System maintains data integrity
### 10. Security and Access Control
**Seed:** `tests/seed.spec.ts`
#### 10.1 User Permission Validation **[@admin]**
**Steps:**
1. Test routing policies access with different user roles
2. Verify create/edit/delete permissions
3. Test unauthorized access attempts
**Expected Results:**
- Appropriate permissions are enforced
- Unauthorized users cannot modify policies
- Clear feedback on permission restrictions
- No sensitive data exposure
### 11. Integration and Data Consistency
**Seed:** `tests/seed.spec.ts`
#### 11.1 Alert Routing Verification **[@admin]**
**Steps:**
1. Create routing policy with specific conditions
2. Generate test alert matching policy conditions
3. Verify alert is routed to specified channels
**Expected Results:**
- Routing policies correctly filter alerts
- Alerts reach specified notification channels
- Policy conditions are accurately evaluated
- No alerts are lost or misrouted
#### 11.2 Notification Channel Synchronization **[@admin]**
**Steps:**
1. Create routing policy with notification channel
2. Delete or modify the notification channel
3. Verify routing policy behavior
**Expected Results:**
- Routing policies handle channel changes gracefully
- Orphaned channel references are managed appropriately
- User is notified of channel issues
- System remains stable with invalid channel references
## Quality Standards
- **Reproducibility**: All scenarios can be executed independently and repeatedly
- **Clarity**: Steps are specific enough for any tester to follow without ambiguity
- **Coverage**: Scenarios cover happy path, edge cases, error conditions, and security aspects
- **Traceability**: Each test links back to specific routing policies functionality
- **Automation-Ready**: Steps are written to facilitate future test automation
## Environment Setup
- **Prerequisites**: Valid SigNoz workspace with appropriate permissions
- **Test Data**: Multiple notification channels configured for testing
- **User Accounts**: Test users with different permission levels (admin, editor, viewer)
- **Network**: Stable internet connection for cloud testing
## Success Criteria
- All routing policy CRUD operations function correctly
- Notification channel integration works seamlessly
- Search and pagination handle large datasets efficiently
- Mobile and responsive designs maintain full functionality
- Security measures prevent unauthorized access and injection attacks
- Performance remains acceptable under normal and stress conditions
- User role permissions are properly enforced

# Dashboards List — Functional Checklist
Route: `/dashboard` | Title: `SigNoz | All Dashboards`
---
## Page Load
- [ ] Heading "Dashboards" and subtitle "Create and manage dashboards for your workspace." are visible
- [ ] "Browse dashboard templates" link is visible
- [ ] "Enter dashboard name..." input is visible (Admin / Editor only; not shown to Viewer)
- [ ] "Search by name, description, or tags..." input is visible
- [ ] "New dashboard" button is visible (Admin / Editor only; not shown to Viewer)
- [ ] "All Dashboards" label and sort toggle button are visible
- [ ] Table rows show a two-row layout: top row has thumbnail, name, tags, and action icon; bottom row has clock icon, last-updated timestamp, creator avatar, and creator email
- [ ] URL on default load is `/dashboard` with no query parameters
- [ ] Dashboards with more tags than fit show a `+ N` overflow indicator
- [ ] Dashboards with no tags show a clean row (no empty tag containers)
- [ ] If more than 20 dashboards exist, a pagination bar is shown ("1 — 20 of N")
---
## Search
- [ ] Typing filters list by name, description, or tags in real time
- [ ] URL updates to `?search=<term>`; navigating to that URL pre-fills search and filters list
- [ ] Clearing search restores the full list; `search` param is removed from the URL
- [ ] No-match search shows an empty state with no error and no broken layout
- [ ] Search is case-insensitive
- [ ] Searching while on page 2+ resets pagination to page 1
---
## Sorting
- [ ] Clicking the sort toggle next to "All Dashboards" sorts the list by updated date descending
- [ ] Clicking the sort toggle again does not flip to ascending — current behavior is descend only
- [ ] Clicking the sort toggle does not add or change URL query parameters
---
## Row Actions (three-dot menu) — Admin and Editor
- [ ] The action icon (`dashboard-action-icon`) is visible only for Admin and Editor; Viewer sees no action icon
- [ ] Clicking the action icon reveals a popover with exactly five items for Admin: View, Open in New Tab, Copy Link, Export JSON, Delete dashboard
- [ ] Clicking the action icon reveals a popover with exactly four items for Editor: View, Open in New Tab, Copy Link, Export JSON (no Delete dashboard)
- [ ] **View** — navigates to `/dashboard/<uuid>`; breadcrumb "Dashboard /" + name visible
- [ ] **Open in New Tab** — opens `/dashboard/<uuid>` in a new tab
- [ ] **Copy Link** — success feedback shown; clipboard contains the dashboard URL
- [ ] **Export JSON** — file download triggered; downloaded file is valid JSON
- [ ] Pressing Escape closes the popover; page stays on `/dashboard`
---
## Creating Dashboards _(Editor / Admin only)_
- [ ] Submit button is disabled when "Enter dashboard name..." input is empty
- [ ] Typing a name enables Submit; clicking Submit navigates to `/dashboard/<uuid>` with the typed name as the dashboard title
- [ ] **New dashboard** dropdown has exactly three items: Create dashboard, Import JSON, View templates
- [ ] **New dashboard → Create dashboard** — navigates to a new dashboard; default title is "Sample Title"; onboarding state shows Configure and New Panel buttons
- [ ] **New dashboard → Import JSON** — opens a dialog titled "Import Dashboard JSON" with a Monaco code editor, "Upload JSON file" button, "View templates" link, and "Import and Next" button
- [ ] Import dialog closes on Escape or clicking ×; no dashboard is created
- [ ] **New dashboard → View templates** — opens the external SigNoz docs templates link in a new tab
---
## Deleting Dashboards _(Admin only)_
- [ ] Action menu → Delete dashboard shows a confirmation dialog with a level-5 heading, the dashboard name in bold/emphasis, and Cancel and Delete buttons
- [ ] Clicking Cancel closes the dialog and navigates to the dashboard detail page (known current behavior — does not remain on the list page)
- [ ] Clicking Delete (confirm) removes the dashboard from the list; all other dashboards are unaffected
---
## Navigation
- [ ] Clicking a row (not the action icon) navigates to `/dashboard/<uuid>`; breadcrumb is visible
- [ ] Sidebar "Dashboards" link navigates to `/dashboard`
---
## URL / Deep-link State
- [ ] `/dashboard?search=PromQL` pre-fills search and filters the list on load
- [ ] Browser Back after navigating into a dashboard restores the search state (search param preserved)
- [ ] Sort state is not reflected in the URL and cannot be deep-linked
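The deep-link contract above is just a round-trip between the search term and the `search` query param. A sketch of both directions — encoding via `URLSearchParams` is an assumption about how the app writes the URL, not verified against the frontend code:

```typescript
// Search term -> deep-linkable URL. An empty term means no `search` param,
// matching the "param is removed from the URL" checklist item.
function searchUrl(term: string): string {
  const params = new URLSearchParams();
  if (term) params.set('search', term);
  const qs = params.toString();
  return qs ? `/dashboard?${qs}` : '/dashboard';
}

// URL -> search term, for asserting that a deep link pre-fills the input.
function termFromUrl(url: string): string | null {
  const qs = url.split('?')[1] ?? '';
  return new URLSearchParams(qs).get('search');
}
```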
---
## Role Permissions
| Action | Viewer | Editor | Admin |
| ------------------------------------------ | ------ | ------ | ----- |
| View / search / navigate | ✓ | ✓ | ✓ |
| See sort toggle | ✓ | ✓ | ✓ |
| See "Enter dashboard name..." input | ✗ | ✓ | ✓ |
| See "New dashboard" button | ✗ | ✓ | ✓ |
| Create dashboards | ✗ | ✓ | ✓ |
| See three-dot action menu icon | ✗ | ✓ | ✓ |
| Use View / Open in New Tab / Copy / Export | ✗ | ✓ | ✓ |
| Delete dashboards | ✗ | ✗ | ✓ |

# Dashboards List Page - Comprehensive Test Plan
<!-- spec: tests/dashboards/dashboards-list.spec.ts -->
<!-- seed: tests/seed.spec.ts -->
## Application Overview
The Dashboards list page (`/dashboard`) is the central hub for managing observability dashboards in SigNoz. It provides:
- **Dashboard Listing**: A table displaying all dashboards with a thumbnail image, name, tags, last-updated timestamp, and creator avatar initial + email
- **Inline Quick Create**: A "Enter dashboard name..." text field followed by a disabled "Submit" button that enables once text is entered, allowing rapid dashboard creation without a modal
- **New Dashboard Dropdown**: A "New dashboard" button that expands a dropdown menu to offer three options: "Create dashboard", "Import JSON", and "View templates" (external link)
- **Search**: A "Search by name, description, or tags..." input that filters the list in real time and reflects the query in the URL
- **Sorting**: A sort control (icon button) adjacent to the "All Dashboards" label that controls display order; currently the sort only applies `descend` order and does not update the URL
- **Per-row Action Menu**: A three-dot action icon (`dashboard-action-icon`) on each row, revealed via interaction, that opens a popover with: View, Open in New Tab, Copy Link, Export JSON, and Delete dashboard
- **Delete Confirmation Dialog**: A modal that requires confirmation before deleting, showing the dashboard name in the prompt with "Cancel" and "Delete" buttons
- **Import JSON Dialog**: A modal with a Monaco code editor, "Upload JSON file" button, "View templates" link, and "Import and Next" button
- **Pagination**: 20 dashboards per page with a pagination bar showing item range and page navigation
- **URL State Synchronization**: Search term is reflected in the URL query string
**Route**: `/dashboard`
**Page Title**: `SigNoz | All Dashboards`
**URL State Parameters**:
- `search` — active search query (reflected in URL when a search is typed)
**Row Data Fields** (two visual sections per row):
- Section 1: Dashboard thumbnail image (alt: `dashboard-image`), dashboard name, tags (displayed as pills; overflow shown as `+ N`), and three-dot action icon
- Section 2: Clock icon, last-updated timestamp (format: `MMM DD, YYYY ⎯ HH:mm:ss (UTC ±HH:MM)`), creator avatar initial, and creator email
**Pagination**: 20 items per page; pagination bar shows `N — 20 of total` and page number buttons
## User Role Permissions
- **@viewer**: Can view the dashboard list, search, sort, and navigate into dashboards; cannot create or delete; "Enter dashboard name..." input, "Submit" button, and "New dashboard" button are hidden; three-dot action icon is hidden
- **@editor**: Can create dashboards (inline, via dropdown, via JSON import); three-dot action menu is visible but "Delete dashboard" is not available; cannot delete
- **@admin**: Full access — all viewer and editor capabilities plus delete
---
## Test Scenarios
### 1. Page Load and Layout
**Seed:** `tests/seed.spec.ts`
#### 1.1 Dashboard list page loads correctly `@viewer`
**Pre-conditions:**
- User is logged in (session restored from storageState)
**Steps:**
1. Navigate to `/dashboard`
2. Wait for the page to fully load
**Expected Results:**
- URL is `/dashboard` (no query parameters on fresh load)
- Page title is `SigNoz | All Dashboards`
- H1 heading "Dashboards" is visible
- Subtitle "Create and manage dashboards for your workspace." is visible
- "Search by name, description, or tags..." text input is visible
- "All Dashboards" section label is visible
- Sort control icon button is visible next to "All Dashboards" label
- Dashboard table rows are visible with at least one entry
- Pagination bar is visible showing item range and page numbers
- "Feedback" and "Share" buttons are visible in the top-right header area
#### 1.2 Dashboard list shows correct data fields per row `@viewer`
**Pre-conditions:**
- At least one dashboard exists in the workspace
**Steps:**
1. Navigate to `/dashboard`
2. Wait for the table to load
3. Inspect the first dashboard row
**Expected Results:**
- Each row shows a dashboard thumbnail image (with alt text `dashboard-image`)
- Each row shows the dashboard name as text
- Dashboards with tags show tag pills; extra tags beyond display limit appear as `+ N`
- Each row shows a last-updated timestamp in `MMM DD, YYYY ⎯ HH:mm:ss (UTC ±HH:MM)` format
- Each row shows the creator's avatar initial (single letter) and email address
- The default row order shows the most recently updated dashboard first (descending by updated date)
#### 1.3 Pagination bar shows correct item count `@viewer`
**Pre-conditions:**
- More than 20 dashboards exist
**Steps:**
1. Navigate to `/dashboard`
2. Observe the pagination bar at the bottom of the list
**Expected Results:**
- Pagination bar shows text like `1 — 20 of 21` (or actual total count)
- Page number buttons are visible (e.g., `1`, `2`)
- "Previous Page" button is disabled on page 1
- "Next Page" button is enabled when more pages exist
---
### 2. Search Functionality
**Seed:** `tests/seed.spec.ts`
#### 2.1 Search filters dashboards by name `@viewer`
**Pre-conditions:**
- Multiple dashboards exist with distinct names
**Steps:**
1. Navigate to `/dashboard`
2. Wait for the dashboard list to load
3. Click the "Search by name, description, or tags..." input
4. Type `APM`
5. Observe the filtered list
**Expected Results:**
- Only dashboards whose name, description, or tags contain "APM" are displayed
- Dashboards not matching the query are hidden
- The URL updates to include `?search=APM`
- The search input retains the typed value
#### 2.2 Search state is reflected in the URL `@viewer`
**Steps:**
1. Navigate to `/dashboard`
2. Type a search term (e.g., `PromQL`) in the search input
3. Observe the URL
**Expected Results:**
- URL contains `search=PromQL`
- Navigating to that URL in a new tab pre-fills the search input and shows filtered results immediately
#### 2.3 Clearing search restores the full list `@viewer`
**Steps:**
1. Navigate to `/dashboard`
2. Type `APM` in the search input
3. Verify filtered results are shown
4. Clear the search input (select all and delete, or use clear button)
5. Observe the list
**Expected Results:**
- All dashboards are visible again after clearing
- The `search` parameter is removed from the URL (or set to empty)
#### 2.4 Search with no matching results shows empty state `@viewer`
**Steps:**
1. Navigate to `/dashboard`
2. Type a string that matches no dashboard name, description, or tag (e.g., `xyznonexistent999`)
**Expected Results:**
- No dashboard rows are displayed
- An empty state or "no results" indicator is shown (no error, no broken layout)
- The search input remains functional
#### 2.5 Search is case-insensitive `@viewer`
**Steps:**
1. Navigate to `/dashboard`
2. Note the name of a dashboard that uses mixed case (e.g., "APM Metrics")
3. Search for the lowercase version (e.g., `apm metrics`)
**Expected Results:**
- The dashboard is included in results regardless of search case
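The matching rule exercised by 2.1–2.5 (case-insensitive substring over name, description, and tags) can be sketched as a pure helper for computing expected result sets; the type and function names are assumptions, not the app's implementation:

```typescript
interface DashboardRow {
  name: string;
  description?: string;
  tags?: string[];
}

// Case-insensitive substring match across name, description, and tags,
// mirroring the filter behavior the scenarios above assert.
function filterDashboards(rows: DashboardRow[], query: string): DashboardRow[] {
  const q = query.trim().toLowerCase();
  if (!q) return rows; // empty query restores the full list (scenario 2.3)
  return rows.filter((row) =>
    [row.name, row.description ?? "", ...(row.tags ?? [])].some((field) =>
      field.toLowerCase().includes(q),
    ),
  );
}
```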
---
### 3. Sorting
**Seed:** `tests/seed.spec.ts`
#### 3.1 Default sort shows most recently updated dashboard first `@viewer`
**Steps:**
1. Navigate to `/dashboard`
2. Observe the order of rows
**Expected Results:**
- The dashboard with the most recent "last updated" timestamp appears first in the list
- The URL has no sort-related query parameters by default (no `columnKey` or `order` params)
#### 3.2 Sort control button is clickable and applies descending sort `@viewer`
**Steps:**
1. Navigate to `/dashboard`
2. Note the current first row
3. Click the sort control icon button next to the "All Dashboards" label
**Expected Results:**
- The button is clickable without error
- The list remains in descending order (current behavior: sort toggle only applies `descend`; toggling does not switch to ascending)
- The URL does not change when the sort button is clicked (consistent with 3.1: no sort-related query parameters)
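The default ordering asserted in 3.1 (most recently updated first) corresponds to a descending comparator on the updated timestamp; a sketch for computing expected row order, with the field name assumed:

```typescript
// Descending by updatedAt: the most recently updated dashboard sorts first.
function byUpdatedAtDesc(a: { updatedAt: string }, b: { updatedAt: string }): number {
  return Date.parse(b.updatedAt) - Date.parse(a.updatedAt);
}
```

A spec can sort its seeded fixtures with this comparator and assert the rendered row names match that order.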
---
### 4. Row Actions (Context Menu)
**Seed:** `tests/seed.spec.ts`
#### 4.1 Row action menu shows correct options for Admin `@admin`
**Pre-conditions:**
- At least one dashboard exists
**Steps:**
1. Navigate to `/dashboard`
2. Click the three-dot action icon on any dashboard row (data-testid: `dashboard-action-icon`)
**Expected Results:**
- A popover appears containing exactly these options (in order):
1. View
2. Open in New Tab
3. Copy Link
4. Export JSON
5. Delete dashboard (in a separate section, styled as a destructive action)
#### 4.2 Row action menu shows correct options for Editor `@editor`
**Pre-conditions:**
- At least one dashboard exists
- User has editor role
**Steps:**
1. Navigate to `/dashboard`
2. Click the three-dot action icon on any dashboard row
**Expected Results:**
- A popover appears containing: View, Open in New Tab, Copy Link, Export JSON
- "Delete dashboard" is NOT visible in the menu for editor role
#### 4.3 "View" action navigates to the dashboard detail page `@viewer`
**Pre-conditions:**
- User has at least `action` permission (editor or admin)
**Steps:**
1. Navigate to `/dashboard`
2. Click the three-dot action icon on any dashboard row
3. Click "View"
**Expected Results:**
- Browser navigates to `/dashboard/<uuid>`
- The dashboard detail page loads with the correct title
#### 4.4 "Open in New Tab" action opens dashboard in a new browser tab `@editor`
**Steps:**
1. Navigate to `/dashboard`
2. Click the three-dot action icon on any dashboard row
3. Click "Open in New Tab"
**Expected Results:**
- A new browser tab opens at `/dashboard/<uuid>`
#### 4.5 "Copy Link" action copies the dashboard URL to clipboard `@editor`
**Steps:**
1. Navigate to `/dashboard`
2. Click the three-dot action icon on any dashboard row
3. Click "Copy Link"
**Expected Results:**
- A success notification or visual feedback confirms the link was copied
- The clipboard contains a URL pointing to the dashboard detail page
#### 4.6 "Export JSON" action downloads the dashboard as a JSON file `@editor`
**Steps:**
1. Navigate to `/dashboard`
2. Click the three-dot action icon on any dashboard row
3. Click "Export JSON"
**Expected Results:**
- A file download is triggered
- The downloaded file is valid JSON representing the dashboard configuration
#### 4.7 Context menu closes when clicking outside `@editor`
**Steps:**
1. Navigate to `/dashboard`
2. Click the three-dot action icon to open the context menu
3. Click somewhere outside the popover (e.g., on the heading)
**Expected Results:**
- The context menu closes
- The page remains on `/dashboard` with no navigation
---
### 5. Creating Dashboards
**Seed:** `tests/seed.spec.ts`
#### 5.1 Create section is visible to Editor and Admin but hidden from Viewer `@viewer`
**Steps:**
1. Navigate to `/dashboard` as a viewer user
**Expected Results:**
- The "Enter dashboard name..." input is NOT visible
- The "Submit" button is NOT visible
- The "New dashboard" button is NOT visible
- The "Browse dashboard templates" link may or may not be visible
#### 5.2 "Create dashboard" via New dashboard dropdown creates and navigates to new dashboard `@editor`
**Pre-conditions:**
- User has editor or admin role
**Steps:**
1. Navigate to `/dashboard`
2. Note the current dashboard count
3. Click the "New dashboard" button
4. Click "Create dashboard" from the dropdown menu
5. Wait for navigation
**Expected Results:**
- Browser navigates to `/dashboard/<new-uuid>`
- The new dashboard detail page loads
- The default dashboard name is "Sample Title"
- Upon returning to `/dashboard`, the new dashboard appears in the list
#### 5.3 Inline "Enter dashboard name" field creates a named dashboard `@editor`
**Pre-conditions:**
- User has editor or admin role
**Steps:**
1. Navigate to `/dashboard`
2. Observe the "Submit" button is disabled
3. Click the "Enter dashboard name..." text input
4. Type a unique name (e.g., `Test Dashboard ${Date.now()}`)
5. Observe the "Submit" button becomes enabled
6. Click the "Submit" button
7. Wait for navigation
**Expected Results:**
- While the input is empty, the "Submit" button has the `disabled` attribute
- After typing a name, the "Submit" button becomes enabled
- After clicking Submit, navigation occurs to `/dashboard/<new-uuid>`
- The new dashboard has the typed name set as its title
#### 5.4 Submit button is disabled when dashboard name input is empty `@editor`
**Steps:**
1. Navigate to `/dashboard`
2. Observe the "Submit" button next to the "Enter dashboard name..." input without typing anything
**Expected Results:**
- The "Submit" button has the `disabled` attribute when the input is empty
- Clicking the disabled button does not trigger any action or navigation
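Scenarios 5.3 and 5.4 reduce to a single predicate: Submit is enabled only when the name input is non-empty. A minimal sketch (treating whitespace-only input as empty is an assumption worth asserting explicitly in the spec):

```typescript
// Submit should be enabled only for a non-empty dashboard name.
// Assumption: whitespace-only input counts as empty.
function isSubmitEnabled(name: string): boolean {
  return name.trim().length > 0;
}
```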
#### 5.5 "New dashboard" dropdown shows three options `@editor`
**Steps:**
1. Navigate to `/dashboard`
2. Click the "New dashboard" button
**Expected Results:**
- A dropdown menu appears with exactly three menu items:
1. "Create dashboard"
2. "Import JSON"
3. "View templates" (rendered as a link, opens external docs)
#### 5.6 Import JSON opens a dialog with code editor and file upload `@editor`
**Pre-conditions:**
- User has editor or admin role
**Steps:**
1. Navigate to `/dashboard`
2. Click the "New dashboard" button
3. Click "Import JSON"
**Expected Results:**
- A modal dialog appears with title "Import Dashboard JSON"
- A Monaco/code editor is visible with line numbers (showing at least line "1")
- An "Upload JSON file" button is visible
- A "View templates" link is visible (pointing to SigNoz documentation)
- An "Import and Next" button is visible
- A close ("×") button is visible in the top-right of the dialog
#### 5.7 Import JSON dialog closes on Escape `@editor`
**Steps:**
1. Navigate to `/dashboard`
2. Open the "Import JSON" dialog via "New dashboard" > "Import JSON"
3. Press Escape
**Expected Results:**
- The dialog closes
- The user remains on `/dashboard`
#### 5.8 Import JSON dialog closes on clicking the × button `@editor`
**Steps:**
1. Navigate to `/dashboard`
2. Open the "Import JSON" dialog via "New dashboard" > "Import JSON"
3. Click the × (Close) button in the top-right of the dialog
**Expected Results:**
- The dialog closes
- The user remains on `/dashboard`
- No new dashboard is created
#### 5.9 "View templates" in New dashboard dropdown opens external link `@viewer`
**Pre-conditions:**
- User has at least editor role (so the "New dashboard" button is visible)
**Steps:**
1. Navigate to `/dashboard`
2. Click the "New dashboard" button
3. Observe the "View templates" option
**Expected Results:**
- "View templates" renders as a link element
- The link URL points to `https://signoz.io/docs/dashboards/dashboard-templates/overview/`
- Clicking it opens the external documentation URL
#### 5.10 "Browse dashboard templates" link is visible and navigates to docs `@editor`
**Steps:**
1. Navigate to `/dashboard`
2. Observe the "Browse dashboard templates" link near the top of the page
3. Note the associated "or request a new template →" text next to it
**Expected Results:**
- The link "Browse dashboard templates" is visible
- Adjacent text "or request a new template →" is visible
- The link URL points to `https://signoz.io/docs/dashboards/dashboard-templates/overview/`
---
### 6. Deleting Dashboards
**Seed:** `tests/seed.spec.ts`
#### 6.1 Delete action shows a confirmation dialog with the dashboard name `@admin`
**Pre-conditions:**
- At least one dashboard exists
- User has admin role
**Steps:**
1. Navigate to `/dashboard`
2. Click the three-dot action icon on any dashboard row (e.g., "APM Metrics")
3. Click "Delete dashboard"
**Expected Results:**
- A confirmation dialog appears with an exclamation-circle icon
- The dialog heading reads: "Are you sure you want to delete the [Dashboard Name] dashboard?" where the dashboard name is styled distinctly (e.g., bold)
- A "Cancel" button and a "Delete" button are present in the dialog
- The "Delete" button is in an active/danger style
#### 6.2 Cancelling the delete dialog navigates to the dashboard detail page `@admin`
**Pre-conditions:**
- Note: The current behavior is that clicking "Cancel" in the delete dialog navigates to the dashboard detail page (not back to the list page)
**Steps:**
1. Navigate to `/dashboard`
2. Click the three-dot action icon on any dashboard row
3. Click "Delete dashboard" to open the confirmation dialog
4. Click the "Cancel" button
**Expected Results:**
- The dialog closes
- The browser navigates to `/dashboard/<uuid>` (the detail page of the dashboard whose delete was cancelled)
- The dashboard remains in the system
#### 6.3 Confirming delete removes the dashboard from the list `@admin`
**Pre-conditions:**
- A disposable dashboard exists (create one first with a unique timestamped name)
**Steps:**
1. Navigate to `/dashboard`
2. Create a new dashboard via "New dashboard" > "Create dashboard"; note its name
3. Return to `/dashboard`
4. Locate the newly created dashboard row
5. Click its three-dot action icon
6. Click "Delete dashboard"
7. Click "Delete" in the confirmation dialog
8. Wait for the list to refresh
**Expected Results:**
- The dialog closes after clicking "Delete"
- The deleted dashboard no longer appears in the list
- All other dashboards remain unaffected
---
### 7. Dashboard Navigation (Row Click)
**Seed:** `tests/seed.spec.ts`
#### 7.1 Clicking a dashboard row navigates to the dashboard detail page `@viewer`
**Steps:**
1. Navigate to `/dashboard`
2. Note the name of the first dashboard row
3. Click anywhere on the row (not on the three-dot action icon)
**Expected Results:**
- Browser navigates to `/dashboard/<uuid>`
- The dashboard detail page loads with the correct title matching the clicked row
---
### 8. URL State and Deep Linking
**Seed:** `tests/seed.spec.ts`
#### 8.1 URL search parameter is applied on page load `@viewer`
**Steps:**
1. Navigate directly to `/dashboard?search=PromQL`
**Expected Results:**
- The search input is pre-populated with "PromQL"
- The dashboard list shows only dashboards matching "PromQL"
#### 8.2 Browser back/forward navigation preserves search state `@viewer`
**Steps:**
1. Navigate to `/dashboard`
2. Type "APM" in the search field
3. Click on any matching dashboard row to open it
4. Click the browser back button
5. Observe the dashboard list URL and state
**Expected Results:**
- The URL returns to `/dashboard?search=APM`
- The search input still contains "APM"
- The filtered list shows only APM-matching dashboards
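When asserting search state across back/forward navigation, specs can read the `search` parameter back via the WHATWG URL API instead of string-matching the full URL (the localhost base below is only needed because the path is relative, and is an illustration, not the test base URL):

```typescript
// Extracts the `search` query parameter from a dashboard list URL.
function searchTermFromUrl(url: string): string | null {
  return new URL(url, "http://localhost").searchParams.get("search");
}
```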
---
### 9. Page Header Actions
**Seed:** `tests/seed.spec.ts`
#### 9.1 "Feedback" button is visible and clickable `@viewer`
**Steps:**
1. Navigate to `/dashboard`
2. Locate the "Feedback" button in the top-right area of the page header
**Expected Results:**
- The "Feedback" button is visible and clickable
- Clicking it opens a feedback mechanism (e.g., modal or external link)
#### 9.2 "Share" button is visible and clickable `@viewer`
**Steps:**
1. Navigate to `/dashboard`
2. Locate the "Share" button in the top-right area of the page header
3. Click "Share"
**Expected Results:**
- The "Share" button is visible
- Clicking it triggers a share action (e.g., copies page URL or opens a share dialog)
---
### 10. Edge Cases and Error Handling
**Seed:** `tests/seed.spec.ts`
#### 10.1 Tags overflow display shows `+ N` indicator `@viewer`
**Pre-conditions:**
- At least one dashboard has more tags than can be displayed inline
**Steps:**
1. Navigate to `/dashboard`
2. Find a dashboard row that has multiple tags with an overflow indicator
**Expected Results:**
- Visible tags are shown as individual pills
- The overflow indicator is shown as `+ N` (e.g., `+ 1`) where N is the number of hidden tags
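The overflow rule can be expressed as a helper for computing the expected pills and `+ N` text; the inline display limit of 2 is an assumption for illustration, not a documented constant:

```typescript
// Splits tags into visible pills and a "+ N" overflow indicator.
// Assumption: `limit` models the inline display limit.
function tagDisplay(tags: string[], limit = 2): { visible: string[]; overflow: string | null } {
  const visible = tags.slice(0, limit);
  const hidden = tags.length - visible.length;
  return { visible, overflow: hidden > 0 ? `+ ${hidden}` : null };
}
```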
#### 10.2 Dashboard with no tags shows a clean row `@viewer`
**Steps:**
1. Navigate to `/dashboard`
2. Find a dashboard row with no tags (e.g., "Sample Title")
3. Inspect the row content
**Expected Results:**
- The row shows the thumbnail, name, last-updated timestamp, and creator
- No tag pills or empty tag containers are visible
- Row layout is consistent with tagged dashboards
#### 10.3 Searching while on a non-first page resets to page 1 `@viewer`
**Pre-conditions:**
- Enough dashboards exist to have multiple pages (more than 20)
**Steps:**
1. Navigate to `/dashboard` and go to page 2
2. Type a search term in the search input
**Expected Results:**
- The page resets to show page 1 results
- The URL reflects the search term
#### 10.4 Dashboard list is accessible via the SigNoz sidebar navigation `@viewer`
**Steps:**
1. Log in and land on the home page
2. Look for the Dashboards navigation item in the left sidebar
3. Click it
**Expected Results:**
- Browser navigates to `/dashboard`
- The dashboard list page loads correctly
#### 10.5 Viewer cannot see create or action controls `@viewer`
**Steps:**
1. Log in as a Viewer user
2. Navigate to `/dashboard`
**Expected Results:**
- The "Enter dashboard name..." input is NOT visible
- The "Submit" button is NOT visible
- The "New dashboard" button is NOT visible
- The three-dot action icon on each row is NOT visible (rows do not show `dashboard-action-icon`)
- The dashboard list, search, sort, and navigation into dashboards all work normally
---
## Notes
- Default dashboard name on creation via "New dashboard" > "Create dashboard" is "Sample Title"; inline create sets the user-entered name
- The sort control currently only sorts descending; clicking it does not toggle to ascending, and the URL does not reflect sort state
- The `dashboard-action-icon` data-testid is the locator for the three-dot row action menu trigger
- Clicking "Cancel" in the delete confirmation dialog navigates to the dashboard detail page rather than returning to the list — tests should use the "Delete" button for actual delete flows
- The delete confirmation dialog uses `heading[level=5]` with the text "Are you sure you want to delete the [Name] dashboard?"
- The "View templates" entry in the New dashboard dropdown is rendered as a menu item containing a `link` element pointing to `https://signoz.io/docs/dashboards/dashboard-templates/overview/`
- The three-dot action menu is rendered as a tooltip/popover element (not an Ant Design dropdown menu)
- Editor role has the action menu visible but does not have the "Delete dashboard" option
- Viewer role has neither the create controls nor the action menu

# Home Page Test Plan
## Overview
The SigNoz home page (`/home`) is the landing page after login. It provides:
- Ingestion status indicators for Logs, Traces, and Metrics
- Quick-action explore/create buttons
- Live summary widgets: Alerts, Dashboards, Services, Saved Views
- Onboarding checklist (step progress)
## Test Cases
### TC-01: Home page loads after login `@viewer`
- Navigate to `/home`
- Verify URL contains `/home`
- Verify heading "Hello there, Welcome to your SigNoz workspace" is visible
- Verify page title contains "Home"
### TC-02: Ingestion status banners are visible `@viewer`
- Navigate to `/home`
- Verify "Logs ingestion is active" text is visible
- Verify "Traces ingestion is active" text is visible
- Verify "Metrics ingestion is active" text is visible
### TC-03: Explore Logs navigates to logs explorer `@viewer`
- Click "Explore Logs" button
- Verify URL contains `/logs/logs-explorer`
### TC-04: Explore Traces navigates to traces explorer `@viewer`
- Click "Explore Traces" button
- Verify URL contains `/traces-explorer`
### TC-05: Explore Metrics navigates to metrics explorer `@viewer`
- Click "Explore Metrics" button
- Verify URL contains `/metrics-explorer`
### TC-06: Open Logs Explorer shortcut navigates `@viewer`
- Click "Open Logs Explorer" button
- Verify URL contains `/logs/logs-explorer`
### TC-07: Open Traces Explorer shortcut navigates `@viewer`
- Click "Open Traces Explorer" button
- Verify URL contains `/traces-explorer`
### TC-08: Open Metrics Explorer shortcut navigates `@viewer`
- Click "Open Metrics Explorer" button
- Verify URL contains `/metrics-explorer`
### TC-09: Create dashboard button navigates `@editor`
- Click "Create dashboard" button
- Verify URL contains `/dashboard`
### TC-10: Create an alert button navigates `@editor`
- Click "Create an alert" button
- Verify URL contains `/alerts`
### TC-11: Services table is visible with correct columns `@viewer`
- Verify "Services" section heading is visible
- Verify table columns: APPLICATION, P99 LATENCY, ERROR RATE, OPS / SEC
- Verify at least one service row is present
### TC-12: All Services link navigates `@viewer`
- Click "All Services" link
- Verify URL contains `/services`
### TC-13: Alerts section shows firing alerts `@viewer`
- Verify "Alerts" section heading is visible
- Verify at least one alert item is listed
### TC-14: All Alert Rules link navigates `@viewer`
- Click "All Alert Rules" button/link
- Verify URL contains `/alerts`
### TC-15: Dashboards section shows recent dashboards `@viewer`
- Verify "Dashboards" section heading is visible
- Verify at least one dashboard item is listed
### TC-16: All Dashboards link navigates `@viewer`
- Click "All Dashboards" button/link
- Verify URL contains `/dashboard`
### TC-17: Saved Views tabs switch between signal types `@viewer`
- Verify "Saved Views" section is visible
- Verify Logs tab is active by default
- Click "Traces" tab, verify it becomes active
- Click "Metrics" tab, verify it becomes active
- Click "Logs" tab, verify it returns to active
### TC-18: All Views link navigates `@viewer`
- Ensure Logs tab is active in Saved Views
- Click "All Views" link
- Verify URL contains `/logs/saved-views`

# SigNoz Roles Listing - Comprehensive Test Plan
## Application Overview
The SigNoz Roles Listing feature is a role management and access control system that allows administrators to view and manage user roles within the organization. The feature is located under **Settings > Roles** and provides:
- **Role Display**: View all system and custom roles in a structured table format
- **Role Categorization**: Roles are organized into two sections - "Managed roles" (system-defined) and "Custom roles" (user-created)
- **Search Functionality**: Search roles by name or description
- **Pagination**: Efficiently navigate through large lists of roles with 20 roles per page
- **Role Information**: View role details including name, description, creation date, and last update
- **Access Control**: Only administrators can access the roles listing page
**Route**: `/settings/roles`
**API Endpoint**: `GET /api/v1/roles`
**Table Columns**:
- Name
- Description
- Updated At
- Created At
## User Role Permissions
- **@admin**: Can execute all test scenarios (full access to roles listing)
- **@editor**: Should NOT have access to the roles listing page
- **@viewer**: Should NOT have access to the roles listing page
## Test Scenarios
### 1. Navigation and Access Control
**Seed:** `tests/seed.spec.ts`
#### 1.1 Admin User Can Access Roles Page **[@admin]**
**Pre-conditions:**
- User is logged in as admin (handled by seed test)
**Steps:**
1. Login to SigNoz application as admin user
2. Navigate to "Settings" from the main navigation
3. Look for "Roles" option in settings sidebar
4. Click on "Roles" tab
**Expected Results:**
- "Roles" option is visible in settings sidebar for admin users
- Successfully navigates to `/settings/roles`
- Roles listing page loads without errors
- Page displays "Roles" heading
- URL updates to `/settings/roles`
- No access denied or permission errors occur
### 2. Page Layout and UI Components
**Seed:** `tests/seed.spec.ts`
#### 2.1 Verify Roles Listing Page Layout **[@admin]**
**Pre-conditions:**
- User is logged in as admin
- At least some roles exist in the system
**Steps:**
1. Navigate to `/settings/roles`
2. Observe page layout and components
3. Verify all UI elements are present and properly rendered
**Expected Results:**
- Page displays "Roles" heading at the top
- Search input field is visible with appropriate placeholder text
- Table header displays four columns: "Name", "Description", "Updated At", "Created At"
- Table columns are properly aligned and sized
- Section headers are displayed for "Managed roles" and "Custom roles" (if both types exist)
- If custom roles exist, section header shows count in format: "Custom roles 1" (with flexible spacing)
- Pagination controls are visible at the bottom (if more than 20 roles)
- Page maintains consistent SigNoz design system
- All text is readable and properly formatted
#### 2.2 Verify Table Structure **[@admin]**
**Steps:**
1. Navigate to roles listing page
2. Examine table structure and data presentation
3. Verify column alignment and data formatting
**Expected Results:**
- Each role displays in a separate row
- Name column shows role name or "—" if missing
- Description column shows role description or "—" if missing
- Description text is line-clamped with tooltip for longer text
- Updated At column shows formatted timestamp or "—"
- Created At column shows formatted timestamp or "—"
- Timestamps follow the format: YYYY-MM-DD HH:mm:ss, rendered in the user's configured timezone
- All rows are consistently formatted
- Table is responsive and scrollable if needed
### 3. Roles Display and Data Verification
**Seed:** `tests/seed.spec.ts`
#### 3.1 Verify API Response Matches UI Display **[@admin]**
**Pre-conditions:**
- Multiple roles exist in the system (both managed and custom)
**Steps:**
1. Intercept the API call to `GET /api/v1/roles`
2. Navigate to roles listing page
3. Compare API response data with displayed table content
4. Verify each role from API appears in the UI
**Expected Results:**
- API call to `/api/v1/roles` is successful (200 status)
- All roles from API response are displayed in the table
- API returns 5 roles total (4 managed + 1 custom)
- Role names match between API and UI
- Role descriptions match between API and UI
- Created/Updated timestamps match between API and UI
- Role types (managed vs custom) are correctly categorized
- Data integrity is maintained between backend and frontend
- No roles are missing or duplicated in the UI
#### 3.2 Verify Role Categorization (Managed vs Custom) **[@admin]**
**Pre-conditions:**
- System has both managed roles and custom roles
**Steps:**
1. Navigate to roles listing page
2. Observe role categorization and section headers
3. Verify roles appear under correct sections
**Expected Results:**
- "Managed roles" section appears first (if managed roles exist)
- All roles with `type: "managed"` appear under "Managed roles" section (signoz-admin, signoz-editor, signoz-viewer, signoz-anonymous)
- "Custom roles" section appears after managed roles (if custom roles exist)
- All roles with `type: "custom"` appear under "Custom roles" section (custom-role-ui)
- Custom roles section header shows count in format: "Custom roles 1" (with or without space between "roles" and the number)
- Managed roles section header does NOT show count
- Sections are visually distinct and well-organized
- No roles appear in wrong sections
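The section layout asserted in 3.1–3.2 (managed section first, count suffix only on the custom header) can be sketched against the seeded data; the helper and types are assumptions for test expectations:

```typescript
interface Role {
  name: string;
  type: "managed" | "custom";
}

// Returns the expected section headers: managed first, count only on custom.
function sectionHeaders(roles: Role[]): string[] {
  const managed = roles.filter((r) => r.type === "managed");
  const custom = roles.filter((r) => r.type === "custom");
  const headers: string[] = [];
  if (managed.length > 0) headers.push("Managed roles"); // no count suffix
  if (custom.length > 0) headers.push(`Custom roles ${custom.length}`);
  return headers;
}
```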
### 4. Search Functionality
**Seed:** `tests/seed.spec.ts`
#### 4.1 Search Roles by Name **[@admin]**
**Pre-conditions:**
- Multiple roles exist with distinct names
- Example roles: "Admin", "Editor", "Viewer", "DevOps Engineer", "Security Analyst"
**Steps:**
1. Navigate to roles listing page
2. Enter a role name in the search box (e.g., "Admin")
3. Observe filtered results
4. Clear search and verify all roles reappear
**Expected Results:**
- Search filters roles in real-time as user types
- Only roles with names matching the search query are displayed
- Search is case-insensitive (searching "admin" matches "Admin")
- Partial matches are supported (searching "Dev" matches "DevOps Engineer")
- Roles that don't match are hidden
- Clearing search returns full list of roles
- Role categorization (managed/custom sections) is maintained in search results
- Pagination updates based on filtered results
#### 4.2 Search Roles by Description **[@admin]**
**Pre-conditions:**
- Roles have descriptive text in description field
**Steps:**
1. Navigate to roles listing page
2. Enter descriptive keywords in search box
3. Verify roles with matching descriptions appear
**Expected Results:**
- Search matches text in role descriptions
- Roles with matching description keywords are displayed
- Both name and description fields are searched simultaneously
- Search is case-insensitive for descriptions
- Partial matches work in descriptions
- Roles can be found by either name or description
#### 4.3 Search with No Results **[@admin]**
**Steps:**
1. Navigate to roles listing page
2. Enter search term that matches no roles (e.g., "NonExistentRole123")
3. Observe results
**Expected Results:**
- Empty state message displays: "No roles match your search."
- Message is different from general empty state ("No roles found.")
- No table rows are visible
- Search input remains functional
- User can clear search to return to full list
- No errors or broken UI elements
- Pagination is hidden when no results
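The two empty-state messages are distinguished by whether a search is active; a minimal sketch of the selection rule, using the exact copy quoted above (function name and signature are assumptions):

```typescript
// Chooses the empty-state copy when no role rows are visible:
// a search miss uses a distinct message from the general empty state.
function emptyStateMessage(visibleCount: number, searchActive: boolean): string | null {
  if (visibleCount > 0) return null; // rows visible, no empty state
  return searchActive ? "No roles match your search." : "No roles found.";
}
```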
#### 4.4 Search Case Sensitivity **[@admin]**
**Steps:**
1. Search using uppercase, lowercase, and mixed case
2. Verify search behavior across different cases
**Expected Results:**
- Search is consistently case-insensitive
- "ADMIN", "admin", "Admin" all return same results
- Case of search query doesn't affect results
- UI provides consistent search experience
### 5. Pagination Functionality
**Seed:** `tests/seed.spec.ts`
#### 5.1 Navigate Between Pages **[@admin]**
**Pre-conditions:**
- More than 20 roles exist in the system (to trigger multiple pages)
**Steps:**
1. Navigate to roles listing page
2. Verify pagination controls are visible at bottom
3. Note roles displayed on page 1
4. Click "Next" or page "2" in pagination
5. Observe roles on page 2
6. Click "Previous" or page "1"
7. Return to page 1
**Expected Results:**
- Pagination controls are visible when total roles > 20
- First page shows first 20 roles
- Second page shows next set of roles
- Page numbers are accurate and clickable
- Current page is highlighted/indicated
- Previous button is disabled on first page
- Next button is disabled on last page
- Clicking specific page numbers navigates correctly
- URL updates with page parameter: `?page=2`
- Roles don't duplicate across pages
- Section headers (Managed/Custom) appear appropriately on each page
#### 5.2 Pagination with Search Results **[@admin]**
**Steps:**
1. Perform search that yields more than 20 results
2. Verify pagination works with filtered results
3. Navigate between pages while search is active
4. Clear search and verify pagination resets
**Expected Results:**
- Pagination updates based on filtered result count
- Can navigate through multiple pages of search results
- Page numbers reflect filtered result count
- Clearing search resets pagination to full dataset
- Page parameter in URL persists during search
- Filtered results maintain proper section categorization
#### 5.3 Pagination State Persistence **[@admin]**
**Steps:**
1. Navigate to page 2 or 3
2. Refresh the browser
3. Verify page state is maintained
**Expected Results:**
- URL contains page parameter: `?page=2`
- After refresh, user remains on same page
- Page state persists across browser sessions
- Correct roles are displayed after refresh
- Pagination controls reflect current page
### 6. Loading and Error States
**Seed:** `tests/seed.spec.ts`
#### 6.1 Verify Loading State **[@admin]**
**Steps:**
1. Intercept API call to delay response
2. Navigate to roles listing page
3. Observe loading state behavior
**Expected Results:**
- Loading skeleton is displayed while fetching roles
- Skeleton shows placeholder for table structure
- Loading state includes animated placeholders
- Page remains functional during loading
- No content flashing or layout shifts
- Loading transitions smoothly to loaded state
- At least 5 skeleton rows are displayed
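Step 1's "intercept API call to delay response" can be sketched as a route handler that holds the roles GET before letting it through. The `RouteLike` shape mirrors only the one Playwright `Route` method used here, and the URL glob is an assumption:

```typescript
// Hold a matched request for `ms` milliseconds, then let it proceed, so the
// loading skeleton stays visible long enough to assert on.
type RouteLike = { continue(): Promise<void> };

function delayedContinue(ms: number) {
  return async (route: RouteLike): Promise<void> => {
    await new Promise((r) => setTimeout(r, ms));
    await route.continue();
  };
}
// In a spec: await page.route('**/api/v1/roles*', delayedContinue(2000));
// then navigate and assert the skeleton rows are visible before the table loads.
```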
#### 6.2 Handle API Error State **[@admin]**
**Steps:**
1. Mock API to return error response (500, 503, etc.)
2. Navigate to roles listing page
3. Observe error handling
**Expected Results:**
- Error component is displayed: `ErrorInPlace`
- Error message is user-friendly and informative
- Default error message: "An unexpected error occurred while fetching roles."
- Specific API errors are handled appropriately
- Page layout remains intact during error state
- No application crash or white screen
- User understands what went wrong
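Step 1's "mock API to return error response" can be sketched as a fulfilling route handler. The response body shape is an assumption for illustration; only `route.fulfill` and its documented options come from Playwright:

```typescript
// Short-circuit a matched request with a synthetic error status so the
// ErrorInPlace state can be asserted deterministically.
type FulfillRoute = {
  fulfill(opts: { status: number; contentType: string; body: string }): Promise<void>;
};

function fulfillWithError(status: number) {
  return async (route: FulfillRoute): Promise<void> => {
    await route.fulfill({
      status,
      contentType: 'application/json',
      body: JSON.stringify({ status: 'error', error: 'simulated backend failure' }),
    });
  };
}
// In a spec: await page.route('**/api/v1/roles*', fulfillWithError(500));
// then assert the default copy: "An unexpected error occurred while fetching roles."
```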
#### 6.3 Handle Network Failure **[@admin]**
**Steps:**
1. Simulate network disconnection
2. Navigate to roles listing page
3. Observe behavior
**Expected Results:**
- Appropriate error message for network issues
- User is informed of network problem
- Page remains stable (no crash)
- Retry mechanism may be available (if implemented)
- Error is distinguishable from other errors
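Simulating the network disconnection in step 1 can be done by aborting matched requests; `'internetdisconnected'` is one of Playwright's documented `route.abort` error codes, while the broad `**/api/**` glob is an assumption:

```typescript
// Fail matched requests as if the network dropped, so the network-error copy
// can be distinguished from a plain HTTP error state.
type AbortRoute = { abort(errorCode?: string): Promise<void> };

const dropConnection = async (route: AbortRoute): Promise<void> => {
  await route.abort('internetdisconnected');
};
// In a spec: await page.route('**/api/**', dropConnection) before navigating.
```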
## Quality Standards
- **Reproducibility**: All scenarios can be executed independently and repeatedly
- **Clarity**: Steps are specific enough for any tester to follow without ambiguity
- **Coverage**: Scenarios cover happy path, edge cases, error conditions, and security aspects
- **Traceability**: Each test links back to specific roles listing functionality
- **Automation-Ready**: Steps are written to facilitate Playwright test automation
## Environment Setup
- **Prerequisites**: Valid SigNoz workspace with admin access
- **Test Data**: A mix of managed and custom roles — more than 20 in total, so the pagination scenarios can run
- **User Accounts**: Test users with different permission levels (admin, editor, viewer)
- **Feature Flags**: `IS_ROLE_DETAILS_AND_CRUD_ENABLED` set appropriately for each test
## Success Criteria
- All role listing functionality works correctly for admin users
- Non-admin users are properly restricted from accessing roles page
- API data accurately reflects in the UI with proper formatting
- Search and pagination handle large datasets efficiently
- Loading and error states provide appropriate user feedback
- Accessibility standards are met for keyboard and screen reader users
- Security measures prevent unauthorized access and data exposure
- URL state synchronization works correctly with browser navigation
- Performance remains acceptable with 100+ roles
- Feature flag correctly controls role details navigation behavior
## Notes
- The roles listing is read-only (no CRUD operations in current scope)
- Role details page navigation is feature-flagged (currently disabled)
- API requires admin role authentication (SecuritySchemes: RoleAdmin)
- Pagination shows 20 roles per page (PAGE_SIZE constant)
- Role types are case-insensitive: "managed" and "custom"
- Timestamps use timezone-adjusted formatting based on user preferences
- Description field supports line-clamping with tooltip for long text
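The timezone-adjusted timestamp note above can be sketched with `Intl.DateTimeFormat`; the format options and the idea of feeding in the user's preferred zone are assumptions, not the app's actual implementation:

```typescript
// Render an epoch timestamp in the user's preferred timezone. The locale and
// date/time styles here are illustrative placeholders.
function formatTimestamp(epochMs: number, timeZone: string): string {
  return new Intl.DateTimeFormat('en-US', {
    timeZone,
    dateStyle: 'medium',
    timeStyle: 'short',
  }).format(new Date(epochMs));
}
```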

@@ -0,0 +1,39 @@
import os
import subprocess
from pathlib import Path
from typing import List

from fixtures import types
from fixtures.auth import USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD


def test_e2e(
    signoz: types.SigNoz,
    create_user_admin: types.Operation,  # pylint: disable=unused-argument
    seed_dashboards: List[str],  # pylint: disable=unused-argument
    seed_alert_rules: List[str],  # pylint: disable=unused-argument
    seed_e2e_telemetry: None,  # pylint: disable=unused-argument
    seeder: types.TestContainerDocker,
) -> None:
    """
    One-command e2e: pytest brings up the backend + seeds via the fixture graph,
    then shells out to `yarn test` so Playwright runs against the provisioned
    instance. Intended as the primary CI entrypoint.
    """
    e2e_dir = Path(__file__).resolve().parents[2]  # bootstrap/ -> src/ -> e2e/
    host_cfg = signoz.self.host_configs["8080"]
    seeder_cfg = seeder.host_configs["8080"]
    env = {
        **os.environ,
        "SIGNOZ_E2E_BASE_URL": host_cfg.base(),
        "SIGNOZ_E2E_USERNAME": USER_ADMIN_EMAIL,
        "SIGNOZ_E2E_PASSWORD": USER_ADMIN_PASSWORD,
        "SIGNOZ_E2E_SEEDER_URL": seeder_cfg.base(),
    }
    result = subprocess.run(
        ["yarn", "test"],
        cwd=str(e2e_dir),
        env=env,
        check=False,
    )
    assert result.returncode == 0, f"Playwright exited with code {result.returncode}"


@@ -0,0 +1,52 @@
import json
import os
from pathlib import Path
from typing import List

from fixtures import types
from fixtures.auth import USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD


def _endpoint_file() -> Path:
    override = os.environ.get("SIGNOZ_E2E_ENDPOINT_FILE")
    if override:
        return Path(override)
    # tests/e2e/src/bootstrap/setup.py -> tests/e2e/.signoz-backend.json
    return Path(__file__).resolve().parents[2] / ".signoz-backend.json"


def test_setup(
    signoz: types.SigNoz,
    create_user_admin: types.Operation,  # pylint: disable=unused-argument
    seed_dashboards: List[str],  # pylint: disable=unused-argument
    seed_alert_rules: List[str],  # pylint: disable=unused-argument
    seed_e2e_telemetry: None,  # pylint: disable=unused-argument
    seeder: types.TestContainerDocker,
) -> None:
    """
    Bring the SigNoz backend up, register the admin, seed API fixtures and
    telemetry, start the HTTP seeder container, and persist endpoint
    coordinates for the Playwright side.
    """
    host_cfg = signoz.self.host_configs["8080"]
    seeder_cfg = seeder.host_configs["8080"]
    out = _endpoint_file()
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(
        json.dumps(
            {
                "base_url": host_cfg.base(),
                "admin_email": USER_ADMIN_EMAIL,
                "admin_password": USER_ADMIN_PASSWORD,
                "seeder_url": seeder_cfg.base(),
            }
        )
    )


def test_teardown(
    signoz: types.SigNoz,  # pylint: disable=unused-argument
    create_user_admin: types.Operation,  # pylint: disable=unused-argument
    seeder: types.TestContainerDocker,  # pylint: disable=unused-argument
) -> None:
    """Companion to test_setup — invoked with --teardown to free containers."""

File diff suppressed because it is too large


@@ -0,0 +1,886 @@
// spec: specs/alerts-downtime/SUITE.md
// Playwright replay of platform-pod/issues/2095 alerts + planned-downtime
// regression suite. Derived from run-3 selectors; per-run artifact history
// lives outside git.
//
// Run: yarn test tests/alerts-downtime/alerts-downtime.spec.ts
// baseURL + storageState come from playwright.config.ts; env is populated by
// the pytest bootstrap (or .env for staging mode). The 2095 flows mutate
// shared tenant state, so run them serially regardless of config-level
// fullyParallel.
//
// Artifacts: network captures + screenshots land in run-spec-<ts>/ next to
// this file (or in RUN_OUTPUT_DIR if set) — gitignored.
import { test, expect, Page } from '@playwright/test';
import fs from 'node:fs';
import path from 'node:path';
const E2E_TAG = `e2e-2095-${Math.floor(Date.now() / 1000)}`;
const RUN_DIR = process.env.RUN_OUTPUT_DIR
? path.resolve(process.env.RUN_OUTPUT_DIR)
: path.join(__dirname, `run-spec-${Date.now()}`);
const NET_DIR = path.join(RUN_DIR, 'network');
const SHOT_DIR = path.join(RUN_DIR, 'screenshots');
fs.mkdirSync(NET_DIR, { recursive: true });
fs.mkdirSync(SHOT_DIR, { recursive: true });
// Records every /api/ request + response as a JSON file matching run-3's schema.
function installCapture(page: Page): {
  mark(): number;
  dumpSince(from: number, filename: string, step: string, flow: string, filter?: RegExp): Promise<void>;
} {
  const log: Array<{
    method: string;
    url: string;
    reqHeaders: Record<string, string>;
    reqBody: unknown;
    status: number;
    respHeaders: Record<string, string>;
    respBody: unknown;
  }> = [];
  page.on('response', async (resp) => {
    const req = resp.request();
    const url = req.url();
    if (!url.includes('/api/')) return;
    let reqBody: unknown = req.postDataJSON?.();
    if (reqBody === undefined) {
      const raw = req.postData();
      if (raw) {
        try { reqBody = JSON.parse(raw); } catch { reqBody = raw; }
      } else {
        reqBody = null;
      }
    }
    let respBody: unknown;
    try {
      const text = await resp.text();
      try { respBody = JSON.parse(text); } catch { respBody = text; }
    } catch { respBody = null; }
    log.push({
      method: req.method(),
      url,
      reqHeaders: req.headers(),
      reqBody,
      status: resp.status(),
      respHeaders: resp.headers(),
      respBody,
    });
  });
  return {
    mark: () => log.length,
    dumpSince: async (from, filename, step, flow, filter) => {
      await page.waitForTimeout(500); // let listeners drain
      const slice = log.slice(from).filter(e => !filter || filter.test(e.url));
      const entries = slice.map(e => ({
        request: { method: e.method, url: e.url, headers: e.reqHeaders, body: e.reqBody },
        response: { status: e.status, headers: e.respHeaders, body: e.respBody },
        step, flow,
      }));
      if (entries.length === 0) return;
      // If multiple, write _a, _b, _c... Otherwise just one file.
      if (entries.length === 1) {
        fs.writeFileSync(path.join(NET_DIR, filename), JSON.stringify(entries[0], null, 2));
      } else {
        entries.forEach((entry, i) => {
          const suffix = String.fromCharCode(97 + i); // a, b, c...
          const parts = filename.split('.');
          const stem = parts.slice(0, -1).join('.');
          const ext = parts[parts.length - 1];
          fs.writeFileSync(path.join(NET_DIR, `${stem}_${suffix}.${ext}`), JSON.stringify(entry, null, 2));
        });
      }
    },
  };
}
async function shot(page: Page, name: string): Promise<void> {
  await page.screenshot({ path: path.join(SHOT_DIR, name) });
}
test.describe('SUITE.md — platform-pod/issues/2095 regression', () => {
// Serial: 2095 flows mutate shared tenant state (one flow's rules show up in
// another flow's list; toasts from test A block clicks in test B).
test.describe.configure({ mode: 'serial' });
test('Flow 1 — alerts list, toggle, delete (depends on Flow 2 create)', async ({ page }) => {
const cap = installCapture(page);
// Seed: create a rule via the list's 'New Alert Rule' flow.
// Register mark BEFORE navigation so installCapture catches the load-time
// GET /api/v2/rules even when the response lands before any waitForResponse
// could register. dumpSince's 500ms drain covers late arrivals.
{
const mark = cap.mark();
await page.goto(`/alerts?tab=AlertRules`);
await shot(page, '01_step1_rules-list-empty.png');
await cap.dumpSince(mark, '01_step1.1_GET_rules.json', '1.1', 'flow-1', /\/api\/v2\/rules$/);
}
// Seed via direct fetch — UI metric/channel pickers are unreliable from the CLI too
// (Ant Select onChange is brittle under test-runner speed). Same pattern as Flow 5.
const seedId = await page.evaluate(async ({ name }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const body = {
alert: name,
alertType: 'METRIC_BASED_ALERT',
ruleType: 'threshold_rule',
condition: {
thresholds: { kind: 'basic', spec: [{ name: 'critical', target: 0, matchType: '1', op: '1', channels: [], targetUnit: '' }] },
compositeQuery: {
queryType: 'builder', panelType: 'graph',
queries: [{
type: 'builder_query',
spec: {
name: 'A', signal: 'metrics', source: '', stepInterval: null, disabled: false,
filter: { expression: '' }, having: { expression: '' },
aggregations: [{ metricName: 'app.currency_counter', timeAggregation: 'rate', spaceAggregation: 'sum' }],
},
}],
},
selectedQueryName: 'A',
alertOnAbsent: false,
requireMinPoints: false,
},
annotations: { description: 'spec.ts flow-1', summary: 'spec.ts flow-1' },
labels: {},
notificationSettings: { groupBy: [], usePolicy: true, renotify: { enabled: false, interval: '30m', alertStates: [] } },
evaluation: { kind: 'rolling', spec: { evalWindow: '5m0s', frequency: '1m' } },
schemaVersion: 'v2alpha1', source: 'spec.ts-flow-1', version: 'v5',
};
const resp = await fetch('/api/v2/rules', {
method: 'POST',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify(body),
});
const json = await resp.json();
if (resp.status !== 201) throw new Error(`flow-1 seed POST: ${resp.status} ${JSON.stringify(json)}`);
return json.data.id as string;
}, { name: `${E2E_TAG}-create` });
void seedId; // rule id not needed for UI assertions below
await cap.dumpSince(0, '02_step2.4_POST_rules.json', '2.4', 'flow-2', /\/api\/v2\/rules$/);
await page.goto(`/alerts?tab=AlertRules`);
// Open action menu
await page.locator('tbody tr', { hasText: `${E2E_TAG}-create` }).locator('.ant-dropdown-trigger, .dropdown-button').click();
await expect(page.getByRole('menuitem', { name: /^disable$/i })).toBeVisible();
await shot(page, '01_step2_action-menu.png');
// Disable
const markDisable = cap.mark();
await page.getByRole('menuitem', { name: /^disable$/i }).click();
await page.waitForResponse(r => r.url().includes('/api/v2/rules/') && r.request().method() === 'PATCH');
await cap.dumpSince(markDisable, '01_step1.3_PATCH_rules.json', '1.3', 'flow-1', /\/api\/v2\/rules\//);
await expect(page.locator('tbody tr', { hasText: `${E2E_TAG}-create` })).toContainText(/disabled/i);
await shot(page, '01_step3_disabled.png');
// Enable
await page.locator('tbody tr', { hasText: `${E2E_TAG}-create` }).locator('.ant-dropdown-trigger, .dropdown-button').click();
const markEnable = cap.mark();
await page.getByRole('menuitem', { name: /^enable$/i }).click();
await page.waitForResponse(r => r.url().includes('/api/v2/rules/') && r.request().method() === 'PATCH');
await cap.dumpSince(markEnable, '01_step1.4_PATCH_rules.json', '1.4', 'flow-1', /\/api\/v2\/rules\//);
await shot(page, '01_step4_enabled-again.png');
// Delete
await page.locator('tbody tr', { hasText: `${E2E_TAG}-create` }).locator('.ant-dropdown-trigger, .dropdown-button').click();
const markDel = cap.mark();
await page.getByRole('menuitem', { name: /^delete$/i }).click();
await page.waitForResponse(r => r.url().includes('/api/v2/rules/') && r.request().method() === 'DELETE');
await cap.dumpSince(markDel, '01_step1.5_DELETE_rules.json', '1.5', 'flow-1', /\/api\/v2\/rules\//);
// Assert the specific E2E_TAG row is gone. A tenant-wide "no alert rules yet"
// assertion is unreliable because other tests / leftover rules may coexist.
await expect(page.locator('tbody tr', { hasText: `${E2E_TAG}-create` })).toHaveCount(0);
await shot(page, '01_step5_after-delete-all.png');
});
test('Flow 2 — create, edit, clone, labels round-trip', async ({ page }) => {
const cap = installCapture(page);
// Navigate to establish the origin for localStorage/cookies before direct-fetch.
await page.goto(`/alerts?tab=AlertRules`);
// 2.8 — create with labels via direct fetch (metric/channel UI pickers are too brittle
// in sequential CLI runs for load-bearing creates). We assert on the BE roundtrip.
const labeledId = await page.evaluate(async ({ name }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const body = {
alert: name,
alertType: 'METRIC_BASED_ALERT',
ruleType: 'threshold_rule',
condition: {
thresholds: { kind: 'basic', spec: [{ name: 'critical', target: 0, matchType: '1', op: '1', channels: [], targetUnit: '' }] },
compositeQuery: {
queryType: 'builder', panelType: 'graph',
queries: [{
type: 'builder_query',
spec: {
name: 'A', signal: 'metrics', source: '', stepInterval: null, disabled: false,
filter: { expression: '' }, having: { expression: '' },
aggregations: [{ metricName: 'app.currency_counter', timeAggregation: 'rate', spaceAggregation: 'sum' }],
},
}],
},
selectedQueryName: 'A',
alertOnAbsent: false,
requireMinPoints: false,
},
annotations: { description: `${name}-desc`, summary: `${name}-summary` },
labels: { env: 'prod', severity: 'warn' },
notificationSettings: { groupBy: [], usePolicy: true, renotify: { enabled: false, interval: '30m', alertStates: [] } },
evaluation: { kind: 'rolling', spec: { evalWindow: '5m0s', frequency: '1m' } },
schemaVersion: 'v2alpha1', source: 'spec.ts-flow-2', version: 'v5',
};
const resp = await fetch('/api/v2/rules', {
method: 'POST',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify(body),
});
const json = await resp.json();
if (resp.status !== 201) throw new Error(`flow-2 labels POST: ${resp.status} ${JSON.stringify(json)}`);
return json.data.id as string;
}, { name: `${E2E_TAG}-labels` });
await cap.dumpSince(0, '02_step2.8_POST_rules.json', '2.8', 'flow-2', /\/api\/v2\/rules$/);
await page.goto(`/alerts?tab=AlertRules`);
await expect(page.getByText(`${E2E_TAG}-labels`)).toBeVisible();
await shot(page, '02_step8_labels-create.png');
// 2.9 — hydration: visit the overview URL directly and confirm label pills render.
await page.goto(`/alerts/overview?ruleId=${labeledId}`);
await expect(page.getByTestId(/label-pill-env-prod/)).toBeVisible();
await expect(page.getByTestId(/label-pill-severity-warn/)).toBeVisible();
await shot(page, '02_step9_labels-hydrate.png');
// 2.10 — remove severity label via PUT (bypasses label-input remove-button UI which
// relies on a testid that may not be present in edit mode across all versions).
await page.evaluate(async ({ id, name }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const body = {
alert: name,
alertType: 'METRIC_BASED_ALERT',
ruleType: 'threshold_rule',
condition: {
thresholds: { kind: 'basic', spec: [{ name: 'critical', target: 0, matchType: '1', op: '1', channels: [], targetUnit: '' }] },
compositeQuery: {
queryType: 'builder', panelType: 'graph',
queries: [{
type: 'builder_query',
spec: {
name: 'A', signal: 'metrics', source: '', stepInterval: null, disabled: false,
filter: { expression: '' }, having: { expression: '' },
aggregations: [{ metricName: 'app.currency_counter', timeAggregation: 'rate', spaceAggregation: 'sum' }],
},
}],
},
selectedQueryName: 'A',
alertOnAbsent: false,
requireMinPoints: false,
},
annotations: { description: `${name}-desc`, summary: `${name}-summary` },
labels: { env: 'prod' },
notificationSettings: { groupBy: [], usePolicy: true, renotify: { enabled: false, interval: '30m', alertStates: [] } },
evaluation: { kind: 'rolling', spec: { evalWindow: '5m0s', frequency: '1m' } },
schemaVersion: 'v2alpha1', source: 'spec.ts-flow-2', version: 'v5',
};
await fetch(`/api/v2/rules/${id}`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify(body),
});
}, { id: labeledId, name: `${E2E_TAG}-labels` });
await cap.dumpSince(0, '02_step2.10_PUT_rules.json', '2.10', 'flow-2', new RegExp(`/api/v2/rules/${labeledId}`));
await page.goto(`/alerts/overview?ruleId=${labeledId}`);
await expect(page.getByTestId(/label-pill-env-prod/)).toBeVisible();
await expect(page.getByTestId(/label-pill-severity-warn/)).toHaveCount(0);
await shot(page, '02_step10_labels-after-edit.png');
// Cleanup
await page.evaluate(async ({ id }) => {
const token = localStorage.getItem('AUTH_TOKEN');
await fetch(`/api/v2/rules/${id}`, {
method: 'DELETE',
headers: { Authorization: `Bearer ${token}` },
});
}, { id: labeledId });
});
test('Flow 2 — Test Notification (2.11 success, 2.12 empty-result, 2.13 disabled-while-invalid)', async ({ page }) => {
const cap = installCapture(page);
await page.goto(`/alerts/new`);
// 2.13 disabled pre-state — fresh form, no name, no metric
const testBtn = page.getByRole('button', { name: /test notification/i });
await expect(testBtn).toBeDisabled();
await shot(page, '02_step13_disabled-empty-form.png');
// 2.11 / 2.12 — direct-fetch POST /api/v2/rules/test. Driving the V2 form's metric +
// channel pickers via CLI is brittle (Ant Select onChange behavior varies); the API
// contract is what matters for this flow's regression probe. UI-driven enable-after-fill
// for 2.13 is covered via run-5's interactive replay.
const buildTestBody = (target: number) => ({
alert: `${E2E_TAG}-test-notif`,
alertType: 'METRIC_BASED_ALERT',
ruleType: 'threshold_rule',
condition: {
thresholds: { kind: 'basic', spec: [{ name: 'critical', target, matchType: '1', op: '1', channels: [], targetUnit: '' }] },
compositeQuery: {
queryType: 'builder', panelType: 'graph',
queries: [{
type: 'builder_query',
spec: {
name: 'A', signal: 'metrics', source: '', stepInterval: null, disabled: false,
filter: { expression: '' }, having: { expression: '' },
aggregations: [{ metricName: 'app.currency_counter', timeAggregation: 'rate', spaceAggregation: 'sum' }],
},
}],
},
selectedQueryName: 'A',
alertOnAbsent: false,
requireMinPoints: false,
},
annotations: { description: `${E2E_TAG}-test-notif`, summary: `${E2E_TAG}-test-notif` },
labels: {},
notificationSettings: { groupBy: [], usePolicy: true, renotify: { enabled: false, interval: '30m', alertStates: [] } },
evaluation: { kind: 'rolling', spec: { evalWindow: '5m0s', frequency: '1m' } },
schemaVersion: 'v2alpha1', source: 'spec.ts-flow-2-test-notif', version: 'v5',
});
const body211 = await page.evaluate(async (body) => {
const token = localStorage.getItem('AUTH_TOKEN');
const resp = await fetch('/api/v2/rules/test', {
method: 'POST',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify(body),
});
return { status: resp.status, body: await resp.json() };
}, buildTestBody(0));
await cap.dumpSince(0, '02_step2.11_POST_rules_test.json', '2.11', 'flow-2', /\/api\/v2\/rules\/test/);
expect(body211.status).toBe(200);
expect(body211.body.data.alertCount).toBeGreaterThan(0);
const body212 = await page.evaluate(async (body) => {
const token = localStorage.getItem('AUTH_TOKEN');
const resp = await fetch('/api/v2/rules/test', {
method: 'POST',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify(body),
});
return { status: resp.status, body: await resp.json() };
}, buildTestBody(1e18));
await cap.dumpSince(0, '02_step2.12_POST_rules_test.json', '2.12', 'flow-2', /\/api\/v2\/rules\/test/);
expect(body212.status).toBe(200);
// NOTE (run-5 finding): /api/v2/rules/test bypasses threshold evaluation via
// WithSendUnmatched() (pkg/query-service/rules/test_notification.go:52-53), so an
// unsatisfiable threshold still yields alertCount >= 1. Assert on the contract only.
expect(body212.body.data).toHaveProperty('alertCount');
});
test('Flow 3 — alert details and AlertNotFound', async ({ page }) => {
const cap = installCapture(page);
// Seed via direct fetch (same reasoning as Flow 1/2-main).
await page.goto(`/alerts?tab=AlertRules`);
const ruleId = await page.evaluate(async ({ name }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const body = {
alert: name,
alertType: 'METRIC_BASED_ALERT',
ruleType: 'threshold_rule',
condition: {
thresholds: { kind: 'basic', spec: [{ name: 'critical', target: 0, matchType: '1', op: '1', channels: [], targetUnit: '' }] },
compositeQuery: {
queryType: 'builder', panelType: 'graph',
queries: [{
type: 'builder_query',
spec: {
name: 'A', signal: 'metrics', source: '', stepInterval: null, disabled: false,
filter: { expression: '' }, having: { expression: '' },
aggregations: [{ metricName: 'app.currency_counter', timeAggregation: 'rate', spaceAggregation: 'sum' }],
},
}],
},
selectedQueryName: 'A',
alertOnAbsent: false,
requireMinPoints: false,
},
annotations: { description: 'spec.ts flow-3', summary: 'spec.ts flow-3' },
labels: { severity: 'warning' },
notificationSettings: { groupBy: [], usePolicy: true, renotify: { enabled: false, interval: '30m', alertStates: [] } },
evaluation: { kind: 'rolling', spec: { evalWindow: '5m0s', frequency: '1m' } },
schemaVersion: 'v2alpha1', source: 'spec.ts-flow-3', version: 'v5',
};
const resp = await fetch('/api/v2/rules', {
method: 'POST',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify(body),
});
const json = await resp.json();
if (resp.status !== 201) throw new Error(`flow-3 seed POST: ${resp.status} ${JSON.stringify(json)}`);
return json.data.id as string;
}, { name: `${E2E_TAG}-details` });
// 3.13.3 — valid overview + history
await page.goto(`/alerts/overview?ruleId=${ruleId}`);
await expect(page.locator('.alert-header__input, [data-testid=alert-name-input]')).toBeVisible();
await shot(page, '03_step1_alert-overview.png');
await page.getByRole('tab', { name: /history/i }).click();
await expect(page.getByText(/total triggered/i)).toBeVisible();
await shot(page, '03_step3_alert-history.png');
// 3.4 — bogus UUID
const markBogus = cap.mark();
await page.goto(`/alerts/overview?ruleId=00000000-0000-0000-0000-000000000000`);
await expect(page).toHaveTitle('Alert Not Found');
await cap.dumpSince(markBogus, '03_step3.4_GET_rules.json', '3.4', 'flow-3', /\/api\/v2\/rules\//);
await shot(page, '03_step4_404-bogus-uuid.png');
// 3.5 — missing ruleId
await page.goto(`/alerts/overview`);
await expect(page.getByText(/we couldn'?t find/i)).toBeVisible();
await shot(page, '03_step5_404-missing-ruleId.png');
// 3.6 — delete via direct fetch, then revisit
await page.evaluate(async ({ id }) => {
const token = localStorage.getItem('AUTH_TOKEN');
await fetch(`/api/v2/rules/${id}`, { method: 'DELETE', headers: { Authorization: `Bearer ${token}` } });
}, { id: ruleId });
await cap.dumpSince(0, '03_step3.6_DELETE_rules.json', '3.6', 'flow-3', new RegExp(`/api/v2/rules/${ruleId}`));
await page.goto(`/alerts/overview?ruleId=${ruleId}`);
await expect(page).toHaveTitle('Alert Not Found');
await shot(page, '03_step6_404-deleted-rule.png');
});
test('Flow 4 — planned downtime CRUD', async ({ page }) => {
const cap = installCapture(page);
// 4.1a — direct URL.
// The "no data" copy is tenant-state-dependent; assert the list renders (header row) instead.
{
const mark = cap.mark();
await page.goto(`/alerts?tab=Configuration&subTab=planned-downtime`);
await expect(page.locator('table, .ant-spin').first()).toBeVisible();
await cap.dumpSince(mark, '04_step4.1a_GET_downtime_schedules.json', '4.1a', 'flow-4', /\/api\/v1\/downtime_schedules/);
await shot(page, '04_step1a_direct-url.png');
}
// 4.1b — tab click
await page.goto(`/alerts?tab=AlertRules`);
await page.getByRole('tab', { name: /configuration/i }).click();
await expect(page.locator('table, .ant-spin').first()).toBeVisible();
await shot(page, '04_step1b_tab-click.png');
// 4.3 — empty-form validation (click Add with just the name)
await page.getByRole('button', { name: /new downtime/i }).click();
await page.locator('#create-form_name').fill(`${E2E_TAG}-downtime-once`);
await page.getByRole('button', { name: /add downtime schedule/i }).click();
await expect(page.getByText(/please enter ends on/i)).toBeVisible();
await shot(page, '04_step3_validation.png');
// 4.2 — create via direct fetch.
// The Ant DatePicker calendar-cell clicks are unreliable (cells-in-view index varies
// across months; title-based selectors require tomorrow's date to be computed in the
// displayed timezone). The 2095 refactor doesn't touch the DatePicker logic; UI-probing
// this step adds flakiness without improving coverage. We skip the calendar UI and
// POST directly. The list assertions below still verify the BE roundtrip.
await page.keyboard.press('Escape');
const downtimeId = await page.evaluate(async ({ name }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const now = Date.now();
const body = {
name,
description: 'spec.ts downtime',
schedule: {
timezone: 'UTC',
startTime: new Date(now).toISOString(),
endTime: new Date(now + 24 * 60 * 60 * 1000).toISOString(),
recurrence: null,
},
alertIds: [],
};
const resp = await fetch('/api/v1/downtime_schedules', {
method: 'POST',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify(body),
});
const json = await resp.json();
if (resp.status >= 300) throw new Error(`POST /downtime_schedules: ${resp.status} ${JSON.stringify(json)}`);
return json.data?.id ?? json.id;
}, { name: `${E2E_TAG}-downtime-once` });
await cap.dumpSince(0, '04_step4.2_POST_downtime_schedules.json', '4.2', 'flow-4', /\/api\/v1\/downtime_schedules/);
await page.goto(`/alerts?tab=Configuration&subTab=planned-downtime`);
// The downtime list uses accordion/card layout, not a real <tr>. Assert by visible text.
await expect(page.getByText(`${E2E_TAG}-downtime-once`)).toBeVisible();
await shot(page, '04_step2_after-create.png');
// 4.4 — edit via direct fetch (same reasoning as 4.2: the pencil icon is a lucide SVG
// that historically requires DOM injection to be reliably clickable — run-4 documented
// this. UI-probing adds flake without covering 2095 refactor scope).
await page.evaluate(async ({ id, name }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const now = Date.now();
const body = {
name,
description: 'spec.ts downtime edited',
schedule: {
timezone: 'UTC',
startTime: new Date(now).toISOString(),
endTime: new Date(now + 24 * 60 * 60 * 1000).toISOString(),
recurrence: null,
},
alertIds: [],
};
const resp = await fetch(`/api/v1/downtime_schedules/${id}`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify(body),
});
if (resp.status >= 300) {
const j = await resp.text();
throw new Error(`PUT /downtime_schedules: ${resp.status} ${j}`);
}
}, { id: downtimeId, name: `${E2E_TAG}-downtime-edited` });
await cap.dumpSince(0, '04_step4.4_PUT_downtime_schedules.json', '4.4', 'flow-4', new RegExp(`/api/v1/downtime_schedules/${downtimeId}`));
await page.reload();
await expect(page.getByText(`${E2E_TAG}-downtime-edited`)).toBeVisible();
await shot(page, '04_step4_after-edit.png');
// 4.5 — delete via direct fetch; verify UI reflects the delete.
await page.evaluate(async ({ id }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const resp = await fetch(`/api/v1/downtime_schedules/${id}`, {
method: 'DELETE',
headers: { Authorization: `Bearer ${token}` },
});
if (resp.status >= 300) throw new Error(`DELETE /downtime_schedules: ${resp.status}`);
}, { id: downtimeId });
await cap.dumpSince(0, '04_step4.5_DELETE_downtime_schedules.json', '4.5', 'flow-4', new RegExp(`/api/v1/downtime_schedules/${downtimeId}`));
await page.reload();
await expect(page.getByText(`${E2E_TAG}-downtime-edited`)).toHaveCount(0);
await shot(page, '04_step5_after-delete.png');
});
test('Flow 6 — anomaly alerts (6.1 type-selection, 6.2 classic-form entry, 6.4 create, 6.5 edit z-score, 6.6 toggle, 6.7 delete, 6.8 AlertNotFound)', async ({ page }) => {
const cap = installCapture(page);
// 6.1 — type-selection page
await page.goto(`/alerts/type-selection`);
const anomalyCard = page.getByTestId('alert-type-card-ANOMALY_BASED_ALERT');
await expect(anomalyCard).toBeVisible();
await expect(anomalyCard.getByText('Beta')).toBeVisible();
await shot(page, '06_step6.1_type-selection-cards.png');
// 6.2 — click Anomaly card → classic form with anomaly tab selected
await anomalyCard.click();
await page.waitForURL(/ruleType=anomaly_rule.*alertType=METRIC_BASED_ALERT/);
const anomalyTabBtn = page.locator('button[value="anomaly_rule"]');
await expect(anomalyTabBtn).toHaveClass(/selected/);
// Confirm classic, not V2
expect(await page.locator('.create-alert-v2-footer').count()).toBe(0);
await shot(page, '06_step6.2_classic-anomaly-form.png');
// 6.4 — create via direct fetch (UI Ant Select metric/channel pickers are unreliable from MCP).
// Pre-convert namedArgs → args:[{name,value}] because v5 builder spec rejects namedArgs.
const ruleId = await page.evaluate(async ({ name }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const body = {
alert: name,
alertType: 'METRIC_BASED_ALERT',
ruleType: 'anomaly_rule',
condition: {
thresholds: { kind: 'basic', spec: [{ name: 'critical', target: 3, matchType: '1', op: '1', channels: [], targetUnit: '' }] },
compositeQuery: {
queryType: 'builder',
panelType: 'graph',
queries: [{
type: 'builder_query',
spec: {
name: 'A', signal: 'metrics', source: '', stepInterval: null, disabled: false,
filter: { expression: '' }, having: { expression: '' },
aggregations: [{ metricName: 'app.currency_counter', timeAggregation: 'rate', spaceAggregation: 'sum' }],
functions: [{ name: 'anomaly', args: [{ name: 'z_score_threshold', value: 3 }] }],
},
}],
},
selectedQueryName: 'A',
alertOnAbsent: false,
requireMinPoints: false,
algorithm: 'standard',
seasonality: 'hourly',
},
annotations: { description: 'spec.ts anomaly', summary: 'spec.ts anomaly' },
labels: { severity: 'warning' },
notificationSettings: { groupBy: [], usePolicy: true, renotify: { enabled: false, interval: '30m', alertStates: [] } },
evaluation: { kind: 'rolling', spec: { evalWindow: '5m0s', frequency: '1m' } },
schemaVersion: 'v2alpha1',
source: 'spec.ts-flow-6',
version: 'v5',
};
const resp = await fetch('/api/v2/rules', {
method: 'POST',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify(body),
});
const json = await resp.json();
if (resp.status !== 201) throw new Error(`POST /api/v2/rules failed: ${resp.status} ${JSON.stringify(json)}`);
return json.data.id as string;
}, { name: `${E2E_TAG}-anomaly` });
await cap.dumpSince(0, '06_step6.4_POST_rules.json', '6.4', 'flow-6', /\/api\/v2\/rules$/);
// 6.5 — PUT z-score 3→5
await page.evaluate(async ({ id, name }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const body = {
alert: name,
alertType: 'METRIC_BASED_ALERT',
ruleType: 'anomaly_rule',
condition: {
thresholds: { kind: 'basic', spec: [{ name: 'critical', target: 5, matchType: '1', op: '1', channels: [], targetUnit: '' }] },
compositeQuery: {
queryType: 'builder', panelType: 'graph',
queries: [{
type: 'builder_query',
spec: {
name: 'A', signal: 'metrics', source: '', stepInterval: null, disabled: false,
filter: { expression: '' }, having: { expression: '' },
aggregations: [{ metricName: 'app.currency_counter', timeAggregation: 'rate', spaceAggregation: 'sum' }],
functions: [{ name: 'anomaly', args: [{ name: 'z_score_threshold', value: 5 }] }],
},
}],
},
selectedQueryName: 'A',
alertOnAbsent: false,
requireMinPoints: false,
algorithm: 'standard',
seasonality: 'hourly',
},
annotations: { description: 'spec.ts anomaly', summary: 'spec.ts anomaly' },
labels: { severity: 'warning' },
notificationSettings: { groupBy: [], usePolicy: true, renotify: { enabled: false, interval: '30m', alertStates: [] } },
evaluation: { kind: 'rolling', spec: { evalWindow: '5m0s', frequency: '1m' } },
schemaVersion: 'v2alpha1', source: 'spec.ts-flow-6', version: 'v5',
};
const resp = await fetch(`/api/v2/rules/${id}`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify(body),
});
if (resp.status !== 204) throw new Error(`PUT /api/v2/rules/${id} failed: ${resp.status}`);
}, { id: ruleId, name: `${E2E_TAG}-anomaly` });
await cap.dumpSince(0, '06_step6.5_PUT_rules.json', '6.5', 'flow-6', new RegExp(`/api/v2/rules/${ruleId}`));
// 6.6 — detection-method toggle is asymmetric: anomaly → threshold transitions classic → V2.
// (See run-6 RUN_REPORT.md observation #1. SUITE.md may be amended to reflect this.)
await page.goto(`/alerts/new?ruleType=anomaly_rule&alertType=METRIC_BASED_ALERT`);
const thresholdTabBtn = page.locator('button[value="threshold_rule"]');
await thresholdTabBtn.click();
await expect(page).toHaveURL(/ruleType=threshold_rule/);
// V2 footer is now present, detection-method tabs are gone — no return path
await expect(page.locator('.create-alert-v2-footer')).toBeVisible();
await expect(page.locator('button[value="anomaly_rule"]')).toHaveCount(0);
// 6.7 — DELETE
await page.evaluate(async ({ id }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const resp = await fetch(`/api/v2/rules/${id}`, {
method: 'DELETE',
headers: { Authorization: `Bearer ${token}` },
});
if (resp.status !== 204) throw new Error(`DELETE /api/v2/rules/${id} failed: ${resp.status}`);
}, { id: ruleId });
await cap.dumpSince(0, '06_step6.7_DELETE_rules.json', '6.7', 'flow-6', new RegExp(`/api/v2/rules/${ruleId}`));
// 6.8 — AlertNotFound for the deleted anomaly rule
await page.goto(`/alerts/overview?ruleId=${ruleId}`);
await expect(page).toHaveTitle('Alert Not Found');
await shot(page, '06_step6.8_alert-not-found.png');
// 6.9 — POST /api/v2/rules/test with the anomaly DTO. The classic anomaly form has no
// Test Notification button (V2-only feature), so this is a direct-fetch API-contract probe.
// Same SendUnmatched bypass as run-5: alertCount: 0 is reachable only via a zero-data query.
const test69 = await page.evaluate(async ({ name }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const body = {
alert: name,
alertType: 'METRIC_BASED_ALERT',
ruleType: 'anomaly_rule',
condition: {
thresholds: { kind: 'basic', spec: [{ name: 'critical', target: 3, matchType: '1', op: '1', channels: [], targetUnit: '' }] },
compositeQuery: {
queryType: 'builder', panelType: 'graph',
queries: [{
type: 'builder_query',
spec: {
name: 'A', signal: 'metrics', source: '', stepInterval: null, disabled: false,
filter: { expression: '' }, having: { expression: '' },
aggregations: [{ metricName: 'app.currency_counter', timeAggregation: 'rate', spaceAggregation: 'sum' }],
functions: [{ name: 'anomaly', args: [{ name: 'z_score_threshold', value: 3 }] }],
},
}],
},
selectedQueryName: 'A',
alertOnAbsent: false,
requireMinPoints: false,
algorithm: 'standard',
seasonality: 'hourly',
},
annotations: { description: 'spec.ts anomaly test-notification', summary: 'spec.ts anomaly test-notification' },
labels: { severity: 'warning' },
notificationSettings: { groupBy: [], usePolicy: true, renotify: { enabled: false, interval: '30m', alertStates: [] } },
evaluation: { kind: 'rolling', spec: { evalWindow: '5m0s', frequency: '1m' } },
schemaVersion: 'v2alpha1', source: 'spec.ts-flow-6-step6.9', version: 'v5',
};
const resp = await fetch('/api/v2/rules/test', {
method: 'POST',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify(body),
});
const json = await resp.json();
return { status: resp.status, body: json };
}, { name: `${E2E_TAG}-anomaly-test` });
expect(test69.status).toBe(200);
expect(test69.body.data).toHaveProperty('alertCount');
await cap.dumpSince(0, '06_step6.9_POST_rules_test.json', '6.9', 'flow-6', /\/api\/v2\/rules\/test/);
});
test('Flow 5 — classic experience + cascade-delete error paths', async ({ page }) => {
const cap = installCapture(page);
// 5.1 — switch to classic
await page.goto(`/alerts/new?showClassicCreateAlertsPage=true&ruleType=threshold_rule`);
await expect(page.getByText(/metrics based alert/i)).toBeVisible();
await shot(page, '05_step1_classic-form.png');
// 5.2/5.3 — fill + save classic alert.
// Classic form uses #alert for the name input (not the V2 data-testid).
// Drive via direct fetch for reliability — the classic metric/channel dropdowns are
// interactively hard to pick (see run-3 Flow 5 notes). We still verify the UI renders,
// then POST the rule, then continue exercising UI for downtime linking and cascade delete.
await expect(page.locator('#alert')).toBeVisible();
const classicRuleId = await page.evaluate(async ({ name }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const body = {
alert: name,
alertType: 'METRIC_BASED_ALERT',
ruleType: 'threshold_rule',
condition: {
thresholds: { kind: 'basic', spec: [{ name: 'critical', target: 0, matchType: '1', op: '1', channels: [], targetUnit: '' }] },
compositeQuery: {
queryType: 'builder', panelType: 'graph',
queries: [{
type: 'builder_query',
spec: {
name: 'A', signal: 'metrics', source: '', stepInterval: null, disabled: false,
filter: { expression: '' }, having: { expression: '' },
aggregations: [{ metricName: 'app.currency_counter', timeAggregation: 'rate', spaceAggregation: 'sum' }],
},
}],
},
selectedQueryName: 'A',
alertOnAbsent: false,
requireMinPoints: false,
},
annotations: { description: 'classic e2e', summary: 'classic e2e' },
labels: { severity: 'warning' },
notificationSettings: { groupBy: [], usePolicy: true, renotify: { enabled: false, interval: '30m', alertStates: [] } },
evaluation: { kind: 'rolling', spec: { evalWindow: '5m0s', frequency: '1m' } },
schemaVersion: 'v2alpha1', source: 'spec.ts-flow-5', version: 'v5',
};
const resp = await fetch('/api/v2/rules', {
method: 'POST',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify(body),
});
const json = await resp.json();
if (resp.status !== 201) throw new Error(`classic POST /api/v2/rules failed: ${resp.status} ${JSON.stringify(json)}`);
return json.data.id as string;
}, { name: `${E2E_TAG}-classic` });
await page.goto(`/alerts?tab=AlertRules`);
const classicRow = page.locator('tbody tr', { hasText: `${E2E_TAG}-classic` });
await expect(classicRow).toBeVisible();
await shot(page, '05_step3_classic-alert-on-list.png');
// 5.4 — create downtime linked to the classic alert (direct fetch; see Flow 4 notes).
const linkedDowntimeId = await page.evaluate(async ({ name, alertId }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const now = Date.now();
const body = {
name,
description: 'spec.ts linked downtime',
schedule: {
timezone: 'UTC',
startTime: new Date(now).toISOString(),
endTime: new Date(now + 24 * 60 * 60 * 1000).toISOString(),
recurrence: null,
},
alertIds: [alertId],
};
const resp = await fetch('/api/v1/downtime_schedules', {
method: 'POST',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify(body),
});
const json = await resp.json();
if (resp.status >= 300) throw new Error(`linked POST /downtime_schedules: ${resp.status} ${JSON.stringify(json)}`);
return json.data?.id ?? json.id;
}, { name: `${E2E_TAG}-downtime-linked`, alertId: classicRuleId });
await page.goto(`/alerts?tab=Configuration&subTab=planned-downtime`);
// Downtime list is accordion/card; assert by visible text, not <tr>.
await expect(page.getByText(`${E2E_TAG}-downtime-linked`)).toBeVisible();
// 5.5 — delete the linked alert: expect 409 `already_exists` from the BE.
// We direct-fetch rather than drive the ellipsis-menu → Delete UI so the assertion is
// on the actual BE contract (ddb0cb66e: showErrorModal on convertToApiError). The
// visual modal/toast UX was verified in run-3's full UI replay.
const delRuleResp = await page.evaluate(async ({ id }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const resp = await fetch(`/api/v2/rules/${id}`, {
method: 'DELETE',
headers: { Authorization: `Bearer ${token}` },
});
const text = await resp.text();
let body: any; try { body = JSON.parse(text); } catch { body = text; }
return { status: resp.status, body };
}, { id: classicRuleId });
await cap.dumpSince(0, '05_step5.5_DELETE_rules.json', '5.5', 'flow-5', new RegExp(`/api/v2/rules/${classicRuleId}`));
expect(delRuleResp.status).toBe(409);
expect(delRuleResp.body.error?.code ?? delRuleResp.body.code).toBe('already_exists');
expect(delRuleResp.body.error?.message ?? delRuleResp.body.message).toMatch(/cannot delete rule because it is referenced/i);
// 5.6 — delete the linked downtime: expect 409 with the paired message.
const delDtResp = await page.evaluate(async ({ id }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const resp = await fetch(`/api/v1/downtime_schedules/${id}`, {
method: 'DELETE',
headers: { Authorization: `Bearer ${token}` },
});
const text = await resp.text();
let body: any; try { body = JSON.parse(text); } catch { body = text; }
return { status: resp.status, body };
}, { id: linkedDowntimeId });
await cap.dumpSince(0, '05_step5.6_DELETE_downtime_schedules.json', '5.6', 'flow-5', new RegExp(`/api/v1/downtime_schedules/${linkedDowntimeId}`));
expect(delDtResp.status).toBe(409);
expect(delDtResp.body.error?.code ?? delDtResp.body.code).toBe('already_exists');
expect(delDtResp.body.error?.message ?? delDtResp.body.message).toMatch(/cannot delete planned maintenance because it is referenced/i);
// Cleanup: unlink the downtime (clear alertIds), delete the downtime, delete the rule.
await page.evaluate(async ({ dtId, ruleId, name }) => {
const token = localStorage.getItem('AUTH_TOKEN');
const now = Date.now();
// PUT downtime with alertIds: [] to break the cascade constraint
await fetch(`/api/v1/downtime_schedules/${dtId}`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
body: JSON.stringify({
name,
description: 'spec.ts cleanup — unlinked',
schedule: {
timezone: 'UTC',
startTime: new Date(now).toISOString(),
endTime: new Date(now + 24 * 60 * 60 * 1000).toISOString(),
recurrence: null,
},
alertIds: [],
}),
});
await fetch(`/api/v1/downtime_schedules/${dtId}`, {
method: 'DELETE',
headers: { Authorization: `Bearer ${token}` },
});
await fetch(`/api/v2/rules/${ruleId}`, {
method: 'DELETE',
headers: { Authorization: `Bearer ${token}` },
});
}, { dtId: linkedDowntimeId, ruleId: classicRuleId, name: `${E2E_TAG}-downtime-linked` });
});
});


@@ -0,0 +1,522 @@
// spec: specs/alerts/routing-policies-test-plan-updated.md
// seed: tests/seed.spec.ts
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../utils/login.util';
test.describe('Routing Policies', () => {
test.beforeEach(async ({ page }) => {
// Login to the application
await ensureLoggedIn(page);
// Navigate to Routing Policies through sidebar navigation
await page.locator('svg.lucide-bell-dot').click();
// Navigate to Configuration tab
await page.getByRole('tab', { name: 'Configuration' }).click();
// Navigate to Routing Policies tab
await page.getByRole('tab', { name: 'Routing Policies' }).click();
});
test(
'Navigate to Routing Policies and verify page layout',
{
tag: '@viewer',
},
async ({ page }) => {
// 1. Verify header contains "Routing Policies" title
await expect(
page.getByRole('heading', { name: 'Routing Policies' }),
).toBeVisible();
// 2. Verify search functionality is prominently displayed
const searchBox = page.getByRole('textbox', {
name: 'Search for a routing policy...',
});
await expect(searchBox).toBeVisible();
// 3. Verify "New routing policy" button with plus icon is visible
const newPolicyButton = page.getByRole('button', {
name: 'plus New routing policy',
});
await expect(newPolicyButton).toBeVisible();
// 4. Verify policy list displays in table format
await expect(page.getByRole('table')).toBeVisible();
// 5. Verify pagination controls are present at bottom
await expect(page.getByRole('list')).toBeVisible();
},
);
test(
'Create Routing Policies with Basic and Complex Expressions',
{
tag: '@editor',
},
async ({ page }) => {
// 1. Navigate to Routing Policies page (done in beforeEach)
// 2. Click "New routing policy" button
await page
.getByRole('button', { name: 'plus New routing policy' })
.click();
// 3. Verify "Create routing policy" dialog opens
await expect(
page.getByRole('dialog', { name: 'Create routing policy' }),
).toBeVisible();
// 4. Fill in routing policy name
await page
.getByRole('textbox', { name: 'e.g. Base routing policy...' })
.fill('Critical Payment Alerts');
// 5. Fill in description
await page
.getByRole('textbox', { name: 'e.g. This is a routing policy' })
.fill('Route critical payment service alerts to Slack');
// 6. Enter expression
await page
.getByRole('textbox', { name: 'e.g. service.name == "payment' })
.fill('service.name == "payment" && severity == "critical"');
// 7. Select notification channel from dropdown
await page.locator('.ant-select').click();
await page.locator('.ant-select-item').first().click();
// 8. Click "Save Routing Policy"
await page.getByRole('button', { name: 'Save Routing Policy' }).click();
// 9. Verify success message appears
await expect(
page.getByText('Routing policy created successfully'),
).toBeVisible();
// 10. Create second policy with complex expression
await page
.getByRole('button', { name: 'plus New routing policy' })
.click();
// 11. Enter name for complex policy
await page
.getByRole('textbox', { name: 'e.g. Base routing policy...' })
.fill('Multi-Condition Alert Routing');
// 12. Enter description for complex policy
await page
.getByRole('textbox', { name: 'e.g. This is a routing policy' })
.fill('Route alerts based on multiple conditions');
// 13. Enter complex expression with multiple conditions
const complexExpression =
'(service.name == "payment" || service.name == "billing") && (severity == "critical" || severity == "high") && region == "us-east-1"';
await page
.getByRole('textbox', { name: 'e.g. service.name == "payment' })
.fill(complexExpression);
// 14. Select notification channel for complex policy
await page.locator('.ant-select').click();
await page.locator('.ant-select-item').first().click();
// 15. Save the complex policy
await page.getByRole('button', { name: 'Save Routing Policy' }).click();
// 16. Verify complex policy saves successfully
await expect(
page.getByText('Routing policy created successfully'),
).toBeVisible();
},
);
test(
'Create Policy with Empty Required Fields',
{
tag: '@editor',
},
async ({ page }) => {
// 1. Click "New routing policy" button
await page
.getByRole('button', { name: 'plus New routing policy' })
.click();
// 2. Wait for dialog to be visible
await expect(
page.getByRole('dialog', { name: 'Create routing policy' }),
).toBeVisible();
// 3. Leave name field empty and fill other fields
await page
.getByRole('textbox', { name: 'e.g. service.name == "payment' })
.fill('service.name == "test"');
// 4. Select notification channel
await page.locator('.ant-select').click();
await page.locator('.ant-select-item').first().click();
// 5. Attempt to save without required name
await page.getByRole('button', { name: 'Save Routing Policy' }).click();
// 6. Wait a moment for validation to trigger
await page.waitForTimeout(1000);
// 7. Verify the form doesn't submit and dialog remains open
await expect(
page.getByRole('dialog', { name: 'Create routing policy' }),
).toBeVisible();
// 8. Check that the name field exists and is still empty
const nameField = page.getByRole('textbox', {
name: 'e.g. Base routing policy...',
});
// 9. Verify the field is still empty (indicating form didn't submit)
await expect(nameField).toHaveValue('');
// 10. Verify the specific error message appears
await expect(
page.getByText('Please provide a name for the routing policy'),
).toBeVisible();
// 11. Fill the required name field to verify form can now be submitted
await nameField.fill('Test Policy Name');
// 12. Verify error message disappears after filling the field
await expect(
page.getByText('Please provide a name for the routing policy'),
).toBeHidden();
// 13. Attempt to save again with name filled
await page.getByRole('button', { name: 'Save Routing Policy' }).click();
// 14. Verify successful creation or that we progress past validation
await expect(
page.getByText('Routing policy created successfully'),
).toBeVisible();
},
);
test(
'Cancel Policy Creation',
{
tag: '@editor',
},
async ({ page }) => {
// 1. Click "New routing policy" button
await page
.getByRole('button', { name: 'plus New routing policy' })
.click();
// 2. Fill in some form fields
await page
.getByRole('textbox', { name: 'e.g. Base routing policy...' })
.fill('Test Policy');
await page
.getByRole('textbox', { name: 'e.g. This is a routing policy' })
.fill('Test description');
// 3. Click "Cancel" button
await page.getByRole('button', { name: 'Cancel' }).click();
// 4. Verify dialog closes and returns to main list
await expect(page.getByRole('dialog')).toBeHidden();
await expect(
page.getByRole('heading', { name: 'Routing Policies' }),
).toBeVisible();
},
);
test(
'Search Policies by Name',
{
tag: '@editor',
},
async ({ page }) => {
// 1. Create a test policy first
await page
.getByRole('button', { name: 'plus New routing policy' })
.click();
await page
.getByRole('textbox', { name: 'e.g. Base routing policy...' })
.fill('Searchable Test Policy');
await page
.getByRole('textbox', { name: 'e.g. This is a routing policy' })
.fill('Policy for search testing');
await page
.getByRole('textbox', { name: 'e.g. service.name == "payment' })
.fill('service.name == "search-test"');
await page.locator('.ant-select').click();
await page.locator('.ant-select-item').first().click();
await page.getByRole('button', { name: 'Save Routing Policy' }).click();
// Wait for creation success
await expect(
page.getByText('Routing policy created successfully'),
).toBeVisible();
// 2. Navigate to routing policies page with multiple policies
await page.goto('/alerts?tab=Configuration');
await page.waitForTimeout(2000); // Wait for page load
await page.getByRole('tab', { name: 'Routing Policies' }).click();
// 3. Enter a policy name in the search box
await page
.getByRole('textbox', { name: 'Search for a routing policy...' })
.fill('Searchable Test Policy');
// 4. Press Enter to execute search
await page.keyboard.press('Enter');
// 5. Verify filtered results show only matching policy
await expect(page.getByText('Searchable Test Policy').first()).toBeVisible();
},
);
test(
'Search with No Results',
{
tag: '@viewer',
},
async ({ page }) => {
// 1. Enter a search term that matches no policies
await page
.getByRole('textbox', { name: 'Search for a routing policy...' })
.fill('NonExistentPolicyName12345');
await page.keyboard.press('Enter');
// 2. Verify appropriate empty state or no results message
// Note: The exact behavior would depend on how the application handles no search results
const searchBox = page.getByRole('textbox', {
name: 'Search for a routing policy...',
});
await expect(searchBox).toHaveValue('NonExistentPolicyName12345');
},
);
test(
'View Policy Details',
{
tag: '@editor',
},
async ({ page }) => {
// 1. Create a policy with unique name
const uniquePolicyName = `Test Policy ${Date.now()}`;
await page
.getByRole('button', { name: 'plus New routing policy' })
.click();
await page
.getByRole('textbox', { name: 'e.g. Base routing policy...' })
.fill(uniquePolicyName);
await page
.getByRole('textbox', { name: 'e.g. This is a routing policy' })
.fill('Test description for policy details');
await page
.getByRole('textbox', { name: 'e.g. service.name == "payment' })
.fill('service.name == "test-details"');
await page.locator('.ant-select').click();
await page.locator('.ant-select-item').first().click();
await page.getByRole('button', { name: 'Save Routing Policy' }).click();
await expect(
page.getByText('Routing policy created successfully'),
).toBeVisible();
// 2. Search for the created policy
const searchBox = page.getByRole('textbox', {
name: 'Search for a routing policy...',
});
await searchBox.fill(uniquePolicyName);
await page.keyboard.press('Enter');
// 3. Wait for search results and click on the policy to expand it
await expect(page.getByText(uniquePolicyName)).toBeVisible();
const policyTab = page.getByRole('tab', { name: 'right' }).first();
await policyTab.click();
// 4. Wait for expansion
await page.waitForTimeout(1000);
// 5. Verify all field keys are present
await expect(page.getByText('Created by')).toBeVisible();
await expect(page.getByText('Created on')).toBeVisible();
await expect(page.getByText('Updated by')).toBeVisible();
await expect(page.getByText('Updated on')).toBeVisible();
await expect(page.getByText('Expression')).toBeVisible();
await expect(page.getByText('Description', { exact: true })).toBeVisible();
await expect(page.getByText('Channels')).toBeVisible();
// 6. Verify the specific values we created
await expect(page.getByText(uniquePolicyName)).toBeVisible();
await expect(page.getByText('Test description for policy details')).toBeVisible();
await expect(page.getByText('service.name == "test-details"')).toBeVisible();
},
);
test(
'Edit Existing Policy',
{
tag: '@editor',
},
async ({ page }) => {
// 1. Create a policy to edit first
const uniquePolicyName = `Policy to Edit ${Date.now()}`;
await page
.getByRole('button', { name: 'plus New routing policy' })
.click();
await page
.getByRole('textbox', { name: 'e.g. Base routing policy...' })
.fill(uniquePolicyName);
await page
.getByRole('textbox', { name: 'e.g. This is a routing policy' })
.fill('Original description');
await page
.getByRole('textbox', { name: 'e.g. service.name == "payment' })
.fill('service.name == "original"');
await page.locator('.ant-select').click();
await page.locator('.ant-select-item').first().click();
await page.getByRole('button', { name: 'Save Routing Policy' }).click();
await expect(
page.getByText('Routing policy created successfully'),
).toBeVisible();
// 2. Search for the created policy
const searchBox = page.getByRole('textbox', {
name: 'Search for a routing policy...',
});
await searchBox.fill(uniquePolicyName);
await page.keyboard.press('Enter');
// 3. Wait for search results and click on the policy to expand it
await expect(page.getByText(uniquePolicyName)).toBeVisible();
const policyTab = page.getByRole('tab', { name: 'right' }).first();
await policyTab.click();
// 4. Wait for expansion and click edit button
await page.waitForTimeout(1000);
const editButton = page.getByTestId('edit-routing-policy');
await editButton.click();
// 5. Verify edit dialog opens
await expect(
page.getByRole('dialog', { name: 'Edit routing policy' }),
).toBeVisible();
// 6. Update the title and description
const updatedPolicyName = `Updated ${uniquePolicyName}`;
const nameField = page.getByRole('textbox', { name: 'e.g. Base routing policy...' });
await nameField.clear();
await nameField.fill(updatedPolicyName);
const descriptionField = page.getByRole('textbox', { name: 'e.g. This is a routing policy' });
await descriptionField.clear();
await descriptionField.fill('Updated description after editing');
// 7. Save the changes
await page.getByRole('button', { name: 'Save Routing Policy' }).click();
// 8. Verify success toast message appears
await expect(
page.getByText('Routing policy updated successfully'),
).toBeVisible();
// 9. Search for the updated policy name to ensure it exists
await searchBox.clear();
await searchBox.fill(updatedPolicyName);
await page.keyboard.press('Enter');
// 10. Verify the updated policy is found
await expect(page.getByText(updatedPolicyName)).toBeVisible();
},
);
test(
'Delete Routing Policy',
{
tag: '@admin',
},
async ({ page }) => {
// 1. Create a policy to delete first
const uniquePolicyName = `Policy to Delete ${Date.now()}`;
await page
.getByRole('button', { name: 'plus New routing policy' })
.click();
await page
.getByRole('textbox', { name: 'e.g. Base routing policy...' })
.fill(uniquePolicyName);
await page
.getByRole('textbox', { name: 'e.g. This is a routing policy' })
.fill('This policy will be deleted');
await page
.getByRole('textbox', { name: 'e.g. service.name == "payment' })
.fill('service.name == "delete-test"');
await page.locator('.ant-select').click();
await page.locator('.ant-select-item').first().click();
await page.getByRole('button', { name: 'Save Routing Policy' }).click();
await expect(
page.getByText('Routing policy created successfully'),
).toBeVisible();
// 2. Search for the created policy
const searchBox = page.getByRole('textbox', {
name: 'Search for a routing policy...',
});
await searchBox.fill(uniquePolicyName);
await page.keyboard.press('Enter');
// 3. Wait for search results and click on the policy to expand it
await expect(page.getByText(uniquePolicyName)).toBeVisible();
const policyTab = page.getByRole('tab', { name: 'right' }).first();
await policyTab.click();
// 4. Wait for expansion and click delete button
await page.waitForTimeout(1000);
const deleteButton = page.getByTestId('delete-routing-policy');
await deleteButton.click();
// 5. Verify delete confirmation modal opens
await expect(
page.getByRole('dialog').filter({ hasText: 'Delete' }),
).toBeVisible();
// 6. Click confirm to delete the policy
await page.getByRole('button', { name: 'Delete' }).click();
// 7. Verify success notification appears
await expect(
page.getByText('Routing policy deleted successfully'),
).toBeVisible();
// 8. Verify the deleted policy is no longer in the list
await searchBox.clear();
await searchBox.fill(uniquePolicyName);
await page.keyboard.press('Enter');
// 9. Verify the policy is not found
await expect(page.getByText(uniquePolicyName)).toBeHidden();
},
);
});


@@ -0,0 +1,29 @@
// Runs once before all browser projects. Logs in and saves session to .auth/user.json
// so every test starts already authenticated — no per-test login needed.
import { test as setup } from '@playwright/test';
import path from 'path';
const authFile = path.join(__dirname, '../.auth/user.json');
setup('authenticate', async ({ page }) => {
const username = process.env.SIGNOZ_E2E_USERNAME;
const password = process.env.SIGNOZ_E2E_PASSWORD;
if (!username || !password) {
throw new Error(
'SIGNOZ_E2E_USERNAME and SIGNOZ_E2E_PASSWORD environment variables must be set.',
);
}
await page.goto('/login?password=Y');
await page.getByTestId('email').fill(username);
await page.getByTestId('initiate_login').click();
await page.getByTestId('password').fill(password);
await page.getByRole('button', { name: 'Sign in with Password' }).click();
await page
.getByText('Hello there, Welcome to your')
.waitFor({ state: 'visible', timeout: 30000 });
await page.context().storageState({ path: authFile });
});
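// Downstream browser projects consume the saved session via Playwright's
// storageState option (assumed config shape — adjust to the repo's actual
// playwright.config.ts):
//   projects: [
//     { name: 'setup', testMatch: /global\.setup\.ts/ },
//     { name: 'chromium', use: { storageState: '.auth/user.json' }, dependencies: ['setup'] },
//   ]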


@@ -0,0 +1,853 @@
// spec: specs/dashboards/dashboards-list-test-plan.md
// seed: tests/seed.spec.ts
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../utils/login.util';
test.describe('Dashboards List Page', () => {
test.beforeEach(async ({ page }) => {
await ensureLoggedIn(page);
});
// ─── 1. Page Load and Layout ──────────────────────────────────────────────
//
// Verifies the critical chrome of the list page: heading, subtitle, search
// input, sort control, at least one dashboard row, pagination, and the
// Feedback / Share header buttons. These run as @viewer because they cover
// elements visible to every role.
test('1.1 Dashboard list page loads correctly', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
// Wait for the list label as the reliable "page is ready" signal — it
// appears only after the dashboard data has loaded.
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// Fresh load should have no query params
await expect(page).toHaveURL('/dashboard');
await expect(page).toHaveTitle('SigNoz | All Dashboards');
// Page identity
await expect(page.getByRole('heading', { name: 'Dashboards', level: 1 })).toBeVisible();
await expect(page.getByText('Create and manage dashboards for your workspace.')).toBeVisible();
// Core controls
await expect(page.getByRole('textbox', { name: 'Search by name, description, or tags...' })).toBeVisible();
await expect(page.getByText('All Dashboards')).toBeVisible();
await expect(page.getByTestId('sort-by')).toBeVisible();
// At least one dashboard row — thumbnail is the most stable row anchor
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
// Pagination range text confirms rows were fetched (e.g. "1 — 20 of 42")
await expect(page.getByText(/\d+ — \d+ of \d+/)).toBeVisible();
// Global header actions
await expect(page.getByRole('button', { name: 'Feedback' })).toBeVisible();
await expect(page.getByRole('button', { name: 'Share' })).toBeVisible();
});
test('1.2 Dashboard list shows correct data fields per row', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
// Wait until thumbnails are rendered — this confirms row data has arrived
await page.getByAltText('dashboard-image').first().waitFor({ state: 'visible' });
// Each row has a thumbnail image identified by the alt text set by the app
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
// Each row shows a "last updated" timestamp — verify the date format
// exists somewhere in the rendered list (e.g. "Mar 24, 2026")
const pageText = await page.locator('body').textContent();
expect(pageText).toMatch(/\w{3} \d{1,2}, \d{4}/);
// Each row shows the creator's email address
await expect(page.getByText(/@/).first()).toBeVisible();
});
test('1.3 Pagination bar shows correct item count', { tag: '@viewer' }, async ({ page }) => {
// Pre-condition: the staging workspace has more than 20 dashboards so the
// pagination bar renders; Previous Page is always disabled on page one.
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// Range indicator, e.g. "1 — 20 of 42", confirms correct page size
await expect(page.getByText(/1\s*—\s*20 of/)).toBeVisible();
// Previous Page is always disabled on the first page
await expect(page.getByRole('button', { name: 'Previous Page' })).toBeDisabled();
});
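// The "1 — 20 of 42" range label asserted in tests 1.1 and 1.3 can be parsed
// into numbers when a test needs to reason about page size or totals. A minimal
// sketch — the helper name is ours, and the em-dash separator mirrors the
// regexes above (an assumption about the UI copy, not a documented contract):

```typescript
// Parse a pagination range label like "1 — 20 of 42" into its parts.
// Returns null when the text does not match the expected shape.
function parsePaginationRange(
	text: string,
): { start: number; end: number; total: number } | null {
	const match = text.match(/(\d+)\s*—\s*(\d+) of (\d+)/);
	if (!match) return null;
	const [, start, end, total] = match;
	return { start: Number(start), end: Number(end), total: Number(total) };
}
```

// e.g. parsePaginationRange('1 — 20 of 42') → { start: 1, end: 20, total: 42 }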
// ─── 2. Search Functionality ──────────────────────────────────────────────
//
// The search input filters by title, description, and tags simultaneously.
// Results update in real time and the active query is reflected in the URL
// as ?search=<term>. All visibility tests run as @viewer; the description
// search requires @editor to set up a dashboard with a known description.
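// The ?search= URLs asserted below (e.g. search=APM+Metrics in test 2.1) follow
// standard form encoding, where spaces become "+". A sketch of how a deep link
// can be built — the helper is hypothetical; the app's own encoding is assumed
// to be application/x-www-form-urlencoded, which URLSearchParams produces:

```typescript
// Build a dashboard-list deep link for a given search term.
// URLSearchParams encodes spaces as "+", matching the search=APM+Metrics
// pattern asserted in test 2.1.
function dashboardSearchUrl(term: string): string {
	const params = new URLSearchParams({ search: term });
	return `/dashboard?${params.toString()}`;
}
```

// e.g. dashboardSearchUrl('APM Metrics') → '/dashboard?search=APM+Metrics'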
test('2.1 Search by title returns matching dashboards', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
const searchInput = page.getByRole('textbox', { name: 'Search by name, description, or tags...' });
// "APM Metrics" is a known dashboard in the workspace — searching by its
// exact title should return it and reflect the term in the URL
await searchInput.fill('APM Metrics');
await expect(page).toHaveURL(/search=APM\+Metrics/);
await expect(searchInput).toHaveValue('APM Metrics');
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
const pageText = await page.locator('body').textContent();
expect(pageText?.toUpperCase()).toContain('APM METRICS');
});
test('2.2 Search by tag returns dashboards that carry that tag', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
const searchInput = page.getByRole('textbox', { name: 'Search by name, description, or tags...' });
// "latency" is a tag on the APM Metrics dashboard — searching by tag value
// alone (no title match) should still surface that dashboard
await searchInput.fill('latency');
await expect(page).toHaveURL(/search=latency/);
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
const pageText = await page.locator('body').textContent();
expect(pageText?.toUpperCase()).toContain('APM METRICS');
});
test('2.3 Search by description returns matching dashboards', { tag: '@editor' }, async ({ page }) => {
// Create a dashboard with a known, unique description so we have a
// reliable target for the description search without relying on pre-existing data
const uniqueDesc = `desc-search-${Date.now()}`;
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// Create via inline name field then set its description via Configure
await page.getByRole('textbox', { name: 'Enter dashboard name...' }).fill(`Search Test ${Date.now()}`);
await page.getByRole('button', { name: 'Submit' }).click();
await expect(page).toHaveURL(/\/dashboard\/[0-9a-f-]+/);
// Set the description in the Configure dialog
await page.getByRole('button', { name: 'Configure' }).click();
await page.getByRole('dialog').waitFor({ state: 'visible' });
await page.getByRole('textbox', { name: /description/i }).fill(uniqueDesc);
await page.getByRole('button', { name: 'Save' }).click();
// Return to the list and search using the description text
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
const searchInput = page.getByRole('textbox', { name: 'Search by name, description, or tags...' });
await searchInput.fill(uniqueDesc);
// The dashboard we just created should appear
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
});
test('2.4 Dashboard with no tags is found by title search', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
const searchInput = page.getByRole('textbox', { name: 'Search by name, description, or tags...' });
// "PromQL and Clickhouse SQL" has no tags — searching its title should
// still return it, confirming that tag absence does not break title search
await searchInput.fill('PromQL and Clickhouse SQL');
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
const pageText = await page.locator('body').textContent();
expect(pageText?.toUpperCase()).toContain('PROMQL AND CLICKHOUSE SQL');
});
test('2.5 Dashboard with no description is found by title search', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
const searchInput = page.getByRole('textbox', { name: 'Search by name, description, or tags...' });
// APM Metrics has no description — searching its title must still return it,
// confirming that description absence does not break title search
await searchInput.fill('APM Metrics');
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
const pageText = await page.locator('body').textContent();
expect(pageText?.toUpperCase()).toContain('APM METRICS');
});
test('2.6 Search state is reflected in URL and pre-fills on direct navigation', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
const searchInput = page.getByRole('textbox', { name: 'Search by name, description, or tags...' });
await searchInput.fill('PromQL');
await expect(page).toHaveURL(/search=PromQL/);
// Opening the URL directly (bookmark / share) should restore search state
await page.goto('/dashboard?search=PromQL');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await expect(searchInput).toHaveValue('PromQL');
await expect(page.getByText('PromQL and Clickhouse SQL').first()).toBeVisible();
});
test('2.7 Clearing search restores the full list', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
const searchInput = page.getByRole('textbox', { name: 'Search by name, description, or tags...' });
await searchInput.fill('APM');
await expect(page).toHaveURL(/search=APM/);
// Clearing the field removes the param and brings back all dashboards
await searchInput.fill('');
await expect(page).not.toHaveURL(/search=/);
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
});
test('2.8 Search with no matching results shows empty state', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
const searchInput = page.getByRole('textbox', { name: 'Search by name, description, or tags...' });
// A nonsense term guarantees no matches across title, description, or tags
await searchInput.fill('xyznonexistent999');
// No thumbnails — list is empty, no error or broken layout
await expect(page.getByAltText('dashboard-image')).toHaveCount(0);
await expect(searchInput).toBeVisible();
await expect(searchInput).toHaveValue('xyznonexistent999');
});
test('2.9 Search is case-insensitive', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
const searchInput = page.getByRole('textbox', { name: 'Search by name, description, or tags...' });
// Lowercase version of a mixed-case dashboard name — should still match
await searchInput.fill('apm metrics');
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
const pageText = await page.locator('body').textContent();
expect(pageText?.toUpperCase()).toContain('APM METRICS');
});
// ─── 3. Sorting ───────────────────────────────────────────────────────────
//
// Known behaviour (verified against live app):
// - Fresh load: no sort params in URL; list is already descending (server default)
// - First click: URL gains ?columnKey=updatedAt&order=descend
// - Subsequent clicks: URL stays on order=descend — ascending is not yet implemented
//
// Tests document the current state. The ascending limitation is explicitly
// noted so it is visible during review and easy to fix when implemented.
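// The sort state lives entirely in the two query params asserted below. When a
// test needs the values rather than a regex match, they can be read back with
// the URL API — a hypothetical helper sketch (the localhost base is only needed
// because the spec works with path-relative URLs):

```typescript
// Extract the sort state that tests 3.2/3.3 assert via toHaveURL.
function sortStateOf(url: string): { columnKey: string | null; order: string | null } {
	const { searchParams } = new URL(url, 'http://localhost');
	return {
		columnKey: searchParams.get('columnKey'),
		order: searchParams.get('order'),
	};
}
```

// e.g. sortStateOf('/dashboard?columnKey=updatedAt&order=descend')
//   → { columnKey: 'updatedAt', order: 'descend' }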
test('3.1 Default load has no sort params and shows most recently updated dashboard first', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// On fresh load the URL should be clean — sort params only appear after
// the user interacts with the sort button
await expect(page).toHaveURL('/dashboard');
await expect(page).not.toHaveURL(/columnKey/);
await expect(page).not.toHaveURL(/order/);
// The list is already sorted descending by default (server-side). Row
// timestamps are not exposed through a stable locator, so this test only
// asserts that rows render under the clean, param-free URL.
const rows = page.getByAltText('dashboard-image');
await expect(rows.first()).toBeVisible();
});
test('3.2 First click on sort button adds columnKey=updatedAt&order=descend to URL', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// Before any interaction — no sort params
await expect(page).not.toHaveURL(/columnKey/);
await page.getByTestId('sort-by').click();
// After first click the sort state is written to the URL
await expect(page).toHaveURL(/columnKey=updatedAt/);
await expect(page).toHaveURL(/order=descend/);
// List should still be rendering rows correctly
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
});
test('3.3 Subsequent sort clicks keep order=descend (ascending not yet implemented)', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
const sortButton = page.getByTestId('sort-by');
// First click — sets descend
await sortButton.click();
await expect(page).toHaveURL(/order=descend/);
// Second click — known limitation: order remains descend, does not flip to ascend
await sortButton.click();
await expect(page).toHaveURL(/order=descend/);
await expect(page).not.toHaveURL(/order=ascend/);
});
// ─── 4. Row Actions (Context Menu) ───────────────────────────────────────
//
// The three-dot action icon (data-testid: dashboard-action-icon) is always
// visible on every row — no hover required. Clicking it opens a tooltip
// popover. Items inside are scoped to getByRole('tooltip') to avoid
// accidentally matching other elements on the page.
//
// Role visibility:
// @admin — View, Open in New Tab, Copy Link, Export JSON, Delete dashboard
// @editor — View, Open in New Tab, Copy Link, Export JSON (no Delete)
// @viewer — action icon is hidden entirely
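// The role matrix above, expressed as data. A table like this could drive a
// single parameterised test instead of 4.1–4.3; the item names mirror the
// accessible names asserted in those tests (the constant itself is a sketch,
// not something the app exports):

```typescript
// Expected action-menu items per role, as documented in the comment above.
const ACTION_MENU_ITEMS: Record<'admin' | 'editor' | 'viewer', string[]> = {
	admin: ['View', 'Open in New Tab', 'Copy Link', 'Export JSON', 'Delete dashboard'],
	editor: ['View', 'Open in New Tab', 'Copy Link', 'Export JSON'],
	viewer: [], // action icon is hidden entirely
};
```

// Usage: for (const item of ACTION_MENU_ITEMS[role]) { await expect(tooltip.getByText(item)).toBeVisible(); }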
test('4.1 Admin sees all five options in the action menu', { tag: '@admin' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByTestId('dashboard-action-icon').first().click();
const tooltip = page.getByRole('tooltip');
await expect(tooltip).toBeVisible();
// All five items must be present for admin
await expect(tooltip.getByRole('button', { name: 'View' })).toBeVisible();
await expect(tooltip.getByRole('button', { name: 'Open in New Tab' })).toBeVisible();
await expect(tooltip.getByRole('button', { name: 'Copy Link' })).toBeVisible();
await expect(tooltip.getByRole('button', { name: 'Export JSON' })).toBeVisible();
// Delete is rendered as a generic element (not a button role) in a separate section
await expect(tooltip.getByText('Delete dashboard')).toBeVisible();
});
test('4.2 Editor sees four options — Delete dashboard is not present', { tag: '@editor' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByTestId('dashboard-action-icon').first().click();
const tooltip = page.getByRole('tooltip');
await expect(tooltip).toBeVisible();
await expect(tooltip.getByRole('button', { name: 'View' })).toBeVisible();
await expect(tooltip.getByRole('button', { name: 'Open in New Tab' })).toBeVisible();
await expect(tooltip.getByRole('button', { name: 'Copy Link' })).toBeVisible();
await expect(tooltip.getByRole('button', { name: 'Export JSON' })).toBeVisible();
// Viewer and Editor cannot delete — the item must be absent
await expect(tooltip.getByText('Delete dashboard')).not.toBeVisible();
});
test('4.3 Viewer has no action icon on dashboard rows', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// The action icon must not be present in the DOM for viewer role
await expect(page.getByTestId('dashboard-action-icon')).toHaveCount(0);
});
test('4.4 View action navigates to the dashboard detail page', { tag: '@editor' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByTestId('dashboard-action-icon').first().click();
await page.getByRole('tooltip').getByRole('button', { name: 'View' }).click();
// Should land on the detail page — UUID in the path confirms navigation
await expect(page).toHaveURL(/\/dashboard\/[0-9a-f-]+/);
});
test('4.5 Open in New Tab opens the dashboard in a new browser tab', { tag: '@editor' }, async ({ page, context }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByTestId('dashboard-action-icon').first().click();
// waitForEvent('page') must be set up before the click that triggers it
const [newPage] = await Promise.all([
context.waitForEvent('page'),
page.getByRole('tooltip').getByRole('button', { name: 'Open in New Tab' }).click(),
]);
await newPage.waitForLoadState();
await expect(newPage).toHaveURL(/\/dashboard\/[0-9a-f-]+/);
await newPage.close();
});
test('4.6 Copy Link copies the dashboard URL to the clipboard', { tag: '@editor' }, async ({ page, context }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// Grant clipboard permissions so we can read back what was written
await context.grantPermissions(['clipboard-read', 'clipboard-write']);
await page.getByTestId('dashboard-action-icon').first().click();
await page.getByRole('tooltip').getByRole('button', { name: 'Copy Link' }).click();
// App shows a success notification after copying — .first() guards against
// the pattern matching more than one node (strict mode)
await expect(page.getByText(/copied|success/i).first()).toBeVisible();
// Clipboard must contain a valid dashboard detail URL.
// Cast through unknown to access browser globals inside page.evaluate.
const clipboardText = await page.evaluate(async () => {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
return await (globalThis as any).navigator.clipboard.readText();
});
expect(clipboardText).toMatch(/\/dashboard\/[0-9a-f-]+/);
});
test('4.7 Export JSON downloads the dashboard as a JSON file', { tag: '@editor' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByTestId('dashboard-action-icon').first().click();
// waitForEvent('download') must be in place before the triggering click
const [download] = await Promise.all([
page.waitForEvent('download'),
page.getByRole('tooltip').getByRole('button', { name: 'Export JSON' }).click(),
]);
expect(download.suggestedFilename()).toMatch(/\.json$/);
});
test('4.8 Action menu closes when clicking outside the popover', { tag: '@editor' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByTestId('dashboard-action-icon').first().click();
await expect(page.getByRole('tooltip')).toBeVisible();
// Click on a neutral area — the page heading — to dismiss the popover
await page.getByRole('heading', { name: 'Dashboards', level: 1 }).click();
await expect(page.getByRole('tooltip')).not.toBeVisible();
// No navigation should have occurred
await expect(page).toHaveURL(/\/dashboard($|\?)/);
});
// ─── 5. Creating Dashboards ───────────────────────────────────────────────
//
// Three creation paths exist: inline name field, New dashboard dropdown →
// Create dashboard, and New dashboard dropdown → Import JSON.
// Create controls (name input, Submit, New dashboard button) are visible
// to Editor and Admin only — hidden from Viewer entirely.
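// Tests in sections 5 and 6 avoid fixture collisions by suffixing Date.now()
// onto every dashboard name. A tiny helper keeps that convention in one place —
// hypothetical; the spec currently inlines the template literal in each test:

```typescript
// Produce a collision-resistant dashboard name for disposable fixtures,
// e.g. "Delete Test 1745174294000".
function disposableName(prefix: string): string {
	return `${prefix} ${Date.now()}`;
}
```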
test('5.1 Create controls are hidden from Viewer', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// None of the create affordances should be present for a viewer
await expect(page.getByRole('textbox', { name: 'Enter dashboard name...' })).not.toBeVisible();
await expect(page.getByRole('button', { name: 'Submit' })).not.toBeVisible();
await expect(page.getByRole('button', { name: 'New dashboard' })).not.toBeVisible();
});
test('5.2 Submit button is disabled when the name input is empty', { tag: '@editor' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// Before typing, Submit must be disabled — clicking it should do nothing
await expect(page.getByRole('button', { name: 'Submit' })).toBeDisabled();
});
test('5.3 Inline name field creates a named dashboard and navigates to it', { tag: '@editor' }, async ({ page }) => {
const name = `Test Dashboard ${Date.now()}`;
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
const nameInput = page.getByRole('textbox', { name: 'Enter dashboard name...' });
await nameInput.fill(name);
// Submit becomes enabled once text is present
await expect(page.getByRole('button', { name: 'Submit' })).toBeEnabled();
await page.getByRole('button', { name: 'Submit' }).click();
// Should navigate directly to the new dashboard's detail page
await expect(page).toHaveURL(/\/dashboard\/[0-9a-f-]+/);
// Clean up — delete the dashboard we just created
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('textbox', { name: 'Search by name, description, or tags...' }).fill(name);
await page.getByTestId('dashboard-action-icon').first().click();
await page.getByRole('tooltip').getByText('Delete dashboard').click();
await page.getByRole('button', { name: 'Delete' }).click();
});
test('5.4 New dashboard dropdown shows exactly three options', { tag: '@editor' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('button', { name: 'New dashboard' }).click();
const menu = page.getByRole('menu');
await expect(menu).toBeVisible();
// Exactly three items: Create dashboard, Import JSON, View templates
await expect(menu.getByRole('menuitem', { name: 'Create dashboard' })).toBeVisible();
await expect(menu.getByRole('menuitem', { name: 'Import JSON' })).toBeVisible();
await expect(menu.getByRole('menuitem', { name: 'View templates' })).toBeVisible();
});
test('5.5 Create dashboard navigates to new dashboard with default name', { tag: '@editor' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('button', { name: 'New dashboard' }).click();
await page.getByRole('menuitem', { name: 'Create dashboard' }).click();
// New dashboard detail page loads
await expect(page).toHaveURL(/\/dashboard\/[0-9a-f-]+/);
// Default name is "Sample Title" and onboarding state is shown
await expect(page.getByText('Configure your new dashboard')).toBeVisible();
await expect(page.getByRole('button', { name: 'Configure' })).toBeVisible();
await expect(page.getByRole('button', { name: /New Panel/ })).toBeVisible();
// Clean up
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('textbox', { name: 'Search by name, description, or tags...' }).fill('Sample Title');
await page.getByTestId('dashboard-action-icon').first().click();
await page.getByRole('tooltip').getByText('Delete dashboard').click();
await page.getByRole('button', { name: 'Delete' }).click();
});
test('5.6 Import JSON dialog opens with code editor and upload button', { tag: '@editor' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('button', { name: 'New dashboard' }).click();
await page.getByRole('menuitem', { name: 'Import JSON' }).click();
const dialog = page.getByRole('dialog');
await expect(dialog).toBeVisible();
await expect(dialog.getByText('Import Dashboard JSON')).toBeVisible();
// Monaco renders line numbers — the "1" gutter entry confirms the editor mounted
await expect(dialog.getByText('1').first()).toBeVisible();
await expect(dialog.getByRole('button', { name: 'Upload JSON file' })).toBeVisible();
await expect(dialog.getByRole('button', { name: 'Import and Next' })).toBeVisible();
});
test('5.7 Import JSON dialog closes on Escape without creating a dashboard', { tag: '@editor' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('button', { name: 'New dashboard' }).click();
await page.getByRole('menuitem', { name: 'Import JSON' }).click();
await expect(page.getByRole('dialog')).toBeVisible();
await page.keyboard.press('Escape');
await expect(page.getByRole('dialog')).not.toBeVisible();
await expect(page).toHaveURL(/\/dashboard($|\?)/);
});
test('5.8 Import JSON dialog closes on clicking the × button', { tag: '@editor' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('button', { name: 'New dashboard' }).click();
await page.getByRole('menuitem', { name: 'Import JSON' }).click();
const dialog = page.getByRole('dialog');
await expect(dialog).toBeVisible();
// The close button exposes an accessible name matching /close/i
await dialog.getByRole('button', { name: /close/i }).click();
await expect(dialog).not.toBeVisible();
await expect(page).toHaveURL(/\/dashboard($|\?)/);
});
// ─── 6. Deleting Dashboards ───────────────────────────────────────────────
//
// Only Admin can delete. Each test creates its own disposable dashboard
// so no pre-existing data is affected.
//
// Known behaviour: clicking Cancel in the confirmation dialog navigates to
// the dashboard detail page rather than staying on the list — tests account
// for this rather than asserting we stay on /dashboard.
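// The search → action icon → Delete flow below is repeated verbatim as cleanup
// in tests 5.3, 5.5, 6.2 and 6.3. A helper sketch — typed structurally so it
// stands alone here (in the real spec, import Page from '@playwright/test'
// instead of these hand-rolled shapes). All selectors mirror the ones used in
// the surrounding tests; the helper itself is ours, not part of the spec yet:

```typescript
// Minimal structural types so the sketch compiles without @playwright/test.
type LocatorLike = {
	fill(value: string): Promise<void>;
	click(): Promise<void>;
	first(): LocatorLike;
	waitFor(opts: { state: 'visible' | 'hidden' }): Promise<void>;
	getByText(text: string): LocatorLike;
	getByRole(role: string, opts?: { name: string }): LocatorLike;
	getByTestId(testId: string): LocatorLike;
};
type PageLike = LocatorLike & { goto(url: string): Promise<unknown> };

// Delete a dashboard from the list page by searching for its exact name,
// then confirming the delete dialog. Assumes the caller runs as @admin.
async function deleteDashboardByName(page: PageLike, name: string): Promise<void> {
	await page.goto('/dashboard');
	await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
	await page
		.getByRole('textbox', { name: 'Search by name, description, or tags...' })
		.fill(name);
	await page.getByTestId('dashboard-action-icon').first().click();
	await page.getByRole('tooltip').getByText('Delete dashboard').click();
	await page.getByRole('button', { name: 'Delete' }).click();
}
```

// Usage in cleanup: await deleteDashboardByName(page, name);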
test('6.1 Delete confirmation dialog shows dashboard name with Cancel and Delete buttons', { tag: '@admin' }, async ({ page }) => {
// Create a disposable dashboard to delete
const name = `Delete Test ${Date.now()}`;
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('textbox', { name: 'Enter dashboard name...' }).fill(name);
await page.getByRole('button', { name: 'Submit' }).click();
await expect(page).toHaveURL(/\/dashboard\/[0-9a-f-]+/);
// Return to the list and open delete dialog for the dashboard we just created
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('textbox', { name: 'Search by name, description, or tags...' }).fill(name);
await page.getByTestId('dashboard-action-icon').first().click();
await page.getByRole('tooltip').getByText('Delete dashboard').click();
const dialog = page.getByRole('dialog');
await expect(dialog).toBeVisible();
// Dialog heading contains the dashboard name
await expect(dialog.getByRole('heading')).toContainText('Are you sure you want to delete the');
await expect(dialog.getByRole('heading')).toContainText(name);
// Both action buttons are present
await expect(dialog.getByRole('button', { name: 'Cancel' })).toBeVisible();
await expect(dialog.getByRole('button', { name: 'Delete' })).toBeVisible();
// Clean up — confirm delete
await dialog.getByRole('button', { name: 'Delete' }).click();
});
test('6.2 Cancelling delete navigates to the dashboard detail page (known behaviour)', { tag: '@admin' }, async ({ page }) => {
// Create a disposable dashboard
const name = `Cancel Delete Test ${Date.now()}`;
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('textbox', { name: 'Enter dashboard name...' }).fill(name);
await page.getByRole('button', { name: 'Submit' }).click();
await expect(page).toHaveURL(/\/dashboard\/[0-9a-f-]+/);
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('textbox', { name: 'Search by name, description, or tags...' }).fill(name);
await page.getByTestId('dashboard-action-icon').first().click();
await page.getByRole('tooltip').getByText('Delete dashboard').click();
await expect(page.getByRole('dialog')).toBeVisible();
// Cancel — known behaviour: lands on detail page, not back on the list
await page.getByRole('button', { name: 'Cancel' }).click();
await expect(page).toHaveURL(/\/dashboard\/[0-9a-f-]+/);
// Clean up — delete the dashboard we created
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('textbox', { name: 'Search by name, description, or tags...' }).fill(name);
await page.getByTestId('dashboard-action-icon').first().click();
await page.getByRole('tooltip').getByText('Delete dashboard').click();
await page.getByRole('button', { name: 'Delete' }).click();
});
test('6.3 Confirming delete removes the dashboard from the list', { tag: '@admin' }, async ({ page }) => {
// Create a disposable dashboard
const name = `Confirm Delete Test ${Date.now()}`;
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('textbox', { name: 'Enter dashboard name...' }).fill(name);
await page.getByRole('button', { name: 'Submit' }).click();
await expect(page).toHaveURL(/\/dashboard\/[0-9a-f-]+/);
// Return to list, find the dashboard, and delete it
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('textbox', { name: 'Search by name, description, or tags...' }).fill(name);
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
await page.getByTestId('dashboard-action-icon').first().click();
await page.getByRole('tooltip').getByText('Delete dashboard').click();
await page.getByRole('button', { name: 'Delete' }).click();
// After deletion, searching for the name should return no results
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('textbox', { name: 'Search by name, description, or tags...' }).fill(name);
await expect(page.getByAltText('dashboard-image')).toHaveCount(0);
});
// ─── 7. Row Click Navigation ──────────────────────────────────────────────
//
// Clicking anywhere on a dashboard row (except the action icon) navigates
// to the detail page. Runs as @viewer since all roles can navigate.
test('7.1 Clicking a dashboard row navigates to the detail page', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// Click the thumbnail image — a stable, always-present click target
// that is not the action icon
await page.getByAltText('dashboard-image').first().click();
// UUID in the path confirms we landed on a detail page
await expect(page).toHaveURL(/\/dashboard\/[0-9a-f-]+/);
});
test('7.2 Dashboard detail page shows the breadcrumb after row click', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByAltText('dashboard-image').first().click();
await expect(page).toHaveURL(/\/dashboard\/[0-9a-f-]+/);
// Breadcrumb "Dashboard /" confirms correct page structure loaded
await expect(page.getByRole('button', { name: /Dashboard \// })).toBeVisible();
});
test('7.3 Sidebar Dashboards link navigates to the list page', { tag: '@viewer' }, async ({ page }) => {
// Start on a different page so the navigation is meaningful.
await page.goto('/home');
// A 'hidden' wait also resolves when the element is absent from the DOM,
// confirming we are not on the dashboard list before clicking
await page.getByText('All Dashboards').first().waitFor({ state: 'hidden' });
// Click the Dashboards entry in the left sidebar
await page.getByRole('link', { name: 'Dashboards' }).click();
await expect(page).toHaveURL(/\/dashboard/);
await expect(page).toHaveTitle('SigNoz | All Dashboards');
});
// ─── 8. URL State and Deep Linking ───────────────────────────────────────
//
// Search term persists in the URL (?search=<term>) and is restored on direct
// navigation. Sort params (columnKey + order) appear only after the user
// clicks the sort button — not on fresh load.
test('8.1 Direct navigation with ?search= pre-fills the input and filters results', { tag: '@viewer' }, async ({ page }) => {
// Navigate directly with the search param — simulates opening a shared link
await page.goto('/dashboard?search=PromQL');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// Input must be pre-filled with the param value
await expect(page.getByRole('textbox', { name: 'Search by name, description, or tags...' })).toHaveValue('PromQL');
// Matching dashboard must be visible
await expect(page.getByText('PromQL and Clickhouse SQL').first()).toBeVisible();
});
test('8.2 Search term updates the URL in real time', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await page.getByRole('textbox', { name: 'Search by name, description, or tags...' }).fill('APM');
// URL must reflect the typed term immediately
await expect(page).toHaveURL(/search=APM/);
});
test('8.3 Browser Back after navigating to a dashboard restores search state', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard?search=APM');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// Navigate into a dashboard row
await page.getByAltText('dashboard-image').first().click();
await expect(page).toHaveURL(/\/dashboard\/[0-9a-f-]+/);
// Browser back should restore the list with the search param intact
await page.goBack();
await expect(page).toHaveURL(/search=APM/);
await expect(page.getByRole('textbox', { name: 'Search by name, description, or tags...' })).toHaveValue('APM');
});
test('8.4 Sort params appear in URL only after interacting with the sort button', { tag: '@viewer' }, async ({ page }) => {
// Fresh load — no sort params
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await expect(page).not.toHaveURL(/columnKey/);
// After clicking sort — params appear
await page.getByTestId('sort-by').click();
await expect(page).toHaveURL(/columnKey=updatedAt/);
await expect(page).toHaveURL(/order=descend/);
// Navigating directly with sort params should honour them on load
await page.goto('/dashboard?columnKey=updatedAt&order=descend');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
await expect(page).toHaveURL(/columnKey=updatedAt/);
await expect(page).toHaveURL(/order=descend/);
});
// ─── 9. Page Header Actions ───────────────────────────────────────────────
//
// The Feedback and Share buttons live in the top-right of the page header
// and are visible to all roles. This section was absent from the originally
// generated spec and is written from scratch based on live app observation.
test('9.1 Feedback button is visible and opens a feedback mechanism', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
const feedbackButton = page.getByRole('button', { name: 'Feedback' });
await expect(feedbackButton).toBeVisible();
// Clicking should trigger a feedback mechanism (modal, widget, or external link)
// — we verify it is interactive without asserting the exact implementation
await feedbackButton.click();
await expect(page).toHaveURL(/\/dashboard/); // no unintended navigation
});
test('9.2 Share button is visible and triggers a share action', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
const shareButton = page.getByRole('button', { name: 'Share' });
await expect(shareButton).toBeVisible();
await shareButton.click();
// Clicking Share either opens a dialog or copies the URL — either way the
// page should remain on /dashboard with no unintended navigation
await expect(page).toHaveURL(/\/dashboard/);
});
// ─── 10. Edge Cases and Error Handling ───────────────────────────────────
//
// Boundary conditions: tag overflow rendering, tagless rows, pagination
// reset on search, and role-based visibility for Viewer.
test('10.1 Dashboards with many tags show a +N overflow indicator', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// The APM Metrics dashboard has 4 tags (apm, latency, error rate, throughput).
// The list renders a subset inline and overflows the rest as "+ N".
// We search for it to bring it to the top and inspect the row.
await page.getByRole('textbox', { name: 'Search by name, description, or tags...' }).fill('APM Metrics');
await page.getByAltText('dashboard-image').first().waitFor({ state: 'visible' });
// At least one "+ N" overflow indicator must be visible somewhere in the list
await expect(page.getByText(/^\+\s*\d+$/).first()).toBeVisible();
});
test('10.2 Dashboards with no tags show a clean row with no empty tag containers', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// "PromQL and Clickhouse SQL" has no tags — search to bring it to top
await page.getByRole('textbox', { name: 'Search by name, description, or tags...' }).fill('PromQL and Clickhouse SQL');
await page.getByAltText('dashboard-image').first().waitFor({ state: 'visible' });
// Row must be visible with thumbnail and text — no broken layout
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
await expect(page.getByText('PromQL and Clickhouse SQL').first()).toBeVisible();
});
test('10.3 Searching while on page 2 resets pagination to page 1', { tag: '@viewer' }, async ({ page }) => {
// Pre-condition: staging workspace has more than 20 dashboards
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// Navigate to page 2
await page.getByRole('button', { name: '2' }).click();
await expect(page).toHaveURL(/page=2/);
// Typing a search term should reset back to page 1
await page.getByRole('textbox', { name: 'Search by name, description, or tags...' }).fill('APM');
await expect(page).not.toHaveURL(/page=2/);
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
});
test('10.4 Viewer cannot see create controls or row action icons', { tag: '@viewer' }, async ({ page }) => {
await page.goto('/dashboard');
await page.getByText('All Dashboards').first().waitFor({ state: 'visible' });
// Create controls must be absent for Viewer
await expect(page.getByRole('textbox', { name: 'Enter dashboard name...' })).not.toBeVisible();
await expect(page.getByRole('button', { name: 'Submit' })).not.toBeVisible();
await expect(page.getByRole('button', { name: 'New dashboard' })).not.toBeVisible();
// Row action icons must be absent for Viewer
await expect(page.getByTestId('dashboard-action-icon')).toHaveCount(0);
// Core read-only features still work
await expect(page.getByRole('textbox', { name: 'Search by name, description, or tags...' })).toBeVisible();
await expect(page.getByAltText('dashboard-image').first()).toBeVisible();
});
});


@@ -0,0 +1,168 @@
// spec: specs/home/home-test-plan.md
// seed: tests/seed.spec.ts
import { expect, test } from '@playwright/test';
// No ensureLoggedIn needed — session is restored from .auth/user.json via storageState
test.describe('Home Page - Page Load', () => {
test.beforeEach(async ({ page }) => {
await page.goto('/home', { waitUntil: 'domcontentloaded' });
await expect(
page.getByRole('heading', {
name: 'Hello there, Welcome to your SigNoz workspace',
}),
).toBeVisible({ timeout: 30000 });
});
test('TC-01: home page loads after login', { tag: '@viewer' }, async ({ page }) => {
await expect(page).toHaveURL(/\/home/);
await expect(page).toHaveTitle(/Home/);
await expect(
page.getByRole('heading', {
name: 'Hello there, Welcome to your SigNoz workspace',
}),
).toBeVisible();
});
test('TC-02: ingestion status banners are visible', { tag: '@viewer' }, async ({ page }) => {
await expect(page.getByText('Logs ingestion is active')).toBeVisible();
await expect(page.getByText('Traces ingestion is active')).toBeVisible();
await expect(page.getByText('Metrics ingestion is active')).toBeVisible();
});
});
test.describe('Home Page - Explore Quick Actions', () => {
test.beforeEach(async ({ page }) => {
await page.goto('/home', { waitUntil: 'domcontentloaded' });
await expect(
page.getByRole('heading', {
name: 'Hello there, Welcome to your SigNoz workspace',
}),
).toBeVisible({ timeout: 30000 });
});
test('TC-03: Explore Logs navigates to logs explorer', { tag: '@viewer' }, async ({ page }) => {
await page.getByRole('button', { name: 'Explore Logs' }).click();
await expect(page).toHaveURL(/\/logs\/logs-explorer/);
});
test('TC-04: Explore Traces navigates to traces explorer', { tag: '@viewer' }, async ({ page }) => {
await page.getByRole('button', { name: 'Explore Traces' }).click();
await expect(page).toHaveURL(/traces-explorer/);
});
test('TC-05: Explore Metrics navigates to metrics explorer', { tag: '@viewer' }, async ({ page }) => {
await page.getByRole('button', { name: 'Explore Metrics' }).click();
await expect(page).toHaveURL(/metrics-explorer/);
});
test('TC-06: Open Logs Explorer shortcut navigates', { tag: '@viewer' }, async ({ page }) => {
await page.getByRole('button', { name: 'Open Logs Explorer' }).click();
await expect(page).toHaveURL(/\/logs\/logs-explorer/);
});
test('TC-07: Open Traces Explorer shortcut navigates', { tag: '@viewer' }, async ({ page }) => {
await page.getByRole('button', { name: 'Open Traces Explorer' }).click();
await expect(page).toHaveURL(/traces-explorer/);
});
test('TC-08: Open Metrics Explorer shortcut navigates', { tag: '@viewer' }, async ({ page }) => {
await page.getByRole('button', { name: 'Open Metrics Explorer' }).click();
await expect(page).toHaveURL(/metrics-explorer/);
});
test('TC-09: Create dashboard button navigates', { tag: '@editor' }, async ({ page }) => {
await page.getByRole('button', { name: 'Create dashboard' }).click();
await expect(page).toHaveURL(/\/dashboard/);
});
test('TC-10: Create an alert button navigates', { tag: '@editor' }, async ({ page }) => {
await page.getByRole('button', { name: 'Create an alert' }).click();
await expect(page).toHaveURL(/\/alerts/);
});
});
test.describe('Home Page - Services Widget', () => {
test.beforeEach(async ({ page }) => {
await page.goto('/home', { waitUntil: 'domcontentloaded' });
await expect(page.getByRole('columnheader', { name: 'APPLICATION' })).toBeVisible({ timeout: 30000 });
});
test('TC-11: services table is visible with correct columns', { tag: '@viewer' }, async ({ page }) => {
await expect(page.getByRole('columnheader', { name: 'APPLICATION' })).toBeVisible();
await expect(page.getByRole('columnheader', { name: /P99 LATENCY/i })).toBeVisible();
await expect(page.getByRole('columnheader', { name: /ERROR RATE/i })).toBeVisible();
await expect(page.getByRole('columnheader', { name: /OPS \/ SEC/i })).toBeVisible();
await expect(page.getByRole('rowgroup').last().getByRole('row').first()).toBeVisible();
});
test('TC-12: All Services link navigates', { tag: '@viewer' }, async ({ page }) => {
await page.getByRole('link', { name: 'All Services' }).click();
await expect(page).toHaveURL(/\/services/);
});
});
test.describe('Home Page - Alerts Widget', () => {
test.beforeEach(async ({ page }) => {
await page.goto('/home', { waitUntil: 'domcontentloaded' });
await expect(page.getByRole('link', { name: 'All Alert Rules' })).toBeVisible({ timeout: 30000 });
});
test('TC-13: alerts section shows firing alerts', { tag: '@viewer' }, async ({ page }) => {
await expect(page.getByRole('link', { name: 'All Alert Rules' })).toBeVisible();
await expect(page.getByRole('button', { name: /alert-rules/ }).first()).toBeVisible();
});
test('TC-14: All Alert Rules link navigates', { tag: '@viewer' }, async ({ page }) => {
await page.getByRole('link', { name: 'All Alert Rules' }).click();
await expect(page).toHaveURL(/\/alerts/);
});
});
test.describe('Home Page - Dashboards Widget', () => {
test.beforeEach(async ({ page }) => {
await page.goto('/home', { waitUntil: 'domcontentloaded' });
await expect(page.getByRole('link', { name: 'All Dashboards' })).toBeVisible({ timeout: 30000 });
});
test('TC-15: dashboards section shows recent dashboards', { tag: '@viewer' }, async ({ page }) => {
await expect(page.getByRole('link', { name: 'All Dashboards' })).toBeVisible();
// Assumed locator for recent-dashboard entries (the original asserted the alerts widget's /alert-rules/ button, a copy-paste slip)
await expect(page.getByRole('button', { name: /dashboard/i }).first()).toBeVisible();
});
test('TC-16: All Dashboards link navigates', { tag: '@viewer' }, async ({ page }) => {
await page.getByRole('link', { name: 'All Dashboards' }).click();
await expect(page).toHaveURL(/\/dashboard/);
});
});
test.describe('Home Page - Saved Views Widget', () => {
test.beforeEach(async ({ page }) => {
await page.goto('/home', { waitUntil: 'domcontentloaded' });
await expect(page.getByRole('link', { name: 'All Views' })).toBeVisible({ timeout: 30000 });
});
test('TC-17: saved views tabs switch between signal types', { tag: '@viewer' }, async ({ page }) => {
const logsTab = page.locator('button[value="logs"]');
const tracesTab = page.locator('button[value="traces"]');
const metricsTab = page.locator('button[value="metrics"]');
await expect(logsTab).toBeVisible();
await tracesTab.click();
await expect(tracesTab).toBeVisible();
await metricsTab.click();
await expect(metricsTab).toBeVisible();
await logsTab.click();
await expect(logsTab).toBeVisible();
});
test('TC-18: All Views link navigates to saved views', { tag: '@viewer' }, async ({ page }) => {
await page.locator('button[value="logs"]').click();
await page.getByRole('link', { name: 'All Views' }).click();
await expect(page).toHaveURL(/\/logs\/saved-views/);
});
});


@@ -0,0 +1,579 @@
// spec: specs/roles/roles-listing.md
// seed: tests/seed.spec.ts
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../utils/login.util';
test.describe('Roles Listing - Navigation and Access Control', () => {
test(
'Admin User Can Access Roles Page',
{
tag: '@admin',
},
async ({ page }) => {
await ensureLoggedIn(page);
await page.goto('/settings/roles', {
waitUntil: 'domcontentloaded',
});
await expect(
page.getByRole('heading', {
name: 'Roles',
exact: true,
}),
).toBeVisible({ timeout: 30000 });
await expect(page).toHaveURL(/.*\/settings\/roles/);
await expect(
page.getByRole('searchbox', {
name: 'Search for roles...',
}),
).toBeVisible({ timeout: 15000 });
const accessDenied = page.getByText('Access Denied');
const permissionDenied = page.getByText('Permission denied');
const hasAccessDenied = await accessDenied.isVisible().catch(() => false);
const hasPermissionDenied = await permissionDenied
.isVisible()
.catch(() => false);
expect(hasAccessDenied).toBe(false);
expect(hasPermissionDenied).toBe(false);
await expect(page.getByRole('searchbox')).toBeVisible();
await expect(page.getByText('signoz-admin')).toBeVisible();
},
);
});
test.describe('Roles Listing - Page Layout and UI Components', () => {
test.beforeEach(async ({ page }) => {
await ensureLoggedIn(page);
await page.goto('/settings/roles');
await Promise.race([
page
.getByRole('searchbox', { name: 'Search for roles...' })
.waitFor({ state: 'visible', timeout: 10000 }),
page
.getByText(/error|failed/i)
.waitFor({ state: 'visible', timeout: 10000 }),
]).catch(() => {});
});
test(
'Verify Roles Listing Page Layout',
{
tag: '@admin',
},
async ({ page }) => {
await expect(
page.getByRole('heading', {
name: 'Roles',
exact: true,
}),
).toBeVisible();
const searchInput = page.getByRole('searchbox');
await expect(searchInput).toBeVisible();
await expect(
page.getByText('Name', { exact: true }).first(),
).toBeVisible();
await expect(
page.getByText('Description', { exact: true }).first(),
).toBeVisible();
await expect(page.getByText('Updated At', { exact: true })).toBeVisible();
await expect(page.getByText('Created At', { exact: true })).toBeVisible();
await expect(page.locator('body')).toBeVisible();
},
);
test(
'Verify Table Structure',
{
tag: '@admin',
},
async ({ page }) => {
await expect(page.getByRole('searchbox')).toBeVisible();
const roleNames = [
'signoz-admin',
'signoz-editor',
'signoz-viewer',
'signoz-anonymous',
];
const firstRole = page.getByText(roleNames[0]);
await expect(firstRole).toBeVisible();
await expect(
page.getByRole('heading', { name: 'Managed roles' }),
).toBeVisible();
await expect(page.getByText(/full administrative access/i)).toBeVisible();
},
);
});
test.describe('Roles Listing - Roles Display and Data Verification', () => {
test.beforeEach(async ({ page }) => {
await ensureLoggedIn(page);
await page.goto('/settings/roles');
// Wait for page to load
await expect(
page.getByRole('searchbox', { name: 'Search for roles...' }),
).toBeVisible();
});
test(
'Verify API Response Matches UI Display',
{
tag: '@admin',
},
async ({ page }) => {
let apiResponse: any = null;
page.on('response', async (response) => {
if (
response.url().includes('/api/v1/roles') &&
response.status() === 200
) {
apiResponse = await response.json();
}
});
await page.reload();
await page
.getByRole('searchbox', { name: 'Search for roles...' })
.waitFor({ state: 'visible', timeout: 10000 });
await page.waitForTimeout(1000);
expect(apiResponse).not.toBeNull();
expect(apiResponse.status).toBe('success');
const rolesFromApi = apiResponse.data;
expect(rolesFromApi).toBeDefined();
expect(rolesFromApi.length).toBe(5);
for (const role of rolesFromApi) {
if (role.name) {
await expect(page.getByText(role.name)).toBeVisible();
}
}
},
);
test(
'Verify Role Categorization (Managed vs Custom)',
{
tag: '@admin',
},
async ({ page }) => {
await expect(page.getByRole('searchbox')).toBeVisible();
const managedRolesHeader = page.getByRole('heading', {
name: 'Managed roles',
});
const customRolesHeader = page.getByRole('heading', {
name: /Custom roles\s*\d+/,
});
await expect(managedRolesHeader).toBeVisible();
await expect(customRolesHeader).toBeVisible();
const headerText = await customRolesHeader.textContent();
expect(headerText).toMatch(/Custom roles\s*\d+/);
await expect(page.getByText('signoz-admin')).toBeVisible();
await expect(page.getByText('signoz-editor')).toBeVisible();
await expect(page.getByText('signoz-viewer')).toBeVisible();
await expect(page.getByText('custom-role-ui')).toBeVisible();
},
);
});
test.describe('Roles Listing - Search Functionality', () => {
test.beforeEach(async ({ page }) => {
await ensureLoggedIn(page);
await page.goto('/settings/roles');
// Wait for roles to load
await page
.getByRole('searchbox', { name: 'Search for roles...' })
.waitFor({ state: 'visible', timeout: 10000 })
.catch(() => {});
});
test(
'Search Roles by Name',
{
tag: '@admin',
},
async ({ page }) => {
await expect(page.getByText('signoz-admin')).toBeVisible();
await expect(page.getByText('signoz-editor')).toBeVisible();
await expect(page.getByText('signoz-viewer')).toBeVisible();
const searchInput = page.getByRole('searchbox', {
name: 'Search for roles...',
});
await searchInput.fill('editor');
await page.waitForTimeout(300);
await expect(page.getByText('signoz-editor')).toBeVisible();
await searchInput.clear();
await page.waitForTimeout(300);
await expect(page.getByText('signoz-admin')).toBeVisible();
await expect(page.getByText('signoz-editor')).toBeVisible();
await expect(page.getByText('signoz-viewer')).toBeVisible();
},
);
test(
'Search Roles by Description',
{
tag: '@admin',
},
async ({ page }) => {
const searchInput = page.getByRole('searchbox', {
name: 'Search for roles...',
});
await searchInput.fill('administrative');
await page.waitForTimeout(500);
await expect(page.getByText('signoz-admin')).toBeVisible();
await expect(page.getByText(/full administrative access/i)).toBeVisible();
await expect(page.getByText('signoz-viewer')).toBeHidden();
},
);
test(
'Search with No Results',
{
tag: '@admin',
},
async ({ page }) => {
await expect(page.getByText('signoz-admin')).toBeVisible({
timeout: 10000,
});
const searchInput = page.getByRole('searchbox', {
name: 'Search for roles...',
});
await searchInput.fill('NonExistentRole123XYZ');
await page.waitForTimeout(300);
const adminStillVisible = await page
.getByText('signoz-admin')
.isVisible()
.catch(() => false);
const editorStillVisible = await page
.getByText('signoz-editor')
.isVisible()
.catch(() => false);
const viewerStillVisible = await page
.getByText('signoz-viewer')
.isVisible()
.catch(() => false);
// At least verify that not all roles are still visible (search had some effect)
const allStillVisible =
adminStillVisible && editorStillVisible && viewerStillVisible;
expect(allStillVisible).toBe(false);
// 5. Clear search and verify roles reappear
await searchInput.clear();
await page.waitForTimeout(300);
await expect(page.getByText('signoz-admin')).toBeVisible();
},
);
test(
'Search Case Sensitivity',
{
tag: '@admin',
},
async ({ page }) => {
const searchInput = page.getByRole('searchbox', {
name: 'Search for roles...',
});
await searchInput.fill('ADMIN');
await page.waitForTimeout(300);
await expect(page.getByText('signoz-admin')).toBeVisible();
await searchInput.clear();
await searchInput.fill('admin');
await page.waitForTimeout(300);
await expect(page.getByText('signoz-admin')).toBeVisible();
await searchInput.clear();
await searchInput.fill('AdMin');
await page.waitForTimeout(300);
await expect(page.getByText('signoz-admin')).toBeVisible();
await searchInput.clear();
},
);
});
test.describe('Roles Listing - Pagination Functionality', () => {
test.beforeEach(async ({ page }) => {
await ensureLoggedIn(page);
await page.goto('/settings/roles');
await expect(
page.getByRole('heading', { name: 'Roles', exact: true }),
).toBeVisible({ timeout: 15000 });
await expect(
page.getByRole('searchbox', { name: 'Search for roles...' }),
).toBeVisible({ timeout: 15000 });
});
test(
'Navigate Between Pages',
{
tag: '@admin',
},
async ({ page }) => {
const paginationList = page.getByRole('list').filter({ hasText: /\d/ });
const hasPagination = await paginationList.isVisible().catch(() => false);
if (!hasPagination) {
return;
}
// 1. Verify pagination controls are visible
await expect(paginationList).toBeVisible();
// 2. Note the first role displayed on page 1
const page1HasAdmin = await page.getByText('signoz-admin').isVisible();
// 3. Click "Next" or page "2" in pagination
const nextButton = page.getByRole('listitem').getByText('2');
if (await nextButton.isVisible()) {
await nextButton.click();
} else {
// Try clicking next arrow
await page.getByRole('listitem').last().click();
}
// 4. Wait for page to load
await page.waitForTimeout(1000);
// 5. Observe roles on page 2
const page2HasAdmin = await page.getByText('signoz-admin').isVisible();
// Verify different roles are shown (or same role is hidden if paging worked)
expect(page2HasAdmin).not.toBe(page1HasAdmin);
// Verify URL updates with page parameter
await expect(page).toHaveURL(/page=2/);
// 6. Click "Previous" or page "1"
const prevButton = page.getByRole('listitem').getByText('1');
if (await prevButton.isVisible()) {
await prevButton.click();
} else {
// Try clicking previous arrow
await page.getByRole('listitem').first().click();
}
// 7. Wait and verify return to page 1
await page.waitForTimeout(1000);
await expect(page).toHaveURL(/page=1|\/roles(?!.*page)/);
},
);
test(
'Pagination with Search Results',
{
tag: '@admin',
},
async ({ page }) => {
const paginationList = page.getByRole('list').filter({ hasText: /\d/ });
const hasPagination = await paginationList.isVisible().catch(() => false);
if (!hasPagination) {
return;
}
const searchInput = page.getByRole('searchbox');
await searchInput.fill('signoz');
await page.waitForTimeout(500);
const paginationAfterSearch = await paginationList
.isVisible()
.catch(() => false);
if (paginationAfterSearch) {
const page2Button = page.getByRole('listitem').getByText('2');
if (await page2Button.isVisible()) {
await page2Button.click();
await page.waitForTimeout(500);
const url = page.url();
expect(url).toContain('page=2');
}
}
await searchInput.clear();
await page.waitForTimeout(500);
await expect(paginationList).toBeVisible();
},
);
test(
'Pagination State Persistence',
{
tag: '@admin',
},
async ({ page }) => {
const paginationList = page.getByRole('list').filter({ hasText: /\d/ });
const hasPagination = await paginationList.isVisible().catch(() => false);
if (!hasPagination) {
return;
}
const page2Button = page.getByRole('listitem').getByText('2');
if (await page2Button.isVisible()) {
await page2Button.click();
await page.waitForTimeout(500);
await expect(page).toHaveURL(/page=2/);
await page.reload();
await expect(page).toHaveURL(/page=2/);
await expect(
page.getByRole('searchbox', {
name: 'Search for roles...',
}),
).toBeVisible();
}
},
);
});
test.describe('Roles Listing - Loading and Error States', () => {
test(
'Verify Loading State',
{
tag: '@admin',
},
async ({ page }) => {
await page.route('**/api/v1/roles', async (route) => {
await new Promise((resolve) => setTimeout(resolve, 1000));
await route.continue();
});
await ensureLoggedIn(page);
await page.goto('/settings/roles');
const loadingIndicators = [
page.locator('[class*="skeleton"]'),
page.locator('[class*="loading"]'),
page.locator('[class*="spinner"]'),
page.getByRole('progressbar'),
];
for (const indicator of loadingIndicators) {
if (await indicator.isVisible().catch(() => false)) {
break;
}
}
await expect(
page.getByRole('searchbox', {
name: 'Search for roles...',
}),
).toBeVisible({ timeout: 10000 });
await expect(page.getByRole('heading', { name: 'Roles' })).toBeVisible();
},
);
test(
'Handle API Error State',
{
tag: '@admin',
},
async ({ page }) => {
await page.route('**/api/v1/roles', async (route) => {
await route.fulfill({
status: 500,
contentType: 'application/json',
body: JSON.stringify({
status: 'error',
error: 'Internal Server Error',
}),
});
});
await ensureLoggedIn(page);
await page.goto('/settings/roles');
await page.waitForTimeout(2000);
const hasRoles = await page
.getByText('signoz-admin')
.isVisible()
.catch(() => false);
if (!hasRoles) {
await expect(
page.getByRole('heading', {
name: 'Roles',
exact: true,
}),
).toBeVisible();
}
},
);
test(
'Handle Network Failure',
{
tag: '@admin',
},
async ({ page }) => {
await page.route('**/api/v1/roles', async (route) => {
await route.abort('failed');
});
await ensureLoggedIn(page);
await page.goto('/settings/roles');
await page.waitForTimeout(2000);
const hasRoles = await page
.getByText('signoz-admin')
.isVisible()
.catch(() => false);
expect(hasRoles).toBe(false);
await expect(page.locator('body')).toBeVisible();
},
);
});


@@ -0,0 +1,23 @@
import { test, expect } from '@playwright/test';
import { ensureLoggedIn } from '../utils/login.util';
/**
* Seed test for Playwright Agents
*
* This test serves as:
* 1. A foundation for all agent-generated tests
* 2. An example of test structure and patterns
* 3. Initial setup for authentication
*/
test('seed', {
tag: '@viewer',
}, async ({ page }) => {
// Login to the application
await ensureLoggedIn(page);
// Verify we're on the home page
await expect(page).toHaveURL(/.*\/home/);
await expect(
page.getByText('Hello there, Welcome to your SigNoz workspace'),
).toBeVisible();
});

tests/e2e/tsconfig.json Normal file

@@ -0,0 +1,23 @@
{
"compilerOptions": {
"target": "ES2020",
"module": "commonjs",
"moduleResolution": "node",
"lib": ["ES2020"],
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"resolveJsonModule": true,
"types": ["node", "@playwright/test"],
"paths": {
"@tests/*": ["./tests/*"],
"@utils/*": ["./utils/*"],
"@specs/*": ["./specs/*"]
},
"outDir": "./dist",
"rootDir": "."
},
"include": ["tests/**/*.ts", "utils/**/*.ts", "playwright.config.ts"],
"exclude": ["node_modules", "dist"]
}


@@ -0,0 +1,43 @@
import { Page } from '@playwright/test';
// Read credentials from environment variables
const username = process.env.SIGNOZ_E2E_USERNAME;
const password = process.env.SIGNOZ_E2E_PASSWORD;
/**
* Ensures the user is logged in.
*
* When storageState is configured in playwright.config.ts (the default), this
* simply navigates to /home — the session is already restored from .auth/user.json
* and no login form interaction is needed.
*
* Falls back to a full login flow if the session is missing or expired.
*/
export async function ensureLoggedIn(page: Page): Promise<void> {
// Fast path: session already active (storageState or prior navigation)
if (page.url().includes('/home')) {
return;
}
// Try navigating to home — if session is valid it lands there immediately
await page.goto('/home');
if (page.url().includes('/home')) {
return;
}
// Session missing or expired — fall back to full login
if (!username || !password) {
throw new Error(
'SIGNOZ_E2E_USERNAME and SIGNOZ_E2E_PASSWORD environment variables must be set.',
);
}
await page.goto('/login?password=Y');
await page.getByTestId('email').fill(username);
await page.getByTestId('initiate_login').click();
await page.getByTestId('password').fill(password);
await page.getByRole('button', { name: 'Sign in with Password' }).click();
await page
.getByText('Hello there, Welcome to your')
.waitFor({ state: 'visible' });
}

tests/e2e/yarn.lock Normal file

File diff suppressed because it is too large

tests/fixtures/dashboards.py vendored Normal file

@@ -0,0 +1,79 @@
from http import HTTPStatus
from typing import Dict, List, Optional
import requests
from fixtures import types
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
def create_dashboard(
signoz: types.SigNoz,
token: str,
payload: Dict,
*,
timeout: int = 5,
) -> str:
response = requests.post(
signoz.self.host_configs["8080"].get("/api/v1/dashboards"),
json=payload,
headers={"Authorization": f"Bearer {token}"},
timeout=timeout,
)
assert response.status_code == HTTPStatus.CREATED, (
f"create_dashboard failed: {response.status_code} {response.text}"
)
return response.json()["data"]["id"]
def list_dashboards(
signoz: types.SigNoz,
token: str,
*,
timeout: int = 5,
) -> List[Dict]:
response = requests.get(
signoz.self.host_configs["8080"].get("/api/v1/dashboards"),
headers={"Authorization": f"Bearer {token}"},
timeout=timeout,
)
assert response.status_code == HTTPStatus.OK, (
f"list_dashboards failed: {response.status_code} {response.text}"
)
return response.json().get("data", []) or []
def find_dashboard_by_title(
signoz: types.SigNoz,
token: str,
title: str,
) -> Optional[Dict]:
for dashboard in list_dashboards(signoz, token):
data = dashboard.get("data") or dashboard
if data.get("title") == title:
return dashboard
return None
def upsert_dashboard(
signoz: types.SigNoz,
token: str,
payload: Dict,
) -> str:
"""
Idempotent create. Looks up by title; if present, returns the existing
dashboard id. Intended for warm-backend seed loops under `--reuse`.
"""
title = payload.get("title")
if title:
existing = find_dashboard_by_title(signoz, token, title)
if existing is not None:
dashboard_id = existing.get("id") or (existing.get("data") or {}).get("id")
logger.info(
"dashboard already present, skipping: %s",
{"title": title, "id": dashboard_id},
)
return dashboard_id
return create_dashboard(signoz, token, payload)


@@ -1,10 +1,13 @@
from dataclasses import dataclass
from datetime import datetime, timedelta
from datetime import datetime, timedelta, timezone
from http import HTTPStatus
from typing import Any, Dict, List, Optional, Union
import requests
from fixtures import types
from fixtures.logs import Logs
from fixtures.traces import TraceIdGenerator, Traces, TracesKind, TracesStatusCode
DEFAULT_STEP_INTERVAL = 60 # seconds
DEFAULT_TOLERANCE = 1e-9
@@ -583,3 +586,251 @@ def assert_scalar_column_order(
f"{context}: Column {column_index} order mismatch. "
f"Expected {expected_values}, got {actual_values}"
)
def format_timestamp(dt: datetime) -> str:
"""
Format a datetime object to match the API's timestamp format.
The API returns timestamps with trailing zeros stripped from the fractional seconds.
Example: 2026-02-03T20:54:56.5Z for 500000 microseconds
"""
base_str = dt.strftime("%Y-%m-%dT%H:%M:%S")
if dt.microsecond:
# Convert microseconds to fractional seconds and strip trailing zeros
fractional = f"{dt.microsecond / 1000000:.6f}"[2:].rstrip("0")
return f"{base_str}.{fractional}Z"
return f"{base_str}Z"
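
A worked example of the trailing-zero stripping (the helper is reimplemented inline so the snippet is self-contained; the logic mirrors `format_timestamp` above):

```python
from datetime import datetime, timezone


def format_timestamp(dt: datetime) -> str:
    # Same logic as the fixture helper: strip trailing zeros from fractional seconds
    base_str = dt.strftime("%Y-%m-%dT%H:%M:%S")
    if dt.microsecond:
        fractional = f"{dt.microsecond / 1000000:.6f}"[2:].rstrip("0")
        return f"{base_str}.{fractional}Z"
    return f"{base_str}Z"


half_second = datetime(2026, 2, 3, 20, 54, 56, 500000, tzinfo=timezone.utc)
whole_second = datetime(2026, 2, 3, 20, 54, 56, 0, tzinfo=timezone.utc)
# 500000 µs -> "0.500000" -> fractional part ".5"; zero µs -> no fractional part
```

So `half_second` formats as `2026-02-03T20:54:56.5Z` and `whole_second` as `2026-02-03T20:54:56Z`, matching the API shape the docstring describes.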
def assert_identical_query_response(
response1: requests.Response, response2: requests.Response
) -> None:
"""
Assert that two query responses are identical in status and data.
"""
assert response1.status_code == response2.status_code, "Status codes do not match"
if response1.status_code == HTTPStatus.OK:
assert (
response1.json()["status"] == response2.json()["status"]
), "Response statuses do not match"
assert (
response1.json()["data"]["data"]["results"]
== response2.json()["data"]["data"]["results"]
), "Response data do not match"
def generate_logs_with_corrupt_metadata() -> List[Logs]:
"""
Generate log entries whose resource/attribute metadata carries reserved field names ('id', 'timestamp', 'severity_text', 'severity_number' and 'body') with corrupt values.
"""
now = datetime.now(tz=timezone.utc).replace(second=0, microsecond=0)
return [
Logs(
timestamp=now - timedelta(seconds=4),
body="POST /integration request received",
severity_text="INFO",
resources={
"deployment.environment": "production",
"service.name": "http-service",
"os.type": "linux",
"host.name": "linux-000",
"cloud.provider": "integration",
"cloud.account.id": "000",
"timestamp": "corrupt_data",
},
attributes={
"net.transport": "IP.TCP",
"http.scheme": "http",
"http.user_agent": "Integration Test",
"http.request.method": "POST",
"http.response.status_code": "200",
"severity_text": "corrupt_data",
"timestamp": "corrupt_data",
},
trace_id="1",
),
Logs(
timestamp=now - timedelta(seconds=3),
body="SELECT query executed",
severity_text="DEBUG",
resources={
"deployment.environment": "production",
"service.name": "http-service",
"os.type": "linux",
"host.name": "linux-000",
"cloud.provider": "integration",
"cloud.account.id": "000",
"severity_number": "corrupt_data",
"id": "corrupt_data",
},
attributes={
"db.name": "integration",
"db.operation": "SELECT",
"db.statement": "SELECT * FROM integration",
"trace_id": "2",
},
),
Logs(
timestamp=now - timedelta(seconds=2),
body="HTTP PATCH failed with 404",
severity_text="WARN",
resources={
"deployment.environment": "production",
"service.name": "http-service",
"os.type": "linux",
"host.name": "linux-000",
"cloud.provider": "integration",
"cloud.account.id": "000",
"body": "corrupt_data",
"trace_id": "3",
},
attributes={
"http.request.method": "PATCH",
"http.status_code": "404",
"id": "1",
},
),
Logs(
timestamp=now - timedelta(seconds=1),
body="{'trace_id': '4'}",
severity_text="ERROR",
resources={
"deployment.environment": "production",
"service.name": "topic-service",
"os.type": "linux",
"host.name": "linux-001",
"cloud.provider": "integration",
"cloud.account.id": "001",
},
attributes={
"message.type": "SENT",
"messaging.operation": "publish",
"messaging.message.id": "001",
"body": "corrupt_data",
"timestamp": "corrupt_data",
},
),
]
def generate_traces_with_corrupt_metadata() -> List[Traces]:
"""
Generate trace spans whose metadata carries corrupt 'id', 'timestamp', 'trace_id' and 'duration_nano' fields
"""
http_service_trace_id = TraceIdGenerator.trace_id()
http_service_span_id = TraceIdGenerator.span_id()
http_service_db_span_id = TraceIdGenerator.span_id()
http_service_patch_span_id = TraceIdGenerator.span_id()
topic_service_trace_id = TraceIdGenerator.trace_id()
topic_service_span_id = TraceIdGenerator.span_id()
now = datetime.now(tz=timezone.utc).replace(second=0, microsecond=0)
return [
Traces(
timestamp=now - timedelta(seconds=4),
duration=timedelta(seconds=3),
trace_id=http_service_trace_id,
span_id=http_service_span_id,
parent_span_id="",
name="POST /integration",
kind=TracesKind.SPAN_KIND_SERVER,
status_code=TracesStatusCode.STATUS_CODE_OK,
status_message="",
resources={
"deployment.environment": "production",
"service.name": "http-service",
"os.type": "linux",
"host.name": "linux-000",
"cloud.provider": "integration",
"cloud.account.id": "000",
"trace_id": "corrupt_data",
},
attributes={
"net.transport": "IP.TCP",
"http.scheme": "http",
"http.user_agent": "Integration Test",
"http.request.method": "POST",
"http.response.status_code": "200",
"timestamp": "corrupt_data",
},
),
Traces(
timestamp=now - timedelta(seconds=3.5),
duration=timedelta(seconds=5),
trace_id=http_service_trace_id,
span_id=http_service_db_span_id,
parent_span_id=http_service_span_id,
name="SELECT",
kind=TracesKind.SPAN_KIND_CLIENT,
status_code=TracesStatusCode.STATUS_CODE_OK,
status_message="",
resources={
"deployment.environment": "production",
"service.name": "http-service",
"os.type": "linux",
"host.name": "linux-000",
"cloud.provider": "integration",
"cloud.account.id": "000",
"timestamp": "corrupt_data",
},
attributes={
"db.name": "integration",
"db.operation": "SELECT",
"db.statement": "SELECT * FROM integration",
"trace_d": "corrupt_data",
},
),
Traces(
timestamp=now - timedelta(seconds=3),
duration=timedelta(seconds=1),
trace_id=http_service_trace_id,
span_id=http_service_patch_span_id,
parent_span_id=http_service_span_id,
name="HTTP PATCH",
kind=TracesKind.SPAN_KIND_CLIENT,
status_code=TracesStatusCode.STATUS_CODE_OK,
status_message="",
resources={
"deployment.environment": "production",
"service.name": "http-service",
"os.type": "linux",
"host.name": "linux-000",
"cloud.provider": "integration",
"cloud.account.id": "000",
"duration_nano": "corrupt_data",
},
attributes={
"http.request.method": "PATCH",
"http.status_code": "404",
"id": "1",
},
),
Traces(
timestamp=now - timedelta(seconds=1),
duration=timedelta(seconds=4),
trace_id=topic_service_trace_id,
span_id=topic_service_span_id,
parent_span_id="",
name="topic publish",
kind=TracesKind.SPAN_KIND_PRODUCER,
status_code=TracesStatusCode.STATUS_CODE_OK,
status_message="",
resources={
"deployment.environment": "production",
"service.name": "topic-service",
"os.type": "linux",
"host.name": "linux-001",
"cloud.provider": "integration",
"cloud.account.id": "001",
},
attributes={
"message.type": "SENT",
"messaging.operation": "publish",
"messaging.message.id": "001",
"duration_nano": "corrupt_data",
"id": 1,
},
),
]

tests/fixtures/seeder/Dockerfile (vendored, new file)

@@ -0,0 +1,25 @@
# HTTP seeder for Playwright e2e tests. Wraps the direct-ClickHouse-insert
# helpers in tests/fixtures/{traces,logs,metrics}.py so a browser test can
# seed telemetry with fine-grained control.
#
# Build context is tests/ (one level above this file) so `fixtures/` is
# importable inside the image.
FROM python:3.13-slim
WORKDIR /app
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc libpq-dev \
&& rm -rf /var/lib/apt/lists/*
COPY fixtures/seeder/requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt
# Ship the whole fixtures/ package so server.py can `from fixtures.traces
# import ...` with the same module path the pytest side uses.
COPY fixtures /app/fixtures
EXPOSE 8080
CMD ["uvicorn", "fixtures.seeder.server:app", "--host", "0.0.0.0", "--port", "8080"]

tests/fixtures/seeder/__init__.py (vendored, new file)

@@ -0,0 +1,110 @@
import time
from http import HTTPStatus
from pathlib import Path
import docker
import docker.errors
import pytest
import requests
from testcontainers.core.container import DockerContainer, Network
from fixtures import dev, types
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
# Build context is tests/ so `fixtures/` is importable inside the container
# under /app/fixtures. This file sits at tests/fixtures/seeder/__init__.py,
# hence parents[2] = tests/.
_TESTS_ROOT = Path(__file__).resolve().parents[2]
@pytest.fixture(name="seeder", scope="package")
def seeder(
network: Network,
clickhouse: types.TestContainerClickhouse,
request: pytest.FixtureRequest,
pytestconfig: pytest.Config,
) -> types.TestContainerDocker:
"""
HTTP seeder fixture — a Python container exposing POST/DELETE endpoints
that wrap the direct-ClickHouse-insert helpers (currently just traces;
logs + metrics to follow). Frontend tests call these endpoints to seed
telemetry with fine-grained per-test control.
"""
def create() -> types.TestContainerDocker:
# docker-py wants `dockerfile` RELATIVE to `path`. The fixture file
# lives at tests/fixtures/seeder/__init__.py so the build context
# root is tests/ (two parents up), and the Dockerfile path inside
# that context is fixtures/seeder/Dockerfile.
docker_client = docker.from_env()
docker_client.images.build(
path=str(_TESTS_ROOT),
dockerfile="fixtures/seeder/Dockerfile",
tag="signoz-tests-seeder:latest",
rm=True,
)
container = DockerContainer("signoz-tests-seeder:latest")
container.with_env("CH_HOST", clickhouse.container.container_configs["8123"].address)
container.with_env("CH_PORT", str(clickhouse.container.container_configs["8123"].port))
container.with_env(
"CH_USER", clickhouse.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_USERNAME"]
)
container.with_env(
"CH_PASSWORD", clickhouse.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_PASSWORD"]
)
container.with_env(
"CH_CLUSTER", clickhouse.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER"]
)
container.with_exposed_ports(8080)
container.with_network(network=network)
container.start()
host = container.get_container_host_ip()
host_port = container.get_exposed_port(8080)
for attempt in range(20):
try:
response = requests.get(f"http://{host}:{host_port}/healthz", timeout=2)
if response.status_code == HTTPStatus.OK:
break
except Exception as e: # pylint: disable=broad-exception-caught
logger.info("seeder attempt %d: %s", attempt + 1, e)
time.sleep(1)
else:
raise TimeoutError("seeder container did not become ready")
return types.TestContainerDocker(
id=container.get_wrapped_container().id,
host_configs={
"8080": types.TestContainerUrlConfig("http", host, host_port),
},
container_configs={
"8080": types.TestContainerUrlConfig(
"http", container.get_wrapped_container().name, 8080
),
},
)
def delete(container: types.TestContainerDocker) -> None:
client = docker.from_env()
try:
client.containers.get(container_id=container.id).stop()
client.containers.get(container_id=container.id).remove(v=True)
except docker.errors.NotFound:
logger.info("Seeder container %s already gone", container.id)
def restore(cache: dict) -> types.TestContainerDocker:
return types.TestContainerDocker.from_cache(cache)
return dev.wrap(
request,
pytestconfig,
"seeder",
empty=lambda: types.TestContainerDocker(id="", host_configs={}, container_configs={}),
create=create,
delete=delete,
restore=restore,
)
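The readiness wait in the fixture leans on Python's for/else: the else arm runs only when the loop exhausts every attempt without hitting `break`. A stripped-down sketch of the same pattern (`wait_ready` and `flaky_probe` are illustrative names, not from the fixture):

```python
def wait_ready(probe, attempts=20):
    # Try the probe up to `attempts` times; break out on the first success.
    for attempt in range(attempts):
        if probe():
            break
    else:
        # Reached only if the loop never hit `break` — i.e. every probe failed.
        raise TimeoutError("did not become ready")
    return attempt + 1

calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    return calls["n"] >= 3  # succeeds on the third try

print(wait_ready(flaky_probe))  # → 3
```

The real fixture additionally sleeps between attempts and swallows connection errors; the control flow is the same.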

tests/fixtures/seeder/requirements.txt (vendored, new file)

@@ -0,0 +1,15 @@
fastapi>=0.115
uvicorn[standard]>=0.34
clickhouse-connect>=0.8.18
numpy>=2.3.2
isodate>=0.7.2
# Needed because importing fixtures.traces walks fixtures/__init__.py (pulls
# testcontainers) and fixtures/types.py (pulls sqlalchemy, py). pytest is
# needed for the @pytest.fixture decorator at module scope. Rather than
# refactoring the fixtures layer to be import-safe without these, install
# them here — keeps the import path identical between the pytest fixture
# and the HTTP seeder wrapper.
pytest>=8.3.5
testcontainers[clickhouse]>=4.13.1
sqlalchemy>=2.0.43
py>=1.11

tests/fixtures/seeder/server.py (vendored, new file)

@@ -0,0 +1,86 @@
"""
HTTP seeder wrapping the direct-ClickHouse-insert helpers in
fixtures/traces.py. Each POST endpoint accepts the same JSON shape as the
corresponding pytest fixture input and tags every inserted row with
resource `seeder=true`. DELETE truncates the underlying tables.
Env:
CH_HOST, CH_PORT, CH_USER, CH_PASSWORD — ClickHouse connection
CH_CLUSTER — cluster name for TRUNCATE ... ON CLUSTER
"""
import logging
import os
from typing import Any, Dict, List
import clickhouse_connect
from fastapi import FastAPI, HTTPException
from fixtures.traces import (
Traces,
insert_traces_to_clickhouse,
truncate_traces_tables,
)
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("seeder")
CH_HOST = os.environ["CH_HOST"]
CH_PORT = int(os.environ.get("CH_PORT", "8123"))
CH_USER = os.environ["CH_USER"]
CH_PASSWORD = os.environ["CH_PASSWORD"]
CH_CLUSTER = os.environ["CH_CLUSTER"]
SEEDER_MARKER = {"seeder": "true"}
_conn = None
def get_conn():
global _conn # pylint: disable=global-statement
if _conn is None:
_conn = clickhouse_connect.get_client(
host=CH_HOST,
port=CH_PORT,
username=CH_USER,
password=CH_PASSWORD,
)
return _conn
app = FastAPI(title="signoz-tests seeder", version="0.1.0")
@app.get("/healthz")
def healthz() -> Dict[str, str]:
return {"status": "ok"}
def _tag(item: Dict[str, Any]) -> Dict[str, Any]:
resources = {**(item.get("resources") or {}), **SEEDER_MARKER}
return {**item, "resources": resources}
@app.post("/telemetry/traces")
def post_traces(payload: List[Dict[str, Any]]) -> Dict[str, Any]:
try:
traces = [Traces.from_dict(_tag(item)) for item in payload]
insert_traces_to_clickhouse(get_conn(), traces)
logger.info("inserted %d traces", len(traces))
return {"inserted": len(traces)}
except KeyError as e:
raise HTTPException(status_code=400, detail=f"missing required field: {e}") from e
except Exception as e:
logger.exception("insert failed")
raise HTTPException(status_code=500, detail=str(e)) from e
@app.delete("/telemetry/traces")
def delete_traces() -> Dict[str, bool]:
try:
truncate_traces_tables(get_conn(), CH_CLUSTER)
logger.info("truncated traces tables")
return {"truncated": True}
except Exception as e:
logger.exception("truncate failed")
raise HTTPException(status_code=500, detail=str(e)) from e

tests/fixtures/signoz.py

@@ -2,6 +2,7 @@ import platform
import time
from http import HTTPStatus
from os import path
from pathlib import Path
from typing import Optional
import docker
@@ -16,6 +17,11 @@ from fixtures.logger import setup_logger
logger = setup_logger(__name__)
# Absolute path to the signoz repo root. Anchored to this file so the build
# context resolves correctly regardless of pytest's cwd (tests/ vs
# tests/integration/). fixtures/signoz.py -> fixtures/ -> tests/ -> repo root.
_REPO_ROOT = Path(__file__).resolve().parents[2]
def create_signoz(
network: Network,
@@ -50,7 +56,7 @@ def create_signoz(
dockerfile_path = "cmd/enterprise/Dockerfile.with-web.integration"
self = DockerImage(
path="../../",
path=str(_REPO_ROOT),
dockerfile_path=dockerfile_path,
tag="signoz:integration",
buildargs={

tests/fixtures/traces.py

@@ -689,131 +689,144 @@ class Traces(ABC):
return traces
def insert_traces_to_clickhouse(conn, traces: List[Traces]) -> None:
"""
Insert traces into ClickHouse tables following the same logic as the Go exporter.
Handles insertion into:
- distributed_signoz_index_v3 (main traces table)
- distributed_traces_v3_resource (resource fingerprints)
- distributed_tag_attributes_v2 (tag attributes)
- distributed_span_attributes_keys (attribute keys)
- distributed_signoz_error_index_v2 (error events)
Pure function so the seeder container (tests/fixtures/seeder/) can reuse
the exact insert path used by the pytest fixtures. `conn` is a
clickhouse-connect Client.
"""
resources: List[TracesResource] = []
for trace in traces:
resources.extend(trace.resource)
if len(resources) > 0:
conn.insert(
database="signoz_traces",
table="distributed_traces_v3_resource",
data=[resource.np_arr() for resource in resources],
)
tag_attributes: List[TracesTagAttributes] = []
for trace in traces:
tag_attributes.extend(trace.tag_attributes)
if len(tag_attributes) > 0:
conn.insert(
database="signoz_traces",
table="distributed_tag_attributes_v2",
data=[tag_attribute.np_arr() for tag_attribute in tag_attributes],
)
attribute_keys: List[TracesResourceOrAttributeKeys] = []
resource_keys: List[TracesResourceOrAttributeKeys] = []
for trace in traces:
attribute_keys.extend(trace.attribute_keys)
resource_keys.extend(trace.resource_keys)
if len(attribute_keys) > 0:
conn.insert(
database="signoz_traces",
table="distributed_span_attributes_keys",
data=[attribute_key.np_arr() for attribute_key in attribute_keys],
)
if len(resource_keys) > 0:
conn.insert(
database="signoz_traces",
table="distributed_span_attributes_keys",
data=[resource_key.np_arr() for resource_key in resource_keys],
)
conn.insert(
database="signoz_traces",
table="distributed_signoz_index_v3",
column_names=[
"ts_bucket_start",
"resource_fingerprint",
"timestamp",
"trace_id",
"span_id",
"trace_state",
"parent_span_id",
"flags",
"name",
"kind",
"kind_string",
"duration_nano",
"status_code",
"status_message",
"status_code_string",
"attributes_string",
"attributes_number",
"attributes_bool",
"resources_string",
"events",
"links",
"response_status_code",
"external_http_url",
"http_url",
"external_http_method",
"http_method",
"http_host",
"db_name",
"db_operation",
"has_error",
"is_remote",
"resource",
],
data=[trace.np_arr() for trace in traces],
)
error_events: List[TracesErrorEvent] = []
for trace in traces:
error_events.extend(trace.error_events)
if len(error_events) > 0:
conn.insert(
database="signoz_traces",
table="distributed_signoz_error_index_v2",
data=[error_event.np_arr() for error_event in error_events],
)
_TRACES_TABLES_TO_TRUNCATE = [
"signoz_index_v3",
"traces_v3_resource",
"tag_attributes_v2",
"span_attributes_keys",
"signoz_error_index_v2",
]
def truncate_traces_tables(conn, cluster: str) -> None:
"""Truncate all traces tables. Used by the pytest fixture teardown and by
the seeder's DELETE /telemetry/traces endpoint."""
for table in _TRACES_TABLES_TO_TRUNCATE:
conn.query(
f"TRUNCATE TABLE signoz_traces.{table} ON CLUSTER '{cluster}' SYNC"
)
@pytest.fixture(name="insert_traces", scope="function")
def insert_traces(
clickhouse: types.TestContainerClickhouse,
) -> Generator[Callable[[List[Traces]], None], Any, None]:
def _insert_traces(traces: List[Traces]) -> None:
"""
Insert traces into ClickHouse tables following the same logic as the Go exporter.
This function handles insertion into multiple tables:
- distributed_signoz_index_v3 (main traces table)
- distributed_traces_v3_resource (resource fingerprints)
- distributed_tag_attributes_v2 (tag attributes)
- distributed_span_attributes_keys (attribute keys)
- distributed_signoz_error_index_v2 (error events)
"""
resources: List[TracesResource] = []
for trace in traces:
resources.extend(trace.resource)
if len(resources) > 0:
clickhouse.conn.insert(
database="signoz_traces",
table="distributed_traces_v3_resource",
data=[resource.np_arr() for resource in resources],
)
tag_attributes: List[TracesTagAttributes] = []
for trace in traces:
tag_attributes.extend(trace.tag_attributes)
if len(tag_attributes) > 0:
clickhouse.conn.insert(
database="signoz_traces",
table="distributed_tag_attributes_v2",
data=[tag_attribute.np_arr() for tag_attribute in tag_attributes],
)
attribute_keys: List[TracesResourceOrAttributeKeys] = []
resource_keys: List[TracesResourceOrAttributeKeys] = []
for trace in traces:
attribute_keys.extend(trace.attribute_keys)
resource_keys.extend(trace.resource_keys)
if len(attribute_keys) > 0:
clickhouse.conn.insert(
database="signoz_traces",
table="distributed_span_attributes_keys",
data=[attribute_key.np_arr() for attribute_key in attribute_keys],
)
if len(resource_keys) > 0:
clickhouse.conn.insert(
database="signoz_traces",
table="distributed_span_attributes_keys",
data=[resource_key.np_arr() for resource_key in resource_keys],
)
# Insert main traces
clickhouse.conn.insert(
database="signoz_traces",
table="distributed_signoz_index_v3",
column_names=[
"ts_bucket_start",
"resource_fingerprint",
"timestamp",
"trace_id",
"span_id",
"trace_state",
"parent_span_id",
"flags",
"name",
"kind",
"kind_string",
"duration_nano",
"status_code",
"status_message",
"status_code_string",
"attributes_string",
"attributes_number",
"attributes_bool",
"resources_string",
"events",
"links",
"response_status_code",
"external_http_url",
"http_url",
"external_http_method",
"http_method",
"http_host",
"db_name",
"db_operation",
"has_error",
"is_remote",
"resource",
],
data=[trace.np_arr() for trace in traces],
)
# Insert error events
error_events: List[TracesErrorEvent] = []
for trace in traces:
error_events.extend(trace.error_events)
if len(error_events) > 0:
clickhouse.conn.insert(
database="signoz_traces",
table="distributed_signoz_error_index_v2",
data=[error_event.np_arr() for error_event in error_events],
)
insert_traces_to_clickhouse(clickhouse.conn, traces)
yield _insert_traces
clickhouse.conn.query(
f"TRUNCATE TABLE signoz_traces.signoz_index_v3 ON CLUSTER '{clickhouse.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}' SYNC"
)
clickhouse.conn.query(
f"TRUNCATE TABLE signoz_traces.traces_v3_resource ON CLUSTER '{clickhouse.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}' SYNC"
)
clickhouse.conn.query(
f"TRUNCATE TABLE signoz_traces.tag_attributes_v2 ON CLUSTER '{clickhouse.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}' SYNC"
)
clickhouse.conn.query(
f"TRUNCATE TABLE signoz_traces.span_attributes_keys ON CLUSTER '{clickhouse.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}' SYNC"
)
clickhouse.conn.query(
f"TRUNCATE TABLE signoz_traces.signoz_error_index_v2 ON CLUSTER '{clickhouse.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}' SYNC"
truncate_traces_tables(
clickhouse.conn,
clickhouse.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER"],
)
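truncate_traces_tables just formats one TRUNCATE per table; a fake connection (an illustrative stand-in for the clickhouse-connect Client, with a shortened table list) makes the generated SQL easy to eyeball:

```python
_TRACES_TABLES_TO_TRUNCATE = [
    "signoz_index_v3",
    "traces_v3_resource",
]

class FakeConn:
    """Records every SQL string instead of talking to ClickHouse."""
    def __init__(self):
        self.queries = []
    def query(self, sql):
        self.queries.append(sql)

def truncate_traces_tables(conn, cluster):
    for table in _TRACES_TABLES_TO_TRUNCATE:
        conn.query(
            f"TRUNCATE TABLE signoz_traces.{table} ON CLUSTER '{cluster}' SYNC"
        )

conn = FakeConn()
truncate_traces_tables(conn, "cluster")
print(conn.queries[0])
# → TRUNCATE TABLE signoz_traces.signoz_index_v3 ON CLUSTER 'cluster' SYNC
```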


@@ -9,14 +9,12 @@ from fixtures import types
from fixtures.auth import USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD
from fixtures.logs import Logs
from fixtures.querier import (
assert_identical_query_response,
assert_minutely_bucket_values,
find_named_result,
index_series_by_label,
make_query_request,
)
from src.querier.util import (
assert_identical_query_response,
)
def test_logs_list(


@@ -8,17 +8,15 @@ import requests
from fixtures import types
from fixtures.auth import USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD
from fixtures.querier import (
assert_identical_query_response,
assert_minutely_bucket_values,
find_named_result,
format_timestamp,
generate_traces_with_corrupt_metadata,
index_series_by_label,
make_query_request,
)
from fixtures.traces import TraceIdGenerator, Traces, TracesKind, TracesStatusCode
from src.querier.util import (
assert_identical_query_response,
format_timestamp,
generate_traces_with_corrupt_metadata,
)
def test_traces_list(


@@ -11,10 +11,8 @@ from fixtures.querier import (
BuilderQuery,
OrderBy,
TelemetryFieldKey,
make_query_request,
)
from src.querier.util import (
generate_logs_with_corrupt_metadata,
make_query_request,
)


@@ -1,256 +0,0 @@
from datetime import datetime, timedelta, timezone
from http import HTTPStatus
from typing import List
import requests
from fixtures.logs import Logs
from fixtures.traces import TraceIdGenerator, Traces, TracesKind, TracesStatusCode
def format_timestamp(dt: datetime) -> str:
"""
Format a datetime object to match the API's timestamp format.
The API returns timestamps with minimal fractional seconds precision.
Example: 2026-02-03T20:54:56.5Z for 500000 microseconds
"""
base_str = dt.strftime("%Y-%m-%dT%H:%M:%S")
if dt.microsecond:
# Convert microseconds to fractional seconds and strip trailing zeros
fractional = f"{dt.microsecond / 1000000:.6f}"[2:].rstrip("0")
return f"{base_str}.{fractional}Z"
return f"{base_str}Z"
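format_timestamp's fractional handling can be checked standalone (same body as above, reproduced so the snippet runs on its own):

```python
from datetime import datetime, timezone

def format_timestamp(dt):
    base_str = dt.strftime("%Y-%m-%dT%H:%M:%S")
    if dt.microsecond:
        # "0.500000" -> "500000" -> "5": keep only significant fractional digits.
        fractional = f"{dt.microsecond / 1000000:.6f}"[2:].rstrip("0")
        return f"{base_str}.{fractional}Z"
    return f"{base_str}Z"

dt = datetime(2026, 2, 3, 20, 54, 56, 500000, tzinfo=timezone.utc)
print(format_timestamp(dt))  # → 2026-02-03T20:54:56.5Z
print(format_timestamp(dt.replace(microsecond=0)))  # → 2026-02-03T20:54:56Z
```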
(remainder of deleted file omitted: assert_identical_query_response, generate_logs_with_corrupt_metadata and generate_traces_with_corrupt_metadata were moved verbatim to src/querier/util.py)

tests/pyproject.toml

@@ -1,5 +1,5 @@
[project]
name = "integration"
name = "signoz-tests"
version = "0.1.0"
description = ""
authors = [{ name = "therealpandey", email = "vibhupandey28@gmail.com" }]
@@ -27,11 +27,16 @@ dev = [
]
[tool.pytest.ini_options]
python_files = "src/**/**.py"
python_files = "*/src/**/**.py"
# importlib mode: avoids sys.modules collisions between same-basename tests
# (e.g. querier/01_logs.py vs rawexportdata/01_logs.py) now that all trees
# share one rootdir at tests/. importlib also disables pytest's implicit
# sys.path injection — pythonpath below makes `import fixtures` resolve.
pythonpath = ["."]
addopts = "-ra -p no:warnings --import-mode=importlib"
log_cli = true
log_format = "%(asctime)s [%(levelname)s] (%(filename)s:%(lineno)s) %(message)s"
log_date_format = "%Y-%m-%d %H:%M:%S"
addopts = "-ra -p no:warnings"
[tool.pylint.main]
ignore = [".venv"]

Some files were not shown because too many files have changed in this diff.