Compare commits

...

108 Commits

Author SHA1 Message Date
grandwizard28
2f4cfcf3bc cleanup(tests): simplify review pass
- fixtures/fs.py: testdata path resolved to tests/testdata after the
  fixture move; integration tests with data-driven parametrize (e.g.
  alerts/02_basic_alert_conditions.py) were all failing with
  FileNotFoundError. Walk to tests/integration/testdata now.
- fixtures/auth.py: extract _login helper so apply_license stops
  duplicating the GET /sessions/context + POST /sessions/email_password
  pair. Add a retry loop on POST /api/v3/licenses so a BE that isn't
  quite ready at bring-up time doesn't fail the fixture.
- seeder/server.py: use FastAPI lifespan to open+close the ClickHouse
  client instead of a lazy module-level global (see the sketch below);
  collapse the verbose module docstring.
- fixtures/seeder.py + e2e/bootstrap/setup.py: trim docstrings/comments
  that narrated WHAT the code does — per-repo convention keeps only
  non-obvious WHY.
- .github/workflows/integrationci.yaml: gate the Chrome + chromedriver
  install on matrix.suite == 'callbackauthn' (the only suite that uses
  Selenium). Saves ~30s × 50 jobs on every PR run.
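
A minimal sketch of the seeder/server.py lifespan swap; the
clickhouse_connect call and connection details are illustrative, not the
seeder's actual wiring:

  import contextlib

  import clickhouse_connect
  from fastapi import FastAPI

  @contextlib.asynccontextmanager
  async def lifespan(app: FastAPI):
      # Open the ClickHouse client once at startup instead of constructing
      # a lazy module-level global on first use.
      app.state.clickhouse = clickhouse_connect.get_client(host="clickhouse")
      yield
      # Close it on shutdown so the connection doesn't leak.
      app.state.clickhouse.close()

  app = FastAPI(lifespan=lifespan)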
2026-04-22 21:56:21 +05:30
grandwizard28
d0f3c386d4 chore(tests/e2e): drop bootstrap/run.py
The run.py entrypoint was just setup.py + subprocess('yarn test'); CI
splits those steps anyway (separate provision / test / teardown for
clean artifact capture) and locally the two-step flow is equivalent.
Removing the duplicate entrypoint; docs updated accordingly.
2026-04-22 21:11:13 +05:30
grandwizard28
7b990fa56b refactor(tests): drop __file__.parents[N] path tricks; use pytestconfig.rootpath
The pytest rootdir is already tests/, so anywhere we were computing
_REPO_ROOT / _TESTS_ROOT / e2e-dir from Path(__file__).resolve().parents[N]
can just use pytestconfig.rootpath (or .parent for the repo root); a
sketch follows the list below.

- fixtures/signoz.py: DockerImage path → pytestconfig.rootpath.parent
- fixtures/seeder.py: docker-py build path → pytestconfig.rootpath
- e2e/bootstrap/setup.py: .env.local path → pytestconfig.rootpath / e2e
- e2e/bootstrap/run.py: yarn-test cwd → pytestconfig.rootpath / e2e
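
A minimal sketch of the substitution (the repo_root fixture name is
illustrative):

  from pathlib import Path

  import pytest

  @pytest.fixture
  def repo_root(pytestconfig: pytest.Config) -> Path:
      # rootdir is tests/, so one .parent reaches the repo root.
      return pytestconfig.rootpath.parent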
2026-04-22 21:05:41 +05:30
grandwizard28
511482c2cc ci(tests): drop auditquerier from integrationci matrix for now 2026-04-22 21:02:12 +05:30
grandwizard28
5fef44f775 fix(tests/pyproject): ignore node_modules and py module in pylint 2026-04-22 21:01:22 +05:30
grandwizard28
c2ab71aa39 fix(tests/fixtures): apply black formatting to truncate helpers 2026-04-22 20:59:21 +05:30
grandwizard28
314b4208e6 ci(tests/e2e): upload entire artifacts/ dir, 5-day retention 2026-04-22 20:41:32 +05:30
grandwizard28
2bef4c8b50 ci(tests): fix integrationci paths + add e2eci workflow
integrationci:
- Matrix path was integration/src/<suite> (old layout); current layout
  is integration/tests/<suite>. Renames the matrix key src -> suite and
  fixes the pytest path accordingly.
- Adds auditquerier and rawexportdata to the matrix (new suites).
- Drops bootstrap from the matrix — it's no longer a test suite, just
  the pytest lifecycle entry.

e2eci (new, replaces the broken frontend/-based run-e2e.yaml):
- Label-gated trigger mirroring integrationci: requires safe-to-test +
  safe-to-e2e. Runs on pull_request / pull_request_target.
- Installs Python (uv) and Node (yarn), syncs tests/ deps, installs
  Playwright browsers for the matrix project.
- Brings the stack up via e2e/bootstrap/setup.py::test_setup --with-web
  (build signoz-with-web container once), runs playwright against it,
  tears down in an always-run step.
- Uploads the HTML report + per-test traces as artifacts.
- Matrix starts with chromium only (firefox / webkit can follow).
2026-04-22 20:16:23 +05:30
grandwizard28
22f5f66dc4 docs(contributing/tests): update integration.md to current repo layout 2026-04-22 20:08:54 +05:30
grandwizard28
bbd5c7e28e docs(contributing): relocate integration.md + e2e.md to tests/
Moves docs/contributing/go/integration.md -> docs/contributing/tests/integration.md
and docs/contributing/e2e.md -> docs/contributing/tests/e2e.md so the test-
contributor docs live under contributing/tests/. The previous top-level
promotion at docs/contributing/integration.md is removed; go/readme.md
drops the dangling integration link.
2026-04-22 20:05:33 +05:30
grandwizard28
68d659a10a docs(contributing): new integration.md + e2e.md at top level
Promotes the two test-contributor docs from docs/contributing/tests/
to docs/contributing/ and rewrites them in the long-form Q&A format
of docs/contributing/go/integration.md (prerequisites → setup →
framework → writing → running → configuring → remember).

Reflects the current state: shared fixtures package at tests/fixtures/,
flat integration suites under tests/integration/tests/, e2e specs
grouped by resource under tests/e2e/tests/<feature>/, apply_license
fixture in the bootstrap, authedPage Playwright fixture, and the
artifacts/{html,json,results} output layout.
2026-04-22 20:01:10 +05:30
grandwizard28
e346805b57 chore(tests/e2e): drop legacy specs, trim alerts.spec.ts to one smoke test
Deletes tests/e2e/legacy/ (five old 2095-replay specs) and the two
sibling alerts suite files (downtime, cascade-delete). alerts.spec.ts
is reduced to a single TC-01 smoke test that loads /alerts and asserts
the tabs render — a fresh minimum to build on.
2026-04-22 19:51:53 +05:30
grandwizard28
7f7d1968e1 chore(tests/e2e): rename playwright outputDir to artifacts/results 2026-04-22 18:28:19 +05:30
grandwizard28
5dc5957fb3 chore(tests/fixtures): drop unused dashboards.py 2026-04-22 15:52:52 +05:30
grandwizard28
c832d81488 chore: remove claude files 2026-04-22 15:30:45 +05:30
grandwizard28
e36cf45431 chore: cleanup 2026-04-22 15:29:29 +05:30
grandwizard28
97aceea50c feat(tests/fixtures/auth): apply_license fixture + wire into e2e bootstrap
- Adds a package-scoped apply_license fixture (sketched below) that
  stubs the Zeus /v2/licenses/me mock and POSTs /api/v3/licenses so the
  BE flips to ENTERPRISE. The fixture also PUTs org_onboarding=true
  because the license enables the onboarding flag, which would otherwise
  hijack every post-login navigation to a questionnaire.
- Wires apply_license into e2e/bootstrap/setup.py::test_setup and
  ::test_teardown alongside create_user_admin.
- Existing add_license helper stays as-is for integration tests.
- Login fixture now waits for the URL to leave /login instead of a
  pre-license "Hello there" welcome string (the post-login landing
  page varies with license state).
- TC-07 anomaly test no longer skips (license enables the flag) and
  drops the legacy test-notification API contract probe that needs
  seeded metric data (covered by the integration suite).
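
A minimal sketch of the fixture's shape; the request bodies, the base_url
fixture, and the org-preferences route are assumptions — only the
POST /api/v3/licenses endpoint is named in this commit:

  import pytest
  import requests

  @pytest.fixture(scope="package")
  def apply_license(base_url: str) -> None:
      # Apply the license so the BE flips to ENTERPRISE.
      resp = requests.post(f"{base_url}/api/v3/licenses", json={"key": "test-key"})
      resp.raise_for_status()
      # Disable onboarding so post-login navigation isn't hijacked.
      # Hypothetical route; the commit only says it PUTs org_onboarding=true.
      requests.put(f"{base_url}/api/v1/org/preferences", json={"org_onboarding": True})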
2026-04-22 15:28:55 +05:30
grandwizard28
7f0ff08066 refactor(tests/e2e): group alerts specs under tests/alerts/ 2026-04-22 15:00:43 +05:30
grandwizard28
6ab90a782b feat(tests/e2e): re-author 2095 alerts + downtime regression
Three fresh specs split by resource replace the 736-line
legacy monolith at tests/e2e/legacy/alerts/alerts-downtime.spec.ts:

- alerts.spec.ts: rule list CRUD, labels round-trip, test-notification
  pre-state, details/history/AlertNotFound, anomaly (EE-gated, skip on
  community)
- downtime.spec.ts: planned-downtime CRUD round-trip
- cascade-delete.spec.ts: 409 paths on rule/downtime delete when linked

UI-first: Playwright traces capture the BE conversations, so direct
page.request calls are reserved for seeding where the query-builder
setup is incidental to the test, API-contract probes, and cleanup.
2026-04-22 14:53:32 +05:30
grandwizard28
3e9a618721 fix(tests/seeder): add python3-dev so psycopg2 can compile in the image
Consolidating seeder deps into pyproject.toml pulled in psycopg2, which
needs Python dev headers (Python.h) to build from source. The apt layer
had gcc + libpq-dev but was missing python3-dev, so `uv sync --frozen
--no-install-project --no-dev` failed with "gcc failed with exit code 1"
during the seeder image build.

Add python3-dev to the apt install line; image size bump ~50MB for dev
headers. Alternative would have been swapping psycopg2 for
psycopg2-binary in pyproject.toml, but that'd affect the whole test
project for one Dockerfile concern — wrong scope.
2026-04-22 01:04:52 +05:30
grandwizard28
8af041a345 refactor(tests): drop -utils deprecation shims; import from canonical modules
The shims we introduced during the phase-3 merges (authutils, alertutils,
cloudintegrationsutils, idputils, gatewayutils) and the phase-4 primitive
renames (dev, utils, driver) have done their job — integration/ tests can
now import directly from the real modules.

Rewrite every shim-import in tests/integration/tests/:
  fixtures.authutils → fixtures.auth
  fixtures.alertutils → fixtures.alerts
  fixtures.cloudintegrationsutils → fixtures.cloudintegrations
  fixtures.idputils → fixtures.idp
  fixtures.gatewayutils → fixtures.gateway
  fixtures.utils (get_testdata_file_path) → fixtures.fs

Delete all 8 shim files:
  fixtures/{authutils,alertutils,cloudintegrationsutils,idputils,
  gatewayutils,dev,utils,driver}.py

Nothing in active code (integration tests, e2e fixtures, bootstrap, seeder)
imported fixtures.dev or fixtures.driver, so those had no callers to
sweep — just delete.

500 tests still collect.
2026-04-22 01:00:18 +05:30
grandwizard28
52cf194111 refactor(tests/e2e): move existing specs to legacy/ pending fresh rewrite
Park the 5 current spec files under tests/e2e/legacy/ while fresh specs
get written in tests/e2e/tests/ against the new conventions (TC-NN
titles, authedPage fixture, minimal direct-fetch). Playwright's testDir
stays pointed at ./tests — `yarn test` now finds 0 tests until the
first fresh spec lands. legacy/ is preserved for reference but not
collected by default.

Add a .gitkeep under tests/ so the empty dir survives in git between
the move and the first new spec.

Running legacy on demand:
  npx playwright test --config tests/e2e/playwright.config.ts \
    --project chromium legacy/<spec>.ts
(or temporarily point testDir at ./legacy in the config). No yarn
script wired — legacy is expected to rot as fresh specs replace it.
2026-04-22 00:50:50 +05:30
grandwizard28
f80b899dbe chore(tests/e2e): drop unused README.md and .mcp.json 2026-04-22 00:50:14 +05:30
grandwizard28
d2898efce8 chore(tests/e2e): drop seed.spec.ts 2026-04-22 00:35:35 +05:30
grandwizard28
c999a2f44f refactor(tests/e2e): cache auth storageState in memory, drop .auth/ dir
The fixture was writing each user's storageState to .auth/<user>.json and
then handing Playwright the file path. But Playwright's
browser.newContext({ storageState }) accepts the object form too —
ctx.storageState() without a path arg returns the cookies+origins
inline.

Keeping the cache in memory means no filesystem roundtrip per login, no
.auth/ dir to maintain, no stale JSON persisting across runs, and no
gitignore entry for it. Each worker's Map holds one Promise<StorageState>
per unique user, resolved on first login and reused thereafter.

Drop the .auth/ entry from tests/e2e/.gitignore; delete the (now unused)
on-disk .auth/ dir.
2026-04-22 00:33:43 +05:30
grandwizard28
c3a889d7e1 refactor(tests/e2e): move auth from project-level storageState to per-suite fixture
Replaced auth.setup.ts + globally-mounted storageState with a test-scoped
authedPage fixture in tests/e2e/fixtures/auth.ts. Each suite controls its
own identity via `test.use({ user: ... })`; specs that need to run
unauthenticated just request the stock `page` fixture instead.

fixtures/auth.ts:
- Declares `user` as a test option, defaulting to ADMIN (creds from
  .env.local / .env).
- authedPage resolves to a Page whose context has storageState mounted
  for that user. First request per (user, worker) triggers one login
  and writes a per-user storageState file under .auth/; subsequent
  requests reuse it via a Promise-valued cache.
- Exposes `User` type and `ADMIN` constant so future suites can declare
  additional users (EDITOR, VIEWER) as credentials become available.

playwright.config.ts:
- Drop authFile constant, `setup` project, storageState + dependencies
  on each browser project.

tests/auth.setup.ts:
- Deleted. Login logic now lives inside fixtures/auth.ts's login() helper,
  called on demand by the fixture rather than upfront for the whole run.

Spec migration (6 files):
- Import `test, expect` from ../fixtures/auth (or ../../fixtures/auth)
  instead of @playwright/test.
- Drop `ensureLoggedIn` imports and `await ensureLoggedIn(page)` calls.
- Swap `{ page }` → `{ authedPage: page }` in test and beforeEach
  destructures (local var stays `page` via aliasing so test bodies need
  no further changes).

Cost: N logins per run, where N = unique users × workers (= 1 × 2–4
today, vs the old 1 globally). Tradeoff for explicit per-suite control.

Specs that need unauth later just use `async ({ page }) => ...` — the
fixture isn't invoked, so no login fires.

291 tests still list (previously 292: the old auth.setup.ts counted as
one fake "test"; it's gone now).
2026-04-22 00:31:07 +05:30
grandwizard28
c35ff1d4d5 chore(tests/e2e): drop .cursorrules 2026-04-22 00:13:35 +05:30
grandwizard28
40d429bc89 refactor(tests/e2e): drop SIGNOZ_USER_ROLE env filter and @admin/@editor/@viewer tags
The filter claimed to be role-based but only grep'd by tag — the actual
browser session is always admin (bootstrap creates one admin, auth.setup.ts
saves one storageState, every project uses it). Tagging tests `@viewer`
didn't mean they ran as a viewer; it just meant they'd be in the subset
selected when SIGNOZ_USER_ROLE=Viewer. Superset semantics (admin sees
everything) meant the filter was at best a narrower test selection and
at worst a misleading assertion of role coverage.

Gone:
- getRoleGrepPattern() + grep: line in playwright.config.ts.
- The dedicated setup project's grep override (no filter to override).
- SIGNOZ_USER_ROLE entries in .env.example, README, docs/contributing.
- The "Role-Based Testing" section + all role-tagging guidance and
  example snippets in .cursorrules.
- All `{ tag: '@viewer' | '@editor' | '@admin' }` annotations: ~90
  affected test sites across 5 spec files, in both single-line and
  multi-line forms.

For ad-hoc selection, `yarn test --grep <pattern>` still works on
Playwright's normal grep (test titles/paths).

Real role-based coverage (separate users + storageStates per role) is a
different problem — not pretending this was it.
2026-04-22 00:12:43 +05:30
grandwizard28
8a58db9bda refactor(tests/e2e): one artifacts/ subdir per reporter
Within artifacts/, give each reporter its own named subdir so the layout
tells you what wrote what:

  artifacts/
    html/              # HTML reporter (was artifacts/html-report)
    json/results.json  # JSON reporter (was artifacts/results.json)
    test-results/      # outputDir — per-test traces/screenshots/videos

`yarn report` and the .cursorrules cat example point at the new paths.
2026-04-21 23:40:03 +05:30
grandwizard28
fe735b66b7 refactor(tests/e2e): consolidate Playwright output under artifacts/
All Playwright outputs now land under a single tests/e2e/artifacts/ dir
so CI can archive it in one command (tar / zip / upload-artifact). Each
piece was writing to its own sibling of tests/e2e/ before.

playwright.config.ts:
- outputDir: 'artifacts/test-results' — per-test traces, screenshots,
  videos (was default test-results/).
- HTML reporter → 'artifacts/html-report' (was default
  playwright-report/); open: 'never' so CI doesn't spawn a browser on
  report generation.
- JSON reporter → 'artifacts/results.json' (was
  'test-results/results.json').

package.json: `yarn report` now points playwright show-report at the new
HTML folder.

Ignore updates — replace the two old paths with /artifacts/ in
tests/e2e/.gitignore, tests/e2e/.prettierignore, and tests/.dockerignore
(seeder image build context).

.cursorrules: update the `cat test-results/results.json` example to the
new path so AI-generated snippets reach for the right file.

Delete the empty test-results/ and playwright-report/ dirs that prior
runs left behind.
2026-04-21 23:38:21 +05:30
grandwizard28
887d5c47b2 refactor(tests/e2e): move alerts-downtime.spec.ts into alerts/
The spec lives mostly in the alerts domain (6 of 7 flows), with the
planned-downtime CRUD (Flow 4) and cascade-delete (Flow 5) as
cross-feature collateral. The standalone alerts-downtime/ dir was
compound-named, breaking the one-feature-per-dir pattern every other
dir under tests/ follows, and duplicating the spec's own filename.

Move to tests/alerts/alerts-downtime.spec.ts. Empty alerts-downtime/
dir removed.
2026-04-21 23:32:15 +05:30
grandwizard28
e0778973ac refactor(tests/e2e/alerts-downtime): drop custom network + screenshot capture
The spec wrapped every /api/ response in a bespoke installCapture(), wrote
hand-named JSON files per call (01_step1.1_GET_rules.json, ...), and took
step-by-step screenshots — all going into run-spec-<ts>/ next to the spec
(gitignored).

Playwright already records equivalent data via `trace` (network bodies,
screenshots per step, DOM snapshots, console — viewable via
`playwright show-trace`). The capture infra was duplicating that for the
one-shot 2095 regression audit; no downstream consumer reads the JSON or
PNG artifacts now.

- Remove installCapture, shot, RUN_DIR/NET_DIR/SHOT_DIR, fs/path imports.
- Strip cap.mark()/cap.dumpSince()/shot() calls throughout the 7 flows.
- Collapse the block-scopes that only existed to bound mark variables.
- Drop the "Artifacts" paragraph from the file's top-of-file comment.
- Remove the `tests/alerts-downtime/run-spec-*/` entry from .gitignore.

Spec drops from 885 lines to 736 (≈17% smaller). All 7 flows + their
assertions are unchanged. For debug access, rely on
`trace: 'on-first-retry'` (already set in playwright.config.ts) + `yarn
show-trace`.
2026-04-21 23:25:40 +05:30
grandwizard28
9f2269fee6 refactor(tests/e2e): emit .env.local instead of .signoz-backend.json
The old flow (pytest writes JSON → global.setup.ts loads it → exports
env vars) was doing what dotenv already does. Collapse to the native
pattern:

- bootstrap/setup.py writes tests/e2e/.env.local with the four coords
  (BASE_URL, USERNAME, PASSWORD, SEEDER_URL). File header marks it as
  generated.
- playwright.config.ts loads .env first, then .env.local with
  override=true. User-provided defaults stay in .env; generated values
  win when present.
- Delete tests/e2e/global.setup.ts (36 lines gone) and its globalSetup
  reference in playwright.config.ts.

Subprocess-injected env (run.py shelling out to yarn test) still wins
because dotenv doesn't overwrite already-set process.env keys.

Rename the test-only override env var SIGNOZ_E2E_ENDPOINT_FILE →
SIGNOZ_E2E_ENV_FILE for accuracy. Update .env.example, .gitignore (drop
.signoz-backend.json, keep .env.local with its explanatory comment),
tests/README.md, docs/contributing/tests/e2e.md.
2026-04-21 23:16:41 +05:30
grandwizard28
6481b660ee refactor(tests/e2e): relocate auth helper into fixtures/; expose authedPage
Rename tests/e2e/utils/login.util.ts → tests/e2e/fixtures/auth.ts and
drop the (now-empty) utils/ dir. "Fixtures" is the unit of per-test
shared setup on both the Python and TS sides of this project — naming
them consistently across trees makes the parallel obvious.

fixtures/auth.ts now exports three things:

- `test` — Playwright test extended with an authedPage fixture. New
  specs can request `authedPage` as a param and skip the
  `beforeEach(() => ensureLoggedIn(page))` boilerplate entirely.
- `expect` — re-exported from @playwright/test so callers have one
  import.
- `ensureLoggedIn(page)` — the underlying helper, still exported for
  specs that want per-call control.

Update the 4 specs that imported from utils/login.util to point at the
new path; no behavior change in those specs (they keep calling
ensureLoggedIn in beforeEach). Refactoring them to use authedPage can
happen spec-by-spec later.

Also update the path example in .cursorrules so AI-generated snippets
reach for the new import path.
2026-04-21 23:04:32 +05:30
grandwizard28
de0396d5bd refactor(tests/e2e): drop pre-seed fixtures; each spec owns its data
The seeder (tests/seeder/) was built so specs can POST telemetry
per-test. Global pre-seeding via tests/e2e/conftest.py (seed_dashboards,
seed_alert_rules, seed_e2e_telemetry) is the exact anti-pattern the
seeder obsoletes — shared state across specs, order-dependent runs, no
reset between tests.

- Delete tests/e2e/conftest.py (3 fixtures, all pre-seed).
- Delete tests/e2e/testdata/dashboards/apm-metrics.json — its only
  consumer was seed_dashboards. tests/e2e/testdata/ now empty and gone.
- Drop seed_dashboards, seed_alert_rules, seed_e2e_telemetry params
  from bootstrap/setup.py::test_setup and bootstrap/run.py::test_e2e.
  test_teardown never depended on them.
- Refresh the module docstrings on both bootstrap tests to reflect the
  new model (backend + seeder up; specs seed themselves).
- Update tests/README.md and docs/contributing/tests/e2e.md: remove the
  testdata/ + conftest.py references, document the per-spec seeding
  rule (telemetry via seeder endpoints, dashboards/alerts via SigNoz
  REST API from the spec).

Known breakage: tests/e2e/tests/dashboards/dashboards-list.spec.ts
expects at least one dashboard to exist. With seed_dashboards gone, it
will fail until that spec is updated to create its own dashboard via
the SigNoz API in test.beforeAll. Followup.
2026-04-21 22:40:41 +05:30
grandwizard28
49ef953b15 fix(tests/e2e): correct endpoint-file path in setup.py after src/ flatten
Same class of stale-path bug as the run.py fix: after the e2e/src/
flatten, setup.py sits one level closer to the e2e root. parents[2] now
lands at tests/ instead of tests/e2e/, so .signoz-backend.json would be
written to tests/.signoz-backend.json and the Playwright global.setup.ts
(which expects tests/e2e/.signoz-backend.json) wouldn't find it.

parents[1] is correct.
2026-04-21 22:37:57 +05:30
grandwizard28
24513f305d fix(tests/e2e): correct e2e_dir path after src/ flatten
After phase 2 (flatten tests/e2e/src/ into tests/e2e/), the run.py file
sits one level closer to the e2e root. parents[2] now resolves to tests/
instead of tests/e2e/, so yarn test would subprocess from the wrong cwd.

parents[1] is the correct index now.
2026-04-21 22:37:01 +05:30
grandwizard28
34d36ecd2c refactor(tests/seeder): use fixtures.logger.setup_logger
Drop the one-off logging.basicConfig + logging.getLogger("seeder") in
favor of the shared setup_logger helper that every fixtures/*.py already
uses. Keeps log format consistent across pytest runs and the seeder
container.

fixtures.logger ships into the image via the existing COPY fixtures step
in Dockerfile.seeder — no build change needed.
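
Roughly the swap, assuming setup_logger takes the logger name:

  # Before: one-off config local to the seeder
  #   logging.basicConfig(...)
  #   logger = logging.getLogger("seeder")

  # After: the shared helper every fixtures/*.py already uses
  from fixtures.logger import setup_logger

  logger = setup_logger("seeder")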
2026-04-21 22:30:41 +05:30
grandwizard28
0e13757719 chore(tests/e2e): drop examples/example-test-plan.md
Init-agents boilerplate. Fresh planner agents don't need a checked-in
template; they can write to the .gitignore'd specs/ scratch dir.

tests/integration/.qodo/ was also removed (untracked, empty; .qodo is
already in the root .gitignore).
2026-04-21 22:29:26 +05:30
grandwizard28
8633b3d358 docs(contributing/tests): move e2e/integration guides out of test dirs
Pull the e2e contributor guide out of tests/e2e/CLAUDE.md (which read
like a full agent-workflow reference doc) and into
docs/contributing/tests/e2e.md alongside the existing development / go
guides.

- Delete tests/e2e/CLAUDE.md; its content (layout, commands, role tags,
  locator priority, Playwright agent workflow) lives in the new e2e.md
  with references to the now-.gitignore'd specs/ dir removed.
- Add docs/contributing/tests/integration.md — short guide covering
  layout, runner commands, filename conventions, and the flow for
  adding a new suite (there was no contributor doc for this before).
- Trim tests/e2e/README.md to quick-start + commands; link out to the
  full guide. Readers who just want to run tests get the 5 commands
  they need; anything deeper is one hop away.
2026-04-21 22:27:43 +05:30
grandwizard28
c4bde774e1 refactor(tests/e2e): drop specs/ + strip // spec: back-pointers
specs/ held markdown test plans that mirrored tests/ 1:1 as pre-code
scratch. Once a test exists, the plan is stale the moment the test
diverges — they're AI-planner output, not source of truth. Keep the
workflow alive by .gitignore-ing specs/ (the planner agent can still
write locally) but stop shipping stale plans in the repo.

Strip the `// spec: specs/...` and `// seed: tests/seed.spec.ts` header
comments from 5 .spec.ts files. The spec pointer is dead; the seed
pointer was convention-only — Playwright collects regardless.
2026-04-21 22:22:51 +05:30
grandwizard28
acff718113 refactor(tests/e2e): flatten src/ into bootstrap/
Drop the e2e/src/ wrapper — the only Python content under it was
bootstrap/, which is now a direct child of e2e/. Keeps integration and
e2e symmetric (both have bootstrap/, tests/, testdata/ as peers).

Also delete bootstrap/__init__.py on both integration and e2e sides.
With --import-mode=importlib, pytest walks up from each .py file to find
the highest __init__.py-containing dir and uses that as the package root.
Without integration/__init__.py or e2e/__init__.py above bootstrap/, both
setup.py files resolved to the same dotted name `bootstrap.setup`, causing
a sys.modules collision that silently dropped test_telemetry_databases_exist
from integration's bootstrap. With no __init__.py anywhere, pytest treats
each setup.py as a standalone module via spec_from_file_location and both
are collected cleanly.

Updates tests/README.md, tests/e2e/README.md, and tests/e2e/CLAUDE.md path
references from e2e/src/bootstrap/ to e2e/bootstrap/.
2026-04-21 22:18:36 +05:30
grandwizard28
8d4122df22 refactor(tests/integration): flatten src/ into bootstrap/ + tests/
Drop the redundant src/ layer in the integration tree. 'src' carries no
information — the directory IS integration test source. After flatten:

  tests/integration/
    bootstrap/setup.py        was src/bootstrap/setup.py
    tests/<suite>/*.py        was src/<suite>/*.py (16 suites)
    testdata/

Updates:
- Makefile: py-test-setup/py-test-teardown/py-test target paths.
- tests/README.md: layout diagram + command examples.
- tests/pyproject.toml: python_files glob now matches basenames
  explicitly — "[0-9][0-9]_*.py" for NN-prefixed suite files plus
  "setup.py" and "run.py" for bootstrap entrypoints. The old "*/src/.."
  glob stopped matching anything here and would have caused pytest to
  try collecting seeder/server.py as a test.
2026-04-21 21:03:40 +05:30
grandwizard28
138b0cd606 refactor(tests/seeder): install deps via uv from pyproject, drop requirements.txt
The seeder's requirements.txt duplicated 7 of 10 deps from pyproject.toml
with overlapping version pins — a standing drift risk. The comment on top
of the file admitted the real problem: the seeder image already ships
pytest + testcontainers + sqlalchemy because importing fixtures.traces
walks fixtures/__init__.py and fixtures/types.py. "Don't ship test infra"
was already violated.

- Add fastapi, uvicorn[standard], and py to pyproject.toml dependencies
  (the three seeder-only deps that were not yet in pyproject; `py` was a
  latent gap since fixtures/types.py uses py.path.local but pytest only
  pulls it in transitively).
- Switch the Dockerfile to `uv sync --frozen --no-install-project --no-dev`
  so the container env matches local dev exactly (uv.lock is the single
  source of truth for versions).
- Move tests/seeder/Dockerfile → tests/Dockerfile.seeder so it lives
  alongside the pyproject at the root of the build context.
- Delete tests/seeder/requirements.txt.

The seeder image grows by ~40-50MB (selenium, psycopg2, wiremock now come
along from main deps); accepted as a cost of single source of truth since
the seeder is dev-only infra, not a shipped artifact.
2026-04-21 19:55:25 +05:30
grandwizard28
c17e54ad01 refactor(fixtures/browser): rename from driver to match peer primitives
fixtures.driver was the Selenium WebDriver fixture — rename the module to
fixtures.browser so it sits next to fixtures.http as a named primitive.
The fixture name inside (driver) stays — that's the Selenium-canonical
term and tests reference it directly.

conftest.py pytest_plugins entry points at the new module. A deprecation
shim at fixtures.driver keeps any external caller working until the
follow-up sweep.
2026-04-21 19:44:49 +05:30
grandwizard28
51581160eb refactor(fixtures/time,fs): split utils by responsibility
fixtures.utils only held two time parsers (parse_timestamp, parse_duration)
and one path helper (get_testdata_file_path) — a "utils" grab bag.

- Time parsers move to fixtures.time (utils.py → time.py via git rename).
- get_testdata_file_path moves into fixtures.fs where other filesystem
  helpers live.
- Internal callers (alerts, logs, metrics, traces) update to the new paths.

Replace fixtures.utils with a deprecation shim that re-exports all three
functions so integration/ import sites keep working until the follow-up
sweep.
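
The shim pattern, sketched (whether the shims warn or silently re-export
is not stated; the DeprecationWarning here is an assumption):

  # fixtures/utils.py, now a deprecation shim
  import warnings

  from fixtures.fs import get_testdata_file_path
  from fixtures.time import parse_duration, parse_timestamp

  warnings.warn(
      "fixtures.utils is deprecated; import from fixtures.time / fixtures.fs",
      DeprecationWarning,
      stacklevel=2,
  )

  __all__ = ["parse_timestamp", "parse_duration", "get_testdata_file_path"]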
2026-04-21 19:42:36 +05:30
grandwizard28
7959e9eadd refactor(fixtures/reuse): rename from dev to describe what the module is
The module wraps pytest-cache resource reuse/teardown for container
fixtures; "dev" conveyed nothing about its role. Rename to fixtures.reuse
and update the 12 internal callers that imported `from fixtures import
dev, types` to use `reuse` instead.

Replace fixtures.dev with a deprecation shim so any external caller keeps
working until the follow-up sweep.
2026-04-21 19:34:39 +05:30
grandwizard28
a6faab083f refactor(fixtures/idp): rename idputils to idp now that keycloak owns the container
With the Keycloak container provider at fixtures.keycloak, the fixtures.idp
name is free for what idputils always was — API/browser helpers for OIDC
and SAML admin flows against the IdP container.

- fixtures.idputils → fixtures.idp (git rename).
- conftest.py pytest_plugins swaps fixtures.idputils for fixtures.idp so
  the create_saml_client / create_oidc_client fixtures register under the
  canonical path.

Replace fixtures.idputils with a deprecation shim re-exporting from
fixtures.idp so integration/ import sites (callbackauthn) keep working
until they are swept in a follow-up.
2026-04-21 19:26:34 +05:30
grandwizard28
d43c2bb4d7 refactor(fixtures/cloudintegrations): merge cloudintegrationsutils helpers
Pull the pure-helper functions from cloudintegrationsutils.py
(deprecated_simulate_agent_checkin, setup_create_account_mocks,
simulate_agent_checkin) into cloudintegrations.py next to the fixtures
they complement. Fixtures stay on top; helpers go below.

Replace cloudintegrationsutils.py with a deprecation shim that re-exports
from fixtures.cloudintegrations so integration/ import sites keep working
until they are swept in a follow-up.
2026-04-21 19:07:02 +05:30
grandwizard28
68c8504ac7 refactor(fixtures/alerts): merge alertutils helpers into alerts
Pull the pure-helper functions from alertutils.py
(collect_webhook_firing_alerts, _verify_alerts_labels,
verify_webhook_alert_expectation, update_rule_channel_name) into alerts.py
next to the fixtures they complement. Fixtures stay on top; helpers go
below.

Replace alertutils.py with a deprecation shim that re-exports from
fixtures.alerts so integration/ import sites keep working until they are
swept in a follow-up.
2026-04-21 19:05:39 +05:30
grandwizard28
523dcd6219 refactor(fixtures/auth): merge authutils helpers into auth
Pull the pure-helper functions from authutils.py (create_active_user,
find_user_by_email, find_user_with_roles_by_email, assert_user_has_role,
change_user_role) into auth.py next to the fixtures they complement.
Fixtures remain on top; helpers go below. Drop the module docstring.

Replace authutils.py with a deprecation shim that re-exports from
fixtures.auth so integration/ import sites (9 files) keep working until
they are swept in a follow-up. Suppress the wildcard-import warnings in
the shim only.
2026-04-21 19:02:32 +05:30
grandwizard28
5ebe95e3d6 fix(fixtures/gatewayutils): silence wildcard-import in deprecation shim
The shim intentionally re-exports via `from fixtures.gateway import *`;
pylint flags the wildcard and every unused-wildcard symbol. Suppress both
in the shim only — the live module has no wildcard.
2026-04-21 19:02:04 +05:30
grandwizard28
527963b7f4 refactor(fixtures/gateway): drop -utils suffix
The module only held helper functions (no fixtures). Rename to match the
domain and leave a shim at the old path so integration/ import sites keep
working until they are swept in a follow-up.
2026-04-21 18:58:50 +05:30
grandwizard28
afcc02882d refactor(fixtures/keycloak): rename from idp.py to name the concrete tech
The container provider at fixtures/idp.py brought up a Keycloak image. Name
it for what it is so we can use fixtures/idp.py later for API-side IdP
helpers (OIDC/SAML admin flows) without an idp-vs-idputils naming collision.

- fixtures/idp.py → fixtures/keycloak.py (git rename).
- fixtures.idputils updates its one internal import to fixtures.keycloak.
- conftest.py pytest_plugins entry points at the new module.

No caller outside fixtures/ imports fixtures.idp directly, so no shim is
needed. The "idp" fixture name (how tests reference it) is unchanged.
2026-04-21 18:56:09 +05:30
grandwizard28
f4748f7088 refactor(tests/seeder): extract from fixtures/ into top-level package
Move the HTTP seeder (Dockerfile, requirements.txt, server.py) out of
tests/fixtures/seeder/ and into its own tests/seeder/ top-level package.
The pytest fixture that builds and runs the image moves to
tests/fixtures/seeder.py so it sits next to the other container fixtures.

Rationale: the seeder is a standalone containerized Python service, not a
pytest fixture. It ships a Dockerfile, its own requirements.txt, and a
server.py entrypoint — none of which belong under a package whose purpose
is shared pytest code.

Image-side changes:
- Dockerfile now copies seeder/ alongside fixtures/ and launches
  seeder.server:app instead of fixtures.seeder.server:app.
- Build context stays tests/ (unchanged), so fixtures.* imports inside
  server.py continue to resolve.

Fixture-side changes:
- _TESTS_ROOT computation drops one parent (parents[1] now that the file
  is at tests/fixtures/seeder.py, not tests/fixtures/seeder/__init__.py).
- The dockerfile= path passed to docker-py becomes seeder/Dockerfile.

No behavior change; every consumer still imports the seeder fixture as
before and gets the same container.
2026-04-21 18:50:32 +05:30
grandwizard28
36d766d3d9 refactor(fixtures/seeder): align status codes with HTTP semantics
- POST /telemetry/{traces,logs,metrics}: return 201 Created (kept the
  {inserted: N} body so callers can verify the count landed).
- DELETE /telemetry/{traces,logs,metrics}: return 204 No Content with
  an empty body.
2026-04-21 17:49:59 +05:30
grandwizard28
96188a38b4 feat(fixtures/seeder): add logs and metrics endpoints
Extend the seeder with POST/DELETE endpoints for logs and metrics,
following the same shape as the existing traces endpoints:

- POST /telemetry/logs accepts a JSON list matching Logs.from_dict;
  tags each row's resources with seeder=true.
- POST /telemetry/metrics accepts a JSON list matching Metrics.from_dict;
  tags resource_attrs with seeder=true (Metrics.from_dict unpacks
  resource_attrs rather than a resources dict).
- DELETE /telemetry/logs, DELETE /telemetry/metrics truncate via the
  shared truncate_*_tables helpers.

Requirements gain svix-ksuid because fixtures/logs.py imports KsuidMs
for log id generation.

Verified end-to-end against the warm backend: POST inserted=1 on each
signal, DELETE truncated=true on each.
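
A minimal sketch of the logs endpoint; the import paths and the client
handle on app.state are assumptions, while Logs.from_dict, the seeder=true
tag, and the {inserted: N} body are from this series:

  from fastapi import FastAPI, Request

  from fixtures.logs import Logs, insert_logs_to_clickhouse

  app = FastAPI()

  @app.post("/telemetry/logs")
  async def post_logs(request: Request) -> dict:
      rows = await request.json()
      for row in rows:
          # Tag each row so seeded data is identifiable (and truncatable).
          row.setdefault("resources", {})["seeder"] = "true"
      logs = [Logs.from_dict(row) for row in rows]
      insert_logs_to_clickhouse(app.state.clickhouse, logs)
      return {"inserted": len(logs)}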
2026-04-21 17:00:02 +05:30
grandwizard28
8cfa3bbe94 refactor(fixtures/logs,metrics): extract insert + truncate helpers
Mirror the traces refactor: pull the ClickHouse insert path out of the
insert_logs / insert_metrics pytest fixtures into plain module-level
functions (insert_logs_to_clickhouse, insert_metrics_to_clickhouse) and
move the per-table TRUNCATE loops into truncate_logs_tables /
truncate_metrics_tables. The fixtures become thin wrappers — zero
behavioural change.

Sets up the seeder container to expose POST/DELETE endpoints for logs
and metrics using the exact same code paths as the pytest fixtures.
2026-04-21 16:59:13 +05:30
grandwizard28
0d97f543df fix(alerts-downtime): capture load-time GETs before navigation
Flow 1 registered cap.mark() AFTER page.goto() and then called
page.waitForResponse(/api/v2/rules) — but against a fast local backend
the GET /api/v2/rules response arrived during page.goto, before the
waiter could register, and the test timed out at 30s.

installCapture's page.on('response') listener runs from before the
navigation, so moving mark() above page.goto() and relying on
dumpSince's 500ms drain is enough. No lost precision.

One site only; the same pattern exists in later flows (via per-action
waitForResponse) and may surface similar races — those are left for a
follow-up once the backend-side 2095 migration lands on main (current
frontend still calls PATCH /api/v1/rules/:id which the spec's assertion
doesn't match anyway).
2026-04-21 00:48:14 +05:30
grandwizard28
be7099b2b4 feat(tests/e2e): surface seeder_url to Playwright via globalSetup
- bootstrap/setup.py: test_setup now depends on the seeder fixture and
  writes seeder_url into .signoz-backend.json alongside base_url.
- bootstrap/run.py: test_e2e exports SIGNOZ_E2E_SEEDER_URL to the
  subprocessed yarn test so Playwright specs can reach the seeder
  directly in the one-command path.
- global.setup.ts: if .signoz-backend.json carries seeder_url, populate
  process.env.SIGNOZ_E2E_SEEDER_URL. Remains optional — staging mode
  leaves it unset.

Playwright specs that want per-test telemetry can:
  await fetch(process.env.SIGNOZ_E2E_SEEDER_URL + '/telemetry/traces', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify([...])
  });
and await a truncate via DELETE on teardown.
2026-04-21 00:47:58 +05:30
grandwizard28
ab6e8291fe feat(fixtures/seeder): HTTP seeder container for fine-grained telemetry seeding
Adds a sibling container alongside signoz/clickhouse/postgres that exposes
HTTP endpoints for direct-ClickHouse telemetry seeding, so Playwright
tests can shape per-test data without going through OTel or the SigNoz
ingestion path.

tests/fixtures/seeder/:
- Dockerfile: python:3.13-slim + the shared fixtures/ tree so the
  container can import fixtures.traces and reuse the exact insert path
  used by pytest.
- server.py: FastAPI app with GET /healthz, POST /telemetry/traces
  (accepts a JSON list matching Traces.from_dict input; auto-tags each
  inserted row with resource seeder=true), DELETE /telemetry/traces
  (truncates all traces tables).
- requirements.txt: fastapi, uvicorn, clickhouse-connect, numpy plus
  sqlalchemy/pytest/testcontainers because fixtures/{__init__,types,
  traces}.py import them at module load.

tests/fixtures/seeder/__init__.py: pytest fixture (`seeder`, package-
scoped) that builds the image via docker-py (testcontainers DockerImage
had multi-segment dockerfile issues), starts the container on the
shared network wired to ClickHouse via env vars, and waits for
/healthz. Cache key + restore follow the dev.wrap pattern other
fixtures use for --reuse.

tests/.dockerignore: exclude .venv, caches, e2e node_modules, and test
outputs so the build context is small and deterministic.

tests/conftest.py: register fixtures.seeder as a pytest plugin.

Currently traces-only — logs + metrics follow the same pattern.
2026-04-21 00:47:43 +05:30
grandwizard28
0839c532bc refactor(fixtures/traces): extract insert + truncate helpers
Pull the ClickHouse insert path out of the insert_traces pytest fixture
into a plain module-level function insert_traces_to_clickhouse(conn,
traces), and move the per-table TRUNCATE loop into truncate_traces_tables
(conn, cluster). The fixture becomes a thin wrapper over both — zero
behavioural change.

Lets the HTTP seeder container (tests/fixtures/seeder/) reuse the exact
same insert + truncate code the pytest fixture uses, so the two stay in
sync as the trace schema evolves.
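
The shape of the extraction, sketched (the clickhouse fixture's attribute
names are assumptions; the function names and signatures are from this
commit):

  import pytest

  def insert_traces_to_clickhouse(conn, traces) -> None:
      # Plain module-level insert path, importable by the seeder's HTTP
      # handlers as well as the fixture below. Body elided.
      ...

  def truncate_traces_tables(conn, cluster) -> None:
      # Per-table TRUNCATE loop, shared the same way. Body elided.
      ...

  @pytest.fixture
  def insert_traces(clickhouse):
      def _insert(traces):
          insert_traces_to_clickhouse(clickhouse.conn, traces)
      yield _insert
      truncate_traces_tables(clickhouse.conn, clickhouse.cluster)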
2026-04-21 00:47:23 +05:30
grandwizard28
5ef206a666 feat(tests/e2e): alerts-downtime regression suite (platform-pod/issues/2095)
Import the 34-step regression suite originally developed on
platform-pod/issues/2095-frontend. Targets the alerts and planned-downtime
frontend flows after their migration to generated OpenAPI clients and
generated react-query hooks.

- specs/alerts-downtime/: SUITE.md (the stable spec), README.md (scope +
  open observations from the original runs), results-schema.md (legacy
  per-run artifact shape, retained for context).
- tests/alerts-downtime/alerts-downtime.spec.ts: 881-line Playwright spec
  covering 6 flows — alert CRUD/toggle, alert detail 404, planned
  downtime CRUD, notification channel routing, anomaly alerts.

Integration with the shared suite:
- Uses baseURL + storageState from tests/e2e/playwright.config.ts (no
  separate config). page.goto calls use relative paths; SIGNOZ_E2E_*
  env vars from the pytest bootstrap drive auth.
- test.describe.configure({ mode: 'serial' }) at the top of the describe:
  the flows mutate shared tenant state, so parallel runs cause cross-
  flow interference (documented in the original 2095 config).
- Per-run artifacts (network captures + screenshots) land in
  tests/e2e/tests/alerts-downtime/run-spec-<ts>/ by default — gitignored.

Historical per-run artifacts (~7.5MB of screenshots across run-1 through
run-7) are not imported; they lived at e2e/2095/run-*/ on the original
branch and remain there if needed.
2026-04-20 23:34:12 +05:30
grandwizard28
fce92115a9 fix(tests/fixtures/signoz.py): anchor Docker build context to repo root
Previously used path="../../" which resolved to the repo root only when
pytest's cwd was tests/integration/. After hoisting the pytest project
to tests/, that same relative path pointed one level above the repo
root and the build failed with:

  Cannot locate specified Dockerfile: cmd/enterprise/Dockerfile.with-web.integration

Anchor the build context to an absolute path computed from __file__ so
the fixture works regardless of pytest cwd.
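
The fix, sketched; the parents index assumes tests/fixtures/signoz.py
sits two levels below the repo root:

  from pathlib import Path

  # Absolute build context, independent of pytest's cwd:
  # parents[0] = tests/fixtures, parents[1] = tests, parents[2] = repo root.
  _REPO_ROOT = Path(__file__).resolve().parents[2]
  _DOCKERFILE = "cmd/enterprise/Dockerfile.with-web.integration"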
2026-04-20 21:45:27 +05:30
grandwizard28
9743002edf docs(tests): describe pytest-master workflow and shared fixture layout
- tests/README.md (new): top-level map of the shared pytest project,
  fixture-ownership rule (shared vs per-tree), and common commands.
- tests/e2e/README.md: lead with the one-command pytest run and the
  warm-backend dev loop; keep the staging fallback as option 2.
- tests/e2e/CLAUDE.md: updated commands so agent contexts reflect the
  pytest-driven lifecycle.
- tests/e2e/.env.example: drop unused SIGNOZ_E2E_ENV_TYPE; note the file
  is only needed for staging mode.
2026-04-20 21:06:32 +05:30
grandwizard28
0efde7b5ce feat(tests/e2e): pytest-driven backend bring-up, seeding, and playwright runner
Wire the Playwright suite into the shared pytest fixture graph so the
backend + its seeded state are provisioned locally instead of pointing
at remote staging.

Python side (owns lifecycle):
- tests/fixtures/dashboards.py — generic create/list/upsert_dashboard
  helpers (shared infra; testdata stays per-tree).
- tests/e2e/conftest.py — e2e-scoped pytest fixtures: seed_dashboards
  (idempotent upsert from tests/e2e/testdata/dashboards/*.json),
  seed_alert_rules (from tests/e2e/testdata/alerts/*.json, via existing
  create_alert_rule), seed_e2e_telemetry (fresh traces/logs across a
  few synthetic services so /home and Services pages have data).
- tests/e2e/src/bootstrap/setup.py — test_setup depends on the fixture
  graph and persists backend coordinates to tests/e2e/.signoz-backend.json;
  test_teardown is the --teardown target.
- tests/e2e/src/bootstrap/run.py — test_e2e: one-command entrypoint that
  brings up the backend + seeds, then subprocesses yarn test and asserts
  Playwright exits 0.
- tests/conftest.py — register fixtures.dashboards plugin.

Playwright side (just reads):
- tests/e2e/global.setup.ts — loads .signoz-backend.json and injects
  SIGNOZ_E2E_BASE_URL/USERNAME/PASSWORD. No-op when env is already
  populated (staging mode, or pytest-driven runs where env is pre-set).
- playwright.config.ts registers globalSetup.
- package.json gains test:staging; existing scripts unchanged.

Testdata layout: tests/e2e/testdata/{dashboards,alerts,channels}/*.json
— per-tree (integration has its own tests/integration/testdata/).
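
A minimal sketch of test_setup's shape; the fixture attribute names are
assumptions, while the fixtures themselves and the output file are from
this commit:

  import json
  from pathlib import Path

  def test_setup(signoz, create_user_admin):
      # Depending on the fixture graph brings the backend up; persist its
      # coordinates for the Playwright side to read.
      coords = {
          "base_url": signoz.base_url,
          "username": create_user_admin.email,
          "password": create_user_admin.password,
      }
      # setup.py lives at tests/e2e/src/bootstrap/, so parents[2] is tests/e2e/.
      out = Path(__file__).resolve().parents[2] / ".signoz-backend.json"
      out.write_text(json.dumps(coords))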
2026-04-20 21:03:52 +05:30
grandwizard28
8bdaecbe25 feat(tests/e2e): import Playwright suite from signoz-e2e
Relocate the standalone signoz-e2e repository into tests/e2e/ as a
sibling of tests/integration/. The suite still points at remote
staging by default; subsequent commits wire it to the shared pytest
fixture graph so the backend can be provisioned locally.

Excluded from the import: .git, .github (CI migration deferred),
.auth, node_modules, test-results, playwright-report.
2026-04-20 20:40:02 +05:30
grandwizard28
deb90abd9c refactor(tests): hoist pytest project to tests/ root for shared fixtures
Lift pyproject.toml, uv.lock, conftest.py, and fixtures/ up from
tests/integration/ so the pytest project becomes shared infrastructure
rather than integration's private property. A sibling tests/e2e/ can
reuse the same fixture graph (containers, auth, seeding) without
duplicating plugins.

Also:
- Merge tests/integration/src/querier/util.py into tests/fixtures/querier.py
  (response assertions and corrupt-metadata generators belong with the
  other querier helpers).
- Use --import-mode=importlib + pythonpath=["."] in pyproject so
  same-basename tests across src/*/ do not collide at the now-wider
  rootdir.
- Broaden python_files to "*/src/**/**.py" so future test trees under
  tests/e2e/src/ get discovered.
- Update Makefile py-* targets and integrationci.yaml to cd into tests/
  and reference integration/src/... paths.
2026-04-20 20:39:16 +05:30
Pandey
52992c0e80 chore(switch): switch for some time to @therealpandey (#11017) 2026-04-20 13:29:03 +00:00
Yunus M
5d6ada7a5b fix: semantic token issues in aws refactor (#11014)
* fix: semantic token issues in aws refactor

* fix: semantic token issues in aws refactor

* chore: remove unnecessary light mode styles
2026-04-20 12:34:25 +00:00
Nikhil Soni
dbe55d4ae0 chore: remove setting of trace cache since it was not getting used (#10986)
* chore: remove caching spans since v2 was not using it

So we can directly introduce redis instead of relying
on in-memory cache

* chore: remove unnecessary logs
2026-04-20 11:11:12 +00:00
Yunus M
837da705b3 refactor: aws integrations (#10937)
* chore: clean up integrations code for better code organisation and extensibility

* feat: render integration in new route

* refactor: reorganize AWS integration components and update imports

- Moved AWS-related components to a new directory structure for better organization.
- Updated import paths to reflect the new structure.
- Removed unused components and styles related to the previous integration setup.
- Adjusted constants and integration logic to ensure compatibility with the new structure.

* feat: enhance IntegrationDetailHeader with loading state and styles

* feat: improve light mode styles

* feat: improve light mode styles

* feat: add new Azure integration components and update existing ones

* refactor: update integration types and improve imports

* refactor: update integration types and improve imports

* feat: integrate azure account connect / edit APIs

* feat: integrate service update api

* fix: sorting logic for enabled and not enabled services

* fix: aws integration - minor ui improvements

* feat: add search functionality and no results UI for integrations

* feat: integrate disconnect integration api

* fix: update integrations util path to fix test case

* chore: move cursor rules to folder to follow the current format

* chore: remove cursor rules from gitignore

* chore: skip request integration service test in aws

* fix: show scrollbar in drawer for overflowing content

* fix: selected service getting reset on config update

* feat: use semantic tokens

* feat: update aws integrations as per new design

* refactor: enhance AWS service details and list UI with loading states and improved layout

* refactor: remove unused AWS service components and update connection status handling

* feat: add S3BucketsSelector component and integrate it into ServiceDetails

* feat: implement ServiceDetails for S3 Sync with comprehensive tests and mock data

* feat: maintain width of save - discard buttons

* feat: add react-hook-form for form handling in ServiceDetails and enhance S3BucketsSelector styles

* chore: downgrade react-hook-form to version 7.40.0 in package.json and update yarn.lock

* feat: enhance AzureAccountForm with react-hook-form integration and improve styling for form

* feat: refactor S3 Sync service tests to remove unnecessary act calls and add ResizeObserver mock

* chore: add @uiw/codemirror-theme-dracula theme

* fix: use copyToClipboard instead of navigator clipboard

* refactor: update cloud integration API types

* refactor: simplify service selection logic and update dashboard URL path

* refactor: remove Azure integrations files

* feat: add providerAccountId to AWS cloud account mapping and update related components

* feat: enhance AWS services list with empty state UI and improve connection handling

* fix: use new account invalidation method and correct region selection logic

* refactor: update mock data and API response structures for AWS integration tests

* fix: do not call connection_status for aws

* fix: clear s3 buckets if log collection is false

* refactor: improve AWS cloud account mapping and clean up unused code in account settings modal

* refactor: remove unused account services and status hooks

* refactor: remove AWS API integration and related hooks

* refactor: remove unused api and components

* refactor: remove duplicate files

* refactor: remove unused codemirror theme

* refactor: remove unused Azure account configuration interfaces

* feat: update image imports in ServicesList and IntegrationsList components

* refactor: update image imports to use URL constructor and unify toast imports from @signozhq/ui

* refactor: update integration icons to use imported assets for AWS and Azure logos

* fix: use semantic tokens

* fix: use semantic tokens

* fix: format style files

* refactor: remove unused SVG and test files, update styles and structure in Integrations components
2026-04-20 11:09:13 +00:00
swapnil-signoz
a9b458f1f6 refactor: moving types to cloud provider specific namespace/pkg (#10976)
* refactor: moving types to cloud provider specific namespace/pkg

* refactor: separating cloud provider types

* refactor: using upper case key for AWS
2026-04-20 10:42:13 +00:00
Naman Verma
ac26299c3d docs: perses schema for dashboards (#10609)
* docs: perses schema for dashboards

* chore: no need for Signal type in commons, only used once

* chore: no need for PageSize type in commons, only used once

* chore: rm comment

* chore: remove stub for time series chart

* chore: remove manually written manifest and package

* chore: remove validate file

* chore: no config folder

* chore: no config folder

* chore: no commons (for now)

* feat: validation script

* fix: remove fields from variable specs that are there in ListVariable

* chore: test file with way more examples

* chore: test file with way more examples

* chore: checkpoint for half correct setup

* chore: rearrange specs in package.json

* chore: py script not needed

* chore: rename

* chore: folders in schemas for arranging

* chore: folders in schemas for arranging

* fix: proper composite query schema

* feat: custom time series schema

* chore: comment explaining when to use composite query and when not

* feat: promql example

* chore: remove upstream import

* fix: promql fix

* docs: time series panel schema without upstream ref

* chore: object for visualization section

* docs: bar chart panel schema without upstream ref

* docs: number panel schema without upstream ref

* docs: number panel schema without upstream ref

* docs: pie chart panel schema without upstream ref

* docs: table chart panel schema without upstream ref

* docs: histogram chart panel schema without upstream ref

* docs: list panel schema without upstream ref

* chore: a more complex example

* chore: examples for panel types

* chore: remaining fields file

* fix: no more online validation

* chore: replace yAxisUnit by unit

* chore: no need for threshold prefix inside threshold obj

* chore: remove unimplemented join query schema

* fix: no nesting in context links

* fix: less verbose field names in dynamic var

* chore: actually name every panel as a panel

* chore: common package for panels' repeated definitions

* chore: common package for queries' repeated definitions

* chore: common package for variables' repeated definitions

* fix: functions in formula

* fix: only allow one of metric or expr aggregation in builder query

* fix: datasource in perses.json

* fix: promql step duration schema

* fix: proper type for selectFields

* chore: single version for all schemas

* fix: normalise enum defs

* chore: change attr name to name

* chore: common threshold type

* chore: doc for how to add a panel spec

* feat: textbox variable

* feat: go struct based schema for dashboardv2 with validations and some tests

* fix: go mod fix

* chore: perses folder not needed anymore

* chore: use perses updated/createdat

* fix: builder query validation (might need to revisit, 3 types seems bad)

* chore: go lint fixes

* chore: define constants for enum values

* chore: nil factory case not needed

* chore: nil factory case not needed

* chore: slight rearrange for builder spec readability

* feat: add TimeSeriesChartAppearance

* chore: no omit empty

* chore: span gaps in schema

* chore: context link not needed in plugins

* chore: remove format from threshold with label, rearrange structs

* test: fix unit tests

* chore: refer to common struct

* feat: query type and panel type matching

* test: unit tests improvement first pass

* test: unit tests improvement second pass

* test: unit tests improvement third pass

* test: unit tests improvement fourth pass

* test: unit test for dashboard with sections

* test: unit test for dashboard with sections

* fix: add missing dashboard metadata fields

* chore: go lint fixes

* chore: go lint fixes

* chore: changes for create v2 api

* chore: more info in StorableDashboardDataV2

* chore: diff check in update method

* chore: add required true tag to required fields

* feat: update metadata methods

* chore: go mod tidy

* chore: put id in metadata.name, authtypes for v2

* revert: only the schema for now in this PR

* chore: comment for why v1.DashboardSpec is chosen

* chore: change source to signal in DynamicVariableSpec

* fix: string values for precision option

* feat: literal options for comparison operator

* fix: missing required tag in threshold fields

* chore: use valuer.string for plugin kind enums

* chore: use only TelemetryFieldKey in ListPanelSpec

* chore: simplify variable plugin validation

* fix: do not allow nil panels

* fix: do not allow nil plugin spec

* fix: signal should be an enum not a string

* chore: rearrange enums to separate those with default values

* test: unit tests for invalid enum values

* fix: all enums should have a default value

* refactor: extract UnmarshalBuilderQueryBySignal to deduplicate signal dispatch

* refactor: proper struct for span gaps

* chore: back to normal strings for kind enums

* chore: ticks in err messages

* chore: ticks in err messages

* chore: remove unused struct

* chore: snake case for non-kind enum values

* chore: proper error wrapping

* chore: accept int values in PrecisionOption as fallback

* fix: actually update the plugin from map to custom struct

* feat: disallow unknown fields in plugins

* chore: make enums valuer.string

* chore: proper enum types in constants

* chore: rename value to avoid overriding valuer.string method

* test: db cycle test

* fix: lint fix in some other file

* test: remove collapse info from sections

* test: use testify package

---------

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2026-04-20 09:29:03 +00:00
Nityananda Gohain
96d816bd1a feat: improve perf for queries with empty resource filter (#10861)
* feat: improve perf for queries with empty resource filter

* fix: update comment

* fix: add nolint:nilnil

* fix: address comments

* fix: update comment

---------

Co-authored-by: Ankit Nayan <ankit@signoz.io>
2026-04-20 03:30:16 +00:00
Pandey
f52d89c338 docs(go): add types.md covering core-type and Postable/Gettable/Storable conventions (#10998)
* docs(go): add types.md covering core-type and Postable/Gettable/Storable conventions

Document the core-type-first pattern used across pkg/types/: always define
the domain type X, and introduce Postable/Gettable/Updatable/Storable
flavors only when their shape actually differs from X. Walk through
Channel (core only), AuthDomain (all four flavors), and Rule (the
in-progress v2 split) as worked examples.

* docs(go): drop Rule migration example from types.md

* docs(go): allow both NewFromX and ToX conversion forms in types.md

* docs(go): move validation guidance to core type X in types.md

* docs(go): drop exact file:line refs in types.md, use inline examples

* docs(go): split spelling guidance into Updatable and Storable bullets

* docs(go): trim conversion examples to Channel and AuthDomain

* docs(go): swap remaining examples to AuthDomain/Channel where possible
2026-04-19 16:35:47 +00:00
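
A hedged sketch of the core-type-first convention the commit above documents: the core type X owns validation, and a Postable flavor exists only where the request shape genuinely differs. The field sets and the NewFromX helper below are illustrative, not the repo's real definitions:

```go
package main

import (
	"errors"
	"fmt"
)

// AuthDomain is the core type X; validation lives here, per the convention above.
type AuthDomain struct {
	ID   string
	Name string
}

func (d *AuthDomain) Validate() error {
	if d.Name == "" {
		return errors.New("name is required")
	}
	return nil
}

// PostableAuthDomain exists only because the create payload's shape differs
// from the core type (the server assigns the ID). Field set is illustrative.
type PostableAuthDomain struct {
	Name string `json:"name"`
}

// NewAuthDomainFromPostable is the NewFromX conversion form; a ToX method on
// PostableAuthDomain would be the equally acceptable alternative.
func NewAuthDomainFromPostable(p PostableAuthDomain, id string) (*AuthDomain, error) {
	d := &AuthDomain{ID: id, Name: p.Name}
	if err := d.Validate(); err != nil {
		return nil, err
	}
	return d, nil
}

func main() {
	d, err := NewAuthDomainFromPostable(PostableAuthDomain{Name: "example.com"}, "01")
	fmt.Println(d, err)
}
```
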
aniketio-ctrl
691e919a41 feat(billing): add zeus put meters api (#10923)
* feat(billing): add zeus put meters api

* feat(billing): add zeus put meters api
2026-04-19 11:30:09 +00:00
Pandey
61ae49d4ab refactor(ruler): introduce v2 Rule read type and validate uuid on delete (#10997)
* refactor(ruler): add Rule v2 read type and rename storage Rule to StorableRule

* refactor(ruler): map GettableRule to Rule before responding on v2 routes

* docs(openapi): regenerate spec with RuletypesRule on v2 rules routes

* docs(frontend): regenerate API clients with RuletypesRuleDTO

* refactor(ruler): validate uuid-v7 on delete rule handler
2026-04-18 12:39:50 +00:00
Pandey
a5e1f71cf6 refactor(ruler): tighten rule and downtime OpenAPI schema (#10995)
* refactor(ruler): add Enum() on AlertType

* refactor(ruler): convert RepeatType and RepeatOn to valuer.String with Enum()

* refactor(ruler): mark required fields on Recurrence

* refactor(ruler): mark required tag on CumulativeSchedule.Type

* refactor(ruler): rename GettablePlannedMaintenance to PlannedMaintenance

* docs: regenerate OpenAPI spec and frontend clients with tightened schema

* refactor(ruler): add PostablePlannedMaintenance input type with Validate

* refactor(ruler): rename EditPlannedMaintenance to Update and GetAll to List

* refactor(ruler): switch Create/Update to *PostablePlannedMaintenance

* refactor(ruler): convert PlannedMaintenance.Id string to ID valuer.UUID

* refactor(ruler): return *PlannedMaintenance from CreatePlannedMaintenance

* docs: regenerate OpenAPI spec and frontend clients for Postable/ID changes

* refactor(ruler): type PlannedMaintenance.Status as MaintenanceStatus enum

* refactor(ruler): type PlannedMaintenance.Kind as MaintenanceKind enum

* refactor(ruler): mark GettableRule.Id required

* refactor(ruler): mark GettableRule.State required

* refactor(ruler): make GettableRule timestamps non-pointer and users nullable

* refactor(ruler): return bare array from v2 ListRules instead of wrapped object

* docs: regenerate OpenAPI spec and frontend clients for schema pass
2026-04-18 09:46:07 +00:00
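
The Enum() additions in this commit follow a pattern along these lines, shown with a simplified stand-in for the repo's valuer.String wrapper; the real wrapper type and the exact signature the OpenAPI generator consumes may differ:

```go
package main

import "fmt"

// MaintenanceKind is a simplified stand-in for a valuer.String-backed enum.
type MaintenanceKind struct{ s string }

var (
	MaintenanceKindFixed     = MaintenanceKind{"fixed"}
	MaintenanceKindRecurring = MaintenanceKind{"recurring"}
)

func (k MaintenanceKind) String() string { return k.s }

// Enum surfaces the fixed value set so an OpenAPI generator can emit a
// string-enum schema, and generated TS clients get a string-literal union.
func (MaintenanceKind) Enum() []any {
	return []any{MaintenanceKindFixed.s, MaintenanceKindRecurring.s}
}

func main() {
	fmt.Println(MaintenanceKindFixed, MaintenanceKindRecurring.Enum())
}
```
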
Pandey
c9610df66d refactor(ruler): move rules and planned maintenance handlers to signozapiserver (#10957)
* refactor(ruler): define Ruler and Handler interfaces with signozruler implementation

Expand the Ruler interface with rule management and planned maintenance
methods matching rules.Manager signatures. Add Handler interface for
HTTP endpoints. Implement handler in signozruler wrapping ruler.Ruler,
and update provider to embed *rules.Manager for interface satisfaction.

* refactor(ruler): move eval_delay from query-service constants to ruler config

Replace constants.GetEvalDelay() with config.EvalDelay on ruler.Config,
defaulting to 2m. This removes the signozruler dependency on
pkg/query-service/constants.

* refactor(ruler): use time.Duration for eval_delay config

Match the convention used by all other configs in the codebase.
TextDuration is for preserving human-readable text through JSON
round-trips in user-facing rule definitions, not for internal config.

* refactor(ruler): add godoc comments and spacing to Ruler interface

* refactor(ruler): wire ruler handler through signoz.New and signozapiserver

- Add Start/Stop to Ruler interface for lifecycle management
- Add rulerCallback to signoz.New() for EE customization
- Wire ruler.Handler through Handlers, signozapiserver provider
- Register 12 routes in signozapiserver/ruler.go (7 rules, 5 downtime)
- Update cmd/community and cmd/enterprise to pass rulerCallback
- Move rules.Manager creation from server.go to signoz.New via callback
- Change APIHandler.ruleManager type from *rules.Manager to ruler.Ruler
- Remove makeRulesManager from both OSS and EE server.go

* refactor(ruler): remove old rules and downtime_schedules routes from http_handler

Remove 7 rules CRUD routes and 5 downtime_schedules routes plus their
handler methods from http_handler.go. These are now served by
signozapiserver/ruler.go via handler.New() with OpenAPIDef.

The 4 v1 history routes (stats, timeline, top_contributors,
overall_status) remain in http_handler.go as they depend on
interfaces.Reader and have v2 equivalents already in signozapiserver.

* refactor(ruler): use ProviderFactory pattern and register in factory.Registry

Replace the rulerCallback with rulerProviderFactories following the
standard ProviderFactory pattern (like auditorProviderFactories). The
ruler is now created via factory.NewProviderFromNamedMap and registered
in factory.Registry for lifecycle management. Start/Stop are no longer
called manually in server.go.

- Ruler interface embeds factory.Service (Start/Stop return error)
- signozruler.NewFactory accepts all deps including EE task funcs
- provider uses named field (not embedding) with explicit delegation
- cmd/community passes nil task funcs, cmd/enterprise passes EE funcs
- Remove NewRulerProviderFactories (replaced by callback from cmd/)
- Remove manual Start/Stop from both OSS and EE server.go

* fix(ruler): make Start block on stopC per factory.Service contract

rules.Manager.Start is non-blocking (run() just closes a channel).
Add stopC to provider so Start blocks until Stop closes it, matching
the factory.Service contract used by the Registry.
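
A minimal sketch of that contract, with illustrative names: Start kicks off the non-blocking manager, then blocks on stopC until Stop closes it.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// provider mirrors the pattern described above: because the wrapped manager's
// Start returns immediately, Start must block on its own channel to satisfy
// a factory.Service-style contract.
type provider struct {
	stopC chan struct{}
}

func (p *provider) Start(ctx context.Context) error {
	// start the non-blocking rules manager here, then block until shutdown.
	<-p.stopC
	return nil
}

func (p *provider) Stop(ctx context.Context) error {
	close(p.stopC)
	return nil
}

func main() {
	p := &provider{stopC: make(chan struct{})}
	go func() { _ = p.Start(context.Background()) }()
	time.Sleep(10 * time.Millisecond)
	fmt.Println(p.Stop(context.Background())) // unblocks Start; prints <nil>
}
```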

* refactor(ruler): remove unused RM() accessor from EE APIHandler

* refactor(ruler): remove RuleManager from APIHandlerOpts

Use Signoz.Ruler directly instead of passing it through opts.

* refactor(ruler): add /api/v1/rules/test and mark /api/v1/testRule as deprecated

* refactor(ruler): use binding.JSON.BindBody for downtime schedule decode

* refactor(ruler): add TODOs for raw string params on Ruler interface

Mark CreateRule, EditRule, PatchRule, TestNotification, and DeleteRule
with TODOs to accept typed params instead of raw JSON strings. Requires
changing the storage model since the manager stores raw JSON as Data.

* refactor(ruler): add TODO on MaintenanceStore to not expose store directly

* docs: regenerate OpenAPI spec and frontend API clients with ruler routes

* refactor(ruler): rename downtime_schedules tag to downtimeschedules

* refactor(ruler): add query params to ListDowntimeSchedules OpenAPIDef

Add ListPlannedMaintenanceParams struct with active/recurring fields.
Use binding.Query.BindQuery in the handler instead of raw URL parsing.
Add RequestQuery to the OpenAPIDef so params appear in the OpenAPI spec
and generated frontend client.

* refactor(ruler): add GettableTestRule response type to TestRule endpoint

Define GettableTestRule struct with AlertCount and Message fields.
Use it as the Response in TestRule OpenAPIDef so the generated frontend
client has a proper response type instead of string.
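
Roughly, the response type described above; only the two field names come from the commit, while the json tags and package placement are assumptions:

```go
package ruletypes // hypothetical package placement

// GettableTestRule is the typed response for the test-rule endpoint.
// Field names come from the commit above; the json tags are assumed.
type GettableTestRule struct {
	AlertCount int    `json:"alertCount"`
	Message    string `json:"message"`
}
```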

* refactor(ruler): tighten schema with oneOf unions and required fields

Surface the polymorphism in RuleThresholdData and EvaluationEnvelope via
JSONSchemaOneOf (the same pattern as QueryEnvelope), so the generated
TS types are discriminated unions with typed `spec` instead of unknown.
Also mark `alert`, `ruleType`, and `condition` required on PostableRule
so the generated TS types are non-optional for callers.

* refactor(ruler): add Enum() on EvaluationKind, ScheduleType, ThresholdKind

Surface the fixed set of accepted values for these valuer-wrapped kind
types so OpenAPI emits proper string-enum schemas and the generated TS
types become string-literal unions instead of plain string.

* refactor(ruler): mark required fields on nested rule and maintenance types

Surface fields already enforced by Validate()/UnmarshalJSON as required
in the OpenAPI schema so the generated TS types match runtime behavior.

Touches RuleCondition (compositeQuery, op, matchType), RuleThresholdData
(kind, spec), BasicRuleThreshold (name, target, op, matchType),
RollingWindow (evalWindow, frequency), CumulativeWindow (schedule,
frequency, timezone), EvaluationEnvelope (kind, spec), Schedule
(timezone), GettablePlannedMaintenance (name, schedule).

Does not mark server-populated fields (id, createdAt, updatedAt, status,
kind) on GettablePlannedMaintenance required, since the same struct is
reused for request bodies in MaintenanceStore.CreatePlannedMaintenance.

* refactor(ruler): tighten AlertCompositeQuery, QueryType, PanelType schema

Missed in the earlier tightening pass. AlertCompositeQuery.queries,
panelType, queryType are all required for a valid composite query;
QueryType and PanelType are valuer-wrapped with fixed value sets, so
expose them as enums in the OpenAPI schema.

* refactor(ruler): wrap sql.ErrNoRows as TypeNotFound in by-ID lookups

GetStoredRule and GetPlannedMaintenanceByID previously returned bun's
raw Scan error, so a missing ID leaked "sql: no rows in result set" to
the HTTP response with a 500 status. WrapNotFoundErrf converts
sql.ErrNoRows into TypeNotFound so render.Error emits 404 with a stable
`not_found` code, and passes other errors through unchanged.
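
In spirit the wrapping behaves like this sketch; errNotFound stands in for the repo's TypeNotFound code, and wrapNotFound mirrors, but is not, the real WrapNotFoundErrf:

```go
package main

import (
	"database/sql"
	"errors"
	"fmt"
)

var errNotFound = errors.New("not_found") // stand-in for the repo's TypeNotFound

// wrapNotFound maps sql.ErrNoRows to a typed not-found error (rendered as
// 404 by the HTTP layer) and passes every other error through unchanged.
func wrapNotFound(err error, format string, args ...any) error {
	if errors.Is(err, sql.ErrNoRows) {
		return fmt.Errorf("%w: %s", errNotFound, fmt.Sprintf(format, args...))
	}
	return err
}

func main() {
	err := wrapNotFound(sql.ErrNoRows, "rule with id %q not found", "abc")
	fmt.Println(errors.Is(err, errNotFound), err)
}
```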

* refactor(ruler): move migrated rules routes to /api/v2/rules

The 7 rules routes now live at /api/v2/rules, /api/v2/rules/{id}, and
/api/v2/rules/test — served via handler.New with render.Success and
render.Error. The legacy /api/v1/rules paths will be restored in the
query-service http handler in a follow-up so existing clients keep
receiving the SuccessResponse envelope unchanged.

Drop the /api/v1/testRule deprecated alias from signozapiserver; the
original lives on main's http_handler.go and is restored alongside the
other v1 paths.

Downtime schedule routes stay at /api/v1/downtime_schedules — single
track, no legacy restore planned.

* refactor(ruler): restore /api/v1/rules legacy handlers for back-compat

Bring the 7 rule CRUD/test handlers and their router.HandleFunc lines
back to http_handler.go so /api/v1/rules, /api/v1/rules/{id}, and
/api/v1/testRule continue to emit the legacy SuccessResponse envelope.
The v2 versions under signozapiserver are the new home for the render
envelope used by generated clients.

Delegation uses aH.ruleManager (populated from opts.Signoz.Ruler in
NewAPIHandler), so a single ruler.Ruler instance serves both paths — no
second rules.Manager is instantiated.

Downtime schedules stay single-track under signozapiserver; the 5
downtime handlers are not restored.

* docs: regenerate OpenAPI spec and frontend clients for /api/v2/rules

* refactor(ruler): return 201 Created on POST /api/v2/rules

A successful create now responds with 201 Created and the full
GettableRule body, matching REST convention for resource creation.
Regenerates the OpenAPI spec and frontend clients to reflect the new
status code.

* refactor(ruler): restore dropped sorter TODO in legacy listRules

The legacy listRules handler was copied verbatim from main during the
v1 back-compat restore, but an inner blank line and the load-bearing
`// todo(amol): need to add sorter` comment were stripped. Put them
back so the legacy block round-trips cleanly against main.

* refactor(ruler): return 201 Created on POST /api/v1/downtime_schedules

Match the REST convention already applied to POST /api/v2/rules:
successful creates respond with 201 Created. Response body remains
empty (nil); the generated frontend client surface is unchanged since
no response type was declared.

A richer "return the created resource" response body is a separate
follow-up — holding off until the ruletypes naming cleanup lands.

* fix(ruler): signal Healthy only after manager.Start closes m.block

The ruler provider didn't implement factory.Healthy, so the registry
fell back to factory.closedC and marked the service StateRunning the
instant its Start goroutine spawned — before rules.Manager.Start had
closed m.block. /api/v2/healthz therefore returned 200 while rule
evaluation was still gated, and integration tests that POSTed a rule
immediately after the readiness check saw their task goroutines stuck
on <-m.block until the next frequency tick.

Add a healthyC channel and close it inside Start only after
manager.Start returns; implement factory.Healthy so the registry and
/api/v2/healthz wait on the real readiness signal.
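
A sketch of that readiness pattern, assuming a factory.Healthy-style hook the registry waits on (the real interface may differ):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// rulerProvider sketches the fix above: Healthy reports ready only after the
// wrapped manager's Start has returned (i.e. after m.block is closed).
type rulerProvider struct {
	healthyC chan struct{}
	stopC    chan struct{}
}

func (p *rulerProvider) Start(ctx context.Context) error {
	startManager() // blocking until the manager is actually ready
	close(p.healthyC)
	<-p.stopC
	return nil
}

// Healthy blocks until the readiness channel is closed, or errors on ctx.
func (p *rulerProvider) Healthy(ctx context.Context) error {
	select {
	case <-p.healthyC:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func startManager() { time.Sleep(5 * time.Millisecond) } // stand-in for rules.Manager.Start

func main() {
	p := &rulerProvider{healthyC: make(chan struct{}), stopC: make(chan struct{})}
	go func() { _ = p.Start(context.Background()) }()
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println(p.Healthy(ctx)) // nil once Start has signalled readiness
}
```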

* fix: add the withhealthy interface

* fix(ruler): alias legacy RULES_EVAL_DELAY env var in backward-compat

The eval_delay config was moved from query-service constants (read from
RULES_EVAL_DELAY) onto ruler.Config (read via mapstructure from
SIGNOZ_RULER_EVAL__DELAY). That silently broke the legacy env var for
any existing deployment — notably the alerts integration-test fixture
which sets RULES_EVAL_DELAY=0s to let rules evaluate against just-
inserted data. The resulting default 2m delay pushed the query window
far enough back that the fixture's rate spike fell outside it, causing
8 of 24 parametrize cases in 02_basic_alert_conditions.py to fail with
"Expected N alerts to be fired but got 0 alerts".

Add RULES_EVAL_DELAY to mergeAndEnsureBackwardCompatibility alongside
the ~10 other aliased legacy env vars. Emits the standard deprecation
warning and overrides config.Ruler.EvalDelay.
2026-04-18 08:25:16 +00:00
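
The env-var aliasing in the last bullet amounts to something like the sketch below; function and variable names are illustrative, not the repo's actual backward-compat code:

```go
package main

import (
	"fmt"
	"log/slog"
	"os"
	"time"
)

// mergeLegacyEvalDelay honours the legacy RULES_EVAL_DELAY env var, emits a
// deprecation warning, and overrides the configured eval delay.
func mergeLegacyEvalDelay(current time.Duration) time.Duration {
	raw, ok := os.LookupEnv("RULES_EVAL_DELAY")
	if !ok {
		return current
	}
	d, err := time.ParseDuration(raw)
	if err != nil {
		slog.Warn("ignoring invalid RULES_EVAL_DELAY", "value", raw, "error", err)
		return current
	}
	slog.Warn("RULES_EVAL_DELAY is deprecated, use SIGNOZ_RULER_EVAL__DELAY instead")
	return d
}

func main() {
	_ = os.Setenv("RULES_EVAL_DELAY", "0s")
	fmt.Println(mergeLegacyEvalDelay(2 * time.Minute)) // 0s
}
```
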
Pandey
ef298af388 feat(apiserver): derive HTTP route prefix from global.external_url (#10943)
* feat(apiserver): derive HTTP route prefix from global.external_url

The path component of global.external_url is now used as the base path
for all HTTP routes (API and web frontend), enabling SigNoz to be served
behind a reverse proxy at a sub-path (e.g. https://example.com/signoz/).

The prefix is applied via http.StripPrefix at the outermost handler
level, requiring zero changes to route registration code. Health
endpoints (/api/v1/health, /api/v2/healthz, /api/v2/readyz,
/api/v2/livez) remain accessible without the prefix for container
healthchecks.

Removes web.prefix config in favor of the unified global.external_url
approach, avoiding the desync bugs seen in projects with separate
API/UI prefix configs (ArgoCD, Prometheus).

closes SigNoz/platform-pod#1775
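
A rough sketch of the mechanism (not the actual apiserver code): derive the prefix from external_url, strip it at the outermost handler, and let the health endpoints bypass it:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// healthPaths stay reachable without the prefix so container healthchecks
// keep working behind a reverse proxy.
var healthPaths = map[string]bool{
	"/api/v1/health":  true,
	"/api/v2/healthz": true,
	"/api/v2/readyz":  true,
	"/api/v2/livez":   true,
}

// withExternalPrefix derives the base path from external_url and strips it
// before the mux sees the request; route registration stays untouched.
func withExternalPrefix(externalURL string, next http.Handler) (http.Handler, error) {
	u, err := url.Parse(externalURL)
	if err != nil {
		return nil, err
	}
	prefix := strings.TrimSuffix(u.Path, "/")
	if prefix == "" {
		return next, nil
	}
	stripped := http.StripPrefix(prefix, next)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if healthPaths[r.URL.Path] {
			next.ServeHTTP(w, r) // health endpoints bypass the prefix
			return
		}
		stripped.ServeHTTP(w, r)
	}), nil
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/v2/healthz", func(w http.ResponseWriter, _ *http.Request) { w.WriteHeader(200) })
	h, err := withExternalPrefix("https://example.com/signoz/", mux)
	fmt.Println(h != nil, err)
}
```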

* feat(web): template index.html with dynamic base href from global.external_url

Read index.html at startup, parse as Go template with [[ ]] delimiters,
execute with BasePath derived from global.external_url, and cache the
rendered bytes in memory. This injects <base href="/signoz/" /> (or
whatever the route prefix is) so the browser resolves relative URLs
correctly when SigNoz is served at a sub-path.

Inject global.Config into the routerweb provider via the factory closure
pattern. Static files (JS, CSS, images) are still served from disk
unchanged.
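
A condensed sketch of that templating step, using the [[ ]] delimiters described above and the BaseHref field name a later bullet settles on; the real NewIndex helper adds fallback-to-raw-bytes behaviour on template errors:

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
)

// index is a stand-in for the on-disk index.html; [[ ]] delimiters avoid
// clashing with any {{ }} the frontend bundle may contain.
const index = `<!doctype html><head><base href="[[.BaseHref]]" /></head>`

func renderIndex(baseHref string) ([]byte, error) {
	tmpl, err := template.New("index").Delims("[[", "]]").Parse(index)
	if err != nil {
		return nil, err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, struct{ BaseHref string }{baseHref}); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil // cached in memory by the caller, per the commit above
}

func main() {
	b, _ := renderIndex("/signoz/")
	fmt.Println(string(b))
}
```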

* refactor(web): extract index.html templating into web.NewIndex

Move the template parsing and execution logic from routerweb provider
into pkg/web/template.go. NewIndex logs and returns raw bytes on
template failure; NewIndexE returns the error for callers that need it.

Rename BasePath to BaseHref to match the HTML attribute it populates.
Inject global.Config into routerweb via the factory closure pattern.

* refactor(global): rename RoutePrefix to ExternalPath, add ExternalPathTrailing

Rename RoutePrefix() to ExternalPath() to accurately reflect what it
returns: the path component of the external URL. Add
ExternalPathTrailing() which returns the path with a trailing slash,
used for HTML base href injection.

* refactor(web): make index filename configurable via web.index

Move the hardcoded indexFileName const from routerweb/provider.go to
web.Config.Index with default "index.html". This allows overriding the
SPA entrypoint file via configuration.

* refactor(web): collapse testdata_basepath into testdata

Use a single testdata directory with a templated index.html for all
routerweb tests. Remove the redundant testdata_basepath directory.

* test(web): add no-template and invalid-template index test cases

Add three distinct index fixtures in testdata:
- index.html: correct [[ ]] template with BaseHref
- index_no_template.html: plain HTML, no placeholders
- index_invalid_template.html: malformed template syntax

Tests verify: template substitution works, plain files pass through
unchanged, and invalid templates fall back to serving raw bytes.
Consolidate test helpers into startServer/get.

* refactor(web): rename test fixtures to no_template, valid_template, invalid_template

Drop the index_ prefix from test fixtures. Use web instead of w for
the variable name in test helpers.

* test(web): add SPA fallback paths to no_template and invalid_template tests

Test /, /does-not-exist, and /assets in all three template test cases
to verify SPA fallback behavior (non-existent paths and directories
serve the index) regardless of template type.

* test(web): use exact match instead of contains in template tests

Match the full expected response body in TestServeTemplatedIndex
instead of using assert.Contains.

* style(web): use raw string literals for expected test values

* refactor(web): rename get test helper to httpGet

* refactor(web): use table-driven tests with named path cases

Replace for-loop path iteration with explicit table-driven test cases
for each path. Each path (root, non-existent, directory) is a named
subtest case in all three template tests.

* chore: remove redundant comments from added code

* style: add blank lines between logical blocks

* fix(web): resolve lint errors in provider and template

Fix errcheck on rw.Write in serveIndex, use ErrorContext instead of
Error in NewIndex for sloglint compliance. Move serveIndex below
ServeHTTP to order public methods before private ones.

* style: formatting and test cleanup from review

Restructure Validate nil check, rename expectErr to fail with
early-return, trim trailing newlines in test assertions, remove
t.Parallel from subtests, inline short config literals, restore
struct field comments in web.Config.

* fix: remove unused files

* fix: remove unused files

* perf(web): cache http.FileServer on provider instead of creating per-request

* refactor(web): use html/template for context-aware escaping in index rendering

---------

Co-authored-by: SagarRajput-7 <162284829+SagarRajput-7@users.noreply.github.com>
2026-04-18 06:47:17 +00:00
Yunus M
b5c146afdf chore: update @signozhq packages and adjust styles in AuthHeader and AnnouncementTooltip components (#10989)
2026-04-17 14:41:22 +00:00
Abhishek Kumar Singh
dfbcd1e0ec feat: AlertManager templater (#10581)
* chore: custom notifiers in alert manager

* chore: lint fixes

* chore: fix email linter

* chore: added tracing to msteamsv2 notifier

* feat: alert manager template to template title and notification body

* chore: updated test name + code for timeout errors

* chore: added utils for using variables with $ notation

* chore: exposed templates for alertmanager types

* feat: added preprocessor for alert templater

* chore: hooked preProcess function in expandTitle and body, added labels and annotations in alertdata

* chore: fix lint issues

* chore: added handling for missing variable used in template

* feat: converted alerttemplater to interface and updated tests

* refactor: added extractCommonKV instead of 2 different functions

* test: fix preprocessor test case

* feat: added support for  and  in templating

* chore: lint fix

* chore: renamed the interface

* chore: added test for missing function

* refactor: test case and sb related changes

* refactor: comments and test improvements

* chore: lint fix

* chore: updated comments

* chore: updated newline to markdown format

* chore: updated br with new line in test and logs added

* refactor: review comments

* refactor: lint fixes

* chore: updated licenses for notifiers

* chore: updated email notifier from upstream

* feat: return single templating result from  with flag for template type

* fix: variables with symbols in template

* chore: removed notifier test files

* refactor: changes as per internal review

* chore: lint issue

---------

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2026-04-17 13:00:15 +00:00
Yunus M
d57acae088 chore: update all absolute tokens to semantic tokens (#10742)
* chore: update absolute tokens to semantic tokens

* chore: update absolute tokens to semantic tokens - pre login pages

* chore: update absolute tokens to semantic tokens - trace details

* chore: update hex, absolute tokens to semantic tokens

* chore: remove .claude/launch.json

* chore: fix formatting errors

* chore: prettier fixes

* chore: prettier fixes

* chore: prettier fixes

* chore: update @signozhq/design-tokens to version 2.1.4 and adjust styles in periscope.scss

* chore: update absolute tokens to semantic tokens

* chore: update borders

* chore: update snapshots

* chore: update snapshots

* chore: remove shadow in table footer

* fix: update test snapshots

* fix: remove extra padding in panel selection modal

* fix: remove the default background in welcome checklist popover

* fix: update background styles to use var(--l2-background) instead of var(--l3-background)

* chore: use badge from @signozhq/ui (#10834)

* chore: use badge from @signozhq/ui

* chore: remove unused tooltip comp

* chore: use sonner component from @signozhq/ui (#10835)

* chore: use badge from @signozhq/ui

* chore: remove unused tooltip comp

* chore: use sonner component from @signozhq/ui

* chore: use switch component from @signozhq/ui (#10836)

* fix: update test cases to use components from signozhq/ui

* fix: patch getComputedStyle to handle CSS parsing errors in tests

* fix: update sonner import for signozhq/ui

* fix: remove unused toast import from InviteMembersModal test

* fix(yarn.lock): sync with current changes

* feat: fix slow test case

---------

Co-authored-by: Vinícius Lourenço <vinicius@signoz.io>
Co-authored-by: aks07 <adityasinghssj1@gmail.com>
2026-04-17 10:05:35 +00:00
SagarRajput-7
0b0cfc04f1 chore: skipped flaky jest tests (#10981)
* chore: skipped flaky jest tests

* chore: skipped flaky jest tests
2026-04-17 08:03:37 +00:00
Vikrant Gupta
c8099a88c3 feat(member): add v2 reset password token endpoints and show invite expiry status (#10954)
* fix(member): better UX for pending invite users

* fix(member): add integration tests and reuse timezone util

* fix(member): rename deprecated and remove dead files

* fix(member): do not use hyphenated endpoints

* fix(member): user friendly button text

* fix(member): update the API endpoints and integration tests

* fix(member): simplify handler naming convention

* fix(member): added v2 API for update my password

* fix(member): remove more dead code

* fix(member): fix integration tests

* fix(member): fix integration tests
2026-04-16 19:40:51 +00:00
SagarRajput-7
c9d5ca944a feat: added info dismissible callout for the license row in workspace settings (#10938)
* feat: added info dismissible callout for the license row in workspace settings

* feat: addressed comments

* feat: addressed comments
2026-04-16 12:37:18 +00:00
SagarRajput-7
c24579be12 chore: updated the onboarding contributor doc as per the new enhancement in the asset management (#10955) 2026-04-16 10:59:20 +00:00
Abhishek Kumar Singh
d085a8fd53 chore: custom notifiers in alert manager (#10541)
* chore: custom notifiers in alert manager

* chore: lint fixes

* chore: fix email linter

* chore: added tracing to msteamsv2 notifier

* chore: updated test name + code for timeout errors

* refactor: review comments

* refactor: lint fixes

* chore: updated licenses for notifiers

* chore: updated email notifier from upstream

* chore: updated license header with short notation

---------

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2026-04-16 10:33:46 +00:00
SagarRajput-7
6406a739a1 chore: public dir cleanup, remove duplicate assets since they are moved to src/assets (#10951) 2026-04-16 08:19:45 +00:00
SagarRajput-7
a7ce8b2d24 chore: added eslint, stylelint and asset migration across the codebase (#10915)
* chore: jest module resolution fix

* chore: remove the snapshot update since no migration is done under this pr

* chore: added eslint and stylelint for the asset migration task

* chore: asset migration - using the src/assets across the codebase, instead of the public dir

* chore: code refactor

* chore: fmt fix

* chore: strengthen the lint rule and migrate the gaps found

* chore: addressed feedback comments

* chore: addressed comments and added public reference rule

* chore: addressed comments

* feat: addressed comments
2026-04-16 07:14:44 +00:00
Pandey
b3da6fb251 refactor(alertmanager): move API handlers to signozapiserver (#10941)
* refactor(alertmanager): move API handlers to signozapiserver

Extract Handler interface in pkg/alertmanager/handler.go and move
the implementation from api.go to signozalertmanager/handler.go.

Register all alertmanager routes (channels, route policies, alerts)
in signozapiserver via handler.New() with OpenAPIDef. Remove
AlertmanagerAPI injection from http_handler.go.

This enables future AuditDef instrumentation on these routes.

* fix(review): rename param, add /api/v1/channels/test endpoint

- Rename `am` to `alertmanagerService` in NewHandlers
- Add /api/v1/channels/test as the canonical test endpoint
- Mark /api/v1/testChannel as deprecated
- Regenerate OpenAPI spec

* fix(review): use camelCase for channel orgId json tag

* fix(review): remove section comments from alertmanager routes

* fix(review): use routepolicies tag without hyphen

* chore: regenerate frontend API clients for alertmanager routes

* fix: add required/nullable/enum tags to alertmanager OpenAPI types

- PostableRoutePolicy: mark expression, name, channels as required
- GettableRoutePolicy: change CreatedAt/UpdatedAt from pointer to value
- Channel: mark name, type, data, orgId as required
- ExpressionKind: add Enum() for rule/policy values
- Regenerate OpenAPI spec and frontend clients

* fix: use typed response for GetAlerts endpoint

* fix: add Receiver request type to channel mutation endpoints

CreateChannel, UpdateChannelByID, TestChannel, and TestChannelDeprecated
all read req.Body as a Receiver config. The OpenAPIDef must declare
the request type so the generated SDK includes the body parameter.

* fix: change CreateChannel access from EditAccess to AdminAccess

Aligns CreateChannel endpoint with the rest of the channel mutation
endpoints (update/delete) which all require admin access. This is
consistent with the frontend where notifications are not accessible
to editors.
2026-04-15 17:31:43 +00:00
primus-bot[bot]
be1a0fa3a5 chore(release): bump to v0.119.0 (#10936)
Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
Co-authored-by: Priyanshu Shrivastava <priyanshu@signoz.io>
2026-04-15 08:35:52 +00:00
Nikhil Mantri
6ad2711c7a feat(/fields/*) : introduce a new param metricNamespace in the APIs for prefix match (#10779)
* chore: initial commit

* chore: added metricNamespace as a new param

* chore: go generate openapi, update spec

* chore: frontend yarn generate:api

* chore: added metricnamespace support in /fields/values as well as added integration tests

* chore: corrected comment

* chore: added unit tests for getMetricsKeys and getMeterSourceMetricKeys

---------

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
2026-04-15 07:26:42 +00:00
swapnil-signoz
4f59cb0de3 chore: updating cloud integration agent version (#10933)
2026-04-15 05:01:26 +00:00
Srikanth Chekuri
304c39e08c fix(route-policy): allow undefined variables for expression (#10934)
* fix(route-policy): allow undefined variables for expression

* chore: fix lint
2026-04-15 04:55:21 +00:00
swapnil-signoz
3df0da3a4e chore: removing underscore attribute based cloud integration dashboards (#10929)
2026-04-14 13:08:07 +00:00
Tushar Vats
381966adcd fix: flaky integration tests (#10927) 2026-04-14 10:52:16 +00:00
Piyush Singariya
df9023c74c chore: upgrade collector version v0.144.3-rc.4 (#10921)
* fix: upgraded collector version

* fix: qbtoexpr tests

* fix: go sum

* chore: upgrade collector version v0.144.3-rc.4

* fix: tests

* ci: test fix
2026-04-14 08:57:28 +00:00
Nikhil Soni
10fc166e97 feat: add docs and agent skill info banner to ClickHouse query editor (#10713)
* feat: add clickhouse query doc and skill links

* fix: switch to using callout component for info

* chore: update banner message

* chore: move styles to a separate file and rename folder

to match conventions

* fix: rename folder in git as well
2026-04-14 06:17:13 +00:00
SagarRajput-7
22b137c92b chore: moved json to ts for the onboarding data sources (#10905)
* chore: moved json to ts for the onboarding data sources

* chore: migrated unsupported occurrences in scss files

* chore: jest module resolution fix

* chore: remove the snapshot update since no migration is done under this pr
2026-04-14 05:54:49 +00:00
SagarRajput-7
0b20afc469 chore: duplicated all the assets required for migration from public dir to src/assets dir (#10904)
* chore: duplicated all the assets required for migration from public dir to src/assets dir

* chore: jest module resolution fix

* chore: remove the snapshot update since no migration is done under this pr
2026-04-13 19:12:17 +00:00
swapnil-signoz
dce496d099 refactor: cloud integration modules implementation (#10718)
* feat: adding cloud integration type for refactor

* refactor: store interfaces to use local types and error

* feat: adding sql store implementation

* refactor: removing interface check

* feat: adding updated types for cloud integration

* refactor: using struct for map

* refactor: update cloud integration types and module interface

* fix: correct GetService signature and remove shadowed Data field

* feat: implement cloud integration store

* refactor: adding comments and removed wrong code

* refactor: streamlining types

* refactor: add comments for backward compatibility in PostableAgentCheckInRequest

* refactor: update Dashboard struct comments and remove unused fields

* refactor: split upsert store method

* feat: adding integration test

* refactor: clean up types

* refactor: renaming service type to service id

* refactor: using serviceID type

* feat: adding method for service id creation

* refactor: updating store methods

* refactor: clean up

* refactor: clean up

* refactor: review comments

* refactor: clean up

* feat: adding handlers

* fix: lint and ci issues

* fix: lint issues

* fix: update error code for service not found

* feat: adding handler skeleton

* chore: removing todo comment

* feat: adding frontend openapi schema

* feat: adding module implementation for create account

* fix: returning valid error instead of panic

* fix: module test

* refactor: simplify ingestion key retrieval logic

* feat: adding module implementation for AWS

* refactor: ci lint changes

* refactor: python formatting change

* fix: new storable account func was unsetting provider account id

* refactor: python lint changes

* refactor: adding validation on update account request

* refactor: reverting older tests and adding new tests

* chore: lint changes

* feat: using service account for API key

* refactor: renaming tests and cleanup

* refactor: removing dashboard overview images

* feat: adding service definition store

* chore: adding TODO comments

* feat: adding API for getting connection credentials

* feat: adding openapi spec for the endpoint

* feat: adding tests for credential API

* feat: adding cloud integration config

* refactor: updating test with new env variable for config

* refactor: moving few cloud provider interface methods to types

* refactor: review comments resolution

* refactor: lint changes

* refactor: code clean up

* refactor: removing email domain function

* refactor: review comments and clean up

* refactor: lint fixes

* refactor: review changes

- Added get connected account module method
- Fixed integration tests
- Removed cloud integration store as callback function's param

* refactor: changing wrong dashboard id for EKS definition
2026-04-13 15:16:26 +00:00
aniketio-ctrl
c15e3ca59e feat(billing): use meter api from zeus (#10903)
* feat(meter-api): use meter api from zeus

* feat(meter-api): bug fixes

* feat(meter-api): bug fixes

* feat(meter-api): bug fixes
2026-04-13 13:34:06 +00:00
Vikrant Gupta
9ec6da5e76 fix(authz): unauthenticated error code for deleted service account (#10878)
* fix(authz): populate correct error for deleted service account

* chore(authz): reduce the regex restrictions on service accounts

* chore(authz): reduce the regex restrictions on service accounts

* fix(authz): populate correct error for deleted service account

* fix(authz): populate correct error for deleted service account
2026-04-13 13:19:25 +00:00
Vikrant Gupta
3e241944e7 chore(sqlstore): added connection max lifetime support (#10913)
* chore(sqlstore): added connection max lifetime param support

* chore(sqlstore): fix test

* chore(sqlstore): default to 0 for conn_max_lifetime

* chore(sqlstore): rename the config

* chore(sqlstore): rename the config
2026-04-13 13:09:00 +00:00
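
The new setting maps directly onto database/sql's pool knobs; a minimal sketch with illustrative package and function names:

```go
package sqlstore // illustrative package name

import (
	"database/sql"
	"time"
)

// applyPoolConfig shows how the two config values land on database/sql.
// A conn_max_lifetime of 0 means connections are never closed due to age,
// matching the documented default.
func applyPoolConfig(db *sql.DB, maxOpenConns int, connMaxLifetime time.Duration) {
	db.SetMaxOpenConns(maxOpenConns)
	db.SetConnMaxLifetime(connMaxLifetime)
}
```
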
SagarRajput-7
5251339a16 chore: config setup for asset migration, allow use of @ alias (#10901)
* chore: config setup for asset migration, allow use of @ alias

* chore: added modulewrapper for jests

* chore: jest module resolution fix

* chore: remove the snapshot update since no migration is done under this pr
2026-04-13 12:31:47 +00:00
Abhi kumar
702a16f80d chore: added test-ids to identify elements better in e2es (#10914) 2026-04-13 12:12:53 +00:00
1263 changed files with 49676 additions and 55679 deletions

64
.github/CODEOWNERS vendored
View File

@@ -16,38 +16,38 @@ go.mod @therealpandey
# Scaffold Owners
-/pkg/config/ @vikrantgupta25
-/pkg/errors/ @vikrantgupta25
-/pkg/factory/ @vikrantgupta25
-/pkg/types/ @vikrantgupta25
-/pkg/valuer/ @vikrantgupta25
-/cmd/ @vikrantgupta25
-.golangci.yml @vikrantgupta25
+/pkg/config/ @therealpandey
+/pkg/errors/ @therealpandey
+/pkg/factory/ @therealpandey
+/pkg/types/ @therealpandey
+/pkg/valuer/ @therealpandey
+/cmd/ @therealpandey
+.golangci.yml @therealpandey
# Zeus Owners
-/pkg/zeus/ @vikrantgupta25
-/ee/zeus/ @vikrantgupta25
-/pkg/licensing/ @vikrantgupta25
-/ee/licensing/ @vikrantgupta25
+/pkg/zeus/ @therealpandey
+/ee/zeus/ @therealpandey
+/pkg/licensing/ @therealpandey
+/ee/licensing/ @therealpandey
# SQL Owners
-/pkg/sqlmigration/ @vikrantgupta25
-/ee/sqlmigration/ @vikrantgupta25
-/pkg/sqlschema/ @vikrantgupta25
-/ee/sqlschema/ @vikrantgupta25
+/pkg/sqlmigration/ @therealpandey
+/ee/sqlmigration/ @therealpandey
+/pkg/sqlschema/ @therealpandey
+/ee/sqlschema/ @therealpandey
# Analytics Owners
-/pkg/analytics/ @vikrantgupta25
-/pkg/statsreporter/ @vikrantgupta25
+/pkg/analytics/ @therealpandey
+/pkg/statsreporter/ @therealpandey
# Emailing Owners
-/pkg/emailing/ @vikrantgupta25
-/pkg/types/emailtypes/ @vikrantgupta25
-/templates/email/ @vikrantgupta25
+/pkg/emailing/ @therealpandey
+/pkg/types/emailtypes/ @therealpandey
+/templates/email/ @therealpandey
# Querier Owners
@@ -97,23 +97,23 @@ go.mod @therealpandey
# AuthN / AuthZ Owners
-/pkg/authz/ @vikrantgupta25
-/ee/authz/ @vikrantgupta25
-/pkg/authn/ @vikrantgupta25
-/ee/authn/ @vikrantgupta25
-/pkg/modules/user/ @vikrantgupta25
-/pkg/modules/session/ @vikrantgupta25
-/pkg/modules/organization/ @vikrantgupta25
-/pkg/modules/authdomain/ @vikrantgupta25
-/pkg/modules/role/ @vikrantgupta25
+/pkg/authz/ @therealpandey
+/ee/authz/ @therealpandey
+/pkg/authn/ @therealpandey
+/ee/authn/ @therealpandey
+/pkg/modules/user/ @therealpandey
+/pkg/modules/session/ @therealpandey
+/pkg/modules/organization/ @therealpandey
+/pkg/modules/authdomain/ @therealpandey
+/pkg/modules/role/ @therealpandey
# IdentN Owners
-/pkg/identn/ @vikrantgupta25
-/pkg/http/middleware/identn.go @vikrantgupta25
+/pkg/identn/ @therealpandey
+/pkg/http/middleware/identn.go @therealpandey
# Integration tests
-/tests/integration/ @vikrantgupta25
+/tests/integration/ @therealpandey
# OpenAPI types generator

70
.github/workflows/e2eci.yaml vendored Normal file
View File

@@ -0,0 +1,70 @@
name: e2eci
on:
pull_request:
types:
- labeled
pull_request_target:
types:
- labeled
jobs:
test:
strategy:
fail-fast: false
matrix:
project:
- chromium
if: |
((github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))) && contains(github.event.pull_request.labels.*.name, 'safe-to-e2e')
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: checkout
uses: actions/checkout@v4
- name: python
uses: actions/setup-python@v5
with:
python-version: 3.13
- name: uv
uses: astral-sh/setup-uv@v4
- name: node
uses: actions/setup-node@v4
with:
node-version: lts/*
- name: python-install
run: |
cd tests && uv sync
- name: yarn-install
run: |
cd tests/e2e && yarn install --frozen-lockfile
- name: playwright-browsers
run: |
cd tests/e2e && yarn playwright install --with-deps ${{ matrix.project }}
- name: bring-up-stack
run: |
cd tests && \
uv run pytest \
--basetemp=./tmp/ \
-vv --reuse --with-web \
e2e/bootstrap/setup.py::test_setup
- name: playwright-test
run: |
cd tests/e2e && \
yarn playwright test --project=${{ matrix.project }}
- name: teardown-stack
if: always()
run: |
cd tests && \
uv run pytest \
--basetemp=./tmp/ \
-vv --teardown \
e2e/bootstrap/setup.py::test_teardown
- name: upload-artifacts
if: always()
uses: actions/upload-artifact@v4
with:
name: playwright-artifacts-${{ matrix.project }}
path: tests/e2e/artifacts/
retention-days: 5

View File

@@ -25,11 +25,11 @@ jobs:
uses: astral-sh/setup-uv@v4
- name: install
run: |
-cd tests/integration && uv sync
+cd tests && uv sync
- name: fmt
run: |
make py-fmt
-git diff --exit-code -- tests/integration/
+git diff --exit-code -- tests/
- name: lint
run: |
make py-lint
@@ -37,21 +37,21 @@
strategy:
fail-fast: false
matrix:
-src:
-- bootstrap
-- passwordauthn
+suite:
+- alerts
- callbackauthn
- cloudintegrations
- dashboard
+- ingestionkeys
- logspipelines
+- passwordauthn
- preference
- querier
+- rawexportdata
- role
-- ttl
-- alerts
-- ingestionkeys
- rootuser
- serviceaccount
+- ttl
sqlstore-provider:
- postgres
- sqlite
@@ -79,8 +79,9 @@
uses: astral-sh/setup-uv@v4
- name: install
run: |
-cd tests/integration && uv sync
+cd tests && uv sync
- name: webdriver
+if: matrix.suite == 'callbackauthn'
run: |
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
echo "deb http://dl.google.com/linux/chrome/deb/ stable main" | sudo tee -a /etc/apt/sources.list.d/google-chrome.list
@@ -99,10 +100,10 @@
google-chrome-stable --version
- name: run
run: |
-cd tests/integration && \
+cd tests && \
uv run pytest \
--basetemp=./tmp/ \
-src/${{matrix.src}} \
+integration/tests/${{matrix.suite}} \
--sqlstore-provider ${{matrix.sqlstore-provider}} \
--sqlite-mode ${{matrix.sqlite-mode}} \
--postgres-version ${{matrix.postgres-version}} \

View File

@@ -1,62 +0,0 @@
name: e2eci
on:
workflow_dispatch:
inputs:
userRole:
description: "Role of the user (ADMIN, EDITOR, VIEWER)"
required: true
type: choice
options:
- ADMIN
- EDITOR
- VIEWER
jobs:
test:
name: Run Playwright Tests
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: lts/*
- name: Mask secrets and input
run: |
echo "::add-mask::${{ secrets.BASE_URL }}"
echo "::add-mask::${{ secrets.LOGIN_USERNAME }}"
echo "::add-mask::${{ secrets.LOGIN_PASSWORD }}"
echo "::add-mask::${{ github.event.inputs.userRole }}"
- name: Install dependencies
working-directory: frontend
run: |
npm install -g yarn
yarn
- name: Install Playwright Browsers
working-directory: frontend
run: yarn playwright install --with-deps
- name: Run Playwright Tests
working-directory: frontend
run: |
BASE_URL="${{ secrets.BASE_URL }}" \
LOGIN_USERNAME="${{ secrets.LOGIN_USERNAME }}" \
LOGIN_PASSWORD="${{ secrets.LOGIN_PASSWORD }}" \
USER_ROLE="${{ github.event.inputs.userRole }}" \
yarn playwright test
- name: Upload Playwright Report
uses: actions/upload-artifact@v4
if: always()
with:
name: playwright-report
path: frontend/playwright-report/
retention-days: 30

View File

@@ -201,26 +201,26 @@ docker-buildx-enterprise: go-build-enterprise js-build
# python commands
##############################################################
.PHONY: py-fmt
-py-fmt: ## Run black for integration tests
-	@cd tests/integration && uv run black .
+py-fmt: ## Run black across the shared tests project
+	@cd tests && uv run black .
.PHONY: py-lint
-py-lint: ## Run lint for integration tests
-	@cd tests/integration && uv run isort .
-	@cd tests/integration && uv run autoflake .
-	@cd tests/integration && uv run pylint .
+py-lint: ## Run lint across the shared tests project
+	@cd tests && uv run isort .
+	@cd tests && uv run autoflake .
+	@cd tests && uv run pylint .
.PHONY: py-test-setup
-py-test-setup: ## Runs integration tests
-	@cd tests/integration && uv run pytest --basetemp=./tmp/ -vv --reuse --capture=no src/bootstrap/setup.py::test_setup
+py-test-setup: ## Bring up the shared SigNoz backend used by integration and e2e tests
+	@cd tests && uv run pytest --basetemp=./tmp/ -vv --reuse --capture=no integration/bootstrap/setup.py::test_setup
.PHONY: py-test-teardown
-py-test-teardown: ## Runs integration tests with teardown
-	@cd tests/integration && uv run pytest --basetemp=./tmp/ -vv --teardown --capture=no src/bootstrap/setup.py::test_teardown
+py-test-teardown: ## Tear down the shared SigNoz backend
+	@cd tests && uv run pytest --basetemp=./tmp/ -vv --teardown --capture=no integration/bootstrap/setup.py::test_teardown
.PHONY: py-test
py-test: ## Runs integration tests
-	@cd tests/integration && uv run pytest --basetemp=./tmp/ -vv --capture=no src/
+	@cd tests && uv run pytest --basetemp=./tmp/ -vv --capture=no integration/tests/
.PHONY: py-clean
py-clean: ## Clear all pycache and pytest cache from tests directory recursively

View File

@@ -7,6 +7,7 @@ import (
"github.com/spf13/cobra"
"github.com/SigNoz/signoz/cmd"
"github.com/SigNoz/signoz/pkg/alertmanager"
"github.com/SigNoz/signoz/pkg/analytics"
"github.com/SigNoz/signoz/pkg/auditor"
"github.com/SigNoz/signoz/pkg/authn"
@@ -14,22 +15,33 @@ import (
"github.com/SigNoz/signoz/pkg/authz/openfgaauthz"
"github.com/SigNoz/signoz/pkg/authz/openfgaschema"
"github.com/SigNoz/signoz/pkg/authz/openfgaserver"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/gateway"
"github.com/SigNoz/signoz/pkg/gateway/noopgateway"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/licensing/nooplicensing"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration/implcloudintegration"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
"github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/rulestatehistory"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount"
"github.com/SigNoz/signoz/pkg/prometheus"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/query-service/app"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/ruler"
"github.com/SigNoz/signoz/pkg/ruler/signozruler"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/sqlschema"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/SigNoz/signoz/pkg/version"
"github.com/SigNoz/signoz/pkg/zeus"
"github.com/SigNoz/signoz/pkg/zeus/noopzeus"
@@ -71,7 +83,7 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
},
signoz.NewEmailingProviderFactories(),
signoz.NewCacheProviderFactories(),
signoz.NewWebProviderFactories(),
signoz.NewWebProviderFactories(config.Global),
func(sqlstore sqlstore.SQLStore) factory.NamedMap[factory.ProviderFactory[sqlschema.SQLSchema, sqlschema.Config]] {
return signoz.NewSQLSchemaProviderFactories(sqlstore)
},
@@ -100,6 +112,12 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
func(ps factory.ProviderSettings, q querier.Querier, a analytics.Analytics) querier.Handler {
return querier.NewHandler(ps, q, a)
},
func(_ sqlstore.SQLStore, _ global.Global, _ zeus.Zeus, _ gateway.Gateway, _ licensing.Licensing, _ serviceaccount.Module, _ cloudintegration.Config) (cloudintegration.Module, error) {
return implcloudintegration.NewModule(), nil
},
func(c cache.Cache, am alertmanager.Alertmanager, ss sqlstore.SQLStore, ts telemetrystore.TelemetryStore, ms telemetrytypes.MetadataStore, p prometheus.Prometheus, og organization.Getter, rsh rulestatehistory.Module, q querier.Querier, qp queryparser.QueryParser) factory.NamedMap[factory.ProviderFactory[ruler.Ruler, ruler.Config]] {
return factory.MustNewNamedMap(signozruler.NewFactory(c, am, ss, ts, ms, p, og, rsh, q, qp, nil, nil))
},
)
if err != nil {
logger.ErrorContext(ctx, "failed to create signoz", errors.Attr(err))

View File

@@ -17,31 +17,47 @@ import (
"github.com/SigNoz/signoz/ee/gateway/httpgateway"
enterpriselicensing "github.com/SigNoz/signoz/ee/licensing"
"github.com/SigNoz/signoz/ee/licensing/httplicensing"
"github.com/SigNoz/signoz/ee/modules/cloudintegration/implcloudintegration"
"github.com/SigNoz/signoz/ee/modules/cloudintegration/implcloudintegration/implcloudprovider"
"github.com/SigNoz/signoz/ee/modules/dashboard/impldashboard"
eequerier "github.com/SigNoz/signoz/ee/querier"
enterpriseapp "github.com/SigNoz/signoz/ee/query-service/app"
eerules "github.com/SigNoz/signoz/ee/query-service/rules"
"github.com/SigNoz/signoz/ee/sqlschema/postgressqlschema"
"github.com/SigNoz/signoz/ee/sqlstore/postgressqlstore"
enterprisezeus "github.com/SigNoz/signoz/ee/zeus"
"github.com/SigNoz/signoz/ee/zeus/httpzeus"
"github.com/SigNoz/signoz/pkg/alertmanager"
"github.com/SigNoz/signoz/pkg/analytics"
"github.com/SigNoz/signoz/pkg/auditor"
"github.com/SigNoz/signoz/pkg/authn"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/gateway"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration"
pkgcloudintegration "github.com/SigNoz/signoz/pkg/modules/cloudintegration/implcloudintegration"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
pkgimpldashboard "github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/rulestatehistory"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount"
"github.com/SigNoz/signoz/pkg/prometheus"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/ruler"
"github.com/SigNoz/signoz/pkg/ruler/signozruler"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/sqlschema"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/sqlstore/sqlstorehook"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/cloudintegrationtypes"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/SigNoz/signoz/pkg/version"
"github.com/SigNoz/signoz/pkg/zeus"
)
@@ -89,7 +105,7 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
},
signoz.NewEmailingProviderFactories(),
signoz.NewCacheProviderFactories(),
signoz.NewWebProviderFactories(),
signoz.NewWebProviderFactories(config.Global),
func(sqlstore sqlstore.SQLStore) factory.NamedMap[factory.ProviderFactory[sqlschema.SQLSchema, sqlschema.Config]] {
existingFactories := signoz.NewSQLSchemaProviderFactories(sqlstore)
if err := existingFactories.Add(postgressqlschema.NewFactory(sqlstore)); err != nil {
@@ -127,7 +143,6 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
return nil, err
}
return openfgaauthz.NewProviderFactory(sqlstore, openfgaschema.NewSchema().Get(ctx), openfgaDataStore, licensing, dashboardModule), nil
},
func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, queryParser queryparser.QueryParser, querier querier.Querier, licensing licensing.Licensing) dashboard.Module {
return impldashboard.NewModule(pkgimpldashboard.NewStore(store), settings, analytics, orgGetter, queryParser, querier, licensing)
@@ -146,8 +161,24 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
communityHandler := querier.NewHandler(ps, q, a)
return eequerier.NewHandler(ps, q, communityHandler)
},
)
func(sqlStore sqlstore.SQLStore, global global.Global, zeus zeus.Zeus, gateway gateway.Gateway, licensing licensing.Licensing, serviceAccount serviceaccount.Module, config cloudintegration.Config) (cloudintegration.Module, error) {
defStore := pkgcloudintegration.NewServiceDefinitionStore()
awsCloudProviderModule, err := implcloudprovider.NewAWSCloudProvider(defStore)
if err != nil {
return nil, err
}
azureCloudProviderModule := implcloudprovider.NewAzureCloudProvider()
cloudProvidersMap := map[cloudintegrationtypes.CloudProviderType]cloudintegration.CloudProviderModule{
cloudintegrationtypes.CloudProviderTypeAWS: awsCloudProviderModule,
cloudintegrationtypes.CloudProviderTypeAzure: azureCloudProviderModule,
}
return implcloudintegration.NewModule(pkgcloudintegration.NewStore(sqlStore), global, zeus, gateway, licensing, serviceAccount, cloudProvidersMap, config)
},
func(c cache.Cache, am alertmanager.Alertmanager, ss sqlstore.SQLStore, ts telemetrystore.TelemetryStore, ms telemetrytypes.MetadataStore, p prometheus.Prometheus, og organization.Getter, rsh rulestatehistory.Module, q querier.Querier, qp queryparser.QueryParser) factory.NamedMap[factory.ProviderFactory[ruler.Ruler, ruler.Config]] {
return factory.MustNewNamedMap(signozruler.NewFactory(c, am, ss, ts, ms, p, og, rsh, q, qp, eerules.PrepareTaskFunc, eerules.TestNotification))
},
)
if err != nil {
logger.ErrorContext(ctx, "failed to create signoz", errors.Attr(err))
return err

View File

@@ -6,6 +6,8 @@
##################### Global #####################
global:
# the url under which the signoz apiserver is externally reachable.
+# the path component (e.g. /signoz in https://example.com/signoz) is used
+# as the base path for all HTTP routes (both API and web frontend).
external_url: <unset>
# the url where the SigNoz backend receives telemetry data (traces, metrics, logs) from instrumented applications.
ingestion_url: <unset>
@@ -50,8 +52,8 @@ pprof:
web:
# Whether to enable the web frontend
enabled: true
-# The prefix to serve web on
-prefix: /
+# The index file to use as the SPA entrypoint.
+index: index.html
# The directory containing the static build files.
directory: /etc/signoz/web
@@ -82,6 +84,9 @@ sqlstore:
provider: sqlite
# The maximum number of open connections to the database.
max_open_conns: 100
+# The maximum amount of time a connection may be reused.
+# If max_conn_lifetime == 0, connections are not closed due to a connection's age.
+max_conn_lifetime: 0
sqlite:
# The path to the SQLite database file.
path: /var/lib/signoz/signoz.db
@@ -395,3 +400,10 @@ auditor:
max_interval: 30s
# The total maximum time spent retrying.
max_elapsed_time: 60s
+##################### Cloud Integration #####################
+cloudintegration:
+# cloud integration agent configuration
+agent:
+# The version of the cloud integration agent.
+version: v0.0.8

View File

@@ -190,7 +190,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.118.0
image: signoz/signoz:v0.119.0
ports:
- "8080:8080" # signoz port
# - "6060:6060" # pprof port

View File

@@ -117,7 +117,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.118.0
image: signoz/signoz:v0.119.0
ports:
- "8080:8080" # signoz port
volumes:

View File

@@ -181,7 +181,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${VERSION:-v0.118.0}
image: signoz/signoz:${VERSION:-v0.119.0}
container_name: signoz
ports:
- "8080:8080" # signoz port

View File

@@ -109,7 +109,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${VERSION:-v0.118.0}
image: signoz/signoz:${VERSION:-v0.119.0}
container_name: signoz
ports:
- "8080:8080" # signoz port

File diff suppressed because it is too large Load Diff

View File

@@ -1,216 +0,0 @@
# Integration Tests
SigNoz uses integration tests to verify that different components work together correctly in a real environment. These tests run against actual services (ClickHouse, PostgreSQL, etc.) to ensure end-to-end functionality.
## How to set up the integration test environment?
### Prerequisites
Before running integration tests, ensure you have the following installed:
- Python 3.13+
- [uv](https://docs.astral.sh/uv/getting-started/installation/)
- Docker (for containerized services)
### Initial Setup
1. Navigate to the integration tests directory:
```bash
cd tests/integration
```
2. Install dependencies using uv:
```bash
uv sync
```
> **_NOTE:_** the build backend could throw an error while installing `psycopg2`, please see https://www.psycopg.org/docs/install.html#build-prerequisites
### Starting the Test Environment
To spin up all the containers necessary for writing integration tests and keep them running:
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/bootstrap/setup.py::test_setup
```
This command will:
- Start all required services (ClickHouse, PostgreSQL, Zookeeper, etc.)
- Keep containers running due to the `--reuse` flag
- Verify that the setup is working correctly
### Stopping the Test Environment
When you're done writing integration tests, clean up the environment:
```bash
uv run pytest --basetemp=./tmp/ -vv --teardown -s src/bootstrap/setup.py::test_teardown
```
This will destroy the running integration test setup and clean up resources.
## Understanding the Integration Test Framework
Python and pytest form the foundation of the integration testing framework. Testcontainers are used to spin up disposable integration environments. Wiremock is used to spin up **test doubles** of other services.
- **Why Python/pytest?** It's expressive, low-boilerplate, and has powerful fixture capabilities that make integration testing straightforward. Extensive libraries for HTTP requests, JSON handling, and data analysis (numpy) make it easier to test APIs and verify data
- **Why testcontainers?** They let us spin up isolated dependencies that match our production environment without complex setup.
- **Why wiremock?** Well maintained, documented and extensible.
```
.
├── conftest.py
├── fixtures
│ ├── __init__.py
│ ├── auth.py
│ ├── clickhouse.py
│ ├── fs.py
│ ├── http.py
│ ├── migrator.py
│ ├── network.py
│ ├── postgres.py
│ ├── signoz.py
│ ├── sql.py
│ ├── sqlite.py
│ ├── types.py
│ └── zookeeper.py
├── uv.lock
├── pyproject.toml
└── src
└── bootstrap
├── __init__.py
├── 01_database.py
├── 02_register.py
└── 03_license.py
```
Each test suite follows some important principles:
1. **Organization**: Test suites live under `src/` in self-contained packages. Fixtures (a pytest concept) live inside `fixtures/`.
2. **Execution Order**: Files are prefixed with two-digit numbers (`01_`, `02_`, `03_`) to ensure sequential execution.
3. **Time Constraints**: Each suite should complete in under 10 minutes (setup takes ~4 mins).
### Test Suite Design
Test suites should target functional domains or subsystems within SigNoz. When designing a test suite, consider these principles:
- **Functional Cohesion**: Group tests around a specific capability or service boundary
- **Data Flow**: Follow the path of data through related components
- **Change Patterns**: Components frequently modified together should be tested together
The exact boundaries for modules are intentionally flexible, allowing teams to define logical groupings based on their specific context and knowledge of the system.
Eg: The **bootstrap** integration test suite validates core system functionality:
- Database initialization
- Version check
Other test suites can be **pipelines, auth, querier.**
## How to write an integration test?
Now start writing an integration test. Create a new file `src/bootstrap/05_version.py` and paste the following:
```python
import requests
from fixtures import types
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
def test_version(signoz: types.SigNoz) -> None:
response = requests.get(signoz.self.host_config.get("/api/v1/version"), timeout=2)
logger.info(response)
```
We have written a simple test which calls the `version` endpoint of the container in step 1. **To run just this function, run the following command:**
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/bootstrap/05_version.py::test_version
```
> Note: The `--reuse` flag is used to reuse the environment if it is already running. Always use this flag when writing and running integration tests. If you don't use this flag, the environment will be destroyed and recreated every time you run the test.
Here's another example of how to write a more comprehensive integration test:
```python
from http import HTTPStatus
import requests
from fixtures import types
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
def test_user_registration(signoz: types.SigNoz) -> None:
"""Test user registration functionality."""
response = requests.post(
signoz.self.host_configs["8080"].get("/api/v1/register"),
json={
"name": "testuser",
"orgId": "",
"orgName": "test.org",
"email": "test@example.com",
"password": "password123Z$",
},
timeout=2,
)
assert response.status_code == HTTPStatus.OK
assert response.json()["setupCompleted"] is True
```
## How to run integration tests?
### Running All Tests
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/
```
### Running Specific Test Categories
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/<suite>
# Run querier tests
uv run pytest --basetemp=./tmp/ -vv --reuse src/querier/
# Run auth tests
uv run pytest --basetemp=./tmp/ -vv --reuse src/auth/
```
### Running Individual Tests
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/<suite>/<file>.py::test_name
# Run test_register in file 01_register.py in passwordauthn suite
uv run pytest --basetemp=./tmp/ -vv --reuse src/passwordauthn/01_register.py::test_register
```
## How to configure different options for integration tests?
Tests can be configured using pytest options:
- `--sqlstore-provider` - Choose database provider (default: postgres)
- `--sqlite-mode` - SQLite journal mode: `delete` or `wal` (default: delete). Only relevant when `--sqlstore-provider=sqlite`.
- `--postgres-version` - PostgreSQL version (default: 15)
- `--clickhouse-version` - ClickHouse version (default: 25.5.6)
- `--zookeeper-version` - Zookeeper version (default: 3.7.1)
Example:
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse --sqlstore-provider=postgres --postgres-version=14 src/auth/
```
## What should I remember?
- **Always use the `--reuse` flag** when setting up the environment to keep containers running
- **Use the `--teardown` flag** when cleaning up to avoid resource leaks
- **Follow the naming convention** with two-digit numeric prefixes (`01_`, `02_`) for test execution order
- **Use proper timeouts** in HTTP requests to avoid hanging tests
- **Clean up test data** between tests to avoid interference
- **Use descriptive test names** that clearly indicate what is being tested
- **Leverage fixtures** for common setup and authentication
- **Test both success and failure scenarios** to ensure robust functionality
- **`--sqlite-mode=wal` does not work on macOS.** The integration test environment runs SigNoz inside a Linux container with the SQLite database file mounted from the macOS host. WAL mode requires shared memory between connections, and connections crossing the VM boundary (macOS host ↔ Linux container) cannot share the WAL index, resulting in `SQLITE_IOERR_SHORT_READ`. WAL mode is tested in CI on Linux only.

View File

@@ -15,8 +15,8 @@ We **recommend** (almost enforce) reviewing these guides before contributing to
- [Endpoint](endpoint.md) - HTTP endpoint patterns
- [Flagger](flagger.md) - Feature flag patterns
- [Handler](handler.md) - HTTP handler patterns
- [Integration](integration.md) - Integration testing
- [Provider](provider.md) - Dependency injection and provider patterns
- [Packages](packages.md) - Naming, layout, and conventions for `pkg/` packages
- [Service](service.md) - Managed service lifecycle with `factory.Service`
- [SQL](sql.md) - Database and SQL patterns
- [Types](types.md) - Domain types, request/response bodies, and storage rows in `pkg/types/`

View File

@@ -0,0 +1,152 @@
# Types
Domain types in `pkg/types/<domain>/` live on three serialization boundaries — inbound HTTP, outbound HTTP, and SQL — on top of an in-memory domain representation. SigNoz's convention is **core-type-first**: every domain defines a single canonical type `X`, and specialized flavors (`PostableX`, `GettableX`, `UpdatableX`, `StorableX`) are introduced **only when they actually differ from `X`**. This guide spells out when each flavor is warranted and how they relate to each other.
Before reading, make sure you have read [abstractions.md](abstractions.md) — the rules here build on its guidance that every new type must earn its place.
## The core type is required
Every domain package in `pkg/types/<domain>/` defines exactly one core type `X`: `AuthDomain`, `Channel`, `Rule`, `Dashboard`, `Role`, `PlannedMaintenance`. This is the canonical in-memory representation of the domain object. Domain methods, validation invariants, and business logic hang off `X` — not off the flavor types.
Two rules shape how the core type behaves:
- **Conversions can be either `New<Output>From<Input>` or a receiver-style `(x *X) ToY()` method.** Either form is fine; pick whichever reads best at the call site:
```go
// Constructor form
func NewGettableAuthDomainFromAuthDomain(d *AuthDomain, info *AuthNProviderInfo) *GettableAuthDomain
// Receiver form
func (m *PlannedMaintenanceWithRules) ToPlannedMaintenance() *PlannedMaintenance
```
- **`X` can double as the storage row** when the DB shape would be identical. `Channel` embeds `bun.BaseModel` directly, and there is no `StorableChannel`. This is the preferred shape when it works.
Domain packages under `pkg/types/` must not import from other `pkg/` packages. Keep the core type's methods lightweight and push orchestration out to the module layer.
## Add a flavor only when it differs
For each of the four flavors, create it only if its shape diverges from `X`. If a flavor would have the same fields and tags as `X`, reuse `X` directly, or declare a type alias. Every flavor must earn its place per [abstractions.md](abstractions.md) rule 6 ("Wrappers must add semantics, not just rename").
| Flavor | Create it when it differs in… |
| ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `PostableX` | JSON shape differs from `X` — typically no `Id`, no audit fields, no server-computed fields. Often owns input validation via `Validate()` or a custom `UnmarshalJSON`. |
| `GettableX` | Response shape adds server-computed fields that are not persisted — e.g., `GettableAuthDomain` adds `AuthNProviderInfo`, which is resolved at read time. |
| `UpdatableX` | Only a strict subset of `PostableX` is replaceable on PUT. If the updatable shape equals `PostableX`, reuse `PostableX`. |
| `StorableX` | DB row shape differs from `X` — usually `X` carries nested typed config while `StorableX` carries a flat `Data string` JSON column, plus bun tags, audit mixins, and an `OrgID`. If `X` already has those, skip the flavor. |
The failure mode this rule exists to prevent: minting all four flavors on reflex for every new resource, even when two or three are structurally identical. Each unnecessary flavor is another type contributors must understand and another conversion that can drift.
## Worked examples
### Channel — core type only
```go
type Channels = []*Channel
type GettableChannels = []*Channel
type Channel struct {
bun.BaseModel `bun:"table:notification_channel"`
types.Identifiable
types.TimeAuditable
Name string `json:"name" required:"true" bun:"name"`
Type string `json:"type" required:"true" bun:"type"`
Data string `json:"data" required:"true" bun:"data"`
OrgID string `json:"orgId" required:"true" bun:"org_id"`
}
```
`Channel` is both the domain type and the bun row. `GettableChannels` is a **type alias** because `*Channel` already serializes correctly as a response. There is no `StorableChannel`, `PostableChannel`, or `UpdatableChannel` — those would be identical to `Channel` and so do not exist. Prefer this shape when it works.
### AuthDomain — all four flavors
```go
type AuthDomain struct {
storableAuthDomain *StorableAuthDomain
authDomainConfig *AuthDomainConfig
}
type StorableAuthDomain struct {
bun.BaseModel `bun:"table:auth_domain"`
types.Identifiable
Name string `bun:"name"`
Data string `bun:"data"` // AuthDomainConfig serialized as JSON
OrgID valuer.UUID `bun:"org_id"`
types.TimeAuditable
}
type PostableAuthDomain struct {
Config AuthDomainConfig `json:"config"`
Name string `json:"name"`
}
type UpdateableAuthDomain struct {
Config AuthDomainConfig `json:"config"` // Name intentionally absent
}
type GettableAuthDomain struct {
*StorableAuthDomain
*AuthDomainConfig
AuthNProviderInfo *AuthNProviderInfo `json:"authNProviderInfo"`
}
```
Each flavor exists for a concrete reason:
- `StorableAuthDomain` stores the typed config as an opaque `Data string` column, so the schema does not need to migrate every time a config field is added.
- `PostableAuthDomain` carries the config as a structured object (not a string) for the request.
- `UpdateableAuthDomain` excludes `Name` because a domain's name cannot change after creation.
- `GettableAuthDomain` adds `AuthNProviderInfo`, which is derived at read time and never persisted.
The core `AuthDomain` holds the two live halves — `storableAuthDomain` and `authDomainConfig` — and owns business methods such as `Update(config)`. Conversions use the `New<Output>From<Input>` form: `NewAuthDomainFromConfig`, `NewAuthDomainFromStorableAuthDomain`, `NewGettableAuthDomainFromAuthDomain`.
## Conventions that tie the flavors together
- **Conversions** use either a `New<Output>From<Input>` constructor — e.g. `NewChannelFromReceiver`, `NewGettableAuthDomainFromAuthDomain` — or a receiver-style `ToY()` method. Both forms coexist in the codebase; use whichever fits the call site.
- **Validation belongs on the core type `X`.** Putting it on `X` means every write path — HTTP create, HTTP update, in-process migration, replay — runs the same checks. `Validate()` on `PostableX` is reserved for checks that are specific to the request shape and do not apply to `X`. `UnmarshalJSON` on `PostableX` is a separate tool that lives there because decoding only happens at the HTTP boundary — `PostableAuthDomain.UnmarshalJSON` rejecting a malformed domain name at decode time is the canonical example.
```go
// Domain invariants: every write path re-runs these.
func (x *X) Validate() error { ... }
// Request-shape-only: checks that do not apply once the value is persisted.
func (p *PostableX) Validate() error { ... }
```
- **Type aliases, not wrappers**, when two shapes are identical. `type GettableChannels = []*Channel` is correct because it adds no semantics beyond the underlying type.
- **Serialization tags** follow [handler.md](handler.md): `required:"true"` means the JSON key must be present, `nullable:"true"` is required on any slice or map that may serialize as `null`, and types with a fixed value set must implement `Enum() []any`.
## A note on `UpdatableX` and `PatchableX`
- `UpdatableX` — the body for PUT (full replace) when the shape is a strict subset of `PostableX`. If the updatable shape equals `PostableX`, reuse `PostableX`.
- `PatchableX` — the body for PATCH (partial update); only the fields a client is allowed to patch. For example, `PatchableRole` carries a single `Description` field even though `Role` has many — clients may patch the description but not anything else.
```go
type PatchableRole struct {
Description string `json:"description"`
}
```
Both are optional. Do not introduce them if `PostableX` already covers the case.
## What to avoid
- **Do not mint a flavor that mirrors the core type.** If `StorableX` would have the same fields as `X`, use `X` directly with `bun.BaseModel` embedded. `Channel` is the canonical example.
- **Do not bolt domain methods onto `StorableX`.** Storage types are data carriers. Domain methods live on `X`.
- **Do not invent new suffixes** (`Creatable`, `Fetchable`, `Savable`). The core type plus `Postable` / `Gettable` / `Updatable` / `Patchable` / `Storable` covers every case that exists today.
- **Spelling — `Updatable`, not `Updateable`.** `Updateable` is a common typo. Prefer the shorter form when introducing new types, and rename any stragglers you come across.
- **Spelling — `Storable`, not `Storeable`.** `Storeable` is a common typo. Prefer the shorter form when introducing new types, and rename any stragglers you come across.
## What should I remember?
- Every domain package defines the core type `X`. Only `X` is mandatory.
- Add `PostableX` / `GettableX` / `UpdatableX` / `StorableX` one at a time, only when the shape actually diverges from `X`.
- Domain logic lives on `X`, not on the flavor types.
- Conversions can be a `New<Output>From<Input>` constructor or a receiver-style `ToY()` method — pick whichever reads best at the call site.
- Use a type alias when two shapes are truly identical.
- `pkg/types/<domain>/` must not import from other `pkg/` packages.
## Further reading
- [abstractions.md](abstractions.md) — when to introduce a new type at all.
- [handler.md](handler.md) — struct tag rules at the HTTP boundary.
- [packages.md](packages.md) — where types live under `pkg/types/`.
- [sql.md](sql.md) — star-schema requirements for `StorableX`.

View File

@@ -7,12 +7,12 @@ This guide explains how to add new data sources to the SigNoz onboarding flow. T
The configuration is located at:
```
frontend/src/container/OnboardingV2Container/onboarding-configs/onboarding-config-with-links.json
frontend/src/container/OnboardingV2Container/onboarding-configs/onboarding-config-with-links.ts
```
## JSON Structure Overview
## Structure Overview
The configuration file is a JSON array containing data source objects. Each object represents a selectable option in the onboarding flow.
The configuration file exports a TypeScript array (`onboardingConfigWithLinks`) containing data source objects. Each object represents a selectable option in the onboarding flow. SVG logos are imported as ES modules at the top of the file.
## Data Source Object Keys
@@ -24,7 +24,7 @@ The configuration file is a JSON array containing data source objects. Each obje
| `label` | `string` | Display name shown to users (e.g., `"AWS EC2"`) |
| `tags` | `string[]` | Array of category tags for grouping (e.g., `["AWS"]`, `["database"]`) |
| `module` | `string` | Destination module after onboarding completion |
| `imgUrl` | `string` | Path to the logo/icon **(SVG required)** (e.g., `"/Logos/ec2.svg"`) |
| `imgUrl` | `string` | Imported SVG URL **(SVG required)** (e.g., `import ec2Url from '@/assets/Logos/ec2.svg'`, then use `ec2Url`) |
### Optional Keys
@@ -57,36 +57,34 @@ The `module` key determines where users are redirected after completing onboardi
The `question` object enables multi-step selection flows:
```json
{
"question": {
"desc": "What would you like to monitor?",
"type": "select",
"helpText": "Choose the telemetry type you want to collect.",
"helpLink": "/docs/azure-monitoring/overview/",
"helpLinkText": "Read the guide →",
"options": [
{
"key": "logging",
"label": "Logs",
"imgUrl": "/Logos/azure-vm.svg",
"link": "/docs/azure-monitoring/app-service/logging/"
},
{
"key": "metrics",
"label": "Metrics",
"imgUrl": "/Logos/azure-vm.svg",
"link": "/docs/azure-monitoring/app-service/metrics/"
},
{
"key": "tracing",
"label": "Traces",
"imgUrl": "/Logos/azure-vm.svg",
"link": "/docs/azure-monitoring/app-service/tracing/"
}
]
}
}
```ts
question: {
desc: 'What would you like to monitor?',
type: 'select',
helpText: 'Choose the telemetry type you want to collect.',
helpLink: '/docs/azure-monitoring/overview/',
helpLinkText: 'Read the guide →',
options: [
{
key: 'logging',
label: 'Logs',
imgUrl: azureVmUrl,
link: '/docs/azure-monitoring/app-service/logging/',
},
{
key: 'metrics',
label: 'Metrics',
imgUrl: azureVmUrl,
link: '/docs/azure-monitoring/app-service/metrics/',
},
{
key: 'tracing',
label: 'Traces',
imgUrl: azureVmUrl,
link: '/docs/azure-monitoring/app-service/tracing/',
},
],
},
```
### Question Keys
@@ -106,152 +104,161 @@ Options can be simple (direct link) or nested (with another question):
### Simple Option (Direct Link)
```json
```ts
{
"key": "aws-ec2-logs",
"label": "Logs",
"imgUrl": "/Logos/ec2.svg",
"link": "/docs/userguide/collect_logs_from_file/"
}
key: 'aws-ec2-logs',
label: 'Logs',
imgUrl: ec2Url,
link: '/docs/userguide/collect_logs_from_file/',
},
```
### Option with Internal Redirect
```json
```ts
{
"key": "aws-ec2-metrics-one-click",
"label": "One Click AWS",
"imgUrl": "/Logos/ec2.svg",
"link": "/integrations?integration=aws-integration&service=ec2",
"internalRedirect": true
}
key: 'aws-ec2-metrics-one-click',
label: 'One Click AWS',
imgUrl: ec2Url,
link: '/integrations?integration=aws-integration&service=ec2',
internalRedirect: true,
},
```
> **Important**: Set `internalRedirect: true` only for internal app routes (like `/integrations?...`). Docs links should NOT have this flag.
### Nested Option (Multi-step Flow)
```json
```ts
{
"key": "aws-ec2-metrics",
"label": "Metrics",
"imgUrl": "/Logos/ec2.svg",
"question": {
"desc": "How would you like to set up monitoring?",
"helpText": "Choose your setup method.",
"options": [...]
}
}
key: 'aws-ec2-metrics',
label: 'Metrics',
imgUrl: ec2Url,
question: {
desc: 'How would you like to set up monitoring?',
helpText: 'Choose your setup method.',
options: [...],
},
},
```
## Examples
### Simple Data Source (Direct Link)
```json
```ts
import elbUrl from '@/assets/Logos/elb.svg';
// inside the onboardingConfigWithLinks array:
{
"dataSource": "aws-elb",
"label": "AWS ELB",
"tags": ["AWS"],
"module": "logs",
"relatedSearchKeywords": [
"aws",
"aws elb",
"elb logs",
"elastic load balancer"
dataSource: 'aws-elb',
label: 'AWS ELB',
tags: ['AWS'],
module: 'logs',
relatedSearchKeywords: [
'aws',
'aws elb',
'elb logs',
'elastic load balancer',
],
"imgUrl": "/Logos/elb.svg",
"link": "/docs/aws-monitoring/elb/"
}
imgUrl: elbUrl,
link: '/docs/aws-monitoring/elb/',
},
```
### Data Source with Single Question Level
```json
```ts
import azureVmUrl from '@/assets/Logos/azure-vm.svg';
// inside the onboardingConfigWithLinks array:
{
"dataSource": "app-service",
"label": "App Service",
"imgUrl": "/Logos/azure-vm.svg",
"tags": ["Azure"],
"module": "apm",
"relatedSearchKeywords": ["azure", "app service"],
"question": {
"desc": "What telemetry data do you want to visualise?",
"type": "select",
"options": [
dataSource: 'app-service',
label: 'App Service',
imgUrl: azureVmUrl,
tags: ['Azure'],
module: 'apm',
relatedSearchKeywords: ['azure', 'app service'],
question: {
desc: 'What telemetry data do you want to visualise?',
type: 'select',
options: [
{
"key": "logging",
"label": "Logs",
"imgUrl": "/Logos/azure-vm.svg",
"link": "/docs/azure-monitoring/app-service/logging/"
key: 'logging',
label: 'Logs',
imgUrl: azureVmUrl,
link: '/docs/azure-monitoring/app-service/logging/',
},
{
"key": "metrics",
"label": "Metrics",
"imgUrl": "/Logos/azure-vm.svg",
"link": "/docs/azure-monitoring/app-service/metrics/"
key: 'metrics',
label: 'Metrics',
imgUrl: azureVmUrl,
link: '/docs/azure-monitoring/app-service/metrics/',
},
{
"key": "tracing",
"label": "Traces",
"imgUrl": "/Logos/azure-vm.svg",
"link": "/docs/azure-monitoring/app-service/tracing/"
}
]
}
}
key: 'tracing',
label: 'Traces',
imgUrl: azureVmUrl,
link: '/docs/azure-monitoring/app-service/tracing/',
},
],
},
},
```
### Data Source with Nested Questions (2-3 Levels)
```json
```ts
import ec2Url from '@/assets/Logos/ec2.svg';
// inside the onboardingConfigWithLinks array:
{
"dataSource": "aws-ec2",
"label": "AWS EC2",
"tags": ["AWS"],
"module": "logs",
"relatedSearchKeywords": ["aws", "aws ec2", "ec2 logs", "ec2 metrics"],
"imgUrl": "/Logos/ec2.svg",
"question": {
"desc": "What would you like to monitor for AWS EC2?",
"type": "select",
"helpText": "Choose the type of telemetry data you want to collect.",
"options": [
dataSource: 'aws-ec2',
label: 'AWS EC2',
tags: ['AWS'],
module: 'logs',
relatedSearchKeywords: ['aws', 'aws ec2', 'ec2 logs', 'ec2 metrics'],
imgUrl: ec2Url,
question: {
desc: 'What would you like to monitor for AWS EC2?',
type: 'select',
helpText: 'Choose the type of telemetry data you want to collect.',
options: [
{
"key": "aws-ec2-logs",
"label": "Logs",
"imgUrl": "/Logos/ec2.svg",
"link": "/docs/userguide/collect_logs_from_file/"
key: 'aws-ec2-logs',
label: 'Logs',
imgUrl: ec2Url,
link: '/docs/userguide/collect_logs_from_file/',
},
{
"key": "aws-ec2-metrics",
"label": "Metrics",
"imgUrl": "/Logos/ec2.svg",
"question": {
"desc": "How would you like to set up EC2 Metrics monitoring?",
"helpText": "One Click uses AWS CloudWatch integration. Manual setup uses OpenTelemetry.",
"helpLink": "/docs/aws-monitoring/one-click-vs-manual/",
"helpLinkText": "Read the comparison guide →",
"options": [
key: 'aws-ec2-metrics',
label: 'Metrics',
imgUrl: ec2Url,
question: {
desc: 'How would you like to set up EC2 Metrics monitoring?',
helpText: 'One Click uses AWS CloudWatch integration. Manual setup uses OpenTelemetry.',
helpLink: '/docs/aws-monitoring/one-click-vs-manual/',
helpLinkText: 'Read the comparison guide →',
options: [
{
"key": "aws-ec2-metrics-one-click",
"label": "One Click AWS",
"imgUrl": "/Logos/ec2.svg",
"link": "/integrations?integration=aws-integration&service=ec2",
"internalRedirect": true
key: 'aws-ec2-metrics-one-click',
label: 'One Click AWS',
imgUrl: ec2Url,
link: '/integrations?integration=aws-integration&service=ec2',
internalRedirect: true,
},
{
"key": "aws-ec2-metrics-manual",
"label": "Manual Setup",
"imgUrl": "/Logos/ec2.svg",
"link": "/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/"
}
]
}
}
]
}
}
key: 'aws-ec2-metrics-manual',
label: 'Manual Setup',
imgUrl: ec2Url,
link: '/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/',
},
],
},
},
],
},
},
```
## Best Practices
@@ -270,11 +277,16 @@ Options can be simple (direct link) or nested (with another question):
### 3. Logos
- Place logo files in `public/Logos/`
- Place logo files in `src/assets/Logos/`
- Use SVG format
- Reference as `"/Logos/your-logo.svg"`
- Import the SVG at the top of the file and reference the imported variable:
```ts
import myServiceUrl from '@/assets/Logos/my-service.svg';
// then in the config object:
imgUrl: myServiceUrl,
```
- **Fetching Icons**: New icons can be easily fetched from [OpenBrand](https://openbrand.sh/). Use the pattern `https://openbrand.sh/?url=<TARGET_URL>`, where `<TARGET_URL>` is the URL-encoded link to the service's website. For example, to get Render's logo, use [https://openbrand.sh/?url=https%3A%2F%2Frender.com](https://openbrand.sh/?url=https%3A%2F%2Frender.com).
- **Optimize new SVGs**: Run any newly downloaded SVGs through an optimizer like [SVGOMG (svgo)](https://svgomg.net/) or use `npx svgo public/Logos/your-logo.svg` to minimise their size before committing.
- **Optimize new SVGs**: Run any newly downloaded SVGs through an optimizer like [SVGOMG (svgo)](https://svgomg.net/) or use `npx svgo src/assets/Logos/your-logo.svg` to minimise their size before committing.
### 4. Links
@@ -290,8 +302,8 @@ Options can be simple (direct link) or nested (with another question):
## Adding a New Data Source
1. Add your data source object to the JSON array
2. Ensure the logo exists in `public/Logos/`
1. Add the logo SVG to `src/assets/Logos/` and add a top-level import in the config file (e.g., `import myServiceUrl from '@/assets/Logos/my-service.svg'`)
2. Add your data source object to the `onboardingConfigWithLinks` array, referencing the imported variable for `imgUrl`
3. Test the flow locally with `yarn dev`
4. Validation:
- Navigate to the [onboarding page](http://localhost:3301/get-started-with-signoz-cloud) on your local machine

View File

@@ -0,0 +1,261 @@
# E2E Tests
SigNoz uses end-to-end tests to verify the frontend works correctly against a real backend. These tests use Playwright to drive a real browser against a containerized SigNoz stack that pytest brings up — the same fixture graph integration tests use, with an extra HTTP seeder container for per-spec telemetry seeding.
## How to set up the E2E test environment?
### Prerequisites
Before running E2E tests, ensure you have the following installed:
- Python 3.13+
- [uv](https://docs.astral.sh/uv/getting-started/installation/)
- Docker (for containerized services)
- Node 18+ and Yarn
### Initial Setup
1. Install Python deps for the shared tests project:
```bash
cd tests
uv sync
```
2. Install Node deps and Playwright browsers:
```bash
cd e2e
yarn install
yarn install:browsers # one-time Playwright browser install
```
### Starting the Test Environment
To spin up the backend stack (SigNoz, ClickHouse, Postgres, Zookeeper, Zeus mock, gateway mock, seeder, migrator-with-web) and keep it running:
```bash
cd tests
uv run pytest --basetemp=./tmp/ -vv --reuse --with-web \
e2e/bootstrap/setup.py::test_setup
```
This command will:
- Bring up all containers via pytest fixtures
- Register the admin user (`admin@integration.test` / `password123Z$`)
- Apply the enterprise license (via a WireMock stub of Zeus) and dismiss the org-onboarding prompt so specs can navigate directly to feature pages
- Start the HTTP seeder container (`tests/seeder/` — exposing `/telemetry/{traces,logs,metrics}` POST + DELETE)
- Write backend coordinates to `tests/e2e/.env.local` (loaded by `playwright.config.ts` via dotenv)
- Keep containers running via the `--reuse` flag
The `--with-web` flag builds the frontend into the SigNoz container — required for E2E. The build takes ~4 mins on a cold start.
### Stopping the Test Environment
When you're done writing E2E tests, clean up the environment:
```bash
cd tests
uv run pytest --basetemp=./tmp/ -vv --teardown \
e2e/bootstrap/setup.py::test_teardown
```
## Understanding the E2E Test Framework
Playwright drives a real browser (Chromium / Firefox / WebKit) against the running SigNoz frontend. The backend is brought up by the same pytest fixture graph integration tests use, so both suites share one source of truth for container lifecycle, license seeding, and test-user accounts.
- **Why Playwright?** First-class TypeScript support, network interception, automatic wait-for-visibility, built-in trace viewer that captures every request/response the UI triggers — so specs rarely need separate API probes alongside UI clicks.
- **Why pytest for lifecycle?** The integration suite already owns container bring-up. Reusing it keeps the E2E stack exactly in sync with the integration stack and avoids a parallel lifecycle framework.
- **Why a separate seeder container?** Per-spec telemetry seeding (traces / logs / metrics) needs a thin HTTP wrapper around the ClickHouse insert helpers so a browser spec can POST from inside the test. The seeder lives at `tests/seeder/`, is built from `tests/Dockerfile.seeder`, and reuses the same `fixtures/{traces,logs,metrics}.py` as integration tests.
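For orientation, here is a minimal sketch of the seeder's shape (not the real `tests/seeder/server.py`), assuming generic route and helper names; the logs/metrics routes and the DELETE handlers follow the same pattern:
```python
# Sketch only: insert_traces stands in for the shared fixtures.traces helper,
# and the ClickHouse host wiring is assumed, not the repo's actual code.
from contextlib import asynccontextmanager

import clickhouse_connect
from fastapi import FastAPI


def insert_traces(client, rows: list[dict]) -> None:
    """Stand-in for the shared fixtures helper that writes spans to ClickHouse."""


@asynccontextmanager
async def lifespan(app: FastAPI):
    # One ClickHouse client for the app's lifetime, closed on shutdown.
    app.state.clickhouse = clickhouse_connect.get_client(host="clickhouse")
    yield
    app.state.clickhouse.close()


app = FastAPI(lifespan=lifespan)


@app.post("/telemetry/traces")
async def seed_traces(rows: list[dict]) -> dict:
    insert_traces(app.state.clickhouse, rows)
    return {"inserted": len(rows)}
```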
```
tests/
├── fixtures/ # shared with integration (see integration.md)
├── integration/ # pytest integration suite
├── seeder/ # standalone HTTP seeder container
│ ├── __init__.py
│ ├── Dockerfile
│ └── server.py # FastAPI app wrapping fixtures.{traces,logs,metrics}
└── e2e/
├── package.json
├── playwright.config.ts # loads .env + .env.local via dotenv
├── .env.example # staging-mode template
├── .env.local # generated by bootstrap/setup.py (gitignored)
├── bootstrap/
│ └── setup.py # test_setup / test_teardown — pytest lifecycle
├── fixtures/
│ └── auth.ts # authedPage Playwright fixture + per-worker storageState cache
├── tests/ # Playwright .spec.ts files, one dir per feature area
│ └── alerts/
│ └── alerts.spec.ts
└── artifacts/ # per-run output (gitignored)
├── html/ # HTML reporter output
├── json/ # JSON reporter output
└── results/ # per-test traces / screenshots / videos on failure
```
Each spec follows these principles:
1. **Directory per feature**: `tests/e2e/tests/<feature>/*.spec.ts`. Cross-resource junction concerns (e.g. cascade-delete) go in their own file, not packed into one giant spec.
2. **Test titles use `TC-NN`**: `test('TC-01 alerts page — tabs render', ...)`. Preserves ordering at a glance and maps to external coverage tracking.
3. **UI-first**: drive flows through the UI. Playwright traces capture every BE request/response the UI triggers, so asserting on UI outcomes implicitly validates BE contracts. Reach for direct `page.request.*` only when the test's *purpose* is asserting a response contract (use `page.waitForResponse` on a UI click) or when a specific UI step is structurally flaky (e.g. Ant DatePicker calendar-cell indices) — and even then try UI first.
4. **Self-contained state**: each spec creates what it needs and cleans up in `try/finally`. No global pre-seeding fixtures.
## How to write an E2E test?
Create a new file `tests/e2e/tests/alerts/smoke.spec.ts`:
```typescript
import { test, expect } from '../../fixtures/auth';
test('TC-01 alerts page — tabs render', async ({ authedPage: page }) => {
await page.goto('/alerts');
await expect(page.getByRole('tab', { name: /alert rules/i })).toBeVisible();
await expect(page.getByRole('tab', { name: /configuration/i })).toBeVisible();
});
```
The `authedPage` fixture (from `tests/e2e/fixtures/auth.ts`) gives you a `Page` whose browser context is already authenticated as the admin user. First use per worker triggers one login; the resulting `storageState` is held in memory and reused for later requests.
To run just this test (assuming the stack is up via `test_setup`):
```bash
cd tests/e2e
npx playwright test tests/alerts/smoke.spec.ts --project=chromium
```
Here's a more comprehensive example that exercises a CRUD flow via the UI:
```typescript
import { test, expect } from '../../fixtures/auth';
test.describe.configure({ mode: 'serial' });
test('TC-02 alerts list — create, toggle, delete', async ({ authedPage: page }) => {
await page.goto('/alerts?tab=AlertRules');
const name = 'smoke-rule';
// Seed via UI — click "New Alert", fill form, save.
await page.getByRole('button', { name: /new alert/i }).click();
await page.getByTestId('alert-name-input').fill(name);
// ... fill metric / threshold / save ...
// Find the row and exercise the action menu.
const row = page.locator('tr', { hasText: name });
await expect(row).toBeVisible();
await row.locator('[data-testid="alert-actions"] button').first().click();
// waitForResponse captures the network call the UI triggers — no parallel fetch needed.
const patchWait = page.waitForResponse(
(r) => r.url().includes('/rules/') && r.request().method() === 'PATCH',
);
await page.getByRole('menuitem').filter({ hasText: /^disable$/i }).click();
await patchWait;
await expect(row).toContainText(/disabled/i);
});
```
### Locator priority
1. `getByRole('button', { name: 'Submit' })`
2. `getByLabel('Email')`
3. `getByPlaceholder('...')`
4. `getByText('...')`
5. `getByTestId('...')`
6. `locator('.ant-select')` — last resort (Ant Design dropdowns often have no semantic alternative)
## How to run E2E tests?
### Running All Tests
With the stack already up, from `tests/e2e/`:
```bash
yarn test # headless, all projects
```
### Running Specific Projects
```bash
yarn test:chromium # chromium only
yarn test:firefox
yarn test:webkit
```
### Running Specific Tests
```bash
cd tests/e2e
# Single feature dir
npx playwright test tests/alerts/ --project=chromium
# Single file
npx playwright test tests/alerts/alerts.spec.ts --project=chromium
# Single test by title grep
npx playwright test --project=chromium -g "TC-01"
```
### Iterative modes
```bash
yarn test:ui # Playwright UI mode — watch + step through
yarn test:headed # headed browser
yarn test:debug # Playwright inspector, pause-on-breakpoint
yarn codegen # record-and-replay locator generation
yarn report # open the last HTML report (artifacts/html)
```
### Staging fallback
Point `SIGNOZ_E2E_BASE_URL` at a remote env via `.env` — no local backend bring-up, no `.env.local` generated, Playwright hits the URL directly:
```bash
cd tests/e2e
cp .env.example .env # fill SIGNOZ_E2E_USERNAME / PASSWORD
yarn test:staging
```
## How to configure different options for E2E tests?
### Environment variables
| Variable | Description |
|---|---|
| `SIGNOZ_E2E_BASE_URL` | Base URL the browser targets. Written by `bootstrap/setup.py` for local mode; set manually for staging. |
| `SIGNOZ_E2E_USERNAME` | Admin email. Bootstrap writes `admin@integration.test`. |
| `SIGNOZ_E2E_PASSWORD` | Admin password. Bootstrap writes the integration-test default. |
| `SIGNOZ_E2E_SEEDER_URL` | Seeder HTTP base URL — hit by specs that need per-test telemetry. |
Loading order in `playwright.config.ts`: `.env` first (user-provided, staging), then `.env.local` with `override: true` (bootstrap-generated, local mode). Anything already set in `process.env` at yarn-test time wins because dotenv doesn't touch vars that are already present.
### Playwright options
The full `playwright.config.ts` is the source of truth. Common things to tweak:
- `projects` — Chromium / Firefox / WebKit are enabled by default. Disable to speed up iteration.
- `retries` — `2` on CI (`process.env.CI`), `0` locally.
- `fullyParallel: true` — files run in parallel by worker; within a file, use `test.describe.configure({ mode: 'serial' })` if tests share list pages / mutate shared state.
- `trace: 'on-first-retry'`, `screenshot: 'only-on-failure'`, `video: 'retain-on-failure'` — default diagnostic artifacts land in `artifacts/results/<test>/`.
### Pytest options (bootstrap side)
The same pytest flags integration tests expose work here, since E2E reuses the shared fixture graph:
- `--reuse` — keep containers warm between runs (required for all iteration).
- `--teardown` — tear everything down.
- `--with-web` — build the frontend into the SigNoz container. **Required for E2E**; integration tests don't need it.
- `--sqlstore-provider`, `--postgres-version`, `--clickhouse-version`, etc. — see `docs/contributing/integration.md`.
## What should I remember?
- **Always use the `--reuse` flag** when setting up the E2E stack. `--with-web` adds a ~4 min frontend build; you only want to pay that once.
- **Don't teardown before setup.** `--reuse` correctly handles partially-set-up state, so chaining teardown → setup wastes time.
- **Prefer UI-driven flows.** Playwright captures BE requests in the trace; a parallel `fetch` probe is almost always redundant. Drop to `page.request.*` only when the UI can't reach what you need.
- **Use `page.waitForResponse` on UI clicks** to assert BE contracts — it still exercises the UI trigger path.
- **Title every test `TC-NN <short description>`** — keeps the suite navigable and reportable.
- **Split by resource, not by regression suite.** One spec per feature resource; cross-resource junction concerns (cascade-delete, linked-edit) get their own file.
- **Use short descriptive resource names** (`alerts-list-rule`, `labels-rule`, `downtime-once`) — no timestamp disambiguation. Each test owns its resources and cleans up in `try/finally`.
- **Never commit `test.only`** — a pre-commit check or CI runs with `forbidOnly: true`.
- **Prefer explicit waits over `page.waitForTimeout(ms)`.** `await expect(locator).toBeVisible()` is always better than `waitForTimeout(5000)`.
- **Unique test names won't save you from shared-tenant state.** When two tests hit the same list page, either serialize (`describe.configure({ mode: 'serial' })`) or isolate cleanup religiously.
- **Artifacts go to `tests/e2e/artifacts/`** — HTML report at `artifacts/html`, traces at `artifacts/results/<test>/`. All gitignored; archive the dir in CI.

View File

@@ -0,0 +1,251 @@
# Integration Tests
SigNoz uses integration tests to verify that different components work together correctly in a real environment. These tests run against actual services (ClickHouse, PostgreSQL, SigNoz, Zeus mock, Keycloak, etc.) spun up as containers, so suites exercise the same code paths production does.
## How to set up the integration test environment?
### Prerequisites
Before running integration tests, ensure you have the following installed:
- Python 3.13+
- [uv](https://docs.astral.sh/uv/getting-started/installation/)
- Docker (for containerized services)
### Initial Setup
1. Navigate to the shared tests project:
```bash
cd tests
```
2. Install dependencies using uv:
```bash
uv sync
```
> **_NOTE:_** the build backend could throw an error while installing `psycopg2`, please see https://www.psycopg.org/docs/install.html#build-prerequisites
### Starting the Test Environment
To spin up all the containers necessary for writing integration tests and keep them running:
```bash
make py-test-setup
```
Under the hood this runs, from `tests/`:
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse integration/bootstrap/setup.py::test_setup
```
This command will:
- Start all required services (ClickHouse, PostgreSQL, Zookeeper, SigNoz, Zeus mock, gateway mock)
- Register an admin user
- Keep containers running via the `--reuse` flag
### Stopping the Test Environment
When you're done writing integration tests, clean up the environment:
```bash
make py-test-teardown
```
Which runs:
```bash
uv run pytest --basetemp=./tmp/ -vv --teardown integration/bootstrap/setup.py::test_teardown
```
This destroys the running integration test setup and cleans up resources.
## Understanding the Integration Test Framework
Python and pytest form the foundation of the integration testing framework. Testcontainers are used to spin up disposable integration environments. WireMock is used to spin up **test doubles** of external services (Zeus cloud API, gateway, etc.).
- **Why Python/pytest?** It's expressive, low-boilerplate, and has powerful fixture capabilities that make integration testing straightforward. Extensive libraries for HTTP requests, JSON handling, and data analysis (numpy) make it easier to test APIs and verify data.
- **Why testcontainers?** They let us spin up isolated dependencies that match our production environment without complex setup.
- **Why WireMock?** Well maintained, documented, and extensible.
```
tests/
├── conftest.py # pytest_plugins registration
├── pyproject.toml
├── uv.lock
├── fixtures/ # shared fixture library (flat package)
│ ├── __init__.py
│ ├── auth.py # admin/editor/viewer users, tokens, license
│ ├── clickhouse.py
│ ├── http.py # WireMock helpers
│ ├── keycloak.py # IdP container
│ ├── postgres.py
│ ├── signoz.py # SigNoz-backend container
│ ├── sql.py
│ ├── types.py
│ └── ... # logs, metrics, traces, alerts, dashboards, ...
├── integration/
│ ├── bootstrap/
│ │ └── setup.py # test_setup / test_teardown
│ ├── testdata/ # JSON / JSONL / YAML inputs per suite
│ └── tests/ # one directory per feature area
│ ├── alerts/
│ │ ├── 01_*.py # numbered suite files
│ │ └── conftest.py # optional suite-local fixtures
│ ├── auditquerier/
│ ├── cloudintegrations/
│ ├── dashboard/
│ ├── passwordauthn/
│ ├── querier/
│ └── ...
└── e2e/ # Playwright suite (see docs/contributing/e2e.md)
```
Each test suite follows these principles:
1. **Organization**: Suites live under `tests/integration/tests/` in self-contained packages. Shared fixtures live in the top-level `tests/fixtures/` package so the e2e tree can reuse them.
2. **Execution Order**: Files are prefixed with two-digit numbers (`01_`, `02_`, `03_`) to ensure sequential execution when tests depend on ordering.
3. **Time Constraints**: Each suite should complete in under 10 minutes (setup takes ~4 mins).
### Test Suite Design
Test suites should target functional domains or subsystems within SigNoz. When designing a test suite, consider these principles:
- **Functional Cohesion**: Group tests around a specific capability or service boundary
- **Data Flow**: Follow the path of data through related components
- **Change Patterns**: Components frequently modified together should be tested together
The exact boundaries for suites are intentionally flexible, allowing contributors to define logical groupings based on their domain knowledge. Current suites cover alerts, audit querier, callback authn, cloud integrations, dashboards, ingestion keys, logs pipelines, password authn, preferences, querier, raw export data, roles, root user, service accounts, and TTL.
## How to write an integration test?
Now start writing an integration test. Create a new file `tests/integration/tests/bootstrap/01_version.py` and paste the following:
```python
import requests
from fixtures import types
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
def test_version(signoz: types.SigNoz) -> None:
response = requests.get(
signoz.self.host_configs["8080"].get("/api/v1/version"),
timeout=2,
)
logger.info(response)
```
We have written a simple test which calls the `version` endpoint of the SigNoz backend. **To run just this function, run the following command:**
```bash
cd tests
uv run pytest --basetemp=./tmp/ -vv --reuse \
integration/tests/bootstrap/01_version.py::test_version
```
> **Note:** The `--reuse` flag is used to reuse the environment if it is already running. Always use this flag when writing and running integration tests. Without it the environment is destroyed and recreated every run.
Here's another example of how to write a more comprehensive integration test:
```python
from http import HTTPStatus
import requests
from fixtures import types
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
def test_user_registration(signoz: types.SigNoz) -> None:
"""Test user registration functionality."""
response = requests.post(
signoz.self.host_configs["8080"].get("/api/v1/register"),
json={
"name": "testuser",
"orgId": "",
"orgName": "test.org",
"email": "test@example.com",
"password": "password123Z$",
},
timeout=2,
)
assert response.status_code == HTTPStatus.OK
assert response.json()["setupCompleted"] is True
```
Test inputs (JSON fixtures, expected payloads) go under `tests/integration/testdata/<suite>/` and are loaded via `fixtures.fs.get_testdata_file_path`.
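For example, a data-driven suite file might resolve its inputs like this; this is a sketch, and the exact signature of `get_testdata_file_path` (and the file layout it assumes) is an assumption:
```python
# Hypothetical suite file, e.g. tests/integration/tests/alerts/02_conditions.py.
import json

from fixtures import fs


def test_alert_conditions(signoz) -> None:
    # Resolves to tests/integration/testdata/alerts/conditions.json (assumed layout).
    path = fs.get_testdata_file_path("alerts/conditions.json")
    with open(path, encoding="utf-8") as f:
        cases = json.load(f)
    for case in cases:
        # POST each case against the backend and assert on the response.
        ...
```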
## How to run integration tests?
### Running All Tests
```bash
make py-test
```
Which runs:
```bash
uv run pytest --basetemp=./tmp/ -vv integration/tests/
```
### Running Specific Test Categories
```bash
cd tests
uv run pytest --basetemp=./tmp/ -vv --reuse integration/tests/<suite>/
# Run querier tests
uv run pytest --basetemp=./tmp/ -vv --reuse integration/tests/querier/
# Run passwordauthn tests
uv run pytest --basetemp=./tmp/ -vv --reuse integration/tests/passwordauthn/
```
### Running Individual Tests
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse \
integration/tests/<suite>/<file>.py::test_name
# Run test_register in 01_register.py in the passwordauthn suite
uv run pytest --basetemp=./tmp/ -vv --reuse \
integration/tests/passwordauthn/01_register.py::test_register
```
## How to configure different options for integration tests?
Tests can be configured using pytest options:
- `--sqlstore-provider` — Choose the SQL store provider (default: `postgres`)
- `--sqlite-mode` — SQLite journal mode: `delete` or `wal` (default: `delete`). Only relevant when `--sqlstore-provider=sqlite`.
- `--postgres-version` — PostgreSQL version (default: `15`)
- `--clickhouse-version` — ClickHouse version (default: `25.5.6`)
- `--zookeeper-version` — Zookeeper version (default: `3.7.1`)
- `--schema-migrator-version` — SigNoz schema migrator version (default: `v0.144.2`)
Example:
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse \
--sqlstore-provider=postgres --postgres-version=14 \
integration/tests/passwordauthn/
```
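These flags are plain pytest options. A sketch of the standard plumbing behind one of them (illustrative only, not the repo's actual `conftest.py`):
```python
import pytest


def pytest_addoption(parser: pytest.Parser) -> None:
    # Declared once in a conftest.py; pytest then accepts the flag on the CLI.
    parser.addoption(
        "--sqlstore-provider",
        default="postgres",
        choices=("postgres", "sqlite"),
        help="SQL store provider to bring up for the run",
    )


@pytest.fixture(scope="session")
def sqlstore_provider(request: pytest.FixtureRequest) -> str:
    # Fixtures read the value back through the pytest config.
    return request.config.getoption("--sqlstore-provider")
```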
## What should I remember?
- **Always use the `--reuse` flag** when setting up the environment or running tests to keep containers warm. Without it every run rebuilds the stack (~4 mins).
- **Use the `--teardown` flag** only when cleaning up — mixing `--teardown` with `--reuse` is a contradiction.
- **Do not pre-emptively teardown before setup.** If the stack is partially up, `--reuse` picks up from wherever it is. `make py-test-teardown` then `make py-test-setup` wastes minutes.
- **Follow the naming convention** with two-digit numeric prefixes (`01_`, `02_`) for ordered test execution within a suite.
- **Use proper timeouts** in HTTP requests to avoid hanging tests (`timeout=5` is typical).
- **Clean up test data** between tests in the same suite to avoid interference — or rely on a fresh SigNoz container if you need full isolation.
- **Use descriptive test names** that clearly indicate what is being tested.
- **Leverage fixtures** for common setup. The shared fixture package is at `tests/fixtures/` — reuse before adding new ones.
- **Test both success and failure scenarios** (4xx / 5xx paths) to ensure robust functionality.
- **Run `make py-fmt` and `make py-lint` before committing** Python changes — black + isort + autoflake + pylint.
- **`--sqlite-mode=wal` does not work on macOS.** The integration test environment runs SigNoz inside a Linux container with the SQLite database file mounted from the macOS host. WAL mode requires shared memory between connections, and connections crossing the VM boundary (macOS host ↔ Linux container) cannot share the WAL index, resulting in `SQLITE_IOERR_SHORT_READ`. WAL mode is tested in CI on Linux only.

View File

@@ -0,0 +1,127 @@
package implcloudprovider
import (
"context"
"fmt"
"net/url"
"sort"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration"
"github.com/SigNoz/signoz/pkg/types/cloudintegrationtypes"
)
type awscloudprovider struct {
serviceDefinitions cloudintegrationtypes.ServiceDefinitionStore
}
func NewAWSCloudProvider(defStore cloudintegrationtypes.ServiceDefinitionStore) (cloudintegration.CloudProviderModule, error) {
return &awscloudprovider{serviceDefinitions: defStore}, nil
}
func (provider *awscloudprovider) GetConnectionArtifact(ctx context.Context, account *cloudintegrationtypes.Account, req *cloudintegrationtypes.GetConnectionArtifactRequest) (*cloudintegrationtypes.ConnectionArtifact, error) {
baseURL := fmt.Sprintf(cloudintegrationtypes.CloudFormationQuickCreateBaseURL.StringValue(), req.Config.AWS.DeploymentRegion)
u, _ := url.Parse(baseURL)
q := u.Query()
q.Set("region", req.Config.AWS.DeploymentRegion)
u.Fragment = "/stacks/quickcreate"
u.RawQuery = q.Encode()
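// The stack parameters below are deliberately appended after the #/stacks/quickcreate
// fragment rather than into u.RawQuery; the AWS quick-create console reads its
// parameters from the post-fragment query string (see the comment on the return).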
q = u.Query()
q.Set("stackName", cloudintegrationtypes.AgentCloudFormationBaseStackName.StringValue())
q.Set("templateURL", fmt.Sprintf(cloudintegrationtypes.AgentCloudFormationTemplateS3Path.StringValue(), req.Config.AgentVersion))
q.Set("param_SigNozIntegrationAgentVersion", req.Config.AgentVersion)
q.Set("param_SigNozApiUrl", req.Credentials.SigNozAPIURL)
q.Set("param_SigNozApiKey", req.Credentials.SigNozAPIKey)
q.Set("param_SigNozAccountId", account.ID.StringValue())
q.Set("param_IngestionUrl", req.Credentials.IngestionURL)
q.Set("param_IngestionKey", req.Credentials.IngestionKey)
return &cloudintegrationtypes.ConnectionArtifact{
AWS: cloudintegrationtypes.NewAWSConnectionArtifact(u.String() + "?&" + q.Encode()), // this format is required by AWS
}, nil
}
func (provider *awscloudprovider) ListServiceDefinitions(ctx context.Context) ([]*cloudintegrationtypes.ServiceDefinition, error) {
return provider.serviceDefinitions.List(ctx, cloudintegrationtypes.CloudProviderTypeAWS)
}
func (provider *awscloudprovider) GetServiceDefinition(ctx context.Context, serviceID cloudintegrationtypes.ServiceID) (*cloudintegrationtypes.ServiceDefinition, error) {
serviceDef, err := provider.serviceDefinitions.Get(ctx, cloudintegrationtypes.CloudProviderTypeAWS, serviceID)
if err != nil {
return nil, err
}
// override cloud integration dashboard id
for index, dashboard := range serviceDef.Assets.Dashboards {
serviceDef.Assets.Dashboards[index].ID = cloudintegrationtypes.GetCloudIntegrationDashboardID(cloudintegrationtypes.CloudProviderTypeAWS, serviceID.StringValue(), dashboard.ID)
}
return serviceDef, nil
}
func (provider *awscloudprovider) BuildIntegrationConfig(
ctx context.Context,
account *cloudintegrationtypes.Account,
services []*cloudintegrationtypes.StorableCloudIntegrationService,
) (*cloudintegrationtypes.ProviderIntegrationConfig, error) {
// Sort services for deterministic output
sort.Slice(services, func(i, j int) bool {
return services[i].Type.StringValue() < services[j].Type.StringValue()
})
compiledMetrics := new(cloudintegrationtypes.AWSMetricsCollectionStrategy)
compiledLogs := new(cloudintegrationtypes.AWSLogsCollectionStrategy)
var compiledS3Buckets map[string][]string
for _, storedSvc := range services {
svcCfg, err := cloudintegrationtypes.NewServiceConfigFromJSON(cloudintegrationtypes.CloudProviderTypeAWS, storedSvc.Config)
if err != nil {
return nil, err
}
svcDef, err := provider.GetServiceDefinition(ctx, storedSvc.Type)
if err != nil {
return nil, err
}
strategy := svcDef.TelemetryCollectionStrategy.AWS
logsEnabled := svcCfg.IsLogsEnabled(cloudintegrationtypes.CloudProviderTypeAWS)
// S3Sync: logs come directly from configured S3 buckets, not CloudWatch subscriptions
if storedSvc.Type == cloudintegrationtypes.AWSServiceS3Sync {
if logsEnabled && svcCfg.AWS.Logs.S3Buckets != nil {
compiledS3Buckets = svcCfg.AWS.Logs.S3Buckets
}
// S3Sync is fully handled above; skip the CloudWatch logs/metrics compilation below
continue
}
if logsEnabled && strategy.Logs != nil {
compiledLogs.Subscriptions = append(compiledLogs.Subscriptions, strategy.Logs.Subscriptions...)
}
metricsEnabled := svcCfg.IsMetricsEnabled(cloudintegrationtypes.CloudProviderTypeAWS)
if metricsEnabled && strategy.Metrics != nil {
compiledMetrics.StreamFilters = append(compiledMetrics.StreamFilters, strategy.Metrics.StreamFilters...)
}
}
collectionStrategy := new(cloudintegrationtypes.AWSTelemetryCollectionStrategy)
if len(compiledMetrics.StreamFilters) > 0 {
collectionStrategy.Metrics = compiledMetrics
}
if len(compiledLogs.Subscriptions) > 0 {
collectionStrategy.Logs = compiledLogs
}
if compiledS3Buckets != nil {
collectionStrategy.S3Buckets = compiledS3Buckets
}
return &cloudintegrationtypes.ProviderIntegrationConfig{
AWS: cloudintegrationtypes.NewAWSIntegrationConfig(account.Config.AWS.Regions, collectionStrategy),
}, nil
}

View File

@@ -0,0 +1,34 @@
package implcloudprovider
import (
"context"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration"
"github.com/SigNoz/signoz/pkg/types/cloudintegrationtypes"
)
type azurecloudprovider struct{}
func NewAzureCloudProvider() cloudintegration.CloudProviderModule {
return &azurecloudprovider{}
}
func (provider *azurecloudprovider) GetConnectionArtifact(ctx context.Context, account *cloudintegrationtypes.Account, req *cloudintegrationtypes.GetConnectionArtifactRequest) (*cloudintegrationtypes.ConnectionArtifact, error) {
panic("implement me")
}
func (provider *azurecloudprovider) ListServiceDefinitions(ctx context.Context) ([]*cloudintegrationtypes.ServiceDefinition, error) {
panic("implement me")
}
func (provider *azurecloudprovider) GetServiceDefinition(ctx context.Context, serviceID cloudintegrationtypes.ServiceID) (*cloudintegrationtypes.ServiceDefinition, error) {
panic("implement me")
}
func (provider *azurecloudprovider) BuildIntegrationConfig(
ctx context.Context,
account *cloudintegrationtypes.Account,
services []*cloudintegrationtypes.StorableCloudIntegrationService,
) (*cloudintegrationtypes.ProviderIntegrationConfig, error) {
panic("implement me")
}

View File

@@ -0,0 +1,540 @@
package implcloudintegration
import (
"context"
"fmt"
"sort"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/gateway"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/cloudintegrationtypes"
"github.com/SigNoz/signoz/pkg/types/dashboardtypes"
"github.com/SigNoz/signoz/pkg/types/serviceaccounttypes"
"github.com/SigNoz/signoz/pkg/types/zeustypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/SigNoz/signoz/pkg/zeus"
)
type module struct {
store cloudintegrationtypes.Store
gateway gateway.Gateway
zeus zeus.Zeus
licensing licensing.Licensing
global global.Global
serviceAccount serviceaccount.Module
cloudProvidersMap map[cloudintegrationtypes.CloudProviderType]cloudintegration.CloudProviderModule
config cloudintegration.Config
}
func NewModule(
store cloudintegrationtypes.Store,
global global.Global,
zeus zeus.Zeus,
gateway gateway.Gateway,
licensing licensing.Licensing,
serviceAccount serviceaccount.Module,
cloudProvidersMap map[cloudintegrationtypes.CloudProviderType]cloudintegration.CloudProviderModule,
config cloudintegration.Config,
) (cloudintegration.Module, error) {
return &module{
store: store,
global: global,
zeus: zeus,
gateway: gateway,
licensing: licensing,
serviceAccount: serviceAccount,
cloudProvidersMap: cloudProvidersMap,
config: config,
}, nil
}
// GetConnectionCredentials returns the credentials required to generate a connection artifact, e.g. apiKey, ingestionKey.
// It returns the credentials it can deduce and an empty value for the others.
func (module *module) GetConnectionCredentials(ctx context.Context, orgID valuer.UUID, provider cloudintegrationtypes.CloudProviderType) (*cloudintegrationtypes.Credentials, error) {
// get license to get the deployment details
license, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
// get deployment details from zeus, ignore error
respBytes, _ := module.zeus.GetDeployment(ctx, license.Key)
var signozAPIURL string
if len(respBytes) > 0 {
// parse deployment details and ignore the error; if clients keep asking for the API URL, check here for a possible parse error
deployment, _ := zeustypes.NewGettableDeployment(respBytes)
if deployment != nil {
signozAPIURL, _ = cloudintegrationtypes.GetSigNozAPIURLFromDeployment(deployment)
}
}
// ignore error
apiKey, _ := module.getOrCreateAPIKey(ctx, orgID, provider)
// ignore error
ingestionKey, _ := module.getOrCreateIngestionKey(ctx, orgID, provider)
return cloudintegrationtypes.NewCredentials(
signozAPIURL,
apiKey,
module.global.GetConfig(ctx).IngestionURL,
ingestionKey,
), nil
}
func (module *module) CreateAccount(ctx context.Context, account *cloudintegrationtypes.Account) error {
_, err := module.licensing.GetActive(ctx, account.OrgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
storableCloudIntegration, err := cloudintegrationtypes.NewStorableCloudIntegration(account)
if err != nil {
return err
}
return module.store.CreateAccount(ctx, storableCloudIntegration)
}
func (module *module) GetConnectionArtifact(ctx context.Context, account *cloudintegrationtypes.Account, req *cloudintegrationtypes.GetConnectionArtifactRequest) (*cloudintegrationtypes.ConnectionArtifact, error) {
_, err := module.licensing.GetActive(ctx, account.OrgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
cloudProviderModule, err := module.getCloudProvider(account.Provider)
if err != nil {
return nil, err
}
req.Config.SetAgentVersion(module.config.Agent.Version)
return cloudProviderModule.GetConnectionArtifact(ctx, account, req)
}
func (module *module) GetAccount(ctx context.Context, orgID valuer.UUID, accountID valuer.UUID, provider cloudintegrationtypes.CloudProviderType) (*cloudintegrationtypes.Account, error) {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
storableAccount, err := module.store.GetAccountByID(ctx, orgID, accountID, provider)
if err != nil {
return nil, err
}
return cloudintegrationtypes.NewAccountFromStorable(storableAccount)
}
func (module *module) GetConnectedAccount(ctx context.Context, orgID, accountID valuer.UUID, provider cloudintegrationtypes.CloudProviderType) (*cloudintegrationtypes.Account, error) {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
storableAccount, err := module.store.GetConnectedAccount(ctx, orgID, accountID, provider)
if err != nil {
return nil, err
}
return cloudintegrationtypes.NewAccountFromStorable(storableAccount)
}
// ListAccounts returns only agent-connected accounts.
func (module *module) ListAccounts(ctx context.Context, orgID valuer.UUID, provider cloudintegrationtypes.CloudProviderType) ([]*cloudintegrationtypes.Account, error) {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
storableAccounts, err := module.store.ListConnectedAccounts(ctx, orgID, provider)
if err != nil {
return nil, err
}
return cloudintegrationtypes.NewAccountsFromStorables(storableAccounts)
}
func (module *module) AgentCheckIn(ctx context.Context, orgID valuer.UUID, provider cloudintegrationtypes.CloudProviderType, req *cloudintegrationtypes.AgentCheckInRequest) (*cloudintegrationtypes.AgentCheckInResponse, error) {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
connectedAccount, err := module.store.GetConnectedAccountByProviderAccountID(ctx, orgID, req.ProviderAccountID, provider)
if err != nil && !errors.Ast(err, errors.TypeNotFound) {
return nil, err
}
// If a different integration is already connected to this provider account ID, reject the check-in.
// Allow re-check-in from the same integration (e.g. agent restarting).
if connectedAccount != nil && connectedAccount.ID != req.CloudIntegrationID {
errMessage := fmt.Sprintf("provider account id %s is already connected to cloud integration id %s", req.ProviderAccountID, connectedAccount.ID)
return nil, errors.New(errors.TypeAlreadyExists, cloudintegrationtypes.ErrCodeCloudIntegrationAlreadyConnected, errMessage)
}
account, err := module.store.GetAccountByID(ctx, orgID, req.CloudIntegrationID, provider)
if err != nil {
return nil, err
}
// If the account has been removed (disconnected), return a minimal response with an empty integration config.
// The agent uses this response to clean up resources.
if account.RemovedAt != nil {
return cloudintegrationtypes.NewAgentCheckInResponse(
req.ProviderAccountID,
account.ID.StringValue(),
new(cloudintegrationtypes.ProviderIntegrationConfig),
account.RemovedAt,
), nil
}
// update account with cloud provider account id and agent report (heartbeat)
account.Update(&req.ProviderAccountID, cloudintegrationtypes.NewAgentReport(req.Data))
err = module.store.UpdateAccount(ctx, account)
if err != nil {
return nil, err
}
// Get account as domain object for config access (enabled regions, etc.)
domainAccount, err := cloudintegrationtypes.NewAccountFromStorable(account)
if err != nil {
return nil, err
}
cloudProvider, err := module.getCloudProvider(provider)
if err != nil {
return nil, err
}
storedServices, err := module.store.ListServices(ctx, req.CloudIntegrationID)
if err != nil {
return nil, err
}
// Delegate integration config building entirely to the provider module
integrationConfig, err := cloudProvider.BuildIntegrationConfig(ctx, domainAccount, storedServices)
if err != nil {
return nil, err
}
return cloudintegrationtypes.NewAgentCheckInResponse(
req.ProviderAccountID,
account.ID.StringValue(),
integrationConfig,
account.RemovedAt,
), nil
}
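
The check-in guard above is deliberately idempotent: the same integration may check in repeatedly (an agent restarting), while a second integration claiming the same provider account ID is rejected. A tiny stand-in sketch of just that guard, with plain string IDs in place of the real types:

// connectedID is empty when no integration has claimed the provider account yet.
func checkInAllowed(connectedID, requestID string) bool {
	// first connection, or the same agent re-checking in after a restart
	return connectedID == "" || connectedID == requestID
}

So checkInAllowed("", "int-a") and checkInAllowed("int-a", "int-a") succeed, while checkInAllowed("int-b", "int-a") corresponds to the ErrCodeCloudIntegrationAlreadyConnected path.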
func (module *module) UpdateAccount(ctx context.Context, account *cloudintegrationtypes.Account) error {
_, err := module.licensing.GetActive(ctx, account.OrgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
storableAccount, err := cloudintegrationtypes.NewStorableCloudIntegration(account)
if err != nil {
return err
}
return module.store.UpdateAccount(ctx, storableAccount)
}
func (module *module) DisconnectAccount(ctx context.Context, orgID valuer.UUID, accountID valuer.UUID, provider cloudintegrationtypes.CloudProviderType) error {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
return module.store.RemoveAccount(ctx, orgID, accountID, provider)
}
func (module *module) ListServicesMetadata(ctx context.Context, orgID valuer.UUID, provider cloudintegrationtypes.CloudProviderType, integrationID valuer.UUID) ([]*cloudintegrationtypes.ServiceMetadata, error) {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
cloudProvider, err := module.getCloudProvider(provider)
if err != nil {
return nil, err
}
serviceDefinitions, err := cloudProvider.ListServiceDefinitions(ctx)
if err != nil {
return nil, err
}
enabledServiceIDs := map[string]bool{}
if !integrationID.IsZero() {
storedServices, err := module.store.ListServices(ctx, integrationID)
if err != nil {
return nil, err
}
for _, svc := range storedServices {
serviceConfig, err := cloudintegrationtypes.NewServiceConfigFromJSON(provider, svc.Config)
if err != nil {
return nil, err
}
if serviceConfig.IsServiceEnabled(provider) {
enabledServiceIDs[svc.Type.StringValue()] = true
}
}
}
resp := make([]*cloudintegrationtypes.ServiceMetadata, 0, len(serviceDefinitions))
for _, serviceDefinition := range serviceDefinitions {
resp = append(resp, cloudintegrationtypes.NewServiceMetadata(*serviceDefinition, enabledServiceIDs[serviceDefinition.ID]))
}
return resp, nil
}
func (module *module) GetService(ctx context.Context, orgID valuer.UUID, serviceID cloudintegrationtypes.ServiceID, provider cloudintegrationtypes.CloudProviderType, cloudIntegrationID valuer.UUID) (*cloudintegrationtypes.Service, error) {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
cloudProvider, err := module.getCloudProvider(provider)
if err != nil {
return nil, err
}
serviceDefinition, err := cloudProvider.GetServiceDefinition(ctx, serviceID)
if err != nil {
return nil, err
}
var integrationService *cloudintegrationtypes.CloudIntegrationService
if !cloudIntegrationID.IsZero() {
storedService, err := module.store.GetServiceByServiceID(ctx, cloudIntegrationID, serviceID)
if err != nil && !errors.Ast(err, errors.TypeNotFound) {
return nil, err
}
if storedService != nil {
serviceConfig, err := cloudintegrationtypes.NewServiceConfigFromJSON(provider, storedService.Config)
if err != nil {
return nil, err
}
integrationService = cloudintegrationtypes.NewCloudIntegrationServiceFromStorable(storedService, serviceConfig)
}
}
return cloudintegrationtypes.NewService(*serviceDefinition, integrationService), nil
}
func (module *module) CreateService(ctx context.Context, orgID valuer.UUID, service *cloudintegrationtypes.CloudIntegrationService, provider cloudintegrationtypes.CloudProviderType) error {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
cloudProvider, err := module.getCloudProvider(provider)
if err != nil {
return err
}
serviceDefinition, err := cloudProvider.GetServiceDefinition(ctx, service.Type)
if err != nil {
return err
}
configJSON, err := service.Config.ToJSON(provider, service.Type, &serviceDefinition.SupportedSignals)
if err != nil {
return err
}
return module.store.CreateService(ctx, cloudintegrationtypes.NewStorableCloudIntegrationService(service, string(configJSON)))
}
func (module *module) UpdateService(ctx context.Context, orgID valuer.UUID, integrationService *cloudintegrationtypes.CloudIntegrationService, provider cloudintegrationtypes.CloudProviderType) error {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
cloudProvider, err := module.getCloudProvider(provider)
if err != nil {
return err
}
serviceDefinition, err := cloudProvider.GetServiceDefinition(ctx, integrationService.Type)
if err != nil {
return err
}
configJSON, err := integrationService.Config.ToJSON(provider, integrationService.Type, &serviceDefinition.SupportedSignals)
if err != nil {
return err
}
storableService := cloudintegrationtypes.NewStorableCloudIntegrationService(integrationService, string(configJSON))
return module.store.UpdateService(ctx, storableService)
}
func (module *module) GetDashboardByID(ctx context.Context, orgID valuer.UUID, id string) (*dashboardtypes.Dashboard, error) {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
_, _, _, err = cloudintegrationtypes.ParseCloudIntegrationDashboardID(id)
if err != nil {
return nil, err
}
allDashboards, err := module.listDashboards(ctx, orgID)
if err != nil {
return nil, err
}
for _, d := range allDashboards {
if d.ID == id {
return d, nil
}
}
return nil, errors.New(errors.TypeNotFound, cloudintegrationtypes.ErrCodeCloudIntegrationNotFound, "cloud integration dashboard not found")
}
func (module *module) ListDashboards(ctx context.Context, orgID valuer.UUID) ([]*dashboardtypes.Dashboard, error) {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
return module.listDashboards(ctx, orgID)
}
func (module *module) Collect(ctx context.Context, orgID valuer.UUID) (map[string]any, error) {
stats := make(map[string]any)
// count connected AWS accounts
awsAccountsCount, err := module.store.CountConnectedAccounts(ctx, orgID, cloudintegrationtypes.CloudProviderTypeAWS)
if err == nil {
stats["cloudintegration.aws.connectedaccounts.count"] = awsAccountsCount
}
// NOTE: not adding stats for services for now.
// TODO: add more cloud providers when supported
return stats, nil
}
func (module *module) getCloudProvider(provider cloudintegrationtypes.CloudProviderType) (cloudintegration.CloudProviderModule, error) {
if cloudProviderModule, ok := module.cloudProvidersMap[provider]; ok {
return cloudProviderModule, nil
}
return nil, errors.NewInvalidInputf(cloudintegrationtypes.ErrCodeCloudProviderInvalidInput, "invalid cloud provider: %s", provider.StringValue())
}
func (module *module) getOrCreateIngestionKey(ctx context.Context, orgID valuer.UUID, provider cloudintegrationtypes.CloudProviderType) (string, error) {
keyName := cloudintegrationtypes.NewIngestionKeyName(provider)
result, err := module.gateway.SearchIngestionKeysByName(ctx, orgID, keyName, 1, 10)
if err != nil {
return "", errors.WrapInternalf(err, errors.CodeInternal, "couldn't search ingestion keys")
}
// ideally there should be only one key per cloud integration provider
if len(result.Keys) > 0 {
return result.Keys[0].Value, nil
}
createdIngestionKey, err := module.gateway.CreateIngestionKey(ctx, orgID, keyName, []string{"integration"}, time.Time{})
if err != nil {
return "", errors.WrapInternalf(err, errors.CodeInternal, "couldn't create ingestion key")
}
return createdIngestionKey.Value, nil
}
func (module *module) getOrCreateAPIKey(ctx context.Context, orgID valuer.UUID, provider cloudintegrationtypes.CloudProviderType) (string, error) {
domain := module.serviceAccount.Config().Email.Domain
serviceAccount := serviceaccounttypes.NewServiceAccount("integration", domain, serviceaccounttypes.ServiceAccountStatusActive, orgID)
serviceAccount, err := module.serviceAccount.GetOrCreate(ctx, orgID, serviceAccount)
if err != nil {
return "", err
}
err = module.serviceAccount.SetRoleByName(ctx, orgID, serviceAccount.ID, authtypes.SigNozViewerRoleName)
if err != nil {
return "", err
}
factorAPIKey, err := serviceAccount.NewFactorAPIKey(provider.StringValue(), 0)
if err != nil {
return "", err
}
factorAPIKey, err = module.serviceAccount.GetOrCreateFactorAPIKey(ctx, factorAPIKey)
if err != nil {
return "", err
}
return factorAPIKey.Key, nil
}
func (module *module) listDashboards(ctx context.Context, orgID valuer.UUID) ([]*dashboardtypes.Dashboard, error) {
var allDashboards []*dashboardtypes.Dashboard
for provider := range module.cloudProvidersMap {
cloudProvider, err := module.getCloudProvider(provider)
if err != nil {
return nil, err
}
connectedAccounts, err := module.store.ListConnectedAccounts(ctx, orgID, provider)
if err != nil {
return nil, err
}
for _, storableAccount := range connectedAccounts {
storedServices, err := module.store.ListServices(ctx, storableAccount.ID)
if err != nil {
return nil, err
}
for _, storedSvc := range storedServices {
serviceConfig, err := cloudintegrationtypes.NewServiceConfigFromJSON(provider, storedSvc.Config)
if err != nil || !serviceConfig.IsMetricsEnabled(provider) {
continue
}
svcDef, err := cloudProvider.GetServiceDefinition(ctx, storedSvc.Type)
if err != nil || svcDef == nil {
continue
}
dashboards := cloudintegrationtypes.GetDashboardsFromAssets(
storedSvc.Type.StringValue(),
orgID,
provider,
storableAccount.CreatedAt,
svcDef.Assets,
)
allDashboards = append(allDashboards, dashboards...)
}
}
}
sort.Slice(allDashboards, func(i, j int) bool {
return allDashboards[i].ID < allDashboards[j].ID
})
return allDashboards, nil
}

View File

@@ -6,7 +6,6 @@ import (
"github.com/SigNoz/signoz/ee/licensing/httplicensing"
"github.com/SigNoz/signoz/ee/query-service/usage"
"github.com/SigNoz/signoz/pkg/alertmanager"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/http/middleware"
baseapp "github.com/SigNoz/signoz/pkg/query-service/app"
@@ -15,7 +14,6 @@ import (
"github.com/SigNoz/signoz/pkg/query-service/app/logparsingpipeline"
"github.com/SigNoz/signoz/pkg/query-service/interfaces"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
rules "github.com/SigNoz/signoz/pkg/query-service/rules"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/version"
@@ -24,7 +22,6 @@ import (
type APIHandlerOptions struct {
DataConnector interfaces.Reader
RulesManager *rules.Manager
UsageManager *usage.Manager
IntegrationsController *integrations.Controller
CloudIntegrationsController *cloudintegrations.Controller
@@ -44,12 +41,10 @@ type APIHandler struct {
func NewAPIHandler(opts APIHandlerOptions, signoz *signoz.SigNoz, config signoz.Config) (*APIHandler, error) {
baseHandler, err := baseapp.NewAPIHandler(baseapp.APIHandlerOpts{
Reader: opts.DataConnector,
RuleManager: opts.RulesManager,
IntegrationsController: opts.IntegrationsController,
CloudIntegrationsController: opts.CloudIntegrationsController,
LogsParsingPipelineController: opts.LogsParsingPipelineController,
FluxInterval: opts.FluxInterval,
AlertmanagerAPI: alertmanager.NewAPI(signoz.Alertmanager),
LicensingAPI: httplicensing.NewLicensingAPI(signoz.Licensing),
Signoz: signoz,
QueryParserAPI: queryparser.NewAPI(signoz.Instrumentation.ToProviderSettings(), signoz.QueryParser),
@@ -66,10 +61,6 @@ func NewAPIHandler(opts APIHandlerOptions, signoz *signoz.SigNoz, config signoz.
return ah, nil
}
func (ah *APIHandler) RM() *rules.Manager {
return ah.opts.RulesManager
}
func (ah *APIHandler) UM() *usage.Manager {
return ah.opts.UsageManager
}

View File

@@ -7,6 +7,10 @@ import (
"github.com/SigNoz/signoz/ee/query-service/constants"
"github.com/SigNoz/signoz/ee/query-service/model"
"github.com/SigNoz/signoz/pkg/flagger"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/featuretypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type DayWiseBreakdown struct {
@@ -45,15 +49,17 @@ type details struct {
BillTotal float64 `json:"billTotal"`
}
type billingData struct {
BillingPeriodStart int64 `json:"billingPeriodStart"`
BillingPeriodEnd int64 `json:"billingPeriodEnd"`
Details details `json:"details"`
Discount float64 `json:"discount"`
SubscriptionStatus string `json:"subscriptionStatus"`
}
type billingDetails struct {
Status string `json:"status"`
Data struct {
BillingPeriodStart int64 `json:"billingPeriodStart"`
BillingPeriodEnd int64 `json:"billingPeriodEnd"`
Details details `json:"details"`
Discount float64 `json:"discount"`
SubscriptionStatus string `json:"subscriptionStatus"`
} `json:"data"`
Status string `json:"status"`
Data billingData `json:"data"`
}
func (ah *APIHandler) getBilling(w http.ResponseWriter, r *http.Request) {
@@ -64,6 +70,33 @@ func (ah *APIHandler) getBilling(w http.ResponseWriter, r *http.Request) {
return
}
claims, err := authtypes.ClaimsFromContext(r.Context())
if err != nil {
RespondError(w, model.InternalError(err), nil)
return
}
orgID := valuer.MustNewUUID(claims.OrgID)
evalCtx := featuretypes.NewFlaggerEvaluationContext(orgID)
useZeus := ah.Signoz.Flagger.BooleanOrEmpty(r.Context(), flagger.FeatureGetMetersFromZeus, evalCtx)
if useZeus {
data, err := ah.Signoz.Zeus.GetMeters(r.Context(), licenseKey)
if err != nil {
RespondError(w, model.InternalError(err), nil)
return
}
var billing billingData
if err := json.Unmarshal(data, &billing); err != nil {
RespondError(w, model.InternalError(err), nil)
return
}
ah.Respond(w, billing)
return
}
billingURL := fmt.Sprintf("%s/usage?licenseKey=%s", constants.LicenseSignozIo, licenseKey)
hClient := &http.Client{}
@@ -79,13 +112,11 @@ func (ah *APIHandler) getBilling(w http.ResponseWriter, r *http.Request) {
return
}
// decode response body
var billingResponse billingDetails
if err := json.NewDecoder(billingResp.Body).Decode(&billingResponse); err != nil {
RespondError(w, model.InternalError(err), nil)
return
}
// TODO(srikanthccv): Fetch the current day's usage and add it to the response
ah.Respond(w, billingResponse.Data)
}
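
With FeatureGetMetersFromZeus enabled, the handler decodes the Zeus meters payload directly into billingData; the legacy license.signoz.io path still unwraps the {status, data} envelope via billingDetails. A small stdlib-only sketch of the two decode shapes, using a stand-in subset of the fields:

package main

import (
	"encoding/json"
	"fmt"
)

type billingData struct {
	BillingPeriodStart int64   `json:"billingPeriodStart"`
	Discount           float64 `json:"discount"`
	SubscriptionStatus string  `json:"subscriptionStatus"`
}

type billingDetails struct {
	Status string      `json:"status"`
	Data   billingData `json:"data"`
}

func main() {
	// Zeus path: the payload is the data object itself.
	var direct billingData
	_ = json.Unmarshal([]byte(`{"billingPeriodStart":1700000000,"discount":0,"subscriptionStatus":"active"}`), &direct)

	// Legacy path: the same object arrives wrapped in a {status, data} envelope.
	var wrapped billingDetails
	_ = json.Unmarshal([]byte(`{"status":"success","data":{"billingPeriodStart":1700000000,"discount":0,"subscriptionStatus":"active"}}`), &wrapped)

	fmt.Println(direct.SubscriptionStatus == wrapped.Data.SubscriptionStatus) // true
}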

View File

@@ -12,10 +12,6 @@ import (
"github.com/SigNoz/signoz/pkg/cache/memorycache"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/ruler/rulestore/sqlrulestore"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/gorilla/handlers"
@@ -23,18 +19,10 @@ import (
"github.com/soheilhy/cmux"
"github.com/SigNoz/signoz/ee/query-service/app/api"
"github.com/SigNoz/signoz/ee/query-service/rules"
"github.com/SigNoz/signoz/ee/query-service/usage"
"github.com/SigNoz/signoz/pkg/alertmanager"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/http/middleware"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/rulestatehistory"
"github.com/SigNoz/signoz/pkg/prometheus"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/web"
"log/slog"
@@ -49,7 +37,6 @@ import (
opAmpModel "github.com/SigNoz/signoz/pkg/query-service/app/opamp/model"
baseconst "github.com/SigNoz/signoz/pkg/query-service/constants"
"github.com/SigNoz/signoz/pkg/query-service/healthcheck"
baserules "github.com/SigNoz/signoz/pkg/query-service/rules"
"github.com/SigNoz/signoz/pkg/query-service/utils"
)
@@ -57,7 +44,6 @@ import (
type Server struct {
config signoz.Config
signoz *signoz.SigNoz
ruleManager *baserules.Manager
// public http router
httpConn net.Listener
@@ -97,24 +83,6 @@ func NewServer(config signoz.Config, signoz *signoz.SigNoz) (*Server, error) {
nil,
)
rm, err := makeRulesManager(
signoz.Cache,
signoz.Alertmanager,
signoz.SQLStore,
signoz.TelemetryStore,
signoz.TelemetryMetadataStore,
signoz.Prometheus,
signoz.Modules.OrgGetter,
signoz.Modules.RuleStateHistory,
signoz.Querier,
signoz.Instrumentation.ToProviderSettings(),
signoz.QueryParser,
)
if err != nil {
return nil, err
}
// initiate opamp
opAmpModel.Init(signoz.SQLStore, signoz.Instrumentation.Logger(), signoz.Modules.OrgGetter)
@@ -152,7 +120,7 @@ func NewServer(config signoz.Config, signoz *signoz.SigNoz) (*Server, error) {
}
// start the usagemanager
usageManager, err := usage.New(signoz.Licensing, signoz.TelemetryStore.ClickhouseDB(), signoz.Zeus, signoz.Modules.OrgGetter)
usageManager, err := usage.New(signoz.Licensing, signoz.TelemetryStore.ClickhouseDB(), signoz.Zeus, signoz.Modules.OrgGetter, signoz.Flagger)
if err != nil {
return nil, err
}
@@ -163,7 +131,6 @@ func NewServer(config signoz.Config, signoz *signoz.SigNoz) (*Server, error) {
apiOpts := api.APIHandlerOptions{
DataConnector: reader,
RulesManager: rm,
UsageManager: usageManager,
IntegrationsController: integrationsController,
CloudIntegrationsController: cloudIntegrationsController,
@@ -180,8 +147,7 @@ func NewServer(config signoz.Config, signoz *signoz.SigNoz) (*Server, error) {
s := &Server{
config: config,
signoz: signoz,
ruleManager: rm,
signoz: signoz,
httpHostPort: baseconst.HTTPHostPort,
unavailableChannel: make(chan healthcheck.Status),
usageManager: usageManager,
@@ -262,6 +228,20 @@ func (s *Server) createPublicServer(apiHandler *api.APIHandler, web web.Web) (*h
return nil, err
}
routePrefix := s.config.Global.ExternalPath()
if routePrefix != "" {
prefixed := http.StripPrefix(routePrefix, handler)
handler = http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
switch req.URL.Path {
case "/api/v1/health", "/api/v2/healthz", "/api/v2/readyz", "/api/v2/livez":
r.ServeHTTP(w, req)
return
}
prefixed.ServeHTTP(w, req)
})
}
return &http.Server{
Handler: handler,
}, nil
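
The external-path wrapper strips the configured route prefix for everything except the health and readiness endpoints, which must remain reachable at their bare paths (load balancers and probes are typically unaware of the prefix). A standalone stdlib sketch of the same pattern, assuming a hypothetical /signoz prefix:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/v1/health", func(w http.ResponseWriter, r *http.Request) { fmt.Fprint(w, "ok") })
	mux.HandleFunc("/api/v1/version", func(w http.ResponseWriter, r *http.Request) { fmt.Fprint(w, "v1") })

	prefixed := http.StripPrefix("/signoz", mux)
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Probes hit the bare path, so health bypasses the prefix strip.
		if r.URL.Path == "/api/v1/health" {
			mux.ServeHTTP(w, r)
			return
		}
		prefixed.ServeHTTP(w, r)
	})

	for _, path := range []string{"/api/v1/health", "/signoz/api/v1/version"} {
		rec := httptest.NewRecorder()
		handler.ServeHTTP(rec, httptest.NewRequest(http.MethodGet, path, nil))
		fmt.Println(path, "->", rec.Body.String())
	}
}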
@@ -288,8 +268,6 @@ func (s *Server) initListeners() error {
// Start listening on http and private http port concurrently
func (s *Server) Start(ctx context.Context) error {
s.ruleManager.Start(ctx)
err := s.initListeners()
if err != nil {
return err
@@ -333,47 +311,9 @@ func (s *Server) Stop(ctx context.Context) error {
s.opampServer.Stop()
if s.ruleManager != nil {
s.ruleManager.Stop(ctx)
}
// stop usage manager
s.usageManager.Stop(ctx)
return nil
}
func makeRulesManager(cache cache.Cache, alertmanager alertmanager.Alertmanager, sqlstore sqlstore.SQLStore, telemetryStore telemetrystore.TelemetryStore, metadataStore telemetrytypes.MetadataStore, prometheus prometheus.Prometheus, orgGetter organization.Getter, ruleStateHistoryModule rulestatehistory.Module, querier querier.Querier, providerSettings factory.ProviderSettings, queryParser queryparser.QueryParser) (*baserules.Manager, error) {
ruleStore := sqlrulestore.NewRuleStore(sqlstore, queryParser, providerSettings)
maintenanceStore := sqlrulestore.NewMaintenanceStore(sqlstore)
// create manager opts
managerOpts := &baserules.ManagerOptions{
TelemetryStore: telemetryStore,
MetadataStore: metadataStore,
Prometheus: prometheus,
Context: context.Background(),
Querier: querier,
Logger: providerSettings.Logger,
Cache: cache,
EvalDelay: baseconst.GetEvalDelay(),
PrepareTaskFunc: rules.PrepareTaskFunc,
PrepareTestRuleFunc: rules.TestNotification,
Alertmanager: alertmanager,
OrgGetter: orgGetter,
RuleStore: ruleStore,
MaintenanceStore: maintenanceStore,
SQLStore: sqlstore,
QueryParser: queryParser,
RuleStateHistoryModule: ruleStateHistoryModule,
}
// create Manager
manager, err := baserules.NewManager(managerOpts)
if err != nil {
return nil, fmt.Errorf("rule manager error: %v", err)
}
slog.Info("rules manager is ready")
return manager, nil
}

View File

@@ -16,9 +16,11 @@ import (
"github.com/SigNoz/signoz/ee/query-service/model"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/flagger"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/query-service/utils/encryption"
"github.com/SigNoz/signoz/pkg/types/featuretypes"
"github.com/SigNoz/signoz/pkg/zeus"
)
@@ -43,15 +45,18 @@ type Manager struct {
zeus zeus.Zeus
orgGetter organization.Getter
flagger flagger.Flagger
}
func New(licenseService licensing.Licensing, clickhouseConn clickhouse.Conn, zeus zeus.Zeus, orgGetter organization.Getter) (*Manager, error) {
func New(licenseService licensing.Licensing, clickhouseConn clickhouse.Conn, zeus zeus.Zeus, orgGetter organization.Getter, flagger flagger.Flagger) (*Manager, error) {
m := &Manager{
clickhouseConn: clickhouseConn,
licenseService: licenseService,
scheduler: gocron.NewScheduler(time.UTC).Every(1).Day().At("00:00"), // send usage every day at 00:00 UTC
zeus: zeus,
orgGetter: orgGetter,
flagger: flagger,
}
return m, nil
}
@@ -168,7 +173,14 @@ func (lm *Manager) UploadUsage(ctx context.Context) {
return
}
errv2 = lm.zeus.PutMeters(ctx, payload.LicenseKey.String(), body)
evalCtx := featuretypes.NewFlaggerEvaluationContext(organization.ID)
useZeus := lm.flagger.BooleanOrEmpty(ctx, flagger.FeaturePutMetersInZeus, evalCtx)
if useZeus {
errv2 = lm.zeus.PutMetersV2(ctx, payload.LicenseKey.String(), body)
} else {
errv2 = lm.zeus.PutMeters(ctx, payload.LicenseKey.String(), body)
}
if errv2 != nil {
slog.ErrorContext(ctx, "failed to upload usage", errors.Attr(errv2))
// not returning error here since it is captured in the failed count

View File

@@ -54,6 +54,7 @@ func New(ctx context.Context, providerSettings factory.ProviderSettings, config
// Set the maximum number of open connections
pgConfig.MaxConns = int32(config.Connection.MaxOpenConns)
pgConfig.MaxConnLifetime = config.Connection.MaxConnLifetime
// Use pgxpool to create a connection pool
pool, err := pgxpool.NewWithConfig(ctx, pgConfig)

View File

@@ -109,6 +109,21 @@ func (provider *Provider) GetDeployment(ctx context.Context, key string) ([]byte
return []byte(gjson.GetBytes(response, "data").String()), nil
}
func (provider *Provider) GetMeters(ctx context.Context, key string) ([]byte, error) {
response, err := provider.do(
ctx,
provider.config.URL.JoinPath("/v1/meters"),
http.MethodGet,
key,
nil,
)
if err != nil {
return nil, err
}
return []byte(gjson.GetBytes(response, "data").String()), nil
}
func (provider *Provider) PutMeters(ctx context.Context, key string, data []byte) error {
_, err := provider.do(
ctx,
@@ -121,6 +136,18 @@ func (provider *Provider) PutMeters(ctx context.Context, key string, data []byte
return err
}
func (provider *Provider) PutMetersV2(ctx context.Context, key string, data []byte) error {
_, err := provider.do(
ctx,
provider.config.URL.JoinPath("/v1/meters"),
http.MethodPost,
key,
data,
)
return err
}
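
GetDeployment, GetMeters, and the meter writers all talk to the same envelope-shaped API; the readers return only the data subtree. A tiny sketch of that extraction, assuming these calls use the tidwall/gjson package (the gjson identifier suggests so, but the import isn't shown in this hunk):

package main

import (
	"fmt"

	"github.com/tidwall/gjson"
)

func main() {
	// Hypothetical envelope mirroring the {status, data} responses above.
	response := []byte(`{"status":"success","data":{"meters":[{"name":"logs","value":42}]}}`)

	// GetBytes pulls the raw "data" subtree without a full unmarshal;
	// for objects and arrays, Result.String() returns the raw JSON.
	data := []byte(gjson.GetBytes(response, "data").String())
	fmt.Println(string(data))
}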
func (provider *Provider) PutProfile(ctx context.Context, key string, profile *zeustypes.PostableProfile) error {
body, err := json.Marshal(profile)
if err != nil {

View File

@@ -1,5 +1,7 @@
node_modules
build
eslint-rules/
stylelint-rules/
*.typegen.ts
i18-generate-hash.js
src/parser/TraceOperatorParser/**

View File

@@ -1,3 +1,10 @@
// eslint-disable-next-line @typescript-eslint/no-var-requires
const path = require('path');
// eslint-disable-next-line @typescript-eslint/no-var-requires
const rulesDirPlugin = require('eslint-plugin-rulesdir');
// eslint-rules/ always points to frontend/eslint-rules/ regardless of workspace root.
rulesDirPlugin.RULES_DIR = path.join(__dirname, 'eslint-rules');
/**
* ESLint Configuration for SigNoz Frontend
*/
@@ -32,6 +39,7 @@ module.exports = {
sourceType: 'module',
},
plugins: [
'rulesdir', // Local custom rules
'react', // React-specific rules
'@typescript-eslint', // TypeScript linting
'simple-import-sort', // Auto-sort imports
@@ -56,6 +64,9 @@ module.exports = {
},
},
rules: {
// Asset migration — base-path safety
'rulesdir/no-unsupported-asset-pattern': 'error',
// Code quality rules
'prefer-const': 'error', // Enforces const for variables never reassigned
'no-var': 'error', // Disallows var, enforces let/const

View File

@@ -0,0 +1,9 @@
const path = require('path');
module.exports = {
plugins: [path.join(__dirname, 'stylelint-rules/no-unsupported-asset-url.js')],
customSyntax: 'postcss-scss',
rules: {
'local/no-unsupported-asset-url': true,
},
};

View File

@@ -0,0 +1 @@
export default 'test-file-stub';

View File

@@ -0,0 +1,390 @@
'use strict';
/**
* ESLint rule: no-unsupported-asset-pattern
*
* Enforces that all asset references (SVG, PNG, etc.) go through Vite's module
* pipeline via ES imports (`import fooUrl from '@/assets/...'`) rather than
* hard-coded strings or public/ paths.
*
* Why this matters: when the app is served from a sub-path (base-path deployment),
* Vite rewrites ES import URLs automatically. String literals and public/ references
* bypass that rewrite and break at runtime.
*
* Covers four AST patterns:
* 1. Literal — plain string: "/Icons/logo.svg"
* 2. TemplateLiteral — template string: `/Icons/${name}.svg`
* 3. BinaryExpression — concatenation: "/Icons/" + name + ".svg"
* 4. ImportDeclaration / ImportExpression — static & dynamic imports
*/
const {
hasAssetExtension,
containsAssetExtension,
extractUrlPath,
isAbsolutePath,
isPublicRelative,
isRelativePublicDir,
isValidAssetImport,
isExternalUrl,
} = require('./shared/asset-patterns');
// Known public/ sub-directories that should never appear in dynamic asset paths.
const PUBLIC_DIR_SEGMENTS = ['/Icons/', '/Images/', '/Logos/', '/svgs/'];
/**
* Recursively extracts the static string parts from a binary `+` expression or
* template literal. Returns `[null]` for any dynamic (non-string) node so
* callers can detect that the prefix became unknowable.
*
* Example: `"/Icons/" + iconName + ".svg"` → ["/Icons/", null, ".svg"]
*/
function collectBinaryStringParts(node) {
if (node.type === 'Literal' && typeof node.value === 'string')
return [node.value];
if (node.type === 'BinaryExpression' && node.operator === '+') {
return [
...collectBinaryStringParts(node.left),
...collectBinaryStringParts(node.right),
];
}
if (node.type === 'TemplateLiteral') {
return node.quasis.map((q) => q.value.raw);
}
// Unknown / dynamic node — signals "prefix is no longer fully static"
return [null];
}
module.exports = {
meta: {
type: 'problem',
docs: {
description:
'Disallow Vite-unsafe asset reference patterns that break runtime base-path deployments',
category: 'Asset Migration',
recommended: true,
},
schema: [],
messages: {
absoluteString:
'Absolute asset path "{{ value }}" is not base-path-safe. ' +
"Use an ES import instead: import fooUrl from '@/assets/...' and reference the variable.",
templateLiteral:
'Dynamic asset path with absolute prefix is not base-path-safe. ' +
"Use new URL('./asset.svg', import.meta.url).href for dynamic asset paths.",
absoluteImport:
'Asset imported via absolute path is not supported. ' +
"Use import fooUrl from '@/assets/...' instead.",
publicImport:
"Assets in public/ bypass Vite's module pipeline — their URLs are not base-path-aware and will break when the app is served from a sub-path (e.g. /app/). " +
"Use an ES import instead: import fooUrl from '@/assets/...' so Vite injects the correct base path.",
relativePublicString:
'Relative public-dir path "{{ value }}" is not base-path-safe. ' +
"Use an ES import instead: import fooUrl from '@/assets/...' and reference the variable.",
invalidAssetImport:
"Asset '{{ src }}' must be imported from src/assets/ using either '@/assets/...' " +
'or a relative path into src/assets/.',
},
},
create(context) {
return {
/**
* Catches plain string literals used as asset paths, e.g.:
* src="/Icons/logo.svg" or url("../public/Images/bg.png")
*
* Import declaration sources are skipped here — handled by ImportDeclaration.
* Also unwraps CSS `url(...)` wrappers before checking.
*/
Literal(node) {
if (node.parent && node.parent.type === 'ImportDeclaration') {
return;
}
const value = node.value;
if (typeof value !== 'string') return;
if (isExternalUrl(value)) return;
if (isAbsolutePath(value) && containsAssetExtension(value)) {
context.report({
node,
messageId: 'absoluteString',
data: { value },
});
return;
}
if (isRelativePublicDir(value) && containsAssetExtension(value)) {
context.report({
node,
messageId: 'relativePublicString',
data: { value },
});
return;
}
// Catches relative paths that start with "public/" e.g. 'public/Logos/aws-dark.svg'.
// isRelativePublicDir only covers known sub-dirs (Icons/, Logos/, etc.),
// so this handles the case where the full "public/" prefix is written explicitly.
if (isPublicRelative(value) && containsAssetExtension(value)) {
context.report({
node,
messageId: 'relativePublicString',
data: { value },
});
return;
}
// Also check the path inside a CSS url("...") wrapper
const urlPath = extractUrlPath(value);
if (urlPath && isExternalUrl(urlPath)) return;
if (urlPath && isAbsolutePath(urlPath) && containsAssetExtension(urlPath)) {
context.report({
node,
messageId: 'absoluteString',
data: { value: urlPath },
});
return;
}
if (
urlPath &&
isRelativePublicDir(urlPath) &&
containsAssetExtension(urlPath)
) {
context.report({
node,
messageId: 'relativePublicString',
data: { value: urlPath },
});
return;
}
if (
urlPath &&
isPublicRelative(urlPath) &&
containsAssetExtension(urlPath)
) {
context.report({
node,
messageId: 'relativePublicString',
data: { value: urlPath },
});
}
},
/**
* Catches template literals used as asset paths, e.g.:
* `/Icons/${name}.svg`
* `url('/Images/${bg}.png')`
*/
TemplateLiteral(node) {
const quasis = node.quasis;
if (!quasis || quasis.length === 0) return;
const firstQuasi = quasis[0].value.raw;
if (isExternalUrl(firstQuasi)) return;
const hasAssetExt = quasis.some((q) => containsAssetExtension(q.value.raw));
if (isAbsolutePath(firstQuasi) && hasAssetExt) {
context.report({
node,
messageId: 'templateLiteral',
});
return;
}
if (isRelativePublicDir(firstQuasi) && hasAssetExt) {
context.report({
node,
messageId: 'relativePublicString',
data: { value: firstQuasi },
});
return;
}
// Expression-first template with known public-dir segment: `${base}/Icons/foo.svg`
const hasPublicSegment = quasis.some((q) =>
PUBLIC_DIR_SEGMENTS.some((seg) => q.value.raw.includes(seg)),
);
if (hasPublicSegment && hasAssetExt) {
context.report({
node,
messageId: 'templateLiteral',
});
return;
}
// No-interpolation template (single quasi): treat like a plain string
// and also unwrap any css url(...) wrapper.
if (quasis.length === 1) {
// Check the raw string first (no url() wrapper)
if (isPublicRelative(firstQuasi) && hasAssetExt) {
context.report({
node,
messageId: 'relativePublicString',
data: { value: firstQuasi },
});
return;
}
const urlPath = extractUrlPath(firstQuasi);
if (urlPath && isExternalUrl(urlPath)) return;
if (urlPath && isAbsolutePath(urlPath) && hasAssetExtension(urlPath)) {
context.report({
node,
messageId: 'templateLiteral',
});
return;
}
if (
urlPath &&
isRelativePublicDir(urlPath) &&
hasAssetExtension(urlPath)
) {
context.report({
node,
messageId: 'relativePublicString',
data: { value: urlPath },
});
return;
}
if (urlPath && isPublicRelative(urlPath) && hasAssetExtension(urlPath)) {
context.report({
node,
messageId: 'relativePublicString',
data: { value: urlPath },
});
}
return;
}
// CSS url() with an absolute path inside a multi-quasi template, e.g.:
// `url('/Icons/${name}.svg')`
if (firstQuasi.includes('url(') && hasAssetExt) {
const urlMatch = firstQuasi.match(/^url\(\s*['"]?\//);
if (urlMatch) {
context.report({
node,
messageId: 'templateLiteral',
});
}
}
},
/**
* Catches string concatenation used to build asset paths, e.g.:
* "/Icons/" + name + ".svg"
*
* Collects the leading static parts (before the first dynamic value)
* to determine the path prefix. If any part carries a known asset
* extension, the expression is flagged.
*/
BinaryExpression(node) {
if (node.operator !== '+') return;
const parts = collectBinaryStringParts(node);
// Collect only the leading static parts; stop at the first dynamic (null) part
const prefixParts = [];
for (const part of parts) {
if (part === null) break;
prefixParts.push(part);
}
const staticPrefix = prefixParts.join('');
if (isExternalUrl(staticPrefix)) return;
const hasExt = parts.some(
(part) => part !== null && containsAssetExtension(part),
);
if (isAbsolutePath(staticPrefix) && hasExt) {
context.report({
node,
messageId: 'templateLiteral',
});
return;
}
if (isPublicRelative(staticPrefix) && hasExt) {
context.report({
node,
messageId: 'relativePublicString',
data: { value: staticPrefix },
});
return;
}
if (isRelativePublicDir(staticPrefix) && hasExt) {
context.report({
node,
messageId: 'relativePublicString',
data: { value: staticPrefix },
});
}
},
/**
* Catches static asset imports that don't go through src/assets/, e.g.:
* import logo from '/public/Icons/logo.svg' ← absolute path
* import logo from '../../public/logo.svg' ← relative into public/
* import logo from '../somewhere/logo.svg' ← outside src/assets/
*
* Valid pattern: import fooUrl from '@/assets/...' or relative within src/assets/
*/
ImportDeclaration(node) {
const src = node.source.value;
if (typeof src !== 'string') return;
if (!hasAssetExtension(src)) return;
if (isAbsolutePath(src)) {
context.report({ node, messageId: 'absoluteImport' });
return;
}
if (isPublicRelative(src)) {
context.report({ node, messageId: 'publicImport' });
return;
}
if (!isValidAssetImport(src)) {
context.report({
node,
messageId: 'invalidAssetImport',
data: { src },
});
}
},
/**
* Same checks as ImportDeclaration but for dynamic imports:
* const logo = await import('/Icons/logo.svg')
*
* Only literal sources are checked; fully dynamic expressions are ignored
* since their paths cannot be statically analysed.
*/
ImportExpression(node) {
const src = node.source;
if (!src || src.type !== 'Literal' || typeof src.value !== 'string') return;
const value = src.value;
if (!hasAssetExtension(value)) return;
if (isAbsolutePath(value)) {
context.report({ node, messageId: 'absoluteImport' });
return;
}
if (isPublicRelative(value)) {
context.report({ node, messageId: 'publicImport' });
return;
}
if (!isValidAssetImport(value)) {
context.report({
node,
messageId: 'invalidAssetImport',
data: { src: value },
});
}
},
};
},
};

View File

@@ -0,0 +1,3 @@
{
"type": "commonjs"
}

View File

@@ -0,0 +1,121 @@
'use strict';
const ALLOWED_ASSET_EXTENSIONS = [
'.svg',
'.png',
'.webp',
'.jpg',
'.jpeg',
'.gif',
];
/**
* Returns true if the string ends with an asset extension.
* e.g. "/Icons/foo.svg" → true, "/Icons/foo.svg.bak" → false
*/
function hasAssetExtension(str) {
if (typeof str !== 'string') return false;
return ALLOWED_ASSET_EXTENSIONS.some((ext) => str.endsWith(ext));
}
// Like hasAssetExtension but also matches mid-string with boundary check,
// e.g. "/foo.svg?v=1" → true, "/icons.svg-dir/" → true (- is non-alphanumeric boundary)
function containsAssetExtension(str) {
if (typeof str !== 'string') return false;
return ALLOWED_ASSET_EXTENSIONS.some((ext) => {
const idx = str.indexOf(ext);
if (idx === -1) return false;
const afterIdx = idx + ext.length;
// Broad boundary (any non-alphanumeric) is intentional — the real guard against
// false positives is the upstream conditions (isAbsolutePath, isRelativePublicDir, etc.)
// that must pass before this is reached. "/icons.svg-dir/" → true (- is a boundary).
return afterIdx >= str.length || /[^a-zA-Z0-9]/.test(str[afterIdx]);
});
}
/**
* Extracts the asset path from a CSS url() wrapper.
* Handles single quotes, double quotes, unquoted, and whitespace variations.
* e.g.
* "url('/Icons/foo.svg')" → "/Icons/foo.svg"
* "url( '../assets/bg.png' )" → "../assets/bg.png"
* "url(/Icons/foo.svg)" → "/Icons/foo.svg"
* Returns null if the string is not a url() wrapper.
*/
function extractUrlPath(str) {
if (typeof str !== 'string') return null;
// Match url( [whitespace] [quote?] path [quote?] [whitespace] )
// Capture group: [^'")\s]+ matches path until quote, closing paren, or whitespace
const match = str.match(/^url\(\s*['"]?([^'")\s]+)['"]?\s*\)$/);
return match ? match[1] : null;
}
/**
* Returns true if the string is an absolute path (starts with /).
* Absolute paths in url() bypass <base href> and fail under any URL prefix.
*/
function isAbsolutePath(str) {
if (typeof str !== 'string') return false;
return str.startsWith('/') && !str.startsWith('//');
}
/**
* Returns true if the path imports from the public/ directory.
* Relative imports into public/ cause asset duplication in dist/.
*/
function isPublicRelative(str) {
if (typeof str !== 'string') return false;
return str.includes('/public/') || str.startsWith('public/');
}
/**
* Returns true if the string is a relative reference into a known public-dir folder.
* e.g. "Icons/foo.svg", `Logos/aws-dark.svg`, "Images/bg.png"
* These bypass Vite's module pipeline even without a leading slash.
*/
const PUBLIC_DIR_SEGMENTS = ['Icons/', 'Images/', 'Logos/', 'svgs/'];
function isRelativePublicDir(str) {
if (typeof str !== 'string') return false;
return PUBLIC_DIR_SEGMENTS.some((seg) => str.startsWith(seg));
}
/**
* Returns true if an asset import path is valid (goes through Vite's module pipeline).
* Valid: @/assets/..., any relative path containing /assets/, or node_modules packages.
* Invalid: absolute paths, public/ dir, or relative paths outside src/assets/.
*/
function isValidAssetImport(str) {
if (typeof str !== 'string') return false;
if (str.startsWith('@/assets/')) return true;
if (str.includes('/assets/')) return true;
// Not starting with . or / means it's a node_modules package — always valid
if (!str.startsWith('.') && !str.startsWith('/')) return true;
return false;
}
/**
* Returns true if the string is an external URL.
* Used to avoid false positives on CDN/API URLs with asset extensions.
*/
function isExternalUrl(str) {
if (typeof str !== 'string') return false;
return (
str.startsWith('http://') ||
str.startsWith('https://') ||
str.startsWith('//')
);
}
module.exports = {
ALLOWED_ASSET_EXTENSIONS,
PUBLIC_DIR_SEGMENTS,
hasAssetExtension,
containsAssetExtension,
extractUrlPath,
isAbsolutePath,
isPublicRelative,
isRelativePublicDir,
isValidAssetImport,
isExternalUrl,
};

View File

@@ -62,6 +62,29 @@
<link data-react-helmet="true" rel="shortcut icon" href="/favicon.ico" />
</head>
<body data-theme="default">
<script>
// Apply theme class synchronously before React renders to prevent flash.
// Mirrors the logic in ThemeProvider (hooks/useDarkMode/index.tsx).
(function () {
try {
var theme = localStorage.getItem('THEME');
var autoSwitch = localStorage.getItem('THEME_AUTO_SWITCH') === 'true';
if (autoSwitch) {
theme = window.matchMedia('(prefers-color-scheme: dark)').matches
? 'dark'
: 'light';
}
if (theme === 'light') {
document.body.classList.add('lightMode');
} else {
// Default to dark when no preference is stored
document.body.classList.add('dark', 'darkMode');
}
} catch (e) {
document.body.classList.add('dark', 'darkMode');
}
})();
</script>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>

View File

@@ -11,7 +11,11 @@ const config: Config.InitialOptions = {
moduleFileExtensions: ['ts', 'tsx', 'js', 'json'],
modulePathIgnorePatterns: ['dist'],
moduleNameMapper: {
'\\.(png|jpg|jpeg|gif|svg|webp|avif|ico|bmp|tiff)$':
'<rootDir>/__mocks__/fileMock.ts',
'^@/(.*)$': '<rootDir>/src/$1',
'\\.(css|less|scss)$': '<rootDir>/__mocks__/cssMock.ts',
'\\.module\\.mjs$': '<rootDir>/__mocks__/cssMock.ts',
'\\.md$': '<rootDir>/__mocks__/cssMock.ts',
'^uplot$': '<rootDir>/__mocks__/uplotMock.ts',
'^@signozhq/resizable$': '<rootDir>/__mocks__/resizableMock.tsx',
@@ -41,7 +45,7 @@ const config: Config.InitialOptions = {
'^.+\\.(js|jsx)$': 'babel-jest',
},
transformIgnorePatterns: [
'node_modules/(?!(lodash-es|react-dnd|core-dnd|@react-dnd|dnd-core|react-dnd-html5-backend|axios|@signozhq/design-tokens|@signozhq/table|@signozhq/calendar|@signozhq/input|@signozhq/popover|@signozhq/button|@signozhq/sonner|@signozhq/*|date-fns|d3-interpolate|d3-color|api|@codemirror|@lezer|@marijn|@grafana|nuqs)/)',
'node_modules/(?!(lodash-es|react-dnd|core-dnd|@react-dnd|dnd-core|react-dnd-html5-backend|axios|@signozhq/design-tokens|@signozhq/table|@signozhq/calendar|@signozhq/input|@signozhq/popover|@signozhq/button|@signozhq/*|date-fns|d3-interpolate|d3-color|api|@codemirror|@lezer|@marijn|@grafana|nuqs)/)',
],
setupFilesAfterEnv: ['<rootDir>/jest.setup.ts'],
testPathIgnorePatterns: ['/node_modules/', '/public/'],

View File

@@ -24,6 +24,30 @@ window.matchMedia =
};
};
// Patch getComputedStyle to handle CSS parsing errors from @signozhq/* packages.
// These packages inject CSS at import time via style-inject / vite-plugin-css-injected-by-js.
// jsdom's nwsapi cannot parse some of the injected selectors (e.g. Tailwind's :animate-in),
// causing SyntaxErrors during getComputedStyle / getByRole calls.
const _origGetComputedStyle = window.getComputedStyle;
window.getComputedStyle = function (
elt: Element,
pseudoElt?: string | null,
): CSSStyleDeclaration {
try {
return _origGetComputedStyle.call(window, elt, pseudoElt);
} catch {
// Return a minimal CSSStyleDeclaration so callers (testing-library, Radix UI)
// see the element as visible and without animations.
return ({
display: '',
visibility: '',
opacity: '1',
animationName: 'none',
getPropertyValue: () => '',
} as unknown) as CSSStyleDeclaration;
}
};
beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());

View File

@@ -10,9 +10,10 @@
"preview": "vite preview",
"prettify": "prettier --write .",
"fmt": "prettier --check .",
"lint": "eslint ./src",
"lint": "eslint ./src && stylelint \"src/**/*.scss\"",
"lint:generated": "eslint ./src/api/generated --fix",
"lint:fix": "eslint ./src --fix",
"lint:styles": "stylelint \"src/**/*.scss\"",
"jest": "jest",
"jest:coverage": "jest --coverage",
"jest:watch": "jest --watch",
@@ -47,26 +48,23 @@
"@radix-ui/react-tooltip": "1.0.7",
"@sentry/react": "8.41.0",
"@sentry/vite-plugin": "2.22.6",
"@signozhq/badge": "0.0.2",
"@signozhq/button": "0.0.2",
"@signozhq/calendar": "0.0.0",
"@signozhq/callout": "0.0.2",
"@signozhq/checkbox": "0.0.2",
"@signozhq/combobox": "0.0.2",
"@signozhq/command": "0.0.0",
"@signozhq/design-tokens": "2.1.1",
"@signozhq/dialog": "^0.0.2",
"@signozhq/drawer": "0.0.4",
"@signozhq/button": "0.0.5",
"@signozhq/calendar": "0.1.1",
"@signozhq/callout": "0.0.4",
"@signozhq/checkbox": "0.0.4",
"@signozhq/combobox": "0.0.4",
"@signozhq/command": "0.0.2",
"@signozhq/design-tokens": "2.1.4",
"@signozhq/dialog": "0.0.4",
"@signozhq/drawer": "0.0.6",
"@signozhq/icons": "0.1.0",
"@signozhq/input": "0.0.2",
"@signozhq/popover": "0.0.0",
"@signozhq/radio-group": "0.0.2",
"@signozhq/resizable": "0.0.0",
"@signozhq/sonner": "0.1.0",
"@signozhq/switch": "0.0.2",
"@signozhq/table": "0.3.7",
"@signozhq/toggle-group": "0.0.1",
"@signozhq/tooltip": "0.0.2",
"@signozhq/input": "0.0.4",
"@signozhq/popover": "0.1.2",
"@signozhq/radio-group": "0.0.4",
"@signozhq/resizable": "0.0.2",
"@signozhq/tabs": "0.0.11",
"@signozhq/table": "0.3.8",
"@signozhq/toggle-group": "0.0.3",
"@signozhq/ui": "0.0.5",
"@tanstack/react-table": "8.21.3",
"@tanstack/react-virtual": "3.13.22",
@@ -229,6 +227,7 @@
"eslint-plugin-prettier": "^4.0.0",
"eslint-plugin-react": "^7.24.0",
"eslint-plugin-react-hooks": "^4.3.0",
"eslint-plugin-rulesdir": "0.2.2",
"eslint-plugin-simple-import-sort": "^7.0.0",
"eslint-plugin-sonarjs": "^0.12.0",
"husky": "^7.0.4",
@@ -244,6 +243,7 @@
"orval": "7.18.0",
"portfinder-sync": "^0.0.2",
"postcss": "8.5.6",
"postcss-scss": "4.0.9",
"prettier": "2.2.1",
"prop-types": "15.8.1",
"react-hooks-testing-library": "0.6.0",
@@ -251,6 +251,8 @@
"redux-mock-store": "1.5.4",
"sass": "1.97.3",
"sharp": "0.34.5",
"stylelint": "17.7.0",
"stylelint-scss": "7.0.0",
"svgo": "4.0.0",
"ts-api-utils": "2.4.0",
"ts-jest": "29.4.6",

View File

@@ -244,12 +244,18 @@ export const ShortcutsPage = Loadable(
() => import(/* webpackChunkName: "ShortcutsPage" */ 'pages/Settings'),
);
export const InstalledIntegrations = Loadable(
export const Integrations = Loadable(
() =>
import(
/* webpackChunkName: "InstalledIntegrations" */ 'pages/IntegrationsModulePage'
),
);
export const IntegrationsDetailsPage = Loadable(
() =>
import(
/* webpackChunkName: "IntegrationsDetailsPage" */ 'pages/IntegrationsDetailsPage'
),
);
export const MessagingQueuesMainPage = Loadable(
() =>

View File

@@ -18,7 +18,8 @@ import {
ForgotPassword,
Home,
InfrastructureMonitoring,
InstalledIntegrations,
Integrations,
IntegrationsDetailsPage,
LicensePage,
ListAllALertsPage,
LiveLogs,
@@ -389,10 +390,17 @@ const routes: AppRoutes[] = [
isPrivate: true,
key: 'WORKSPACE_ACCESS_RESTRICTED',
},
{
path: ROUTES.INTEGRATIONS_DETAIL,
exact: true,
component: IntegrationsDetailsPage,
isPrivate: true,
key: 'INTEGRATIONS_DETAIL',
},
{
path: ROUTES.INTEGRATIONS,
exact: true,
component: InstalledIntegrations,
component: Integrations,
isPrivate: true,
key: 'INTEGRATIONS',
},

View File

@@ -1,19 +0,0 @@
import axios from 'api';
import { ErrorResponse, SuccessResponse } from 'types/api';
const removeAwsIntegrationAccount = async (
accountId: string,
): Promise<SuccessResponse<Record<string, never>> | ErrorResponse> => {
const response = await axios.post(
`/cloud-integrations/aws/accounts/${accountId}/disconnect`,
);
return {
statusCode: 200,
error: null,
message: response.data.status,
payload: response.data.data,
};
};
export default removeAwsIntegrationAccount;

View File

@@ -0,0 +1,97 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {
InvalidateOptions,
QueryClient,
QueryFunction,
QueryKey,
UseQueryOptions,
UseQueryResult,
} from 'react-query';
import { useQuery } from 'react-query';
import type { ErrorType } from '../../../generatedAPIInstance';
import { GeneratedAPIInstance } from '../../../generatedAPIInstance';
import type { GetAlerts200, RenderErrorResponseDTO } from '../sigNoz.schemas';
/**
* This endpoint returns alerts for the organization
* @summary Get alerts
*/
export const getAlerts = (signal?: AbortSignal) => {
return GeneratedAPIInstance<GetAlerts200>({
url: `/api/v1/alerts`,
method: 'GET',
signal,
});
};
export const getGetAlertsQueryKey = () => {
return [`/api/v1/alerts`] as const;
};
export const getGetAlertsQueryOptions = <
TData = Awaited<ReturnType<typeof getAlerts>>,
TError = ErrorType<RenderErrorResponseDTO>
>(options?: {
query?: UseQueryOptions<Awaited<ReturnType<typeof getAlerts>>, TError, TData>;
}) => {
const { query: queryOptions } = options ?? {};
const queryKey = queryOptions?.queryKey ?? getGetAlertsQueryKey();
const queryFn: QueryFunction<Awaited<ReturnType<typeof getAlerts>>> = ({
signal,
}) => getAlerts(signal);
return { queryKey, queryFn, ...queryOptions } as UseQueryOptions<
Awaited<ReturnType<typeof getAlerts>>,
TError,
TData
> & { queryKey: QueryKey };
};
export type GetAlertsQueryResult = NonNullable<
Awaited<ReturnType<typeof getAlerts>>
>;
export type GetAlertsQueryError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Get alerts
*/
export function useGetAlerts<
TData = Awaited<ReturnType<typeof getAlerts>>,
TError = ErrorType<RenderErrorResponseDTO>
>(options?: {
query?: UseQueryOptions<Awaited<ReturnType<typeof getAlerts>>, TError, TData>;
}): UseQueryResult<TData, TError> & { queryKey: QueryKey } {
const queryOptions = getGetAlertsQueryOptions(options);
const query = useQuery(queryOptions) as UseQueryResult<TData, TError> & {
queryKey: QueryKey;
};
query.queryKey = queryOptions.queryKey;
return query;
}
/**
* @summary Get alerts
*/
export const invalidateGetAlerts = async (
queryClient: QueryClient,
options?: InvalidateOptions,
): Promise<QueryClient> => {
await queryClient.invalidateQueries(
{ queryKey: getGetAlertsQueryKey() },
options,
);
return queryClient;
};
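For orientation, a minimal consumer of this generated hook might look like the sketch below. The import path, the component, and the assumption that GetAlerts200 wraps the list in a data field are all illustrative, not part of the generated output.

// Hypothetical usage sketch — not part of the generated file.
import * as React from 'react';
import { useGetAlerts } from './alerts/alerts'; // assumed path

function AlertsCount(): JSX.Element {
  const { data, isLoading, isError } = useGetAlerts();
  if (isLoading) return <span>Loading alerts…</span>;
  if (isError) return <span>Could not load alerts</span>;
  // `data?.data` assumes GetAlerts200 carries the alert list in a `data` field.
  return <span>{data?.data?.length ?? 0} alert rules</span>;
}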

View File

@@ -0,0 +1,646 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {
InvalidateOptions,
MutationFunction,
QueryClient,
QueryFunction,
QueryKey,
UseMutationOptions,
UseMutationResult,
UseQueryOptions,
UseQueryResult,
} from 'react-query';
import { useMutation, useQuery } from 'react-query';
import type { BodyType, ErrorType } from '../../../generatedAPIInstance';
import { GeneratedAPIInstance } from '../../../generatedAPIInstance';
import type {
ConfigReceiverDTO,
CreateChannel201,
DeleteChannelByIDPathParameters,
GetChannelByID200,
GetChannelByIDPathParameters,
ListChannels200,
RenderErrorResponseDTO,
UpdateChannelByIDPathParameters,
} from '../sigNoz.schemas';
/**
* This endpoint lists all notification channels for the organization
* @summary List notification channels
*/
export const listChannels = (signal?: AbortSignal) => {
return GeneratedAPIInstance<ListChannels200>({
url: `/api/v1/channels`,
method: 'GET',
signal,
});
};
export const getListChannelsQueryKey = () => {
return [`/api/v1/channels`] as const;
};
export const getListChannelsQueryOptions = <
TData = Awaited<ReturnType<typeof listChannels>>,
TError = ErrorType<RenderErrorResponseDTO>
>(options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof listChannels>>,
TError,
TData
>;
}) => {
const { query: queryOptions } = options ?? {};
const queryKey = queryOptions?.queryKey ?? getListChannelsQueryKey();
const queryFn: QueryFunction<Awaited<ReturnType<typeof listChannels>>> = ({
signal,
}) => listChannels(signal);
return { queryKey, queryFn, ...queryOptions } as UseQueryOptions<
Awaited<ReturnType<typeof listChannels>>,
TError,
TData
> & { queryKey: QueryKey };
};
export type ListChannelsQueryResult = NonNullable<
Awaited<ReturnType<typeof listChannels>>
>;
export type ListChannelsQueryError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary List notification channels
*/
export function useListChannels<
TData = Awaited<ReturnType<typeof listChannels>>,
TError = ErrorType<RenderErrorResponseDTO>
>(options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof listChannels>>,
TError,
TData
>;
}): UseQueryResult<TData, TError> & { queryKey: QueryKey } {
const queryOptions = getListChannelsQueryOptions(options);
const query = useQuery(queryOptions) as UseQueryResult<TData, TError> & {
queryKey: QueryKey;
};
query.queryKey = queryOptions.queryKey;
return query;
}
/**
* @summary List notification channels
*/
export const invalidateListChannels = async (
queryClient: QueryClient,
options?: InvalidateOptions,
): Promise<QueryClient> => {
await queryClient.invalidateQueries(
{ queryKey: getListChannelsQueryKey() },
options,
);
return queryClient;
};
/**
* This endpoint creates a notification channel
* @summary Create notification channel
*/
export const createChannel = (
configReceiverDTO: BodyType<ConfigReceiverDTO>,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<CreateChannel201>({
url: `/api/v1/channels`,
method: 'POST',
headers: { 'Content-Type': 'application/json' },
data: configReceiverDTO,
signal,
});
};
export const getCreateChannelMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createChannel>>,
TError,
{ data: BodyType<ConfigReceiverDTO> },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof createChannel>>,
TError,
{ data: BodyType<ConfigReceiverDTO> },
TContext
> => {
const mutationKey = ['createChannel'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof createChannel>>,
{ data: BodyType<ConfigReceiverDTO> }
> = (props) => {
const { data } = props ?? {};
return createChannel(data);
};
return { mutationFn, ...mutationOptions };
};
export type CreateChannelMutationResult = NonNullable<
Awaited<ReturnType<typeof createChannel>>
>;
export type CreateChannelMutationBody = BodyType<ConfigReceiverDTO>;
export type CreateChannelMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Create notification channel
*/
export const useCreateChannel = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createChannel>>,
TError,
{ data: BodyType<ConfigReceiverDTO> },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof createChannel>>,
TError,
{ data: BodyType<ConfigReceiverDTO> },
TContext
> => {
const mutationOptions = getCreateChannelMutationOptions(options);
return useMutation(mutationOptions);
};
/**
* This endpoint deletes a notification channel by ID
* @summary Delete notification channel
*/
export const deleteChannelByID = ({ id }: DeleteChannelByIDPathParameters) => {
return GeneratedAPIInstance<void>({
url: `/api/v1/channels/${id}`,
method: 'DELETE',
});
};
export const getDeleteChannelByIDMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof deleteChannelByID>>,
TError,
{ pathParams: DeleteChannelByIDPathParameters },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof deleteChannelByID>>,
TError,
{ pathParams: DeleteChannelByIDPathParameters },
TContext
> => {
const mutationKey = ['deleteChannelByID'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof deleteChannelByID>>,
{ pathParams: DeleteChannelByIDPathParameters }
> = (props) => {
const { pathParams } = props ?? {};
return deleteChannelByID(pathParams);
};
return { mutationFn, ...mutationOptions };
};
export type DeleteChannelByIDMutationResult = NonNullable<
Awaited<ReturnType<typeof deleteChannelByID>>
>;
export type DeleteChannelByIDMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Delete notification channel
*/
export const useDeleteChannelByID = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof deleteChannelByID>>,
TError,
{ pathParams: DeleteChannelByIDPathParameters },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof deleteChannelByID>>,
TError,
{ pathParams: DeleteChannelByIDPathParameters },
TContext
> => {
const mutationOptions = getDeleteChannelByIDMutationOptions(options);
return useMutation(mutationOptions);
};
/**
* This endpoint returns a notification channel by ID
* @summary Get notification channel by ID
*/
export const getChannelByID = (
{ id }: GetChannelByIDPathParameters,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<GetChannelByID200>({
url: `/api/v1/channels/${id}`,
method: 'GET',
signal,
});
};
export const getGetChannelByIDQueryKey = ({
id,
}: GetChannelByIDPathParameters) => {
return [`/api/v1/channels/${id}`] as const;
};
export const getGetChannelByIDQueryOptions = <
TData = Awaited<ReturnType<typeof getChannelByID>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ id }: GetChannelByIDPathParameters,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getChannelByID>>,
TError,
TData
>;
},
) => {
const { query: queryOptions } = options ?? {};
const queryKey = queryOptions?.queryKey ?? getGetChannelByIDQueryKey({ id });
const queryFn: QueryFunction<Awaited<ReturnType<typeof getChannelByID>>> = ({
signal,
}) => getChannelByID({ id }, signal);
return {
queryKey,
queryFn,
enabled: !!id,
...queryOptions,
} as UseQueryOptions<
Awaited<ReturnType<typeof getChannelByID>>,
TError,
TData
> & { queryKey: QueryKey };
};
export type GetChannelByIDQueryResult = NonNullable<
Awaited<ReturnType<typeof getChannelByID>>
>;
export type GetChannelByIDQueryError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Get notification channel by ID
*/
export function useGetChannelByID<
TData = Awaited<ReturnType<typeof getChannelByID>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ id }: GetChannelByIDPathParameters,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getChannelByID>>,
TError,
TData
>;
},
): UseQueryResult<TData, TError> & { queryKey: QueryKey } {
const queryOptions = getGetChannelByIDQueryOptions({ id }, options);
const query = useQuery(queryOptions) as UseQueryResult<TData, TError> & {
queryKey: QueryKey;
};
query.queryKey = queryOptions.queryKey;
return query;
}
/**
* @summary Get notification channel by ID
*/
export const invalidateGetChannelByID = async (
queryClient: QueryClient,
{ id }: GetChannelByIDPathParameters,
options?: InvalidateOptions,
): Promise<QueryClient> => {
await queryClient.invalidateQueries(
{ queryKey: getGetChannelByIDQueryKey({ id }) },
options,
);
return queryClient;
};
/**
* This endpoint updates a notification channel by ID
* @summary Update notification channel
*/
export const updateChannelByID = (
{ id }: UpdateChannelByIDPathParameters,
configReceiverDTO: BodyType<ConfigReceiverDTO>,
) => {
return GeneratedAPIInstance<void>({
url: `/api/v1/channels/${id}`,
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
data: configReceiverDTO,
});
};
export const getUpdateChannelByIDMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof updateChannelByID>>,
TError,
{
pathParams: UpdateChannelByIDPathParameters;
data: BodyType<ConfigReceiverDTO>;
},
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof updateChannelByID>>,
TError,
{
pathParams: UpdateChannelByIDPathParameters;
data: BodyType<ConfigReceiverDTO>;
},
TContext
> => {
const mutationKey = ['updateChannelByID'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof updateChannelByID>>,
{
pathParams: UpdateChannelByIDPathParameters;
data: BodyType<ConfigReceiverDTO>;
}
> = (props) => {
const { pathParams, data } = props ?? {};
return updateChannelByID(pathParams, data);
};
return { mutationFn, ...mutationOptions };
};
export type UpdateChannelByIDMutationResult = NonNullable<
Awaited<ReturnType<typeof updateChannelByID>>
>;
export type UpdateChannelByIDMutationBody = BodyType<ConfigReceiverDTO>;
export type UpdateChannelByIDMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Update notification channel
*/
export const useUpdateChannelByID = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof updateChannelByID>>,
TError,
{
pathParams: UpdateChannelByIDPathParameters;
data: BodyType<ConfigReceiverDTO>;
},
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof updateChannelByID>>,
TError,
{
pathParams: UpdateChannelByIDPathParameters;
data: BodyType<ConfigReceiverDTO>;
},
TContext
> => {
const mutationOptions = getUpdateChannelByIDMutationOptions(options);
return useMutation(mutationOptions);
};
/**
* This endpoint tests a notification channel by sending a test notification
* @summary Test notification channel
*/
export const testChannel = (
configReceiverDTO: BodyType<ConfigReceiverDTO>,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<void>({
url: `/api/v1/channels/test`,
method: 'POST',
headers: { 'Content-Type': 'application/json' },
data: configReceiverDTO,
signal,
});
};
export const getTestChannelMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof testChannel>>,
TError,
{ data: BodyType<ConfigReceiverDTO> },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof testChannel>>,
TError,
{ data: BodyType<ConfigReceiverDTO> },
TContext
> => {
const mutationKey = ['testChannel'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof testChannel>>,
{ data: BodyType<ConfigReceiverDTO> }
> = (props) => {
const { data } = props ?? {};
return testChannel(data);
};
return { mutationFn, ...mutationOptions };
};
export type TestChannelMutationResult = NonNullable<
Awaited<ReturnType<typeof testChannel>>
>;
export type TestChannelMutationBody = BodyType<ConfigReceiverDTO>;
export type TestChannelMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Test notification channel
*/
export const useTestChannel = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof testChannel>>,
TError,
{ data: BodyType<ConfigReceiverDTO> },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof testChannel>>,
TError,
{ data: BodyType<ConfigReceiverDTO> },
TContext
> => {
const mutationOptions = getTestChannelMutationOptions(options);
return useMutation(mutationOptions);
};
/**
* Deprecated: use /api/v1/channels/test instead
* @deprecated
* @summary Test notification channel (deprecated)
*/
export const testChannelDeprecated = (
configReceiverDTO: BodyType<ConfigReceiverDTO>,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<void>({
url: `/api/v1/testChannel`,
method: 'POST',
headers: { 'Content-Type': 'application/json' },
data: configReceiverDTO,
signal,
});
};
export const getTestChannelDeprecatedMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof testChannelDeprecated>>,
TError,
{ data: BodyType<ConfigReceiverDTO> },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof testChannelDeprecated>>,
TError,
{ data: BodyType<ConfigReceiverDTO> },
TContext
> => {
const mutationKey = ['testChannelDeprecated'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof testChannelDeprecated>>,
{ data: BodyType<ConfigReceiverDTO> }
> = (props) => {
const { data } = props ?? {};
return testChannelDeprecated(data);
};
return { mutationFn, ...mutationOptions };
};
export type TestChannelDeprecatedMutationResult = NonNullable<
Awaited<ReturnType<typeof testChannelDeprecated>>
>;
export type TestChannelDeprecatedMutationBody = BodyType<ConfigReceiverDTO>;
export type TestChannelDeprecatedMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @deprecated
* @summary Test notification channel (deprecated)
*/
export const useTestChannelDeprecated = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof testChannelDeprecated>>,
TError,
{ data: BodyType<ConfigReceiverDTO> },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof testChannelDeprecated>>,
TError,
{ data: BodyType<ConfigReceiverDTO> },
TContext
> => {
const mutationOptions = getTestChannelDeprecatedMutationOptions(options);
return useMutation(mutationOptions);
};
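A typical call site pairs the create mutation with the list invalidator so cached channel lists refresh after a successful create. A sketch under assumed import paths (the helper name is illustrative):

// Hypothetical usage sketch — wires useCreateChannel to invalidateListChannels.
import { useQueryClient } from 'react-query';
import { invalidateListChannels, useCreateChannel } from './channels/channels'; // assumed path

export function useCreateChannelAndRefresh() {
  const queryClient = useQueryClient();
  return useCreateChannel({
    mutation: {
      // Drop the cached /api/v1/channels list once the create succeeds.
      onSuccess: () => invalidateListChannels(queryClient),
    },
  });
}

// In a component (payload shape per ConfigReceiverDTO):
// const { mutate } = useCreateChannelAndRefresh();
// mutate({ data: receiverConfig });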

View File

@@ -0,0 +1,496 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {
InvalidateOptions,
MutationFunction,
QueryClient,
QueryFunction,
QueryKey,
UseMutationOptions,
UseMutationResult,
UseQueryOptions,
UseQueryResult,
} from 'react-query';
import { useMutation, useQuery } from 'react-query';
import type { BodyType, ErrorType } from '../../../generatedAPIInstance';
import { GeneratedAPIInstance } from '../../../generatedAPIInstance';
import type {
CreateDowntimeSchedule201,
DeleteDowntimeScheduleByIDPathParameters,
GetDowntimeScheduleByID200,
GetDowntimeScheduleByIDPathParameters,
ListDowntimeSchedules200,
ListDowntimeSchedulesParams,
RenderErrorResponseDTO,
RuletypesPostablePlannedMaintenanceDTO,
UpdateDowntimeScheduleByIDPathParameters,
} from '../sigNoz.schemas';
/**
* This endpoint lists all planned maintenance / downtime schedules
* @summary List downtime schedules
*/
export const listDowntimeSchedules = (
params?: ListDowntimeSchedulesParams,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<ListDowntimeSchedules200>({
url: `/api/v1/downtime_schedules`,
method: 'GET',
params,
signal,
});
};
export const getListDowntimeSchedulesQueryKey = (
params?: ListDowntimeSchedulesParams,
) => {
return [`/api/v1/downtime_schedules`, ...(params ? [params] : [])] as const;
};
export const getListDowntimeSchedulesQueryOptions = <
TData = Awaited<ReturnType<typeof listDowntimeSchedules>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
params?: ListDowntimeSchedulesParams,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof listDowntimeSchedules>>,
TError,
TData
>;
},
) => {
const { query: queryOptions } = options ?? {};
const queryKey =
queryOptions?.queryKey ?? getListDowntimeSchedulesQueryKey(params);
const queryFn: QueryFunction<
Awaited<ReturnType<typeof listDowntimeSchedules>>
> = ({ signal }) => listDowntimeSchedules(params, signal);
return { queryKey, queryFn, ...queryOptions } as UseQueryOptions<
Awaited<ReturnType<typeof listDowntimeSchedules>>,
TError,
TData
> & { queryKey: QueryKey };
};
export type ListDowntimeSchedulesQueryResult = NonNullable<
Awaited<ReturnType<typeof listDowntimeSchedules>>
>;
export type ListDowntimeSchedulesQueryError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary List downtime schedules
*/
export function useListDowntimeSchedules<
TData = Awaited<ReturnType<typeof listDowntimeSchedules>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
params?: ListDowntimeSchedulesParams,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof listDowntimeSchedules>>,
TError,
TData
>;
},
): UseQueryResult<TData, TError> & { queryKey: QueryKey } {
const queryOptions = getListDowntimeSchedulesQueryOptions(params, options);
const query = useQuery(queryOptions) as UseQueryResult<TData, TError> & {
queryKey: QueryKey;
};
query.queryKey = queryOptions.queryKey;
return query;
}
/**
* @summary List downtime schedules
*/
export const invalidateListDowntimeSchedules = async (
queryClient: QueryClient,
params?: ListDowntimeSchedulesParams,
options?: InvalidateOptions,
): Promise<QueryClient> => {
await queryClient.invalidateQueries(
{ queryKey: getListDowntimeSchedulesQueryKey(params) },
options,
);
return queryClient;
};
/**
* This endpoint creates a new planned maintenance / downtime schedule
* @summary Create downtime schedule
*/
export const createDowntimeSchedule = (
ruletypesPostablePlannedMaintenanceDTO: BodyType<RuletypesPostablePlannedMaintenanceDTO>,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<CreateDowntimeSchedule201>({
url: `/api/v1/downtime_schedules`,
method: 'POST',
headers: { 'Content-Type': 'application/json' },
data: ruletypesPostablePlannedMaintenanceDTO,
signal,
});
};
export const getCreateDowntimeScheduleMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createDowntimeSchedule>>,
TError,
{ data: BodyType<RuletypesPostablePlannedMaintenanceDTO> },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof createDowntimeSchedule>>,
TError,
{ data: BodyType<RuletypesPostablePlannedMaintenanceDTO> },
TContext
> => {
const mutationKey = ['createDowntimeSchedule'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof createDowntimeSchedule>>,
{ data: BodyType<RuletypesPostablePlannedMaintenanceDTO> }
> = (props) => {
const { data } = props ?? {};
return createDowntimeSchedule(data);
};
return { mutationFn, ...mutationOptions };
};
export type CreateDowntimeScheduleMutationResult = NonNullable<
Awaited<ReturnType<typeof createDowntimeSchedule>>
>;
export type CreateDowntimeScheduleMutationBody = BodyType<RuletypesPostablePlannedMaintenanceDTO>;
export type CreateDowntimeScheduleMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Create downtime schedule
*/
export const useCreateDowntimeSchedule = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createDowntimeSchedule>>,
TError,
{ data: BodyType<RuletypesPostablePlannedMaintenanceDTO> },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof createDowntimeSchedule>>,
TError,
{ data: BodyType<RuletypesPostablePlannedMaintenanceDTO> },
TContext
> => {
const mutationOptions = getCreateDowntimeScheduleMutationOptions(options);
return useMutation(mutationOptions);
};
/**
* This endpoint deletes a downtime schedule by ID
* @summary Delete downtime schedule
*/
export const deleteDowntimeScheduleByID = ({
id,
}: DeleteDowntimeScheduleByIDPathParameters) => {
return GeneratedAPIInstance<void>({
url: `/api/v1/downtime_schedules/${id}`,
method: 'DELETE',
});
};
export const getDeleteDowntimeScheduleByIDMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof deleteDowntimeScheduleByID>>,
TError,
{ pathParams: DeleteDowntimeScheduleByIDPathParameters },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof deleteDowntimeScheduleByID>>,
TError,
{ pathParams: DeleteDowntimeScheduleByIDPathParameters },
TContext
> => {
const mutationKey = ['deleteDowntimeScheduleByID'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof deleteDowntimeScheduleByID>>,
{ pathParams: DeleteDowntimeScheduleByIDPathParameters }
> = (props) => {
const { pathParams } = props ?? {};
return deleteDowntimeScheduleByID(pathParams);
};
return { mutationFn, ...mutationOptions };
};
export type DeleteDowntimeScheduleByIDMutationResult = NonNullable<
Awaited<ReturnType<typeof deleteDowntimeScheduleByID>>
>;
export type DeleteDowntimeScheduleByIDMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Delete downtime schedule
*/
export const useDeleteDowntimeScheduleByID = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof deleteDowntimeScheduleByID>>,
TError,
{ pathParams: DeleteDowntimeScheduleByIDPathParameters },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof deleteDowntimeScheduleByID>>,
TError,
{ pathParams: DeleteDowntimeScheduleByIDPathParameters },
TContext
> => {
const mutationOptions = getDeleteDowntimeScheduleByIDMutationOptions(options);
return useMutation(mutationOptions);
};
/**
* This endpoint returns a downtime schedule by ID
* @summary Get downtime schedule by ID
*/
export const getDowntimeScheduleByID = (
{ id }: GetDowntimeScheduleByIDPathParameters,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<GetDowntimeScheduleByID200>({
url: `/api/v1/downtime_schedules/${id}`,
method: 'GET',
signal,
});
};
export const getGetDowntimeScheduleByIDQueryKey = ({
id,
}: GetDowntimeScheduleByIDPathParameters) => {
return [`/api/v1/downtime_schedules/${id}`] as const;
};
export const getGetDowntimeScheduleByIDQueryOptions = <
TData = Awaited<ReturnType<typeof getDowntimeScheduleByID>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ id }: GetDowntimeScheduleByIDPathParameters,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getDowntimeScheduleByID>>,
TError,
TData
>;
},
) => {
const { query: queryOptions } = options ?? {};
const queryKey =
queryOptions?.queryKey ?? getGetDowntimeScheduleByIDQueryKey({ id });
const queryFn: QueryFunction<
Awaited<ReturnType<typeof getDowntimeScheduleByID>>
> = ({ signal }) => getDowntimeScheduleByID({ id }, signal);
return {
queryKey,
queryFn,
enabled: !!id,
...queryOptions,
} as UseQueryOptions<
Awaited<ReturnType<typeof getDowntimeScheduleByID>>,
TError,
TData
> & { queryKey: QueryKey };
};
export type GetDowntimeScheduleByIDQueryResult = NonNullable<
Awaited<ReturnType<typeof getDowntimeScheduleByID>>
>;
export type GetDowntimeScheduleByIDQueryError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Get downtime schedule by ID
*/
export function useGetDowntimeScheduleByID<
TData = Awaited<ReturnType<typeof getDowntimeScheduleByID>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ id }: GetDowntimeScheduleByIDPathParameters,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getDowntimeScheduleByID>>,
TError,
TData
>;
},
): UseQueryResult<TData, TError> & { queryKey: QueryKey } {
const queryOptions = getGetDowntimeScheduleByIDQueryOptions({ id }, options);
const query = useQuery(queryOptions) as UseQueryResult<TData, TError> & {
queryKey: QueryKey;
};
query.queryKey = queryOptions.queryKey;
return query;
}
/**
* @summary Get downtime schedule by ID
*/
export const invalidateGetDowntimeScheduleByID = async (
queryClient: QueryClient,
{ id }: GetDowntimeScheduleByIDPathParameters,
options?: InvalidateOptions,
): Promise<QueryClient> => {
await queryClient.invalidateQueries(
{ queryKey: getGetDowntimeScheduleByIDQueryKey({ id }) },
options,
);
return queryClient;
};
/**
* This endpoint updates a downtime schedule by ID
* @summary Update downtime schedule
*/
export const updateDowntimeScheduleByID = (
{ id }: UpdateDowntimeScheduleByIDPathParameters,
ruletypesPostablePlannedMaintenanceDTO: BodyType<RuletypesPostablePlannedMaintenanceDTO>,
) => {
return GeneratedAPIInstance<void>({
url: `/api/v1/downtime_schedules/${id}`,
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
data: ruletypesPostablePlannedMaintenanceDTO,
});
};
export const getUpdateDowntimeScheduleByIDMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof updateDowntimeScheduleByID>>,
TError,
{
pathParams: UpdateDowntimeScheduleByIDPathParameters;
data: BodyType<RuletypesPostablePlannedMaintenanceDTO>;
},
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof updateDowntimeScheduleByID>>,
TError,
{
pathParams: UpdateDowntimeScheduleByIDPathParameters;
data: BodyType<RuletypesPostablePlannedMaintenanceDTO>;
},
TContext
> => {
const mutationKey = ['updateDowntimeScheduleByID'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof updateDowntimeScheduleByID>>,
{
pathParams: UpdateDowntimeScheduleByIDPathParameters;
data: BodyType<RuletypesPostablePlannedMaintenanceDTO>;
}
> = (props) => {
const { pathParams, data } = props ?? {};
return updateDowntimeScheduleByID(pathParams, data);
};
return { mutationFn, ...mutationOptions };
};
export type UpdateDowntimeScheduleByIDMutationResult = NonNullable<
Awaited<ReturnType<typeof updateDowntimeScheduleByID>>
>;
export type UpdateDowntimeScheduleByIDMutationBody = BodyType<RuletypesPostablePlannedMaintenanceDTO>;
export type UpdateDowntimeScheduleByIDMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Update downtime schedule
*/
export const useUpdateDowntimeScheduleByID = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof updateDowntimeScheduleByID>>,
TError,
{
pathParams: UpdateDowntimeScheduleByIDPathParameters;
data: BodyType<RuletypesPostablePlannedMaintenanceDTO>;
},
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof updateDowntimeScheduleByID>>,
TError,
{
pathParams: UpdateDowntimeScheduleByIDPathParameters;
data: BodyType<RuletypesPostablePlannedMaintenanceDTO>;
},
TContext
> => {
const mutationOptions = getUpdateDowntimeScheduleByIDMutationOptions(options);
return useMutation(mutationOptions);
};
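Note that the update-style mutations take a single variables object carrying both pathParams and the body. A hedged sketch (helper name, import paths, and the string id type are assumptions):

// Hypothetical usage sketch — variables carry pathParams and data together.
import { useUpdateDowntimeScheduleByID } from './downtime_schedules/downtime_schedules'; // assumed path
import type { RuletypesPostablePlannedMaintenanceDTO } from '../sigNoz.schemas'; // assumed path

function useSaveSchedule(): (
  id: string,
  payload: RuletypesPostablePlannedMaintenanceDTO,
) => void {
  const { mutate } = useUpdateDowntimeScheduleByID();
  // Assumes UpdateDowntimeScheduleByIDPathParameters is { id: string }.
  return (id, payload) => mutate({ pathParams: { id }, data: payload });
}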

View File

@@ -0,0 +1,482 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {
InvalidateOptions,
MutationFunction,
QueryClient,
QueryFunction,
QueryKey,
UseMutationOptions,
UseMutationResult,
UseQueryOptions,
UseQueryResult,
} from 'react-query';
import { useMutation, useQuery } from 'react-query';
import type { BodyType, ErrorType } from '../../../generatedAPIInstance';
import { GeneratedAPIInstance } from '../../../generatedAPIInstance';
import type {
AlertmanagertypesPostableRoutePolicyDTO,
CreateRoutePolicy201,
DeleteRoutePolicyByIDPathParameters,
GetAllRoutePolicies200,
GetRoutePolicyByID200,
GetRoutePolicyByIDPathParameters,
RenderErrorResponseDTO,
UpdateRoutePolicy200,
UpdateRoutePolicyPathParameters,
} from '../sigNoz.schemas';
/**
* This endpoint lists all route policies for the organization
* @summary List route policies
*/
export const getAllRoutePolicies = (signal?: AbortSignal) => {
return GeneratedAPIInstance<GetAllRoutePolicies200>({
url: `/api/v1/route_policies`,
method: 'GET',
signal,
});
};
export const getGetAllRoutePoliciesQueryKey = () => {
return [`/api/v1/route_policies`] as const;
};
export const getGetAllRoutePoliciesQueryOptions = <
TData = Awaited<ReturnType<typeof getAllRoutePolicies>>,
TError = ErrorType<RenderErrorResponseDTO>
>(options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getAllRoutePolicies>>,
TError,
TData
>;
}) => {
const { query: queryOptions } = options ?? {};
const queryKey = queryOptions?.queryKey ?? getGetAllRoutePoliciesQueryKey();
const queryFn: QueryFunction<
Awaited<ReturnType<typeof getAllRoutePolicies>>
> = ({ signal }) => getAllRoutePolicies(signal);
return { queryKey, queryFn, ...queryOptions } as UseQueryOptions<
Awaited<ReturnType<typeof getAllRoutePolicies>>,
TError,
TData
> & { queryKey: QueryKey };
};
export type GetAllRoutePoliciesQueryResult = NonNullable<
Awaited<ReturnType<typeof getAllRoutePolicies>>
>;
export type GetAllRoutePoliciesQueryError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary List route policies
*/
export function useGetAllRoutePolicies<
TData = Awaited<ReturnType<typeof getAllRoutePolicies>>,
TError = ErrorType<RenderErrorResponseDTO>
>(options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getAllRoutePolicies>>,
TError,
TData
>;
}): UseQueryResult<TData, TError> & { queryKey: QueryKey } {
const queryOptions = getGetAllRoutePoliciesQueryOptions(options);
const query = useQuery(queryOptions) as UseQueryResult<TData, TError> & {
queryKey: QueryKey;
};
query.queryKey = queryOptions.queryKey;
return query;
}
/**
* @summary List route policies
*/
export const invalidateGetAllRoutePolicies = async (
queryClient: QueryClient,
options?: InvalidateOptions,
): Promise<QueryClient> => {
await queryClient.invalidateQueries(
{ queryKey: getGetAllRoutePoliciesQueryKey() },
options,
);
return queryClient;
};
/**
* This endpoint creates a route policy
* @summary Create route policy
*/
export const createRoutePolicy = (
alertmanagertypesPostableRoutePolicyDTO: BodyType<AlertmanagertypesPostableRoutePolicyDTO>,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<CreateRoutePolicy201>({
url: `/api/v1/route_policies`,
method: 'POST',
headers: { 'Content-Type': 'application/json' },
data: alertmanagertypesPostableRoutePolicyDTO,
signal,
});
};
export const getCreateRoutePolicyMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createRoutePolicy>>,
TError,
{ data: BodyType<AlertmanagertypesPostableRoutePolicyDTO> },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof createRoutePolicy>>,
TError,
{ data: BodyType<AlertmanagertypesPostableRoutePolicyDTO> },
TContext
> => {
const mutationKey = ['createRoutePolicy'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof createRoutePolicy>>,
{ data: BodyType<AlertmanagertypesPostableRoutePolicyDTO> }
> = (props) => {
const { data } = props ?? {};
return createRoutePolicy(data);
};
return { mutationFn, ...mutationOptions };
};
export type CreateRoutePolicyMutationResult = NonNullable<
Awaited<ReturnType<typeof createRoutePolicy>>
>;
export type CreateRoutePolicyMutationBody = BodyType<AlertmanagertypesPostableRoutePolicyDTO>;
export type CreateRoutePolicyMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Create route policy
*/
export const useCreateRoutePolicy = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createRoutePolicy>>,
TError,
{ data: BodyType<AlertmanagertypesPostableRoutePolicyDTO> },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof createRoutePolicy>>,
TError,
{ data: BodyType<AlertmanagertypesPostableRoutePolicyDTO> },
TContext
> => {
const mutationOptions = getCreateRoutePolicyMutationOptions(options);
return useMutation(mutationOptions);
};
/**
* This endpoint deletes a route policy by ID
* @summary Delete route policy
*/
export const deleteRoutePolicyByID = ({
id,
}: DeleteRoutePolicyByIDPathParameters) => {
return GeneratedAPIInstance<void>({
url: `/api/v1/route_policies/${id}`,
method: 'DELETE',
});
};
export const getDeleteRoutePolicyByIDMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof deleteRoutePolicyByID>>,
TError,
{ pathParams: DeleteRoutePolicyByIDPathParameters },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof deleteRoutePolicyByID>>,
TError,
{ pathParams: DeleteRoutePolicyByIDPathParameters },
TContext
> => {
const mutationKey = ['deleteRoutePolicyByID'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof deleteRoutePolicyByID>>,
{ pathParams: DeleteRoutePolicyByIDPathParameters }
> = (props) => {
const { pathParams } = props ?? {};
return deleteRoutePolicyByID(pathParams);
};
return { mutationFn, ...mutationOptions };
};
export type DeleteRoutePolicyByIDMutationResult = NonNullable<
Awaited<ReturnType<typeof deleteRoutePolicyByID>>
>;
export type DeleteRoutePolicyByIDMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Delete route policy
*/
export const useDeleteRoutePolicyByID = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof deleteRoutePolicyByID>>,
TError,
{ pathParams: DeleteRoutePolicyByIDPathParameters },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof deleteRoutePolicyByID>>,
TError,
{ pathParams: DeleteRoutePolicyByIDPathParameters },
TContext
> => {
const mutationOptions = getDeleteRoutePolicyByIDMutationOptions(options);
return useMutation(mutationOptions);
};
/**
* This endpoint returns a route policy by ID
* @summary Get route policy by ID
*/
export const getRoutePolicyByID = (
{ id }: GetRoutePolicyByIDPathParameters,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<GetRoutePolicyByID200>({
url: `/api/v1/route_policies/${id}`,
method: 'GET',
signal,
});
};
export const getGetRoutePolicyByIDQueryKey = ({
id,
}: GetRoutePolicyByIDPathParameters) => {
return [`/api/v1/route_policies/${id}`] as const;
};
export const getGetRoutePolicyByIDQueryOptions = <
TData = Awaited<ReturnType<typeof getRoutePolicyByID>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ id }: GetRoutePolicyByIDPathParameters,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getRoutePolicyByID>>,
TError,
TData
>;
},
) => {
const { query: queryOptions } = options ?? {};
const queryKey =
queryOptions?.queryKey ?? getGetRoutePolicyByIDQueryKey({ id });
const queryFn: QueryFunction<
Awaited<ReturnType<typeof getRoutePolicyByID>>
> = ({ signal }) => getRoutePolicyByID({ id }, signal);
return {
queryKey,
queryFn,
enabled: !!id,
...queryOptions,
} as UseQueryOptions<
Awaited<ReturnType<typeof getRoutePolicyByID>>,
TError,
TData
> & { queryKey: QueryKey };
};
export type GetRoutePolicyByIDQueryResult = NonNullable<
Awaited<ReturnType<typeof getRoutePolicyByID>>
>;
export type GetRoutePolicyByIDQueryError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Get route policy by ID
*/
export function useGetRoutePolicyByID<
TData = Awaited<ReturnType<typeof getRoutePolicyByID>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ id }: GetRoutePolicyByIDPathParameters,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getRoutePolicyByID>>,
TError,
TData
>;
},
): UseQueryResult<TData, TError> & { queryKey: QueryKey } {
const queryOptions = getGetRoutePolicyByIDQueryOptions({ id }, options);
const query = useQuery(queryOptions) as UseQueryResult<TData, TError> & {
queryKey: QueryKey;
};
query.queryKey = queryOptions.queryKey;
return query;
}
/**
* @summary Get route policy by ID
*/
export const invalidateGetRoutePolicyByID = async (
queryClient: QueryClient,
{ id }: GetRoutePolicyByIDPathParameters,
options?: InvalidateOptions,
): Promise<QueryClient> => {
await queryClient.invalidateQueries(
{ queryKey: getGetRoutePolicyByIDQueryKey({ id }) },
options,
);
return queryClient;
};
/**
* This endpoint updates a route policy by ID
* @summary Update route policy
*/
export const updateRoutePolicy = (
{ id }: UpdateRoutePolicyPathParameters,
alertmanagertypesPostableRoutePolicyDTO: BodyType<AlertmanagertypesPostableRoutePolicyDTO>,
) => {
return GeneratedAPIInstance<UpdateRoutePolicy200>({
url: `/api/v1/route_policies/${id}`,
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
data: alertmanagertypesPostableRoutePolicyDTO,
});
};
export const getUpdateRoutePolicyMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof updateRoutePolicy>>,
TError,
{
pathParams: UpdateRoutePolicyPathParameters;
data: BodyType<AlertmanagertypesPostableRoutePolicyDTO>;
},
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof updateRoutePolicy>>,
TError,
{
pathParams: UpdateRoutePolicyPathParameters;
data: BodyType<AlertmanagertypesPostableRoutePolicyDTO>;
},
TContext
> => {
const mutationKey = ['updateRoutePolicy'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof updateRoutePolicy>>,
{
pathParams: UpdateRoutePolicyPathParameters;
data: BodyType<AlertmanagertypesPostableRoutePolicyDTO>;
}
> = (props) => {
const { pathParams, data } = props ?? {};
return updateRoutePolicy(pathParams, data);
};
return { mutationFn, ...mutationOptions };
};
export type UpdateRoutePolicyMutationResult = NonNullable<
Awaited<ReturnType<typeof updateRoutePolicy>>
>;
export type UpdateRoutePolicyMutationBody = BodyType<AlertmanagertypesPostableRoutePolicyDTO>;
export type UpdateRoutePolicyMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Update route policy
*/
export const useUpdateRoutePolicy = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof updateRoutePolicy>>,
TError,
{
pathParams: UpdateRoutePolicyPathParameters;
data: BodyType<AlertmanagertypesPostableRoutePolicyDTO>;
},
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof updateRoutePolicy>>,
TError,
{
pathParams: UpdateRoutePolicyPathParameters;
data: BodyType<AlertmanagertypesPostableRoutePolicyDTO>;
},
TContext
> => {
const mutationOptions = getUpdateRoutePolicyMutationOptions(options);
return useMutation(mutationOptions);
};
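Worth noting: the by-ID query options default enabled to !!id, so the query stays idle until an id is available — convenient when the id comes from user selection. A sketch of the resulting call-site pattern (component and prop names assumed):

// Hypothetical usage sketch — the query only fires once `selectedId` is truthy,
// because getGetRoutePolicyByIDQueryOptions defaults `enabled: !!id`.
import * as React from 'react';
import { useGetRoutePolicyByID } from './route_policies/route_policies'; // assumed path

function RoutePolicyDetails({ selectedId }: { selectedId: string }): JSX.Element | null {
  const { data, isIdle, isLoading } = useGetRoutePolicyByID({ id: selectedId });
  if (isIdle || isLoading) return null;
  return <pre>{JSON.stringify(data, null, 2)}</pre>;
}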

View File

@@ -6,17 +6,24 @@
*/
import type {
InvalidateOptions,
MutationFunction,
QueryClient,
QueryFunction,
QueryKey,
UseMutationOptions,
UseMutationResult,
UseQueryOptions,
UseQueryResult,
} from 'react-query';
import { useQuery } from 'react-query';
import { useMutation, useQuery } from 'react-query';
import type { ErrorType } from '../../../generatedAPIInstance';
import type { BodyType, ErrorType } from '../../../generatedAPIInstance';
import { GeneratedAPIInstance } from '../../../generatedAPIInstance';
import type {
CreateRule201,
DeleteRuleByIDPathParameters,
GetRuleByID200,
GetRuleByIDPathParameters,
GetRuleHistoryFilterKeys200,
GetRuleHistoryFilterKeysParams,
GetRuleHistoryFilterKeysPathParameters,
@@ -35,9 +42,548 @@ import type {
GetRuleHistoryTopContributors200,
GetRuleHistoryTopContributorsParams,
GetRuleHistoryTopContributorsPathParameters,
ListRules200,
PatchRuleByID200,
PatchRuleByIDPathParameters,
RenderErrorResponseDTO,
RuletypesPostableRuleDTO,
TestRule200,
UpdateRuleByIDPathParameters,
} from '../sigNoz.schemas';
/**
* This endpoint lists all alert rules with their current evaluation state
* @summary List alert rules
*/
export const listRules = (signal?: AbortSignal) => {
return GeneratedAPIInstance<ListRules200>({
url: `/api/v2/rules`,
method: 'GET',
signal,
});
};
export const getListRulesQueryKey = () => {
return [`/api/v2/rules`] as const;
};
export const getListRulesQueryOptions = <
TData = Awaited<ReturnType<typeof listRules>>,
TError = ErrorType<RenderErrorResponseDTO>
>(options?: {
query?: UseQueryOptions<Awaited<ReturnType<typeof listRules>>, TError, TData>;
}) => {
const { query: queryOptions } = options ?? {};
const queryKey = queryOptions?.queryKey ?? getListRulesQueryKey();
const queryFn: QueryFunction<Awaited<ReturnType<typeof listRules>>> = ({
signal,
}) => listRules(signal);
return { queryKey, queryFn, ...queryOptions } as UseQueryOptions<
Awaited<ReturnType<typeof listRules>>,
TError,
TData
> & { queryKey: QueryKey };
};
export type ListRulesQueryResult = NonNullable<
Awaited<ReturnType<typeof listRules>>
>;
export type ListRulesQueryError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary List alert rules
*/
export function useListRules<
TData = Awaited<ReturnType<typeof listRules>>,
TError = ErrorType<RenderErrorResponseDTO>
>(options?: {
query?: UseQueryOptions<Awaited<ReturnType<typeof listRules>>, TError, TData>;
}): UseQueryResult<TData, TError> & { queryKey: QueryKey } {
const queryOptions = getListRulesQueryOptions(options);
const query = useQuery(queryOptions) as UseQueryResult<TData, TError> & {
queryKey: QueryKey;
};
query.queryKey = queryOptions.queryKey;
return query;
}
/**
* @summary List alert rules
*/
export const invalidateListRules = async (
queryClient: QueryClient,
options?: InvalidateOptions,
): Promise<QueryClient> => {
await queryClient.invalidateQueries(
{ queryKey: getListRulesQueryKey() },
options,
);
return queryClient;
};
/**
* This endpoint creates a new alert rule
* @summary Create alert rule
*/
export const createRule = (
ruletypesPostableRuleDTO: BodyType<RuletypesPostableRuleDTO>,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<CreateRule201>({
url: `/api/v2/rules`,
method: 'POST',
headers: { 'Content-Type': 'application/json' },
data: ruletypesPostableRuleDTO,
signal,
});
};
export const getCreateRuleMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createRule>>,
TError,
{ data: BodyType<RuletypesPostableRuleDTO> },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof createRule>>,
TError,
{ data: BodyType<RuletypesPostableRuleDTO> },
TContext
> => {
const mutationKey = ['createRule'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof createRule>>,
{ data: BodyType<RuletypesPostableRuleDTO> }
> = (props) => {
const { data } = props ?? {};
return createRule(data);
};
return { mutationFn, ...mutationOptions };
};
export type CreateRuleMutationResult = NonNullable<
Awaited<ReturnType<typeof createRule>>
>;
export type CreateRuleMutationBody = BodyType<RuletypesPostableRuleDTO>;
export type CreateRuleMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Create alert rule
*/
export const useCreateRule = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createRule>>,
TError,
{ data: BodyType<RuletypesPostableRuleDTO> },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof createRule>>,
TError,
{ data: BodyType<RuletypesPostableRuleDTO> },
TContext
> => {
const mutationOptions = getCreateRuleMutationOptions(options);
return useMutation(mutationOptions);
};
/**
* This endpoint deletes an alert rule by ID
* @summary Delete alert rule
*/
export const deleteRuleByID = ({ id }: DeleteRuleByIDPathParameters) => {
return GeneratedAPIInstance<void>({
url: `/api/v2/rules/${id}`,
method: 'DELETE',
});
};
export const getDeleteRuleByIDMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof deleteRuleByID>>,
TError,
{ pathParams: DeleteRuleByIDPathParameters },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof deleteRuleByID>>,
TError,
{ pathParams: DeleteRuleByIDPathParameters },
TContext
> => {
const mutationKey = ['deleteRuleByID'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof deleteRuleByID>>,
{ pathParams: DeleteRuleByIDPathParameters }
> = (props) => {
const { pathParams } = props ?? {};
return deleteRuleByID(pathParams);
};
return { mutationFn, ...mutationOptions };
};
export type DeleteRuleByIDMutationResult = NonNullable<
Awaited<ReturnType<typeof deleteRuleByID>>
>;
export type DeleteRuleByIDMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Delete alert rule
*/
export const useDeleteRuleByID = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof deleteRuleByID>>,
TError,
{ pathParams: DeleteRuleByIDPathParameters },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof deleteRuleByID>>,
TError,
{ pathParams: DeleteRuleByIDPathParameters },
TContext
> => {
const mutationOptions = getDeleteRuleByIDMutationOptions(options);
return useMutation(mutationOptions);
};
/**
* This endpoint returns an alert rule by ID
* @summary Get alert rule by ID
*/
export const getRuleByID = (
{ id }: GetRuleByIDPathParameters,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<GetRuleByID200>({
url: `/api/v2/rules/${id}`,
method: 'GET',
signal,
});
};
export const getGetRuleByIDQueryKey = ({ id }: GetRuleByIDPathParameters) => {
return [`/api/v2/rules/${id}`] as const;
};
export const getGetRuleByIDQueryOptions = <
TData = Awaited<ReturnType<typeof getRuleByID>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ id }: GetRuleByIDPathParameters,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getRuleByID>>,
TError,
TData
>;
},
) => {
const { query: queryOptions } = options ?? {};
const queryKey = queryOptions?.queryKey ?? getGetRuleByIDQueryKey({ id });
const queryFn: QueryFunction<Awaited<ReturnType<typeof getRuleByID>>> = ({
signal,
}) => getRuleByID({ id }, signal);
return {
queryKey,
queryFn,
enabled: !!id,
...queryOptions,
} as UseQueryOptions<
Awaited<ReturnType<typeof getRuleByID>>,
TError,
TData
> & { queryKey: QueryKey };
};
export type GetRuleByIDQueryResult = NonNullable<
Awaited<ReturnType<typeof getRuleByID>>
>;
export type GetRuleByIDQueryError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Get alert rule by ID
*/
export function useGetRuleByID<
TData = Awaited<ReturnType<typeof getRuleByID>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ id }: GetRuleByIDPathParameters,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getRuleByID>>,
TError,
TData
>;
},
): UseQueryResult<TData, TError> & { queryKey: QueryKey } {
const queryOptions = getGetRuleByIDQueryOptions({ id }, options);
const query = useQuery(queryOptions) as UseQueryResult<TData, TError> & {
queryKey: QueryKey;
};
query.queryKey = queryOptions.queryKey;
return query;
}
/**
* @summary Get alert rule by ID
*/
export const invalidateGetRuleByID = async (
queryClient: QueryClient,
{ id }: GetRuleByIDPathParameters,
options?: InvalidateOptions,
): Promise<QueryClient> => {
await queryClient.invalidateQueries(
{ queryKey: getGetRuleByIDQueryKey({ id }) },
options,
);
return queryClient;
};
/**
* This endpoint applies a partial update to an alert rule by ID
* @summary Patch alert rule
*/
export const patchRuleByID = (
{ id }: PatchRuleByIDPathParameters,
ruletypesPostableRuleDTO: BodyType<RuletypesPostableRuleDTO>,
) => {
return GeneratedAPIInstance<PatchRuleByID200>({
url: `/api/v2/rules/${id}`,
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
data: ruletypesPostableRuleDTO,
});
};
export const getPatchRuleByIDMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof patchRuleByID>>,
TError,
{
pathParams: PatchRuleByIDPathParameters;
data: BodyType<RuletypesPostableRuleDTO>;
},
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof patchRuleByID>>,
TError,
{
pathParams: PatchRuleByIDPathParameters;
data: BodyType<RuletypesPostableRuleDTO>;
},
TContext
> => {
const mutationKey = ['patchRuleByID'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof patchRuleByID>>,
{
pathParams: PatchRuleByIDPathParameters;
data: BodyType<RuletypesPostableRuleDTO>;
}
> = (props) => {
const { pathParams, data } = props ?? {};
return patchRuleByID(pathParams, data);
};
return { mutationFn, ...mutationOptions };
};
export type PatchRuleByIDMutationResult = NonNullable<
Awaited<ReturnType<typeof patchRuleByID>>
>;
export type PatchRuleByIDMutationBody = BodyType<RuletypesPostableRuleDTO>;
export type PatchRuleByIDMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Patch alert rule
*/
export const usePatchRuleByID = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof patchRuleByID>>,
TError,
{
pathParams: PatchRuleByIDPathParameters;
data: BodyType<RuletypesPostableRuleDTO>;
},
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof patchRuleByID>>,
TError,
{
pathParams: PatchRuleByIDPathParameters;
data: BodyType<RuletypesPostableRuleDTO>;
},
TContext
> => {
const mutationOptions = getPatchRuleByIDMutationOptions(options);
return useMutation(mutationOptions);
};
/**
* This endpoint updates an alert rule by ID
* @summary Update alert rule
*/
export const updateRuleByID = (
{ id }: UpdateRuleByIDPathParameters,
ruletypesPostableRuleDTO: BodyType<RuletypesPostableRuleDTO>,
) => {
return GeneratedAPIInstance<void>({
url: `/api/v2/rules/${id}`,
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
data: ruletypesPostableRuleDTO,
});
};
export const getUpdateRuleByIDMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof updateRuleByID>>,
TError,
{
pathParams: UpdateRuleByIDPathParameters;
data: BodyType<RuletypesPostableRuleDTO>;
},
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof updateRuleByID>>,
TError,
{
pathParams: UpdateRuleByIDPathParameters;
data: BodyType<RuletypesPostableRuleDTO>;
},
TContext
> => {
const mutationKey = ['updateRuleByID'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof updateRuleByID>>,
{
pathParams: UpdateRuleByIDPathParameters;
data: BodyType<RuletypesPostableRuleDTO>;
}
> = (props) => {
const { pathParams, data } = props ?? {};
return updateRuleByID(pathParams, data);
};
return { mutationFn, ...mutationOptions };
};
export type UpdateRuleByIDMutationResult = NonNullable<
Awaited<ReturnType<typeof updateRuleByID>>
>;
export type UpdateRuleByIDMutationBody = BodyType<RuletypesPostableRuleDTO>;
export type UpdateRuleByIDMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Update alert rule
*/
export const useUpdateRuleByID = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof updateRuleByID>>,
TError,
{
pathParams: UpdateRuleByIDPathParameters;
data: BodyType<RuletypesPostableRuleDTO>;
},
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof updateRuleByID>>,
TError,
{
pathParams: UpdateRuleByIDPathParameters;
data: BodyType<RuletypesPostableRuleDTO>;
},
TContext
> => {
const mutationOptions = getUpdateRuleByIDMutationOptions(options);
return useMutation(mutationOptions);
};
/**
* Returns distinct label keys from rule history entries for the selected range.
* @summary Get rule history filter keys
@@ -742,3 +1288,87 @@ export const invalidateGetRuleHistoryTopContributors = async (
return queryClient;
};
/**
* This endpoint fires a test notification for the given rule definition
* @summary Test alert rule
*/
export const testRule = (
ruletypesPostableRuleDTO: BodyType<RuletypesPostableRuleDTO>,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<TestRule200>({
url: `/api/v2/rules/test`,
method: 'POST',
headers: { 'Content-Type': 'application/json' },
data: ruletypesPostableRuleDTO,
signal,
});
};
export const getTestRuleMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof testRule>>,
TError,
{ data: BodyType<RuletypesPostableRuleDTO> },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof testRule>>,
TError,
{ data: BodyType<RuletypesPostableRuleDTO> },
TContext
> => {
const mutationKey = ['testRule'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof testRule>>,
{ data: BodyType<RuletypesPostableRuleDTO> }
> = (props) => {
const { data } = props ?? {};
return testRule(data);
};
return { mutationFn, ...mutationOptions };
};
export type TestRuleMutationResult = NonNullable<
Awaited<ReturnType<typeof testRule>>
>;
export type TestRuleMutationBody = BodyType<RuletypesPostableRuleDTO>;
export type TestRuleMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Test alert rule
*/
export const useTestRule = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof testRule>>,
TError,
{ data: BodyType<RuletypesPostableRuleDTO> },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof testRule>>,
TError,
{ data: BodyType<RuletypesPostableRuleDTO> },
TContext
> => {
const mutationOptions = getTestRuleMutationOptions(options);
return useMutation(mutationOptions);
};
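testRule takes only a request body, so its mutation variables carry just { data }. A minimal sketch (not from the generated file; import paths and `draftRule` are assumed):

// Sketch: fire a test notification for a rule definition that hasn't been
// saved yet, e.g. from an alert-creation form.
import { useTestRule } from './rules'; // assumed path
import type { BodyType } from './generatedAPIInstance'; // assumed path
import type { RuletypesPostableRuleDTO } from './sigNoz.schemas'; // assumed path

function useSendTestNotification(
  draftRule: BodyType<RuletypesPostableRuleDTO>,
) {
  const testRuleMutation = useTestRule();
  return () =>
    testRuleMutation.mutate(
      { data: draftRule },
      {
        // Per-call callbacks are supported alongside the hook-level options.
        onSuccess: () => {
          // e.g. show "test notification sent"
        },
      },
    );
}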

File diff suppressed because it is too large

@@ -20,12 +20,15 @@ import { useMutation, useQuery } from 'react-query';
import type { BodyType, ErrorType } from '../../../generatedAPIInstance';
import { GeneratedAPIInstance } from '../../../generatedAPIInstance';
import type {
ChangePasswordPathParameters,
CreateInvite201,
CreateResetPasswordToken201,
CreateResetPasswordTokenPathParameters,
DeleteUserPathParameters,
GetMyUser200,
GetMyUserDeprecated200,
GetResetPasswordToken200,
GetResetPasswordTokenDeprecated200,
GetResetPasswordTokenDeprecatedPathParameters,
GetResetPasswordTokenPathParameters,
GetRolesByUserID200,
GetRolesByUserIDPathParameters,
@@ -53,134 +56,36 @@ import type {
UpdateUserPathParameters,
} from '../sigNoz.schemas';
/**
* This endpoint changes the password by id
* @summary Change password
*/
export const changePassword = (
{ id }: ChangePasswordPathParameters,
typesChangePasswordRequestDTO: BodyType<TypesChangePasswordRequestDTO>,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<void>({
url: `/api/v1/changePassword/${id}`,
method: 'POST',
headers: { 'Content-Type': 'application/json' },
data: typesChangePasswordRequestDTO,
signal,
});
};
export const getChangePasswordMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof changePassword>>,
TError,
{
pathParams: ChangePasswordPathParameters;
data: BodyType<TypesChangePasswordRequestDTO>;
},
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof changePassword>>,
TError,
{
pathParams: ChangePasswordPathParameters;
data: BodyType<TypesChangePasswordRequestDTO>;
},
TContext
> => {
const mutationKey = ['changePassword'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof changePassword>>,
{
pathParams: ChangePasswordPathParameters;
data: BodyType<TypesChangePasswordRequestDTO>;
}
> = (props) => {
const { pathParams, data } = props ?? {};
return changePassword(pathParams, data);
};
return { mutationFn, ...mutationOptions };
};
export type ChangePasswordMutationResult = NonNullable<
Awaited<ReturnType<typeof changePassword>>
>;
export type ChangePasswordMutationBody = BodyType<TypesChangePasswordRequestDTO>;
export type ChangePasswordMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Change password
*/
export const useChangePassword = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof changePassword>>,
TError,
{
pathParams: ChangePasswordPathParameters;
data: BodyType<TypesChangePasswordRequestDTO>;
},
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof changePassword>>,
TError,
{
pathParams: ChangePasswordPathParameters;
data: BodyType<TypesChangePasswordRequestDTO>;
},
TContext
> => {
const mutationOptions = getChangePasswordMutationOptions(options);
return useMutation(mutationOptions);
};
/**
* This endpoint returns the reset password token by id
* @deprecated
* @summary Get reset password token
*/
export const getResetPasswordTokenDeprecated = (
{ id }: GetResetPasswordTokenDeprecatedPathParameters,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<GetResetPasswordTokenDeprecated200>({
url: `/api/v1/getResetPasswordToken/${id}`,
method: 'GET',
signal,
});
};
export const getGetResetPasswordTokenDeprecatedQueryKey = ({
id,
}: GetResetPasswordTokenDeprecatedPathParameters) => {
return [`/api/v1/getResetPasswordToken/${id}`] as const;
};
export const getGetResetPasswordTokenDeprecatedQueryOptions = <
TData = Awaited<ReturnType<typeof getResetPasswordTokenDeprecated>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ id }: GetResetPasswordTokenDeprecatedPathParameters,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getResetPasswordTokenDeprecated>>,
TError,
TData
>;
@@ -189,11 +94,11 @@ export const getGetResetPasswordTokenQueryOptions = <
const { query: queryOptions } = options ?? {};
const queryKey =
queryOptions?.queryKey ?? getGetResetPasswordTokenDeprecatedQueryKey({ id });
const queryFn: QueryFunction<
Awaited<ReturnType<typeof getResetPasswordTokenDeprecated>>
> = ({ signal }) => getResetPasswordTokenDeprecated({ id }, signal);
return {
queryKey,
@@ -201,35 +106,39 @@ export const getGetResetPasswordTokenQueryOptions = <
enabled: !!id,
...queryOptions,
} as UseQueryOptions<
Awaited<ReturnType<typeof getResetPasswordTokenDeprecated>>,
TError,
TData
> & { queryKey: QueryKey };
};
export type GetResetPasswordTokenDeprecatedQueryResult = NonNullable<
Awaited<ReturnType<typeof getResetPasswordTokenDeprecated>>
>;
export type GetResetPasswordTokenDeprecatedQueryError = ErrorType<RenderErrorResponseDTO>;
/**
* @deprecated
* @summary Get reset password token
*/
export function useGetResetPasswordTokenDeprecated<
TData = Awaited<ReturnType<typeof getResetPasswordTokenDeprecated>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ id }: GetResetPasswordTokenDeprecatedPathParameters,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getResetPasswordTokenDeprecated>>,
TError,
TData
>;
},
): UseQueryResult<TData, TError> & { queryKey: QueryKey } {
const queryOptions = getGetResetPasswordTokenDeprecatedQueryOptions(
{ id },
options,
);
const query = useQuery(queryOptions) as UseQueryResult<TData, TError> & {
queryKey: QueryKey;
@@ -241,15 +150,16 @@ export function useGetResetPasswordToken<
}
/**
* @deprecated
* @summary Get reset password token
*/
export const invalidateGetResetPasswordTokenDeprecated = async (
queryClient: QueryClient,
{ id }: GetResetPasswordTokenDeprecatedPathParameters,
options?: InvalidateOptions,
): Promise<QueryClient> => {
await queryClient.invalidateQueries(
{ queryKey: getGetResetPasswordTokenDeprecatedQueryKey({ id }) },
options,
);
@@ -1407,6 +1317,189 @@ export const useUpdateUser = <
return useMutation(mutationOptions);
};
/**
* This endpoint returns the existing reset password token for a user.
* @summary Get reset password token for a user
*/
export const getResetPasswordToken = (
{ id }: GetResetPasswordTokenPathParameters,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<GetResetPasswordToken200>({
url: `/api/v2/users/${id}/reset_password_tokens`,
method: 'GET',
signal,
});
};
export const getGetResetPasswordTokenQueryKey = ({
id,
}: GetResetPasswordTokenPathParameters) => {
return [`/api/v2/users/${id}/reset_password_tokens`] as const;
};
export const getGetResetPasswordTokenQueryOptions = <
TData = Awaited<ReturnType<typeof getResetPasswordToken>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ id }: GetResetPasswordTokenPathParameters,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getResetPasswordToken>>,
TError,
TData
>;
},
) => {
const { query: queryOptions } = options ?? {};
const queryKey =
queryOptions?.queryKey ?? getGetResetPasswordTokenQueryKey({ id });
const queryFn: QueryFunction<
Awaited<ReturnType<typeof getResetPasswordToken>>
> = ({ signal }) => getResetPasswordToken({ id }, signal);
return {
queryKey,
queryFn,
enabled: !!id,
...queryOptions,
} as UseQueryOptions<
Awaited<ReturnType<typeof getResetPasswordToken>>,
TError,
TData
> & { queryKey: QueryKey };
};
export type GetResetPasswordTokenQueryResult = NonNullable<
Awaited<ReturnType<typeof getResetPasswordToken>>
>;
export type GetResetPasswordTokenQueryError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Get reset password token for a user
*/
export function useGetResetPasswordToken<
TData = Awaited<ReturnType<typeof getResetPasswordToken>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ id }: GetResetPasswordTokenPathParameters,
options?: {
query?: UseQueryOptions<
Awaited<ReturnType<typeof getResetPasswordToken>>,
TError,
TData
>;
},
): UseQueryResult<TData, TError> & { queryKey: QueryKey } {
const queryOptions = getGetResetPasswordTokenQueryOptions({ id }, options);
const query = useQuery(queryOptions) as UseQueryResult<TData, TError> & {
queryKey: QueryKey;
};
query.queryKey = queryOptions.queryKey;
return query;
}
/**
* @summary Get reset password token for a user
*/
export const invalidateGetResetPasswordToken = async (
queryClient: QueryClient,
{ id }: GetResetPasswordTokenPathParameters,
options?: InvalidateOptions,
): Promise<QueryClient> => {
await queryClient.invalidateQueries(
{ queryKey: getGetResetPasswordTokenQueryKey({ id }) },
options,
);
return queryClient;
};
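A short sketch of reading this query and forcing a refetch (not part of the diff; `userId` and the import paths are assumed, and useQueryClient comes from react-query):

// Sketch: read the current reset-password token and expose a refresh helper.
// enabled: !!id is already baked into the generated query options, so the
// query stays idle until a userId is available.
import { useQueryClient } from 'react-query';
import {
  invalidateGetResetPasswordToken,
  useGetResetPasswordToken,
} from './user'; // assumed path

function useResetToken(userId: string) {
  const queryClient = useQueryClient();
  const { data, queryKey } = useGetResetPasswordToken({ id: userId });
  // Invalidate the cached entry when the token may have changed server-side.
  const refresh = () =>
    invalidateGetResetPasswordToken(queryClient, { id: userId });
  return { data, queryKey, refresh };
}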
/**
* This endpoint creates or regenerates a reset password token for a user. If a valid token exists, it is returned. If expired, a new one is created.
* @summary Create or regenerate reset password token for a user
*/
export const createResetPasswordToken = ({
id,
}: CreateResetPasswordTokenPathParameters) => {
return GeneratedAPIInstance<CreateResetPasswordToken201>({
url: `/api/v2/users/${id}/reset_password_tokens`,
method: 'PUT',
});
};
export const getCreateResetPasswordTokenMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createResetPasswordToken>>,
TError,
{ pathParams: CreateResetPasswordTokenPathParameters },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof createResetPasswordToken>>,
TError,
{ pathParams: CreateResetPasswordTokenPathParameters },
TContext
> => {
const mutationKey = ['createResetPasswordToken'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof createResetPasswordToken>>,
{ pathParams: CreateResetPasswordTokenPathParameters }
> = (props) => {
const { pathParams } = props ?? {};
return createResetPasswordToken(pathParams);
};
return { mutationFn, ...mutationOptions };
};
export type CreateResetPasswordTokenMutationResult = NonNullable<
Awaited<ReturnType<typeof createResetPasswordToken>>
>;
export type CreateResetPasswordTokenMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Create or regenerate reset password token for a user
*/
export const useCreateResetPasswordToken = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createResetPasswordToken>>,
TError,
{ pathParams: CreateResetPasswordTokenPathParameters },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof createResetPasswordToken>>,
TError,
{ pathParams: CreateResetPasswordTokenPathParameters },
TContext
> => {
const mutationOptions = getCreateResetPasswordTokenMutationOptions(options);
return useMutation(mutationOptions);
};
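Since the PUT either returns the still-valid token or mints a new one, a consumer will typically invalidate the GET cache on success so both endpoints agree. A sketch under the same assumptions as above (hypothetical `userId`, assumed import paths):

// Sketch: regenerate the token, then keep the GET query in sync.
import { useQueryClient } from 'react-query';
import {
  invalidateGetResetPasswordToken,
  useCreateResetPasswordToken,
} from './user'; // assumed path

function useRegenerateResetToken(userId: string) {
  const queryClient = useQueryClient();
  const createToken = useCreateResetPasswordToken({
    mutation: {
      // The mutation has no body, so variables carry only { pathParams }.
      onSuccess: () =>
        invalidateGetResetPasswordToken(queryClient, { id: userId }),
    },
  });
  return () => createToken.mutate({ pathParams: { id: userId } });
}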
/**
* This endpoint returns the user roles by user id
* @summary Get user roles
@@ -1850,3 +1943,84 @@ export const useUpdateMyUserV2 = <
return useMutation(mutationOptions);
};
/**
* This endpoint updates the password of the user I belong to
* @summary Updates my password
*/
export const updateMyPassword = (
typesChangePasswordRequestDTO: BodyType<TypesChangePasswordRequestDTO>,
) => {
return GeneratedAPIInstance<void>({
url: `/api/v2/users/me/factor_password`,
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
data: typesChangePasswordRequestDTO,
});
};
export const getUpdateMyPasswordMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof updateMyPassword>>,
TError,
{ data: BodyType<TypesChangePasswordRequestDTO> },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof updateMyPassword>>,
TError,
{ data: BodyType<TypesChangePasswordRequestDTO> },
TContext
> => {
const mutationKey = ['updateMyPassword'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof updateMyPassword>>,
{ data: BodyType<TypesChangePasswordRequestDTO> }
> = (props) => {
const { data } = props ?? {};
return updateMyPassword(data);
};
return { mutationFn, ...mutationOptions };
};
export type UpdateMyPasswordMutationResult = NonNullable<
Awaited<ReturnType<typeof updateMyPassword>>
>;
export type UpdateMyPasswordMutationBody = BodyType<TypesChangePasswordRequestDTO>;
export type UpdateMyPasswordMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Updates my password
*/
export const useUpdateMyPassword = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof updateMyPassword>>,
TError,
{ data: BodyType<TypesChangePasswordRequestDTO> },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof updateMyPassword>>,
TError,
{ data: BodyType<TypesChangePasswordRequestDTO> },
TContext
> => {
const mutationOptions = getUpdateMyPasswordMutationOptions(options);
return useMutation(mutationOptions);
};
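The same pattern applies here; the one variation worth showing is mutateAsync, which rejects with the typed error instead of routing it through callbacks. A sketch (import paths assumed; the DTO's fields are deliberately not spelled out, since they aren't shown in this diff):

// Sketch: promise-style password change with a boolean outcome.
import { useUpdateMyPassword } from './user'; // assumed path
import type { BodyType } from './generatedAPIInstance'; // assumed path
import type { TypesChangePasswordRequestDTO } from './sigNoz.schemas'; // assumed path

function usePasswordChange() {
  const updateMyPasswordMutation = useUpdateMyPassword();
  return async (
    body: BodyType<TypesChangePasswordRequestDTO>,
  ): Promise<boolean> => {
    try {
      // mutateAsync rejects with ErrorType<RenderErrorResponseDTO> on failure.
      await updateMyPasswordMutation.mutateAsync({ data: body });
      return true;
    } catch {
      return false;
    }
  };
}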


@@ -1,88 +0,0 @@
import axios from 'api';
import {
CloudAccount,
Service,
ServiceData,
UpdateServiceConfigPayload,
UpdateServiceConfigResponse,
} from 'container/CloudIntegrationPage/ServicesSection/types';
import {
AccountConfigPayload,
AccountConfigResponse,
ConnectionParams,
ConnectionUrlResponse,
} from 'types/api/integrations/aws';
export const getAwsAccounts = async (): Promise<CloudAccount[]> => {
const response = await axios.get('/cloud-integrations/aws/accounts');
return response.data.data.accounts;
};
export const getAwsServices = async (
cloudAccountId?: string,
): Promise<Service[]> => {
const params = cloudAccountId
? { cloud_account_id: cloudAccountId }
: undefined;
const response = await axios.get('/cloud-integrations/aws/services', {
params,
});
return response.data.data.services;
};
export const getServiceDetails = async (
serviceId: string,
cloudAccountId?: string,
): Promise<ServiceData> => {
const params = cloudAccountId
? { cloud_account_id: cloudAccountId }
: undefined;
const response = await axios.get(
`/cloud-integrations/aws/services/${serviceId}`,
{ params },
);
return response.data.data;
};
export const generateConnectionUrl = async (params: {
agent_config: { region: string };
account_config: { regions: string[] };
account_id?: string;
}): Promise<ConnectionUrlResponse> => {
const response = await axios.post(
'/cloud-integrations/aws/accounts/generate-connection-url',
params,
);
return response.data.data;
};
export const updateAccountConfig = async (
accountId: string,
payload: AccountConfigPayload,
): Promise<AccountConfigResponse> => {
const response = await axios.post<AccountConfigResponse>(
`/cloud-integrations/aws/accounts/${accountId}/config`,
payload,
);
return response.data;
};
export const updateServiceConfig = async (
serviceId: string,
payload: UpdateServiceConfigPayload,
): Promise<UpdateServiceConfigResponse> => {
const response = await axios.post<UpdateServiceConfigResponse>(
`/cloud-integrations/aws/services/${serviceId}/config`,
payload,
);
return response.data;
};
export const getConnectionParams = async (): Promise<ConnectionParams> => {
const response = await axios.get(
'/cloud-integrations/aws/accounts/generate-connection-params',
);
return response.data.data;
};


@@ -1,27 +0,0 @@
import axios from 'api';
import { ErrorResponseHandlerV2 } from 'api/ErrorResponseHandlerV2';
import { AxiosError } from 'axios';
import { ErrorV2Resp, SuccessResponseV2 } from 'types/api';
import { PayloadProps, Props } from 'types/api/user/changeMyPassword';
const changeMyPassword = async (
props: Props,
): Promise<SuccessResponseV2<PayloadProps>> => {
try {
const response = await axios.post<PayloadProps>(
`/changePassword/${props.userId}`,
{
...props,
},
);
return {
httpStatusCode: response.status,
data: response.data,
};
} catch (error) {
ErrorResponseHandlerV2(error as AxiosError<ErrorV2Resp>);
}
};
export default changeMyPassword;


@@ -1,28 +0,0 @@
import axios from 'api';
import { ErrorResponseHandlerV2 } from 'api/ErrorResponseHandlerV2';
import { AxiosError } from 'axios';
import { ErrorV2Resp, SuccessResponseV2 } from 'types/api';
import {
GetResetPasswordToken,
PayloadProps,
Props,
} from 'types/api/user/getResetPasswordToken';
const getResetPasswordToken = async (
props: Props,
): Promise<SuccessResponseV2<GetResetPasswordToken>> => {
try {
const response = await axios.get<PayloadProps>(
`/getResetPasswordToken/${props.userId}`,
);
return {
httpStatusCode: response.status,
data: response.data.data,
};
} catch (error) {
ErrorResponseHandlerV2(error as AxiosError<ErrorV2Resp>);
}
};
export default getResetPasswordToken;

[48 binary image files changed; before and after sizes are identical for every file]

Some files were not shown because too many files have changed in this diff