Compare commits


60 Commits

Author SHA1 Message Date
grandwizard28
af933429c1 refactor(instrumentation): flatten code source into code.filepath, code.function, code.lineno
Replace the nested `slog.Source` group with flat OTel semantic convention
keys by adding a Source log handler wrapper that extracts source info
from record.PC. Remove AddSource from the JSON handler since the wrapper
handles it directly.
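
For illustration, a minimal Go sketch of a wrapper along these lines; the package and type names here are assumptions, not the actual SigNoz implementation:

// source.go: wraps another slog.Handler, resolves the caller from the
// record's PC, and attaches flat OTel semantic convention attributes.
package loghandler

import (
	"context"
	"log/slog"
	"runtime"
)

type Source struct {
	next slog.Handler
}

func NewSource(next slog.Handler) *Source { return &Source{next: next} }

func (h *Source) Enabled(ctx context.Context, level slog.Level) bool {
	return h.next.Enabled(ctx, level)
}

func (h *Source) Handle(ctx context.Context, record slog.Record) error {
	if record.PC != 0 {
		// Resolve the frame for the call site captured in the record.
		frame, _ := runtime.CallersFrames([]uintptr{record.PC}).Next()
		clone := record.Clone()
		clone.AddAttrs(
			slog.String("code.filepath", frame.File),
			slog.String("code.function", frame.Function),
			slog.Int("code.lineno", frame.Line),
		)
		return h.next.Handle(ctx, clone)
	}
	return h.next.Handle(ctx, record)
}

func (h *Source) WithAttrs(attrs []slog.Attr) slog.Handler {
	return &Source{next: h.next.WithAttrs(attrs)}
}

func (h *Source) WithGroup(name string) slog.Handler {
	return &Source{next: h.next.WithGroup(name)}
}
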
2026-03-22 15:08:10 +05:30
Pandey
95ed125bd9 feat(instrumentation): add OTel exception semantic convention log handler (#10665)
* feat(instrumentation): add OTel exception semantic convention log handler

Add a loghandler.Wrapper that enriches error log records with OpenTelemetry
exception semantic convention attributes (exception.type, exception.code,
exception.message, exception.stacktrace).

- Add errors.Attr() helper for standardized error logging under "exception" key
- Add exception log handler that replaces raw error attrs with structured group
- Wire exception handler into the instrumentation SDK logger chain
- Remove LogValue() from errors.base as the handler now owns structuring

* refactor: replace "error", err with errors.Attr(err) across codebase

Migrate all slog error logging from ad-hoc "error", err key-value pairs
to the standardized errors.Attr(err) helper, enabling the exception log
handler to enrich these logs with OTel semantic convention attributes.

* refactor: enforce attr-only slog style across codebase

Change sloglint from kv-only to attr-only, requiring all slog calls to
use typed attributes (slog.String, slog.Any, etc.) instead of key-value
pairs. Convert all existing kv-style slog calls in non-excluded paths.

* refactor: tighten slog.Any to specific types and standardize error attrs

- Replace slog.Any with slog.String for string values (action, key, where_clause)
- Replace slog.Any with slog.Uint64 for uint64 values (start, end, step, etc.)
- Replace slog.Any("err", err) with errors.Attr(err) in dispatcher and segment analytics
- Replace slog.Any("error", ctx.Err()) with errors.Attr in factory registry

* fix(instrumentation): use Unwrapb message for exception.message

Use the explicit error message (m) from Unwrapb instead of
foundErr.Error(), which resolves to the inner cause's message
for wrapped errors.

* feat(errors): capture stacktrace at error creation time

Store program counters ([]uintptr) in base errors at creation time
using runtime.Callers, inspired by thanos-io/thanos/pkg/errors. The
exception log handler reads the stacktrace from the error instead of
capturing at log time, showing where the error originated.

* fix(instrumentation): apply default log wrappers uniformly in NewLogger

Move correlation, filtering, and exception wrappers into NewLogger so
all call sites (including CLI loggers in cmd/) get them automatically.

* refactor(instrumentation): remove variadic wrappers from NewLogger

NewLogger no longer accepts arbitrary wrappers. The core wrappers
(correlation, filtering, exception) are hardcoded, preventing callers
from accidentally duplicating behavior.

* refactor: migrate remaining "error", <var> to errors.Attr across legacy paths

Replace all remaining "error", <variable> key-value pairs with
errors.Attr(<variable>) in pkg/query-service/ and ee/query-service/
paths that were missed in the initial migration due to non-standard
variable names (res.Err, filterErr, apiErrorObj.Err, etc).

* refactor(instrumentation): use flat exception.* keys instead of nested group

Use flat keys (exception.type, exception.code, exception.message,
exception.stacktrace) instead of a nested slog.Group in the exception
log handler.
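
For a rough sense of the errors.Attr helper and the flat exception.* keys described above, here is a standard-library-only sketch; the real SigNoz errors package splits this between the helper and a log handler wrapper, carries a typed code for exception.code, and differs in names and signatures:

// errattr.go: illustrative sketch only, not the actual SigNoz implementation.
package errattr

import (
	"errors"
	"fmt"
	"log/slog"
	"runtime"
)

// stackError stores program counters captured at creation time, mirroring
// the "capture stacktrace at error creation time" change.
type stackError struct {
	msg string
	pcs []uintptr
}

func (e *stackError) Error() string { return e.msg }

// New creates an error and records where it originated.
func New(msg string) error {
	pcs := make([]uintptr, 32)
	n := runtime.Callers(2, pcs) // skip runtime.Callers and New itself
	return &stackError{msg: msg, pcs: pcs[:n]}
}

// Attr returns flat OTel exception semantic convention attributes for err.
// An empty group key is inlined by slog's built-in handlers, so the keys
// come out flat rather than nested.
func Attr(err error) slog.Attr {
	stack := ""
	var se *stackError
	if errors.As(err, &se) {
		frames := runtime.CallersFrames(se.pcs)
		for {
			frame, more := frames.Next()
			stack += fmt.Sprintf("%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
			if !more {
				break
			}
		}
	}
	return slog.Group("",
		slog.String("exception.type", fmt.Sprintf("%T", err)),
		slog.String("exception.message", err.Error()),
		slog.String("exception.stacktrace", stack),
	)
}

Call sites then read like the diffs further down, e.g. logger.ErrorContext(ctx, "failed to create server", errors.Attr(err)).
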
2026-03-22 04:06:31 +00:00
Pandey
1a85ccb373 chore: remove unused config files from conf/ (#10663)
Remove `conf/prometheus.yml` and `conf/cache-config.yml` which are
legacy artifacts with zero references anywhere in the codebase.
2026-03-20 16:02:21 +00:00
Debopam Roy
78b0836974 fix(api-monitoring): border being hidden on hover (#9415)
* feat: change the cursor to pointer on hovering domain rows

* feat: external api -> domains table -> right drawer -> all the columns have same color

* feat: hover style of the left tab when the right tab is active
2026-03-20 13:37:10 +00:00
Veer Shah
7df1f25bcd fix(ui): add cursor pointer on External APIs domain rows (#10654)
The domain rows in the External APIs table are clickable
(they open a detail drawer) but showed the default cursor,
giving no visual affordance. Added cursor: pointer to the
.expanded-clickable-row class that is already applied via
onRow() in DomainList.tsx.

Fixes #9403
2026-03-20 11:55:24 +00:00
Karan Balani
a5f8c199a5 feat: deprecate old user invite apis (#10600)
* chore: deprecate old user invite apis

* chore: add back validation for pending user in list user apis for integration tests

* fix: allow pending user to be updated

* feat: updated members page with new status response and remove invite endpoint api (#10624)

* feat: updated members page with new status response and remove invite endpoint api

* feat: removed deprecated invite endpoint apis

* feat: delete orphaned type files

* feat: changed text for copy, cancel, and general messaging for invited users

* feat: test case and pagination fix

* feat: feedback, refactor and test mock update

* feat: updated the confirmation dialog description as we can't permanently delete the member now

* feat: refactored the member status mapping

* feat: used the open api spec hooks in editmember and updated the test cases

* feat: added error handling

---------

Co-authored-by: SagarRajput-7 <162284829+SagarRajput-7@users.noreply.github.com>
Co-authored-by: SagarRajput-7 <sagar@signoz.io>
2026-03-19 20:44:07 +00:00
Pandey
dfe2a6a9e5 fix(tests): fix flaky rootuser integration tests (#10660)
* fix(tests): fix flaky rootuser integration tests

setupCompleted becomes true before the root user is actually created
because the reconciliation runs in an async goroutine. Add a second
polling phase that waits for /api/v1/user to return 200, confirming
the root user exists and the impersonation provider can resolve.

* fix(ci): fail py-fmt check when formatter changes files

Add git diff --exit-code after make py-fmt so CI fails if any files
are not properly formatted, instead of silently passing.

Also includes black formatting fix for 01_rootuser.py.

* refactor(tests): remove redundant unauthenticated request test

test_unauthenticated_request_succeeds is redundant now that
test_root_user_created already polls /api/v1/user without auth
and waits for 200.
2026-03-19 20:22:02 +00:00
SagarRajput-7
16a915be94 feat: new service account page with crud and listing (#10535)
* feat: new service_account page with crud and listing

* feat: multiple style and functionality fixes

* feat: multiple style and functionality fixes

* feat: feedback fix

* feat: added pagination and sorter

* feat: feedback fix

* feat: feedback and refactor

* feat: updated with new schemas changes

* feat: feedback and refactor

* feat: added announcement banner

* feat: added IS_SERVICE_ACCOUNTS_ENABLED to hide Service Account feature and announcement banner

* feat: added test cases

* feat: updated subtitle

* feat: feedback, refactor and test enhancements

* feat: enhancement in role select component

* feat: enhancement and implemented nuqs usage across the Service account feature

* feat: refactor drawer page to have api fetch and added cache on the list call for faster lookup

* feat: refactor edit key modal to be more url controlled than props

* feat: moved onSuccess and onError inside the hook

* feat: enhancement in createserviceaccount modal with nuqs and refactor

* feat: enhancement in addkey, editkey and disable modal

* feat: refactor in revoke key modal to make it isolated and url controlled

* feat: updated types after latest rebase change

* feat: feedback, refactor and cleanup

* feat: update test case
2026-03-19 20:05:06 +00:00
Pandey
cc6f2170a5 refactor: remove DeprecatedFlags CLI flag backward compatibility (#10659)
Remove the DeprecatedFlags struct and all associated CLI flags
(--max-idle-conns, --max-open-conns, --dial-timeout, --flux-interval,
--flux-interval-for-trace-detail, --prefer-span-metrics, --cluster,
--gateway-url) that were superseded by environment variable-based
configuration. Deprecated environment variable handling is retained.

Closes #6805
2026-03-19 19:43:37 +00:00
Pandey
4ffab5f580 feat: add --config flag for YAML configuration files (#10649)
2026-03-19 19:09:15 +00:00
Abhi kumar
644228735b fix: added fix for isolated points render with null both side (#10630)
* feat: added section in panel settings

* feat: added changes for spangaps thresholds

* chore: minor changes

* chore: broke down rightcontainer component into sub-components

* chore: minor cleanup

* chore: fixed disconnect points logic

* fix: added fix for isolated points render with null both side

* chore: minor naming fix

* chore: fixed ui issues

* chore: fixed styles

* chore: hidden chart appearance

* chore: pr review comments

* chore: pr review comments

* chore: updated conditions
2026-03-19 11:49:29 +00:00
Karan Balani
29ec71b98f fix: allow gateway apis on editor access (#10646)
* fix: allow gateway apis on editor access

* fix: fmtlint and middleware

* fix: fmtlint trailing new lines
2026-03-19 10:30:28 +00:00
Pandey
ca9cbd92e4 feat(identn): implement an impersonation identn (#10641)
* feat(identn): implement an impersonation identn

* fix: prevent nil pointer error

* feat: dry org code by implementing getbyidorname

* feat: add integration tests for root user and impersonation

* fix: fix lint
2026-03-19 10:13:12 +00:00
Abhi kumar
0faef8705d feat: added changes for spangaps thresholds (#10570)
* feat: added section in panel settings

* feat: added changes for spangaps thresholds

* chore: minor changes

* fix: fixed failing tests

* fix: minor style fixes

* chore: updated the categorisation

* feat: added chart appearance settings in panel

* feat: added fill mode in timeseries

* chore: broke down rightcontainer component into sub-components

* chore: minor cleanup

* chore: fixed disconnect points logic

* chore: added spanGaps selection component

* chore: updated rightcontainer

* fix: added fix for the UI

* chore: fixed ui issues

* chore: fixed styles

* chore: default spangaps to true

* chore: moved preparedata to utils

* chore: hidden chart appearance

* chore: pr review comments

* chore: pr review comments

* chore: updated variable names

* chore: fixed minor changes

* chore: updated test

* chore: updated test
2026-03-19 09:40:02 +00:00
Ashwin Bhatkal
2ca9085b52 chore: add eslint rule for zustand getState (#10648) 2026-03-19 09:13:09 +00:00
Piyush Singariya
b7d0c8b5a2 Revert "Revert "fix: "In Progress" stuck agent config (#10476)" (#10633)" (#10644)
This reverts commit c8fcc48022.
2026-03-19 04:48:17 +00:00
Vinicius Lourenço
ce5499d5a7 feat(authz): migrate authorization to authz instead of user.role (#10486)
* feat(authz): migrate authorization to authz instead of user.role

* fixup! feat(authz): migrate authorization to authz instead of user.role

address comments

* fixup! fixup! feat(authz): migrate authorization to authz instead of user.role

Allow anonymous to go to unauthorized, otherwise it will loop in errors

* fixup! fixup! fixup! feat(authz): migrate authorization to authz instead of user.role

Improve error message when anonymous

* fixup! fixup! fixup! fixup! feat(authz): migrate authorization to authz instead of user.role

Format breaking with new css
2026-03-18 18:24:11 +00:00
Pandey
4554a09a42 fix: handle foreign key constraint on rule and planned maintenance deletion (#10632)
* fix: handle foreign key constraint on rule and planned maintenance deletion

* fix: handle foreign key constraint on rule and planned maintenance deletion

* fix: handle foreign key constraint on rule and planned maintenance deletion
2026-03-18 16:38:37 +00:00
swapnil-signoz
794a7f4ca6 fix: adding migration to fix wrong index on cloud integration table (#10607)
* fix: adding migration for fixing wrong cloud integration unique index

* refactor: removing std errors pkg

* refactor: normalising account_id if empty

* feat: adding integration test
2026-03-18 16:01:55 +00:00
aniketio-ctrl
fd3b1c5374 fix(checkout): pass downstream error message to UI (#10636)
* fix(checkout): pass downstream error message to UI

* fix(checkout): pass downstream error message to UI

* fix(checkout): pass downstream error message to UI

* fix(checkout): pass downstream error message to UI

* fix(checkout): pass downstream error message to UI
2026-03-18 15:28:01 +00:00
Vinicius Lourenço
e52c5683dd feat(signozhq-ui): add @signozhq/ui lib (#10616) 2026-03-18 13:44:25 +00:00
Abhi kumar
90e3cb6775 feat: replaced external apis barchart with the new bar chart (#10460)
* feat: replaced external apis barchart with the new bar chart

* fix: tests

* chore: fixed tsc
2026-03-18 13:36:23 +00:00
primus-bot[bot]
155f287462 chore(release): bump to v0.116.1 (#10635)
Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
2026-03-18 12:28:33 +00:00
Piyush Singariya
c8fcc48022 Revert "fix: "In Progress" stuck agent config (#10476)" (#10633)
This reverts commit fd19ff8e5e.
2026-03-18 11:30:39 +00:00
Vikrant Gupta
44b6885639 fix(identn): identn provider claims (#10631)
* fix(identn): identn provider claims

* fix(identn): add integration tests

* fix(identn): use identn provider from claims
2026-03-18 11:23:50 +00:00
Piyush Singariya
0e5a128325 refactor: consolidate body column for JSON logs (#10325)
* feat: has JSON QB

* fix: tests expected queries and values

* fix: ignored .vscode in gitignore

* fix: tests GroupBy

* revert: gitignore change

* fix: build json plans in metadata

* fix: empty filteredArrays condition

* fix: tests

* fix: tests

* fix: json qb test fix

* fix: review based on tushar

* fix: changes based on review from Srikanth

* fix: remove unnecessary bool checking

* fix: removed comment

* fix: merge json body columns together

* chore: var renamed

* fix: merge conflict

* test: fix

* fix: tests

* fix: go test flakiness

* chore: merge json fields

* fix: handle datatype collision

* revert: few unrelated changes

* revert: more unrelated change

* test: blocked on pr #10153

* feat: mapping body_v2.message:string map to body

* fix: go.mod required changes

* fix: remove unused function

* fix: test fixed

* fix: go mod changes

* fix: tests

* fix: go lint

* revert: removing unused function

* revert: change ReadMultiple is needed

* fix: body.message not being mapped correctly

* fix: append warnings from fieldkeys

* fix: change warning to a const to fix tests

* chore: addressing comments from Nitya

* chore: remove unnecessary change

* fix: shift warning attachment to getKeySelectors

* fix: lint error

* feat: update message as typehint in JSON Column (#10545)

* fix: cursor comments

* chore: minor changes based on review

* fix: message field key search in JSON Logs (#10577)

* feat: work in progress

* fix: test run success

* fix: in progress

* fix: excluding message from metadata fetch

* test: cleared

* fix: key name in metadata

* fix: uncomment tests

* chore: change to method for staticfields

* fix: remove confusing comments; remove usage of logical keyword

* chore: shift method above business logic

* chore: changes based on review

* fix: comments in metadata_store.go

* fix: fallback expr switch case

* revert: remove unused JSON Field datatype

* fix: remove the exception checking

* chore: keep message contained to field mapper

* chore: text search tests

* fix: package test fixed

* fix: redundant code block removal

* fix: retain staticfield implementation and spell fix

* fix: nil param lint

---------

Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
Co-authored-by: Nityananda Gohain <nityanandagohain@gmail.com>
2026-03-18 10:48:17 +00:00
Piyush Singariya
fd19ff8e5e fix: "In Progress" stuck agent config (#10476)
* fix: in progress status stuck in logs pipelines

* fix: stuck in progress logs pipeline status

* fix: changes based on review

* revert: comment change

* fix: change order of update handling

* fix: check newstatus deploy status
2026-03-18 08:31:26 +00:00
swapnil-signoz
7b9e93162f feat: adding cloud integration type for refactor (#10453)
* feat: adding cloud integration type for refactor

* refactor: store interfaces to use local types and error

* feat: adding updated types for cloud integration

* refactor: using struct for map

* refactor: update cloud integration types and module interface

* fix: correct GetService signature and remove shadowed Data field

* refactor: adding comments and removed wrong code

* refactor: streamlining types

* refactor: add comments for backward compatibility in PostableAgentCheckInRequest

* refactor: update Dashboard struct comments and remove unused fields

* refactor: clean up types

* refactor: renaming service type to service id

* refactor: using serviceID type

* feat: adding method for service id creation

* refactor: updating store methods

* refactor: clean up

* refactor: review comments
2026-03-18 08:20:18 +00:00
primus-bot[bot]
f106f57097 chore(release): bump to v0.116.0 (#10626)
Co-authored-by: primus-bot[bot] <171087277+primus-bot[bot]@users.noreply.github.com>
2026-03-18 06:47:16 +00:00
Vikrant Gupta
5bafdeb373 fix(user): add config for user invite token expiry (#10618)
* fix(user): increase expiry for reset password token for invites

* fix(user): increase expiry for reset password token for invites

* fix(user): increase expiry for reset password token for invites

* fix(user): increase expiry for reset password token for invites
2026-03-17 16:57:29 +00:00
Naman Verma
24b72084ac fix: return not-found error with diagnostic info for absent metrics (#10560)
* fix: check for metric type without query range constraint

* revert: revert check for metric type without query range constraint

* chore: move temporality+type fetcher to the case where it is actually used

* fix: don't send absent metrics to query builder

* chore: better package import name

* test: unit test add mock for metadata call (which is expected in the test's scenario)

* revert: revert seeding of absent metrics

* fix: throw a not found err if metric data is missing

* test: unit test add mock for metadata call (which is expected in the test's scenario)

* revert: no need for special err handling in threshold rule

* chore: add last seen info in err message

* test: fix broken dashboard test

* test: integration test for short time range query

* chore: python lint issue
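
The shape of the fix above as a hedged sketch, with placeholder types and only the standard library (the actual code uses SigNoz's internal errors and metric metadata packages):

// absentmetric.go: return a not-found error that carries "last seen"
// diagnostic info instead of passing an absent metric to the query builder.
// Placeholder names; not the actual SigNoz implementation.
package metrics

import (
	"fmt"
	"time"
)

func notFoundErr(metricName string, lastSeen time.Time) error {
	if lastSeen.IsZero() {
		return fmt.Errorf("metric %q not found: no samples have been received", metricName)
	}
	return fmt.Errorf("metric %q has no data in the queried range: last seen at %s",
		metricName, lastSeen.Format(time.RFC3339))
}
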
2026-03-17 16:15:32 +00:00
Pandey
2db83b453d refactor: merge roletypes into authtypes (#10614)
* refactor: merge roletypes into authtypes

* refactor: merge roletypes into authtypes

* refactor: update openapi spec

* feat: split CI

* fix: fix tsc of frontend
2026-03-17 15:43:58 +00:00
Amaresh S M
2f012715b4 fix(frontend/vite): avoid inlining whole process.env into bundle (#10605) 2026-03-17 11:51:30 +00:00
Vikrant Gupta
aa05a7bf14 chore(identn): add me as codeowner for identn (#10612) 2026-03-17 11:29:34 +00:00
Vikrant Gupta
99327960b0 feat(authn): move identn to factory and config (#10608)
* feat(authn): move identn to factory and config

* feat(authn): add support for enabling identNs

* feat(authn): add support for enabling identNs
2026-03-17 11:22:26 +00:00
Pandey
12b02a1002 feat(sqlschema): add support for partial unique indexes (#10604)
* feat(sqlschema): add support for partial unique indexes

* feat(sqlschema): add support for multiple indexes

* feat(sqlschema): add support for multiple indexes

* feat(sqlschema): move normalizer to its own struct

* feat(sqlschema): move normalizer tests to normalizer

* feat(sqlschema): move normalizer tests to normalizer

* feat(sqlschema): add more index tests from docs
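
For context, a partial unique index enforces uniqueness only on rows matching a predicate. A hedged sketch of the kind of DDL this maps to; the table, column, and predicate are made up, not taken from the SigNoz schema:

// partialindex.go: example DDL for a partial unique index (made-up schema).
package sqlschemaexample

// Uniqueness applies only to rows matching the WHERE predicate, so many
// soft-deleted rows may share a name while at most one active row can.
const examplePartialUniqueIndex = `CREATE UNIQUE INDEX idx_account_name_active
ON account (name)
WHERE deleted = false`
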
2026-03-17 11:22:11 +00:00
Vikrant Gupta
4ce220ba92 feat(authn): introduce identN (#10601)
* feat(authn): introduce identity resolvers

* feat(authn): clean the interface DI

* feat(authn): rename the interface to identN

* feat(authn): pending identN rename

* feat(authn): still handling renames

* feat(authn): deprecate authtype

* feat(authn): clean the rotate handling

* feat(authn): still handling renames
2026-03-17 07:27:36 +00:00
Abhi kumar
0211ddf0cb chore: broke down rightcontainer component into sub-components (#10575)
* feat: added section in panel settings

* chore: minor changes

* fix: fixed failing tests

* fix: minor style fixes

* chore: updated the categorisation

* feat: added chart appearance settings in panel

* feat: added fill mode in timeseries

* chore: broke down rightcontainer component into sub-components

* chore: updated styles + made panel config resizable

* chore: updated styles

* chore: minor styles improvements

* chore: formatting unit section fix

* chore: disabled chart appearance section

* chore: prettier fmt fix

* fix: transform react-resizable-panels in jest config

* fix: failing test

* chore: updated transition timing

* chore: fixed resizable handle styling

* chore: pr review changes

* chore: pr review changes

* chore: pr review changes

* chore: pr review changes

* chore: pr review changes
2026-03-17 06:44:34 +00:00
Pandey
e5eb62e45b feat: add more support to sqlschema (#10602)
2026-03-16 17:24:13 +00:00
Nageshbansal
7371dcacf0 chore: deprecates generator from deploy (#10447) 2026-03-16 14:55:33 +00:00
Abhi kumar
3cdf3e06f3 feat: added chart appearance settings in panel (#10573)
* feat: added section in panel settings

* chore: minor changes

* fix: fixed failing tests

* fix: minor style fixes

* chore: updated the categorisation

* feat: added chart appearance settings in panel

* feat: added fill mode in timeseries

* chore: updated styles + made panel config resizable

* chore: updated styles

* chore: minor styles improvements

* chore: formatting unit section fix

* chore: disabled chart appearance section

* chore: prettier fmt fix

* fix: transform react-resizable-panels in jest config

* fix: failing test

* chore: updated transition timing

* chore: fixed resizable handle styling

* chore: pr review changes

* chore: pr review changes
2026-03-16 14:08:15 +00:00
Pandey
f8c38df2bf refactor: replace zap logger with slog across codebase (#10599)
* refactor: replace zap logger with slog across codebase

* refactor: fix lint

* refactor: fix lint
2026-03-16 12:09:39 +00:00
Pandey
cab4a56694 chore: add myself as codeowner for CI and go.mod (#10597)
Clarified CODEOWNERS comments and updated owner assignments.
2026-03-16 10:01:36 +00:00
Ashwin Bhatkal
78041fe457 chore: send slack notification on dequeue only and not merge (#10596)
2026-03-16 09:38:04 +00:00
Ashwin Bhatkal
09b6382820 chore: separate dashboard slider from dashboard provider + refactor (#10572)
* chore: separate dashboard slider from dashboard provider + refactor

* chore: resolve self comments
2026-03-16 08:12:09 +00:00
Ashwin Bhatkal
9689b847f0 chore: add slack notification on dequeue from merge queue (#10580)
* chore: add slack notification on merge queue failure

* chore: break type

* chore: update yaml

* chore: update yaml

* chore: update yaml

* chore: update yaml

* chore: update yaml

* chore: update yaml

* chore: update yaml

* chore: resolve comments
2026-03-16 07:12:19 +00:00
Vishal Sharma
15e5938e95 fix: add allInOneLightMode SVG for light mode (#10589) 2026-03-16 06:59:28 +00:00
Abhi kumar
c5ef455283 fix: added fix for panel setting scrollbar issue (#10587)
* fix: added fix for panel setting scrollbar issue

* fix: added changes for panel switch
2026-03-13 19:30:49 +00:00
Ishan
2316b5be83 Sig 3634 revert (#10578)
* Revert "Revert "feat: Option to zoom out OR reset zoom in the explorer pages (#10464)" (#10574)"

This reverts commit 5b8d5fbfd3.

* fix: stop bubble
2026-03-13 15:29:28 +00:00
Abhi kumar
937ebc1582 feat: added section in panel settings (#10569)
* feat: added section in panel settings

* chore: minor changes

* fix: fixed failing tests

* fix: minor style fixes

* chore: updated the categorisation

* chore: updated styles

* chore: minor styles improvements

* chore: formatting unit section fix
2026-03-13 13:22:10 +00:00
Ashwin Bhatkal
dcc8173c79 fix: variables initial url state (#10579)
* fix: variables-initial-url-state

* chore: add tests
2026-03-13 11:16:47 +00:00
Ashwin Bhatkal
4b4ef5ce58 fix: edit mode variables not persisting value (#10576)
* fix: edit mode variables not persisting value

* chore: move into hook

* chore: add tests

* chore: fix tests

* chore: move functions
2026-03-13 07:49:40 +00:00
Yunus M
5b8d5fbfd3 Revert "feat: Option to zoom out OR reset zoom in the explorer pages (#10464)" (#10574)
This reverts commit 557451ed81.
2026-03-12 19:24:49 +00:00
Ashwin Bhatkal
0271be11e6 chore: remove dashboard provider from the root (#10526)
* chore: remove dashboard provider from the root

* chore: fix tests

* chore: fix tests

* chore: remove dashboardId from provider

* chore: remove old instances of dashboard provider

* chore: separate dashboard widget fully

* chore: fix tests

* chore: resolve self comments
2026-03-12 14:51:49 +00:00
Vikrant Gupta
92d220c4d9 feat(serviceaccount): domain changes for service account (#10568)
* feat(serviceaccount): domain type changes

* feat(serviceaccount): domain type changes

* feat(serviceaccount): domain type changes
2026-03-12 11:06:04 +00:00
Vikrant Gupta
0ed8169bad feat(authz): add service account authz changes (#10567) 2026-03-12 09:42:50 +00:00
SagarRajput-7
ed553fb02e feat: removed plan name and added copiable license info in custom domain card (#10558)
* feat: removed plan name and added copiable license info in custom domain card

* feat: added condition on the license row in custom domain card

* feat: code refactor and making license row a common component

* feat: added test case and addressed feedback

* feat: style improvement

* feat: added maskedkey util and refactored code

* feat: updated test case
2026-03-12 09:24:41 +00:00
Ashwin Bhatkal
47daba3c17 chore: link session url with sentry alert (#10566) 2026-03-12 09:19:31 +00:00
Srikanth Chekuri
2b3310809a fix: newServer uses the stored config hash for mismatch (#10563) 2026-03-12 08:26:22 +00:00
Ashwin Bhatkal
542a648cc3 chore: remove toScrollWidgetId from dashboard provider (#10562)
* chore: remove toScrollWidgetId from dashboard provider

* chore: remove dead files

* chore: fix tests
2026-03-12 06:03:02 +00:00
563 changed files with 21980 additions and 31837 deletions

19
.github/CODEOWNERS vendored
View File

@@ -1,8 +1,6 @@
# CODEOWNERS info: https://help.github.com/en/articles/about-code-owners
# Owners are automatically requested for review for PRs that changes code
# that they own.
# Owners are automatically requested for review for PRs that changes code that they own.
/frontend/ @SigNoz/frontend-maintainers
@@ -11,8 +9,10 @@
/frontend/src/container/OnboardingV2Container/onboarding-configs/onboarding-config-with-links.json @makeavish
/frontend/src/container/OnboardingV2Container/AddDataSource/AddDataSource.tsx @makeavish
/deploy/ @SigNoz/devops
.github @SigNoz/devops
# CI
/deploy/ @therealpandey
.github @therealpandey
go.mod @therealpandey
# Scaffold Owners
@@ -105,6 +105,10 @@
/pkg/modules/authdomain/ @vikrantgupta25
/pkg/modules/role/ @vikrantgupta25
# IdentN Owners
/pkg/identn/ @vikrantgupta25
/pkg/http/middleware/identn.go @vikrantgupta25
# Integration tests
/tests/integration/ @vikrantgupta25
@@ -127,12 +131,15 @@
/frontend/src/pages/DashboardsListPage/ @SigNoz/pulse-frontend
/frontend/src/container/ListOfDashboard/ @SigNoz/pulse-frontend
# Dashboard Widget Page
/frontend/src/pages/DashboardWidget/ @SigNoz/pulse-frontend
/frontend/src/container/NewWidget/ @SigNoz/pulse-frontend
## Dashboard Page
/frontend/src/pages/DashboardPage/ @SigNoz/pulse-frontend
/frontend/src/container/DashboardContainer/ @SigNoz/pulse-frontend
/frontend/src/container/GridCardLayout/ @SigNoz/pulse-frontend
/frontend/src/container/NewWidget/ @SigNoz/pulse-frontend
## Public Dashboard Page

View File

@@ -77,10 +77,6 @@ jobs:
uses: actions/setup-node@v5
with:
node-version: "22"
- name: setup-pnpm
uses: pnpm/action-setup@v4
with:
version: 10
- name: docker-community
shell: bash
run: |
@@ -106,17 +102,3 @@ jobs:
run: |
go run cmd/enterprise/*.go generate openapi
git diff --compact-summary --exit-code || (echo; echo "Unexpected difference in openapi spec. Run go run cmd/enterprise/*.go generate openapi locally and commit."; exit 1)
- name: node-install
uses: actions/setup-node@v5
with:
node-version: "22"
- name: setup-pnpm
uses: pnpm/action-setup@v4
with:
version: 10
- name: install-frontend
run: cd frontend && pnpm install
- name: generate-api-clients
run: |
cd frontend && pnpm generate:api
git diff --compact-summary --exit-code || (echo; echo "Unexpected difference in generated api clients. Run pnpm generate:api in frontend/ locally and commit."; exit 1)

View File

@@ -25,10 +25,6 @@ jobs:
uses: actions/setup-node@v5
with:
node-version: "22"
- name: setup-pnpm
uses: pnpm/action-setup@v4
with:
version: 10
- name: build-frontend
run: make js-build
- name: upload-frontend-artifact

View File

@@ -41,10 +41,6 @@ jobs:
uses: actions/setup-node@v5
with:
node-version: "22"
- name: setup-pnpm
uses: pnpm/action-setup@v4
with:
version: 10
- name: build-frontend
run: make js-build
- name: upload-frontend-artifact

View File

@@ -29,6 +29,7 @@ jobs:
- name: fmt
run: |
make py-fmt
git diff --exit-code -- tests/integration/
- name: lint
run: |
make py-lint
@@ -49,6 +50,7 @@ jobs:
- ttl
- alerts
- ingestionkeys
- rootuser
sqlstore-provider:
- postgres
- sqlite

View File

@@ -52,16 +52,16 @@ jobs:
with:
PRIMUS_REF: main
JS_SRC: frontend
md-languages:
languages:
if: |
github.event_name == 'merge_group' ||
(github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
runs-on: ubuntu-latest
steps:
- name: checkout
- name: self-checkout
uses: actions/checkout@v4
- name: validate md languages
- name: run
run: bash frontend/scripts/validate-md-languages.sh
authz:
if: |
@@ -70,49 +70,55 @@ jobs:
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
runs-on: ubuntu-latest
steps:
- name: Checkout code
- name: self-checkout
uses: actions/checkout@v5
- name: Set up Node.js
- name: node-install
uses: actions/setup-node@v5
with:
node-version: "22"
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 10
- name: Install frontend dependencies
- name: deps-install
working-directory: ./frontend
run: |
pnpm install
- name: Install uv
yarn install
- name: uv-install
uses: astral-sh/setup-uv@v5
- name: Install Python dependencies
- name: uv-deps
working-directory: ./tests/integration
run: |
uv sync
- name: Start test environment
- name: setup-test
run: |
make py-test-setup
- name: Generate permissions.type.ts
- name: generate
working-directory: ./frontend
run: |
pnpm generate:permissions-type
- name: Teardown test environment
yarn generate:permissions-type
- name: teardown-test
if: always()
run: |
make py-test-teardown
- name: Check for changes
- name: validate
run: |
if ! git diff --exit-code frontend/src/hooks/useAuthZ/permissions.type.ts; then
echo "::error::frontend/src/hooks/useAuthZ/permissions.type.ts is out of date. Please run the generator locally and commit the changes: pnpm generate:permissions-type (from the frontend directory)"
echo "::error::frontend/src/hooks/useAuthZ/permissions.type.ts is out of date. Please run the generator locally and commit the changes: npm run generate:permissions-type (from the frontend directory)"
exit 1
fi
openapi:
if: |
github.event_name == 'merge_group' ||
(github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
runs-on: ubuntu-latest
steps:
- name: self-checkout
uses: actions/checkout@v4
- name: node-install
uses: actions/setup-node@v5
with:
node-version: "22"
- name: install-frontend
run: cd frontend && yarn install
- name: generate-api-clients
run: |
cd frontend && yarn generate:api
git diff --compact-summary --exit-code || (echo; echo "Unexpected difference in generated api clients. Run yarn generate:api in frontend/ locally and commit."; exit 1)

60
.github/workflows/mergequeueci.yaml vendored Normal file
View File

@@ -0,0 +1,60 @@
name: mergequeueci
on:
pull_request:
types:
- dequeued
jobs:
notify:
runs-on: ubuntu-latest
if: github.event.pull_request.merged == false
steps:
- name: alert
uses: slackapi/slack-github-action@v2.1.1
with:
webhook: ${{ secrets.SLACK_MERGE_QUEUE_WEBHOOK }}
webhook-type: incoming-webhook
payload: |
{
"text": ":x: PR removed from merge queue",
"blocks": [
{
"type": "header",
"text": {
"type": "plain_text",
"text": ":x: PR Removed from Merge Queue"
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*<${{ github.event.pull_request.html_url }}|PR #${{ github.event.pull_request.number }}: ${{ github.event.pull_request.title }}>*"
}
},
{
"type": "divider"
},
{
"type": "section",
"fields": [
{
"type": "mrkdwn",
"text": "*Author*\n@${{ github.event.pull_request.user.login }}"
}
]
}
]
}
- name: comment
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ github.event.pull_request.number }}
PR_AUTHOR: ${{ github.event.pull_request.user.login }}
PR_URL: ${{ github.event.pull_request.html_url }}
run: |
gh api repos/${{ github.repository }}/issues/$PR_NUMBER/comments \
-f body="> :x: **PR removed from merge queue**
>
> @$PR_AUTHOR your PR was removed from the merge queue. Fix the issue and re-queue when ready."

View File

@@ -27,11 +27,6 @@ jobs:
with:
node-version: lts/*
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 10
- name: Mask secrets and input
run: |
echo "::add-mask::${{ secrets.BASE_URL }}"
@@ -42,11 +37,12 @@ jobs:
- name: Install dependencies
working-directory: frontend
run: |
pnpm install
npm install -g yarn
yarn
- name: Install Playwright Browsers
working-directory: frontend
run: pnpm playwright install --with-deps
run: yarn playwright install --with-deps
- name: Run Playwright Tests
working-directory: frontend
@@ -55,7 +51,7 @@ jobs:
LOGIN_USERNAME="${{ secrets.LOGIN_USERNAME }}" \
LOGIN_PASSWORD="${{ secrets.LOGIN_PASSWORD }}" \
USER_ROLE="${{ github.event.inputs.userRole }}" \
pnpm playwright test
yarn playwright test
- name: Upload Playwright Report
uses: actions/upload-artifact@v4

View File

@@ -1,21 +1,19 @@
# Please adjust to your needs (see https://www.gitpod.io/docs/config-gitpod-file)
# and commit this file to your remote git repository to share the goodness with others.
tasks:
- name: Run Docker Images
init: |
cd ./deploy/docker
sudo docker compose up -d
- name: Install pnpm
init: |
npm i -g pnpm
- name: Run Frontend
init: |
cd ./frontend
pnpm install
command: pnpm dev
yarn install
command:
yarn dev
ports:
- port: 8080

View File

@@ -35,7 +35,7 @@ linters:
- identical
sloglint:
no-mixed-args: true
kv-only: true
attr-only: true
no-global: all
context: all
static-msg: true
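
For reference, the style difference this setting enforces, sketched against the standard library: kv-only allowed loose key/value pairs, while attr-only requires typed slog attributes.

// logstyle.go: kv-style vs. attr-only slog calls (illustrative only).
package logstyle

import (
	"context"
	"log/slog"
)

func example(ctx context.Context, logger *slog.Logger, user string, count int) {
	// Previously enforced kv-only style, now rejected by sloglint:
	logger.InfoContext(ctx, "processed", "user", user, "count", count)

	// Required attr-only style:
	logger.InfoContext(ctx, "processed", slog.String("user", user), slog.Int("count", count))
}
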

View File

@@ -154,7 +154,7 @@ $(GO_BUILD_ARCHS_ENTERPRISE_RACE): go-build-enterprise-race-%: $(TARGET_DIR)
.PHONY: js-build
js-build: ## Builds the js frontend
@echo ">> building js frontend"
@cd $(JS_BUILD_CONTEXT) && CI=1 pnpm install && pnpm build
@cd $(JS_BUILD_CONTEXT) && CI=1 yarn install && yarn build
##############################################################
# docker commands

View File

@@ -4,12 +4,15 @@ import (
"context"
"log/slog"
"github.com/spf13/cobra"
"github.com/SigNoz/signoz/cmd"
"github.com/SigNoz/signoz/pkg/analytics"
"github.com/SigNoz/signoz/pkg/authn"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/authz/openfgaauthz"
"github.com/SigNoz/signoz/pkg/authz/openfgaschema"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/gateway"
"github.com/SigNoz/signoz/pkg/gateway/noopgateway"
@@ -28,18 +31,17 @@ import (
"github.com/SigNoz/signoz/pkg/version"
"github.com/SigNoz/signoz/pkg/zeus"
"github.com/SigNoz/signoz/pkg/zeus/noopzeus"
"github.com/spf13/cobra"
)
func registerServer(parentCmd *cobra.Command, logger *slog.Logger) {
var flags signoz.DeprecatedFlags
var configFiles []string
serverCmd := &cobra.Command{
Use: "server",
Short: "Run the SigNoz server",
FParseErrWhitelist: cobra.FParseErrWhitelist{UnknownFlags: true},
RunE: func(currCmd *cobra.Command, args []string) error {
config, err := cmd.NewSigNozConfig(currCmd.Context(), logger, flags)
config, err := cmd.NewSigNozConfig(currCmd.Context(), logger, configFiles)
if err != nil {
return err
}
@@ -48,7 +50,7 @@ func registerServer(parentCmd *cobra.Command, logger *slog.Logger) {
},
}
flags.RegisterFlags(serverCmd)
serverCmd.Flags().StringArrayVar(&configFiles, "config", nil, "path to a YAML configuration file (can be specified multiple times, later files override earlier ones)")
parentCmd.AddCommand(serverCmd)
}
@@ -90,37 +92,37 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
},
)
if err != nil {
logger.ErrorContext(ctx, "failed to create signoz", "error", err)
logger.ErrorContext(ctx, "failed to create signoz", errors.Attr(err))
return err
}
server, err := app.NewServer(config, signoz)
if err != nil {
logger.ErrorContext(ctx, "failed to create server", "error", err)
logger.ErrorContext(ctx, "failed to create server", errors.Attr(err))
return err
}
if err := server.Start(ctx); err != nil {
logger.ErrorContext(ctx, "failed to start server", "error", err)
logger.ErrorContext(ctx, "failed to start server", errors.Attr(err))
return err
}
signoz.Start(ctx)
if err := signoz.Wait(ctx); err != nil {
logger.ErrorContext(ctx, "failed to start signoz", "error", err)
logger.ErrorContext(ctx, "failed to start signoz", errors.Attr(err))
return err
}
err = server.Stop(ctx)
if err != nil {
logger.ErrorContext(ctx, "failed to stop server", "error", err)
logger.ErrorContext(ctx, "failed to stop server", errors.Attr(err))
return err
}
err = signoz.Stop(ctx)
if err != nil {
logger.ErrorContext(ctx, "failed to stop signoz", "error", err)
logger.ErrorContext(ctx, "failed to stop signoz", errors.Attr(err))
return err
}

View File

@@ -10,18 +10,23 @@ import (
"github.com/SigNoz/signoz/pkg/signoz"
)
func NewSigNozConfig(ctx context.Context, logger *slog.Logger, flags signoz.DeprecatedFlags) (signoz.Config, error) {
func NewSigNozConfig(ctx context.Context, logger *slog.Logger, configFiles []string) (signoz.Config, error) {
uris := make([]string, 0, len(configFiles)+1)
for _, f := range configFiles {
uris = append(uris, "file:"+f)
}
uris = append(uris, "env:")
config, err := signoz.NewConfig(
ctx,
logger,
config.ResolverConfig{
Uris: []string{"env:"},
Uris: uris,
ProviderFactories: []config.ProviderFactory{
envprovider.NewFactory(),
fileprovider.NewFactory(),
},
},
flags,
)
if err != nil {
return signoz.Config{}, err

86
cmd/config_test.go Normal file
View File

@@ -0,0 +1,86 @@
package cmd
import (
"context"
"log/slog"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNewSigNozConfig_NoConfigFiles(t *testing.T) {
logger := slog.New(slog.DiscardHandler)
config, err := NewSigNozConfig(context.Background(), logger, nil)
require.NoError(t, err)
assert.NotZero(t, config)
}
func TestNewSigNozConfig_SingleConfigFile(t *testing.T) {
dir := t.TempDir()
configPath := filepath.Join(dir, "config.yaml")
err := os.WriteFile(configPath, []byte(`
cache:
provider: "redis"
`), 0644)
require.NoError(t, err)
logger := slog.New(slog.DiscardHandler)
config, err := NewSigNozConfig(context.Background(), logger, []string{configPath})
require.NoError(t, err)
assert.Equal(t, "redis", config.Cache.Provider)
}
func TestNewSigNozConfig_MultipleConfigFiles_LaterOverridesEarlier(t *testing.T) {
dir := t.TempDir()
basePath := filepath.Join(dir, "base.yaml")
err := os.WriteFile(basePath, []byte(`
cache:
provider: "memory"
sqlstore:
provider: "sqlite"
`), 0644)
require.NoError(t, err)
overridePath := filepath.Join(dir, "override.yaml")
err = os.WriteFile(overridePath, []byte(`
cache:
provider: "redis"
`), 0644)
require.NoError(t, err)
logger := slog.New(slog.DiscardHandler)
config, err := NewSigNozConfig(context.Background(), logger, []string{basePath, overridePath})
require.NoError(t, err)
// Later file overrides earlier
assert.Equal(t, "redis", config.Cache.Provider)
// Value from base file that wasn't overridden persists
assert.Equal(t, "sqlite", config.SQLStore.Provider)
}
func TestNewSigNozConfig_EnvOverridesConfigFile(t *testing.T) {
dir := t.TempDir()
configPath := filepath.Join(dir, "config.yaml")
err := os.WriteFile(configPath, []byte(`
cache:
provider: "fromfile"
`), 0644)
require.NoError(t, err)
t.Setenv("SIGNOZ_CACHE_PROVIDER", "fromenv")
logger := slog.New(slog.DiscardHandler)
config, err := NewSigNozConfig(context.Background(), logger, []string{configPath})
require.NoError(t, err)
// Env should override file
assert.Equal(t, "fromenv", config.Cache.Provider)
}
func TestNewSigNozConfig_NonexistentFile(t *testing.T) {
logger := slog.New(slog.DiscardHandler)
_, err := NewSigNozConfig(context.Background(), logger, []string{"/nonexistent/config.yaml"})
assert.Error(t, err)
}

View File

@@ -3,9 +3,8 @@ FROM node:22-bookworm AS build
WORKDIR /opt/
COPY ./frontend/ ./
ENV NODE_OPTIONS=--max-old-space-size=8192
RUN CI=1 npm i -g pnpm
RUN CI=1 pnpm install
RUN CI=1 pnpm build
RUN CI=1 yarn install
RUN CI=1 yarn build
FROM golang:1.25-bookworm

View File

@@ -5,16 +5,18 @@ import (
"log/slog"
"time"
"github.com/spf13/cobra"
"github.com/SigNoz/signoz/cmd"
"github.com/SigNoz/signoz/ee/authn/callbackauthn/oidccallbackauthn"
"github.com/SigNoz/signoz/ee/authn/callbackauthn/samlcallbackauthn"
"github.com/SigNoz/signoz/ee/authz/openfgaauthz"
eequerier "github.com/SigNoz/signoz/ee/querier"
"github.com/SigNoz/signoz/ee/authz/openfgaschema"
"github.com/SigNoz/signoz/ee/gateway/httpgateway"
enterpriselicensing "github.com/SigNoz/signoz/ee/licensing"
"github.com/SigNoz/signoz/ee/licensing/httplicensing"
"github.com/SigNoz/signoz/ee/modules/dashboard/impldashboard"
eequerier "github.com/SigNoz/signoz/ee/querier"
enterpriseapp "github.com/SigNoz/signoz/ee/query-service/app"
"github.com/SigNoz/signoz/ee/sqlschema/postgressqlschema"
"github.com/SigNoz/signoz/ee/sqlstore/postgressqlstore"
@@ -23,6 +25,7 @@ import (
"github.com/SigNoz/signoz/pkg/analytics"
"github.com/SigNoz/signoz/pkg/authn"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/gateway"
"github.com/SigNoz/signoz/pkg/licensing"
@@ -38,18 +41,17 @@ import (
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/version"
"github.com/SigNoz/signoz/pkg/zeus"
"github.com/spf13/cobra"
)
func registerServer(parentCmd *cobra.Command, logger *slog.Logger) {
var flags signoz.DeprecatedFlags
var configFiles []string
serverCmd := &cobra.Command{
Use: "server",
Short: "Run the SigNoz server",
FParseErrWhitelist: cobra.FParseErrWhitelist{UnknownFlags: true},
RunE: func(currCmd *cobra.Command, args []string) error {
config, err := cmd.NewSigNozConfig(currCmd.Context(), logger, flags)
config, err := cmd.NewSigNozConfig(currCmd.Context(), logger, configFiles)
if err != nil {
return err
}
@@ -58,7 +60,7 @@ func registerServer(parentCmd *cobra.Command, logger *slog.Logger) {
},
}
flags.RegisterFlags(serverCmd)
serverCmd.Flags().StringArrayVar(&configFiles, "config", nil, "path to a YAML configuration file (can be specified multiple times, later files override earlier ones)")
parentCmd.AddCommand(serverCmd)
}
@@ -69,7 +71,7 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
// add enterprise sqlstore factories to the community sqlstore factories
sqlstoreFactories := signoz.NewSQLStoreProviderFactories()
if err := sqlstoreFactories.Add(postgressqlstore.NewFactory(sqlstorehook.NewLoggingFactory(), sqlstorehook.NewInstrumentationFactory())); err != nil {
logger.ErrorContext(ctx, "failed to add postgressqlstore factory", "error", err)
logger.ErrorContext(ctx, "failed to add postgressqlstore factory", errors.Attr(err))
return err
}
@@ -132,37 +134,37 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
)
if err != nil {
logger.ErrorContext(ctx, "failed to create signoz", "error", err)
logger.ErrorContext(ctx, "failed to create signoz", errors.Attr(err))
return err
}
server, err := enterpriseapp.NewServer(config, signoz)
if err != nil {
logger.ErrorContext(ctx, "failed to create server", "error", err)
logger.ErrorContext(ctx, "failed to create server", errors.Attr(err))
return err
}
if err := server.Start(ctx); err != nil {
logger.ErrorContext(ctx, "failed to start server", "error", err)
logger.ErrorContext(ctx, "failed to start server", errors.Attr(err))
return err
}
signoz.Start(ctx)
if err := signoz.Wait(ctx); err != nil {
logger.ErrorContext(ctx, "failed to start signoz", "error", err)
logger.ErrorContext(ctx, "failed to start signoz", errors.Attr(err))
return err
}
err = server.Stop(ctx)
if err != nil {
logger.ErrorContext(ctx, "failed to stop server", "error", err)
logger.ErrorContext(ctx, "failed to stop server", errors.Attr(err))
return err
}
err = signoz.Stop(ctx)
if err != nil {
logger.ErrorContext(ctx, "failed to stop signoz", "error", err)
logger.ErrorContext(ctx, "failed to stop signoz", errors.Attr(err))
return err
}

View File

@@ -4,9 +4,10 @@ import (
"log/slog"
"os"
"github.com/SigNoz/signoz/pkg/version"
"github.com/spf13/cobra"
"go.uber.org/zap" //nolint:depguard
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/version"
)
var RootCmd = &cobra.Command{
@@ -19,15 +20,9 @@ var RootCmd = &cobra.Command{
}
func Execute(logger *slog.Logger) {
zapLogger := newZapLogger()
zap.ReplaceGlobals(zapLogger)
defer func() {
_ = zapLogger.Sync()
}()
err := RootCmd.Execute()
if err != nil {
logger.ErrorContext(RootCmd.Context(), "error running command", "error", err)
logger.ErrorContext(RootCmd.Context(), "error running command", errors.Attr(err))
os.Exit(1)
}
}

View File

@@ -1,110 +0,0 @@
package cmd
import (
"context"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"go.uber.org/zap" //nolint:depguard
"go.uber.org/zap/zapcore" //nolint:depguard
)
// Deprecated: Use `NewLogger` from `pkg/instrumentation` instead.
func newZapLogger() *zap.Logger {
config := zap.NewProductionConfig()
config.EncoderConfig.TimeKey = "timestamp"
config.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder
// Extract sampling config before building the logger.
// We need to disable sampling in the config and apply it manually later
// to ensure correct core ordering. See filteringCore documentation for details.
samplerConfig := config.Sampling
config.Sampling = nil
logger, _ := config.Build()
// Wrap with custom core wrapping to filter certain log entries.
// The order of wrapping is important:
// 1. First wrap with filteringCore
// 2. Then wrap with sampler
//
// This creates the call chain: sampler -> filteringCore -> ioCore
//
// During logging:
// - sampler.Check decides whether to sample the log entry
// - If sampled, filteringCore.Check is called
// - filteringCore adds itself to CheckedEntry.cores
// - All cores in CheckedEntry.cores have their Write method called
// - filteringCore.Write can now filter the entry before passing to ioCore
//
// If we didn't disable the sampler above, filteringCore would have wrapped
// sampler. By calling sampler.Check we would have allowed it to call
// ioCore.Check that adds itself to CheckedEntry.cores. Then ioCore.Write
// would have bypassed our checks, making filtering impossible.
return logger.WithOptions(zap.WrapCore(func(core zapcore.Core) zapcore.Core {
core = &filteringCore{core}
if samplerConfig != nil {
core = zapcore.NewSamplerWithOptions(
core,
time.Second,
samplerConfig.Initial,
samplerConfig.Thereafter,
)
}
return core
}))
}
// filteringCore wraps a zapcore.Core to filter out log entries based on a
// custom logic.
//
// Note: This core must be positioned before the sampler in the core chain
// to ensure Write is called. See newZapLogger for ordering details.
type filteringCore struct {
zapcore.Core
}
// filter determines whether a log entry should be written based on its fields.
// Returns false if the entry should be suppressed, true otherwise.
//
// Current filters:
// - context.Canceled: These are expected errors from cancelled operations,
// and create noise in logs.
func (c *filteringCore) filter(fields []zapcore.Field) bool {
for _, field := range fields {
if field.Type == zapcore.ErrorType {
if loggedErr, ok := field.Interface.(error); ok {
// Suppress logs containing context.Canceled errors
if errors.Is(loggedErr, context.Canceled) {
return false
}
}
}
}
return true
}
// With implements zapcore.Core.With
// It returns a new copy with the added context.
func (c *filteringCore) With(fields []zapcore.Field) zapcore.Core {
return &filteringCore{c.Core.With(fields)}
}
// Check implements zapcore.Core.Check.
// It adds this core to the CheckedEntry if the log level is enabled,
// ensuring that Write will be called for this entry.
func (c *filteringCore) Check(ent zapcore.Entry, ce *zapcore.CheckedEntry) *zapcore.CheckedEntry {
if c.Enabled(ent.Level) {
return ce.AddCore(ent, c)
}
return ce
}
// Write implements zapcore.Core.Write.
// It filters log entries based on their fields before delegating to the wrapped core.
func (c *filteringCore) Write(ent zapcore.Entry, fields []zapcore.Field) error {
if !c.filter(fields) {
return nil
}
return c.Core.Write(ent, fields)
}

View File

@@ -1,4 +0,0 @@
provider: "inmemory"
inmemory:
ttl: 60m
cleanupInterval: 10m

View File

@@ -308,6 +308,9 @@ user:
allow_self: true
# The duration within which a user can reset their password.
max_token_lifetime: 6h
invite:
# The duration within which a user can accept their invite.
max_token_lifetime: 48h
root:
# Whether to enable the root user. When enabled, a root user is provisioned
# on startup using the email and password below. The root user cannot be
@@ -321,3 +324,22 @@ user:
org:
name: default
id: 00000000-0000-0000-0000-000000000000
##################### IdentN #####################
identn:
tokenizer:
# toggle tokenizer identN
enabled: true
# headers to use for tokenizer identN resolver
headers:
- Authorization
- Sec-WebSocket-Protocol
apikey:
# toggle apikey identN
enabled: true
# headers to use for apikey identN resolver
headers:
- SIGNOZ-API-KEY
impersonation:
# toggle impersonation identN, when enabled, all requests will impersonate the root user
enabled: false

View File

@@ -1,25 +0,0 @@
# my global config
global:
scrape_interval: 5s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
- 127.0.0.1:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"
- 'alerts.yml'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs: []
remote_read:
- url: tcp://localhost:9000/signoz_metrics

View File

@@ -190,7 +190,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.115.0
image: signoz/signoz:v0.116.1
ports:
- "8080:8080" # signoz port
# - "6060:6060" # pprof port

View File

@@ -117,7 +117,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.115.0
image: signoz/signoz:v0.116.1
ports:
- "8080:8080" # signoz port
volumes:

View File

@@ -1,38 +0,0 @@
version: "3"
x-common: &common
networks:
- signoz-net
extra_hosts:
- host.docker.internal:host-gateway
logging:
options:
max-size: 50m
max-file: "3"
deploy:
restart_policy:
condition: on-failure
services:
hotrod:
<<: *common
image: jaegertracing/example-hotrod:1.61.0
command: [ "all" ]
environment:
- OTEL_EXPORTER_OTLP_ENDPOINT=http://host.docker.internal:4318 # In case of external SigNoz or cloud, update the endpoint and access token
load-hotrod:
<<: *common
image: "signoz/locust:1.2.3"
environment:
ATTACKED_HOST: http://hotrod:8080
LOCUST_MODE: standalone
NO_PROXY: standalone
TASK_DELAY_FROM: 5
TASK_DELAY_TO: 30
QUIET_MODE: "${QUIET_MODE:-false}"
LOCUST_OPTS: "--headless -u 10 -r 1"
volumes:
- ../../../common/locust-scripts:/locust
networks:
signoz-net:
name: signoz-net
external: true

View File

@@ -1,69 +0,0 @@
version: "3"
x-common: &common
networks:
- signoz-net
extra_hosts:
- host.docker.internal:host-gateway
logging:
options:
max-size: 50m
max-file: "3"
deploy:
mode: global
restart_policy:
condition: on-failure
services:
otel-agent:
<<: *common
image: otel/opentelemetry-collector-contrib:0.111.0
command:
- --config=/etc/otel-collector-config.yaml
volumes:
- ./otel-agent-config.yaml:/etc/otel-collector-config.yaml
- /:/hostfs:ro
environment:
- SIGNOZ_COLLECTOR_ENDPOINT=http://host.docker.internal:4317 # In case of external SigNoz or cloud, update the endpoint and access token
- OTEL_RESOURCE_ATTRIBUTES=host.name={{.Node.Hostname}},os.type={{.Node.Platform.OS}}
# - SIGNOZ_ACCESS_TOKEN="<your-access-token>"
# Before exposing the ports, make sure the ports are not used by other services
# ports:
# - "4317:4317"
# - "4318:4318"
otel-metrics:
<<: *common
image: otel/opentelemetry-collector-contrib:0.111.0
user: 0:0 # If you have security concerns, you can replace this with your `UID:GID` that has necessary permissions to docker.sock
command:
- --config=/etc/otel-collector-config.yaml
volumes:
- ./otel-metrics-config.yaml:/etc/otel-collector-config.yaml
- /var/run/docker.sock:/var/run/docker.sock
environment:
- SIGNOZ_COLLECTOR_ENDPOINT=http://host.docker.internal:4317 # In case of external SigNoz or cloud, update the endpoint and access token
- OTEL_RESOURCE_ATTRIBUTES=host.name={{.Node.Hostname}},os.type={{.Node.Platform.OS}}
# - SIGNOZ_ACCESS_TOKEN="<your-access-token>"
# Before exposing the ports, make sure the ports are not used by other services
# ports:
# - "4317:4317"
# - "4318:4318"
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- node.role == manager
logspout:
<<: *common
image: "gliderlabs/logspout:v3.2.14"
command: syslog+tcp://otel-agent:2255
user: root
volumes:
- /etc/hostname:/etc/host_hostname:ro
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- otel-agent
networks:
signoz-net:
name: signoz-net
external: true

View File

@@ -1,102 +0,0 @@
receivers:
hostmetrics:
collection_interval: 30s
root_path: /hostfs
scrapers:
cpu: {}
load: {}
memory: {}
disk: {}
filesystem: {}
network: {}
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
prometheus:
config:
global:
scrape_interval: 60s
scrape_configs:
- job_name: otel-agent
static_configs:
- targets:
- localhost:8888
labels:
job_name: otel-agent
tcplog/docker:
listen_address: "0.0.0.0:2255"
operators:
- type: regex_parser
regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
timestamp:
parse_from: attributes.timestamp
layout: '%Y-%m-%dT%H:%M:%S.%LZ'
- type: move
from: attributes["body"]
to: body
- type: remove
field: attributes.timestamp
# please remove names from below if you want to collect logs from them
- type: filter
id: signoz_logs_filter
expr: 'attributes.container_name matches "^(signoz_(logspout|signoz|otel-collector|clickhouse|zookeeper))|(infra_(logspout|otel-agent|otel-metrics)).*"'
processors:
batch:
send_batch_size: 10000
send_batch_max_size: 11000
timeout: 10s
resourcedetection:
# Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
detectors:
# - ec2
# - gcp
# - azure
- env
- system
timeout: 2s
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: 0.0.0.0:1777
exporters:
otlp:
endpoint: ${env:SIGNOZ_COLLECTOR_ENDPOINT}
tls:
insecure: true
headers:
signoz-access-token: ${env:SIGNOZ_ACCESS_TOKEN}
# debug: {}
service:
telemetry:
logs:
encoding: json
metrics:
address: 0.0.0.0:8888
extensions:
- health_check
- pprof
pipelines:
traces:
receivers: [otlp]
processors: [resourcedetection, batch]
exporters: [otlp]
metrics:
receivers: [otlp]
processors: [resourcedetection, batch]
exporters: [otlp]
metrics/hostmetrics:
receivers: [hostmetrics]
processors: [resourcedetection, batch]
exporters: [otlp]
metrics/prometheus:
receivers: [prometheus]
processors: [resourcedetection, batch]
exporters: [otlp]
logs:
receivers: [otlp, tcplog/docker]
processors: [resourcedetection, batch]
exporters: [otlp]

View File

@@ -1,103 +0,0 @@
receivers:
prometheus:
config:
global:
scrape_interval: 60s
scrape_configs:
- job_name: otel-metrics
static_configs:
- targets:
- localhost:8888
labels:
job_name: otel-metrics
# For Docker daemon metrics to be scraped, it must be configured to expose
# Prometheus metrics, as documented here: https://docs.docker.com/config/daemon/prometheus/
# - job_name: docker-daemon
# dockerswarm_sd_configs:
# - host: unix:///var/run/docker.sock
# role: nodes
# relabel_configs:
# - source_labels: [__meta_dockerswarm_node_address]
# target_label: __address__
# replacement: $1:9323
- job_name: "dockerswarm"
dockerswarm_sd_configs:
- host: unix:///var/run/docker.sock
role: tasks
relabel_configs:
- action: keep
regex: running
source_labels:
- __meta_dockerswarm_task_desired_state
- action: keep
regex: true
source_labels:
- __meta_dockerswarm_service_label_signoz_io_scrape
- regex: ([^:]+)(?::\d+)?
replacement: $1
source_labels:
- __address__
target_label: swarm_container_ip
- separator: .
source_labels:
- __meta_dockerswarm_service_name
- __meta_dockerswarm_task_slot
- __meta_dockerswarm_task_id
target_label: swarm_container_name
- target_label: __address__
source_labels:
- swarm_container_ip
- __meta_dockerswarm_service_label_signoz_io_port
separator: ":"
- source_labels:
- __meta_dockerswarm_service_label_signoz_io_path
target_label: __metrics_path__
- source_labels:
- __meta_dockerswarm_service_label_com_docker_stack_namespace
target_label: namespace
- source_labels:
- __meta_dockerswarm_service_name
target_label: service_name
- source_labels:
- __meta_dockerswarm_task_id
target_label: service_instance_id
- source_labels:
- __meta_dockerswarm_node_hostname
target_label: host_name
processors:
batch:
send_batch_size: 10000
send_batch_max_size: 11000
timeout: 10s
resourcedetection:
detectors:
- env
- system
timeout: 2s
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: 0.0.0.0:1777
exporters:
otlp:
endpoint: ${env:SIGNOZ_COLLECTOR_ENDPOINT}
tls:
insecure: true
headers:
signoz-access-token: ${env:SIGNOZ_ACCESS_TOKEN}
# debug: {}
service:
telemetry:
logs:
encoding: json
metrics:
address: 0.0.0.0:8888
extensions:
- health_check
- pprof
pipelines:
metrics:
receivers: [prometheus]
processors: [resourcedetection, batch]
exporters: [otlp]

View File

@@ -181,7 +181,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${VERSION:-v0.115.0}
image: signoz/signoz:${VERSION:-v0.116.1}
container_name: signoz
ports:
- "8080:8080" # signoz port

View File

@@ -109,7 +109,7 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${VERSION:-v0.115.0}
image: signoz/signoz:${VERSION:-v0.116.1}
container_name: signoz
ports:
- "8080:8080" # signoz port

View File

@@ -1,39 +0,0 @@
version: "3"
x-common: &common
networks:
- signoz-net
extra_hosts:
- host.docker.internal:host-gateway
logging:
options:
max-size: 50m
max-file: "3"
restart: unless-stopped
services:
hotrod:
<<: *common
image: jaegertracing/example-hotrod:1.61.0
container_name: hotrod
command: [ "all" ]
environment:
- OTEL_EXPORTER_OTLP_ENDPOINT=http://host.docker.internal:4318 # In case of external SigNoz or cloud, update the endpoint and access token
# - OTEL_OTLP_HEADERS=signoz-access-token=<your-access-token>
load-hotrod:
<<: *common
image: "signoz/locust:1.2.3"
container_name: load-hotrod
environment:
ATTACKED_HOST: http://hotrod:8080
LOCUST_MODE: standalone
NO_PROXY: standalone
TASK_DELAY_FROM: 5
TASK_DELAY_TO: 30
QUIET_MODE: "${QUIET_MODE:-false}"
LOCUST_OPTS: "--headless -u 10 -r 1"
volumes:
- ../../../common/locust-scripts:/locust
networks:
signoz-net:
name: signoz-net
external: true

View File

@@ -1,43 +0,0 @@
version: "3"
x-common: &common
networks:
- signoz-net
extra_hosts:
- host.docker.internal:host-gateway
logging:
options:
max-size: 50m
max-file: "3"
restart: unless-stopped
services:
otel-agent:
<<: *common
image: otel/opentelemetry-collector-contrib:0.111.0
command:
- --config=/etc/otel-collector-config.yaml
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- /:/hostfs:ro
- /var/run/docker.sock:/var/run/docker.sock
environment:
- SIGNOZ_COLLECTOR_ENDPOINT=http://host.docker.internal:4317 # In case of external SigNoz or cloud, update the endpoint and access token
- OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux # Replace signoz-host with the actual hostname
# - SIGNOZ_ACCESS_TOKEN="<your-access-token>"
# Before exposing the ports, make sure the ports are not used by other services
# ports:
# - "4317:4317"
# - "4318:4318"
logspout:
<<: *common
image: "gliderlabs/logspout:v3.2.14"
volumes:
- /etc/hostname:/etc/host_hostname:ro
- /var/run/docker.sock:/var/run/docker.sock
command: syslog+tcp://otel-agent:2255
depends_on:
- otel-agent
networks:
signoz-net:
name: signoz-net
external: true

View File

@@ -1,139 +0,0 @@
receivers:
hostmetrics:
collection_interval: 30s
root_path: /hostfs
scrapers:
cpu: {}
load: {}
memory: {}
disk: {}
filesystem: {}
network: {}
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
prometheus:
config:
global:
scrape_interval: 60s
scrape_configs:
- job_name: otel-collector
static_configs:
- targets:
- localhost:8888
labels:
job_name: otel-collector
# For Docker daemon metrics to be scraped, it must be configured to expose
# Prometheus metrics, as documented here: https://docs.docker.com/config/daemon/prometheus/
# - job_name: docker-daemon
# static_configs:
# - targets:
# - host.docker.internal:9323
# labels:
# job_name: docker-daemon
- job_name: docker-container
docker_sd_configs:
- host: unix:///var/run/docker.sock
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_docker_container_label_signoz_io_scrape
- regex: true
source_labels:
- __meta_docker_container_label_signoz_io_path
target_label: __metrics_path__
- regex: (.+)
source_labels:
- __meta_docker_container_label_signoz_io_path
target_label: __metrics_path__
- separator: ":"
source_labels:
- __meta_docker_network_ip
- __meta_docker_container_label_signoz_io_port
target_label: __address__
- regex: '/(.*)'
replacement: '$1'
source_labels:
- __meta_docker_container_name
target_label: container_name
- regex: __meta_docker_container_label_signoz_io_(.+)
action: labelmap
replacement: $1
tcplog/docker:
listen_address: "0.0.0.0:2255"
operators:
- type: regex_parser
regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
timestamp:
parse_from: attributes.timestamp
layout: '%Y-%m-%dT%H:%M:%S.%LZ'
- type: move
from: attributes["body"]
to: body
- type: remove
field: attributes.timestamp
# please remove names from below if you want to collect logs from them
- type: filter
id: signoz_logs_filter
expr: 'attributes.container_name matches "^signoz|(signoz-(|otel-collector|clickhouse|zookeeper))|(infra-(logspout|otel-agent)-.*)"'
processors:
batch:
send_batch_size: 10000
send_batch_max_size: 11000
timeout: 10s
resourcedetection:
# Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
detectors:
# - ec2
# - gcp
# - azure
- env
- system
timeout: 2s
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: 0.0.0.0:1777
exporters:
otlp:
endpoint: ${env:SIGNOZ_COLLECTOR_ENDPOINT}
tls:
insecure: true
headers:
signoz-access-token: ${env:SIGNOZ_ACCESS_TOKEN}
# debug: {}
service:
telemetry:
logs:
encoding: json
metrics:
address: 0.0.0.0:8888
extensions:
- health_check
- pprof
pipelines:
traces:
receivers: [otlp]
processors: [resourcedetection, batch]
exporters: [otlp]
metrics:
receivers: [otlp]
processors: [resourcedetection, batch]
exporters: [otlp]
metrics/hostmetrics:
receivers: [hostmetrics]
processors: [resourcedetection, batch]
exporters: [otlp]
metrics/prometheus:
receivers: [prometheus]
processors: [resourcedetection, batch]
exporters: [otlp]
logs:
receivers: [otlp, tcplog/docker]
processors: [resourcedetection, batch]
exporters: [otlp]

View File

@@ -220,6 +220,13 @@ components:
- additions
- deletions
type: object
AuthtypesPatchableRole:
properties:
description:
type: string
required:
- description
type: object
AuthtypesPostableAuthDomain:
properties:
config:
@@ -236,6 +243,15 @@ components:
password:
type: string
type: object
AuthtypesPostableRole:
properties:
description:
type: string
name:
type: string
required:
- name
type: object
AuthtypesPostableRotateToken:
properties:
refreshToken:
@@ -251,6 +267,31 @@ components:
- name
- type
type: object
AuthtypesRole:
properties:
createdAt:
format: date-time
type: string
description:
type: string
id:
type: string
name:
type: string
orgId:
type: string
type:
type: string
updatedAt:
format: date-time
type: string
required:
- id
- name
- description
- type
- orgId
type: object
AuthtypesRoleMapping:
properties:
defaultRole:
@@ -557,6 +598,39 @@ components:
required:
- config
type: object
GlobaltypesAPIKeyConfig:
properties:
enabled:
type: boolean
type: object
GlobaltypesConfig:
properties:
external_url:
type: string
identN:
$ref: '#/components/schemas/GlobaltypesIdentNConfig'
ingestion_url:
type: string
type: object
GlobaltypesIdentNConfig:
properties:
apikey:
$ref: '#/components/schemas/GlobaltypesAPIKeyConfig'
impersonation:
$ref: '#/components/schemas/GlobaltypesImpersonationConfig'
tokenizer:
$ref: '#/components/schemas/GlobaltypesTokenizerConfig'
type: object
GlobaltypesImpersonationConfig:
properties:
enabled:
type: boolean
type: object
GlobaltypesTokenizerConfig:
properties:
enabled:
type: boolean
type: object
MetricsexplorertypesListMetric:
properties:
description:
@@ -1722,65 +1796,24 @@ components:
- status
- error
type: object
RoletypesPatchableRole:
properties:
description:
type: string
required:
- description
type: object
RoletypesPostableRole:
properties:
description:
type: string
name:
type: string
required:
- name
type: object
RoletypesRole:
properties:
createdAt:
format: date-time
type: string
description:
type: string
id:
type: string
name:
type: string
orgId:
type: string
type:
type: string
updatedAt:
format: date-time
type: string
required:
- id
- name
- description
- type
- orgId
type: object
ServiceaccounttypesFactorAPIKey:
properties:
createdAt:
format: date-time
type: string
expires_at:
expiresAt:
minimum: 0
type: integer
id:
type: string
key:
type: string
last_used:
lastObservedAt:
format: date-time
type: string
name:
type: string
service_account_id:
serviceAccountId:
type: string
updatedAt:
format: date-time
@@ -1788,9 +1821,9 @@ components:
required:
- id
- key
- expires_at
- last_used
- service_account_id
- expiresAt
- lastObservedAt
- serviceAccountId
type: object
ServiceaccounttypesGettableFactorAPIKeyWithKey:
properties:
@@ -1804,14 +1837,14 @@ components:
type: object
ServiceaccounttypesPostableFactorAPIKey:
properties:
expires_at:
expiresAt:
minimum: 0
type: integer
name:
type: string
required:
- name
- expires_at
- expiresAt
type: object
ServiceaccounttypesPostableServiceAccount:
properties:
@@ -1833,13 +1866,16 @@ components:
createdAt:
format: date-time
type: string
deletedAt:
format: date-time
type: string
email:
type: string
id:
type: string
name:
type: string
orgID:
orgId:
type: string
roles:
items:
@@ -1856,18 +1892,19 @@ components:
- email
- roles
- status
- orgID
- orgId
- deletedAt
type: object
ServiceaccounttypesUpdatableFactorAPIKey:
properties:
expires_at:
expiresAt:
minimum: 0
type: integer
name:
type: string
required:
- name
- expires_at
- expiresAt
type: object
ServiceaccounttypesUpdatableServiceAccount:
properties:
@@ -2026,13 +2063,6 @@ components:
required:
- id
type: object
TypesGettableGlobalConfig:
properties:
external_url:
type: string
ingestion_url:
type: string
type: object
TypesIdentifiable:
properties:
id:
@@ -2097,17 +2127,6 @@ components:
role:
type: string
type: object
TypesPostableAcceptInvite:
properties:
displayName:
type: string
password:
type: string
sourceUrl:
type: string
token:
type: string
type: object
TypesPostableBulkInviteRequest:
properties:
invites:
@@ -3251,7 +3270,7 @@ paths:
schema:
properties:
data:
$ref: '#/components/schemas/TypesGettableGlobalConfig'
$ref: '#/components/schemas/GlobaltypesConfig'
status:
type: string
required:
@@ -3259,80 +3278,16 @@ paths:
- data
type: object
description: OK
"401":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Unauthorized
"403":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Forbidden
"500":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Internal Server Error
security:
- api_key:
- EDITOR
- tokenizer:
- EDITOR
summary: Get global config
tags:
- global
/api/v1/invite:
get:
deprecated: false
description: This endpoint lists all invites
operationId: ListInvite
responses:
"200":
content:
application/json:
schema:
properties:
data:
items:
$ref: '#/components/schemas/TypesInvite'
type: array
status:
type: string
required:
- status
- data
type: object
description: OK
"401":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Unauthorized
"403":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Forbidden
"500":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Internal Server Error
security:
- api_key:
- ADMIN
- tokenizer:
- ADMIN
summary: List invites
tags:
- users
post:
deprecated: false
description: This endpoint creates an invite for a user
@@ -3395,151 +3350,6 @@ paths:
summary: Create invite
tags:
- users
/api/v1/invite/{id}:
delete:
deprecated: false
description: This endpoint deletes an invite by id
operationId: DeleteInvite
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
"204":
description: No Content
"400":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Bad Request
"401":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Unauthorized
"403":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Forbidden
"404":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Not Found
"500":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Internal Server Error
security:
- api_key:
- ADMIN
- tokenizer:
- ADMIN
summary: Delete invite
tags:
- users
/api/v1/invite/{token}:
get:
deprecated: false
description: This endpoint gets an invite by token
operationId: GetInvite
parameters:
- in: path
name: token
required: true
schema:
type: string
responses:
"200":
content:
application/json:
schema:
properties:
data:
$ref: '#/components/schemas/TypesInvite'
status:
type: string
required:
- status
- data
type: object
description: OK
"400":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Bad Request
"404":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Not Found
"500":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Internal Server Error
summary: Get invite
tags:
- users
/api/v1/invite/accept:
post:
deprecated: false
description: This endpoint accepts an invite by token
operationId: AcceptInvite
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/TypesPostableAcceptInvite'
responses:
"201":
content:
application/json:
schema:
properties:
data:
$ref: '#/components/schemas/TypesUser'
status:
type: string
required:
- status
- data
type: object
description: Created
"400":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Bad Request
"404":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Not Found
"500":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Internal Server Error
summary: Accept invite
tags:
- users
/api/v1/invite/bulk:
post:
deprecated: false
@@ -4230,7 +4040,7 @@ paths:
properties:
data:
items:
$ref: '#/components/schemas/RoletypesRole'
$ref: '#/components/schemas/AuthtypesRole'
type: array
status:
type: string
@@ -4273,7 +4083,7 @@ paths:
content:
application/json:
schema:
$ref: '#/components/schemas/RoletypesPostableRole'
$ref: '#/components/schemas/AuthtypesPostableRole'
responses:
"201":
content:
@@ -4418,7 +4228,7 @@ paths:
schema:
properties:
data:
$ref: '#/components/schemas/RoletypesRole'
$ref: '#/components/schemas/AuthtypesRole'
status:
type: string
required:
@@ -4466,7 +4276,7 @@ paths:
content:
application/json:
schema:
$ref: '#/components/schemas/RoletypesPatchableRole'
$ref: '#/components/schemas/AuthtypesPatchableRole'
responses:
"204":
content:
@@ -5810,9 +5620,9 @@ paths:
description: Internal Server Error
security:
- api_key:
- ADMIN
- EDITOR
- tokenizer:
- ADMIN
- EDITOR
summary: Get ingestion keys for workspace
tags:
- gateway
@@ -5860,9 +5670,9 @@ paths:
description: Internal Server Error
security:
- api_key:
- ADMIN
- EDITOR
- tokenizer:
- ADMIN
- EDITOR
summary: Create ingestion key for workspace
tags:
- gateway
@@ -5900,9 +5710,9 @@ paths:
description: Internal Server Error
security:
- api_key:
- ADMIN
- EDITOR
- tokenizer:
- ADMIN
- EDITOR
summary: Delete ingestion key for workspace
tags:
- gateway
@@ -5944,9 +5754,9 @@ paths:
description: Internal Server Error
security:
- api_key:
- ADMIN
- EDITOR
- tokenizer:
- ADMIN
- EDITOR
summary: Update ingestion key for workspace
tags:
- gateway
@@ -6001,9 +5811,9 @@ paths:
description: Internal Server Error
security:
- api_key:
- ADMIN
- EDITOR
- tokenizer:
- ADMIN
- EDITOR
summary: Create limit for the ingestion key
tags:
- gateway
@@ -6041,9 +5851,9 @@ paths:
description: Internal Server Error
security:
- api_key:
- ADMIN
- EDITOR
- tokenizer:
- ADMIN
- EDITOR
summary: Delete limit for the ingestion key
tags:
- gateway
@@ -6085,9 +5895,9 @@ paths:
description: Internal Server Error
security:
- api_key:
- ADMIN
- EDITOR
- tokenizer:
- ADMIN
- EDITOR
summary: Update limit for the ingestion key
tags:
- gateway
@@ -6145,9 +5955,9 @@ paths:
description: Internal Server Error
security:
- api_key:
- ADMIN
- EDITOR
- tokenizer:
- ADMIN
- EDITOR
summary: Search ingestion keys for workspace
tags:
- gateway

View File

@@ -13,12 +13,13 @@ Before diving in, make sure you have these tools installed:
- Download from [go.dev/dl](https://go.dev/dl/)
- Check [go.mod](../../go.mod#L3) for the minimum version
- **Node** - Powers our frontend
- Download from [nodejs.org](https://nodejs.org)
- Check [.nvmrc](../../frontend/.nvmrc) for the version
- **Pnpm** - Our frontend package manager
- Follow the [installation guide](https://pnpm.io/installation)
- **Yarn** - Our frontend package manager
- Follow the [installation guide](https://yarnpkg.com/getting-started/install)
- **Docker** - For running ClickHouse and Postgres locally
- Get it from [docs.docker.com/get-docker](https://docs.docker.com/get-docker/)
@@ -94,7 +95,7 @@ This command:
2. Install dependencies:
```bash
pnpm install
yarn install
```
3. Create a `.env` file in this directory:
@@ -104,10 +105,10 @@ This command:
4. Start the development server:
```bash
pnpm dev
yarn dev
```
> 💡 **Tip**: `pnpm dev` will automatically rebuild when you make changes to the code
> 💡 **Tip**: `yarn dev` will automatically rebuild when you make changes to the code
Now you're all set to start developing! Happy coding! 🎉

View File

@@ -18,40 +18,40 @@ The configuration file is a JSON array containing data source objects. Each obje
### Required Keys
| Key | Type | Description |
| ------------ | ---------- | --------------------------------------------------------------------- |
| `dataSource` | `string` | Unique identifier for the data source (kebab-case, e.g., `"aws-ec2"`) |
| `label` | `string` | Display name shown to users (e.g., `"AWS EC2"`) |
| `tags` | `string[]` | Array of category tags for grouping (e.g., `["AWS"]`, `["database"]`) |
| `module` | `string` | Destination module after onboarding completion |
| `imgUrl` | `string` | Path to the logo/icon **(SVG required)** (e.g., `"/Logos/ec2.svg"`) |
| Key | Type | Description |
|-----|------|-------------|
| `dataSource` | `string` | Unique identifier for the data source (kebab-case, e.g., `"aws-ec2"`) |
| `label` | `string` | Display name shown to users (e.g., `"AWS EC2"`) |
| `tags` | `string[]` | Array of category tags for grouping (e.g., `["AWS"]`, `["database"]`) |
| `module` | `string` | Destination module after onboarding completion |
| `imgUrl` | `string` | Path to the logo/icon **(SVG required)** (e.g., `"/Logos/ec2.svg"`) |
### Optional Keys
| Key | Type | Description |
| ----------------------- | ---------- | -------------------------------------------------------------- |
| `link` | `string` | Docs link to redirect to (e.g., `"/docs/aws-monitoring/ec2/"`) |
| `relatedSearchKeywords` | `string[]` | Array of keywords for search functionality |
| `question` | `object` | Nested question object for multi-step flows |
| `internalRedirect` | `boolean` | When `true`, navigates within the app instead of showing docs |
| Key | Type | Description |
|-----|------|-------------|
| `link` | `string` | Docs link to redirect to (e.g., `"/docs/aws-monitoring/ec2/"`) |
| `relatedSearchKeywords` | `string[]` | Array of keywords for search functionality |
| `question` | `object` | Nested question object for multi-step flows |
| `internalRedirect` | `boolean` | When `true`, navigates within the app instead of showing docs |
## Module Values
The `module` key determines where users are redirected after completing onboarding:
| Value | Destination |
| ------------------------- | -------------------------------------- |
| `apm` | APM / Traces |
| `logs` | Logs Explorer |
| `metrics` | Metrics Explorer |
| `dashboards` | Dashboards |
| `infra-monitoring-hosts` | Infrastructure Monitoring - Hosts |
| `infra-monitoring-k8s` | Infrastructure Monitoring - Kubernetes |
| `messaging-queues-kafka` | Messaging Queues - Kafka |
| `messaging-queues-celery` | Messaging Queues - Celery |
| `integrations` | Integrations page |
| `home` | Home page |
| `api-monitoring` | API Monitoring |
| Value | Destination |
|-------|-------------|
| `apm` | APM / Traces |
| `logs` | Logs Explorer |
| `metrics` | Metrics Explorer |
| `dashboards` | Dashboards |
| `infra-monitoring-hosts` | Infrastructure Monitoring - Hosts |
| `infra-monitoring-k8s` | Infrastructure Monitoring - Kubernetes |
| `messaging-queues-kafka` | Messaging Queues - Kafka |
| `messaging-queues-celery` | Messaging Queues - Celery |
| `integrations` | Integrations page |
| `home` | Home page |
| `api-monitoring` | API Monitoring |
## Question Object Structure
@@ -91,14 +91,14 @@ The `question` object enables multi-step selection flows:
### Question Keys
| Key | Type | Description |
| -------------- | -------- | ----------------------------------------------------------- |
| `desc` | `string` | Question text displayed to the user |
| `type` | `string` | Currently only `"select"` is supported |
| `helpText` | `string` | (Optional) Additional help text below the question |
| `helpLink` | `string` | (Optional) Docs link for the help section |
| Key | Type | Description |
|-----|------|-------------|
| `desc` | `string` | Question text displayed to the user |
| `type` | `string` | Currently only `"select"` is supported |
| `helpText` | `string` | (Optional) Additional help text below the question |
| `helpLink` | `string` | (Optional) Docs link for the help section |
| `helpLinkText` | `string` | (Optional) Text for the help link (default: "Learn more →") |
| `options` | `array` | Array of option objects |
| `options` | `array` | Array of option objects |
## Option Object Structure
@@ -291,7 +291,7 @@ Options can be simple (direct link) or nested (with another question):
1. Add your data source object to the JSON array
2. Ensure the logo exists in `public/Logos/`
3. Test the flow locally with `pnpm dev`
3. Test the flow locally with `yarn dev`
4. Validation:
- Navigate to the [onboarding page](http://localhost:3301/get-started-with-signoz-cloud) on your local machine
- Data source appears in the list

View File

@@ -74,37 +74,37 @@ func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID
instrumentationtypes.CodeFunctionName: "getResults",
})
// TODO(srikanthccv): parallelize this?
p.logger.InfoContext(ctx, "fetching results for current period", "anomaly_current_period_query", params.CurrentPeriodQuery)
p.logger.InfoContext(ctx, "fetching results for current period", slog.Any("anomaly_current_period_query", params.CurrentPeriodQuery))
currentPeriodResults, err := p.querier.QueryRange(ctx, orgID, &params.CurrentPeriodQuery)
if err != nil {
return nil, err
}
p.logger.InfoContext(ctx, "fetching results for past period", "anomaly_past_period_query", params.PastPeriodQuery)
p.logger.InfoContext(ctx, "fetching results for past period", slog.Any("anomaly_past_period_query", params.PastPeriodQuery))
pastPeriodResults, err := p.querier.QueryRange(ctx, orgID, &params.PastPeriodQuery)
if err != nil {
return nil, err
}
p.logger.InfoContext(ctx, "fetching results for current season", "anomaly_current_season_query", params.CurrentSeasonQuery)
p.logger.InfoContext(ctx, "fetching results for current season", slog.Any("anomaly_current_season_query", params.CurrentSeasonQuery))
currentSeasonResults, err := p.querier.QueryRange(ctx, orgID, &params.CurrentSeasonQuery)
if err != nil {
return nil, err
}
p.logger.InfoContext(ctx, "fetching results for past season", "anomaly_past_season_query", params.PastSeasonQuery)
p.logger.InfoContext(ctx, "fetching results for past season", slog.Any("anomaly_past_season_query", params.PastSeasonQuery))
pastSeasonResults, err := p.querier.QueryRange(ctx, orgID, &params.PastSeasonQuery)
if err != nil {
return nil, err
}
p.logger.InfoContext(ctx, "fetching results for past 2 season", "anomaly_past_2season_query", params.Past2SeasonQuery)
p.logger.InfoContext(ctx, "fetching results for past 2 season", slog.Any("anomaly_past_2season_query", params.Past2SeasonQuery))
past2SeasonResults, err := p.querier.QueryRange(ctx, orgID, &params.Past2SeasonQuery)
if err != nil {
return nil, err
}
p.logger.InfoContext(ctx, "fetching results for past 3 season", "anomaly_past_3season_query", params.Past3SeasonQuery)
p.logger.InfoContext(ctx, "fetching results for past 3 season", slog.Any("anomaly_past_3season_query", params.Past3SeasonQuery))
past3SeasonResults, err := p.querier.QueryRange(ctx, orgID, &params.Past3SeasonQuery)
if err != nil {
return nil, err
@@ -212,17 +212,17 @@ func (p *BaseSeasonalProvider) getPredictedSeries(
if predictedValue < 0 {
// this should not happen (except when the data has extreme outliers)
// we will use the moving avg of the previous period series in this case
p.logger.WarnContext(ctx, "predicted value is less than 0 for series", "anomaly_predicted_value", predictedValue, "anomaly_labels", series.Labels)
p.logger.WarnContext(ctx, "predicted value is less than 0 for series", slog.Float64("anomaly_predicted_value", predictedValue), slog.Any("anomaly_labels", series.Labels))
predictedValue = p.getMovingAvg(prevSeries, movingAvgWindowSize, idx)
}
p.logger.DebugContext(ctx, "predicted value for series",
"anomaly_moving_avg", movingAvg,
"anomaly_avg", avg,
"anomaly_mean", mean,
"anomaly_labels", series.Labels,
"anomaly_predicted_value", predictedValue,
"anomaly_curr", curr.Value,
slog.Float64("anomaly_moving_avg", movingAvg),
slog.Float64("anomaly_avg", avg),
slog.Float64("anomaly_mean", mean),
slog.Any("anomaly_labels", series.Labels),
slog.Float64("anomaly_predicted_value", predictedValue),
slog.Float64("anomaly_curr", curr.Value),
)
predictedSeries.Values = append(predictedSeries.Values, &qbtypes.TimeSeriesValue{
Timestamp: curr.Timestamp,
@@ -412,7 +412,7 @@ func (p *BaseSeasonalProvider) getAnomalies(ctx context.Context, orgID valuer.UU
past3SeasonSeries := p.getMatchingSeries(ctx, past3SeasonResult, series)
stdDev := p.getStdDev(currentSeasonSeries)
p.logger.InfoContext(ctx, "calculated standard deviation for series", "anomaly_std_dev", stdDev, "anomaly_labels", series.Labels)
p.logger.InfoContext(ctx, "calculated standard deviation for series", slog.Float64("anomaly_std_dev", stdDev), slog.Any("anomaly_labels", series.Labels))
prevSeriesAvg := p.getAvg(pastPeriodSeries)
currentSeasonSeriesAvg := p.getAvg(currentSeasonSeries)
@@ -420,12 +420,12 @@ func (p *BaseSeasonalProvider) getAnomalies(ctx context.Context, orgID valuer.UU
past2SeasonSeriesAvg := p.getAvg(past2SeasonSeries)
past3SeasonSeriesAvg := p.getAvg(past3SeasonSeries)
p.logger.InfoContext(ctx, "calculated mean for series",
"anomaly_prev_series_avg", prevSeriesAvg,
"anomaly_current_season_series_avg", currentSeasonSeriesAvg,
"anomaly_past_season_series_avg", pastSeasonSeriesAvg,
"anomaly_past_2season_series_avg", past2SeasonSeriesAvg,
"anomaly_past_3season_series_avg", past3SeasonSeriesAvg,
"anomaly_labels", series.Labels,
slog.Float64("anomaly_prev_series_avg", prevSeriesAvg),
slog.Float64("anomaly_current_season_series_avg", currentSeasonSeriesAvg),
slog.Float64("anomaly_past_season_series_avg", pastSeasonSeriesAvg),
slog.Float64("anomaly_past_2season_series_avg", past2SeasonSeriesAvg),
slog.Float64("anomaly_past_3season_series_avg", past3SeasonSeriesAvg),
slog.Any("anomaly_labels", series.Labels),
)
predictedSeries := p.getPredictedSeries(

View File

@@ -3,6 +3,7 @@ package oidccallbackauthn
import (
"context"
"fmt"
"log/slog"
"net/url"
"github.com/SigNoz/signoz/pkg/authn"
@@ -150,7 +151,7 @@ func (a *AuthN) HandleCallback(ctx context.Context, query url.Values) (*authtype
// Some IDPs return a single group as a string instead of an array
groups = append(groups, g)
default:
a.settings.Logger().WarnContext(ctx, "oidc: unsupported groups type", "type", fmt.Sprintf("%T", claimValue))
a.settings.Logger().WarnContext(ctx, "oidc: unsupported groups type", slog.String("type", fmt.Sprintf("%T", claimValue)))
}
}
}

View File

@@ -13,7 +13,6 @@ import (
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/roletypes"
"github.com/SigNoz/signoz/pkg/valuer"
openfgav1 "github.com/openfga/api/proto/openfga/v1"
openfgapkgtransformer "github.com/openfga/language/pkg/go/transformer"
@@ -23,7 +22,7 @@ type provider struct {
pkgAuthzService authz.AuthZ
openfgaServer *openfgaserver.Server
licensing licensing.Licensing
store roletypes.Store
store authtypes.RoleStore
registry []authz.RegisterTypeable
}
@@ -82,23 +81,23 @@ func (provider *provider) Write(ctx context.Context, additions []*openfgav1.Tupl
return provider.openfgaServer.Write(ctx, additions, deletions)
}
func (provider *provider) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*roletypes.Role, error) {
func (provider *provider) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*authtypes.Role, error) {
return provider.pkgAuthzService.Get(ctx, orgID, id)
}
func (provider *provider) GetByOrgIDAndName(ctx context.Context, orgID valuer.UUID, name string) (*roletypes.Role, error) {
func (provider *provider) GetByOrgIDAndName(ctx context.Context, orgID valuer.UUID, name string) (*authtypes.Role, error) {
return provider.pkgAuthzService.GetByOrgIDAndName(ctx, orgID, name)
}
func (provider *provider) List(ctx context.Context, orgID valuer.UUID) ([]*roletypes.Role, error) {
func (provider *provider) List(ctx context.Context, orgID valuer.UUID) ([]*authtypes.Role, error) {
return provider.pkgAuthzService.List(ctx, orgID)
}
func (provider *provider) ListByOrgIDAndNames(ctx context.Context, orgID valuer.UUID, names []string) ([]*roletypes.Role, error) {
func (provider *provider) ListByOrgIDAndNames(ctx context.Context, orgID valuer.UUID, names []string) ([]*authtypes.Role, error) {
return provider.pkgAuthzService.ListByOrgIDAndNames(ctx, orgID, names)
}
func (provider *provider) ListByOrgIDAndIDs(ctx context.Context, orgID valuer.UUID, ids []valuer.UUID) ([]*roletypes.Role, error) {
func (provider *provider) ListByOrgIDAndIDs(ctx context.Context, orgID valuer.UUID, ids []valuer.UUID) ([]*authtypes.Role, error) {
return provider.pkgAuthzService.ListByOrgIDAndIDs(ctx, orgID, ids)
}
@@ -114,7 +113,7 @@ func (provider *provider) Revoke(ctx context.Context, orgID valuer.UUID, names [
return provider.pkgAuthzService.Revoke(ctx, orgID, names, subject)
}
func (provider *provider) CreateManagedRoles(ctx context.Context, orgID valuer.UUID, managedRoles []*roletypes.Role) error {
func (provider *provider) CreateManagedRoles(ctx context.Context, orgID valuer.UUID, managedRoles []*authtypes.Role) error {
return provider.pkgAuthzService.CreateManagedRoles(ctx, orgID, managedRoles)
}
@@ -136,16 +135,16 @@ func (provider *provider) CreateManagedUserRoleTransactions(ctx context.Context,
return provider.Write(ctx, tuples, nil)
}
func (provider *provider) Create(ctx context.Context, orgID valuer.UUID, role *roletypes.Role) error {
func (provider *provider) Create(ctx context.Context, orgID valuer.UUID, role *authtypes.Role) error {
_, err := provider.licensing.GetActive(ctx, orgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
return provider.store.Create(ctx, roletypes.NewStorableRoleFromRole(role))
return provider.store.Create(ctx, authtypes.NewStorableRoleFromRole(role))
}
func (provider *provider) GetOrCreate(ctx context.Context, orgID valuer.UUID, role *roletypes.Role) (*roletypes.Role, error) {
func (provider *provider) GetOrCreate(ctx context.Context, orgID valuer.UUID, role *authtypes.Role) (*authtypes.Role, error) {
_, err := provider.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
@@ -159,10 +158,10 @@ func (provider *provider) GetOrCreate(ctx context.Context, orgID valuer.UUID, ro
}
if existingRole != nil {
return roletypes.NewRoleFromStorableRole(existingRole), nil
return authtypes.NewRoleFromStorableRole(existingRole), nil
}
err = provider.store.Create(ctx, roletypes.NewStorableRoleFromRole(role))
err = provider.store.Create(ctx, authtypes.NewStorableRoleFromRole(role))
if err != nil {
return nil, err
}
@@ -217,13 +216,13 @@ func (provider *provider) GetObjects(ctx context.Context, orgID valuer.UUID, id
return objects, nil
}
func (provider *provider) Patch(ctx context.Context, orgID valuer.UUID, role *roletypes.Role) error {
func (provider *provider) Patch(ctx context.Context, orgID valuer.UUID, role *authtypes.Role) error {
_, err := provider.licensing.GetActive(ctx, orgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
return provider.store.Update(ctx, orgID, roletypes.NewStorableRoleFromRole(role))
return provider.store.Update(ctx, orgID, authtypes.NewStorableRoleFromRole(role))
}
func (provider *provider) PatchObjects(ctx context.Context, orgID valuer.UUID, name string, relation authtypes.Relation, additions, deletions []*authtypes.Object) error {
@@ -232,12 +231,12 @@ func (provider *provider) PatchObjects(ctx context.Context, orgID valuer.UUID, n
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
additionTuples, err := roletypes.GetAdditionTuples(name, orgID, relation, additions)
additionTuples, err := authtypes.GetAdditionTuples(name, orgID, relation, additions)
if err != nil {
return err
}
deletionTuples, err := roletypes.GetDeletionTuples(name, orgID, relation, deletions)
deletionTuples, err := authtypes.GetDeletionTuples(name, orgID, relation, deletions)
if err != nil {
return err
}
@@ -261,7 +260,7 @@ func (provider *provider) Delete(ctx context.Context, orgID valuer.UUID, id valu
return err
}
role := roletypes.NewRoleFromStorableRole(storableRole)
role := authtypes.NewRoleFromStorableRole(storableRole)
err = role.ErrIfManaged()
if err != nil {
return err
@@ -271,7 +270,7 @@ func (provider *provider) Delete(ctx context.Context, orgID valuer.UUID, id valu
}
func (provider *provider) MustGetTypeables() []authtypes.Typeable {
return []authtypes.Typeable{authtypes.TypeableRole, roletypes.TypeableResourcesRoles}
return []authtypes.Typeable{authtypes.TypeableRole, authtypes.TypeableResourcesRoles}
}
func (provider *provider) getManagedRoleGrantTuples(orgID valuer.UUID, userID valuer.UUID) ([]*openfgav1.TupleKey, error) {
@@ -283,7 +282,7 @@ func (provider *provider) getManagedRoleGrantTuples(orgID valuer.UUID, userID va
adminSubject,
authtypes.RelationAssignee,
[]authtypes.Selector{
authtypes.MustNewSelector(authtypes.TypeRole, roletypes.SigNozAdminRoleName),
authtypes.MustNewSelector(authtypes.TypeRole, authtypes.SigNozAdminRoleName),
},
orgID,
)
@@ -298,7 +297,7 @@ func (provider *provider) getManagedRoleGrantTuples(orgID valuer.UUID, userID va
anonymousSubject,
authtypes.RelationAssignee,
[]authtypes.Selector{
authtypes.MustNewSelector(authtypes.TypeRole, roletypes.SigNozAnonymousRoleName),
authtypes.MustNewSelector(authtypes.TypeRole, authtypes.SigNozAnonymousRoleName),
},
orgID,
)

View File

@@ -2,39 +2,45 @@ module base
type organisation
relations
define read: [user, role#assignee]
define update: [user, role#assignee]
define read: [user, serviceaccount, role#assignee]
define update: [user, serviceaccount, role#assignee]
type user
relations
define read: [user, role#assignee]
define update: [user, role#assignee]
define delete: [user, role#assignee]
define read: [user, serviceaccount, role#assignee]
define update: [user, serviceaccount, role#assignee]
define delete: [user, serviceaccount, role#assignee]
type serviceaccount
relations
define read: [user, serviceaccount, role#assignee]
define update: [user, serviceaccount, role#assignee]
define delete: [user, serviceaccount, role#assignee]
type anonymous
type role
relations
define assignee: [user, anonymous]
define assignee: [user, serviceaccount, anonymous]
define read: [user, role#assignee]
define update: [user, role#assignee]
define delete: [user, role#assignee]
define read: [user, serviceaccount, role#assignee]
define update: [user, serviceaccount, role#assignee]
define delete: [user, serviceaccount, role#assignee]
type metaresources
relations
define create: [user, role#assignee]
define list: [user, role#assignee]
define create: [user, serviceaccount, role#assignee]
define list: [user, serviceaccount, role#assignee]
type metaresource
relations
define read: [user, anonymous, role#assignee]
define update: [user, role#assignee]
define delete: [user, role#assignee]
define read: [user, serviceaccount, anonymous, role#assignee]
define update: [user, serviceaccount, role#assignee]
define delete: [user, serviceaccount, role#assignee]
define block: [user, role#assignee]
define block: [user, serviceaccount, role#assignee]
type telemetryresource
relations
define read: [user, role#assignee]
define read: [user, serviceaccount, role#assignee]

View File

@@ -3,8 +3,11 @@ package httplicensing
import (
"context"
"encoding/json"
"log/slog"
"time"
"github.com/tidwall/gjson"
"github.com/SigNoz/signoz/ee/licensing/licensingstore/sqllicensingstore"
"github.com/SigNoz/signoz/pkg/analytics"
"github.com/SigNoz/signoz/pkg/errors"
@@ -16,7 +19,6 @@ import (
"github.com/SigNoz/signoz/pkg/types/licensetypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/SigNoz/signoz/pkg/zeus"
"github.com/tidwall/gjson"
)
type provider struct {
@@ -55,7 +57,7 @@ func (provider *provider) Start(ctx context.Context) error {
err := provider.Validate(ctx)
if err != nil {
provider.settings.Logger().ErrorContext(ctx, "failed to validate license from upstream server", "error", err)
provider.settings.Logger().ErrorContext(ctx, "failed to validate license from upstream server", errors.Attr(err))
}
for {
@@ -65,7 +67,7 @@ func (provider *provider) Start(ctx context.Context) error {
case <-tick.C:
err := provider.Validate(ctx)
if err != nil {
provider.settings.Logger().ErrorContext(ctx, "failed to validate license from upstream server", "error", err)
provider.settings.Logger().ErrorContext(ctx, "failed to validate license from upstream server", errors.Attr(err))
}
}
}
@@ -133,7 +135,7 @@ func (provider *provider) Refresh(ctx context.Context, organizationID valuer.UUI
if errors.Ast(err, errors.TypeNotFound) {
return nil
}
provider.settings.Logger().ErrorContext(ctx, "license validation failed", "org_id", organizationID.StringValue())
provider.settings.Logger().ErrorContext(ctx, "license validation failed", slog.String("org_id", organizationID.StringValue()))
return err
}
@@ -198,7 +200,10 @@ func (provider *provider) Checkout(ctx context.Context, organizationID valuer.UU
response, err := provider.zeus.GetCheckoutURL(ctx, activeLicense.Key, body)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "failed to generate checkout session")
if errors.Ast(err, errors.TypeAlreadyExists) {
return nil, errors.WithAdditionalf(err, "checkout has already been completed for this account. Please click 'Refresh Status' to sync your subscription")
}
return nil, err
}
return &licensetypes.GettableSubscription{RedirectURL: gjson.GetBytes(response, "url").String()}, nil
@@ -217,7 +222,7 @@ func (provider *provider) Portal(ctx context.Context, organizationID valuer.UUID
response, err := provider.zeus.GetPortalURL(ctx, activeLicense.Key, body)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "failed to generate portal session")
return nil, err
}
return &licensetypes.GettableSubscription{RedirectURL: gjson.GetBytes(response, "url").String()}, nil

View File

@@ -19,7 +19,6 @@ import (
"github.com/SigNoz/signoz/pkg/types/dashboardtypes"
"github.com/SigNoz/signoz/pkg/types/instrumentationtypes"
"github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/roletypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
@@ -224,7 +223,7 @@ func (module *module) MustGetTypeables() []authtypes.Typeable {
func (module *module) MustGetManagedRoleTransactions() map[string][]*authtypes.Transaction {
return map[string][]*authtypes.Transaction{
roletypes.SigNozAnonymousRoleName: {
authtypes.SigNozAnonymousRoleName: {
{
ID: valuer.GenerateUUID(),
Relation: authtypes.RelationRead,

View File

@@ -5,6 +5,7 @@ import (
"context"
"encoding/json"
"io"
"log/slog"
"net/http"
"runtime/debug"
@@ -61,10 +62,10 @@ func (h *handler) QueryRange(rw http.ResponseWriter, req *http.Request) {
queryJSON, _ := json.Marshal(queryRangeRequest)
h.set.Logger.ErrorContext(ctx, "panic in QueryRange",
"error", r,
"user", claims.UserID,
"payload", string(queryJSON),
"stacktrace", stackTrace,
slog.Any("error", r),
slog.Any("user", claims.UserID),
slog.String("payload", string(queryJSON)),
slog.String("stacktrace", stackTrace),
)
render.Error(rw, errors.NewInternalf(

View File

@@ -2,6 +2,7 @@ package anomaly
import (
"context"
"log/slog"
"math"
"time"
@@ -13,7 +14,6 @@ import (
"github.com/SigNoz/signoz/pkg/types/ctxtypes"
"github.com/SigNoz/signoz/pkg/types/instrumentationtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"go.uber.org/zap"
)
var (
@@ -67,7 +67,7 @@ func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID
instrumentationtypes.CodeNamespace: "anomaly",
instrumentationtypes.CodeFunctionName: "getResults",
})
zap.L().Info("fetching results for current period", zap.Any("currentPeriodQuery", params.CurrentPeriodQuery))
slog.InfoContext(ctx, "fetching results for current period", "current_period_query", params.CurrentPeriodQuery)
currentPeriodResults, _, err := p.querierV2.QueryRange(ctx, orgID, params.CurrentPeriodQuery)
if err != nil {
return nil, err
@@ -78,7 +78,7 @@ func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID
return nil, err
}
zap.L().Info("fetching results for past period", zap.Any("pastPeriodQuery", params.PastPeriodQuery))
slog.InfoContext(ctx, "fetching results for past period", "past_period_query", params.PastPeriodQuery)
pastPeriodResults, _, err := p.querierV2.QueryRange(ctx, orgID, params.PastPeriodQuery)
if err != nil {
return nil, err
@@ -89,7 +89,7 @@ func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID
return nil, err
}
zap.L().Info("fetching results for current season", zap.Any("currentSeasonQuery", params.CurrentSeasonQuery))
slog.InfoContext(ctx, "fetching results for current season", "current_season_query", params.CurrentSeasonQuery)
currentSeasonResults, _, err := p.querierV2.QueryRange(ctx, orgID, params.CurrentSeasonQuery)
if err != nil {
return nil, err
@@ -100,7 +100,7 @@ func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID
return nil, err
}
zap.L().Info("fetching results for past season", zap.Any("pastSeasonQuery", params.PastSeasonQuery))
slog.InfoContext(ctx, "fetching results for past season", "past_season_query", params.PastSeasonQuery)
pastSeasonResults, _, err := p.querierV2.QueryRange(ctx, orgID, params.PastSeasonQuery)
if err != nil {
return nil, err
@@ -111,7 +111,7 @@ func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID
return nil, err
}
zap.L().Info("fetching results for past 2 season", zap.Any("past2SeasonQuery", params.Past2SeasonQuery))
slog.InfoContext(ctx, "fetching results for past 2 season", "past_2_season_query", params.Past2SeasonQuery)
past2SeasonResults, _, err := p.querierV2.QueryRange(ctx, orgID, params.Past2SeasonQuery)
if err != nil {
return nil, err
@@ -122,7 +122,7 @@ func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID
return nil, err
}
zap.L().Info("fetching results for past 3 season", zap.Any("past3SeasonQuery", params.Past3SeasonQuery))
slog.InfoContext(ctx, "fetching results for past 3 season", "past_3_season_query", params.Past3SeasonQuery)
past3SeasonResults, _, err := p.querierV2.QueryRange(ctx, orgID, params.Past3SeasonQuery)
if err != nil {
return nil, err
@@ -235,17 +235,17 @@ func (p *BaseSeasonalProvider) getPredictedSeries(
if predictedValue < 0 {
// this should not happen (except when the data has extreme outliers)
// we will use the moving avg of the previous period series in this case
zap.L().Warn("predictedValue is less than 0", zap.Float64("predictedValue", predictedValue), zap.Any("labels", series.Labels))
slog.Warn("predicted value is less than 0", "predicted_value", predictedValue, "labels", series.Labels)
predictedValue = p.getMovingAvg(prevSeries, movingAvgWindowSize, idx)
}
zap.L().Debug("predictedSeries",
zap.Float64("movingAvg", movingAvg),
zap.Float64("avg", avg),
zap.Float64("mean", mean),
zap.Any("labels", series.Labels),
zap.Float64("predictedValue", predictedValue),
zap.Float64("curr", curr.Value),
slog.Debug("predicted series",
"moving_avg", movingAvg,
"avg", avg,
"mean", mean,
"labels", series.Labels,
"predicted_value", predictedValue,
"curr", curr.Value,
)
predictedSeries.Points = append(predictedSeries.Points, v3.Point{
Timestamp: curr.Timestamp,
@@ -418,7 +418,7 @@ func (p *BaseSeasonalProvider) getAnomalies(ctx context.Context, orgID valuer.UU
for _, series := range result.Series {
stdDev := p.getStdDev(series)
zap.L().Info("stdDev", zap.Float64("stdDev", stdDev), zap.Any("labels", series.Labels))
slog.InfoContext(ctx, "computed standard deviation", "std_dev", stdDev, "labels", series.Labels)
pastPeriodSeries := p.getMatchingSeries(pastPeriodResult, series)
currentSeasonSeries := p.getMatchingSeries(currentSeasonResult, series)
@@ -431,7 +431,7 @@ func (p *BaseSeasonalProvider) getAnomalies(ctx context.Context, orgID valuer.UU
pastSeasonSeriesAvg := p.getAvg(pastSeasonSeries)
past2SeasonSeriesAvg := p.getAvg(past2SeasonSeries)
past3SeasonSeriesAvg := p.getAvg(past3SeasonSeries)
zap.L().Info("getAvg", zap.Float64("prevSeriesAvg", prevSeriesAvg), zap.Float64("currentSeasonSeriesAvg", currentSeasonSeriesAvg), zap.Float64("pastSeasonSeriesAvg", pastSeasonSeriesAvg), zap.Float64("past2SeasonSeriesAvg", past2SeasonSeriesAvg), zap.Float64("past3SeasonSeriesAvg", past3SeasonSeriesAvg), zap.Any("labels", series.Labels))
slog.InfoContext(ctx, "computed averages", "prev_series_avg", prevSeriesAvg, "current_season_series_avg", currentSeasonSeriesAvg, "past_season_series_avg", pastSeasonSeriesAvg, "past_2_season_series_avg", past2SeasonSeriesAvg, "past_3_season_series_avg", past3SeasonSeriesAvg, "labels", series.Labels)
predictedSeries := p.getPredictedSeries(
series,

View File

@@ -18,7 +18,7 @@ import (
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/gorilla/mux"
"go.uber.org/zap"
"log/slog"
)
type CloudIntegrationConnectionParamsResponse struct {
@@ -71,7 +71,7 @@ func (ah *APIHandler) CloudIntegrationsGenerateConnectionParams(w http.ResponseW
// Return the API Key (PAT) even if the rest of the params can not be deduced.
// Params not returned from here will be requested from the user via form inputs.
// This enables gracefully degraded but working experience even for non-cloud deployments.
zap.L().Info("ingestion params and signoz api url can not be deduced since no license was found")
slog.InfoContext(r.Context(), "ingestion params and signoz api url can not be deduced since no license was found")
ah.Respond(w, result)
return
}
@@ -103,7 +103,7 @@ func (ah *APIHandler) CloudIntegrationsGenerateConnectionParams(w http.ResponseW
result.IngestionKey = ingestionKey
} else {
zap.L().Info("ingestion key can't be deduced since no gateway url has been configured")
slog.InfoContext(r.Context(), "ingestion key can't be deduced since no gateway url has been configured")
}
ah.Respond(w, result)
@@ -138,9 +138,8 @@ func (ah *APIHandler) getOrCreateCloudIntegrationPAT(ctx context.Context, orgId
}
}
zap.L().Info(
"no PAT found for cloud integration, creating a new one",
zap.String("cloudProvider", cloudProvider),
slog.InfoContext(ctx, "no PAT found for cloud integration, creating a new one",
"cloud_provider", cloudProvider,
)
newPAT, err := types.NewStorableAPIKey(
@@ -287,9 +286,8 @@ func getOrCreateCloudProviderIngestionKey(
}
}
zap.L().Info(
"no existing ingestion key found for cloud integration, creating a new one",
zap.String("cloudProvider", cloudProvider),
slog.InfoContext(ctx, "no existing ingestion key found for cloud integration, creating a new one",
"cloud_provider", cloudProvider,
)
createKeyResult, apiErr := requestGateway[createIngestionKeyResponse](
ctx, gatewayUrl, licenseKey, "/v1/workspaces/me/keys",

View File

@@ -4,10 +4,15 @@ import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"time"
signozerrors "github.com/SigNoz/signoz/pkg/errors"
"log/slog"
"github.com/SigNoz/signoz/ee/query-service/constants"
"github.com/SigNoz/signoz/pkg/flagger"
"github.com/SigNoz/signoz/pkg/http/render"
@@ -15,7 +20,6 @@ import (
"github.com/SigNoz/signoz/pkg/types/featuretypes"
"github.com/SigNoz/signoz/pkg/types/licensetypes"
"github.com/SigNoz/signoz/pkg/valuer"
"go.uber.org/zap"
)
func (ah *APIHandler) getFeatureFlags(w http.ResponseWriter, r *http.Request) {
@@ -35,23 +39,23 @@ func (ah *APIHandler) getFeatureFlags(w http.ResponseWriter, r *http.Request) {
}
if constants.FetchFeatures == "true" {
zap.L().Debug("fetching license")
slog.DebugContext(ctx, "fetching license")
license, err := ah.Signoz.Licensing.GetActive(ctx, orgID)
if err != nil {
zap.L().Error("failed to fetch license", zap.Error(err))
slog.ErrorContext(ctx, "failed to fetch license", signozerrors.Attr(err))
} else if license == nil {
zap.L().Debug("no active license found")
slog.DebugContext(ctx, "no active license found")
} else {
licenseKey := license.Key
zap.L().Debug("fetching zeus features")
slog.DebugContext(ctx, "fetching zeus features")
zeusFeatures, err := fetchZeusFeatures(constants.ZeusFeaturesURL, licenseKey)
if err == nil {
zap.L().Debug("fetched zeus features", zap.Any("features", zeusFeatures))
slog.DebugContext(ctx, "fetched zeus features", "features", zeusFeatures)
// merge featureSet and zeusFeatures in featureSet with higher priority to zeusFeatures
featureSet = MergeFeatureSets(zeusFeatures, featureSet)
} else {
zap.L().Error("failed to fetch zeus features", zap.Error(err))
slog.ErrorContext(ctx, "failed to fetch zeus features", signozerrors.Attr(err))
}
}
}
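
Note: for readers skimming these hunks, a minimal self-contained sketch of the logging pattern they converge on — a context-aware slog call with errors.Attr(err) in place of zap.Error(err). The stand-in error and message below are illustrative only; the attributes that errors.Attr actually emits are owned by pkg/errors and are not shown here.

package main

import (
	"context"
	"errors"
	"log/slog"

	signozerrors "github.com/SigNoz/signoz/pkg/errors"
)

func main() {
	ctx := context.Background()
	err := errors.New("license fetch failed") // stand-in for a real failure

	// Before: zap.L().Error("failed to fetch license", zap.Error(err))
	// After:  context-aware slog call; signozerrors.Attr(err) carries the error.
	slog.ErrorContext(ctx, "failed to fetch license", signozerrors.Attr(err))
}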

View File

@@ -7,6 +7,7 @@ import (
"net/http"
"github.com/SigNoz/signoz/ee/query-service/anomaly"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/http/render"
baseapp "github.com/SigNoz/signoz/pkg/query-service/app"
"github.com/SigNoz/signoz/pkg/query-service/app/queryBuilder"
@@ -14,7 +15,7 @@ import (
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"go.uber.org/zap"
"log/slog"
)
func (aH *APIHandler) queryRangeV4(w http.ResponseWriter, r *http.Request) {
@@ -35,7 +36,7 @@ func (aH *APIHandler) queryRangeV4(w http.ResponseWriter, r *http.Request) {
queryRangeParams, apiErrorObj := baseapp.ParseQueryRangeParams(r)
if apiErrorObj != nil {
zap.L().Error("error parsing metric query range params", zap.Error(apiErrorObj.Err))
slog.ErrorContext(r.Context(), "error parsing metric query range params", errors.Attr(apiErrorObj.Err))
RespondError(w, apiErrorObj, nil)
return
}
@@ -44,7 +45,7 @@ func (aH *APIHandler) queryRangeV4(w http.ResponseWriter, r *http.Request) {
// add temporality for each metric
temporalityErr := aH.PopulateTemporality(r.Context(), orgID, queryRangeParams)
if temporalityErr != nil {
zap.L().Error("Error while adding temporality for metrics", zap.Error(temporalityErr))
slog.ErrorContext(r.Context(), "error while adding temporality for metrics", errors.Attr(temporalityErr))
RespondError(w, &model.ApiError{Typ: model.ErrorInternal, Err: temporalityErr}, nil)
return
}

View File

@@ -8,16 +8,21 @@ import (
_ "net/http/pprof" // http profiler
"slices"
"go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux"
"go.opentelemetry.io/otel/propagation"
"github.com/SigNoz/signoz/pkg/cache/memorycache"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/ruler/rulestore/sqlrulestore"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux"
"go.opentelemetry.io/otel/propagation"
"github.com/gorilla/handlers"
"github.com/rs/cors"
"github.com/soheilhy/cmux"
"github.com/SigNoz/signoz/ee/query-service/app/api"
"github.com/SigNoz/signoz/ee/query-service/rules"
"github.com/SigNoz/signoz/ee/query-service/usage"
@@ -31,8 +36,8 @@ import (
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/web"
"github.com/rs/cors"
"github.com/soheilhy/cmux"
"log/slog"
"github.com/SigNoz/signoz/pkg/query-service/agentConf"
baseapp "github.com/SigNoz/signoz/pkg/query-service/app"
@@ -47,7 +52,6 @@ import (
baseint "github.com/SigNoz/signoz/pkg/query-service/interfaces"
baserules "github.com/SigNoz/signoz/pkg/query-service/rules"
"github.com/SigNoz/signoz/pkg/query-service/utils"
"go.uber.org/zap"
)
// Server runs HTTP, Mux and a grpc server
@@ -83,6 +87,7 @@ func NewServer(config signoz.Config, signoz *signoz.SigNoz) (*Server, error) {
}
reader := clickhouseReader.NewReader(
signoz.Instrumentation.Logger(),
signoz.SQLStore,
signoz.TelemetryStore,
signoz.Prometheus,
@@ -216,8 +221,7 @@ func (s *Server) createPublicServer(apiHandler *api.APIHandler, web web.Web) (*h
}),
otelmux.WithPublicEndpoint(),
))
r.Use(middleware.NewAuthN([]string{"Authorization", "Sec-WebSocket-Protocol"}, s.signoz.Sharder, s.signoz.Tokenizer, s.signoz.Instrumentation.Logger()).Wrap)
r.Use(middleware.NewAPIKey(s.signoz.SQLStore, []string{"SIGNOZ-API-KEY"}, s.signoz.Instrumentation.Logger(), s.signoz.Sharder).Wrap)
r.Use(middleware.NewIdentN(s.signoz.IdentNResolver, s.signoz.Sharder, s.signoz.Instrumentation.Logger()).Wrap)
r.Use(middleware.NewTimeout(s.signoz.Instrumentation.Logger(),
s.config.APIServer.Timeout.ExcludedRoutes,
s.config.APIServer.Timeout.Default,
@@ -278,7 +282,7 @@ func (s *Server) initListeners() error {
return err
}
zap.L().Info(fmt.Sprintf("Query server started listening on %s...", s.httpHostPort))
slog.Info(fmt.Sprintf("Query server started listening on %s...", s.httpHostPort))
return nil
}
@@ -298,31 +302,31 @@ func (s *Server) Start(ctx context.Context) error {
}
go func() {
zap.L().Info("Starting HTTP server", zap.Int("port", httpPort), zap.String("addr", s.httpHostPort))
slog.Info("Starting HTTP server", "port", httpPort, "addr", s.httpHostPort)
switch err := s.httpServer.Serve(s.httpConn); err {
case nil, http.ErrServerClosed, cmux.ErrListenerClosed:
// normal exit, nothing to do
default:
zap.L().Error("Could not start HTTP server", zap.Error(err))
slog.Error("Could not start HTTP server", errors.Attr(err))
}
s.unavailableChannel <- healthcheck.Unavailable
}()
go func() {
zap.L().Info("Starting pprof server", zap.String("addr", baseconst.DebugHttpPort))
slog.Info("Starting pprof server", "addr", baseconst.DebugHttpPort)
err = http.ListenAndServe(baseconst.DebugHttpPort, nil)
if err != nil {
zap.L().Error("Could not start pprof server", zap.Error(err))
slog.Error("Could not start pprof server", errors.Attr(err))
}
}()
go func() {
zap.L().Info("Starting OpAmp Websocket server", zap.String("addr", baseconst.OpAmpWsEndpoint))
slog.Info("Starting OpAmp Websocket server", "addr", baseconst.OpAmpWsEndpoint)
err := s.opampServer.Start(baseconst.OpAmpWsEndpoint)
if err != nil {
zap.L().Error("opamp ws server failed to start", zap.Error(err))
slog.Error("opamp ws server failed to start", errors.Attr(err))
s.unavailableChannel <- healthcheck.Unavailable
}
}()
@@ -358,10 +362,9 @@ func makeRulesManager(ch baseint.Reader, cache cache.Cache, alertmanager alertma
MetadataStore: metadataStore,
Prometheus: prometheus,
Context: context.Background(),
Logger: zap.L(),
Reader: ch,
Querier: querier,
SLogger: providerSettings.Logger,
Logger: providerSettings.Logger,
Cache: cache,
EvalDelay: baseconst.GetEvalDelay(),
PrepareTaskFunc: rules.PrepareTaskFunc,
@@ -380,7 +383,7 @@ func makeRulesManager(ch baseint.Reader, cache cache.Cache, alertmanager alertma
return nil, fmt.Errorf("rule manager error: %v", err)
}
zap.L().Info("rules manager is ready")
slog.Info("rules manager is ready")
return manager, nil
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/SigNoz/signoz/ee/query-service/anomaly"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/query-service/common"
"github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/pkg/transition"
@@ -308,7 +309,7 @@ func (r *AnomalyRule) buildAndRunQueryV5(ctx context.Context, orgID valuer.UUID,
filteredSeries, filterErr := r.BaseRule.FilterNewSeries(ctx, ts, seriesToProcess)
// In case of error we log the error and continue with the original series
if filterErr != nil {
r.logger.ErrorContext(ctx, "Error filtering new series, ", "error", filterErr, "rule_name", r.Name())
r.logger.ErrorContext(ctx, "Error filtering new series, ", errors.Attr(filterErr), "rule_name", r.Name())
} else {
seriesToProcess = filteredSeries
}
@@ -391,7 +392,7 @@ func (r *AnomalyRule) Eval(ctx context.Context, ts time.Time) (int, error) {
result, err := tmpl.Expand()
if err != nil {
result = fmt.Sprintf("<error expanding template: %s>", err)
r.logger.ErrorContext(ctx, "Expanding alert template failed", "error", err, "data", tmplData, "rule_name", r.Name())
r.logger.ErrorContext(ctx, "Expanding alert template failed", errors.Attr(err), "data", tmplData, "rule_name", r.Name())
}
return result
}
@@ -467,7 +468,7 @@ func (r *AnomalyRule) Eval(ctx context.Context, ts time.Time) (int, error) {
for fp, a := range r.Active {
labelsJSON, err := json.Marshal(a.QueryResultLables)
if err != nil {
r.logger.ErrorContext(ctx, "error marshaling labels", "error", err, "labels", a.Labels)
r.logger.ErrorContext(ctx, "error marshaling labels", errors.Attr(err), "labels", a.Labels)
}
if _, ok := resultFPs[fp]; !ok {
// If the alert was previously firing, keep it around for a given

View File

@@ -2,6 +2,7 @@ package rules
import (
"context"
"log/slog"
"testing"
"time"
@@ -116,7 +117,7 @@ func TestAnomalyRule_NoData_AlertOnAbsent(t *testing.T) {
telemetryStore := telemetrystoretest.New(telemetrystore.Config{}, nil)
options := clickhouseReader.NewOptions("primaryNamespace")
reader := clickhouseReader.NewReader(nil, telemetryStore, nil, "", time.Second, nil, nil, options)
reader := clickhouseReader.NewReader(slog.Default(), nil, telemetryStore, nil, "", time.Second, nil, nil, options)
rule, err := NewAnomalyRule(
"test-anomaly-rule",
@@ -247,7 +248,7 @@ func TestAnomalyRule_NoData_AbsentFor(t *testing.T) {
telemetryStore := telemetrystoretest.New(telemetrystore.Config{}, nil)
options := clickhouseReader.NewOptions("primaryNamespace")
reader := clickhouseReader.NewReader(nil, telemetryStore, nil, "", time.Second, nil, nil, options)
reader := clickhouseReader.NewReader(slog.Default(), nil, telemetryStore, nil, "", time.Second, nil, nil, options)
rule, err := NewAnomalyRule("test-anomaly-rule", valuer.GenerateUUID(), &postableRule, reader, nil, logger, nil)
require.NoError(t, err)

View File

@@ -6,14 +6,16 @@ import (
"time"
"log/slog"
"github.com/google/uuid"
"github.com/SigNoz/signoz/pkg/errors"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
baserules "github.com/SigNoz/signoz/pkg/query-service/rules"
"github.com/SigNoz/signoz/pkg/query-service/utils/labels"
"github.com/SigNoz/signoz/pkg/types/ruletypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/google/uuid"
"go.uber.org/zap"
)
func PrepareTaskFunc(opts baserules.PrepareTaskOptions) (baserules.Task, error) {
@@ -34,7 +36,7 @@ func PrepareTaskFunc(opts baserules.PrepareTaskOptions) (baserules.Task, error)
opts.Rule,
opts.Reader,
opts.Querier,
opts.SLogger,
opts.Logger,
baserules.WithEvalDelay(opts.ManagerOpts.EvalDelay),
baserules.WithSQLStore(opts.SQLStore),
baserules.WithQueryParser(opts.ManagerOpts.QueryParser),
@@ -57,7 +59,7 @@ func PrepareTaskFunc(opts baserules.PrepareTaskOptions) (baserules.Task, error)
ruleId,
opts.OrgID,
opts.Rule,
opts.SLogger,
opts.Logger,
opts.Reader,
opts.ManagerOpts.Prometheus,
baserules.WithSQLStore(opts.SQLStore),
@@ -82,7 +84,7 @@ func PrepareTaskFunc(opts baserules.PrepareTaskOptions) (baserules.Task, error)
opts.Rule,
opts.Reader,
opts.Querier,
opts.SLogger,
opts.Logger,
opts.Cache,
baserules.WithEvalDelay(opts.ManagerOpts.EvalDelay),
baserules.WithSQLStore(opts.SQLStore),
@@ -142,7 +144,7 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
parsedRule,
opts.Reader,
opts.Querier,
opts.SLogger,
opts.Logger,
baserules.WithSendAlways(),
baserules.WithSendUnmatched(),
baserules.WithSQLStore(opts.SQLStore),
@@ -151,7 +153,7 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
)
if err != nil {
zap.L().Error("failed to prepare a new threshold rule for test", zap.String("name", alertname), zap.Error(err))
slog.Error("failed to prepare a new threshold rule for test", "name", alertname, errors.Attr(err))
return 0, basemodel.BadRequest(err)
}
@@ -162,7 +164,7 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
alertname,
opts.OrgID,
parsedRule,
opts.SLogger,
opts.Logger,
opts.Reader,
opts.ManagerOpts.Prometheus,
baserules.WithSendAlways(),
@@ -173,7 +175,7 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
)
if err != nil {
zap.L().Error("failed to prepare a new promql rule for test", zap.String("name", alertname), zap.Error(err))
slog.Error("failed to prepare a new promql rule for test", "name", alertname, errors.Attr(err))
return 0, basemodel.BadRequest(err)
}
} else if parsedRule.RuleType == ruletypes.RuleTypeAnomaly {
@@ -184,7 +186,7 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
parsedRule,
opts.Reader,
opts.Querier,
opts.SLogger,
opts.Logger,
opts.Cache,
baserules.WithSendAlways(),
baserules.WithSendUnmatched(),
@@ -193,7 +195,7 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
baserules.WithMetadataStore(opts.ManagerOpts.MetadataStore),
)
if err != nil {
zap.L().Error("failed to prepare a new anomaly rule for test", zap.String("name", alertname), zap.Error(err))
slog.Error("failed to prepare a new anomaly rule for test", "name", alertname, errors.Attr(err))
return 0, basemodel.BadRequest(err)
}
} else {
@@ -205,7 +207,7 @@ func TestNotification(opts baserules.PrepareTestRuleOptions) (int, *basemodel.Ap
alertsFound, err := rule.Eval(ctx, ts)
if err != nil {
zap.L().Error("evaluating rule failed", zap.String("rule", rule.Name()), zap.Error(err))
slog.Error("evaluating rule failed", "rule", rule.Name(), errors.Attr(err))
return 0, basemodel.InternalError(fmt.Errorf("rule evaluation failed"))
}
rule.SendAlerts(ctx, ts, 0, time.Minute, opts.NotifyFunc)

View File

@@ -80,6 +80,21 @@ func TestManager_TestNotification_SendUnmatched_ThresholdRule(t *testing.T) {
alertDataRows := cmock.NewRows(cols, tc.Values)
mock := telemetryStore.Mock()
// Mock metadata queries for FetchTemporalityAndTypeMulti
// First query: fetchMetricsTemporalityAndType (from signoz_metrics time series table)
metadataCols := []cmock.ColumnType{
{Name: "metric_name", Type: "String"},
{Name: "temporality", Type: "String"},
{Name: "type", Type: "String"},
{Name: "is_monotonic", Type: "Bool"},
}
metadataRows := cmock.NewRows(metadataCols, [][]any{
{"probe_success", metrictypes.Unspecified, metrictypes.GaugeType, false},
})
mock.ExpectQuery("*distributed_time_series_v4*").WithArgs(nil, nil, nil).WillReturnRows(metadataRows)
// Second query: fetchMeterSourceMetricsTemporalityAndType (from signoz_meter table)
emptyMetadataRows := cmock.NewRows(metadataCols, [][]any{})
mock.ExpectQuery("*meter*").WithArgs(nil).WillReturnRows(emptyMetadataRows)
// Generate query arguments for the metric query
evalTime := time.Now().UTC()

View File

@@ -8,13 +8,14 @@ import (
"sync/atomic"
"time"
"log/slog"
"github.com/ClickHouse/clickhouse-go/v2"
"github.com/go-co-op/gocron"
"github.com/google/uuid"
"go.uber.org/zap"
"github.com/SigNoz/signoz/ee/query-service/model"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/query-service/utils/encryption"
@@ -76,19 +77,19 @@ func (lm *Manager) Start(ctx context.Context) error {
func (lm *Manager) UploadUsage(ctx context.Context) {
organizations, err := lm.orgGetter.ListByOwnedKeyRange(ctx)
if err != nil {
zap.L().Error("failed to get organizations", zap.Error(err))
slog.ErrorContext(ctx, "failed to get organizations", errors.Attr(err))
return
}
for _, organization := range organizations {
// check if license is present or not
license, err := lm.licenseService.GetActive(ctx, organization.ID)
if err != nil {
zap.L().Error("failed to get active license", zap.Error(err))
slog.ErrorContext(ctx, "failed to get active license", errors.Attr(err))
return
}
if license == nil {
// we will not start the usage reporting if license is not present.
zap.L().Info("no license present, skipping usage reporting")
slog.InfoContext(ctx, "no license present, skipping usage reporting")
return
}
@@ -115,7 +116,7 @@ func (lm *Manager) UploadUsage(ctx context.Context) {
dbusages := []model.UsageDB{}
err := lm.clickhouseConn.Select(ctx, &dbusages, fmt.Sprintf(query, db, db), time.Now().Add(-(24 * time.Hour)))
if err != nil && !strings.Contains(err.Error(), "doesn't exist") {
zap.L().Error("failed to get usage from clickhouse: %v", zap.Error(err))
slog.ErrorContext(ctx, "failed to get usage from clickhouse", errors.Attr(err))
return
}
for _, u := range dbusages {
@@ -125,24 +126,24 @@ func (lm *Manager) UploadUsage(ctx context.Context) {
}
if len(usages) <= 0 {
zap.L().Info("no snapshots to upload, skipping.")
slog.InfoContext(ctx, "no snapshots to upload, skipping")
return
}
zap.L().Info("uploading usage data")
slog.InfoContext(ctx, "uploading usage data")
usagesPayload := []model.Usage{}
for _, usage := range usages {
usageDataBytes, err := encryption.Decrypt([]byte(usage.ExporterID[:32]), []byte(usage.Data))
if err != nil {
zap.L().Error("error while decrypting usage data: %v", zap.Error(err))
slog.ErrorContext(ctx, "error while decrypting usage data", errors.Attr(err))
return
}
usageData := model.Usage{}
err = json.Unmarshal(usageDataBytes, &usageData)
if err != nil {
zap.L().Error("error while unmarshalling usage data: %v", zap.Error(err))
slog.ErrorContext(ctx, "error while unmarshalling usage data", errors.Attr(err))
return
}
@@ -163,13 +164,13 @@ func (lm *Manager) UploadUsage(ctx context.Context) {
body, errv2 := json.Marshal(payload)
if errv2 != nil {
zap.L().Error("error while marshalling usage payload: %v", zap.Error(errv2))
slog.ErrorContext(ctx, "error while marshalling usage payload", errors.Attr(errv2))
return
}
errv2 = lm.zeus.PutMeters(ctx, payload.LicenseKey.String(), body)
if errv2 != nil {
zap.L().Error("failed to upload usage: %v", zap.Error(errv2))
slog.ErrorContext(ctx, "failed to upload usage", errors.Attr(errv2))
// not returning error here since it is captured in the failed count
return
}
@@ -179,7 +180,7 @@ func (lm *Manager) UploadUsage(ctx context.Context) {
func (lm *Manager) Stop(ctx context.Context) {
lm.scheduler.Stop()
zap.L().Info("sending usage data before shutting down")
slog.InfoContext(ctx, "sending usage data before shutting down")
// send usage before shutting down
lm.UploadUsage(ctx)
atomic.StoreUint32(&locker, stateUnlocked)

View File

@@ -4,10 +4,12 @@ import (
"context"
"database/sql"
"github.com/uptrace/bun"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/sqlschema"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/uptrace/bun"
)
type provider struct {
@@ -32,9 +34,9 @@ func New(ctx context.Context, providerSettings factory.ProviderSettings, config
fmter: fmter,
settings: settings,
operator: sqlschema.NewOperator(fmter, sqlschema.OperatorSupport{
DropConstraint: true,
ColumnIfNotExistsExists: true,
AlterColumnSetNotNull: true,
SCreateAndDropConstraint: true,
SAlterTableAddAndDropColumnIfNotExistsAndExists: true,
SAlterTableAlterColumnSetAndDrop: true,
}),
}, nil
}
@@ -72,8 +74,9 @@ WHERE
if err != nil {
return nil, nil, err
}
if len(columns) == 0 {
return nil, nil, sql.ErrNoRows
return nil, nil, provider.sqlstore.WrapNotFoundErrf(sql.ErrNoRows, errors.CodeNotFound, "table (%s) not found", tableName)
}
sqlschemaColumns := make([]*sqlschema.Column, 0)
@@ -111,7 +114,7 @@ WHERE
defer func() {
if err := constraintsRows.Close(); err != nil {
provider.settings.Logger().ErrorContext(ctx, "error closing rows", "error", err)
provider.settings.Logger().ErrorContext(ctx, "error closing rows", errors.Attr(err))
}
}()
@@ -172,7 +175,7 @@ WHERE
defer func() {
if err := foreignKeyConstraintsRows.Close(); err != nil {
provider.settings.Logger().ErrorContext(ctx, "error closing rows", "error", err)
provider.settings.Logger().ErrorContext(ctx, "error closing rows", errors.Attr(err))
}
}()
@@ -220,7 +223,9 @@ SELECT
ci.relname AS index_name,
i.indisunique AS unique,
i.indisprimary AS primary,
a.attname AS column_name
a.attname AS column_name,
array_position(i.indkey, a.attnum) AS column_position,
pg_get_expr(i.indpred, i.indrelid) AS predicate
FROM
pg_index i
LEFT JOIN pg_class ct ON ct.oid = i.indrelid
@@ -231,18 +236,24 @@ WHERE
a.attnum = ANY(i.indkey)
AND con.oid IS NULL
AND ct.relkind = 'r'
AND ct.relname = ?`, string(name))
AND ct.relname = ?
ORDER BY index_name, column_position`, string(name))
if err != nil {
return nil, err
return nil, provider.sqlstore.WrapNotFoundErrf(err, errors.CodeNotFound, "no indices for table (%s) found", name)
}
defer func() {
if err := rows.Close(); err != nil {
provider.settings.Logger().ErrorContext(ctx, "error closing rows", "error", err)
provider.settings.Logger().ErrorContext(ctx, "error closing rows", errors.Attr(err))
}
}()
uniqueIndicesMap := make(map[string]*sqlschema.UniqueIndex)
type indexEntry struct {
columns []sqlschema.ColumnName
predicate *string
}
uniqueIndicesMap := make(map[string]*indexEntry)
for rows.Next() {
var (
tableName string
@@ -250,27 +261,53 @@ WHERE
unique bool
primary bool
columnName string
// starts from 0 and is unused in this function, this is to ensure that the column names are in the correct order
columnPosition int
predicate *string
)
if err := rows.Scan(&tableName, &indexName, &unique, &primary, &columnName); err != nil {
if err := rows.Scan(&tableName, &indexName, &unique, &primary, &columnName, &columnPosition, &predicate); err != nil {
return nil, err
}
if unique {
if _, ok := uniqueIndicesMap[indexName]; !ok {
uniqueIndicesMap[indexName] = &sqlschema.UniqueIndex{
TableName: name,
ColumnNames: []sqlschema.ColumnName{sqlschema.ColumnName(columnName)},
uniqueIndicesMap[indexName] = &indexEntry{
columns: []sqlschema.ColumnName{sqlschema.ColumnName(columnName)},
predicate: predicate,
}
} else {
uniqueIndicesMap[indexName].ColumnNames = append(uniqueIndicesMap[indexName].ColumnNames, sqlschema.ColumnName(columnName))
uniqueIndicesMap[indexName].columns = append(uniqueIndicesMap[indexName].columns, sqlschema.ColumnName(columnName))
}
}
}
indices := make([]sqlschema.Index, 0)
for _, index := range uniqueIndicesMap {
indices = append(indices, index)
for indexName, entry := range uniqueIndicesMap {
if entry.predicate != nil {
index := &sqlschema.PartialUniqueIndex{
TableName: name,
ColumnNames: entry.columns,
Where: *entry.predicate,
}
if index.Name() == indexName {
indices = append(indices, index)
} else {
indices = append(indices, index.Named(indexName))
}
} else {
index := &sqlschema.UniqueIndex{
TableName: name,
ColumnNames: entry.columns,
}
if index.Name() == indexName {
indices = append(indices, index)
} else {
indices = append(indices, index.Named(indexName))
}
}
}
return indices, nil
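
Note: a hypothetical, self-contained sketch of what the rewritten loop yields for an index created as CREATE UNIQUE INDEX org_user_email_idx ON org_user (org_id, email) WHERE deleted_at IS NULL. Table, index, and column names are illustrative; the sqlschema types and methods are the ones used in the hunk above.

package main

import (
	"fmt"

	"github.com/SigNoz/signoz/pkg/sqlschema"
)

func main() {
	// Values as the query above would return them: columns ordered by
	// column_position, predicate from pg_get_expr(i.indpred, i.indrelid).
	columns := []sqlschema.ColumnName{"org_id", "email"}
	predicate := "deleted_at IS NULL"

	index := &sqlschema.PartialUniqueIndex{
		TableName:   "org_user",
		ColumnNames: columns,
		Where:       predicate,
	}

	// Preserve the name stored in Postgres when it differs from the derived one.
	var indices []sqlschema.Index
	if index.Name() == "org_user_email_idx" {
		indices = append(indices, index)
	} else {
		indices = append(indices, index.Named("org_user_email_idx"))
	}

	fmt.Println(indices)
}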

View File

@@ -101,7 +101,7 @@ func (provider *provider) WrapNotFoundErrf(err error, code errors.Code, format s
func (provider *provider) WrapAlreadyExistsErrf(err error, code errors.Code, format string, args ...any) error {
var pgErr *pgconn.PgError
if errors.As(err, &pgErr) && pgErr.Code == "23505" {
if errors.As(err, &pgErr) && (pgErr.Code == "23505" || pgErr.Code == "23503") {
return errors.Wrapf(err, errors.TypeAlreadyExists, code, format, args...)
}

View File

@@ -4,5 +4,4 @@ build
i18-generate-hash.js
src/parser/TraceOperatorParser/**
orval.config.ts
pnpm-lock.yaml
orval.config.ts

View File

@@ -41,7 +41,7 @@ module.exports = {
'jsx-a11y', // Accessibility rules
'import', // Import/export linting
'sonarjs', // Code quality/complexity
// TODO: Uncomment after running: pnpm add -D eslint-plugin-spellcheck
// TODO: Uncomment after running: yarn add -D eslint-plugin-spellcheck
// 'spellcheck', // Correct spellings
],
settings: {
@@ -193,6 +193,16 @@ module.exports = {
],
},
],
'no-restricted-syntax': [
'error',
{
selector:
// TODO: Make this generic on removal of redux
"CallExpression[callee.property.name='getState'][callee.object.name=/^use/]",
message:
'Avoid calling .getState() directly. Export a standalone action from the store instead.',
},
],
},
overrides: [
{
@@ -217,5 +227,13 @@ module.exports = {
'@typescript-eslint/no-unused-vars': 'warn',
},
},
{
// Store definition files are the only place .getState() is permitted —
// they are the canonical source for standalone action exports.
files: ['**/*Store.{ts,tsx}'],
rules: {
'no-restricted-syntax': 'off',
},
},
],
};

View File

@@ -1,7 +1,7 @@
#!/bin/sh
. "$(dirname "$0")/_/husky.sh"
cd frontend && pnpm run commitlint --edit $1
cd frontend && yarn run commitlint --edit $1
branch="$(git rev-parse --abbrev-ref HEAD)"

View File

@@ -1,4 +1,4 @@
#!/bin/sh
. "$(dirname "$0")/_/husky.sh"
cd frontend && pnpm lint-staged
cd frontend && yarn lint-staged

View File

@@ -1,2 +1 @@
registry = 'https://registry.npmjs.org/'
node-linker = hoisted
registry = 'https://registry.npmjs.org/'

View File

@@ -12,6 +12,4 @@ public/
# Ignore all files in parser folder:
src/parser/**
src/TraceOperator/parser/**
pnpm-lock.yaml
src/TraceOperator/parser/**

View File

@@ -28,8 +28,8 @@ Follow the steps below
1. ```git clone https://github.com/SigNoz/signoz.git && cd signoz/frontend```
1. change baseURL to ```<test environment URL>``` in file ```src/constants/env.ts```
1. ```pnpm install```
1. ```pnpm dev```
1. ```yarn install```
1. ```yarn dev```
```Note: Please ping us in #contributing channel in our slack community and we will DM you with <test environment URL>```
@@ -41,7 +41,7 @@ This project was bootstrapped with [Create React App](https://github.com/faceboo
In the project directory, you can run:
### `pnpm start`
### `yarn start`
Runs the app in the development mode.\
Open [http://localhost:3301](http://localhost:3301) to view it in the browser.
@@ -49,12 +49,12 @@ Open [http://localhost:3301](http://localhost:3301) to view it in the browser.
The page will reload if you make edits.\
You will also see any lint errors in the console.
### `pnpm test`
### `yarn test`
Launches the test runner in the interactive watch mode.\
See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information.
### `pnpm build`
### `yarn build`
Builds the app for production to the `build` folder.\
It correctly bundles React in production mode and optimizes the build for the best performance.
@@ -64,7 +64,7 @@ Your app is ready to be deployed!
See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information.
### `pnpm eject`
### `yarn eject`
**Note: this is a one-way operation. Once you `eject`, you cant go back!**
@@ -100,6 +100,6 @@ This section has moved here: [https://facebook.github.io/create-react-app/docs/a
This section has moved here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment)
### `pnpm build` fails to minify
### `yarn build` fails to minify
This section has moved here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify)

View File

@@ -0,0 +1,29 @@
import { PropsWithChildren } from 'react';
type CommonProps = PropsWithChildren<{
className?: string;
minSize?: number;
maxSize?: number;
defaultSize?: number;
direction?: 'horizontal' | 'vertical';
autoSaveId?: string;
withHandle?: boolean;
}>;
export function ResizablePanelGroup({
children,
className,
}: CommonProps): JSX.Element {
return <div className={className}>{children}</div>;
}
export function ResizablePanel({
children,
className,
}: CommonProps): JSX.Element {
return <div className={className}>{children}</div>;
}
export function ResizableHandle({ className }: CommonProps): JSX.Element {
return <div className={className} />;
}

View File

@@ -14,6 +14,7 @@ const config: Config.InitialOptions = {
'\\.(css|less|scss)$': '<rootDir>/__mocks__/cssMock.ts',
'\\.md$': '<rootDir>/__mocks__/cssMock.ts',
'^uplot$': '<rootDir>/__mocks__/uplotMock.ts',
'^@signozhq/resizable$': '<rootDir>/__mocks__/resizableMock.tsx',
'^hooks/useSafeNavigate$': USE_SAFE_NAVIGATE_MOCK_PATH,
'^src/hooks/useSafeNavigate$': USE_SAFE_NAVIGATE_MOCK_PATH,
'^.*/useSafeNavigate$': USE_SAFE_NAVIGATE_MOCK_PATH,
@@ -23,7 +24,8 @@ const config: Config.InitialOptions = {
'<rootDir>/node_modules/@signozhq/icons/dist/index.esm.js',
'^react-syntax-highlighter/dist/esm/(.*)$':
'<rootDir>/node_modules/react-syntax-highlighter/dist/cjs/$1',
'^@signozhq/([^/]+)$': '<rootDir>/node_modules/@signozhq/$1/dist/$1.js',
'^@signozhq/(?!ui$)([^/]+)$':
'<rootDir>/node_modules/@signozhq/$1/dist/$1.js',
},
extensionsToTreatAsEsm: ['.ts'],
testMatch: ['<rootDir>/src/**/*?(*.)(test).(ts|js)?(x)'],

View File

@@ -6,7 +6,6 @@
* Adds custom matchers from the react testing library to all tests
*/
import '@testing-library/jest-dom';
import '@testing-library/jest-dom/extend-expect';
import 'jest-styled-components';
import { server } from './src/mocks-server/server';

View File

@@ -80,7 +80,7 @@ export default defineConfig({
header: (info: { title: string; version: string }): string[] => [
`! Do not edit manually`,
`* The file has been auto-generated using Orval for SigNoz`,
`* regenerate with 'pnpm generate:api'`,
`* regenerate with 'yarn generate:api'`,
...(info.title ? [info.title] : []),
...(info.version ? [`OpenAPI spec version: ${info.version}`] : []),
],

View File

@@ -11,6 +11,7 @@
"prettify": "prettier --write .",
"fmt": "prettier --check .",
"lint": "eslint ./src",
"lint:generated": "eslint ./src/api/generated --fix",
"lint:fix": "eslint ./src --fix",
"jest": "jest",
"jest:coverage": "jest --coverage",
@@ -33,7 +34,6 @@
"@ant-design/icons": "4.8.0",
"@codemirror/autocomplete": "6.18.6",
"@codemirror/lang-javascript": "6.2.3",
"@codemirror/state": "6.5.4",
"@dnd-kit/core": "6.1.0",
"@dnd-kit/modifiers": "7.0.0",
"@dnd-kit/sortable": "8.0.0",
@@ -41,7 +41,7 @@
"@grafana/data": "^11.2.3",
"@mdx-js/loader": "2.3.0",
"@mdx-js/react": "2.3.0",
"@monaco-editor/react": "~4.3.1",
"@monaco-editor/react": "^4.3.1",
"@playwright/test": "1.55.1",
"@radix-ui/react-tabs": "1.0.4",
"@radix-ui/react-tooltip": "1.0.7",
@@ -65,8 +65,9 @@
"@signozhq/sonner": "0.1.0",
"@signozhq/switch": "0.0.2",
"@signozhq/table": "0.3.7",
"@signozhq/toggle-group": "^0.0.1",
"@signozhq/toggle-group": "0.0.1",
"@signozhq/tooltip": "0.0.2",
"@signozhq/ui": "0.0.5",
"@tanstack/react-table": "8.20.6",
"@tanstack/react-virtual": "3.11.2",
"@uiw/codemirror-theme-copilot": "4.23.11",
@@ -98,7 +99,7 @@
"color-alpha": "2.0.0",
"cross-env": "^7.0.3",
"crypto-js": "4.2.0",
"d3-hierarchy": "1.1.9",
"d3-hierarchy": "3.1.2",
"dayjs": "^1.10.7",
"dompurify": "3.2.4",
"dotenv": "8.2.0",
@@ -123,7 +124,6 @@
"overlayscrollbars-react": "^0.5.6",
"papaparse": "5.4.1",
"posthog-js": "1.298.0",
"rc-select": "~14.10.0",
"rc-tween-one": "3.0.6",
"react": "18.2.0",
"react-addons-update": "15.6.3",
@@ -137,12 +137,13 @@
"react-full-screen": "1.1.1",
"react-grid-layout": "^1.3.4",
"react-helmet-async": "1.3.0",
"react-hook-form": "7.71.2",
"react-i18next": "^11.16.1",
"react-lottie": "1.2.10",
"react-markdown": "8.0.7",
"react-query": "3.39.3",
"react-redux": "^7.2.2",
"react-router-dom": "5.3.4",
"react-router-dom": "^5.2.0",
"react-router-dom-v5-compat": "6.27.0",
"react-syntax-highlighter": "15.5.0",
"react-use": "^17.3.2",
@@ -188,17 +189,14 @@
"@commitlint/config-conventional": "^20.4.2",
"@faker-js/faker": "9.3.0",
"@jest/globals": "30.2.0",
"@jest/types": "^30.3.0",
"@testing-library/jest-dom": "5.16.5",
"@testing-library/react": "13.4.0",
"@testing-library/user-event": "14.4.3",
"@types/color": "^3.0.3",
"@types/crypto-js": "4.2.2",
"@types/d3-hierarchy": "1.1.11",
"@types/dompurify": "^2.4.0",
"@types/event-source-polyfill": "^1.0.0",
"@types/fontfaceobserver": "2.1.0",
"@types/history": "^4.7.11",
"@types/jest": "30.0.0",
"@types/lodash-es": "^4.17.4",
"@types/mini-css-extract-plugin": "^2.5.1",
@@ -213,11 +211,10 @@
"@types/react-lottie": "1.2.10",
"@types/react-redux": "^7.1.11",
"@types/react-resizable": "3.0.3",
"@types/react-router-dom": "5.3.3",
"@types/react-router-dom": "^5.1.6",
"@types/react-syntax-highlighter": "15.5.13",
"@types/redux-mock-store": "1.0.4",
"@types/styled-components": "^5.1.4",
"@types/testing-library__jest-dom": "^5.14.5",
"@types/uuid": "^8.3.1",
"@typescript-eslint/eslint-plugin": "^4.33.0",
"@typescript-eslint/parser": "^4.33.0",
@@ -233,7 +230,6 @@
"eslint-plugin-react-hooks": "^4.3.0",
"eslint-plugin-simple-import-sort": "^7.0.0",
"eslint-plugin-sonarjs": "^0.12.0",
"glob": "^13.0.6",
"husky": "^7.0.4",
"imagemin": "^8.0.1",
"imagemin-svgo": "^10.0.1",
@@ -255,7 +251,6 @@
"sass": "1.97.3",
"sharp": "0.34.5",
"svgo": "4.0.0",
"tailwindcss": "3",
"ts-api-utils": "2.4.0",
"ts-jest": "29.4.6",
"ts-node": "^10.2.1",
@@ -289,30 +284,6 @@
"brace-expansion": "^2.0.2",
"on-headers": "^1.1.0",
"tmp": "0.2.4",
"vite": "npm:rolldown-vite@7.3.1",
"@types/d3-hierarchy": "1.1.11"
},
"pnpm": {
"overrides": {
"@types/react": "18.0.26",
"@types/react-dom": "18.0.10",
"debug": "4.3.4",
"semver": "7.5.4",
"xml2js": "0.5.0",
"phin": "^3.7.1",
"body-parser": "1.20.3",
"http-proxy-middleware": "3.0.5",
"cross-spawn": "7.0.5",
"cookie": "^0.7.1",
"serialize-javascript": "6.0.2",
"prismjs": "1.30.0",
"got": "11.8.5",
"form-data": "4.0.4",
"brace-expansion": "^2.0.2",
"on-headers": "^1.1.0",
"tmp": "0.2.4",
"vite": "npm:rolldown-vite@7.3.1",
"@types/d3-hierarchy": "1.1.11"
}
"vite": "npm:rolldown-vite@7.3.1"
}
}
}

View File

@@ -1,11 +1,9 @@
import { defineConfig, devices } from '@playwright/test';
import dotenv from 'dotenv';
import path from 'path';
import { fileURLToPath } from 'url';
// Read from ".env" file.
const dirname = path.dirname(fileURLToPath(import.meta.url));
dotenv.config({ path: path.resolve(dirname, '.env') });
dotenv.config({ path: path.resolve(__dirname, '.env') });
/**
* Read environment variables from file.

frontend/pnpm-lock.yaml (generated, 23275 lines)

File diff suppressed because it is too large

File diff suppressed because it is too large

(image preview omitted; size after change: 214 KiB)

View File

@@ -15,5 +15,6 @@
"logs_to_metrics": "Logs To Metrics",
"roles": "Roles",
"role_details": "Role Details",
"members": "Members"
"members": "Members",
"service_accounts": "Service Accounts"
}

View File

@@ -50,5 +50,8 @@
"INFRASTRUCTURE_MONITORING_KUBERNETES": "SigNoz | Infra Monitoring",
"METER_EXPLORER": "SigNoz | Meter Explorer",
"METER_EXPLORER_VIEWS": "SigNoz | Meter Explorer Views",
"METER": "SigNoz | Meter"
"METER": "SigNoz | Meter",
"ROLES_SETTINGS": "SigNoz | Roles",
"MEMBERS_SETTINGS": "SigNoz | Members",
"SERVICE_ACCOUNTS_SETTINGS": "SigNoz | Service Accounts"
}

View File

@@ -15,5 +15,6 @@
"logs_to_metrics": "Logs To Metrics",
"roles": "Roles",
"role_details": "Role Details",
"members": "Members"
"members": "Members",
"service_accounts": "Service Accounts"
}

View File

@@ -75,5 +75,6 @@
"METER_EXPLORER_VIEWS": "SigNoz | Meter Explorer Views",
"METER": "SigNoz | Meter",
"ROLES_SETTINGS": "SigNoz | Roles",
"MEMBERS_SETTINGS": "SigNoz | Members"
"MEMBERS_SETTINGS": "SigNoz | Members",
"SERVICE_ACCOUNTS_SETTINGS": "SigNoz | Service Accounts"
}

View File

@@ -180,7 +180,7 @@ async function main() {
PERMISSIONS_TYPE_FILE,
);
log('Linting generated file...');
execSync(`cd frontend && pnpm eslint --fix ${relativePath}`, {
execSync(`cd frontend && yarn eslint --fix ${relativePath}`, {
cwd: rootDir,
stdio: 'inherit',
});

View File

@@ -25,7 +25,7 @@ echo "\n✅ Prettier formatting successful"
# Fix linting issues
echo "\n\n---\nRunning eslint...\n"
if ! pnpm lint --fix --quiet src/api/generated; then
if ! yarn lint:generated; then
echo "ESLint check failed! Please fix linting errors before proceeding."
exit 1
fi

View File

@@ -128,6 +128,7 @@ function PrivateRoute({ children }: PrivateRouteProps): JSX.Element {
isAdmin &&
(path === ROUTES.SETTINGS ||
path === ROUTES.ORG_SETTINGS ||
path === ROUTES.MEMBERS_SETTINGS ||
path === ROUTES.BILLING ||
path === ROUTES.MY_SETTINGS);

View File

@@ -29,7 +29,6 @@ import posthog from 'posthog-js';
import { useAppContext } from 'providers/App/App';
import { IUser } from 'providers/App/types';
import { CmdKProvider } from 'providers/cmdKProvider';
import { DashboardProvider } from 'providers/Dashboard/Dashboard';
import { ErrorModalProvider } from 'providers/ErrorModalProvider';
import { PreferenceContextProvider } from 'providers/preferences/context/PreferenceContextProvider';
import { QueryBuilderProvider } from 'providers/QueryBuilder';
@@ -321,6 +320,19 @@ function App(): JSX.Element {
// Session Replay
replaysSessionSampleRate: 0.1, // This sets the sample rate at 10%. You may want to change it to 100% while in development and then sample at a lower rate in production.
replaysOnErrorSampleRate: 1.0, // If you're not already sampling the entire session, change the sample rate to 100% when sampling sessions where errors occur.
beforeSend(event) {
const sessionReplayUrl = posthog.get_session_replay_url?.({
withTimestamp: true,
});
if (sessionReplayUrl) {
// eslint-disable-next-line no-param-reassign
event.contexts = {
...event.contexts,
posthog: { session_replay_url: sessionReplayUrl },
};
}
return event;
},
});
setIsSentryInitialized(true);
@@ -371,28 +383,26 @@ function App(): JSX.Element {
<PrivateRoute>
<ResourceProvider>
<QueryBuilderProvider>
<DashboardProvider>
<KeyboardHotkeysProvider>
<AppLayout>
<PreferenceContextProvider>
<Suspense fallback={<Spinner size="large" tip="Loading..." />}>
<Switch>
{routes.map(({ path, component, exact }) => (
<Route
key={`${path}`}
exact={exact}
path={path}
component={component}
/>
))}
<Route exact path="/" component={Home} />
<Route path="*" component={NotFound} />
</Switch>
</Suspense>
</PreferenceContextProvider>
</AppLayout>
</KeyboardHotkeysProvider>
</DashboardProvider>
<KeyboardHotkeysProvider>
<AppLayout>
<PreferenceContextProvider>
<Suspense fallback={<Spinner size="large" tip="Loading..." />}>
<Switch>
{routes.map(({ path, component, exact }) => (
<Route
key={`${path}`}
exact={exact}
path={path}
component={component}
/>
))}
<Route exact path="/" component={Home} />
<Route path="*" component={NotFound} />
</Switch>
</Suspense>
</PreferenceContextProvider>
</AppLayout>
</KeyboardHotkeysProvider>
</QueryBuilderProvider>
</ResourceProvider>
</PrivateRoute>

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {
@@ -21,6 +21,8 @@ import type { BodyType, ErrorType } from '../../../generatedAPIInstance';
import { GeneratedAPIInstance } from '../../../generatedAPIInstance';
import type {
AuthtypesPatchableObjectsDTO,
AuthtypesPatchableRoleDTO,
AuthtypesPostableRoleDTO,
CreateRole201,
DeleteRolePathParameters,
GetObjects200,
@@ -31,8 +33,6 @@ import type {
PatchObjectsPathParameters,
PatchRolePathParameters,
RenderErrorResponseDTO,
RoletypesPatchableRoleDTO,
RoletypesPostableRoleDTO,
} from '../sigNoz.schemas';
/**
@@ -118,14 +118,14 @@ export const invalidateListRoles = async (
* @summary Create role
*/
export const createRole = (
roletypesPostableRoleDTO: BodyType<RoletypesPostableRoleDTO>,
authtypesPostableRoleDTO: BodyType<AuthtypesPostableRoleDTO>,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<CreateRole201>({
url: `/api/v1/roles`,
method: 'POST',
headers: { 'Content-Type': 'application/json' },
data: roletypesPostableRoleDTO,
data: authtypesPostableRoleDTO,
signal,
});
};
@@ -137,13 +137,13 @@ export const getCreateRoleMutationOptions = <
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createRole>>,
TError,
{ data: BodyType<RoletypesPostableRoleDTO> },
{ data: BodyType<AuthtypesPostableRoleDTO> },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof createRole>>,
TError,
{ data: BodyType<RoletypesPostableRoleDTO> },
{ data: BodyType<AuthtypesPostableRoleDTO> },
TContext
> => {
const mutationKey = ['createRole'];
@@ -157,7 +157,7 @@ export const getCreateRoleMutationOptions = <
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof createRole>>,
{ data: BodyType<RoletypesPostableRoleDTO> }
{ data: BodyType<AuthtypesPostableRoleDTO> }
> = (props) => {
const { data } = props ?? {};
@@ -170,7 +170,7 @@ export const getCreateRoleMutationOptions = <
export type CreateRoleMutationResult = NonNullable<
Awaited<ReturnType<typeof createRole>>
>;
export type CreateRoleMutationBody = BodyType<RoletypesPostableRoleDTO>;
export type CreateRoleMutationBody = BodyType<AuthtypesPostableRoleDTO>;
export type CreateRoleMutationError = ErrorType<RenderErrorResponseDTO>;
/**
@@ -183,13 +183,13 @@ export const useCreateRole = <
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createRole>>,
TError,
{ data: BodyType<RoletypesPostableRoleDTO> },
{ data: BodyType<AuthtypesPostableRoleDTO> },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof createRole>>,
TError,
{ data: BodyType<RoletypesPostableRoleDTO> },
{ data: BodyType<AuthtypesPostableRoleDTO> },
TContext
> => {
const mutationOptions = getCreateRoleMutationOptions(options);
@@ -370,13 +370,13 @@ export const invalidateGetRole = async (
*/
export const patchRole = (
{ id }: PatchRolePathParameters,
roletypesPatchableRoleDTO: BodyType<RoletypesPatchableRoleDTO>,
authtypesPatchableRoleDTO: BodyType<AuthtypesPatchableRoleDTO>,
) => {
return GeneratedAPIInstance<string>({
url: `/api/v1/roles/${id}`,
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
data: roletypesPatchableRoleDTO,
data: authtypesPatchableRoleDTO,
});
};
@@ -389,7 +389,7 @@ export const getPatchRoleMutationOptions = <
TError,
{
pathParams: PatchRolePathParameters;
data: BodyType<RoletypesPatchableRoleDTO>;
data: BodyType<AuthtypesPatchableRoleDTO>;
},
TContext
>;
@@ -398,7 +398,7 @@ export const getPatchRoleMutationOptions = <
TError,
{
pathParams: PatchRolePathParameters;
data: BodyType<RoletypesPatchableRoleDTO>;
data: BodyType<AuthtypesPatchableRoleDTO>;
},
TContext
> => {
@@ -415,7 +415,7 @@ export const getPatchRoleMutationOptions = <
Awaited<ReturnType<typeof patchRole>>,
{
pathParams: PatchRolePathParameters;
data: BodyType<RoletypesPatchableRoleDTO>;
data: BodyType<AuthtypesPatchableRoleDTO>;
}
> = (props) => {
const { pathParams, data } = props ?? {};
@@ -429,7 +429,7 @@ export const getPatchRoleMutationOptions = <
export type PatchRoleMutationResult = NonNullable<
Awaited<ReturnType<typeof patchRole>>
>;
export type PatchRoleMutationBody = BodyType<RoletypesPatchableRoleDTO>;
export type PatchRoleMutationBody = BodyType<AuthtypesPatchableRoleDTO>;
export type PatchRoleMutationError = ErrorType<RenderErrorResponseDTO>;
/**
@@ -444,7 +444,7 @@ export const usePatchRole = <
TError,
{
pathParams: PatchRolePathParameters;
data: BodyType<RoletypesPatchableRoleDTO>;
data: BodyType<AuthtypesPatchableRoleDTO>;
},
TContext
>;
@@ -453,7 +453,7 @@ export const usePatchRole = <
TError,
{
pathParams: PatchRolePathParameters;
data: BodyType<RoletypesPatchableRoleDTO>;
data: BodyType<AuthtypesPatchableRoleDTO>;
},
TContext
> => {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
export interface AuthtypesAttributeMappingDTO {
@@ -278,6 +278,13 @@ export interface AuthtypesPatchableObjectsDTO {
deletions: AuthtypesGettableObjectsDTO[] | null;
}
export interface AuthtypesPatchableRoleDTO {
/**
* @type string
*/
description: string;
}
export interface AuthtypesPostableAuthDomainDTO {
config?: AuthtypesAuthDomainConfigDTO;
/**
@@ -301,6 +308,17 @@ export interface AuthtypesPostableEmailPasswordSessionDTO {
password?: string;
}
export interface AuthtypesPostableRoleDTO {
/**
* @type string
*/
description?: string;
/**
* @type string
*/
name: string;
}
export interface AuthtypesPostableRotateTokenDTO {
/**
* @type string
@@ -319,6 +337,39 @@ export interface AuthtypesResourceDTO {
type: string;
}
export interface AuthtypesRoleDTO {
/**
* @type string
* @format date-time
*/
createdAt?: Date;
/**
* @type string
*/
description: string;
/**
* @type string
*/
id: string;
/**
* @type string
*/
name: string;
/**
* @type string
*/
orgId: string;
/**
* @type string
*/
type: string;
/**
* @type string
* @format date-time
*/
updatedAt?: Date;
}
/**
* @nullable
*/
@@ -725,6 +776,45 @@ export interface GatewaytypesUpdatableIngestionKeyLimitDTO {
tags?: string[] | null;
}
export interface GlobaltypesAPIKeyConfigDTO {
/**
* @type boolean
*/
enabled?: boolean;
}
export interface GlobaltypesConfigDTO {
/**
* @type string
*/
external_url?: string;
identN?: GlobaltypesIdentNConfigDTO;
/**
* @type string
*/
ingestion_url?: string;
}
export interface GlobaltypesIdentNConfigDTO {
apikey?: GlobaltypesAPIKeyConfigDTO;
impersonation?: GlobaltypesImpersonationConfigDTO;
tokenizer?: GlobaltypesTokenizerConfigDTO;
}
export interface GlobaltypesImpersonationConfigDTO {
/**
* @type boolean
*/
enabled?: boolean;
}
export interface GlobaltypesTokenizerConfigDTO {
/**
* @type boolean
*/
enabled?: boolean;
}
export interface MetricsexplorertypesListMetricDTO {
/**
* @type string
@@ -2039,57 +2129,6 @@ export interface RenderErrorResponseDTO {
status: string;
}
export interface RoletypesPatchableRoleDTO {
/**
* @type string
*/
description: string;
}
export interface RoletypesPostableRoleDTO {
/**
* @type string
*/
description?: string;
/**
* @type string
*/
name: string;
}
export interface RoletypesRoleDTO {
/**
* @type string
* @format date-time
*/
createdAt?: Date;
/**
* @type string
*/
description: string;
/**
* @type string
*/
id: string;
/**
* @type string
*/
name: string;
/**
* @type string
*/
orgId: string;
/**
* @type string
*/
type: string;
/**
* @type string
* @format date-time
*/
updatedAt?: Date;
}
export interface ServiceaccounttypesFactorAPIKeyDTO {
/**
* @type string
@@ -2100,7 +2139,7 @@ export interface ServiceaccounttypesFactorAPIKeyDTO {
* @type integer
* @minimum 0
*/
expires_at: number;
expiresAt: number;
/**
* @type string
*/
@@ -2113,7 +2152,7 @@ export interface ServiceaccounttypesFactorAPIKeyDTO {
* @type string
* @format date-time
*/
last_used: Date;
lastObservedAt: Date;
/**
* @type string
*/
@@ -2121,7 +2160,7 @@ export interface ServiceaccounttypesFactorAPIKeyDTO {
/**
* @type string
*/
service_account_id: string;
serviceAccountId: string;
/**
* @type string
* @format date-time
@@ -2145,7 +2184,7 @@ export interface ServiceaccounttypesPostableFactorAPIKeyDTO {
* @type integer
* @minimum 0
*/
expires_at: number;
expiresAt: number;
/**
* @type string
*/
@@ -2173,6 +2212,11 @@ export interface ServiceaccounttypesServiceAccountDTO {
* @format date-time
*/
createdAt?: Date;
/**
* @type string
* @format date-time
*/
deletedAt: Date;
/**
* @type string
*/
@@ -2188,7 +2232,7 @@ export interface ServiceaccounttypesServiceAccountDTO {
/**
* @type string
*/
orgID: string;
orgId: string;
/**
* @type array
*/
@@ -2209,7 +2253,7 @@ export interface ServiceaccounttypesUpdatableFactorAPIKeyDTO {
* @type integer
* @minimum 0
*/
expires_at: number;
expiresAt: number;
/**
* @type string
*/
@@ -2397,17 +2441,6 @@ export interface TypesGettableAPIKeyDTO {
userId?: string;
}
export interface TypesGettableGlobalConfigDTO {
/**
* @type string
*/
external_url?: string;
/**
* @type string
*/
ingestion_url?: string;
}
export interface TypesIdentifiableDTO {
/**
* @type string
@@ -2506,25 +2539,6 @@ export interface TypesPostableAPIKeyDTO {
role?: string;
}
export interface TypesPostableAcceptInviteDTO {
/**
* @type string
*/
displayName?: string;
/**
* @type string
*/
password?: string;
/**
* @type string
*/
sourceUrl?: string;
/**
* @type string
*/
token?: string;
}
export interface TypesPostableBulkInviteRequestDTO {
/**
* @type array
@@ -3021,18 +3035,7 @@ export type GetResetPasswordToken200 = {
};
export type GetGlobalConfig200 = {
data: TypesGettableGlobalConfigDTO;
/**
* @type string
*/
status: string;
};
export type ListInvite200 = {
/**
* @type array
*/
data: TypesInviteDTO[];
data: GlobaltypesConfigDTO;
/**
* @type string
*/
@@ -3047,28 +3050,6 @@ export type CreateInvite201 = {
status: string;
};
export type DeleteInvitePathParameters = {
id: string;
};
export type GetInvitePathParameters = {
token: string;
};
export type GetInvite200 = {
data: TypesInviteDTO;
/**
* @type string
*/
status: string;
};
export type AcceptInvite201 = {
data: TypesUserDTO;
/**
* @type string
*/
status: string;
};
export type ListPromotedAndIndexedPaths200 = {
/**
* @type array
@@ -3158,7 +3139,7 @@ export type ListRoles200 = {
/**
* @type array
*/
data: RoletypesRoleDTO[];
data: AuthtypesRoleDTO[];
/**
* @type string
*/
@@ -3180,7 +3161,7 @@ export type GetRolePathParameters = {
id: string;
};
export type GetRole200 = {
data: RoletypesRoleDTO;
data: AuthtypesRoleDTO;
/**
* @type string
*/

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {
@@ -20,26 +20,20 @@ import { useMutation, useQuery } from 'react-query';
import type { BodyType, ErrorType } from '../../../generatedAPIInstance';
import { GeneratedAPIInstance } from '../../../generatedAPIInstance';
import type {
AcceptInvite201,
ChangePasswordPathParameters,
CreateAPIKey201,
CreateInvite201,
DeleteInvitePathParameters,
DeleteUserPathParameters,
GetInvite200,
GetInvitePathParameters,
GetMyUser200,
GetResetPasswordToken200,
GetResetPasswordTokenPathParameters,
GetUser200,
GetUserPathParameters,
ListAPIKeys200,
ListInvite200,
ListUsers200,
RenderErrorResponseDTO,
RevokeAPIKeyPathParameters,
TypesChangePasswordRequestDTO,
TypesPostableAcceptInviteDTO,
TypesPostableAPIKeyDTO,
TypesPostableBulkInviteRequestDTO,
TypesPostableForgotPasswordDTO,
@@ -255,84 +249,6 @@ export const invalidateGetResetPasswordToken = async (
return queryClient;
};
/**
* This endpoint lists all invites
* @summary List invites
*/
export const listInvite = (signal?: AbortSignal) => {
return GeneratedAPIInstance<ListInvite200>({
url: `/api/v1/invite`,
method: 'GET',
signal,
});
};
export const getListInviteQueryKey = () => {
return [`/api/v1/invite`] as const;
};
export const getListInviteQueryOptions = <
TData = Awaited<ReturnType<typeof listInvite>>,
TError = ErrorType<RenderErrorResponseDTO>
>(options?: {
query?: UseQueryOptions<Awaited<ReturnType<typeof listInvite>>, TError, TData>;
}) => {
const { query: queryOptions } = options ?? {};
const queryKey = queryOptions?.queryKey ?? getListInviteQueryKey();
const queryFn: QueryFunction<Awaited<ReturnType<typeof listInvite>>> = ({
signal,
}) => listInvite(signal);
return { queryKey, queryFn, ...queryOptions } as UseQueryOptions<
Awaited<ReturnType<typeof listInvite>>,
TError,
TData
> & { queryKey: QueryKey };
};
export type ListInviteQueryResult = NonNullable<
Awaited<ReturnType<typeof listInvite>>
>;
export type ListInviteQueryError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary List invites
*/
export function useListInvite<
TData = Awaited<ReturnType<typeof listInvite>>,
TError = ErrorType<RenderErrorResponseDTO>
>(options?: {
query?: UseQueryOptions<Awaited<ReturnType<typeof listInvite>>, TError, TData>;
}): UseQueryResult<TData, TError> & { queryKey: QueryKey } {
const queryOptions = getListInviteQueryOptions(options);
const query = useQuery(queryOptions) as UseQueryResult<TData, TError> & {
queryKey: QueryKey;
};
query.queryKey = queryOptions.queryKey;
return query;
}
/**
* @summary List invites
*/
export const invalidateListInvite = async (
queryClient: QueryClient,
options?: InvalidateOptions,
): Promise<QueryClient> => {
await queryClient.invalidateQueries(
{ queryKey: getListInviteQueryKey() },
options,
);
return queryClient;
};
/**
* This endpoint creates an invite for a user
* @summary Create invite
@@ -416,257 +332,6 @@ export const useCreateInvite = <
return useMutation(mutationOptions);
};
/**
* This endpoint deletes an invite by id
* @summary Delete invite
*/
export const deleteInvite = ({ id }: DeleteInvitePathParameters) => {
return GeneratedAPIInstance<void>({
url: `/api/v1/invite/${id}`,
method: 'DELETE',
});
};
export const getDeleteInviteMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof deleteInvite>>,
TError,
{ pathParams: DeleteInvitePathParameters },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof deleteInvite>>,
TError,
{ pathParams: DeleteInvitePathParameters },
TContext
> => {
const mutationKey = ['deleteInvite'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof deleteInvite>>,
{ pathParams: DeleteInvitePathParameters }
> = (props) => {
const { pathParams } = props ?? {};
return deleteInvite(pathParams);
};
return { mutationFn, ...mutationOptions };
};
export type DeleteInviteMutationResult = NonNullable<
Awaited<ReturnType<typeof deleteInvite>>
>;
export type DeleteInviteMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Delete invite
*/
export const useDeleteInvite = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof deleteInvite>>,
TError,
{ pathParams: DeleteInvitePathParameters },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof deleteInvite>>,
TError,
{ pathParams: DeleteInvitePathParameters },
TContext
> => {
const mutationOptions = getDeleteInviteMutationOptions(options);
return useMutation(mutationOptions);
};
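The variables object for this generated mutation carries the path parameter under pathParams; a brief call-site sketch (import path and component are assumptions, not part of this diff):
import { useDeleteInvite } from 'api/generated/invite'; // hypothetical path
function RevokeInviteButton({ id }: { id: string }): JSX.Element {
	const { mutate: deleteInvite } = useDeleteInvite();
	return (
		<button
			type="button"
			onClick={(): void => deleteInvite({ pathParams: { id } })}
		>
			Revoke invite
		</button>
	);
}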
/**
* This endpoint gets an invite by token
* @summary Get invite
*/
export const getInvite = (
{ token }: GetInvitePathParameters,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<GetInvite200>({
url: `/api/v1/invite/${token}`,
method: 'GET',
signal,
});
};
export const getGetInviteQueryKey = ({ token }: GetInvitePathParameters) => {
return [`/api/v1/invite/${token}`] as const;
};
export const getGetInviteQueryOptions = <
TData = Awaited<ReturnType<typeof getInvite>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ token }: GetInvitePathParameters,
options?: {
query?: UseQueryOptions<Awaited<ReturnType<typeof getInvite>>, TError, TData>;
},
) => {
const { query: queryOptions } = options ?? {};
const queryKey = queryOptions?.queryKey ?? getGetInviteQueryKey({ token });
const queryFn: QueryFunction<Awaited<ReturnType<typeof getInvite>>> = ({
signal,
}) => getInvite({ token }, signal);
return {
queryKey,
queryFn,
enabled: !!token,
...queryOptions,
} as UseQueryOptions<Awaited<ReturnType<typeof getInvite>>, TError, TData> & {
queryKey: QueryKey;
};
};
export type GetInviteQueryResult = NonNullable<
Awaited<ReturnType<typeof getInvite>>
>;
export type GetInviteQueryError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Get invite
*/
export function useGetInvite<
TData = Awaited<ReturnType<typeof getInvite>>,
TError = ErrorType<RenderErrorResponseDTO>
>(
{ token }: GetInvitePathParameters,
options?: {
query?: UseQueryOptions<Awaited<ReturnType<typeof getInvite>>, TError, TData>;
},
): UseQueryResult<TData, TError> & { queryKey: QueryKey } {
const queryOptions = getGetInviteQueryOptions({ token }, options);
const query = useQuery(queryOptions) as UseQueryResult<TData, TError> & {
queryKey: QueryKey;
};
query.queryKey = queryOptions.queryKey;
return query;
}
/**
* @summary Get invite
*/
export const invalidateGetInvite = async (
queryClient: QueryClient,
{ token }: GetInvitePathParameters,
options?: InvalidateOptions,
): Promise<QueryClient> => {
await queryClient.invalidateQueries(
{ queryKey: getGetInviteQueryKey({ token }) },
options,
);
return queryClient;
};
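Because the generated options include enabled: !!token, no request is issued until a token is present; a call-site sketch (import path assumed):
import { useGetInvite } from 'api/generated/invite'; // hypothetical path
function InviteDetails({ token }: { token: string }): JSX.Element {
	// The request is skipped while token is an empty string (enabled: !!token above).
	const { data, isLoading } = useGetInvite({ token });
	if (isLoading) return <span>Loading invite…</span>;
	return <pre>{JSON.stringify(data, null, 2)}</pre>;
}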
/**
* This endpoint accepts an invite by token
* @summary Accept invite
*/
export const acceptInvite = (
typesPostableAcceptInviteDTO: BodyType<TypesPostableAcceptInviteDTO>,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<AcceptInvite201>({
url: `/api/v1/invite/accept`,
method: 'POST',
headers: { 'Content-Type': 'application/json' },
data: typesPostableAcceptInviteDTO,
signal,
});
};
export const getAcceptInviteMutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof acceptInvite>>,
TError,
{ data: BodyType<TypesPostableAcceptInviteDTO> },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof acceptInvite>>,
TError,
{ data: BodyType<TypesPostableAcceptInviteDTO> },
TContext
> => {
const mutationKey = ['acceptInvite'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof acceptInvite>>,
{ data: BodyType<TypesPostableAcceptInviteDTO> }
> = (props) => {
const { data } = props ?? {};
return acceptInvite(data);
};
return { mutationFn, ...mutationOptions };
};
export type AcceptInviteMutationResult = NonNullable<
Awaited<ReturnType<typeof acceptInvite>>
>;
export type AcceptInviteMutationBody = BodyType<TypesPostableAcceptInviteDTO>;
export type AcceptInviteMutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Accept invite
*/
export const useAcceptInvite = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof acceptInvite>>,
TError,
{ data: BodyType<TypesPostableAcceptInviteDTO> },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof acceptInvite>>,
TError,
{ data: BodyType<TypesPostableAcceptInviteDTO> },
TContext
> => {
const mutationOptions = getAcceptInviteMutationOptions(options);
return useMutation(mutationOptions);
};
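The mutation variables wrap the request body under data; a short sketch, leaving the DTO fields out since they are not shown in this diff (import paths are assumptions):
import { useAcceptInvite } from 'api/generated/invite'; // hypothetical path
import type { BodyType } from 'api/generated/mutator'; // hypothetical path
import type { TypesPostableAcceptInviteDTO } from 'api/generated/types'; // hypothetical path
function useAcceptInviteFlow(): (
	payload: BodyType<TypesPostableAcceptInviteDTO>,
) => void {
	const { mutate: acceptInvite } = useAcceptInvite();
	return (payload): void => acceptInvite({ data: payload });
}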
/**
* This endpoint creates a bulk invite for a user
* @summary Create bulk invite

View File

@@ -1,7 +1,7 @@
/**
* ! Do not edit manually
* * The file has been auto-generated using Orval for SigNoz
* * regenerate with 'pnpm generate:api'
* * regenerate with 'yarn generate:api'
* SigNoz
*/
import type {

View File

@@ -81,7 +81,8 @@ export const interceptorRejected = async (
response.config.url !== '/sessions/email_password' &&
!(
response.config.url === '/sessions' && response.config.method === 'delete'
)
) &&
response.config.url !== '/authz/check'
) {
try {
const accessToken = getLocalStorageApi(LOCALSTORAGE.AUTH_TOKEN);
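Read in isolation, the guard after this change attempts token rotation and retry only when the failing request is not one of the excluded endpoints; this is a paraphrase of the condition above (the surrounding 401 handling is not fully visible in this hunk and is inferred from the tests below):
const isExcludedFromRetry =
	response.config.url === '/sessions/email_password' ||
	(response.config.url === '/sessions' && response.config.method === 'delete') ||
	response.config.url === '/authz/check';
// The refresh-and-retry flow runs only when !isExcludedFromRetry.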

View File

@@ -0,0 +1,152 @@
import axios, { AxiosHeaders, AxiosResponse } from 'axios';
import { interceptorRejected } from './index';
jest.mock('api/browser/localstorage/get', () => ({
__esModule: true,
default: jest.fn(() => 'mock-token'),
}));
jest.mock('api/v2/sessions/rotate/post', () => ({
__esModule: true,
default: jest.fn(() =>
Promise.resolve({
data: { accessToken: 'new-token', refreshToken: 'new-refresh' },
}),
),
}));
jest.mock('AppRoutes/utils', () => ({
__esModule: true,
default: jest.fn(),
}));
jest.mock('axios', () => {
const actualAxios = jest.requireActual('axios');
const mockAxios = jest.fn().mockResolvedValue({ data: 'success' });
return {
...actualAxios,
default: Object.assign(mockAxios, {
...actualAxios.default,
isAxiosError: jest.fn().mockReturnValue(true),
create: actualAxios.create,
}),
__esModule: true,
};
});
describe('interceptorRejected', () => {
beforeEach(() => {
jest.clearAllMocks();
((axios as unknown) as jest.Mock).mockResolvedValue({ data: 'success' });
((axios.isAxiosError as unknown) as jest.Mock).mockReturnValue(true);
});
it('should preserve array payload structure when retrying a 401 request', async () => {
const arrayPayload = [
{ relation: 'assignee', object: { resource: { name: 'role' } } },
{ relation: 'assignee', object: { resource: { name: 'editor' } } },
];
const error = ({
response: {
status: 401,
config: {
url: '/some-endpoint',
method: 'POST',
baseURL: 'http://localhost/',
headers: new AxiosHeaders(),
data: JSON.stringify(arrayPayload),
},
},
config: {
url: '/some-endpoint',
method: 'POST',
baseURL: 'http://localhost/',
headers: new AxiosHeaders(),
data: JSON.stringify(arrayPayload),
},
} as unknown) as AxiosResponse;
try {
await interceptorRejected(error);
} catch {
// Expected to reject after retry
}
const mockAxiosFn = (axios as unknown) as jest.Mock;
expect(mockAxiosFn.mock.calls.length).toBe(1);
const retryCallConfig = mockAxiosFn.mock.calls[0][0];
expect(Array.isArray(JSON.parse(retryCallConfig.data))).toBe(true);
expect(JSON.parse(retryCallConfig.data)).toEqual(arrayPayload);
});
it('should preserve object payload structure when retrying a 401 request', async () => {
const objectPayload = { key: 'value', nested: { data: 123 } };
const error = ({
response: {
status: 401,
config: {
url: '/some-endpoint',
method: 'POST',
baseURL: 'http://localhost/',
headers: new AxiosHeaders(),
data: JSON.stringify(objectPayload),
},
},
config: {
url: '/some-endpoint',
method: 'POST',
baseURL: 'http://localhost/',
headers: new AxiosHeaders(),
data: JSON.stringify(objectPayload),
},
} as unknown) as AxiosResponse;
try {
await interceptorRejected(error);
} catch {
// Expected to reject after retry
}
const mockAxiosFn = (axios as unknown) as jest.Mock;
expect(mockAxiosFn.mock.calls.length).toBe(1);
const retryCallConfig = mockAxiosFn.mock.calls[0][0];
expect(JSON.parse(retryCallConfig.data)).toEqual(objectPayload);
});
it('should handle undefined data gracefully when retrying', async () => {
const error = ({
response: {
status: 401,
config: {
url: '/some-endpoint',
method: 'GET',
baseURL: 'http://localhost/',
headers: new AxiosHeaders(),
data: undefined,
},
},
config: {
url: '/some-endpoint',
method: 'GET',
baseURL: 'http://localhost/',
headers: new AxiosHeaders(),
data: undefined,
},
} as unknown) as AxiosResponse;
try {
await interceptorRejected(error);
} catch {
// Expected to reject after retry
}
const mockAxiosFn = (axios as unknown) as jest.Mock;
expect(mockAxiosFn.mock.calls.length).toBe(1);
const retryCallConfig = mockAxiosFn.mock.calls[0][0];
expect(retryCallConfig.data).toBeUndefined();
});
});
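These tests pin down a classic retry pitfall: rebuilding the request body by spreading it silently turns an array payload into a keyed object. An illustrative snippet (not taken from the source):
const arrayPayload = [{ relation: 'assignee', object: { resource: { name: 'role' } } }];
const rebuilt = { ...arrayPayload }; // => { '0': { relation: 'assignee', ... } }
console.log(Array.isArray(rebuilt)); // false — the array structure is gone
// Passing the original (stringified) config.data through unchanged, as asserted above, avoids this.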

View File

@@ -1,19 +0,0 @@
import axios from 'api';
import { ErrorResponseHandlerV2 } from 'api/ErrorResponseHandlerV2';
import { AxiosError } from 'axios';
import { ErrorV2Resp, SuccessResponseV2 } from 'types/api';
import { PayloadProps, PendingInvite } from 'types/api/user/getPendingInvites';
const get = async (): Promise<SuccessResponseV2<PendingInvite[]>> => {
try {
const response = await axios.get<PayloadProps>(`/invite`);
return {
httpStatusCode: response.status,
data: response.data.data,
};
} catch (error) {
ErrorResponseHandlerV2(error as AxiosError<ErrorV2Resp>);
}
};
export default get;
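These hand-written wrappers are superseded by the generated React Query hooks above; a hedged migration sketch for a call site (import path is illustrative):
import { useListInvite } from 'api/generated/invite'; // hypothetical path
// Replaces the imperative `get()` wrapper removed in this file.
function usePendingInvites() {
	const { data, isLoading } = useListInvite();
	return { invites: data, isLoading };
}
The removed accept and delete-invite wrappers in the following files map to useAcceptInvite and useDeleteInvite in the same way.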

View File

@@ -1,22 +0,0 @@
import axios from 'api';
import { ErrorResponseHandlerV2 } from 'api/ErrorResponseHandlerV2';
import { AxiosError } from 'axios';
import { ErrorV2Resp, SuccessResponseV2 } from 'types/api';
import { PayloadProps, Props } from 'types/api/user/accept';
import { UserResponse } from 'types/api/user/getUser';
const accept = async (
props: Props,
): Promise<SuccessResponseV2<UserResponse>> => {
try {
const response = await axios.post<PayloadProps>(`/invite/accept`, props);
return {
httpStatusCode: response.status,
data: response.data.data,
};
} catch (error) {
ErrorResponseHandlerV2(error as AxiosError<ErrorV2Resp>);
}
};
export default accept;

View File

@@ -1,20 +0,0 @@
import axios from 'api';
import { ErrorResponseHandlerV2 } from 'api/ErrorResponseHandlerV2';
import { AxiosError } from 'axios';
import { ErrorV2Resp, SuccessResponseV2 } from 'types/api';
import { Props } from 'types/api/user/deleteInvite';
const del = async (props: Props): Promise<SuccessResponseV2<null>> => {
try {
const response = await axios.delete(`/invite/${props.id}`);
return {
httpStatusCode: response.status,
data: null,
};
} catch (error) {
ErrorResponseHandlerV2(error as AxiosError<ErrorV2Resp>);
}
};
export default del;

Some files were not shown because too many files have changed in this diff.