Compare commits

..

106 Commits

Author SHA1 Message Date
Naman Verma
301d0103b0 Merge branch 'nv/delete-v2-dashboard' into nv/patch-dashboard 2026-05-05 11:59:12 +05:30
Naman Verma
dc99772ee4 Merge branch 'nv/other-dashboard-v2-update-methods' into nv/delete-v2-dashboard 2026-05-05 11:58:45 +05:30
Naman Verma
80849ebfeb Merge branch 'nv/v2-dashboard-update' into nv/other-dashboard-v2-update-methods 2026-05-05 11:58:22 +05:30
Naman Verma
2c0c7240a4 Merge branch 'nv/v2-dashboard-get' into nv/v2-dashboard-update 2026-05-05 11:56:39 +05:30
Naman Verma
28cb0a8be7 Merge branch 'nv/v2-dashboard-create' into nv/v2-dashboard-get 2026-05-05 11:54:54 +05:30
Naman Verma
54832cad34 Merge branch 'nv/tags-for-dashboard-create' into nv/v2-dashboard-create 2026-05-05 11:54:38 +05:30
Naman Verma
a45178d709 Merge branch 'nv/dashboardv2' into nv/tags-for-dashboard-create 2026-05-05 11:54:21 +05:30
Naman Verma
c4224ecf72 Merge branch 'main' into nv/dashboardv2 2026-05-05 11:53:56 +05:30
Naman Verma
14927c89d3 feat: patch dashboard api 2026-05-05 09:22:25 +05:30
Pandey
8409a9798d fix(authdomain): nest config response, rename Updateable→Updatable, return Identifiable on create (#11176)
* fix(authdomain): nest config response, rename Updateable→Updatable, return Identifiable on create

Three small API-shape corrections on auth_domain:

- GettableAuthDomain previously embedded AuthDomainConfig, which
  flattened sso_enabled / saml_config / oidc_config / google_auth_config /
  role_mapping at the response root and made the response shape
  diverge from the request shape (PostableAuthDomain has them under
  `config`). Move it under a named `Config` field with a `config`
  json tag so request and response carry the same nested object.
- UpdateableAuthDomain → UpdatableAuthDomain (typo fix; aligns with
  UpdatableUser already in the codebase).
- CreateAuthDomain previously returned the full GettableAuthDomain;
  the only field clients actually need from the create response is
  the new ID. Switch to Identifiable so the contract states what the
  endpoint guarantees and clients re-Read for the full domain when
  needed.

Frontend schema and OpenAPI spec regenerated.

* fix(authdomain-frontend): adapt to nested config + Identifiable create response

Regenerate the orval client (`yarn generate:api`) and update the
auth-domain UI for the API shape changes from the previous commit:

- `record.ssoType`, `.ssoEnabled`, `.googleAuthConfig`, `.oidcConfig`,
  `.samlConfig`, `.roleMapping` are now nested under `record.config.*`
  in `AuthtypesGettableAuthDomainDTO` — update SSOEnforcementToggle,
  CreateEdit form initial-values, the list page's Configure button,
  and the auth-domain test mocks.
- `mockCreateSuccessResponse` now returns `{ id }` (Identifiable)
  instead of the full domain.

`yarn generate:api` ran clean: lint OK, tsgo OK.

* fix(authdomain): align CreateAuthDomain success code with handler + adjust integration test

The Create handler returns http.StatusCreated but the OpenAPI
annotation said StatusOK. Sync the annotation to 201, regenerate the
spec + frontend client.

The callbackauthn integration test (01_domain.py) still read
`domain["ssoType"]` off the GET response — now nested under
`domain["config"]["ssoType"]` after the previous shape change. Update
the assertion.
2026-05-04 20:44:41 +00:00
Vinicius Lourenço
de6e4890ae feat(query-search-v2): add initial expression support & store to manage (#11062)
* feat(query-search-v2): add initial expression support & store to manage

* fix(qbv2): format issue
2026-05-04 16:22:52 +00:00
Naman Verma
55487dde3a Merge branch 'nv/other-dashboard-v2-update-methods' into nv/delete-v2-dashboard 2026-05-04 19:27:40 +05:30
Naman Verma
fc5717af51 Merge branch 'nv/v2-dashboard-update' into nv/other-dashboard-v2-update-methods 2026-05-04 19:27:26 +05:30
Naman Verma
8bf650192e Merge branch 'nv/v2-dashboard-get' into nv/v2-dashboard-update 2026-05-04 19:27:14 +05:30
Naman Verma
f8fb7e5f8d Merge branch 'nv/v2-dashboard-create' into nv/v2-dashboard-get 2026-05-04 19:27:02 +05:30
Naman Verma
ff578f7d92 Merge branch 'nv/tags-for-dashboard-create' into nv/v2-dashboard-create 2026-05-04 19:26:49 +05:30
Naman Verma
cd630b1152 Merge branch 'nv/dashboardv2' into nv/tags-for-dashboard-create 2026-05-04 19:26:36 +05:30
Naman Verma
bd0842ac17 fix: query-less panels not allowed 2026-05-04 19:25:49 +05:30
Naman Verma
b3e3dd13b4 Merge branch 'nv/other-dashboard-v2-update-methods' into nv/delete-v2-dashboard 2026-05-04 17:48:42 +05:30
Naman Verma
710d5531f3 Merge branch 'nv/v2-dashboard-update' into nv/other-dashboard-v2-update-methods 2026-05-04 17:45:15 +05:30
Naman Verma
e37e427079 fix: merge fixes 2026-05-04 17:40:46 +05:30
Naman Verma
1e99ab4659 Merge branch 'nv/v2-dashboard-get' into nv/v2-dashboard-update 2026-05-04 17:40:26 +05:30
Naman Verma
3353cda021 Merge branch 'nv/v2-dashboard-create' into nv/v2-dashboard-get 2026-05-04 17:35:33 +05:30
Naman Verma
f5a71037bf Merge branch 'nv/v2-dashboard-create' into nv/v2-dashboard-get 2026-05-04 17:33:03 +05:30
Naman Verma
97b85c386a fix: no v2 package and its consequences 2026-05-04 17:27:58 +05:30
Naman Verma
00bdf50c1c fix: no v2 package and its consequences 2026-05-04 17:26:12 +05:30
Naman Verma
5dec4ec580 Merge branch 'nv/tags-for-dashboard-create' into nv/v2-dashboard-create 2026-05-04 17:18:39 +05:30
Naman Verma
325767c240 Merge branch 'nv/dashboardv2' into nv/tags-for-dashboard-create 2026-05-04 17:17:32 +05:30
Naman Verma
5fed2a4585 chore: no v2 subpackage 2026-05-04 17:16:39 +05:30
Naman Verma
664337ae0f Merge branch 'nv/tags-for-dashboard-create' into nv/v2-dashboard-create 2026-05-04 16:19:29 +05:30
Naman Verma
a0ea276681 Merge branch 'nv/dashboardv2' into nv/tags-for-dashboard-create 2026-05-04 16:18:03 +05:30
Naman Verma
2dc8699f08 fix: wrap errors 2026-05-04 14:55:38 +05:30
Naman Verma
ed81ed8ab5 fix: no need for copying textboxvariablespec 2026-05-04 14:44:42 +05:30
Naman Verma
48c9da19df fix: return 500 err if spec is nil for composite kind w/ code comment 2026-05-04 14:34:16 +05:30
Naman Verma
eb9663d518 fix: remove extra (un)marshal cycle 2026-05-04 14:18:37 +05:30
Naman Verma
a56a862338 fix: add allowed values in err messages 2026-05-04 14:16:22 +05:30
Naman Verma
021f33f65e Merge branch 'main' into nv/dashboardv2 2026-05-04 12:52:31 +05:30
Naman Verma
ca96c71146 feat: delete dashboard v2 API and hard delete cron job 2026-05-03 15:01:43 +05:30
Naman Verma
de2909d1d1 feat: lock, unlock, create public, update public v2 dashboard APIs 2026-05-03 14:38:24 +05:30
Naman Verma
f311fcabf7 feat: v2 dashboard update API 2026-04-29 18:39:48 +05:30
Naman Verma
a37c07f881 feat: v2 dashboard GET API 2026-04-29 15:12:47 +05:30
Naman Verma
4d9386f418 fix: merge conflicts 2026-04-29 14:36:39 +05:30
Naman Verma
737473521d Merge branch 'nv/tags-for-dashboard-create' into nv/v2-dashboard-create 2026-04-29 14:33:25 +05:30
Naman Verma
1863db8ba8 Merge branch 'nv/dashboardv2' into nv/tags-for-dashboard-create 2026-04-29 14:32:14 +05:30
Naman Verma
661af09a13 Merge branch 'main' into nv/dashboardv2 2026-04-29 14:31:59 +05:30
Naman Verma
6024fa2b91 fix: remove extra spec from builder query marshalling 2026-04-29 14:31:16 +05:30
Naman Verma
8996a96387 chore: use existing mapper 2026-04-29 14:09:34 +05:30
Naman Verma
d6db5c2aab test: integration test fixes 2026-04-29 12:56:14 +05:30
Naman Verma
709590ea1b test: integration tests for create API 2026-04-29 12:23:12 +05:30
Naman Verma
1add46b4c5 fix: module should also validate postable dashboard 2026-04-28 20:05:38 +05:30
Naman Verma
8401261e20 Merge branch 'nv/tags-for-dashboard-create' into nv/v2-dashboard-create 2026-04-28 20:04:33 +05:30
Naman Verma
0ff34a7274 Merge branch 'nv/dashboardv2' into nv/tags-for-dashboard-create 2026-04-28 20:04:08 +05:30
Naman Verma
44e3bd9608 chore: separate method for validation 2026-04-28 20:03:48 +05:30
Naman Verma
c3944d779e fix: more dashboard request validations 2026-04-28 19:59:11 +05:30
Naman Verma
f5ec783a53 fix: go lint fix 2026-04-28 19:33:28 +05:30
Naman Verma
35b729c425 Merge branch 'nv/tags-for-dashboard-create' into nv/v2-dashboard-create 2026-04-28 19:30:42 +05:30
Naman Verma
4f43c3d803 fix: use existing tag's casing if new tag is a prefix of an existing tag 2026-04-28 19:30:07 +05:30
Naman Verma
5dbde6c64d fix: only return name of a tag in dashboard response 2026-04-28 19:13:03 +05:30
Naman Verma
fb6fdd54ec feat: v2 create dashboard API 2026-04-28 15:05:29 +05:30
Naman Verma
64b8ba62da Merge branch 'nv/dashboardv2' into nv/tags-for-dashboard-create 2026-04-28 15:04:11 +05:30
Naman Verma
7c66df408b Merge branch 'main' into nv/dashboardv2 2026-04-28 15:04:03 +05:30
Naman Verma
54049de391 chore: follow proper unmarshal json method structure 2026-04-28 15:02:49 +05:30
Naman Verma
a82f4237c8 Merge branch 'nv/dashboardv2' into nv/tags-for-dashboard-create 2026-04-28 09:52:23 +05:30
Naman Verma
89606b6238 Merge branch 'main' into nv/dashboardv2 2026-04-28 09:52:13 +05:30
Naman Verma
db5ce958eb Merge branch 'nv/dashboardv2' into nv/tags-for-dashboard-create 2026-04-28 09:49:01 +05:30
Naman Verma
c8d3a9a54b feat: enum for entity type that other modules can register 2026-04-28 09:47:24 +05:30
Naman Verma
637870b1fc feat: define tags module for v2 dashboard creation 2026-04-27 22:14:47 +05:30
Naman Verma
d46a7e24c9 Merge branch 'main' into nv/dashboardv2 2026-04-27 22:12:12 +05:30
Naman Verma
2a451e1c31 test: test for drift detection mechanics 2026-04-27 18:57:41 +05:30
Naman Verma
60b6d1d890 chore: better method name extractKindAndSpec 2026-04-27 18:42:31 +05:30
Naman Verma
36f755b232 chore: cleanup comments 2026-04-27 18:39:41 +05:30
Naman Verma
c1b3e3683a chore: code movement 2026-04-27 18:37:57 +05:30
Naman Verma
4c68544b1a chore: go lint fix (godot) 2026-04-27 18:37:05 +05:30
Naman Verma
90d9ab95f9 chore: code movement 2026-04-27 18:35:18 +05:30
Naman Verma
065e712e0c chore: code movement 2026-04-27 18:33:48 +05:30
Naman Verma
50db309ecd chore: code movement 2026-04-27 18:32:41 +05:30
Naman Verma
261bc552b0 chore: cleanup testing code 2026-04-27 18:24:52 +05:30
Naman Verma
bab720e98b Merge branch 'main' into nv/dashboardv2 2026-04-27 18:21:09 +05:30
Naman Verma
71fef6636b chore: better method name 2026-04-27 18:18:14 +05:30
Naman Verma
fc3cdecbbb chore: cleaner comment 2026-04-27 18:15:21 +05:30
Naman Verma
860fcfa641 chore: cleaner comment 2026-04-27 18:14:27 +05:30
Naman Verma
a090e3a4aa chore: cleaner comment 2026-04-27 18:14:02 +05:30
Naman Verma
6cf73e2ade chore: better comment to explain what restrictKindToLiteral does 2026-04-27 18:13:34 +05:30
Naman Verma
bbcb6a45d6 chore: renames and code rearrangement 2026-04-27 17:53:54 +05:30
Naman Verma
d13934febc fix: remove textbox plugin from openapi spec 2026-04-27 17:29:36 +05:30
Naman Verma
d5a7b7523d fix: strict decode variable spec as well 2026-04-27 17:27:51 +05:30
Naman Verma
5b8984f131 Merge branch 'main' into nv/dashboardv2 2026-04-27 17:18:44 +05:30
Naman Verma
6ddc5f1f12 chore: better error messages 2026-04-27 17:18:11 +05:30
Naman Verma
055968bfad fix: dot at the end of a comment 2026-04-27 17:07:58 +05:30
Naman Verma
1bf0f38ed9 fix: js lint errors 2026-04-27 17:07:38 +05:30
Naman Verma
842125e20a chore: too many comments 2026-04-27 16:50:41 +05:30
Naman Verma
6dab35caf8 chore: better file name 2026-04-27 16:43:42 +05:30
Naman Verma
047e9e2001 chore: better file names 2026-04-27 16:42:31 +05:30
Naman Verma
45eaa7db58 test: add tests for spec wrappers 2026-04-27 16:36:27 +05:30
Naman Verma
8a3d894eba chore: comment cleanup 2026-04-27 16:32:29 +05:30
Naman Verma
5239060b53 chore: move plugin maps to correct file 2026-04-27 16:30:33 +05:30
Naman Verma
42c6f507ac test: more descriptive test file name 2026-04-27 15:42:54 +05:30
Naman Verma
1b695a0b80 chore: separate file for perses replicas 2026-04-27 15:42:21 +05:30
Naman Verma
438cfab155 chore: comment clean up 2026-04-27 15:39:46 +05:30
Naman Verma
69f7617e01 Merge branch 'main' into nv/dashboardv2 2026-04-27 15:36:58 +05:30
Naman Verma
4420a7e1fc test: much bigger json for data column 2026-04-24 22:16:03 +05:30
Naman Verma
b4bc68c5c5 test: data column in perf tests should match real data 2026-04-24 17:17:37 +05:30
Naman Verma
eb9eb317cc test: perf test script for both sql flavours 2026-04-23 17:14:33 +05:30
Naman Verma
0b1eb16a42 test: fixes in dashboard perf testing data generator 2026-04-23 15:42:58 +05:30
Naman Verma
05a4d12183 test: script to generate test dashboard data in a sql db 2026-04-23 14:19:58 +05:30
Naman Verma
bbaf64c4f0 feat: openapi spec generation 2026-04-21 13:41:06 +05:30
112 changed files with 7095 additions and 4754 deletions


@@ -18,17 +18,16 @@ import (
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/flagger"
"github.com/SigNoz/signoz/pkg/gateway"
"github.com/SigNoz/signoz/pkg/gateway/noopgateway"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/licensing/nooplicensing"
"github.com/SigNoz/signoz/pkg/meterreporter"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration/implcloudintegration"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
"github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/tag"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/rulestatehistory"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount"
@@ -102,8 +101,8 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
return openfgaauthz.NewProviderFactory(sqlstore, openfgaschema.NewSchema().Get(ctx), openfgaDataStore), nil
},
func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, queryParser queryparser.QueryParser, _ querier.Querier, _ licensing.Licensing) dashboard.Module {
return impldashboard.NewModule(impldashboard.NewStore(store), settings, analytics, orgGetter, queryParser)
func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, queryParser queryparser.QueryParser, _ querier.Querier, _ licensing.Licensing, tagModule tag.Module) dashboard.Module {
return impldashboard.NewModule(impldashboard.NewStore(store), store, settings, analytics, orgGetter, queryParser, tagModule)
},
func(_ licensing.Licensing) factory.ProviderFactory[gateway.Gateway, gateway.Config] {
return noopgateway.NewProviderFactory()
@@ -111,9 +110,6 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
func(_ licensing.Licensing) factory.NamedMap[factory.ProviderFactory[auditor.Auditor, auditor.Config]] {
return signoz.NewAuditorProviderFactories()
},
func(_ context.Context, _ flagger.Flagger, _ licensing.Licensing, _ telemetrystore.TelemetryStore, _ sqlstore.SQLStore, _ organization.Getter, _ zeus.Zeus) (factory.NamedMap[factory.ProviderFactory[meterreporter.Reporter, meterreporter.Config]], string) {
return signoz.NewMeterReporterProviderFactories(), "noop"
},
func(ps factory.ProviderSettings, q querier.Querier, a analytics.Analytics) querier.Handler {
return querier.NewHandler(ps, q, a)
},


@@ -17,14 +17,6 @@ import (
"github.com/SigNoz/signoz/ee/gateway/httpgateway"
enterpriselicensing "github.com/SigNoz/signoz/ee/licensing"
"github.com/SigNoz/signoz/ee/licensing/httplicensing"
"github.com/SigNoz/signoz/ee/metercollector/baseplatformfeemetercollector"
"github.com/SigNoz/signoz/ee/metercollector/datapointcountmetercollector"
"github.com/SigNoz/signoz/ee/metercollector/datapointsizemetercollector"
"github.com/SigNoz/signoz/ee/metercollector/logcountmetercollector"
"github.com/SigNoz/signoz/ee/metercollector/logsizemetercollector"
"github.com/SigNoz/signoz/ee/metercollector/spancountmetercollector"
"github.com/SigNoz/signoz/ee/metercollector/spansizemetercollector"
"github.com/SigNoz/signoz/ee/meterreporter/httpmeterreporter"
"github.com/SigNoz/signoz/ee/modules/cloudintegration/implcloudintegration"
"github.com/SigNoz/signoz/ee/modules/cloudintegration/implcloudintegration/implcloudprovider"
"github.com/SigNoz/signoz/ee/modules/dashboard/impldashboard"
@@ -43,16 +35,14 @@ import (
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
pkgflagger "github.com/SigNoz/signoz/pkg/flagger"
"github.com/SigNoz/signoz/pkg/gateway"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/meterreporter"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration"
pkgcloudintegration "github.com/SigNoz/signoz/pkg/modules/cloudintegration/implcloudintegration"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
pkgimpldashboard "github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/tag"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/rulestatehistory"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount"
@@ -68,10 +58,7 @@ import (
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/cloudintegrationtypes"
"github.com/SigNoz/signoz/pkg/types/featuretypes"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/SigNoz/signoz/pkg/version"
"github.com/SigNoz/signoz/pkg/zeus"
)
@@ -158,8 +145,8 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
}
return openfgaauthz.NewProviderFactory(sqlstore, openfgaschema.NewSchema().Get(ctx), openfgaDataStore, licensing, onBeforeRoleDelete, dashboardModule), nil
},
func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, queryParser queryparser.QueryParser, querier querier.Querier, licensing licensing.Licensing) dashboard.Module {
return impldashboard.NewModule(pkgimpldashboard.NewStore(store), settings, analytics, orgGetter, queryParser, querier, licensing)
func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, queryParser queryparser.QueryParser, querier querier.Querier, licensing licensing.Licensing, tagModule tag.Module) dashboard.Module {
return impldashboard.NewModule(pkgimpldashboard.NewStore(store), store, settings, analytics, orgGetter, queryParser, querier, licensing, tagModule)
},
func(licensing licensing.Licensing) factory.ProviderFactory[gateway.Gateway, gateway.Config] {
return httpgateway.NewProviderFactory(licensing)
@@ -171,19 +158,6 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
}
return factories
},
func(ctx context.Context, flagger pkgflagger.Flagger, licensing licensing.Licensing, telemetryStore telemetrystore.TelemetryStore, sqlStore sqlstore.SQLStore, orgGetter organization.Getter, zeus zeus.Zeus) (factory.NamedMap[factory.ProviderFactory[meterreporter.Reporter, meterreporter.Config]], string) {
factories := signoz.NewMeterReporterProviderFactories()
if err := factories.Add(httpmeterreporter.NewFactory(newMeterCollectors(licensing, telemetryStore, sqlStore), licensing, telemetryStore, orgGetter, zeus)); err != nil {
panic(err)
}
evalCtx := featuretypes.NewFlaggerEvaluationContext(valuer.UUID{})
if flagger.BooleanOrEmpty(ctx, pkgflagger.FeatureUseMeterReporter, evalCtx) {
return factories, "http"
}
return factories, "noop"
},
func(ps factory.ProviderSettings, q querier.Querier, a analytics.Analytics) querier.Handler {
communityHandler := querier.NewHandler(ps, q, a)
return eequerier.NewHandler(ps, q, communityHandler)
@@ -243,15 +217,3 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
return nil
}
func newMeterCollectors(licensing licensing.Licensing, telemetryStore telemetrystore.TelemetryStore, sqlStore sqlstore.SQLStore) map[metercollectortypes.Name]metercollector.MeterCollector {
return map[metercollectortypes.Name]metercollector.MeterCollector{
baseplatformfeemetercollector.MeterName: baseplatformfeemetercollector.New(licensing),
logcountmetercollector.MeterName: logcountmetercollector.New(telemetryStore, sqlStore),
logsizemetercollector.MeterName: logsizemetercollector.New(telemetryStore, sqlStore),
datapointcountmetercollector.MeterName: datapointcountmetercollector.New(telemetryStore, sqlStore),
datapointsizemetercollector.MeterName: datapointsizemetercollector.New(telemetryStore, sqlStore),
spancountmetercollector.MeterName: spancountmetercollector.New(telemetryStore, sqlStore),
spansizemetercollector.MeterName: spansizemetercollector.New(telemetryStore, sqlStore),
}
}


@@ -429,10 +429,3 @@ authz:
openfga:
# maximum tuples allowed per openfga write operation.
max_tuples_per_write: 100
##################### Meter Reporter #####################
meterreporter:
# The interval between collection ticks. Minimum 5m.
interval: 6h
# The per-tick timeout that bounds collect-and-ship work. Minimum 3m and must be less than interval.
timeout: 5m

File diff suppressed because it is too large.


@@ -1,58 +0,0 @@
// Package baseplatformfeemetercollector collects the license-derived base platform fee meter.
package baseplatformfeemetercollector
import (
"context"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
// MeterName is the typed registry key for this collector.
var (
MeterName = metercollectortypes.MustNewName("signoz.meter.base.platform.fee")
meterUnit = metercollectortypes.UnitCount
meterAggregation = metercollectortypes.AggregationMax
)
var _ metercollector.MeterCollector = (*Provider)(nil)
// Provider collects base platform fee meters.
type Provider struct {
licensing licensing.Licensing
}
func New(licensing licensing.Licensing) *Provider {
return &Provider{licensing: licensing}
}
func (p *Provider) Name() metercollectortypes.Name { return MeterName }
func (p *Provider) Unit() metercollectortypes.Unit { return meterUnit }
func (p *Provider) Aggregation() metercollectortypes.Aggregation {
return meterAggregation
}
// Collect emits value 1 when the org has an active license.
func (p *Provider) Collect(ctx context.Context, orgID valuer.UUID, window meterreportertypes.Window) ([]meterreportertypes.Meter, error) {
if !window.IsValid() {
return nil, errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeCollectFailed, "invalid window [%d, %d)", window.StartUnixMilli, window.EndUnixMilli)
}
license, err := p.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "fetch active license for base platform fee meter")
}
if license == nil || license.Key == "" {
return nil, nil
}
return []meterreportertypes.Meter{
meterreportertypes.NewMeter(MeterName, 1, meterUnit, meterAggregation, window, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
}),
}, nil
}


@@ -1,107 +0,0 @@
package baseplatformfeemetercollector
import (
"context"
"testing"
"time"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/types/licensetypes"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/stretchr/testify/require"
)
func TestCollectEmitsBasePlatformFeeMeterForValidLicense(t *testing.T) {
orgID := valuer.GenerateUUID()
window := completedWindow()
provider := New(&fakeLicensing{
license: &licensetypes.License{Key: "license-key"},
})
readings, err := provider.Collect(context.Background(), orgID, window)
require.NoError(t, err)
require.Equal(t, []meterreportertypes.Meter{
meterreportertypes.NewMeter(MeterName, 1, metercollectortypes.UnitCount, metercollectortypes.AggregationMax, window, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
}),
}, readings)
}
func TestCollectSkipsNilLicense(t *testing.T) {
readings, err := New(&fakeLicensing{}).Collect(context.Background(), valuer.GenerateUUID(), completedWindow())
require.NoError(t, err)
require.Empty(t, readings)
}
func TestProviderMetadata(t *testing.T) {
provider := New(&fakeLicensing{})
require.Equal(t, "signoz.meter.base.platform.fee", provider.Name().String())
require.Equal(t, metercollectortypes.UnitCount, provider.Unit())
require.Equal(t, metercollectortypes.AggregationMax, provider.Aggregation())
}
func TestCollectRejectsInvalidWindowBeforeLicensing(t *testing.T) {
readings, err := New(nil).Collect(context.Background(), valuer.GenerateUUID(), meterreportertypes.Window{})
require.Error(t, err)
require.Nil(t, readings)
}
func completedWindow() meterreportertypes.Window {
start := time.Date(2026, 5, 4, 0, 0, 0, 0, time.UTC)
return meterreportertypes.Window{
StartUnixMilli: start.UnixMilli(),
EndUnixMilli: start.AddDate(0, 0, 1).UnixMilli(),
IsCompleted: true,
}
}
var _ licensing.Licensing = (*fakeLicensing)(nil)
type fakeLicensing struct {
license *licensetypes.License
err error
}
func (f *fakeLicensing) Start(context.Context) error {
return nil
}
func (f *fakeLicensing) Stop(context.Context) error {
return nil
}
func (f *fakeLicensing) Validate(context.Context) error {
return nil
}
func (f *fakeLicensing) Activate(context.Context, valuer.UUID, string) error {
return nil
}
func (f *fakeLicensing) GetActive(context.Context, valuer.UUID) (*licensetypes.License, error) {
return f.license, f.err
}
func (f *fakeLicensing) Refresh(context.Context, valuer.UUID) error {
return nil
}
func (f *fakeLicensing) Checkout(context.Context, valuer.UUID, *licensetypes.PostableSubscription) (*licensetypes.GettableSubscription, error) {
return &licensetypes.GettableSubscription{}, nil
}
func (f *fakeLicensing) Portal(context.Context, valuer.UUID, *licensetypes.PostableSubscription) (*licensetypes.GettableSubscription, error) {
return &licensetypes.GettableSubscription{}, nil
}
func (f *fakeLicensing) GetFeatureFlags(context.Context, valuer.UUID) ([]*licensetypes.Feature, error) {
return nil, nil
}
func (f *fakeLicensing) Collect(context.Context, valuer.UUID) (map[string]any, error) {
return map[string]any{}, nil
}


@@ -1,276 +0,0 @@
// Package datapointcountmetercollector collects metric datapoint count meters
// by workspace and retention. Keep the query local to this meter.
package datapointcountmetercollector
import (
"context"
"fmt"
"sort"
"strconv"
"strings"
"github.com/huandu/go-sqlbuilder"
"github.com/SigNoz/signoz/ee/metercollector/retention"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrymeter"
"github.com/SigNoz/signoz/pkg/telemetrymetrics"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
// MeterName is the typed registry key for this collector.
var (
MeterName = metercollectortypes.MustNewName("signoz.meter.metric.datapoint.count")
meterUnit = metercollectortypes.UnitCount
meterAggregation = metercollectortypes.AggregationSum
)
var _ metercollector.MeterCollector = (*Provider)(nil)
// Provider collects datapoint count meters.
type Provider struct {
telemetryStore telemetrystore.TelemetryStore
sqlStore sqlstore.SQLStore
}
func New(telemetryStore telemetrystore.TelemetryStore, sqlStore sqlstore.SQLStore) *Provider {
return &Provider{
telemetryStore: telemetryStore,
sqlStore: sqlStore,
}
}
func (p *Provider) Name() metercollectortypes.Name { return MeterName }
func (p *Provider) Unit() metercollectortypes.Unit { return meterUnit }
func (p *Provider) Aggregation() metercollectortypes.Aggregation {
return meterAggregation
}
// Collect aggregates datapoint count for the window and emits an empty-day sentinel.
func (p *Provider) Collect(ctx context.Context, orgID valuer.UUID, window meterreportertypes.Window) ([]meterreportertypes.Meter, error) {
if !window.IsValid() {
return nil, errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeCollectFailed, "invalid window [%d, %d)", window.StartUnixMilli, window.EndUnixMilli)
}
meterName := MeterName.String()
slices, err := retention.LoadActiveSlices(
ctx,
p.sqlStore,
orgID,
telemetrymetrics.DBName+"."+telemetrymetrics.SamplesV4LocalTableName,
retentiontypes.DefaultMetricsRetentionDays,
window.StartUnixMilli, window.EndUnixMilli,
)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "load retention slices for meter %q", meterName)
}
type bucket struct {
dimensions map[string]string
value float64
}
accumulator := make(map[string]*bucket)
for _, slice := range slices {
query, args, dimensionColumns, err := buildQuery(meterName, slice)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "build retention query for meter %q", meterName)
}
rows, err := p.telemetryStore.ClickhouseDB().Query(ctx, query, args...)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "query meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
if err := func() error {
defer rows.Close()
for rows.Next() {
dimensionValues := make([]string, len(dimensionColumns))
var retentionDays int32
var retentionRuleIndex int32
var value float64
scanDest := make([]any, 0, len(dimensionValues)+3)
for i := range dimensionValues {
scanDest = append(scanDest, &dimensionValues[i])
}
scanDest = append(scanDest, &retentionDays, &retentionRuleIndex, &value)
if err := rows.Scan(scanDest...); err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "scan meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
dimensions, err := buildDimensions(orgID, int(retentionDays), int(retentionRuleIndex), dimensionColumns, dimensionValues, slice.Rules)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "build dimensions for meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
key := bucketKey(dimensions)
b, ok := accumulator[key]
if !ok {
b = &bucket{dimensions: dimensions}
accumulator[key] = b
}
b.value += value
}
if err := rows.Err(); err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "iterate meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
return nil
}(); err != nil {
return nil, err
}
}
meters := make([]meterreportertypes.Meter, 0, len(accumulator))
for _, b := range accumulator {
meters = append(meters, meterreportertypes.NewMeter(MeterName, b.value, meterUnit, meterAggregation, window, b.dimensions))
}
// Empty windows still emit a sentinel so checkpoints can advance.
if len(meters) == 0 && len(slices) > 0 {
meters = append(meters, meterreportertypes.NewMeter(MeterName, 0, meterUnit, meterAggregation, window, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: strconv.Itoa(slices[len(slices)-1].DefaultDays),
}))
}
return meters, nil
}
// buildQuery stays local because each meter owns its billing query.
func buildQuery(meterName string, slice retentiontypes.Slice) (string, []any, []dimensionColumn, error) {
retentionExpr, err := retention.BuildMultiIfSQL(slice.Rules, slice.DefaultDays)
if err != nil {
return "", nil, nil, err
}
retentionRuleIndexExpr, err := retention.BuildRuleIndexSQL(slice.Rules)
if err != nil {
return "", nil, nil, err
}
columns, err := dimensionColumnsFor(slice.Rules)
if err != nil {
return "", nil, nil, err
}
selects := make([]string, 0, len(columns)+3)
groupBy := make([]string, 0, len(columns)+2)
for _, column := range columns {
selects = append(selects, fmt.Sprintf("JSONExtractString(labels, '%s') AS %s", column.key, column.alias))
groupBy = append(groupBy, column.alias)
}
selects = append(selects,
retentionExpr+" AS retention_days",
retentionRuleIndexExpr+" AS retention_rule_index",
"ifNull(sum(value), 0) AS value",
)
groupBy = append(groupBy, "retention_days", "retention_rule_index")
sb := sqlbuilder.NewSelectBuilder()
sb.Select(selects...)
sb.From(telemetrymeter.DBName + "." + telemetrymeter.SamplesTableName)
sb.Where(
sb.Equal("metric_name", meterName),
sb.GTE("unix_milli", slice.StartMs),
sb.LT("unix_milli", slice.EndMs),
)
sb.GroupBy(groupBy...)
query, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
return query, args, columns, nil
}
type dimensionColumn struct {
key string
alias string
}
func dimensionColumnsFor(rules []retentiontypes.CustomRetentionRule) ([]dimensionColumn, error) {
dimensionKeys, err := retention.RuleDimensionKeys(rules)
if err != nil {
return nil, err
}
keys := make([]string, 0, len(dimensionKeys)+1)
keys = append(keys, metercollector.DimensionWorkspaceKeyID)
for _, key := range dimensionKeys {
if key == metercollector.DimensionWorkspaceKeyID {
continue
}
keys = append(keys, key)
}
columns := make([]dimensionColumn, len(keys))
for i, key := range keys {
columns[i] = dimensionColumn{key: key, alias: fmt.Sprintf("dim_%d", i)}
}
return columns, nil
}
func buildDimensions(
orgID valuer.UUID,
retentionDays int,
retentionRuleIndex int,
columns []dimensionColumn,
values []string,
rules []retentiontypes.CustomRetentionRule,
) (map[string]string, error) {
if len(columns) != len(values) {
return nil, errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "dimension column/value count mismatch: %d columns, %d values", len(columns), len(values))
}
valuesByKey := make(map[string]string, len(columns))
for i, column := range columns {
valuesByKey[column.key] = values[i]
}
dimensions := map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: strconv.Itoa(retentionDays),
}
addNonEmpty(dimensions, metercollector.DimensionWorkspaceKeyID, valuesByKey[metercollector.DimensionWorkspaceKeyID])
if retentionRuleIndex < 0 {
return dimensions, nil
}
if retentionRuleIndex >= len(rules) {
return nil, errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "retention rule index %d out of range for %d rules", retentionRuleIndex, len(rules))
}
for _, filter := range rules[retentionRuleIndex].Filters {
addNonEmpty(dimensions, filter.Key, valuesByKey[filter.Key])
}
return dimensions, nil
}
func addNonEmpty(dimensions map[string]string, key, value string) {
if value == "" {
return
}
dimensions[key] = value
}
func bucketKey(dimensions map[string]string) string {
keys := make([]string, 0, len(dimensions))
for key := range dimensions {
keys = append(keys, key)
}
sort.Strings(keys)
var b strings.Builder
for _, key := range keys {
value := dimensions[key]
b.WriteString(strconv.Itoa(len(key)))
b.WriteByte(':')
b.WriteString(key)
b.WriteByte('=')
b.WriteString(strconv.Itoa(len(value)))
b.WriteByte(':')
b.WriteString(value)
b.WriteByte(';')
}
return b.String()
}


@@ -1,67 +0,0 @@
package datapointcountmetercollector
import (
"context"
"testing"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/stretchr/testify/require"
)
func TestBuildDimensions(t *testing.T) {
orgID := valuer.GenerateUUID()
rules := []retentiontypes.CustomRetentionRule{{
Filters: []retentiontypes.FilterCondition{{
Key: "service.name",
Values: []string{"api"},
}},
TTLDays: 7,
}}
columns := []dimensionColumn{
{key: metercollector.DimensionWorkspaceKeyID, alias: "dim_0"},
{key: "service.name", alias: "dim_1"},
}
dimensions, err := buildDimensions(orgID, 30, 0, columns, []string{"workspace-1", "api"}, rules)
require.NoError(t, err)
require.Equal(t, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: "30",
metercollector.DimensionWorkspaceKeyID: "workspace-1",
"service.name": "api",
}, dimensions)
}
func TestProviderMetadata(t *testing.T) {
provider := New(nil, nil)
require.Equal(t, "signoz.meter.metric.datapoint.count", provider.Name().String())
require.Equal(t, metercollectortypes.UnitCount, provider.Unit())
require.Equal(t, metercollectortypes.AggregationSum, provider.Aggregation())
}
func TestBucketKeyIsStable(t *testing.T) {
first := bucketKey(map[string]string{
"service.name": "api",
metercollector.DimensionRetentionDays: "30",
metercollector.DimensionWorkspaceKeyID: "workspace-1",
})
second := bucketKey(map[string]string{
metercollector.DimensionWorkspaceKeyID: "workspace-1",
"service.name": "api",
metercollector.DimensionRetentionDays: "30",
})
require.Equal(t, first, second)
require.NotEmpty(t, first)
}
func TestCollectRejectsInvalidWindowBeforeQuerying(t *testing.T) {
readings, err := New(nil, nil).Collect(context.Background(), valuer.GenerateUUID(), meterreportertypes.Window{})
require.Error(t, err)
require.Nil(t, readings)
}


@@ -1,276 +0,0 @@
// Package datapointsizemetercollector collects metric datapoint size meters
// by workspace and retention. Keep the query local to this meter.
package datapointsizemetercollector
import (
"context"
"fmt"
"sort"
"strconv"
"strings"
"github.com/huandu/go-sqlbuilder"
"github.com/SigNoz/signoz/ee/metercollector/retention"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrymeter"
"github.com/SigNoz/signoz/pkg/telemetrymetrics"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
// MeterName is the typed registry key for this collector.
var (
MeterName = metercollectortypes.MustNewName("signoz.meter.metric.datapoint.size")
meterUnit = metercollectortypes.UnitBytes
meterAggregation = metercollectortypes.AggregationSum
)
var _ metercollector.MeterCollector = (*Provider)(nil)
// Provider collects datapoint size meters.
type Provider struct {
telemetryStore telemetrystore.TelemetryStore
sqlStore sqlstore.SQLStore
}
func New(telemetryStore telemetrystore.TelemetryStore, sqlStore sqlstore.SQLStore) *Provider {
return &Provider{
telemetryStore: telemetryStore,
sqlStore: sqlStore,
}
}
func (p *Provider) Name() metercollectortypes.Name { return MeterName }
func (p *Provider) Unit() metercollectortypes.Unit { return meterUnit }
func (p *Provider) Aggregation() metercollectortypes.Aggregation {
return meterAggregation
}
// Collect aggregates datapoint size for the window and emits an empty-day sentinel.
func (p *Provider) Collect(ctx context.Context, orgID valuer.UUID, window meterreportertypes.Window) ([]meterreportertypes.Meter, error) {
if !window.IsValid() {
return nil, errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeCollectFailed, "invalid window [%d, %d)", window.StartUnixMilli, window.EndUnixMilli)
}
meterName := MeterName.String()
slices, err := retention.LoadActiveSlices(
ctx,
p.sqlStore,
orgID,
telemetrymetrics.DBName+"."+telemetrymetrics.SamplesV4LocalTableName,
retentiontypes.DefaultMetricsRetentionDays,
window.StartUnixMilli, window.EndUnixMilli,
)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "load retention slices for meter %q", meterName)
}
type bucket struct {
dimensions map[string]string
value float64
}
accumulator := make(map[string]*bucket)
for _, slice := range slices {
query, args, dimensionColumns, err := buildQuery(meterName, slice)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "build retention query for meter %q", meterName)
}
rows, err := p.telemetryStore.ClickhouseDB().Query(ctx, query, args...)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "query meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
if err := func() error {
defer rows.Close()
for rows.Next() {
dimensionValues := make([]string, len(dimensionColumns))
var retentionDays int32
var retentionRuleIndex int32
var value float64
scanDest := make([]any, 0, len(dimensionValues)+3)
for i := range dimensionValues {
scanDest = append(scanDest, &dimensionValues[i])
}
scanDest = append(scanDest, &retentionDays, &retentionRuleIndex, &value)
if err := rows.Scan(scanDest...); err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "scan meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
dimensions, err := buildDimensions(orgID, int(retentionDays), int(retentionRuleIndex), dimensionColumns, dimensionValues, slice.Rules)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "build dimensions for meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
key := bucketKey(dimensions)
b, ok := accumulator[key]
if !ok {
b = &bucket{dimensions: dimensions}
accumulator[key] = b
}
b.value += value
}
if err := rows.Err(); err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "iterate meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
return nil
}(); err != nil {
return nil, err
}
}
meters := make([]meterreportertypes.Meter, 0, len(accumulator))
for _, b := range accumulator {
meters = append(meters, meterreportertypes.NewMeter(MeterName, b.value, meterUnit, meterAggregation, window, b.dimensions))
}
// Empty windows still emit a sentinel so checkpoints can advance.
if len(meters) == 0 && len(slices) > 0 {
meters = append(meters, meterreportertypes.NewMeter(MeterName, 0, meterUnit, meterAggregation, window, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: strconv.Itoa(slices[len(slices)-1].DefaultDays),
}))
}
return meters, nil
}
// buildQuery stays local because each meter owns its billing query.
func buildQuery(meterName string, slice retentiontypes.Slice) (string, []any, []dimensionColumn, error) {
retentionExpr, err := retention.BuildMultiIfSQL(slice.Rules, slice.DefaultDays)
if err != nil {
return "", nil, nil, err
}
retentionRuleIndexExpr, err := retention.BuildRuleIndexSQL(slice.Rules)
if err != nil {
return "", nil, nil, err
}
columns, err := dimensionColumnsFor(slice.Rules)
if err != nil {
return "", nil, nil, err
}
selects := make([]string, 0, len(columns)+3)
groupBy := make([]string, 0, len(columns)+2)
for _, column := range columns {
selects = append(selects, fmt.Sprintf("JSONExtractString(labels, '%s') AS %s", column.key, column.alias))
groupBy = append(groupBy, column.alias)
}
selects = append(selects,
retentionExpr+" AS retention_days",
retentionRuleIndexExpr+" AS retention_rule_index",
"ifNull(sum(value), 0) AS value",
)
groupBy = append(groupBy, "retention_days", "retention_rule_index")
sb := sqlbuilder.NewSelectBuilder()
sb.Select(selects...)
sb.From(telemetrymeter.DBName + "." + telemetrymeter.SamplesTableName)
sb.Where(
sb.Equal("metric_name", meterName),
sb.GTE("unix_milli", slice.StartMs),
sb.LT("unix_milli", slice.EndMs),
)
sb.GroupBy(groupBy...)
query, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
return query, args, columns, nil
}
type dimensionColumn struct {
key string
alias string
}
func dimensionColumnsFor(rules []retentiontypes.CustomRetentionRule) ([]dimensionColumn, error) {
dimensionKeys, err := retention.RuleDimensionKeys(rules)
if err != nil {
return nil, err
}
keys := make([]string, 0, len(dimensionKeys)+1)
keys = append(keys, metercollector.DimensionWorkspaceKeyID)
for _, key := range dimensionKeys {
if key == metercollector.DimensionWorkspaceKeyID {
continue
}
keys = append(keys, key)
}
columns := make([]dimensionColumn, len(keys))
for i, key := range keys {
columns[i] = dimensionColumn{key: key, alias: fmt.Sprintf("dim_%d", i)}
}
return columns, nil
}
func buildDimensions(
orgID valuer.UUID,
retentionDays int,
retentionRuleIndex int,
columns []dimensionColumn,
values []string,
rules []retentiontypes.CustomRetentionRule,
) (map[string]string, error) {
if len(columns) != len(values) {
return nil, errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "dimension column/value count mismatch: %d columns, %d values", len(columns), len(values))
}
valuesByKey := make(map[string]string, len(columns))
for i, column := range columns {
valuesByKey[column.key] = values[i]
}
dimensions := map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: strconv.Itoa(retentionDays),
}
addNonEmpty(dimensions, metercollector.DimensionWorkspaceKeyID, valuesByKey[metercollector.DimensionWorkspaceKeyID])
if retentionRuleIndex < 0 {
return dimensions, nil
}
if retentionRuleIndex >= len(rules) {
return nil, errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "retention rule index %d out of range for %d rules", retentionRuleIndex, len(rules))
}
for _, filter := range rules[retentionRuleIndex].Filters {
addNonEmpty(dimensions, filter.Key, valuesByKey[filter.Key])
}
return dimensions, nil
}
func addNonEmpty(dimensions map[string]string, key, value string) {
if value == "" {
return
}
dimensions[key] = value
}
func bucketKey(dimensions map[string]string) string {
keys := make([]string, 0, len(dimensions))
for key := range dimensions {
keys = append(keys, key)
}
sort.Strings(keys)
var b strings.Builder
for _, key := range keys {
value := dimensions[key]
b.WriteString(strconv.Itoa(len(key)))
b.WriteByte(':')
b.WriteString(key)
b.WriteByte('=')
b.WriteString(strconv.Itoa(len(value)))
b.WriteByte(':')
b.WriteString(value)
b.WriteByte(';')
}
return b.String()
}


@@ -1,67 +0,0 @@
package datapointsizemetercollector
import (
"context"
"testing"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/stretchr/testify/require"
)
func TestBuildDimensions(t *testing.T) {
orgID := valuer.GenerateUUID()
rules := []retentiontypes.CustomRetentionRule{{
Filters: []retentiontypes.FilterCondition{{
Key: "service.name",
Values: []string{"api"},
}},
TTLDays: 7,
}}
columns := []dimensionColumn{
{key: metercollector.DimensionWorkspaceKeyID, alias: "dim_0"},
{key: "service.name", alias: "dim_1"},
}
dimensions, err := buildDimensions(orgID, 30, 0, columns, []string{"workspace-1", "api"}, rules)
require.NoError(t, err)
require.Equal(t, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: "30",
metercollector.DimensionWorkspaceKeyID: "workspace-1",
"service.name": "api",
}, dimensions)
}
func TestProviderMetadata(t *testing.T) {
provider := New(nil, nil)
require.Equal(t, "signoz.meter.metric.datapoint.size", provider.Name().String())
require.Equal(t, metercollectortypes.UnitBytes, provider.Unit())
require.Equal(t, metercollectortypes.AggregationSum, provider.Aggregation())
}
func TestBucketKeyIsStable(t *testing.T) {
first := bucketKey(map[string]string{
"service.name": "api",
metercollector.DimensionRetentionDays: "30",
metercollector.DimensionWorkspaceKeyID: "workspace-1",
})
second := bucketKey(map[string]string{
metercollector.DimensionWorkspaceKeyID: "workspace-1",
"service.name": "api",
metercollector.DimensionRetentionDays: "30",
})
require.Equal(t, first, second)
require.NotEmpty(t, first)
}
func TestCollectRejectsInvalidWindowBeforeQuerying(t *testing.T) {
readings, err := New(nil, nil).Collect(context.Background(), valuer.GenerateUUID(), meterreportertypes.Window{})
require.Error(t, err)
require.Nil(t, readings)
}


@@ -1,276 +0,0 @@
// Package logcountmetercollector collects log count meters by workspace and
// retention. Keep the query local to this meter.
package logcountmetercollector
import (
"context"
"fmt"
"sort"
"strconv"
"strings"
"github.com/huandu/go-sqlbuilder"
"github.com/SigNoz/signoz/ee/metercollector/retention"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrylogs"
"github.com/SigNoz/signoz/pkg/telemetrymeter"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
// MeterName is the typed registry key for this collector.
var (
MeterName = metercollectortypes.MustNewName("signoz.meter.log.count")
meterUnit = metercollectortypes.UnitCount
meterAggregation = metercollectortypes.AggregationSum
)
var _ metercollector.MeterCollector = (*Provider)(nil)
// Provider collects log count meters.
type Provider struct {
telemetryStore telemetrystore.TelemetryStore
sqlStore sqlstore.SQLStore
}
func New(telemetryStore telemetrystore.TelemetryStore, sqlStore sqlstore.SQLStore) *Provider {
return &Provider{
telemetryStore: telemetryStore,
sqlStore: sqlStore,
}
}
func (p *Provider) Name() metercollectortypes.Name { return MeterName }
func (p *Provider) Unit() metercollectortypes.Unit { return meterUnit }
func (p *Provider) Aggregation() metercollectortypes.Aggregation {
return meterAggregation
}
// Collect aggregates log count for the window and emits an empty-day sentinel.
func (p *Provider) Collect(ctx context.Context, orgID valuer.UUID, window meterreportertypes.Window) ([]meterreportertypes.Meter, error) {
if !window.IsValid() {
return nil, errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeCollectFailed, "invalid window [%d, %d)", window.StartUnixMilli, window.EndUnixMilli)
}
meterName := MeterName.String()
slices, err := retention.LoadActiveSlices(
ctx,
p.sqlStore,
orgID,
telemetrylogs.DBName+"."+telemetrylogs.LogsV2LocalTableName,
retentiontypes.DefaultLogsRetentionDays,
window.StartUnixMilli, window.EndUnixMilli,
)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "load retention slices for meter %q", meterName)
}
type bucket struct {
dimensions map[string]string
value float64
}
accumulator := make(map[string]*bucket)
for _, slice := range slices {
query, args, dimensionColumns, err := buildQuery(meterName, slice)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "build retention query for meter %q", meterName)
}
rows, err := p.telemetryStore.ClickhouseDB().Query(ctx, query, args...)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "query meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
if err := func() error {
defer rows.Close()
for rows.Next() {
dimensionValues := make([]string, len(dimensionColumns))
var retentionDays int32
var retentionRuleIndex int32
var value float64
scanDest := make([]any, 0, len(dimensionValues)+3)
for i := range dimensionValues {
scanDest = append(scanDest, &dimensionValues[i])
}
scanDest = append(scanDest, &retentionDays, &retentionRuleIndex, &value)
if err := rows.Scan(scanDest...); err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "scan meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
dimensions, err := buildDimensions(orgID, int(retentionDays), int(retentionRuleIndex), dimensionColumns, dimensionValues, slice.Rules)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "build dimensions for meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
key := bucketKey(dimensions)
b, ok := accumulator[key]
if !ok {
b = &bucket{dimensions: dimensions}
accumulator[key] = b
}
b.value += value
}
if err := rows.Err(); err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "iterate meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
return nil
}(); err != nil {
return nil, err
}
}
meters := make([]meterreportertypes.Meter, 0, len(accumulator))
for _, b := range accumulator {
meters = append(meters, meterreportertypes.NewMeter(MeterName, b.value, meterUnit, meterAggregation, window, b.dimensions))
}
// Empty windows still emit a sentinel so checkpoints can advance.
if len(meters) == 0 && len(slices) > 0 {
meters = append(meters, meterreportertypes.NewMeter(MeterName, 0, meterUnit, meterAggregation, window, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: strconv.Itoa(slices[len(slices)-1].DefaultDays),
}))
}
return meters, nil
}
// buildQuery stays local because each meter owns its billing query.
func buildQuery(meterName string, slice retentiontypes.Slice) (string, []any, []dimensionColumn, error) {
retentionExpr, err := retention.BuildMultiIfSQL(slice.Rules, slice.DefaultDays)
if err != nil {
return "", nil, nil, err
}
retentionRuleIndexExpr, err := retention.BuildRuleIndexSQL(slice.Rules)
if err != nil {
return "", nil, nil, err
}
columns, err := dimensionColumnsFor(slice.Rules)
if err != nil {
return "", nil, nil, err
}
selects := make([]string, 0, len(columns)+3)
groupBy := make([]string, 0, len(columns)+2)
for _, column := range columns {
selects = append(selects, fmt.Sprintf("JSONExtractString(labels, '%s') AS %s", column.key, column.alias))
groupBy = append(groupBy, column.alias)
}
selects = append(selects,
retentionExpr+" AS retention_days",
retentionRuleIndexExpr+" AS retention_rule_index",
"ifNull(sum(value), 0) AS value",
)
groupBy = append(groupBy, "retention_days", "retention_rule_index")
sb := sqlbuilder.NewSelectBuilder()
sb.Select(selects...)
sb.From(telemetrymeter.DBName + "." + telemetrymeter.SamplesTableName)
sb.Where(
sb.Equal("metric_name", meterName),
sb.GTE("unix_milli", slice.StartMs),
sb.LT("unix_milli", slice.EndMs),
)
sb.GroupBy(groupBy...)
query, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
return query, args, columns, nil
}
type dimensionColumn struct {
key string
alias string
}
func dimensionColumnsFor(rules []retentiontypes.CustomRetentionRule) ([]dimensionColumn, error) {
dimensionKeys, err := retention.RuleDimensionKeys(rules)
if err != nil {
return nil, err
}
keys := make([]string, 0, len(dimensionKeys)+1)
keys = append(keys, metercollector.DimensionWorkspaceKeyID)
for _, key := range dimensionKeys {
if key == metercollector.DimensionWorkspaceKeyID {
continue
}
keys = append(keys, key)
}
columns := make([]dimensionColumn, len(keys))
for i, key := range keys {
columns[i] = dimensionColumn{key: key, alias: fmt.Sprintf("dim_%d", i)}
}
return columns, nil
}
func buildDimensions(
orgID valuer.UUID,
retentionDays int,
retentionRuleIndex int,
columns []dimensionColumn,
values []string,
rules []retentiontypes.CustomRetentionRule,
) (map[string]string, error) {
if len(columns) != len(values) {
return nil, errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "dimension column/value count mismatch: %d columns, %d values", len(columns), len(values))
}
valuesByKey := make(map[string]string, len(columns))
for i, column := range columns {
valuesByKey[column.key] = values[i]
}
dimensions := map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: strconv.Itoa(retentionDays),
}
addNonEmpty(dimensions, metercollector.DimensionWorkspaceKeyID, valuesByKey[metercollector.DimensionWorkspaceKeyID])
if retentionRuleIndex < 0 {
return dimensions, nil
}
if retentionRuleIndex >= len(rules) {
return nil, errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "retention rule index %d out of range for %d rules", retentionRuleIndex, len(rules))
}
for _, filter := range rules[retentionRuleIndex].Filters {
addNonEmpty(dimensions, filter.Key, valuesByKey[filter.Key])
}
return dimensions, nil
}
func addNonEmpty(dimensions map[string]string, key, value string) {
if value == "" {
return
}
dimensions[key] = value
}
func bucketKey(dimensions map[string]string) string {
keys := make([]string, 0, len(dimensions))
for key := range dimensions {
keys = append(keys, key)
}
sort.Strings(keys)
var b strings.Builder
for _, key := range keys {
value := dimensions[key]
b.WriteString(strconv.Itoa(len(key)))
b.WriteByte(':')
b.WriteString(key)
b.WriteByte('=')
b.WriteString(strconv.Itoa(len(value)))
b.WriteByte(':')
b.WriteString(value)
b.WriteByte(';')
}
return b.String()
}


@@ -1,67 +0,0 @@
package logcountmetercollector
import (
"context"
"testing"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/stretchr/testify/require"
)
func TestBuildDimensions(t *testing.T) {
orgID := valuer.GenerateUUID()
rules := []retentiontypes.CustomRetentionRule{{
Filters: []retentiontypes.FilterCondition{{
Key: "service.name",
Values: []string{"api"},
}},
TTLDays: 7,
}}
columns := []dimensionColumn{
{key: metercollector.DimensionWorkspaceKeyID, alias: "dim_0"},
{key: "service.name", alias: "dim_1"},
}
dimensions, err := buildDimensions(orgID, 30, 0, columns, []string{"workspace-1", "api"}, rules)
require.NoError(t, err)
require.Equal(t, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: "30",
metercollector.DimensionWorkspaceKeyID: "workspace-1",
"service.name": "api",
}, dimensions)
}
func TestProviderMetadata(t *testing.T) {
provider := New(nil, nil)
require.Equal(t, "signoz.meter.log.count", provider.Name().String())
require.Equal(t, metercollectortypes.UnitCount, provider.Unit())
require.Equal(t, metercollectortypes.AggregationSum, provider.Aggregation())
}
func TestBucketKeyIsStable(t *testing.T) {
first := bucketKey(map[string]string{
"service.name": "api",
metercollector.DimensionRetentionDays: "30",
metercollector.DimensionWorkspaceKeyID: "workspace-1",
})
second := bucketKey(map[string]string{
metercollector.DimensionWorkspaceKeyID: "workspace-1",
"service.name": "api",
metercollector.DimensionRetentionDays: "30",
})
require.Equal(t, first, second)
require.NotEmpty(t, first)
}
func TestCollectRejectsInvalidWindowBeforeQuerying(t *testing.T) {
readings, err := New(nil, nil).Collect(context.Background(), valuer.GenerateUUID(), meterreportertypes.Window{})
require.Error(t, err)
require.Nil(t, readings)
}


@@ -1,276 +0,0 @@
// Package logsizemetercollector collects log size meters by workspace and
// retention. Keep the query local to this meter.
package logsizemetercollector
import (
"context"
"fmt"
"sort"
"strconv"
"strings"
"github.com/huandu/go-sqlbuilder"
"github.com/SigNoz/signoz/ee/metercollector/retention"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrylogs"
"github.com/SigNoz/signoz/pkg/telemetrymeter"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
// MeterName is the typed registry key for this collector.
var (
MeterName = metercollectortypes.MustNewName("signoz.meter.log.size")
meterUnit = metercollectortypes.UnitBytes
meterAggregation = metercollectortypes.AggregationSum
)
var _ metercollector.MeterCollector = (*Provider)(nil)
// Provider collects log size meters.
type Provider struct {
telemetryStore telemetrystore.TelemetryStore
sqlStore sqlstore.SQLStore
}
func New(telemetryStore telemetrystore.TelemetryStore, sqlStore sqlstore.SQLStore) *Provider {
return &Provider{
telemetryStore: telemetryStore,
sqlStore: sqlStore,
}
}
func (p *Provider) Name() metercollectortypes.Name { return MeterName }
func (p *Provider) Unit() metercollectortypes.Unit { return meterUnit }
func (p *Provider) Aggregation() metercollectortypes.Aggregation {
return meterAggregation
}
// Collect aggregates log size for the window and emits an empty-day sentinel.
func (p *Provider) Collect(ctx context.Context, orgID valuer.UUID, window meterreportertypes.Window) ([]meterreportertypes.Meter, error) {
if !window.IsValid() {
return nil, errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeCollectFailed, "invalid window [%d, %d)", window.StartUnixMilli, window.EndUnixMilli)
}
meterName := MeterName.String()
slices, err := retention.LoadActiveSlices(
ctx,
p.sqlStore,
orgID,
telemetrylogs.DBName+"."+telemetrylogs.LogsV2LocalTableName,
retentiontypes.DefaultLogsRetentionDays,
window.StartUnixMilli, window.EndUnixMilli,
)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "load retention slices for meter %q", meterName)
}
type bucket struct {
dimensions map[string]string
value float64
}
accumulator := make(map[string]*bucket)
for _, slice := range slices {
query, args, dimensionColumns, err := buildQuery(meterName, slice)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "build retention query for meter %q", meterName)
}
rows, err := p.telemetryStore.ClickhouseDB().Query(ctx, query, args...)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "query meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
if err := func() error {
defer rows.Close()
for rows.Next() {
dimensionValues := make([]string, len(dimensionColumns))
var retentionDays int32
var retentionRuleIndex int32
var value float64
scanDest := make([]any, 0, len(dimensionValues)+3)
for i := range dimensionValues {
scanDest = append(scanDest, &dimensionValues[i])
}
scanDest = append(scanDest, &retentionDays, &retentionRuleIndex, &value)
if err := rows.Scan(scanDest...); err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "scan meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
dimensions, err := buildDimensions(orgID, int(retentionDays), int(retentionRuleIndex), dimensionColumns, dimensionValues, slice.Rules)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "build dimensions for meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
key := bucketKey(dimensions)
b, ok := accumulator[key]
if !ok {
b = &bucket{dimensions: dimensions}
accumulator[key] = b
}
b.value += value
}
if err := rows.Err(); err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "iterate meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
return nil
}(); err != nil {
return nil, err
}
}
meters := make([]meterreportertypes.Meter, 0, len(accumulator))
for _, b := range accumulator {
meters = append(meters, meterreportertypes.NewMeter(MeterName, b.value, meterUnit, meterAggregation, window, b.dimensions))
}
// Empty windows still emit a sentinel so checkpoints can advance.
if len(meters) == 0 && len(slices) > 0 {
meters = append(meters, meterreportertypes.NewMeter(MeterName, 0, meterUnit, meterAggregation, window, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: strconv.Itoa(slices[len(slices)-1].DefaultDays),
}))
}
return meters, nil
}
// buildQuery stays local because each meter owns its billing query.
func buildQuery(meterName string, slice retentiontypes.Slice) (string, []any, []dimensionColumn, error) {
retentionExpr, err := retention.BuildMultiIfSQL(slice.Rules, slice.DefaultDays)
if err != nil {
return "", nil, nil, err
}
retentionRuleIndexExpr, err := retention.BuildRuleIndexSQL(slice.Rules)
if err != nil {
return "", nil, nil, err
}
columns, err := dimensionColumnsFor(slice.Rules)
if err != nil {
return "", nil, nil, err
}
selects := make([]string, 0, len(columns)+3)
groupBy := make([]string, 0, len(columns)+2)
for _, column := range columns {
selects = append(selects, fmt.Sprintf("JSONExtractString(labels, '%s') AS %s", column.key, column.alias))
groupBy = append(groupBy, column.alias)
}
selects = append(selects,
retentionExpr+" AS retention_days",
retentionRuleIndexExpr+" AS retention_rule_index",
"ifNull(sum(value), 0) AS value",
)
groupBy = append(groupBy, "retention_days", "retention_rule_index")
sb := sqlbuilder.NewSelectBuilder()
sb.Select(selects...)
sb.From(telemetrymeter.DBName + "." + telemetrymeter.SamplesTableName)
sb.Where(
sb.Equal("metric_name", meterName),
sb.GTE("unix_milli", slice.StartMs),
sb.LT("unix_milli", slice.EndMs),
)
sb.GroupBy(groupBy...)
query, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
return query, args, columns, nil
}
type dimensionColumn struct {
key string
alias string
}
func dimensionColumnsFor(rules []retentiontypes.CustomRetentionRule) ([]dimensionColumn, error) {
dimensionKeys, err := retention.RuleDimensionKeys(rules)
if err != nil {
return nil, err
}
keys := make([]string, 0, len(dimensionKeys)+1)
keys = append(keys, metercollector.DimensionWorkspaceKeyID)
for _, key := range dimensionKeys {
if key == metercollector.DimensionWorkspaceKeyID {
continue
}
keys = append(keys, key)
}
columns := make([]dimensionColumn, len(keys))
for i, key := range keys {
columns[i] = dimensionColumn{key: key, alias: fmt.Sprintf("dim_%d", i)}
}
return columns, nil
}
func buildDimensions(
orgID valuer.UUID,
retentionDays int,
retentionRuleIndex int,
columns []dimensionColumn,
values []string,
rules []retentiontypes.CustomRetentionRule,
) (map[string]string, error) {
if len(columns) != len(values) {
return nil, errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "dimension column/value count mismatch: %d columns, %d values", len(columns), len(values))
}
valuesByKey := make(map[string]string, len(columns))
for i, column := range columns {
valuesByKey[column.key] = values[i]
}
dimensions := map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: strconv.Itoa(retentionDays),
}
addNonEmpty(dimensions, metercollector.DimensionWorkspaceKeyID, valuesByKey[metercollector.DimensionWorkspaceKeyID])
if retentionRuleIndex < 0 {
return dimensions, nil
}
if retentionRuleIndex >= len(rules) {
return nil, errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "retention rule index %d out of range for %d rules", retentionRuleIndex, len(rules))
}
for _, filter := range rules[retentionRuleIndex].Filters {
addNonEmpty(dimensions, filter.Key, valuesByKey[filter.Key])
}
return dimensions, nil
}
func addNonEmpty(dimensions map[string]string, key, value string) {
if value == "" {
return
}
dimensions[key] = value
}
func bucketKey(dimensions map[string]string) string {
keys := make([]string, 0, len(dimensions))
for key := range dimensions {
keys = append(keys, key)
}
sort.Strings(keys)
var b strings.Builder
for _, key := range keys {
value := dimensions[key]
b.WriteString(strconv.Itoa(len(key)))
b.WriteByte(':')
b.WriteString(key)
b.WriteByte('=')
b.WriteString(strconv.Itoa(len(value)))
b.WriteByte(':')
b.WriteString(value)
b.WriteByte(';')
}
return b.String()
}
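The length-prefixed encoding in bucketKey is what keeps accumulator keys collision-free even when dimension values contain the separator characters. A minimal standalone sketch of the same scheme (encodeBucketKey is an illustrative name, not part of the package):

```go
package main

import (
	"sort"
	"strconv"
	"strings"
)

// encodeBucketKey mirrors bucketKey above: keys are sorted for a
// deterministic order, and every key and value is length-prefixed so
// that separator characters inside values cannot produce collisions.
func encodeBucketKey(dimensions map[string]string) string {
	keys := make([]string, 0, len(dimensions))
	for key := range dimensions {
		keys = append(keys, key)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, key := range keys {
		value := dimensions[key]
		b.WriteString(strconv.Itoa(len(key)))
		b.WriteByte(':')
		b.WriteString(key)
		b.WriteByte('=')
		b.WriteString(strconv.Itoa(len(value)))
		b.WriteByte(':')
		b.WriteString(value)
		b.WriteByte(';')
	}
	return b.String()
}

func main() {
	// A naive "k=v;" join would render both of these as "k=v;x=y;";
	// the length prefixes keep them distinct.
	a := encodeBucketKey(map[string]string{"k": "v;x=y"})
	b := encodeBucketKey(map[string]string{"k": "v", "x": "y"})
	if a == b {
		panic("collision")
	}
}
```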


@@ -1,67 +0,0 @@
package logsizemetercollector
import (
"context"
"testing"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/stretchr/testify/require"
)
func TestBuildDimensions(t *testing.T) {
orgID := valuer.GenerateUUID()
rules := []retentiontypes.CustomRetentionRule{{
Filters: []retentiontypes.FilterCondition{{
Key: "service.name",
Values: []string{"api"},
}},
TTLDays: 7,
}}
columns := []dimensionColumn{
{key: metercollector.DimensionWorkspaceKeyID, alias: "dim_0"},
{key: "service.name", alias: "dim_1"},
}
dimensions, err := buildDimensions(orgID, 30, 0, columns, []string{"workspace-1", "api"}, rules)
require.NoError(t, err)
require.Equal(t, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: "30",
metercollector.DimensionWorkspaceKeyID: "workspace-1",
"service.name": "api",
}, dimensions)
}
func TestProviderMetadata(t *testing.T) {
provider := New(nil, nil)
require.Equal(t, "signoz.meter.log.size", provider.Name().String())
require.Equal(t, metercollectortypes.UnitBytes, provider.Unit())
require.Equal(t, metercollectortypes.AggregationSum, provider.Aggregation())
}
func TestBucketKeyIsStable(t *testing.T) {
first := bucketKey(map[string]string{
"service.name": "api",
metercollector.DimensionRetentionDays: "30",
metercollector.DimensionWorkspaceKeyID: "workspace-1",
})
second := bucketKey(map[string]string{
metercollector.DimensionWorkspaceKeyID: "workspace-1",
"service.name": "api",
metercollector.DimensionRetentionDays: "30",
})
require.Equal(t, first, second)
require.NotEmpty(t, first)
}
func TestCollectRejectsInvalidWindowBeforeQuerying(t *testing.T) {
readings, err := New(nil, nil).Collect(context.Background(), valuer.GenerateUUID(), meterreportertypes.Window{})
require.Error(t, err)
require.Nil(t, readings)
}


@@ -1,255 +0,0 @@
// Package retention builds retention slices and SQL expressions for meters.
// Collectors still own their table names, defaults, and aggregation queries.
package retention
import (
"context"
"encoding/json"
"fmt"
"regexp"
"strconv"
"strings"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
const secondsPerDay = 24 * 60 * 60
// These values are inlined into SQL, so keep the allowlist strict.
var (
labelKeyPattern = regexp.MustCompile(`^[A-Za-z0-9_.\-]+$`)
labelValuePattern = regexp.MustCompile(`^[A-Za-z0-9_.\-:]+$`)
)
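Because these keys and values are interpolated directly into ClickHouse SQL rather than bound as parameters, the patterns double as an injection guard. A quick standalone check of what they accept (sample inputs are illustrative only):

```go
package main

import (
	"fmt"
	"regexp"
)

// Same allowlists as above: keys allow letters, digits, '_', '.', '-';
// values additionally allow ':' (e.g. "prod:eu"). Anything containing
// spaces or quotes is rejected before it can reach the SQL string.
var (
	labelKeyPattern   = regexp.MustCompile(`^[A-Za-z0-9_.\-]+$`)
	labelValuePattern = regexp.MustCompile(`^[A-Za-z0-9_.\-:]+$`)
)

func main() {
	fmt.Println(labelKeyPattern.MatchString("service.name"))  // true
	fmt.Println(labelKeyPattern.MatchString("service name"))  // false: space
	fmt.Println(labelValuePattern.MatchString("prod:eu"))     // true
	fmt.Println(labelValuePattern.MatchString("api' OR 1=1")) // false: quote
}
```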
// LoadActiveSlices returns TTL slices covering [startMs, endMs).
// tableName must be fully qualified, for example "signoz_logs.logs_v2".
func LoadActiveSlices(
ctx context.Context,
sqlstore sqlstore.SQLStore,
orgID valuer.UUID,
tableName string,
fallbackDefaultDays int,
startMs, endMs int64,
) ([]retentiontypes.Slice, error) {
if startMs >= endMs {
return nil, nil
}
if sqlstore == nil {
return nil, errors.New(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "sqlstore is nil")
}
if tableName == "" {
return nil, errors.New(errors.TypeInvalidInput, metercollector.ErrCodeCollectFailed, "tableName is empty")
}
if fallbackDefaultDays <= 0 {
return nil, errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeCollectFailed, "non-positive fallbackDefaultDays %d", fallbackDefaultDays)
}
rows := []*retentiontypes.TTLSetting{}
err := sqlstore.
BunDB().
NewSelect().
Model(&rows).
Where("table_name = ?", tableName).
Where("org_id = ?", orgID.StringValue()).
Where("status = ?", retentiontypes.TTLSettingStatusSuccess).
Where("created_at < ?", time.UnixMilli(endMs).UTC()).
OrderExpr("created_at ASC").
Scan(ctx)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "load ttl_setting rows for org %q table %q", orgID.StringValue(), tableName)
}
return buildSlicesFromRows(rows, fallbackDefaultDays, startMs, endMs)
}
func buildSlicesFromRows(rows []*retentiontypes.TTLSetting, fallbackDefaultDays int, startMs, endMs int64) ([]retentiontypes.Slice, error) {
if startMs >= endMs {
return nil, nil
}
// The latest row before the window is active at the window start.
var activeAtStart *retentiontypes.TTLSetting
inWindow := make([]*retentiontypes.TTLSetting, 0, len(rows))
for _, row := range rows {
rowMs := row.CreatedAt.UnixMilli()
if rowMs <= startMs {
activeAtStart = row
continue
}
if rowMs >= endMs {
continue
}
inWindow = append(inWindow, row)
}
activeRules, activeDefault, err := parseTTLSetting(activeAtStart, fallbackDefaultDays)
if err != nil {
return nil, err
}
slices := make([]retentiontypes.Slice, 0, len(inWindow)+1)
cursor := startMs
for _, row := range inWindow {
rowMs := row.CreatedAt.UnixMilli()
if rowMs <= cursor {
// Same-ms updates collapse: replace active config, no empty slice.
activeRules, activeDefault, err = parseTTLSetting(row, fallbackDefaultDays)
if err != nil {
return nil, err
}
continue
}
slices = append(slices, retentiontypes.Slice{
StartMs: cursor,
EndMs: rowMs,
Rules: activeRules,
DefaultDays: activeDefault,
})
cursor = rowMs
activeRules, activeDefault, err = parseTTLSetting(row, fallbackDefaultDays)
if err != nil {
return nil, err
}
}
if cursor < endMs {
slices = append(slices, retentiontypes.Slice{
StartMs: cursor,
EndMs: endMs,
Rules: activeRules,
DefaultDays: activeDefault,
})
}
return slices, nil
}
// parseTTLSetting returns rules and default days for one ttl_setting row.
func parseTTLSetting(row *retentiontypes.TTLSetting, fallbackDefaultDays int) ([]retentiontypes.CustomRetentionRule, int, error) {
if row == nil {
return nil, fallbackDefaultDays, nil
}
defaultDays := row.TTL
if row.Condition == "" {
// V1 stores seconds; round up to days.
defaultDays = (row.TTL + secondsPerDay - 1) / secondsPerDay
}
if defaultDays <= 0 {
defaultDays = fallbackDefaultDays
}
if row.Condition == "" {
return nil, defaultDays, nil
}
var rules []retentiontypes.CustomRetentionRule
if err := json.Unmarshal([]byte(row.Condition), &rules); err != nil {
return nil, 0, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "parse ttl_setting condition for row %q", row.ID.StringValue())
}
return rules, defaultDays, nil
}
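The v1 path in parseTTLSetting converts a TTL stored in seconds to whole days, rounding up so any partial day counts as a full day. The ceiling division can be checked in isolation (ceilDays is a sketch, not the package function):

```go
package main

import "fmt"

const secondsPerDay = 24 * 60 * 60

// ceilDays mirrors the v1 conversion in parseTTLSetting:
// (ttl + secondsPerDay - 1) / secondsPerDay rounds up in
// integer arithmetic, so a partial day becomes a full day.
func ceilDays(ttlSeconds int) int {
	return (ttlSeconds + secondsPerDay - 1) / secondsPerDay
}

func main() {
	// Exactly 7 days stays 7; one extra second rounds up to 8.
	fmt.Println(ceilDays(7*secondsPerDay), ceilDays(7*secondsPerDay+1)) // 7 8
}
```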
// BuildMultiIfSQL renders the retention-days expression for one slice.
func BuildMultiIfSQL(rules []retentiontypes.CustomRetentionRule, defaultDays int) (string, error) {
if defaultDays <= 0 {
return "", errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeCollectFailed, "non-positive default retention %d", defaultDays)
}
if len(rules) == 0 {
return "toInt32(" + strconv.Itoa(defaultDays) + ")", nil
}
arms := make([]string, 0, 2*len(rules)+1)
for ruleIndex, rule := range rules {
if rule.TTLDays <= 0 {
return "", errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "rule %d has non-positive ttl_days %d", ruleIndex, rule.TTLDays)
}
conditionExpr, err := buildRuleConditionSQL(ruleIndex, rule)
if err != nil {
return "", err
}
arms = append(arms, conditionExpr)
arms = append(arms, strconv.Itoa(rule.TTLDays))
}
arms = append(arms, strconv.Itoa(defaultDays))
return "toInt32(multiIf(" + strings.Join(arms, ", ") + "))", nil
}
// BuildRuleIndexSQL renders the matched rule index, or -1 for fallback.
func BuildRuleIndexSQL(rules []retentiontypes.CustomRetentionRule) (string, error) {
if len(rules) == 0 {
return "toInt32(-1)", nil
}
arms := make([]string, 0, 2*len(rules)+1)
for ruleIndex, rule := range rules {
conditionExpr, err := buildRuleConditionSQL(ruleIndex, rule)
if err != nil {
return "", err
}
arms = append(arms, conditionExpr)
arms = append(arms, strconv.Itoa(ruleIndex))
}
arms = append(arms, "-1")
return "toInt32(multiIf(" + strings.Join(arms, ", ") + "))", nil
}
func buildRuleConditionSQL(ruleIndex int, rule retentiontypes.CustomRetentionRule) (string, error) {
if len(rule.Filters) == 0 {
return "", errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "rule %d has no filters", ruleIndex)
}
filterExprs := make([]string, 0, len(rule.Filters))
for filterIndex, filter := range rule.Filters {
if !labelKeyPattern.MatchString(filter.Key) {
return "", errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "rule %d filter %d has invalid key %q", ruleIndex, filterIndex, filter.Key)
}
if len(filter.Values) == 0 {
return "", errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "rule %d filter %d has no values", ruleIndex, filterIndex)
}
quoted := make([]string, len(filter.Values))
for valueIndex, value := range filter.Values {
if !labelValuePattern.MatchString(value) {
return "", errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "rule %d filter %d value %d is invalid %q", ruleIndex, filterIndex, valueIndex, value)
}
quoted[valueIndex] = "'" + value + "'"
}
filterExprs = append(filterExprs, fmt.Sprintf("JSONExtractString(labels, '%s') IN (%s)", filter.Key, strings.Join(quoted, ", ")))
}
return strings.Join(filterExprs, " AND "), nil
}
// RuleDimensionKeys returns unique label keys referenced by retention rules.
func RuleDimensionKeys(rules []retentiontypes.CustomRetentionRule) ([]string, error) {
keys := make([]string, 0)
seen := make(map[string]struct{})
for ruleIndex, rule := range rules {
for filterIndex, filter := range rule.Filters {
if !labelKeyPattern.MatchString(filter.Key) {
return nil, errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "rule %d filter %d has invalid key %q", ruleIndex, filterIndex, filter.Key)
}
if _, ok := seen[filter.Key]; ok {
continue
}
seen[filter.Key] = struct{}{}
keys = append(keys, filter.Key)
}
}
return keys, nil
}
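The boundary walk in buildSlicesFromRows can be reduced to its core: every TTL change inside the half-open window closes the current slice and opens a new one, so N in-window changes yield N+1 slices. A minimal sketch of just that walk, with the active-config tracking and same-millisecond collapse simplified away (sliceBoundaries is an illustrative helper, not the package function):

```go
package main

import "fmt"

// sliceBoundaries returns the half-open [start, end) sub-ranges of
// [startMs, endMs) produced by splitting at each change timestamp.
// Changes at or before the cursor, or at/after endMs, do not split.
func sliceBoundaries(startMs, endMs int64, changeMs []int64) [][2]int64 {
	slices := make([][2]int64, 0, len(changeMs)+1)
	cursor := startMs
	for _, ms := range changeMs {
		if ms <= cursor || ms >= endMs {
			continue // no empty slice; in the real code only the config is replaced
		}
		slices = append(slices, [2]int64{cursor, ms})
		cursor = ms
	}
	if cursor < endMs {
		slices = append(slices, [2]int64{cursor, endMs})
	}
	return slices
}

func main() {
	// Two in-window changes split the window into three slices.
	fmt.Println(sliceBoundaries(0, 100, []int64{25, 75})) // [[0 25] [25 75] [75 100]]
}
```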


@@ -1,153 +0,0 @@
package retention
import (
"encoding/json"
"testing"
"time"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/stretchr/testify/require"
)
func TestBuildSlicesFromRows(t *testing.T) {
start := time.Date(2026, 5, 4, 0, 0, 0, 0, time.UTC)
end := start.AddDate(0, 0, 1)
ruleA := retentiontypes.CustomRetentionRule{
Filters: []retentiontypes.FilterCondition{{Key: "service.name", Values: []string{"api"}}},
TTLDays: 7,
}
ruleB := retentiontypes.CustomRetentionRule{
Filters: []retentiontypes.FilterCondition{{Key: "env", Values: []string{"prod"}}},
TTLDays: 15,
}
t.Run("row before window is active at start", func(t *testing.T) {
slices, err := buildSlicesFromRows(
[]*retentiontypes.TTLSetting{
ttlSetting(t, start.Add(-time.Hour), 45, []retentiontypes.CustomRetentionRule{ruleA}),
},
30,
start.UnixMilli(),
end.UnixMilli(),
)
require.NoError(t, err)
require.Equal(t, []retentiontypes.Slice{{
StartMs: start.UnixMilli(),
EndMs: end.UnixMilli(),
Rules: []retentiontypes.CustomRetentionRule{ruleA},
DefaultDays: 45,
}}, slices)
})
t.Run("row inside window splits slices", func(t *testing.T) {
firstChange := start.Add(6 * time.Hour)
secondChange := start.Add(18 * time.Hour)
slices, err := buildSlicesFromRows(
[]*retentiontypes.TTLSetting{
ttlSetting(t, firstChange, 21, []retentiontypes.CustomRetentionRule{ruleA}),
ttlSetting(t, secondChange, 14, []retentiontypes.CustomRetentionRule{ruleB}),
},
30,
start.UnixMilli(),
end.UnixMilli(),
)
require.NoError(t, err)
require.Equal(t, []retentiontypes.Slice{
{
StartMs: start.UnixMilli(),
EndMs: firstChange.UnixMilli(),
DefaultDays: 30,
},
{
StartMs: firstChange.UnixMilli(),
EndMs: secondChange.UnixMilli(),
Rules: []retentiontypes.CustomRetentionRule{ruleA},
DefaultDays: 21,
},
{
StartMs: secondChange.UnixMilli(),
EndMs: end.UnixMilli(),
Rules: []retentiontypes.CustomRetentionRule{ruleB},
DefaultDays: 14,
},
}, slices)
})
t.Run("no rows uses fallback", func(t *testing.T) {
slices, err := buildSlicesFromRows(nil, 30, start.UnixMilli(), end.UnixMilli())
require.NoError(t, err)
require.Equal(t, []retentiontypes.Slice{{
StartMs: start.UnixMilli(),
EndMs: end.UnixMilli(),
DefaultDays: 30,
}}, slices)
})
}
func TestRetentionSQL(t *testing.T) {
rules := []retentiontypes.CustomRetentionRule{{
Filters: []retentiontypes.FilterCondition{{
Key: "service.name",
Values: []string{"api", "worker"},
}},
TTLDays: 7,
}}
retentionSQL, err := BuildMultiIfSQL(rules, 30)
require.NoError(t, err)
require.Equal(t, "toInt32(multiIf(JSONExtractString(labels, 'service.name') IN ('api', 'worker'), 7, 30))", retentionSQL)
ruleIndexSQL, err := BuildRuleIndexSQL(rules)
require.NoError(t, err)
require.Equal(t, "toInt32(multiIf(JSONExtractString(labels, 'service.name') IN ('api', 'worker'), 0, -1))", ruleIndexSQL)
invalidRules := []retentiontypes.CustomRetentionRule{{
Filters: []retentiontypes.FilterCondition{{
Key: "service name",
Values: []string{"api"},
}},
TTLDays: 7,
}}
_, err = BuildMultiIfSQL(invalidRules, 30)
require.Error(t, err)
_, err = BuildRuleIndexSQL(invalidRules)
require.Error(t, err)
}
func TestRuleDimensionKeysDedupes(t *testing.T) {
keys, err := RuleDimensionKeys([]retentiontypes.CustomRetentionRule{
{
Filters: []retentiontypes.FilterCondition{
{Key: "service.name", Values: []string{"api"}},
{Key: "env", Values: []string{"prod"}},
},
TTLDays: 7,
},
{
Filters: []retentiontypes.FilterCondition{
{Key: "service.name", Values: []string{"worker"}},
{Key: "cluster", Values: []string{"primary"}},
},
TTLDays: 15,
},
})
require.NoError(t, err)
require.Equal(t, []string{"service.name", "env", "cluster"}, keys)
}
func ttlSetting(t *testing.T, createdAt time.Time, ttlDays int, rules []retentiontypes.CustomRetentionRule) *retentiontypes.TTLSetting {
t.Helper()
condition, err := json.Marshal(rules)
require.NoError(t, err)
return &retentiontypes.TTLSetting{
CreatedAt: createdAt,
TTL: ttlDays,
Condition: string(condition),
}
}


@@ -1,276 +0,0 @@
// Package spancountmetercollector collects span count meters by workspace and
// retention. Keep the query local to this meter.
package spancountmetercollector
import (
"context"
"fmt"
"sort"
"strconv"
"strings"
"github.com/huandu/go-sqlbuilder"
"github.com/SigNoz/signoz/ee/metercollector/retention"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrymeter"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/telemetrytraces"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
// MeterName is the typed registry key for this collector.
var (
MeterName = metercollectortypes.MustNewName("signoz.meter.span.count")
meterUnit = metercollectortypes.UnitCount
meterAggregation = metercollectortypes.AggregationSum
)
var _ metercollector.MeterCollector = (*Provider)(nil)
// Provider collects span count meters.
type Provider struct {
telemetryStore telemetrystore.TelemetryStore
sqlStore sqlstore.SQLStore
}
func New(telemetryStore telemetrystore.TelemetryStore, sqlStore sqlstore.SQLStore) *Provider {
return &Provider{
telemetryStore: telemetryStore,
sqlStore: sqlStore,
}
}
func (p *Provider) Name() metercollectortypes.Name { return MeterName }
func (p *Provider) Unit() metercollectortypes.Unit { return meterUnit }
func (p *Provider) Aggregation() metercollectortypes.Aggregation {
return meterAggregation
}
// Collect aggregates span count for the window and emits an empty-day sentinel.
func (p *Provider) Collect(ctx context.Context, orgID valuer.UUID, window meterreportertypes.Window) ([]meterreportertypes.Meter, error) {
if !window.IsValid() {
return nil, errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeCollectFailed, "invalid window [%d, %d)", window.StartUnixMilli, window.EndUnixMilli)
}
meterName := MeterName.String()
slices, err := retention.LoadActiveSlices(
ctx,
p.sqlStore,
orgID,
telemetrytraces.DBName+"."+telemetrytraces.SpanIndexV3LocalTableName,
retentiontypes.DefaultTracesRetentionDays,
window.StartUnixMilli, window.EndUnixMilli,
)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "load retention slices for meter %q", meterName)
}
type bucket struct {
dimensions map[string]string
value float64
}
accumulator := make(map[string]*bucket)
for _, slice := range slices {
query, args, dimensionColumns, err := buildQuery(meterName, slice)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "build retention query for meter %q", meterName)
}
rows, err := p.telemetryStore.ClickhouseDB().Query(ctx, query, args...)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "query meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
if err := func() error {
defer rows.Close()
for rows.Next() {
dimensionValues := make([]string, len(dimensionColumns))
var retentionDays int32
var retentionRuleIndex int32
var value float64
scanDest := make([]any, 0, len(dimensionValues)+3)
for i := range dimensionValues {
scanDest = append(scanDest, &dimensionValues[i])
}
scanDest = append(scanDest, &retentionDays, &retentionRuleIndex, &value)
if err := rows.Scan(scanDest...); err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "scan meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
dimensions, err := buildDimensions(orgID, int(retentionDays), int(retentionRuleIndex), dimensionColumns, dimensionValues, slice.Rules)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "build dimensions for meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
key := bucketKey(dimensions)
b, ok := accumulator[key]
if !ok {
b = &bucket{dimensions: dimensions}
accumulator[key] = b
}
b.value += value
}
if err := rows.Err(); err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "iterate meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
return nil
}(); err != nil {
return nil, err
}
}
meters := make([]meterreportertypes.Meter, 0, len(accumulator))
for _, b := range accumulator {
meters = append(meters, meterreportertypes.NewMeter(MeterName, b.value, meterUnit, meterAggregation, window, b.dimensions))
}
// Empty windows still emit a sentinel so checkpoints can advance.
if len(meters) == 0 && len(slices) > 0 {
meters = append(meters, meterreportertypes.NewMeter(MeterName, 0, meterUnit, meterAggregation, window, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: strconv.Itoa(slices[len(slices)-1].DefaultDays),
}))
}
return meters, nil
}
// buildQuery stays local because each meter owns its billing query.
func buildQuery(meterName string, slice retentiontypes.Slice) (string, []any, []dimensionColumn, error) {
retentionExpr, err := retention.BuildMultiIfSQL(slice.Rules, slice.DefaultDays)
if err != nil {
return "", nil, nil, err
}
retentionRuleIndexExpr, err := retention.BuildRuleIndexSQL(slice.Rules)
if err != nil {
return "", nil, nil, err
}
columns, err := dimensionColumnsFor(slice.Rules)
if err != nil {
return "", nil, nil, err
}
selects := make([]string, 0, len(columns)+3)
groupBy := make([]string, 0, len(columns)+2)
for _, column := range columns {
selects = append(selects, fmt.Sprintf("JSONExtractString(labels, '%s') AS %s", column.key, column.alias))
groupBy = append(groupBy, column.alias)
}
selects = append(selects,
retentionExpr+" AS retention_days",
retentionRuleIndexExpr+" AS retention_rule_index",
"ifNull(sum(value), 0) AS value",
)
groupBy = append(groupBy, "retention_days", "retention_rule_index")
sb := sqlbuilder.NewSelectBuilder()
sb.Select(selects...)
sb.From(telemetrymeter.DBName + "." + telemetrymeter.SamplesTableName)
sb.Where(
sb.Equal("metric_name", meterName),
sb.GTE("unix_milli", slice.StartMs),
sb.LT("unix_milli", slice.EndMs),
)
sb.GroupBy(groupBy...)
query, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
return query, args, columns, nil
}
type dimensionColumn struct {
key string
alias string
}
func dimensionColumnsFor(rules []retentiontypes.CustomRetentionRule) ([]dimensionColumn, error) {
dimensionKeys, err := retention.RuleDimensionKeys(rules)
if err != nil {
return nil, err
}
keys := make([]string, 0, len(dimensionKeys)+1)
keys = append(keys, metercollector.DimensionWorkspaceKeyID)
for _, key := range dimensionKeys {
if key == metercollector.DimensionWorkspaceKeyID {
continue
}
keys = append(keys, key)
}
columns := make([]dimensionColumn, len(keys))
for i, key := range keys {
columns[i] = dimensionColumn{key: key, alias: fmt.Sprintf("dim_%d", i)}
}
return columns, nil
}
func buildDimensions(
orgID valuer.UUID,
retentionDays int,
retentionRuleIndex int,
columns []dimensionColumn,
values []string,
rules []retentiontypes.CustomRetentionRule,
) (map[string]string, error) {
if len(columns) != len(values) {
return nil, errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "dimension column/value count mismatch: %d columns, %d values", len(columns), len(values))
}
valuesByKey := make(map[string]string, len(columns))
for i, column := range columns {
valuesByKey[column.key] = values[i]
}
dimensions := map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: strconv.Itoa(retentionDays),
}
addNonEmpty(dimensions, metercollector.DimensionWorkspaceKeyID, valuesByKey[metercollector.DimensionWorkspaceKeyID])
if retentionRuleIndex < 0 {
return dimensions, nil
}
if retentionRuleIndex >= len(rules) {
return nil, errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "retention rule index %d out of range for %d rules", retentionRuleIndex, len(rules))
}
for _, filter := range rules[retentionRuleIndex].Filters {
addNonEmpty(dimensions, filter.Key, valuesByKey[filter.Key])
}
return dimensions, nil
}
func addNonEmpty(dimensions map[string]string, key, value string) {
if value == "" {
return
}
dimensions[key] = value
}
func bucketKey(dimensions map[string]string) string {
keys := make([]string, 0, len(dimensions))
for key := range dimensions {
keys = append(keys, key)
}
sort.Strings(keys)
var b strings.Builder
for _, key := range keys {
value := dimensions[key]
b.WriteString(strconv.Itoa(len(key)))
b.WriteByte(':')
b.WriteString(key)
b.WriteByte('=')
b.WriteString(strconv.Itoa(len(value)))
b.WriteByte(':')
b.WriteString(value)
b.WriteByte(';')
}
return b.String()
}


@@ -1,67 +0,0 @@
package spancountmetercollector
import (
"context"
"testing"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/stretchr/testify/require"
)
func TestBuildDimensions(t *testing.T) {
orgID := valuer.GenerateUUID()
rules := []retentiontypes.CustomRetentionRule{{
Filters: []retentiontypes.FilterCondition{{
Key: "service.name",
Values: []string{"api"},
}},
TTLDays: 7,
}}
columns := []dimensionColumn{
{key: metercollector.DimensionWorkspaceKeyID, alias: "dim_0"},
{key: "service.name", alias: "dim_1"},
}
dimensions, err := buildDimensions(orgID, 30, 0, columns, []string{"workspace-1", "api"}, rules)
require.NoError(t, err)
require.Equal(t, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: "30",
metercollector.DimensionWorkspaceKeyID: "workspace-1",
"service.name": "api",
}, dimensions)
}
func TestProviderMetadata(t *testing.T) {
provider := New(nil, nil)
require.Equal(t, "signoz.meter.span.count", provider.Name().String())
require.Equal(t, metercollectortypes.UnitCount, provider.Unit())
require.Equal(t, metercollectortypes.AggregationSum, provider.Aggregation())
}
func TestBucketKeyIsStable(t *testing.T) {
first := bucketKey(map[string]string{
"service.name": "api",
metercollector.DimensionRetentionDays: "30",
metercollector.DimensionWorkspaceKeyID: "workspace-1",
})
second := bucketKey(map[string]string{
metercollector.DimensionWorkspaceKeyID: "workspace-1",
"service.name": "api",
metercollector.DimensionRetentionDays: "30",
})
require.Equal(t, first, second)
require.NotEmpty(t, first)
}
func TestCollectRejectsInvalidWindowBeforeQuerying(t *testing.T) {
readings, err := New(nil, nil).Collect(context.Background(), valuer.GenerateUUID(), meterreportertypes.Window{})
require.Error(t, err)
require.Nil(t, readings)
}


@@ -1,276 +0,0 @@
// Package spansizemetercollector collects span size meters by workspace and
// retention. Keep the query local to this meter.
package spansizemetercollector
import (
"context"
"fmt"
"sort"
"strconv"
"strings"
"github.com/huandu/go-sqlbuilder"
"github.com/SigNoz/signoz/ee/metercollector/retention"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrymeter"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/telemetrytraces"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
// MeterName is the typed registry key for this collector.
var (
MeterName = metercollectortypes.MustNewName("signoz.meter.span.size")
meterUnit = metercollectortypes.UnitBytes
meterAggregation = metercollectortypes.AggregationSum
)
var _ metercollector.MeterCollector = (*Provider)(nil)
// Provider collects span size meters.
type Provider struct {
telemetryStore telemetrystore.TelemetryStore
sqlStore sqlstore.SQLStore
}
func New(telemetryStore telemetrystore.TelemetryStore, sqlStore sqlstore.SQLStore) *Provider {
return &Provider{
telemetryStore: telemetryStore,
sqlStore: sqlStore,
}
}
func (p *Provider) Name() metercollectortypes.Name { return MeterName }
func (p *Provider) Unit() metercollectortypes.Unit { return meterUnit }
func (p *Provider) Aggregation() metercollectortypes.Aggregation {
return meterAggregation
}
// Collect aggregates span size for the window and emits an empty-day sentinel.
func (p *Provider) Collect(ctx context.Context, orgID valuer.UUID, window meterreportertypes.Window) ([]meterreportertypes.Meter, error) {
if !window.IsValid() {
return nil, errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeCollectFailed, "invalid window [%d, %d)", window.StartUnixMilli, window.EndUnixMilli)
}
meterName := MeterName.String()
slices, err := retention.LoadActiveSlices(
ctx,
p.sqlStore,
orgID,
telemetrytraces.DBName+"."+telemetrytraces.SpanIndexV3LocalTableName,
retentiontypes.DefaultTracesRetentionDays,
window.StartUnixMilli, window.EndUnixMilli,
)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "load retention slices for meter %q", meterName)
}
type bucket struct {
dimensions map[string]string
value float64
}
accumulator := make(map[string]*bucket)
for _, slice := range slices {
query, args, dimensionColumns, err := buildQuery(meterName, slice)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "build retention query for meter %q", meterName)
}
rows, err := p.telemetryStore.ClickhouseDB().Query(ctx, query, args...)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "query meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
if err := func() error {
defer rows.Close()
for rows.Next() {
dimensionValues := make([]string, len(dimensionColumns))
var retentionDays int32
var retentionRuleIndex int32
var value float64
scanDest := make([]any, 0, len(dimensionValues)+3)
for i := range dimensionValues {
scanDest = append(scanDest, &dimensionValues[i])
}
scanDest = append(scanDest, &retentionDays, &retentionRuleIndex, &value)
if err := rows.Scan(scanDest...); err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "scan meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
dimensions, err := buildDimensions(orgID, int(retentionDays), int(retentionRuleIndex), dimensionColumns, dimensionValues, slice.Rules)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "build dimensions for meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
key := bucketKey(dimensions)
b, ok := accumulator[key]
if !ok {
b = &bucket{dimensions: dimensions}
accumulator[key] = b
}
b.value += value
}
if err := rows.Err(); err != nil {
return errors.Wrapf(err, errors.TypeInternal, metercollector.ErrCodeCollectFailed, "iterate meter %q slice [%d, %d)", meterName, slice.StartMs, slice.EndMs)
}
return nil
}(); err != nil {
return nil, err
}
}
meters := make([]meterreportertypes.Meter, 0, len(accumulator))
for _, b := range accumulator {
meters = append(meters, meterreportertypes.NewMeter(MeterName, b.value, meterUnit, meterAggregation, window, b.dimensions))
}
// Empty windows still emit a sentinel so checkpoints can advance.
if len(meters) == 0 && len(slices) > 0 {
meters = append(meters, meterreportertypes.NewMeter(MeterName, 0, meterUnit, meterAggregation, window, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: strconv.Itoa(slices[len(slices)-1].DefaultDays),
}))
}
return meters, nil
}
// buildQuery stays local because each meter owns its billing query.
func buildQuery(meterName string, slice retentiontypes.Slice) (string, []any, []dimensionColumn, error) {
retentionExpr, err := retention.BuildMultiIfSQL(slice.Rules, slice.DefaultDays)
if err != nil {
return "", nil, nil, err
}
retentionRuleIndexExpr, err := retention.BuildRuleIndexSQL(slice.Rules)
if err != nil {
return "", nil, nil, err
}
columns, err := dimensionColumnsFor(slice.Rules)
if err != nil {
return "", nil, nil, err
}
selects := make([]string, 0, len(columns)+3)
groupBy := make([]string, 0, len(columns)+2)
for _, column := range columns {
selects = append(selects, fmt.Sprintf("JSONExtractString(labels, '%s') AS %s", column.key, column.alias))
groupBy = append(groupBy, column.alias)
}
selects = append(selects,
retentionExpr+" AS retention_days",
retentionRuleIndexExpr+" AS retention_rule_index",
"ifNull(sum(value), 0) AS value",
)
groupBy = append(groupBy, "retention_days", "retention_rule_index")
sb := sqlbuilder.NewSelectBuilder()
sb.Select(selects...)
sb.From(telemetrymeter.DBName + "." + telemetrymeter.SamplesTableName)
sb.Where(
sb.Equal("metric_name", meterName),
sb.GTE("unix_milli", slice.StartMs),
sb.LT("unix_milli", slice.EndMs),
)
sb.GroupBy(groupBy...)
query, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
return query, args, columns, nil
}
type dimensionColumn struct {
key string
alias string
}
// dimensionColumnsFor returns the JSON label keys to extract, with the
// workspace key first, duplicates removed, and stable dim_N aliases.
func dimensionColumnsFor(rules []retentiontypes.CustomRetentionRule) ([]dimensionColumn, error) {
dimensionKeys, err := retention.RuleDimensionKeys(rules)
if err != nil {
return nil, err
}
keys := make([]string, 0, len(dimensionKeys)+1)
keys = append(keys, metercollector.DimensionWorkspaceKeyID)
for _, key := range dimensionKeys {
if key == metercollector.DimensionWorkspaceKeyID {
continue
}
keys = append(keys, key)
}
columns := make([]dimensionColumn, len(keys))
for i, key := range keys {
columns[i] = dimensionColumn{key: key, alias: fmt.Sprintf("dim_%d", i)}
}
return columns, nil
}
// buildDimensions converts one scanned row back into meter dimensions,
// keeping only the filter keys of the retention rule that matched.
func buildDimensions(
orgID valuer.UUID,
retentionDays int,
retentionRuleIndex int,
columns []dimensionColumn,
values []string,
rules []retentiontypes.CustomRetentionRule,
) (map[string]string, error) {
if len(columns) != len(values) {
return nil, errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "dimension column/value count mismatch: %d columns, %d values", len(columns), len(values))
}
valuesByKey := make(map[string]string, len(columns))
for i, column := range columns {
valuesByKey[column.key] = values[i]
}
dimensions := map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: strconv.Itoa(retentionDays),
}
addNonEmpty(dimensions, metercollector.DimensionWorkspaceKeyID, valuesByKey[metercollector.DimensionWorkspaceKeyID])
if retentionRuleIndex < 0 {
return dimensions, nil
}
if retentionRuleIndex >= len(rules) {
return nil, errors.Newf(errors.TypeInternal, metercollector.ErrCodeCollectFailed, "retention rule index %d out of range for %d rules", retentionRuleIndex, len(rules))
}
for _, filter := range rules[retentionRuleIndex].Filters {
addNonEmpty(dimensions, filter.Key, valuesByKey[filter.Key])
}
return dimensions, nil
}
func addNonEmpty(dimensions map[string]string, key, value string) {
if value == "" {
return
}
dimensions[key] = value
}
// bucketKey builds an order-independent, collision-safe accumulator key by
// sorting dimension keys and length-prefixing every key and value.
func bucketKey(dimensions map[string]string) string {
keys := make([]string, 0, len(dimensions))
for key := range dimensions {
keys = append(keys, key)
}
sort.Strings(keys)
var b strings.Builder
for _, key := range keys {
value := dimensions[key]
b.WriteString(strconv.Itoa(len(key)))
b.WriteByte(':')
b.WriteString(key)
b.WriteByte('=')
b.WriteString(strconv.Itoa(len(value)))
b.WriteByte(':')
b.WriteString(value)
b.WriteByte(';')
}
return b.String()
}
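bucketKey length-prefixes every key and value before joining them, so two different dimension maps can never serialize to the same string. A minimal standalone sketch of why that matters; the naiveKey helper is hypothetical, added only to show the collision the real encoding avoids:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// bucketKey mirrors the production encoding: sorted keys, each key
// and value prefixed with its byte length.
func bucketKey(dimensions map[string]string) string {
	keys := make([]string, 0, len(dimensions))
	for key := range dimensions {
		keys = append(keys, key)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, key := range keys {
		value := dimensions[key]
		b.WriteString(strconv.Itoa(len(key)))
		b.WriteByte(':')
		b.WriteString(key)
		b.WriteByte('=')
		b.WriteString(strconv.Itoa(len(value)))
		b.WriteByte(':')
		b.WriteString(value)
		b.WriteByte(';')
	}
	return b.String()
}

// naiveKey is a hypothetical broken alternative: it joins "key=value"
// pairs with ';', so a key or value containing the separators collides
// with a differently shaped map.
func naiveKey(dimensions map[string]string) string {
	keys := make([]string, 0, len(dimensions))
	for key := range dimensions {
		keys = append(keys, key)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, key := range keys {
		parts = append(parts, key+"="+dimensions[key])
	}
	return strings.Join(parts, ";")
}

func main() {
	m1 := map[string]string{"a": "b;c", "d": "e"}
	m2 := map[string]string{"a": "b", "c;d": "e"}
	fmt.Println(naiveKey(m1) == naiveKey(m2))   // true: naive keys collide
	fmt.Println(bucketKey(m1) == bucketKey(m2)) // false: length-prefixed keys stay distinct
}
```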


@@ -1,67 +0,0 @@
package spansizemetercollector
import (
"context"
"testing"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/stretchr/testify/require"
)
func TestBuildDimensions(t *testing.T) {
orgID := valuer.GenerateUUID()
rules := []retentiontypes.CustomRetentionRule{{
Filters: []retentiontypes.FilterCondition{{
Key: "service.name",
Values: []string{"api"},
}},
TTLDays: 7,
}}
columns := []dimensionColumn{
{key: metercollector.DimensionWorkspaceKeyID, alias: "dim_0"},
{key: "service.name", alias: "dim_1"},
}
dimensions, err := buildDimensions(orgID, 30, 0, columns, []string{"workspace-1", "api"}, rules)
require.NoError(t, err)
require.Equal(t, map[string]string{
metercollector.DimensionOrganizationID: orgID.StringValue(),
metercollector.DimensionRetentionDays: "30",
metercollector.DimensionWorkspaceKeyID: "workspace-1",
"service.name": "api",
}, dimensions)
}
func TestProviderMetadata(t *testing.T) {
provider := New(nil, nil)
require.Equal(t, "signoz.meter.span.size", provider.Name().String())
require.Equal(t, metercollectortypes.UnitBytes, provider.Unit())
require.Equal(t, metercollectortypes.AggregationSum, provider.Aggregation())
}
func TestBucketKeyIsStable(t *testing.T) {
first := bucketKey(map[string]string{
"service.name": "api",
metercollector.DimensionRetentionDays: "30",
metercollector.DimensionWorkspaceKeyID: "workspace-1",
})
second := bucketKey(map[string]string{
metercollector.DimensionWorkspaceKeyID: "workspace-1",
"service.name": "api",
metercollector.DimensionRetentionDays: "30",
})
require.Equal(t, first, second)
require.NotEmpty(t, first)
}
func TestCollectRejectsInvalidWindowBeforeQuerying(t *testing.T) {
readings, err := New(nil, nil).Collect(context.Background(), valuer.GenerateUUID(), meterreportertypes.Window{})
require.Error(t, err)
require.Nil(t, readings)
}


@@ -1,630 +0,0 @@
package httpmeterreporter
import (
"context"
"fmt"
"log/slog"
"sort"
"sync"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/meterreporter"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/telemetrymeter"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/SigNoz/signoz/pkg/zeus"
"github.com/huandu/go-sqlbuilder"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/trace"
)
var _ factory.ServiceWithHealthy = (*Provider)(nil)
var errCodeReportFailed = errors.MustNewCode("meterreporter_report_failed")
const (
phaseSealed = "sealed"
phaseToday = "today"
attrPhase = "phase"
attrResult = "result"
attrMeterReporterProvider = "meterreporter.provider"
attrOrgID = "meterreporter.org_id"
attrOrgCount = "meterreporter.org_count"
attrMeter = "meterreporter.meter"
attrDate = "meterreporter.date"
attrReadings = "meterreporter.readings"
attrReadingsCollected = "meterreporter.readings_collected"
attrReadingsDropped = "meterreporter.readings_dropped"
attrWindowStartUnixMilli = "meterreporter.window_start_unix_milli"
attrWindowEndUnixMilli = "meterreporter.window_end_unix_milli"
attrWindowCompleted = "meterreporter.window_completed"
attrCatchupStart = "meterreporter.catchup_start"
attrCatchupEnd = "meterreporter.catchup_end"
attrDurationMs = "meterreporter.duration_ms"
attrDryRun = "meterreporter.dry_run"
attrIdempotencyKey = "meterreporter.idempotency_key"
resultSuccess = "success"
resultFailure = "failure"
providerName = "http"
)
// Provider collects registered meters and ships them to Zeus.
type Provider struct {
settings factory.ScopedProviderSettings
config meterreporter.Config
collectors []metercollector.MeterCollector
licensing licensing.Licensing
telemetryStore telemetrystore.TelemetryStore
orgGetter organization.Getter
zeus zeus.Zeus
healthyC chan struct{}
stopC chan struct{}
goroutinesWg sync.WaitGroup
metrics *reporterMetrics
}
// NewFactory registers the HTTP meter reporter.
func NewFactory(
collectors map[metercollectortypes.Name]metercollector.MeterCollector,
licensing licensing.Licensing,
telemetryStore telemetrystore.TelemetryStore,
orgGetter organization.Getter,
zeus zeus.Zeus,
) factory.ProviderFactory[meterreporter.Reporter, meterreporter.Config] {
return factory.NewProviderFactory(
factory.MustNewName(providerName),
func(ctx context.Context, providerSettings factory.ProviderSettings, config meterreporter.Config) (meterreporter.Reporter, error) {
return newProvider(ctx, providerSettings, config, collectors, licensing, telemetryStore, orgGetter, zeus)
},
)
}
func newProvider(
_ context.Context,
providerSettings factory.ProviderSettings,
config meterreporter.Config,
collectors map[metercollectortypes.Name]metercollector.MeterCollector,
licensing licensing.Licensing,
telemetryStore telemetrystore.TelemetryStore,
orgGetter organization.Getter,
zeus zeus.Zeus,
) (*Provider, error) {
settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/ee/meterreporter/httpmeterreporter")
metrics, err := newReporterMetrics(settings.Meter())
if err != nil {
return nil, err
}
orderedCollectors, err := validateCollectors(collectors)
if err != nil {
return nil, err
}
return &Provider{
settings: settings,
config: config,
collectors: orderedCollectors,
licensing: licensing,
telemetryStore: telemetryStore,
orgGetter: orgGetter,
zeus: zeus,
healthyC: make(chan struct{}),
stopC: make(chan struct{}),
metrics: metrics,
}, nil
}
// validateCollectors checks registry invariants (non-zero names, non-nil
// collectors, matching keys, non-empty unit/aggregation) and returns the
// collectors in deterministic name order.
func validateCollectors(collectors map[metercollectortypes.Name]metercollector.MeterCollector) ([]metercollector.MeterCollector, error) {
ordered := make([]metercollector.MeterCollector, 0, len(collectors))
for name, collector := range collectors {
if name.IsZero() {
return nil, errors.New(errors.TypeInvalidInput, meterreporter.ErrCodeInvalidInput, "empty meter name in collector registry")
}
if collector == nil {
return nil, errors.Newf(errors.TypeInvalidInput, meterreporter.ErrCodeInvalidInput, "nil collector for meter %q", name.String())
}
if collector.Name() != name {
return nil, errors.Newf(errors.TypeInvalidInput, meterreporter.ErrCodeInvalidInput, "registry key %q does not match collector.Name() %q", name.String(), collector.Name().String())
}
if collector.Unit().IsZero() {
return nil, errors.Newf(errors.TypeInvalidInput, meterreporter.ErrCodeInvalidInput, "meter %q has empty unit", name.String())
}
if collector.Aggregation().IsZero() {
return nil, errors.Newf(errors.TypeInvalidInput, meterreporter.ErrCodeInvalidInput, "meter %q has empty aggregation", name.String())
}
ordered = append(ordered, collector)
}
sort.Slice(ordered, func(i, j int) bool {
return ordered[i].Name().String() < ordered[j].Name().String()
})
return ordered, nil
}
// Start runs an immediate tick, then repeats on Config.Interval.
func (provider *Provider) Start(ctx context.Context) error {
close(provider.healthyC)
provider.settings.Logger().InfoContext(ctx, "meter reporter started",
slog.Duration("interval", provider.config.Interval),
slog.Duration("timeout", provider.config.Timeout),
slog.Int("catchup_max_days_per_tick", provider.config.CatchupMaxDaysPerTick),
slog.Int("meters", len(provider.collectors)),
)
provider.goroutinesWg.Add(1)
go func() {
defer provider.goroutinesWg.Done()
provider.runTick(ctx)
ticker := time.NewTicker(provider.config.Interval)
defer ticker.Stop()
for {
select {
case <-provider.stopC:
return
case <-ticker.C:
provider.runTick(ctx)
}
}
}()
provider.goroutinesWg.Wait()
return nil
}
// Stop signals the tick loop and waits for any in-flight tick.
func (provider *Provider) Stop(ctx context.Context) error {
<-provider.healthyC
provider.settings.Logger().InfoContext(ctx, "meter reporter stopping")
select {
case <-provider.stopC:
// already closed
default:
close(provider.stopC)
}
provider.goroutinesWg.Wait()
provider.settings.Logger().InfoContext(ctx, "meter reporter stopped")
return nil
}
func (provider *Provider) Healthy() <-chan struct{} {
return provider.healthyC
}
// runTick executes one collect-and-ship cycle under Config.Timeout.
func (provider *Provider) runTick(parentCtx context.Context) {
tickStart := time.Now()
ctx, span := provider.settings.Tracer().Start(parentCtx, "meterreporter.Tick", trace.WithAttributes(
attribute.String(attrMeterReporterProvider, providerName),
attribute.Int("meterreporter.meters", len(provider.collectors)),
attribute.Int("meterreporter.catchup_max_days_per_tick", provider.config.CatchupMaxDaysPerTick),
))
defer span.End()
provider.metrics.ticks.Add(ctx, 1)
ctx, cancel := context.WithTimeout(ctx, provider.config.Timeout)
defer cancel()
provider.settings.Logger().DebugContext(ctx, "meter reporter tick started",
slog.Duration("timeout", provider.config.Timeout),
slog.Int("meters", len(provider.collectors)),
)
if err := provider.tick(ctx); err != nil {
span.RecordError(err)
span.SetStatus(codes.Error, err.Error())
span.SetAttributes(
attribute.String(attrResult, resultFailure),
attribute.Int64(attrDurationMs, time.Since(tickStart).Milliseconds()),
)
provider.settings.Logger().ErrorContext(ctx, "meter reporter tick failed",
errors.Attr(err),
slog.Duration("timeout", provider.config.Timeout),
slog.Duration("duration", time.Since(tickStart)),
)
return
}
span.SetAttributes(
attribute.String(attrResult, resultSuccess),
attribute.Int64(attrDurationMs, time.Since(tickStart).Milliseconds()),
)
provider.settings.Logger().DebugContext(ctx, "meter reporter tick completed", slog.Duration("duration", time.Since(tickStart)))
}
// tick processes sealed catchup days, then today's partial window.
func (provider *Provider) tick(ctx context.Context) error {
now := time.Now().UTC()
// Use one timestamp so a tick cannot straddle midnight.
todayStart := time.Date(now.Year(), now.Month(), now.Day(), 0, 0, 0, 0, time.UTC)
yesterday := todayStart.AddDate(0, 0, -1)
orgs, err := provider.orgGetter.ListByOwnedKeyRange(ctx)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, errCodeReportFailed, "failed to list organizations")
}
trace.SpanFromContext(ctx).SetAttributes(attribute.Int(attrOrgCount, len(orgs)))
if len(orgs) == 0 {
provider.settings.Logger().InfoContext(ctx, "skipping meter reporter tick; no organizations found")
return nil
}
org := orgs[0]
if len(orgs) > 1 {
// signoz_meter samples have no org marker.
provider.settings.Logger().WarnContext(ctx, "multiple orgs on a single instance; reporting only the first",
slog.Int("org_count", len(orgs)),
slog.String("selected_org_id", org.ID.StringValue()),
)
}
trace.SpanFromContext(ctx).SetAttributes(attribute.String(attrOrgID, org.ID.StringValue()))
license, err := provider.licensing.GetActive(ctx, org.ID)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, errCodeReportFailed, "failed to fetch active license for org %q", org.ID.StringValue())
}
if license == nil || license.Key == "" {
provider.settings.Logger().WarnContext(ctx, "skipping tick, nil/empty license for org", slog.String("org_id", org.ID.StringValue()))
return nil
}
// TODO: re-enable once /v2/meters/checkpoints is live in staging. Until
// then we run with an empty checkpoint map; bootstrap floors are taken
// from data and dropCheckpointed becomes a no-op for the sealed window.
// checkpoints, err := provider.zeus.GetMeterCheckpoints(ctx, license.Key)
// if err != nil {
// provider.metrics.checkpointErrors.Add(ctx, 1)
// provider.settings.Logger().ErrorContext(ctx, "skipping tick: meter checkpoints call failed", errors.Attr(err))
// return nil
// }
// checkpointsByMeter := make(map[string]time.Time, len(checkpoints))
// for _, checkpoint := range checkpoints {
// checkpointsByMeter[checkpoint.Name] = checkpoint.Checkpoint.UTC()
// }
checkpointsByMeter := make(map[string]time.Time)
floor := provider.dataFloor(ctx, todayStart)
catchupStart := provider.catchupStart(floor, todayStart, checkpointsByMeter)
end := catchupStart.AddDate(0, 0, provider.config.CatchupMaxDaysPerTick-1)
if end.After(yesterday) {
end = yesterday
}
trace.SpanFromContext(ctx).SetAttributes(
attribute.String(attrCatchupStart, catchupStart.Format("2006-01-02")),
attribute.String(attrCatchupEnd, end.Format("2006-01-02")),
)
provider.settings.Logger().DebugContext(ctx, "meter reporter catchup window selected",
slog.String("org_id", org.ID.StringValue()),
slog.Time("data_floor", floor),
slog.Time("catchup_start", catchupStart),
slog.Time("catchup_end", end),
slog.Int("catchup_max_days_per_tick", provider.config.CatchupMaxDaysPerTick),
)
for day := catchupStart; !day.After(end); day = day.AddDate(0, 0, 1) {
window := meterreportertypes.Window{
StartUnixMilli: day.UnixMilli(),
EndUnixMilli: day.AddDate(0, 0, 1).UnixMilli(),
IsCompleted: true,
}
err := provider.runPhase(ctx, org.ID, license.Key, window, checkpointsByMeter)
result := resultSuccess
if err != nil {
result = resultFailure
}
provider.metrics.catchupDaysProcessed.Add(ctx, 1, metric.WithAttributes(attribute.String(attrResult, result)))
if err != nil {
provider.settings.Logger().WarnContext(ctx, "stopping sealed catchup after failed day",
errors.Attr(err),
slog.String("date", day.Format("2006-01-02")),
)
break
}
}
// Today's partial window runs every tick.
todayWindow := meterreportertypes.Window{
StartUnixMilli: todayStart.UnixMilli(),
EndUnixMilli: now.UnixMilli(),
IsCompleted: false,
}
_ = provider.runPhase(ctx, org.ID, license.Key, todayWindow, checkpointsByMeter)
return nil
}
// runPhase collects all meters for one window and ships the batch.
func (provider *Provider) runPhase(ctx context.Context, orgID valuer.UUID, licenseKey string, window meterreportertypes.Window, checkpointsByMeter map[string]time.Time) error {
phaseLabel := phaseToday
if window.IsCompleted {
phaseLabel = phaseSealed
}
phaseAttr := metric.WithAttributes(attribute.String(attrPhase, phaseLabel))
date := time.UnixMilli(window.StartUnixMilli).UTC().Format("2006-01-02")
phaseStart := time.Now()
ctx, span := provider.settings.Tracer().Start(ctx, "meterreporter.RunPhase", trace.WithAttributes(
attribute.String(attrPhase, phaseLabel),
attribute.String(attrOrgID, orgID.StringValue()),
attribute.String(attrDate, date),
attribute.Int64(attrWindowStartUnixMilli, window.StartUnixMilli),
attribute.Int64(attrWindowEndUnixMilli, window.EndUnixMilli),
attribute.Bool(attrWindowCompleted, window.IsCompleted),
))
defer span.End()
provider.settings.Logger().DebugContext(ctx, "meter reporter phase started",
slog.String("org_id", orgID.StringValue()),
slog.String("phase", phaseLabel),
slog.String("date", date),
slog.Int64("start_unix_milli", window.StartUnixMilli),
slog.Int64("end_unix_milli", window.EndUnixMilli),
slog.Int("meters", len(provider.collectors)),
)
collectStart := time.Now()
readings := make([]meterreportertypes.Meter, 0, len(provider.collectors))
for _, collector := range provider.collectors {
meterName := collector.Name().String()
collectStart := time.Now()
collectCtx, collectSpan := provider.settings.Tracer().Start(ctx, "meterreporter.CollectMeter", trace.WithAttributes(
attribute.String(attrPhase, phaseLabel),
attribute.String(attrOrgID, orgID.StringValue()),
attribute.String(attrMeter, meterName),
attribute.String(attrDate, date),
attribute.Int64(attrWindowStartUnixMilli, window.StartUnixMilli),
attribute.Int64(attrWindowEndUnixMilli, window.EndUnixMilli),
attribute.Bool(attrWindowCompleted, window.IsCompleted),
))
collectedReadings, err := collector.Collect(collectCtx, orgID, window)
if err != nil {
collectSpan.RecordError(err)
collectSpan.SetStatus(codes.Error, err.Error())
collectSpan.SetAttributes(
attribute.String(attrResult, resultFailure),
attribute.Int64(attrDurationMs, time.Since(collectStart).Milliseconds()),
)
collectSpan.End()
provider.metrics.collectErrors.Add(ctx, 1, phaseAttr)
provider.settings.Logger().WarnContext(ctx, "meter collection failed",
errors.Attr(err),
slog.String("meter", meterName),
slog.String("org_id", orgID.StringValue()),
slog.String("phase", phaseLabel),
slog.String("date", date),
slog.Duration("duration", time.Since(collectStart)),
)
continue
}
collectSpan.SetAttributes(
attribute.String(attrResult, resultSuccess),
attribute.Int(attrReadings, len(collectedReadings)),
attribute.Int64(attrDurationMs, time.Since(collectStart).Milliseconds()),
)
collectSpan.End()
provider.settings.Logger().DebugContext(ctx, "meter collection completed",
slog.String("meter", meterName),
slog.String("org_id", orgID.StringValue()),
slog.String("phase", phaseLabel),
slog.String("date", date),
slog.Int("readings", len(collectedReadings)),
slog.Duration("duration", time.Since(collectStart)),
)
readings = append(readings, collectedReadings...)
}
collectDuration := time.Since(collectStart)
provider.metrics.collectDuration.Add(ctx, collectDuration.Seconds(), phaseAttr)
provider.metrics.collectOperations.Add(ctx, 1, phaseAttr)
span.SetAttributes(attribute.Int(attrReadingsCollected, len(readings)))
if window.IsCompleted {
beforeDrop := len(readings)
readings = dropCheckpointed(readings, time.UnixMilli(window.StartUnixMilli).UTC(), checkpointsByMeter)
dropped := beforeDrop - len(readings)
span.SetAttributes(attribute.Int(attrReadingsDropped, dropped))
if dropped > 0 {
provider.settings.Logger().DebugContext(ctx, "dropped checkpointed meter readings",
slog.String("org_id", orgID.StringValue()),
slog.String("phase", phaseLabel),
slog.String("date", date),
slog.Int("dropped", dropped),
slog.Int("remaining", len(readings)),
)
}
}
if len(readings) == 0 {
span.SetAttributes(
attribute.String(attrResult, resultSuccess),
attribute.Int(attrReadings, 0),
attribute.Int64(attrDurationMs, time.Since(phaseStart).Milliseconds()),
)
provider.settings.Logger().DebugContext(ctx, "meter reporter phase produced no readings",
slog.String("org_id", orgID.StringValue()),
slog.String("phase", phaseLabel),
slog.String("date", date),
slog.Duration("collect_duration", collectDuration),
slog.Duration("duration", time.Since(phaseStart)),
)
return nil
}
shipStart := time.Now()
err := provider.shipReadings(ctx, licenseKey, date, readings)
shipDuration := time.Since(shipStart)
provider.metrics.shipDuration.Add(ctx, shipDuration.Seconds(), phaseAttr)
provider.metrics.shipOperations.Add(ctx, 1, phaseAttr)
if err != nil {
span.RecordError(err)
span.SetStatus(codes.Error, err.Error())
span.SetAttributes(attribute.String(attrResult, resultFailure))
provider.metrics.postErrors.Add(ctx, 1, phaseAttr)
provider.settings.Logger().ErrorContext(ctx, "failed to ship meter readings",
errors.Attr(err),
slog.String("phase", phaseLabel),
slog.String("date", date),
slog.Int("readings", len(readings)),
slog.Duration("ship_duration", shipDuration),
)
return err
}
provider.metrics.readingsEmitted.Add(ctx, int64(len(readings)), phaseAttr)
span.SetAttributes(
attribute.String(attrResult, resultSuccess),
attribute.Int(attrReadings, len(readings)),
attribute.Int64(attrDurationMs, time.Since(phaseStart).Milliseconds()),
)
provider.settings.Logger().InfoContext(ctx, "meter reporter phase shipped",
slog.String("org_id", orgID.StringValue()),
slog.String("phase", phaseLabel),
slog.String("date", date),
slog.Int("readings", len(readings)),
slog.Duration("collect_duration", collectDuration),
slog.Duration("ship_duration", shipDuration),
slog.Duration("duration", time.Since(phaseStart)),
)
return nil
}
// dropCheckpointed removes readings already covered by meter checkpoints.
func dropCheckpointed(readings []meterreportertypes.Meter, windowDay time.Time, checkpointsByMeter map[string]time.Time) []meterreportertypes.Meter {
if len(checkpointsByMeter) == 0 {
return readings
}
kept := readings[:0]
for _, reading := range readings {
checkpoint, ok := checkpointsByMeter[reading.MeterName]
if !ok || checkpoint.Before(windowDay) {
kept = append(kept, reading)
}
}
return kept
}
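dropCheckpointed filters with the common Go in-place idiom: `kept := readings[:0]` reuses the input slice's backing array, so filtering allocates nothing. A minimal standalone sketch of the same idiom on integers (plain values only; the caller must not reuse the input slice afterwards, since its elements are overwritten):

```go
package main

import "fmt"

// filterEven keeps even values in place by reusing the input slice's
// backing array instead of allocating a new slice.
func filterEven(values []int) []int {
	kept := values[:0]
	for _, v := range values {
		if v%2 == 0 {
			kept = append(kept, v)
		}
	}
	return kept
}

func main() {
	fmt.Println(filterEven([]int{1, 2, 3, 4, 5, 6})) // [2 4 6]
}
```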
// catchupStart returns the earliest UTC day that still needs sealed reporting.
func (provider *Provider) catchupStart(floor time.Time, todayStart time.Time, checkpointsByMeter map[string]time.Time) time.Time {
catchupStart := todayStart
for _, collector := range provider.collectors {
next := floor
if checkpoint, ok := checkpointsByMeter[collector.Name().String()]; ok {
next = checkpoint.AddDate(0, 0, 1)
if next.Before(floor) {
next = floor
}
}
if next.Before(catchupStart) {
catchupStart = next
}
}
yesterday := todayStart.AddDate(0, 0, -1)
if catchupStart.After(yesterday) {
catchupStart = yesterday
}
return catchupStart
}
// dataFloor returns the earliest signoz_meter sample day, or today on failure.
func (provider *Provider) dataFloor(ctx context.Context, todayStart time.Time) time.Time {
ctx, span := provider.settings.Tracer().Start(ctx, "meterreporter.DataFloor")
defer span.End()
if provider.telemetryStore == nil {
span.SetAttributes(attribute.String(attrResult, resultSuccess))
return todayStart
}
sb := sqlbuilder.NewSelectBuilder()
sb.Select("ifNull(min(unix_milli), 0)")
sb.From(telemetrymeter.DBName + "." + telemetrymeter.SamplesTableName)
query, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
var minMs int64
if err := provider.telemetryStore.ClickhouseDB().QueryRow(ctx, query, args...).Scan(&minMs); err != nil {
span.RecordError(err)
span.SetStatus(codes.Error, err.Error())
span.SetAttributes(attribute.String(attrResult, resultFailure))
provider.settings.Logger().WarnContext(ctx, "failed to read data floor; falling back to latest sealed day", errors.Attr(err))
return todayStart
}
if minMs == 0 {
span.SetAttributes(
attribute.String(attrResult, resultSuccess),
attribute.Int64("meterreporter.data_floor_unix_milli", 0),
)
return todayStart
}
minDay := time.UnixMilli(minMs).UTC()
floor := time.Date(minDay.Year(), minDay.Month(), minDay.Day(), 0, 0, 0, 0, time.UTC)
span.SetAttributes(
attribute.String(attrResult, resultSuccess),
attribute.Int64("meterreporter.data_floor_unix_milli", floor.UnixMilli()),
)
provider.settings.Logger().DebugContext(ctx, "meter reporter data floor loaded", slog.Time("data_floor", floor))
return floor
}
// shipReadings sends one day's meter batch to Zeus.
func (provider *Provider) shipReadings(ctx context.Context, licenseKey string, date string, readings []meterreportertypes.Meter) error {
idempotencyKey := fmt.Sprintf("meter-cron:%s", date)
ctx, span := provider.settings.Tracer().Start(ctx, "meterreporter.ShipReadings", trace.WithAttributes(
attribute.String(attrDate, date),
attribute.Int(attrReadings, len(readings)),
attribute.String(attrIdempotencyKey, idempotencyKey),
attribute.Bool(attrDryRun, true),
))
defer span.End()
provider.settings.Logger().InfoContext(ctx, "meter readings prepared for shipment",
slog.String("date", date),
slog.Int("readings", len(readings)),
slog.String("idempotency_key", idempotencyKey),
slog.Bool("dry_run", true),
)
// Temporary visibility while /v2/meters is offline.
for _, reading := range readings {
provider.settings.Logger().InfoContext(ctx, "meter reading prepared for shipment",
slog.String("meter", reading.MeterName),
slog.Float64("value", reading.Value),
slog.String("unit", reading.Unit.StringValue()),
slog.String("aggregation", reading.Aggregation.StringValue()),
slog.Int64("start_unix_milli", reading.StartUnixMilli),
slog.Int64("end_unix_milli", reading.EndUnixMilli),
slog.Bool("is_completed", reading.IsCompleted),
slog.Any("dimensions", reading.Dimensions),
slog.String("idempotency_key", idempotencyKey),
)
}
// TODO: re-enable once /v2/meters is live in staging.
// body, err := json.Marshal(meterreportertypes.PostableMeters{Meters: readings})
// if err != nil {
// return errors.Wrapf(err, errors.TypeInternal, errCodeReportFailed, "marshal meter readings for %s", date)
// }
// if err := provider.zeus.PutMetersV3(ctx, licenseKey, idempotencyKey, body); err != nil {
// return errors.Wrapf(err, errors.TypeInternal, errCodeReportFailed, "ship meter readings for %s", date)
// }
_ = licenseKey
span.SetAttributes(attribute.String(attrResult, resultSuccess))
return nil
}


@@ -1,108 +0,0 @@
package httpmeterreporter
import (
"context"
"testing"
"time"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/stretchr/testify/require"
)
func TestValidateCollectorsRejectsBadRegistry(t *testing.T) {
meterA := metercollectortypes.MustNewName("signoz.test.a")
meterB := metercollectortypes.MustNewName("signoz.test.b")
t.Run("key name mismatch", func(t *testing.T) {
_, err := validateCollectors(map[metercollectortypes.Name]metercollector.MeterCollector{
meterA: testCollector{name: meterB},
})
require.Error(t, err)
})
t.Run("nil collector", func(t *testing.T) {
_, err := validateCollectors(map[metercollectortypes.Name]metercollector.MeterCollector{
meterA: nil,
})
require.Error(t, err)
})
}
func TestDropCheckpointed(t *testing.T) {
meterA := metercollectortypes.MustNewName("signoz.test.a")
meterB := metercollectortypes.MustNewName("signoz.test.b")
meterC := metercollectortypes.MustNewName("signoz.test.c")
windowDay := time.Date(2026, 5, 4, 0, 0, 0, 0, time.UTC)
window := meterreportertypes.Window{
StartUnixMilli: windowDay.UnixMilli(),
EndUnixMilli: windowDay.AddDate(0, 0, 1).UnixMilli(),
IsCompleted: true,
}
readings := []meterreportertypes.Meter{
meterreportertypes.NewMeter(meterA, 0, metercollectortypes.UnitCount, metercollectortypes.AggregationSum, window, nil),
meterreportertypes.NewMeter(meterB, 0, metercollectortypes.UnitCount, metercollectortypes.AggregationSum, window, nil),
meterreportertypes.NewMeter(meterC, 0, metercollectortypes.UnitCount, metercollectortypes.AggregationSum, window, nil),
}
kept := dropCheckpointed(readings, windowDay, map[string]time.Time{
meterA.String(): windowDay,
meterB.String(): windowDay.AddDate(0, 0, -1),
})
require.Equal(t, []meterreportertypes.Meter{
meterreportertypes.NewMeter(meterB, 0, metercollectortypes.UnitCount, metercollectortypes.AggregationSum, window, nil),
meterreportertypes.NewMeter(meterC, 0, metercollectortypes.UnitCount, metercollectortypes.AggregationSum, window, nil),
}, kept)
}
func TestCatchupStart(t *testing.T) {
meterA := metercollectortypes.MustNewName("signoz.test.a")
floor := time.Date(2026, 5, 1, 0, 0, 0, 0, time.UTC)
todayStart := time.Date(2026, 5, 5, 0, 0, 0, 0, time.UTC)
provider := &Provider{
collectors: []metercollector.MeterCollector{
testCollector{name: meterA},
},
}
t.Run("no checkpoint starts at floor", func(t *testing.T) {
require.Equal(t, floor, provider.catchupStart(floor, todayStart, nil))
})
t.Run("checkpoint advances by one day", func(t *testing.T) {
require.Equal(t, floor.AddDate(0, 0, 2), provider.catchupStart(floor, todayStart, map[string]time.Time{
meterA.String(): floor.AddDate(0, 0, 1),
}))
})
}
type testCollector struct {
name metercollectortypes.Name
unit metercollectortypes.Unit
aggregation metercollectortypes.Aggregation
}
func (c testCollector) Name() metercollectortypes.Name {
return c.name
}
func (c testCollector) Unit() metercollectortypes.Unit {
if c.unit.IsZero() {
return metercollectortypes.UnitCount
}
return c.unit
}
func (c testCollector) Aggregation() metercollectortypes.Aggregation {
if c.aggregation.IsZero() {
return metercollectortypes.AggregationSum
}
return c.aggregation
}
func (c testCollector) Collect(context.Context, valuer.UUID, meterreportertypes.Window) ([]meterreportertypes.Meter, error) {
return nil, nil
}


@@ -1,90 +0,0 @@
package httpmeterreporter
import (
"github.com/SigNoz/signoz/pkg/errors"
"go.opentelemetry.io/otel/metric"
)
type reporterMetrics struct {
ticks metric.Int64Counter
readingsEmitted metric.Int64Counter
collectErrors metric.Int64Counter
postErrors metric.Int64Counter
checkpointErrors metric.Int64Counter
catchupDaysProcessed metric.Int64Counter
collectDuration metric.Float64Counter
collectOperations metric.Int64Counter
shipDuration metric.Float64Counter
shipOperations metric.Int64Counter
}
func newReporterMetrics(meter metric.Meter) (*reporterMetrics, error) {
var errs error
ticks, err := meter.Int64Counter("signoz.meterreporter.ticks", metric.WithDescription("Meter reporter ticks."))
if err != nil {
errs = errors.Join(errs, err)
}
readingsEmitted, err := meter.Int64Counter("signoz.meterreporter.readings.emitted", metric.WithDescription("Meter readings shipped to Zeus."))
if err != nil {
errs = errors.Join(errs, err)
}
collectErrors, err := meter.Int64Counter("signoz.meterreporter.collect.errors", metric.WithDescription("Meter collection errors."))
if err != nil {
errs = errors.Join(errs, err)
}
postErrors, err := meter.Int64Counter("signoz.meterreporter.post.errors", metric.WithDescription("Zeus POST failures."))
if err != nil {
errs = errors.Join(errs, err)
}
checkpointErrors, err := meter.Int64Counter("signoz.meterreporter.checkpoint.errors", metric.WithDescription("Zeus checkpoint read failures."))
if err != nil {
errs = errors.Join(errs, err)
}
catchupDaysProcessed, err := meter.Int64Counter("signoz.meterreporter.catchup.days_processed", metric.WithDescription("Sealed catchup days processed."))
if err != nil {
errs = errors.Join(errs, err)
}
collectDuration, err := meter.Float64Counter("signoz.meterreporter.collect.duration.seconds", metric.WithDescription("Cumulative collection duration."), metric.WithUnit("s"))
if err != nil {
errs = errors.Join(errs, err)
}
collectOperations, err := meter.Int64Counter("signoz.meterreporter.collect.operations", metric.WithDescription("Collection phases measured."))
if err != nil {
errs = errors.Join(errs, err)
}
shipDuration, err := meter.Float64Counter("signoz.meterreporter.ship.duration.seconds", metric.WithDescription("Cumulative ship duration."), metric.WithUnit("s"))
if err != nil {
errs = errors.Join(errs, err)
}
shipOperations, err := meter.Int64Counter("signoz.meterreporter.ship.operations", metric.WithDescription("Ship phases measured."))
if err != nil {
errs = errors.Join(errs, err)
}
if errs != nil {
return nil, errs
}
return &reporterMetrics{
ticks: ticks,
readingsEmitted: readingsEmitted,
collectErrors: collectErrors,
postErrors: postErrors,
checkpointErrors: checkpointErrors,
catchupDaysProcessed: catchupDaysProcessed,
collectDuration: collectDuration,
collectOperations: collectOperations,
shipDuration: shipDuration,
shipOperations: shipOperations,
}, nil
}
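`newReporterMetrics` accumulates instrument-creation failures and aborts only once at the end, so a single bad instrument doesn't mask errors from the others. The same accumulate-then-check shape with the standard library's `errors.Join` (a sketch — `newCounter` is a hypothetical constructor, and the SigNoz `errors` package is assumed to join errors similarly):

```go
package main

import (
	"errors"
	"fmt"
)

// newCounter is a hypothetical instrument constructor that fails on empty names.
func newCounter(name string) (string, error) {
	if name == "" {
		return "", errors.New("counter name must not be empty")
	}
	return name, nil
}

// buildAll mirrors newReporterMetrics: every constructor runs, failures are
// joined, and the joined error is returned only after all were attempted.
func buildAll(names []string) ([]string, error) {
	var errs error
	counters := make([]string, 0, len(names))
	for _, n := range names {
		c, err := newCounter(n)
		if err != nil {
			errs = errors.Join(errs, err)
			continue
		}
		counters = append(counters, c)
	}
	if errs != nil {
		return nil, errs
	}
	return counters, nil
}

func main() {
	// Two empty names: both failures surface in one joined error.
	_, err := buildAll([]string{"ticks", "", ""})
	fmt.Println(err != nil) // prints true
}
```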


@@ -11,8 +11,10 @@ import (
"github.com/SigNoz/signoz/pkg/modules/dashboard"
pkgimpldashboard "github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/tag"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/ctxtypes"
@@ -30,9 +32,9 @@ type module struct {
licensing licensing.Licensing
}
func NewModule(store dashboardtypes.Store, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, queryParser queryparser.QueryParser, querier querier.Querier, licensing licensing.Licensing) dashboard.Module {
func NewModule(store dashboardtypes.Store, sqlstore sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, queryParser queryparser.QueryParser, querier querier.Querier, licensing licensing.Licensing, tagModule tag.Module) dashboard.Module {
scopedProviderSettings := factory.NewScopedProviderSettings(settings, "github.com/SigNoz/signoz/ee/modules/dashboard/impldashboard")
pkgDashboardModule := pkgimpldashboard.NewModule(store, settings, analytics, orgGetter, queryParser)
pkgDashboardModule := pkgimpldashboard.NewModule(store, sqlstore, settings, analytics, orgGetter, queryParser, tagModule)
return &module{
pkgDashboardModule: pkgDashboardModule,
@@ -197,6 +199,72 @@ func (module *module) Create(ctx context.Context, orgID valuer.UUID, createdBy s
return module.pkgDashboardModule.Create(ctx, orgID, createdBy, creator, data)
}
func (module *module) CreateV2(ctx context.Context, orgID valuer.UUID, createdBy string, creator valuer.UUID, postable dashboardtypes.PostableDashboardV2) (*dashboardtypes.DashboardV2, error) {
return module.pkgDashboardModule.CreateV2(ctx, orgID, createdBy, creator, postable)
}
func (module *module) GetV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*dashboardtypes.DashboardV2, error) {
return module.pkgDashboardModule.GetV2(ctx, orgID, id)
}
func (module *module) UpdateV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, updateable dashboardtypes.UpdateableDashboardV2) (*dashboardtypes.DashboardV2, error) {
return module.pkgDashboardModule.UpdateV2(ctx, orgID, id, updatedBy, updateable)
}
func (module *module) PatchV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, patch dashboardtypes.PatchableDashboardV2) (*dashboardtypes.DashboardV2, error) {
return module.pkgDashboardModule.PatchV2(ctx, orgID, id, updatedBy, patch)
}
func (module *module) LockUnlockV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, isAdmin bool, lock bool) error {
return module.pkgDashboardModule.LockUnlockV2(ctx, orgID, id, updatedBy, isAdmin, lock)
}
func (module *module) CreatePublicV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, postable dashboardtypes.PostablePublicDashboard) (*dashboardtypes.DashboardV2, error) {
if _, err := module.licensing.GetActive(ctx, orgID); err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
existing, err := module.pkgDashboardModule.GetV2(ctx, orgID, id)
if err != nil {
return nil, err
}
if existing.PublicConfig != nil {
return nil, errors.Newf(errors.TypeAlreadyExists, dashboardtypes.ErrCodePublicDashboardAlreadyExists, "dashboard with id %s is already public", id)
}
publicDashboard := dashboardtypes.NewPublicDashboard(postable.TimeRangeEnabled, postable.DefaultTimeRange, id)
if err := module.store.CreatePublic(ctx, dashboardtypes.NewStorablePublicDashboardFromPublicDashboard(publicDashboard)); err != nil {
return nil, err
}
existing.PublicConfig = publicDashboard
return existing, nil
}
func (module *module) DeleteV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, deletedBy string) error {
return module.pkgDashboardModule.DeleteV2(ctx, orgID, id, deletedBy)
}
func (module *module) UpdatePublicV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatable dashboardtypes.UpdatablePublicDashboard) (*dashboardtypes.DashboardV2, error) {
if _, err := module.licensing.GetActive(ctx, orgID); err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
existing, err := module.pkgDashboardModule.GetV2(ctx, orgID, id)
if err != nil {
return nil, err
}
if existing.PublicConfig == nil {
return nil, errors.Newf(errors.TypeNotFound, dashboardtypes.ErrCodePublicDashboardNotFound, "dashboard with id %s isn't public", id)
}
existing.PublicConfig.Update(updatable.TimeRangeEnabled, updatable.DefaultTimeRange)
if err := module.store.UpdatePublic(ctx, dashboardtypes.NewStorablePublicDashboardFromPublicDashboard(existing.PublicConfig)); err != nil {
return nil, err
}
return existing, nil
}
func (module *module) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*dashboardtypes.Dashboard, error) {
return module.pkgDashboardModule.Get(ctx, orgID, id)
}


@@ -150,72 +150,6 @@ func (provider *Provider) PutMetersV2(ctx context.Context, key string, data []by
return err
}
func (provider *Provider) PutMetersV3(ctx context.Context, key string, idempotencyKey string, data []byte) error {
headers := http.Header{}
if idempotencyKey != "" {
headers.Set("X-Idempotency-Key", idempotencyKey)
}
_, err := provider.doWithHeaders(
ctx,
provider.config.URL.JoinPath("/v2/meters"),
http.MethodPost,
key,
data,
headers,
)
return err
}
func (provider *Provider) GetMeterCheckpoints(ctx context.Context, key string) ([]zeustypes.MeterCheckpoint, error) {
response, err := provider.do(
ctx,
provider.config.URL.JoinPath("/v2/meters/checkpoints"),
http.MethodGet,
key,
nil,
)
if err != nil {
return nil, err
}
checkpointValues := gjson.GetBytes(response, "data.checkpoints")
if !checkpointValues.Exists() || checkpointValues.Type == gjson.Null {
return nil, errors.Newf(errors.TypeInternal, zeus.ErrCodeResponseMalformed, "meter checkpoints are required")
}
if !checkpointValues.IsArray() {
return nil, errors.Newf(errors.TypeInternal, zeus.ErrCodeResponseMalformed, "meter checkpoints must be an array")
}
checkpointResults := checkpointValues.Array()
checkpoints := make([]zeustypes.MeterCheckpoint, 0, len(checkpointResults))
for _, checkpointValue := range checkpointResults {
name := checkpointValue.Get("name").String()
if name == "" {
return nil, errors.Newf(errors.TypeInternal, zeus.ErrCodeResponseMalformed, "meter checkpoint name is required")
}
checkpointString := checkpointValue.Get("checkpoint").String()
if checkpointString == "" {
return nil, errors.Newf(errors.TypeInternal, zeus.ErrCodeResponseMalformed, "meter checkpoint is required for %q", name)
}
checkpoint, err := time.Parse("2006-01-02", checkpointString)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, zeus.ErrCodeResponseMalformed, "parse meter checkpoint %q for %q", checkpointString, name)
}
checkpoints = append(checkpoints, zeustypes.MeterCheckpoint{
Name: name,
Checkpoint: checkpoint,
})
}
return checkpoints, nil
}
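`GetMeterCheckpoints` above parses each `checkpoint` field with Go's reference-date layout `"2006-01-02"`. A standalone sketch of just that parse-and-validate step (this `parseCheckpoint` helper is illustrative; the real code wraps errors with the package's typed error constructors):

```go
package main

import (
	"fmt"
	"time"
)

// parseCheckpoint converts a "YYYY-MM-DD" string into a time.Time using
// Go's reference date 2006-01-02 as the layout, rejecting empty values
// the same way the provider does.
func parseCheckpoint(name, value string) (time.Time, error) {
	if value == "" {
		return time.Time{}, fmt.Errorf("meter checkpoint is required for %q", name)
	}
	cp, err := time.Parse("2006-01-02", value)
	if err != nil {
		return time.Time{}, fmt.Errorf("parse meter checkpoint %q for %q: %w", value, name, err)
	}
	return cp, nil
}

func main() {
	cp, err := parseCheckpoint("signoz.test.a", "2026-05-04")
	fmt.Println(cp.Year(), int(cp.Month()), cp.Day(), err == nil) // prints 2026 5 4 true
}
```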
func (provider *Provider) PutProfile(ctx context.Context, key string, profile *zeustypes.PostableProfile) error {
body, err := json.Marshal(profile)
if err != nil {
@@ -251,21 +185,12 @@ func (provider *Provider) PutHost(ctx context.Context, key string, host *zeustyp
}
func (provider *Provider) do(ctx context.Context, url *url.URL, method string, key string, requestBody []byte) ([]byte, error) {
return provider.doWithHeaders(ctx, url, method, key, requestBody, nil)
}
func (provider *Provider) doWithHeaders(ctx context.Context, url *url.URL, method string, key string, requestBody []byte, extraHeaders http.Header) ([]byte, error) {
request, err := http.NewRequestWithContext(ctx, method, url.String(), bytes.NewBuffer(requestBody))
if err != nil {
return nil, err
}
request.Header.Set("X-Signoz-Cloud-Api-Key", key)
request.Header.Set("Content-Type", "application/json")
for k, vs := range extraHeaders {
for _, v := range vs {
request.Header.Add(k, v)
}
}
response, err := provider.httpClient.Do(request)
if err != nil {


@@ -19,8 +19,8 @@ import type {
import type {
AuthtypesPostableAuthDomainDTO,
AuthtypesUpdateableAuthDomainDTO,
CreateAuthDomain200,
AuthtypesUpdatableAuthDomainDTO,
CreateAuthDomain201,
DeleteAuthDomainPathParameters,
GetAuthDomain200,
GetAuthDomainPathParameters,
@@ -126,7 +126,7 @@ export const createAuthDomain = (
authtypesPostableAuthDomainDTO: BodyType<AuthtypesPostableAuthDomainDTO>,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<CreateAuthDomain200>({
return GeneratedAPIInstance<CreateAuthDomain201>({
url: `/api/v1/domains`,
method: 'POST',
headers: { 'Content-Type': 'application/json' },
@@ -388,13 +388,13 @@ export const invalidateGetAuthDomain = async (
*/
export const updateAuthDomain = (
{ id }: UpdateAuthDomainPathParameters,
authtypesUpdateableAuthDomainDTO: BodyType<AuthtypesUpdateableAuthDomainDTO>,
authtypesUpdatableAuthDomainDTO: BodyType<AuthtypesUpdatableAuthDomainDTO>,
) => {
return GeneratedAPIInstance<void>({
url: `/api/v1/domains/${id}`,
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
data: authtypesUpdateableAuthDomainDTO,
data: authtypesUpdatableAuthDomainDTO,
});
};
@@ -407,7 +407,7 @@ export const getUpdateAuthDomainMutationOptions = <
TError,
{
pathParams: UpdateAuthDomainPathParameters;
data: BodyType<AuthtypesUpdateableAuthDomainDTO>;
data: BodyType<AuthtypesUpdatableAuthDomainDTO>;
},
TContext
>;
@@ -416,7 +416,7 @@ export const getUpdateAuthDomainMutationOptions = <
TError,
{
pathParams: UpdateAuthDomainPathParameters;
data: BodyType<AuthtypesUpdateableAuthDomainDTO>;
data: BodyType<AuthtypesUpdatableAuthDomainDTO>;
},
TContext
> => {
@@ -433,7 +433,7 @@ export const getUpdateAuthDomainMutationOptions = <
Awaited<ReturnType<typeof updateAuthDomain>>,
{
pathParams: UpdateAuthDomainPathParameters;
data: BodyType<AuthtypesUpdateableAuthDomainDTO>;
data: BodyType<AuthtypesUpdatableAuthDomainDTO>;
}
> = (props) => {
const { pathParams, data } = props ?? {};
@@ -448,7 +448,7 @@ export type UpdateAuthDomainMutationResult = NonNullable<
Awaited<ReturnType<typeof updateAuthDomain>>
>;
export type UpdateAuthDomainMutationBody =
BodyType<AuthtypesUpdateableAuthDomainDTO>;
BodyType<AuthtypesUpdatableAuthDomainDTO>;
export type UpdateAuthDomainMutationError = ErrorType<RenderErrorResponseDTO>;
/**
@@ -463,7 +463,7 @@ export const useUpdateAuthDomain = <
TError,
{
pathParams: UpdateAuthDomainPathParameters;
data: BodyType<AuthtypesUpdateableAuthDomainDTO>;
data: BodyType<AuthtypesUpdatableAuthDomainDTO>;
},
TContext
>;
@@ -472,7 +472,7 @@ export const useUpdateAuthDomain = <
TError,
{
pathParams: UpdateAuthDomainPathParameters;
data: BodyType<AuthtypesUpdateableAuthDomainDTO>;
data: BodyType<AuthtypesUpdatableAuthDomainDTO>;
},
TContext
> => {


@@ -18,8 +18,10 @@ import type {
} from 'react-query';
import type {
CreateDashboardV2201,
CreatePublicDashboard201,
CreatePublicDashboardPathParameters,
DashboardtypesPostableDashboardV2DTO,
DashboardtypesPostablePublicDashboardDTO,
DashboardtypesUpdatablePublicDashboardDTO,
DeletePublicDashboardPathParameters,
@@ -634,3 +636,88 @@ export const invalidateGetPublicDashboardWidgetQueryRange = async (
return queryClient;
};
/**
* This endpoint creates a v2-shaped dashboard with structured metadata, a typed data tree, and resolved tags.
* @summary Create dashboard (v2)
*/
export const createDashboardV2 = (
dashboardtypesPostableDashboardV2DTO: BodyType<DashboardtypesPostableDashboardV2DTO>,
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<CreateDashboardV2201>({
url: `/api/v2/dashboards`,
method: 'POST',
headers: { 'Content-Type': 'application/json' },
data: dashboardtypesPostableDashboardV2DTO,
signal,
});
};
export const getCreateDashboardV2MutationOptions = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown,
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createDashboardV2>>,
TError,
{ data: BodyType<DashboardtypesPostableDashboardV2DTO> },
TContext
>;
}): UseMutationOptions<
Awaited<ReturnType<typeof createDashboardV2>>,
TError,
{ data: BodyType<DashboardtypesPostableDashboardV2DTO> },
TContext
> => {
const mutationKey = ['createDashboardV2'];
const { mutation: mutationOptions } = options
? options.mutation &&
'mutationKey' in options.mutation &&
options.mutation.mutationKey
? options
: { ...options, mutation: { ...options.mutation, mutationKey } }
: { mutation: { mutationKey } };
const mutationFn: MutationFunction<
Awaited<ReturnType<typeof createDashboardV2>>,
{ data: BodyType<DashboardtypesPostableDashboardV2DTO> }
> = (props) => {
const { data } = props ?? {};
return createDashboardV2(data);
};
return { mutationFn, ...mutationOptions };
};
export type CreateDashboardV2MutationResult = NonNullable<
Awaited<ReturnType<typeof createDashboardV2>>
>;
export type CreateDashboardV2MutationBody =
BodyType<DashboardtypesPostableDashboardV2DTO>;
export type CreateDashboardV2MutationError = ErrorType<RenderErrorResponseDTO>;
/**
* @summary Create dashboard (v2)
*/
export const useCreateDashboardV2 = <
TError = ErrorType<RenderErrorResponseDTO>,
TContext = unknown,
>(options?: {
mutation?: UseMutationOptions<
Awaited<ReturnType<typeof createDashboardV2>>,
TError,
{ data: BodyType<DashboardtypesPostableDashboardV2DTO> },
TContext
>;
}): UseMutationResult<
Awaited<ReturnType<typeof createDashboardV2>>,
TError,
{ data: BodyType<DashboardtypesPostableDashboardV2DTO> },
TContext
> => {
const mutationOptions = getCreateDashboardV2MutationOptions(options);
return useMutation(mutationOptions);
};

File diff suppressed because it is too large.


@@ -0,0 +1,126 @@
import { ReactNode, useCallback, useEffect, useMemo, useRef } from 'react';
import { parseAsString, useQueryState } from 'nuqs';
import { useStore } from 'zustand';
import {
combineInitialAndUserExpression,
getUserExpressionFromCombined,
} from '../utils';
import { QuerySearchV2Context } from './context';
import type { QuerySearchV2ContextValue } from './QuerySearchV2.store';
import { createExpressionStore } from './QuerySearchV2.store';
export interface QuerySearchV2ProviderProps {
queryParamKey: string;
initialExpression?: string;
/**
* @default false
*/
persistOnUnmount?: boolean;
children: ReactNode;
}
/**
* Provider component that creates a scoped zustand store and exposes
* expression state to children via context.
*/
export function QuerySearchV2Provider({
initialExpression = '',
persistOnUnmount = false,
queryParamKey,
children,
}: QuerySearchV2ProviderProps): JSX.Element {
const storeRef = useRef(createExpressionStore());
const store = storeRef.current;
const [urlExpression, setUrlExpression] = useQueryState(
queryParamKey,
parseAsString,
);
const committedExpression = useStore(store, (s) => s.committedExpression);
const setInputExpression = useStore(store, (s) => s.setInputExpression);
const commitExpression = useStore(store, (s) => s.commitExpression);
const initializeFromUrl = useStore(store, (s) => s.initializeFromUrl);
const resetExpression = useStore(store, (s) => s.resetExpression);
const isInitialized = useRef(false);
useEffect(() => {
if (!isInitialized.current && urlExpression) {
const cleanedExpression = getUserExpressionFromCombined(
initialExpression,
urlExpression,
);
initializeFromUrl(cleanedExpression);
isInitialized.current = true;
}
}, [urlExpression, initialExpression, initializeFromUrl]);
useEffect(() => {
if (isInitialized.current || !urlExpression) {
setUrlExpression(committedExpression || null);
}
}, [committedExpression, setUrlExpression, urlExpression]);
useEffect(() => {
return (): void => {
if (!persistOnUnmount) {
setUrlExpression(null);
resetExpression();
}
};
}, [persistOnUnmount, setUrlExpression, resetExpression]);
const handleChange = useCallback(
(expression: string): void => {
const userOnly = getUserExpressionFromCombined(
initialExpression,
expression,
);
setInputExpression(userOnly);
},
[initialExpression, setInputExpression],
);
const handleRun = useCallback(
(expression: string): void => {
const userOnly = getUserExpressionFromCombined(
initialExpression,
expression,
);
commitExpression(userOnly);
},
[initialExpression, commitExpression],
);
const combinedExpression = useMemo(
() => combineInitialAndUserExpression(initialExpression, committedExpression),
[initialExpression, committedExpression],
);
const contextValue = useMemo<QuerySearchV2ContextValue>(
() => ({
expression: combinedExpression,
userExpression: committedExpression,
initialExpression,
querySearchProps: {
initialExpression: initialExpression.trim() ? initialExpression : undefined,
onChange: handleChange,
onRun: handleRun,
},
}),
[
combinedExpression,
committedExpression,
initialExpression,
handleChange,
handleRun,
],
);
return (
<QuerySearchV2Context.Provider value={contextValue}>
{children}
</QuerySearchV2Context.Provider>
);
}


@@ -0,0 +1,60 @@
import { createStore, StoreApi } from 'zustand';
export type QuerySearchV2Store = {
/**
* User-typed expression (local state, updates on typing)
*/
inputExpression: string;
/**
* Committed expression (synced to URL, updates on submit)
*/
committedExpression: string;
setInputExpression: (expression: string) => void;
commitExpression: (expression: string) => void;
resetExpression: () => void;
initializeFromUrl: (urlExpression: string) => void;
};
export interface QuerySearchProps {
initialExpression: string | undefined;
onChange: (expression: string) => void;
onRun: (expression: string) => void;
}
export interface QuerySearchV2ContextValue {
/**
* Combined expression: "initialExpression AND (userExpression)"
*/
expression: string;
userExpression: string;
initialExpression: string;
querySearchProps: QuerySearchProps;
}
export function createExpressionStore(): StoreApi<QuerySearchV2Store> {
return createStore<QuerySearchV2Store>((set) => ({
inputExpression: '',
committedExpression: '',
setInputExpression: (expression: string): void => {
set({ inputExpression: expression });
},
commitExpression: (expression: string): void => {
set({
inputExpression: expression,
committedExpression: expression,
});
},
resetExpression: (): void => {
set({
inputExpression: '',
committedExpression: '',
});
},
initializeFromUrl: (urlExpression: string): void => {
set({
inputExpression: urlExpression,
committedExpression: urlExpression,
});
},
}));
}
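The `expression` field documented above is the combined form `initialExpression AND (userExpression)`, with either side optional (the provider tests later in this diff assert exactly this shape). A language-agnostic sketch of the combination rule, written in Go for consistency with the backend examples — this `combine` is illustrative, not the frontend's actual `combineInitialAndUserExpression` implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// combine mirrors the documented shape "initial AND (user)"; when either
// side is empty, the other is returned unchanged.
func combine(initial, user string) string {
	initial = strings.TrimSpace(initial)
	user = strings.TrimSpace(user)
	switch {
	case initial == "":
		return user
	case user == "":
		return initial
	default:
		return fmt.Sprintf("%s AND (%s)", initial, user)
	}
}

func main() {
	fmt.Println(combine(`k8s.pod.name = "my-pod"`, `service = "api"`))
	// prints: k8s.pod.name = "my-pod" AND (service = "api")
}
```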


@@ -0,0 +1,95 @@
import { ReactNode } from 'react';
import { act, renderHook } from '@testing-library/react';
import { useQuerySearchV2Context } from '../context';
import {
QuerySearchV2Provider,
QuerySearchV2ProviderProps,
} from '../QuerySearchV2.provider';
const mockSetQueryState = jest.fn();
let mockUrlValue: string | null = null;
jest.mock('nuqs', () => ({
parseAsString: {},
useQueryState: jest.fn(() => [mockUrlValue, mockSetQueryState]),
}));
function createWrapper(
props: Partial<QuerySearchV2ProviderProps> = {},
): ({ children }: { children: ReactNode }) => JSX.Element {
return function Wrapper({ children }: { children: ReactNode }): JSX.Element {
return (
<QuerySearchV2Provider queryParamKey="testExpression" {...props}>
{children}
</QuerySearchV2Provider>
);
};
}
describe('QuerySearchV2Provider', () => {
beforeEach(() => {
jest.clearAllMocks();
mockUrlValue = null;
});
it('should provide initial context values', () => {
const { result } = renderHook(() => useQuerySearchV2Context(), {
wrapper: createWrapper(),
});
expect(result.current.expression).toBe('');
expect(result.current.userExpression).toBe('');
expect(result.current.initialExpression).toBe('');
});
it('should combine initialExpression with userExpression', () => {
const { result } = renderHook(() => useQuerySearchV2Context(), {
wrapper: createWrapper({ initialExpression: 'k8s.pod.name = "my-pod"' }),
});
expect(result.current.expression).toBe('k8s.pod.name = "my-pod"');
expect(result.current.initialExpression).toBe('k8s.pod.name = "my-pod"');
act(() => {
result.current.querySearchProps.onChange('service = "api"');
});
act(() => {
result.current.querySearchProps.onRun('service = "api"');
});
expect(result.current.expression).toBe(
'k8s.pod.name = "my-pod" AND (service = "api")',
);
expect(result.current.userExpression).toBe('service = "api"');
});
it('should provide querySearchProps with correct callbacks', () => {
const { result } = renderHook(() => useQuerySearchV2Context(), {
wrapper: createWrapper({ initialExpression: 'initial' }),
});
expect(result.current.querySearchProps.initialExpression).toBe('initial');
expect(typeof result.current.querySearchProps.onChange).toBe('function');
expect(typeof result.current.querySearchProps.onRun).toBe('function');
});
it('should initialize from URL value on mount', () => {
mockUrlValue = 'status = 500';
const { result } = renderHook(() => useQuerySearchV2Context(), {
wrapper: createWrapper(),
});
expect(result.current.userExpression).toBe('status = 500');
expect(result.current.expression).toBe('status = 500');
});
it('should throw error when used outside provider', () => {
expect(() => {
renderHook(() => useQuerySearchV2Context());
}).toThrow(
'useQuerySearchV2Context must be used within a QuerySearchV2Provider',
);
});
});


@@ -0,0 +1,61 @@
import { createExpressionStore } from '../QuerySearchV2.store';
describe('createExpressionStore', () => {
it('should create a store with initial state', () => {
const store = createExpressionStore();
const state = store.getState();
expect(state.inputExpression).toBe('');
expect(state.committedExpression).toBe('');
});
it('should update inputExpression via setInputExpression', () => {
const store = createExpressionStore();
store.getState().setInputExpression('service.name = "api"');
expect(store.getState().inputExpression).toBe('service.name = "api"');
expect(store.getState().committedExpression).toBe('');
});
it('should update both expressions via commitExpression', () => {
const store = createExpressionStore();
store.getState().setInputExpression('service.name = "api"');
store.getState().commitExpression('service.name = "api"');
expect(store.getState().inputExpression).toBe('service.name = "api"');
expect(store.getState().committedExpression).toBe('service.name = "api"');
});
it('should reset all state via resetExpression', () => {
const store = createExpressionStore();
store.getState().setInputExpression('service.name = "api"');
store.getState().commitExpression('service.name = "api"');
store.getState().resetExpression();
expect(store.getState().inputExpression).toBe('');
expect(store.getState().committedExpression).toBe('');
});
it('should initialize from URL value', () => {
const store = createExpressionStore();
store.getState().initializeFromUrl('status = 500');
expect(store.getState().inputExpression).toBe('status = 500');
expect(store.getState().committedExpression).toBe('status = 500');
});
it('should create isolated store instances', () => {
const store1 = createExpressionStore();
const store2 = createExpressionStore();
store1.getState().setInputExpression('expr1');
store2.getState().setInputExpression('expr2');
expect(store1.getState().inputExpression).toBe('expr1');
expect(store2.getState().inputExpression).toBe('expr2');
});
});


@@ -0,0 +1,17 @@
// eslint-disable-next-line no-restricted-imports -- React Context required for scoped store pattern
import { createContext, useContext } from 'react';
import type { QuerySearchV2ContextValue } from './QuerySearchV2.store';
export const QuerySearchV2Context =
createContext<QuerySearchV2ContextValue | null>(null);
export function useQuerySearchV2Context(): QuerySearchV2ContextValue {
const context = useContext(QuerySearchV2Context);
if (!context) {
throw new Error(
'useQuerySearchV2Context must be used within a QuerySearchV2Provider',
);
}
return context;
}


@@ -0,0 +1,8 @@
export { useQuerySearchV2Context } from './context';
export type { QuerySearchV2ProviderProps } from './QuerySearchV2.provider';
export { QuerySearchV2Provider } from './QuerySearchV2.provider';
export type {
QuerySearchProps,
QuerySearchV2ContextValue,
QuerySearchV2Store,
} from './QuerySearchV2.store';


@@ -19,6 +19,13 @@
display: flex;
flex-direction: row;
.query-search-initial-scope-label {
position: absolute;
left: 8px;
top: 10px;
z-index: 10;
}
.query-where-clause-editor {
flex: 1;
min-width: 400px;
@@ -53,6 +60,10 @@
}
}
}
&.hasInitialExpression .cm-editor .cm-content {
padding-left: 22px !important;
}
}
.cm-editor {
@@ -68,7 +79,6 @@
border-radius: 2px;
border: 1px solid var(--l1-border);
padding: 0px !important;
background-color: var(--l1-background) !important;
&:focus-within {
border-color: var(--l1-border);


@@ -30,7 +30,7 @@ import { useDashboardVariablesByType } from 'hooks/dashboard/useDashboardVariabl
import { useIsDarkMode } from 'hooks/useDarkMode';
import useDebounce from 'hooks/useDebounce';
import { debounce, isNull } from 'lodash-es';
import { Info, TriangleAlert } from 'lucide-react';
import { Filter, Info, TriangleAlert } from 'lucide-react';
import {
IDetailedError,
IQueryContext,
@@ -47,6 +47,7 @@ import { validateQuery } from 'utils/queryValidationUtils';
import { unquote } from 'utils/stringUtils';
import { queryExamples } from './constants';
import { combineInitialAndUserExpression } from './utils';
import './QuerySearch.styles.scss';
@@ -85,6 +86,8 @@ interface QuerySearchProps {
hardcodedAttributeKeys?: QueryKeyDataSuggestionsProps[];
onRun?: (query: string) => void;
showFilterSuggestionsWithoutMetric?: boolean;
/** When set, the editor shows only the user expression; API/filter uses `initial AND (user)`. */
initialExpression?: string;
}
function QuerySearch({
@@ -96,6 +99,7 @@ function QuerySearch({
signalSource,
hardcodedAttributeKeys,
showFilterSuggestionsWithoutMetric,
initialExpression,
}: QuerySearchProps): JSX.Element {
const isDarkMode = useIsDarkMode();
const [valueSuggestions, setValueSuggestions] = useState<any[]>([]);
@@ -112,18 +116,26 @@ function QuerySearch({
const [isFocused, setIsFocused] = useState(false);
const editorRef = useRef<EditorView | null>(null);
const handleQueryValidation = useCallback((newExpression: string): void => {
try {
const validationResponse = validateQuery(newExpression);
setValidation(validationResponse);
} catch (error) {
setValidation({
isValid: false,
message: 'Failed to process query',
errors: [error as IDetailedError],
});
}
}, []);
const isScopedFilter = initialExpression !== undefined;
const validateExpressionForEditor = useCallback(
(editorDoc: string): void => {
const toValidate = isScopedFilter
? combineInitialAndUserExpression(initialExpression ?? '', editorDoc)
: editorDoc;
try {
const validationResponse = validateQuery(toValidate);
setValidation(validationResponse);
} catch (error) {
setValidation({
isValid: false,
message: 'Failed to process query',
errors: [error as IDetailedError],
});
}
},
[initialExpression, isScopedFilter],
);
const getCurrentExpression = useCallback(
(): string => editorRef.current?.state.doc.toString() || '',
@@ -165,6 +177,8 @@ function QuerySearch({
setIsEditorReady(true);
}, []);
const prevQueryDataExpressionRef = useRef<string | undefined>();
useEffect(
() => {
if (!isEditorReady) {
@@ -173,13 +187,22 @@ function QuerySearch({
const newExpression = queryData.filter?.expression || '';
const currentExpression = getCurrentExpression();
const prevExpression = prevQueryDataExpressionRef.current;
// Do not update codemirror editor if the expression is the same
if (newExpression !== currentExpression && !isFocused) {
// Only sync editor when queryData.filter?.expression actually changed from external source
// Not when focus changed (which would reset uncommitted user input)
const queryDataExpressionChanged = prevExpression !== newExpression;
prevQueryDataExpressionRef.current = newExpression;
if (
queryDataExpressionChanged &&
newExpression !== currentExpression &&
!isFocused
) {
updateEditorValue(newExpression, { skipOnChange: true });
if (newExpression) {
handleQueryValidation(newExpression);
}
}
if (!isFocused) {
validateExpressionForEditor(currentExpression);
}
},
// eslint-disable-next-line react-hooks/exhaustive-deps
@@ -284,7 +307,7 @@ function QuerySearch({
}
});
}
setKeySuggestions(Array.from(merged.values()));
setKeySuggestions([...merged.values()]);
// Force reopen the completion if editor is available and focused
if (editorRef.current) {
@@ -337,7 +360,7 @@ function QuerySearch({
// If value contains single quotes, escape them and wrap in single quotes
if (value.includes("'")) {
// Replace single quotes with escaped single quotes
const escapedValue = value.replace(/'/g, "\\'");
const escapedValue = value.replaceAll(/'/g, "\\'");
return `'${escapedValue}'`;
}
@@ -614,7 +637,7 @@ function QuerySearch({
const handleBlur = (): void => {
const currentExpression = getCurrentExpression();
handleQueryValidation(currentExpression);
validateExpressionForEditor(currentExpression);
setIsFocused(false);
};
@@ -632,7 +655,6 @@ function QuerySearch({
);
const handleExampleClick = (exampleQuery: string): void => {
// If there's an existing query, append the example with AND
const currentExpression = getCurrentExpression();
const newExpression = currentExpression
? `${currentExpression} AND ${exampleQuery}`
@@ -897,12 +919,12 @@ function QuerySearch({
// If we have previous pairs, we can prioritize keys that haven't been used yet
if (queryContext.queryPairs && queryContext.queryPairs.length > 0) {
const usedKeys = queryContext.queryPairs.map((pair) => pair.key);
const usedKeys = new Set(queryContext.queryPairs.map((pair) => pair.key));
// Add boost to unused keys to prioritize them
options = options.map((option) => ({
...option,
boost: usedKeys.includes(option.label) ? -10 : 10,
boost: usedKeys.has(option.label) ? -10 : 10,
}));
}
@@ -1317,6 +1339,19 @@ function QuerySearch({
)}
<div className="query-where-clause-editor-container">
{isScopedFilter ? (
<Tooltip title={initialExpression || ''} placement="left">
<div className="query-search-initial-scope-label">
<Filter
size={14}
style={{
opacity: 0.9,
color: isDarkMode ? Color.BG_VANILLA_100 : Color.BG_INK_500,
}}
/>
</div>
</Tooltip>
) : null}
<Tooltip
title={<div data-log-detail-ignore="true">{getTooltipContent()}</div>}
placement="left"
@@ -1356,6 +1391,7 @@ function QuerySearch({
className={cx('query-where-clause-editor', {
isValid: validation.isValid === true,
hasErrors: validation.errors.length > 0,
hasInitialExpression: isScopedFilter,
})}
extensions={[
autocompletion({
@@ -1390,7 +1426,12 @@ function QuerySearch({
// Mod-Enter is usually Ctrl-Enter or Cmd-Enter based on OS
run: (): boolean => {
if (onRun && typeof onRun === 'function') {
onRun(getCurrentExpression());
const user = getCurrentExpression();
onRun(
isScopedFilter
? combineInitialAndUserExpression(initialExpression ?? '', user)
: user,
);
}
return true;
},
@@ -1555,6 +1596,7 @@ QuerySearch.defaultProps = {
placeholder:
"Enter your filter query (e.g., http.status_code >= 500 AND service.name = 'frontend')",
showFilterSuggestionsWithoutMetric: false,
initialExpression: undefined,
};
export default QuerySearch;
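The sync guard introduced in the effect above (only push `queryData.filter?.expression` into the editor when it actually changed externally, differs from what the editor shows, and the user is not mid-edit) can be modeled as a pure predicate. This is a hypothetical stand-in for illustration, not the component's actual code:

```typescript
// Hypothetical model of the editor-sync decision in QuerySearch:
// sync only when the external expression changed, AND it differs from the
// editor's current value, AND the editor is not focused (user not typing).
function shouldSyncEditor(
  prevExternal: string | undefined,
  nextExternal: string,
  editorValue: string,
  isFocused: boolean,
): boolean {
  const externallyChanged = prevExternal !== nextExternal;
  return externallyChanged && nextExternal !== editorValue && !isFocused;
}

// A blur with unchanged external state must not clobber uncommitted input:
console.log(shouldSyncEditor('a = 1', 'a = 1', 'a = 1 AND b', false)); // false
// A genuine external update while blurred does sync:
console.log(shouldSyncEditor('a = 1', 'a = 2', 'a = 1', false)); // true
```

This is why the diff tracks `prevQueryDataExpressionRef`: without the first condition, a focus change alone could reset uncommitted user input.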


@@ -0,0 +1,58 @@
import {
combineInitialAndUserExpression,
getUserExpressionFromCombined,
} from '../utils';
describe('entityLogsExpression', () => {
describe('combineInitialAndUserExpression', () => {
it('returns user when initial is empty', () => {
expect(combineInitialAndUserExpression('', 'body contains error')).toBe(
'body contains error',
);
});
it('returns initial when user is empty', () => {
expect(combineInitialAndUserExpression('k8s.pod.name = "x"', '')).toBe(
'k8s.pod.name = "x"',
);
});
it('wraps user in parentheses with AND', () => {
expect(
combineInitialAndUserExpression('k8s.pod.name = "x"', 'body = "a"'),
).toBe('k8s.pod.name = "x" AND (body = "a")');
});
});
describe('getUserExpressionFromCombined', () => {
it('returns empty when combined equals initial', () => {
expect(
getUserExpressionFromCombined('k8s.pod.name = "x"', 'k8s.pod.name = "x"'),
).toBe('');
});
it('extracts user from wrapped form', () => {
expect(
getUserExpressionFromCombined(
'k8s.pod.name = "x"',
'k8s.pod.name = "x" AND (body = "a")',
),
).toBe('body = "a"');
});
it('extracts user from legacy AND without parens', () => {
expect(
getUserExpressionFromCombined(
'k8s.pod.name = "x"',
'k8s.pod.name = "x" AND body = "a"',
),
).toBe('body = "a"');
});
it('returns full combined when initial is empty', () => {
expect(getUserExpressionFromCombined('', 'service.name = "a"')).toBe(
'service.name = "a"',
);
});
});
});


@@ -0,0 +1,40 @@
export function combineInitialAndUserExpression(
initial: string,
user: string,
): string {
const i = initial.trim();
const u = user.trim();
if (!i) {
return u;
}
if (!u) {
return i;
}
return `${i} AND (${u})`;
}
export function getUserExpressionFromCombined(
initial: string,
combined: string | null | undefined,
): string {
const i = initial.trim();
const c = (combined ?? '').trim();
if (!c) {
return '';
}
if (!i) {
return c;
}
if (c === i) {
return '';
}
const wrappedPrefix = `${i} AND (`;
if (c.startsWith(wrappedPrefix) && c.endsWith(')')) {
return c.slice(wrappedPrefix.length, -1);
}
const plainPrefix = `${i} AND `;
if (c.startsWith(plainPrefix)) {
return c.slice(plainPrefix.length);
}
return c;
}
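The two helpers above are designed to round-trip: combining an initial scope with a user expression and then extracting the user part returns the original user expression. A standalone check, re-declaring the helpers exactly as in the file above:

```typescript
// Re-declaration of the helpers above, for a standalone round-trip check.
function combineInitialAndUserExpression(initial: string, user: string): string {
  const i = initial.trim();
  const u = user.trim();
  if (!i) return u;
  if (!u) return i;
  return `${i} AND (${u})`;
}

function getUserExpressionFromCombined(
  initial: string,
  combined: string | null | undefined,
): string {
  const i = initial.trim();
  const c = (combined ?? '').trim();
  if (!c) return '';
  if (!i) return c;
  if (c === i) return '';
  const wrappedPrefix = `${i} AND (`;
  if (c.startsWith(wrappedPrefix) && c.endsWith(')')) {
    return c.slice(wrappedPrefix.length, -1);
  }
  const plainPrefix = `${i} AND `;
  if (c.startsWith(plainPrefix)) return c.slice(plainPrefix.length);
  return c;
}

// Round trip: extract(initial, combine(initial, user)) === user
const initial = 'k8s.pod.name = "x"';
const user = 'body = "a" AND level = "error"';
const combined = combineInitialAndUserExpression(initial, user);
console.log(combined); // k8s.pod.name = "x" AND (body = "a" AND level = "error")
console.log(getUserExpressionFromCombined(initial, combined) === user); // true
```

Note the extraction is purely textual (prefix/suffix matching), so it assumes the combined string was produced by the combiner or by the legacy `initial AND user` form.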


@@ -0,0 +1,14 @@
export type {
QuerySearchProps,
QuerySearchV2ContextValue,
QuerySearchV2ProviderProps,
} from './QueryV2/QuerySearch/Provider';
export {
QuerySearchV2Provider,
useQuerySearchV2Context,
} from './QueryV2/QuerySearch/Provider';
export { QueryBuilderV2 } from './QueryBuilderV2';
export {
QueryBuilderV2Provider,
useQueryBuilderV2Context,
} from './QueryBuilderV2Context';


@@ -60,7 +60,7 @@ function CreateOrEdit(props: CreateOrEditProps): JSX.Element {
const [form] = Form.useForm<FormValues>();
const [authnProvider, setAuthnProvider] = useState<
AuthtypesAuthNProviderDTO | ''
>(record?.ssoType || '');
>(record?.config?.ssoType || '');
const { showErrorModal } = useErrorModal();
const { featureFlags } = useAppContext();


@@ -112,21 +112,26 @@ export function prepareInitialValues(
};
}
const config = record.config ?? {};
return {
...record,
googleAuthConfig: record.googleAuthConfig
name: record.name,
ssoEnabled: config.ssoEnabled,
ssoType: config.ssoType,
samlConfig: config.samlConfig ?? undefined,
oidcConfig: config.oidcConfig ?? undefined,
googleAuthConfig: config.googleAuthConfig
? {
...record.googleAuthConfig,
...config.googleAuthConfig,
domainToAdminEmailList: convertDomainMappingsToList(
record.googleAuthConfig.domainToAdminEmail,
config.googleAuthConfig.domainToAdminEmail,
),
}
: undefined,
roleMapping: record.roleMapping
roleMapping: config.roleMapping
? {
...record.roleMapping,
...config.roleMapping,
groupMappingsList: convertGroupMappingsToList(
record.roleMapping.groupMappings,
config.roleMapping.groupMappings,
),
}
: undefined,
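The diffs above all follow one reshaping: SSO fields that used to sit at the response root now live under a nested `config` object, matching the request shape. A minimal sketch of that reshaping; the field names come from the diff, but the `FlatAuthDomain` type and `nestAuthDomain` helper are hypothetical illustrations, not project code:

```typescript
// Hypothetical flat shape (pre-change response root) and a nesting helper.
interface FlatAuthDomain {
  id: string;
  name: string;
  ssoEnabled?: boolean;
  ssoType?: string;
  samlConfig?: object;
  oidcConfig?: object;
  googleAuthConfig?: object;
  roleMapping?: object;
}

// Move every SSO field under `config`, mirroring GettableAuthDomain's new shape.
function nestAuthDomain(flat: FlatAuthDomain) {
  const { id, name, ...rest } = flat;
  return { id, name, config: { ...rest } };
}

const nested = nestAuthDomain({
  id: 'domain-1',
  name: 'signoz.io',
  ssoEnabled: true,
  ssoType: 'google_auth',
});
console.log(nested.config.ssoEnabled); // true
```

With this shape, reads like `record.ssoType` become `record.config?.ssoType`, which is exactly the mechanical change running through `CreateOrEdit`, `prepareInitialValues`, and `SSOEnforcementToggle`.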


@@ -43,11 +43,11 @@ function SSOEnforcementToggle({
data: {
config: {
ssoEnabled: checked,
ssoType: record.ssoType,
googleAuthConfig: record.googleAuthConfig,
oidcConfig: record.oidcConfig,
samlConfig: record.samlConfig,
roleMapping: record.roleMapping,
ssoType: record.config?.ssoType,
googleAuthConfig: record.config?.googleAuthConfig,
oidcConfig: record.config?.oidcConfig,
samlConfig: record.config?.samlConfig,
roleMapping: record.config?.roleMapping,
},
},
},


@@ -55,7 +55,10 @@ describe('SSOEnforcementToggle', () => {
render(
<SSOEnforcementToggle
isDefaultChecked={false}
record={{ ...mockGoogleAuthDomain, ssoEnabled: false }}
record={{
...mockGoogleAuthDomain,
config: { ...mockGoogleAuthDomain.config, ssoEnabled: false },
}}
/>,
);


@@ -13,11 +13,13 @@ export const AUTH_DOMAINS_DELETE_ENDPOINT = '*/api/v1/domains/:id';
export const mockGoogleAuthDomain: AuthtypesGettableAuthDomainDTO = {
id: 'domain-1',
name: 'signoz.io',
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.google_auth,
googleAuthConfig: {
clientId: 'test-client-id',
clientSecret: 'test-client-secret',
config: {
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.google_auth,
googleAuthConfig: {
clientId: 'test-client-id',
clientSecret: 'test-client-secret',
},
},
authNProviderInfo: {
relayStatePath: 'api/v1/sso/relay/domain-1',
@@ -28,12 +30,14 @@ export const mockGoogleAuthDomain: AuthtypesGettableAuthDomainDTO = {
export const mockSamlAuthDomain: AuthtypesGettableAuthDomainDTO = {
id: 'domain-2',
name: 'example.com',
ssoEnabled: false,
ssoType: AuthtypesAuthNProviderDTO.saml,
samlConfig: {
samlIdp: 'https://idp.example.com/sso',
samlEntity: 'urn:example:idp',
samlCert: 'MOCK_CERTIFICATE',
config: {
ssoEnabled: false,
ssoType: AuthtypesAuthNProviderDTO.saml,
samlConfig: {
samlIdp: 'https://idp.example.com/sso',
samlEntity: 'urn:example:idp',
samlCert: 'MOCK_CERTIFICATE',
},
},
authNProviderInfo: {
relayStatePath: 'api/v1/sso/relay/domain-2',
@@ -44,12 +48,14 @@ export const mockSamlAuthDomain: AuthtypesGettableAuthDomainDTO = {
export const mockOidcAuthDomain: AuthtypesGettableAuthDomainDTO = {
id: 'domain-3',
name: 'corp.io',
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.oidc,
oidcConfig: {
issuer: 'https://oidc.corp.io',
clientId: 'oidc-client-id',
clientSecret: 'oidc-client-secret',
config: {
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.oidc,
oidcConfig: {
issuer: 'https://oidc.corp.io',
clientId: 'oidc-client-id',
clientSecret: 'oidc-client-secret',
},
},
authNProviderInfo: {
relayStatePath: 'api/v1/sso/relay/domain-3',
@@ -60,20 +66,22 @@ export const mockOidcAuthDomain: AuthtypesGettableAuthDomainDTO = {
export const mockDomainWithRoleMapping: AuthtypesGettableAuthDomainDTO = {
id: 'domain-4',
name: 'enterprise.com',
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.saml,
samlConfig: {
samlIdp: 'https://idp.enterprise.com/sso',
samlEntity: 'urn:enterprise:idp',
samlCert: 'MOCK_CERTIFICATE',
},
roleMapping: {
defaultRole: 'EDITOR',
useRoleAttribute: false,
groupMappings: {
'admin-group': 'ADMIN',
'dev-team': 'EDITOR',
viewers: 'VIEWER',
config: {
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.saml,
samlConfig: {
samlIdp: 'https://idp.enterprise.com/sso',
samlEntity: 'urn:enterprise:idp',
samlCert: 'MOCK_CERTIFICATE',
},
roleMapping: {
defaultRole: 'EDITOR',
useRoleAttribute: false,
groupMappings: {
'admin-group': 'ADMIN',
'dev-team': 'EDITOR',
viewers: 'VIEWER',
},
},
},
authNProviderInfo: {
@@ -86,16 +94,18 @@ export const mockDomainWithDirectRoleAttribute: AuthtypesGettableAuthDomainDTO =
{
id: 'domain-5',
name: 'direct-role.com',
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.oidc,
oidcConfig: {
issuer: 'https://oidc.direct-role.com',
clientId: 'direct-role-client-id',
clientSecret: 'direct-role-client-secret',
},
roleMapping: {
defaultRole: 'VIEWER',
useRoleAttribute: true,
config: {
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.oidc,
oidcConfig: {
issuer: 'https://oidc.direct-role.com',
clientId: 'direct-role-client-id',
clientSecret: 'direct-role-client-secret',
},
roleMapping: {
defaultRole: 'VIEWER',
useRoleAttribute: true,
},
},
authNProviderInfo: {
relayStatePath: 'api/v1/sso/relay/domain-5',
@@ -106,20 +116,22 @@ export const mockDomainWithDirectRoleAttribute: AuthtypesGettableAuthDomainDTO =
export const mockOidcWithClaimMapping: AuthtypesGettableAuthDomainDTO = {
id: 'domain-6',
name: 'oidc-claims.com',
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.oidc,
oidcConfig: {
issuer: 'https://oidc.claims.com',
issuerAlias: 'https://alias.claims.com',
clientId: 'claims-client-id',
clientSecret: 'claims-client-secret',
insecureSkipEmailVerified: true,
getUserInfo: true,
claimMapping: {
email: 'user_email',
name: 'display_name',
groups: 'user_groups',
role: 'user_role',
config: {
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.oidc,
oidcConfig: {
issuer: 'https://oidc.claims.com',
issuerAlias: 'https://alias.claims.com',
clientId: 'claims-client-id',
clientSecret: 'claims-client-secret',
insecureSkipEmailVerified: true,
getUserInfo: true,
claimMapping: {
email: 'user_email',
name: 'display_name',
groups: 'user_groups',
role: 'user_role',
},
},
},
authNProviderInfo: {
@@ -131,17 +143,19 @@ export const mockOidcWithClaimMapping: AuthtypesGettableAuthDomainDTO = {
export const mockSamlWithAttributeMapping: AuthtypesGettableAuthDomainDTO = {
id: 'domain-7',
name: 'saml-attrs.com',
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.saml,
samlConfig: {
samlIdp: 'https://idp.saml-attrs.com/sso',
samlEntity: 'urn:saml-attrs:idp',
samlCert: 'MOCK_CERTIFICATE_ATTRS',
insecureSkipAuthNRequestsSigned: true,
attributeMapping: {
name: 'user_display_name',
groups: 'member_of',
role: 'signoz_role',
config: {
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.saml,
samlConfig: {
samlIdp: 'https://idp.saml-attrs.com/sso',
samlEntity: 'urn:saml-attrs:idp',
samlCert: 'MOCK_CERTIFICATE_ATTRS',
insecureSkipAuthNRequestsSigned: true,
attributeMapping: {
name: 'user_display_name',
groups: 'member_of',
role: 'signoz_role',
},
},
},
authNProviderInfo: {
@@ -154,19 +168,21 @@ export const mockGoogleAuthWithWorkspaceGroups: AuthtypesGettableAuthDomainDTO =
{
id: 'domain-8',
name: 'google-groups.com',
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.google_auth,
googleAuthConfig: {
clientId: 'google-groups-client-id',
clientSecret: 'google-groups-client-secret',
insecureSkipEmailVerified: false,
fetchGroups: true,
serviceAccountJson: '{"type": "service_account"}',
domainToAdminEmail: {
'google-groups.com': 'admin@google-groups.com',
config: {
ssoEnabled: true,
ssoType: AuthtypesAuthNProviderDTO.google_auth,
googleAuthConfig: {
clientId: 'google-groups-client-id',
clientSecret: 'google-groups-client-secret',
insecureSkipEmailVerified: false,
fetchGroups: true,
serviceAccountJson: '{"type": "service_account"}',
domainToAdminEmail: {
'google-groups.com': 'admin@google-groups.com',
},
fetchTransitiveGroupMembership: true,
allowedGroups: ['allowed-group-1', 'allowed-group-2'],
},
fetchTransitiveGroupMembership: true,
allowedGroups: ['allowed-group-1', 'allowed-group-2'],
},
authNProviderInfo: {
relayStatePath: 'api/v1/sso/relay/domain-8',
@@ -191,15 +207,19 @@ export const mockSingleDomainResponse = {
data: [mockGoogleAuthDomain],
};
// Mock success responses
// Mock success responses. CreateAuthDomain returns just an Identifiable
// (the new domain ID); clients re-Read to get the full domain.
export const mockCreateSuccessResponse = {
status: 'success',
data: mockGoogleAuthDomain,
data: { id: mockGoogleAuthDomain.id },
};
export const mockUpdateSuccessResponse = {
status: 'success',
data: { ...mockGoogleAuthDomain, ssoEnabled: false },
data: {
...mockGoogleAuthDomain,
config: { ...mockGoogleAuthDomain.config, ssoEnabled: false },
},
};
export const mockDeleteSuccessResponse = {


@@ -158,7 +158,7 @@ function AuthDomain(): JSX.Element {
onClick={(): void => setRecord(record)}
variant="link"
>
Configure {SSOType.get(record.ssoType || '')}
Configure {SSOType.get(record.config?.ssoType || '')}
</Button>
<Button
className="auth-domain-list-action-link delete"

go.mod

@@ -88,6 +88,7 @@ require (
gonum.org/v1/gonum v0.17.0
google.golang.org/api v0.265.0
google.golang.org/protobuf v1.36.11
gopkg.in/evanphx/json-patch.v4 v4.13.0
gopkg.in/yaml.v2 v2.4.0
gopkg.in/yaml.v3 v3.0.1
k8s.io/apimachinery v0.35.2


@@ -34,9 +34,9 @@ func (provider *provider) addAuthDomainRoutes(router *mux.Router) error {
Description: "This endpoint creates an auth domain",
Request: new(authtypes.PostableAuthDomain),
RequestContentType: "application/json",
Response: new(authtypes.GettableAuthDomain),
Response: new(types.Identifiable),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusOK,
SuccessStatusCode: http.StatusCreated,
ErrorStatusCodes: []int{http.StatusBadRequest, http.StatusConflict},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleAdmin),
@@ -66,7 +66,7 @@ func (provider *provider) addAuthDomainRoutes(router *mux.Router) error {
Tags: []string{"authdomains"},
Summary: "Update auth domain",
Description: "This endpoint updates an auth domain",
Request: new(authtypes.UpdateableAuthDomain),
Request: new(authtypes.UpdatableAuthDomain),
RequestContentType: "application/json",
Response: nil,
ResponseContentType: "",


@@ -13,6 +13,165 @@ import (
)
func (provider *provider) addDashboardRoutes(router *mux.Router) error {
if err := router.Handle("/api/v2/dashboards", handler.New(provider.authZ.EditAccess(provider.dashboardHandler.CreateV2), handler.OpenAPIDef{
ID: "CreateDashboardV2",
Tags: []string{"dashboard"},
Summary: "Create dashboard (v2)",
Description: "This endpoint creates a v2-shape dashboard with structured metadata, a typed data tree, and resolved tags.",
Request: new(dashboardtypes.PostableDashboardV2),
RequestContentType: "application/json",
Response: new(dashboardtypes.GettableDashboardV2),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusCreated,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleEditor),
})).Methods(http.MethodPost).GetError(); err != nil {
return err
}
if err := router.Handle("/api/v2/dashboards/{id}", handler.New(provider.authZ.ViewAccess(provider.dashboardHandler.GetV2), handler.OpenAPIDef{
ID: "GetDashboardV2",
Tags: []string{"dashboard"},
Summary: "Get dashboard (v2)",
Description: "This endpoint returns a v2-shape dashboard with its tags and public sharing config (if any).",
Request: nil,
RequestContentType: "",
Response: new(dashboardtypes.GettableDashboardV2),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusOK,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleViewer),
})).Methods(http.MethodGet).GetError(); err != nil {
return err
}
if err := router.Handle("/api/v2/dashboards/{id}", handler.New(provider.authZ.EditAccess(provider.dashboardHandler.UpdateV2), handler.OpenAPIDef{
ID: "UpdateDashboardV2",
Tags: []string{"dashboard"},
Summary: "Update dashboard (v2)",
Description: "This endpoint updates a v2-shape dashboard's metadata, data, and tag set. Locked dashboards are rejected.",
Request: new(dashboardtypes.UpdateableDashboardV2),
RequestContentType: "application/json",
Response: new(dashboardtypes.GettableDashboardV2),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusOK,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleEditor),
})).Methods(http.MethodPut).GetError(); err != nil {
return err
}
if err := router.Handle("/api/v2/dashboards/{id}", handler.New(provider.authZ.EditAccess(provider.dashboardHandler.PatchV2), handler.OpenAPIDef{
ID: "PatchDashboardV2",
Tags: []string{"dashboard"},
Summary: "Patch dashboard (v2)",
Description: "This endpoint applies an RFC 6902 JSON Patch to a v2-shape dashboard. The patch is applied against the postable view of the dashboard (metadata, data, tags), so individual panels, queries, variables, layouts, or tags can be updated without re-sending the rest of the dashboard. Locked dashboards are rejected.",
Request: new(dashboardtypes.JSONPatchDocument),
// Strictly per RFC 6902 the content type is `application/json-patch+json`,
// but our OpenAPI generator only reflects schemas for content types it
// understands (application/json, form-urlencoded, multipart) — anything
// else degrades to `type: string`. Declaring application/json here keeps
// the array-of-ops schema visible to spec consumers; the runtime decoder
// parses JSON regardless of the request's actual Content-Type header.
RequestContentType: "application/json",
Response: new(dashboardtypes.GettableDashboardV2),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusOK,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleEditor),
})).Methods(http.MethodPatch).GetError(); err != nil {
return err
}
if err := router.Handle("/api/v2/dashboards/{id}/lock", handler.New(provider.authZ.EditAccess(provider.dashboardHandler.LockV2), handler.OpenAPIDef{
ID: "LockDashboardV2",
Tags: []string{"dashboard"},
Summary: "Lock dashboard (v2)",
Description: "This endpoint locks a v2-shape dashboard. Only the dashboard's creator or an org admin may lock or unlock.",
Request: nil,
RequestContentType: "",
Response: nil,
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusNoContent,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleEditor),
})).Methods(http.MethodPut).GetError(); err != nil {
return err
}
if err := router.Handle("/api/v2/dashboards/{id}/lock", handler.New(provider.authZ.EditAccess(provider.dashboardHandler.UnlockV2), handler.OpenAPIDef{
ID: "UnlockDashboardV2",
Tags: []string{"dashboard"},
Summary: "Unlock dashboard (v2)",
Description: "This endpoint unlocks a v2-shape dashboard. Only the dashboard's creator or an org admin may lock or unlock.",
Request: nil,
RequestContentType: "",
Response: nil,
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusNoContent,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleEditor),
})).Methods(http.MethodDelete).GetError(); err != nil {
return err
}
if err := router.Handle("/api/v2/dashboards/{id}/public", handler.New(provider.authZ.AdminAccess(provider.dashboardHandler.CreatePublicV2), handler.OpenAPIDef{
ID: "CreatePublicDashboardV2",
Tags: []string{"dashboard"},
Summary: "Make a dashboard v2 public",
Description: "This endpoint creates the public sharing config for a v2 dashboard and returns the dashboard with the new public config attached. Lock state does not gate this endpoint.",
Request: new(dashboardtypes.PostablePublicDashboard),
RequestContentType: "application/json",
Response: new(dashboardtypes.GettableDashboardV2),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusOK,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleAdmin),
})).Methods(http.MethodPatch).GetError(); err != nil {
return err
}
if err := router.Handle("/api/v2/dashboards/{id}", handler.New(provider.authZ.EditAccess(provider.dashboardHandler.DeleteV2), handler.OpenAPIDef{
ID: "DeleteDashboardV2",
Tags: []string{"dashboard"},
Summary: "Delete dashboard (v2)",
Description: "This endpoint soft-deletes a v2-shape dashboard. Locked dashboards are rejected. Hard deletion happens later via the purge cron.",
Request: nil,
RequestContentType: "",
Response: nil,
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusNoContent,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleEditor),
})).Methods(http.MethodDelete).GetError(); err != nil {
return err
}
if err := router.Handle("/api/v2/dashboards/{id}/public", handler.New(provider.authZ.AdminAccess(provider.dashboardHandler.UpdatePublicV2), handler.OpenAPIDef{
ID: "UpdatePublicDashboardV2",
Tags: []string{"dashboard"},
Summary: "Update public sharing config for a dashboard v2",
Description: "This endpoint updates the public sharing config (time range settings) of an already-public v2 dashboard. Lock state does not gate this endpoint.",
Request: new(dashboardtypes.UpdatablePublicDashboard),
RequestContentType: "application/json",
Response: new(dashboardtypes.GettableDashboardV2),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusOK,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleAdmin),
})).Methods(http.MethodPut).GetError(); err != nil {
return err
}
if err := router.Handle("/api/v1/dashboards/{id}/public", handler.New(provider.authZ.AdminAccess(provider.dashboardHandler.CreatePublic), handler.OpenAPIDef{
ID: "CreatePublicDashboard",
Tags: []string{"dashboard"},

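The PatchDashboardV2 route above applies an RFC 6902 document to the postable view of the dashboard (the Go side uses `gopkg.in/evanphx/json-patch.v4`, added in go.mod above). A minimal TypeScript sketch of how a small subset of such a patch (`add`, `replace`, `remove` on simple JSON Pointer paths) transforms a dashboard-like object; this is an illustration of the wire format, not the server's implementation:

```typescript
interface PatchOp {
  op: 'add' | 'replace' | 'remove';
  path: string; // RFC 6901 JSON Pointer, e.g. "/title" or "/tags/0"
  value?: unknown;
}

// Apply a tiny subset of RFC 6902 to a plain JSON value, returning a copy.
function applyPatch(doc: unknown, ops: PatchOp[]): any {
  const out = JSON.parse(JSON.stringify(doc));
  for (const { op, path, value } of ops) {
    // RFC 6901 unescaping: "~1" -> "/" first, then "~0" -> "~".
    const parts = path
      .split('/')
      .slice(1)
      .map((p) => p.replace(/~1/g, '/').replace(/~0/g, '~'));
    const last = parts.pop()!;
    let parent = out;
    for (const p of parts) parent = parent[p];
    if (op === 'remove') {
      if (Array.isArray(parent)) parent.splice(Number(last), 1);
      else delete parent[last];
    } else if (op === 'add' && Array.isArray(parent)) {
      const idx = last === '-' ? parent.length : Number(last);
      parent.splice(idx, 0, value);
    } else {
      parent[last] = value; // replace, or add on an object member
    }
  }
  return out;
}

// Update one panel title and append a tag without re-sending the dashboard.
const dashboard = {
  title: 'Service overview',
  tags: ['infra'],
  data: { panels: [{ title: 'p99' }] },
};
const patched = applyPatch(dashboard, [
  { op: 'replace', path: '/data/panels/0/title', value: 'p99 latency' },
  { op: 'add', path: '/tags/-', value: 'latency' },
]);
console.log(patched.data.panels[0].title); // p99 latency
console.log(patched.tags); // [ 'infra', 'latency' ]
```

This is the property the route description claims: individual panels, queries, variables, layouts, or tags can change without the client re-sending the rest of the dashboard.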

@@ -8,7 +8,6 @@ var (
FeatureHideRootUser = featuretypes.MustNewName("hide_root_user")
FeatureGetMetersFromZeus = featuretypes.MustNewName("get_meters_from_zeus")
FeaturePutMetersInZeus = featuretypes.MustNewName("put_meters_in_zeus")
FeatureUseMeterReporter = featuretypes.MustNewName("use_meter_reporter")
FeatureUseJSONBody = featuretypes.MustNewName("use_json_body")
)
@@ -54,14 +53,6 @@ func MustNewRegistry() featuretypes.Registry {
DefaultVariant: featuretypes.MustNewName("disabled"),
Variants: featuretypes.NewBooleanVariants(),
},
&featuretypes.Feature{
Name: FeatureUseMeterReporter,
Kind: featuretypes.KindBoolean,
Stage: featuretypes.StageExperimental,
Description: "Controls whether the enterprise meter reporter runs instead of the noop reporter",
DefaultVariant: featuretypes.MustNewName("disabled"),
Variants: featuretypes.NewBooleanVariants(),
},
&featuretypes.Feature{
Name: FeatureUseJSONBody,
Kind: featuretypes.KindBoolean,


@@ -1,12 +0,0 @@
package metercollector
const (
// DimensionOrganizationID identifies the organization.
DimensionOrganizationID = "signoz.billing.organization.id"
// DimensionRetentionDays identifies the retention bucket a meter belongs to.
DimensionRetentionDays = "signoz.billing.retention.days"
// DimensionWorkspaceKeyID identifies the ingestion workspace key.
DimensionWorkspaceKeyID = "signoz.workspace.key.id"
)


@@ -1,6 +0,0 @@
package metercollector
import "github.com/SigNoz/signoz/pkg/errors"
// ErrCodeCollectFailed is the shared error code for collector failures.
var ErrCodeCollectFailed = errors.MustNewCode("metercollector_collect_failed")


@@ -1,19 +0,0 @@
// Package metercollector defines the contract for billing meter collectors.
package metercollector
import (
"context"
"github.com/SigNoz/signoz/pkg/types/metercollectortypes"
"github.com/SigNoz/signoz/pkg/types/meterreportertypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
// MeterCollector owns one billing meter's metadata and collection query.
// Collect stamps DimensionOrganizationID and returns errors instead of panics.
type MeterCollector interface {
Name() metercollectortypes.Name
Unit() metercollectortypes.Unit
Aggregation() metercollectortypes.Aggregation
Collect(ctx context.Context, orgID valuer.UUID, window meterreportertypes.Window) ([]meterreportertypes.Meter, error)
}


@@ -1,53 +0,0 @@
package meterreporter
import (
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
)
var _ factory.Config = (*Config)(nil)
type Config struct {
// Interval is how often the reporter collects and ships meters.
Interval time.Duration `mapstructure:"interval"`
// Timeout bounds one collect-and-ship tick.
Timeout time.Duration `mapstructure:"timeout"`
// CatchupMaxDaysPerTick caps sealed-day catchup work per tick.
CatchupMaxDaysPerTick int `mapstructure:"catchup_max_days_per_tick"`
}
func newConfig() factory.Config {
return Config{
Interval: 6 * time.Hour,
Timeout: 5 * time.Minute,
CatchupMaxDaysPerTick: 180,
}
}
func NewConfigFactory() factory.ConfigFactory {
return factory.NewConfigFactory(factory.MustNewName("meterreporter"), newConfig)
}
func (c Config) Validate() error {
if c.Interval < 5*time.Minute {
return errors.New(errors.TypeInvalidInput, ErrCodeInvalidInput, "meterreporter::interval must be at least 5m")
}
if c.Timeout < 3*time.Minute {
return errors.New(errors.TypeInvalidInput, ErrCodeInvalidInput, "meterreporter::timeout must be at least 3m")
}
if c.Timeout >= c.Interval {
return errors.New(errors.TypeInvalidInput, ErrCodeInvalidInput, "meterreporter::timeout must be less than meterreporter::interval")
}
if c.CatchupMaxDaysPerTick < 1 || c.CatchupMaxDaysPerTick > 180 {
return errors.New(errors.TypeInvalidInput, ErrCodeInvalidInput, "meterreporter::catchup_max_days_per_tick must be between 1 and 180")
}
return nil
}


@@ -1,14 +0,0 @@
package meterreporter
import (
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
)
var (
ErrCodeInvalidInput = errors.MustNewCode("meterreporter_invalid_input")
)
type Reporter interface {
factory.ServiceWithHealthy
}


@@ -1,39 +0,0 @@
package noopmeterreporter
import (
"context"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/meterreporter"
)
type provider struct {
healthyC chan struct{}
stopC chan struct{}
}
func NewFactory() factory.ProviderFactory[meterreporter.Reporter, meterreporter.Config] {
return factory.NewProviderFactory(factory.MustNewName("noop"), New)
}
func New(_ context.Context, _ factory.ProviderSettings, _ meterreporter.Config) (meterreporter.Reporter, error) {
return &provider{
healthyC: make(chan struct{}),
stopC: make(chan struct{}),
}, nil
}
func (p *provider) Start(_ context.Context) error {
close(p.healthyC)
<-p.stopC
return nil
}
func (p *provider) Stop(_ context.Context) error {
close(p.stopC)
return nil
}
func (p *provider) Healthy() <-chan struct{} {
return p.healthyC
}


@@ -142,7 +142,7 @@ func (handler *handler) Update(rw http.ResponseWriter, r *http.Request) {
return
}
body := new(authtypes.UpdateableAuthDomain)
body := new(authtypes.UpdatableAuthDomain)
if err := binding.JSON.BindBody(r.Body, body); err != nil {
render.Error(rw, err)
return


@@ -52,6 +52,26 @@ type Module interface {
statsreporter.StatsCollector
authz.RegisterTypeable
// ════════════════════════════════════════════════════════════════════════
// v2 dashboard methods
// ════════════════════════════════════════════════════════════════════════
CreateV2(ctx context.Context, orgID valuer.UUID, createdBy string, creator valuer.UUID, postable dashboardtypes.PostableDashboardV2) (*dashboardtypes.DashboardV2, error)
GetV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*dashboardtypes.DashboardV2, error)
UpdateV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, updateable dashboardtypes.UpdateableDashboardV2) (*dashboardtypes.DashboardV2, error)
PatchV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, patch dashboardtypes.PatchableDashboardV2) (*dashboardtypes.DashboardV2, error)
LockUnlockV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, isAdmin bool, lock bool) error
CreatePublicV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, postable dashboardtypes.PostablePublicDashboard) (*dashboardtypes.DashboardV2, error)
UpdatePublicV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatable dashboardtypes.UpdatablePublicDashboard) (*dashboardtypes.DashboardV2, error)
DeleteV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, deletedBy string) error
}
type Handler interface {
@@ -74,4 +94,25 @@ type Handler interface {
LockUnlock(http.ResponseWriter, *http.Request)
Delete(http.ResponseWriter, *http.Request)
// ════════════════════════════════════════════════════════════════════════
// v2 dashboard methods
// ════════════════════════════════════════════════════════════════════════
CreateV2(http.ResponseWriter, *http.Request)
GetV2(http.ResponseWriter, *http.Request)
UpdateV2(http.ResponseWriter, *http.Request)
PatchV2(http.ResponseWriter, *http.Request)
LockV2(http.ResponseWriter, *http.Request)
UnlockV2(http.ResponseWriter, *http.Request)
CreatePublicV2(http.ResponseWriter, *http.Request)
UpdatePublicV2(http.ResponseWriter, *http.Request)
DeleteV2(http.ResponseWriter, *http.Request)
}


@@ -0,0 +1,46 @@
package dashboardpurger
import (
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
)
type Config struct {
// Interval between successive purge passes.
Interval time.Duration `mapstructure:"interval"`
// BatchSize is the maximum number of dashboards hard-deleted per pass.
// Caps the size of the IN(...) clause and bounds tx duration.
BatchSize int `mapstructure:"batch_size"`
// Retention is how long a soft-deleted dashboard sticks around before
// becoming eligible for hard deletion.
Retention time.Duration `mapstructure:"retention"`
}
func NewConfigFactory() factory.ConfigFactory {
return factory.NewConfigFactory(factory.MustNewName("dashboardpurger"), newConfig)
}
func newConfig() factory.Config {
return &Config{
Interval: 1 * time.Hour,
BatchSize: 100,
Retention: 7 * 24 * time.Hour,
}
}
func (c Config) Validate() error {
if c.Interval <= 0 {
return errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "interval must be positive")
}
if c.BatchSize <= 0 {
return errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "batch_size must be positive")
}
if c.Retention < 0 {
return errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "retention must not be negative")
}
return nil
}

@@ -0,0 +1,79 @@
package dashboardpurger
import (
"context"
"log/slog"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/types/dashboardtypes"
)
type Purger interface {
factory.Service
}
type purger struct {
config Config
settings factory.ScopedProviderSettings
store dashboardtypes.Store
stopC chan struct{}
}
func NewFactory(store dashboardtypes.Store) factory.ProviderFactory[Purger, Config] {
return factory.NewProviderFactory(factory.MustNewName("dashboardpurger"), func(ctx context.Context, providerSettings factory.ProviderSettings, config Config) (Purger, error) {
return New(ctx, providerSettings, config, store)
})
}
func New(_ context.Context, providerSettings factory.ProviderSettings, config Config, store dashboardtypes.Store) (Purger, error) {
settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/pkg/modules/dashboard/dashboardpurger")
return &purger{
config: config,
settings: settings,
store: store,
stopC: make(chan struct{}),
}, nil
}
func (p *purger) Start(ctx context.Context) error {
ticker := time.NewTicker(p.config.Interval)
defer ticker.Stop()
for {
select {
case <-p.stopC:
return nil
case <-ticker.C:
ctx, span := p.settings.Tracer().Start(ctx, "dashboardpurger.Sweep")
if err := p.sweep(ctx); err != nil {
span.RecordError(err)
p.settings.Logger().ErrorContext(ctx, "dashboard purge sweep failed", errors.Attr(err))
}
span.End()
}
}
}
func (p *purger) Stop(_ context.Context) error {
close(p.stopC)
return nil
}
// sweep does at most one batch per call. The ticker drives further passes; if
// purge volume is bursty the next tick will pick up the rest.
func (p *purger) sweep(ctx context.Context) error {
ids, err := p.store.ListPurgeable(ctx, p.config.Retention, p.config.BatchSize)
if err != nil {
return err
}
if len(ids) == 0 {
return nil
}
if err := p.store.HardDelete(ctx, ids); err != nil {
return err
}
p.settings.Logger().InfoContext(ctx, "hard-deleted soft-deleted dashboards", slog.Int("count", len(ids)))
return nil
}

@@ -10,7 +10,9 @@ import (
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/tag"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/dashboardtypes"
@@ -20,20 +22,24 @@ import (
type module struct {
store dashboardtypes.Store
sqlstore sqlstore.SQLStore
settings factory.ScopedProviderSettings
analytics analytics.Analytics
orgGetter organization.Getter
queryParser queryparser.QueryParser
tagModule tag.Module
}
func NewModule(store dashboardtypes.Store, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, queryParser queryparser.QueryParser) dashboard.Module {
func NewModule(store dashboardtypes.Store, sqlstore sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, queryParser queryparser.QueryParser, tagModule tag.Module) dashboard.Module {
scopedProviderSettings := factory.NewScopedProviderSettings(settings, "github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard")
return &module{
store: store,
sqlstore: sqlstore,
settings: scopedProviderSettings,
analytics: analytics,
orgGetter: orgGetter,
queryParser: queryParser,
tagModule: tagModule,
}
}

@@ -2,10 +2,13 @@ package impldashboard
import (
"context"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/dashboardtypes"
"github.com/SigNoz/signoz/pkg/types/tagtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/uptrace/bun"
)
@@ -63,6 +66,195 @@ func (store *store) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID)
return storableDashboard, nil
}
func (store *store) GetV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*dashboardtypes.StorableDashboard, *dashboardtypes.StorablePublicDashboard, error) {
type joinedRow struct {
bun.BaseModel `bun:"table:dashboard,alias:d"`
ID valuer.UUID `bun:"id"`
OrgID valuer.UUID `bun:"org_id"`
Data dashboardtypes.StorableDashboardData `bun:"data"`
Locked bool `bun:"locked"`
CreatedAt time.Time `bun:"created_at"`
CreatedBy string `bun:"created_by"`
UpdatedAt time.Time `bun:"updated_at"`
UpdatedBy string `bun:"updated_by"`
PublicID *valuer.UUID `bun:"public_id"`
PublicCreatedAt *time.Time `bun:"public_created_at"`
PublicUpdatedAt *time.Time `bun:"public_updated_at"`
PublicTimeRangeEnabled *bool `bun:"public_time_range_enabled"`
PublicDefaultTimeRange *string `bun:"public_default_time_range"`
}
row := new(joinedRow)
err := store.
sqlstore.
BunDB().
NewSelect().
Model(row).
ColumnExpr("d.id, d.org_id, d.data, d.locked, d.created_at, d.created_by, d.updated_at, d.updated_by").
ColumnExpr("pd.id AS public_id, pd.created_at AS public_created_at, pd.updated_at AS public_updated_at, pd.time_range_enabled AS public_time_range_enabled, pd.default_time_range AS public_default_time_range").
Join("LEFT JOIN public_dashboard AS pd ON pd.dashboard_id = d.id").
Where("d.id = ?", id).
Where("d.org_id = ?", orgID).
Where("d.deleted_at IS NULL").
Scan(ctx)
if err != nil {
return nil, nil, store.sqlstore.WrapNotFoundErrf(err, dashboardtypes.ErrCodeDashboardNotFound, "dashboard with id %s doesn't exist", id)
}
storable := &dashboardtypes.StorableDashboard{
Identifiable: types.Identifiable{ID: row.ID},
TimeAuditable: types.TimeAuditable{CreatedAt: row.CreatedAt, UpdatedAt: row.UpdatedAt},
UserAuditable: types.UserAuditable{CreatedBy: row.CreatedBy, UpdatedBy: row.UpdatedBy},
OrgID: row.OrgID,
Data: row.Data,
Locked: row.Locked,
}
if row.PublicID == nil {
return storable, nil, nil
}
public := &dashboardtypes.StorablePublicDashboard{
Identifiable: types.Identifiable{ID: *row.PublicID},
TimeAuditable: types.TimeAuditable{CreatedAt: *row.PublicCreatedAt, UpdatedAt: *row.PublicUpdatedAt},
TimeRangeEnabled: *row.PublicTimeRangeEnabled,
DefaultTimeRange: *row.PublicDefaultTimeRange,
DashboardID: row.ID.StringValue(),
}
return storable, public, nil
}
func (store *store) UpdateV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, data dashboardtypes.StorableDashboardData) error {
res, err := store.
sqlstore.
BunDBCtx(ctx).
NewUpdate().
Model((*dashboardtypes.StorableDashboard)(nil)).
Set("data = ?", data).
Set("updated_by = ?", updatedBy).
Set("updated_at = ?", time.Now()).
Where("id = ?", id).
Where("org_id = ?", orgID).
Where("deleted_at IS NULL").
Exec(ctx)
if err != nil {
return err
}
rows, err := res.RowsAffected()
if err != nil {
return err
}
// Defends against the race where a soft-delete lands between the caller's
// pre-update GetV2 and this update.
if rows == 0 {
return errors.Newf(errors.TypeNotFound, dashboardtypes.ErrCodeDashboardNotFound, "dashboard with id %s doesn't exist", id)
}
return nil
}
func (store *store) ListPurgeable(ctx context.Context, retention time.Duration, limit int) ([]valuer.UUID, error) {
if limit <= 0 {
return nil, nil
}
cutoff := time.Now().Add(-retention)
ids := make([]valuer.UUID, 0, limit)
err := store.
sqlstore.
BunDB().
NewSelect().
Model((*dashboardtypes.StorableDashboard)(nil)).
Column("id").
Where("deleted_at IS NOT NULL").
Where("deleted_at < ?", cutoff).
Limit(limit).
Scan(ctx, &ids)
if err != nil {
return nil, err
}
return ids, nil
}
// HardDelete cascades to tag_relations and public_dashboard inside one
// transaction so a partial failure leaves no orphans.
func (store *store) HardDelete(ctx context.Context, ids []valuer.UUID) error {
if len(ids) == 0 {
return nil
}
return store.sqlstore.RunInTxCtx(ctx, nil, func(ctx context.Context) error {
if _, err := store.sqlstore.BunDBCtx(ctx).
NewDelete().
Model((*tagtypes.TagRelation)(nil)).
Where("entity_id IN (?)", bun.In(ids)).
Exec(ctx); err != nil {
return err
}
if _, err := store.sqlstore.BunDBCtx(ctx).
NewDelete().
Model((*dashboardtypes.StorablePublicDashboard)(nil)).
Where("dashboard_id IN (?)", bun.In(ids)).
Exec(ctx); err != nil {
return err
}
_, err := store.sqlstore.BunDBCtx(ctx).
NewDelete().
Model((*dashboardtypes.StorableDashboard)(nil)).
Where("id IN (?)", bun.In(ids)).
Exec(ctx)
return err
})
}
func (store *store) SoftDeleteV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, deletedBy string) error {
res, err := store.
sqlstore.
BunDBCtx(ctx).
NewUpdate().
Model((*dashboardtypes.StorableDashboard)(nil)).
Set("deleted_at = ?", time.Now()).
Set("deleted_by = ?", deletedBy).
Where("id = ?", id).
Where("org_id = ?", orgID).
Where("deleted_at IS NULL").
Exec(ctx)
if err != nil {
return err
}
rows, err := res.RowsAffected()
if err != nil {
return err
}
if rows == 0 {
return errors.Newf(errors.TypeNotFound, dashboardtypes.ErrCodeDashboardNotFound, "dashboard with id %s doesn't exist", id)
}
return nil
}
func (store *store) LockUnlockV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, locked bool, updatedBy string) error {
res, err := store.
sqlstore.
BunDBCtx(ctx).
NewUpdate().
Model((*dashboardtypes.StorableDashboard)(nil)).
Set("locked = ?", locked).
Set("updated_by = ?", updatedBy).
Set("updated_at = ?", time.Now()).
Where("id = ?", id).
Where("org_id = ?", orgID).
Where("deleted_at IS NULL").
Exec(ctx)
if err != nil {
return err
}
rows, err := res.RowsAffected()
if err != nil {
return err
}
if rows == 0 {
return errors.Newf(errors.TypeNotFound, dashboardtypes.ErrCodeDashboardNotFound, "dashboard with id %s doesn't exist", id)
}
return nil
}
func (store *store) GetPublic(ctx context.Context, dashboardID string) (*dashboardtypes.StorablePublicDashboard, error) {
storable := new(dashboardtypes.StorablePublicDashboard)
err := store.

@@ -0,0 +1,344 @@
package impldashboard
import (
"context"
"encoding/json"
"net/http"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/dashboardtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/gorilla/mux"
)
func (handler *handler) CreateV2(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, err)
return
}
req := dashboardtypes.PostableDashboardV2{}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
render.Error(rw, err)
return
}
dashboard, err := handler.module.CreateV2(ctx, orgID, claims.Email, valuer.MustNewUUID(claims.IdentityID()), req)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusCreated, dashboardtypes.NewGettableDashboardV2FromDashboardV2(dashboard))
}
func (handler *handler) GetV2(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, err)
return
}
id := mux.Vars(r)["id"]
if id == "" {
render.Error(rw, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "id is missing in the path"))
return
}
dashboardID, err := valuer.NewUUID(id)
if err != nil {
render.Error(rw, err)
return
}
dashboard, err := handler.module.GetV2(ctx, orgID, dashboardID)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusOK, dashboardtypes.NewGettableDashboardV2FromDashboardV2(dashboard))
}
func (handler *handler) LockV2(rw http.ResponseWriter, r *http.Request) {
handler.lockUnlockV2(rw, r, true)
}
func (handler *handler) UnlockV2(rw http.ResponseWriter, r *http.Request) {
handler.lockUnlockV2(rw, r, false)
}
func (handler *handler) lockUnlockV2(rw http.ResponseWriter, r *http.Request, lock bool) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, err)
return
}
id := mux.Vars(r)["id"]
if id == "" {
render.Error(rw, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "id is missing in the path"))
return
}
dashboardID, err := valuer.NewUUID(id)
if err != nil {
render.Error(rw, err)
return
}
// A failed check (including plain absence of the admin role) leaves
// isAdmin false; the error itself is intentionally discarded.
isAdmin := false
selectors := []authtypes.Selector{
authtypes.MustNewSelector(authtypes.TypeRole, authtypes.SigNozAdminRoleName),
}
if err := handler.authz.CheckWithTupleCreation(
ctx,
claims,
valuer.MustNewUUID(claims.OrgID),
authtypes.RelationAssignee,
authtypes.TypeableRole,
selectors,
selectors,
); err == nil {
isAdmin = true
}
if err := handler.module.LockUnlockV2(ctx, orgID, dashboardID, claims.Email, isAdmin, lock); err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusNoContent, nil)
}
func (handler *handler) UpdateV2(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, err)
return
}
id := mux.Vars(r)["id"]
if id == "" {
render.Error(rw, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "id is missing in the path"))
return
}
dashboardID, err := valuer.NewUUID(id)
if err != nil {
render.Error(rw, err)
return
}
req := dashboardtypes.UpdateableDashboardV2{}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
render.Error(rw, err)
return
}
dashboard, err := handler.module.UpdateV2(ctx, orgID, dashboardID, claims.Email, req)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusOK, dashboardtypes.NewGettableDashboardV2FromDashboardV2(dashboard))
}
func (handler *handler) PatchV2(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, err)
return
}
id := mux.Vars(r)["id"]
if id == "" {
render.Error(rw, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "id is missing in the path"))
return
}
dashboardID, err := valuer.NewUUID(id)
if err != nil {
render.Error(rw, err)
return
}
req := dashboardtypes.PatchableDashboardV2{}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
render.Error(rw, err)
return
}
dashboard, err := handler.module.PatchV2(ctx, orgID, dashboardID, claims.Email, req)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusOK, dashboardtypes.NewGettableDashboardV2FromDashboardV2(dashboard))
}
func (handler *handler) CreatePublicV2(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, err)
return
}
id := mux.Vars(r)["id"]
if id == "" {
render.Error(rw, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "id is missing in the path"))
return
}
dashboardID, err := valuer.NewUUID(id)
if err != nil {
render.Error(rw, err)
return
}
req := dashboardtypes.PostablePublicDashboard{}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
render.Error(rw, err)
return
}
dashboard, err := handler.module.CreatePublicV2(ctx, orgID, dashboardID, req)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusOK, dashboardtypes.NewGettableDashboardV2FromDashboardV2(dashboard))
}
func (handler *handler) UpdatePublicV2(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, err)
return
}
id := mux.Vars(r)["id"]
if id == "" {
render.Error(rw, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "id is missing in the path"))
return
}
dashboardID, err := valuer.NewUUID(id)
if err != nil {
render.Error(rw, err)
return
}
req := dashboardtypes.UpdatablePublicDashboard{}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
render.Error(rw, err)
return
}
dashboard, err := handler.module.UpdatePublicV2(ctx, orgID, dashboardID, req)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusOK, dashboardtypes.NewGettableDashboardV2FromDashboardV2(dashboard))
}
func (handler *handler) DeleteV2(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, err)
return
}
id := mux.Vars(r)["id"]
if id == "" {
render.Error(rw, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "id is missing in the path"))
return
}
dashboardID, err := valuer.NewUUID(id)
if err != nil {
render.Error(rw, err)
return
}
if err := handler.module.DeleteV2(ctx, orgID, dashboardID, claims.Email); err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusNoContent, nil)
}

@@ -0,0 +1,189 @@
package impldashboard
import (
"context"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/types/dashboardtypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
func (module *module) CreateV2(ctx context.Context, orgID valuer.UUID, createdBy string, creator valuer.UUID, postable dashboardtypes.PostableDashboardV2) (*dashboardtypes.DashboardV2, error) {
if err := postable.Validate(); err != nil {
return nil, err
}
// Tag upserts run outside the dashboard transaction by design: a successful
// upsert that loses an outer dashboard insert just leaves resolved tag rows
// around for the next attempt — preferable to coupling the two.
resolvedTags, err := module.tagModule.CreateMany(ctx, orgID, postable.Tags, createdBy)
if err != nil {
return nil, err
}
dashboard := dashboardtypes.NewDashboardV2(orgID, createdBy, postable, resolvedTags)
storableDashboard, err := dashboard.ToStorableDashboard()
if err != nil {
return nil, err
}
tagIDs := make([]valuer.UUID, len(resolvedTags))
for i, t := range resolvedTags {
tagIDs[i] = t.ID
}
err = module.sqlstore.RunInTxCtx(ctx, nil, func(ctx context.Context) error {
if err := module.store.Create(ctx, storableDashboard); err != nil {
return err
}
return module.tagModule.LinkToEntity(ctx, orgID, dashboardtypes.EntityTypeDashboard, dashboard.ID, tagIDs)
})
if err != nil {
return nil, err
}
module.analytics.TrackUser(ctx, orgID.String(), creator.String(), "Dashboard Created", dashboardtypes.NewStatsFromStorableDashboards([]*dashboardtypes.StorableDashboard{storableDashboard}))
return dashboard, nil
}
func (module *module) GetV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*dashboardtypes.DashboardV2, error) {
storable, public, err := module.store.GetV2(ctx, orgID, id)
if err != nil {
return nil, err
}
tags, err := module.tagModule.ListForEntity(ctx, id)
if err != nil {
return nil, err
}
return dashboardtypes.NewDashboardV2FromStorable(storable, public, tags)
}
func (module *module) UpdateV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, updateable dashboardtypes.UpdateableDashboardV2) (*dashboardtypes.DashboardV2, error) {
if err := updateable.Validate(); err != nil {
return nil, err
}
existing, err := module.GetV2(ctx, orgID, id)
if err != nil {
return nil, err
}
// Safety check before upserting tags. existing.Update performs the same
// check, but since it needs the resolved tags, that method can only run
// after the tags have been resolved.
if err := existing.CanUpdate(); err != nil {
return nil, err
}
// Tag upserts run outside the update transaction for the same reason as
// Create: a successful upsert that loses the outer transaction just leaves
// resolved tag rows around for the next attempt.
resolvedTags, err := module.tagModule.CreateMany(ctx, orgID, updateable.Tags, updatedBy)
if err != nil {
return nil, err
}
tagIDs := make([]valuer.UUID, len(resolvedTags))
for i, t := range resolvedTags {
tagIDs[i] = t.ID
}
if err := existing.Update(updateable, updatedBy, resolvedTags); err != nil {
return nil, err
}
storable, err := existing.ToStorableDashboard()
if err != nil {
return nil, err
}
err = module.sqlstore.RunInTxCtx(ctx, nil, func(ctx context.Context) error {
if err := module.tagModule.SyncLinksForEntity(ctx, orgID, dashboardtypes.EntityTypeDashboard, id, tagIDs); err != nil {
return err
}
return module.store.UpdateV2(ctx, orgID, id, updatedBy, storable.Data)
})
if err != nil {
return nil, err
}
return existing, nil
}
func (module *module) PatchV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, patch dashboardtypes.PatchableDashboardV2) (*dashboardtypes.DashboardV2, error) {
existing, err := module.GetV2(ctx, orgID, id)
if err != nil {
return nil, err
}
if err := existing.CanUpdate(); err != nil {
return nil, err
}
updateable, err := patch.Apply(existing)
if err != nil {
return nil, err
}
resolvedTags, err := module.tagModule.CreateMany(ctx, orgID, updateable.Tags, updatedBy)
if err != nil {
return nil, err
}
tagIDs := make([]valuer.UUID, len(resolvedTags))
for i, t := range resolvedTags {
tagIDs[i] = t.ID
}
if err := existing.Update(*updateable, updatedBy, resolvedTags); err != nil {
return nil, err
}
storable, err := existing.ToStorableDashboard()
if err != nil {
return nil, err
}
err = module.sqlstore.RunInTxCtx(ctx, nil, func(ctx context.Context) error {
if err := module.tagModule.SyncLinksForEntity(ctx, orgID, dashboardtypes.EntityTypeDashboard, id, tagIDs); err != nil {
return err
}
return module.store.UpdateV2(ctx, orgID, id, updatedBy, storable.Data)
})
if err != nil {
return nil, err
}
return existing, nil
}
// CreatePublicV2 is not supported in the community build.
func (module *module) CreatePublicV2(_ context.Context, _ valuer.UUID, _ valuer.UUID, _ dashboardtypes.PostablePublicDashboard) (*dashboardtypes.DashboardV2, error) {
return nil, errors.Newf(errors.TypeUnsupported, dashboardtypes.ErrCodePublicDashboardUnsupported, "not implemented")
}
// UpdatePublicV2 is not supported in the community build.
func (module *module) UpdatePublicV2(_ context.Context, _ valuer.UUID, _ valuer.UUID, _ dashboardtypes.UpdatablePublicDashboard) (*dashboardtypes.DashboardV2, error) {
return nil, errors.Newf(errors.TypeUnsupported, dashboardtypes.ErrCodePublicDashboardUnsupported, "not implemented")
}
func (module *module) DeleteV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, deletedBy string) error {
existing, err := module.GetV2(ctx, orgID, id)
if err != nil {
return err
}
if err := existing.CanDelete(); err != nil {
return err
}
return module.store.SoftDeleteV2(ctx, orgID, id, deletedBy)
}
func (module *module) LockUnlockV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, isAdmin bool, lock bool) error {
existing, err := module.GetV2(ctx, orgID, id)
if err != nil {
return err
}
if err := existing.LockUnlock(lock, isAdmin, updatedBy); err != nil {
return err
}
return module.store.LockUnlockV2(ctx, orgID, id, lock, updatedBy)
}

@@ -0,0 +1,53 @@
package impltag
import (
"context"
"github.com/SigNoz/signoz/pkg/modules/tag"
"github.com/SigNoz/signoz/pkg/types/tagtypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type module struct {
store tagtypes.Store
}
func NewModule(store tagtypes.Store) tag.Module {
return &module{store: store}
}
func (m *module) CreateMany(ctx context.Context, orgID valuer.UUID, postable []tagtypes.PostableTag, createdBy string) ([]*tagtypes.Tag, error) {
if len(postable) == 0 {
return []*tagtypes.Tag{}, nil
}
toCreate, matched, err := tagtypes.Resolve(ctx, m.store, orgID, postable, createdBy)
if err != nil {
return nil, err
}
created, err := m.store.Create(ctx, toCreate)
if err != nil {
return nil, err
}
return append(matched, created...), nil
}
func (m *module) LinkToEntity(ctx context.Context, orgID valuer.UUID, entityType tagtypes.EntityType, entityID valuer.UUID, tagIDs []valuer.UUID) error {
if len(tagIDs) == 0 {
return nil
}
return m.store.CreateRelations(ctx, tagtypes.NewTagRelations(orgID, entityType, entityID, tagIDs))
}
func (m *module) SyncLinksForEntity(ctx context.Context, orgID valuer.UUID, entityType tagtypes.EntityType, entityID valuer.UUID, tagIDs []valuer.UUID) error {
if err := m.store.CreateRelations(ctx, tagtypes.NewTagRelations(orgID, entityType, entityID, tagIDs)); err != nil {
return err
}
return m.store.DeleteRelationsExcept(ctx, entityID, tagIDs)
}
func (m *module) ListForEntity(ctx context.Context, entityID valuer.UUID) ([]*tagtypes.Tag, error) {
return m.store.ListByEntity(ctx, entityID)
}

@@ -0,0 +1,95 @@
package impltag
import (
"context"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types/tagtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/uptrace/bun"
)
type store struct {
sqlstore sqlstore.SQLStore
}
func NewStore(sqlstore sqlstore.SQLStore) tagtypes.Store {
return &store{sqlstore: sqlstore}
}
func (s *store) List(ctx context.Context, orgID valuer.UUID) ([]*tagtypes.Tag, error) {
tags := make([]*tagtypes.Tag, 0)
err := s.sqlstore.
BunDBCtx(ctx).
NewSelect().
Model(&tags).
Where("org_id = ?", orgID).
Scan(ctx)
if err != nil {
return nil, err
}
return tags, nil
}
func (s *store) ListByEntity(ctx context.Context, entityID valuer.UUID) ([]*tagtypes.Tag, error) {
tags := make([]*tagtypes.Tag, 0)
err := s.sqlstore.
BunDBCtx(ctx).
NewSelect().
Model(&tags).
Join("JOIN tag_relations AS tr ON tr.tag_id = tag.id").
Where("tr.entity_id = ?", entityID).
Scan(ctx)
if err != nil {
return nil, err
}
return tags, nil
}
func (s *store) Create(ctx context.Context, tags []*tagtypes.Tag) ([]*tagtypes.Tag, error) {
if len(tags) == 0 {
return tags, nil
}
// DO UPDATE on a self-set is a deliberate no-op write whose only purpose
// is to make RETURNING fire on conflicting rows. Without it, RETURNING is
// silent on the conflict path and we'd have to refetch by internal name to
// learn the existing rows' IDs after a concurrent-insert race.
err := s.sqlstore.
BunDBCtx(ctx).
NewInsert().
Model(&tags).
On("CONFLICT (org_id, internal_name) DO UPDATE").
Set("internal_name = EXCLUDED.internal_name").
Returning("*").
Scan(ctx)
if err != nil {
return nil, err
}
return tags, nil
}
func (s *store) CreateRelations(ctx context.Context, relations []*tagtypes.TagRelation) error {
if len(relations) == 0 {
return nil
}
_, err := s.sqlstore.
BunDBCtx(ctx).
NewInsert().
Model(&relations).
On("CONFLICT (entity_id, tag_id) DO NOTHING").
Exec(ctx)
return err
}
func (s *store) DeleteRelationsExcept(ctx context.Context, entityID valuer.UUID, keepTagIDs []valuer.UUID) error {
q := s.sqlstore.
BunDBCtx(ctx).
NewDelete().
Model((*tagtypes.TagRelation)(nil)).
Where("entity_id = ?", entityID)
if len(keepTagIDs) > 0 {
q = q.Where("tag_id NOT IN (?)", bun.In(keepTagIDs))
}
_, err := q.Exec(ctx)
return err
}

@@ -0,0 +1,140 @@
package impltag
import (
"context"
"path/filepath"
"testing"
"time"
"github.com/SigNoz/signoz/pkg/factory/factorytest"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/sqlstore/sqlitesqlstore"
"github.com/SigNoz/signoz/pkg/types/tagtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/uptrace/bun"
)
func newTestStore(t *testing.T) sqlstore.SQLStore {
t.Helper()
dbPath := filepath.Join(t.TempDir(), "test.db")
store, err := sqlitesqlstore.New(context.Background(), factorytest.NewSettings(), sqlstore.Config{
Provider: "sqlite",
Connection: sqlstore.ConnectionConfig{
MaxOpenConns: 1,
MaxConnLifetime: 0,
},
Sqlite: sqlstore.SqliteConfig{
Path: dbPath,
Mode: "wal",
BusyTimeout: 5 * time.Second,
TransactionMode: "deferred",
},
})
require.NoError(t, err)
_, err = store.BunDB().NewCreateTable().
Model((*tagtypes.Tag)(nil)).
IfNotExists().
Exec(context.Background())
require.NoError(t, err)
_, err = store.BunDB().Exec(`CREATE UNIQUE INDEX IF NOT EXISTS uq_tag_org_id_internal_name ON tag (org_id, internal_name)`)
require.NoError(t, err)
return store
}
func tagsByInternalName(t *testing.T, db *bun.DB) map[string]*tagtypes.Tag {
t.Helper()
all := make([]*tagtypes.Tag, 0)
require.NoError(t, db.NewSelect().Model(&all).Scan(context.Background()))
out := map[string]*tagtypes.Tag{}
for _, tag := range all {
out[tag.InternalName] = tag
}
return out
}
func TestStore_Create_PopulatesIDsOnFreshInsert(t *testing.T) {
ctx := context.Background()
sqlstore := newTestStore(t)
s := NewStore(sqlstore)
orgID := valuer.GenerateUUID()
tagA := tagtypes.NewTag(orgID, "Database", "database", "u@signoz.io")
tagB := tagtypes.NewTag(orgID, "team/BLR", "team::blr", "u@signoz.io")
preIDA := tagA.ID
preIDB := tagB.ID
got, err := s.Create(ctx, []*tagtypes.Tag{tagA, tagB})
require.NoError(t, err)
require.Len(t, got, 2)
// No race → pre-generated IDs stand. The slice is what we passed in,
// confirming Scan didn't reallocate.
assert.Equal(t, preIDA, got[0].ID)
assert.Equal(t, preIDB, got[1].ID)
// And the rows are in the DB.
stored := tagsByInternalName(t, sqlstore.BunDB())
require.Contains(t, stored, "database")
require.Contains(t, stored, "team::blr")
assert.Equal(t, preIDA, stored["database"].ID)
assert.Equal(t, preIDB, stored["team::blr"].ID)
}
func TestStore_Create_ConflictReturnsExistingRowID(t *testing.T) {
ctx := context.Background()
sqlstore := newTestStore(t)
s := NewStore(sqlstore)
orgID := valuer.GenerateUUID()
// Simulate a concurrent insert: someone else has already inserted "database".
winner := tagtypes.NewTag(orgID, "Database", "database", "concurrent")
_, err := s.Create(ctx, []*tagtypes.Tag{winner})
require.NoError(t, err)
winnerID := winner.ID
// Now our request runs with a different pre-generated ID for the same
// internal name. RETURNING should overwrite our stale ID with winner's ID.
loser := tagtypes.NewTag(orgID, "Database", "database", "u@signoz.io")
loserPreID := loser.ID
require.NotEqual(t, winnerID, loserPreID, "pre-generated IDs must differ for this test to be meaningful")
got, err := s.Create(ctx, []*tagtypes.Tag{loser})
require.NoError(t, err)
require.Len(t, got, 1)
assert.Equal(t, winnerID, got[0].ID, "returned slice should carry the existing row's ID, not our stale one")
assert.Equal(t, winnerID, loser.ID, "input slice element is mutated in place")
// And the DB still has exactly one row for that internal name — winner's.
stored := tagsByInternalName(t, sqlstore.BunDB())
require.Len(t, stored, 1)
assert.Equal(t, winnerID, stored["database"].ID)
}
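The conflict behavior these two tests pin down can be modeled in isolation. Below is a minimal, stdlib-only Go sketch of the ON CONFLICT ... RETURNING semantics the store appears to rely on: a fresh name keeps its pre-generated ID, while a name collision mutates the input element in place to carry the existing row's ID. The map stands in for the unique index on internal name; the `Tag` struct and `createMany` function are illustrative assumptions, not the store's actual code.

```go
package main

import "fmt"

// Tag mirrors only the fields the tests above rely on; a simplified
// stand-in for tagtypes.Tag, not the real type.
type Tag struct {
	ID           string
	InternalName string
}

// createMany sketches the upsert semantics: on conflict the input
// element adopts the winner's ID (as RETURNING would overwrite it),
// otherwise the pre-generated ID stands and the row is recorded.
func createMany(existing map[string]string, tags []*Tag) []*Tag {
	for _, t := range tags {
		if winnerID, ok := existing[t.InternalName]; ok {
			t.ID = winnerID // conflict: adopt the existing row's ID
		} else {
			existing[t.InternalName] = t.ID // fresh: pre-generated ID stands
		}
	}
	return tags
}

func main() {
	db := map[string]string{"database": "winner-id"} // concurrent winner already inserted
	got := createMany(db, []*Tag{
		{ID: "stale-id", InternalName: "database"},  // conflicts
		{ID: "fresh-id", InternalName: "team::blr"}, // fresh
	})
	fmt.Println(got[0].ID, got[1].ID)
}
```

Note the mutation in place: callers holding references to the input slice elements observe the reconciled IDs, which is exactly what `TestStore_Create_ConflictReturnsExistingRowID` asserts.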
func TestStore_Create_MixedFreshAndConflict(t *testing.T) {
ctx := context.Background()
sqlstore := newTestStore(t)
s := NewStore(sqlstore)
orgID := valuer.GenerateUUID()
pre := tagtypes.NewTag(orgID, "Database", "database", "concurrent")
_, err := s.Create(ctx, []*tagtypes.Tag{pre})
require.NoError(t, err)
preExistingID := pre.ID
conflict := tagtypes.NewTag(orgID, "Database", "database", "u@signoz.io")
fresh := tagtypes.NewTag(orgID, "team/BLR", "team::blr", "u@signoz.io")
freshPreID := fresh.ID
got, err := s.Create(ctx, []*tagtypes.Tag{conflict, fresh})
require.NoError(t, err)
require.Len(t, got, 2)
assert.Equal(t, preExistingID, got[0].ID, "conflicting row's ID overwritten with the existing row's")
assert.Equal(t, freshPreID, got[1].ID, "fresh row's pre-generated ID is preserved")
}

pkg/modules/tag/tag.go

@@ -0,0 +1,28 @@
package tag
import (
"context"
"github.com/SigNoz/signoz/pkg/types/tagtypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type Module interface {
// CreateMany resolves user-supplied tag names against the existing tags for the
// org — reusing the casing of any existing parent tag so that
// "teams/blr/platform" inherits the "BLR" casing from a pre-existing
// "teams/BLR" tag — and inserts any tags that don't yet exist.
//
// Does not link the resolved tags to any entity — call LinkToEntity for that.
CreateMany(ctx context.Context, orgID valuer.UUID, postable []tagtypes.PostableTag, createdBy string) ([]*tagtypes.Tag, error)
// LinkToEntity inserts (entity, tag) rows in tag_relations. Existing rows
// are left untouched. Uses the caller's transaction context if any so that
// it can be made atomic with the entity row insert.
LinkToEntity(ctx context.Context, orgID valuer.UUID, entityType tagtypes.EntityType, entityID valuer.UUID, tagIDs []valuer.UUID) error
// SyncLinksForEntity reconciles tag_relations for the entity against the
// given tag IDs: missing links are inserted, obsolete ones removed.
SyncLinksForEntity(ctx context.Context, orgID valuer.UUID, entityType tagtypes.EntityType, entityID valuer.UUID, tagIDs []valuer.UUID) error
ListForEntity(ctx context.Context, entityID valuer.UUID) ([]*tagtypes.Tag, error)
}
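The casing rule CreateMany documents ("teams/blr/platform" inheriting the "BLR" casing from a pre-existing "teams/BLR") can be illustrated with a small, self-contained sketch. `resolveCasing` is a hypothetical helper written for this note, not the module's actual implementation: each path segment of a new name reuses the display casing of any case-insensitively matching segment among existing tags.

```go
package main

import (
	"fmt"
	"strings"
)

// resolveCasing maps each lowercased segment of the existing tags to its
// stored display casing, then rewrites matching segments of the new name.
func resolveCasing(existing []string, name string) string {
	canonical := map[string]string{}
	for _, e := range existing {
		for _, seg := range strings.Split(e, "/") {
			canonical[strings.ToLower(seg)] = seg
		}
	}
	segs := strings.Split(name, "/")
	for i, seg := range segs {
		if c, ok := canonical[strings.ToLower(seg)]; ok {
			segs[i] = c // inherit the existing parent's casing
		}
	}
	return strings.Join(segs, "/")
}

func main() {
	fmt.Println(resolveCasing([]string{"teams/BLR"}, "teams/blr/platform"))
}
```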


@@ -21,9 +21,9 @@ import (
"github.com/SigNoz/signoz/pkg/query-service/utils/timestamp"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/ctxtypes"
"github.com/SigNoz/signoz/pkg/types/instrumentationtypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/SigNoz/signoz/pkg/valuer"
@@ -1342,7 +1342,7 @@ func getLocalTableName(tableName string) string {
}
func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params *retentiontypes.TTLParams) (*retentiontypes.SetTTLResponseItem, *model.ApiError) {
func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params *model.TTLParams) (*model.SetTTLResponseItem, *model.ApiError) {
ctx = ctxtypes.NewContextWithCommentVals(ctx, map[string]string{
instrumentationtypes.TelemetrySignal: telemetrytypes.SignalLogs.StringValue(),
instrumentationtypes.CodeNamespace: "clickhouse-reader",
@@ -1377,7 +1377,7 @@ func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params
if apiErr != nil {
return nil, &model.ApiError{Typ: model.ErrorExec, Err: fmt.Errorf("error in processing ttl_status check sql query")}
}
if statusItem.Status == retentiontypes.TTLSettingStatusPending {
if statusItem.Status == constants.StatusPending {
return nil, &model.ApiError{Typ: model.ErrorConflict, Err: fmt.Errorf("TTL is already running")}
}
}
@@ -1425,14 +1425,18 @@ func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params
// we will change ttl for only the new parts and not the old ones
query += " SETTINGS materialize_ttl_after_modify=0"
ttl := retentiontypes.TTLSetting{
ID: valuer.GenerateUUID(),
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
ttl := types.TTLSetting{
Identifiable: types.Identifiable{
ID: valuer.GenerateUUID(),
},
TimeAuditable: types.TimeAuditable{
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
},
TransactionID: uuid,
TableName: tableName,
TTL: int(params.DelDuration),
Status: retentiontypes.TTLSettingStatusPending,
Status: constants.StatusPending,
ColdStorageTTL: coldStorageDuration,
OrgID: orgID,
}
@@ -1456,9 +1460,9 @@ func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params
sqlDB.
BunDB().
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Model(new(types.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", retentiontypes.TTLSettingStatusFailed).
Set("status = ?", constants.StatusFailed).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -1476,9 +1480,9 @@ func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params
sqlDB.
BunDB().
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Model(new(types.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", retentiontypes.TTLSettingStatusFailed).
Set("status = ?", constants.StatusFailed).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -1491,9 +1495,9 @@ func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params
sqlDB.
BunDB().
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Model(new(types.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", retentiontypes.TTLSettingStatusSuccess).
Set("status = ?", constants.StatusSuccess).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -1503,10 +1507,10 @@ func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params
}
}(ttlPayload)
return &retentiontypes.SetTTLResponseItem{Message: "move ttl has been successfully set up"}, nil
return &model.SetTTLResponseItem{Message: "move ttl has been successfully set up"}, nil
}
func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, params *retentiontypes.TTLParams) (*retentiontypes.SetTTLResponseItem, *model.ApiError) {
func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, params *model.TTLParams) (*model.SetTTLResponseItem, *model.ApiError) {
ctx = ctxtypes.NewContextWithCommentVals(ctx, map[string]string{
instrumentationtypes.TelemetrySignal: telemetrytypes.SignalTraces.StringValue(),
instrumentationtypes.CodeNamespace: "clickhouse-reader",
@@ -1536,7 +1540,7 @@ func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, param
if apiErr != nil {
return nil, &model.ApiError{Typ: model.ErrorExec, Err: fmt.Errorf("error in processing ttl_status check sql query")}
}
if statusItem.Status == retentiontypes.TTLSettingStatusPending {
if statusItem.Status == constants.StatusPending {
return nil, &model.ApiError{Typ: model.ErrorConflict, Err: fmt.Errorf("TTL is already running")}
}
}
@@ -1559,14 +1563,18 @@ func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, param
timestamp = "end"
}
ttl := retentiontypes.TTLSetting{
ID: valuer.GenerateUUID(),
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
ttl := types.TTLSetting{
Identifiable: types.Identifiable{
ID: valuer.GenerateUUID(),
},
TimeAuditable: types.TimeAuditable{
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
},
TransactionID: uuid,
TableName: tableName,
TTL: int(params.DelDuration),
Status: retentiontypes.TTLSettingStatusPending,
Status: constants.StatusPending,
ColdStorageTTL: coldStorageDuration,
OrgID: orgID,
}
@@ -1602,9 +1610,9 @@ func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, param
sqlDB.
BunDB().
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Model(new(types.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", retentiontypes.TTLSettingStatusFailed).
Set("status = ?", constants.StatusFailed).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -1623,9 +1631,9 @@ func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, param
sqlDB.
BunDB().
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Model(new(types.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", retentiontypes.TTLSettingStatusFailed).
Set("status = ?", constants.StatusFailed).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -1638,9 +1646,9 @@ func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, param
sqlDB.
BunDB().
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Model(new(types.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", retentiontypes.TTLSettingStatusSuccess).
Set("status = ?", constants.StatusSuccess).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -1649,7 +1657,7 @@ func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, param
}
}(distributedTableName)
}
return &retentiontypes.SetTTLResponseItem{Message: "move ttl has been successfully set up"}, nil
return &model.SetTTLResponseItem{Message: "move ttl has been successfully set up"}, nil
}
func (r *ClickHouseReader) hasCustomRetentionColumn(ctx context.Context) (bool, error) {
@@ -1678,7 +1686,7 @@ func (r *ClickHouseReader) hasCustomRetentionColumn(ctx context.Context) (bool,
return true, nil
}
func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *retentiontypes.CustomRetentionTTLParams) (*retentiontypes.CustomRetentionTTLResponse, error) {
func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *model.CustomRetentionTTLParams) (*model.CustomRetentionTTLResponse, error) {
ctx = ctxtypes.NewContextWithCommentVals(ctx, map[string]string{
instrumentationtypes.TelemetrySignal: telemetrytypes.SignalLogs.StringValue(),
@@ -1693,7 +1701,7 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *r
if !hasCustomRetention {
r.logger.Info("Custom retention not supported, falling back to standard TTL method", "orgID", orgID)
ttlParams := &retentiontypes.TTLParams{
ttlParams := &model.TTLParams{
Type: params.Type,
DelDuration: int64(params.DefaultTTLDays * 24 * 3600),
}
@@ -1714,7 +1722,7 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *r
return nil, errorsV2.Wrapf(apiErr.Err, errorsV2.TypeInternal, errorsV2.CodeInternal, "failed to set standard TTL")
}
return &retentiontypes.CustomRetentionTTLResponse{
return &model.CustomRetentionTTLResponse{
Message: fmt.Sprintf("Custom retention not supported, applied standard TTL of %d days. %s", params.DefaultTTLDays, ttlResult.Message),
}, nil
}
@@ -1725,7 +1733,7 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *r
uuidWithHyphen := valuer.GenerateUUID()
uuid := strings.Replace(uuidWithHyphen.String(), "-", "", -1)
if params.Type != retentiontypes.LogsTTL {
if params.Type != constants.LogsTTL {
return nil, errorsV2.Newf(errorsV2.TypeInternal, errorsV2.CodeInternal, "custom retention TTL only supported for logs")
}
@@ -1756,7 +1764,7 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *r
if apiErr != nil {
return nil, errorsV2.Newf(errorsV2.TypeInternal, errorsV2.CodeInternal, "error in processing custom_retention_ttl_status check sql query")
}
if statusItem.Status == retentiontypes.TTLSettingStatusPending {
if statusItem.Status == constants.StatusPending {
return nil, errorsV2.Newf(errorsV2.TypeInternal, errorsV2.CodeInternal, "custom retention TTL is already running")
}
}
@@ -1830,15 +1838,19 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *r
}
for tableName, queries := range ttlPayload {
customTTL := retentiontypes.TTLSetting{
ID: valuer.GenerateUUID(),
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
customTTL := types.TTLSetting{
Identifiable: types.Identifiable{
ID: valuer.GenerateUUID(),
},
TimeAuditable: types.TimeAuditable{
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
},
TransactionID: uuid,
TableName: tableName,
TTL: params.DefaultTTLDays,
Condition: string(ttlConditionsJSON),
Status: retentiontypes.TTLSettingStatusPending,
Status: constants.StatusPending,
ColdStorageTTL: coldStorageDuration,
OrgID: orgID,
}
@@ -1854,7 +1866,7 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *r
err := r.setColdStorage(ctx, tableName, params.ColdStorageVolume)
if err != nil {
r.logger.Error("error in setting cold storage", errorsV2.Attr(err))
r.updateCustomRetentionTTLStatus(ctx, orgID, tableName, retentiontypes.TTLSettingStatusFailed)
r.updateCustomRetentionTTLStatus(ctx, orgID, tableName, constants.StatusFailed)
return nil, errorsV2.Wrapf(err.Err, errorsV2.TypeInternal, errorsV2.CodeInternal, "error setting cold storage for table %s", tableName)
}
}
@@ -1863,21 +1875,21 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *r
r.logger.Debug("Executing custom retention TTL request: ", "request", query, "step", i+1)
if err := r.db.Exec(ctx, query); err != nil {
r.logger.Error("error while setting custom retention ttl", errorsV2.Attr(err))
r.updateCustomRetentionTTLStatus(ctx, orgID, tableName, retentiontypes.TTLSettingStatusFailed)
r.updateCustomRetentionTTLStatus(ctx, orgID, tableName, constants.StatusFailed)
return nil, errorsV2.Wrapf(err, errorsV2.TypeInternal, errorsV2.CodeInternal, "error setting custom retention TTL for table %s, query: %s", tableName, query)
}
}
r.updateCustomRetentionTTLStatus(ctx, orgID, tableName, retentiontypes.TTLSettingStatusSuccess)
r.updateCustomRetentionTTLStatus(ctx, orgID, tableName, constants.StatusSuccess)
}
return &retentiontypes.CustomRetentionTTLResponse{
return &model.CustomRetentionTTLResponse{
Message: "custom retention TTL has been successfully set up",
}, nil
}
// New method to build multiIf expressions with support for multiple AND conditions
func (r *ClickHouseReader) buildMultiIfExpression(ttlConditions []retentiontypes.CustomRetentionRule, defaultTTLDays int, isResourceTable bool) string {
func (r *ClickHouseReader) buildMultiIfExpression(ttlConditions []model.CustomRetentionRule, defaultTTLDays int, isResourceTable bool) string {
var conditions []string
for i, rule := range ttlConditions {
@@ -1949,7 +1961,7 @@ func (r *ClickHouseReader) buildMultiIfExpression(ttlConditions []retentiontypes
return result
}
func (r *ClickHouseReader) GetCustomRetentionTTL(ctx context.Context, orgID string) (*retentiontypes.GetCustomRetentionTTLResponse, error) {
func (r *ClickHouseReader) GetCustomRetentionTTL(ctx context.Context, orgID string) (*model.GetCustomRetentionTTLResponse, error) {
// Check if V2 (custom retention) is supported
hasCustomRetention, err := r.hasCustomRetentionColumn(ctx)
if err != nil {
@@ -1958,14 +1970,14 @@ func (r *ClickHouseReader) GetCustomRetentionTTL(ctx context.Context, orgID stri
hasCustomRetention = false
}
response := &retentiontypes.GetCustomRetentionTTLResponse{}
response := &model.GetCustomRetentionTTLResponse{}
if hasCustomRetention {
// V2 - Custom retention is supported
response.Version = "v2"
// Get the latest custom retention TTL setting
customTTL := new(retentiontypes.TTLSetting)
customTTL := new(types.TTLSetting)
err := r.sqlDB.BunDB().NewSelect().
Model(customTTL).
Where("org_id = ?", orgID).
@@ -1981,19 +1993,19 @@ func (r *ClickHouseReader) GetCustomRetentionTTL(ctx context.Context, orgID stri
if err == sql.ErrNoRows {
// No V2 configuration found, return defaults
response.DefaultTTLDays = retentiontypes.DefaultLogsRetentionDays
response.TTLConditions = []retentiontypes.CustomRetentionRule{}
response.Status = retentiontypes.TTLSettingStatusSuccess
response.DefaultTTLDays = 15
response.TTLConditions = []model.CustomRetentionRule{}
response.Status = constants.StatusSuccess
response.ColdStorageTTLDays = -1
return response, nil
}
// Parse TTL conditions from Condition
var ttlConditions []retentiontypes.CustomRetentionRule
var ttlConditions []model.CustomRetentionRule
if customTTL.Condition != "" {
if err := json.Unmarshal([]byte(customTTL.Condition), &ttlConditions); err != nil {
r.logger.Error("Error parsing TTL conditions", errorsV2.Attr(err))
ttlConditions = []retentiontypes.CustomRetentionRule{}
ttlConditions = []model.CustomRetentionRule{}
}
}
@@ -2007,8 +2019,8 @@ func (r *ClickHouseReader) GetCustomRetentionTTL(ctx context.Context, orgID stri
response.Version = "v1"
// Get V1 TTL configuration
ttlParams := &retentiontypes.GetTTLParams{
Type: retentiontypes.LogsTTL,
ttlParams := &model.GetTTLParams{
Type: constants.LogsTTL,
}
ttlResult, apiErr := r.GetTTL(ctx, orgID, ttlParams)
@@ -2028,14 +2040,14 @@ func (r *ClickHouseReader) GetCustomRetentionTTL(ctx context.Context, orgID stri
}
// For V1, we don't have TTL conditions
response.TTLConditions = []retentiontypes.CustomRetentionRule{}
response.TTLConditions = []model.CustomRetentionRule{}
}
return response, nil
}
func (r *ClickHouseReader) checkCustomRetentionTTLStatusItem(ctx context.Context, orgID string, tableName string) (*retentiontypes.TTLSetting, error) {
ttl := new(retentiontypes.TTLSetting)
func (r *ClickHouseReader) checkCustomRetentionTTLStatusItem(ctx context.Context, orgID string, tableName string) (*types.TTLSetting, error) {
ttl := new(types.TTLSetting)
err := r.sqlDB.BunDB().NewSelect().
Model(ttl).
Where("table_name = ?", tableName).
@@ -2056,7 +2068,7 @@ func (r *ClickHouseReader) updateCustomRetentionTTLStatus(ctx context.Context, o
statusItem, apiErr := r.checkCustomRetentionTTLStatusItem(ctx, orgID, tableName)
if apiErr == nil && statusItem != nil {
_, dbErr := r.sqlDB.BunDB().NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Model(new(types.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", status).
Where("id = ?", statusItem.ID.StringValue()).
@@ -2068,7 +2080,7 @@ func (r *ClickHouseReader) updateCustomRetentionTTLStatus(ctx context.Context, o
}
// Enhanced validation function with duplicate detection and efficient key validation
func (r *ClickHouseReader) validateTTLConditions(ctx context.Context, ttlConditions []retentiontypes.CustomRetentionRule) error {
func (r *ClickHouseReader) validateTTLConditions(ctx context.Context, ttlConditions []model.CustomRetentionRule) error {
ctx = ctxtypes.NewContextWithCommentVals(ctx, map[string]string{
instrumentationtypes.CodeNamespace: "clickhouse-reader",
instrumentationtypes.CodeFunctionName: "validateTTLConditions",
@@ -2172,16 +2184,16 @@ func (r *ClickHouseReader) validateTTLConditions(ctx context.Context, ttlConditi
// SetTTL sets the TTL for traces or metrics or logs tables.
// This is an async API which creates goroutines to set TTL.
// Status of TTL update is tracked with ttl_status table in sqlite db.
func (r *ClickHouseReader) SetTTL(ctx context.Context, orgID string, params *retentiontypes.TTLParams) (*retentiontypes.SetTTLResponseItem, *model.ApiError) {
func (r *ClickHouseReader) SetTTL(ctx context.Context, orgID string, params *model.TTLParams) (*model.SetTTLResponseItem, *model.ApiError) {
// Keep only latest 100 transactions/requests
r.deleteTtlTransactions(ctx, orgID, 100)
switch params.Type {
case retentiontypes.TraceTTL:
case constants.TraceTTL:
return r.setTTLTraces(ctx, orgID, params)
case retentiontypes.MetricsTTL:
case constants.MetricsTTL:
return r.setTTLMetrics(ctx, orgID, params)
case retentiontypes.LogsTTL:
case constants.LogsTTL:
return r.setTTLLogs(ctx, orgID, params)
default:
return nil, &model.ApiError{Typ: model.ErrorExec, Err: fmt.Errorf("error while setting ttl. ttl type should be <metrics|traces>, got %v", params.Type)}
@@ -2189,7 +2201,7 @@ func (r *ClickHouseReader) SetTTL(ctx context.Context, orgID string, params *ret
}
func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, params *retentiontypes.TTLParams) (*retentiontypes.SetTTLResponseItem, *model.ApiError) {
func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, params *model.TTLParams) (*model.SetTTLResponseItem, *model.ApiError) {
ctx = ctxtypes.NewContextWithCommentVals(ctx, map[string]string{
instrumentationtypes.TelemetrySignal: telemetrytypes.SignalMetrics.StringValue(),
instrumentationtypes.CodeNamespace: "clickhouse-reader",
@@ -2218,19 +2230,23 @@ func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, para
if apiErr != nil {
return nil, &model.ApiError{Typ: model.ErrorExec, Err: fmt.Errorf("error in processing ttl_status check sql query")}
}
if statusItem.Status == retentiontypes.TTLSettingStatusPending {
if statusItem.Status == constants.StatusPending {
return nil, &model.ApiError{Typ: model.ErrorConflict, Err: fmt.Errorf("TTL is already running")}
}
}
metricTTL := func(tableName string) {
ttl := retentiontypes.TTLSetting{
ID: valuer.GenerateUUID(),
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
ttl := types.TTLSetting{
Identifiable: types.Identifiable{
ID: valuer.GenerateUUID(),
},
TimeAuditable: types.TimeAuditable{
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
},
TransactionID: uuid,
TableName: tableName,
TTL: int(params.DelDuration),
Status: retentiontypes.TTLSettingStatusPending,
Status: constants.StatusPending,
ColdStorageTTL: coldStorageDuration,
OrgID: orgID,
}
@@ -2266,9 +2282,9 @@ func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, para
sqlDB.
BunDB().
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Model(new(types.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", retentiontypes.TTLSettingStatusFailed).
Set("status = ?", constants.StatusFailed).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -2287,9 +2303,9 @@ func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, para
sqlDB.
BunDB().
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Model(new(types.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", retentiontypes.TTLSettingStatusFailed).
Set("status = ?", constants.StatusFailed).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -2302,9 +2318,9 @@ func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, para
sqlDB.
BunDB().
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Model(new(types.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", retentiontypes.TTLSettingStatusSuccess).
Set("status = ?", constants.StatusSuccess).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -2315,7 +2331,7 @@ func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, para
for _, tableName := range tableNames {
go metricTTL(tableName)
}
return &retentiontypes.SetTTLResponseItem{Message: "move ttl has been successfully set up"}, nil
return &model.SetTTLResponseItem{Message: "move ttl has been successfully set up"}, nil
}
func (r *ClickHouseReader) deleteTtlTransactions(ctx context.Context, orgID string, numberOfTransactionsStore int) {
@@ -2325,7 +2341,7 @@ func (r *ClickHouseReader) deleteTtlTransactions(ctx context.Context, orgID stri
BunDB().
NewSelect().
Column("transaction_id").
Model(new(retentiontypes.TTLSetting)).
Model(new(types.TTLSetting)).
Where("org_id = ?", orgID).
Group("transaction_id").
OrderExpr("MAX(created_at) DESC").
@@ -2340,7 +2356,7 @@ func (r *ClickHouseReader) deleteTtlTransactions(ctx context.Context, orgID stri
sqlDB.
BunDB().
NewDelete().
Model(new(retentiontypes.TTLSetting)).
Model(new(types.TTLSetting)).
Where("transaction_id NOT IN (?)", bun.In(limitTransactions)).
Exec(ctx)
if err != nil {
@@ -2349,9 +2365,9 @@ func (r *ClickHouseReader) deleteTtlTransactions(ctx context.Context, orgID stri
}
// checkTTLStatusItem checks if ttl_status table has an entry for the given table name
func (r *ClickHouseReader) checkTTLStatusItem(ctx context.Context, orgID string, tableName string) (*retentiontypes.TTLSetting, *model.ApiError) {
func (r *ClickHouseReader) checkTTLStatusItem(ctx context.Context, orgID string, tableName string) (*types.TTLSetting, *model.ApiError) {
r.logger.Info("checkTTLStatusItem query", "tableName", tableName)
ttl := new(retentiontypes.TTLSetting)
ttl := new(types.TTLSetting)
err := r.
sqlDB.
BunDB().
@@ -2372,26 +2388,26 @@ func (r *ClickHouseReader) checkTTLStatusItem(ctx context.Context, orgID string,
// getTTLQueryStatus fetches ttl_status table status from DB
func (r *ClickHouseReader) getTTLQueryStatus(ctx context.Context, orgID string, tableNameArray []string) (string, *model.ApiError) {
failFlag := false
status := retentiontypes.TTLSettingStatusSuccess
status := constants.StatusSuccess
for _, tableName := range tableNameArray {
statusItem, apiErr := r.checkTTLStatusItem(ctx, orgID, tableName)
emptyStatusStruct := new(retentiontypes.TTLSetting)
emptyStatusStruct := new(types.TTLSetting)
if statusItem == emptyStatusStruct {
return "", nil
}
if apiErr != nil {
return "", &model.ApiError{Typ: model.ErrorExec, Err: fmt.Errorf("error in processing ttl_status check sql query")}
}
if statusItem.Status == retentiontypes.TTLSettingStatusPending && statusItem.UpdatedAt.Unix()-time.Now().Unix() < 3600 {
status = retentiontypes.TTLSettingStatusPending
if statusItem.Status == constants.StatusPending && statusItem.UpdatedAt.Unix()-time.Now().Unix() < 3600 {
status = constants.StatusPending
return status, nil
}
if statusItem.Status == retentiontypes.TTLSettingStatusFailed {
if statusItem.Status == constants.StatusFailed {
failFlag = true
}
}
if failFlag {
status = retentiontypes.TTLSettingStatusFailed
status = constants.StatusFailed
}
return status, nil
@@ -2444,7 +2460,7 @@ func getLocalTableNameArray(tableNames []string) []string {
}
// GetTTL returns current ttl, expected ttl and past setTTL status for metrics/traces.
func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *retentiontypes.GetTTLParams) (*retentiontypes.GetTTLResponseItem, *model.ApiError) {
func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *model.GetTTLParams) (*model.GetTTLResponseItem, *model.ApiError) {
ctx = ctxtypes.NewContextWithCommentVals(ctx, map[string]string{
instrumentationtypes.CodeNamespace: "clickhouse-reader",
@@ -2479,8 +2495,8 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
return delTTL, moveTTL
}
getMetricsTTL := func() (*retentiontypes.DBResponseTTL, *model.ApiError) {
var dbResp []retentiontypes.DBResponseTTL
getMetricsTTL := func() (*model.DBResponseTTL, *model.ApiError) {
var dbResp []model.DBResponseTTL
query := fmt.Sprintf("SELECT engine_full FROM system.tables WHERE name='%v'", signozSampleLocalTableName)
@@ -2497,8 +2513,8 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
}
}
getTracesTTL := func() (*retentiontypes.DBResponseTTL, *model.ApiError) {
var dbResp []retentiontypes.DBResponseTTL
getTracesTTL := func() (*model.DBResponseTTL, *model.ApiError) {
var dbResp []model.DBResponseTTL
query := fmt.Sprintf("SELECT engine_full FROM system.tables WHERE name='%v' AND database='%v'", r.traceLocalTableName, signozTraceDBName)
@@ -2515,8 +2531,8 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
}
}
getLogsTTL := func() (*retentiontypes.DBResponseTTL, *model.ApiError) {
var dbResp []retentiontypes.DBResponseTTL
getLogsTTL := func() (*model.DBResponseTTL, *model.ApiError) {
var dbResp []model.DBResponseTTL
query := fmt.Sprintf("SELECT engine_full FROM system.tables WHERE name='%v' AND database='%v'", r.logsLocalTableName, r.logsDB)
@@ -2534,7 +2550,7 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
}
switch ttlParams.Type {
case retentiontypes.TraceTTL:
case constants.TraceTTL:
tableNameArray := []string{
r.TraceDB + "." + r.traceTableName,
r.TraceDB + "." + r.traceResourceTableV3,
@@ -2562,9 +2578,9 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
}
delTTL, moveTTL := parseTTL(dbResp.EngineFull)
return &retentiontypes.GetTTLResponseItem{TracesTime: delTTL, TracesMoveTime: moveTTL, ExpectedTracesTime: ttlQuery.TTL, ExpectedTracesMoveTime: ttlQuery.ColdStorageTTL, Status: status}, nil
return &model.GetTTLResponseItem{TracesTime: delTTL, TracesMoveTime: moveTTL, ExpectedTracesTime: ttlQuery.TTL, ExpectedTracesMoveTime: ttlQuery.ColdStorageTTL, Status: status}, nil
case retentiontypes.MetricsTTL:
case constants.MetricsTTL:
tableNameArray := []string{signozMetricDBName + "." + signozSampleTableName}
tableNameArray = getLocalTableNameArray(tableNameArray)
status, apiErr := r.getTTLQueryStatus(ctx, orgID, tableNameArray)
@@ -2585,9 +2601,9 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
}
delTTL, moveTTL := parseTTL(dbResp.EngineFull)
return &retentiontypes.GetTTLResponseItem{MetricsTime: delTTL, MetricsMoveTime: moveTTL, ExpectedMetricsTime: ttlQuery.TTL, ExpectedMetricsMoveTime: ttlQuery.ColdStorageTTL, Status: status}, nil
return &model.GetTTLResponseItem{MetricsTime: delTTL, MetricsMoveTime: moveTTL, ExpectedMetricsTime: ttlQuery.TTL, ExpectedMetricsMoveTime: ttlQuery.ColdStorageTTL, Status: status}, nil
case retentiontypes.LogsTTL:
case constants.LogsTTL:
tableNameArray := []string{r.logsDB + "." + r.logsTableName}
tableNameArray = getLocalTableNameArray(tableNameArray)
status, apiErr := r.getTTLQueryStatus(ctx, orgID, tableNameArray)
@@ -2608,7 +2624,7 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
}
delTTL, moveTTL := parseTTL(dbResp.EngineFull)
return &retentiontypes.GetTTLResponseItem{LogsTime: delTTL, LogsMoveTime: moveTTL, ExpectedLogsTime: ttlQuery.TTL, ExpectedLogsMoveTime: ttlQuery.ColdStorageTTL, Status: status}, nil
return &model.GetTTLResponseItem{LogsTime: delTTL, LogsMoveTime: moveTTL, ExpectedLogsTime: ttlQuery.TTL, ExpectedLogsMoveTime: ttlQuery.ColdStorageTTL, Status: status}, nil
default:
return nil, &model.ApiError{Typ: model.ErrorExec, Err: fmt.Errorf("error while getting ttl. ttl type should be metrics|traces, got %v",

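The `buildMultiIfExpression` method referenced in the diff above builds a ClickHouse `multiIf` expression mapping per-condition retention rules to intervals, with the default TTL as the final branch. A minimal sketch of that shape, under the assumption that each rule is a filter string plus a day count (the `rule` type, column name, and function are illustrative, not the reader's exact output):

```go
package main

import (
	"fmt"
	"strings"
)

// rule pairs a filter condition with a retention period in days.
type rule struct {
	condition string // e.g. "service.name = 'api'"
	ttlDays   int
}

// buildMultiIf emits multiIf(cond1, interval1, ..., defaultInterval):
// each rule contributes a condition/interval pair, and the default TTL
// becomes the trailing else-branch.
func buildMultiIf(rules []rule, defaultDays int) string {
	parts := make([]string, 0, 2*len(rules)+1)
	for _, r := range rules {
		parts = append(parts, r.condition,
			fmt.Sprintf("timestamp + INTERVAL %d DAY", r.ttlDays))
	}
	parts = append(parts, fmt.Sprintf("timestamp + INTERVAL %d DAY", defaultDays))
	return "multiIf(" + strings.Join(parts, ", ") + ")"
}

func main() {
	fmt.Println(buildMultiIf([]rule{{"service.name = 'api'", 7}}, 15))
}
```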

@@ -34,7 +34,6 @@ import (
"github.com/SigNoz/signoz/pkg/query-service/app/cloudintegrations/services"
"github.com/SigNoz/signoz/pkg/query-service/app/integrations"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/gorilla/mux"
@@ -1678,7 +1677,7 @@ func (aH *APIHandler) setCustomRetentionTTL(w http.ResponseWriter, r *http.Reque
return
}
var params retentiontypes.CustomRetentionTTLParams
var params model.CustomRetentionTTLParams
if err := json.NewDecoder(r.Body).Decode(&params); err != nil {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, "Invalid data"))
return

View File

@@ -40,7 +40,6 @@ import (
"github.com/SigNoz/signoz/pkg/query-service/postprocess"
"github.com/SigNoz/signoz/pkg/query-service/utils"
querytemplate "github.com/SigNoz/signoz/pkg/query-service/utils/queryTemplate"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
chVariables "github.com/SigNoz/signoz/pkg/variables/clickhouse"
)
@@ -420,7 +419,7 @@ func parseTime(param string, r *http.Request) (*time.Time, error) {
}
func parseTTLParams(r *http.Request) (*retentiontypes.TTLParams, error) {
func parseTTLParams(r *http.Request) (*model.TTLParams, error) {
// make sure either of the query params are present
typeTTL := r.URL.Query().Get("type")
@@ -433,7 +432,7 @@ func parseTTLParams(r *http.Request) (*retentiontypes.TTLParams, error) {
}
// Validate the type parameter
if typeTTL != retentiontypes.TraceTTL && typeTTL != retentiontypes.MetricsTTL && typeTTL != retentiontypes.LogsTTL {
if typeTTL != baseconstants.TraceTTL && typeTTL != baseconstants.MetricsTTL && typeTTL != baseconstants.LogsTTL {
return nil, fmt.Errorf("type param should be metrics|traces|logs, got %v", typeTTL)
}
@@ -456,7 +455,7 @@ func parseTTLParams(r *http.Request) (*retentiontypes.TTLParams, error) {
}
}
return &retentiontypes.TTLParams{
return &model.TTLParams{
Type: typeTTL,
DelDuration: int64(durationParsed.Seconds()),
ColdStorageVolume: coldStorage,
@@ -464,7 +463,7 @@ func parseTTLParams(r *http.Request) (*retentiontypes.TTLParams, error) {
}, nil
}
func parseGetTTL(r *http.Request) (*retentiontypes.GetTTLParams, error) {
func parseGetTTL(r *http.Request) (*model.GetTTLParams, error) {
typeTTL := r.URL.Query().Get("type")
@@ -472,12 +471,12 @@ func parseGetTTL(r *http.Request) (*retentiontypes.GetTTLParams, error) {
return nil, fmt.Errorf("type param cannot be empty from the query")
} else {
// Validate the type parameter
if typeTTL != retentiontypes.TraceTTL && typeTTL != retentiontypes.MetricsTTL && typeTTL != retentiontypes.LogsTTL {
if typeTTL != baseconstants.TraceTTL && typeTTL != baseconstants.MetricsTTL && typeTTL != baseconstants.LogsTTL {
return nil, fmt.Errorf("type param should be metrics|traces|logs, got %v", typeTTL)
}
}
return &retentiontypes.GetTTLParams{Type: typeTTL}, nil
return &model.GetTTLParams{Type: typeTTL}, nil
}
func parseAggregateAttributeRequest(r *http.Request) (*v3.AggregateAttributeRequest, error) {

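The `parseTTLParams` hunks above keep the same validation contract while moving the constants: the `type` query param must be one of traces/metrics/logs, and the duration is converted to seconds. A minimal standalone sketch of that validation (local copies of the constants; `parseTTL` is an illustrative helper, not the actual function signature):

```go
package main

import (
	"fmt"
	"time"
)

// Mirrors the TTL type constants restored to baseconstants in this diff.
const (
	TraceTTL   = "traces"
	MetricsTTL = "metrics"
	LogsTTL    = "logs"
)

// parseTTL is a simplified sketch of what parseTTLParams validates:
// the type must be one of the three constants, and the duration must
// parse as a Go duration (e.g. "720h"), returned as seconds.
func parseTTL(typeTTL, duration string) (int64, error) {
	if typeTTL != TraceTTL && typeTTL != MetricsTTL && typeTTL != LogsTTL {
		return 0, fmt.Errorf("type param should be metrics|traces|logs, got %v", typeTTL)
	}
	d, err := time.ParseDuration(duration)
	if err != nil {
		return 0, err
	}
	return int64(d.Seconds()), nil
}

func main() {
	secs, err := parseTTL("traces", "720h")
	fmt.Println(secs, err) // 2592000 <nil>
	_, err = parseTTL("events", "720h")
	fmt.Println(err != nil) // true
}
```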
View File

@@ -19,6 +19,10 @@ const (
const MaxAllowedPointsInTimeSeries = 300
const TraceTTL = "traces"
const MetricsTTL = "metrics"
const LogsTTL = "logs"
const SpanSearchScopeRoot = "isroot"
const SpanSearchScopeEntryPoint = "isentrypoint"
const OrderBySpanCount = "span_count"

View File

@@ -7,7 +7,6 @@ import (
"github.com/SigNoz/signoz/pkg/query-service/model"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/query-service/querycache"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/prometheus/prometheus/promql"
"github.com/prometheus/prometheus/util/stats"
@@ -24,8 +23,8 @@ type Reader interface {
GetServicesList(ctx context.Context) (*[]string, error)
GetDependencyGraph(ctx context.Context, query *model.GetServicesParams) (*[]model.ServiceMapDependencyResponseItem, error)
GetTTL(ctx context.Context, orgID string, ttlParams *retentiontypes.GetTTLParams) (*retentiontypes.GetTTLResponseItem, *model.ApiError)
GetCustomRetentionTTL(ctx context.Context, orgID string) (*retentiontypes.GetCustomRetentionTTLResponse, error)
GetTTL(ctx context.Context, orgID string, ttlParams *model.GetTTLParams) (*model.GetTTLResponseItem, *model.ApiError)
GetCustomRetentionTTL(ctx context.Context, orgID string) (*model.GetCustomRetentionTTLResponse, error)
// GetDisks returns a list of disks configured in the underlying DB. It is supported by
// clickhouse only.
@@ -47,8 +46,8 @@ type Reader interface {
GetFlamegraphSpansForTrace(ctx context.Context, orgID valuer.UUID, traceID string, req *model.GetFlamegraphSpansForTraceParams) (*model.GetFlamegraphSpansForTraceResponse, error)
// Setter Interfaces
SetTTL(ctx context.Context, orgID string, ttlParams *retentiontypes.TTLParams) (*retentiontypes.SetTTLResponseItem, *model.ApiError)
SetTTLV2(ctx context.Context, orgID string, params *retentiontypes.CustomRetentionTTLParams) (*retentiontypes.CustomRetentionTTLResponse, error)
SetTTL(ctx context.Context, orgID string, ttlParams *model.TTLParams) (*model.SetTTLResponseItem, *model.ApiError)
SetTTLV2(ctx context.Context, orgID string, params *model.CustomRetentionTTLParams) (*model.CustomRetentionTTLResponse, error)
FetchTemporality(ctx context.Context, orgID valuer.UUID, metricNames []string) (map[string]map[v3.Temporality]bool, error)
GetMetricAggregateAttributes(ctx context.Context, orgID valuer.UUID, req *v3.AggregateAttributeRequest, skipSignozMetrics bool) (*v3.AggregateAttributeResponse, error)

View File

@@ -404,6 +404,56 @@ type TagKey struct {
Type TagDataType `json:"type"`
}
type TTLParams struct {
Type string // It can be one of {traces, metrics, logs}.
ColdStorageVolume string // Name of the cold storage volume.
ToColdStorageDuration int64 // Seconds after which data will be moved to cold storage.
DelDuration int64 // Seconds after which data will be deleted.
}
type CustomRetentionTTLParams struct {
Type string `json:"type"`
DefaultTTLDays int `json:"defaultTTLDays"`
TTLConditions []CustomRetentionRule `json:"ttlConditions"`
ColdStorageVolume string `json:"coldStorageVolume,omitempty"`
ToColdStorageDurationDays int64 `json:"coldStorageDurationDays,omitempty"`
}
type CustomRetentionRule struct {
Filters []FilterCondition `json:"conditions"`
TTLDays int `json:"ttlDays"`
}
type FilterCondition struct {
Key string `json:"key"`
Values []string `json:"values"`
}
type GetCustomRetentionTTLResponse struct {
Version string `json:"version"`
Status string `json:"status"`
// V1 fields
// LogsTime int `json:"logs_ttl_duration_hrs,omitempty"`
// LogsMoveTime int `json:"logs_move_ttl_duration_hrs,omitempty"`
ExpectedLogsTime int `json:"expected_logs_ttl_duration_hrs,omitempty"`
ExpectedLogsMoveTime int `json:"expected_logs_move_ttl_duration_hrs,omitempty"`
// V2 fields
DefaultTTLDays int `json:"default_ttl_days,omitempty"`
TTLConditions []CustomRetentionRule `json:"ttl_conditions,omitempty"`
ColdStorageVolume string `json:"cold_storage_volume,omitempty"`
ColdStorageTTLDays int `json:"cold_storage_ttl_days,omitempty"`
}
type CustomRetentionTTLResponse struct {
Message string `json:"message"`
}
type GetTTLParams struct {
Type string
}
type ListErrorsParams struct {
StartStr string `json:"start"`
EndStr string `json:"end"`

View File

@@ -150,6 +150,16 @@ type RuleResponseItem struct {
Data string `json:"data" db:"data"`
}
type TTLStatusItem struct {
Id int `json:"id" db:"id"`
UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
CreatedAt time.Time `json:"created_at" db:"created_at"`
TableName string `json:"table_name" db:"table_name"`
TTL int `json:"ttl" db:"ttl"`
Status string `json:"status" db:"status"`
ColdStorageTtl int `json:"cold_storage_ttl" db:"cold_storage_ttl"`
}
type ChannelItem struct {
Id int `json:"id" db:"id"`
CreatedAt time.Time `json:"created_at" db:"created_at"`
@@ -452,11 +462,35 @@ type SpanAggregatesDBResponseItem struct {
GroupBy string `ch:"groupBy"`
}
type SetTTLResponseItem struct {
Message string `json:"message"`
}
type DiskItem struct {
Name string `json:"name,omitempty" ch:"name"`
Type string `json:"type,omitempty" ch:"type"`
}
type DBResponseTTL struct {
EngineFull string `ch:"engine_full"`
}
type GetTTLResponseItem struct {
MetricsTime int `json:"metrics_ttl_duration_hrs,omitempty"`
MetricsMoveTime int `json:"metrics_move_ttl_duration_hrs,omitempty"`
TracesTime int `json:"traces_ttl_duration_hrs,omitempty"`
TracesMoveTime int `json:"traces_move_ttl_duration_hrs,omitempty"`
LogsTime int `json:"logs_ttl_duration_hrs,omitempty"`
LogsMoveTime int `json:"logs_move_ttl_duration_hrs,omitempty"`
ExpectedMetricsTime int `json:"expected_metrics_ttl_duration_hrs,omitempty"`
ExpectedMetricsMoveTime int `json:"expected_metrics_move_ttl_duration_hrs,omitempty"`
ExpectedTracesTime int `json:"expected_traces_ttl_duration_hrs,omitempty"`
ExpectedTracesMoveTime int `json:"expected_traces_move_ttl_duration_hrs,omitempty"`
ExpectedLogsTime int `json:"expected_logs_ttl_duration_hrs,omitempty"`
ExpectedLogsMoveTime int `json:"expected_logs_move_ttl_duration_hrs,omitempty"`
Status string `json:"status"`
}
type DBResponseServiceName struct {
ServiceName string `ch:"serviceName"`
Count uint64 `ch:"count"`

View File

@@ -23,8 +23,8 @@ import (
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/identn"
"github.com/SigNoz/signoz/pkg/instrumentation"
"github.com/SigNoz/signoz/pkg/meterreporter"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration"
"github.com/SigNoz/signoz/pkg/modules/dashboard/dashboardpurger"
"github.com/SigNoz/signoz/pkg/modules/inframonitoring"
"github.com/SigNoz/signoz/pkg/modules/metricsexplorer"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount"
@@ -136,9 +136,6 @@ type Config struct {
// Auditor config
Auditor auditor.Config `mapstructure:"auditor"`
// MeterReporter config
MeterReporter meterreporter.Config `mapstructure:"meterreporter"`
// CloudIntegration config
CloudIntegration cloudintegration.Config `mapstructure:"cloudintegration"`
@@ -147,6 +144,9 @@ type Config struct {
// Authz config
Authz authz.Config `mapstructure:"authz"`
// DashboardPurger config (hard-deletes soft-deleted dashboards after a TTL).
DashboardPurger dashboardpurger.Config `mapstructure:"dashboardpurger"`
}
func NewConfig(ctx context.Context, logger *slog.Logger, resolverConfig config.ResolverConfig) (Config, error) {
@@ -172,6 +172,7 @@ func NewConfig(ctx context.Context, logger *slog.Logger, resolverConfig config.R
statsreporter.NewConfigFactory(),
gateway.NewConfigFactory(),
tokenizer.NewConfigFactory(),
dashboardpurger.NewConfigFactory(),
metricsexplorer.NewConfigFactory(),
inframonitoring.NewConfigFactory(),
flagger.NewConfigFactory(),
@@ -179,7 +180,6 @@ func NewConfig(ctx context.Context, logger *slog.Logger, resolverConfig config.R
identn.NewConfigFactory(),
serviceaccount.NewConfigFactory(),
auditor.NewConfigFactory(),
meterreporter.NewConfigFactory(),
cloudintegration.NewConfigFactory(),
tracedetail.NewConfigFactory(),
authz.NewConfigFactory(),

View File

@@ -16,6 +16,7 @@ import (
"github.com/SigNoz/signoz/pkg/instrumentation/instrumentationtest"
"github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization/implorganization"
"github.com/SigNoz/signoz/pkg/modules/tag/impltag"
"github.com/SigNoz/signoz/pkg/modules/user/impluser"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/queryparser"
@@ -44,7 +45,8 @@ func TestNewHandlers(t *testing.T) {
emailing := emailingtest.New()
queryParser := queryparser.New(providerSettings)
require.NoError(t, err)
dashboardModule := impldashboard.NewModule(impldashboard.NewStore(sqlstore), providerSettings, nil, orgGetter, queryParser)
tagModule := impltag.NewModule(impltag.NewStore(sqlstore))
dashboardModule := impldashboard.NewModule(impldashboard.NewStore(sqlstore), sqlstore, providerSettings, nil, orgGetter, queryParser, tagModule)
flagger, err := flagger.New(context.Background(), instrumentationtest.New().ToProviderSettings(), flagger.Config{}, flagger.MustNewRegistry())
require.NoError(t, err)
@@ -52,7 +54,7 @@ func TestNewHandlers(t *testing.T) {
userRoleStore := impluser.NewUserRoleStore(sqlstore, providerSettings)
userGetter := impluser.NewGetter(impluser.NewStore(sqlstore, providerSettings), userRoleStore, flagger)
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, nil, nil, nil, nil, nil, nil, nil, queryParser, Config{}, dashboardModule, userGetter, userRoleStore, nil, nil, flagger)
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, nil, nil, nil, nil, nil, nil, nil, queryParser, Config{}, dashboardModule, userGetter, userRoleStore, nil, nil, flagger, tagModule)
querierHandler := querier.NewHandler(providerSettings, nil, nil)
registryHandler := factory.NewHandler(nil)

View File

@@ -40,6 +40,7 @@ import (
"github.com/SigNoz/signoz/pkg/modules/session/implsession"
"github.com/SigNoz/signoz/pkg/modules/spanpercentile"
"github.com/SigNoz/signoz/pkg/modules/spanpercentile/implspanpercentile"
"github.com/SigNoz/signoz/pkg/modules/tag"
"github.com/SigNoz/signoz/pkg/modules/tracedetail"
"github.com/SigNoz/signoz/pkg/modules/tracedetail/impltracedetail"
"github.com/SigNoz/signoz/pkg/modules/tracefunnel"
@@ -80,6 +81,7 @@ type Modules struct {
CloudIntegration cloudintegration.Module
RuleStateHistory rulestatehistory.Module
TraceDetail tracedetail.Module
Tag tag.Module
}
func NewModules(
@@ -104,6 +106,7 @@ func NewModules(
serviceAccount serviceaccount.Module,
cloudIntegrationModule cloudintegration.Module,
fl flagger.Flagger,
tagModule tag.Module,
) Modules {
quickfilter := implquickfilter.NewModule(implquickfilter.NewStore(sqlstore))
orgSetter := implorganization.NewSetter(implorganization.NewStore(sqlstore), alertmanager, quickfilter)
@@ -133,5 +136,6 @@ func NewModules(
RuleStateHistory: implrulestatehistory.NewModule(implrulestatehistory.NewStore(telemetryStore, telemetryMetadataStore, providerSettings.Logger)),
CloudIntegration: cloudIntegrationModule,
TraceDetail: impltracedetail.NewModule(impltracedetail.NewTraceStore(telemetryStore), providerSettings, config.TraceDetail),
Tag: tagModule,
}
}

View File

@@ -18,6 +18,7 @@ import (
"github.com/SigNoz/signoz/pkg/modules/organization/implorganization"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount/implserviceaccount"
"github.com/SigNoz/signoz/pkg/modules/tag/impltag"
"github.com/SigNoz/signoz/pkg/modules/user/impluser"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/sharder"
@@ -45,7 +46,8 @@ func TestNewModules(t *testing.T) {
emailing := emailingtest.New()
queryParser := queryparser.New(providerSettings)
require.NoError(t, err)
dashboardModule := impldashboard.NewModule(impldashboard.NewStore(sqlstore), providerSettings, nil, orgGetter, queryParser)
tagModule := impltag.NewModule(impltag.NewStore(sqlstore))
dashboardModule := impldashboard.NewModule(impldashboard.NewStore(sqlstore), sqlstore, providerSettings, nil, orgGetter, queryParser, tagModule)
flagger, err := flagger.New(context.Background(), instrumentationtest.New().ToProviderSettings(), flagger.Config{}, flagger.MustNewRegistry())
require.NoError(t, err)
@@ -56,7 +58,7 @@ func TestNewModules(t *testing.T) {
serviceAccount := implserviceaccount.NewModule(implserviceaccount.NewStore(sqlstore), nil, nil, nil, providerSettings, serviceaccount.Config{})
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, nil, nil, nil, nil, nil, nil, nil, queryParser, Config{}, dashboardModule, userGetter, userRoleStore, serviceAccount, implcloudintegration.NewModule(), flagger)
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, nil, nil, nil, nil, nil, nil, nil, queryParser, Config{}, dashboardModule, userGetter, userRoleStore, serviceAccount, implcloudintegration.NewModule(), flagger, tagModule)
reflectVal := reflect.ValueOf(modules)
for i := 0; i < reflectVal.NumField(); i++ {

View File

@@ -28,8 +28,6 @@ import (
"github.com/SigNoz/signoz/pkg/identn/apikeyidentn"
"github.com/SigNoz/signoz/pkg/identn/impersonationidentn"
"github.com/SigNoz/signoz/pkg/identn/tokenizeridentn"
"github.com/SigNoz/signoz/pkg/meterreporter"
"github.com/SigNoz/signoz/pkg/meterreporter/noopmeterreporter"
"github.com/SigNoz/signoz/pkg/modules/authdomain/implauthdomain"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/organization/implorganization"
@@ -197,6 +195,8 @@ func NewSQLMigrationProviderFactories(
sqlmigration.NewServiceAccountAuthzactory(sqlstore),
sqlmigration.NewDropUserDeletedAtFactory(sqlstore, sqlschema),
sqlmigration.NewMigrateAWSAllRegionsFactory(sqlstore),
sqlmigration.NewAddTagsFactory(sqlstore, sqlschema),
sqlmigration.NewAddDashboardSoftDeleteFactory(sqlstore, sqlschema),
)
}
@@ -320,12 +320,6 @@ func NewAuditorProviderFactories() factory.NamedMap[factory.ProviderFactory[audi
)
}
func NewMeterReporterProviderFactories() factory.NamedMap[factory.ProviderFactory[meterreporter.Reporter, meterreporter.Config]] {
return factory.MustNewNamedMap(
noopmeterreporter.NewFactory(),
)
}
func NewFlaggerProviderFactories(registry featuretypes.Registry) factory.NamedMap[factory.ProviderFactory[flagger.FlaggerProvider, flagger.Config]] {
return factory.MustNewNamedMap(
configflagger.NewFactory(registry),

View File

@@ -22,7 +22,6 @@ import (
"github.com/SigNoz/signoz/pkg/identn"
"github.com/SigNoz/signoz/pkg/instrumentation"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/meterreporter"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
@@ -30,6 +29,10 @@ import (
"github.com/SigNoz/signoz/pkg/modules/rulestatehistory"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount/implserviceaccount"
"github.com/SigNoz/signoz/pkg/modules/tag"
"github.com/SigNoz/signoz/pkg/modules/dashboard/dashboardpurger"
"github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/tag/impltag"
"github.com/SigNoz/signoz/pkg/modules/user/impluser"
"github.com/SigNoz/signoz/pkg/prometheus"
"github.com/SigNoz/signoz/pkg/querier"
@@ -85,7 +88,6 @@ type SigNoz struct {
Flagger flagger.Flagger
Gateway gateway.Gateway
Auditor auditor.Auditor
MeterReporter meterreporter.Reporter
}
func New(
@@ -103,10 +105,9 @@ func New(
telemetrystoreProviderFactories factory.NamedMap[factory.ProviderFactory[telemetrystore.TelemetryStore, telemetrystore.Config]],
authNsCallback func(ctx context.Context, providerSettings factory.ProviderSettings, store authtypes.AuthNStore, licensing licensing.Licensing) (map[authtypes.AuthNProvider]authn.AuthN, error),
authzCallback func(context.Context, sqlstore.SQLStore, licensing.Licensing, []authz.OnBeforeRoleDelete, dashboard.Module) (factory.ProviderFactory[authz.AuthZ, authz.Config], error),
dashboardModuleCallback func(sqlstore.SQLStore, factory.ProviderSettings, analytics.Analytics, organization.Getter, queryparser.QueryParser, querier.Querier, licensing.Licensing) dashboard.Module,
dashboardModuleCallback func(sqlstore.SQLStore, factory.ProviderSettings, analytics.Analytics, organization.Getter, queryparser.QueryParser, querier.Querier, licensing.Licensing, tag.Module) dashboard.Module,
gatewayProviderFactory func(licensing.Licensing) factory.ProviderFactory[gateway.Gateway, gateway.Config],
auditorProviderFactories func(licensing.Licensing) factory.NamedMap[factory.ProviderFactory[auditor.Auditor, auditor.Config]],
meterReporterProviderFactories func(context.Context, flagger.Flagger, licensing.Licensing, telemetrystore.TelemetryStore, sqlstore.SQLStore, organization.Getter, zeus.Zeus) (factory.NamedMap[factory.ProviderFactory[meterreporter.Reporter, meterreporter.Config]], string),
querierHandlerCallback func(factory.ProviderSettings, querier.Querier, analytics.Analytics) querier.Handler,
cloudIntegrationCallback func(sqlstore.SQLStore, global.Global, zeus.Zeus, gateway.Gateway, licensing.Licensing, serviceaccount.Module, cloudintegration.Config) (cloudintegration.Module, error),
rulerProviderFactories func(cache.Cache, alertmanager.Alertmanager, sqlstore.SQLStore, telemetrystore.TelemetryStore, telemetrytypes.MetadataStore, prometheus.Prometheus, organization.Getter, rulestatehistory.Module, querier.Querier, queryparser.QueryParser) factory.NamedMap[factory.ProviderFactory[ruler.Ruler, ruler.Config]],
@@ -328,8 +329,13 @@ func New(
// Initialize query parser (needed for dashboard module)
queryParser := queryparser.New(providerSettings)
// Initialize tag module — shared across modules that link entities to tags
// (currently dashboard; future: alerts, RBAC). Built once here and injected
// where needed.
tagModule := impltag.NewModule(impltag.NewStore(sqlstore))
// Initialize dashboard module (needed for authz registry)
dashboard := dashboardModuleCallback(sqlstore, providerSettings, analytics, orgGetter, queryParser, querier, licensing)
dashboard := dashboardModuleCallback(sqlstore, providerSettings, analytics, orgGetter, queryParser, querier, licensing, tagModule)
// Initialize user getter
userGetter := impluser.NewGetter(userStore, userRoleStore, flagger)
@@ -389,13 +395,6 @@ func New(
return nil, err
}
// Initialize meter reporter from the variant-specific provider factories
meterReporterFactories, meterReporterProvider := meterReporterProviderFactories(ctx, flagger, licensing, telemetrystore, sqlstore, orgGetter, zeus)
meterReporter, err := factory.NewProviderFromNamedMap(ctx, providerSettings, config.MeterReporter, meterReporterFactories, meterReporterProvider)
if err != nil {
return nil, err
}
// Initialize authns
store := sqlauthnstore.NewStore(sqlstore)
authNs, err := authNsCallback(ctx, providerSettings, store, licensing)
@@ -451,7 +450,7 @@ func New(
}
// Initialize all modules
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, analytics, querier, telemetrystore, telemetryMetadataStore, authNs, authz, cache, queryParser, config, dashboard, userGetter, userRoleStore, serviceAccount, cloudIntegrationModule, flagger)
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, analytics, querier, telemetrystore, telemetryMetadataStore, authNs, authz, cache, queryParser, config, dashboard, userGetter, userRoleStore, serviceAccount, cloudIntegrationModule, flagger, tagModule)
// Initialize ruler from the variant-specific provider factories
rulerInstance, err := factory.NewProviderFromNamedMap(ctx, providerSettings, config.Ruler, rulerProviderFactories(cache, alertmanager, sqlstore, telemetrystore, telemetryMetadataStore, prometheus, orgGetter, modules.RuleStateHistory, querier, queryParser), "signoz")
@@ -498,6 +497,12 @@ func New(
return nil, err
}
// Initialize dashboard purger — periodic hard-delete of soft-deleted rows.
dashboardPurger, err := dashboardpurger.NewFactory(impldashboard.NewStore(sqlstore)).New(ctx, providerSettings, config.DashboardPurger)
if err != nil {
return nil, err
}
registry, err := factory.NewRegistry(
ctx,
instrumentation.Logger(),
@@ -511,8 +516,8 @@ func New(
factory.NewNamedService(factory.MustNewName("authz"), authz),
factory.NewNamedService(factory.MustNewName("user"), userService, factory.MustNewName("authz")),
factory.NewNamedService(factory.MustNewName("auditor"), auditor),
factory.NewNamedService(factory.MustNewName("meterreporter"), meterReporter, factory.MustNewName("licensing")),
factory.NewNamedService(factory.MustNewName("ruler"), rulerInstance),
factory.NewNamedService(factory.MustNewName("dashboardpurger"), dashboardPurger),
)
if err != nil {
return nil, err
@@ -561,6 +566,5 @@ func New(
Flagger: flagger,
Gateway: gateway,
Auditor: auditor,
MeterReporter: meterReporter,
}, nil
}

View File

@@ -0,0 +1,103 @@
package sqlmigration
import (
"context"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/sqlschema"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/uptrace/bun"
"github.com/uptrace/bun/migrate"
)
type addTags struct {
sqlstore sqlstore.SQLStore
sqlschema sqlschema.SQLSchema
}
func NewAddTagsFactory(sqlstore sqlstore.SQLStore, sqlschema sqlschema.SQLSchema) factory.ProviderFactory[SQLMigration, Config] {
return factory.NewProviderFactory(factory.MustNewName("add_tags"), func(ctx context.Context, ps factory.ProviderSettings, c Config) (SQLMigration, error) {
return &addTags{
sqlstore: sqlstore,
sqlschema: sqlschema,
}, nil
})
}
func (migration *addTags) Register(migrations *migrate.Migrations) error {
return migrations.Register(migration.Up, migration.Down)
}
func (migration *addTags) Up(ctx context.Context, db *bun.DB) error {
tx, err := db.BeginTx(ctx, nil)
if err != nil {
return err
}
defer func() {
_ = tx.Rollback()
}()
sqls := [][]byte{}
tagTableSQLs := migration.sqlschema.Operator().CreateTable(&sqlschema.Table{
Name: "tag",
Columns: []*sqlschema.Column{
{Name: "id", DataType: sqlschema.DataTypeText, Nullable: false},
{Name: "name", DataType: sqlschema.DataTypeText, Nullable: false},
{Name: "internal_name", DataType: sqlschema.DataTypeText, Nullable: false},
{Name: "org_id", DataType: sqlschema.DataTypeText, Nullable: false},
{Name: "created_at", DataType: sqlschema.DataTypeTimestamp, Nullable: false},
{Name: "created_by", DataType: sqlschema.DataTypeText, Nullable: true},
{Name: "updated_at", DataType: sqlschema.DataTypeTimestamp, Nullable: false},
{Name: "updated_by", DataType: sqlschema.DataTypeText, Nullable: true},
},
PrimaryKeyConstraint: &sqlschema.PrimaryKeyConstraint{ColumnNames: []sqlschema.ColumnName{"id"}},
ForeignKeyConstraints: []*sqlschema.ForeignKeyConstraint{
{
ReferencingColumnName: sqlschema.ColumnName("org_id"),
ReferencedTableName: sqlschema.TableName("organizations"),
ReferencedColumnName: sqlschema.ColumnName("id"),
},
},
})
sqls = append(sqls, tagTableSQLs...)
tagUniqueIndexSQL := migration.sqlschema.Operator().CreateIndex(
&sqlschema.UniqueIndex{
TableName: "tag",
ColumnNames: []sqlschema.ColumnName{"org_id", "internal_name"},
})
sqls = append(sqls, tagUniqueIndexSQL...)
tagRelationsTableSQLs := migration.sqlschema.Operator().CreateTable(&sqlschema.Table{
Name: "tag_relations",
Columns: []*sqlschema.Column{
{Name: "entity_type", DataType: sqlschema.DataTypeText, Nullable: false},
{Name: "entity_id", DataType: sqlschema.DataTypeText, Nullable: false},
{Name: "tag_id", DataType: sqlschema.DataTypeText, Nullable: false},
{Name: "org_id", DataType: sqlschema.DataTypeText, Nullable: false},
},
PrimaryKeyConstraint: &sqlschema.PrimaryKeyConstraint{ColumnNames: []sqlschema.ColumnName{"entity_id", "tag_id"}},
ForeignKeyConstraints: []*sqlschema.ForeignKeyConstraint{
{
ReferencingColumnName: sqlschema.ColumnName("org_id"),
ReferencedTableName: sqlschema.TableName("organizations"),
ReferencedColumnName: sqlschema.ColumnName("id"),
},
},
})
sqls = append(sqls, tagRelationsTableSQLs...)
for _, sql := range sqls {
if _, err := tx.ExecContext(ctx, string(sql)); err != nil {
return err
}
}
return tx.Commit()
}
func (migration *addTags) Down(_ context.Context, _ *bun.DB) error {
return nil
}

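The two tables created by the migration above map naturally onto bun models. The structs below are hypothetical sketches inferred from the column lists (the real types live in `pkg/types/tagtypes` and are not reproduced in this diff); note the `(org_id, internal_name)` unique index on `tag` and the composite `(entity_id, tag_id)` primary key that makes each entity-to-tag link unique:

```go
package main

import (
	"fmt"
	"time"
)

// Tag is a hypothetical model matching the columns of the "tag" table
// created by the add_tags migration. internal_name is unique per org.
type Tag struct {
	ID           string    `bun:"id,pk"`
	Name         string    `bun:"name,notnull"`
	InternalName string    `bun:"internal_name,notnull"`
	OrgID        string    `bun:"org_id,notnull"`
	CreatedAt    time.Time `bun:"created_at,notnull"`
	CreatedBy    string    `bun:"created_by"`
	UpdatedAt    time.Time `bun:"updated_at,notnull"`
	UpdatedBy    string    `bun:"updated_by"`
}

// TagRelation links an entity (e.g. a dashboard) to a tag. The composite
// primary key (entity_id, tag_id) means a given entity can carry a given
// tag at most once.
type TagRelation struct {
	EntityType string `bun:"entity_type,notnull"`
	EntityID   string `bun:"entity_id,pk"`
	TagID      string `bun:"tag_id,pk"`
	OrgID      string `bun:"org_id,notnull"`
}

func main() {
	rel := TagRelation{EntityType: "dashboard", EntityID: "d1", TagID: "t1", OrgID: "o1"}
	fmt.Printf("%s %s -> %s\n", rel.EntityType, rel.EntityID, rel.TagID)
}
```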
View File

@@ -0,0 +1,59 @@
package sqlmigration
import (
"context"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/sqlschema"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/uptrace/bun"
"github.com/uptrace/bun/migrate"
)
type addDashboardSoftDelete struct {
sqlstore sqlstore.SQLStore
sqlschema sqlschema.SQLSchema
}
func NewAddDashboardSoftDeleteFactory(sqlstore sqlstore.SQLStore, sqlschema sqlschema.SQLSchema) factory.ProviderFactory[SQLMigration, Config] {
return factory.NewProviderFactory(factory.MustNewName("add_dashboard_soft_delete"), func(ctx context.Context, ps factory.ProviderSettings, c Config) (SQLMigration, error) {
return &addDashboardSoftDelete{
sqlstore: sqlstore,
sqlschema: sqlschema,
}, nil
})
}
func (migration *addDashboardSoftDelete) Register(migrations *migrate.Migrations) error {
return migrations.Register(migration.Up, migration.Down)
}
func (migration *addDashboardSoftDelete) Up(ctx context.Context, db *bun.DB) error {
tx, err := db.BeginTx(ctx, nil)
if err != nil {
return err
}
defer func() { _ = tx.Rollback() }()
dashboardTable := &sqlschema.Table{Name: "dashboard"}
sqls := [][]byte{}
sqls = append(sqls, migration.sqlschema.Operator().AddColumn(
dashboardTable, nil,
&sqlschema.Column{Name: "deleted_at", DataType: sqlschema.DataTypeTimestamp, Nullable: true}, nil,
)...)
sqls = append(sqls, migration.sqlschema.Operator().AddColumn(
dashboardTable, nil,
&sqlschema.Column{Name: "deleted_by", DataType: sqlschema.DataTypeText, Nullable: true}, nil,
)...)
for _, sql := range sqls {
if _, err := tx.ExecContext(ctx, string(sql)); err != nil {
return err
}
}
return tx.Commit()
}
func (migration *addDashboardSoftDelete) Down(_ context.Context, _ *bun.DB) error {
return nil
}

View File

@@ -11,3 +11,8 @@ type UserAuditable struct {
CreatedBy string `bun:"created_by,type:text" json:"createdBy"`
UpdatedBy string `bun:"updated_by,type:text" json:"updatedBy"`
}
type DeleteAuditable struct {
DeletedBy string `bun:"deleted_by,type:text,nullzero" json:"deletedBy,omitempty"`
DeletedAt time.Time `bun:"deleted_at,nullzero" json:"deletedAt,omitzero"`
}

View File

@@ -30,7 +30,7 @@ var (
type GettableAuthDomain struct {
StorableAuthDomain
AuthDomainConfig
Config AuthDomainConfig `json:"config"`
AuthNProviderInfo *AuthNProviderInfo `json:"authNProviderInfo"`
}
@@ -43,7 +43,7 @@ type PostableAuthDomain struct {
Name string `json:"name"`
}
type UpdateableAuthDomain struct {
type UpdatableAuthDomain struct {
Config AuthDomainConfig `json:"config"`
}
@@ -121,7 +121,7 @@ func NewAuthDomainFromStorableAuthDomain(storableAuthDomain *StorableAuthDomain)
func NewGettableAuthDomainFromAuthDomain(authDomain *AuthDomain, authNProviderInfo *AuthNProviderInfo) *GettableAuthDomain {
return &GettableAuthDomain{
StorableAuthDomain: *authDomain.StorableAuthDomain(),
AuthDomainConfig: *authDomain.AuthDomainConfig(),
Config: *authDomain.AuthDomainConfig(),
AuthNProviderInfo: authNProviderInfo,
}
}

View File

@@ -11,6 +11,7 @@ import (
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/tagtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/uptrace/bun"
)
@@ -19,6 +20,8 @@ var (
TypeableMetaResourceDashboard = authtypes.MustNewTypeableMetaResource(authtypes.MustNewName("dashboard"))
TypeableMetaResourcePublicDashboard = authtypes.MustNewTypeableMetaResource(authtypes.MustNewName("public-dashboard"))
TypeableMetaResourcesDashboards = authtypes.MustNewTypeableMetaResources(authtypes.MustNewName("dashboards"))
EntityTypeDashboard = tagtypes.MustNewEntityType("dashboard")
)
var (
@@ -34,6 +37,7 @@ type StorableDashboard struct {
types.Identifiable
types.TimeAuditable
types.UserAuditable
types.DeleteAuditable
Data StorableDashboardData `bun:"data,type:text,notnull"`
Locked bool `bun:"locked,notnull,default:false"`
OrgID valuer.UUID `bun:"org_id,notnull"`

View File

@@ -1,262 +0,0 @@
package dashboardtypes
import (
"bytes"
"encoding/json"
"fmt"
"slices"
"strings"
"github.com/SigNoz/signoz/pkg/errors"
qb "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/go-playground/validator/v10"
v1 "github.com/perses/perses/pkg/model/api/v1"
"github.com/perses/perses/pkg/model/api/v1/common"
"github.com/perses/perses/pkg/model/api/v1/dashboard"
)
// StorableDashboardDataV2 wraps v1.DashboardSpec (Perses) with additional SigNoz-specific fields.
//
// We embed DashboardSpec (not v1.Dashboard) to avoid carrying Perses's Metadata
// (Name, Project, CreatedAt, UpdatedAt, Tags, Version) and Kind field. SigNoz
// manages identity (ID), timestamps (TimeAuditable), and multi-tenancy (OrgID)
// separately on StorableDashboardV2/DashboardV2.
//
// The following v1 request fields map to locations inside v1.DashboardSpec:
// - title → Display.Name (common.Display)
// - description → Display.Description (common.Display)
//
// Fields that have no Perses equivalent will be added in this wrapper (like image, uploadGrafana, etc.)
type StorableDashboardDataV2 = v1.DashboardSpec
// UnmarshalAndValidateDashboardV2JSON unmarshals the JSON into a StorableDashboardDataV2
// (= PostableDashboardV2 = UpdatableDashboardV2) and validates plugin kinds and specs.
func UnmarshalAndValidateDashboardV2JSON(data []byte) (*StorableDashboardDataV2, error) {
var d StorableDashboardDataV2
// Note: DashboardSpec has a custom UnmarshalJSON which prevents
// DisallowUnknownFields from working at the top level. Unknown
// fields in plugin specs are still rejected by validateAndNormalizePluginSpec.
if err := json.Unmarshal(data, &d); err != nil {
return nil, err
}
if err := validateDashboardV2(d); err != nil {
return nil, err
}
return &d, nil
}
// Plugin kind → spec type factory. Each value is a pointer to the zero value of the
// expected spec struct. validatePluginSpec marshals plugin.Spec back to JSON and
// unmarshals into the typed struct to catch field-level errors.
var (
panelPluginSpecs = map[PanelPluginKind]func() any{
PanelKindTimeSeries: func() any { return new(TimeSeriesPanelSpec) },
PanelKindBarChart: func() any { return new(BarChartPanelSpec) },
PanelKindNumber: func() any { return new(NumberPanelSpec) },
PanelKindPieChart: func() any { return new(PieChartPanelSpec) },
PanelKindTable: func() any { return new(TablePanelSpec) },
PanelKindHistogram: func() any { return new(HistogramPanelSpec) },
PanelKindList: func() any { return new(ListPanelSpec) },
}
queryPluginSpecs = map[QueryPluginKind]func() any{
QueryKindBuilder: func() any { return new(BuilderQuerySpec) },
QueryKindComposite: func() any { return new(CompositeQuerySpec) },
QueryKindFormula: func() any { return new(FormulaSpec) },
QueryKindPromQL: func() any { return new(PromQLQuerySpec) },
QueryKindClickHouseSQL: func() any { return new(ClickHouseSQLQuerySpec) },
QueryKindTraceOperator: func() any { return new(TraceOperatorSpec) },
}
variablePluginSpecs = map[VariablePluginKind]func() any{
VariableKindDynamic: func() any { return new(DynamicVariableSpec) },
VariableKindQuery: func() any { return new(QueryVariableSpec) },
VariableKindCustom: func() any { return new(CustomVariableSpec) },
VariableKindTextbox: func() any { return new(TextboxVariableSpec) },
}
datasourcePluginSpecs = map[DatasourcePluginKind]func() any{
DatasourceKindSigNoz: func() any { return new(struct{}) },
}
// allowedQueryKinds maps each panel plugin kind to the query plugin
// kinds it supports. Composite sub-query types are mapped to these
// same kind strings via compositeSubQueryTypeToPluginKind.
allowedQueryKinds = map[PanelPluginKind][]QueryPluginKind{
PanelKindTimeSeries: {QueryKindBuilder, QueryKindComposite, QueryKindFormula, QueryKindTraceOperator, QueryKindPromQL, QueryKindClickHouseSQL},
PanelKindBarChart: {QueryKindBuilder, QueryKindComposite, QueryKindFormula, QueryKindTraceOperator, QueryKindPromQL, QueryKindClickHouseSQL},
PanelKindNumber: {QueryKindBuilder, QueryKindComposite, QueryKindFormula, QueryKindTraceOperator, QueryKindPromQL, QueryKindClickHouseSQL},
PanelKindHistogram: {QueryKindBuilder, QueryKindComposite, QueryKindFormula, QueryKindTraceOperator, QueryKindPromQL, QueryKindClickHouseSQL},
PanelKindPieChart: {QueryKindBuilder, QueryKindComposite, QueryKindFormula, QueryKindTraceOperator, QueryKindClickHouseSQL},
PanelKindTable: {QueryKindBuilder, QueryKindComposite, QueryKindFormula, QueryKindTraceOperator, QueryKindClickHouseSQL},
PanelKindList: {QueryKindBuilder},
}
// compositeSubQueryTypeToPluginKind maps CompositeQuery sub-query type
// strings to the equivalent top-level query plugin kind for validation.
compositeSubQueryTypeToPluginKind = map[qb.QueryType]QueryPluginKind{
qb.QueryTypeBuilder: QueryKindBuilder,
qb.QueryTypeFormula: QueryKindFormula,
qb.QueryTypeTraceOperator: QueryKindTraceOperator,
qb.QueryTypePromQL: QueryKindPromQL,
qb.QueryTypeClickHouseSQL: QueryKindClickHouseSQL,
}
)
func validateDashboardV2(d StorableDashboardDataV2) error {
for name, ds := range d.Datasources {
if err := validateDatasourcePlugin(&ds.Plugin, fmt.Sprintf("spec.datasources.%s.plugin", name)); err != nil {
return err
}
}
for i, v := range d.Variables {
if err := validateVariablePlugin(v, fmt.Sprintf("spec.variables[%d]", i)); err != nil {
return err
}
}
for key, panel := range d.Panels {
if panel == nil {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "spec.panels.%s: panel must not be null", key)
}
path := fmt.Sprintf("spec.panels.%s", key)
if err := validatePanelPlugin(&panel.Spec.Plugin, path+".spec.plugin"); err != nil {
return err
}
panelKind := PanelPluginKind(panel.Spec.Plugin.Kind)
allowed := allowedQueryKinds[panelKind]
for qi := range panel.Spec.Queries {
queryPath := fmt.Sprintf("%s.spec.queries[%d].spec.plugin", path, qi)
if err := validateQueryPlugin(&panel.Spec.Queries[qi].Spec.Plugin, queryPath); err != nil {
return err
}
if err := validateQueryAllowedForPanel(panel.Spec.Queries[qi].Spec.Plugin, allowed, panelKind, queryPath); err != nil {
return err
}
}
}
return nil
}
func validateDatasourcePlugin(plugin *common.Plugin, path string) error {
kind := DatasourcePluginKind(plugin.Kind)
factory, ok := datasourcePluginSpecs[kind]
if !ok {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput,
"%s: unknown datasource plugin kind %q; allowed values: %s", path, kind, formatEnum(kind.Enum()))
}
return validateAndNormalizePluginSpec(plugin, factory, path)
}
func validateVariablePlugin(v dashboard.Variable, path string) error {
switch spec := v.Spec.(type) {
case *dashboard.ListVariableSpec:
pluginPath := path + ".spec.plugin"
kind := VariablePluginKind(spec.Plugin.Kind)
factory, ok := variablePluginSpecs[kind]
if !ok {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput,
"%s: unknown variable plugin kind %q; allowed values: %s", pluginPath, kind, formatEnum(kind.Enum()))
}
return validateAndNormalizePluginSpec(&spec.Plugin, factory, pluginPath)
case *dashboard.TextVariableSpec:
// TextVariables have no plugin, nothing to validate.
return nil
default:
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "%s: unsupported variable kind %q", path, v.Kind)
}
}
func validatePanelPlugin(plugin *common.Plugin, path string) error {
kind := PanelPluginKind(plugin.Kind)
factory, ok := panelPluginSpecs[kind]
if !ok {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput,
"%s: unknown panel plugin kind %q; allowed values: %s", path, kind, formatEnum(kind.Enum()))
}
return validateAndNormalizePluginSpec(plugin, factory, path)
}
func validateQueryPlugin(plugin *common.Plugin, path string) error {
kind := QueryPluginKind(plugin.Kind)
factory, ok := queryPluginSpecs[kind]
if !ok {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput,
"%s: unknown query plugin kind %q; allowed values: %s", path, kind, formatEnum(kind.Enum()))
}
return validateAndNormalizePluginSpec(plugin, factory, path)
}
func formatEnum(values []any) string {
parts := make([]string, len(values))
for i, v := range values {
parts[i] = fmt.Sprintf("`%v`", v)
}
return strings.Join(parts, ", ")
}
// validateAndNormalizePluginSpec validates the plugin spec and writes the typed
// struct (with defaults) back into plugin.Spec so that DB storage and API
// responses contain normalized values.
func validateAndNormalizePluginSpec(plugin *common.Plugin, factory func() any, path string) error {
if plugin.Kind == "" {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "%s: plugin kind is required", path)
}
if plugin.Spec == nil {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "%s: plugin spec is required", path)
}
// Re-marshal the spec and unmarshal into the typed struct.
specJSON, err := json.Marshal(plugin.Spec)
if err != nil {
return errors.WrapInvalidInputf(err, ErrCodeDashboardInvalidInput, "%s.spec", path)
}
target := factory()
decoder := json.NewDecoder(bytes.NewReader(specJSON))
decoder.DisallowUnknownFields()
if err := decoder.Decode(target); err != nil {
return errors.WrapInvalidInputf(err, ErrCodeDashboardInvalidInput, "%s.spec", path)
}
if err := validator.New().Struct(target); err != nil {
return errors.WrapInvalidInputf(err, ErrCodeDashboardInvalidInput, "%s.spec", path)
}
// Write the typed struct back so defaults are included.
plugin.Spec = target
return nil
}
// validateQueryAllowedForPanel checks that the query plugin kind is permitted
// for the given panel. For composite queries it recurses into sub-queries.
func validateQueryAllowedForPanel(plugin common.Plugin, allowed []QueryPluginKind, panelKind PanelPluginKind, path string) error {
queryKind := QueryPluginKind(plugin.Kind)
if !slices.Contains(allowed, queryKind) {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput,
"%s: query kind %q is not supported by panel kind %q", path, queryKind, panelKind)
}
// For composite queries, validate each sub-query type.
if queryKind == QueryKindComposite && plugin.Spec != nil {
specJSON, err := json.Marshal(plugin.Spec)
if err != nil {
return errors.WrapInvalidInputf(err, ErrCodeDashboardInvalidInput, "%s.spec", path)
}
var composite struct {
Queries []struct {
Type qb.QueryType `json:"type"`
} `json:"queries"`
}
if err := json.Unmarshal(specJSON, &composite); err != nil {
return errors.WrapInvalidInputf(err, ErrCodeDashboardInvalidInput, "%s.spec", path)
}
for si, sub := range composite.Queries {
pluginKind, ok := compositeSubQueryTypeToPluginKind[sub.Type]
if !ok {
continue
}
if !slices.Contains(allowed, pluginKind) {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput,
"%s.spec.queries[%d]: sub-query type %q is not supported by panel kind %q",
path, si, sub.Type, panelKind)
}
}
}
return nil
}


@@ -0,0 +1,326 @@
package dashboardtypes
import (
"bytes"
"encoding/json"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/tagtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/swaggest/jsonschema-go"
jsonpatch "gopkg.in/evanphx/json-patch.v4"
)
const (
SchemaVersion = "v6"
MaxTagsPerDashboard = 5
)
type DashboardV2 struct {
types.Identifiable
types.TimeAuditable
types.UserAuditable
OrgID valuer.UUID `json:"orgId"`
Locked bool `json:"locked"`
Info DashboardInfo `json:"info"`
PublicConfig *PublicDashboard `json:"publicConfig,omitempty"`
}
// DashboardInfo is the serializable view of a dashboard's contents — what the UI renders as "the dashboard JSON".
type DashboardInfo struct {
StoredDashboardInfo
Tags []*tagtypes.Tag `json:"tags,omitempty"`
}
// StoredDashboardInfo is exactly what serializes into the dashboard.data column.
type StoredDashboardInfo struct {
Metadata DashboardMetadata `json:"metadata"`
Data DashboardData `json:"data"`
}
type DashboardMetadata struct {
SchemaVersion string `json:"schemaVersion"`
Image string `json:"image,omitempty"`
UploadedGrafana bool `json:"uploadedGrafana"`
}
type PostableDashboardV2 struct {
StoredDashboardInfo
Tags []tagtypes.PostableTag `json:"tags,omitempty"`
}
type UpdateableDashboardV2 = PostableDashboardV2
// PatchableDashboardV2 is an RFC 6902 JSON Patch document applied against a
// PostableDashboardV2-shaped view of an existing dashboard. Patch ops can
// target any field — including individual entries inside `data.panels`,
// `data.panels.<id>.spec.queries`, or `tags` — without re-sending the rest of
// the dashboard.
type PatchableDashboardV2 struct {
patch jsonpatch.Patch
}
// JSONPatchDocument is the OpenAPI-facing schema for an RFC 6902 patch body.
// PatchableDashboardV2 has only an internal `jsonpatch.Patch` field, so the
// reflector would emit an empty schema; the handler def points at this type
// instead so consumers see the array-of-ops shape.
type JSONPatchDocument []JSONPatchOperation
// JSONPatchOperation is one RFC 6902 op. Not every field is valid on every
// op kind (e.g. `value` is required for add/replace/test, ignored for remove;
// `from` is required for move/copy) — the JSON Patch RFC governs that.
type JSONPatchOperation struct {
Op string `json:"op" required:"true"`
Path string `json:"path" required:"true" description:"JSON Pointer (RFC 6901) into the dashboard's postable shape — e.g. /data/display/name, /data/panels/<id>, /data/panels/<id>/spec/queries/0, /tags/-."`
Value any `json:"value,omitempty" description:"Value to add/replace/test against. The expected type depends on the path — a string for /data/display/name, a Panel for /data/panels/<id>, a PostableTag for /tags/-, etc. Required for add/replace/test; ignored for remove/move/copy."`
From string `json:"from,omitempty" description:"Source JSON Pointer for move/copy ops; ignored for other ops."`
}
// PrepareJSONSchema constrains the `op` field to the six RFC 6902 verbs.
func (JSONPatchOperation) PrepareJSONSchema(s *jsonschema.Schema) error {
op, ok := s.Properties["op"]
if !ok || op.TypeObject == nil {
return errors.NewInternalf(errors.CodeInternal, "JSONPatchOperation schema missing `op` property")
}
op.TypeObject.WithEnum("add", "remove", "replace", "move", "copy", "test")
s.Properties["op"] = op
return nil
}
func (p *PatchableDashboardV2) UnmarshalJSON(data []byte) error {
patch, err := jsonpatch.DecodePatch(data)
if err != nil {
return errors.WrapInvalidInputf(err, ErrCodeDashboardInvalidInput, "%s", err.Error())
}
p.patch = patch
return nil
}
// patchableDashboardV2View is the JSON shape a patch is applied against.
// It mirrors PostableDashboardV2 except `tags` is always emitted (even when
// empty) — RFC 6902 `add /tags/-` requires the array to exist in the target
// document, and PostableDashboardV2's own `omitempty` on tags would drop it.
type patchableDashboardV2View struct {
StoredDashboardInfo
Tags []tagtypes.PostableTag `json:"tags"`
}
// Apply runs the patch against the existing dashboard. The dashboard is
// projected into the postable JSON shape, the patch is applied, and the
// result is decoded back into an UpdateableDashboardV2 — which re-runs
// the full v2 validation chain.
func (p PatchableDashboardV2) Apply(existing *DashboardV2) (*UpdateableDashboardV2, error) {
postableTags := make([]tagtypes.PostableTag, len(existing.Info.Tags))
for i, t := range existing.Info.Tags {
postableTags[i] = tagtypes.PostableTag{Name: t.Name}
}
base := patchableDashboardV2View{
StoredDashboardInfo: existing.Info.StoredDashboardInfo,
Tags: postableTags,
}
raw, err := json.Marshal(base)
if err != nil {
return nil, errors.WrapInternalf(err, errors.CodeInternal, "marshal existing dashboard for patch")
}
patched, err := p.patch.Apply(raw)
if err != nil {
return nil, errors.WrapInvalidInputf(err, ErrCodeDashboardInvalidInput, "%s", err.Error())
}
out := &UpdateableDashboardV2{}
if err := json.Unmarshal(patched, out); err != nil {
return nil, err
}
return out, nil
}
func (p *PostableDashboardV2) UnmarshalJSON(data []byte) error {
dec := json.NewDecoder(bytes.NewReader(data))
dec.DisallowUnknownFields()
type alias PostableDashboardV2
var tmp alias
if err := dec.Decode(&tmp); err != nil {
return errors.WrapInvalidInputf(err, ErrCodeDashboardInvalidInput, "%s", err.Error())
}
*p = PostableDashboardV2(tmp)
return p.Validate()
}
func (p *PostableDashboardV2) Validate() error {
if p.Metadata.SchemaVersion != SchemaVersion {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "metadata.schemaVersion must be %q, got %q", SchemaVersion, p.Metadata.SchemaVersion)
}
if p.Data.Display == nil || p.Data.Display.Name == "" {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "data.display.name is required")
}
if len(p.Tags) > MaxTagsPerDashboard {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "a dashboard can have at most %d tags", MaxTagsPerDashboard)
}
return p.Data.Validate()
}
type GettableDashboardV2 struct {
types.Identifiable
types.TimeAuditable
types.UserAuditable
OrgID valuer.UUID `json:"orgId"`
Locked bool `json:"locked"`
Info GettableDashboardInfo `json:"info"`
PublicConfig *GettablePublicDashboard `json:"publicConfig,omitempty"`
}
type GettableDashboardInfo struct {
StoredDashboardInfo
Tags []*tagtypes.GettableTag `json:"tags,omitempty"`
}
func NewGettableDashboardV2FromDashboardV2(dashboard *DashboardV2) *GettableDashboardV2 {
gettable := &GettableDashboardV2{
Identifiable: dashboard.Identifiable,
TimeAuditable: dashboard.TimeAuditable,
UserAuditable: dashboard.UserAuditable,
OrgID: dashboard.OrgID,
Locked: dashboard.Locked,
Info: GettableDashboardInfo{
StoredDashboardInfo: dashboard.Info.StoredDashboardInfo,
Tags: tagtypes.NewGettableTagsFromTags(dashboard.Info.Tags),
},
}
if dashboard.PublicConfig != nil {
gettable.PublicConfig = NewGettablePublicDashboard(dashboard.PublicConfig)
}
return gettable
}
func NewDashboardV2(orgID valuer.UUID, createdBy string, postable PostableDashboardV2, resolvedTags []*tagtypes.Tag) *DashboardV2 {
now := time.Now()
return &DashboardV2{
Identifiable: types.Identifiable{ID: valuer.GenerateUUID()},
TimeAuditable: types.TimeAuditable{CreatedAt: now, UpdatedAt: now},
UserAuditable: types.UserAuditable{CreatedBy: createdBy, UpdatedBy: createdBy},
OrgID: orgID,
Locked: false,
Info: DashboardInfo{
StoredDashboardInfo: StoredDashboardInfo{
Metadata: postable.Metadata,
Data: postable.Data,
},
Tags: resolvedTags,
},
}
}
// NewDashboardV2FromStorable rejects rows that don't carry a v2-shape blob — those are pre-migration v1 dashboards that the v2 API can't render.
func NewDashboardV2FromStorable(storable *StorableDashboard, public *StorablePublicDashboard, tags []*tagtypes.Tag) (*DashboardV2, error) {
metadata, _ := storable.Data["metadata"].(map[string]any)
if metadata == nil || metadata["schemaVersion"] != SchemaVersion {
return nil, errors.Newf(errors.TypeUnsupported, ErrCodeDashboardInvalidData, "dashboard %s is not in %s schema", storable.ID, SchemaVersion)
}
raw, err := json.Marshal(storable.Data)
if err != nil {
return nil, errors.WrapInternalf(err, errors.CodeInternal, "marshal stored v2 dashboard data")
}
var stored StoredDashboardInfo
if err := json.Unmarshal(raw, &stored); err != nil {
return nil, errors.WrapInternalf(err, errors.CodeInternal, "unmarshal stored v2 dashboard data")
}
var publicConfig *PublicDashboard
if public != nil {
publicConfig = NewPublicDashboardFromStorablePublicDashboard(public)
}
return &DashboardV2{
Identifiable: storable.Identifiable,
TimeAuditable: storable.TimeAuditable,
UserAuditable: storable.UserAuditable,
OrgID: storable.OrgID,
Locked: storable.Locked,
Info: DashboardInfo{
StoredDashboardInfo: stored,
Tags: tags,
},
PublicConfig: publicConfig,
}, nil
}
func (d *DashboardV2) CanLockUnlock(lock bool, isAdmin bool, updatedBy string) error {
if d.CreatedBy != updatedBy && !isAdmin {
return errors.Newf(errors.TypeForbidden, errors.CodeForbidden, "you are not authorized to lock/unlock this dashboard")
}
if d.Locked == lock {
if lock {
return errors.Newf(errors.TypeAlreadyExists, errors.CodeAlreadyExists, "dashboard is already locked")
}
return errors.Newf(errors.TypeAlreadyExists, errors.CodeAlreadyExists, "dashboard is already unlocked")
}
return nil
}
func (d *DashboardV2) LockUnlock(lock bool, isAdmin bool, updatedBy string) error {
if err := d.CanLockUnlock(lock, isAdmin, updatedBy); err != nil {
return err
}
d.Locked = lock
d.UpdatedBy = updatedBy
d.UpdatedAt = time.Now()
return nil
}
func (d *DashboardV2) CanUpdate() error {
if d.Locked {
return errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "cannot update a locked dashboard, please unlock the dashboard to update")
}
return nil
}
func (d *DashboardV2) CanDelete() error {
return d.CanUpdate()
}
func (d *DashboardV2) Update(updateable UpdateableDashboardV2, updatedBy string, resolvedTags []*tagtypes.Tag) error {
if err := d.CanUpdate(); err != nil {
return err
}
d.Info.Metadata = updateable.Metadata
d.Info.Data = updateable.Data
d.Info.Tags = resolvedTags
d.UpdatedBy = updatedBy
d.UpdatedAt = time.Now()
return nil
}
// ToStorableDashboard packages a DashboardV2 into the bun row that goes into
// the dashboard table. Tags are intentionally omitted — they live in
// tag_relations and are inserted separately by the caller.
func (d *DashboardV2) ToStorableDashboard() (*StorableDashboard, error) {
data, err := d.Info.toStorableDashboardData()
if err != nil {
return nil, err
}
return &StorableDashboard{
Identifiable: types.Identifiable{ID: d.ID},
TimeAuditable: d.TimeAuditable,
UserAuditable: d.UserAuditable,
OrgID: d.OrgID,
Locked: d.Locked,
Data: data,
}, nil
}
func (s StoredDashboardInfo) toStorableDashboardData() (StorableDashboardData, error) {
raw, err := json.Marshal(s)
if err != nil {
return nil, errors.WrapInternalf(err, errors.CodeInternal, "marshal v2 dashboard data")
}
out := StorableDashboardData{}
if err := json.Unmarshal(raw, &out); err != nil {
return nil, errors.WrapInternalf(err, errors.CodeInternal, "unmarshal v2 dashboard data")
}
return out, nil
}


@@ -0,0 +1,107 @@
package dashboardtypes
import (
"bytes"
"encoding/json"
"fmt"
"slices"
"github.com/SigNoz/signoz/pkg/errors"
qb "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
v1 "github.com/perses/perses/pkg/model/api/v1"
"github.com/perses/perses/pkg/model/api/v1/common"
)
// DashboardData is the SigNoz dashboard v2 spec shape. It mirrors
// v1.DashboardSpec (Perses) field-for-field, except every common.Plugin
// occurrence is replaced with a typed SigNoz plugin whose OpenAPI schema is a
// per-site discriminated oneOf.
type DashboardData struct {
Display *common.Display `json:"display,omitempty"`
Datasources map[string]*DatasourceSpec `json:"datasources,omitempty"`
Variables []Variable `json:"variables,omitempty"`
Panels map[string]*Panel `json:"panels"`
Layouts []Layout `json:"layouts"`
Duration common.DurationString `json:"duration"`
RefreshInterval common.DurationString `json:"refreshInterval,omitempty"`
Links []v1.Link `json:"links,omitempty"`
}
// ══════════════════════════════════════════════
// Unmarshal + validate entry point
// ══════════════════════════════════════════════
func (d *DashboardData) UnmarshalJSON(data []byte) error {
dec := json.NewDecoder(bytes.NewReader(data))
dec.DisallowUnknownFields()
type alias DashboardData
var tmp alias
if err := dec.Decode(&tmp); err != nil {
return errors.WrapInvalidInputf(err, ErrCodeDashboardInvalidInput, "invalid dashboard spec")
}
*d = DashboardData(tmp)
return d.Validate()
}
// ══════════════════════════════════════════════
// Cross-field validation
// ══════════════════════════════════════════════
func (d *DashboardData) Validate() error {
for key, panel := range d.Panels {
if panel == nil {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "spec.panels.%s: panel must not be null", key)
}
path := fmt.Sprintf("spec.panels.%s", key)
panelKind := panel.Spec.Plugin.Kind
if len(panel.Spec.Queries) == 0 {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "%s.spec.queries: panel must have at least one query", path)
}
allowed := allowedQueryKinds[panelKind]
for qi, q := range panel.Spec.Queries {
queryPath := fmt.Sprintf("%s.spec.queries[%d].spec.plugin", path, qi)
if err := validateQueryAllowedForPanel(q.Spec.Plugin, allowed, panelKind, queryPath); err != nil {
return err
}
}
}
return nil
}
func validateQueryAllowedForPanel(plugin QueryPlugin, allowed []QueryPluginKind, panelKind PanelPluginKind, path string) error {
if !slices.Contains(allowed, plugin.Kind) {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput,
"%s: query kind %q is not supported by panel kind %q", path, plugin.Kind, panelKind)
}
if plugin.Kind != QueryKindComposite {
return nil
}
composite, ok := plugin.Spec.(*CompositeQuerySpec)
if !ok || composite == nil {
// Unreachable via UnmarshalJSON; reaching here means a Go caller broke the Kind/Spec pairing.
return errors.NewInternalf(errors.CodeInternal, "%s: composite query plugin has unexpected spec type %T", path, plugin.Spec)
}
for si, sub := range composite.Queries {
subKind, ok := compositeSubQueryTypeToPluginKind[sub.Type]
if !ok {
continue
}
if !slices.Contains(allowed, subKind) {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput,
"%s.spec.queries[%d]: sub-query type %q is not supported by panel kind %q",
path, si, sub.Type, panelKind)
}
}
return nil
}
var (
compositeSubQueryTypeToPluginKind = map[qb.QueryType]QueryPluginKind{
qb.QueryTypeBuilder: QueryKindBuilder,
qb.QueryTypeFormula: QueryKindFormula,
qb.QueryTypeTraceOperator: QueryKindTraceOperator,
qb.QueryTypePromQL: QueryKindPromQL,
qb.QueryTypeClickHouseSQL: QueryKindClickHouseSQL,
}
)


@@ -0,0 +1,521 @@
package dashboardtypes
import (
"encoding/json"
"strings"
"testing"
"github.com/SigNoz/signoz/pkg/types/tagtypes"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// basePostableJSON is the postable shape of a small but realistic v2
// dashboard used as the base document for patch tests. Each panel carries
// one builder query in the same shape production dashboards use
// (aggregations, filter, groupBy populated), and the dashboard has one
// variable. The variable is not patched in any test here; that is
// covered in a separate variable-focused suite.
const basePostableJSON = `{
"metadata": {"schemaVersion": "v6"},
"data": {
"display": {"name": "Service overview"},
"variables": [
{
"kind": "ListVariable",
"spec": {
"name": "service",
"allowAllValue": true,
"allowMultiple": false,
"plugin": {
"kind": "signoz/DynamicVariable",
"spec": {"name": "service.name", "signal": "metrics"}
}
}
}
],
"panels": {
"p1": {
"kind": "Panel",
"spec": {
"plugin": {"kind": "signoz/TimeSeriesPanel", "spec": {}},
"queries": [
{
"kind": "TimeSeriesQuery",
"spec": {"plugin": {"kind": "signoz/BuilderQuery", "spec": {
"name": "A",
"signal": "metrics",
"aggregations": [{
"metricName": "signoz_calls_total",
"temporality": "cumulative",
"timeAggregation": "rate",
"spaceAggregation": "sum"
}],
"filter": {"expression": "service.name IN $service"},
"groupBy": [{"name": "service.name", "fieldDataType": "string", "fieldContext": "tag"}]
}}}
}
]
}
},
"p2": {
"kind": "Panel",
"spec": {
"plugin": {"kind": "signoz/NumberPanel", "spec": {}},
"queries": [
{
"kind": "TimeSeriesQuery",
"spec": {"plugin": {"kind": "signoz/BuilderQuery", "spec": {
"name": "X",
"signal": "metrics",
"aggregations": [{
"metricName": "signoz_latency_count",
"temporality": "cumulative",
"timeAggregation": "rate",
"spaceAggregation": "sum"
}]
}}}
}
]
}
}
},
"layouts": [
{
"kind": "Grid",
"spec": {
"display": {"title": "Row 1"},
"items": [
{"x": 0, "y": 0, "width": 6, "height": 6, "content": {"$ref": "#/spec/panels/p1"}},
{"x": 6, "y": 0, "width": 6, "height": 6, "content": {"$ref": "#/spec/panels/p2"}}
]
}
}
],
"duration": "1h"
},
"tags": [{"name": "team/alpha"}, {"name": "env/prod"}]
}`
func TestPatchableDashboardV2_Apply(t *testing.T) {
// Apply doesn't mutate the input *DashboardV2 — it marshals it to
// JSON, applies the patch, and unmarshals the result into a fresh
// struct. Sharing one base across subtests is safe.
var p PostableDashboardV2
require.NoError(t, json.Unmarshal([]byte(basePostableJSON), &p), "base postable JSON must validate")
base := &DashboardV2{
Info: DashboardInfo{
StoredDashboardInfo: p.StoredDashboardInfo,
Tags: []*tagtypes.Tag{
{Name: "team/alpha", InternalName: "team::alpha"},
{Name: "env/prod", InternalName: "env::prod"},
},
},
}
decode := func(t *testing.T, body string) PatchableDashboardV2 {
t.Helper()
var patch PatchableDashboardV2
require.NoError(t, json.Unmarshal([]byte(body), &patch))
return patch
}
// jsonOf marshals the patched dashboard back to JSON so subtests can
// assert on field values without reaching into the typed plugin specs.
jsonOf := func(t *testing.T, out *UpdateableDashboardV2) string {
t.Helper()
raw, err := json.Marshal(out)
require.NoError(t, err)
return string(raw)
}
// ─────────────────────────────────────────────────────────────────
// Successful patches
// ─────────────────────────────────────────────────────────────────
t.Run("no-op preserves all fields", func(t *testing.T) {
out, err := decode(t, `[]`).Apply(base)
require.NoError(t, err)
assert.Equal(t, base.Info.Metadata, out.Metadata)
assert.Equal(t, base.Info.Data.Display.Name, out.Data.Display.Name)
require.Equal(t, len(base.Info.Data.Panels), len(out.Data.Panels))
for k, panel := range base.Info.Data.Panels {
require.Contains(t, out.Data.Panels, k)
assert.Equal(t, panel.Spec.Plugin.Kind, out.Data.Panels[k].Spec.Plugin.Kind)
}
assert.Len(t, out.Tags, len(base.Info.Tags))
assert.Len(t, out.Data.Variables, len(base.Info.Data.Variables))
assert.Len(t, out.Data.Layouts, len(base.Info.Data.Layouts))
})
t.Run("add metadata image", func(t *testing.T) {
out, err := decode(t, `[{"op": "add", "path": "/metadata/image", "value": "https://example.com/img.png"}]`).Apply(base)
require.NoError(t, err)
assert.Equal(t, "https://example.com/img.png", out.Metadata.Image)
assert.Equal(t, SchemaVersion, out.Metadata.SchemaVersion, "schemaVersion preserved")
})
t.Run("replace display name", func(t *testing.T) {
out, err := decode(t, `[{"op": "replace", "path": "/data/display/name", "value": "Renamed"}]`).Apply(base)
require.NoError(t, err)
assert.Equal(t, "Renamed", out.Data.Display.Name)
})
// Per RFC 6902 § 4.1, `add` on an existing object member replaces the
// existing value rather than erroring — same effect as `replace`.
t.Run("add overwrites existing display name", func(t *testing.T) {
out, err := decode(t, `[{"op": "add", "path": "/data/display/name", "value": "Overwritten"}]`).Apply(base)
require.NoError(t, err)
assert.Equal(t, "Overwritten", out.Data.Display.Name)
})
t.Run("add data refreshInterval", func(t *testing.T) {
out, err := decode(t, `[{"op": "add", "path": "/data/refreshInterval", "value": "30s"}]`).Apply(base)
require.NoError(t, err)
assert.Equal(t, "30s", string(out.Data.RefreshInterval))
})
t.Run("add panel leaves others untouched", func(t *testing.T) {
out, err := decode(t, `[{
"op": "add",
"path": "/data/panels/p3",
"value": {
"kind": "Panel",
"spec": {
"plugin": {"kind": "signoz/TablePanel", "spec": {}},
"queries": [{
"kind": "TimeSeriesQuery",
"spec": {"plugin": {"kind": "signoz/BuilderQuery", "spec": {
"name": "A",
"signal": "logs",
"aggregations": [{"expression": "count()"}]
}}}
}]
}
}
}]`).Apply(base)
require.NoError(t, err)
assert.Len(t, out.Data.Panels, 3)
assert.Contains(t, out.Data.Panels, "p1")
assert.Contains(t, out.Data.Panels, "p2")
assert.Contains(t, out.Data.Panels, "p3")
})
t.Run("replace single panel", func(t *testing.T) {
out, err := decode(t, `[{
"op": "replace",
"path": "/data/panels/p2",
"value": {
"kind": "Panel",
"spec": {
"plugin": {"kind": "signoz/BarChartPanel", "spec": {}},
"queries": [{
"kind": "TimeSeriesQuery",
"spec": {"plugin": {"kind": "signoz/BuilderQuery", "spec": {
"name": "A",
"signal": "metrics",
"aggregations": [{
"metricName": "signoz_calls_total",
"temporality": "cumulative",
"timeAggregation": "rate",
"spaceAggregation": "sum"
}]
}}}
}]
}
}
}]`).Apply(base)
require.NoError(t, err)
assert.Equal(t, PanelPluginKind("signoz/BarChartPanel"), out.Data.Panels["p2"].Spec.Plugin.Kind)
assert.Equal(t, PanelPluginKind("signoz/TimeSeriesPanel"), out.Data.Panels["p1"].Spec.Plugin.Kind, "p1 untouched")
})
// Removing a panel realistically also drops its layout item — exercise
// the multi-op shape the UI sends.
t.Run("remove panel and its layout item", func(t *testing.T) {
out, err := decode(t, `[
{"op": "remove", "path": "/data/panels/p2"},
{"op": "remove", "path": "/data/layouts/0/spec/items/1"}
]`).Apply(base)
require.NoError(t, err)
assert.Len(t, out.Data.Panels, 1)
assert.Contains(t, out.Data.Panels, "p1")
assert.NotContains(t, out.Data.Panels, "p2")
raw := jsonOf(t, out)
assert.NotContains(t, raw, `"$ref":"#/spec/panels/p2"`)
assert.Contains(t, raw, `"$ref":"#/spec/panels/p1"`)
})
// The headline use case: edit a single field of a single query inside
// one panel without re-sending any other part of the dashboard.
t.Run("rename single query inside panel", func(t *testing.T) {
out, err := decode(t, `[{
"op": "replace",
"path": "/data/panels/p1/spec/queries/0/spec/plugin/spec/name",
"value": "renamed"
}]`).Apply(base)
require.NoError(t, err)
require.Len(t, out.Data.Panels["p1"].Spec.Queries, 1)
assert.Contains(t, jsonOf(t, out), `"name":"renamed"`)
})
// Replace a query at a specific index — swaps query "A" out for "B"
// without re-sending the rest of the panel.
t.Run("replace query at index", func(t *testing.T) {
out, err := decode(t, `[{
"op": "replace",
"path": "/data/panels/p1/spec/queries/0",
"value": {
"kind": "TimeSeriesQuery",
"spec": {"plugin": {"kind": "signoz/BuilderQuery", "spec": {
"name": "B",
"signal": "metrics",
"aggregations": [{
"metricName": "signoz_db_calls_total",
"temporality": "cumulative",
"timeAggregation": "rate",
"spaceAggregation": "sum"
}]
}}}
}
}]`).Apply(base)
require.NoError(t, err)
require.Len(t, out.Data.Panels["p1"].Spec.Queries, 1)
raw := jsonOf(t, out)
assert.Contains(t, raw, `"name":"B"`)
assert.NotContains(t, raw, `"name":"A"`)
})
// ─────────────────────────────────────────────────────────────────
// Layout edits
// ─────────────────────────────────────────────────────────────────
t.Run("move panel by editing layout x coordinate", func(t *testing.T) {
out, err := decode(t, `[{"op": "replace", "path": "/data/layouts/0/spec/items/0/x", "value": 6}]`).Apply(base)
require.NoError(t, err)
raw := jsonOf(t, out)
// The first item used to live at x=0, now lives at x=6.
assert.Contains(t, raw, `"x":6,"y":0,"width":6,"height":6,"content":{"$ref":"#/spec/panels/p1"}`)
})
t.Run("resize panel by editing layout width", func(t *testing.T) {
out, err := decode(t, `[{"op": "replace", "path": "/data/layouts/0/spec/items/0/width", "value": 12}]`).Apply(base)
require.NoError(t, err)
raw := jsonOf(t, out)
assert.Contains(t, raw, `"width":12`)
})
t.Run("rename layout row title", func(t *testing.T) {
out, err := decode(t, `[{"op": "replace", "path": "/data/layouts/0/spec/display/title", "value": "Latency"}]`).Apply(base)
require.NoError(t, err)
assert.Contains(t, jsonOf(t, out), `"title":"Latency"`)
})
t.Run("append layout item", func(t *testing.T) {
out, err := decode(t, `[{
"op": "add",
"path": "/data/layouts/0/spec/items/-",
"value": {"x": 0, "y": 6, "width": 12, "height": 6, "content": {"$ref": "#/spec/panels/p1"}}
}]`).Apply(base)
require.NoError(t, err)
// Item count went 2 → 3.
raw := jsonOf(t, out)
assert.Equal(t, 3, strings.Count(raw, `"$ref":"#/spec/panels/`))
})
// Composing add-panel + add-layout-item is the realistic shape of the
// "add a new chart to my dashboard" UI flow — exercise it end-to-end.
t.Run("add panel and corresponding layout item", func(t *testing.T) {
out, err := decode(t, `[
{
"op": "add",
"path": "/data/panels/p3",
"value": {
"kind": "Panel",
"spec": {
"plugin": {"kind": "signoz/TablePanel", "spec": {}},
"queries": [{
"kind": "TimeSeriesQuery",
"spec": {"plugin": {"kind": "signoz/BuilderQuery", "spec": {
"name": "A",
"signal": "logs",
"aggregations": [{"expression": "count()"}]
}}}
}]
}
}
},
{
"op": "add",
"path": "/data/layouts/0/spec/items/-",
"value": {"x": 0, "y": 6, "width": 12, "height": 6, "content": {"$ref": "#/spec/panels/p3"}}
}
]`).Apply(base)
require.NoError(t, err)
assert.Len(t, out.Data.Panels, 3)
raw := jsonOf(t, out)
assert.Contains(t, raw, `"$ref":"#/spec/panels/p3"`)
})
t.Run("append tag", func(t *testing.T) {
out, err := decode(t, `[{"op": "add", "path": "/tags/-", "value": {"name": "env/staging"}}]`).Apply(base)
require.NoError(t, err)
require.Len(t, out.Tags, 3)
assert.Equal(t, "env/staging", out.Tags[2].Name)
})
t.Run("append tag when none exist", func(t *testing.T) {
noTagsBase := &DashboardV2{
Info: DashboardInfo{
StoredDashboardInfo: base.Info.StoredDashboardInfo,
Tags: nil,
},
}
out, err := decode(t, `[{"op": "add", "path": "/tags/-", "value": {"name": "team/new"}}]`).Apply(noTagsBase)
require.NoError(t, err)
require.Len(t, out.Tags, 1)
assert.Equal(t, "team/new", out.Tags[0].Name)
})
t.Run("replace tag name", func(t *testing.T) {
out, err := decode(t, `[{"op": "replace", "path": "/tags/0/name", "value": "team/beta"}]`).Apply(base)
require.NoError(t, err)
require.Len(t, out.Tags, 2)
assert.Equal(t, "team/beta", out.Tags[0].Name)
assert.Equal(t, "env/prod", out.Tags[1].Name, "tag at index 1 untouched")
for _, tag := range out.Tags {
assert.NotEqual(t, "team/alpha", tag.Name, "old tag name must be gone")
}
})
t.Run("multiple ops applied in order", func(t *testing.T) {
out, err := decode(t, `[
{"op": "replace", "path": "/data/display/name", "value": "Multi-step"},
{"op": "remove", "path": "/data/panels/p2"},
{"op": "add", "path": "/tags/-", "value": {"name": "env/staging"}}
]`).Apply(base)
require.NoError(t, err)
assert.Equal(t, "Multi-step", out.Data.Display.Name)
assert.Len(t, out.Data.Panels, 1)
assert.Len(t, out.Tags, 3)
})
// `test` is an RFC 6902 precondition op: aborts the patch if the value
// at the path doesn't equal the supplied value. Used for optimistic
// concurrency. Here it matches, so the subsequent ops apply.
t.Run("test op passes", func(t *testing.T) {
out, err := decode(t, `[
{"op": "test", "path": "/data/display/name", "value": "Service overview"},
{"op": "replace", "path": "/data/display/name", "value": "Confirmed"}
]`).Apply(base)
require.NoError(t, err)
assert.Equal(t, "Confirmed", out.Data.Display.Name)
})
// ─────────────────────────────────────────────────────────────────
// Failure cases
// ─────────────────────────────────────────────────────────────────
t.Run("decode rejects non-array body", func(t *testing.T) {
var patch PatchableDashboardV2
err := json.Unmarshal([]byte(`{"op": "replace"}`), &patch)
require.Error(t, err)
})
t.Run("decode rejects malformed JSON", func(t *testing.T) {
var patch PatchableDashboardV2
// Outer json.Unmarshal rejects non-JSON before PatchableDashboardV2's
// UnmarshalJSON runs, so the error is a stdlib SyntaxError rather
// than the InvalidInput-classified wrap.
err := json.Unmarshal([]byte(`not json`), &patch)
require.Error(t, err)
})
// `test` precondition fails — the whole patch is rejected, including
// the subsequent replace.
t.Run("test op failure rejected", func(t *testing.T) {
_, err := decode(t, `[
{"op": "test", "path": "/data/display/name", "value": "Wrong"},
{"op": "replace", "path": "/data/display/name", "value": "Should not apply"}
]`).Apply(base)
require.Error(t, err)
})
t.Run("remove at missing path rejected", func(t *testing.T) {
_, err := decode(t, `[{"op": "remove", "path": "/data/panels/does-not-exist"}]`).Apply(base)
require.Error(t, err)
})
t.Run("remove schemaVersion rejected", func(t *testing.T) {
_, err := decode(t, `[{"op": "remove", "path": "/metadata/schemaVersion"}]`).Apply(base)
require.Error(t, err)
})
t.Run("wrong schemaVersion rejected", func(t *testing.T) {
_, err := decode(t, `[{"op": "replace", "path": "/metadata/schemaVersion", "value": "v5"}]`).Apply(base)
require.Error(t, err)
require.Contains(t, err.Error(), SchemaVersion)
})
t.Run("empty display name rejected", func(t *testing.T) {
_, err := decode(t, `[{"op": "replace", "path": "/data/display/name", "value": ""}]`).Apply(base)
require.Error(t, err)
require.Contains(t, err.Error(), "data.display.name is required")
})
t.Run("unknown top-level field rejected", func(t *testing.T) {
_, err := decode(t, `[{"op": "add", "path": "/bogus", "value": 42}]`).Apply(base)
require.Error(t, err)
require.Contains(t, err.Error(), "bogus")
})
t.Run("invalid panel kind rejected", func(t *testing.T) {
_, err := decode(t, `[{
"op": "replace",
"path": "/data/panels/p1",
"value": {
"kind": "Panel",
"spec": {"plugin": {"kind": "signoz/NotAPanel", "spec": {}}}
}
}]`).Apply(base)
require.Error(t, err)
require.Contains(t, err.Error(), "NotAPanel")
})
t.Run("query kind incompatible with panel rejected", func(t *testing.T) {
// PromQLQuery is not allowed on ListPanel — verify the cross-check
// in Validate still runs after a patch.
_, err := decode(t, `[{
"op": "replace",
"path": "/data/panels/p2",
"value": {
"kind": "Panel",
"spec": {
"plugin": {"kind": "signoz/ListPanel", "spec": {}},
"queries": [{"kind": "TimeSeriesQuery", "spec": {"plugin": {"kind": "signoz/PromQLQuery", "spec": {"name": "A", "query": "up"}}}}]
}
}
}]`).Apply(base)
require.Error(t, err)
})
t.Run("removing only query rejected", func(t *testing.T) {
// p2 has exactly one query in the base; removing it would strand
// the panel queryless, which Validate now rejects.
_, err := decode(t, `[{"op": "remove", "path": "/data/panels/p2/spec/queries/0"}]`).Apply(base)
require.Error(t, err)
require.Contains(t, err.Error(), "at least one query")
})
t.Run("too many tags rejected", func(t *testing.T) {
_, err := decode(t, `[
{"op": "add", "path": "/tags/-", "value": {"name": "t1"}},
{"op": "add", "path": "/tags/-", "value": {"name": "t2"}},
{"op": "add", "path": "/tags/-", "value": {"name": "t3"}},
{"op": "add", "path": "/tags/-", "value": {"name": "t4"}}
]`).Apply(base)
require.Error(t, err)
require.Contains(t, err.Error(), "at most")
})
}

View File

@@ -7,33 +7,75 @@ import (
"testing"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func unmarshalDashboard(data []byte) (*DashboardData, error) {
var d DashboardData
if err := json.Unmarshal(data, &d); err != nil {
return nil, err
}
return &d, nil
}
func TestValidateBigExample(t *testing.T) {
data, err := os.ReadFile("testdata/perses.json")
require.NoError(t, err, "reading example file")
_, err = UnmarshalAndValidateDashboardV2JSON(data)
_, err = unmarshalDashboard(data)
require.NoError(t, err, "expected valid dashboard")
}
func TestValidateDashboardWithSections(t *testing.T) {
data, err := os.ReadFile("testdata/perses_with_sections.json")
require.NoError(t, err, "reading example file")
_, err = UnmarshalAndValidateDashboardV2JSON(data)
_, err = unmarshalDashboard(data)
require.NoError(t, err, "expected valid dashboard")
}
func TestInvalidateNotAJSON(t *testing.T) {
_, err := UnmarshalAndValidateDashboardV2JSON([]byte("not json"))
_, err := unmarshalDashboard([]byte("not json"))
require.Error(t, err, "expected error for invalid JSON")
}
// TestUnmarshalErrorPreservesNestedMessage guards the wrap on dec.Decode in
// DashboardData.UnmarshalJSON. The wrap stamps a consistent type/code on
// decode failures, but must not smother the rich messages produced by nested
// UnmarshalJSON methods (panel/query/variable/datasource plugin envelopes).
func TestUnmarshalErrorPreservesNestedMessage(t *testing.T) {
data := []byte(`{
"panels": {
"p1": {
"kind": "Panel",
"spec": {
"plugin": {"kind": "NonExistentPanel", "spec": {}}
}
}
},
"layouts": []
}`)
_, err := unmarshalDashboard(data)
require.Error(t, err)
require.Contains(t, err.Error(), "unknown panel plugin kind",
"outer wrap should not smother the inner UnmarshalJSON message")
require.Contains(t, err.Error(), `"NonExistentPanel"`,
"the offending value should still appear in the error")
require.Contains(t, err.Error(), "allowed values:",
"the allowed-values hint should still appear in the error")
assert.True(t, errors.Ast(err, errors.TypeInvalidInput),
"outer wrap should classify the error as TypeInvalidInput")
assert.True(t, errors.Asc(err, ErrCodeDashboardInvalidInput),
"outer wrap should stamp ErrCodeDashboardInvalidInput")
}
func TestValidateEmptySpec(t *testing.T) {
// no variables, no panels
data := []byte(`{}`)
_, err := UnmarshalAndValidateDashboardV2JSON(data)
_, err := unmarshalDashboard(data)
require.NoError(t, err, "expected valid")
}
@@ -59,17 +101,13 @@ func TestValidateOnlyVariables(t *testing.T) {
"kind": "TextVariable",
"spec": {
"name": "mytext",
"value": "default",
"plugin": {
"kind": "signoz/TextboxVariable",
"spec": {}
}
"value": "default"
}
}
],
"layouts": []
}`)
_, err := UnmarshalAndValidateDashboardV2JSON(data)
_, err := unmarshalDashboard(data)
require.NoError(t, err, "expected valid")
}
@@ -148,7 +186,7 @@ func TestInvalidateUnknownPluginKind(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := UnmarshalAndValidateDashboardV2JSON([]byte(tt.data))
_, err := unmarshalDashboard([]byte(tt.data))
require.Error(t, err, "expected error containing %q, got nil", tt.wantContain)
require.Contains(t, err.Error(), tt.wantContain, "error should mention %q", tt.wantContain)
})
@@ -169,7 +207,7 @@ func TestInvalidateOneInvalidPanel(t *testing.T) {
},
"layouts": []
}`)
_, err := UnmarshalAndValidateDashboardV2JSON(data)
_, err := unmarshalDashboard(data)
require.Error(t, err, "expected error for invalid panel plugin kind")
require.Contains(t, err.Error(), "FakePanel", "error should mention FakePanel")
}
@@ -245,7 +283,7 @@ func TestRejectUnknownFieldsInPluginSpec(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := UnmarshalAndValidateDashboardV2JSON([]byte(tt.data))
_, err := unmarshalDashboard([]byte(tt.data))
require.Error(t, err, "expected error for unknown field")
require.Contains(t, err.Error(), tt.wantContain, "error should mention %q", tt.wantContain)
})
@@ -323,7 +361,7 @@ func TestInvalidateWrongFieldTypeInPluginSpec(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := UnmarshalAndValidateDashboardV2JSON([]byte(tt.data))
_, err := unmarshalDashboard([]byte(tt.data))
require.Error(t, err, "expected validation error")
if tt.wantContain != "" {
require.Contains(t, err.Error(), tt.wantContain, "error should mention %q", tt.wantContain)
@@ -531,13 +569,46 @@ func TestInvalidateBadPanelSpecValues(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := UnmarshalAndValidateDashboardV2JSON([]byte(tt.data))
_, err := unmarshalDashboard([]byte(tt.data))
require.Error(t, err, "expected error containing %q, got nil", tt.wantContain)
require.Contains(t, err.Error(), tt.wantContain, "error should mention %q", tt.wantContain)
})
}
}
func TestInvalidatePanelWithoutQueries(t *testing.T) {
data := []byte(`{
"panels": {
"p1": {
"kind": "Panel",
"spec": {"plugin": {"kind": "signoz/TimeSeriesPanel", "spec": {}}}
}
},
"layouts": []
}`)
_, err := unmarshalDashboard(data)
require.Error(t, err, "expected panel-without-queries to be rejected")
require.Contains(t, err.Error(), "at least one query")
}
func TestInvalidatePanelWithEmptyQueriesArray(t *testing.T) {
data := []byte(`{
"panels": {
"p1": {
"kind": "Panel",
"spec": {
"plugin": {"kind": "signoz/TimeSeriesPanel", "spec": {}},
"queries": []
}
}
},
"layouts": []
}`)
_, err := unmarshalDashboard(data)
require.Error(t, err, "expected panel with explicit empty queries array to be rejected")
require.Contains(t, err.Error(), "at least one query")
}
func TestValidateRequiredFields(t *testing.T) {
wrapVariable := func(pluginKind, pluginSpec string) string {
return `{
@@ -626,7 +697,7 @@ func TestValidateRequiredFields(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := UnmarshalAndValidateDashboardV2JSON([]byte(tt.data))
_, err := unmarshalDashboard([]byte(tt.data))
require.Error(t, err, "expected error containing %q, got nil", tt.wantContain)
require.Contains(t, err.Error(), tt.wantContain, "error should mention %q", tt.wantContain)
})
@@ -642,13 +713,14 @@ func TestTimeSeriesPanelDefaults(t *testing.T) {
"plugin": {
"kind": "signoz/TimeSeriesPanel",
"spec": {}
}
},
"queries": [{"kind": "TimeSeriesQuery", "spec": {"plugin": {"kind": "signoz/PromQLQuery", "spec": {"name": "A", "query": "up"}}}}]
}
}
},
"layouts": []
}`)
d, err := UnmarshalAndValidateDashboardV2JSON(data)
d, err := unmarshalDashboard(data)
require.NoError(t, err, "unmarshal and validate failed")
// After validation+normalization, the plugin spec should be a typed struct.
@@ -689,13 +761,14 @@ func TestNumberPanelDefaults(t *testing.T) {
"plugin": {
"kind": "signoz/NumberPanel",
"spec": {"thresholds": [{"value": 100, "color": "Red"}]}
}
},
"queries": [{"kind": "TimeSeriesQuery", "spec": {"plugin": {"kind": "signoz/PromQLQuery", "spec": {"name": "A", "query": "up"}}}}]
}
}
},
"layouts": []
}`)
d, err := UnmarshalAndValidateDashboardV2JSON(data)
d, err := unmarshalDashboard(data)
require.NoError(t, err, "unmarshal and validate failed")
require.IsType(t, &NumberPanelSpec{}, d.Panels["p1"].Spec.Plugin.Spec)
@@ -716,6 +789,30 @@ func TestNumberPanelDefaults(t *testing.T) {
"expected stored/response JSON to contain operator:>, got: %s", outputStr)
}
// TestPersesFixtureStorageRoundTrip exercises the typed → map[string]any →
// typed cycle that the create/get path performs against the kitchen-sink
// fixture. Catches plugin specs whose UnmarshalJSON expects a different shape
// than the default MarshalJSON emits.
func TestPersesFixtureStorageRoundTrip(t *testing.T) {
raw, err := os.ReadFile("testdata/perses.json")
require.NoError(t, err)
var data DashboardData
require.NoError(t, json.Unmarshal(raw, &data), "initial unmarshal")
marshaled, err := json.Marshal(data)
require.NoError(t, err, "marshal typed → JSON")
var asMap map[string]any
require.NoError(t, json.Unmarshal(marshaled, &asMap), "JSON → map (storage shape)")
remarshaled, err := json.Marshal(asMap)
require.NoError(t, err, "map → JSON (read-back shape)")
var roundtripped DashboardData
require.NoError(t, json.Unmarshal(remarshaled, &roundtripped), "JSON → typed (the failure mode)")
}
// TestStorageRoundTrip simulates the future DB store/load cycle:
// marshal the normalized dashboard to JSON (what would be written to DB),
// then unmarshal it back (what would be read from DB), and verify defaults survive.
@@ -728,7 +825,8 @@ func TestStorageRoundTrip(t *testing.T) {
"plugin": {
"kind": "signoz/TimeSeriesPanel",
"spec": {}
}
},
"queries": [{"kind": "TimeSeriesQuery", "spec": {"plugin": {"kind": "signoz/PromQLQuery", "spec": {"name": "A", "query": "up"}}}}]
}
},
"p2": {
@@ -737,7 +835,8 @@ func TestStorageRoundTrip(t *testing.T) {
"plugin": {
"kind": "signoz/NumberPanel",
"spec": {"thresholds": [{"value": 100, "color": "Red"}]}
}
},
"queries": [{"kind": "TimeSeriesQuery", "spec": {"plugin": {"kind": "signoz/PromQLQuery", "spec": {"name": "A", "query": "up"}}}}]
}
}
},
@@ -745,7 +844,7 @@ func TestStorageRoundTrip(t *testing.T) {
}`)
// Step 1: Unmarshal + validate + normalize (what the API handler does).
d, err := UnmarshalAndValidateDashboardV2JSON(input)
d, err := unmarshalDashboard(input)
require.NoError(t, err, "unmarshal and validate failed")
// Step 1.5: Verify struct fields have correct defaults (extra validation before storing).
@@ -765,7 +864,7 @@ func TestStorageRoundTrip(t *testing.T) {
require.NoError(t, err, "marshal for storage failed")
// Step 3: Unmarshal from JSON (simulates reading from DB).
loaded, err := UnmarshalAndValidateDashboardV2JSON(stored)
loaded, err := unmarshalDashboard(stored)
require.NoError(t, err, "unmarshal from storage failed")
// Step 3.5: Verify struct fields have correct defaults after loading (before returning in API).
@@ -878,7 +977,7 @@ func TestPanelTypeQueryTypeCompatibility(t *testing.T) {
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
_, err := UnmarshalAndValidateDashboardV2JSON(tc.data)
_, err := unmarshalDashboard(tc.data)
if tc.wantErr {
require.Error(t, err)
} else {

View File

@@ -0,0 +1,170 @@
package dashboardtypes
// TestDashboardDataMatchesPerses asserts that DashboardData
// and every nested SigNoz-owned type cover the JSON field set of their Perses
// counterpart.
import (
"reflect"
"sort"
"strings"
"testing"
v1 "github.com/perses/perses/pkg/model/api/v1"
"github.com/perses/perses/pkg/model/api/v1/dashboard"
"github.com/stretchr/testify/assert"
)
func TestDashboardDataMatchesPerses(t *testing.T) {
cases := []struct {
name string
ours reflect.Type
perses reflect.Type
}{
{"DashboardSpec", typeOf[DashboardData](), typeOf[v1.DashboardSpec]()},
{"Panel", typeOf[Panel](), typeOf[v1.Panel]()},
{"PanelSpec", typeOf[PanelSpec](), typeOf[v1.PanelSpec]()},
{"Query", typeOf[Query](), typeOf[v1.Query]()},
{"QuerySpec", typeOf[QuerySpec](), typeOf[v1.QuerySpec]()},
{"DatasourceSpec", typeOf[DatasourceSpec](), typeOf[v1.DatasourceSpec]()},
{"Variable", typeOf[Variable](), typeOf[dashboard.Variable]()},
{"ListVariableSpec", typeOf[ListVariableSpec](), typeOf[dashboard.ListVariableSpec]()},
{"Layout", typeOf[Layout](), typeOf[dashboard.Layout]()},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
missing, extra := drift(c.ours, c.perses)
assert.Empty(t, missing,
"DashboardData (%s) is missing json fields present on Perses %s — upstream likely added or renamed a field",
c.ours.Name(), c.perses.Name())
assert.Empty(t, extra,
"DashboardData (%s) has json fields absent on Perses %s — upstream likely removed a field or we added one without the counterpart",
c.ours.Name(), c.perses.Name())
})
}
}
func TestDriftDetectionMechanics(t *testing.T) {
t.Run("upstream added a field", func(t *testing.T) {
type ours struct {
Name string `json:"name"`
}
type perses struct {
Name string `json:"name"`
Description string `json:"description"`
}
missing, extra := drift(typeOf[ours](), typeOf[perses]())
assert.Equal(t, []string{"description"}, missing, "missing fires: upstream has a field we don't")
assert.Empty(t, extra)
})
t.Run("upstream removed a field", func(t *testing.T) {
type ours struct {
Name string `json:"name"`
Description string `json:"description"`
}
type perses struct {
Name string `json:"name"`
}
missing, extra := drift(typeOf[ours](), typeOf[perses]())
assert.Empty(t, missing)
assert.Equal(t, []string{"description"}, extra, "extra fires: we kept a field upstream removed")
})
t.Run("upstream renamed a field", func(t *testing.T) {
type ours struct {
Name string `json:"name"`
}
type perses struct {
Name string `json:"title"`
}
missing, extra := drift(typeOf[ours](), typeOf[perses]())
assert.Equal(t, []string{"title"}, missing, "missing fires for the new name")
assert.Equal(t, []string{"name"}, extra, "extra fires for the old name — both fire on a rename")
})
t.Run("we added a field upstream does not have", func(t *testing.T) {
type ours struct {
Name string `json:"name"`
Internal string `json:"internal"`
}
type perses struct {
Name string `json:"name"`
}
missing, extra := drift(typeOf[ours](), typeOf[perses]())
assert.Empty(t, missing)
assert.Equal(t, []string{"internal"}, extra, "extra fires: we added a field with no upstream counterpart")
})
t.Run("embedded struct flattens — drift inside the embed is caught", func(t *testing.T) {
type embedded struct {
Display string `json:"display"`
NewBit string `json:"newBit"` // upstream added this inside the embed
}
type ours struct {
Display string `json:"display"`
Name string `json:"name"`
}
type perses struct {
embedded `json:",inline"`
Name string `json:"name"`
}
missing, extra := drift(typeOf[ours](), typeOf[perses]())
assert.Equal(t, []string{"newBit"}, missing, "field added inside an inlined embed surfaces at the parent level")
assert.Empty(t, extra)
})
}
func drift(ours, perses reflect.Type) (missing, extra []string) {
o, p := jsonFields(ours), jsonFields(perses)
return sortedDiff(p, o), sortedDiff(o, p)
}
// jsonFields returns the set of json tag names for a struct, flattening
// anonymous embedded fields (matching encoding/json behavior).
func jsonFields(t reflect.Type) map[string]struct{} {
out := map[string]struct{}{}
if t.Kind() != reflect.Struct {
return out
}
for i := 0; i < t.NumField(); i++ {
f := t.Field(i)
// Skip unexported fields (e.g., dashboard.ListVariableSpec has an
// unexported `variableSpec` interface field).
if !f.IsExported() && !f.Anonymous {
continue
}
tag := f.Tag.Get("json")
name := strings.Split(tag, ",")[0]
// Anonymous embed with empty json name (no tag, or `json:",inline"` /
// `json:",omitempty"`-style options-only tag) is flattened by encoding/json.
if f.Anonymous && name == "" {
for k := range jsonFields(f.Type) {
out[k] = struct{}{}
}
continue
}
if tag == "-" || name == "" {
continue
}
out[name] = struct{}{}
}
return out
}
// sortedDiff returns keys in a but not in b, sorted.
func sortedDiff(a, b map[string]struct{}) []string {
var diff []string
for k := range a {
if _, ok := b[k]; !ok {
diff = append(diff, k)
}
}
sort.Strings(diff)
return diff
}
func typeOf[T any]() reflect.Type { return reflect.TypeOf((*T)(nil)).Elem() }

View File

@@ -0,0 +1,312 @@
package dashboardtypes
import (
"bytes"
"encoding/json"
"maps"
"slices"
"strings"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/go-playground/validator/v10"
"github.com/swaggest/jsonschema-go"
)
// ══════════════════════════════════════════════
// Panel plugin
// ══════════════════════════════════════════════
type PanelPlugin struct {
Kind PanelPluginKind `json:"kind"`
Spec any `json:"spec"`
}
// PrepareJSONSchema drops the reflected struct shape (type: object, properties)
// from the envelope so that only the JSONSchemaOneOf result binds.
func (PanelPlugin) PrepareJSONSchema(s *jsonschema.Schema) error {
return clearOneOfParentShape(s)
}
func (p *PanelPlugin) UnmarshalJSON(data []byte) error {
kind, specJSON, err := extractKindAndSpec(data)
if err != nil {
return err
}
factory, ok := panelPluginSpecs[PanelPluginKind(kind)]
if !ok {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "unknown panel plugin kind %q; allowed values: %s", kind, allowedValuesForKind(slices.Sorted(maps.Keys(panelPluginSpecs))))
}
spec, err := decodeSpec(specJSON, factory(), kind)
if err != nil {
return err
}
p.Kind = PanelPluginKind(kind)
p.Spec = spec
return nil
}
func (PanelPlugin) JSONSchemaOneOf() []any {
return []any{
PanelPluginVariant[TimeSeriesPanelSpec]{Kind: string(PanelKindTimeSeries)},
PanelPluginVariant[BarChartPanelSpec]{Kind: string(PanelKindBarChart)},
PanelPluginVariant[NumberPanelSpec]{Kind: string(PanelKindNumber)},
PanelPluginVariant[PieChartPanelSpec]{Kind: string(PanelKindPieChart)},
PanelPluginVariant[TablePanelSpec]{Kind: string(PanelKindTable)},
PanelPluginVariant[HistogramPanelSpec]{Kind: string(PanelKindHistogram)},
PanelPluginVariant[ListPanelSpec]{Kind: string(PanelKindList)},
}
}
type PanelPluginVariant[S any] struct {
Kind string `json:"kind" required:"true"`
Spec S `json:"spec" required:"true"`
}
func (v PanelPluginVariant[S]) PrepareJSONSchema(s *jsonschema.Schema) error {
return restrictKindToOneValue(s, v.Kind)
}
// ══════════════════════════════════════════════
// Query plugin
// ══════════════════════════════════════════════
type QueryPlugin struct {
Kind QueryPluginKind `json:"kind"`
Spec any `json:"spec"`
}
func (QueryPlugin) PrepareJSONSchema(s *jsonschema.Schema) error {
return clearOneOfParentShape(s)
}
func (p *QueryPlugin) UnmarshalJSON(data []byte) error {
kind, specJSON, err := extractKindAndSpec(data)
if err != nil {
return err
}
factory, ok := queryPluginSpecs[QueryPluginKind(kind)]
if !ok {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "unknown query plugin kind %q; allowed values: %s", kind, allowedValuesForKind(slices.Sorted(maps.Keys(queryPluginSpecs))))
}
spec, err := decodeSpec(specJSON, factory(), kind)
if err != nil {
return err
}
p.Kind = QueryPluginKind(kind)
p.Spec = spec
return nil
}
func (QueryPlugin) JSONSchemaOneOf() []any {
return []any{
QueryPluginVariant[BuilderQuerySpec]{Kind: string(QueryKindBuilder)},
QueryPluginVariant[CompositeQuerySpec]{Kind: string(QueryKindComposite)},
QueryPluginVariant[FormulaSpec]{Kind: string(QueryKindFormula)},
QueryPluginVariant[PromQLQuerySpec]{Kind: string(QueryKindPromQL)},
QueryPluginVariant[ClickHouseSQLQuerySpec]{Kind: string(QueryKindClickHouseSQL)},
QueryPluginVariant[TraceOperatorSpec]{Kind: string(QueryKindTraceOperator)},
}
}
type QueryPluginVariant[S any] struct {
Kind string `json:"kind" required:"true"`
Spec S `json:"spec" required:"true"`
}
func (v QueryPluginVariant[S]) PrepareJSONSchema(s *jsonschema.Schema) error {
return restrictKindToOneValue(s, v.Kind)
}
// ══════════════════════════════════════════════
// Variable plugin
// ══════════════════════════════════════════════
type VariablePlugin struct {
Kind VariablePluginKind `json:"kind"`
Spec any `json:"spec"`
}
func (VariablePlugin) PrepareJSONSchema(s *jsonschema.Schema) error {
return clearOneOfParentShape(s)
}
func (p *VariablePlugin) UnmarshalJSON(data []byte) error {
kind, specJSON, err := extractKindAndSpec(data)
if err != nil {
return err
}
factory, ok := variablePluginSpecs[VariablePluginKind(kind)]
if !ok {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "unknown variable plugin kind %q; allowed values: %s", kind, allowedValuesForKind(slices.Sorted(maps.Keys(variablePluginSpecs))))
}
spec, err := decodeSpec(specJSON, factory(), kind)
if err != nil {
return err
}
p.Kind = VariablePluginKind(kind)
p.Spec = spec
return nil
}
func (VariablePlugin) JSONSchemaOneOf() []any {
return []any{
VariablePluginVariant[DynamicVariableSpec]{Kind: string(VariableKindDynamic)},
VariablePluginVariant[QueryVariableSpec]{Kind: string(VariableKindQuery)},
VariablePluginVariant[CustomVariableSpec]{Kind: string(VariableKindCustom)},
}
}
type VariablePluginVariant[S any] struct {
Kind string `json:"kind" required:"true"`
Spec S `json:"spec" required:"true"`
}
func (v VariablePluginVariant[S]) PrepareJSONSchema(s *jsonschema.Schema) error {
return restrictKindToOneValue(s, v.Kind)
}
// ══════════════════════════════════════════════
// Datasource plugin
// ══════════════════════════════════════════════
type DatasourcePlugin struct {
Kind DatasourcePluginKind `json:"kind"`
Spec any `json:"spec"`
}
func (DatasourcePlugin) PrepareJSONSchema(s *jsonschema.Schema) error {
return clearOneOfParentShape(s)
}
func (p *DatasourcePlugin) UnmarshalJSON(data []byte) error {
kind, specJSON, err := extractKindAndSpec(data)
if err != nil {
return err
}
factory, ok := datasourcePluginSpecs[DatasourcePluginKind(kind)]
if !ok {
return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "unknown datasource plugin kind %q; allowed values: %s", kind, allowedValuesForKind(slices.Sorted(maps.Keys(datasourcePluginSpecs))))
}
spec, err := decodeSpec(specJSON, factory(), kind)
if err != nil {
return err
}
p.Kind = DatasourcePluginKind(kind)
p.Spec = spec
return nil
}
func (DatasourcePlugin) JSONSchemaOneOf() []any {
return []any{
DatasourcePluginVariant[struct{}]{Kind: string(DatasourceKindSigNoz)},
}
}
type DatasourcePluginVariant[S any] struct {
Kind string `json:"kind" required:"true"`
Spec S `json:"spec" required:"true"`
}
func (v DatasourcePluginVariant[S]) PrepareJSONSchema(s *jsonschema.Schema) error {
return restrictKindToOneValue(s, v.Kind)
}
// ══════════════════════════════════════════════
// Helpers
// ══════════════════════════════════════════════
var (
panelPluginSpecs = map[PanelPluginKind]func() any{
PanelKindTimeSeries: func() any { return new(TimeSeriesPanelSpec) },
PanelKindBarChart: func() any { return new(BarChartPanelSpec) },
PanelKindNumber: func() any { return new(NumberPanelSpec) },
PanelKindPieChart: func() any { return new(PieChartPanelSpec) },
PanelKindTable: func() any { return new(TablePanelSpec) },
PanelKindHistogram: func() any { return new(HistogramPanelSpec) },
PanelKindList: func() any { return new(ListPanelSpec) },
}
queryPluginSpecs = map[QueryPluginKind]func() any{
QueryKindBuilder: func() any { return new(BuilderQuerySpec) },
QueryKindComposite: func() any { return new(CompositeQuerySpec) },
QueryKindFormula: func() any { return new(FormulaSpec) },
QueryKindPromQL: func() any { return new(PromQLQuerySpec) },
QueryKindClickHouseSQL: func() any { return new(ClickHouseSQLQuerySpec) },
QueryKindTraceOperator: func() any { return new(TraceOperatorSpec) },
}
variablePluginSpecs = map[VariablePluginKind]func() any{
VariableKindDynamic: func() any { return new(DynamicVariableSpec) },
VariableKindQuery: func() any { return new(QueryVariableSpec) },
VariableKindCustom: func() any { return new(CustomVariableSpec) },
}
datasourcePluginSpecs = map[DatasourcePluginKind]func() any{
DatasourceKindSigNoz: func() any { return new(struct{}) },
}
allowedQueryKinds = map[PanelPluginKind][]QueryPluginKind{
PanelKindTimeSeries: {QueryKindBuilder, QueryKindComposite, QueryKindFormula, QueryKindTraceOperator, QueryKindPromQL, QueryKindClickHouseSQL},
PanelKindBarChart: {QueryKindBuilder, QueryKindComposite, QueryKindFormula, QueryKindTraceOperator, QueryKindPromQL, QueryKindClickHouseSQL},
PanelKindNumber: {QueryKindBuilder, QueryKindComposite, QueryKindFormula, QueryKindTraceOperator, QueryKindPromQL, QueryKindClickHouseSQL},
PanelKindHistogram: {QueryKindBuilder, QueryKindComposite, QueryKindFormula, QueryKindTraceOperator, QueryKindPromQL, QueryKindClickHouseSQL},
PanelKindPieChart: {QueryKindBuilder, QueryKindComposite, QueryKindFormula, QueryKindTraceOperator, QueryKindClickHouseSQL},
PanelKindTable: {QueryKindBuilder, QueryKindComposite, QueryKindFormula, QueryKindTraceOperator, QueryKindClickHouseSQL},
PanelKindList: {QueryKindBuilder},
}
)
func allowedValuesForKind[K ~string](kinds []K) string {
parts := make([]string, len(kinds))
for i, k := range kinds {
parts[i] = "`" + string(k) + "`"
}
return strings.Join(parts, ", ")
}
// extractKindAndSpec parses a {"kind": "...", "spec": {...}} envelope and returns
// kind and the raw spec bytes for typed decoding.
func extractKindAndSpec(data []byte) (string, []byte, error) {
var head struct {
Kind string `json:"kind"`
Spec json.RawMessage `json:"spec"`
}
if err := json.Unmarshal(data, &head); err != nil {
return "", nil, errors.WrapInvalidInputf(err, ErrCodeDashboardInvalidInput, "invalid plugin envelope")
}
return head.Kind, head.Spec, nil
}
// decodeSpec strict-decodes a spec JSON into target and runs struct-tag validation (go-playground/validator).
func decodeSpec(specJSON []byte, target any, kind string) (any, error) {
	if len(specJSON) == 0 {
		return nil, errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "kind %q: spec is required", kind)
	}
	dec := json.NewDecoder(bytes.NewReader(specJSON))
	dec.DisallowUnknownFields()
	if err := dec.Decode(target); err != nil {
		return nil, errors.WrapInvalidInputf(err, ErrCodeDashboardInvalidInput, "kind %q: invalid spec JSON", kind)
	}
	if err := validator.New().Struct(target); err != nil {
		return nil, errors.WrapInvalidInputf(err, ErrCodeDashboardInvalidInput, "kind %q: spec failed validation", kind)
	}
	return target, nil
}
// clearOneOfParentShape drops Type and Properties on a schema that also has a JSONSchemaOneOf.
func clearOneOfParentShape(s *jsonschema.Schema) error {
	s.Type = nil
	s.Properties = nil
	return nil
}
// restrictKindToOneValue ensures that the schema only allows one Kind value for a type.
// For example, PanelPluginVariant[TimeSeriesPanelSpec]{Kind: string(PanelKindTimeSeries)} should
// only allow "signoz/TimeSeriesPanel" in its kind field.
func restrictKindToOneValue(schema *jsonschema.Schema, kind string) error {
	kindProp, ok := schema.Properties["kind"]
	if !ok || kindProp.TypeObject == nil {
		return errors.NewInternalf(errors.CodeInternal, "variant schema missing `kind` property")
	}
	kindProp.TypeObject.WithEnum(kind)
	schema.Properties["kind"] = kindProp
	return nil
}


@@ -0,0 +1,182 @@
package dashboardtypes
import (
	"maps"
	"slices"

	"github.com/SigNoz/signoz/pkg/errors"
	v1 "github.com/perses/perses/pkg/model/api/v1"
	"github.com/perses/perses/pkg/model/api/v1/common"
	"github.com/perses/perses/pkg/model/api/v1/dashboard"
	"github.com/perses/perses/pkg/model/api/v1/variable"
	"github.com/swaggest/jsonschema-go"
)
// ══════════════════════════════════════════════
// Datasource
// ══════════════════════════════════════════════
type DatasourceSpec struct {
	Display *common.Display  `json:"display,omitempty"`
	Default bool             `json:"default"`
	Plugin  DatasourcePlugin `json:"plugin"`
}
// ══════════════════════════════════════════════
// Panel
// ══════════════════════════════════════════════
type Panel struct {
	Kind string    `json:"kind"`
	Spec PanelSpec `json:"spec"`
}

type PanelSpec struct {
	Display *v1.PanelDisplay `json:"display,omitempty"`
	Plugin  PanelPlugin      `json:"plugin"`
	Queries []Query          `json:"queries,omitempty"`
	Links   []v1.Link        `json:"links,omitempty"`
}
// ══════════════════════════════════════════════
// Query
// ══════════════════════════════════════════════
type Query struct {
	Kind string    `json:"kind"`
	Spec QuerySpec `json:"spec"`
}

type QuerySpec struct {
	Name   string      `json:"name,omitempty"`
	Plugin QueryPlugin `json:"plugin"`
}
// ══════════════════════════════════════════════
// Variable
// ══════════════════════════════════════════════
// Variable is the list/text sum type. Spec is set to *ListVariableSpec or
// *dashboard.TextVariableSpec by UnmarshalJSON based on Kind. The schema is a
// discriminated oneOf (see JSONSchemaOneOf).
type Variable struct {
	Kind variable.Kind `json:"kind"`
	Spec any           `json:"spec"`
}

func (Variable) PrepareJSONSchema(s *jsonschema.Schema) error {
	return clearOneOfParentShape(s)
}

func (v *Variable) UnmarshalJSON(data []byte) error {
	kind, specJSON, err := extractKindAndSpec(data)
	if err != nil {
		return err
	}
	switch kind {
	case string(variable.KindList):
		spec, err := decodeSpec(specJSON, new(ListVariableSpec), kind)
		if err != nil {
			return err
		}
		v.Kind = variable.KindList
		v.Spec = spec
	case string(variable.KindText):
		spec, err := decodeSpec(specJSON, new(dashboard.TextVariableSpec), kind)
		if err != nil {
			return err
		}
		v.Kind = variable.KindText
		v.Spec = spec
	default:
		return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "unknown variable kind %q; allowed values: %s", kind, allowedValuesForKind([]variable.Kind{variable.KindList, variable.KindText}))
	}
	return nil
}

func (Variable) JSONSchemaOneOf() []any {
	return []any{
		VariableEnvelope[ListVariableSpec]{Kind: string(variable.KindList)},
		VariableEnvelope[dashboard.TextVariableSpec]{Kind: string(variable.KindText)},
	}
}

type VariableEnvelope[S any] struct {
	Kind string `json:"kind" required:"true"`
	Spec S      `json:"spec" required:"true"`
}

func (v VariableEnvelope[S]) PrepareJSONSchema(s *jsonschema.Schema) error {
	return restrictKindToOneValue(s, v.Kind)
}
// ListVariableSpec mirrors dashboard.ListVariableSpec (variable.ListSpec
// fields + Name) but with a typed VariablePlugin replacing common.Plugin.
type ListVariableSpec struct {
	Display         *variable.Display      `json:"display,omitempty"`
	DefaultValue    *variable.DefaultValue `json:"defaultValue,omitempty"`
	AllowAllValue   bool                   `json:"allowAllValue"`
	AllowMultiple   bool                   `json:"allowMultiple"`
	CustomAllValue  string                 `json:"customAllValue,omitempty"`
	CapturingRegexp string                 `json:"capturingRegexp,omitempty"`
	Sort            *variable.Sort         `json:"sort,omitempty"`
	Plugin          VariablePlugin         `json:"plugin"`
	Name            string                 `json:"name"`
}
// ══════════════════════════════════════════════
// Layout
// ══════════════════════════════════════════════
// Layout is the dashboard layout sum type. Spec is populated by UnmarshalJSON
// with the concrete layout spec struct (today only dashboard.GridLayoutSpec)
// based on Kind. No plugin is involved, so we reuse the Perses spec types as
// leaf imports.
type Layout struct {
	Kind dashboard.LayoutKind `json:"kind"`
	Spec any                  `json:"spec"`
}

// layoutSpecs is the layout sum type factory. Perses only defines
// KindGridLayout today; adding a new kind upstream surfaces as an
// "unknown layout kind" runtime error here until we add it.
var layoutSpecs = map[dashboard.LayoutKind]func() any{
	dashboard.KindGridLayout: func() any { return new(dashboard.GridLayoutSpec) },
}

func (Layout) PrepareJSONSchema(s *jsonschema.Schema) error {
	return clearOneOfParentShape(s)
}

func (l *Layout) UnmarshalJSON(data []byte) error {
	kind, specJSON, err := extractKindAndSpec(data)
	if err != nil {
		return err
	}
	factory, ok := layoutSpecs[dashboard.LayoutKind(kind)]
	if !ok {
		return errors.NewInvalidInputf(ErrCodeDashboardInvalidInput, "unknown layout kind %q; allowed values: %s", kind, allowedValuesForKind(slices.Sorted(maps.Keys(layoutSpecs))))
	}
	spec, err := decodeSpec(specJSON, factory(), kind)
	if err != nil {
		return err
	}
	l.Kind = dashboard.LayoutKind(kind)
	l.Spec = spec
	return nil
}

func (Layout) JSONSchemaOneOf() []any {
	return []any{
		LayoutEnvelope[dashboard.GridLayoutSpec]{Kind: string(dashboard.KindGridLayout)},
	}
}

type LayoutEnvelope[S any] struct {
	Kind string `json:"kind" required:"true"`
	Spec S      `json:"spec" required:"true"`
}

func (v LayoutEnvelope[S]) PrepareJSONSchema(s *jsonschema.Schema) error {
	return restrictKindToOneValue(s, v.Kind)
}


@@ -8,6 +8,7 @@ import (
	qb "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
	"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
	"github.com/SigNoz/signoz/pkg/valuer"
	"github.com/swaggest/jsonschema-go"
)
// ══════════════════════════════════════════════
@@ -20,11 +21,10 @@ const (
	VariableKindDynamic VariablePluginKind = "signoz/DynamicVariable"
	VariableKindQuery   VariablePluginKind = "signoz/QueryVariable"
	VariableKindCustom  VariablePluginKind = "signoz/CustomVariable"
	VariableKindTextbox VariablePluginKind = "signoz/TextboxVariable"
)

func (VariablePluginKind) Enum() []any {
	return []any{VariableKindDynamic, VariableKindQuery, VariableKindCustom, VariableKindTextbox}
	return []any{VariableKindDynamic, VariableKindQuery, VariableKindCustom}
}
type DynamicVariableSpec struct {
@@ -42,8 +42,6 @@ type CustomVariableSpec struct {
	CustomValue string `json:"customValue" validate:"required" required:"true"`
}
type TextboxVariableSpec struct{}
// ══════════════════════════════════════════════
// SigNoz query plugin specs — aliased from querybuildertypesv5
// ══════════════════════════════════════════════
@@ -87,6 +85,30 @@ func (b *BuilderQuerySpec) UnmarshalJSON(data []byte) error {
	return nil
}

// MarshalJSON delegates to the inner Spec so the on-wire shape matches what
// UnmarshalJSON expects (a flat builder-query payload with `signal` at the top
// level). Without this, Go's default would wrap it as {"Spec": {...}} and the
// signal-dispatch on read would fail.
func (b BuilderQuerySpec) MarshalJSON() ([]byte, error) {
	return json.Marshal(b.Spec)
}

// PrepareJSONSchema drops the reflected struct shape so only the
// JSONSchemaOneOf result binds.
func (BuilderQuerySpec) PrepareJSONSchema(s *jsonschema.Schema) error {
	return clearOneOfParentShape(s)
}

// JSONSchemaOneOf exposes the three signal-dispatched shapes a builder query
// can take. Mirrors qb.UnmarshalBuilderQueryBySignal's runtime dispatch.
func (BuilderQuerySpec) JSONSchemaOneOf() []any {
	return []any{
		qb.QueryBuilderQuery[qb.LogAggregation]{},
		qb.QueryBuilderQuery[qb.MetricAggregation]{},
		qb.QueryBuilderQuery[qb.TraceAggregation]{},
	}
}
// ══════════════════════════════════════════════
// SigNoz panel plugin specs
// ══════════════════════════════════════════════


@@ -2,6 +2,7 @@ package dashboardtypes
import (
	"context"
	"time"

	"github.com/SigNoz/signoz/pkg/valuer"
)
@@ -32,4 +33,23 @@ type Store interface {
	DeletePublic(context.Context, string) error
	RunInTx(context.Context, func(context.Context) error) error

	// ════════════════════════════════════════════════════════════════════════
	// v2 dashboard methods
	// ════════════════════════════════════════════════════════════════════════
	GetV2(context.Context, valuer.UUID, valuer.UUID) (*StorableDashboard, *StorablePublicDashboard, error)
	// UpdateV2 updates the dashboard's data, updated_at and updated_by columns
	// only, scoped by org and excluding soft-deleted rows. Uses the caller's
	// transaction context so it can be made atomic with tag relation changes.
	UpdateV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, data StorableDashboardData) error
	LockUnlockV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, locked bool, updatedBy string) error
	// SoftDeleteV2 soft-deletes a dashboard. Re-deleting an already
	// soft-deleted row matches 0 rows and returns NotFound.
	SoftDeleteV2(ctx context.Context, orgID valuer.UUID, id valuer.UUID, deletedBy string) error
	ListPurgeable(ctx context.Context, retention time.Duration, limit int) ([]valuer.UUID, error)
	HardDelete(ctx context.Context, ids []valuer.UUID) error
}


@@ -76,11 +76,7 @@
"display": {
"name": "textboxvar"
},
"value": "defaultvaluegoeshere",
"plugin": {
"kind": "signoz/TextboxVariable",
"spec": {}
}
"value": "defaultvaluegoeshere"
}
}
],


@@ -1,13 +0,0 @@
package metercollectortypes

import "github.com/SigNoz/signoz/pkg/valuer"

// Aggregation is a supported Zeus aggregation name.
type Aggregation struct {
	valuer.String
}

var (
	AggregationSum = Aggregation{valuer.NewString("sum")}
	AggregationMax = Aggregation{valuer.NewString("max")}
)


@@ -1,40 +0,0 @@
// Package metercollectortypes holds billing meter value types.
package metercollectortypes

import (
	"regexp"

	"github.com/SigNoz/signoz/pkg/errors"
)

var nameRegex = regexp.MustCompile(`^[a-z][a-z0-9_.]+$`)

// Name is a validated dotted meter name.
type Name struct {
	s string
}

func NewName(s string) (Name, error) {
	if !nameRegex.MatchString(s) {
		return Name{}, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "invalid meter name: %s", s)
	}
	return Name{s: s}, nil
}

func MustNewName(s string) Name {
	name, err := NewName(s)
	if err != nil {
		panic(err)
	}
	return name
}

func (n Name) String() string {
	return n.s
}

func (n Name) IsZero() bool {
	return n.s == ""
}


@@ -1,13 +0,0 @@
package metercollectortypes

import "github.com/SigNoz/signoz/pkg/valuer"

// Unit is a supported Zeus meter unit.
type Unit struct {
	valuer.String
}

var (
	UnitCount = Unit{valuer.NewString("count")}
	UnitBytes = Unit{valuer.NewString("bytes")}
)


@@ -1,57 +0,0 @@
package meterreportertypes
import "github.com/SigNoz/signoz/pkg/types/metercollectortypes"
// Meter is one meter value sent to Zeus.
type Meter struct {
// MeterName is the fully-qualified meter identifier.
MeterName string `json:"name"`
// Value is the aggregated scalar for this meter over the reporting window.
Value float64 `json:"value"`
// Unit is the metric unit for this meter.
Unit metercollectortypes.Unit `json:"unit"`
// Aggregation names the aggregation applied to produce Value.
Aggregation metercollectortypes.Aggregation `json:"aggregation"`
// StartUnixMilli is the inclusive window start in epoch milliseconds.
StartUnixMilli int64 `json:"start_unix_milli"`
// EndUnixMilli is the exclusive window end in epoch milliseconds.
EndUnixMilli int64 `json:"end_unix_milli"`
// IsCompleted is false for the current day's partial value.
IsCompleted bool `json:"is_completed"`
// Dimensions is the per-meter label set.
Dimensions map[string]string `json:"dimensions"`
}
// NewMeter builds a meter from typed metadata and a reporting window.
func NewMeter(
name metercollectortypes.Name,
value float64,
unit metercollectortypes.Unit,
aggregation metercollectortypes.Aggregation,
window Window,
dimensions map[string]string,
) Meter {
return Meter{
MeterName: name.String(),
Value: value,
Unit: unit,
Aggregation: aggregation,
StartUnixMilli: window.StartUnixMilli,
EndUnixMilli: window.EndUnixMilli,
IsCompleted: window.IsCompleted,
Dimensions: dimensions,
}
}
// PostableMeters is one day of meters for Zeus.PutMetersV3.
type PostableMeters struct {
// Meters is the set of meter values being shipped for one day.
Meters []Meter `json:"meters"`
}

Some files were not shown because too many files have changed in this diff Show More