Compare commits

...

9 Commits

Author SHA1 Message Date
vikrantgupta25
5f036090a3 fix(authz): single line returns 2026-04-23 19:20:28 +05:30
vikrantgupta25
9f2efd100b feat(authz): fix the role correlations 2026-04-23 19:20:28 +05:30
vikrantgupta25
642440732e feat(authz): fix the role correlations 2026-04-23 19:20:28 +05:30
vikrantgupta25
216845187d feat(authz): move to types 2026-04-23 19:20:28 +05:30
vikrantgupta25
71ac319860 feat(authz): add check API for community build 2026-04-23 19:20:24 +05:30
Vikrant Gupta
afe85c48f9 feat(authz): add support for delete role (#11044)
* feat(authz): add support for delete role

* feat(authz): register config and return error on cleanup failure

* feat(authz): take user and serviceaccount DI for assignee checks

* feat(authz): add the example yaml

* feat(authz): move to callbacks instead of DI
2026-04-23 13:25:19 +00:00
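The "callbacks instead of DI" flow from the commit above can be sketched in Python. This is a hedged illustration, not SigNoz's actual API: the `OnBeforeRoleDelete` name mirrors the commit's terminology, while `RoleService` and its storage are hypothetical.

```python
from typing import Callable, Dict, List

# Hypothetical sketch of the "callbacks instead of DI" pattern: modules
# register cleanup hooks, and the role service runs every hook before a
# delete. Any hook that raises aborts the deletion.
OnBeforeRoleDelete = Callable[[str, str], None]  # (org_id, role_id)

class RoleService:
    def __init__(self, on_before_role_delete: List[OnBeforeRoleDelete]) -> None:
        self.on_before_role_delete = on_before_role_delete
        self.roles: Dict[str, str] = {"role-1": "org-1"}  # role_id -> org_id

    def delete(self, org_id: str, role_id: str) -> None:
        # Give every registered module a chance to clean up (or veto by raising).
        for callback in self.on_before_role_delete:
            callback(org_id, role_id)
        del self.roles[role_id]
```

Compared with injecting each dependent module into the authz provider, callbacks keep the provider's constructor signature stable as more modules need pre-delete cleanup.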
Pandey
aeadeacc70 chore(tests): bump deps to close 8 dependabot alerts (#11076)
Bumps the direct pins pytest>=9.0.3 (GHSA-6w46-j5rx-g56g) and requests>=2.33.0
(GHSA-gc5v-m9x4-r6x2). uv lock --upgrade then refreshes all transitive
dependencies, which covers:

- cryptography 46.0.3 -> 46.0.7 (GHSA-r6ph-v2qm-q3c2 high, GHSA-p423-j2cm-9vmq
  medium, GHSA-m959-cc7f-wv43 low)
- python-dotenv 1.2.1 -> 1.2.2 (GHSA-mf9w-mj56-hr94)
- Pygments 2.19.2 -> 2.20.0 (GHSA-5239-wwwm-4pmq)
- jwcrypto 1.5.6 -> 1.5.7 (GHSA-fjrm-76x2-c4q4 — PyPI has 1.5.7, GitHub's
  advisory hasn't catalogued the patched version yet)

Risk: python-keycloak took a major bump, 6.0.0 -> 7.1.1. The 7.0 release
tightens return-type handling and can now raise TypeError on mismatch. Test
collection imports everything cleanly (499 tests), but only the callbackauthn
suite exercises KeycloakAdmin at runtime — watch that job in CI.
2026-04-23 12:56:29 +00:00
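Since python-keycloak 7.x can raise TypeError where 6.x returned the mismatched value as-is, a thin guard around client calls makes that failure mode explicit. This is a hedged sketch: `fetch` is a stand-in for any KeycloakAdmin call, not a real library helper.

```python
# Guard a client call against the stricter return-type handling in
# python-keycloak 7.x: re-raise a TypeError from the library as a clear
# runtime error instead of letting it crash mid-suite anonymously.
def safe_call(fetch):
    try:
        return fetch()
    except TypeError as exc:
        raise RuntimeError(f"unexpected return type from Keycloak client: {exc}") from exc
```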
Vikrant Gupta
6996d41b01 fix(serviceaccount): status code for deleted service accounts (#11075)
* fix(serviceaccount): status code for deleted service accounts

* fix(authz): plural relation endpoints
2026-04-23 12:53:52 +00:00
Pandey
f62024ad3f chore: modern fmts and lints for tests/ (#11074)
* chore(frontend): remove stale e2e scaffold

frontend/e2e/ held an unused settings-only test-plan scaffold from Oct 2025.
Active Playwright specs live at tests/e2e/. Drop the directory, the orphan
playwright.config.ts, the @playwright/test dependency, and the tsconfig
references that pinned them.

* chore(e2e): migrate formatter from prettier to oxfmt

Swap tests/e2e/ onto oxfmt — same tool the frontend adopted in #11057. Style
matches frontend/.oxfmtrc.json (tabs, tabWidth:1) so the two TS trees stay
visually consistent. Drops .prettierrc.json and .prettierignore, adds the
fmt/fmt:check yarn scripts, and reformats the existing specs.

* chore(e2e): migrate linter from eslint to oxlint

Drop eslint + @typescript-eslint plugins in favour of oxlint 1.59 + tsgolint
— same toolchain the frontend adopted in #10176. The .oxlintrc.json mirrors
frontend/.oxlintrc.json with plugins scoped to a Playwright TS codebase
(eslint, typescript, unicorn, import, promise).

Divergence: eslint-plugin-playwright is not ported. Its rules depend on
ESLint APIs (context.getAncestors) that oxlint's JS plugin shim does not
implement, so the five playwright/* rules are dropped in this migration.

* ci(e2e): add fmtlint job

Mirror integrationci.yaml's fmtlint job for e2e. Runs oxfmt --check and
oxlint on tests/e2e/ under the same safe-to-e2e label gating as the
existing test job.

* chore(integration): migrate python tooling from black/pylint/isort/autoflake to ruff

Replace the four-tool stack with ruff — same motivation as the oxfmt/oxlint
swap on the TS side. One tool covers formatting (ruff format), import
sorting (I), unused-import/variable cleanup (F401/F841), and the pylint
rules we actually care about (E/W/F/UP/B/PL).

Rule set mirrors the intent of the prior pylint config: too-many-* checks
and magic-value-comparison stay disabled, dangerous-default-value (now
B006) stays muted. A handful of newly-surfaced codes (B011/B024/B905/E741/
UP047/PLC0206/PLW2901) are also muted to keep this a pure tool swap — each
deserves its own review before enabling.

Divergence: ruff caps line-length at 320, so the prior pylint value of 400
drops to 320. Nothing in tree exceeds 320, so no lines wrap.

No changes to integrationci.yaml — both fmt/lint steps still call
make py-fmt / make py-lint, which now dispatch to ruff.

* chore(e2e): restore playwright lint rules via oxlint jsPlugin

eslint-plugin-playwright@2.x was rewritten against ESLint 8's
context.sourceCode.getAncestors() API, which oxlint's JS plugin shim does
expose, so the missing context.getAncestors() call that previously ruled out
the 0.16.x version is no longer a blocker. Bump to 2.10.2, re-add it as a jsPlugin,
and restore the five rules dropped in the initial oxlint migration:
expect-expect, no-conditional-in-test, no-page-pause, no-wait-for-timeout,
prefer-web-first-assertions.

Rule count: 104 → 109.

* chore(frontend): remove stale e2e prompt

frontend/prompts/generate-e2e-test.md is leftover from the same Oct 2025
scaffold removed in ebf735dcc. It references frontend/e2e/utils/login.util.ts,
which no longer exists, and is not wired into anything.

* chore(e2e): make .env.local write layout explicit

The single f-string with inline \n escapes read as a wall of text after
ruff's line-length allowance collapsed it onto one line. Switch to a
triple-quoted f-string so the generated .env.local structure is visible
in source. Byte-for-byte identical output.

* chore(e2e): write .env.local one key per line

Open the file with a context manager and emit each key with its own
f.write call. Same output as before, but each key-value pair is a
discrete statement.
2026-04-23 12:01:42 +00:00
154 changed files with 3002 additions and 6979 deletions

View File

@@ -9,6 +9,27 @@ on:
       - labeled
 jobs:
+  fmtlint:
+    if: |
+      ((github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
+      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))) && contains(github.event.pull_request.labels.*.name, 'safe-to-e2e')
+    runs-on: ubuntu-latest
+    steps:
+      - name: checkout
+        uses: actions/checkout@v4
+      - name: node
+        uses: actions/setup-node@v4
+        with:
+          node-version: lts/*
+      - name: install
+        run: |
+          cd tests/e2e && yarn install --frozen-lockfile
+      - name: fmt
+        run: |
+          cd tests/e2e && yarn fmt:check
+      - name: lint
+        run: |
+          cd tests/e2e && yarn lint
   test:
     strategy:
       fail-fast: false

View File

@@ -201,14 +201,12 @@ docker-buildx-enterprise: go-build-enterprise js-build
 # python commands
 ##############################################################
 .PHONY: py-fmt
-py-fmt: ## Run black across the shared tests project
-	@cd tests && uv run black .
+py-fmt: ## Run ruff format across the shared tests project
+	@cd tests && uv run ruff format .
 .PHONY: py-lint
-py-lint: ## Run lint across the shared tests project
-	@cd tests && uv run isort .
-	@cd tests && uv run autoflake .
-	@cd tests && uv run pylint .
+py-lint: ## Run ruff check across the shared tests project
+	@cd tests && uv run ruff check --fix .
 .PHONY: py-test-setup
 py-test-setup: ## Bring up the shared SigNoz backend used by integration and e2e tests
View File

@@ -92,7 +92,7 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
 		func(ctx context.Context, providerSettings factory.ProviderSettings, store authtypes.AuthNStore, licensing licensing.Licensing) (map[authtypes.AuthNProvider]authn.AuthN, error) {
 			return signoz.NewAuthNs(ctx, providerSettings, store, licensing)
 		},
-		func(ctx context.Context, sqlstore sqlstore.SQLStore, _ licensing.Licensing, _ dashboard.Module) (factory.ProviderFactory[authz.AuthZ, authz.Config], error) {
+		func(ctx context.Context, sqlstore sqlstore.SQLStore, _ licensing.Licensing, _ []authz.OnBeforeRoleDelete, _ dashboard.Module) (factory.ProviderFactory[authz.AuthZ, authz.Config], error) {
 			openfgaDataStore, err := openfgaserver.NewSQLStore(sqlstore)
 			if err != nil {
 				return nil, err

View File

@@ -137,12 +137,12 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
 			return authNs, nil
 		},
-		func(ctx context.Context, sqlstore sqlstore.SQLStore, licensing licensing.Licensing, dashboardModule dashboard.Module) (factory.ProviderFactory[authz.AuthZ, authz.Config], error) {
+		func(ctx context.Context, sqlstore sqlstore.SQLStore, licensing licensing.Licensing, onBeforeRoleDelete []authz.OnBeforeRoleDelete, dashboardModule dashboard.Module) (factory.ProviderFactory[authz.AuthZ, authz.Config], error) {
 			openfgaDataStore, err := openfgaserver.NewSQLStore(sqlstore)
 			if err != nil {
 				return nil, err
 			}
-			return openfgaauthz.NewProviderFactory(sqlstore, openfgaschema.NewSchema().Get(ctx), openfgaDataStore, licensing, dashboardModule), nil
+			return openfgaauthz.NewProviderFactory(sqlstore, openfgaschema.NewSchema().Get(ctx), openfgaDataStore, licensing, onBeforeRoleDelete, dashboardModule), nil
 		},
 		func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, queryParser queryparser.QueryParser, querier querier.Querier, licensing licensing.Licensing) dashboard.Module {
 			return impldashboard.NewModule(pkgimpldashboard.NewStore(store), settings, analytics, orgGetter, queryParser, querier, licensing)

View File

@@ -407,3 +407,11 @@ cloudintegration:
   agent:
     # The version of the cloud integration agent.
     version: v0.0.8
+
+##################### Authz #################################
+authz:
+  # Specifies the authz provider to use.
+  provider: openfga
+  openfga:
+    # maximum tuples allowed per openfga write operation.
+    max_tuples_per_write: 100
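The new max_tuples_per_write cap implies batching on the write path. A small Python sketch of the chunking arithmetic (mirroring the batching loop the provider uses when deleting a role's tuples; the function name is illustrative):

```python
# Split a tuple list into batches no larger than the configured
# max_tuples_per_write cap before issuing each write call.
def chunk_tuples(tuples, max_per_write=100):
    return [tuples[i:i + max_per_write] for i in range(0, len(tuples), max_per_write)]
```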

View File

@@ -7807,7 +7807,7 @@ paths:
       summary: Patch role
       tags:
         - role
-  /api/v1/roles/{id}/relation/{relation}/objects:
+  /api/v1/roles/{id}/relations/{relation}/objects:
     get:
       deprecated: false
       description: Gets all objects connected to the specified role via a given relation

View File

@@ -20,20 +20,23 @@ import (
 )
 
 type provider struct {
-	pkgAuthzService authz.AuthZ
-	openfgaServer   *openfgaserver.Server
-	licensing       licensing.Licensing
-	store           authtypes.RoleStore
-	registry        []authz.RegisterTypeable
-	config          authz.Config
+	pkgAuthzService    authz.AuthZ
+	openfgaServer      *openfgaserver.Server
+	licensing          licensing.Licensing
+	store              authtypes.RoleStore
+	registry           []authz.RegisterTypeable
+	settings           factory.ScopedProviderSettings
+	onBeforeRoleDelete []authz.OnBeforeRoleDelete
 }
 
-func NewProviderFactory(sqlstore sqlstore.SQLStore, openfgaSchema []openfgapkgtransformer.ModuleFile, openfgaDataStore storage.OpenFGADatastore, licensing licensing.Licensing, registry ...authz.RegisterTypeable) factory.ProviderFactory[authz.AuthZ, authz.Config] {
+func NewProviderFactory(sqlstore sqlstore.SQLStore, openfgaSchema []openfgapkgtransformer.ModuleFile, openfgaDataStore storage.OpenFGADatastore, licensing licensing.Licensing, onBeforeRoleDelete []authz.OnBeforeRoleDelete, registry ...authz.RegisterTypeable) factory.ProviderFactory[authz.AuthZ, authz.Config] {
 	return factory.NewProviderFactory(factory.MustNewName("openfga"), func(ctx context.Context, ps factory.ProviderSettings, config authz.Config) (authz.AuthZ, error) {
-		return newOpenfgaProvider(ctx, ps, config, sqlstore, openfgaSchema, openfgaDataStore, licensing, registry)
+		return newOpenfgaProvider(ctx, ps, config, sqlstore, openfgaSchema, openfgaDataStore, licensing, onBeforeRoleDelete, registry)
 	})
 }
 
-func newOpenfgaProvider(ctx context.Context, settings factory.ProviderSettings, config authz.Config, sqlstore sqlstore.SQLStore, openfgaSchema []openfgapkgtransformer.ModuleFile, openfgaDataStore storage.OpenFGADatastore, licensing licensing.Licensing, registry []authz.RegisterTypeable) (authz.AuthZ, error) {
+func newOpenfgaProvider(ctx context.Context, settings factory.ProviderSettings, config authz.Config, sqlstore sqlstore.SQLStore, openfgaSchema []openfgapkgtransformer.ModuleFile, openfgaDataStore storage.OpenFGADatastore, licensing licensing.Licensing, onBeforeRoleDelete []authz.OnBeforeRoleDelete, registry []authz.RegisterTypeable) (authz.AuthZ, error) {
 	pkgOpenfgaAuthzProvider := pkgopenfgaauthz.NewProviderFactory(sqlstore, openfgaSchema, openfgaDataStore)
 	pkgAuthzService, err := pkgOpenfgaAuthzProvider.New(ctx, settings, config)
 	if err != nil {
@@ -45,12 +48,17 @@ func newOpenfgaProvider(ctx context.Context, settings factory.ProviderSettings,
 		return nil, err
 	}
 
+	scopedSettings := factory.NewScopedProviderSettings(settings, "github.com/SigNoz/signoz/ee/authz/openfgaauthz")
 	return &provider{
-		pkgAuthzService: pkgAuthzService,
-		openfgaServer:   openfgaServer,
-		licensing:       licensing,
-		store:           sqlauthzstore.NewSqlAuthzStore(sqlstore),
-		registry:        registry,
-		config:          config,
+		pkgAuthzService:    pkgAuthzService,
+		openfgaServer:      openfgaServer,
+		licensing:          licensing,
+		store:              sqlauthzstore.NewSqlAuthzStore(sqlstore),
+		registry:           registry,
+		settings:           scopedSettings,
+		onBeforeRoleDelete: onBeforeRoleDelete,
 	}, nil
 }
@@ -78,14 +86,40 @@ func (provider *provider) BatchCheck(ctx context.Context, tupleReq map[string]*o
 	return provider.openfgaServer.BatchCheck(ctx, tupleReq)
 }
 
-func (provider *provider) ListObjects(ctx context.Context, subject string, relation authtypes.Relation, typeable authtypes.Typeable) ([]*authtypes.Object, error) {
-	return provider.openfgaServer.ListObjects(ctx, subject, relation, typeable)
+func (provider *provider) CheckTransactions(ctx context.Context, subject string, orgID valuer.UUID, transactions []*authtypes.Transaction) ([]*authtypes.TransactionWithAuthorization, error) {
+	tuples, err := authtypes.NewTuplesFromTransactions(transactions, subject, orgID)
+	if err != nil {
+		return nil, err
+	}
+
+	batchResults, err := provider.openfgaServer.BatchCheck(ctx, tuples)
+	if err != nil {
+		return nil, err
+	}
+
+	results := make([]*authtypes.TransactionWithAuthorization, len(transactions))
+	for i, txn := range transactions {
+		result := batchResults[txn.ID.StringValue()]
+		results[i] = &authtypes.TransactionWithAuthorization{
+			Transaction: txn,
+			Authorized:  result.Authorized,
+		}
+	}
+
+	return results, nil
+}
+
+func (provider *provider) ListObjects(ctx context.Context, subject string, relation authtypes.Relation, objectType authtypes.Type) ([]*authtypes.Object, error) {
+	return provider.openfgaServer.ListObjects(ctx, subject, relation, objectType)
 }
 
 func (provider *provider) Write(ctx context.Context, additions []*openfgav1.TupleKey, deletions []*openfgav1.TupleKey) error {
 	return provider.openfgaServer.Write(ctx, additions, deletions)
 }
+
+func (provider *provider) ReadTuples(ctx context.Context, tupleKey *openfgav1.ReadRequestTupleKey) ([]*openfgav1.TupleKey, error) {
+	return provider.openfgaServer.ReadTuples(ctx, tupleKey)
+}
 
 func (provider *provider) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*authtypes.Role, error) {
 	return provider.pkgAuthzService.Get(ctx, orgID, id)
 }
@@ -146,7 +180,7 @@ func (provider *provider) Create(ctx context.Context, orgID valuer.UUID, role *a
 		return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
 	}
 
-	return provider.store.Create(ctx, authtypes.NewStorableRoleFromRole(role))
+	return provider.store.Create(ctx, role)
 }
 
 func (provider *provider) GetOrCreate(ctx context.Context, orgID valuer.UUID, role *authtypes.Role) (*authtypes.Role, error) {
@@ -163,10 +197,10 @@ func (provider *provider) GetOrCreate(ctx context.Context, orgID valuer.UUID, ro
 	}
 
 	if existingRole != nil {
-		return authtypes.NewRoleFromStorableRole(existingRole), nil
+		return existingRole, nil
 	}
 
-	err = provider.store.Create(ctx, authtypes.NewStorableRoleFromRole(role))
+	err = provider.store.Create(ctx, role)
 	if err != nil {
 		return nil, err
 	}
@@ -175,14 +209,13 @@ func (provider *provider) GetOrCreate(ctx context.Context, orgID valuer.UUID, ro
 }
 
 func (provider *provider) GetResources(_ context.Context) []*authtypes.Resource {
-	typeables := make([]authtypes.Typeable, 0)
-	for _, register := range provider.registry {
-		typeables = append(typeables, register.MustGetTypeables()...)
-	}
-	typeables = append(typeables, provider.MustGetTypeables()...)
-
 	resources := make([]*authtypes.Resource, 0)
-	for _, typeable := range typeables {
+	for _, register := range provider.registry {
+		for _, typeable := range register.MustGetTypeables() {
+			resources = append(resources, &authtypes.Resource{Name: typeable.Name(), Type: typeable.Type()})
+		}
+	}
+	for _, typeable := range provider.MustGetTypeables() {
 		resources = append(resources, &authtypes.Resource{Name: typeable.Name(), Type: typeable.Type()})
 	}
@@ -201,21 +234,23 @@ func (provider *provider) GetObjects(ctx context.Context, orgID valuer.UUID, id
 	}
 
 	objects := make([]*authtypes.Object, 0)
-	for _, resource := range provider.GetResources(ctx) {
-		if slices.Contains(authtypes.TypeableRelations[resource.Type], relation) {
-			resourceObjects, err := provider.
-				ListObjects(
-					ctx,
-					authtypes.MustNewSubject(authtypes.TypeableRole, storableRole.Name, orgID, &authtypes.RelationAssignee),
-					relation,
-					authtypes.MustNewTypeableFromType(resource.Type, resource.Name),
-				)
-			if err != nil {
-				return nil, err
-			}
-			objects = append(objects, resourceObjects...)
+	for _, objectType := range provider.getUniqueTypes() {
+		if !slices.Contains(authtypes.TypeableRelations[objectType], relation) {
+			continue
+		}
+
+		resourceObjects, err := provider.
+			ListObjects(
+				ctx,
+				authtypes.MustNewSubject(authtypes.TypeableRole, storableRole.Name, orgID, &authtypes.RelationAssignee),
+				relation,
+				objectType,
+			)
+		if err != nil {
+			return nil, err
+		}
+
+		objects = append(objects, resourceObjects...)
 	}
 
 	return objects, nil
@@ -227,7 +262,7 @@ func (provider *provider) Patch(ctx context.Context, orgID valuer.UUID, role *au
 		return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
 	}
 
-	return provider.store.Update(ctx, orgID, authtypes.NewStorableRoleFromRole(role))
+	return provider.store.Update(ctx, orgID, role)
 }
 
 func (provider *provider) PatchObjects(ctx context.Context, orgID valuer.UUID, name string, relation authtypes.Relation, additions, deletions []*authtypes.Object) error {
@@ -260,17 +295,26 @@ func (provider *provider) Delete(ctx context.Context, orgID valuer.UUID, id valu
 		return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
 	}
 
-	storableRole, err := provider.store.Get(ctx, orgID, id)
+	role, err := provider.store.Get(ctx, orgID, id)
 	if err != nil {
 		return err
 	}
 
-	role := authtypes.NewRoleFromStorableRole(storableRole)
 	err = role.ErrIfManaged()
 	if err != nil {
 		return err
 	}
 
+	for _, cb := range provider.onBeforeRoleDelete {
+		if err := cb(ctx, orgID, id); err != nil {
+			return err
+		}
+	}
+
+	if err := provider.deleteTuples(ctx, role.Name, orgID); err != nil {
+		return errors.WithAdditionalf(err, "failed to delete tuples for the role: %s", role.Name)
+	}
+
 	return provider.store.Delete(ctx, orgID, id)
 }
@@ -346,3 +390,62 @@ func (provider *provider) getManagedRoleTransactionTuples(orgID valuer.UUID) ([]
 	return tuples, nil
 }
+
+func (provider *provider) deleteTuples(ctx context.Context, roleName string, orgID valuer.UUID) error {
+	subject := authtypes.MustNewSubject(authtypes.TypeableRole, roleName, orgID, &authtypes.RelationAssignee)
+
+	tuples := make([]*openfgav1.TupleKey, 0)
+	for _, objectType := range provider.getUniqueTypes() {
+		typeTuples, err := provider.ReadTuples(ctx, &openfgav1.ReadRequestTupleKey{
+			User:   subject,
+			Object: objectType.StringValue() + ":",
+		})
+		if err != nil {
+			return err
+		}
+		tuples = append(tuples, typeTuples...)
+	}
+
+	if len(tuples) == 0 {
+		return nil
+	}
+
+	for idx := 0; idx < len(tuples); idx += provider.config.OpenFGA.MaxTuplesPerWrite {
+		end := idx + provider.config.OpenFGA.MaxTuplesPerWrite
+		if end > len(tuples) {
+			end = len(tuples)
+		}
+
+		err := provider.Write(ctx, nil, tuples[idx:end])
+		if err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func (provider *provider) getUniqueTypes() []authtypes.Type {
+	seen := make(map[string]struct{})
+	uniqueTypes := make([]authtypes.Type, 0)
+
+	for _, register := range provider.registry {
+		for _, typeable := range register.MustGetTypeables() {
+			typeKey := typeable.Type().StringValue()
+			if _, ok := seen[typeKey]; ok {
+				continue
+			}
+			seen[typeKey] = struct{}{}
+			uniqueTypes = append(uniqueTypes, typeable.Type())
+		}
+	}
+
+	for _, typeable := range provider.MustGetTypeables() {
+		typeKey := typeable.Type().StringValue()
+		if _, ok := seen[typeKey]; ok {
+			continue
+		}
+		seen[typeKey] = struct{}{}
+		uniqueTypes = append(uniqueTypes, typeable.Type())
+	}
+
+	return uniqueTypes
+}

View File

@@ -110,10 +110,14 @@ func (server *Server) BatchCheck(ctx context.Context, tupleReq map[string]*openf
 	return server.pkgAuthzService.BatchCheck(ctx, tupleReq)
 }
 
-func (server *Server) ListObjects(ctx context.Context, subject string, relation authtypes.Relation, typeable authtypes.Typeable) ([]*authtypes.Object, error) {
-	return server.pkgAuthzService.ListObjects(ctx, subject, relation, typeable)
+func (server *Server) ListObjects(ctx context.Context, subject string, relation authtypes.Relation, objectType authtypes.Type) ([]*authtypes.Object, error) {
+	return server.pkgAuthzService.ListObjects(ctx, subject, relation, objectType)
 }
 
 func (server *Server) Write(ctx context.Context, additions []*openfgav1.TupleKey, deletions []*openfgav1.TupleKey) error {
 	return server.pkgAuthzService.Write(ctx, additions, deletions)
 }
+
+func (server *Server) ReadTuples(ctx context.Context, tupleKey *openfgav1.ReadRequestTupleKey) ([]*openfgav1.TupleKey, error) {
+	return server.pkgAuthzService.ReadTuples(ctx, tupleKey)
+}

View File

@@ -1,29 +0,0 @@
# SigNoz E2E Test Plan
This directory contains the structured test plan for the SigNoz application. Each subfolder corresponds to a main module or feature area, and contains scenario files for all user journeys, edge cases, and cross-module flows. These documents serve as the basis for generating Playwright MCP-driven E2E tests.
## Structure
- Each main module (e.g., logs, traces, dashboards, alerts, settings, etc.) has its own folder or markdown file.
- Each file contains detailed scenario templates, including preconditions, step-by-step actions, and expected outcomes.
- Use these documents to write, review, and update test cases as the application evolves.
## Folders & Files
- `logs/` — Logs module scenarios
- `traces/` — Traces module scenarios
- `metrics/` — Metrics module scenarios
- `dashboards/` — Dashboards module scenarios
- `alerts/` — Alerts module scenarios
- `services/` — Services module scenarios
- `settings/` — Settings and all sub-settings scenarios
- `onboarding/` — Onboarding and signup flows
- `navigation/` — Navigation, sidebar, and cross-module flows
- `exceptions/` — Exception and error handling scenarios
- `external-apis/` — External API monitoring scenarios
- `messaging-queues/` — Messaging queue scenarios
- `infrastructure/` — Infrastructure monitoring scenarios
- `help-support/` — Help & support scenarios
- `user-preferences/` — User preferences and personalization scenarios
- `service-map/` — Service map scenarios
- `saved-views/` — Saved views scenarios

View File

@@ -1,16 +0,0 @@
# Settings Module Test Plan
This folder contains E2E test scenarios for the Settings module and all sub-settings.
## Scenario Categories
- General settings (org/workspace, branding, version info)
- Billing settings
- Members & SSO
- Custom domain
- Integrations
- Notification channels
- API keys
- Ingestion
- Account settings (profile, password, preferences)
- Keyboard shortcuts

View File

@@ -1,43 +0,0 @@
# Account Settings E2E Scenarios (Updated)
## 1. Update Name
- **Precondition:** User is logged in
- **Steps:**
1. Click 'Update name' button
2. Edit name field in the modal/dialog
3. Save changes
- **Expected:** Name is updated in the UI
## 2. Update Email
- **Note:** The email field is not editable in the current UI.
## 3. Reset Password
- **Precondition:** User is logged in
- **Steps:**
1. Click 'Reset password' button
2. Complete reset flow (modal/dialog or external flow)
- **Expected:** Password is reset
## 4. Toggle 'Adapt to my timezone'
- **Precondition:** User is logged in
- **Steps:**
1. Toggle 'Adapt to my timezone' switch
- **Expected:** Timezone adapts accordingly (UI feedback/confirmation should be checked)
## 5. Toggle Theme (Dark/Light)
- **Precondition:** User is logged in
- **Steps:**
1. Toggle theme radio buttons ('Dark', 'Light Beta')
- **Expected:** Theme changes
## 6. Toggle Sidebar Always Open
- **Precondition:** User is logged in
- **Steps:**
1. Toggle 'Keep the primary sidebar always open' switch
- **Expected:** Sidebar remains open/closed as per toggle

View File

@@ -1,26 +0,0 @@
# API Keys E2E Scenarios (Updated)
## 1. Create a New API Key
- **Precondition:** User is admin
- **Steps:**
1. Click 'New Key' button
2. Enter details in the modal/dialog
3. Click 'Save'
- **Expected:** API key is created and listed in the table
## 2. Revoke an API Key
- **Precondition:** API key exists
- **Steps:**
1. In the table, locate the API key row
2. Click the revoke/delete button (icon button in the Action column)
3. Confirm if prompted
- **Expected:** API key is revoked/removed from the table
## 3. View API Key Usage
- **Precondition:** API key exists
- **Steps:**
1. View the 'Last used' and 'Expired' columns in the table
- **Expected:** Usage data is displayed for each API key

View File

@@ -1,17 +0,0 @@
# Billing Settings E2E Scenarios (Updated)
## 1. View Billing Information
- **Precondition:** User is admin
- **Steps:**
1. Navigate to Billing Settings
2. Wait for the billing chart/data to finish loading
- **Expected:**
- Billing heading and subheading are displayed
- Usage/cost table is visible with columns: Unit, Data Ingested, Price per Unit, Cost (Billing period to date)
- "Download CSV" and "Manage Billing" buttons are present and enabled after loading
- Test clicking "Download CSV" and "Manage Billing" for expected behavior (e.g., file download, navigation, or modal)
> Note: If these features are expected to trigger specific flows, document the observed behavior for each button.

View File

@@ -1,18 +0,0 @@
# Custom Domain E2E Scenarios (Updated)
## 1. Add or Update Custom Domain
- **Precondition:** User is admin
- **Steps:**
1. Click 'Customize teams URL' button
2. In the 'Customize your teams URL' dialog, enter the preferred subdomain
3. Click 'Apply Changes'
- **Expected:** Domain is set/updated for the team (UI feedback/confirmation should be checked)
## 2. Verify Domain Ownership
- **Note:** No explicit 'Verify' button or flow is present in the current UI. If verification is required, it may be handled automatically or via support.
## 3. Remove a Custom Domain
- **Note:** No explicit 'Remove' button or flow is present in the current UI. The only available action is to update the subdomain.

View File

@@ -1,31 +0,0 @@
# General Settings E2E Scenarios
## 1. View General Settings
- **Precondition:** User is logged in
- **Steps:**
1. Navigate to General Settings
- **Expected:** General settings are displayed
## 2. Update Organization/Workspace Name
- **Precondition:** User is admin
- **Steps:**
1. Edit organization/workspace name
2. Save changes
- **Expected:** Name is updated and visible
## 3. Update Logo or Branding
- **Precondition:** User is admin
- **Steps:**
1. Upload new logo/branding
2. Save changes
- **Expected:** Branding is updated
## 4. View Version/Build Info
- **Precondition:** User is logged in
- **Steps:**
1. View version/build info section
- **Expected:** Version/build info is displayed

View File

@@ -1,20 +0,0 @@
# Ingestion E2E Scenarios (Updated)
## 1. View Ingestion Sources
- **Precondition:** User is admin
- **Steps:**
1. Navigate to the Integrations page
- **Expected:** List of available data sources/integrations is displayed
## 2. Configure Ingestion Sources
- **Precondition:** User is admin
- **Steps:**
1. Click 'Configure' for a data source/integration
2. Complete the configuration flow (modal or page, as available)
- **Expected:** Source is configured (UI feedback/confirmation should be checked)
## 3. Disable/Enable Ingestion
- **Note:** No visible enable/disable toggle for ingestion sources in the current UI. Ingestion is managed via the Integrations configuration flows.

View File

@@ -1,51 +0,0 @@
# Integrations E2E Scenarios (Updated)
## 1. View List of Available Integrations
- **Precondition:** User is logged in
- **Steps:**
1. Navigate to Integrations
- **Expected:** List of integrations is displayed, each with a name, description, and 'Configure' button
## 2. Search Integrations by Name/Type
- **Precondition:** Integrations exist
- **Steps:**
1. Enter search/filter criteria in the 'Search for an integration...' box
- **Expected:** Only matching integrations are shown
## 3. Connect a New Integration
- **Precondition:** User is admin
- **Steps:**
1. Click 'Configure' for an integration
2. Complete the configuration flow (modal or page, as available)
- **Expected:** Integration is connected/configured (UI feedback/confirmation should be checked)
## 4. Disconnect an Integration
- **Note:** No visible 'Disconnect' button in the main list. This may be available in the configuration flow for a connected integration.
## 5. Configure Integration Settings
- **Note:** Configuration is handled in the flow after clicking 'Configure' for an integration.
## 6. Test Integration Connection
- **Note:** No visible 'Test Connection' button in the main list. This may be available in the configuration flow.
## 7. View Integration Status/Logs
- **Note:** No visible status/logs section in the main list. This may be available in the configuration flow.
## 8. Filter Integrations by Category
- **Note:** No explicit category filter in the current UI, only a search box.
## 9. View Integration Documentation/Help
- **Note:** No visible 'Help/Docs' button in the main list. This may be available in the configuration flow.
## 10. Update Integration Configuration
- **Note:** Configuration is handled in the flow after clicking 'Configure' for an integration.

View File

@@ -1,19 +0,0 @@
# Keyboard Shortcuts E2E Scenarios (Updated)
## 1. View Keyboard Shortcuts
- **Precondition:** User is logged in
- **Steps:**
1. Navigate to Keyboard Shortcuts
- **Expected:** Shortcuts are displayed in categorized tables (Global, Logs Explorer, Query Builder, Dashboard)
## 2. Customize Keyboard Shortcuts (if supported)
- **Note:** Customization is not available in the current UI. Shortcuts are view-only.
## 3. Use Keyboard Shortcuts for Navigation/Actions
- **Precondition:** User is logged in
- **Steps:**
1. Use shortcut for navigation/action (e.g., shift+s for Services, cmd+enter for running query)
- **Expected:** Navigation/action is performed as per shortcut

View File

@@ -1,49 +0,0 @@
# Members & SSO E2E Scenarios (Updated)
## 1. Invite a New Member
- **Precondition:** User is admin
- **Steps:**
1. Click 'Invite Members' button
2. In the 'Invite team members' dialog, enter email address, name (optional), and select role
3. (Optional) Click 'Add another team member' to invite more
4. Click 'Invite team members' to send invite(s)
- **Expected:** Pending invite appears in the 'Pending Invites' table
## 2. Remove a Member
- **Precondition:** User is admin, member exists
- **Steps:**
1. In the 'Members' table, locate the member row
2. Click 'Delete' in the Action column
3. Confirm removal if prompted
- **Expected:** Member is removed from the table
## 3. Update Member Roles
- **Precondition:** User is admin, member exists
- **Steps:**
1. In the 'Members' table, locate the member row
2. Click 'Edit' in the Action column
3. Change role in the edit dialog/modal
4. Save changes
- **Expected:** Member role is updated in the table
## 4. Configure SSO
- **Precondition:** User is admin
- **Steps:**
1. In the 'Authenticated Domains' section, locate the domain row
2. Click 'Configure SSO' or 'Edit Google Auth' as available
3. Complete SSO provider configuration in the modal/dialog
4. Save settings
- **Expected:** SSO is configured for the domain
## 5. Login via SSO
- **Precondition:** SSO is configured
- **Steps:**
1. Log out from the app
2. On the login page, click 'Login with SSO'
3. Complete SSO login flow
- **Expected:** User is logged in via SSO

@@ -1,39 +0,0 @@
# Notification Channels E2E Scenarios (Updated)
## 1. Add a New Notification Channel
- **Precondition:** User is admin
- **Steps:**
1. Click 'New Alert Channel' button
2. In the 'New Notification Channel' form, fill in required fields (Name, Type, Webhook URL, etc.)
3. (Optional) Toggle 'Send resolved alerts'
4. (Optional) Click 'Test' to send a test notification
5. Click 'Save' to add the channel
- **Expected:** Channel is added and listed in the table
## 2. Test Notification Channel
- **Precondition:** Channel is being created or edited
- **Steps:**
1. In the 'New Notification Channel' or 'Edit Notification Channel' form, click 'Test'
- **Expected:** Test notification is sent (UI feedback/confirmation should be checked)
## 3. Remove a Notification Channel
- **Precondition:** Channel is added
- **Steps:**
1. In the table, locate the channel row
2. Click 'Delete' in the Action column
3. Confirm removal if prompted
- **Expected:** Channel is removed from the table
## 4. Update Notification Channel Settings
- **Precondition:** Channel is added
- **Steps:**
1. In the table, locate the channel row
2. Click 'Edit' in the Action column
3. In the 'Edit Notification Channel' form, update fields as needed
4. (Optional) Click 'Test' to send a test notification
5. Click 'Save' to update the channel
- **Expected:** Settings are updated

@@ -1,199 +0,0 @@
# SigNoz Test Plan Validation Report
This report documents the validation of the E2E test plan against the current live application using Playwright MCP. Each module is reviewed for coverage, gaps, and required updates.
---
## Home Module
- **Coverage:**
- Widgets for logs, traces, metrics, dashboards, alerts, services, saved views, onboarding checklist
- Quick access buttons: Explore Logs, Create dashboard, Create an alert
- **Gaps/Updates:**
- Add scenarios for checklist interactions (e.g., “I'll do this later”, progress tracking)
- Add scenarios for Saved Views and cross-module links
- Add scenario for onboarding checklist completion
---
## Logs Module
- **Coverage:**
- Explorer, Pipelines, Views tabs
- Filtering by service, environment, severity, host, k8s, etc.
- Search, save view, create alert, add to dashboard, export, view mode switching
- **Gaps/Updates:**
- Add scenario for quick filter customization
- Add scenario for “Old Explorer” button
- Add scenario for frequency chart toggle
- Add scenario for “Stage & Run Query” workflow
---
## Traces Module
- **Coverage:**
- Tabs: Explorer, Funnels, Views
- Filtering by name, error status, duration, environment, function, service, RPC, status code, HTTP, trace ID, etc.
- Search, save view, create alert, add to dashboard, export, view mode switching (List, Traces, Time Series, Table)
- Pagination, quick filter customization, group by, aggregation
- **Gaps/Updates:**
- Add scenario for quick filter customization
- Add scenario for “Stage & Run Query” workflow
- Add scenario for all view modes (List, Traces, Time Series, Table)
- Add scenario for group by/aggregation
- Add scenario for trace detail navigation (clicking on trace row)
- Add scenario for Funnels tab (create/edit/delete funnel)
- Add scenario for Views tab (manage saved views)
---
## Metrics Module
- **Coverage:**
- Tabs: Summary, Explorer, Views
- Filtering by metric, type, unit, etc.
- Search, save view, add to dashboard, export, view mode switching (chart, table, proportion view)
- Pagination, group by, aggregation, custom queries
- **Gaps/Updates:**
- Add scenario for Proportion View in Summary
- Add scenario for all view modes (chart, table, proportion)
- Add scenario for group by/aggregation
- Add scenario for custom queries in Explorer
- Add scenario for Views tab (manage saved views)
---
## Dashboards Module
- **Coverage:**
- List, search, and filter dashboards
- Create new dashboard (button and template link)
- Edit, delete, and view dashboard details
- Add/edit/delete widgets (implied by dashboard detail)
- Pagination through dashboards
- **Gaps/Updates:**
- Add scenario for browsing dashboard templates (external link)
- Add scenario for requesting new template
- Add scenario for dashboard owner and creation info
- Add scenario for dashboard tags and filtering by tags
- Add scenario for dashboard sharing (if available)
- Add scenario for dashboard image/preview
---
## Messaging Queues Module
- **Coverage:**
- Overview tab: queue metrics, filters (Service Name, Span Name, Msg System, Destination, Kind)
- Search across all columns
- Pagination of queue data
- Sync and Share buttons
- Tabs for Kafka and Celery
- **Gaps/Updates:**
- Add scenario for Kafka tab (detailed metrics, actions)
- Add scenario for Celery tab (detailed metrics, actions)
- Add scenario for filter combinations and edge cases
- Add scenario for sharing queue data
- Add scenario for time range selection
---
## External APIs Module
- **Coverage:**
- Accessed via side navigation under MORE
- Explorer tab: domain, endpoints, last used, rate, error %, avg. latency
- Filters: Deployment Environment, Service Name, Rpc Method, Show IP addresses
- Table pagination
- Share and Stage & Run Query buttons
- **Gaps/Updates:**
- Add scenario for customizing quick filters
- Add scenario for running and staging queries
- Add scenario for sharing API data
- Add scenario for edge cases in filters and table data
---
## Alerts Module
- **Coverage:**
- Alert Rules tab: list, search, create (New Alert), edit, delete, enable/disable, severity, labels, actions
- Triggered Alerts tab (visible in tablist)
- Configuration tab (visible in tablist)
- Table pagination
- **Gaps/Updates:**
- Add scenario for triggered alerts (view, acknowledge, resolve)
- Add scenario for alert configuration (settings, integrations)
- Add scenario for edge cases in alert creation and management
- Add scenario for searching and filtering alerts
---
## Integrations Module
- **Coverage:**
- Integrations tab: list, search, configure (e.g., AWS), request new integration
- One-click setup for AWS monitoring
- Request more integrations (form)
- **Gaps/Updates:**
- Add scenario for configuring integrations (step-by-step)
- Add scenario for searching and filtering integrations
- Add scenario for requesting new integrations
- Add scenario for edge cases (e.g., failed configuration)
---
## Exceptions Module
- **Coverage:**
- All Exceptions: list, search, filter (Deployment Environment, Service Name, Host Name, K8s Cluster/Deployment/Namespace, Net Peer Name)
- Table: Exception Type, Error Message, Count, Last Seen, First Seen, Application
- Pagination
- Exception detail links
- Share and Stage & Run Query buttons
- **Gaps/Updates:**
- Add scenario for exception detail view
- Add scenario for advanced filtering and edge cases
- Add scenario for sharing and running queries
- Add scenario for error grouping and navigation
---
## Service Map Module
- **Coverage:**
- Service Map visualization (main graph)
- Filters: environment, resource attributes
- Time range selection
- Sync and Share buttons
- **Gaps/Updates:**
- Add scenario for interacting with the map (zoom, pan, select service)
- Add scenario for filtering and edge cases
- Add scenario for sharing the map
- Add scenario for time range and environment combinations
---
## Billing Module
- **Coverage:**
- Billing overview: cost monitoring, invoices, CSV download (disabled), manage billing (disabled)
- Teams Cloud section
- Billing table: Unit, Data Ingested, Price per Unit, Cost (Billing period to date)
- **Gaps/Updates:**
- Add scenario for invoice download and management (when enabled)
- Add scenario for cost monitoring and edge cases
- Add scenario for billing table data validation
- Add scenario for permissions and access control
---
## Usage Explorer Module
- **Status:**
- Not accessible in the current environment; removed from the test plan flows.
---
## [Next modules will be filled as validation proceeds]

@@ -1,42 +0,0 @@
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../../utils/login.util';
test('Account Settings - View and Assert Static Controls', async ({ page }) => {
await ensureLoggedIn(page);
// 1. Open the sidebar settings menu using data-testid
await page.getByTestId('settings-nav-item').click();
// 2. Click Account Settings in the dropdown (by role/name or data-testid if available)
await page.getByRole('menuitem', { name: 'Account Settings' }).click();
// Assert the main tabpanel/heading (confirmed by DOM)
await expect(page.getByTestId('settings-page-title')).toBeVisible();
// Assert General section and controls (confirmed by DOM)
await expect(
page.getByLabel('My Settings').getByText('General'),
).toBeVisible();
await expect(page.getByText('Manage your account settings.')).toBeVisible();
await expect(page.getByRole('button', { name: 'Update name' })).toBeVisible();
await expect(
page.getByRole('button', { name: 'Reset password' }),
).toBeVisible();
// Assert User Preferences section and controls (confirmed by DOM)
await expect(page.getByText('User Preferences')).toBeVisible();
await expect(
page.getByText('Tailor the SigNoz console to work according to your needs.'),
).toBeVisible();
await expect(page.getByText('Select your theme')).toBeVisible();
const themeSelector = page.getByTestId('theme-selector');
await expect(themeSelector.getByText('Dark')).toBeVisible();
await expect(themeSelector.getByText('Light')).toBeVisible();
await expect(themeSelector.getByText('System')).toBeVisible();
await expect(page.getByTestId('timezone-adaptation-switch')).toBeVisible();
await expect(page.getByTestId('side-nav-pinned-switch')).toBeVisible();
});

@@ -1,42 +0,0 @@
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../../utils/login.util';
test('API Keys Settings - View and Interact', async ({ page }) => {
await ensureLoggedIn(page);
// 1. Open the sidebar settings menu using data-testid
await page.getByTestId('settings-nav-item').click();
// 2. Click Account Settings in the dropdown (by role/name or data-testid if available)
await page.getByRole('menuitem', { name: 'Account Settings' }).click();
// Assert the main tabpanel/heading (confirmed by DOM)
await expect(page.getByTestId('settings-page-title')).toBeVisible();
// Focus on the settings page sidenav
await page.getByTestId('settings-page-sidenav').focus();
// Click API Keys tab in the settings sidebar (by data-testid)
await page.getByTestId('api-keys').click();
// Assert heading and subheading
await expect(page.getByRole('heading', { name: 'API Keys' })).toBeVisible();
await expect(
page.getByText('Create and manage API keys for the SigNoz API'),
).toBeVisible();
// Assert presence of New Key button
const newKeyBtn = page.getByRole('button', { name: 'New Key' });
await expect(newKeyBtn).toBeVisible();
// Assert table columns
await expect(page.getByText('Last used').first()).toBeVisible();
await expect(page.getByText('Expired').first()).toBeVisible();
// Assert at least one API key row with action buttons
// Select the first action cell's first button (icon button)
const firstActionCell = page.locator('table tr').nth(1).locator('td').last();
const deleteBtn = firstActionCell.locator('button').first();
await expect(deleteBtn).toBeVisible();
});

@@ -1,71 +0,0 @@
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../../utils/login.util';
// E2E: Billing Settings - View Billing Information and Button Actions
test('View Billing Information and Button Actions', async ({
page,
context,
}) => {
// Ensure user is logged in
await ensureLoggedIn(page);
// 1. Open the sidebar settings menu using data-testid
await page.getByTestId('settings-nav-item').click();
// 2. Click Account Settings in the dropdown (by role/name or data-testid if available)
await page.getByRole('menuitem', { name: 'Account Settings' }).click();
// Assert the main tabpanel/heading (confirmed by DOM)
await expect(page.getByTestId('settings-page-title')).toBeVisible();
// Focus on the settings page sidenav
await page.getByTestId('settings-page-sidenav').focus();
// Click Billing tab in the settings sidebar (by data-testid)
await page.getByTestId('billing').click();
// Wait for billing chart/data to finish loading
await page.getByText('loading').first().waitFor({ state: 'hidden' });
// Assert visibility of subheading (unique)
await expect(
page.getByText(
'Manage your billing information, invoices, and monitor costs.',
),
).toBeVisible();
// Assert visibility of Teams Cloud heading
await expect(page.getByRole('heading', { name: 'Teams Cloud' })).toBeVisible();
// Assert presence of summary and detailed tables
await expect(page.getByText('TOTAL SPENT')).toBeVisible();
await expect(page.getByText('Data Ingested')).toBeVisible();
await expect(page.getByText('Price per Unit')).toBeVisible();
await expect(page.getByText('Cost (Billing period to date)')).toBeVisible();
// Assert presence of alert and note
await expect(
page.getByText('Your current billing period is from', { exact: false }),
).toBeVisible();
await expect(
page.getByText('Billing metrics are updated once every 24 hours.'),
).toBeVisible();
// Test Download CSV button
const [download] = await Promise.all([
page.waitForEvent('download'),
page.getByRole('button', { name: 'cloud-download Download CSV' }).click(),
]);
// Optionally, check download file name
expect(download.suggestedFilename()).toContain('billing_usage');
// Test Manage Billing button (opens Stripe in new tab)
const [newPage] = await Promise.all([
context.waitForEvent('page'),
page.getByTestId('header-billing-button').click(),
]);
await newPage.waitForLoadState();
expect(newPage.url()).toContain('stripe.com');
await newPage.close();
});

@@ -1,52 +0,0 @@
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../../utils/login.util';
test('Custom Domain Settings - View and Interact', async ({ page }) => {
await ensureLoggedIn(page);
// 1. Open the sidebar settings menu using data-testid
await page.getByTestId('settings-nav-item').click();
// 2. Click Account Settings in the dropdown (by role/name or data-testid if available)
await page.getByRole('menuitem', { name: 'Account Settings' }).click();
// Assert the main tabpanel/heading (confirmed by DOM)
await expect(page.getByTestId('settings-page-title')).toBeVisible();
// Focus on the settings page sidenav
await page.getByTestId('settings-page-sidenav').focus();
// Click Custom Domain tab in the settings sidebar (by data-testid)
await page.getByTestId('custom-domain').click();
// Wait for custom domain chart/data to finish loading
await page.getByText('loading').first().waitFor({ state: 'hidden' });
// Assert heading and subheading
await expect(
page.getByRole('heading', { name: 'Custom Domain Settings' }),
).toBeVisible();
await expect(
page.getByText('Personalize your workspace domain effortlessly.'),
).toBeVisible();
// Assert presence of Customize teams URL button
const customizeBtn = page.getByRole('button', {
name: 'Customize teams URL',
});
await expect(customizeBtn).toBeVisible();
await customizeBtn.click();
// Assert modal/dialog fields and buttons
await expect(
page.getByRole('dialog', { name: 'Customize your teams URL' }),
).toBeVisible();
await expect(page.getByLabel('Teams URL subdomain')).toBeVisible();
await expect(
page.getByRole('button', { name: 'Apply Changes' }),
).toBeVisible();
await expect(page.getByRole('button', { name: 'Close' })).toBeVisible();
// Close the modal
await page.getByRole('button', { name: 'Close' }).click();
});

@@ -1,32 +0,0 @@
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../../utils/login.util';
test('View General Settings', async ({ page }) => {
await ensureLoggedIn(page);
// 1. Open the sidebar settings menu using data-testid
await page.getByTestId('settings-nav-item').click();
// 2. Click Account Settings in the dropdown (by role/name or data-testid if available)
await page.getByRole('menuitem', { name: 'Account Settings' }).click();
// Assert the main tabpanel/heading (confirmed by DOM)
await expect(page.getByTestId('settings-page-title')).toBeVisible();
// Focus on the settings page sidenav
await page.getByTestId('settings-page-sidenav').focus();
// Click General tab in the settings sidebar (by data-testid)
await page.getByTestId('general').click();
// Wait for General tab to be visible
await page.getByRole('tabpanel', { name: 'General' }).waitFor();
// Assert visibility of definitive/static elements
await expect(page.getByRole('heading', { name: 'Metrics' })).toBeVisible();
await expect(page.getByRole('heading', { name: 'Traces' })).toBeVisible();
await expect(page.getByRole('heading', { name: 'Logs' })).toBeVisible();
await expect(page.getByText('Please')).toBeVisible();
await expect(page.getByRole('link', { name: 'email us' })).toBeVisible();
});

@@ -1,48 +0,0 @@
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../../utils/login.util';
test('Ingestion Settings - View and Interact', async ({ page }) => {
await ensureLoggedIn(page);
// 1. Open the sidebar settings menu using data-testid
await page.getByTestId('settings-nav-item').click();
// 2. Click Account Settings in the dropdown (by role/name or data-testid if available)
await page.getByRole('menuitem', { name: 'Account Settings' }).click();
// Assert the main tabpanel/heading (confirmed by DOM)
await expect(page.getByTestId('settings-page-title')).toBeVisible();
// Focus on the settings page sidenav
await page.getByTestId('settings-page-sidenav').focus();
// Click Ingestion tab in the settings sidebar (by data-testid)
await page.getByTestId('ingestion').click();
// Assert heading and subheading (Integrations page)
await expect(
page.getByRole('heading', { name: 'Integrations' }),
).toBeVisible();
await expect(
page.getByText('Manage Integrations for this workspace'),
).toBeVisible();
// Assert presence of search box
await expect(
page.getByPlaceholder('Search for an integration...'),
).toBeVisible();
// Assert at least one data source with Configure button
const configureBtn = page.getByRole('button', { name: 'Configure' }).first();
await expect(configureBtn).toBeVisible();
// Assert Request more integrations section
await expect(
page.getByText(
"Can't find what youre looking for? Request more integrations",
),
).toBeVisible();
await expect(page.getByPlaceholder('Enter integration name...')).toBeVisible();
await expect(page.getByRole('button', { name: 'Submit' })).toBeVisible();
});

@@ -1,48 +0,0 @@
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../../utils/login.util';
test('Integrations Settings - View and Interact', async ({ page }) => {
await ensureLoggedIn(page);
// 1. Open the sidebar settings menu using data-testid
await page.getByTestId('settings-nav-item').click();
// 2. Click Account Settings in the dropdown (by role/name or data-testid if available)
await page.getByRole('menuitem', { name: 'Account Settings' }).click();
// Assert the main tabpanel/heading (confirmed by DOM)
await expect(page.getByTestId('settings-page-title')).toBeVisible();
// Focus on the settings page sidenav
await page.getByTestId('settings-page-sidenav').focus();
// Click Integrations tab in the settings sidebar (by data-testid)
await page.getByTestId('integrations').click();
// Assert heading and subheading
await expect(
page.getByRole('heading', { name: 'Integrations' }),
).toBeVisible();
await expect(
page.getByText('Manage Integrations for this workspace'),
).toBeVisible();
// Assert presence of search box
await expect(
page.getByPlaceholder('Search for an integration...'),
).toBeVisible();
// Assert at least one integration with Configure button
const configureBtn = page.getByRole('button', { name: 'Configure' }).first();
await expect(configureBtn).toBeVisible();
// Assert Request more integrations section
await expect(
page.getByText(
"Can't find what youre looking for? Request more integrations",
),
).toBeVisible();
await expect(page.getByPlaceholder('Enter integration name...')).toBeVisible();
await expect(page.getByRole('button', { name: 'Submit' })).toBeVisible();
});

@@ -1,56 +0,0 @@
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../../utils/login.util';
test('Members & SSO Settings - View and Interact', async ({ page }) => {
await ensureLoggedIn(page);
// 1. Open the sidebar settings menu using data-testid
await page.getByTestId('settings-nav-item').click();
// 2. Click Account Settings in the dropdown (by role/name or data-testid if available)
await page.getByRole('menuitem', { name: 'Account Settings' }).click();
// Assert the main tabpanel/heading (confirmed by DOM)
await expect(page.getByTestId('settings-page-title')).toBeVisible();
// Focus on the settings page sidenav
await page.getByTestId('settings-page-sidenav').focus();
// Click Members & SSO tab in the settings sidebar (by data-testid)
await page.getByTestId('members-sso').click();
// Assert headings and tables
await expect(
page.getByRole('heading', { name: /Members \(\d+\)/ }),
).toBeVisible();
await expect(
page.getByRole('heading', { name: /Pending Invites \(\d+\)/ }),
).toBeVisible();
await expect(
page.getByRole('heading', { name: 'Authenticated Domains' }),
).toBeVisible();
// Assert Invite Members button is visible and clickable
const inviteBtn = page.getByRole('button', { name: /Invite Members/ });
await expect(inviteBtn).toBeVisible();
await inviteBtn.click();
// Assert Invite Members modal/dialog appears (modal title is unique)
await expect(page.getByText('Invite team members').first()).toBeVisible();
// Close the modal (use unique 'Close' button)
await page.getByRole('button', { name: 'Close' }).click();
// Assert Edit and Delete buttons are present for at least one member
const editBtn = page.getByRole('button', { name: /Edit/ }).first();
const deleteBtn = page.getByRole('button', { name: /Delete/ }).first();
await expect(editBtn).toBeVisible();
await expect(deleteBtn).toBeVisible();
// Assert Add Domains button is visible
await expect(page.getByRole('button', { name: /Add Domains/ })).toBeVisible();
// Assert Configure SSO or Edit Google Auth button is visible for at least one domain
const ssoBtn = page
.getByRole('button', { name: /Configure SSO|Edit Google Auth/ })
.first();
await expect(ssoBtn).toBeVisible();
});

@@ -1,57 +0,0 @@
import { expect, test } from '@playwright/test';
import { ensureLoggedIn } from '../../../utils/login.util';
test('Notification Channels Settings - View and Interact', async ({ page }) => {
await ensureLoggedIn(page);
// 1. Open the sidebar settings menu using data-testid
await page.getByTestId('settings-nav-item').click();
// 2. Click Account Settings in the dropdown (by role/name or data-testid if available)
await page.getByRole('menuitem', { name: 'Account Settings' }).click();
// Assert the main tabpanel/heading (confirmed by DOM)
await expect(page.getByTestId('settings-page-title')).toBeVisible();
// Focus on the settings page sidenav
await page.getByTestId('settings-page-sidenav').focus();
// Click Notification Channels tab in the settings sidebar (by data-testid)
await page.getByTestId('notification-channels').click();
// Wait for loading to finish
await page.getByText('loading').first().waitFor({ state: 'hidden' });
// Assert presence of New Alert Channel button
const newChannelBtn = page.getByRole('button', { name: /New Alert Channel/ });
await expect(newChannelBtn).toBeVisible();
// Assert table columns
await expect(page.getByText('Name')).toBeVisible();
await expect(page.getByText('Type')).toBeVisible();
await expect(page.getByText('Action')).toBeVisible();
// Click New Alert Channel and assert modal fields/buttons
await newChannelBtn.click();
await expect(
page.getByRole('heading', { name: 'New Notification Channel' }),
).toBeVisible();
await expect(page.getByLabel('Name')).toBeVisible();
await expect(page.getByLabel('Type')).toBeVisible();
await expect(page.getByLabel('Webhook URL')).toBeVisible();
await expect(
page.getByRole('switch', { name: 'Send resolved alerts' }),
).toBeVisible();
await expect(page.getByRole('button', { name: 'Save' })).toBeVisible();
await expect(page.getByRole('button', { name: 'Test' })).toBeVisible();
await expect(page.getByRole('button', { name: 'Back' })).toBeVisible();
// Close modal
await page.getByRole('button', { name: 'Back' }).click();
// Assert Edit and Delete buttons for at least one channel
const editBtn = page.getByRole('button', { name: 'Edit' }).first();
const deleteBtn = page.getByRole('button', { name: 'Delete' }).first();
await expect(editBtn).toBeVisible();
await expect(deleteBtn).toBeVisible();
});

@@ -1,35 +0,0 @@
import { Page } from '@playwright/test';
// Read credentials from environment variables
const username = process.env.LOGIN_USERNAME;
const password = process.env.LOGIN_PASSWORD;
const baseURL = process.env.BASE_URL;
/**
* Ensures the user is logged in. If not, performs the login steps.
* Follows the MCP process step-by-step.
*/
export async function ensureLoggedIn(page: Page): Promise<void> {
// if already in home page, return
if (page.url().includes('/home')) {
return;
}
if (!username || !password || !baseURL) {
throw new Error(
'LOGIN_USERNAME, LOGIN_PASSWORD and BASE_URL environment variables must be set.',
);
}
await page.goto(`${baseURL}/login`);
await page.getByTestId('email').click();
await page.getByTestId('email').fill(username);
await page.getByTestId('initiate_login').click();
await page.getByTestId('password').click();
await page.getByTestId('password').fill(password);
await page.getByRole('button', { name: 'Login' }).click();
await page
.getByText('Hello there, Welcome to your')
.waitFor({ state: 'visible' });
}
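The environment handling above can be factored into a small pure helper so the error message stays in sync with the variables actually read. A minimal sketch — `readLoginConfig` and `LoginConfig` are hypothetical names for illustration, not code from this repo:

```typescript
// Hypothetical extraction of the env validation from ensureLoggedIn.
// Names (readLoginConfig, LoginConfig) are illustrative only.
interface LoginConfig {
	username: string;
	password: string;
	baseURL: string;
}

export function readLoginConfig(
	env: Record<string, string | undefined>,
): LoginConfig {
	const username = env.LOGIN_USERNAME;
	const password = env.LOGIN_PASSWORD;
	const baseURL = env.BASE_URL;
	// Fail fast with a message that names the exact variables consulted.
	if (!username || !password || !baseURL) {
		throw new Error(
			'LOGIN_USERNAME, LOGIN_PASSWORD and BASE_URL environment variables must be set.',
		);
	}
	return { username, password, baseURL };
}
```

Passing `process.env` at the call site keeps the helper testable without touching global state.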

@@ -44,7 +44,6 @@
"@mdx-js/loader": "2.3.0",
"@mdx-js/react": "2.3.0",
"@monaco-editor/react": "^4.3.1",
"@playwright/test": "1.55.1",
"@radix-ui/react-tabs": "1.0.4",
"@radix-ui/react-tooltip": "1.0.7",
"@sentry/react": "8.41.0",

@@ -1,95 +0,0 @@
import { defineConfig, devices } from '@playwright/test';
import dotenv from 'dotenv';
import path from 'path';
// Read from ".env" file.
dotenv.config({ path: path.resolve(__dirname, '.env') });
/**
* Read environment variables from file.
* https://github.com/motdotla/dotenv
*/
// import dotenv from 'dotenv';
// import path from 'path';
// dotenv.config({ path: path.resolve(__dirname, '.env') });
/**
* See https://playwright.dev/docs/test-configuration.
*/
export default defineConfig({
testDir: './e2e/tests',
/* Run tests in files in parallel */
fullyParallel: true,
/* Fail the build on CI if you accidentally left test.only in the source code. */
forbidOnly: !!process.env.CI,
/* Retry on CI only */
retries: process.env.CI ? 2 : 0,
/* Run tests in parallel even in CI - optimized for GitHub Actions free tier */
workers: process.env.CI ? 2 : undefined,
/* Reporter to use. See https://playwright.dev/docs/test-reporters */
reporter: 'html',
/* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
use: {
/* Base URL to use in actions like `await page.goto('/')`. */
baseURL:
process.env.SIGNOZ_E2E_BASE_URL || 'https://app.us.staging.signoz.cloud',
/* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
trace: 'on-first-retry',
colorScheme: 'dark',
locale: 'en-US',
viewport: { width: 1280, height: 720 },
},
/* Configure projects for major browsers */
projects: [
{
name: 'chromium',
use: {
launchOptions: { args: ['--start-maximized'] },
viewport: null,
colorScheme: 'dark',
locale: 'en-US',
baseURL: 'https://app.us.staging.signoz.cloud',
trace: 'on-first-retry',
},
},
{
name: 'firefox',
use: { ...devices['Desktop Firefox'] },
},
{
name: 'webkit',
use: { ...devices['Desktop Safari'] },
},
/* Test against mobile viewports. */
// {
// name: 'Mobile Chrome',
// use: { ...devices['Pixel 5'] },
// },
// {
// name: 'Mobile Safari',
// use: { ...devices['iPhone 12'] },
// },
/* Test against branded browsers. */
// {
// name: 'Microsoft Edge',
// use: { ...devices['Desktop Edge'], channel: 'msedge' },
// },
// {
// name: 'Google Chrome',
// use: { ...devices['Desktop Chrome'], channel: 'chrome' },
// },
],
/* Run your local dev server before starting the tests */
// webServer: {
// command: 'npm run start',
// url: 'http://localhost:3000',
// reuseExistingServer: !process.env.CI,
// },
});
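Note that the `chromium` project above hardcodes `baseURL`, which silently bypasses the `SIGNOZ_E2E_BASE_URL` fallback declared in the shared `use` block. One way to keep the two in sync is to compute the URL once and reference it everywhere — a sketch under that assumption, not the repo's actual config:

```typescript
// Sketch: one source of truth for the base URL, overridable via env.
// The globalThis lookup keeps this runnable without Node type definitions.
const env: Record<string, string | undefined> =
	(globalThis as { process?: { env: Record<string, string | undefined> } })
		.process?.env ?? {};

export const baseURL =
	env.SIGNOZ_E2E_BASE_URL ?? 'https://app.us.staging.signoz.cloud';

export const sharedUse = {
	baseURL,
	colorScheme: 'dark' as const,
	locale: 'en-US',
};
```

Both the top-level `use` and any per-project override would then spread `sharedUse` instead of repeating literals.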

@@ -1,16 +0,0 @@
RULE: All test code for this repo must be generated by following the step-by-step Playwright MCP process as described below.
- You are a playwright test generator.
- You are given a scenario and you need to generate a playwright test for it.
- Use login util if not logged in.
- DO NOT generate test code based on the scenario alone.
- DO run steps one by one using the tools provided by the Playwright MCP.
- Only after all steps are completed, emit a Playwright TypeScript test that uses @playwright/test based on the message history
- Gather correct selectors before writing the test
- DO NOT validate dynamic content in the tests; only validate correctness against static metadata
- Always inspect the DOM at each navigation or interaction step to determine the correct selector for the next action. Do not assume selectors; confirm via inspection before proceeding.
- Assert visibility of definitive/static elements in the UI (such as labels, headings, or section titles) rather than dynamic values or content that may change between runs.
- Save generated test file in the tests directory
- Execute the test file and iterate until the test passes

@@ -471,7 +471,7 @@ export const getObjects = (
signal?: AbortSignal,
) => {
return GeneratedAPIInstance<GetObjects200>({
url: `/api/v1/roles/${id}/relation/${relation}/objects`,
url: `/api/v1/roles/${id}/relations/${relation}/objects`,
method: 'GET',
signal,
});
@@ -481,7 +481,7 @@ export const getGetObjectsQueryKey = ({
id,
relation,
}: GetObjectsPathParameters) => {
return [`/api/v1/roles/${id}/relation/${relation}/objects`] as const;
return [`/api/v1/roles/${id}/relations/${relation}/objects`] as const;
};
export const getGetObjectsQueryOptions = <
@@ -574,7 +574,7 @@ export const patchObjects = (
authtypesPatchableObjectsDTO: BodyType<AuthtypesPatchableObjectsDTO>,
) => {
return GeneratedAPIInstance<string>({
url: `/api/v1/roles/${id}/relation/${relation}/objects`,
url: `/api/v1/roles/${id}/relations/${relation}/objects`,
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
data: authtypesPatchableObjectsDTO,

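The singular-to-plural rename (`/relation/` → `/relations/`) touches every place this path is built. A tiny hypothetical helper — not part of the generated client — illustrates the new shape and centralizes the string:

```typescript
// Hypothetical helper mirroring the renamed (plural) endpoint path.
// Shown only to illustrate the new URL shape; not repo code.
export function roleObjectsUrl(id: string, relation: string): string {
	// Encode both segments so ids/relations with reserved chars stay valid.
	return `/api/v1/roles/${encodeURIComponent(id)}/relations/${encodeURIComponent(
		relation,
	)}/objects`;
}
```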
@@ -66,8 +66,6 @@
"./vite.config.ts",
"./jest.setup.ts",
"./tests/**.ts",
"./**/*.d.ts",
"./playwright.config.ts",
"./e2e/**/*.ts"
"./**/*.d.ts"
]
}

@@ -4540,13 +4540,6 @@
resolved "https://registry.yarnpkg.com/@pkgr/core/-/core-0.2.9.tgz#d229a7b7f9dac167a156992ef23c7f023653f53b"
integrity sha512-QNqXyfVS2wm9hweSYD2O7F0G06uurj9kZ96TRQE5Y9hU7+tgdZwIkbAKc5Ocy1HxEY2kuDQa6cQ1WRs/O5LFKA==
"@playwright/test@1.55.1":
version "1.55.1"
resolved "https://registry.yarnpkg.com/@playwright/test/-/test-1.55.1.tgz#80f775d5f948cd3ef550fcc45ef99986d3ffb36c"
integrity sha512-IVAh/nOJaw6W9g+RJVlIQJ6gSiER+ae6mKQ5CX1bERzQgbC1VSeBlwdvczT7pxb0GWiyrxH4TGKbMfDb4Sq/ig==
dependencies:
playwright "1.55.1"
"@posthog/core@1.6.0":
version "1.6.0"
resolved "https://registry.yarnpkg.com/@posthog/core/-/core-1.6.0.tgz#a5b63a30950a8dfe87d4bf335ab24005c7ce1278"
@@ -10568,7 +10561,7 @@ fscreen@^1.0.2:
resolved "https://registry.yarnpkg.com/fscreen/-/fscreen-1.2.0.tgz#1a8c88e06bc16a07b473ad96196fb06d6657f59e"
integrity sha512-hlq4+BU0hlPmwsFjwGGzZ+OZ9N/wq9Ljg/sq3pX+2CD7hrJsX9tJgWWK/wiNTFM212CLHWhicOoqwXyZGGetJg==
fsevents@2.3.2, fsevents@^2.3.2, fsevents@~2.3.2:
fsevents@^2.3.2, fsevents@~2.3.2:
version "2.3.2"
resolved "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz"
integrity sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==
@@ -15468,20 +15461,6 @@ pkg-dir@^7.0.0:
dependencies:
find-up "^6.3.0"
playwright-core@1.55.1:
version "1.55.1"
resolved "https://registry.yarnpkg.com/playwright-core/-/playwright-core-1.55.1.tgz#5d3bb1846bc4289d364ea1a9dcb33f14545802e9"
integrity sha512-Z6Mh9mkwX+zxSlHqdr5AOcJnfp+xUWLCt9uKV18fhzA8eyxUd8NUWzAjxUh55RZKSYwDGX0cfaySdhZJGMoJ+w==
playwright@1.55.1:
version "1.55.1"
resolved "https://registry.yarnpkg.com/playwright/-/playwright-1.55.1.tgz#8a9954e9e61ed1ab479212af9be336888f8b3f0e"
integrity sha512-cJW4Xd/G3v5ovXtJJ52MAOclqeac9S/aGGgRzLabuF8TnIb6xHvMzKIa6JmrRzUkeXJgfL1MhukP0NK6l39h3A==
dependencies:
playwright-core "1.55.1"
optionalDependencies:
fsevents "2.3.2"
pony-cause@^1.1.1:
version "1.1.1"
resolved "https://registry.yarnpkg.com/pony-cause/-/pony-cause-1.1.1.tgz#f795524f83bebbf1878bd3587b45f69143cbf3f9"

View File

@@ -61,7 +61,7 @@ func (provider *provider) addRoleRoutes(router *mux.Router) error {
return err
}
if err := router.Handle("/api/v1/roles/{id}/relation/{relation}/objects", handler.New(provider.authZ.AdminAccess(provider.authzHandler.GetObjects), handler.OpenAPIDef{
if err := router.Handle("/api/v1/roles/{id}/relations/{relation}/objects", handler.New(provider.authZ.AdminAccess(provider.authzHandler.GetObjects), handler.OpenAPIDef{
ID: "GetObjects",
Tags: []string{"role"},
Summary: "Get objects for a role by relation",
@@ -95,7 +95,7 @@ func (provider *provider) addRoleRoutes(router *mux.Router) error {
return err
}
if err := router.Handle("/api/v1/roles/{id}/relation/{relation}/objects", handler.New(provider.authZ.AdminAccess(provider.authzHandler.PatchObjects), handler.OpenAPIDef{
if err := router.Handle("/api/v1/roles/{id}/relations/{relation}/objects", handler.New(provider.authZ.AdminAccess(provider.authzHandler.PatchObjects), handler.OpenAPIDef{
ID: "PatchObjects",
Tags: []string{"role"},
Summary: "Patch objects for a role by relation",

View File

@@ -22,11 +22,15 @@ type AuthZ interface {
// BatchCheck accepts a map of ID → tuple and returns a map of ID → authorization result.
BatchCheck(context.Context, map[string]*openfgav1.TupleKey) (map[string]*authtypes.TupleKeyAuthorization, error)
// CheckTransactions checks whether the given subject is authorized for the given transactions.
// Returns results in the same order as the input transactions.
CheckTransactions(ctx context.Context, subject string, orgID valuer.UUID, transactions []*authtypes.Transaction) ([]*authtypes.TransactionWithAuthorization, error)
// Write accepts the insertion tuples and the deletion tuples.
Write(context.Context, []*openfgav1.TupleKey, []*openfgav1.TupleKey) error
// Lists the selectors for objects assigned to the subject with the given relation on the given type
ListObjects(context.Context, string, authtypes.Relation, authtypes.Typeable) ([]*authtypes.Object, error)
ListObjects(context.Context, string, authtypes.Relation, authtypes.Type) ([]*authtypes.Object, error)
// Creates the role.
Create(context.Context, valuer.UUID, *authtypes.Role) error
@@ -78,8 +82,14 @@ type AuthZ interface {
// Bootstraps managed role transactions and user assignments
CreateManagedUserRoleTransactions(context.Context, valuer.UUID, valuer.UUID) error
// ReadTuples reads tuples from the authorization server matching the given tuple key filter.
ReadTuples(context.Context, *openfgav1.ReadRequestTupleKey) ([]*openfgav1.TupleKey, error)
}
// OnBeforeRoleDelete is a callback invoked before a role is deleted.
type OnBeforeRoleDelete func(context.Context, valuer.UUID, valuer.UUID) error
type RegisterTypeable interface {
MustGetTypeables() []authtypes.Typeable

View File

@@ -18,7 +18,7 @@ func NewSqlAuthzStore(sqlstore sqlstore.SQLStore) authtypes.RoleStore {
return &store{sqlstore: sqlstore}
}
func (store *store) Create(ctx context.Context, role *authtypes.StorableRole) error {
func (store *store) Create(ctx context.Context, role *authtypes.Role) error {
_, err := store.
sqlstore.
BunDBCtx(ctx).
@@ -32,8 +32,8 @@ func (store *store) Create(ctx context.Context, role *authtypes.StorableRole) er
return nil
}
func (store *store) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*authtypes.StorableRole, error) {
role := new(authtypes.StorableRole)
func (store *store) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*authtypes.Role, error) {
role := new(authtypes.Role)
err := store.
sqlstore.
BunDBCtx(ctx).
@@ -49,8 +49,8 @@ func (store *store) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID)
return role, nil
}
func (store *store) GetByOrgIDAndName(ctx context.Context, orgID valuer.UUID, name string) (*authtypes.StorableRole, error) {
role := new(authtypes.StorableRole)
func (store *store) GetByOrgIDAndName(ctx context.Context, orgID valuer.UUID, name string) (*authtypes.Role, error) {
role := new(authtypes.Role)
err := store.
sqlstore.
BunDBCtx(ctx).
@@ -66,8 +66,8 @@ func (store *store) GetByOrgIDAndName(ctx context.Context, orgID valuer.UUID, na
return role, nil
}
func (store *store) List(ctx context.Context, orgID valuer.UUID) ([]*authtypes.StorableRole, error) {
roles := make([]*authtypes.StorableRole, 0)
func (store *store) List(ctx context.Context, orgID valuer.UUID) ([]*authtypes.Role, error) {
roles := make([]*authtypes.Role, 0)
err := store.
sqlstore.
BunDBCtx(ctx).
@@ -82,8 +82,8 @@ func (store *store) List(ctx context.Context, orgID valuer.UUID) ([]*authtypes.S
return roles, nil
}
func (store *store) ListByOrgIDAndNames(ctx context.Context, orgID valuer.UUID, names []string) ([]*authtypes.StorableRole, error) {
roles := make([]*authtypes.StorableRole, 0)
func (store *store) ListByOrgIDAndNames(ctx context.Context, orgID valuer.UUID, names []string) ([]*authtypes.Role, error) {
roles := make([]*authtypes.Role, 0)
err := store.
sqlstore.
BunDBCtx(ctx).
@@ -103,8 +103,8 @@ func (store *store) ListByOrgIDAndNames(ctx context.Context, orgID valuer.UUID,
return roles, nil
}
func (store *store) ListByOrgIDAndIDs(ctx context.Context, orgID valuer.UUID, ids []valuer.UUID) ([]*authtypes.StorableRole, error) {
roles := make([]*authtypes.StorableRole, 0)
func (store *store) ListByOrgIDAndIDs(ctx context.Context, orgID valuer.UUID, ids []valuer.UUID) ([]*authtypes.Role, error) {
roles := make([]*authtypes.Role, 0)
err := store.
sqlstore.
BunDBCtx(ctx).
@@ -124,7 +124,7 @@ func (store *store) ListByOrgIDAndIDs(ctx context.Context, orgID valuer.UUID, id
return roles, nil
}
func (store *store) Update(ctx context.Context, orgID valuer.UUID, role *authtypes.StorableRole) error {
func (store *store) Update(ctx context.Context, orgID valuer.UUID, role *authtypes.Role) error {
_, err := store.
sqlstore.
BunDBCtx(ctx).
@@ -145,7 +145,7 @@ func (store *store) Delete(ctx context.Context, orgID valuer.UUID, id valuer.UUI
sqlstore.
BunDBCtx(ctx).
NewDelete().
Model(new(authtypes.StorableRole)).
Model(new(authtypes.Role)).
Where("org_id = ?", orgID).
Where("id = ?", id).
Exec(ctx)

View File

@@ -4,14 +4,30 @@ import (
"github.com/SigNoz/signoz/pkg/factory"
)
type Config struct{}
type Config struct {
// Provider is the name of the authorization provider to use.
Provider string `mapstructure:"provider"`
// OpenFGA is the configuration specific to the OpenFGA authorization provider.
OpenFGA OpenFGAConfig `mapstructure:"openfga"`
}
type OpenFGAConfig struct {
// MaxTuplesPerWrite is the maximum number of tuples to include in a single write call.
MaxTuplesPerWrite int `mapstructure:"max_tuples_per_write"`
}
func NewConfigFactory() factory.ConfigFactory {
return factory.NewConfigFactory(factory.MustNewName("authz"), newConfig)
}
func newConfig() factory.Config {
return Config{}
return &Config{
Provider: "openfga",
OpenFGA: OpenFGAConfig{
MaxTuplesPerWrite: 100,
},
}
}
func (c Config) Validate() error {
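Under these defaults, an equivalent config file (keys taken from the `mapstructure` tags above; values illustrative, not authoritative) might look like:

```yaml
authz:
  provider: openfga
  openfga:
    max_tuples_per_write: 100
```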

View File

@@ -18,25 +18,31 @@ import (
)
type provider struct {
server *openfgaserver.Server
store authtypes.RoleStore
server *openfgaserver.Server
store authtypes.RoleStore
registry []authz.RegisterTypeable
managedRolesByTransaction map[string][]string
}
func NewProviderFactory(sqlstore sqlstore.SQLStore, openfgaSchema []openfgapkgtransformer.ModuleFile, openfgaDataStore storage.OpenFGADatastore) factory.ProviderFactory[authz.AuthZ, authz.Config] {
func NewProviderFactory(sqlstore sqlstore.SQLStore, openfgaSchema []openfgapkgtransformer.ModuleFile, openfgaDataStore storage.OpenFGADatastore, registry ...authz.RegisterTypeable) factory.ProviderFactory[authz.AuthZ, authz.Config] {
return factory.NewProviderFactory(factory.MustNewName("openfga"), func(ctx context.Context, ps factory.ProviderSettings, config authz.Config) (authz.AuthZ, error) {
return newOpenfgaProvider(ctx, ps, config, sqlstore, openfgaSchema, openfgaDataStore)
return newOpenfgaProvider(ctx, ps, config, sqlstore, openfgaSchema, openfgaDataStore, registry)
})
}
func newOpenfgaProvider(ctx context.Context, settings factory.ProviderSettings, config authz.Config, sqlstore sqlstore.SQLStore, openfgaSchema []openfgapkgtransformer.ModuleFile, openfgaDataStore storage.OpenFGADatastore) (authz.AuthZ, error) {
func newOpenfgaProvider(ctx context.Context, settings factory.ProviderSettings, config authz.Config, sqlstore sqlstore.SQLStore, openfgaSchema []openfgapkgtransformer.ModuleFile, openfgaDataStore storage.OpenFGADatastore, registry []authz.RegisterTypeable) (authz.AuthZ, error) {
server, err := openfgaserver.NewOpenfgaServer(ctx, settings, config, sqlstore, openfgaSchema, openfgaDataStore)
if err != nil {
return nil, err
}
managedRolesByTransaction := buildManagedRolesByTransaction(registry)
return &provider{
server: server,
store: sqlauthzstore.NewSqlAuthzStore(sqlstore),
server: server,
store: sqlauthzstore.NewSqlAuthzStore(sqlstore),
registry: registry,
managedRolesByTransaction: managedRolesByTransaction,
}, nil
}
@@ -68,68 +74,32 @@ func (provider *provider) Write(ctx context.Context, additions []*openfgav1.Tupl
return provider.server.Write(ctx, additions, deletions)
}
func (provider *provider) ListObjects(ctx context.Context, subject string, relation authtypes.Relation, typeable authtypes.Typeable) ([]*authtypes.Object, error) {
return provider.server.ListObjects(ctx, subject, relation, typeable)
func (provider *provider) ReadTuples(ctx context.Context, tupleKey *openfgav1.ReadRequestTupleKey) ([]*openfgav1.TupleKey, error) {
return provider.server.ReadTuples(ctx, tupleKey)
}
func (provider *provider) ListObjects(ctx context.Context, subject string, relation authtypes.Relation, objectType authtypes.Type) ([]*authtypes.Object, error) {
return provider.server.ListObjects(ctx, subject, relation, objectType)
}
func (provider *provider) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*authtypes.Role, error) {
storableRole, err := provider.store.Get(ctx, orgID, id)
if err != nil {
return nil, err
}
return authtypes.NewRoleFromStorableRole(storableRole), nil
return provider.store.Get(ctx, orgID, id)
}
func (provider *provider) GetByOrgIDAndName(ctx context.Context, orgID valuer.UUID, name string) (*authtypes.Role, error) {
storableRole, err := provider.store.GetByOrgIDAndName(ctx, orgID, name)
if err != nil {
return nil, err
}
return authtypes.NewRoleFromStorableRole(storableRole), nil
return provider.store.GetByOrgIDAndName(ctx, orgID, name)
}
func (provider *provider) List(ctx context.Context, orgID valuer.UUID) ([]*authtypes.Role, error) {
storableRoles, err := provider.store.List(ctx, orgID)
if err != nil {
return nil, err
}
roles := make([]*authtypes.Role, len(storableRoles))
for idx, storableRole := range storableRoles {
roles[idx] = authtypes.NewRoleFromStorableRole(storableRole)
}
return roles, nil
return provider.store.List(ctx, orgID)
}
func (provider *provider) ListByOrgIDAndNames(ctx context.Context, orgID valuer.UUID, names []string) ([]*authtypes.Role, error) {
storableRoles, err := provider.store.ListByOrgIDAndNames(ctx, orgID, names)
if err != nil {
return nil, err
}
roles := make([]*authtypes.Role, len(storableRoles))
for idx, storable := range storableRoles {
roles[idx] = authtypes.NewRoleFromStorableRole(storable)
}
return roles, nil
return provider.store.ListByOrgIDAndNames(ctx, orgID, names)
}
func (provider *provider) ListByOrgIDAndIDs(ctx context.Context, orgID valuer.UUID, ids []valuer.UUID) ([]*authtypes.Role, error) {
storableRoles, err := provider.store.ListByOrgIDAndIDs(ctx, orgID, ids)
if err != nil {
return nil, err
}
roles := make([]*authtypes.Role, len(storableRoles))
for idx, storable := range storableRoles {
roles[idx] = authtypes.NewRoleFromStorableRole(storable)
}
return roles, nil
return provider.store.ListByOrgIDAndIDs(ctx, orgID, ids)
}
func (provider *provider) Grant(ctx context.Context, orgID valuer.UUID, names []string, subject string) error {
@@ -197,7 +167,7 @@ func (provider *provider) Revoke(ctx context.Context, orgID valuer.UUID, names [
func (provider *provider) CreateManagedRoles(ctx context.Context, _ valuer.UUID, managedRoles []*authtypes.Role) error {
err := provider.store.RunInTx(ctx, func(ctx context.Context) error {
for _, role := range managedRoles {
err := provider.store.Create(ctx, authtypes.NewStorableRoleFromRole(role))
err := provider.store.Create(ctx, role)
if err != nil {
return err
}
@@ -245,6 +215,42 @@ func (provider *provider) Delete(_ context.Context, _ valuer.UUID, _ valuer.UUID
return errors.Newf(errors.TypeUnsupported, authtypes.ErrCodeRoleUnsupported, "not implemented")
}
func (provider *provider) CheckTransactions(ctx context.Context, subject string, orgID valuer.UUID, transactions []*authtypes.Transaction) ([]*authtypes.TransactionWithAuthorization, error) {
if len(transactions) == 0 {
return make([]*authtypes.TransactionWithAuthorization, 0), nil
}
tuples, preResolved, roleCorrelations, err := authtypes.NewTuplesFromTransactionsWithManagedRoles(transactions, subject, orgID, provider.managedRolesByTransaction)
if err != nil {
return nil, err
}
if len(tuples) == 0 {
return authtypes.NewTransactionWithAuthorizationFromBatchResults(transactions, nil, preResolved, roleCorrelations), nil
}
batchResults, err := provider.server.BatchCheck(ctx, tuples)
if err != nil {
return nil, err
}
return authtypes.NewTransactionWithAuthorizationFromBatchResults(transactions, batchResults, preResolved, roleCorrelations), nil
}
func buildManagedRolesByTransaction(registry []authz.RegisterTypeable) map[string][]string {
managedRolesByTransaction := make(map[string][]string)
for _, register := range registry {
for roleName, transactions := range register.MustGetManagedRoleTransactions() {
for _, txn := range transactions {
key := txn.TransactionKey()
managedRolesByTransaction[key] = append(managedRolesByTransaction[key], roleName)
}
}
}
return managedRolesByTransaction
}
func (provider *provider) MustGetTypeables() []authtypes.Typeable {
return nil
}
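The index built by `buildManagedRolesByTransaction` above is a plain inversion: each registered role lists the transactions it covers, and the provider needs the reverse lookup (transaction key → roles). A minimal standalone sketch of that inversion, with hypothetical role and key names:

```go
package main

import "fmt"

// buildIndex inverts a role → transaction-keys mapping into
// transaction-key → roles, mirroring buildManagedRolesByTransaction.
func buildIndex(byRole map[string][]string) map[string][]string {
	index := make(map[string][]string)
	for role, keys := range byRole {
		for _, key := range keys {
			index[key] = append(index[key], role)
		}
	}
	return index
}

func main() {
	// Two illustrative roles that both cover the same transaction key.
	index := buildIndex(map[string][]string{
		"signoz-viewer": {"read:dashboards:dashboards"},
		"signoz-editor": {"read:dashboards:dashboards", "update:dashboards:dashboards"},
	})
	fmt.Println(len(index["read:dashboards:dashboards"])) // 2
}
```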

View File

@@ -265,17 +265,45 @@ func (server *Server) Write(ctx context.Context, additions []*openfgav1.TupleKey
return nil
}
func (server *Server) ListObjects(ctx context.Context, subject string, relation authtypes.Relation, typeable authtypes.Typeable) ([]*authtypes.Object, error) {
func (server *Server) ReadTuples(ctx context.Context, tupleKey *openfgav1.ReadRequestTupleKey) ([]*openfgav1.TupleKey, error) {
storeID, _ := server.getStoreIDandModelID()
var tuples []*openfgav1.TupleKey
continuationToken := ""
for {
response, err := server.openfgaServer.Read(ctx, &openfgav1.ReadRequest{
StoreId: storeID,
TupleKey: tupleKey,
ContinuationToken: continuationToken,
})
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, authtypes.ErrCodeAuthZUnavailable, "failed to read tuples from authorization server")
}
for _, tuple := range response.Tuples {
tuples = append(tuples, tuple.Key)
}
if response.ContinuationToken == "" {
break
}
continuationToken = response.ContinuationToken
}
return tuples, nil
}
func (server *Server) ListObjects(ctx context.Context, subject string, relation authtypes.Relation, objectType authtypes.Type) ([]*authtypes.Object, error) {
storeID, modelID := server.getStoreIDandModelID()
response, err := server.openfgaServer.ListObjects(ctx, &openfgav1.ListObjectsRequest{
StoreId: storeID,
AuthorizationModelId: modelID,
User: subject,
Relation: relation.StringValue(),
Type: typeable.Type().StringValue(),
Type: objectType.StringValue(),
})
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, authtypes.ErrCodeAuthZUnavailable, "cannot list objects for subject %s with relation %s for type %s", subject, relation.StringValue(), typeable.Type().StringValue())
return nil, errors.Wrapf(err, errors.TypeInternal, authtypes.ErrCodeAuthZUnavailable, "cannot list objects for subject %s with relation %s for type %s", subject, relation.StringValue(), objectType.StringValue())
}
return authtypes.MustNewObjectsFromStringSlice(response.Objects), nil

View File

@@ -272,17 +272,11 @@ func (handler *handler) Check(rw http.ResponseWriter, r *http.Request) {
return
}
tuples, err := authtypes.NewTuplesFromTransactions(transactions, subject, orgID)
results, err := handler.authz.CheckTransactions(ctx, subject, orgID, transactions)
if err != nil {
render.Error(rw, err)
return
}
results, err := handler.authz.BatchCheck(ctx, tuples)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusOK, authtypes.NewGettableTransaction(transactions, results))
render.Success(rw, http.StatusOK, authtypes.NewGettableTransaction(results))
}

View File

@@ -0,0 +1,30 @@
package implserviceaccount
import (
"context"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/serviceaccounttypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type getter struct {
store serviceaccounttypes.Store
}
func NewGetter(store serviceaccounttypes.Store) serviceaccount.Getter {
return &getter{store: store}
}
func (getter *getter) OnBeforeRoleDelete(ctx context.Context, orgID valuer.UUID, roleID valuer.UUID) error {
serviceAccounts, err := getter.store.GetServiceAccountsByOrgIDAndRoleID(ctx, orgID, roleID)
if err != nil {
return err
}
if len(serviceAccounts) > 0 {
return errors.New(errors.TypeInvalidInput, authtypes.ErrCodeRoleHasServiceAccountAssignees, "role has active service account assignments, remove them before deleting")
}
return nil
}

View File

@@ -123,6 +123,25 @@ func (store *store) GetByIDAndStatus(ctx context.Context, id valuer.UUID, status
return storable, nil
}
func (store *store) GetServiceAccountsByOrgIDAndRoleID(ctx context.Context, orgID valuer.UUID, roleID valuer.UUID) ([]*serviceaccounttypes.ServiceAccount, error) {
serviceAccounts := make([]*serviceaccounttypes.ServiceAccount, 0)
err := store.
sqlstore.
BunDBCtx(ctx).
NewSelect().
Model(&serviceAccounts).
Join(`JOIN service_account_role ON service_account_role.service_account_id = service_account.id`).
Where(`service_account.org_id = ?`, orgID).
Where("service_account_role.role_id = ?", roleID).
Scan(ctx)
if err != nil {
return nil, err
}
return serviceAccounts, nil
}
func (store *store) CountByOrgID(ctx context.Context, orgID valuer.UUID) (int64, error) {
storable := new(serviceaccounttypes.ServiceAccount)

View File

@@ -11,6 +11,11 @@ import (
"github.com/SigNoz/signoz/pkg/valuer"
)
type Getter interface {
// OnBeforeRoleDelete checks if any service accounts are assigned to the role and rejects deletion if so.
OnBeforeRoleDelete(ctx context.Context, orgID valuer.UUID, roleID valuer.UUID) error
}
type Module interface {
// Creates a new service account for an organization.
Create(context.Context, valuer.UUID, *serviceaccounttypes.ServiceAccount) error

View File

@@ -225,3 +225,14 @@ func (module *getter) GetResetPasswordTokenByOrgIDAndUserID(ctx context.Context,
func (module *getter) GetUsersByOrgIDAndRoleID(ctx context.Context, orgID valuer.UUID, roleID valuer.UUID) ([]*types.User, error) {
return module.store.GetUsersByOrgIDAndRoleID(ctx, orgID, roleID)
}
func (module *getter) OnBeforeRoleDelete(ctx context.Context, orgID valuer.UUID, roleID valuer.UUID) error {
users, err := module.GetUsersByOrgIDAndRoleID(ctx, orgID, roleID)
if err != nil {
return err
}
if len(users) > 0 {
return errors.New(errors.TypeInvalidInput, authtypes.ErrCodeRoleHasUserAssignees, "role has active user assignments, remove them before deleting")
}
return nil
}

View File

@@ -91,6 +91,9 @@ type Getter interface {
// Gets all the users assigned to the given role ID in an org
GetUsersByOrgIDAndRoleID(ctx context.Context, orgID valuer.UUID, roleID valuer.UUID) ([]*types.User, error)
// OnBeforeRoleDelete checks if any users are assigned to the role and rejects deletion if so.
OnBeforeRoleDelete(ctx context.Context, orgID valuer.UUID, roleID valuer.UUID) error
}
type Handler interface {

View File

@@ -12,6 +12,7 @@ import (
"github.com/SigNoz/signoz/pkg/analytics"
"github.com/SigNoz/signoz/pkg/apiserver"
"github.com/SigNoz/signoz/pkg/auditor"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/config"
"github.com/SigNoz/signoz/pkg/emailing"
@@ -135,6 +136,9 @@ type Config struct {
// CloudIntegration config
CloudIntegration cloudintegration.Config `mapstructure:"cloudintegration"`
// Authz config
Authz authz.Config `mapstructure:"authz"`
}
func NewConfig(ctx context.Context, logger *slog.Logger, resolverConfig config.ResolverConfig) (Config, error) {
@@ -168,6 +172,7 @@ func NewConfig(ctx context.Context, logger *slog.Logger, resolverConfig config.R
serviceaccount.NewConfigFactory(),
auditor.NewConfigFactory(),
cloudintegration.NewConfigFactory(),
authz.NewConfigFactory(),
}
conf, err := config.New(ctx, resolverConfig, configFactories)

View File

@@ -100,7 +100,7 @@ func New(
sqlstoreProviderFactories factory.NamedMap[factory.ProviderFactory[sqlstore.SQLStore, sqlstore.Config]],
telemetrystoreProviderFactories factory.NamedMap[factory.ProviderFactory[telemetrystore.TelemetryStore, telemetrystore.Config]],
authNsCallback func(ctx context.Context, providerSettings factory.ProviderSettings, store authtypes.AuthNStore, licensing licensing.Licensing) (map[authtypes.AuthNProvider]authn.AuthN, error),
authzCallback func(context.Context, sqlstore.SQLStore, licensing.Licensing, dashboard.Module) (factory.ProviderFactory[authz.AuthZ, authz.Config], error),
authzCallback func(context.Context, sqlstore.SQLStore, licensing.Licensing, []authz.OnBeforeRoleDelete, dashboard.Module) (factory.ProviderFactory[authz.AuthZ, authz.Config], error),
dashboardModuleCallback func(sqlstore.SQLStore, factory.ProviderSettings, analytics.Analytics, organization.Getter, queryparser.QueryParser, querier.Querier, licensing.Licensing) dashboard.Module,
gatewayProviderFactory func(licensing.Licensing) factory.ProviderFactory[gateway.Gateway, gateway.Config],
auditorProviderFactories func(licensing.Licensing) factory.NamedMap[factory.ProviderFactory[auditor.Auditor, auditor.Config]],
@@ -328,19 +328,28 @@ func New(
// Initialize dashboard module (needed for authz registry)
dashboard := dashboardModuleCallback(sqlstore, providerSettings, analytics, orgGetter, queryParser, querier, licensing)
// Initialize authz
authzProviderFactory, err := authzCallback(ctx, sqlstore, licensing, dashboard)
if err != nil {
return nil, err
}
authz, err := authzProviderFactory.New(ctx, providerSettings, authz.Config{})
if err != nil {
return nil, err
}
// Initialize user getter
userGetter := impluser.NewGetter(userStore, userRoleStore, flagger)
// Initialize service account getter
serviceAccountGetter := implserviceaccount.NewGetter(implserviceaccount.NewStore(sqlstore))
// Build pre-delete callbacks from modules
onBeforeRoleDelete := []authz.OnBeforeRoleDelete{
userGetter.OnBeforeRoleDelete,
serviceAccountGetter.OnBeforeRoleDelete,
}
// Initialize authz
authzProviderFactory, err := authzCallback(ctx, sqlstore, licensing, onBeforeRoleDelete, dashboard)
if err != nil {
return nil, err
}
authz, err := authzProviderFactory.New(ctx, providerSettings, config.Authz)
if err != nil {
return nil, err
}
// Initialize notification manager from the available notification manager provider factories
nfManager, err := factory.NewProviderFromNamedMap(
ctx,

View File

@@ -54,7 +54,7 @@ func (migration *addManagedRoles) Up(ctx context.Context, db *bun.DB) error {
return err
}
managedRoles := []*authtypes.StorableRole{}
managedRoles := []*authtypes.Role{}
for _, orgIDStr := range orgIDs {
orgID, err := valuer.NewUUID(orgIDStr)
if err != nil {
@@ -63,19 +63,19 @@ func (migration *addManagedRoles) Up(ctx context.Context, db *bun.DB) error {
// signoz admin
signozAdminRole := authtypes.NewRole(authtypes.SigNozAdminRoleName, authtypes.SigNozAdminRoleDescription, authtypes.RoleTypeManaged, orgID)
managedRoles = append(managedRoles, authtypes.NewStorableRoleFromRole(signozAdminRole))
managedRoles = append(managedRoles, signozAdminRole)
// signoz editor
signozEditorRole := authtypes.NewRole(authtypes.SigNozEditorRoleName, authtypes.SigNozEditorRoleDescription, authtypes.RoleTypeManaged, orgID)
managedRoles = append(managedRoles, authtypes.NewStorableRoleFromRole(signozEditorRole))
managedRoles = append(managedRoles, signozEditorRole)
// signoz viewer
signozViewerRole := authtypes.NewRole(authtypes.SigNozViewerRoleName, authtypes.SigNozViewerRoleDescription, authtypes.RoleTypeManaged, orgID)
managedRoles = append(managedRoles, authtypes.NewStorableRoleFromRole(signozViewerRole))
managedRoles = append(managedRoles, signozViewerRole)
// signoz anonymous
signozAnonymousRole := authtypes.NewRole(authtypes.SigNozAnonymousRoleName, authtypes.SigNozAnonymousRoleDescription, authtypes.RoleTypeManaged, orgID)
managedRoles = append(managedRoles, authtypes.NewStorableRoleFromRole(signozAnonymousRole))
managedRoles = append(managedRoles, signozAnonymousRole)
}
if len(managedRoles) > 0 {

View File

@@ -20,6 +20,8 @@ var (
ErrCodeRoleNotFound = errors.MustNewCode("role_not_found")
ErrCodeRoleFailedTransactionsFromString = errors.MustNewCode("role_failed_transactions_from_string")
ErrCodeRoleUnsupported = errors.MustNewCode("role_unsupported")
ErrCodeRoleHasUserAssignees = errors.MustNewCode("role_has_user_assignees")
ErrCodeRoleHasServiceAccountAssignees = errors.MustNewCode("role_has_service_account_assignees")
)
var (
@@ -60,17 +62,6 @@ var (
TypeableResourcesRoles = MustNewTypeableMetaResources(MustNewName("roles"))
)
type StorableRole struct {
bun.BaseModel `bun:"table:role"`
types.Identifiable
types.TimeAuditable
Name string `bun:"name,type:string" json:"name"`
Description string `bun:"description,type:string" json:"description"`
Type string `bun:"type,type:string" json:"type"`
OrgID string `bun:"org_id,type:string" json:"orgId"`
}
type Role struct {
bun.BaseModel `bun:"table:role"`
@@ -91,28 +82,6 @@ type PatchableRole struct {
Description string `json:"description" required:"true"`
}
func NewStorableRoleFromRole(role *Role) *StorableRole {
return &StorableRole{
Identifiable: role.Identifiable,
TimeAuditable: role.TimeAuditable,
Name: role.Name,
Description: role.Description,
Type: role.Type.String(),
OrgID: role.OrgID.StringValue(),
}
}
func NewRoleFromStorableRole(storableRole *StorableRole) *Role {
return &Role{
Identifiable: storableRole.Identifiable,
TimeAuditable: storableRole.TimeAuditable,
Name: storableRole.Name,
Description: storableRole.Description,
Type: valuer.NewString(storableRole.Type),
OrgID: valuer.MustNewUUID(storableRole.OrgID),
}
}
func NewRole(name, description string, roleType valuer.String, orgID valuer.UUID) *Role {
return &Role{
Identifiable: types.Identifiable{
@@ -264,13 +233,13 @@ func MustGetSigNozManagedRoleFromExistingRole(role types.Role) string {
}
type RoleStore interface {
Create(context.Context, *StorableRole) error
Get(context.Context, valuer.UUID, valuer.UUID) (*StorableRole, error)
GetByOrgIDAndName(context.Context, valuer.UUID, string) (*StorableRole, error)
List(context.Context, valuer.UUID) ([]*StorableRole, error)
ListByOrgIDAndNames(context.Context, valuer.UUID, []string) ([]*StorableRole, error)
ListByOrgIDAndIDs(context.Context, valuer.UUID, []valuer.UUID) ([]*StorableRole, error)
Update(context.Context, valuer.UUID, *StorableRole) error
Create(context.Context, *Role) error
Get(context.Context, valuer.UUID, valuer.UUID) (*Role, error)
GetByOrgIDAndName(context.Context, valuer.UUID, string) (*Role, error)
List(context.Context, valuer.UUID) ([]*Role, error)
ListByOrgIDAndNames(context.Context, valuer.UUID, []string) ([]*Role, error)
ListByOrgIDAndIDs(context.Context, valuer.UUID, []valuer.UUID) ([]*Role, error)
Update(context.Context, valuer.UUID, *Role) error
Delete(context.Context, valuer.UUID, valuer.UUID) error
RunInTx(context.Context, func(ctx context.Context) error) error
}

View File

@@ -20,6 +20,11 @@ type GettableTransaction struct {
Authorized bool `json:"authorized" required:"true"`
}
type TransactionWithAuthorization struct {
Transaction *Transaction
Authorized bool
}
func NewTransaction(relation Relation, object Object) (*Transaction, error) {
if !slices.Contains(TypeableRelations[object.Resource.Type], relation) {
return nil, errors.Newf(errors.TypeInvalidInput, ErrCodeAuthZInvalidRelation, "invalid relation %s for type %s", relation.StringValue(), object.Resource.Type.StringValue())
@@ -28,13 +33,12 @@ func NewTransaction(relation Relation, object Object) (*Transaction, error) {
return &Transaction{ID: valuer.GenerateUUID(), Relation: relation, Object: object}, nil
}
func NewGettableTransaction(transactions []*Transaction, results map[string]*TupleKeyAuthorization) []*GettableTransaction {
gettableTransactions := make([]*GettableTransaction, len(transactions))
for i, txn := range transactions {
result := results[txn.ID.StringValue()]
func NewGettableTransaction(results []*TransactionWithAuthorization) []*GettableTransaction {
gettableTransactions := make([]*GettableTransaction, len(results))
for i, result := range results {
gettableTransactions[i] = &GettableTransaction{
Relation: txn.Relation,
Object: txn.Object,
Relation: result.Transaction.Relation,
Object: result.Transaction.Object,
Authorized: result.Authorized,
}
}
@@ -42,6 +46,54 @@ func NewGettableTransaction(transactions []*Transaction, results map[string]*Tup
return gettableTransactions
}
// NewTransactionWithAuthorizationFromBatchResults merges batch check results into an ordered
// slice of TransactionWithAuthorization matching the input transactions order.
// preResolved contains txn IDs whose authorization was determined without BatchCheck.
// roleCorrelations maps txn IDs to correlation IDs used for managed role checks.
func NewTransactionWithAuthorizationFromBatchResults(
transactions []*Transaction,
batchResults map[string]*TupleKeyAuthorization,
preResolved map[string]bool,
roleCorrelations map[string][]string,
) []*TransactionWithAuthorization {
output := make([]*TransactionWithAuthorization, len(transactions))
for i, txn := range transactions {
txnID := txn.ID.StringValue()
if authorized, ok := preResolved[txnID]; ok {
output[i] = &TransactionWithAuthorization{
Transaction: txn,
Authorized: authorized,
}
continue
}
if txn.Object.Resource.Type == TypeRole && txn.Relation == RelationAssignee {
output[i] = &TransactionWithAuthorization{
Transaction: txn,
Authorized: batchResults[txnID].Authorized,
}
continue
}
correlationIDs := roleCorrelations[txnID]
authorized := false
for _, correlationID := range correlationIDs {
if result, exists := batchResults[correlationID]; exists && result.Authorized {
authorized = true
break
}
}
output[i] = &TransactionWithAuthorization{
Transaction: txn,
Authorized: authorized,
}
}
return output
}
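For a single non-role-assignment transaction, the merge above reduces to: honor a pre-resolved verdict if present, otherwise OR the batch results across the transaction's correlation IDs. A standalone sketch of that decision (maps flattened to `bool` for brevity; IDs illustrative):

```go
package main

import "fmt"

// authorize mirrors the merge logic for one transaction that is not a
// direct role-assignment check: a pre-resolved verdict wins outright,
// otherwise the txn is authorized iff any correlated managed-role
// check came back authorized.
func authorize(txnID string, preResolved map[string]bool, correlations map[string][]string, results map[string]bool) bool {
	if authorized, ok := preResolved[txnID]; ok {
		return authorized
	}
	for _, id := range correlations[txnID] {
		if results[id] {
			return true
		}
	}
	return false
}

func main() {
	results := map[string]bool{"t1:signoz-editor": true}
	correlations := map[string][]string{"t1": {"t1:signoz-viewer", "t1:signoz-editor"}}
	fmt.Println(authorize("t1", nil, correlations, results)) // true

	// A pre-resolved denial short-circuits the correlation lookup.
	fmt.Println(authorize("t2", map[string]bool{"t2": false}, nil, nil)) // false
}
```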
func (transaction *Transaction) UnmarshalJSON(data []byte) error {
var shadow = struct {
Relation Relation


@@ -10,6 +10,11 @@ type TupleKeyAuthorization struct {
Authorized bool
}
// TransactionKey returns a composite key for matching transactions to managed roles.
func (transaction *Transaction) TransactionKey() string {
return transaction.Relation.StringValue() + ":" + transaction.Object.Resource.Type.StringValue() + ":" + transaction.Object.Resource.Name.String()
}
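The composite key is just the three string forms joined with `:`. A one-line sketch, assuming the same ordering as above (the example values are illustrative):

```python
# Sketch of TransactionKey's format: "relation:resourceType:resourceName".
def transaction_key(relation: str, resource_type: str, resource_name: str) -> str:
    return f"{relation}:{resource_type}:{resource_name}"
```

Keys of this shape are what `managedRolesByTransaction` is indexed by, so each transaction can be mapped to the managed roles that could grant it.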
func NewTuplesFromTransactions(transactions []*Transaction, subject string, orgID valuer.UUID) (map[string]*openfgav1.TupleKey, error) {
tuples := make(map[string]*openfgav1.TupleKey, len(transactions))
for _, txn := range transactions {
@@ -29,3 +34,57 @@ func NewTuplesFromTransactions(transactions []*Transaction, subject string, orgI
return tuples, nil
}
// NewTuplesFromTransactionsWithManagedRoles converts transactions to tuples for BatchCheck.
// Direct role-assignment transactions (TypeRole + RelationAssignee) produce one tuple keyed by txn ID.
// Other transactions are expanded via managedRolesByTransaction into role-assignee checks, keyed by "txnID:roleName".
// Transactions with no managed role mapping are marked as pre-resolved (false) in the returned map.
func NewTuplesFromTransactionsWithManagedRoles(
transactions []*Transaction,
subject string,
orgID valuer.UUID,
managedRolesByTransaction map[string][]string,
) (tuples map[string]*openfgav1.TupleKey, preResolved map[string]bool, roleCorrelations map[string][]string, err error) {
tuples = make(map[string]*openfgav1.TupleKey)
preResolved = make(map[string]bool)
roleCorrelations = make(map[string][]string)
for _, txn := range transactions {
txnID := txn.ID.StringValue()
if txn.Object.Resource.Type == TypeRole && txn.Relation == RelationAssignee {
typeable, err := NewTypeableFromType(txn.Object.Resource.Type, txn.Object.Resource.Name)
if err != nil {
return nil, nil, nil, err
}
txnTuples, err := typeable.Tuples(subject, txn.Relation, []Selector{txn.Object.Selector}, orgID)
if err != nil {
return nil, nil, nil, err
}
tuples[txnID] = txnTuples[0]
continue
}
roleNames, found := managedRolesByTransaction[txn.TransactionKey()]
if !found || len(roleNames) == 0 {
preResolved[txnID] = false
continue
}
for _, roleName := range roleNames {
roleSelector := MustNewSelector(TypeRole, roleName)
roleTuples, err := TypeableRole.Tuples(subject, RelationAssignee, []Selector{roleSelector}, orgID)
if err != nil {
return nil, nil, nil, err
}
correlationID := valuer.GenerateUUID().StringValue()
tuples[correlationID] = roleTuples[0]
roleCorrelations[txnID] = append(roleCorrelations[txnID], correlationID)
}
}
return tuples, preResolved, roleCorrelations, nil
}
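The bookkeeping here is the interesting part: direct role-assignee transactions key their tuple by txn ID, expanded transactions get one freshly generated correlation ID per candidate role, and transactions with no managed roles skip BatchCheck entirely. A Python sketch of just that keying scheme (tuple construction via `typeable.Tuples` is elided; field names are illustrative):

```python
import uuid

# Illustrative model of NewTuplesFromTransactionsWithManagedRoles' outputs:
# which check key maps to which transaction.

def expand_transactions(transactions, managed_roles_by_key):
    tuples, pre_resolved, role_correlations = {}, {}, {}
    for txn in transactions:
        txn_id = txn["id"]
        if txn.get("direct_role_assignee"):
            # Direct role assignment: one check, keyed by the txn ID.
            tuples[txn_id] = ("assignee", txn.get("resource"))
            continue
        roles = managed_roles_by_key.get(txn["key"], [])
        if not roles:
            # No managed role grants this relation: resolve to denied now.
            pre_resolved[txn_id] = False
            continue
        for role in roles:
            # One role-assignee check per candidate role, correlated back
            # to the originating transaction.
            correlation_id = str(uuid.uuid4())
            tuples[correlation_id] = ("assignee", role)
            role_correlations.setdefault(txn_id, []).append(correlation_id)
    return tuples, pre_resolved, role_correlations
```

Random correlation IDs keep the expanded checks from colliding when two transactions map to the same role.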


@@ -17,13 +17,12 @@ import (
)
var (
ErrCodeServiceAccountInvalidConfig = errors.MustNewCode("service_account_invalid_config")
ErrCodeServiceAccountInvalidInput = errors.MustNewCode("service_account_invalid_input")
ErrCodeServiceAccountAlreadyExists = errors.MustNewCode("service_account_already_exists")
ErrCodeServiceAccountNotFound = errors.MustNewCode("service_account_not_found")
ErrCodeServiceAccountRoleAlreadyExists = errors.MustNewCode("service_account_role_already_exists")
ErrCodeServiceAccountOperationUnsupported = errors.MustNewCode("service_account_operation_unsupported")
errInvalidServiceAccountName = errors.New(errors.TypeInvalidInput, ErrCodeServiceAccountInvalidInput, "name must start with a lowercase letter (a-z), contain only lowercase letters, numbers (0-9), and hyphens (-), and be at most 50 characters long")
ErrCodeServiceAccountInvalidConfig = errors.MustNewCode("service_account_invalid_config")
ErrCodeServiceAccountInvalidInput = errors.MustNewCode("service_account_invalid_input")
ErrCodeServiceAccountAlreadyExists = errors.MustNewCode("service_account_already_exists")
ErrCodeServiceAccountNotFound = errors.MustNewCode("service_account_not_found")
ErrCodeServiceAccountRoleAlreadyExists = errors.MustNewCode("service_account_role_already_exists")
errInvalidServiceAccountName = errors.New(errors.TypeInvalidInput, ErrCodeServiceAccountInvalidInput, "name must start with a lowercase letter (a-z), contain only lowercase letters, numbers (0-9), and hyphens (-), and be at most 50 characters long")
)
var (
@@ -120,7 +119,7 @@ func (serviceAccount *ServiceAccount) UpdateStatus(status ServiceAccountStatus)
func (serviceAccount *ServiceAccount) ErrIfDeleted() error {
if serviceAccount.Status == ServiceAccountStatusDeleted {
return errors.New(errors.TypeUnsupported, ErrCodeServiceAccountOperationUnsupported, "this operation is not supported for disabled service account")
return errors.Newf(errors.TypeNotFound, ErrCodeServiceAccountNotFound, "an active service account with id: %s does not exist", serviceAccount.ID)
}
return nil
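The behavioral change in `ErrIfDeleted` is that a deleted service account now surfaces as "not found" (so callers get a 404) instead of "unsupported". A minimal Python sketch of the revised behavior, where `NotFoundError` is an illustrative stand-in for the typed error:

```python
# Hypothetical stand-in for the not-found error type.
class NotFoundError(Exception):
    pass

def err_if_deleted(account: dict) -> None:
    # A deleted account is treated as if it never existed.
    if account["status"] == "deleted":
        raise NotFoundError(
            f"an active service account with id: {account['id']} does not exist"
        )
```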
@@ -239,6 +238,7 @@ type Store interface {
GetActiveByOrgIDAndName(context.Context, valuer.UUID, string) (*ServiceAccount, error)
GetByID(context.Context, valuer.UUID) (*ServiceAccount, error)
GetByIDAndStatus(context.Context, valuer.UUID, ServiceAccountStatus) (*ServiceAccount, error)
GetServiceAccountsByOrgIDAndRoleID(context.Context, valuer.UUID, valuer.UUID) ([]*ServiceAccount, error)
CountByOrgID(context.Context, valuer.UUID) (int64, error)
List(context.Context, valuer.UUID) ([]*ServiceAccount, error)
Update(context.Context, valuer.UUID, *ServiceAccount) error


@@ -1,38 +0,0 @@
# Dependencies
node_modules/
# Build outputs
dist/
build/
# Test results
test-results/
playwright-report/
coverage/
# Environment files
.env
.env.local
.env.production
# Editor files
.vscode/
.idea/
*.swp
*.swo
# OS files
.DS_Store
Thumbs.db
# Logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Runtime data
pids
*.pid
*.seed
*.pid.lock


@@ -1,68 +0,0 @@
module.exports = {
parser: '@typescript-eslint/parser',
parserOptions: {
ecmaVersion: 2022,
sourceType: 'module',
},
extends: [
'eslint:recommended',
'plugin:@typescript-eslint/recommended',
'plugin:playwright/recommended',
],
env: {
node: true,
es2022: true,
},
rules: {
// Code Quality
'@typescript-eslint/no-unused-vars': 'error',
'@typescript-eslint/no-explicit-any': 'warn',
'prefer-const': 'error',
'no-var': 'error',
// Formatting Rules (ESLint handles formatting)
'semi': ['error', 'always'],
'quotes': ['error', 'single', { avoidEscape: true }],
'comma-dangle': ['error', 'always-multiline'],
'indent': ['error', 2, { SwitchCase: 1 }],
'object-curly-spacing': ['error', 'always'],
'array-bracket-spacing': ['error', 'never'],
'space-before-function-paren': ['error', {
anonymous: 'always',
named: 'never',
asyncArrow: 'always',
}],
'keyword-spacing': 'error',
'space-infix-ops': 'error',
'eol-last': 'error',
'no-trailing-spaces': 'error',
'no-multiple-empty-lines': ['error', { max: 2, maxEOF: 1 }],
// Playwright-specific (enhanced)
'playwright/expect-expect': 'error',
'playwright/no-conditional-in-test': 'error',
'playwright/no-page-pause': 'error',
'playwright/no-wait-for-timeout': 'warn',
'playwright/prefer-web-first-assertions': 'error',
// Console usage
'no-console': ['warn', { allow: ['warn', 'error'] }],
},
overrides: [
{
// Config files can use console and have relaxed formatting
files: ['*.config.{js,ts}', 'playwright.config.ts'],
rules: {
'no-console': 'off',
'@typescript-eslint/no-explicit-any': 'off',
},
},
{
// Test files specific rules
files: ['**/*.spec.ts', '**/*.test.ts'],
rules: {
'@typescript-eslint/no-explicit-any': 'off', // Page objects often need any
},
},
],
};

tests/e2e/.oxfmtrc.json

@@ -0,0 +1,26 @@
{
"$schema": "./node_modules/oxfmt/configuration_schema.json",
"trailingComma": "all",
"useTabs": true,
"tabWidth": 1,
"singleQuote": true,
"jsxSingleQuote": false,
"semi": true,
"printWidth": 80,
"bracketSpacing": true,
"jsxBracketSameLine": false,
"arrowParens": "always",
"endOfLine": "lf",
"quoteProps": "as-needed",
"proseWrap": "preserve",
"htmlWhitespaceSensitivity": "css",
"embeddedLanguageFormatting": "auto",
"sortPackageJson": false,
"ignorePatterns": [
"artifacts",
"node_modules",
"playwright-report",
"**/*.md",
"**/*.json"
]
}

tests/e2e/.oxlintrc.json

@@ -0,0 +1,45 @@
{
"$schema": "./node_modules/oxlint/configuration_schema.json",
"jsPlugins": ["eslint-plugin-playwright"],
"plugins": ["eslint", "typescript", "unicorn", "import", "promise"],
"categories": {
"correctness": "warn"
},
"env": {
"builtin": true,
"es2022": true,
"node": true
},
"options": {
"typeAware": true,
"typeCheck": false
},
"rules": {
"prefer-const": "error",
"no-var": "error",
"no-console": ["warn", { "allow": ["warn", "error"] }],
"typescript/no-unused-vars": "error",
"typescript/no-explicit-any": "warn",
"playwright/expect-expect": "error",
"playwright/no-conditional-in-test": "error",
"playwright/no-page-pause": "error",
"playwright/no-wait-for-timeout": "warn",
"playwright/prefer-web-first-assertions": "error"
},
"overrides": [
{
"files": ["*.config.{js,ts}", "playwright.config.ts"],
"rules": {
"no-console": "off",
"typescript/no-explicit-any": "off"
}
},
{
"files": ["**/*.spec.ts", "**/*.test.ts"],
"rules": {
"typescript/no-explicit-any": "off"
}
}
],
"ignorePatterns": ["node_modules", "artifacts", "playwright-report"]
}


@@ -1,30 +0,0 @@
# Dependencies
node_modules/
# Generated test outputs
artifacts/
playwright/.cache/
# Build outputs
dist/
# Environment files
.env
.env.local
.env*.local
# Lock files
yarn.lock
package-lock.json
pnpm-lock.yaml
# Logs
*.log
yarn-error.log
# IDE
.vscode/
.idea/
# Other
.DS_Store


@@ -1,6 +0,0 @@
{
"useTabs": false,
"tabWidth": 2,
"singleQuote": true,
"trailingComma": "all"
}


@@ -26,13 +26,12 @@ def test_setup(
seeder_cfg = seeder.host_configs["8080"]
out = _env_file(pytestconfig)
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(
"# Generated by tests/e2e/bootstrap/setup.py — do not edit.\n"
f"SIGNOZ_E2E_BASE_URL={host_cfg.base()}\n"
f"SIGNOZ_E2E_USERNAME={USER_ADMIN_EMAIL}\n"
f"SIGNOZ_E2E_PASSWORD={USER_ADMIN_PASSWORD}\n"
f"SIGNOZ_E2E_SEEDER_URL={seeder_cfg.base()}\n"
)
with out.open("w") as f:
f.write("# Generated by tests/e2e/bootstrap/setup.py — do not edit.\n")
f.write(f"SIGNOZ_E2E_BASE_URL={host_cfg.base()}\n")
f.write(f"SIGNOZ_E2E_USERNAME={USER_ADMIN_EMAIL}\n")
f.write(f"SIGNOZ_E2E_PASSWORD={USER_ADMIN_PASSWORD}\n")
f.write(f"SIGNOZ_E2E_SEEDER_URL={seeder_cfg.base()}\n")
def test_teardown(


@@ -1,17 +1,17 @@
import {
test as base,
expect,
type Browser,
type BrowserContext,
type Page,
test as base,
expect,
type Browser,
type BrowserContext,
type Page,
} from '@playwright/test';
export type User = { email: string; password: string };
// Default user — admin from the pytest bootstrap (.env.local) or staging .env.
export const ADMIN: User = {
email: process.env.SIGNOZ_E2E_USERNAME!,
password: process.env.SIGNOZ_E2E_PASSWORD!,
email: process.env.SIGNOZ_E2E_USERNAME!,
password: process.env.SIGNOZ_E2E_PASSWORD!,
};
// Per-worker storageState cache. One login per unique user per worker.
@@ -21,65 +21,65 @@ type StorageState = Awaited<ReturnType<BrowserContext['storageState']>>;
const storageByUser = new Map<string, Promise<StorageState>>();
async function storageFor(browser: Browser, user: User): Promise<StorageState> {
const cached = storageByUser.get(user.email);
if (cached) return cached;
const cached = storageByUser.get(user.email);
if (cached) return cached;
const task = (async () => {
const ctx = await browser.newContext();
const page = await ctx.newPage();
await login(page, user);
const state = await ctx.storageState();
await ctx.close();
return state;
})();
const task = (async () => {
const ctx = await browser.newContext();
const page = await ctx.newPage();
await login(page, user);
const state = await ctx.storageState();
await ctx.close();
return state;
})();
storageByUser.set(user.email, task);
return task;
storageByUser.set(user.email, task);
return task;
}
async function login(page: Page, user: User): Promise<void> {
if (!user.email || !user.password) {
throw new Error(
'User credentials missing. Set SIGNOZ_E2E_USERNAME / SIGNOZ_E2E_PASSWORD ' +
'(pytest bootstrap writes them to .env.local), or pass a User via test.use({ user: ... }).',
);
}
await page.goto('/login?password=Y');
await page.getByTestId('email').fill(user.email);
await page.getByTestId('initiate_login').click();
await page.getByTestId('password').fill(user.password);
await page.getByRole('button', { name: 'Sign in with Password' }).click();
// Post-login lands somewhere different depending on whether the org is
// licensed (onboarding flow on ENTERPRISE) or not (legacy "Hello there"
// welcome). Wait for URL to move off /login — whichever page follows
// is fine, each spec navigates to the feature under test anyway.
await page.waitForURL((url) => !url.pathname.startsWith('/login'));
if (!user.email || !user.password) {
throw new Error(
'User credentials missing. Set SIGNOZ_E2E_USERNAME / SIGNOZ_E2E_PASSWORD ' +
'(pytest bootstrap writes them to .env.local), or pass a User via test.use({ user: ... }).',
);
}
await page.goto('/login?password=Y');
await page.getByTestId('email').fill(user.email);
await page.getByTestId('initiate_login').click();
await page.getByTestId('password').fill(user.password);
await page.getByRole('button', { name: 'Sign in with Password' }).click();
// Post-login lands somewhere different depending on whether the org is
// licensed (onboarding flow on ENTERPRISE) or not (legacy "Hello there"
// welcome). Wait for URL to move off /login — whichever page follows
// is fine, each spec navigates to the feature under test anyway.
await page.waitForURL((url) => !url.pathname.startsWith('/login'));
}
export const test = base.extend<{
/**
* User identity for this test. Override with `test.use({ user: ... })` at
* the describe or test level to run the suite as a different user.
* Defaults to ADMIN (the pytest-bootstrap-seeded admin).
*/
user: User;
/**
* User identity for this test. Override with `test.use({ user: ... })` at
* the describe or test level to run the suite as a different user.
* Defaults to ADMIN (the pytest-bootstrap-seeded admin).
*/
user: User;
/**
* A Page whose context is already authenticated as `user`. First request
* for a given user triggers one login per worker; the resulting
* storageState is held in memory and reused for all later requests.
*/
authedPage: Page;
/**
* A Page whose context is already authenticated as `user`. First request
* for a given user triggers one login per worker; the resulting
* storageState is held in memory and reused for all later requests.
*/
authedPage: Page;
}>({
user: [ADMIN, { option: true }],
user: [ADMIN, { option: true }],
authedPage: async ({ browser, user }, use) => {
const storageState = await storageFor(browser, user);
const ctx = await browser.newContext({ storageState });
const page = await ctx.newPage();
await use(page);
await ctx.close();
},
authedPage: async ({ browser, user }, use) => {
const storageState = await storageFor(browser, user);
const ctx = await browser.newContext({ storageState });
const page = await ctx.newPage();
await use(page);
await ctx.close();
},
});
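The fixture's core idea is one login per unique user per worker: the first request performs a real login, and every later request reuses the cached storageState. A language-agnostic sketch of that memoization (in Python; `do_login` is a hypothetical stand-in for the Playwright login flow):

```python
# Sketch of the per-worker storageState cache in fixtures/auth.ts.

class AuthCache:
    def __init__(self, do_login):
        self._do_login = do_login
        self._state_by_user = {}

    def storage_for(self, email: str):
        # First caller for a user triggers one login; later callers
        # share the same cached state object.
        if email not in self._state_by_user:
            self._state_by_user[email] = self._do_login(email)
        return self._state_by_user[email]
```

The TypeScript version caches the login *promise* rather than its result, so concurrent callers within a worker also share a single in-flight login.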
export { expect };


@@ -16,8 +16,10 @@
"codegen": "playwright codegen",
"install:browsers": "playwright install",
"install:cli": "npm install -g @playwright/cli@latest && playwright-cli install --skills",
"lint": "eslint . --ext .ts,.js",
"lint:fix": "eslint . --ext .ts,.js --fix",
"fmt": "oxfmt .",
"fmt:check": "oxfmt --check .",
"lint": "oxlint .",
"lint:fix": "oxlint . --fix",
"typecheck": "tsc --noEmit"
},
"keywords": [
@@ -31,11 +33,11 @@
"devDependencies": {
"@playwright/test": "^1.57.0-alpha-2025-10-09",
"@types/node": "^20.0.0",
"@typescript-eslint/eslint-plugin": "^6.0.0",
"@typescript-eslint/parser": "^6.0.0",
"dotenv": "^16.0.0",
"eslint": "^9.26.0",
"eslint-plugin-playwright": "^0.16.0",
"eslint-plugin-playwright": "^2.10.2",
"oxfmt": "^0.41.0",
"oxlint": "^1.59.0",
"oxlint-tsgolint": "^0.20.0",
"typescript": "^5.0.0"
},
"engines": {


@@ -12,50 +12,50 @@ dotenv.config({ path: path.resolve(__dirname, '.env') });
dotenv.config({ path: path.resolve(__dirname, '.env.local'), override: true });
export default defineConfig({
testDir: './tests',
testDir: './tests',
// All Playwright output lands under artifacts/. One subdir per reporter
// plus results/ for per-test artifacts (traces/screenshots/videos).
// CI can archive the whole dir with `tar czf artifacts.tgz tests/e2e/artifacts`.
outputDir: 'artifacts/results',
// All Playwright output lands under artifacts/. One subdir per reporter
// plus results/ for per-test artifacts (traces/screenshots/videos).
// CI can archive the whole dir with `tar czf artifacts.tgz tests/e2e/artifacts`.
outputDir: 'artifacts/results',
// Run tests in parallel
fullyParallel: true,
// Run tests in parallel
fullyParallel: true,
// Fail the build on CI if you accidentally left test.only
forbidOnly: !!process.env.CI,
// Fail the build on CI if you accidentally left test.only
forbidOnly: !!process.env.CI,
// Retry on CI only
retries: process.env.CI ? 2 : 0,
// Retry on CI only
retries: process.env.CI ? 2 : 0,
// Workers
workers: process.env.CI ? 2 : undefined,
// Workers
workers: process.env.CI ? 2 : undefined,
// Reporter
reporter: [
['html', { outputFolder: 'artifacts/html', open: 'never' }],
['json', { outputFile: 'artifacts/json/results.json' }],
['list'],
],
// Reporter
reporter: [
['html', { outputFolder: 'artifacts/html', open: 'never' }],
['json', { outputFile: 'artifacts/json/results.json' }],
['list'],
],
// Shared settings
use: {
baseURL:
process.env.SIGNOZ_E2E_BASE_URL || 'https://app.us.staging.signoz.cloud',
trace: 'on-first-retry',
screenshot: 'only-on-failure',
video: 'retain-on-failure',
colorScheme: 'dark',
locale: 'en-US',
viewport: { width: 1280, height: 720 },
},
// Shared settings
use: {
baseURL:
process.env.SIGNOZ_E2E_BASE_URL || 'https://app.us.staging.signoz.cloud',
trace: 'on-first-retry',
screenshot: 'only-on-failure',
video: 'retain-on-failure',
colorScheme: 'dark',
locale: 'en-US',
viewport: { width: 1280, height: 720 },
},
// Browser projects. No project-level auth — specs opt in via the
// authedPage fixture in tests/e2e/fixtures/auth.ts, which logs a user
// in on first use and caches the resulting storageState per worker.
projects: [
{ name: 'chromium', use: devices['Desktop Chrome'] },
{ name: 'firefox', use: devices['Desktop Firefox'] },
{ name: 'webkit', use: devices['Desktop Safari'] },
],
// Browser projects. No project-level auth — specs opt in via the
// authedPage fixture in tests/e2e/fixtures/auth.ts, which logs a user
// in on first use and caches the resulting storageState per worker.
projects: [
{ name: 'chromium', use: devices['Desktop Chrome'] },
{ name: 'firefox', use: devices['Desktop Firefox'] },
{ name: 'webkit', use: devices['Desktop Safari'] },
],
});


@@ -1,7 +1,7 @@
import { test, expect } from '../../fixtures/auth';
test('TC-01 alerts page — tabs render', async ({ authedPage: page }) => {
await page.goto('/alerts');
await expect(page.getByRole('tab', { name: /alert rules/i })).toBeVisible();
await expect(page.getByRole('tab', { name: /configuration/i })).toBeVisible();
await page.goto('/alerts');
await expect(page.getByRole('tab', { name: /alert rules/i })).toBeVisible();
await expect(page.getByRole('tab', { name: /configuration/i })).toBeVisible();
});

File diff suppressed because it is too large


@@ -1,9 +1,9 @@
import base64
import json
import time
from datetime import datetime, timedelta, timezone
from collections.abc import Callable
from datetime import UTC, datetime, timedelta
from http import HTTPStatus
from typing import Callable, List
import pytest
import requests
@@ -20,9 +20,7 @@ logger = setup_logger(__name__)
@pytest.fixture(name="create_alert_rule", scope="function")
def create_alert_rule(
signoz: types.SigNoz, get_token: Callable[[str, str], str]
) -> Callable[[dict], str]:
def create_alert_rule(signoz: types.SigNoz, get_token: Callable[[str, str], str]) -> Callable[[dict], str]:
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
rule_ids = []
@@ -34,9 +32,7 @@ def create_alert_rule(
headers={"Authorization": f"Bearer {admin_token}"},
timeout=5,
)
assert (
response.status_code == HTTPStatus.OK
), f"Failed to create rule, api returned {response.status_code} with response: {response.text}"
assert response.status_code == HTTPStatus.OK, f"Failed to create rule, api returned {response.status_code} with response: {response.text}"
rule_id = response.json()["data"]["id"]
rule_ids.append(rule_id)
return rule_id
@@ -64,23 +60,21 @@ def create_alert_rule(
@pytest.fixture(name="insert_alert_data", scope="function")
def insert_alert_data(
insert_metrics: Callable[[List[Metrics]], None],
insert_traces: Callable[[List[Traces]], None],
insert_logs: Callable[[List[Logs]], None],
) -> Callable[[List[types.AlertData]], None]:
insert_metrics: Callable[[list[Metrics]], None],
insert_traces: Callable[[list[Traces]], None],
insert_logs: Callable[[list[Logs]], None],
) -> Callable[[list[types.AlertData]], None]:
def _insert_alert_data(
alert_data_items: List[types.AlertData],
alert_data_items: list[types.AlertData],
base_time: datetime = None,
) -> None:
metrics: List[Metrics] = []
traces: List[Traces] = []
logs: List[Logs] = []
metrics: list[Metrics] = []
traces: list[Traces] = []
logs: list[Logs] = []
now = base_time or datetime.now(tz=timezone.utc).replace(
second=0, microsecond=0
)
now = base_time or datetime.now(tz=UTC).replace(second=0, microsecond=0)
for data_item in alert_data_items:
if data_item.type == "metrics":
@@ -113,9 +107,7 @@ def insert_alert_data(
yield _insert_alert_data
def collect_webhook_firing_alerts(
webhook_test_container: types.TestContainerDocker, notification_channel_name: str
) -> List[types.FiringAlert]:
def collect_webhook_firing_alerts(webhook_test_container: types.TestContainerDocker, notification_channel_name: str) -> list[types.FiringAlert]:
# Prepare the endpoint path for the channel name, for alerts tests we have
# used different paths for receiving alerts from each channel so that
# multiple rules can be tested in isolation.
@@ -127,10 +119,7 @@ def collect_webhook_firing_alerts(
"url": rule_webhook_endpoint,
}
res = requests.post(url, json=req, timeout=5)
assert res.status_code == HTTPStatus.OK, (
f"Failed to collect firing alerts for notification channel {notification_channel_name}, "
f"status code: {res.status_code}, response: {res.text}"
)
assert res.status_code == HTTPStatus.OK, f"Failed to collect firing alerts for notification channel {notification_channel_name}, status code: {res.status_code}, response: {res.text}"
response = res.json()
alerts = []
for req in response["requests"]:
@@ -144,9 +133,7 @@ def collect_webhook_firing_alerts(
return alerts
def _verify_alerts_labels(
firing_alerts: list[dict[str, str]], expected_alerts: list[dict[str, str]]
) -> tuple[int, list[dict[str, str]]]:
def _verify_alerts_labels(firing_alerts: list[dict[str, str]], expected_alerts: list[dict[str, str]]) -> tuple[int, list[dict[str, str]]]:
"""
Checks how many of the expected alerts have been fired.
Returns the count of expected alerts that were fired, along with the expected alerts still missing.
@@ -159,10 +146,7 @@ def _verify_alerts_labels(
for fired_alert in firing_alerts:
# Check if current expected alert is present in the fired alerts
if all(
key in fired_alert and fired_alert[key] == value
for key, value in alert.items()
):
if all(key in fired_alert and fired_alert[key] == value for key, value in alert.items()):
is_alert_fired = True
break
@@ -181,35 +165,24 @@ def verify_webhook_alert_expectation(
) -> bool:
# time to wait till the expected alerts are fired
time_to_wait = datetime.now() + timedelta(
seconds=alert_expectations.wait_time_seconds
)
expected_alerts_labels = [
alert.labels for alert in alert_expectations.expected_alerts
]
time_to_wait = datetime.now() + timedelta(seconds=alert_expectations.wait_time_seconds)
expected_alerts_labels = [alert.labels for alert in alert_expectations.expected_alerts]
while datetime.now() < time_to_wait:
firing_alerts = collect_webhook_firing_alerts(
test_alert_container, notification_channel_name
)
firing_alerts = collect_webhook_firing_alerts(test_alert_container, notification_channel_name)
firing_alert_labels = [alert.labels for alert in firing_alerts]
if alert_expectations.should_alert:
# verify the number of alerts fired, currently we're only verifying the labels of the alerts
# but there could be verification of annotations and other fields in the FiringAlert
(verified_count, missing_alerts) = _verify_alerts_labels(
firing_alert_labels, expected_alerts_labels
)
(verified_count, missing_alerts) = _verify_alerts_labels(firing_alert_labels, expected_alerts_labels)
if verified_count == len(alert_expectations.expected_alerts):
logger.info(
"Got expected number of alerts: %s", {"count": verified_count}
)
logger.info("Got expected number of alerts: %s", {"count": verified_count})
return True
else:
# No alert is supposed to be fired if should_alert is False
if len(firing_alerts) > 0:
break
# No alert is supposed to be fired if should_alert is False
elif len(firing_alerts) > 0:
break
# wait for some time before checking again
time.sleep(1)
@@ -220,7 +193,7 @@ def verify_webhook_alert_expectation(
if not alert_expectations.should_alert:
assert len(firing_alerts) == 0, (
"Expected no alerts to be fired, ",
f"got {len(firing_alerts)} alerts, " f"firing alerts: {firing_alerts}",
f"got {len(firing_alerts)} alerts, firing alerts: {firing_alerts}",
)
logger.info("No alerts fired, as expected")
return True
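The verification loop above is a poll-until-deadline pattern: an expected alert counts as fired when some firing alert carries every one of its label key/value pairs. A condensed Python sketch (`fetch_firing_labels` is a hypothetical callable standing in for `collect_webhook_firing_alerts`; timings are illustrative):

```python
import time
from datetime import datetime, timedelta

def wait_for_expected_alerts(fetch_firing_labels, expected_labels, wait_seconds, poll_interval=1):
    deadline = datetime.now() + timedelta(seconds=wait_seconds)
    while datetime.now() < deadline:
        firing = fetch_firing_labels()
        # Each expected alert must be a label-subset of some firing alert.
        if all(
            any(all(f.get(k) == v for k, v in exp.items()) for f in firing)
            for exp in expected_labels
        ):
            return True
        # Back off before polling the webhook sink again.
        time.sleep(poll_interval)
    return False
```

Subset matching (rather than exact equality) lets the firing alert carry extra labels, such as instance or rule metadata, without failing the expectation.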


@@ -1,7 +1,8 @@
import datetime
import json
from abc import ABC
from typing import Any, Callable, Generator, List, Optional
from collections.abc import Callable, Generator
from typing import Any
import numpy as np
import pytest
@@ -54,8 +55,8 @@ class AuditTagAttributes(ABC):
tag_type: str
tag_data_type: str
string_value: str
int64_value: Optional[np.int64]
float64_value: Optional[np.float64]
int64_value: np.int64 | None
float64_value: np.float64 | None
def __init__(
self,
@@ -63,9 +64,9 @@ class AuditTagAttributes(ABC):
tag_key: str,
tag_type: str,
tag_data_type: str,
string_value: Optional[str],
int64_value: Optional[np.int64],
float64_value: Optional[np.float64],
string_value: str | None,
int64_value: np.int64 | None,
float64_value: np.float64 | None,
) -> None:
self.unix_milli = np.int64(int(timestamp.timestamp() * 1e3))
self.tag_key = tag_key
@@ -121,14 +122,14 @@ class AuditLog(ABC):
resource_json: dict[str, str]
event_name: str
resource: List[AuditResource]
tag_attributes: List[AuditTagAttributes]
resource_keys: List[AuditResourceOrAttributeKeys]
attribute_keys: List[AuditResourceOrAttributeKeys]
resource: list[AuditResource]
tag_attributes: list[AuditTagAttributes]
resource_keys: list[AuditResourceOrAttributeKeys]
attribute_keys: list[AuditResourceOrAttributeKeys]
def __init__(
self,
timestamp: Optional[datetime.datetime] = None,
timestamp: datetime.datetime | None = None,
resources: dict[str, Any] = {},
attributes: dict[str, Any] = {},
body: str = "",
@@ -180,13 +181,9 @@ class AuditLog(ABC):
float64_value=None,
)
)
self.resource_keys.append(
AuditResourceOrAttributeKeys(name=k, datatype="string")
)
self.resource_keys.append(AuditResourceOrAttributeKeys(name=k, datatype="string"))
self.resource_fingerprint = LogsOrTracesFingerprint(
self.resource_json
).calculate()
self.resource_fingerprint = LogsOrTracesFingerprint(self.resource_json).calculate()
# Process attributes by type
self.attributes_string = {}
@@ -207,9 +204,7 @@ class AuditLog(ABC):
float64_value=None,
)
)
self.attribute_keys.append(
AuditResourceOrAttributeKeys(name=k, datatype="bool")
)
self.attribute_keys.append(AuditResourceOrAttributeKeys(name=k, datatype="bool"))
elif isinstance(v, int):
self.attributes_number[k] = np.float64(v)
self.tag_attributes.append(
@@ -223,9 +218,7 @@ class AuditLog(ABC):
float64_value=None,
)
)
self.attribute_keys.append(
AuditResourceOrAttributeKeys(name=k, datatype="int64")
)
self.attribute_keys.append(AuditResourceOrAttributeKeys(name=k, datatype="int64"))
elif isinstance(v, float):
self.attributes_number[k] = np.float64(v)
self.tag_attributes.append(
@@ -239,9 +232,7 @@ class AuditLog(ABC):
float64_value=np.float64(v),
)
)
self.attribute_keys.append(
AuditResourceOrAttributeKeys(name=k, datatype="float64")
)
self.attribute_keys.append(AuditResourceOrAttributeKeys(name=k, datatype="float64"))
else:
self.attributes_string[k] = str(v)
self.tag_attributes.append(
@@ -255,9 +246,7 @@ class AuditLog(ABC):
float64_value=None,
)
)
self.attribute_keys.append(
AuditResourceOrAttributeKeys(name=k, datatype="string")
)
self.attribute_keys.append(AuditResourceOrAttributeKeys(name=k, datatype="string"))
self.scope_name = scope_name
self.scope_version = scope_version
@@ -300,9 +289,9 @@ class AuditLog(ABC):
@pytest.fixture(name="insert_audit_logs", scope="function")
def insert_audit_logs(
clickhouse: types.TestContainerClickhouse,
) -> Generator[Callable[[List[AuditLog]], None], Any, None]:
def _insert_audit_logs(logs: List[AuditLog]) -> None:
resources: List[AuditResource] = []
) -> Generator[Callable[[list[AuditLog]], None], Any]:
def _insert_audit_logs(logs: list[AuditLog]) -> None:
resources: list[AuditResource] = []
for log in logs:
resources.extend(log.resource)
@@ -318,7 +307,7 @@ def insert_audit_logs(
],
)
tag_attributes: List[AuditTagAttributes] = []
tag_attributes: list[AuditTagAttributes] = []
for log in logs:
tag_attributes.extend(log.tag_attributes)
@@ -338,7 +327,7 @@ def insert_audit_logs(
],
)
attribute_keys: List[AuditResourceOrAttributeKeys] = []
attribute_keys: list[AuditResourceOrAttributeKeys] = []
for log in logs:
attribute_keys.extend(log.attribute_keys)
@@ -350,7 +339,7 @@ def insert_audit_logs(
column_names=["name", "datatype"],
)
resource_keys: List[AuditResourceOrAttributeKeys] = []
resource_keys: list[AuditResourceOrAttributeKeys] = []
for log in logs:
resource_keys.extend(log.resource_keys)
@@ -399,6 +388,4 @@ def insert_audit_logs(
"logs_attribute_keys",
"logs_resource_keys",
]:
clickhouse.conn.query(
f"TRUNCATE TABLE signoz_audit.{table} ON CLUSTER '{cluster}' SYNC"
)
clickhouse.conn.query(f"TRUNCATE TABLE signoz_audit.{table} ON CLUSTER '{cluster}' SYNC")


@@ -1,6 +1,6 @@
import time
from collections.abc import Callable
from http import HTTPStatus
from typing import Callable, Dict, List, Tuple
import pytest
import requests
@@ -57,9 +57,7 @@ def _login(signoz: types.SigNoz, email: str, password: str) -> str:
@pytest.fixture(name="create_user_admin", scope="package")
def create_user_admin(
signoz: types.SigNoz, request: pytest.FixtureRequest, pytestconfig: pytest.Config
) -> types.Operation:
def create_user_admin(signoz: types.SigNoz, request: pytest.FixtureRequest, pytestconfig: pytest.Config) -> types.Operation:
def create() -> None:
response = requests.post(
signoz.self.host_configs["8080"].get("/api/v1/register"),
@@ -143,7 +141,7 @@ def get_token(signoz: types.SigNoz) -> Callable[[str, str], str]:
@pytest.fixture(name="get_tokens", scope="function")
def get_tokens(signoz: types.SigNoz) -> Callable[[str, str], Tuple[str, str]]:
def get_tokens(signoz: types.SigNoz) -> Callable[[str, str], tuple[str, str]]:
def _get_tokens(email: str, password: str) -> str:
response = requests.get(
signoz.self.host_configs["8080"].get("/api/v2/sessions/context"),
@@ -193,11 +191,7 @@ def apply_license(
request=MappingRequest(
method=HttpMethods.GET,
url="/v2/licenses/me",
headers={
"X-Signoz-Cloud-Api-Key": {
WireMockMatchers.EQUAL_TO: "secret-key"
}
},
headers={"X-Signoz-Cloud-Api-Key": {WireMockMatchers.EQUAL_TO: "secret-key"}},
),
response=MappingResponse(
status=200,
@@ -245,9 +239,7 @@ def apply_license(
# redirects first-time admins to a questionnaire. Mark the preference
# complete so specs can navigate directly to the feature under test.
pref_resp = requests.put(
signoz.self.host_configs["8080"].get(
"/api/v1/org/preferences/org_onboarding"
),
signoz.self.host_configs["8080"].get("/api/v1/org/preferences/org_onboarding"),
json={"value": True},
headers=auth_header,
timeout=5,
@@ -276,7 +268,7 @@ def apply_license(
# This is also idempotent in nature.
def add_license(
signoz: types.SigNoz,
make_http_mocks: Callable[[types.TestContainerDocker, List[Mapping]], None],
make_http_mocks: Callable[[types.TestContainerDocker, list[Mapping]], None],
get_token: Callable[[str, str], str], # pylint: disable=redefined-outer-name
) -> None:
make_http_mocks(
@@ -286,11 +278,7 @@ def add_license(
request=MappingRequest(
method=HttpMethods.GET,
url="/v2/licenses/me",
headers={
"X-Signoz-Cloud-Api-Key": {
WireMockMatchers.EQUAL_TO: "secret-key"
}
},
headers={"X-Signoz-Cloud-Api-Key": {WireMockMatchers.EQUAL_TO: "secret-key"}},
),
response=MappingResponse(
status=200,
@@ -368,7 +356,7 @@ def create_active_user(
return invited_user["id"]
def find_user_by_email(signoz: types.SigNoz, token: str, email: str) -> Dict:
def find_user_by_email(signoz: types.SigNoz, token: str, email: str) -> dict:
"""Find a user by email from the user list. Raises AssertionError if not found."""
response = requests.get(
signoz.self.host_configs["8080"].get(USERS_BASE),
@@ -381,7 +369,7 @@ def find_user_by_email(signoz: types.SigNoz, token: str, email: str) -> Dict:
return user
def find_user_with_roles_by_email(signoz: types.SigNoz, token: str, email: str) -> Dict:
def find_user_with_roles_by_email(signoz: types.SigNoz, token: str, email: str) -> dict:
"""Find a user by email and return UserWithRoles (user fields + userRoles).
Raises AssertionError if the user is not found.
@@ -396,7 +384,7 @@ def find_user_with_roles_by_email(signoz: types.SigNoz, token: str, email: str)
return response.json()["data"]
def assert_user_has_role(data: Dict, role_name: str) -> None:
def assert_user_has_role(data: dict, role_name: str) -> None:
"""Assert that a UserWithRoles response contains the expected managed role."""
role_names = {ur["role"]["name"] for ur in data.get("userRoles", [])}
assert role_name in role_names, f"Expected role '{role_name}' in {role_names}"
@@ -427,9 +415,7 @@ def change_user_role(
# Remove old role
response = requests.delete(
signoz.self.host_configs["8080"].get(
f"{USERS_BASE}/{user_id}/roles/{old_role_entry['id']}"
),
signoz.self.host_configs["8080"].get(f"{USERS_BASE}/{user_id}/roles/{old_role_entry['id']}"),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=5,
)
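The retyping pattern repeated across these hunks — `typing.Callable`/`Dict`/`List`/`Tuple`/`Optional` replaced by `collections.abc.Callable`, builtin generics, and the `|` union operator — can be sketched as follows (a minimal illustration assuming Python 3.10+; `get_tokens` and its credential map are hypothetical stand-ins, not the fixture's real implementation):

```python
from collections.abc import Callable

# Before (typing module, deprecated by PEP 585 / PEP 604):
#   from typing import Callable, Tuple, Dict, Optional
#   def get_tokens(...) -> Callable[[str, str], Tuple[str, str]]: ...
# After: collections.abc.Callable plus builtin tuple/dict/list and `| None`.
def get_tokens(creds: dict[str, str]) -> Callable[[str, str], tuple[str, str]]:
    """Hypothetical stand-in for the fixture's token factory."""
    def _get_tokens(email: str, password: str) -> tuple[str, str]:
        access = creds.get(email, "")
        return (access, f"refresh-{password}")

    return _get_tokens

tokens: tuple[str, str] = get_tokens({"a@b": "tok"})("a@b", "pw")
```

The runtime behavior is identical; only the annotations change, which is why these hunks touch signatures without touching bodies.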


@@ -1,5 +1,6 @@
import os
from typing import Any, Generator
from collections.abc import Generator
from typing import Any
import clickhouse_connect
import clickhouse_connect.driver
@@ -18,7 +19,7 @@ logger = setup_logger(__name__)
@pytest.fixture(name="clickhouse", scope="package")
def clickhouse(
tmpfs: Generator[types.LegacyPath, Any, None],
tmpfs: Generator[types.LegacyPath, Any],
network: Network,
zookeeper: types.TestContainerDocker,
request: pytest.FixtureRequest,
@@ -153,9 +154,7 @@ def clickhouse(
with open(custom_function_file_path, "w", encoding="utf-8") as f:
f.write(custom_function_config)
container.with_volume_mapping(
cluster_config_file_path, "/etc/clickhouse-server/config.d/cluster.xml"
)
container.with_volume_mapping(cluster_config_file_path, "/etc/clickhouse-server/config.d/cluster.xml")
container.with_volume_mapping(
custom_function_file_path,
"/etc/clickhouse-server/custom-function.xml",
@@ -183,9 +182,7 @@ def clickhouse(
],
)
if exit_code != 0:
raise RuntimeError(
f"Failed to install histogramQuantile binary: {output.decode()}"
)
raise RuntimeError(f"Failed to install histogramQuantile binary: {output.decode()}")
connection = clickhouse_connect.get_client(
user=container.username,
@@ -210,12 +207,8 @@ def clickhouse(
),
},
container_configs={
"9000": types.TestContainerUrlConfig(
"tcp", container.get_wrapped_container().name, 9000
),
"8123": types.TestContainerUrlConfig(
"tcp", container.get_wrapped_container().name, 8123
),
"9000": types.TestContainerUrlConfig("tcp", container.get_wrapped_container().name, 9000),
"8123": types.TestContainerUrlConfig("tcp", container.get_wrapped_container().name, 8123),
},
),
conn=connection,
@@ -261,9 +254,7 @@ def clickhouse(
pytestconfig,
"clickhouse",
empty=lambda: types.TestContainerSQL(
container=types.TestContainerDocker(
id="", host_configs={}, container_configs={}
),
container=types.TestContainerDocker(id="", host_configs={}, container_configs={}),
conn=None,
env={},
),


@@ -1,7 +1,7 @@
"""Fixtures for cloud integration tests."""
from collections.abc import Callable
from http import HTTPStatus
from typing import Callable
import pytest
import requests
@@ -52,9 +52,7 @@ def deprecated_create_cloud_integration_account(
timeout=10,
)
assert (
response.status_code == HTTPStatus.OK
), f"Failed to create test account: {response.status_code}"
assert response.status_code == HTTPStatus.OK, f"Failed to create test account: {response.status_code}"
data = response.json().get("data", response.json())
created_accounts.append((data.get("account_id"), cloud_provider))
@@ -127,9 +125,7 @@ def create_cloud_integration_account(
timeout=10,
)
assert (
response.status_code == HTTPStatus.CREATED
), f"Failed to create test account: {response.status_code}: {response.text}"
assert response.status_code == HTTPStatus.CREATED, f"Failed to create test account: {response.status_code}: {response.text}"
data = response.json()["data"]
created_accounts.append((data["id"], cloud_provider))
@@ -143,9 +139,7 @@ def create_cloud_integration_account(
try:
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
for account_id, cloud_provider in created_accounts:
delete_endpoint = (
f"/api/v1/cloud_integrations/{cloud_provider}/accounts/{account_id}"
)
delete_endpoint = f"/api/v1/cloud_integrations/{cloud_provider}/accounts/{account_id}"
r = requests.delete(
signoz.self.host_configs["8080"].get(delete_endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
@@ -206,11 +200,7 @@ def setup_create_account_mocks(
request=MappingRequest(
method=HttpMethods.GET,
url="/v2/deployments/me",
headers={
"X-Signoz-Cloud-Api-Key": {
WireMockMatchers.EQUAL_TO: "secret-key"
}
},
headers={"X-Signoz-Cloud-Api-Key": {WireMockMatchers.EQUAL_TO: "secret-key"}},
),
response=MappingResponse(
status=200,


@@ -1,5 +1,6 @@
import os
from typing import Any, Generator
from collections.abc import Generator
from typing import Any
import pytest
@@ -9,7 +10,7 @@ from fixtures import types
@pytest.fixture(scope="package")
def tmpfs(
tmp_path_factory: pytest.TempPathFactory,
) -> Generator[types.LegacyPath, Any, None]:
) -> Generator[types.LegacyPath, Any]:
def _tmp(basename: str):
return tmp_path_factory.mktemp(basename)
@@ -19,7 +20,5 @@ def tmpfs(
def get_testdata_file_path(file: str) -> str:
# Integration testdata lives at tests/integration/testdata/. This helper
# resolves from tests/fixtures/fs.py, so walk up to tests/ and across.
testdata_dir = os.path.join(
os.path.dirname(__file__), "..", "integration", "testdata"
)
testdata_dir = os.path.join(os.path.dirname(__file__), "..", "integration", "testdata")
return os.path.join(testdata_dir, file)


@@ -1,5 +1,4 @@
import json
from typing import Optional
import requests
from wiremock.client import WireMockMatchers
@@ -14,9 +13,7 @@ def common_gateway_headers():
"""Common headers expected on requests forwarded to the gateway."""
return {
"X-Signoz-Cloud-Api-Key": {WireMockMatchers.EQUAL_TO: "secret-key"},
"X-Consumer-Username": {
WireMockMatchers.EQUAL_TO: "lid:00000000-0000-0000-0000-000000000000"
},
"X-Consumer-Username": {WireMockMatchers.EQUAL_TO: "lid:00000000-0000-0000-0000-000000000000"},
"X-Consumer-Groups": {WireMockMatchers.EQUAL_TO: "ns:default"},
}
@@ -34,9 +31,7 @@ def get_gateway_requests(signoz: types.SigNoz, method: str, url: str) -> list:
return response.json().get("requests", [])
def get_latest_gateway_request_body(
signoz: types.SigNoz, method: str, url: str
) -> Optional[dict]:
def get_latest_gateway_request_body(signoz: types.SigNoz, method: str, url: str) -> dict | None:
"""Return the parsed JSON body of the most recent matching gateway request.
WireMock returns requests in reverse chronological order, so ``matched[0]``


@@ -1,4 +1,4 @@
from typing import Callable, List
from collections.abc import Callable
import docker
import docker.errors
@@ -42,11 +42,7 @@ def zeus(
container.get_exposed_port(8080),
)
},
container_configs={
"8080": types.TestContainerUrlConfig(
"http", container.get_wrapped_container().name, 8080
)
},
container_configs={"8080": types.TestContainerUrlConfig("http", container.get_wrapped_container().name, 8080)},
)
def delete(container: types.TestContainerDocker):
@@ -99,11 +95,7 @@ def gateway(
container.get_exposed_port(8080),
)
},
container_configs={
"8080": types.TestContainerUrlConfig(
"http", container.get_wrapped_container().name, 8080
)
},
container_configs={"8080": types.TestContainerUrlConfig("http", container.get_wrapped_container().name, 8080)},
)
def delete(container: types.TestContainerDocker):
@@ -132,10 +124,8 @@ def gateway(
@pytest.fixture(name="make_http_mocks", scope="function")
def make_http_mocks() -> Callable[[types.TestContainerDocker, List[Mapping]], None]:
def _make_http_mocks(
container: types.TestContainerDocker, mappings: List[Mapping]
) -> None:
def make_http_mocks() -> Callable[[types.TestContainerDocker, list[Mapping]], None]:
def _make_http_mocks(container: types.TestContainerDocker, mappings: list[Mapping]) -> None:
Config.base_url = container.host_configs["8080"].get("/__admin")
for mapping in mappings:

tests/fixtures/idp.py

@@ -1,4 +1,5 @@
from typing import Any, Callable, Dict, List
from collections.abc import Callable
from typing import Any
from urllib.parse import urljoin, urlparse
from xml.etree import ElementTree
@@ -15,9 +16,7 @@ from fixtures.keycloak import IDP_ROOT_PASSWORD, IDP_ROOT_USERNAME
@pytest.fixture(name="create_saml_client", scope="function")
def create_saml_client(
idp: types.TestContainerIDP, signoz: types.SigNoz
) -> Callable[[str, str], None]:
def create_saml_client(idp: types.TestContainerIDP, signoz: types.SigNoz) -> Callable[[str, str], None]:
def _create_saml_client(client_id: str, callback_path: str) -> None:
client = KeycloakAdmin(
server_url=idp.container.host_configs["6060"].base(),
@@ -34,9 +33,7 @@ def create_saml_client(
"description": f"client for {client_id}",
"rootUrl": "",
"adminUrl": "",
"baseUrl": urljoin(
f"{signoz.self.host_configs['8080'].base()}", callback_path
),
"baseUrl": urljoin(f"{signoz.self.host_configs['8080'].base()}", callback_path),
"surrogateAuthRequired": False,
"enabled": True,
"alwaysDisplayInConsole": False,
@@ -71,9 +68,7 @@ def create_saml_client(
"saml_signature_canonicalization_method": "http://www.w3.org/2001/10/xml-exc-c14n#",
"saml.onetimeuse.condition": "false",
"saml.server.signature.keyinfo.xmlSigKeyInfoKeyNameTransformer": "NONE",
"saml_assertion_consumer_url_post": urljoin(
f"{signoz.self.host_configs['8080'].base()}", callback_path
),
"saml_assertion_consumer_url_post": urljoin(f"{signoz.self.host_configs['8080'].base()}", callback_path),
},
"authenticationFlowBindingOverrides": {},
"fullScopeAllowed": True,
@@ -164,10 +159,8 @@ def create_saml_client(
@pytest.fixture(name="update_saml_client_attributes", scope="function")
def update_saml_client_attributes(
idp: types.TestContainerIDP,
) -> Callable[[str, Dict[str, Any]], None]:
def _update_saml_client_attributes(
client_id: str, attributes: Dict[str, Any]
) -> None:
) -> Callable[[str, dict[str, Any]], None]:
def _update_saml_client_attributes(client_id: str, attributes: dict[str, Any]) -> None:
client = KeycloakAdmin(
server_url=idp.container.host_configs["6060"].base(),
username=IDP_ROOT_USERNAME,
@@ -189,9 +182,7 @@ def update_saml_client_attributes(
@pytest.fixture(name="create_oidc_client", scope="function")
def create_oidc_client(
idp: types.TestContainerIDP, signoz: types.SigNoz
) -> Callable[[str, str], None]:
def create_oidc_client(idp: types.TestContainerIDP, signoz: types.SigNoz) -> Callable[[str, str], None]:
def _create_oidc_client(client_id: str, callback_path: str) -> None:
client = KeycloakAdmin(
server_url=idp.container.host_configs["6060"].base(),
@@ -215,9 +206,7 @@ def create_oidc_client(
"enabled": True,
"alwaysDisplayInConsole": False,
"clientAuthenticatorType": "client-secret",
"redirectUris": [
f"{urljoin(signoz.self.host_configs['8080'].base(), callback_path)}"
],
"redirectUris": [f"{urljoin(signoz.self.host_configs['8080'].base(), callback_path)}"],
"webOrigins": ["/*"],
"notBefore": 0,
"bearerOnly": False,
@@ -287,9 +276,7 @@ def get_saml_settings(idp: types.TestContainerIDP) -> dict:
return {
"entityID": entity_id,
"certificate": certificate_el.text if certificate_el is not None else None,
"singleSignOnServiceLocation": (
sso_post_el.get("Location") if sso_post_el is not None else None
),
"singleSignOnServiceLocation": (sso_post_el.get("Location") if sso_post_el is not None else None),
}
return _get_saml_settings
@@ -422,7 +409,7 @@ def create_group_idp(idp: types.TestContainerIDP) -> Callable[[str], str]:
def create_user_idp_with_groups(
idp: types.TestContainerIDP,
create_group_idp: Callable[[str], str], # pylint: disable=redefined-outer-name
) -> Callable[[str, str, bool, List[str]], None]:
) -> Callable[[str, str, bool, list[str]], None]:
"""Creates a user in Keycloak IDP with specified groups."""
client = KeycloakAdmin(
server_url=idp.container.host_configs["6060"].base(),
@@ -433,9 +420,7 @@ def create_user_idp_with_groups(
created_users = []
def _create_user_idp_with_groups(
email: str, password: str, verified: bool, groups: List[str]
) -> None:
def _create_user_idp_with_groups(email: str, password: str, verified: bool, groups: list[str]) -> None:
# Create groups first
group_ids = []
for group_name in groups:
@@ -493,7 +478,7 @@ def add_user_to_group(
def create_user_idp_with_role(
idp: types.TestContainerIDP,
create_group_idp: Callable[[str], str], # pylint: disable=redefined-outer-name
) -> Callable[[str, str, bool, str, List[str]], None]:
) -> Callable[[str, str, bool, str, list[str]], None]:
"""Creates a user in Keycloak IDP with a custom role attribute and optional groups."""
client = KeycloakAdmin(
server_url=idp.container.host_configs["6060"].base(),
@@ -504,9 +489,7 @@ def create_user_idp_with_role(
created_users = []
def _create_user_idp_with_role(
email: str, password: str, verified: bool, role: str, groups: List[str]
) -> None:
def _create_user_idp_with_role(email: str, password: str, verified: bool, role: str, groups: list[str]) -> None:
# Create groups first
group_ids = []
for group_name in groups:
@@ -559,9 +542,7 @@ def setup_user_profile(idp: types.TestContainerIDP) -> Callable[[], None]:
# Check if signoz_role attribute already exists
attributes = profile.get("attributes", [])
signoz_role_exists = any(
attr.get("name") == "signoz_role" for attr in attributes
)
signoz_role_exists = any(attr.get("name") == "signoz_role" for attr in attributes)
if not signoz_role_exists:
# Add signoz_role attribute to user profile
@@ -645,11 +626,7 @@ def get_oidc_domain(signoz: types.SigNoz, admin_token: str) -> dict:
timeout=2,
)
return next(
(
domain
for domain in response.json()["data"]
if domain["name"] == "oidc.integration.test"
),
(domain for domain in response.json()["data"] if domain["name"] == "oidc.integration.test"),
None,
)
@@ -680,9 +657,7 @@ def perform_oidc_login(
session_context = get_session_context(email)
url = session_context["orgs"][0]["authNSupport"]["callback"][0]["url"]
parsed_url = urlparse(url)
actual_url = (
f"{idp.container.host_configs['6060'].get(parsed_url.path)}?{parsed_url.query}"
)
actual_url = f"{idp.container.host_configs['6060'].get(parsed_url.path)}?{parsed_url.query}"
driver.get(actual_url)
idp_login(email, password)
@@ -694,11 +669,7 @@ def get_saml_domain(signoz: types.SigNoz, admin_token: str) -> dict:
timeout=2,
)
return next(
(
domain
for domain in response.json()["data"]
if domain["name"] == "saml.integration.test"
),
(domain for domain in response.json()["data"] if domain["name"] == "saml.integration.test"),
None,
)


@@ -52,12 +52,8 @@ def idp(
),
},
container_configs={
"6060": types.TestContainerUrlConfig(
"http", container.get_wrapped_container().name, 6060
),
"6061": types.TestContainerUrlConfig(
"http", container.get_wrapped_container().name, 6061
),
"6060": types.TestContainerUrlConfig("http", container.get_wrapped_container().name, 6060),
"6061": types.TestContainerUrlConfig("http", container.get_wrapped_container().name, 6061),
},
),
)
@@ -84,11 +80,7 @@ def idp(
request,
pytestconfig,
"idp",
lambda: types.TestContainerIDP(
container=types.TestContainerDocker(
id="", host_configs={}, container_configs={}
)
),
lambda: types.TestContainerIDP(container=types.TestContainerDocker(id="", host_configs={}, container_configs={})),
create,
delete,
restore,

tests/fixtures/logs.py

@@ -1,8 +1,9 @@
import datetime
import json
from abc import ABC
from collections.abc import Callable, Generator
from http import HTTPStatus
from typing import Any, Callable, Generator, List, Literal, Optional
from typing import Any, Literal
import numpy as np
import pytest
@@ -25,9 +26,7 @@ class LogsResource(ABC):
fingerprint: str,
seen_at_ts_bucket_start: np.int64,
) -> None:
self.labels = json.dumps(
labels, separators=(",", ":")
) # clickhouse treats {"a": "b"} differently from {"a":"b"}. In the first case it is not able to run json functions
self.labels = json.dumps(labels, separators=(",", ":")) # clickhouse treats {"a": "b"} differently from {"a":"b"}. In the first case it is not able to run json functions
self.fingerprint = fingerprint
self.seen_at_ts_bucket_start = seen_at_ts_bucket_start
@@ -67,7 +66,7 @@ class LogsTagAttributes(ABC):
tag_key: str,
tag_type: str,
tag_data_type: str,
string_value: Optional[str],
string_value: str | None,
number_value: np.float64,
) -> None:
self.unix_milli = np.int64(int(timestamp.timestamp() * 1e3))
@@ -111,14 +110,14 @@ class Logs(ABC):
scope_version: str
scope_string: dict[str, str]
resource: List[LogsResource]
tag_attributes: List[LogsTagAttributes]
resource_keys: List[LogsResourceOrAttributeKeys]
attribute_keys: List[LogsResourceOrAttributeKeys]
resource: list[LogsResource]
tag_attributes: list[LogsTagAttributes]
resource_keys: list[LogsResourceOrAttributeKeys]
attribute_keys: list[LogsResourceOrAttributeKeys]
def __init__(
self,
timestamp: Optional[datetime.datetime] = None,
timestamp: datetime.datetime | None = None,
resources: dict[str, Any] = {},
attributes: dict[str, Any] = {},
body: str = "default body",
@@ -169,9 +168,7 @@ class Logs(ABC):
# Process resources and attributes
self.resources_string = {k: str(v) for k, v in resources.items()}
self.resource_json = (
{} if resource_write_mode == "legacy_only" else dict(self.resources_string)
)
self.resource_json = {} if resource_write_mode == "legacy_only" else dict(self.resources_string)
for k, v in self.resources_string.items():
self.tag_attributes.append(
LogsTagAttributes(
@@ -183,14 +180,10 @@ class Logs(ABC):
number_value=None,
)
)
self.resource_keys.append(
LogsResourceOrAttributeKeys(name=k, datatype="string")
)
self.resource_keys.append(LogsResourceOrAttributeKeys(name=k, datatype="string"))
# Calculate resource fingerprint
self.resource_fingerprint = LogsOrTracesFingerprint(
self.resources_string
).calculate()
self.resource_fingerprint = LogsOrTracesFingerprint(self.resources_string).calculate()
# Process attributes by type
self.attributes_string = {}
@@ -210,9 +203,7 @@ class Logs(ABC):
number_value=None,
)
)
self.attribute_keys.append(
LogsResourceOrAttributeKeys(name=k, datatype="bool")
)
self.attribute_keys.append(LogsResourceOrAttributeKeys(name=k, datatype="bool"))
elif isinstance(v, (int, float)):
self.attributes_number[k] = np.float64(v)
self.tag_attributes.append(
@@ -225,9 +216,7 @@ class Logs(ABC):
number_value=np.float64(v),
)
)
self.attribute_keys.append(
LogsResourceOrAttributeKeys(name=k, datatype="float64")
)
self.attribute_keys.append(LogsResourceOrAttributeKeys(name=k, datatype="float64"))
else:
self.attributes_string[k] = str(v)
self.tag_attributes.append(
@@ -240,9 +229,7 @@ class Logs(ABC):
number_value=None,
)
)
self.attribute_keys.append(
LogsResourceOrAttributeKeys(name=k, datatype="string")
)
self.attribute_keys.append(LogsResourceOrAttributeKeys(name=k, datatype="string"))
# Initialize scope fields
self.scope_name = scope_name
@@ -280,9 +267,7 @@ class Logs(ABC):
number_value=None,
)
)
self.attribute_keys.append(
LogsResourceOrAttributeKeys(name="severity_text", datatype="string")
)
self.attribute_keys.append(LogsResourceOrAttributeKeys(name="severity_text", datatype="string"))
self.tag_attributes.append(
LogsTagAttributes(
@@ -294,9 +279,7 @@ class Logs(ABC):
number_value=float(self.severity_number),
)
)
self.attribute_keys.append(
LogsResourceOrAttributeKeys(name="severity_number", datatype="float64")
)
self.attribute_keys.append(LogsResourceOrAttributeKeys(name="severity_number", datatype="float64"))
def _get_severity_number(self, severity_text: str) -> np.uint8:
"""Convert severity text to numeric value"""
@@ -357,12 +340,12 @@ class Logs(ABC):
def load_from_file(
cls,
file_path: str,
base_time: Optional[datetime.datetime] = None,
) -> List["Logs"]:
base_time: datetime.datetime | None = None,
) -> list["Logs"]:
"""Load logs from a JSONL file."""
data_list = []
with open(file_path, "r", encoding="utf-8") as f:
with open(file_path, encoding="utf-8") as f:
for line in f:
line = line.strip()
if not line:
@@ -391,7 +374,7 @@ class Logs(ABC):
return logs
def insert_logs_to_clickhouse(conn, logs: List[Logs]) -> None:
def insert_logs_to_clickhouse(conn, logs: list[Logs]) -> None:
"""
Insert logs into ClickHouse tables following the same logic as the Go exporter.
Handles insertion into:
@@ -404,7 +387,7 @@ def insert_logs_to_clickhouse(conn, logs: List[Logs]) -> None:
Pure function so the seeder container can reuse the exact insert path
used by the pytest fixture. `conn` is a clickhouse-connect Client.
"""
resources: List[LogsResource] = []
resources: list[LogsResource] = []
for log in logs:
resources.extend(log.resource)
@@ -420,7 +403,7 @@ def insert_logs_to_clickhouse(conn, logs: List[Logs]) -> None:
],
)
tag_attributes: List[LogsTagAttributes] = []
tag_attributes: list[LogsTagAttributes] = []
for log in logs:
tag_attributes.extend(log.tag_attributes)
@@ -431,7 +414,7 @@ def insert_logs_to_clickhouse(conn, logs: List[Logs]) -> None:
data=[tag_attribute.np_arr() for tag_attribute in tag_attributes],
)
attribute_keys: List[LogsResourceOrAttributeKeys] = []
attribute_keys: list[LogsResourceOrAttributeKeys] = []
for log in logs:
attribute_keys.extend(log.attribute_keys)
@@ -442,7 +425,7 @@ def insert_logs_to_clickhouse(conn, logs: List[Logs]) -> None:
data=[attribute_key.np_arr() for attribute_key in attribute_keys],
)
resource_keys: List[LogsResourceOrAttributeKeys] = []
resource_keys: list[LogsResourceOrAttributeKeys] = []
for log in logs:
resource_keys.extend(log.resource_keys)
@@ -500,8 +483,8 @@ def truncate_logs_tables(conn, cluster: str) -> None:
@pytest.fixture(name="insert_logs", scope="function")
def insert_logs(
clickhouse: types.TestContainerClickhouse,
) -> Generator[Callable[[List[Logs]], None], Any, None]:
def _insert_logs(logs: List[Logs]) -> None:
) -> Generator[Callable[[list[Logs]], None], Any]:
def _insert_logs(logs: list[Logs]) -> None:
insert_logs_to_clickhouse(clickhouse.conn, logs)
yield _insert_logs
@@ -515,8 +498,8 @@ def insert_logs(
@pytest.fixture(name="materialize_log_field", scope="function")
def materialize_log_field(
signoz: types.SigNoz,
) -> Generator[Callable[[str, str, str, str], None], None, None]:
mat_fields: List[tuple[str, str, str]] = []
) -> Generator[Callable[[str, str, str, str], None]]:
mat_fields: list[tuple[str, str, str]] = []
def _materialize_log_field(
token: str,
@@ -535,10 +518,7 @@ def materialize_log_field(
},
timeout=10,
)
assert response.status_code == HTTPStatus.OK, (
f"Failed to materialize log field {name}: "
f"{response.status_code} {response.text}"
)
assert response.status_code == HTTPStatus.OK, f"Failed to materialize log field {name}: {response.status_code} {response.text}"
mat_fields.append((field_type, data_type, name))
yield _materialize_log_field
@@ -548,16 +528,10 @@ def materialize_log_field(
if mat_field_type == "resources":
mat_field_type = "resource"
field = f"{mat_field_type}_{mat_field_data_type}_{mat_field_name}"
signoz.telemetrystore.conn.query(
f"ALTER TABLE signoz_logs.logs_v2 ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}' DROP INDEX IF EXISTS {field}_idx"
)
signoz.telemetrystore.conn.query(f"ALTER TABLE signoz_logs.logs_v2 ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}' DROP INDEX IF EXISTS {field}_idx")
for table in ["logs_v2", "distributed_logs_v2"]:
signoz.telemetrystore.conn.query(
f"ALTER TABLE signoz_logs.{table} ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}' DROP COLUMN IF EXISTS {field}"
)
signoz.telemetrystore.conn.query(
f"ALTER TABLE signoz_logs.{table} ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}' DROP COLUMN IF EXISTS {field}_exists"
)
signoz.telemetrystore.conn.query(f"ALTER TABLE signoz_logs.{table} ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}' DROP COLUMN IF EXISTS {field}")
signoz.telemetrystore.conn.query(f"ALTER TABLE signoz_logs.{table} ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}' DROP COLUMN IF EXISTS {field}_exists")
@pytest.fixture(name="ttl_legacy_logs_v2_table_setup", scope="function")
@@ -569,20 +543,14 @@ def ttl_legacy_logs_v2_table_setup(request, signoz: types.SigNoz):
"""
# Setup code
result = signoz.telemetrystore.conn.query(
f"RENAME TABLE signoz_logs.logs_v2 TO signoz_logs.logs_v2_backup ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'"
).result_rows
result = signoz.telemetrystore.conn.query(f"RENAME TABLE signoz_logs.logs_v2 TO signoz_logs.logs_v2_backup ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'").result_rows
assert result is not None
# Add cleanup to restore original table
request.addfinalizer(
lambda: signoz.telemetrystore.conn.query(
f"RENAME TABLE signoz_logs.logs_v2_backup TO signoz_logs.logs_v2 ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'"
)
)
request.addfinalizer(lambda: signoz.telemetrystore.conn.query(f"RENAME TABLE signoz_logs.logs_v2_backup TO signoz_logs.logs_v2 ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'"))
# Create new test tables
result = signoz.telemetrystore.conn.query(
f"""CREATE TABLE signoz_logs.logs_v2 ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'
f"""CREATE TABLE signoz_logs.logs_v2 ON CLUSTER '{signoz.telemetrystore.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER"]}'
(
`id` String,
`timestamp` UInt64 CODEC(DoubleDelta, LZ4)
@@ -594,11 +562,7 @@ def ttl_legacy_logs_v2_table_setup(request, signoz: types.SigNoz):
assert result is not None
# Add cleanup to drop test table
request.addfinalizer(
lambda: signoz.telemetrystore.conn.query(
f"DROP TABLE IF EXISTS signoz_logs.logs_v2 ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'"
)
)
request.addfinalizer(lambda: signoz.telemetrystore.conn.query(f"DROP TABLE IF EXISTS signoz_logs.logs_v2 ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'"))
yield # Test runs here
@@ -612,20 +576,14 @@ def ttl_legacy_logs_v2_resource_table_setup(request, signoz: types.SigNoz):
"""
# Setup code
result = signoz.telemetrystore.conn.query(
f"RENAME TABLE signoz_logs.logs_v2_resource TO signoz_logs.logs_v2_resource_backup ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'"
).result_rows
result = signoz.telemetrystore.conn.query(f"RENAME TABLE signoz_logs.logs_v2_resource TO signoz_logs.logs_v2_resource_backup ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'").result_rows
assert result is not None
# Add cleanup to restore original table
request.addfinalizer(
lambda: signoz.telemetrystore.conn.query(
f"RENAME TABLE signoz_logs.logs_v2_resource_backup TO signoz_logs.logs_v2_resource ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'"
)
)
request.addfinalizer(lambda: signoz.telemetrystore.conn.query(f"RENAME TABLE signoz_logs.logs_v2_resource_backup TO signoz_logs.logs_v2_resource ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'"))
# Create new test tables
result = signoz.telemetrystore.conn.query(
f"""CREATE TABLE signoz_logs.logs_v2_resource ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'
f"""CREATE TABLE signoz_logs.logs_v2_resource ON CLUSTER '{signoz.telemetrystore.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER"]}'
(
`id` String,
`seen_at_ts_bucket_start` Int64 CODEC(Delta(8), ZSTD(1))
@@ -636,11 +594,7 @@ def ttl_legacy_logs_v2_resource_table_setup(request, signoz: types.SigNoz):
assert result is not None
# Add cleanup to drop test table
request.addfinalizer(
lambda: signoz.telemetrystore.conn.query(
f"DROP TABLE IF EXISTS signoz_logs.logs_v2_resource ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}';"
)
)
request.addfinalizer(lambda: signoz.telemetrystore.conn.query(f"DROP TABLE IF EXISTS signoz_logs.logs_v2_resource ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}';"))
yield # Test runs here
@@ -661,7 +615,6 @@ def remove_logs_ttl_settings(signoz: types.SigNoz):
"logs_resource_keys",
]
for table in tables:
try:
# Reset _retention_days and _retention_days_cold default values to 0 for tables that have these columns
if table in [
@@ -671,19 +624,19 @@ def remove_logs_ttl_settings(signoz: types.SigNoz):
"distributed_logs_v2_resource",
]:
reset_retention_query = f"""
ALTER TABLE signoz_logs.{table} ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'
ALTER TABLE signoz_logs.{table} ON CLUSTER '{signoz.telemetrystore.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER"]}'
MODIFY COLUMN _retention_days UInt16 DEFAULT 0
"""
signoz.telemetrystore.conn.query(reset_retention_query)
reset_retention_cold_query = f"""
ALTER TABLE signoz_logs.{table} ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'
ALTER TABLE signoz_logs.{table} ON CLUSTER '{signoz.telemetrystore.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER"]}'
MODIFY COLUMN _retention_days_cold UInt16 DEFAULT 0
"""
signoz.telemetrystore.conn.query(reset_retention_cold_query)
else:
alter_query = f"""
ALTER TABLE signoz_logs.{table} ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}'
ALTER TABLE signoz_logs.{table} ON CLUSTER '{signoz.telemetrystore.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER"]}'
REMOVE TTL
"""
signoz.telemetrystore.conn.query(alter_query)
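One detail worth noticing in the `logs.py` diff: `LogsResource` serializes labels with `json.dumps(labels, separators=(",", ":"))` because, per the inline comment, ClickHouse's JSON functions treat `{"a": "b"}` (with a space) differently from `{"a":"b"}`. A minimal demonstration of the two renderings, using sample labels:

```python
import json

labels = {"service.name": "api", "env": "prod"}

# Default separators are (", ", ": "), which insert a space after each
# colon — the form the logs.py comment says ClickHouse mishandles.
default_form = json.dumps(labels)

# Compact separators produce the form ClickHouse JSON functions expect.
compact_form = json.dumps(labels, separators=(",", ":"))
```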


@@ -1,7 +1,8 @@
import hashlib
import json
from collections.abc import Callable, Generator
from datetime import datetime, timedelta
from typing import Any, Callable, Generator, List
from typing import Any
import numpy as np
import pytest
@@ -44,9 +45,7 @@ class MeterSample:
self.value = np.float64(value)
fingerprint_str = metric_name + self.labels
self.fingerprint = np.uint64(
int(hashlib.md5(fingerprint_str.encode()).hexdigest()[:16], 16)
)
self.fingerprint = np.uint64(int(hashlib.md5(fingerprint_str.encode()).hexdigest()[:16], 16))
def to_samples_row(self) -> list:
return [
@@ -70,7 +69,7 @@ def make_meter_samples(
count: int = 60,
base_value: float = 100.0,
**kwargs,
) -> List[MeterSample]:
) -> list[MeterSample]:
samples = []
for i in range(count):
ts = now - timedelta(minutes=count - i)
@@ -89,8 +88,8 @@ def make_meter_samples(
@pytest.fixture(name="insert_meter_samples", scope="function")
def insert_meter_samples(
clickhouse: types.TestContainerClickhouse,
) -> Generator[Callable[[List[MeterSample]], None], Any, None]:
def _insert_meter_samples(samples: List[MeterSample]) -> None:
) -> Generator[Callable[[list[MeterSample]], None], Any]:
def _insert_meter_samples(samples: list[MeterSample]) -> None:
if len(samples) == 0:
return
@@ -116,6 +115,4 @@ def insert_meter_samples(
cluster = clickhouse.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER"]
for table in ["samples", "samples_agg_1d"]:
clickhouse.conn.query(
f"TRUNCATE TABLE signoz_meter.{table} ON CLUSTER '{cluster}' SYNC"
)
clickhouse.conn.query(f"TRUNCATE TABLE signoz_meter.{table} ON CLUSTER '{cluster}' SYNC")
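The fingerprint scheme reformatted above truncates an MD5 hex digest to 16 characters before converting to an integer. A standalone sketch of why that always fits the `np.uint64` cast: 16 hex digits are exactly 64 bits.

```python
import hashlib

# Minimal sketch of the fingerprint computation used by MeterSample above:
# the first 16 hex digits of an MD5 digest are 64 bits wide.
def fingerprint(metric_name: str, labels: str) -> int:
    digest = hashlib.md5((metric_name + labels).encode()).hexdigest()
    return int(digest[:16], 16)

fp = fingerprint("http.server.request.count", '{"service":"api"}')
assert 0 <= fp < 2**64
```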


@@ -2,7 +2,8 @@ import datetime
import hashlib
import json
from abc import ABC
from typing import Any, Callable, Generator, List, Optional
from collections.abc import Callable, Generator
from typing import Any
import numpy as np
import pytest
@@ -63,9 +64,7 @@ class MetricsTimeSeries(ABC):
# Calculate fingerprint from metric_name + labels
fingerprint_str = metric_name + self.labels
self.fingerprint = np.uint64(
int(hashlib.md5(fingerprint_str.encode()).hexdigest()[:16], 16)
)
self.fingerprint = np.uint64(int(hashlib.md5(fingerprint_str.encode()).hexdigest()[:16], 16))
def to_row(self) -> list:
return [
@@ -267,7 +266,7 @@ class Metrics(ABC):
self,
metric_name: str,
labels: dict[str, str] = {},
timestamp: Optional[datetime.datetime] = None,
timestamp: datetime.datetime | None = None,
value: float = 0.0,
temporality: str = "Unspecified",
flags: int = 0,
@@ -334,7 +333,7 @@ class Metrics(ABC):
cls,
data: dict,
# base_time: Optional[datetime.datetime] = None,
metric_name_override: Optional[str] = None,
metric_name_override: str | None = None,
) -> "Metrics":
"""
Create a Metrics instance from a dict.
@@ -368,9 +367,9 @@ class Metrics(ABC):
def load_from_file(
cls,
file_path: str,
base_time: Optional[datetime.datetime] = None,
metric_name_override: Optional[str] = None,
) -> List["Metrics"]:
base_time: datetime.datetime | None = None,
metric_name_override: str | None = None,
) -> list["Metrics"]:
"""
Load metrics from a JSONL file.
@@ -383,7 +382,7 @@ class Metrics(ABC):
metric_name_override: If provided, overrides metric_name for all metrics
"""
data_list = []
with open(file_path, "r", encoding="utf-8") as f:
with open(file_path, encoding="utf-8") as f:
for line in f:
line = line.strip()
if not line:
@@ -410,14 +409,12 @@ class Metrics(ABC):
original_ts = parse_timestamp(data["timestamp"])
adjusted_ts = original_ts + time_offset
data["timestamp"] = adjusted_ts.isoformat()
metrics.append(
cls.from_dict(data, metric_name_override=metric_name_override)
)
metrics.append(cls.from_dict(data, metric_name_override=metric_name_override))
return metrics
def insert_metrics_to_clickhouse(conn, metrics: List[Metrics]) -> None:
def insert_metrics_to_clickhouse(conn, metrics: list[Metrics]) -> None:
"""
Insert metrics into ClickHouse tables.
Handles insertion into:
@@ -567,8 +564,8 @@ def truncate_metrics_tables(conn, cluster: str) -> None:
@pytest.fixture(name="insert_metrics", scope="function")
def insert_metrics(
clickhouse: types.TestContainerClickhouse,
) -> Generator[Callable[[List[Metrics]], None], Any, None]:
def _insert_metrics(metrics: List[Metrics]) -> None:
) -> Generator[Callable[[list[Metrics]], None], Any]:
def _insert_metrics(metrics: list[Metrics]) -> None:
insert_metrics_to_clickhouse(clickhouse.conn, metrics)
yield _insert_metrics
@@ -600,11 +597,7 @@ def remove_metrics_ttl_and_storage_settings(signoz: types.SigNoz):
cluster = signoz.telemetrystore.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER"]
for table in tables:
try:
signoz.telemetrystore.conn.query(
f"ALTER TABLE signoz_metrics.{table} ON CLUSTER '{cluster}' REMOVE TTL"
)
signoz.telemetrystore.conn.query(
f"ALTER TABLE signoz_metrics.{table} ON CLUSTER '{cluster}' RESET SETTING storage_policy;"
)
signoz.telemetrystore.conn.query(f"ALTER TABLE signoz_metrics.{table} ON CLUSTER '{cluster}' REMOVE TTL")
signoz.telemetrystore.conn.query(f"ALTER TABLE signoz_metrics.{table} ON CLUSTER '{cluster}' RESET SETTING storage_policy;")
except Exception as e: # pylint: disable=broad-exception-caught
print(f"ttl and storage policy reset failed for {table}: {e}")


@@ -1,8 +1,8 @@
import math
import random
from collections.abc import Generator
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Dict, Generator, List, Optional, Tuple
from datetime import UTC, datetime, timedelta
from fixtures.metrics import Metrics
@@ -10,7 +10,7 @@ from fixtures.metrics import Metrics
@dataclass
class DeterministicAnomalies:
# resets: (offset)
COUNTER_RESET_OFFSETS: List[timedelta] = field(
COUNTER_RESET_OFFSETS: list[timedelta] = field(
default_factory=lambda: [
timedelta(hours=6),
timedelta(hours=24),
@@ -18,7 +18,7 @@ class DeterministicAnomalies:
)
# gaps: (offset, duration)
DATA_GAP_OFFSETS: List[Tuple[timedelta, timedelta]] = field(
DATA_GAP_OFFSETS: list[tuple[timedelta, timedelta]] = field(
default_factory=lambda: [
(timedelta(hours=12), timedelta(minutes=30)), # 30 min gap
(timedelta(hours=36), timedelta(minutes=15)), # 15 min gap
@@ -26,14 +26,14 @@ class DeterministicAnomalies:
)
# spikes: (offset, multiplier)
SPIKE_OFFSETS: List[Tuple[timedelta, float]] = field(
SPIKE_OFFSETS: list[tuple[timedelta, float]] = field(
default_factory=lambda: [
(timedelta(hours=18), 10.0), # at this point the value spikes 10x
]
)
# 0 value offsets
ZERO_VALUE_OFFSETS: List[timedelta] = field(
ZERO_VALUE_OFFSETS: list[timedelta] = field(
default_factory=lambda: [
timedelta(hours=30), # at this point the value drops to zero
]
@@ -83,7 +83,7 @@ class MetricsDataGenerator:
duration = (self.end - self.start).total_seconds()
self.num_points = int(duration / step_seconds)
def _timestamps(self) -> Generator[datetime, None, None]:
def _timestamps(self) -> Generator[datetime]:
current = self.start
while current < self.end:
yield current
@@ -92,16 +92,12 @@ class MetricsDataGenerator:
def _offset_from_start(self, ts: datetime) -> timedelta:
return ts - self.start
def steady_with_noise(
self, base: float, noise_pct: float = 0.1
) -> Generator[float, None, None]:
def steady_with_noise(self, base: float, noise_pct: float = 0.1) -> Generator[float]:
for _ in range(self.num_points):
noise = self.rng.uniform(-noise_pct, noise_pct) * base
yield base + noise
def diurnal_pattern(
self, min_val: float, max_val: float
) -> Generator[float, None, None]:
def diurnal_pattern(self, min_val: float, max_val: float) -> Generator[float]:
amplitude = (max_val - min_val) / 2
baseline = min_val + amplitude
@@ -117,9 +113,7 @@ class MetricsDataGenerator:
if i >= self.num_points - 1:
break
def monotonic_increasing(
self, rate_per_minute: float, noise_pct: float = 0.1
) -> Generator[float, None, None]:
def monotonic_increasing(self, rate_per_minute: float, noise_pct: float = 0.1) -> Generator[float]:
value = 0.0
for ts in self._timestamps():
offset = self._offset_from_start(ts)
@@ -139,7 +133,7 @@ class MetricsDataGenerator:
base: float,
spike_factor: float = 10.0,
spike_interval_minutes: int = 60,
) -> Generator[float, None, None]:
) -> Generator[float]:
for i, ts in enumerate(self._timestamps()):
offset = self._offset_from_start(ts)
@@ -160,7 +154,7 @@ class MetricsDataGenerator:
self,
base: float,
noise_pct: float = 0.1,
) -> Generator[float, None, None]:
) -> Generator[float]:
for ts in self._timestamps():
offset = self._offset_from_start(ts)
@@ -170,9 +164,7 @@ class MetricsDataGenerator:
noise = self.rng.uniform(-noise_pct, noise_pct) * base
yield max(0, base + noise)
def latency_distribution(
self, p50_ms: float, p99_ms: float
) -> Generator[List[float], None, None]:
def latency_distribution(self, p50_ms: float, p99_ms: float) -> Generator[list[float]]:
# see otel defaults (in ms)
# https://opentelemetry.io/docs/specs/otel/metrics/sdk/#explicit-bucket-histogram-aggregation
buckets = [5, 10, 25, 50, 75, 100, 250, 500, 750, 1000, 2500, 5000, 7500, 10000]
@@ -209,9 +201,9 @@ class MetricsDataGenerator:
def generate_gauge_metrics(
self,
metric_name: str = "system.cpu.utilization",
services: List[str] = None,
hosts: List[str] = None,
) -> List[Metrics]:
services: list[str] = None,
hosts: list[str] = None,
) -> list[Metrics]:
if services is None:
services = ["frontend", "backend", "worker", "database"]
if hosts is None:
@@ -249,10 +241,10 @@ class MetricsDataGenerator:
def generate_cumulative_counter_metrics(
self,
metric_name: str = "http.server.request.count",
services: List[str] = None,
endpoints: List[str] = None,
status_codes: List[str] = None,
) -> List[Metrics]:
services: list[str] = None,
endpoints: list[str] = None,
status_codes: list[str] = None,
) -> list[Metrics]:
if services is None:
services = ["api", "web", "auth"]
if endpoints is None:
@@ -304,10 +296,10 @@ class MetricsDataGenerator:
def generate_delta_counter_metrics(
self,
metric_name: str = "http.server.request.delta",
services: List[str] = None,
endpoints: List[str] = None,
status_codes: List[str] = None,
) -> List[Metrics]:
services: list[str] = None,
endpoints: list[str] = None,
status_codes: list[str] = None,
) -> list[Metrics]:
if services is None:
services = ["api", "web", "auth"]
if endpoints is None:
@@ -359,9 +351,9 @@ class MetricsDataGenerator:
def generate_connection_gauge_metrics(
self,
metric_name: str = "db.client.connections",
services: List[str] = None,
pool_names: List[str] = None,
) -> List[Metrics]:
services: list[str] = None,
pool_names: list[str] = None,
) -> list[Metrics]:
if services is None:
services = ["backend", "worker"]
if pool_names is None:
@@ -398,16 +390,14 @@ class MetricsDataGenerator:
def generate_gc_duration_metrics(
self,
metric_name: str = "process.runtime.gc.duration",
services: List[str] = None,
) -> List[Metrics]:
services: list[str] = None,
) -> list[Metrics]:
if services is None:
services = ["frontend", "backend", "worker", "api"]
metrics = []
for service in services:
values = list[float](
self.spike_pattern(5.0, 10.0, 30)
) # bump every 30 minutes (to simulate real pattern)
values = list[float](self.spike_pattern(5.0, 10.0, 30)) # bump every 30 minutes (to simulate real pattern)
labels = {"service_name": service}
for i, ts in enumerate[datetime](self._timestamps()):
@@ -431,7 +421,7 @@ class MetricsDataGenerator:
return metrics
def generate_all_metrics(self) -> Dict[str, List[Metrics]]:
def generate_all_metrics(self) -> dict[str, list[Metrics]]:
return {
"gauge": self.generate_gauge_metrics(),
"cumulative_counter": self.generate_cumulative_counter_metrics(),
@@ -440,7 +430,7 @@ class MetricsDataGenerator:
"gc_duration": self.generate_gc_duration_metrics(),
}
def generate_flat_metrics(self) -> List[Metrics]:
def generate_flat_metrics(self) -> list[Metrics]:
all_metrics = []
for metrics_list in self.generate_all_metrics().values():
all_metrics.extend(metrics_list)
@@ -452,7 +442,7 @@ def create_test_data_generator(
step_seconds: int = 60,
seed: int = 42,
) -> MetricsDataGenerator:
now = datetime.now(tz=timezone.utc).replace(second=0, microsecond=0)
now = datetime.now(tz=UTC).replace(second=0, microsecond=0)
start = now - timedelta(hours=duration_hours)
return MetricsDataGenerator(
start_time=start,
@@ -464,13 +454,13 @@ def create_test_data_generator(
def generate_simple_gauge_series(
metric_name: str,
labels: Dict[str, str],
values: List[float],
start_time: Optional[datetime] = None,
labels: dict[str, str],
values: list[float],
start_time: datetime | None = None,
step_seconds: int = 60,
) -> List[Metrics]:
) -> list[Metrics]:
if start_time is None:
start_time = datetime.now(tz=timezone.utc).replace(second=0, microsecond=0)
start_time = datetime.now(tz=UTC).replace(second=0, microsecond=0)
metrics = []
for i, value in enumerate(values):
@@ -491,14 +481,14 @@ def generate_simple_gauge_series(
def generate_simple_counter_series(
metric_name: str,
labels: Dict[str, str],
values: List[float],
labels: dict[str, str],
values: list[float],
temporality: str = "Cumulative",
start_time: Optional[datetime] = None,
start_time: datetime | None = None,
step_seconds: int = 60,
) -> List[Metrics]:
) -> list[Metrics]:
if start_time is None:
start_time = datetime.now(tz=timezone.utc).replace(second=0, microsecond=0)
start_time = datetime.now(tz=UTC).replace(second=0, microsecond=0)
metrics = []
for i, value in enumerate(values):
@@ -519,14 +509,14 @@ def generate_simple_counter_series(
def generate_non_monotonic_sum_series(
metric_name: str,
labels: Dict[str, str],
values: List[float],
labels: dict[str, str],
values: list[float],
temporality: str = "Cumulative",
start_time: Optional[datetime] = None,
start_time: datetime | None = None,
step_seconds: int = 60,
) -> List[Metrics]:
) -> list[Metrics]:
if start_time is None:
start_time = datetime.now(tz=timezone.utc).replace(second=0, microsecond=0)
start_time = datetime.now(tz=UTC).replace(second=0, microsecond=0)
metrics = []
for i, value in enumerate(values):
@@ -547,15 +537,15 @@ def generate_non_monotonic_sum_series(
def generate_counter_with_resets(
metric_name: str,
labels: Dict[str, str],
labels: dict[str, str],
num_points: int,
rate_per_point: float,
reset_at_indices: List[int],
start_time: Optional[datetime] = None,
reset_at_indices: list[int],
start_time: datetime | None = None,
step_seconds: int = 60,
) -> List[Metrics]:
) -> list[Metrics]:
if start_time is None:
start_time = datetime.now(tz=timezone.utc).replace(second=0, microsecond=0)
start_time = datetime.now(tz=UTC).replace(second=0, microsecond=0)
metrics = []
value = 0.0
@@ -582,16 +572,16 @@ def generate_counter_with_resets(
def generate_sparse_series(
metric_name: str,
labels: Dict[str, str],
values_at_indices: Dict[int, float],
labels: dict[str, str],
values_at_indices: dict[int, float],
total_points: int,
temporality: str = "Unspecified",
type_: str = "Gauge",
start_time: Optional[datetime] = None,
start_time: datetime | None = None,
step_seconds: int = 60,
) -> List[Metrics]:
) -> list[Metrics]:
if start_time is None:
start_time = datetime.now(tz=timezone.utc).replace(second=0, microsecond=0)
start_time = datetime.now(tz=UTC).replace(second=0, microsecond=0)
metrics = []
for i, value in values_at_indices.items():
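`generate_counter_with_resets` drops a cumulative counter back toward zero at the given indices. A sketch (not part of the fixtures above) of how a consumer would recover the true increase from such a series, PromQL-style: whenever a sample is lower than its predecessor, the whole new value counts as fresh increase.

```python
# Hypothetical consumer-side sketch: compute total increase over a cumulative
# counter series that may reset. A sample lower than its predecessor signals
# a reset, so its full value is counted as new increase.
def total_increase(samples: list[float]) -> float:
    increase = 0.0
    prev = None
    for value in samples:
        if prev is None or value < prev:
            increase += value  # first sample or counter reset
        else:
            increase += value - prev
        prev = value
    return increase

assert total_increase([0, 10, 20, 5, 15]) == 35.0  # 20 gained, reset, 15 gained
```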


@@ -25,7 +25,7 @@ def migrator(
container = client.containers.run(
image=f"signoz/signoz-schema-migrator:{version}",
command=f"sync --replication=true --cluster-name=cluster --up= --dsn={clickhouse.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN"]}",
command=f"sync --replication=true --cluster-name=cluster --up= --dsn={clickhouse.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN']}",
detach=True,
auto_remove=False,
network=network.id,
@@ -43,7 +43,7 @@ def migrator(
container = client.containers.run(
image=f"signoz/signoz-schema-migrator:{version}",
command=f"async --replication=true --cluster-name=cluster --up= --dsn={clickhouse.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN"]}",
command=f"async --replication=true --cluster-name=cluster --up= --dsn={clickhouse.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN']}",
detach=True,
auto_remove=False,
network=network.id,
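The migrator fix above is a portability change: reusing double quotes inside a double-quoted f-string only became legal with PEP 701 (Python 3.12), so the inner subscript switches to single quotes. A minimal illustration with a placeholder DSN:

```python
# Placeholder env dict for illustration; the real value comes from the
# clickhouse test container. Single quotes inside a double-quoted f-string
# parse on every supported Python; nested double quotes need 3.12+ (PEP 701).
env = {"SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN": "tcp://clickhouse:9000"}

command = f"sync --replication=true --cluster-name=cluster --up= --dsn={env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN']}"
assert command.endswith("--dsn=tcp://clickhouse:9000")
```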


@@ -10,9 +10,7 @@ logger = setup_logger(__name__)
@pytest.fixture(name="network", scope="package")
def network(
request: pytest.FixtureRequest, pytestconfig: pytest.Config
) -> types.Network:
def network(request: pytest.FixtureRequest, pytestconfig: pytest.Config) -> types.Network:
"""
Package-scoped fixture for creating a network.
"""


@@ -1,5 +1,5 @@
from collections.abc import Callable
from http import HTTPStatus
from typing import Callable
import docker
import docker.errors
@@ -39,11 +39,7 @@ def notification_channel(
container.get_exposed_port(8080),
)
},
container_configs={
"8080": types.TestContainerUrlConfig(
"http", container.get_wrapped_container().name, 8080
)
},
container_configs={"8080": types.TestContainerUrlConfig("http", container.get_wrapped_container().name, 8080)},
)
def delete(container: types.TestContainerDocker):
@@ -101,11 +97,7 @@ def create_webhook_notification_channel(
headers={"Authorization": f"Bearer {admin_token}"},
timeout=5,
)
assert response.status_code == HTTPStatus.CREATED, (
f"Failed to create channel, "
f"Response: {response.text} "
f"Response status: {response.status_code}"
)
assert response.status_code == HTTPStatus.CREATED, f"Failed to create channel, Response: {response.text} Response status: {response.status_code}"
channel_id = response.json()["data"]["id"]
return channel_id


@@ -12,9 +12,7 @@ logger = setup_logger(__name__)
@pytest.fixture(name="postgres", scope="package")
def postgres(
network: Network, request: pytest.FixtureRequest, pytestconfig: pytest.Config
) -> types.TestContainerSQL:
def postgres(network: Network, request: pytest.FixtureRequest, pytestconfig: pytest.Config) -> types.TestContainerSQL:
"""
Package-scoped fixture for PostgreSQL TestContainer.
"""
@@ -33,9 +31,7 @@ def postgres(
container.with_network(network)
container.start()
engine = create_engine(
f"postgresql+psycopg2://{container.username}:{container.password}@{container.get_container_host_ip()}:{container.get_exposed_port(5432)}/{container.dbname}"
)
engine = create_engine(f"postgresql+psycopg2://{container.username}:{container.password}@{container.get_container_host_ip()}:{container.get_exposed_port(5432)}/{container.dbname}")
with engine.connect() as conn:
result = conn.execute(sql.text("SELECT 1"))
@@ -51,11 +47,7 @@ def postgres(
container.get_exposed_port(5432),
)
},
container_configs={
"5432": types.TestContainerUrlConfig(
"postgresql", container.get_wrapped_container().name, 5432
)
},
container_configs={"5432": types.TestContainerUrlConfig("postgresql", container.get_wrapped_container().name, 5432)},
),
conn=engine,
env={
@@ -83,9 +75,7 @@ def postgres(
host_config = container.host_configs["5432"]
env = cache["env"]
engine = create_engine(
f"postgresql+psycopg2://{env['SIGNOZ_SQLSTORE_POSTGRES_USER']}:{env['SIGNOZ_SQLSTORE_POSTGRES_PASSWORD']}@{host_config.address}:{host_config.port}/{env['SIGNOZ_SQLSTORE_POSTGRES_DBNAME']}"
)
engine = create_engine(f"postgresql+psycopg2://{env['SIGNOZ_SQLSTORE_POSTGRES_USER']}:{env['SIGNOZ_SQLSTORE_POSTGRES_PASSWORD']}@{host_config.address}:{host_config.port}/{env['SIGNOZ_SQLSTORE_POSTGRES_DBNAME']}")
with engine.connect() as conn:
result = conn.execute(sql.text("SELECT 1"))
@@ -102,9 +92,7 @@ def postgres(
pytestconfig,
"postgres",
lambda: types.TestContainerSQL(
container=types.TestContainerDocker(
id="", host_configs={}, container_configs={}
),
container=types.TestContainerDocker(id="", host_configs={}, container_configs={}),
conn=None,
env={},
),
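The collapsed f-string DSNs above interpolate credentials verbatim, which breaks if a password contains `@`, `:` or `/`. A stdlib sketch of the safer pattern (SQLAlchemy's `URL.create` does this escaping for you; this just shows the idea):

```python
from urllib.parse import quote

# Sketch: percent-encode credentials before interpolating them into a DSN,
# guarding against reserved characters in the password.
def postgres_dsn(user: str, password: str, host: str, port: int, db: str) -> str:
    return (
        f"postgresql+psycopg2://{quote(user, safe='')}:"
        f"{quote(password, safe='')}@{host}:{port}/{db}"
    )

dsn = postgres_dsn("signoz", "p@ss:word", "localhost", 5432, "signoz")
assert dsn == "postgresql+psycopg2://signoz:p%40ss%3Aword@localhost:5432/signoz"
```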


@@ -1,7 +1,7 @@
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from datetime import UTC, datetime, timedelta
from http import HTTPStatus
from typing import Any, Dict, List, Optional, Union
from typing import Any
import requests
@@ -17,10 +17,10 @@ QUERY_TIMEOUT = 30 # seconds
@dataclass
class TelemetryFieldKey:
name: str
field_data_type: Optional[str] = None
field_context: Optional[str] = None
field_data_type: str | None = None
field_context: str | None = None
def to_dict(self) -> Dict:
def to_dict(self) -> dict:
return {
"name": self.name,
"fieldDataType": self.field_data_type,
@@ -33,7 +33,7 @@ class OrderBy:
key: TelemetryFieldKey
direction: str = "asc"
def to_dict(self) -> Dict:
def to_dict(self) -> dict:
return {"key": self.key.to_dict(), "direction": self.direction}
@@ -41,14 +41,14 @@ class OrderBy:
class BuilderQuery:
signal: str
name: str = "A"
source: Optional[str] = None
limit: Optional[int] = None
filter_expression: Optional[str] = None
select_fields: Optional[List[TelemetryFieldKey]] = None
order: Optional[List[OrderBy]] = None
source: str | None = None
limit: int | None = None
filter_expression: str | None = None
select_fields: list[TelemetryFieldKey] | None = None
order: list[OrderBy] | None = None
def to_dict(self) -> Dict:
spec: Dict[str, Any] = {
def to_dict(self) -> dict:
spec: dict[str, Any] = {
"signal": self.signal,
"name": self.name,
}
@@ -61,9 +61,7 @@ class BuilderQuery:
if self.select_fields:
spec["selectFields"] = [f.to_dict() for f in self.select_fields]
if self.order:
spec["order"] = [
o.to_dict() if hasattr(o, "to_dict") else o for o in self.order
]
spec["order"] = [o.to_dict() if hasattr(o, "to_dict") else o for o in self.order]
return {"type": "builder_query", "spec": spec}
@@ -72,11 +70,11 @@ class TraceOperatorQuery:
name: str
expression: str
return_spans_from: str
limit: Optional[int] = None
order: Optional[List[OrderBy]] = None
limit: int | None = None
order: list[OrderBy] | None = None
def to_dict(self) -> Dict:
spec: Dict[str, Any] = {
def to_dict(self) -> dict:
spec: dict[str, Any] = {
"name": self.name,
"expression": self.expression,
"returnSpansFrom": self.return_spans_from,
@@ -84,9 +82,7 @@ class TraceOperatorQuery:
if self.limit is not None:
spec["limit"] = self.limit
if self.order:
spec["order"] = [
o.to_dict() if hasattr(o, "to_dict") else o for o in self.order
]
spec["order"] = [o.to_dict() if hasattr(o, "to_dict") else o for o in self.order]
return {"type": "builder_trace_operator", "spec": spec}
@@ -94,11 +90,11 @@ class TraceOperatorQuery:
class QueryRangeRequest:
start: int # nanoseconds
end: int # nanoseconds
queries: List[Union[BuilderQuery, TraceOperatorQuery]]
request_type: Optional[str] = "raw"
queries: list[BuilderQuery | TraceOperatorQuery]
request_type: str | None = "raw"
def to_dict(self) -> Dict:
body: Dict[str, Any] = {
def to_dict(self) -> dict:
body: dict[str, Any] = {
"start": self.start,
"end": self.end,
"compositeQuery": {
@@ -115,11 +111,11 @@ def make_query_request(
token: str,
start_ms: int,
end_ms: int,
queries: List[Dict],
queries: list[dict],
*,
request_type: str = "time_series",
format_options: Optional[Dict] = None,
variables: Optional[Dict] = None,
format_options: dict | None = None,
variables: dict | None = None,
no_cache: bool = True,
timeout: int = QUERY_TIMEOUT,
) -> requests.Response:
@@ -152,16 +148,16 @@ def build_builder_query(
time_aggregation: str,
space_aggregation: str,
*,
comparisonSpaceAggregationParam: Optional[Dict] = None,
temporality: Optional[str] = None,
source: Optional[str] = None,
comparisonSpaceAggregationParam: dict | None = None,
temporality: str | None = None,
source: str | None = None,
step_interval: int = DEFAULT_STEP_INTERVAL,
group_by: Optional[List[str]] = None,
filter_expression: Optional[str] = None,
functions: Optional[List[Dict]] = None,
group_by: list[str] | None = None,
filter_expression: str | None = None,
functions: list[dict] | None = None,
disabled: bool = False,
) -> Dict:
spec: Dict[str, Any] = {
) -> dict:
spec: dict[str, Any] = {
"name": name,
"signal": "metrics",
"aggregations": [
@@ -179,9 +175,7 @@ def build_builder_query(
if temporality:
spec["aggregations"][0]["temporality"] = temporality
if comparisonSpaceAggregationParam:
spec["aggregations"][0][
"comparisonSpaceAggregationParam"
] = comparisonSpaceAggregationParam
spec["aggregations"][0]["comparisonSpaceAggregationParam"] = comparisonSpaceAggregationParam
if group_by:
spec["groupBy"] = [
{
@@ -203,10 +197,10 @@ def build_formula_query(
name: str,
expression: str,
*,
functions: Optional[List[Dict]] = None,
functions: list[dict] | None = None,
disabled: bool = False,
) -> Dict:
spec: Dict[str, Any] = {
) -> dict:
spec: dict[str, Any] = {
"name": name,
"expression": expression,
"disabled": disabled,
@@ -216,14 +210,14 @@ def build_formula_query(
return {"type": "builder_formula", "spec": spec}
def build_function(name: str, *args: Any) -> Dict:
func: Dict[str, Any] = {"name": name}
def build_function(name: str, *args: Any) -> dict:
func: dict[str, Any] = {"name": name}
if args:
func["args"] = [{"value": arg} for arg in args]
return func
def get_series_values(response_json: Dict, query_name: str) -> List[Dict]:
def get_series_values(response_json: dict, query_name: str) -> list[dict]:
results = response_json.get("data", {}).get("data", {}).get("results", [])
result = find_named_result(results, query_name)
if not result:
@@ -238,7 +232,7 @@ def get_series_values(response_json: Dict, query_name: str) -> List[Dict]:
return series[0].get("values", [])
def get_all_series(response_json: Dict, query_name: str) -> List[Dict]:
def get_all_series(response_json: dict, query_name: str) -> list[dict]:
results = response_json.get("data", {}).get("data", {}).get("results", [])
result = find_named_result(results, query_name)
if not result:
@@ -250,18 +244,18 @@ def get_all_series(response_json: Dict, query_name: str) -> List[Dict]:
return aggregations[0].get("series", [])
def get_scalar_value(response_json: Dict, query_name: str) -> Optional[float]:
def get_scalar_value(response_json: dict, query_name: str) -> float | None:
values = get_series_values(response_json, query_name)
if values:
return values[0].get("value")
return None
def get_all_warnings(response_json: Dict) -> List[Dict]:
def get_all_warnings(response_json: dict) -> list[dict]:
return response_json.get("data", {}).get("warning", {}).get("warnings", [])
def get_error_message(response_json: Dict) -> str:
def get_error_message(response_json: dict) -> str:
return response_json.get("error", {}).get("message", "")
@@ -274,8 +268,8 @@ def compare_values(
def compare_series_values(
values1: List[Dict],
values2: List[Dict],
values1: list[dict],
values2: list[dict],
tolerance: float = DEFAULT_TOLERANCE,
) -> bool:
if len(values1) != len(values2):
@@ -293,24 +287,17 @@ def compare_series_values(
def compare_all_series(
series1: List[Dict],
series2: List[Dict],
series1: list[dict],
series2: list[dict],
tolerance: float = DEFAULT_TOLERANCE,
) -> bool:
if len(series1) != len(series2):
return False
# oh my lovely python
def series_key(s: Dict) -> str:
def series_key(s: dict) -> str:
labels = s.get("labels", [])
return str(
sorted(
[
(lbl.get("key", {}).get("name", ""), lbl.get("value", ""))
for lbl in labels
]
)
)
return str(sorted([(lbl.get("key", {}).get("name", ""), lbl.get("value", "")) for lbl in labels]))
sorted1 = sorted(series1, key=series_key)
sorted2 = sorted(series2, key=series_key)
@@ -328,8 +315,8 @@ def compare_all_series(
def assert_results_equal(
result_cached: Dict,
result_no_cache: Dict,
result_cached: dict,
result_no_cache: dict,
query_name: str,
context: str,
tolerance: float = DEFAULT_TOLERANCE,
@@ -340,27 +327,16 @@ def assert_results_equal(
sorted_cached = sorted(values_cached, key=lambda x: x["timestamp"])
sorted_no_cache = sorted(values_no_cache, key=lambda x: x["timestamp"])
assert len(sorted_cached) == len(sorted_no_cache), (
f"{context}: Different number of values. "
f"Cached: {len(sorted_cached)}, No-cache: {len(sorted_no_cache)}\n"
f"Cached timestamps: {[v['timestamp'] for v in sorted_cached]}\n"
f"No-cache timestamps: {[v['timestamp'] for v in sorted_no_cache]}"
)
assert len(sorted_cached) == len(sorted_no_cache), f"{context}: Different number of values. Cached: {len(sorted_cached)}, No-cache: {len(sorted_no_cache)}\nCached timestamps: {[v['timestamp'] for v in sorted_cached]}\nNo-cache timestamps: {[v['timestamp'] for v in sorted_no_cache]}"
for v_cached, v_no_cache in zip(sorted_cached, sorted_no_cache):
assert v_cached["timestamp"] == v_no_cache["timestamp"], (
f"{context}: Timestamp mismatch. "
f"Cached: {v_cached['timestamp']}, No-cache: {v_no_cache['timestamp']}"
)
assert compare_values(v_cached["value"], v_no_cache["value"], tolerance), (
f"{context}: Value mismatch at timestamp {v_cached['timestamp']}. "
f"Cached: {v_cached['value']}, No-cache: {v_no_cache['value']}"
)
assert v_cached["timestamp"] == v_no_cache["timestamp"], f"{context}: Timestamp mismatch. Cached: {v_cached['timestamp']}, No-cache: {v_no_cache['timestamp']}"
assert compare_values(v_cached["value"], v_no_cache["value"], tolerance), f"{context}: Value mismatch at timestamp {v_cached['timestamp']}. Cached: {v_cached['value']}, No-cache: {v_no_cache['value']}"
def assert_all_series_equal(
result_cached: Dict,
result_no_cache: Dict,
result_cached: dict,
result_no_cache: dict,
query_name: str,
context: str,
tolerance: float = DEFAULT_TOLERANCE,
@@ -368,25 +344,21 @@ def assert_all_series_equal(
series_cached = get_all_series(result_cached, query_name)
series_no_cache = get_all_series(result_no_cache, query_name)
assert compare_all_series(
series_cached, series_no_cache, tolerance
), f"{context}: Cached series differ from non-cached series"
assert compare_all_series(series_cached, series_no_cache, tolerance), f"{context}: Cached series differ from non-cached series"
def expected_minutely_bucket_timestamps_ms(now: datetime) -> List[List[int]]:
previous_five = [
int((now - timedelta(minutes=m)).timestamp() * 1000) for m in range(5, 0, -1)
]
def expected_minutely_bucket_timestamps_ms(now: datetime) -> list[list[int]]:
previous_five = [int((now - timedelta(minutes=m)).timestamp() * 1000) for m in range(5, 0, -1)]
with_current = previous_five + [int(now.timestamp() * 1000)]
return [previous_five, with_current]
def assert_minutely_bucket_timestamps(
points: List[Dict[str, Any]],
points: list[dict[str, Any]],
now: datetime,
*,
context: str,
) -> List[int]:
) -> list[int]:
expected = expected_minutely_bucket_timestamps_ms(now)
actual = [p["timestamp"] for p in points]
assert actual in expected, f"Unexpected timestamps for {context}: {actual}"
@@ -394,10 +366,10 @@ def assert_minutely_bucket_timestamps(
def assert_minutely_bucket_values(
points: List[Dict[str, Any]],
points: list[dict[str, Any]],
now: datetime,
*,
expected_by_ts: Dict[int, float],
expected_by_ts: dict[int, float],
context: str,
) -> None:
timestamps = assert_minutely_bucket_timestamps(points, now, context=context)
@@ -406,24 +378,17 @@ def assert_minutely_bucket_values(
for point in points:
ts = point["timestamp"]
assert point["value"] == expected[ts], (
f"Unexpected value for {context} at timestamp={ts}: "
f"got {point['value']}, expected {expected[ts]}"
)
assert point["value"] == expected[ts], f"Unexpected value for {context} at timestamp={ts}: got {point['value']}, expected {expected[ts]}"
def index_series_by_label(
series: List[Dict[str, Any]],
series: list[dict[str, Any]],
label_name: str,
) -> Dict[str, Dict[str, Any]]:
series_by_label: Dict[str, Dict[str, Any]] = {}
) -> dict[str, dict[str, Any]]:
series_by_label: dict[str, dict[str, Any]] = {}
for s in series:
label = next(
(
l
for l in s.get("labels", [])
if l.get("key", {}).get("name") == label_name
),
(l for l in s.get("labels", []) if l.get("key", {}).get("name") == label_name),
None,
)
assert label is not None, f"Expected {label_name} label in series"
@@ -432,17 +397,11 @@ def index_series_by_label(
def find_named_result(
results: List[Dict[str, Any]],
results: list[dict[str, Any]],
name: str,
) -> Optional[Dict[str, Any]]:
) -> dict[str, Any] | None:
return next(
(
r
for r in results
if r.get("name") == name
or r.get("queryName") == name
or (r.get("spec") or {}).get("name") == name
),
(r for r in results if r.get("name") == name or r.get("queryName") == name or (r.get("spec") or {}).get("name") == name),
None,
)
@@ -450,18 +409,18 @@ def find_named_result(
def build_scalar_query(
name: str,
signal: str,
aggregations: List[Dict],
aggregations: list[dict],
*,
source: Optional[str] = None,
group_by: Optional[List[Dict]] = None,
order: Optional[List[Dict]] = None,
limit: Optional[int] = None,
filter_expression: Optional[str] = None,
having_expression: Optional[str] = None,
source: str | None = None,
group_by: list[dict] | None = None,
order: list[dict] | None = None,
limit: int | None = None,
filter_expression: str | None = None,
having_expression: str | None = None,
step_interval: int = DEFAULT_STEP_INTERVAL,
disabled: bool = False,
) -> Dict:
spec: Dict[str, Any] = {
) -> dict:
spec: dict[str, Any] = {
"name": name,
"signal": signal,
"stepInterval": step_interval,
@@ -494,7 +453,7 @@ def build_group_by_field(
name: str,
field_data_type: str = "string",
field_context: str = "resource",
) -> Dict:
) -> dict:
return {
"name": name,
"fieldDataType": field_data_type,
@@ -502,12 +461,12 @@ def build_group_by_field(
}
def build_order_by(name: str, direction: str = "desc") -> dict:
return {"key": {"name": name}, "direction": direction}
def build_logs_aggregation(expression: str, alias: str | None = None) -> dict:
agg: dict[str, Any] = {"expression": expression}
if alias:
agg["alias"] = alias
return agg
@@ -518,7 +477,7 @@ def build_metrics_aggregation(
time_aggregation: str,
space_aggregation: str,
temporality: str = "cumulative",
) -> dict:
return {
"metricName": metric_name,
"temporality": temporality,
@@ -527,65 +486,51 @@ def build_metrics_aggregation(
}
def get_scalar_table_data(response_json: dict) -> list[list[Any]]:
results = response_json.get("data", {}).get("data", {}).get("results", [])
if not results:
return []
return results[0].get("data", [])
def get_scalar_columns(response_json: dict) -> list[dict]:
results = response_json.get("data", {}).get("data", {}).get("results", [])
if not results:
return []
return results[0].get("columns", [])
def get_column_data_from_response(response_json: dict, column_name: str) -> list[Any]:
results = response_json.get("data", {}).get("data", {}).get("results", [])
if not results:
return []
rows = results[0].get("rows") or []
return [row["data"][column_name] for row in rows if column_name in row.get("data", {})]
def assert_scalar_result_order(
data: list[list[Any]],
expected_order: list[tuple],
context: str = "",
) -> None:
assert len(data) == len(expected_order), f"{context}: Expected {len(expected_order)} rows, got {len(data)}. Data: {data}"
for i, (row, expected) in enumerate(zip(data, expected_order)):
for j, expected_val in enumerate(expected):
actual_val = row[j]
assert actual_val == expected_val, f"{context}: Row {i}, column {j} mismatch. Expected {expected_val}, got {actual_val}. Full row: {row}, expected: {expected}"
def assert_scalar_column_order(
data: list[list[Any]],
column_index: int,
expected_values: list[Any],
context: str = "",
) -> None:
assert len(data) == len(expected_values), f"{context}: Expected {len(expected_values)} rows, got {len(data)}"
actual_values = [row[column_index] for row in data]
assert actual_values == expected_values, f"{context}: Column {column_index} order mismatch. Expected {expected_values}, got {actual_values}"
def format_timestamp(dt: datetime) -> str:
@@ -602,28 +547,21 @@ def format_timestamp(dt: datetime) -> str:
return f"{base_str}Z"
def assert_identical_query_response(response1: requests.Response, response2: requests.Response) -> None:
"""
Assert that two query responses are identical in status and data.
"""
assert response1.status_code == response2.status_code, "Status codes do not match"
if response1.status_code == HTTPStatus.OK:
assert response1.json()["status"] == response2.json()["status"], "Response statuses do not match"
assert response1.json()["data"]["data"]["results"] == response2.json()["data"]["data"]["results"], "Response data do not match"
def generate_logs_with_corrupt_metadata() -> list[Logs]:
"""
Specifically, entries with 'id', 'timestamp', 'severity_text', 'severity_number' and 'body' fields in metadata
"""
now = datetime.now(tz=UTC).replace(second=0, microsecond=0)
return [
Logs(
@@ -714,7 +652,7 @@ def generate_logs_with_corrupt_metadata() -> List[Logs]:
]
def generate_traces_with_corrupt_metadata() -> list[Traces]:
"""
Specifically, entries with 'id', 'timestamp', 'trace_id' and 'duration_nano' fields in metadata
"""
@@ -725,7 +663,7 @@ def generate_traces_with_corrupt_metadata() -> List[Traces]:
topic_service_trace_id = TraceIdGenerator.trace_id()
topic_service_span_id = TraceIdGenerator.span_id()
now = datetime.now(tz=UTC).replace(second=0, microsecond=0)
return [
Traces(

View File

@@ -1,4 +1,5 @@
from collections.abc import Callable
from typing import TypeVar
import pytest
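`Callable` now comes from `collections.abc` because the corresponding `typing` aliases have been deprecated since Python 3.9; the ABC forms subscript the same way. A small sketch (names are illustrative only, not from the suite):

```python
from collections.abc import Callable, Generator

# typing.Callable / typing.Generator are deprecated aliases since Python 3.9;
# the collections.abc versions support identical subscripting at runtime.
Handler = Callable[[str], int]


def apply_all(handler: Handler, values: list[str]) -> Generator[int, None, None]:
    # Yield the handler's result for each input value.
    for value in values:
        yield handler(value)
```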
@@ -95,8 +96,6 @@ def wrap( # pylint: disable=too-many-arguments,too-many-positional-arguments
request.addfinalizer(finalizer)
if reuse(request):
pytestconfig.cache.set(key, resource.__cache__() if hasattr(resource, "__cache__") else resource)
return resource

View File

@@ -32,21 +32,11 @@ def seeder(
)
container = DockerContainer("signoz-tests-seeder:latest")
container.with_env("CH_HOST", clickhouse.container.container_configs["8123"].address)
container.with_env("CH_PORT", str(clickhouse.container.container_configs["8123"].port))
container.with_env("CH_USER", clickhouse.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_USERNAME"])
container.with_env("CH_PASSWORD", clickhouse.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_PASSWORD"])
container.with_env("CH_CLUSTER", clickhouse.env["SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER"])
container.with_exposed_ports(8080)
container.with_network(network=network)
container.start()
@@ -71,9 +61,7 @@ def seeder(
"8080": types.TestContainerUrlConfig("http", host, host_port),
},
container_configs={
"8080": types.TestContainerUrlConfig("http", container.get_wrapped_container().name, 8080),
},
)
@@ -92,9 +80,7 @@ def seeder(
request,
pytestconfig,
"seeder",
empty=lambda: types.TestContainerDocker(id="", host_configs={}, container_configs={}),
create=create,
delete=delete,
restore=restore,

View File

@@ -26,9 +26,7 @@ def find_role_by_name(signoz: types.SigNoz, token: str, name: str) -> str:
return role["id"]
def create_service_account(signoz: types.SigNoz, token: str, name: str, role: str = "signoz-viewer") -> str:
"""Create a service account, assign a role, and return its ID."""
resp = requests.post(
signoz.self.host_configs["8080"].get(SERVICE_ACCOUNT_BASE),
@@ -41,9 +39,7 @@ def create_service_account(
role_id = find_role_by_name(signoz, token, role)
role_resp = requests.post(
signoz.self.host_configs["8080"].get(f"{SERVICE_ACCOUNT_BASE}/{service_account_id}/roles"),
json={"id": role_id},
headers={"Authorization": f"Bearer {token}"},
timeout=5,
@@ -53,16 +49,12 @@ def create_service_account(
return service_account_id
def create_service_account_with_key(signoz: types.SigNoz, token: str, name: str, role: str = "signoz-admin") -> tuple:
"""Create a service account with an API key and return (service_account_id, api_key)."""
service_account_id = create_service_account(signoz, token, name, role)
key_resp = requests.post(
signoz.self.host_configs["8080"].get(f"{SERVICE_ACCOUNT_BASE}/{service_account_id}/keys"),
json={"name": "auth-key", "expiresAt": 0},
headers={"Authorization": f"Bearer {token}"},
timeout=5,
@@ -73,14 +65,10 @@ def create_service_account_with_key(
return service_account_id, api_key
def delete_service_account(signoz: types.SigNoz, token: str, service_account_id: str) -> None:
"""Soft-delete a service account."""
resp = requests.delete(
signoz.self.host_configs["8080"].get(f"{SERVICE_ACCOUNT_BASE}/{service_account_id}"),
headers={"Authorization": f"Bearer {token}"},
timeout=5,
)
@@ -95,8 +83,4 @@ def find_service_account_by_name(signoz: types.SigNoz, token: str, name: str) ->
timeout=5,
)
assert list_resp.status_code == HTTPStatus.OK, list_resp.text
return next(service_account for service_account in list_resp.json()["data"] if service_account["name"] == name)
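Note that `find_service_account_by_name` calls `next()` without a default, so a name that was never created raises `StopIteration` and surfaces as a test error rather than a clean assertion failure. A sketch of that behavior (sample data is hypothetical):

```python
# next() with no default raises StopIteration on an empty match, so a lookup
# for a missing name errors out instead of returning None.
accounts = [{"name": "ci-bot"}, {"name": "deploy"}]

# A matching name returns the first hit.
match = next(a for a in accounts if a["name"] == "deploy")

# A missing name raises StopIteration.
try:
    next(a for a in accounts if a["name"] == "missing")
    raised = False
except StopIteration:
    raised = True
```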

View File

@@ -2,7 +2,6 @@ import platform
import time
from http import HTTPStatus
from os import path
import docker
import docker.errors
@@ -26,7 +25,7 @@ def create_signoz(
request: pytest.FixtureRequest,
pytestconfig: pytest.Config,
cache_key: str = "signoz",
env_overrides: dict | None = None,
) -> types.SigNoz:
"""
Factory function for creating a SigNoz container.

View File

@@ -1,5 +1,6 @@
from collections import namedtuple
from collections.abc import Generator
from typing import Any
import pytest
from sqlalchemy import create_engine, sql
@@ -11,7 +12,7 @@ ConnectionTuple = namedtuple("ConnectionTuple", "connection config")
@pytest.fixture(name="sqlite", scope="package")
def sqlite(
tmpfs: Generator[types.LegacyPath, Any],
request: pytest.FixtureRequest,
pytestconfig: pytest.Config,
) -> types.TestContainerSQL:
@@ -69,9 +70,7 @@ def sqlite(
pytestconfig,
"sqlite",
lambda: types.TestContainerSQL(
container=types.TestContainerDocker(id="", host_configs={}, container_configs={}),
conn=None,
env={},
),

View File

@@ -4,8 +4,9 @@ import json
import secrets
import uuid
from abc import ABC
from collections.abc import Callable, Generator
from enum import Enum
from typing import Any
from urllib.parse import urlparse
import numpy as np
@@ -65,9 +66,7 @@ class TracesResource(ABC):
fingerprint: str,
seen_at_ts_bucket_start: np.int64,
) -> None:
self.labels = json.dumps(labels, separators=(",", ":"))  # clickhouse treats {"a": "b"} differently from {"a":"b"}. In the first case it is not able to run json functions
self.fingerprint = fingerprint
self.seen_at_ts_bucket_start = seen_at_ts_bucket_start
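The compact `separators=(",", ":")` matter because, as the comment above notes, ClickHouse's JSON functions do not handle the spaces that `json.dumps` inserts by default. An illustration of the two forms:

```python
import json

labels = {"service.name": "frontend", "env": "prod"}

# Default separators are (', ', ': '), inserting a space after ':' and ','.
default_form = json.dumps(labels)

# Compact separators produce the canonical form the fixtures store.
compact_form = json.dumps(labels, separators=(",", ":"))
```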
@@ -81,18 +80,14 @@ class TracesResourceOrAttributeKeys(ABC):
tag_type: str
is_column: bool
def __init__(self, name: str, datatype: str, tag_type: str, is_column: bool = False) -> None:
self.name = name
self.datatype = datatype
self.tag_type = tag_type
self.is_column = is_column
def np_arr(self) -> np.array:
return np.array([self.name, self.tag_type, self.datatype, self.is_column], dtype=object)
class TracesTagAttributes(ABC):
@@ -109,7 +104,7 @@ class TracesTagAttributes(ABC):
tag_key: str,
tag_type: str,
tag_data_type: str,
string_value: str | None,
number_value: np.float64,
) -> None:
self.unix_milli = np.int64(int(timestamp.timestamp() * 1e3))
@@ -148,9 +143,7 @@ class TracesEvent(ABC):
self.attribute_map = attribute_map
def np_arr(self) -> np.array:
return np.array([self.name, self.time_unix_nano, json.dumps(self.attribute_map)])
class TracesErrorEvent(ABC):
@@ -243,7 +236,7 @@ class Traces(ABC):
attributes_number: dict[str, np.float64]
attributes_bool: dict[str, bool]
resources_string: dict[str, str]
events: list[str]
links: str
response_status_code: str
external_http_url: str
@@ -256,16 +249,16 @@ class Traces(ABC):
has_error: bool
is_remote: str
resource: list[TracesResource]
tag_attributes: list[TracesTagAttributes]
resource_keys: list[TracesResourceOrAttributeKeys]
attribute_keys: list[TracesResourceOrAttributeKeys]
span_attributes: list[TracesSpanAttribute]
error_events: list[TracesErrorEvent]
def __init__(
self,
timestamp: datetime.datetime | None = None,
duration: datetime.timedelta = datetime.timedelta(seconds=1),
trace_id: str = "",
span_id: str = "",
@@ -276,8 +269,8 @@ class Traces(ABC):
status_message: str = "",
resources: dict[str, Any] = {},
attributes: dict[str, Any] = {},
events: list[TracesEvent] = [],
links: list[TracesLink] = [],
trace_state: str = "",
flags: np.uint32 = 0,
) -> None:
@@ -344,11 +337,7 @@ class Traces(ABC):
number_value=None,
)
)
self.resource_keys.append(TracesResourceOrAttributeKeys(name=k, datatype="string", tag_type="resource"))
self.span_attributes.append(
TracesSpanAttribute(
key=k,
@@ -359,9 +348,7 @@ class Traces(ABC):
)
# Calculate resource fingerprint
self.resource_fingerprint = LogsOrTracesFingerprint(self.resources_string).calculate()
# Process attributes by type and populate custom fields
self.attribute_string = {}
@@ -384,11 +371,7 @@ class Traces(ABC):
number_value=None,
)
)
self.attribute_keys.append(TracesResourceOrAttributeKeys(name=k, datatype="bool", tag_type="tag"))
self.span_attributes.append(
TracesSpanAttribute(
key=k,
@@ -409,11 +392,7 @@ class Traces(ABC):
number_value=np.float64(v),
)
)
self.attribute_keys.append(TracesResourceOrAttributeKeys(name=k, datatype="float64", tag_type="tag"))
self.span_attributes.append(
TracesSpanAttribute(
key=k,
@@ -434,11 +413,7 @@ class Traces(ABC):
number_value=None,
)
)
self.attribute_keys.append(TracesResourceOrAttributeKeys(name=k, datatype="string", tag_type="tag"))
self.span_attributes.append(
TracesSpanAttribute(
key=k,
@@ -451,9 +426,7 @@ class Traces(ABC):
# Process events and derive error events
self.events = []
for event in events:
self.events.append(json.dumps([event.name, event.time_unix_nano, event.attribute_map]))
# Create error events for exception events (following Go exporter logic)
if event.name == "exception":
@@ -475,9 +448,7 @@ class Traces(ABC):
),
)
self.links = json.dumps([link.__dict__() for link in links_copy], separators=(",", ":"))
# Initialize resource
self.resource = []
@@ -541,9 +512,7 @@ class Traces(ABC):
except Exception: # pylint: disable=broad-exception-caught
self.external_http_url = str_value
self.http_url = str_value
elif key in ["http.method", "http.request.method"] and self.kind == 3:  # SPAN_KIND_CLIENT
self.external_http_method = str_value
self.http_method = str_value
elif key in ["http.url", "url.full"] and self.kind != 3:
@@ -621,12 +590,8 @@ class Traces(ABC):
timestamp = parse_timestamp(data["timestamp"])
duration = parse_duration(data.get("duration", "PT1S"))
kind = TracesKind.from_value(data.get("kind", TracesKind.SPAN_KIND_INTERNAL.value))
status_code = TracesStatusCode.from_value(data.get("status_code", TracesStatusCode.STATUS_CODE_UNSET.value))
return cls(
timestamp=timestamp,
@@ -648,12 +613,12 @@ class Traces(ABC):
def load_from_file(
cls,
file_path: str,
base_time: datetime.datetime | None = None,
) -> list["Traces"]:
"""Load traces from a JSONL file."""
data_list = []
with open(file_path, encoding="utf-8") as f:
for line in f:
line = line.strip()
if not line:
@@ -689,7 +654,7 @@ class Traces(ABC):
return traces
def insert_traces_to_clickhouse(conn, traces: list[Traces]) -> None:
"""
Insert traces into ClickHouse tables following the same logic as the Go exporter.
Handles insertion into:
@@ -703,7 +668,7 @@ def insert_traces_to_clickhouse(conn, traces: List[Traces]) -> None:
exact insert path used by the pytest fixtures. `conn` is a
clickhouse-connect Client.
"""
resources: list[TracesResource] = []
for trace in traces:
resources.extend(trace.resource)
@@ -714,7 +679,7 @@ def insert_traces_to_clickhouse(conn, traces: List[Traces]) -> None:
data=[resource.np_arr() for resource in resources],
)
tag_attributes: list[TracesTagAttributes] = []
for trace in traces:
tag_attributes.extend(trace.tag_attributes)
@@ -725,8 +690,8 @@ def insert_traces_to_clickhouse(conn, traces: List[Traces]) -> None:
data=[tag_attribute.np_arr() for tag_attribute in tag_attributes],
)
attribute_keys: list[TracesResourceOrAttributeKeys] = []
resource_keys: list[TracesResourceOrAttributeKeys] = []
for trace in traces:
attribute_keys.extend(trace.attribute_keys)
resource_keys.extend(trace.resource_keys)
@@ -785,7 +750,7 @@ def insert_traces_to_clickhouse(conn, traces: List[Traces]) -> None:
data=[trace.np_arr() for trace in traces],
)
error_events: list[TracesErrorEvent] = []
for trace in traces:
error_events.extend(trace.error_events)
@@ -816,8 +781,8 @@ def truncate_traces_tables(conn, cluster: str) -> None:
@pytest.fixture(name="insert_traces", scope="function")
def insert_traces(
clickhouse: types.TestContainerClickhouse,
) -> Generator[Callable[[list[Traces]], None], Any]:
def _insert_traces(traces: list[Traces]) -> None:
insert_traces_to_clickhouse(clickhouse.conn, traces)
yield _insert_traces
@@ -846,11 +811,7 @@ def remove_traces_ttl_and_storage_settings(signoz: types.SigNoz):
for table in tables:
try:
signoz.telemetrystore.conn.query(f"ALTER TABLE signoz_traces.{table} ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}' REMOVE TTL")
signoz.telemetrystore.conn.query(f"ALTER TABLE signoz_traces.{table} ON CLUSTER '{signoz.telemetrystore.env['SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER']}' RESET SETTING storage_policy;")
except Exception as e: # pylint: disable=broad-exception-caught
print(f"ttl and storage policy reset failed for {table}: {e}")

View File

@@ -1,5 +1,5 @@
from dataclasses import dataclass
from typing import Literal
from urllib.parse import urljoin
import clickhouse_connect
@@ -39,33 +39,22 @@ class TestContainerUrlConfig:
class TestContainerDocker:
__test__ = False
id: str
host_configs: dict[str, TestContainerUrlConfig]
container_configs: dict[str, TestContainerUrlConfig]
@staticmethod
def from_cache(cache: dict) -> "TestContainerDocker":
return TestContainerDocker(
id=cache["id"],
host_configs={port: TestContainerUrlConfig(**config) for port, config in cache["host_configs"].items()},
container_configs={port: TestContainerUrlConfig(**config) for port, config in cache["container_configs"].items()},
)
def __cache__(self) -> dict:
return {
"id": self.id,
"host_configs": {port: config.__cache__() for port, config in self.host_configs.items()},
"container_configs": {port: config.__cache__() for port, config in self.container_configs.items()},
}
def __log__(self) -> str:
@@ -77,7 +66,7 @@ class TestContainerSQL:
__test__ = False
container: TestContainerDocker
conn: Engine
env: dict[str, str]
def __cache__(self) -> dict:
return {
@@ -94,7 +83,7 @@ class TestContainerClickhouse:
__test__ = False
container: TestContainerDocker
conn: clickhouse_connect.driver.client.Client
env: dict[str, str]
def __cache__(self) -> dict:
return {
@@ -187,7 +176,7 @@ class AlertExpectation:
# whether we expect any alerts to be fired
should_alert: bool
# alerts that we expect to be fired
expected_alerts: list[FiringAlert]
# seconds to wait for the alerts to be fired, if no
# alerts are fired in the expected time, the test will fail
wait_time_seconds: int
@@ -200,6 +189,6 @@ class AlertTestCase:
# path to the rule file in testdata directory
rule_path: str
# list of alert data that will be inserted into the database
alert_data: list[AlertData]
# list of alert expectations for the test case
alert_expectation: AlertExpectation

View File

@@ -10,9 +10,7 @@ logger = setup_logger(__name__)
@pytest.fixture(name="zookeeper", scope="package")
def zookeeper(network: Network, request: pytest.FixtureRequest, pytestconfig: pytest.Config) -> types.TestContainerDocker:
"""
Package-scoped fixture for Zookeeper TestContainer.
"""

View File

@@ -10,14 +10,10 @@ logger = logging.getLogger(__name__)
def test_setup(signoz: types.SigNoz) -> None:
response = requests.get(signoz.self.host_configs["8080"].get("/api/v1/version"), timeout=2)
assert response.status_code == HTTPStatus.OK
healthz = requests.get(signoz.self.host_configs["8080"].get("/api/v2/healthz"), timeout=2)
logger.info("healthz response: %s", healthz.json())
assert healthz.status_code == HTTPStatus.OK
@@ -35,9 +31,7 @@ def test_telemetry_databases_exist(signoz: types.SigNoz) -> None:
]
for db_name in required_databases:
assert any(db_name in str(db) for db in databases), f"Database {db_name} not found"
def test_teardown(

View File

@@ -1,7 +1,7 @@
import time
import uuid
from collections.abc import Callable
from http import HTTPStatus
import requests
from wiremock.client import HttpMethods, Mapping, MappingRequest, MappingResponse
@@ -17,7 +17,7 @@ def test_webhook_notification_channel(
signoz: types.SigNoz,
get_token: Callable[[str, str], str],
notification_channel: types.TestContainerDocker,
make_http_mocks: Callable[[types.TestContainerDocker, list[Mapping]], None],
create_webhook_notification_channel: Callable[[str, str, dict, bool], str],
) -> None:
"""
@@ -28,9 +28,7 @@ def test_webhook_notification_channel(
# Prepare notification channel name and webhook endpoint
notification_channel_name = f"notification-channel-{uuid.uuid4()}"
webhook_endpoint_path = f"/alert/{notification_channel_name}"
webhook_endpoint = notification_channel.container_configs["8080"].get(webhook_endpoint_path)
# register the mock endpoint in notification channel
make_http_mocks(
@@ -81,10 +79,7 @@ def test_webhook_notification_channel(
headers={"Authorization": f"Bearer {admin_token}"},
timeout=5,
)
assert response.status_code == HTTPStatus.NO_CONTENT, f"Failed to create notification channel: {response.text} Status code: {response.status_code}"
# Verify that the alert was sent to the notification channel
response = requests.post(
@@ -92,11 +87,6 @@ def test_webhook_notification_channel(
json={"method": "POST", "url": webhook_endpoint_path},
timeout=5,
)
assert response.status_code == HTTPStatus.OK, f"Failed to get test notification count: {response.text} Status code: {response.status_code}"
# Verify that the test notification was sent to the notification channel
assert response.json()["count"] == 1, f"Expected 1 test notification, got {response.json()['count']}"

View File

@@ -1,7 +1,7 @@
import json
import uuid
from collections.abc import Callable
from datetime import UTC, datetime, timedelta
import pytest
from wiremock.client import HttpMethods, Mapping, MappingRequest, MappingResponse
@@ -583,28 +583,24 @@ logger = setup_logger(__name__)
@pytest.mark.parametrize(
"alert_test_case",
TEST_RULES_MATCH_TYPE_AND_COMPARE_OPERATORS + TEST_RULES_UNIT_CONVERSION + TEST_RULES_MISCELLANEOUS,
ids=lambda alert_test_case: alert_test_case.name,
)
def test_basic_alert_rule_conditions(
# Notification channel related fixtures
notification_channel: types.TestContainerDocker,
make_http_mocks: Callable[[types.TestContainerDocker, list[Mapping]], None],
create_webhook_notification_channel: Callable[[str, str, dict, bool], str],
# Alert rule related fixtures
create_alert_rule: Callable[[dict], str],
# Alert data insertion related fixtures
insert_alert_data: Callable[[list[types.AlertData], datetime], None],
alert_test_case: types.AlertTestCase,
):
# Prepare notification channel name and webhook endpoint
notification_channel_name = str(uuid.uuid4())
webhook_endpoint_path = f"/alert/{notification_channel_name}"
notification_url = notification_channel.container_configs["8080"].get(webhook_endpoint_path)
logger.info("notification_url: %s", {"notification_url": notification_url})
@@ -642,12 +638,12 @@ def test_basic_alert_rule_conditions(
# Insert alert data
insert_alert_data(
alert_test_case.alert_data,
base_time=datetime.now(tz=UTC) - timedelta(minutes=5),
)
# Create Alert Rule
rule_path = get_testdata_file_path(alert_test_case.rule_path)
with open(rule_path, encoding="utf-8") as f:
rule_data = json.loads(f.read())
# Update the channel name in the rule data
update_rule_channel_name(rule_data, notification_channel_name)

View File

@@ -1,6 +1,6 @@
from collections.abc import Callable
from datetime import UTC, datetime, timedelta
from http import HTTPStatus
import pytest
@@ -20,10 +20,10 @@ def test_audit_list_all(
signoz: types.SigNoz,
create_user_admin: None, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
insert_audit_logs: Callable[[list[AuditLog]], None],
) -> None:
"""List audit events across multiple resource types — verify count, ordering, and fields."""
now = datetime.now(tz=UTC)
insert_audit_logs(
[
AuditLog(
@@ -85,7 +85,7 @@ def test_audit_list_all(
)
token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
now = datetime.now(tz=UTC)
response = make_query_request(
signoz,
token,
@@ -134,8 +134,7 @@ def test_audit_list_all(
id="filter_by_outcome_failure",
),
pytest.param(
"signoz.audit.resource.kind = 'dashboard' AND signoz.audit.resource.id = 'dash-001'",
3,
{"dashboard.deleted", "dashboard.updated", "dashboard.created"},
id="filter_by_resource_kind_and_id",
@@ -147,8 +146,7 @@ def test_audit_list_all(
id="filter_by_principal_type",
),
pytest.param(
"signoz.audit.resource.kind = 'dashboard' AND signoz.audit.action = 'delete'",
1,
{"dashboard.deleted"},
id="filter_by_resource_kind_and_action",
@@ -159,13 +157,13 @@ def test_audit_filter(
 signoz: types.SigNoz,
 create_user_admin: None,  # pylint: disable=unused-argument
 get_token: Callable[[str, str], str],
-insert_audit_logs: Callable[[List[AuditLog]], None],
+insert_audit_logs: Callable[[list[AuditLog]], None],
 filter_expression: str,
 expected_count: int,
 expected_event_names: set,
 ) -> None:
 """Parametrized audit filter tests covering the documented query patterns."""
-now = datetime.now(tz=timezone.utc)
+now = datetime.now(tz=UTC)
 insert_audit_logs(
 [
 AuditLog(
@@ -265,7 +263,7 @@ def test_audit_filter(
 )
 token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
-now = datetime.now(tz=timezone.utc)
+now = datetime.now(tz=UTC)
 response = make_query_request(
 signoz,
 token,
@@ -296,10 +294,10 @@ def test_audit_scalar_count_failures(
 signoz: types.SigNoz,
 create_user_admin: None,  # pylint: disable=unused-argument
 get_token: Callable[[str, str], str],
-insert_audit_logs: Callable[[List[AuditLog]], None],
+insert_audit_logs: Callable[[list[AuditLog]], None],
 ) -> None:
 """Alert query — count multiple failures from different principals."""
-now = datetime.now(tz=timezone.utc)
+now = datetime.now(tz=UTC)
 insert_audit_logs(
 [
 AuditLog(
@@ -356,7 +354,7 @@ def test_audit_scalar_count_failures(
 )
 token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
-now = datetime.now(tz=timezone.utc)
+now = datetime.now(tz=UTC)
 response = make_query_request(
 signoz,
 token,
@@ -386,10 +384,10 @@ def test_audit_does_not_leak_into_logs(
 signoz: types.SigNoz,
 create_user_admin: None,  # pylint: disable=unused-argument
 get_token: Callable[[str, str], str],
-insert_audit_logs: Callable[[List[AuditLog]], None],
+insert_audit_logs: Callable[[list[AuditLog]], None],
 ) -> None:
 """A single audit event in signoz_audit must not appear in regular log queries."""
-now = datetime.now(tz=timezone.utc)
+now = datetime.now(tz=UTC)
 insert_audit_logs(
 [
 AuditLog(
@@ -412,7 +410,7 @@ def test_audit_does_not_leak_into_logs(
 )
 token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
-now = datetime.now(tz=timezone.utc)
+now = datetime.now(tz=UTC)
 response = make_query_request(
 signoz,
 token,
@@ -432,10 +430,5 @@ def test_audit_does_not_leak_into_logs(
 rows = response.json()["data"]["data"]["results"][0].get("rows") or []
-audit_bodies = [
-row["data"]["body"]
-for row in rows
-if "signoz.audit"
-in row["data"].get("attributes_string", {}).get("signoz.audit.action", "")
-]
+audit_bodies = [row["data"]["body"] for row in rows if "signoz.audit" in row["data"].get("attributes_string", {}).get("signoz.audit.action", "")]
 assert len(audit_bodies) == 0
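The `List[AuditLog]` to `list[AuditLog]` changes throughout this file are the PEP 585 migration: since Python 3.9 the builtin `list` is subscriptable in annotations, and `typing.List` is deprecated. A self-contained sketch (`AuditLog` and `insert_audit_logs` here are minimal stand-ins, not the real test helpers):

```python
from dataclasses import dataclass


@dataclass
class AuditLog:
    # minimal stand-in for the test helper type in the diff
    event: str


def insert_audit_logs(logs: list[AuditLog]) -> int:
    # builtin generics (list[...]) work in annotations since Python 3.9,
    # so the typing.List import can be dropped
    return len(logs)


inserted = insert_audit_logs(
    [AuditLog("dashboard.created"), AuditLog("dashboard.deleted")]
)
assert inserted == 2
```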


@@ -1,5 +1,5 @@
+from collections.abc import Callable
 from http import HTTPStatus
-from typing import Callable
 import requests
Some files were not shown because too many files have changed in this diff.