Compare commits


2 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Nikhil Soni | 08c53fe7e8 | docs: add few modules implemtation details (Generated by claude code) | 2026-01-27 22:33:49 +05:30 |
| Nikhil Soni | c1fac00d2e | feat: add claude.md and github commands | 2026-01-27 22:33:12 +05:30 |
66 changed files with 2863 additions and 1811 deletions

.claude/CLAUDE.md Normal file

@@ -0,0 +1,136 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
SigNoz is an open-source observability platform (APM, logs, metrics, traces) built on OpenTelemetry and ClickHouse. It provides a unified solution for monitoring applications with features including distributed tracing, log management, metrics dashboards, and alerting.
## Build and Development Commands
### Development Environment Setup
```bash
make devenv-up # Start ClickHouse and OTel Collector for local dev
make devenv-clickhouse # Start only ClickHouse
make devenv-signoz-otel-collector # Start only OTel Collector
make devenv-clickhouse-clean # Clean ClickHouse data
```
### Backend (Go)
```bash
make go-run-community # Run community backend server
make go-run-enterprise # Run enterprise backend server
make go-test # Run all Go unit tests
go test -race ./pkg/... # Run tests for specific package
go test -race ./pkg/querier/... # Example: run querier tests
```
### Integration Tests (Python)
```bash
cd tests/integration
uv sync # Install dependencies
make py-test-setup # Start test environment (keep running with --reuse)
make py-test # Run all integration tests
make py-test-teardown # Stop test environment
# Run specific test
uv run pytest --basetemp=./tmp/ -vv --reuse src/<suite>/<file>.py::test_name
```
### Code Quality
```bash
# Go linting (golangci-lint)
golangci-lint run
# Python formatting/linting
make py-fmt # Format with black
make py-lint # Run isort, autoflake, pylint
```
### OpenAPI Generation
```bash
go run cmd/enterprise/*.go generate openapi
```
## Architecture Overview
### Backend Structure
The Go backend follows a **provider pattern** for dependency injection:
- **`pkg/signoz/`** - IoC container that wires all providers together
- **`pkg/modules/`** - Business logic modules (user, organization, dashboard, etc.)
- **`pkg/<provider>/`** - Provider implementations following consistent structure:
- `<name>.go` - Interface definition
- `config.go` - Configuration (implements `factory.Config`)
- `<implname><name>/provider.go` - Implementation
- `<name>test/` - Mock implementations for testing
### Key Packages
- **`pkg/querier/`** - Query engine for telemetry data (logs, traces, metrics)
- **`pkg/telemetrystore/`** - ClickHouse telemetry storage interface
- **`pkg/sqlstore/`** - Relational database (SQLite/PostgreSQL) for metadata
- **`pkg/apiserver/`** - HTTP API server with OpenAPI integration
- **`pkg/alertmanager/`** - Alert management
- **`pkg/authn/`, `pkg/authz/`** - Authentication and authorization
- **`pkg/flagger/`** - Feature flags (OpenFeature-based)
- **`pkg/errors/`** - Structured error handling
### Enterprise vs Community
- **`cmd/community/`** - Community edition entry point
- **`cmd/enterprise/`** - Enterprise edition entry point
- **`ee/`** - Enterprise-only features
## Code Conventions
### Error Handling
Use the custom `pkg/errors` package instead of standard library:
```go
errors.New(typ, code, message) // Instead of errors.New()
errors.Newf(typ, code, message, args...) // Instead of fmt.Errorf()
errors.Wrapf(err, typ, code, msg) // Wrap with context
```
Define domain-specific error codes:
```go
var CodeThingNotFound = errors.MustNewCode("thing_not_found")
```
### HTTP Handlers
Handlers are thin adapters in modules that:
1. Extract auth context from request
2. Decode request body using `binding` package
3. Call module functions
4. Return responses using `render` package
Register routes in `pkg/apiserver/signozapiserver/` with `handler.New()` and `OpenAPIDef`.
### SQL/Database
- Use Bun ORM via `sqlstore.BunDBCtx(ctx)`
- Star schema with `organizations` as central entity
- All tables have `id`, `created_at`, `updated_at`, `org_id` columns
- Write idempotent migrations in `pkg/sqlmigration/`
- No `ON DELETE CASCADE` - handle cascading deletes in application logic
### REST Endpoints
- Use plural resource names: `/v1/organizations`, `/v1/users`
- Use `me` for current user/org: `/v1/organizations/me/users`
- Follow RESTful conventions for CRUD operations
### Linting Rules (from .golangci.yml)
- Don't use the standard library `errors` package - use `pkg/errors`
- Don't use `zap` logger - use `slog`
- Don't use `fmt.Errorf` or `fmt.Print*`
## Testing
### Unit Tests
- Run with race detector: `go test -race ./...`
- Provider mocks are in `<provider>test/` packages
### Integration Tests
- Located in `tests/integration/`
- Use pytest with testcontainers
- Files prefixed with numbers for execution order (e.g., `01_database.py`)
- Always use `--reuse` flag during development
- Fixtures in `tests/integration/fixtures/`


@@ -0,0 +1,36 @@
---
name: commit
description: Create a conventional commit with staged changes
disable-model-invocation: true
allowed-tools: Bash(git:*)
---
# Create Conventional Commit
Commit staged changes using conventional commit format: `type(scope): description`
## Types
- `feat:` - New feature
- `fix:` - Bug fix
- `chore:` - Maintenance/refactor/tooling
- `test:` - Tests only
- `docs:` - Documentation
## Process
1. Review staged changes: `git diff --cached`
2. Determine type, optional scope, and description (imperative, <70 chars)
3. Commit using HEREDOC:
```bash
git commit -m "$(cat <<'EOF'
type(scope): description
EOF
)"
```
4. Verify: `git log -1`
## Notes
- Description: imperative mood, lowercase, no period
- Body: explain WHY, not WHAT (code shows what)


@@ -0,0 +1,55 @@
---
name: raise-pr
description: Create a pull request with auto-filled template. Pass 'commit' to commit staged changes first.
disable-model-invocation: true
allowed-tools: Bash(gh:*, git:*), Read
argument-hint: [commit?]
---
# Raise Pull Request
Create a PR with auto-filled template from commits after origin/main.
## Arguments
- No argument: Create PR with existing commits
- `commit`: Commit staged changes first, then create PR
## Process
1. **If `$ARGUMENTS` is "commit"**: Review staged changes and commit with descriptive message
- Check for staged changes: `git diff --cached --stat`
- If changes exist:
- Review the changes: `git diff --cached`
- Create a short and clear commit message based on the changes
- Commit command: `git commit -m "message"`
2. **Analyze commits since origin/main**:
- `git log origin/main..HEAD --pretty=format:"%s%n%b"` - get commit messages
- `git diff origin/main...HEAD --stat` - see changes
3. **Read template**: `.github/pull_request_template.md`
4. **Generate PR**:
- **Title**: Short (<70 chars), from commit messages or main change
- **Body**: Fill template sections based on commits/changes:
- Summary (why/what/approach) - end with "Closes #<issue_number>" if issue number is available from branch name (git branch --show-current)
- Change Type checkboxes
- Bug Context (if applicable)
- Testing Strategy
- Risk Assessment
- Changelog (if user-facing)
- Checklist
5. **Create PR**:
```bash
git push -u origin $(git branch --show-current)
gh pr create --base main --title "..." --body "..."
gh pr view
```
## Notes
- Analyze ALL commit messages from origin/main to HEAD
- Fill template sections based on code analysis
- Leave PR template sections as they are if you can't determine their content


@@ -70,7 +70,6 @@ jobs:
echo 'PYLON_APP_ID="${{ secrets.PYLON_APP_ID }}"' >> frontend/.env
echo 'APPCUES_APP_ID="${{ secrets.APPCUES_APP_ID }}"' >> frontend/.env
echo 'PYLON_IDENTITY_SECRET="${{ secrets.PYLON_IDENTITY_SECRET }}"' >> frontend/.env
echo 'DOCS_BASE_URL="https://signoz.io"' >> frontend/.env
- name: cache-dotenv
uses: actions/cache@v4
with:


@@ -69,7 +69,6 @@ jobs:
echo 'PYLON_APP_ID="${{ secrets.NP_PYLON_APP_ID }}"' >> frontend/.env
echo 'APPCUES_APP_ID="${{ secrets.NP_APPCUES_APP_ID }}"' >> frontend/.env
echo 'PYLON_IDENTITY_SECRET="${{ secrets.NP_PYLON_IDENTITY_SECRET }}"' >> frontend/.env
echo 'DOCS_BASE_URL="https://staging.signoz.io"' >> frontend/.env
- name: cache-dotenv
uses: actions/cache@v4
with:


@@ -3,8 +3,8 @@ name: gor-signoz
on:
push:
tags:
- "v[0-9]+.[0-9]+.[0-9]+"
- "v[0-9]+.[0-9]+.[0-9]+-rc.[0-9]+"
- 'v[0-9]+.[0-9]+.[0-9]+'
- 'v[0-9]+.[0-9]+.[0-9]+-rc.[0-9]+'
permissions:
contents: write
@@ -36,9 +36,8 @@ jobs:
echo 'PYLON_APP_ID="${{ secrets.PYLON_APP_ID }}"' >> .env
echo 'APPCUES_APP_ID="${{ secrets.APPCUES_APP_ID }}"' >> .env
echo 'PYLON_IDENTITY_SECRET="${{ secrets.PYLON_IDENTITY_SECRET }}"' >> .env
echo 'DOCS_BASE_URL="https://signoz.io"' >> .env
- name: build-frontend
run: make js-build
run: make js-build
- name: upload-frontend-artifact
uses: actions/upload-artifact@v4
with:
@@ -105,7 +104,7 @@ jobs:
uses: goreleaser/goreleaser-action@v6
with:
distribution: goreleaser-pro
version: "~> v2"
version: '~> v2'
args: release --config ${{ env.CONFIG_PATH }} --clean --split
workdir: .
env:
@@ -162,7 +161,7 @@ jobs:
if: steps.cache-linux.outputs.cache-hit == 'true' && steps.cache-darwin.outputs.cache-hit == 'true' # only run if caches hit
with:
distribution: goreleaser-pro
version: "~> v2"
version: '~> v2'
args: continue --merge
workdir: .
env:


@@ -1,7 +1,5 @@
{
"eslint.workingDirectories": [
"./frontend"
],
"eslint.workingDirectories": ["./frontend"],
"editor.formatOnSave": true,
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.codeActionsOnSave": {


@@ -19,7 +19,6 @@ import (
"github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/modules/role/implrole"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/query-service/app"
"github.com/SigNoz/signoz/pkg/queryparser"
@@ -81,15 +80,12 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
func(ctx context.Context, sqlstore sqlstore.SQLStore) factory.ProviderFactory[authz.AuthZ, authz.Config] {
return openfgaauthz.NewProviderFactory(sqlstore, openfgaschema.NewSchema().Get(ctx))
},
func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, _ role.Setter, _ role.Granter, queryParser queryparser.QueryParser, _ querier.Querier, _ licensing.Licensing) dashboard.Module {
func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, _ role.Module, queryParser queryparser.QueryParser, _ querier.Querier, _ licensing.Licensing) dashboard.Module {
return impldashboard.NewModule(impldashboard.NewStore(store), settings, analytics, orgGetter, queryParser)
},
func(_ licensing.Licensing) factory.ProviderFactory[gateway.Gateway, gateway.Config] {
return noopgateway.NewProviderFactory()
},
func(store sqlstore.SQLStore, authz authz.AuthZ, licensing licensing.Licensing, _ []role.RegisterTypeable) role.Setter {
return implrole.NewSetter(implrole.NewStore(store), authz)
},
)
if err != nil {
logger.ErrorContext(ctx, "failed to create signoz", "error", err)


@@ -14,7 +14,6 @@ import (
enterpriselicensing "github.com/SigNoz/signoz/ee/licensing"
"github.com/SigNoz/signoz/ee/licensing/httplicensing"
"github.com/SigNoz/signoz/ee/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/ee/modules/role/implrole"
enterpriseapp "github.com/SigNoz/signoz/ee/query-service/app"
"github.com/SigNoz/signoz/ee/sqlschema/postgressqlschema"
"github.com/SigNoz/signoz/ee/sqlstore/postgressqlstore"
@@ -30,7 +29,6 @@ import (
pkgimpldashboard "github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/role"
pkgimplrole "github.com/SigNoz/signoz/pkg/modules/role/implrole"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/signoz"
@@ -121,17 +119,13 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
func(ctx context.Context, sqlstore sqlstore.SQLStore) factory.ProviderFactory[authz.AuthZ, authz.Config] {
return openfgaauthz.NewProviderFactory(sqlstore, openfgaschema.NewSchema().Get(ctx))
},
func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, roleSetter role.Setter, granter role.Granter, queryParser queryparser.QueryParser, querier querier.Querier, licensing licensing.Licensing) dashboard.Module {
return impldashboard.NewModule(pkgimpldashboard.NewStore(store), settings, analytics, orgGetter, roleSetter, granter, queryParser, querier, licensing)
func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, role role.Module, queryParser queryparser.QueryParser, querier querier.Querier, licensing licensing.Licensing) dashboard.Module {
return impldashboard.NewModule(pkgimpldashboard.NewStore(store), settings, analytics, orgGetter, role, queryParser, querier, licensing)
},
func(licensing licensing.Licensing) factory.ProviderFactory[gateway.Gateway, gateway.Config] {
return httpgateway.NewProviderFactory(licensing)
},
func(store sqlstore.SQLStore, authz authz.AuthZ, licensing licensing.Licensing, registry []role.RegisterTypeable) role.Setter {
return implrole.NewSetter(pkgimplrole.NewStore(store), authz, licensing, registry)
},
)
if err != nil {
logger.ErrorContext(ctx, "failed to create signoz", "error", err)
return err


@@ -291,12 +291,3 @@ flagger:
float:
integer:
object:
##################### User #####################
user:
password:
reset:
# Whether to allow users to reset their password themselves.
allow_self: true
# The duration within which a user can reset their password.
max_token_lifetime: 6h


@@ -209,7 +209,7 @@ paths:
/api/v1/dashboards/{id}/public:
delete:
deprecated: false
description: This endpoint deletes the public sharing config and disables the
description: This endpoints deletes the public sharing config and disables the
public sharing of a dashboard
operationId: DeletePublicDashboard
parameters:
@@ -253,7 +253,7 @@ paths:
- dashboard
get:
deprecated: false
description: This endpoint returns public sharing config for a dashboard
description: This endpoints returns public sharing config for a dashboard
operationId: GetPublicDashboard
parameters:
- in: path
@@ -301,7 +301,7 @@ paths:
- dashboard
post:
deprecated: false
description: This endpoint creates public sharing config and enables public
description: This endpoints creates public sharing config and enables public
sharing of the dashboard
operationId: CreatePublicDashboard
parameters:
@@ -355,7 +355,7 @@ paths:
- dashboard
put:
deprecated: false
description: This endpoint updates the public sharing config for a dashboard
description: This endpoints updates the public sharing config for a dashboard
operationId: UpdatePublicDashboard
parameters:
- in: path
@@ -671,7 +671,7 @@ paths:
/api/v1/global/config:
get:
deprecated: false
description: This endpoint returns global config
description: This endpoints returns global config
operationId: GetGlobalConfig
responses:
"200":
@@ -1447,7 +1447,8 @@ paths:
/api/v1/public/dashboards/{id}:
get:
deprecated: false
description: This endpoint returns the sanitized dashboard data for public access
description: This endpoints returns the sanitized dashboard data for public
access
operationId: GetPublicDashboardData
parameters:
- in: path
@@ -1578,228 +1579,6 @@ paths:
summary: Reset password
tags:
- users
/api/v1/roles:
get:
deprecated: false
description: This endpoint lists all roles
operationId: ListRoles
responses:
"200":
content:
application/json:
schema:
properties:
data:
items:
$ref: '#/components/schemas/RoletypesRole'
type: array
status:
type: string
type: object
description: OK
"401":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Unauthorized
"403":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Forbidden
"500":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Internal Server Error
security:
- api_key:
- ADMIN
- tokenizer:
- ADMIN
summary: List roles
tags:
- role
post:
deprecated: false
description: This endpoint creates a role
operationId: CreateRole
responses:
"201":
content:
application/json:
schema:
properties:
data:
$ref: '#/components/schemas/TypesIdentifiable'
status:
type: string
type: object
description: Created
"401":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Unauthorized
"403":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Forbidden
"500":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Internal Server Error
security:
- api_key:
- ADMIN
- tokenizer:
- ADMIN
summary: Create role
tags:
- role
/api/v1/roles/{id}:
delete:
deprecated: false
description: This endpoint deletes a role
operationId: DeleteRole
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
"204":
content:
application/json:
schema:
type: string
description: No Content
"401":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Unauthorized
"403":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Forbidden
"500":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Internal Server Error
security:
- api_key:
- ADMIN
- tokenizer:
- ADMIN
summary: Delete role
tags:
- role
get:
deprecated: false
description: This endpoint gets a role
operationId: GetRole
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
"200":
content:
application/json:
schema:
properties:
data:
$ref: '#/components/schemas/RoletypesRole'
status:
type: string
type: object
description: OK
"401":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Unauthorized
"403":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Forbidden
"500":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Internal Server Error
security:
- api_key:
- ADMIN
- tokenizer:
- ADMIN
summary: Get role
tags:
- role
patch:
deprecated: false
description: This endpoint patches a role
operationId: PatchRole
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
"204":
content:
application/json:
schema:
type: string
description: No Content
"401":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Unauthorized
"403":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Forbidden
"500":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Internal Server Error
security:
- api_key:
- ADMIN
- tokenizer:
- ADMIN
summary: Patch role
tags:
- role
/api/v1/user:
get:
deprecated: false
@@ -2206,35 +1985,6 @@ paths:
summary: Update user preference
tags:
- preferences
/api/v2/factor_password/forgot:
post:
deprecated: false
description: This endpoint initiates the forgot password flow by sending a reset
password email
operationId: ForgotPassword
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/TypesPostableForgotPassword'
responses:
"204":
description: No Content
"400":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Bad Request
"500":
content:
application/json:
schema:
$ref: '#/components/schemas/RenderErrorResponse'
description: Internal Server Error
summary: Forgot password
tags:
- users
/api/v2/features:
get:
deprecated: false
@@ -4109,25 +3859,6 @@ components:
status:
type: string
type: object
RoletypesRole:
properties:
createdAt:
format: date-time
type: string
description:
type: string
id:
type: string
name:
type: string
orgId:
type: string
type:
type: string
updatedAt:
format: date-time
type: string
type: object
TypesChangePasswordRequest:
properties:
newPassword:
@@ -4248,15 +3979,6 @@ components:
token:
type: string
type: object
TypesPostableForgotPassword:
properties:
email:
type: string
frontendBaseURL:
type: string
orgId:
type: string
type: object
TypesPostableInvite:
properties:
email:
@@ -4277,9 +3999,6 @@ components:
type: object
TypesResetPasswordToken:
properties:
expiresAt:
format: date-time
type: string
id:
type: string
passwordId:


@@ -0,0 +1,292 @@
# External API Monitoring - Developer Guide
## Overview
External API Monitoring tracks outbound HTTP calls from your services to external APIs. It groups spans by domain (e.g., `api.example.com`) and displays metrics like endpoint count, request rate, error rate, latency, and last seen time.
**Key Requirement**: Spans must have `kind_string = 'Client'`, a URL attribute (`http.url` or `url.full`), and a peer attribute (`net.peer.name` or `server.address`).
---
## Architecture Flow
```
Frontend (DomainList)
→ useListOverview hook
→ POST /api/v1/third-party-apis/overview/list
→ getDomainList handler
→ BuildDomainList (7 queries)
→ QueryRange (ClickHouse)
→ Post-processing (merge semconv, filter IPs)
→ formatDataForTable
→ UI Display
```
---
## Key APIs
### 1. Domain List API
**Endpoint**: `POST /api/v1/third-party-apis/overview/list`
**Request**:
```json
{
"start": 1699123456789, // Unix timestamp (ms)
"end": 1699127056789,
"show_ip": false, // Filter IP addresses
"filter": {
"expression": "kind_string = 'Client' AND service.name = 'api'"
}
}
```
**Response**: Table with columns:
- `net.peer.name` (domain name)
- `endpoints` (count_distinct with fallback: http.url or url.full)
- `rps` (rate())
- `error_rate` (formula: error/total_span * 100)
- `p99` (p99(duration_nano))
- `lastseen` (max(timestamp))
**Handler**: `pkg/query-service/app/http_handler.go::getDomainList()`
---
### 2. Domain Info API
**Endpoint**: `POST /api/v1/third-party-apis/overview/domain`
**Request**: Same as Domain List, but includes `domain` field
**Response**: Endpoint-level metrics for a specific domain
**Handler**: `pkg/query-service/app/http_handler.go::getDomainInfo()`
---
## Query Building
### Location
`pkg/modules/thirdpartyapi/translator.go`
### BuildDomainList() - Creates 7 Sub-queries
1. **endpoints**: `count_distinct(if(http.url exists, http.url, url.full))` - Unique endpoint count (handles both semconv attributes)
2. **lastseen**: `max(timestamp)` - Last access time
3. **rps**: `rate()` - Requests per second
4. **error**: `count() WHERE has_error = true` - Error count
5. **total_span**: `count()` - Total spans (for error rate)
6. **p99**: `p99(duration_nano)` - 99th percentile latency
7. **error_rate**: Formula `(error/total_span)*100`
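As a sanity check on the `error_rate` formula, here is a minimal Go sketch. The helper name is hypothetical; it just reproduces `(error/total_span)*100`, multiplying before dividing so round results stay exact in floating point.

```go
package main

import "fmt"

// errorRatePercent mirrors the error_rate formula (error / total_span) * 100,
// computed as errorCount*100/totalSpans to keep round values exact.
func errorRatePercent(errorCount, totalSpans float64) float64 {
	if totalSpans == 0 {
		return 0 // avoid division by zero for domains with no spans in range
	}
	return errorCount * 100 / totalSpans
}

func main() {
	fmt.Println(errorRatePercent(5, 200)) // 2.5
}
```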
### Base Filter
```go
"(http.url EXISTS OR url.full EXISTS) AND kind_string = 'Client'"
```
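The same eligibility rule, expressed as a Go predicate over span attribute maps. This is illustrative only — the real filter is compiled into the ClickHouse WHERE clause, and the function name is invented here:

```go
package main

import "fmt"

// isExternalAPICandidate reproduces the base filter:
// (http.url EXISTS OR url.full EXISTS) AND kind_string = 'Client'.
func isExternalAPICandidate(kind string, attrs map[string]string) bool {
	_, hasHTTPURL := attrs["http.url"]
	_, hasURLFull := attrs["url.full"]
	return kind == "Client" && (hasHTTPURL || hasURLFull)
}

func main() {
	span := map[string]string{"url.full": "https://api.example.com/v1/pay"}
	fmt.Println(isExternalAPICandidate("Client", span)) // true: client span with a URL
	fmt.Println(isExternalAPICandidate("Server", span)) // false: wrong span kind
}
```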
### GroupBy
- Groups by `server.address` + `net.peer.name` (dual semconv support)
---
## Key Files
### Frontend
| File | Purpose |
|------|---------|
| `frontend/src/container/ApiMonitoring/Explorer/Domains/DomainList.tsx` | Main list view component |
| `frontend/src/container/ApiMonitoring/Explorer/Domains/DomainDetails/DomainDetails.tsx` | Domain details drawer |
| `frontend/src/hooks/thirdPartyApis/useListOverview.ts` | Data fetching hook |
| `frontend/src/api/thirdPartyApis/listOverview.ts` | API client |
| `frontend/src/container/ApiMonitoring/utils.tsx` | Utilities (formatting, query building) |
### Backend
| File | Purpose |
|------|---------|
| `pkg/query-service/app/http_handler.go` | API handlers (`getDomainList`, `getDomainInfo`) |
| `pkg/modules/thirdpartyapi/translator.go` | Query builder & response processing |
| `pkg/types/thirdpartyapitypes/thirdpartyapi.go` | Request/response types |
---
## Data Tables
### Primary Table
- **Table**: `signoz_traces.distributed_signoz_index_v3`
- **Key Columns**:
- `kind_string` - Filter for `'Client'` spans
- `duration_nano` - For latency calculations
- `has_error` - For error rate
- `timestamp` - For last seen
- `attributes_string` - Map containing `http.url`, `net.peer.name`, etc.
- `resources_string` - Map containing `server.address`, `service.name`, etc.
### Attribute Access
```sql
-- Check existence
mapContains(attributes_string, 'http.url') = 1
-- Get value
attributes_string['http.url']
-- Materialized (if exists)
attribute_string_http$$url
```
---
## Post-Processing
### 1. MergeSemconvColumns()
- Merges `server.address` and `net.peer.name` into single column
- Location: `pkg/modules/thirdpartyapi/translator.go:117`
### 2. FilterIntermediateColumns()
- Removes intermediate formula columns from response
- Location: `pkg/modules/thirdpartyapi/translator.go:70`
### 3. FilterResponse()
- Filters out IP addresses if `show_ip = false`
- Uses `net.ParseIP()` to detect IPs
- Location: `pkg/modules/thirdpartyapi/translator.go:214`
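A self-contained sketch of the `show_ip = false` behavior using `net.ParseIP`. The actual `FilterResponse` operates on full query results; this shows only the detection step, with a hypothetical helper name:

```go
package main

import (
	"fmt"
	"net"
)

// filterIPDomains drops entries that parse as IP addresses (IPv4 or IPv6),
// keeping only hostname-style domains.
func filterIPDomains(domains []string) []string {
	var out []string
	for _, d := range domains {
		if net.ParseIP(d) == nil { // not parseable as an IP → keep it
			out = append(out, d)
		}
	}
	return out
}

func main() {
	fmt.Println(filterIPDomains([]string{"api.example.com", "10.0.0.1", "2001:db8::1"}))
}
```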
---
## Required Attributes
### For Domain Grouping
- `net.peer.name` OR `server.address` (required)
### For Filtering
- `http.url` OR `url.full` (required)
- `kind_string = 'Client'` (required)
### Not Required
- `http.target` - Not used in external API monitoring
### Known Bug
`buildEndpointsQuery()` aggregates with `count_distinct(http.url)`, while the base filter also accepts `url.full`. Spans that carry only `url.full` therefore pass the filter but do not contribute to the endpoint count.
**Fix Needed**: Update aggregation to handle both attributes:
```go
// Current (buggy)
{Expression: "count_distinct(http.url)"}
// Should be
{Expression: "count_distinct(coalesce(http.url, url.full))"}
```
---
## Frontend Data Flow
### 1. Domain List View
```
DomainList component
→ useListOverview({ start, end, show_ip, filter })
→ listOverview API call
→ formatDataForTable(response)
→ Table display
```
### 2. Domain Details View
```
User clicks domain
→ DomainDetails drawer opens
→ Multiple queries:
- DomainMetrics (overview cards)
- AllEndpoints (endpoint table)
- TopErrors (error table)
- EndPointDetails (when endpoint selected)
```
### 3. Data Formatting
- `formatDataForTable()` - Converts API response to table format
- Handles `n/a` values, converts nanoseconds to milliseconds
- Maps column names to display fields
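The nanosecond-to-millisecond conversion mentioned above, sketched in Go (`formatDataForTable` itself is TypeScript; the helper name here is hypothetical):

```go
package main

import "fmt"

// nanosToMillis converts a duration_nano value (e.g. a p99 latency)
// to milliseconds for display in the domain table.
func nanosToMillis(ns float64) float64 {
	return ns / 1e6
}

func main() {
	fmt.Println(nanosToMillis(1500000)) // 1.5
}
```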
---
## Query Examples
### Domain List Query
```sql
SELECT
multiIf(
mapContains(attributes_string, 'server.address'),
attributes_string['server.address'],
mapContains(attributes_string, 'net.peer.name'),
attributes_string['net.peer.name'],
NULL
) AS domain,
count_distinct(attributes_string['http.url']) AS endpoints,
rate() AS rps,
p99(duration_nano) AS p99,
max(timestamp) AS lastseen
FROM signoz_traces.distributed_signoz_index_v3
WHERE
(mapContains(attributes_string, 'http.url') = 1
OR mapContains(attributes_string, 'url.full') = 1)
AND kind_string = 'Client'
AND timestamp >= ? AND timestamp < ?
GROUP BY domain
```
---
## Testing
### Key Test Files
- `frontend/src/container/ApiMonitoring/__tests__/AllEndpointsWidgetV5Migration.test.tsx`
- `frontend/src/container/ApiMonitoring/__tests__/EndpointDropdownV5Migration.test.tsx`
- `pkg/modules/thirdpartyapi/translator_test.go`
### Test Scenarios
1. Domain filtering with both semconv attributes
2. URL handling (http.url vs url.full)
3. IP address filtering
4. Error rate calculation
5. Empty state handling
---
## Common Issues
### Empty State
**Symptom**: No domains shown despite data existing
**Causes**:
1. Missing `net.peer.name` or `server.address`
2. Missing `http.url` or `url.full`
3. Spans not marked as `kind_string = 'Client'`
4. Bug: Only `url.full` present but query uses `count_distinct(http.url)`
### Performance
- Queries use `ts_bucket_start` for time partitioning
- Resource filtering uses separate `distributed_traces_v3_resource` table
- Materialized columns improve performance for common attributes
---
## Quick Start Checklist
- [ ] Understand trace table schema (`signoz_index_v3`)
- [ ] Review `BuildDomainList()` in `translator.go`
- [ ] Check `getDomainList()` handler in `http_handler.go`
- [ ] Review frontend `DomainList.tsx` component
- [ ] Understand semconv attribute mapping (legacy vs current)
- [ ] Test with spans that have required attributes
- [ ] Review post-processing functions (merge, filter)
---
## References
- **Trace Schema**: `pkg/telemetrytraces/field_mapper.go`
- **Query Builder**: `pkg/telemetrytraces/statement_builder.go`
- **API Routes**: `pkg/query-service/app/http_handler.go:2157`
- **Constants**: `pkg/modules/thirdpartyapi/translator.go:14-20`


@@ -0,0 +1,980 @@
# Query Range API (V5) - Developer Guide
This document provides a comprehensive guide to the Query Range API (V5), which is the primary query endpoint for traces, logs, and metrics in SigNoz. It covers architecture, request/response models, code flows, and implementation details.
## Table of Contents
1. [Overview](#overview)
2. [API Endpoint](#api-endpoint)
3. [Request/Response Models](#requestresponse-models)
4. [Query Types](#query-types)
5. [Request Types](#request-types)
6. [Code Flow](#code-flow)
7. [Key Components](#key-components)
8. [Query Execution](#query-execution)
9. [Caching](#caching)
10. [Result Processing](#result-processing)
11. [Performance Considerations](#performance-considerations)
12. [Extending the API](#extending-the-api)
---
## Overview
The Query Range API (V5) is the unified query endpoint for all telemetry signals (traces, logs, metrics) in SigNoz. It provides:
- **Unified Interface**: Single endpoint for all signal types
- **Query Builder**: Visual query builder support
- **Multiple Query Types**: Builder queries, PromQL, ClickHouse SQL, Formulas, Trace Operators
- **Flexible Response Types**: Time series, scalar, raw data, trace-specific
- **Advanced Features**: Aggregations, filters, group by, ordering, pagination
- **Caching**: Intelligent caching for performance
### Key Technologies
- **Backend**: Go (Golang)
- **Storage**: ClickHouse (columnar database)
- **Query Language**: Custom query builder + PromQL + ClickHouse SQL
- **Protocol**: HTTP/REST API
---
## API Endpoint
### Endpoint Details
**URL**: `POST /api/v5/query_range`
**Handler**: `QuerierAPI.QueryRange` (delegates to `querier.QueryRange`)
**Location**:
- Handler: `pkg/querier/querier.go:122`
- Route Registration: `pkg/query-service/app/http_handler.go:480`
**Authentication**: Requires ViewAccess permission
**Content-Type**: `application/json`
### Request Flow
```
HTTP Request (POST /api/v5/query_range)
  → HTTP Handler (QuerierAPI.QueryRange)
  → Querier.QueryRange (pkg/querier/querier.go)
  → Query Execution (Statement Builders → ClickHouse)
  → Result Processing & Merging
  → HTTP Response (QueryRangeResponse)
```
---
## Request/Response Models
### Request Model
**Location**: `pkg/types/querybuildertypes/querybuildertypesv5/req.go`
```go
type QueryRangeRequest struct {
Start uint64 // Start timestamp (milliseconds)
End uint64 // End timestamp (milliseconds)
RequestType RequestType // Response type (TimeSeries, Scalar, Raw, Trace)
Variables map[string]VariableItem // Template variables
CompositeQuery CompositeQuery // Container for queries
NoCache bool // Skip cache flag
}
```
### Composite Query
```go
type CompositeQuery struct {
Queries []QueryEnvelope // Array of queries to execute
}
```
### Query Envelope
```go
type QueryEnvelope struct {
Type QueryType // Query type (Builder, PromQL, ClickHouseSQL, Formula, TraceOperator)
Spec any // Query specification (type-specific)
}
```
### Response Model
**Location**: `pkg/types/querybuildertypes/querybuildertypesv5/req.go`
```go
type QueryRangeResponse struct {
Type RequestType // Response type
Data QueryData // Query results
Meta ExecStats // Execution statistics
Warning *QueryWarnData // Warnings (if any)
QBEvent *QBEvent // Query builder event metadata
}
type QueryData struct {
Results []any // Array of result objects (type depends on RequestType)
}
type ExecStats struct {
RowsScanned uint64 // Total rows scanned
BytesScanned uint64 // Total bytes scanned
DurationMS uint64 // Query duration in milliseconds
StepIntervals map[string]uint64 // Step intervals per query
}
```
---
## Query Types
The API supports multiple query types, each with its own specification format.
### 1. Builder Query (`QueryTypeBuilder`)
Visual query builder queries. Supports traces, logs, and metrics.
**Spec Type**: `QueryBuilderQuery[T]` where T is:
- `TraceAggregation` for traces
- `LogAggregation` for logs
- `MetricAggregation` for metrics
**Example**:
```go
QueryBuilderQuery[TraceAggregation] {
Name: "query_name",
Signal: SignalTraces,
Filter: &Filter {
Expression: "service.name = 'api' AND duration_nano > 1000000",
},
Aggregations: []TraceAggregation {
{Expression: "count()", Alias: "total"},
{Expression: "avg(duration_nano)", Alias: "avg_duration"},
},
GroupBy: []GroupByKey {...},
Order: []OrderBy {...},
Limit: 100,
}
```
**Key Files**:
- Traces: `pkg/telemetrytraces/statement_builder.go`
- Logs: `pkg/telemetrylogs/statement_builder.go`
- Metrics: `pkg/telemetrymetrics/statement_builder.go`
### 2. PromQL Query (`QueryTypePromQL`)
Prometheus Query Language queries for metrics.
**Spec Type**: `PromQuery`
**Example**:
```go
PromQuery {
Query: "rate(http_requests_total[5m])",
Step: Step{Duration: time.Minute},
}
```
**Key Files**: `pkg/querier/promql_query.go`
### 3. ClickHouse SQL Query (`QueryTypeClickHouseSQL`)
Direct ClickHouse SQL queries.
**Spec Type**: `ClickHouseQuery`
**Example**:
```go
ClickHouseQuery {
Query: "SELECT count() FROM signoz_traces.distributed_signoz_index_v3 WHERE ...",
}
```
**Key Files**: `pkg/querier/ch_sql_query.go`
### 4. Formula Query (`QueryTypeFormula`)
Mathematical formulas combining other queries.
**Spec Type**: `QueryBuilderFormula`
**Example**:
```go
QueryBuilderFormula {
Expression: "A / B * 100", // A and B are query names
}
```
**Key Files**: `pkg/querier/formula_query.go`
### 5. Trace Operator Query (`QueryTypeTraceOperator`)
Set operations on trace queries (AND, OR, NOT).
**Spec Type**: `QueryBuilderTraceOperator`
**Example**:
```go
QueryBuilderTraceOperator {
Expression: "A AND B", // A and B are query names
Filter: &Filter {...},
}
```
**Key Files**:
- `pkg/telemetrytraces/trace_operator_statement_builder.go`
- `pkg/querier/trace_operator_query.go`
---
## Request Types
The `RequestType` determines the format of the response data.
### 1. `RequestTypeTimeSeries`
Returns time series data for charts.
**Response Format**: `TimeSeriesData`
```go
type TimeSeriesData struct {
QueryName string
Aggregations []AggregationBucket
}
type AggregationBucket struct {
Index int
Series []TimeSeries
Alias string
Meta AggregationMeta
}
type TimeSeries struct {
Labels map[string]string
Values []TimeSeriesValue
}
type TimeSeriesValue struct {
Timestamp int64
Value float64
}
```
**Use Case**: Line charts, bar charts, area charts
### 2. `RequestTypeScalar`
Returns a single scalar value.
**Response Format**: `ScalarData`
```go
type ScalarData struct {
QueryName string
Data []ScalarValue
}
type ScalarValue struct {
Timestamp int64
Value float64
}
```
**Use Case**: Single value displays, stat panels
### 3. `RequestTypeRaw`
Returns raw data rows.
**Response Format**: `RawData`
```go
type RawData struct {
QueryName string
Columns []string
Rows []RawDataRow
}
type RawDataRow struct {
Timestamp time.Time
Data map[string]any
}
```
**Use Case**: Tables, logs viewer, trace lists
### 4. `RequestTypeTrace`
Returns trace-specific data structure.
**Response Format**: Trace-specific format (see traces documentation)
**Use Case**: Trace-specific visualizations
---
## Code Flow
### Complete Request Flow
```
1. HTTP Request
POST /api/v5/query_range
Body: QueryRangeRequest JSON
2. HTTP Handler
QuerierAPI.QueryRange (pkg/querier/querier.go)
- Validates request
- Extracts organization ID from auth context
3. Querier.QueryRange (pkg/querier/querier.go:122)
- Validates QueryRangeRequest
- Processes each query in CompositeQuery.Queries
- Identifies dependencies (e.g., trace operators, formulas)
- Calculates step intervals
- Fetches metric temporality if needed
4. Query Creation
For each QueryEnvelope:
a. Builder Query:
- newBuilderQuery() creates builderQuery instance
- Selects appropriate statement builder based on signal:
* Traces → traceStmtBuilder
* Logs → logStmtBuilder
* Metrics → metricStmtBuilder or meterStmtBuilder
b. PromQL Query:
- newPromqlQuery() creates promqlQuery instance
- Uses Prometheus engine
c. ClickHouse SQL Query:
- newchSQLQuery() creates chSQLQuery instance
- Direct SQL execution
d. Formula Query:
- newFormulaQuery() creates formulaQuery instance
- References other queries by name
e. Trace Operator Query:
- newTraceOperatorQuery() creates traceOperatorQuery instance
- Uses traceOperatorStmtBuilder
5. Statement Building (for Builder queries)
StatementBuilder.Build()
- Resolves field keys from metadata store
- Builds SQL based on request type:
* RequestTypeRaw → buildListQuery()
* RequestTypeTimeSeries → buildTimeSeriesQuery()
* RequestTypeScalar → buildScalarQuery()
* RequestTypeTrace → buildTraceQuery()
- Returns SQL statement with arguments
6. Query Execution
Query.Execute()
- Executes SQL/query against ClickHouse or Prometheus
- Processes results into response format
- Returns Result with data and statistics
7. Caching (if applicable)
- Checks bucket cache for time series queries
- Executes queries for missing time ranges
- Merges cached and fresh results
8. Result Processing
querier.run()
- Executes all queries (with dependency resolution)
- Collects results and warnings
- Merges results from multiple queries
9. Post-Processing
postProcessResults()
- Applies formulas if present
- Handles variable substitution
- Formats results for response
10. HTTP Response
- Returns QueryRangeResponse with results
- Includes execution statistics
- Includes warnings if any
```
### Key Decision Points
1. **Query Type Selection**: Based on `QueryEnvelope.Type`
2. **Signal Selection**: For builder queries, based on `Signal` field
3. **Request Type Handling**: Different SQL generation for different request types
4. **Caching Strategy**: Only for time series queries with valid fingerprints
5. **Dependency Resolution**: Trace operators and formulas resolve dependencies first
---
## Key Components
### 1. Querier
**Location**: `pkg/querier/querier.go`
**Purpose**: Orchestrates query execution, caching, and result merging
**Key Methods**:
- `QueryRange()`: Main entry point for query execution
- `run()`: Executes queries and merges results
- `executeWithCache()`: Handles caching logic
- `mergeResults()`: Merges cached and fresh results
- `postProcessResults()`: Applies formulas and variable substitution
**Key Features**:
- Query orchestration across multiple query types
- Intelligent caching with bucket-based strategy
- Result merging from multiple queries
- Formula evaluation
- Time range optimization
- Step interval calculation and validation
### 2. Statement Builder Interface
**Location**: `pkg/types/querybuildertypes/querybuildertypesv5/`
**Purpose**: Converts query builder specifications into executable queries
**Interface**:
```go
type StatementBuilder[T any] interface {
Build(
ctx context.Context,
start uint64,
end uint64,
requestType RequestType,
query QueryBuilderQuery[T],
variables map[string]VariableItem,
) (*Statement, error)
}
```
**Implementations**:
- `traceQueryStatementBuilder` - Traces (`pkg/telemetrytraces/statement_builder.go`)
- `logQueryStatementBuilder` - Logs (`pkg/telemetrylogs/statement_builder.go`)
- `metricQueryStatementBuilder` - Metrics (`pkg/telemetrymetrics/statement_builder.go`)
**Key Features**:
- Field resolution via metadata store
- SQL generation for different request types
- Filter, aggregation, group by, ordering support
- Time range optimization
### 3. Query Interface
**Location**: `pkg/types/querybuildertypes/querybuildertypesv5/`
**Purpose**: Represents an executable query
**Interface**:
```go
type Query interface {
Execute(ctx context.Context) (*Result, error)
Fingerprint() string // For caching
Window() (uint64, uint64) // Time range
}
```
**Implementations**:
- `builderQuery[T]` - Builder queries (`pkg/querier/builder_query.go`)
- `promqlQuery` - PromQL queries (`pkg/querier/promql_query.go`)
- `chSQLQuery` - ClickHouse SQL queries (`pkg/querier/ch_sql_query.go`)
- `formulaQuery` - Formula queries (`pkg/querier/formula_query.go`)
- `traceOperatorQuery` - Trace operator queries (`pkg/querier/trace_operator_query.go`)
### 4. Telemetry Store
**Location**: `pkg/telemetrystore/`
**Purpose**: Abstraction layer for ClickHouse database access
**Key Methods**:
- `Query()`: Execute SQL query
- `QueryRow()`: Execute query returning single row
- `Select()`: Execute query returning multiple rows
**Implementation**: `clickhouseTelemetryStore` (`pkg/telemetrystore/clickhousetelemetrystore/`)
### 5. Metadata Store
**Location**: `pkg/types/telemetrytypes/`
**Purpose**: Provides metadata about available fields, keys, and attributes
**Key Methods**:
- `GetKeysMulti()`: Get field keys for multiple selectors
- `FetchTemporalityMulti()`: Get metric temporality information
**Implementation**: `telemetryMetadataStore` (`pkg/telemetrymetadata/`)
### 6. Bucket Cache
**Location**: `pkg/querier/`
**Purpose**: Caches query results by time buckets for performance
**Key Methods**:
- `GetMissRanges()`: Get time ranges not in cache
- `Put()`: Store query result in cache
**Features**:
- Bucket-based caching (aligned to step intervals)
- Automatic cache invalidation
- Parallel query execution for missing ranges
---
## Query Execution
### Builder Query Execution
**Location**: `pkg/querier/builder_query.go`
**Process**:
1. Statement builder generates SQL
2. SQL executed against ClickHouse via TelemetryStore
3. Results processed based on RequestType:
- TimeSeries: Grouped by time buckets and labels
- Scalar: Single value extraction
- Raw: Row-by-row processing
4. Statistics collected (rows scanned, bytes scanned, duration)
### PromQL Query Execution
**Location**: `pkg/querier/promql_query.go`
**Process**:
1. Query parsed by Prometheus engine
2. Executed against Prometheus-compatible data
3. Results converted to QueryRangeResponse format
### ClickHouse SQL Query Execution
**Location**: `pkg/querier/ch_sql_query.go`
**Process**:
1. SQL query executed directly
2. Results processed based on RequestType
3. Variable substitution applied
### Formula Query Execution
**Location**: `pkg/querier/formula_query.go`
**Process**:
1. Referenced queries executed first
2. Formula expression evaluated using govaluate
3. Results computed from query results
### Trace Operator Query Execution
**Location**: `pkg/querier/trace_operator_query.go`
**Process**:
1. Expression parsed to find dependencies
2. Referenced queries executed
3. Set operations applied (INTERSECT, UNION, EXCEPT)
4. Results combined
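The set semantics above can be sketched in isolation. The helper below is a hypothetical illustration, not SigNoz code — in the real implementation the INTERSECT/UNION/EXCEPT operations are pushed down into ClickHouse SQL rather than computed in Go:

```go
package main

import "fmt"

// traceIDSet applies a trace-operator set operation to the trace IDs
// matched by two sub-queries: "AND" -> intersection, "OR" -> union,
// "NOT" -> difference (IDs in a but not in b).
func traceIDSet(op string, a, b map[string]bool) map[string]bool {
	out := map[string]bool{}
	switch op {
	case "AND":
		for id := range a {
			if b[id] {
				out[id] = true
			}
		}
	case "OR":
		for id := range a {
			out[id] = true
		}
		for id := range b {
			out[id] = true
		}
	case "NOT":
		for id := range a {
			if !b[id] {
				out[id] = true
			}
		}
	}
	return out
}

func main() {
	a := map[string]bool{"t1": true, "t2": true}
	b := map[string]bool{"t2": true, "t3": true}
	fmt.Println(len(traceIDSet("AND", a, b)), len(traceIDSet("OR", a, b)), len(traceIDSet("NOT", a, b))) // 1 3 1
}
```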
---
## Caching
### Caching Strategy
**Location**: `pkg/querier/querier.go:642`
**When Caching Applies**:
- Time series queries only
- Queries with valid fingerprints
- `NoCache` flag not set
**How It Works**:
1. Query fingerprint generated (includes query structure, filters, time range)
2. Cache checked for existing results
3. Missing time ranges identified
4. Queries executed only for missing ranges (parallel execution)
5. Fresh results merged with cached results
6. Merged result stored in cache
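Step 3 — identifying the missing time ranges — is the heart of the bucket cache. A minimal sketch of the idea behind `GetMissRanges` (the type and function below are illustrative, not the actual SigNoz signatures):

```go
package main

import "fmt"

// timeRange is a half-open [Start, End) interval in milliseconds.
type timeRange struct{ Start, End uint64 }

// missingRanges returns the sub-ranges of [start, end) not covered by the
// sorted, non-overlapping cached ranges. Only these ranges need fresh
// queries; cached buckets are reused as-is.
func missingRanges(start, end uint64, cached []timeRange) []timeRange {
	var miss []timeRange
	cur := start
	for _, c := range cached {
		if c.End <= cur || c.Start >= end {
			continue // no overlap with the requested window
		}
		if c.Start > cur {
			miss = append(miss, timeRange{cur, c.Start})
		}
		if c.End > cur {
			cur = c.End
		}
	}
	if cur < end {
		miss = append(miss, timeRange{cur, end})
	}
	return miss
}

func main() {
	cached := []timeRange{{100, 200}, {300, 400}}
	fmt.Println(missingRanges(50, 450, cached)) // [{50 100} {200 300} {400 450}]
}
```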
### Cache Key Generation
**Location**: `pkg/querier/builder_query.go:52`
The fingerprint includes:
- Signal type
- Source type
- Step interval
- Aggregations
- Filters
- Group by fields
The time range is deliberately excluded from the fingerprint: the fingerprint identifies the query itself, while the time range determines which cached buckets are looked up under that key.
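A minimal sketch of this fingerprinting scheme — the function below is hypothetical and simplified (the real fingerprint is built from the typed query struct), but it shows why the time range is excluded: two windows of the same query must hash to the same cache identity.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// fingerprint hashes the structural parts of a query -- signal, step,
// aggregations, filter, group-by -- but NOT the time range, so results
// for different windows of the same query share one cache entry.
func fingerprint(signal string, stepSeconds uint64, aggs []string, filter string, groupBy []string) string {
	h := sha256.New()
	fmt.Fprintf(h, "signal=%s;step=%d;", signal, stepSeconds)
	fmt.Fprintf(h, "aggs=%s;", strings.Join(aggs, ","))
	fmt.Fprintf(h, "filter=%s;", filter)
	fmt.Fprintf(h, "groupby=%s", strings.Join(groupBy, ","))
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	a := fingerprint("traces", 60, []string{"count()"}, "service.name = 'api'", []string{"service.name"})
	b := fingerprint("traces", 60, []string{"count()"}, "service.name = 'api'", []string{"service.name"})
	fmt.Println(a == b) // true: identical structure -> identical fingerprint
}
```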
### Cache Benefits
- **Performance**: Avoids re-executing identical queries
- **Efficiency**: Only queries missing time ranges
- **Parallelism**: Multiple missing ranges queried in parallel
---
## Result Processing
### Result Merging
**Location**: `pkg/querier/querier.go:795`
**Process**:
1. Results from multiple queries collected
2. For time series: Series merged by labels
3. For raw data: Rows combined
4. Statistics aggregated (rows scanned, bytes scanned, duration)
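Step 2 — merging time series by labels — can be sketched as follows. This is an illustrative stand-in for the merge in `querier.go` (type and helper names here are assumptions): series are keyed by a canonical rendering of their label set, and points from cached and fresh results are concatenated and re-sorted.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

type point struct {
	TS    int64
	Value float64
}

type series struct {
	Labels map[string]string
	Points []point
}

// labelKey renders labels canonically (sorted keys) so series from the
// cached and fresh result sets can be matched up.
func labelKey(labels map[string]string) string {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, k+"="+labels[k])
	}
	return strings.Join(parts, ",")
}

// mergeSeries concatenates points of series sharing the same label set
// and sorts each merged series by timestamp.
func mergeSeries(a, b []series) []series {
	byKey := map[string]*series{}
	var order []string
	for _, s := range append(append([]series{}, a...), b...) {
		k := labelKey(s.Labels)
		if existing, ok := byKey[k]; ok {
			existing.Points = append(existing.Points, s.Points...)
		} else {
			cp := s
			byKey[k] = &cp
			order = append(order, k)
		}
	}
	out := make([]series, 0, len(order))
	for _, k := range order {
		s := byKey[k]
		sort.Slice(s.Points, func(i, j int) bool { return s.Points[i].TS < s.Points[j].TS })
		out = append(out, *s)
	}
	return out
}

func main() {
	cached := []series{{Labels: map[string]string{"service": "api"}, Points: []point{{100, 1}, {160, 2}}}}
	fresh := []series{{Labels: map[string]string{"service": "api"}, Points: []point{{220, 3}}}}
	fmt.Println(len(mergeSeries(cached, fresh)[0].Points)) // 3
}
```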
### Formula Evaluation
**Location**: `pkg/querier/formula_query.go`
**Process**:
1. Formula expression parsed
2. Referenced query results retrieved
3. Expression evaluated using govaluate library
4. Result computed and formatted
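Conceptually, the evaluation is pointwise over aligned timestamps. The sketch below is a simplified stand-in for the govaluate-based evaluation (hard-coding the expression `A / B * 100` instead of parsing it), just to show the alignment step:

```go
package main

import "fmt"

// evalRatio computes "A / B * 100" pointwise, emitting a value only for
// timestamps present in both inputs (and skipping division by zero).
func evalRatio(a, b map[int64]float64) map[int64]float64 {
	out := map[int64]float64{}
	for ts, av := range a {
		if bv, ok := b[ts]; ok && bv != 0 {
			out[ts] = av / bv * 100
		}
	}
	return out
}

func main() {
	a := map[int64]float64{60: 5, 120: 8}
	b := map[int64]float64{60: 10, 120: 16, 180: 4}
	fmt.Println(evalRatio(a, b)) // map[60:50 120:50]
}
```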
### Variable Substitution
**Location**: `pkg/querier/querier.go`
**Process**:
1. Variables extracted from request
2. Variable values substituted in queries
3. Applied to filters, aggregations, and other query parts
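As a rough illustration of step 2 (the real substitution is type- and context-aware, and this naive text version does not guard against overlapping names like `$service` vs `$serviceTier`):

```go
package main

import (
	"fmt"
	"strings"
)

// substituteVariables replaces $name placeholders in a filter expression
// with their string values -- a plain-text sketch of the idea only.
func substituteVariables(expr string, vars map[string]string) string {
	for name, value := range vars {
		expr = strings.ReplaceAll(expr, "$"+name, value)
	}
	return expr
}

func main() {
	expr := "service.name = $service AND duration_nano > $threshold"
	vars := map[string]string{"service": "'api'", "threshold": "1000000"}
	fmt.Println(substituteVariables(expr, vars))
	// service.name = 'api' AND duration_nano > 1000000
}
```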
---
## Performance Considerations
### Query Optimization
1. **Time Range Optimization**:
- For trace queries with `trace_id` filter, query `trace_summary` first to narrow time range
- Use appropriate time ranges to limit data scanned
2. **Step Interval Calculation**:
- Automatic step interval calculation based on time range
- Minimum step interval enforcement
- Warnings for suboptimal intervals
3. **Index Usage**:
- Queries use time bucket columns (`ts_bucket_start`) for efficient filtering
- Proper filter placement for index utilization
4. **Limit Enforcement**:
- Raw data queries should include limits
- Pagination support via offset/cursor
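The step-interval calculation in point 2 can be sketched as follows. The targets and rounding rule here are illustrative assumptions, not SigNoz's actual constants — the point is the shape of the logic: derive a step from the window, enforce a floor, then round:

```go
package main

import "fmt"

// recommendStep derives a step interval (seconds) from the queried window,
// targeting roughly maxPoints datapoints and enforcing a minimum step.
func recommendStep(startMs, endMs uint64, maxPoints, minStepSeconds uint64) uint64 {
	windowSeconds := (endMs - startMs) / 1000
	step := windowSeconds / maxPoints
	if step < minStepSeconds {
		step = minStepSeconds
	}
	// round up to a whole minute for windows over an hour (illustrative)
	if windowSeconds > 3600 && step%60 != 0 {
		step += 60 - step%60
	}
	return step
}

func main() {
	// 6h window, ~300 points, 15s minimum -> 120s
	fmt.Println(recommendStep(0, 6*3600*1000, 300, 15)) // 120
}
```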
### Best Practices
1. **Use Query Builder**: Prefer query builder over raw SQL for better optimization
2. **Limit Time Ranges**: Always specify reasonable time ranges
3. **Use Aggregations**: For large datasets, use aggregations instead of raw data
4. **Cache Awareness**: Be mindful of cache TTLs when testing
5. **Parallel Queries**: Multiple independent queries execute in parallel
6. **Step Intervals**: Let system calculate optimal step intervals
### Monitoring
Execution statistics are included in response:
- `RowsScanned`: Total rows scanned
- `BytesScanned`: Total bytes scanned
- `DurationMS`: Query execution time
- `StepIntervals`: Step intervals per query
---
## Extending the API
### Adding a New Query Type
1. **Define Query Type** (`pkg/types/querybuildertypes/querybuildertypesv5/query.go`):
```go
const (
QueryTypeMyNewType QueryType = "my_new_type"
)
```
2. **Define Query Spec**:
```go
type MyNewQuerySpec struct {
Name string
// ... your fields
}
```
3. **Update QueryEnvelope Unmarshaling** (`pkg/types/querybuildertypes/querybuildertypesv5/query.go`):
```go
case QueryTypeMyNewType:
var spec MyNewQuerySpec
if err := UnmarshalJSONWithContext(shadow.Spec, &spec, "my new query spec"); err != nil {
return wrapUnmarshalError(err, "invalid my new query spec: %v", err)
}
q.Spec = spec
```
4. **Implement Query Interface** (`pkg/querier/my_new_query.go`):
```go
type myNewQuery struct {
spec MyNewQuerySpec
// ... other fields
}
func (q *myNewQuery) Execute(ctx context.Context) (*qbtypes.Result, error) {
// Implementation
}
func (q *myNewQuery) Fingerprint() string {
// Generate fingerprint for caching
}
func (q *myNewQuery) Window() (uint64, uint64) {
// Return time range
}
```
5. **Update Querier** (`pkg/querier/querier.go`):
```go
case QueryTypeMyNewType:
myQuery, ok := query.Spec.(MyNewQuerySpec)
if !ok {
return nil, errors.NewInvalidInputf(...)
}
queries[myQuery.Name] = newMyNewQuery(myQuery, ...)
```
### Adding a New Request Type
1. **Define Request Type** (`pkg/types/querybuildertypes/querybuildertypesv5/req.go`):
```go
const (
RequestTypeMyNewType RequestType = "my_new_type"
)
```
2. **Update Statement Builders**: Add handling in `Build()` method
3. **Update Query Execution**: Add result processing for new type
4. **Update Response Models**: Add response data structure
### Adding a New Aggregation Function
1. **Update Aggregation Rewriter** (`pkg/querybuilder/agg_expr_rewriter.go`):
```go
func (r *aggExprRewriter) RewriteAggregation(expr string) (string, error) {
if strings.HasPrefix(expr, "my_function(") {
// Parse arguments
// Return ClickHouse SQL expression
return "myClickHouseFunction(...)", nil
}
// ... existing functions
}
```
2. **Update Documentation**: Document the new function
---
## Common Patterns
### Pattern 1: Simple Time Series Query
```go
req := qbtypes.QueryRangeRequest{
Start: startMs,
End: endMs,
RequestType: qbtypes.RequestTypeTimeSeries,
CompositeQuery: qbtypes.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "A",
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{Expression: "sum(rate)", Alias: "total"},
},
StepInterval: qbtypes.Step{Duration: time.Minute},
},
},
},
},
}
```
### Pattern 2: Query with Filter and Group By
```go
req := qbtypes.QueryRangeRequest{
Start: startMs,
End: endMs,
RequestType: qbtypes.RequestTypeTimeSeries,
CompositeQuery: qbtypes.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
Name: "A",
Signal: telemetrytypes.SignalTraces,
Filter: &qbtypes.Filter{
Expression: "service.name = 'api' AND duration_nano > 1000000",
},
Aggregations: []qbtypes.TraceAggregation{
{Expression: "count()", Alias: "total"},
},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{
Name: "service.name",
FieldContext: telemetrytypes.FieldContextResource,
}},
},
},
},
},
},
}
```
### Pattern 3: Formula Query
```go
req := qbtypes.QueryRangeRequest{
Start: startMs,
End: endMs,
RequestType: qbtypes.RequestTypeTimeSeries,
CompositeQuery: qbtypes.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "A",
// ... query A definition
},
},
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "B",
// ... query B definition
},
},
{
Type: qbtypes.QueryTypeFormula,
Spec: qbtypes.QueryBuilderFormula{
Name: "C",
Expression: "A / B * 100",
},
},
},
},
}
```
---
## Testing
### Unit Tests
- `pkg/querier/querier_test.go` - Querier tests
- `pkg/querier/builder_query_test.go` - Builder query tests
- `pkg/querier/formula_query_test.go` - Formula query tests
### Integration Tests
- `tests/integration/` - End-to-end API tests
### Running Tests
```bash
# Run all querier tests
go test ./pkg/querier/...
# Run with verbose output
go test -v ./pkg/querier/...
# Run specific test
go test -v ./pkg/querier/ -run TestQueryRange
```
---
## Debugging
### Enable Debug Logging
```go
// In querier.go
q.logger.DebugContext(ctx, "Executing query",
"query", queryName,
"start", start,
"end", end)
```
### Common Issues
1. **Query Not Found**: Check query name matches in CompositeQuery
2. **SQL Errors**: Check generated SQL in logs, verify ClickHouse syntax
3. **Performance**: Check execution statistics, optimize time ranges
4. **Cache Issues**: Set `NoCache: true` to bypass cache
5. **Formula Errors**: Check formula expression syntax and referenced query names
---
## References
### Key Files
- `pkg/querier/querier.go` - Main query orchestration
- `pkg/querier/builder_query.go` - Builder query execution
- `pkg/types/querybuildertypes/querybuildertypesv5/` - Request/response models
- `pkg/telemetrystore/` - ClickHouse interface
- `pkg/telemetrymetadata/` - Metadata store
### Signal-Specific Documentation
- [Traces Module](./TRACES_MODULE.md) - Trace-specific details
- Logs module documentation (when available)
- Metrics module documentation (when available)
### Related Documentation
- [ClickHouse Documentation](https://clickhouse.com/docs)
- [PromQL Documentation](https://prometheus.io/docs/prometheus/latest/querying/basics/)
---
## Contributing
When contributing to the Query Range API:
1. **Follow Existing Patterns**: Match the style of existing query types
2. **Add Tests**: Include unit tests for new functionality
3. **Update Documentation**: Update this doc for significant changes
4. **Consider Performance**: Optimize queries and use caching appropriately
5. **Handle Errors**: Provide meaningful error messages
For questions or help, reach out to the maintainers or open an issue.

# SigNoz Span Metrics Processor
The `signozspanmetricsprocessor` is an OpenTelemetry Collector processor that intercepts trace data to generate RED metrics (Rate, Errors, Duration) from spans.
**Location:** `signoz-otel-collector/processor/signozspanmetricsprocessor/`
## Trace Interception
The processor implements `consumer.Traces` interface and sits in the traces pipeline:
```go
func (p *processorImp) ConsumeTraces(ctx context.Context, traces ptrace.Traces) error {
p.lock.Lock()
p.aggregateMetrics(traces)
p.lock.Unlock()
return p.tracesConsumer.ConsumeTraces(ctx, traces) // forward unchanged
}
```
All traces flow through this method. Metrics are aggregated, then traces are forwarded unmodified to the next consumer.
## Metrics Generated
| Metric | Type | Description |
|--------|------|-------------|
| `signoz_latency` | Histogram | Span latency by service/operation/kind/status |
| `signoz_calls_total` | Counter | Call count per service/operation/kind/status |
| `signoz_db_latency_sum/count` | Counter | DB call latency (spans with `db.system` attribute) |
| `signoz_external_call_latency_sum/count` | Counter | External call latency (client spans with remote address) |
### Dimensions
All metrics include these base dimensions:
- `service.name` - from resource attributes
- `operation` - span name
- `span.kind` - SPAN_KIND_SERVER, SPAN_KIND_CLIENT, etc.
- `status.code` - STATUS_CODE_OK, STATUS_CODE_ERROR, etc.
Additional dimensions can be configured.
## Aggregation Flow
```
traces pipeline
┌─────────────────────────────────────────────────────────┐
│ ConsumeTraces() │
│ │ │
│ ▼ │
│ aggregateMetrics(traces) │
│ │ │
│ ├── for each ResourceSpan │
│ │ extract service.name │
│ │ │ │
│ │ ├── for each Span │
│ │ │ │ │
│ │ │ ▼ │
│ │ │ aggregateMetricsForSpan() │
│ │ │ ├── skip stale spans (>24h) │
│ │ │ ├── skip excluded patterns │
│ │ │ ├── calculate latency │
│ │ │ ├── build metric key │
│ │ │ ├── update histograms │
│ │ │ └── cache dimensions │
│ │ │ │
│ ▼ │
│ forward traces to next consumer │
└─────────────────────────────────────────────────────────┘
```
### Periodic Export
A background goroutine exports aggregated metrics on a ticker interval:
```go
go func() {
for {
select {
case <-p.ticker.C:
p.exportMetrics(ctx) // build and send to metrics exporter
}
}
}()
```
## Key Design Features
### 1. Time Bucketing (Delta Temporality)
For delta temporality, metric keys include a time bucket prefix:
```go
if p.config.GetAggregationTemporality() == pmetric.AggregationTemporalityDelta {
p.AddTimeToKeyBuf(span.StartTimestamp().AsTime()) // truncated to interval
}
```
- Spans are grouped by time bucket (default: 1 minute)
- After export, buckets are reset
- Memory-efficient for high-cardinality data
### 2. LRU Dimension Caching
Dimension key-value maps are cached to avoid rebuilding:
```go
if _, has := p.metricKeyToDimensions.Get(k); !has {
p.metricKeyToDimensions.Add(k, p.buildDimensionKVs(...))
}
```
- Configurable cache size (`DimensionsCacheSize`)
- Evicted keys also removed from histograms
### 3. Cardinality Protection
Prevents memory explosion from high cardinality:
```go
if len(p.serviceToOperations) > p.maxNumberOfServicesToTrack {
serviceName = "overflow_service"
}
if len(p.serviceToOperations[serviceName]) > p.maxNumberOfOperationsToTrackPerService {
spanName = "overflow_operation"
}
```
Excess services/operations are aggregated into overflow buckets.
### 4. Exemplars
Trace/span IDs attached to histogram samples for metric-to-trace correlation:
```go
histo.exemplarsData = append(histo.exemplarsData, exemplarData{
traceID: traceID,
spanID: spanID,
value: latency,
})
```
Enables "show me a trace that caused this latency spike" in the UI.
## Configuration Options
| Option | Description | Default |
|--------|-------------|---------|
| `metrics_exporter` | Target exporter for generated metrics | required |
| `latency_histogram_buckets` | Custom histogram bucket boundaries | 2,4,6,8,10,50,100,200,400,800,1000,1400,2000,5000,10000,15000 ms |
| `dimensions` | Additional span/resource attributes to include | [] |
| `dimensions_cache_size` | LRU cache size for dimension maps | 1000 |
| `aggregation_temporality` | cumulative or delta | cumulative |
| `time_bucket_interval` | Bucket interval for delta temporality | 1m |
| `skip_spans_older_than` | Skip stale spans | 24h |
| `max_services_to_track` | Cardinality limit for services | - |
| `max_operations_to_track_per_service` | Cardinality limit for operations | - |
| `exclude_patterns` | Regex patterns to skip spans | [] |
## Pipeline Configuration Example
```yaml
processors:
signozspanmetrics:
metrics_exporter: clickhousemetricswrite
latency_histogram_buckets: [2ms, 4ms, 6ms, 8ms, 10ms, 50ms, 100ms, 200ms]
dimensions:
- name: http.method
- name: http.status_code
dimensions_cache_size: 10000
aggregation_temporality: delta
pipelines:
traces:
receivers: [otlp]
processors: [signozspanmetrics, batch]
exporters: [clickhousetraces]
metrics:
receivers: [otlp]
exporters: [clickhousemetricswrite]
```
The processor sits in the traces pipeline but exports to a metrics pipeline exporter.

# SigNoz Traces Module - Developer Guide
This document provides a comprehensive guide to understanding and contributing to the traces module in SigNoz. It covers architecture, APIs, code flows, and implementation details.
## Table of Contents
1. [Overview](#overview)
2. [Architecture](#architecture)
3. [Data Models](#data-models)
4. [API Endpoints](#api-endpoints)
5. [Code Flows](#code-flows)
6. [Key Components](#key-components)
7. [Query Building System](#query-building-system)
8. [Storage Schema](#storage-schema)
9. [Extending the Traces Module](#extending-the-traces-module)
---
## Overview
The traces module in SigNoz handles distributed tracing data from OpenTelemetry. It provides:
- **Ingestion**: Receives traces via OpenTelemetry Collector
- **Storage**: Stores traces in ClickHouse
- **Querying**: Supports complex queries with filters, aggregations, and trace operators
- **Visualization**: Provides waterfall and flamegraph views
- **Trace Funnels**: Advanced analytics for multi-step trace analysis
### Key Technologies
- **Backend**: Go (Golang)
- **Storage**: ClickHouse (columnar database)
- **Protocol**: OpenTelemetry Protocol (OTLP)
- **Query Language**: Custom query builder + ClickHouse SQL
---
## Architecture
### High-Level Flow
```
Application → OpenTelemetry SDK → OTLP Receiver →
  [Processors: signozspanmetrics, batch] →
  ClickHouse Traces Exporter → ClickHouse Database
                    ↓
            Query Service (Go)
                    ↓
        Frontend (React/TypeScript)
### Component Architecture
```
┌─────────────────────────────────────────────────────────┐
│ Frontend (React) │
│ - TracesExplorer │
│ - TraceDetail (Waterfall/Flamegraph) │
│ - Query Builder UI │
└────────────────────┬────────────────────────────────────┘
│ HTTP/REST API
┌────────────────────▼────────────────────────────────────┐
│ Query Service (Go) │
│ ┌──────────────────────────────────────────────────┐ │
│ │ HTTP Handlers (http_handler.go) │ │
│ │ - QueryRangeV5 (Main query endpoint) │ │
│ │ - GetWaterfallSpansForTrace │ │
│ │ - GetFlamegraphSpansForTrace │ │
│ │ - Trace Fields API │ │
│ └──────────────────────────────────────────────────┘ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Querier (querier.go) │ │
│ │ - Query orchestration │ │
│ │ - Cache management │ │
│ │ - Result merging │ │
│ └──────────────────────────────────────────────────┘ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Statement Builders │ │
│ │ - traceQueryStatementBuilder │ │
│ │ - traceOperatorStatementBuilder │ │
│ │ - Builds ClickHouse SQL from query specs │ │
│ └──────────────────────────────────────────────────┘ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ ClickHouse Reader (clickhouseReader/) │ │
│ │ - Direct trace retrieval │ │
│ │ - Waterfall/Flamegraph data processing │ │
│ └──────────────────────────────────────────────────┘ │
└────────────────────┬────────────────────────────────────┘
│ ClickHouse Protocol
┌────────────────────▼────────────────────────────────────┐
│ ClickHouse Database │
│ - signoz_traces.distributed_signoz_index_v3 │
│ - signoz_traces.distributed_trace_summary │
│ - signoz_traces.distributed_tag_attributes_v2 │
└──────────────────────────────────────────────────────────┘
```
---
## Data Models
### Core Trace Models
**Location**: `pkg/query-service/model/trace.go`
### Query Request Models
**Location**: `pkg/types/querybuildertypes/querybuildertypesv5/`
- `QueryRangeRequest`: Main query request structure
- `QueryBuilderQuery[TraceAggregation]`: Query builder specification for traces
- `QueryBuilderTraceOperator`: Trace operator query specification
- `CompositeQuery`: Container for multiple queries
---
## API Endpoints
### 1. Query Range API (V5) - Primary Query Endpoint
**Endpoint**: `POST /api/v5/query_range`
**Handler**: `QuerierAPI.QueryRange` → `querier.QueryRange`
**Purpose**: Main query endpoint for traces, logs, and metrics. Supports:
- Query builder queries
- Trace operator queries
- Aggregations, filters, group by
- Time series, scalar, and raw data requests
> **Note**: For detailed information about the Query Range API, including request/response models, query types, and common code flows, see the [Query Range API Documentation](./QUERY_RANGE_API.md).
**Trace-Specific Details**:
- Uses `traceQueryStatementBuilder` for SQL generation
- Supports trace-specific aggregations (count, avg, p99, etc. on duration_nano)
- Trace operator queries combine multiple trace queries with set operations
- Time range optimization when `trace_id` filter is present
**Key Files**:
- `pkg/telemetrytraces/statement_builder.go` - Trace SQL generation
- `pkg/telemetrytraces/trace_operator_statement_builder.go` - Trace operator SQL
- `pkg/querier/trace_operator_query.go` - Trace operator execution
### 2. Waterfall View API
**Endpoint**: `POST /api/v2/traces/waterfall/{traceId}`
**Handler**: `GetWaterfallSpansForTraceWithMetadata`
**Purpose**: Retrieves spans for waterfall visualization with metadata
**Request Parameters**:
```go
type GetWaterfallSpansForTraceWithMetadataParams struct {
SelectedSpanID string // Selected span to focus on
IsSelectedSpanIDUnCollapsed bool // Whether selected span is expanded
UncollapsedSpans []string // List of expanded span IDs
}
```
**Response**:
```go
type GetWaterfallSpansForTraceWithMetadataResponse struct {
StartTimestampMillis uint64 // Trace start time
EndTimestampMillis uint64 // Trace end time
DurationNano uint64 // Total duration
RootServiceName string // Root service
RootServiceEntryPoint string // Entry point operation
TotalSpansCount uint64 // Total spans
TotalErrorSpansCount uint64 // Error spans
ServiceNameToTotalDurationMap map[string]uint64 // Service durations
Spans []*Span // Span tree
HasMissingSpans bool // Missing spans indicator
UncollapsedSpans []string // Expanded spans
}
```
**Code Flow**:
```
Handler → ClickHouseReader.GetWaterfallSpansForTraceWithMetadata
→ Query trace_summary for time range
→ Query spans from signoz_index_v3
→ Build span tree structure
→ Apply uncollapsed/selected span logic
→ Return filtered spans (500 span limit)
```
**Key Files**:
- `pkg/query-service/app/http_handler.go:1748` - Handler
- `pkg/query-service/app/clickhouseReader/reader.go:873` - Implementation
- `pkg/query-service/app/traces/tracedetail/waterfall.go` - Tree processing
### 3. Flamegraph View API
**Endpoint**: `POST /api/v2/traces/flamegraph/{traceId}`
**Handler**: `GetFlamegraphSpansForTrace`
**Purpose**: Retrieves spans organized by level for flamegraph visualization
**Request Parameters**:
```go
type GetFlamegraphSpansForTraceParams struct {
SelectedSpanID string // Selected span ID
}
```
**Response**:
```go
type GetFlamegraphSpansForTraceResponse struct {
StartTimestampMillis uint64 // Trace start
EndTimestampMillis uint64 // Trace end
DurationNano uint64 // Total duration
Spans [][]*FlamegraphSpan // Spans organized by level
}
```
**Code Flow**:
```
Handler → ClickHouseReader.GetFlamegraphSpansForTrace
→ Query trace_summary for time range
→ Query spans from signoz_index_v3
→ Build span tree
→ BFS traversal to organize by level
→ Sample spans (50 levels, 100 spans/level max)
→ Return level-organized spans
```
**Key Files**:
- `pkg/query-service/app/http_handler.go:1781` - Handler
- `pkg/query-service/app/clickhouseReader/reader.go:1091` - Implementation
- `pkg/query-service/app/traces/tracedetail/flamegraph.go` - BFS processing
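The level-organization step can be sketched as a small BFS. This is a minimal illustration rather than the actual `tracedetail` code; the 50-level and 100-spans-per-level caps come from the flow above.

```go
package main

import "fmt"

const (
	maxLevels        = 50  // flamegraph level cap, per the flow above
	maxSpansPerLevel = 100 // per-level span cap, per the flow above
)

type span struct {
	id       string
	children []*span
}

// levelize walks the span tree breadth-first and groups span IDs by depth,
// truncating to the level and per-level caps.
func levelize(root *span) [][]string {
	levels := make([][]string, 0)
	queue := []*span{root}
	for len(queue) > 0 && len(levels) < maxLevels {
		next := make([]*span, 0)
		level := make([]string, 0)
		for _, s := range queue {
			if len(level) < maxSpansPerLevel {
				level = append(level, s.id)
			}
			next = append(next, s.children...)
		}
		levels = append(levels, level)
		queue = next
	}
	return levels
}

func main() {
	root := &span{id: "A", children: []*span{
		{id: "B", children: []*span{{id: "C"}}},
	}}
	fmt.Println(levelize(root)) // one slice per depth level
}
```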
### 4. Trace Fields API
**Endpoint**:
- `GET /api/v2/traces/fields` - Get available trace fields
- `POST /api/v2/traces/fields` - Update trace field metadata
**Handler**: `traceFields`, `updateTraceField`
**Purpose**: Manage trace field metadata for query builder
**Key Files**:
- `pkg/query-service/app/http_handler.go:4912` - Get handler
- `pkg/query-service/app/http_handler.go:4921` - Update handler
### 5. Trace Funnels API
**Base Path**: `/api/v1/trace-funnels/*`
**Purpose**: Manage trace funnels (multi-step trace analysis)
**Endpoints**:
- `POST /api/v1/trace-funnels/new` - Create funnel
- `GET /api/v1/trace-funnels/list` - List funnels
- `GET /api/v1/trace-funnels/{funnel_id}` - Get funnel
- `PUT /api/v1/trace-funnels/{funnel_id}` - Update funnel
- `DELETE /api/v1/trace-funnels/{funnel_id}` - Delete funnel
- `POST /api/v1/trace-funnels/{funnel_id}/analytics/*` - Analytics endpoints
**Key Files**:
- `pkg/query-service/app/http_handler.go:5084` - Route registration
- `pkg/modules/tracefunnel/` - Funnel implementation
---
## Code Flows
### Flow 1: Query Range Request (V5)
This is the primary query flow for traces. For the complete flow covering all query types, see the [Query Range API Documentation](./QUERY_RANGE_API.md#code-flow).
**Trace-Specific Flow**:
```
1. HTTP Request
POST /api/v5/query_range
2. Querier.QueryRange (common flow - see QUERY_RANGE_API.md)
3. Trace Query Processing:
a. Builder Query (QueryTypeBuilder with SignalTraces):
- newBuilderQuery() creates builderQuery instance
- Uses traceStmtBuilder (traceQueryStatementBuilder)
b. Trace Operator Query (QueryTypeTraceOperator):
- newTraceOperatorQuery() creates traceOperatorQuery
- Uses traceOperatorStmtBuilder
4. Trace Statement Building
traceQueryStatementBuilder.Build() (pkg/telemetrytraces/statement_builder.go:58)
- Resolves trace field keys from metadata store
- Optimizes time range if trace_id filter present (queries trace_summary)
- Maps fields using traceFieldMapper
- Builds conditions using traceConditionBuilder
- Builds SQL based on request type:
* RequestTypeRaw → buildListQuery()
* RequestTypeTimeSeries → buildTimeSeriesQuery()
* RequestTypeScalar → buildScalarQuery()
* RequestTypeTrace → buildTraceQuery()
5. Query Execution
builderQuery.Execute() (pkg/querier/builder_query.go)
- Executes SQL against ClickHouse (signoz_traces database)
- Processes results into response format
6. Result Processing (common flow - see QUERY_RANGE_API.md)
- Merges results from multiple queries
- Applies formulas if present
- Handles caching
7. HTTP Response
- Returns QueryRangeResponse with trace results
```
**Trace-Specific Key Components**:
- `pkg/telemetrytraces/statement_builder.go` - Trace SQL generation
- `pkg/telemetrytraces/field_mapper.go` - Trace field mapping
- `pkg/telemetrytraces/condition_builder.go` - Trace filter building
- `pkg/telemetrytraces/trace_operator_statement_builder.go` - Trace operator SQL
### Flow 2: Waterfall View Request
```
1. HTTP Request
POST /api/v2/traces/waterfall/{traceId}
2. GetWaterfallSpansForTraceWithMetadata handler
- Extracts traceId from URL
- Parses request body for params
3. ClickHouseReader.GetWaterfallSpansForTraceWithMetadata
- Checks cache first (5 minute TTL)
4. If cache miss:
a. Query trace_summary table
SELECT * FROM distributed_trace_summary WHERE trace_id = ?
- Gets time range (start, end, num_spans)
b. Query spans table
SELECT ... FROM distributed_signoz_index_v3
WHERE trace_id = ?
AND ts_bucket_start >= ? AND ts_bucket_start <= ?
- Retrieves all spans for trace
c. Build span tree
- Parse references to build parent-child relationships
- Identify root spans (no parent)
- Calculate service durations
d. Cache result
5. Apply selection logic
tracedetail.GetSelectedSpans()
- Traverses tree based on uncollapsed spans
- Finds path to selected span
- Returns sliding window (500 spans max)
6. HTTP Response
- Returns spans with metadata
```
**Key Components**:
- `pkg/query-service/app/clickhouseReader/reader.go:873`
- `pkg/query-service/app/traces/tracedetail/waterfall.go`
- `pkg/query-service/model/trace.go`
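Step 4c (build span tree) can be sketched as follows. This is a simplified model that assumes each span carries a single parent reference; spans whose parent is absent from the trace surface as extra roots, which is one way missing spans can be detected.

```go
package main

import "fmt"

type span struct {
	spanID   string
	parentID string // empty for root spans
	children []*span
}

// buildTree links spans into parent-child relationships and returns the
// roots. A span whose parent is not present in the trace also becomes a
// root, signalling missing spans.
func buildTree(spans []*span) []*span {
	byID := make(map[string]*span, len(spans))
	for _, s := range spans {
		byID[s.spanID] = s
	}
	roots := make([]*span, 0)
	for _, s := range spans {
		if parent, ok := byID[s.parentID]; ok {
			parent.children = append(parent.children, s)
		} else {
			roots = append(roots, s)
		}
	}
	return roots
}

func main() {
	spans := []*span{
		{spanID: "root"},
		{spanID: "child", parentID: "root"},
		{spanID: "orphan", parentID: "gone"}, // parent missing, becomes extra root
	}
	fmt.Println(len(buildTree(spans))) // 2 roots: "root" and the orphan
}
```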
### Flow 3: Trace Operator Query
Trace operators allow combining multiple trace queries with set operations.
```
1. QueryRangeRequest with QueryTypeTraceOperator
2. Querier identifies trace operator queries
- Parses expression to find dependencies
- Collects referenced queries
3. traceOperatorStatementBuilder.Build()
- Parses expression (e.g., "A AND B", "A OR B")
- Builds expression tree
4. traceOperatorCTEBuilder.build()
- Creates CTEs (Common Table Expressions) for each query
- Builds final query with set operations:
* AND → INTERSECT
* OR → UNION
* NOT → EXCEPT
5. Execute combined query
- Returns traces matching the operator expression
```
**Key Components**:
- `pkg/telemetrytraces/trace_operator_statement_builder.go`
- `pkg/telemetrytraces/trace_operator_cte_builder.go`
- `pkg/querier/trace_operator_query.go`
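The set-operation mapping in steps 3 and 4 can be sketched as toy SQL assembly over two CTE names. This is not the real CTE builder, which handles full expression trees; it only shows the operator-to-set-operation translation described above.

```go
package main

import "fmt"

// setOp maps a trace operator to its ClickHouse set operation,
// following the mapping in the flow above.
func setOp(op string) string {
	switch op {
	case "AND":
		return "INTERSECT"
	case "OR":
		return "UNION"
	case "NOT":
		return "EXCEPT"
	}
	return ""
}

// combine builds a query for "left <op> right", where left and right are
// CTE names producing trace_id columns.
func combine(left, op, right string) string {
	return fmt.Sprintf(
		"SELECT trace_id FROM %s %s SELECT trace_id FROM %s",
		left, setOp(op), right)
}

func main() {
	fmt.Println(combine("query_a", "AND", "query_b"))
}
```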
---
## Key Components
> **Note**: For common components used across all signals (Querier, TelemetryStore, MetadataStore, etc.), see the [Query Range API Documentation](./QUERY_RANGE_API.md#key-components).
### 1. Trace Statement Builder
**Location**: `pkg/telemetrytraces/statement_builder.go`
**Purpose**: Converts trace query builder specifications into ClickHouse SQL
**Key Methods**:
- `Build()`: Main entry point, builds SQL statement
- `buildListQuery()`: Builds query for raw/list results
- `buildTimeSeriesQuery()`: Builds query for time series
- `buildScalarQuery()`: Builds query for scalar values
- `buildTraceQuery()`: Builds query for trace-specific results
**Key Features**:
- Trace field resolution via metadata store
- Time range optimization for trace_id filters (queries trace_summary first)
- Support for trace aggregations, filters, group by, ordering
- Calculated field support (http_method, db_name, has_error, etc.)
- Resource filter support via resourceFilterStmtBuilder
### 2. Trace Field Mapper
**Location**: `pkg/telemetrytraces/field_mapper.go`
**Purpose**: Maps trace query field names to ClickHouse column names
**Field Types**:
- **Intrinsic Fields**: Built-in fields (trace_id, span_id, duration_nano, name, kind_string, status_code_string, etc.)
- **Calculated Fields**: Derived fields (http_method, db_name, has_error, response_status_code, etc.)
- **Attribute Fields**: Dynamic span/resource attributes (accessed via attributes_string, attributes_number, attributes_bool, resources_string)
**Example Mapping**:
```
"service.name" → "resource_string_service$$name"
"http.method" → Calculated from attributes_string['http.method']
"duration_nano" → "duration_nano" (intrinsic)
"trace_id" → "trace_id" (intrinsic)
```
**Key Methods**:
- `MapField()`: Maps a field to ClickHouse expression
- `MapAttribute()`: Maps attribute fields
- `MapResource()`: Maps resource fields
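The mapping rules above can be sketched as a small lookup. This is a minimal model: the intrinsic-field list is abbreviated, the `$$` separator and column names come from the examples in this section, and the `attributes_string` map access is the assumed form for string span attributes.

```go
package main

import (
	"fmt"
	"strings"
)

// mapField resolves a query-builder field name to a ClickHouse expression,
// following the examples above: intrinsic fields pass through, resource
// fields use resource_string_<name> columns ('.' becomes '$$'), and other
// fields fall back to map access on attributes_string.
func mapField(name string, isResource bool) string {
	intrinsic := map[string]bool{
		"trace_id": true, "span_id": true, "duration_nano": true, "name": true,
	}
	if intrinsic[name] {
		return name
	}
	if isResource {
		return "resource_string_" + strings.ReplaceAll(name, ".", "$$")
	}
	return fmt.Sprintf("attributes_string['%s']", name)
}

func main() {
	fmt.Println(mapField("service.name", true))   // resource_string_service$$name
	fmt.Println(mapField("duration_nano", false)) // duration_nano
	fmt.Println(mapField("http.method", false))   // attributes_string['http.method']
}
```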
### 3. Trace Condition Builder
**Location**: `pkg/telemetrytraces/condition_builder.go`
**Purpose**: Builds WHERE clause conditions from trace filter expressions
**Supported Operators**:
- `=`, `!=`, `IN`, `NOT IN`
- `>`, `>=`, `<`, `<=`
- `LIKE`, `NOT LIKE`, `ILIKE`
- `EXISTS`, `NOT EXISTS`
- `CONTAINS`, `NOT CONTAINS`
**Key Methods**:
- `BuildCondition()`: Builds condition from filter expression
- Handles attribute, resource, and intrinsic field filtering
### 4. Trace Operator Statement Builder
**Location**: `pkg/telemetrytraces/trace_operator_statement_builder.go`
**Purpose**: Builds SQL for trace operator queries (AND, OR, NOT operations on trace queries)
**Key Methods**:
- `Build()`: Builds CTE-based SQL for trace operators
- Uses `traceOperatorCTEBuilder` to create Common Table Expressions
**Features**:
- Parses operator expressions (e.g., "A AND B")
- Creates CTEs for each referenced query
- Combines results using INTERSECT, UNION, EXCEPT
### 5. ClickHouse Reader (Trace-Specific Methods)
**Location**: `pkg/query-service/app/clickhouseReader/reader.go`
**Purpose**: Direct trace data retrieval and processing (bypasses query builder)
**Key Methods**:
- `GetWaterfallSpansForTraceWithMetadata()`: Waterfall view data
- `GetFlamegraphSpansForTrace()`: Flamegraph view data
- `SearchTraces()`: Legacy trace search (still used for some flows)
- `GetMinAndMaxTimestampForTraceID()`: Time range optimization helper
**Caching**: Implements 5-minute cache for trace detail views
**Note**: These methods are used for trace-specific visualizations. For general trace queries, use the Query Range API.
---
## Query Building System
> **Note**: For general query building concepts and patterns, see the [Query Range API Documentation](./QUERY_RANGE_API.md). This section covers trace-specific aspects.
### Trace Query Builder Structure
A trace query consists of:
```go
QueryBuilderQuery[TraceAggregation] {
Name: "query_name",
Signal: SignalTraces,
Filter: &Filter {
Expression: "service.name = 'api' AND duration_nano > 1000000"
},
Aggregations: []TraceAggregation {
{Expression: "count()", Alias: "total"},
{Expression: "avg(duration_nano)", Alias: "avg_duration"},
{Expression: "p99(duration_nano)", Alias: "p99"},
},
GroupBy: []GroupByKey {
{TelemetryFieldKey: {Name: "service.name", ...}},
},
Order: []OrderBy {...},
Limit: 100,
}
```
### Trace-Specific SQL Generation Process
1. **Field Resolution**:
- Resolve trace field names using `traceFieldMapper`
- Handle intrinsic, calculated, and attribute fields
   - Map to ClickHouse columns (e.g., `service.name` → `resource_string_service$$name`)
2. **Time Range Optimization**:
- If `trace_id` filter present, query `trace_summary` first
- Narrow time range based on trace start/end times
- Reduces data scanned significantly
3. **Filter Building**:
- Convert filter expression using `traceConditionBuilder`
- Handle attribute filters (attributes_string, attributes_number, attributes_bool)
- Handle resource filters (resources_string)
- Handle intrinsic field filters
4. **Aggregation Building**:
- Build SELECT with trace aggregations
- Support trace-specific functions (count, avg, p99, etc. on duration_nano)
5. **Group By Building**:
- Add GROUP BY clause with trace fields
- Support grouping by service.name, operation name, etc.
6. **Order Building**:
- Add ORDER BY clause
- Support ordering by duration, timestamp, etc.
7. **Limit/Offset**:
- Add pagination
### Example Generated SQL
For query: `count() WHERE service.name = 'api' GROUP BY service.name`
```sql
SELECT
count() AS total,
resource_string_service$$name AS service_name
FROM signoz_traces.distributed_signoz_index_v3
WHERE
timestamp >= toDateTime64(1234567890/1e9, 9)
AND timestamp <= toDateTime64(1234567899/1e9, 9)
AND ts_bucket_start >= toDateTime64(1234567890/1e9, 9)
AND ts_bucket_start <= toDateTime64(1234567899/1e9, 9)
AND resource_string_service$$name = 'api'
GROUP BY resource_string_service$$name
```
**Note**: The query uses `ts_bucket_start` for efficient time filtering (partitioning column).
---
## Storage Schema
### Main Tables
**Location**: `pkg/telemetrytraces/tables.go`
#### 1. `distributed_signoz_index_v3`
Main span index table. Stores all span data.
**Key Columns**:
- `timestamp`: Span timestamp
- `duration_nano`: Span duration
- `span_id`, `trace_id`: Identifiers
- `has_error`: Error indicator
- `kind`: Span kind
- `name`: Operation name
- `attributes_string`, `attributes_number`, `attributes_bool`: Attributes
- `resources_string`: Resource attributes
- `events`: Span events
- `status_code_string`, `status_message`: Status
- `ts_bucket_start`: Time bucket for partitioning
#### 2. `distributed_trace_summary`
Trace-level summary for quick lookups.
**Columns**:
- `trace_id`: Trace identifier
- `start`: Earliest span timestamp
- `end`: Latest span timestamp
- `num_spans`: Total span count
#### 3. `distributed_tag_attributes_v2`
Metadata table for attribute keys.
**Purpose**: Stores available attribute keys for autocomplete
#### 4. `distributed_span_attributes_keys`
Span attribute keys metadata.
**Purpose**: Tracks which attributes exist in spans
### Database
All trace tables are in the `signoz_traces` database.
---
## Extending the Traces Module
### Adding a New Calculated Field
1. **Define Field in Constants** (`pkg/telemetrytraces/const.go`):
```go
CalculatedFields = map[string]telemetrytypes.TelemetryFieldKey{
"my_new_field": {
Name: "my_new_field",
Description: "Description of the field",
Signal: telemetrytypes.SignalTraces,
FieldContext: telemetrytypes.FieldContextSpan,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
}
```
2. **Implement Field Mapping** (`pkg/telemetrytraces/field_mapper.go`):
```go
func (fm *fieldMapper) MapField(field telemetrytypes.TelemetryFieldKey) (string, error) {
if field.Name == "my_new_field" {
// Return ClickHouse expression
return "attributes_string['my.attribute.key']", nil
}
// ... existing mappings
}
```
3. **Update Condition Builder** (if needed for filtering):
```go
// In condition_builder.go, add support for your field
```
### Adding a New API Endpoint
1. **Add Handler Method** (`pkg/query-service/app/http_handler.go`):
```go
func (aH *APIHandler) MyNewTraceHandler(w http.ResponseWriter, r *http.Request) {
// Extract parameters
// Call reader or querier
// Return response
}
```
2. **Register Route** (in `RegisterRoutes` or separate method):
```go
router.HandleFunc("/api/v2/traces/my-endpoint",
am.ViewAccess(aH.MyNewTraceHandler)).Methods(http.MethodPost)
```
3. **Implement Logic**:
- Add to `ClickHouseReader` if direct DB access needed
- Or use `Querier` for query builder queries
### Adding a New Aggregation Function
1. **Update Aggregation Rewriter** (`pkg/querybuilder/agg_expr_rewriter.go`):
```go
func (r *aggExprRewriter) RewriteAggregation(expr string) (string, error) {
    // Add parsing for your function
    if strings.HasPrefix(expr, "my_function(") {
        // Return the equivalent ClickHouse SQL expression
        return "myClickHouseFunction(...)", nil
    }
    // ... existing rewrites
    return expr, nil
}
```
2. **Update Statement Builder** (if special handling needed):
```go
// In statement_builder.go, add special case if needed
```
### Adding Trace Operator Support
Trace operators are already extensible. To add a new operator:
1. **Update Grammar** (`grammar/TraceOperatorGrammar.g4`):
```antlr
operator: AND | OR | NOT | MY_NEW_OPERATOR;
```
2. **Update CTE Builder** (`pkg/telemetrytraces/trace_operator_cte_builder.go`):
```go
func (b *traceOperatorCTEBuilder) buildOperatorQuery(op TraceOperatorType) string {
    switch op {
    case TraceOperatorTypeMyNewOperator:
        return "MY_CLICKHOUSE_OPERATION"
    }
    // ... existing operators
    return ""
}
```
---
## Common Patterns
### Pattern 1: Query with Filter
```go
query := qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
Name: "filtered_traces",
Signal: telemetrytypes.SignalTraces,
Filter: &qbtypes.Filter{
Expression: "service.name = 'api' AND duration_nano > 1000000",
},
Aggregations: []qbtypes.TraceAggregation{
{Expression: "count()", Alias: "total"},
},
}
```
### Pattern 2: Time Series Query
```go
query := qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
Name: "time_series",
Signal: telemetrytypes.SignalTraces,
Aggregations: []qbtypes.TraceAggregation{
{Expression: "avg(duration_nano)", Alias: "avg_duration"},
},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{
Name: "service.name",
FieldContext: telemetrytypes.FieldContextResource,
}},
},
StepInterval: qbtypes.Step{Duration: time.Minute},
}
```
### Pattern 3: Trace Operator Query
```go
query := qbtypes.QueryBuilderTraceOperator{
Name: "operator_query",
Expression: "A AND B", // A and B are query names
Filter: &qbtypes.Filter{
Expression: "duration_nano > 5000000",
},
}
```
---
## Performance Considerations
### Caching
- **Trace Detail Views**: 5-minute cache for waterfall/flamegraph
- **Query Results**: Bucket-based caching in querier
- **Metadata**: Cached attribute keys and field metadata
### Query Optimization
1. **Time Range Optimization**: When `trace_id` is in filter, query `trace_summary` first to narrow time range
2. **Index Usage**: Queries use `ts_bucket_start` for time filtering
3. **Limit Enforcement**: Trace detail views enforce span limits (waterfall: 500 spans; flamegraph: 50 levels, 100 spans per level)
### Best Practices
1. **Use Query Builder**: Prefer query builder over raw SQL for better optimization
2. **Limit Time Ranges**: Always specify reasonable time ranges
3. **Use Aggregations**: For large datasets, use aggregations instead of raw data
4. **Cache Awareness**: Be mindful of cache TTLs when testing
---
## References
### Key Files
- `pkg/telemetrytraces/` - Core trace query building
- `statement_builder.go` - Trace SQL generation
- `field_mapper.go` - Trace field mapping
- `condition_builder.go` - Trace filter building
- `trace_operator_statement_builder.go` - Trace operator SQL
- `pkg/query-service/app/clickhouseReader/reader.go` - Direct trace access
- `pkg/query-service/app/http_handler.go` - API handlers
- `pkg/query-service/model/trace.go` - Data models
### Related Documentation
- [Query Range API Documentation](./QUERY_RANGE_API.md) - Common query_range API details
- [OpenTelemetry Specification](https://opentelemetry.io/docs/specs/)
- [ClickHouse Documentation](https://clickhouse.com/docs)
- [Query Builder Guide](../contributing/go/query-builder.md)
---
## Contributing
When contributing to the traces module:
1. **Follow Existing Patterns**: Match the style of existing code
2. **Add Tests**: Include unit tests for new functionality
3. **Update Documentation**: Update this doc for significant changes
4. **Consider Performance**: Optimize queries and use caching appropriately
5. **Handle Errors**: Provide meaningful error messages
For questions or help, reach out to the maintainers or open an issue.


@@ -47,7 +47,7 @@ func (provider *provider) Check(ctx context.Context, tuple *openfgav1.TupleKey)
return provider.pkgAuthzService.Check(ctx, tuple)
}
func (provider *provider) CheckWithTupleCreation(ctx context.Context, claims authtypes.Claims, orgID valuer.UUID, relation authtypes.Relation, typeable authtypes.Typeable, selectors []authtypes.Selector, _ []authtypes.Selector) error {
func (provider *provider) CheckWithTupleCreation(ctx context.Context, claims authtypes.Claims, orgID valuer.UUID, relation authtypes.Relation, _ authtypes.Relation, typeable authtypes.Typeable, selectors []authtypes.Selector) error {
subject, err := authtypes.NewSubject(authtypes.TypeableUser, claims.UserID, orgID, nil)
if err != nil {
return err
@@ -66,7 +66,7 @@ func (provider *provider) CheckWithTupleCreation(ctx context.Context, claims aut
return nil
}
func (provider *provider) CheckWithTupleCreationWithoutClaims(ctx context.Context, orgID valuer.UUID, relation authtypes.Relation, typeable authtypes.Typeable, selectors []authtypes.Selector, _ []authtypes.Selector) error {
func (provider *provider) CheckWithTupleCreationWithoutClaims(ctx context.Context, orgID valuer.UUID, relation authtypes.Relation, _ authtypes.Relation, typeable authtypes.Typeable, selectors []authtypes.Selector) error {
subject, err := authtypes.NewSubject(authtypes.TypeableAnonymous, authtypes.AnonymousUser.String(), orgID, nil)
if err != nil {
return err


@@ -26,13 +26,12 @@ type module struct {
pkgDashboardModule dashboard.Module
store dashboardtypes.Store
settings factory.ScopedProviderSettings
roleSetter role.Setter
granter role.Granter
role role.Module
querier querier.Querier
licensing licensing.Licensing
}
func NewModule(store dashboardtypes.Store, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, roleSetter role.Setter, granter role.Granter, queryParser queryparser.QueryParser, querier querier.Querier, licensing licensing.Licensing) dashboard.Module {
func NewModule(store dashboardtypes.Store, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, role role.Module, queryParser queryparser.QueryParser, querier querier.Querier, licensing licensing.Licensing) dashboard.Module {
scopedProviderSettings := factory.NewScopedProviderSettings(settings, "github.com/SigNoz/signoz/ee/modules/dashboard/impldashboard")
pkgDashboardModule := pkgimpldashboard.NewModule(store, settings, analytics, orgGetter, queryParser)
@@ -40,8 +39,7 @@ func NewModule(store dashboardtypes.Store, settings factory.ProviderSettings, an
pkgDashboardModule: pkgDashboardModule,
store: store,
settings: scopedProviderSettings,
roleSetter: roleSetter,
granter: granter,
role: role,
querier: querier,
licensing: licensing,
}
@@ -61,12 +59,12 @@ func (module *module) CreatePublic(ctx context.Context, orgID valuer.UUID, publi
return errors.Newf(errors.TypeAlreadyExists, dashboardtypes.ErrCodePublicDashboardAlreadyExists, "dashboard with id %s is already public", storablePublicDashboard.DashboardID)
}
role, err := module.roleSetter.GetOrCreate(ctx, orgID, roletypes.NewRole(roletypes.SigNozAnonymousRoleName, roletypes.SigNozAnonymousRoleDescription, roletypes.RoleTypeManaged, orgID))
role, err := module.role.GetOrCreate(ctx, roletypes.NewRole(roletypes.AnonymousUserRoleName, roletypes.AnonymousUserRoleDescription, roletypes.RoleTypeManaged.StringValue(), orgID))
if err != nil {
return err
}
err = module.granter.Grant(ctx, orgID, roletypes.SigNozAnonymousRoleName, authtypes.MustNewSubject(authtypes.TypeableAnonymous, authtypes.AnonymousUser.StringValue(), orgID, nil))
err = module.role.Assign(ctx, role.ID, orgID, authtypes.MustNewSubject(authtypes.TypeableAnonymous, authtypes.AnonymousUser.StringValue(), orgID, nil))
if err != nil {
return err
}
@@ -79,7 +77,7 @@ func (module *module) CreatePublic(ctx context.Context, orgID valuer.UUID, publi
authtypes.MustNewSelector(authtypes.TypeMetaResource, publicDashboard.ID.String()),
)
err = module.roleSetter.PatchObjects(ctx, orgID, role.ID, authtypes.RelationRead, []*authtypes.Object{additionObject}, nil)
err = module.role.PatchObjects(ctx, orgID, role.ID, authtypes.RelationRead, []*authtypes.Object{additionObject}, nil)
if err != nil {
return err
}
@@ -195,7 +193,7 @@ func (module *module) DeletePublic(ctx context.Context, orgID valuer.UUID, dashb
return err
}
role, err := module.roleSetter.GetOrCreate(ctx, orgID, roletypes.NewRole(roletypes.SigNozAnonymousRoleName, roletypes.SigNozAnonymousRoleDescription, roletypes.RoleTypeManaged, orgID))
role, err := module.role.GetOrCreate(ctx, roletypes.NewRole(roletypes.AnonymousUserRoleName, roletypes.AnonymousUserRoleDescription, roletypes.RoleTypeManaged.StringValue(), orgID))
if err != nil {
return err
}
@@ -208,7 +206,7 @@ func (module *module) DeletePublic(ctx context.Context, orgID valuer.UUID, dashb
authtypes.MustNewSelector(authtypes.TypeMetaResource, publicDashboard.ID.String()),
)
err = module.roleSetter.PatchObjects(ctx, orgID, role.ID, authtypes.RelationRead, nil, []*authtypes.Object{deletionObject})
err = module.role.PatchObjects(ctx, orgID, role.ID, authtypes.RelationRead, nil, []*authtypes.Object{deletionObject})
if err != nil {
return err
}
@@ -272,7 +270,7 @@ func (module *module) deletePublic(ctx context.Context, orgID valuer.UUID, dashb
return err
}
role, err := module.roleSetter.GetOrCreate(ctx, orgID, roletypes.NewRole(roletypes.SigNozAnonymousRoleName, roletypes.SigNozAnonymousRoleDescription, roletypes.RoleTypeManaged, orgID))
role, err := module.role.GetOrCreate(ctx, roletypes.NewRole(roletypes.AnonymousUserRoleName, roletypes.AnonymousUserRoleDescription, roletypes.RoleTypeManaged.StringValue(), orgID))
if err != nil {
return err
}
@@ -285,7 +283,7 @@ func (module *module) deletePublic(ctx context.Context, orgID valuer.UUID, dashb
authtypes.MustNewSelector(authtypes.TypeMetaResource, publicDashboard.ID.String()),
)
err = module.roleSetter.PatchObjects(ctx, orgID, role.ID, authtypes.RelationRead, nil, []*authtypes.Object{deletionObject})
err = module.role.PatchObjects(ctx, orgID, role.ID, authtypes.RelationRead, nil, []*authtypes.Object{deletionObject})
if err != nil {
return err
}


@@ -1,165 +0,0 @@
package implrole
import (
"context"
"slices"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/roletypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type setter struct {
store roletypes.Store
authz authz.AuthZ
licensing licensing.Licensing
registry []role.RegisterTypeable
}
func NewSetter(store roletypes.Store, authz authz.AuthZ, licensing licensing.Licensing, registry []role.RegisterTypeable) role.Setter {
return &setter{
store: store,
authz: authz,
licensing: licensing,
registry: registry,
}
}
func (setter *setter) Create(ctx context.Context, orgID valuer.UUID, role *roletypes.Role) error {
_, err := setter.licensing.GetActive(ctx, orgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
return setter.store.Create(ctx, roletypes.NewStorableRoleFromRole(role))
}
func (setter *setter) GetOrCreate(ctx context.Context, orgID valuer.UUID, role *roletypes.Role) (*roletypes.Role, error) {
_, err := setter.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
existingRole, err := setter.store.GetByOrgIDAndName(ctx, role.OrgID, role.Name)
if err != nil {
if !errors.Ast(err, errors.TypeNotFound) {
return nil, err
}
}
if existingRole != nil {
return roletypes.NewRoleFromStorableRole(existingRole), nil
}
err = setter.store.Create(ctx, roletypes.NewStorableRoleFromRole(role))
if err != nil {
return nil, err
}
return role, nil
}
func (setter *setter) GetResources(_ context.Context) []*authtypes.Resource {
typeables := make([]authtypes.Typeable, 0)
for _, register := range setter.registry {
typeables = append(typeables, register.MustGetTypeables()...)
}
// role module cannot self register itself!
typeables = append(typeables, setter.MustGetTypeables()...)
resources := make([]*authtypes.Resource, 0)
for _, typeable := range typeables {
resources = append(resources, &authtypes.Resource{Name: typeable.Name(), Type: typeable.Type()})
}
return resources
}
func (setter *setter) GetObjects(ctx context.Context, orgID valuer.UUID, id valuer.UUID, relation authtypes.Relation) ([]*authtypes.Object, error) {
storableRole, err := setter.store.Get(ctx, orgID, id)
if err != nil {
return nil, err
}
objects := make([]*authtypes.Object, 0)
for _, resource := range setter.GetResources(ctx) {
if slices.Contains(authtypes.TypeableRelations[resource.Type], relation) {
resourceObjects, err := setter.
authz.
ListObjects(
ctx,
authtypes.MustNewSubject(authtypes.TypeableRole, storableRole.ID.String(), orgID, &authtypes.RelationAssignee),
relation,
authtypes.MustNewTypeableFromType(resource.Type, resource.Name),
)
if err != nil {
return nil, err
}
objects = append(objects, resourceObjects...)
}
}
return objects, nil
}
func (setter *setter) Patch(ctx context.Context, orgID valuer.UUID, role *roletypes.Role) error {
_, err := setter.licensing.GetActive(ctx, orgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
return setter.store.Update(ctx, orgID, roletypes.NewStorableRoleFromRole(role))
}
func (setter *setter) PatchObjects(ctx context.Context, orgID valuer.UUID, id valuer.UUID, relation authtypes.Relation, additions, deletions []*authtypes.Object) error {
_, err := setter.licensing.GetActive(ctx, orgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
additionTuples, err := roletypes.GetAdditionTuples(id, orgID, relation, additions)
if err != nil {
return err
}
deletionTuples, err := roletypes.GetDeletionTuples(id, orgID, relation, deletions)
if err != nil {
return err
}
err = setter.authz.Write(ctx, additionTuples, deletionTuples)
if err != nil {
return err
}
return nil
}
func (setter *setter) Delete(ctx context.Context, orgID valuer.UUID, id valuer.UUID) error {
_, err := setter.licensing.GetActive(ctx, orgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
storableRole, err := setter.store.Get(ctx, orgID, id)
if err != nil {
return err
}
role := roletypes.NewRoleFromStorableRole(storableRole)
err = role.CanEditDelete()
if err != nil {
return err
}
return setter.store.Delete(ctx, orgID, id)
}
func (setter *setter) MustGetTypeables() []authtypes.Typeable {
return []authtypes.Typeable{authtypes.TypeableRole, roletypes.TypeableResourcesRoles}
}


@@ -211,7 +211,7 @@ func (s Server) HealthCheckStatus() chan healthcheck.Status {
func (s *Server) createPublicServer(apiHandler *api.APIHandler, web web.Web) (*http.Server, error) {
r := baseapp.NewRouter()
am := middleware.NewAuthZ(s.signoz.Instrumentation.Logger(), s.signoz.Modules.OrgGetter, s.signoz.Authz, s.signoz.Modules.RoleGetter)
am := middleware.NewAuthZ(s.signoz.Instrumentation.Logger(), s.signoz.Modules.OrgGetter, s.signoz.Authz)
r.Use(otelmux.Middleware(
"apiserver",


@@ -26,7 +26,6 @@ import { ApiMonitoringHardcodedAttributeKeys } from '../../constants';
import { DEFAULT_PARAMS, useApiMonitoringParams } from '../../queryParams';
import { columnsConfig, formatDataForTable } from '../../utils';
import DomainDetails from './DomainDetails/DomainDetails';
import DOCLINKS from 'utils/docLinks';
function DomainList(): JSX.Element {
const [params, setParams] = useApiMonitoringParams();
@@ -146,17 +145,7 @@ function DomainList(): JSX.Element {
/>
<Typography.Text className="no-filtered-domains-message">
No External API calls detected. To automatically detect them, ensure
Client spans are being sent with required attributes.
<br />
Read more about <span> </span>
<a
href={DOCLINKS.EXTERNAL_API_MONITORING}
target="_blank"
rel="noreferrer"
>
configuring External API monitoring.
</a>
This query had no results. Edit your query and try again!
</Typography.Text>
</div>
</div>


@@ -6,8 +6,6 @@ const DOCLINKS = {
'https://signoz.io/docs/product-features/trace-explorer/?utm_source=product&utm_medium=traces-explorer-trace-tab#traces-view',
METRICS_EXPLORER_EMPTY_STATE:
'https://signoz.io/docs/userguide/send-metrics-cloud/',
EXTERNAL_API_MONITORING:
'https://signoz.io/docs/external-api-monitoring/overview/',
};
export default DOCLINKS;


@@ -17,7 +17,7 @@ func (provider *provider) addDashboardRoutes(router *mux.Router) error {
ID: "CreatePublicDashboard",
Tags: []string{"dashboard"},
Summary: "Create public dashboard",
Description: "This endpoint creates public sharing config and enables public sharing of the dashboard",
Description: "This endpoints creates public sharing config and enables public sharing of the dashboard",
Request: new(dashboardtypes.PostablePublicDashboard),
RequestContentType: "",
Response: new(types.Identifiable),
@@ -34,7 +34,7 @@ func (provider *provider) addDashboardRoutes(router *mux.Router) error {
ID: "GetPublicDashboard",
Tags: []string{"dashboard"},
Summary: "Get public dashboard",
Description: "This endpoint returns public sharing config for a dashboard",
Description: "This endpoints returns public sharing config for a dashboard",
Request: nil,
RequestContentType: "",
Response: new(dashboardtypes.GettablePublicDasbhboard),
@@ -51,7 +51,7 @@ func (provider *provider) addDashboardRoutes(router *mux.Router) error {
ID: "UpdatePublicDashboard",
Tags: []string{"dashboard"},
Summary: "Update public dashboard",
Description: "This endpoint updates the public sharing config for a dashboard",
Description: "This endpoints updates the public sharing config for a dashboard",
Request: new(dashboardtypes.UpdatablePublicDashboard),
RequestContentType: "",
Response: nil,
@@ -68,7 +68,7 @@ func (provider *provider) addDashboardRoutes(router *mux.Router) error {
ID: "DeletePublicDashboard",
Tags: []string{"dashboard"},
Summary: "Delete public dashboard",
Description: "This endpoint deletes the public sharing config and disables the public sharing of a dashboard",
Description: "This endpoints deletes the public sharing config and disables the public sharing of a dashboard",
Request: nil,
RequestContentType: "",
Response: nil,
@@ -83,7 +83,7 @@ func (provider *provider) addDashboardRoutes(router *mux.Router) error {
if err := router.Handle("/api/v1/public/dashboards/{id}", handler.New(provider.authZ.CheckWithoutClaims(
provider.dashboardHandler.GetPublicData,
authtypes.RelationRead,
authtypes.RelationRead, authtypes.RelationRead,
dashboardtypes.TypeableMetaResourcePublicDashboard,
func(req *http.Request, orgs []*types.Organization) ([]authtypes.Selector, valuer.UUID, error) {
id, err := valuer.NewUUID(mux.Vars(req)["id"])
@@ -92,11 +92,11 @@ func (provider *provider) addDashboardRoutes(router *mux.Router) error {
}
return provider.dashboardModule.GetPublicDashboardSelectorsAndOrg(req.Context(), id, orgs)
}, []string{}), handler.OpenAPIDef{
}), handler.OpenAPIDef{
ID: "GetPublicDashboardData",
Tags: []string{"dashboard"},
Summary: "Get public dashboard data",
Description: "This endpoint returns the sanitized dashboard data for public access",
Description: "This endpoints returns the sanitized dashboard data for public access",
Request: nil,
RequestContentType: "",
Response: new(dashboardtypes.GettablePublicDashboardData),
@@ -111,7 +111,7 @@ func (provider *provider) addDashboardRoutes(router *mux.Router) error {
if err := router.Handle("/api/v1/public/dashboards/{id}/widgets/{idx}/query_range", handler.New(provider.authZ.CheckWithoutClaims(
provider.dashboardHandler.GetPublicWidgetQueryRange,
authtypes.RelationRead,
authtypes.RelationRead, authtypes.RelationRead,
dashboardtypes.TypeableMetaResourcePublicDashboard,
func(req *http.Request, orgs []*types.Organization) ([]authtypes.Selector, valuer.UUID, error) {
id, err := valuer.NewUUID(mux.Vars(req)["id"])
@@ -120,7 +120,7 @@ func (provider *provider) addDashboardRoutes(router *mux.Router) error {
}
return provider.dashboardModule.GetPublicDashboardSelectorsAndOrg(req.Context(), id, orgs)
}, []string{}), handler.OpenAPIDef{
}), handler.OpenAPIDef{
ID: "GetPublicDashboardWidgetQueryRange",
Tags: []string{"dashboard"},
Summary: "Get query range result",

View File

@@ -13,7 +13,7 @@ func (provider *provider) addGlobalRoutes(router *mux.Router) error {
ID: "GetGlobalConfig",
Tags: []string{"global"},
Summary: "Get global config",
Description: "This endpoint returns global config",
Description: "This endpoints returns global config",
Request: nil,
RequestContentType: "",
Response: new(types.GettableGlobalConfig),

View File

@@ -17,7 +17,6 @@ import (
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/preference"
"github.com/SigNoz/signoz/pkg/modules/promote"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/modules/session"
"github.com/SigNoz/signoz/pkg/modules/user"
"github.com/SigNoz/signoz/pkg/types"
@@ -42,8 +41,6 @@ type provider struct {
dashboardHandler dashboard.Handler
metricsExplorerHandler metricsexplorer.Handler
gatewayHandler gateway.Handler
roleGetter role.Getter
roleHandler role.Handler
}
func NewFactory(
@@ -61,11 +58,9 @@ func NewFactory(
dashboardHandler dashboard.Handler,
metricsExplorerHandler metricsexplorer.Handler,
gatewayHandler gateway.Handler,
roleGetter role.Getter,
roleHandler role.Handler,
) factory.ProviderFactory[apiserver.APIServer, apiserver.Config] {
return factory.NewProviderFactory(factory.MustNewName("signoz"), func(ctx context.Context, providerSettings factory.ProviderSettings, config apiserver.Config) (apiserver.APIServer, error) {
return newProvider(ctx, providerSettings, config, orgGetter, authz, orgHandler, userHandler, sessionHandler, authDomainHandler, preferenceHandler, globalHandler, promoteHandler, flaggerHandler, dashboardModule, dashboardHandler, metricsExplorerHandler, gatewayHandler, roleGetter, roleHandler)
return newProvider(ctx, providerSettings, config, orgGetter, authz, orgHandler, userHandler, sessionHandler, authDomainHandler, preferenceHandler, globalHandler, promoteHandler, flaggerHandler, dashboardModule, dashboardHandler, metricsExplorerHandler, gatewayHandler)
})
}
@@ -87,8 +82,6 @@ func newProvider(
dashboardHandler dashboard.Handler,
metricsExplorerHandler metricsexplorer.Handler,
gatewayHandler gateway.Handler,
roleGetter role.Getter,
roleHandler role.Handler,
) (apiserver.APIServer, error) {
settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/pkg/apiserver/signozapiserver")
router := mux.NewRouter().UseEncodedPath()
@@ -109,11 +102,9 @@ func newProvider(
dashboardHandler: dashboardHandler,
metricsExplorerHandler: metricsExplorerHandler,
gatewayHandler: gatewayHandler,
roleGetter: roleGetter,
roleHandler: roleHandler,
}
provider.authZ = middleware.NewAuthZ(settings.Logger(), orgGetter, authz, roleGetter)
provider.authZ = middleware.NewAuthZ(settings.Logger(), orgGetter, authz)
if err := provider.AddToRouter(router); err != nil {
return nil, err
@@ -171,10 +162,6 @@ func (provider *provider) AddToRouter(router *mux.Router) error {
return err
}
if err := provider.addRoleRoutes(router); err != nil {
return err
}
return nil
}
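The wiring above uses the repo's provider-factory pattern: `NewFactory` captures all handler dependencies in a closure and defers construction to `newProvider`. A minimal self-contained sketch of that pattern follows; every name here is an illustrative stand-in, not the actual SigNoz `factory` API:

```go
package main

import "fmt"

// Server is a stand-in for apiserver.APIServer.
type Server struct {
	name string
	deps []string
}

// ProviderFactory loosely mirrors factory.ProviderFactory: a named,
// deferred constructor.
type ProviderFactory struct {
	Name string
	New  func(config string) (*Server, error)
}

// NewFactory captures its dependencies (the analog of orgGetter, authz,
// the handlers above) in a closure; nothing is built until New is called.
func NewFactory(deps ...string) ProviderFactory {
	return ProviderFactory{
		Name: "signoz",
		New: func(config string) (*Server, error) {
			return newProvider(config, deps)
		},
	}
}

func newProvider(config string, deps []string) (*Server, error) {
	return &Server{name: config, deps: deps}, nil
}

func main() {
	f := NewFactory("orgGetter", "authz")
	srv, err := f.New("community")
	fmt.Println(srv.name, len(srv.deps), err)
}
```

The closure keeps the provider's constructor signature stable while the dependency list grows or shrinks, which is exactly what this diff exercises.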

View File

@@ -1,99 +0,0 @@
package signozapiserver
import (
"net/http"
"github.com/SigNoz/signoz/pkg/http/handler"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/roletypes"
"github.com/gorilla/mux"
)
func (provider *provider) addRoleRoutes(router *mux.Router) error {
if err := router.Handle("/api/v1/roles", handler.New(provider.authZ.AdminAccess(provider.roleHandler.Create), handler.OpenAPIDef{
ID: "CreateRole",
Tags: []string{"role"},
Summary: "Create role",
Description: "This endpoint creates a role",
Request: nil,
RequestContentType: "",
Response: new(types.Identifiable),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusCreated,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleAdmin),
})).Methods(http.MethodPost).GetError(); err != nil {
return err
}
if err := router.Handle("/api/v1/roles", handler.New(provider.authZ.AdminAccess(provider.roleHandler.List), handler.OpenAPIDef{
ID: "ListRoles",
Tags: []string{"role"},
Summary: "List roles",
Description: "This endpoint lists all roles",
Request: nil,
RequestContentType: "",
Response: make([]*roletypes.Role, 0),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusOK,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleAdmin),
})).Methods(http.MethodGet).GetError(); err != nil {
return err
}
if err := router.Handle("/api/v1/roles/{id}", handler.New(provider.authZ.AdminAccess(provider.roleHandler.Get), handler.OpenAPIDef{
ID: "GetRole",
Tags: []string{"role"},
Summary: "Get role",
Description: "This endpoint gets a role",
Request: nil,
RequestContentType: "",
Response: new(roletypes.Role),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusOK,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleAdmin),
})).Methods(http.MethodGet).GetError(); err != nil {
return err
}
if err := router.Handle("/api/v1/roles/{id}", handler.New(provider.authZ.AdminAccess(provider.roleHandler.Patch), handler.OpenAPIDef{
ID: "PatchRole",
Tags: []string{"role"},
Summary: "Patch role",
Description: "This endpoint patches a role",
Request: nil,
RequestContentType: "",
Response: nil,
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusNoContent,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleAdmin),
})).Methods(http.MethodPatch).GetError(); err != nil {
return err
}
if err := router.Handle("/api/v1/roles/{id}", handler.New(provider.authZ.AdminAccess(provider.roleHandler.Delete), handler.OpenAPIDef{
ID: "DeleteRole",
Tags: []string{"role"},
Summary: "Delete role",
Description: "This endpoint deletes a role",
Request: nil,
RequestContentType: "",
Response: nil,
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusNoContent,
ErrorStatusCodes: []int{},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleAdmin),
})).Methods(http.MethodDelete).GetError(); err != nil {
return err
}
return nil
}

View File

@@ -315,22 +315,5 @@ func (provider *provider) addUserRoutes(router *mux.Router) error {
return err
}
if err := router.Handle("/api/v2/factor_password/forgot", handler.New(provider.authZ.OpenAccess(provider.userHandler.ForgotPassword), handler.OpenAPIDef{
ID: "ForgotPassword",
Tags: []string{"users"},
Summary: "Forgot password",
Description: "This endpoint initiates the forgot password flow by sending a reset password email",
Request: new(types.PostableForgotPassword),
RequestContentType: "application/json",
Response: nil,
ResponseContentType: "",
SuccessStatusCode: http.StatusNoContent,
ErrorStatusCodes: []int{http.StatusBadRequest},
Deprecated: false,
SecuritySchemes: []handler.OpenAPISecurityScheme{},
})).Methods(http.MethodPost).GetError(); err != nil {
return err
}
return nil
}

View File

@@ -16,10 +16,9 @@ type AuthZ interface {
Check(context.Context, *openfgav1.TupleKey) error
// CheckWithTupleCreation takes on the responsibility of generating the tuples, alongside everything Check does.
CheckWithTupleCreation(context.Context, authtypes.Claims, valuer.UUID, authtypes.Relation, authtypes.Typeable, []authtypes.Selector, []authtypes.Selector) error
CheckWithTupleCreation(context.Context, authtypes.Claims, valuer.UUID, authtypes.Relation, authtypes.Relation, authtypes.Typeable, []authtypes.Selector) error
// CheckWithTupleCreationWithoutClaims checks permissions for anonymous users.
CheckWithTupleCreationWithoutClaims(context.Context, valuer.UUID, authtypes.Relation, authtypes.Typeable, []authtypes.Selector, []authtypes.Selector) error
CheckWithTupleCreationWithoutClaims(context.Context, valuer.UUID, authtypes.Relation, authtypes.Relation, authtypes.Typeable, []authtypes.Selector) error
// BatchCheck returns an error when the upstream authorization server is unavailable, or when, for all of the tuples, the subject (s) doesn't have relation (r) on the object (o).
BatchCheck(context.Context, []*openfgav1.TupleKey) error

View File

@@ -152,17 +152,17 @@ func (provider *provider) BatchCheck(ctx context.Context, tupleReq []*openfgav1.
}
}
return errors.Newf(errors.TypeForbidden, authtypes.ErrCodeAuthZForbidden, "none of the subjects are allowed for requested access")
return errors.New(errors.TypeForbidden, authtypes.ErrCodeAuthZForbidden, "")
}
func (provider *provider) CheckWithTupleCreation(ctx context.Context, claims authtypes.Claims, orgID valuer.UUID, _ authtypes.Relation, _ authtypes.Typeable, _ []authtypes.Selector, roleSelectors []authtypes.Selector) error {
func (provider *provider) CheckWithTupleCreation(ctx context.Context, claims authtypes.Claims, orgID valuer.UUID, _ authtypes.Relation, translation authtypes.Relation, _ authtypes.Typeable, _ []authtypes.Selector) error {
subject, err := authtypes.NewSubject(authtypes.TypeableUser, claims.UserID, orgID, nil)
if err != nil {
return err
}
tuples, err := authtypes.TypeableRole.Tuples(subject, authtypes.RelationAssignee, roleSelectors, orgID)
tuples, err := authtypes.TypeableOrganization.Tuples(subject, translation, []authtypes.Selector{authtypes.MustNewSelector(authtypes.TypeOrganization, orgID.StringValue())}, orgID)
if err != nil {
return err
}
@@ -175,13 +175,13 @@ func (provider *provider) CheckWithTupleCreation(ctx context.Context, claims aut
return nil
}
func (provider *provider) CheckWithTupleCreationWithoutClaims(ctx context.Context, orgID valuer.UUID, _ authtypes.Relation, _ authtypes.Typeable, _ []authtypes.Selector, roleSelectors []authtypes.Selector) error {
func (provider *provider) CheckWithTupleCreationWithoutClaims(ctx context.Context, orgID valuer.UUID, _ authtypes.Relation, translation authtypes.Relation, _ authtypes.Typeable, _ []authtypes.Selector) error {
subject, err := authtypes.NewSubject(authtypes.TypeableAnonymous, authtypes.AnonymousUser.String(), orgID, nil)
if err != nil {
return err
}
tuples, err := authtypes.TypeableRole.Tuples(subject, authtypes.RelationAssignee, roleSelectors, orgID)
tuples, err := authtypes.TypeableOrganization.Tuples(subject, translation, []authtypes.Selector{authtypes.MustNewSelector(authtypes.TypeOrganization, orgID.StringValue())}, orgID)
if err != nil {
return err
}
@@ -195,10 +195,6 @@ func (provider *provider) CheckWithTupleCreationWithoutClaims(ctx context.Contex
}
func (provider *provider) Write(ctx context.Context, additions []*openfgav1.TupleKey, deletions []*openfgav1.TupleKey) error {
if len(additions) == 0 && len(deletions) == 0 {
return nil
}
storeID, modelID := provider.getStoreIDandModelID()
deletionTuplesWithoutCondition := make([]*openfgav1.TupleKeyWithoutCondition, len(deletions))
for idx, tuple := range deletions {

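The `Write` hunk above loops over deletion tuples to strip their conditions before sending them upstream (one side of the diff also guards against empty writes). A minimal sketch of that conversion, with stand-in structs in place of the `openfgav1` protobuf types:

```go
package main

import "fmt"

// TupleKey is a stand-in for openfgav1.TupleKey.
type TupleKey struct {
	User, Relation, Object string
	Condition              *string
}

// TupleKeyWithoutCondition is a stand-in for the condition-free
// deletion type OpenFGA expects.
type TupleKeyWithoutCondition struct {
	User, Relation, Object string
}

// stripConditions mirrors the deletion loop above: deletes are sent
// without the condition field.
func stripConditions(deletions []*TupleKey) []*TupleKeyWithoutCondition {
	out := make([]*TupleKeyWithoutCondition, len(deletions))
	for idx, t := range deletions {
		out[idx] = &TupleKeyWithoutCondition{User: t.User, Relation: t.Relation, Object: t.Object}
	}
	return out
}

func main() {
	del := []*TupleKey{{User: "user:u1", Relation: "assignee", Object: "role:r1"}}
	fmt.Println(len(stripConditions(del)))
}
```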
View File

@@ -34,11 +34,11 @@ func TestProviderStartStop(t *testing.T) {
sqlstore.Mock().ExpectQuery("SELECT authorization_model_id, schema_version, type, type_definition, serialized_protobuf FROM authorization_model WHERE authorization_model_id = (.+) AND store = (.+)").WithArgs("01K44QQKXR6F729W160NFCJT58", "01K3V0NTN47MPTMEV1PD5ST6ZC").WillReturnRows(modelRows)
sqlstore.Mock().ExpectExec("INSERT INTO authorization_model (.+) VALUES (.+)").WillReturnResult(sqlmock.NewResult(1, 1))
go func() {
err := provider.Start(context.Background())
require.NoError(t, err)
}()
// wait for the service to start
time.Sleep(time.Second * 2)

View File

@@ -7,7 +7,6 @@ import (
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/gorilla/mux"
@@ -21,15 +20,14 @@ type AuthZ struct {
logger *slog.Logger
orgGetter organization.Getter
authzService authz.AuthZ
roleGetter role.Getter
}
func NewAuthZ(logger *slog.Logger, orgGetter organization.Getter, authzService authz.AuthZ, roleGetter role.Getter) *AuthZ {
func NewAuthZ(logger *slog.Logger, orgGetter organization.Getter, authzService authz.AuthZ) *AuthZ {
if logger == nil {
panic("cannot build authz middleware, logger is empty")
}
return &AuthZ{logger: logger, orgGetter: orgGetter, authzService: authzService, roleGetter: roleGetter}
return &AuthZ{logger: logger, orgGetter: orgGetter, authzService: authzService}
}
func (middleware *AuthZ) ViewAccess(next http.HandlerFunc) http.HandlerFunc {
@@ -111,10 +109,9 @@ func (middleware *AuthZ) OpenAccess(next http.HandlerFunc) http.HandlerFunc {
})
}
func (middleware *AuthZ) Check(next http.HandlerFunc, relation authtypes.Relation, typeable authtypes.Typeable, cb authtypes.SelectorCallbackWithClaimsFn, roles []string) http.HandlerFunc {
func (middleware *AuthZ) Check(next http.HandlerFunc, relation authtypes.Relation, translation authtypes.Relation, typeable authtypes.Typeable, cb authtypes.SelectorCallbackWithClaimsFn) http.HandlerFunc {
return http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
ctx := req.Context()
claims, err := authtypes.ClaimsFromContext(ctx)
claims, err := authtypes.ClaimsFromContext(req.Context())
if err != nil {
render.Error(rw, err)
return
@@ -132,18 +129,7 @@ func (middleware *AuthZ) Check(next http.HandlerFunc, relation authtypes.Relatio
return
}
roles, err := middleware.roleGetter.ListByOrgIDAndNames(req.Context(), orgId, roles)
if err != nil {
render.Error(rw, err)
return
}
roleSelectors := []authtypes.Selector{}
for _, role := range roles {
roleSelectors = append(roleSelectors, authtypes.MustNewSelector(authtypes.TypeRole, role.ID.String()))
}
err = middleware.authzService.CheckWithTupleCreation(ctx, claims, orgId, relation, typeable, selectors, roleSelectors)
err = middleware.authzService.CheckWithTupleCreation(req.Context(), claims, orgId, relation, translation, typeable, selectors)
if err != nil {
render.Error(rw, err)
return
@@ -153,7 +139,7 @@ func (middleware *AuthZ) Check(next http.HandlerFunc, relation authtypes.Relatio
})
}
func (middleware *AuthZ) CheckWithoutClaims(next http.HandlerFunc, relation authtypes.Relation, typeable authtypes.Typeable, cb authtypes.SelectorCallbackWithoutClaimsFn, roles []string) http.HandlerFunc {
func (middleware *AuthZ) CheckWithoutClaims(next http.HandlerFunc, relation authtypes.Relation, translation authtypes.Relation, typeable authtypes.Typeable, cb authtypes.SelectorCallbackWithoutClaimsFn) http.HandlerFunc {
return http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
ctx := req.Context()
orgs, err := middleware.orgGetter.ListByOwnedKeyRange(ctx)
@@ -168,7 +154,7 @@ func (middleware *AuthZ) CheckWithoutClaims(next http.HandlerFunc, relation auth
return
}
err = middleware.authzService.CheckWithTupleCreationWithoutClaims(ctx, orgID, relation, typeable, selectors, selectors)
err = middleware.authzService.CheckWithTupleCreationWithoutClaims(ctx, orgID, relation, translation, typeable, selectors)
if err != nil {
render.Error(rw, err)
return

View File

@@ -1,63 +0,0 @@
package implrole
import (
"context"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/types/roletypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type getter struct {
store roletypes.Store
}
func NewGetter(store roletypes.Store) role.Getter {
return &getter{store: store}
}
func (getter *getter) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*roletypes.Role, error) {
storableRole, err := getter.store.Get(ctx, orgID, id)
if err != nil {
return nil, err
}
return roletypes.NewRoleFromStorableRole(storableRole), nil
}
func (getter *getter) GetByOrgIDAndName(ctx context.Context, orgID valuer.UUID, name string) (*roletypes.Role, error) {
storableRole, err := getter.store.GetByOrgIDAndName(ctx, orgID, name)
if err != nil {
return nil, err
}
return roletypes.NewRoleFromStorableRole(storableRole), nil
}
func (getter *getter) List(ctx context.Context, orgID valuer.UUID) ([]*roletypes.Role, error) {
storableRoles, err := getter.store.List(ctx, orgID)
if err != nil {
return nil, err
}
roles := make([]*roletypes.Role, len(storableRoles))
for idx, storableRole := range storableRoles {
roles[idx] = roletypes.NewRoleFromStorableRole(storableRole)
}
return roles, nil
}
func (getter *getter) ListByOrgIDAndNames(ctx context.Context, orgID valuer.UUID, names []string) ([]*roletypes.Role, error) {
storableRoles, err := getter.store.ListByOrgIDAndNames(ctx, orgID, names)
if err != nil {
return nil, err
}
roles := make([]*roletypes.Role, len(storableRoles))
for idx, storable := range storableRoles {
roles[idx] = roletypes.NewRoleFromStorableRole(storable)
}
return roles, nil
}

View File

@@ -1,108 +0,0 @@
package implrole
import (
"context"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/roletypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type granter struct {
store roletypes.Store
authz authz.AuthZ
}
func NewGranter(store roletypes.Store, authz authz.AuthZ) role.Granter {
return &granter{store: store, authz: authz}
}
func (granter *granter) Grant(ctx context.Context, orgID valuer.UUID, name string, subject string) error {
role, err := granter.store.GetByOrgIDAndName(ctx, orgID, name)
if err != nil {
return err
}
tuples, err := authtypes.TypeableRole.Tuples(
subject,
authtypes.RelationAssignee,
[]authtypes.Selector{
authtypes.MustNewSelector(authtypes.TypeRole, role.ID.StringValue()),
},
orgID,
)
if err != nil {
return err
}
return granter.authz.Write(ctx, tuples, nil)
}
func (granter *granter) GrantByID(ctx context.Context, orgID valuer.UUID, id valuer.UUID, subject string) error {
tuples, err := authtypes.TypeableRole.Tuples(
subject,
authtypes.RelationAssignee,
[]authtypes.Selector{
authtypes.MustNewSelector(authtypes.TypeRole, id.StringValue()),
},
orgID,
)
if err != nil {
return err
}
return granter.authz.Write(ctx, tuples, nil)
}
func (granter *granter) ModifyGrant(ctx context.Context, orgID valuer.UUID, existingRoleName string, updatedRoleName string, subject string) error {
err := granter.Revoke(ctx, orgID, existingRoleName, subject)
if err != nil {
return err
}
err = granter.Grant(ctx, orgID, updatedRoleName, subject)
if err != nil {
return err
}
return nil
}
func (granter *granter) Revoke(ctx context.Context, orgID valuer.UUID, name string, subject string) error {
role, err := granter.store.GetByOrgIDAndName(ctx, orgID, name)
if err != nil {
return err
}
tuples, err := authtypes.TypeableRole.Tuples(
subject,
authtypes.RelationAssignee,
[]authtypes.Selector{
authtypes.MustNewSelector(authtypes.TypeRole, role.ID.StringValue()),
},
orgID,
)
if err != nil {
return err
}
return granter.authz.Write(ctx, nil, tuples)
}
func (granter *granter) CreateManagedRoles(ctx context.Context, _ valuer.UUID, managedRoles []*roletypes.Role) error {
err := granter.store.RunInTx(ctx, func(ctx context.Context) error {
for _, role := range managedRoles {
err := granter.store.Create(ctx, roletypes.NewStorableRoleFromRole(role))
if err != nil {
return err
}
}
return nil
})
if err != nil {
return err
}
return nil
}
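The deleted granter above composes `ModifyGrant` from `Revoke` followed by `Grant`, both expressed as tuple writes. A tiny in-memory sketch of that composition, with a map standing in for the authz tuple store:

```go
package main

import "fmt"

// grants is an in-memory stand-in for the authz tuple store.
type grants map[string]bool

func key(subject, roleName string) string { return subject + "->" + roleName }

func (g grants) Grant(roleName, subject string)  { g[key(subject, roleName)] = true }
func (g grants) Revoke(roleName, subject string) { delete(g, key(subject, roleName)) }

// ModifyGrant mirrors the deleted code: revoke the existing role
// assignment, then grant the updated one.
func (g grants) ModifyGrant(oldRole, newRole, subject string) {
	g.Revoke(oldRole, subject)
	g.Grant(newRole, subject)
}

func main() {
	g := grants{}
	g.Grant("viewer", "user:u1")
	g.ModifyGrant("viewer", "editor", "user:u1")
	fmt.Println(g[key("user:u1", "viewer")], g[key("user:u1", "editor")])
}
```

Note the sequencing is revoke-then-grant with no transaction around it, so a failure between the two calls can leave the subject with neither role.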

View File

@@ -14,12 +14,11 @@ import (
)
type handler struct {
setter role.Setter
getter role.Getter
module role.Module
}
func NewHandler(setter role.Setter, getter role.Getter) role.Handler {
return &handler{setter: setter, getter: getter}
func NewHandler(module role.Module) role.Handler {
return &handler{module: module}
}
func (handler *handler) Create(rw http.ResponseWriter, r *http.Request) {
@@ -36,7 +35,7 @@ func (handler *handler) Create(rw http.ResponseWriter, r *http.Request) {
return
}
err = handler.setter.Create(ctx, valuer.MustNewUUID(claims.OrgID), roletypes.NewRole(req.Name, req.Description, roletypes.RoleTypeCustom, valuer.MustNewUUID(claims.OrgID)))
err = handler.module.Create(ctx, roletypes.NewRole(req.Name, req.Description, roletypes.RoleTypeCustom.StringValue(), valuer.MustNewUUID(claims.OrgID)))
if err != nil {
render.Error(rw, err)
return
@@ -64,7 +63,7 @@ func (handler *handler) Get(rw http.ResponseWriter, r *http.Request) {
return
}
role, err := handler.getter.Get(ctx, valuer.MustNewUUID(claims.OrgID), roleID)
role, err := handler.module.Get(ctx, valuer.MustNewUUID(claims.OrgID), roleID)
if err != nil {
render.Error(rw, err)
return
@@ -103,7 +102,7 @@ func (handler *handler) GetObjects(rw http.ResponseWriter, r *http.Request) {
return
}
objects, err := handler.setter.GetObjects(ctx, valuer.MustNewUUID(claims.OrgID), roleID, relation)
objects, err := handler.module.GetObjects(ctx, valuer.MustNewUUID(claims.OrgID), roleID, relation)
if err != nil {
render.Error(rw, err)
return
@@ -114,7 +113,7 @@ func (handler *handler) GetObjects(rw http.ResponseWriter, r *http.Request) {
func (handler *handler) GetResources(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
resources := handler.setter.GetResources(ctx)
resources := handler.module.GetResources(ctx)
var resourceRelations = struct {
Resources []*authtypes.Resource `json:"resources"`
@@ -134,7 +133,7 @@ func (handler *handler) List(rw http.ResponseWriter, r *http.Request) {
return
}
roles, err := handler.getter.List(ctx, valuer.MustNewUUID(claims.OrgID))
roles, err := handler.module.List(ctx, valuer.MustNewUUID(claims.OrgID))
if err != nil {
render.Error(rw, err)
return
@@ -163,19 +162,14 @@ func (handler *handler) Patch(rw http.ResponseWriter, r *http.Request) {
return
}
role, err := handler.getter.Get(ctx, valuer.MustNewUUID(claims.OrgID), id)
role, err := handler.module.Get(ctx, valuer.MustNewUUID(claims.OrgID), id)
if err != nil {
render.Error(rw, err)
return
}
err = role.PatchMetadata(req.Name, req.Description)
if err != nil {
render.Error(rw, err)
return
}
err = handler.setter.Patch(ctx, valuer.MustNewUUID(claims.OrgID), role)
role.PatchMetadata(req.Name, req.Description)
err = handler.module.Patch(ctx, valuer.MustNewUUID(claims.OrgID), role)
if err != nil {
render.Error(rw, err)
return
@@ -210,19 +204,13 @@ func (handler *handler) PatchObjects(rw http.ResponseWriter, r *http.Request) {
return
}
role, err := handler.getter.Get(ctx, valuer.MustNewUUID(claims.OrgID), id)
patchableObjects, err := roletypes.NewPatchableObjects(req.Additions, req.Deletions, relation)
if err != nil {
render.Error(rw, err)
return
}
patchableObjects, err := role.NewPatchableObjects(req.Additions, req.Deletions, relation)
if err != nil {
render.Error(rw, err)
return
}
err = handler.setter.PatchObjects(ctx, valuer.MustNewUUID(claims.OrgID), id, relation, patchableObjects.Additions, patchableObjects.Deletions)
err = handler.module.PatchObjects(ctx, valuer.MustNewUUID(claims.OrgID), id, relation, patchableObjects.Additions, patchableObjects.Deletions)
if err != nil {
render.Error(rw, err)
return
@@ -245,7 +233,7 @@ func (handler *handler) Delete(rw http.ResponseWriter, r *http.Request) {
return
}
err = handler.setter.Delete(ctx, valuer.MustNewUUID(claims.OrgID), id)
err = handler.module.Delete(ctx, valuer.MustNewUUID(claims.OrgID), id)
if err != nil {
render.Error(rw, err)
return

View File

@@ -0,0 +1,164 @@
package implrole
import (
"context"
"slices"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/roletypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type module struct {
store roletypes.Store
registry []role.RegisterTypeable
authz authz.AuthZ
}
func NewModule(store roletypes.Store, authz authz.AuthZ, registry []role.RegisterTypeable) role.Module {
return &module{
store: store,
authz: authz,
registry: registry,
}
}
func (module *module) Create(ctx context.Context, role *roletypes.Role) error {
return module.store.Create(ctx, roletypes.NewStorableRoleFromRole(role))
}
func (module *module) GetOrCreate(ctx context.Context, role *roletypes.Role) (*roletypes.Role, error) {
existingRole, err := module.store.GetByNameAndOrgID(ctx, role.Name, role.OrgID)
if err != nil {
if !errors.Ast(err, errors.TypeNotFound) {
return nil, err
}
}
if existingRole != nil {
return roletypes.NewRoleFromStorableRole(existingRole), nil
}
err = module.store.Create(ctx, roletypes.NewStorableRoleFromRole(role))
if err != nil {
return nil, err
}
return role, nil
}
func (module *module) GetResources(_ context.Context) []*authtypes.Resource {
typeables := make([]authtypes.Typeable, 0)
for _, register := range module.registry {
typeables = append(typeables, register.MustGetTypeables()...)
}
// the role module cannot register itself through the registry!
typeables = append(typeables, module.MustGetTypeables()...)
resources := make([]*authtypes.Resource, 0)
for _, typeable := range typeables {
resources = append(resources, &authtypes.Resource{Name: typeable.Name(), Type: typeable.Type()})
}
return resources
}
func (module *module) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*roletypes.Role, error) {
storableRole, err := module.store.Get(ctx, orgID, id)
if err != nil {
return nil, err
}
return roletypes.NewRoleFromStorableRole(storableRole), nil
}
func (module *module) GetObjects(ctx context.Context, orgID valuer.UUID, id valuer.UUID, relation authtypes.Relation) ([]*authtypes.Object, error) {
storableRole, err := module.store.Get(ctx, orgID, id)
if err != nil {
return nil, err
}
objects := make([]*authtypes.Object, 0)
for _, resource := range module.GetResources(ctx) {
if slices.Contains(authtypes.TypeableRelations[resource.Type], relation) {
resourceObjects, err := module.
authz.
ListObjects(
ctx,
authtypes.MustNewSubject(authtypes.TypeableRole, storableRole.ID.String(), orgID, &authtypes.RelationAssignee),
relation,
authtypes.MustNewTypeableFromType(resource.Type, resource.Name),
)
if err != nil {
return nil, err
}
objects = append(objects, resourceObjects...)
}
}
return objects, nil
}
func (module *module) List(ctx context.Context, orgID valuer.UUID) ([]*roletypes.Role, error) {
storableRoles, err := module.store.List(ctx, orgID)
if err != nil {
return nil, err
}
roles := make([]*roletypes.Role, len(storableRoles))
for idx, storableRole := range storableRoles {
roles[idx] = roletypes.NewRoleFromStorableRole(storableRole)
}
return roles, nil
}
func (module *module) Patch(ctx context.Context, orgID valuer.UUID, role *roletypes.Role) error {
return module.store.Update(ctx, orgID, roletypes.NewStorableRoleFromRole(role))
}
func (module *module) PatchObjects(ctx context.Context, orgID valuer.UUID, id valuer.UUID, relation authtypes.Relation, additions, deletions []*authtypes.Object) error {
additionTuples, err := roletypes.GetAdditionTuples(id, orgID, relation, additions)
if err != nil {
return err
}
deletionTuples, err := roletypes.GetDeletionTuples(id, orgID, relation, deletions)
if err != nil {
return err
}
err = module.authz.Write(ctx, additionTuples, deletionTuples)
if err != nil {
return err
}
return nil
}
func (module *module) Assign(ctx context.Context, id valuer.UUID, orgID valuer.UUID, subject string) error {
tuples, err := authtypes.TypeableRole.Tuples(
subject,
authtypes.RelationAssignee,
[]authtypes.Selector{
authtypes.MustNewSelector(authtypes.TypeRole, id.StringValue()),
},
orgID,
)
if err != nil {
return err
}
return module.authz.Write(ctx, tuples, nil)
}
func (module *module) Delete(ctx context.Context, orgID valuer.UUID, id valuer.UUID) error {
return module.store.Delete(ctx, orgID, id)
}
func (module *module) MustGetTypeables() []authtypes.Typeable {
return []authtypes.Typeable{authtypes.TypeableRole, roletypes.TypeableResourcesRoles}
}
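`GetOrCreate` in the new module above swallows only not-found errors from the lookup, returns the existing role when one is found, and creates otherwise. That idempotent pattern can be sketched self-contained with a map-backed store (all names here are illustrative, not the `roletypes` API):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// store is an in-memory stand-in for roletypes.Store, keyed by org+name.
type store map[string]string

func (s store) getByNameAndOrgID(name, orgID string) (string, error) {
	if v, ok := s[orgID+"/"+name]; ok {
		return v, nil
	}
	return "", errNotFound
}

// getOrCreate mirrors the module above: propagate unexpected lookup
// errors, return the existing role if found, and create otherwise.
func (s store) getOrCreate(name, orgID, role string) (string, error) {
	existing, err := s.getByNameAndOrgID(name, orgID)
	if err != nil && !errors.Is(err, errNotFound) {
		return "", err
	}
	if err == nil {
		return existing, nil
	}
	s[orgID+"/"+name] = role
	return role, nil
}

func main() {
	s := store{}
	first, _ := s.getOrCreate("admin", "o1", "role-1")
	second, _ := s.getOrCreate("admin", "o1", "role-2")
	fmt.Println(first, second)
}
```

Calling it twice for the same org and name returns the first role both times, which is what makes it safe to run on every startup when seeding managed roles.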

View File

@@ -1,53 +0,0 @@
package implrole
import (
"context"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/roletypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type setter struct {
store roletypes.Store
authz authz.AuthZ
}
func NewSetter(store roletypes.Store, authz authz.AuthZ) role.Setter {
return &setter{store: store, authz: authz}
}
func (setter *setter) Create(_ context.Context, _ valuer.UUID, _ *roletypes.Role) error {
return errors.Newf(errors.TypeUnsupported, roletypes.ErrCodeRoleUnsupported, "not implemented")
}
func (setter *setter) GetOrCreate(_ context.Context, _ valuer.UUID, _ *roletypes.Role) (*roletypes.Role, error) {
return nil, errors.Newf(errors.TypeUnsupported, roletypes.ErrCodeRoleUnsupported, "not implemented")
}
func (setter *setter) GetResources(_ context.Context) []*authtypes.Resource {
return nil
}
func (setter *setter) GetObjects(ctx context.Context, orgID valuer.UUID, id valuer.UUID, relation authtypes.Relation) ([]*authtypes.Object, error) {
return nil, errors.Newf(errors.TypeUnsupported, roletypes.ErrCodeRoleUnsupported, "not implemented")
}
func (setter *setter) Patch(_ context.Context, _ valuer.UUID, _ *roletypes.Role) error {
return errors.Newf(errors.TypeUnsupported, roletypes.ErrCodeRoleUnsupported, "not implemented")
}
func (setter *setter) PatchObjects(_ context.Context, _ valuer.UUID, _ valuer.UUID, _ authtypes.Relation, _, _ []*authtypes.Object) error {
return errors.Newf(errors.TypeUnsupported, roletypes.ErrCodeRoleUnsupported, "not implemented")
}
func (setter *setter) Delete(_ context.Context, _ valuer.UUID, _ valuer.UUID) error {
return errors.Newf(errors.TypeUnsupported, roletypes.ErrCodeRoleUnsupported, "not implemented")
}
func (setter *setter) MustGetTypeables() []authtypes.Typeable {
return nil
}


@@ -7,7 +7,6 @@ import (
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types/roletypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/uptrace/bun"
)
type store struct {
@@ -21,7 +20,7 @@ func NewStore(sqlstore sqlstore.SQLStore) roletypes.Store {
func (store *store) Create(ctx context.Context, role *roletypes.StorableRole) error {
_, err := store.
sqlstore.
BunDBCtx(ctx).
BunDB().
NewInsert().
Model(role).
Exec(ctx)
@@ -36,7 +35,7 @@ func (store *store) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID)
role := new(roletypes.StorableRole)
err := store.
sqlstore.
BunDBCtx(ctx).
BunDB().
NewSelect().
Model(role).
Where("org_id = ?", orgID).
@@ -49,11 +48,11 @@ func (store *store) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID)
return role, nil
}
func (store *store) GetByOrgIDAndName(ctx context.Context, orgID valuer.UUID, name string) (*roletypes.StorableRole, error) {
func (store *store) GetByNameAndOrgID(ctx context.Context, name string, orgID valuer.UUID) (*roletypes.StorableRole, error) {
role := new(roletypes.StorableRole)
err := store.
sqlstore.
BunDBCtx(ctx).
BunDB().
NewSelect().
Model(role).
Where("org_id = ?", orgID).
@@ -70,30 +69,13 @@ func (store *store) List(ctx context.Context, orgID valuer.UUID) ([]*roletypes.S
roles := make([]*roletypes.StorableRole, 0)
err := store.
sqlstore.
BunDBCtx(ctx).
BunDB().
NewSelect().
Model(&roles).
Where("org_id = ?", orgID).
Scan(ctx)
if err != nil {
return nil, err
}
return roles, nil
}
func (store *store) ListByOrgIDAndNames(ctx context.Context, orgID valuer.UUID, names []string) ([]*roletypes.StorableRole, error) {
roles := make([]*roletypes.StorableRole, 0)
err := store.
sqlstore.
BunDBCtx(ctx).
NewSelect().
Model(&roles).
Where("org_id = ?", orgID).
Where("name IN (?)", bun.In(names)).
Scan(ctx)
if err != nil {
return nil, err
return nil, store.sqlstore.WrapNotFoundErrf(err, roletypes.ErrCodeRoleNotFound, "no roles found in org_id: %s", orgID)
}
return roles, nil
@@ -102,7 +84,7 @@ func (store *store) ListByOrgIDAndNames(ctx context.Context, orgID valuer.UUID,
func (store *store) Update(ctx context.Context, orgID valuer.UUID, role *roletypes.StorableRole) error {
_, err := store.
sqlstore.
BunDBCtx(ctx).
BunDB().
NewUpdate().
Model(role).
WherePK().
@@ -118,7 +100,7 @@ func (store *store) Update(ctx context.Context, orgID valuer.UUID, role *roletyp
func (store *store) Delete(ctx context.Context, orgID valuer.UUID, id valuer.UUID) error {
_, err := store.
sqlstore.
BunDBCtx(ctx).
BunDB().
NewDelete().
Model(new(roletypes.StorableRole)).
Where("org_id = ?", orgID).


@@ -9,16 +9,22 @@ import (
"github.com/SigNoz/signoz/pkg/valuer"
)
type Setter interface {
type Module interface {
// Creates the role.
Create(context.Context, valuer.UUID, *roletypes.Role) error
Create(context.Context, *roletypes.Role) error
// Gets the role if it exists or creates one.
GetOrCreate(context.Context, valuer.UUID, *roletypes.Role) (*roletypes.Role, error)
GetOrCreate(context.Context, *roletypes.Role) (*roletypes.Role, error)
// Gets the role
Get(context.Context, valuer.UUID, valuer.UUID) (*roletypes.Role, error)
// Gets the objects associated with the given role and relation.
GetObjects(context.Context, valuer.UUID, valuer.UUID, authtypes.Relation) ([]*authtypes.Object, error)
// Lists all the roles for the organization.
List(context.Context, valuer.UUID) ([]*roletypes.Role, error)
// Gets all the typeable resources registered from role registry.
GetResources(context.Context) []*authtypes.Resource
@@ -31,40 +37,12 @@ type Setter interface {
// Deletes the role and tuples in authorization server.
Delete(context.Context, valuer.UUID, valuer.UUID) error
// Assigns role to the given subject.
Assign(context.Context, valuer.UUID, valuer.UUID, string) error
RegisterTypeable
}
type Getter interface {
// Gets the role
Get(context.Context, valuer.UUID, valuer.UUID) (*roletypes.Role, error)
// Gets the role by org_id and name
GetByOrgIDAndName(context.Context, valuer.UUID, string) (*roletypes.Role, error)
// Lists all the roles for the organization.
List(context.Context, valuer.UUID) ([]*roletypes.Role, error)
// Lists all the roles for the organization filtered by name
ListByOrgIDAndNames(context.Context, valuer.UUID, []string) ([]*roletypes.Role, error)
}
type Granter interface {
// Grants a role to the subject based on role name.
Grant(context.Context, valuer.UUID, string, string) error
// Grants a role to the subject based on role id.
GrantByID(context.Context, valuer.UUID, valuer.UUID, string) error
// Revokes a granted role from the subject based on role name.
Revoke(context.Context, valuer.UUID, string, string) error
// Changes the granted role for the subject based on role name.
ModifyGrant(context.Context, valuer.UUID, string, string, string) error
// Bootstrap the managed roles.
CreateManagedRoles(context.Context, valuer.UUID, []*roletypes.Role) error
}
type RegisterTypeable interface {
MustGetTypeables() []authtypes.Typeable
}


@@ -1,43 +0,0 @@
package user
import (
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
)
type Config struct {
Password PasswordConfig `mapstructure:"password"`
}
type PasswordConfig struct {
Reset ResetConfig `mapstructure:"reset"`
}
type ResetConfig struct {
AllowSelf bool `mapstructure:"allow_self"`
MaxTokenLifetime time.Duration `mapstructure:"max_token_lifetime"`
}
func NewConfigFactory() factory.ConfigFactory {
return factory.NewConfigFactory(factory.MustNewName("user"), newConfig)
}
func newConfig() factory.Config {
return &Config{
Password: PasswordConfig{
Reset: ResetConfig{
AllowSelf: false,
MaxTokenLifetime: 6 * time.Hour,
},
},
}
}
func (c Config) Validate() error {
if c.Password.Reset.MaxTokenLifetime <= 0 {
return errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "user::password::reset::max_token_lifetime must be positive")
}
return nil
}


@@ -332,25 +332,6 @@ func (handler *handler) ChangePassword(w http.ResponseWriter, r *http.Request) {
render.Success(w, http.StatusNoContent, nil)
}
func (h *handler) ForgotPassword(w http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
req := new(types.PostableForgotPassword)
if err := binding.JSON.BindBody(r.Body, req); err != nil {
render.Error(w, err)
return
}
err := h.module.ForgotPassword(ctx, req.OrgID, req.Email, req.FrontendBaseURL)
if err != nil {
render.Error(w, err)
return
}
render.Success(w, http.StatusNoContent, nil)
}
func (h *handler) CreateAPIKey(w http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()


@@ -12,14 +12,11 @@ import (
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/modules/user"
root "github.com/SigNoz/signoz/pkg/modules/user"
"github.com/SigNoz/signoz/pkg/tokenizer"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/emailtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/dustin/go-humanize"
"golang.org/x/text/cases"
"golang.org/x/text/language"
)
@@ -30,13 +27,11 @@ type Module struct {
emailing emailing.Emailing
settings factory.ScopedProviderSettings
orgSetter organization.Setter
granter role.Granter
analytics analytics.Analytics
config user.Config
}
// This module is a WIP, don't take inspiration from this.
func NewModule(store types.UserStore, tokenizer tokenizer.Tokenizer, emailing emailing.Emailing, providerSettings factory.ProviderSettings, orgSetter organization.Setter, granter role.Granter, analytics analytics.Analytics, config user.Config) root.Module {
func NewModule(store types.UserStore, tokenizer tokenizer.Tokenizer, emailing emailing.Emailing, providerSettings factory.ProviderSettings, orgSetter organization.Setter, analytics analytics.Analytics) root.Module {
settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/pkg/modules/user/impluser")
return &Module{
store: store,
@@ -45,8 +40,6 @@ func NewModule(store types.UserStore, tokenizer tokenizer.Tokenizer, emailing em
settings: settings,
orgSetter: orgSetter,
analytics: analytics,
granter: granter,
config: config,
}
}
@@ -230,6 +223,7 @@ func (m *Module) UpdateUser(ctx context.Context, orgID valuer.UUID, id string, u
}
user.UpdatedAt = time.Now()
updatedUser, err := m.store.UpdateUser(ctx, orgID, id, user)
if err != nil {
return nil, err
@@ -260,8 +254,8 @@ func (m *Module) UpdateUser(ctx context.Context, orgID valuer.UUID, id string, u
return updatedUser, nil
}
func (module *Module) DeleteUser(ctx context.Context, orgID valuer.UUID, id string, deletedBy string) error {
user, err := module.store.GetUser(ctx, valuer.MustNewUUID(id))
func (m *Module) DeleteUser(ctx context.Context, orgID valuer.UUID, id string, deletedBy string) error {
user, err := m.store.GetUser(ctx, valuer.MustNewUUID(id))
if err != nil {
return err
}
@@ -271,7 +265,7 @@ func (module *Module) DeleteUser(ctx context.Context, orgID valuer.UUID, id stri
}
// don't allow to delete the last admin user
adminUsers, err := module.store.GetUsersByRoleAndOrgID(ctx, types.RoleAdmin, orgID)
adminUsers, err := m.store.GetUsersByRoleAndOrgID(ctx, types.RoleAdmin, orgID)
if err != nil {
return err
}
@@ -280,11 +274,11 @@ func (module *Module) DeleteUser(ctx context.Context, orgID valuer.UUID, id stri
return errors.New(errors.TypeForbidden, errors.CodeForbidden, "cannot delete the last admin")
}
if err := module.store.DeleteUser(ctx, orgID.String(), user.ID.StringValue()); err != nil {
if err := m.store.DeleteUser(ctx, orgID.String(), user.ID.StringValue()); err != nil {
return err
}
module.analytics.TrackUser(ctx, user.OrgID.String(), user.ID.String(), "User Deleted", map[string]any{
m.analytics.TrackUser(ctx, user.OrgID.String(), user.ID.String(), "User Deleted", map[string]any{
"deleted_by": deletedBy,
})
@@ -308,91 +302,33 @@ func (module *Module) GetOrCreateResetPasswordToken(ctx context.Context, userID
}
}
// check if a token already exists for this password id
existingResetPasswordToken, err := module.store.GetResetPasswordTokenByPasswordID(ctx, password.ID)
if err != nil && !errors.Ast(err, errors.TypeNotFound) {
return nil, err // return the error if it is not a not found error
resetPasswordToken, err := types.NewResetPasswordToken(password.ID)
if err != nil {
return nil, err
}
// return the existing token if it is not expired
if existingResetPasswordToken != nil && !existingResetPasswordToken.IsExpired() {
return existingResetPasswordToken, nil // return the existing token if it is not expired
}
err = module.store.CreateResetPasswordToken(ctx, resetPasswordToken)
if err != nil {
if !errors.Ast(err, errors.TypeAlreadyExists) {
return nil, err
}
// delete the existing token entry
if existingResetPasswordToken != nil {
if err := module.store.DeleteResetPasswordTokenByPasswordID(ctx, password.ID); err != nil {
// if the token already exists, we return the existing token
resetPasswordToken, err = module.store.GetResetPasswordTokenByPasswordID(ctx, password.ID)
if err != nil {
return nil, err
}
}
// create a new token
resetPasswordToken, err := types.NewResetPasswordToken(password.ID, time.Now().Add(module.config.Password.Reset.MaxTokenLifetime))
if err != nil {
return nil, err
}
// create a new token
err = module.store.CreateResetPasswordToken(ctx, resetPasswordToken)
if err != nil {
return nil, err
}
return resetPasswordToken, nil
}
func (module *Module) ForgotPassword(ctx context.Context, orgID valuer.UUID, email valuer.Email, frontendBaseURL string) error {
if !module.config.Password.Reset.AllowSelf {
return errors.New(errors.TypeUnsupported, errors.CodeUnsupported, "users are not allowed to reset their password themselves, please contact an admin to reset your password")
}
user, err := module.store.GetUserByEmailAndOrgID(ctx, email, orgID)
if err != nil {
if errors.Ast(err, errors.TypeNotFound) {
return nil // for security reasons
}
return err
}
token, err := module.GetOrCreateResetPasswordToken(ctx, user.ID)
if err != nil {
module.settings.Logger().ErrorContext(ctx, "failed to create reset password token", "error", err)
return err
}
resetLink := fmt.Sprintf("%s/password-reset?token=%s", frontendBaseURL, token.Token)
tokenLifetime := module.config.Password.Reset.MaxTokenLifetime
humanizedTokenLifetime := strings.TrimSpace(humanize.RelTime(time.Now(), time.Now().Add(tokenLifetime), "", ""))
if err := module.emailing.SendHTML(
ctx,
user.Email.String(),
"Reset your SigNoz password",
emailtypes.TemplateNameResetPassword,
map[string]any{
"Name": user.DisplayName,
"Link": resetLink,
"Expiry": humanizedTokenLifetime,
},
); err != nil {
module.settings.Logger().ErrorContext(ctx, "failed to send reset password email", "error", err)
return nil
}
return nil
}
func (module *Module) UpdatePasswordByResetPasswordToken(ctx context.Context, token string, passwd string) error {
resetPasswordToken, err := module.store.GetResetPasswordToken(ctx, token)
if err != nil {
return err
}
if resetPasswordToken.IsExpired() {
return errors.New(errors.TypeUnauthenticated, errors.CodeUnauthenticated, "reset password token has expired")
}
password, err := module.store.GetPassword(ctx, resetPasswordToken.PasswordID)
if err != nil {
return err
@@ -478,7 +414,7 @@ func (module *Module) CreateFirstUser(ctx context.Context, organization *types.O
}
if err = module.store.RunInTx(ctx, func(ctx context.Context) error {
err = module.orgSetter.Create(ctx, organization)
err := module.orgSetter.Create(ctx, organization)
if err != nil {
return err
}


@@ -391,18 +391,6 @@ func (store *store) GetResetPasswordTokenByPasswordID(ctx context.Context, passw
return resetPasswordToken, nil
}
func (store *store) DeleteResetPasswordTokenByPasswordID(ctx context.Context, passwordID valuer.UUID) error {
_, err := store.sqlstore.BunDB().NewDelete().
Model(&types.ResetPasswordToken{}).
Where("password_id = ?", passwordID).
Exec(ctx)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "failed to delete reset password token")
}
return nil
}
func (store *store) GetResetPasswordToken(ctx context.Context, token string) (*types.ResetPasswordToken, error) {
resetPasswordRequest := new(types.ResetPasswordToken)


@@ -30,9 +30,6 @@ type Module interface {
// Updates password of user to the new password. It also deletes all reset password tokens for the user.
UpdatePassword(ctx context.Context, userID valuer.UUID, oldPassword string, password string) error
// Initiate forgot password flow for a user
ForgotPassword(ctx context.Context, orgID valuer.UUID, email valuer.Email, frontendBaseURL string) error
UpdateUser(ctx context.Context, orgID valuer.UUID, id string, user *types.User, updatedBy string) (*types.User, error)
DeleteUser(ctx context.Context, orgID valuer.UUID, id string, deletedBy string) error
@@ -95,7 +92,6 @@ type Handler interface {
GetResetPasswordToken(http.ResponseWriter, *http.Request)
ResetPassword(http.ResponseWriter, *http.Request)
ChangePassword(http.ResponseWriter, *http.Request)
ForgotPassword(http.ResponseWriter, *http.Request)
// API KEY
CreateAPIKey(http.ResponseWriter, *http.Request)


@@ -209,7 +209,7 @@ func (s *Server) createPublicServer(api *APIHandler, web web.Web) (*http.Server,
r.Use(middleware.NewLogging(s.signoz.Instrumentation.Logger(), s.config.APIServer.Logging.ExcludedRoutes).Wrap)
r.Use(middleware.NewComment().Wrap)
am := middleware.NewAuthZ(s.signoz.Instrumentation.Logger(), s.signoz.Modules.OrgGetter, s.signoz.Authz, s.signoz.Modules.RoleGetter)
am := middleware.NewAuthZ(s.signoz.Instrumentation.Logger(), s.signoz.Modules.OrgGetter, s.signoz.Authz)
api.RegisterRoutes(r, am)
api.RegisterLogsRoutes(r, am)


@@ -22,7 +22,6 @@ import (
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/instrumentation"
"github.com/SigNoz/signoz/pkg/modules/metricsexplorer"
"github.com/SigNoz/signoz/pkg/modules/user"
"github.com/SigNoz/signoz/pkg/prometheus"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/ruler"
@@ -110,9 +109,6 @@ type Config struct {
// Flagger config
Flagger flagger.Config `mapstructure:"flagger"`
// User config
User user.Config `mapstructure:"user"`
}
// DeprecatedFlags are the flags that are deprecated and scheduled for removal.
@@ -175,7 +171,6 @@ func NewConfig(ctx context.Context, logger *slog.Logger, resolverConfig config.R
tokenizer.NewConfigFactory(),
metricsexplorer.NewConfigFactory(),
flagger.NewConfigFactory(),
user.NewConfigFactory(),
}
conf, err := config.New(ctx, resolverConfig, configFactories)


@@ -17,8 +17,6 @@ import (
"github.com/SigNoz/signoz/pkg/modules/quickfilter/implquickfilter"
"github.com/SigNoz/signoz/pkg/modules/rawdataexport"
"github.com/SigNoz/signoz/pkg/modules/rawdataexport/implrawdataexport"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/modules/role/implrole"
"github.com/SigNoz/signoz/pkg/modules/savedview"
"github.com/SigNoz/signoz/pkg/modules/savedview/implsavedview"
"github.com/SigNoz/signoz/pkg/modules/services"
@@ -43,7 +41,6 @@ type Handlers struct {
Global global.Handler
FlaggerHandler flagger.Handler
GatewayHandler gateway.Handler
Role role.Handler
}
func NewHandlers(modules Modules, providerSettings factory.ProviderSettings, querier querier.Querier, licensing licensing.Licensing, global global.Global, flaggerService flagger.Flagger, gatewayService gateway.Gateway) Handlers {
@@ -60,6 +57,5 @@ func NewHandlers(modules Modules, providerSettings factory.ProviderSettings, que
Global: signozglobal.NewHandler(global),
FlaggerHandler: flagger.NewHandler(flaggerService),
GatewayHandler: gateway.NewHandler(gatewayService),
Role: implrole.NewHandler(modules.RoleSetter, modules.RoleGetter),
}
}


@@ -13,7 +13,6 @@ import (
"github.com/SigNoz/signoz/pkg/factory/factorytest"
"github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization/implorganization"
"github.com/SigNoz/signoz/pkg/modules/role/implrole"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/sharder"
"github.com/SigNoz/signoz/pkg/sharder/noopsharder"
@@ -41,10 +40,7 @@ func TestNewHandlers(t *testing.T) {
queryParser := queryparser.New(providerSettings)
require.NoError(t, err)
dashboardModule := impldashboard.NewModule(impldashboard.NewStore(sqlstore), providerSettings, nil, orgGetter, queryParser)
roleSetter := implrole.NewSetter(implrole.NewStore(sqlstore), nil)
roleGetter := implrole.NewGetter(implrole.NewStore(sqlstore))
grantModule := implrole.NewGranter(implrole.NewStore(sqlstore), nil)
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, nil, nil, nil, nil, nil, nil, nil, queryParser, Config{}, dashboardModule, roleSetter, roleGetter, grantModule)
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, nil, nil, nil, nil, nil, nil, nil, queryParser, Config{}, dashboardModule)
handlers := NewHandlers(modules, providerSettings, nil, nil, nil, nil, nil)


@@ -25,7 +25,6 @@ import (
"github.com/SigNoz/signoz/pkg/modules/quickfilter/implquickfilter"
"github.com/SigNoz/signoz/pkg/modules/rawdataexport"
"github.com/SigNoz/signoz/pkg/modules/rawdataexport/implrawdataexport"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/modules/savedview"
"github.com/SigNoz/signoz/pkg/modules/savedview/implsavedview"
"github.com/SigNoz/signoz/pkg/modules/services"
@@ -67,9 +66,6 @@ type Modules struct {
SpanPercentile spanpercentile.Module
MetricsExplorer metricsexplorer.Module
Promote promote.Module
RoleSetter role.Setter
RoleGetter role.Getter
Granter role.Granter
}
func NewModules(
@@ -89,13 +85,10 @@ func NewModules(
queryParser queryparser.QueryParser,
config Config,
dashboard dashboard.Module,
roleSetter role.Setter,
roleGetter role.Getter,
granter role.Granter,
) Modules {
quickfilter := implquickfilter.NewModule(implquickfilter.NewStore(sqlstore))
orgSetter := implorganization.NewSetter(implorganization.NewStore(sqlstore), alertmanager, quickfilter)
user := impluser.NewModule(impluser.NewStore(sqlstore, providerSettings), tokenizer, emailing, providerSettings, orgSetter, granter, analytics, config.User)
user := impluser.NewModule(impluser.NewStore(sqlstore, providerSettings), tokenizer, emailing, providerSettings, orgSetter, analytics)
userGetter := impluser.NewGetter(impluser.NewStore(sqlstore, providerSettings))
ruleStore := sqlrulestore.NewRuleStore(sqlstore, queryParser, providerSettings)
@@ -117,8 +110,5 @@ func NewModules(
Services: implservices.NewModule(querier, telemetryStore),
MetricsExplorer: implmetricsexplorer.NewModule(telemetryStore, telemetryMetadataStore, cache, ruleStore, dashboard, providerSettings, config.MetricsExplorer),
Promote: implpromote.NewModule(telemetryMetadataStore, telemetryStore),
RoleSetter: roleSetter,
RoleGetter: roleGetter,
Granter: granter,
}
}


@@ -13,7 +13,6 @@ import (
"github.com/SigNoz/signoz/pkg/factory/factorytest"
"github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization/implorganization"
"github.com/SigNoz/signoz/pkg/modules/role/implrole"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/sharder"
"github.com/SigNoz/signoz/pkg/sharder/noopsharder"
@@ -41,10 +40,7 @@ func TestNewModules(t *testing.T) {
queryParser := queryparser.New(providerSettings)
require.NoError(t, err)
dashboardModule := impldashboard.NewModule(impldashboard.NewStore(sqlstore), providerSettings, nil, orgGetter, queryParser)
roleSetter := implrole.NewSetter(implrole.NewStore(sqlstore), nil)
roleGetter := implrole.NewGetter(implrole.NewStore(sqlstore))
grantModule := implrole.NewGranter(implrole.NewStore(sqlstore), nil)
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, nil, nil, nil, nil, nil, nil, nil, queryParser, Config{}, dashboardModule, roleSetter, roleGetter, grantModule)
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, nil, nil, nil, nil, nil, nil, nil, queryParser, Config{}, dashboardModule)
reflectVal := reflect.ValueOf(modules)
for i := 0; i < reflectVal.NumField(); i++ {


@@ -19,7 +19,6 @@ import (
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/preference"
"github.com/SigNoz/signoz/pkg/modules/promote"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/modules/session"
"github.com/SigNoz/signoz/pkg/modules/user"
"github.com/SigNoz/signoz/pkg/types/ctxtypes"
@@ -50,8 +49,6 @@ func NewOpenAPI(ctx context.Context, instrumentation instrumentation.Instrumenta
struct{ dashboard.Handler }{},
struct{ metricsexplorer.Handler }{},
struct{ gateway.Handler }{},
struct{ role.Getter }{},
struct{ role.Handler }{},
).New(ctx, instrumentation.ToProviderSettings(), apiserver.Config{})
if err != nil {
return nil, err


@@ -161,7 +161,6 @@ func NewSQLMigrationProviderFactories(
sqlmigration.NewUpdateUserPreferenceFactory(sqlstore, sqlschema),
sqlmigration.NewUpdateOrgPreferenceFactory(sqlstore, sqlschema),
sqlmigration.NewRenameOrgDomainsFactory(sqlstore, sqlschema),
sqlmigration.NewAddResetPasswordTokenExpiryFactory(sqlstore, sqlschema),
)
}
@@ -243,8 +242,6 @@ func NewAPIServerProviderFactories(orgGetter organization.Getter, authz authz.Au
handlers.Dashboard,
handlers.MetricsExplorer,
handlers.GatewayHandler,
modules.RoleGetter,
handlers.Role,
),
)
}


@@ -90,9 +90,8 @@ func New(
telemetrystoreProviderFactories factory.NamedMap[factory.ProviderFactory[telemetrystore.TelemetryStore, telemetrystore.Config]],
authNsCallback func(ctx context.Context, providerSettings factory.ProviderSettings, store authtypes.AuthNStore, licensing licensing.Licensing) (map[authtypes.AuthNProvider]authn.AuthN, error),
authzCallback func(context.Context, sqlstore.SQLStore) factory.ProviderFactory[authz.AuthZ, authz.Config],
dashboardModuleCallback func(sqlstore.SQLStore, factory.ProviderSettings, analytics.Analytics, organization.Getter, role.Setter, role.Granter, queryparser.QueryParser, querier.Querier, licensing.Licensing) dashboard.Module,
dashboardModuleCallback func(sqlstore.SQLStore, factory.ProviderSettings, analytics.Analytics, organization.Getter, role.Module, queryparser.QueryParser, querier.Querier, licensing.Licensing) dashboard.Module,
gatewayProviderFactory func(licensing.Licensing) factory.ProviderFactory[gateway.Gateway, gateway.Config],
roleSetterCallback func(sqlstore.SQLStore, authz.AuthZ, licensing.Licensing, []role.RegisterTypeable) role.Setter,
) (*SigNoz, error) {
// Initialize instrumentation
instrumentation, err := instrumentation.New(ctx, config.Instrumentation, version.Info, "signoz")
@@ -281,12 +280,6 @@ func New(
return nil, err
}
// Initialize user getter
userGetter := impluser.NewGetter(impluser.NewStore(sqlstore, providerSettings))
// Initialize the role getter
roleGetter := implrole.NewGetter(implrole.NewStore(sqlstore))
// Initialize authz
authzProviderFactory := authzCallback(ctx, sqlstore)
authz, err := authzProviderFactory.New(ctx, providerSettings, authz.Config{})
@@ -294,6 +287,9 @@ func New(
return nil, err
}
// Initialize user getter
userGetter := impluser.NewGetter(impluser.NewStore(sqlstore, providerSettings))
// Initialize notification manager from the available notification manager provider factories
nfManager, err := factory.NewProviderFromNamedMap(
ctx,
@@ -390,10 +386,9 @@ func New(
}
// Initialize all modules
roleSetter := roleSetterCallback(sqlstore, authz, licensing, nil)
granter := implrole.NewGranter(implrole.NewStore(sqlstore), authz)
dashboard := dashboardModuleCallback(sqlstore, providerSettings, analytics, orgGetter, roleSetter, granter, queryParser, querier, licensing)
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, analytics, querier, telemetrystore, telemetryMetadataStore, authNs, authz, cache, queryParser, config, dashboard, roleSetter, roleGetter, granter)
roleModule := implrole.NewModule(implrole.NewStore(sqlstore), authz, nil)
dashboardModule := dashboardModuleCallback(sqlstore, providerSettings, analytics, orgGetter, roleModule, queryParser, querier, licensing)
modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, analytics, querier, telemetrystore, telemetryMetadataStore, authNs, authz, cache, queryParser, config, dashboardModule)
// Initialize all handlers for the modules
handlers := NewHandlers(modules, providerSettings, querier, licensing, global, flagger, gateway)


@@ -1,83 +0,0 @@
package sqlmigration
import (
"context"
"time"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/sqlschema"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/uptrace/bun"
"github.com/uptrace/bun/migrate"
)
type addResetPasswordTokenExpiry struct {
sqlstore sqlstore.SQLStore
sqlschema sqlschema.SQLSchema
}
func NewAddResetPasswordTokenExpiryFactory(sqlstore sqlstore.SQLStore, sqlschema sqlschema.SQLSchema) factory.ProviderFactory[SQLMigration, Config] {
return factory.NewProviderFactory(factory.MustNewName("add_reset_password_token_expiry"), func(ctx context.Context, providerSettings factory.ProviderSettings, config Config) (SQLMigration, error) {
return newAddResetPasswordTokenExpiry(ctx, providerSettings, config, sqlstore, sqlschema)
})
}
func newAddResetPasswordTokenExpiry(_ context.Context, _ factory.ProviderSettings, _ Config, sqlstore sqlstore.SQLStore, sqlschema sqlschema.SQLSchema) (SQLMigration, error) {
return &addResetPasswordTokenExpiry{
sqlstore: sqlstore,
sqlschema: sqlschema,
}, nil
}
func (migration *addResetPasswordTokenExpiry) Register(migrations *migrate.Migrations) error {
if err := migrations.Register(migration.Up, migration.Down); err != nil {
return err
}
return nil
}
func (migration *addResetPasswordTokenExpiry) Up(ctx context.Context, db *bun.DB) error {
tx, err := db.BeginTx(ctx, nil)
if err != nil {
return err
}
defer func() {
_ = tx.Rollback()
}()
// get the reset_password_token table
table, uniqueConstraints, err := migration.sqlschema.GetTable(ctx, sqlschema.TableName("reset_password_token"))
if err != nil {
return err
}
// add a new column `expires_at`
column := &sqlschema.Column{
Name: sqlschema.ColumnName("expires_at"),
DataType: sqlschema.DataTypeTimestamp,
Nullable: true,
}
// for existing rows set
defaultValueForExistingRows := time.Now()
sqls := migration.sqlschema.Operator().AddColumn(table, uniqueConstraints, column, defaultValueForExistingRows)
for _, sql := range sqls {
if _, err := tx.ExecContext(ctx, string(sql)); err != nil {
return err
}
}
if err := tx.Commit(); err != nil {
return err
}
return nil
}
func (migration *addResetPasswordTokenExpiry) Down(ctx context.Context, db *bun.DB) error {
return nil
}


@@ -24,7 +24,7 @@ var (
typeRoleSelectorRegex = regexp.MustCompile(`^[0-9a-f]{8}(?:\-[0-9a-f]{4}){3}-[0-9a-f]{12}$`)
typeAnonymousSelectorRegex = regexp.MustCompile(`^\*$`)
typeOrganizationSelectorRegex = regexp.MustCompile(`^[0-9a-f]{8}(?:\-[0-9a-f]{4}){3}-[0-9a-f]{12}$`)
typeMetaResourceSelectorRegex = regexp.MustCompile(`^(^[0-9a-f]{8}(?:\-[0-9a-f]{4}){3}-[0-9a-f]{12}$|\*)$`)
typeMetaResourceSelectorRegex = regexp.MustCompile(`^[0-9a-f]{8}(?:\-[0-9a-f]{4}){3}-[0-9a-f]{12}$`)
// metaresources selectors are used to select either all or none
typeMetaResourcesSelectorRegex = regexp.MustCompile(`^\*$`)
)


@@ -12,13 +12,12 @@ import (
var (
// Templates is a list of all the templates that are supported by the emailing service.
// This list should be updated whenever a new template is added.
Templates = []TemplateName{TemplateNameInvitationEmail, TemplateNameUpdateRole, TemplateNameResetPassword}
Templates = []TemplateName{TemplateNameInvitationEmail, TemplateNameUpdateRole}
)
var (
TemplateNameInvitationEmail = TemplateName{valuer.NewString("invitation_email")}
TemplateNameUpdateRole = TemplateName{valuer.NewString("update_role")}
TemplateNameResetPassword = TemplateName{valuer.NewString("reset_password_email")}
)
type TemplateName struct{ valuer.String }
@@ -29,8 +28,6 @@ func NewTemplateName(name string) (TemplateName, error) {
return TemplateNameInvitationEmail, nil
case TemplateNameUpdateRole.StringValue():
return TemplateNameUpdateRole, nil
case TemplateNameResetPassword.StringValue():
return TemplateNameResetPassword, nil
default:
return TemplateName{}, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "invalid template name: %s", name)
}


@@ -35,19 +35,12 @@ type ChangePasswordRequest struct {
NewPassword string `json:"newPassword"`
}
type PostableForgotPassword struct {
OrgID valuer.UUID `json:"orgId"`
Email valuer.Email `json:"email"`
FrontendBaseURL string `json:"frontendBaseURL"`
}
type ResetPasswordToken struct {
bun.BaseModel `bun:"table:reset_password_token"`
Identifiable
Token string `bun:"token,type:text,notnull" json:"token"`
PasswordID valuer.UUID `bun:"password_id,type:text,notnull,unique" json:"passwordId"`
ExpiresAt time.Time `bun:"expires_at,type:timestamptz,nullzero" json:"expiresAt"`
}
type FactorPassword struct {
@@ -143,14 +136,13 @@ func NewHashedPassword(password string) (string, error) {
return string(hashedPassword), nil
}
func NewResetPasswordToken(passwordID valuer.UUID, expiresAt time.Time) (*ResetPasswordToken, error) {
func NewResetPasswordToken(passwordID valuer.UUID) (*ResetPasswordToken, error) {
return &ResetPasswordToken{
Identifiable: Identifiable{
ID: valuer.GenerateUUID(),
},
Token: valuer.GenerateUUID().String(),
PasswordID: passwordID,
- ExpiresAt: expiresAt,
}, nil
}
@@ -216,7 +208,3 @@ func (f *FactorPassword) Equals(password string) bool {
func comparePassword(hashedPassword string, password string) bool {
return bcrypt.CompareHashAndPassword([]byte(hashedPassword), []byte(password)) == nil
}
- func (r *ResetPasswordToken) IsExpired() bool {
- return r.ExpiresAt.Before(time.Now())
- }

View File

@@ -553,18 +553,6 @@ func (f Function) Copy() Function {
return c
}
- // Validate validates the name and args for the function
- func (f Function) Validate() error {
- if err := f.Name.Validate(); err != nil {
- return err
- }
- // Validate args for function
- if err := f.ValidateArgs(); err != nil {
- return err
- }
- return nil
- }
type LimitBy struct {
// keys to limit by
Keys []string `json:"keys"`

View File

@@ -73,43 +73,6 @@ func (f *QueryBuilderFormula) UnmarshalJSON(data []byte) error {
return nil
}
- // Validate checks if the QueryBuilderFormula fields are valid
- func (f QueryBuilderFormula) Validate() error {
- // Validate name is not blank
- if strings.TrimSpace(f.Name) == "" {
- return errors.NewInvalidInputf(
- errors.CodeInvalidInput,
- "formula name cannot be blank",
- )
- }
- // Validate expression is not blank
- if strings.TrimSpace(f.Expression) == "" {
- return errors.NewInvalidInputf(
- errors.CodeInvalidInput,
- "formula expression cannot be blank",
- )
- }
- // Validate functions if present
- for i, fn := range f.Functions {
- if err := fn.Validate(); err != nil {
- fnId := fmt.Sprintf("function #%d", i+1)
- if f.Name != "" {
- fnId = fmt.Sprintf("function #%d in formula '%s'", i+1, f.Name)
- }
- return errors.NewInvalidInputf(
- errors.CodeInvalidInput,
- "invalid %s: %s",
- fnId,
- err.Error(),
- )
- }
- }
- return nil
- }
// small container to store the query name and index or alias reference
// for a variable in the formula expression
// read below for more details on aggregation references

View File

@@ -1,13 +1,10 @@
package querybuildertypesv5
import (
"fmt"
"math"
"slices"
"strconv"
"strings"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/valuer"
)
@@ -36,46 +33,6 @@ var (
FunctionNameFillZero = FunctionName{valuer.NewString("fillZero")}
)
- // Validate checks if the FunctionName is valid and one of the known types
- func (fn FunctionName) Validate() error {
- validFunctions := []FunctionName{
- FunctionNameCutOffMin,
- FunctionNameCutOffMax,
- FunctionNameClampMin,
- FunctionNameClampMax,
- FunctionNameAbsolute,
- FunctionNameRunningDiff,
- FunctionNameLog2,
- FunctionNameLog10,
- FunctionNameCumulativeSum,
- FunctionNameEWMA3,
- FunctionNameEWMA5,
- FunctionNameEWMA7,
- FunctionNameMedian3,
- FunctionNameMedian5,
- FunctionNameMedian7,
- FunctionNameTimeShift,
- FunctionNameAnomaly,
- FunctionNameFillZero,
- }
- if slices.Contains(validFunctions, fn) {
- return nil
- }
- // Format valid functions as comma-separated string
- var validFunctionNames []string
- for _, fn := range validFunctions {
- validFunctionNames = append(validFunctionNames, fn.StringValue())
- }
- return errors.NewInvalidInputf(
- errors.CodeInvalidInput,
- "invalid function name: %s",
- fn.StringValue(),
- ).WithAdditional(fmt.Sprintf("valid functions are: %s", strings.Join(validFunctionNames, ", ")))
- }
// ApplyFunction applies the given function to the result data
func ApplyFunction(fn Function, result *TimeSeries) *TimeSeries {
// Extract the function name and arguments
@@ -155,61 +112,6 @@ func ApplyFunction(fn Function, result *TimeSeries) *TimeSeries {
return result
}
- // ValidateArgs validates the arguments for the given function
- func (fn Function) ValidateArgs() error {
- // Extract the function name and arguments
- name := fn.Name
- args := fn.Args
- switch name {
- case FunctionNameCutOffMin, FunctionNameCutOffMax, FunctionNameClampMin, FunctionNameClampMax:
- if len(args) == 0 {
- return errors.NewInvalidInputf(
- errors.CodeInvalidInput,
- "threshold value is required for function %s",
- name.StringValue(),
- )
- }
- _, err := parseFloat64Arg(args[0].Value)
- if err != nil {
- return errors.NewInvalidInputf(
- errors.CodeInvalidInput,
- "threshold value must be a floating value for function %s",
- name.StringValue(),
- )
- }
- case FunctionNameEWMA3, FunctionNameEWMA5, FunctionNameEWMA7:
- if len(args) == 0 {
- return nil // alpha is optional for EWMA functions
- }
- _, err := parseFloat64Arg(args[0].Value)
- if err != nil {
- return errors.NewInvalidInputf(
- errors.CodeInvalidInput,
- "alpha value must be a floating value for function %s",
- name.StringValue(),
- )
- }
- case FunctionNameTimeShift:
- if len(args) == 0 {
- return errors.NewInvalidInputf(
- errors.CodeInvalidInput,
- "time shift value is required for function %s",
- name.StringValue(),
- )
- }
- _, err := parseFloat64Arg(args[0].Value)
- if err != nil {
- return errors.NewInvalidInputf(
- errors.CodeInvalidInput,
- "time shift value must be a floating value for function %s",
- name.StringValue(),
- )
- }
- }
- return nil
- }
// parseFloat64Arg parses an argument to float64
func parseFloat64Arg(value any) (float64, error) {
switch v := value.(type) {
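The helper that stays behind in this file, `parseFloat64Arg`, coerces a JSON-decoded `any` into a `float64` via the type switch shown above. A runnable approximation (the exact case list in the repo may differ):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseFloat64Arg parses an argument to float64. Function args arrive as
// `any` after JSON decoding, so numbers may surface as float64, int, or
// string depending on the producer.
func parseFloat64Arg(value any) (float64, error) {
	switch v := value.(type) {
	case float64:
		return v, nil
	case int:
		return float64(v), nil
	case string:
		return strconv.ParseFloat(v, 64)
	default:
		return 0, fmt.Errorf("cannot parse %T as float64", value)
	}
}

func main() {
	// Each representation should parse to the same numeric kind.
	for _, arg := range []any{0.25, 3, "42.5"} {
		f, err := parseFloat64Arg(arg)
		if err != nil {
			panic(err)
		}
		fmt.Println(f)
	}
}
```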

View File

@@ -65,6 +65,46 @@ const (
MaxQueryLimit = 10000
)
+ // ValidateFunctionName checks if the function name is valid
+ func ValidateFunctionName(name FunctionName) error {
+ validFunctions := []FunctionName{
+ FunctionNameCutOffMin,
+ FunctionNameCutOffMax,
+ FunctionNameClampMin,
+ FunctionNameClampMax,
+ FunctionNameAbsolute,
+ FunctionNameRunningDiff,
+ FunctionNameLog2,
+ FunctionNameLog10,
+ FunctionNameCumulativeSum,
+ FunctionNameEWMA3,
+ FunctionNameEWMA5,
+ FunctionNameEWMA7,
+ FunctionNameMedian3,
+ FunctionNameMedian5,
+ FunctionNameMedian7,
+ FunctionNameTimeShift,
+ FunctionNameAnomaly,
+ FunctionNameFillZero,
+ }
+ if slices.Contains(validFunctions, name) {
+ return nil
+ }
+ // Format valid functions as comma-separated string
+ var validFunctionNames []string
+ for _, fn := range validFunctions {
+ validFunctionNames = append(validFunctionNames, fn.StringValue())
+ }
+ return errors.NewInvalidInputf(
+ errors.CodeInvalidInput,
+ "invalid function name: %s",
+ name.StringValue(),
+ ).WithAdditional(fmt.Sprintf("valid functions are: %s", strings.Join(validFunctionNames, ", ")))
+ }
// Validate performs preliminary validation on QueryBuilderQuery
func (q *QueryBuilderQuery[T]) Validate(requestType RequestType) error {
// Validate signal
@@ -271,7 +311,7 @@ func (q *QueryBuilderQuery[T]) validateLimitAndPagination() error {
func (q *QueryBuilderQuery[T]) validateFunctions() error {
for i, fn := range q.Functions {
- if err := fn.Validate(); err != nil {
+ if err := ValidateFunctionName(fn.Name); err != nil {
fnId := fmt.Sprintf("function #%d", i+1)
if q.Name != "" {
fnId = fmt.Sprintf("function #%d in query '%s'", i+1, q.Name)
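The relocated validator boils down to a `slices.Contains` membership check. A standalone approximation (plain strings replace the valuer-backed `FunctionName`; only `fillZero` is confirmed by this diff, the other literal names are assumed from the constructor identifiers):

```go
package main

import (
	"fmt"
	"slices"
	"strings"
)

// FunctionName is a plain-string stand-in for the valuer-backed type.
type FunctionName string

// validFunctions lists the known function names (assumed spellings).
var validFunctions = []FunctionName{
	"cutOffMin", "cutOffMax", "clampMin", "clampMax", "absolute",
	"runningDiff", "log2", "log10", "cumulativeSum",
	"ewma3", "ewma5", "ewma7", "median3", "median5", "median7",
	"timeShift", "anomaly", "fillZero",
}

// ValidateFunctionName rejects unknown names, listing the valid ones.
func ValidateFunctionName(name FunctionName) error {
	if slices.Contains(validFunctions, name) {
		return nil
	}
	names := make([]string, 0, len(validFunctions))
	for _, fn := range validFunctions {
		names = append(names, string(fn))
	}
	return fmt.Errorf("invalid function name: %s (valid functions are: %s)",
		name, strings.Join(names, ", "))
}

func main() {
	if err := ValidateFunctionName("fillZero"); err != nil {
		panic(err)
	}
	if err := ValidateFunctionName("bogus"); err == nil {
		panic("expected an error for an unknown function")
	}
	fmt.Println("ok")
}
```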

View File

@@ -20,7 +20,6 @@ var (
ErrCodeInvalidTypeRelation = errors.MustNewCode("role_invalid_type_relation")
ErrCodeRoleNotFound = errors.MustNewCode("role_not_found")
ErrCodeRoleFailedTransactionsFromString = errors.MustNewCode("role_failed_transactions_from_string")
- ErrCodeRoleUnsupported = errors.MustNewCode("role_unsupported")
)
var (
@@ -33,22 +32,8 @@ var (
)
var (
- SigNozAnonymousRoleName = "signoz-anonymous"
- SigNozAnonymousRoleDescription = "Role assigned to anonymous users for access to public resources."
- SigNozAdminRoleName = "signoz-admin"
- SigNozAdminRoleDescription = "Role assigned to users who have full administrative access to SigNoz resources."
- SigNozEditorRoleName = "signoz-editor"
- SigNozEditorRoleDescription = "Role assigned to users who can create, edit, and manage SigNoz resources but do not have full administrative privileges."
- SigNozViewerRoleName = "signoz-viewer"
- SigNozViewerRoleDescription = "Role assigned to users who have read-only access to SigNoz resources."
- )
- var (
- ExistingRoleToSigNozManagedRoleMap = map[types.Role]string{
- types.RoleAdmin: SigNozAdminRoleName,
- types.RoleEditor: SigNozEditorRoleName,
- types.RoleViewer: SigNozViewerRoleName,
- }
+ AnonymousUserRoleName = "signoz-anonymous"
+ AnonymousUserRoleDescription = "Role assigned to anonymous users for access to public resources."
)
var (
@@ -69,10 +54,10 @@ type StorableRole struct {
type Role struct {
types.Identifiable
types.TimeAuditable
- Name string `json:"name"`
- Description string `json:"description"`
- Type valuer.String `json:"type"`
- OrgID valuer.UUID `json:"orgId"`
+ Name string `json:"name"`
+ Description string `json:"description"`
+ Type string `json:"type"`
+ OrgID valuer.UUID `json:"org_id"`
}
type PostableRole struct {
@@ -96,7 +81,7 @@ func NewStorableRoleFromRole(role *Role) *StorableRole {
TimeAuditable: role.TimeAuditable,
Name: role.Name,
Description: role.Description,
- Type: role.Type.String(),
+ Type: role.Type,
OrgID: role.OrgID.StringValue(),
}
}
@@ -107,12 +92,12 @@ func NewRoleFromStorableRole(storableRole *StorableRole) *Role {
TimeAuditable: storableRole.TimeAuditable,
Name: storableRole.Name,
Description: storableRole.Description,
- Type: valuer.NewString(storableRole.Type),
+ Type: storableRole.Type,
OrgID: valuer.MustNewUUID(storableRole.OrgID),
}
}
- func NewRole(name, description string, roleType valuer.String, orgID valuer.UUID) *Role {
+ func NewRole(name, description string, roleType string, orgID valuer.UUID) *Role {
return &Role{
Identifiable: types.Identifiable{
ID: valuer.GenerateUUID(),
@@ -128,38 +113,7 @@ func NewRole(name, description string, roleType valuer.String, orgID valuer.UUID
}
}
- func NewManagedRoles(orgID valuer.UUID) []*Role {
- return []*Role{
- NewRole(SigNozAdminRoleName, SigNozAdminRoleDescription, RoleTypeManaged, orgID),
- NewRole(SigNozEditorRoleName, SigNozEditorRoleDescription, RoleTypeManaged, orgID),
- NewRole(SigNozViewerRoleName, SigNozViewerRoleDescription, RoleTypeManaged, orgID),
- NewRole(SigNozAnonymousRoleName, SigNozAnonymousRoleDescription, RoleTypeManaged, orgID),
- }
- }
- func (role *Role) PatchMetadata(name, description *string) error {
- err := role.CanEditDelete()
- if err != nil {
- return err
- }
- if name != nil {
- role.Name = *name
- }
- if description != nil {
- role.Description = *description
- }
- role.UpdatedAt = time.Now()
- return nil
- }
- func (role *Role) NewPatchableObjects(additions []*authtypes.Object, deletions []*authtypes.Object, relation authtypes.Relation) (*PatchableObjects, error) {
- err := role.CanEditDelete()
- if err != nil {
- return nil, err
- }
+ func NewPatchableObjects(additions []*authtypes.Object, deletions []*authtypes.Object, relation authtypes.Relation) (*PatchableObjects, error) {
if len(additions) == 0 && len(deletions) == 0 {
return nil, errors.New(errors.TypeInvalidInput, ErrCodeRoleEmptyPatch, "empty object patch request received, at least one of additions or deletions must be present")
}
@@ -179,12 +133,14 @@ func (role *Role) NewPatchableObjects(additions []*authtypes.Object, deletions [
return &PatchableObjects{Additions: additions, Deletions: deletions}, nil
}
- func (role *Role) CanEditDelete() error {
- if role.Type == RoleTypeManaged {
- return errors.Newf(errors.TypeInvalidInput, ErrCodeRoleInvalidInput, "cannot edit/delete managed role: %s", role.Name)
+ func (role *Role) PatchMetadata(name, description *string) {
+ if name != nil {
+ role.Name = *name
}
- return nil
+ if description != nil {
+ role.Description = *description
+ }
+ role.UpdatedAt = time.Now()
}
func (role *PostableRole) UnmarshalJSON(data []byte) error {
@@ -290,12 +246,3 @@ func GetDeletionTuples(id valuer.UUID, orgID valuer.UUID, relation authtypes.Rel
return tuples, nil
}
- func MustGetSigNozManagedRoleFromExistingRole(role types.Role) string {
- managedRole, ok := ExistingRoleToSigNozManagedRoleMap[role]
- if !ok {
- panic(errors.Newf(errors.TypeInternal, errors.CodeInternal, "invalid role: %s", role.String()))
- }
- return managedRole
- }
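After this change `Role.Type` is a plain string and `PatchMetadata` mutates unconditionally (the `CanEditDelete` managed-role guard is gone). A minimal sketch of the simplified model, with string IDs standing in for the repo's `valuer` types and the `Identifiable`/`TimeAuditable` embedding flattened:

```go
package main

import (
	"fmt"
	"time"
)

// Role after the change: Type is a plain string, not valuer.String.
type Role struct {
	ID          string
	UpdatedAt   time.Time
	Name        string
	Description string
	Type        string
	OrgID       string
}

// PatchMetadata applies partial updates; nil fields are left untouched.
// No CanEditDelete check remains, so managed roles are patchable too.
func (role *Role) PatchMetadata(name, description *string) {
	if name != nil {
		role.Name = *name
	}
	if description != nil {
		role.Description = *description
	}
	role.UpdatedAt = time.Now()
}

func main() {
	r := &Role{Name: "signoz-anonymous", Type: "managed"}
	desc := "Role assigned to anonymous users for access to public resources."
	r.PatchMetadata(nil, &desc)
	fmt.Println(r.Name, "|", r.Description)
}
```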

View File

@@ -9,9 +9,8 @@ import (
type Store interface {
Create(context.Context, *StorableRole) error
Get(context.Context, valuer.UUID, valuer.UUID) (*StorableRole, error)
- GetByOrgIDAndName(context.Context, valuer.UUID, string) (*StorableRole, error)
+ GetByNameAndOrgID(context.Context, string, valuer.UUID) (*StorableRole, error)
List(context.Context, valuer.UUID) ([]*StorableRole, error)
- ListByOrgIDAndNames(context.Context, valuer.UUID, []string) ([]*StorableRole, error)
Update(context.Context, valuer.UUID, *StorableRole) error
Delete(context.Context, valuer.UUID, valuer.UUID) error
RunInTx(context.Context, func(ctx context.Context) error) error

View File

@@ -143,7 +143,6 @@ type UserStore interface {
GetPasswordByUserID(ctx context.Context, userID valuer.UUID) (*FactorPassword, error)
GetResetPasswordToken(ctx context.Context, token string) (*ResetPasswordToken, error)
GetResetPasswordTokenByPasswordID(ctx context.Context, passwordID valuer.UUID) (*ResetPasswordToken, error)
- DeleteResetPasswordTokenByPasswordID(ctx context.Context, passwordID valuer.UUID) error
UpdatePassword(ctx context.Context, password *FactorPassword) error
// API KEY

View File

@@ -1,13 +0,0 @@
<!DOCTYPE html>
<html>
<body>
<p>Hello {{.Name}},</p>
<p>You requested a password reset for your SigNoz account.</p>
<p>Click the link below to reset your password:</p>
<a href="{{.Link}}">Reset Password</a>
<p>This link will expire in {{.Expiry}}.</p>
<p>If you didn't request this, please ignore this email. Your password will remain unchanged.</p>
<br>
<p>Best regards,<br>The SigNoz Team</p>
</body>
</html>

View File

@@ -20,10 +20,6 @@ USER_ADMIN_NAME = "admin"
USER_ADMIN_EMAIL = "admin@integration.test"
USER_ADMIN_PASSWORD = "password123Z$"
- USER_EDITOR_NAME = 'editor'
- USER_EDITOR_EMAIL = 'editor@integration.test'
- USER_EDITOR_PASSWORD = 'password123Z$'
@pytest.fixture(name="create_user_admin", scope="package")
def create_user_admin(

View File

@@ -67,8 +67,6 @@ def signoz( # pylint: disable=too-many-arguments,too-many-positional-arguments
"SIGNOZ_GATEWAY_URL": gateway.container_configs["8080"].base(),
"SIGNOZ_TOKENIZER_JWT_SECRET": "secret",
"SIGNOZ_GLOBAL_INGESTION__URL": "https://ingest.test.signoz.cloud",
- "SIGNOZ_USER_PASSWORD_RESET_ALLOW__SELF": True,
- "SIGNOZ_USER_PASSWORD_RESET_MAX__TOKEN__LIFETIME": "6h",
}
| sqlstore.env
| clickhouse.env

View File

@@ -7,8 +7,6 @@ from sqlalchemy import sql
from fixtures import types
from fixtures.logger import setup_logger
- from datetime import datetime, timedelta, timezone
logger = setup_logger(__name__)
@@ -242,261 +240,3 @@ def test_reset_password_with_no_password(
token = get_token("admin+password@integration.test", "FINALPASSword123!#[")
assert token is not None
def test_forgot_password_returns_204_for_nonexistent_email(
signoz: types.SigNoz,
) -> None:
"""
Test that forgotPassword returns 204 even for non-existent emails
(for security reasons - doesn't reveal if user exists).
"""
# Get org ID first (needed for the forgot password request)
response = requests.get(
signoz.self.host_configs["8080"].get("/api/v2/sessions/context"),
params={
"email": "admin@integration.test",
"ref": f"{signoz.self.host_configs['8080'].base()}",
},
timeout=5,
)
assert response.status_code == HTTPStatus.OK
org_id = response.json()["data"]["orgs"][0]["id"]
# Call forgot password with a non-existent email
response = requests.post(
signoz.self.host_configs["8080"].get("/api/v2/factor_password/forgot"),
json={
"email": "nonexistent@integration.test",
"orgId": org_id,
"frontendBaseURL": signoz.self.host_configs["8080"].base(),
},
timeout=5,
)
# Should return 204 even for non-existent email (security)
assert response.status_code == HTTPStatus.NO_CONTENT
def test_forgot_password_creates_reset_token(
signoz: types.SigNoz, get_token: Callable[[str, str], str]
) -> None:
"""
Test the full forgot password flow:
1. Call forgotPassword endpoint for existing user
2. Verify reset password token is created in database
3. Use the token to reset password
4. Verify user can login with new password
"""
admin_token = get_token("admin@integration.test", "password123Z$")
# Create a user specifically for testing forgot password
response = requests.post(
signoz.self.host_configs["8080"].get("/api/v1/invite"),
json={"email": "forgot@integration.test", "role": "EDITOR", "name": "forgotpassword user"},
timeout=2,
headers={"Authorization": f"Bearer {admin_token}"},
)
assert response.status_code == HTTPStatus.CREATED
# Get the invite token
response = requests.get(
signoz.self.host_configs["8080"].get("/api/v1/invite"),
timeout=2,
headers={"Authorization": f"Bearer {admin_token}"},
)
invite_response = response.json()["data"]
found_invite = next(
(
invite
for invite in invite_response
if invite["email"] == "forgot@integration.test"
),
None,
)
# Accept the invite to create the user
response = requests.post(
signoz.self.host_configs["8080"].get("/api/v1/invite/accept"),
json={
"password": "originalPassword123Z$",
"displayName": "forgotpassword user",
"token": f"{found_invite['token']}",
},
timeout=2,
)
assert response.status_code == HTTPStatus.CREATED
# Get org ID
response = requests.get(
signoz.self.host_configs["8080"].get("/api/v2/sessions/context"),
params={
"email": "forgot@integration.test",
"ref": f"{signoz.self.host_configs['8080'].base()}",
},
timeout=5,
)
assert response.status_code == HTTPStatus.OK
org_id = response.json()["data"]["orgs"][0]["id"]
# Call forgot password endpoint
response = requests.post(
signoz.self.host_configs["8080"].get("/api/v2/factor_password/forgot"),
json={
"email": "forgot@integration.test",
"orgId": org_id,
"frontendBaseURL": signoz.self.host_configs["8080"].base(),
},
timeout=5,
)
assert response.status_code == HTTPStatus.NO_CONTENT
# Verify reset password token was created by querying the database
# First, get the user ID
response = requests.get(
signoz.self.host_configs["8080"].get("/api/v1/user"),
timeout=2,
headers={"Authorization": f"Bearer {admin_token}"},
)
assert response.status_code == HTTPStatus.OK
user_response = response.json()["data"]
found_user = next(
(
user
for user in user_response
if user["email"] == "forgot@integration.test"
),
None,
)
assert found_user is not None
reset_token = None
# Query the database directly to get the reset password token
# First get the password_id from factor_password, then get the token
with signoz.sqlstore.conn.connect() as conn:
result = conn.execute(
sql.text("""
SELECT rpt.token
FROM reset_password_token rpt
JOIN factor_password fp ON rpt.password_id = fp.id
WHERE fp.user_id = :user_id
"""),
{"user_id": found_user["id"]},
)
row = result.fetchone()
assert row is not None, "Reset password token should exist after calling forgotPassword"
reset_token = row[0]
assert reset_token is not None
assert reset_token != ""
# Reset password with a valid strong password
response = requests.post(
signoz.self.host_configs["8080"].get("/api/v1/resetPassword"),
json={"password": "newSecurePassword123Z$!", "token": reset_token},
timeout=2,
)
assert response.status_code == HTTPStatus.NO_CONTENT
# Verify user can login with the new password
user_token = get_token("forgot@integration.test", "newSecurePassword123Z$!")
assert user_token is not None
# Verify old password no longer works
try:
get_token("forgot@integration.test", "originalPassword123Z$")
assert False, "Old password should not work after reset"
except AssertionError:
pass # Expected - old password should fail
def test_reset_password_with_expired_token(
signoz: types.SigNoz, get_token: Callable[[str, str], str]
) -> None:
"""
Test that resetting password with an expired token fails.
"""
admin_token = get_token("admin@integration.test", "password123Z$")
# Get user ID for the forgot@integration.test user
response = requests.get(
signoz.self.host_configs["8080"].get("/api/v1/user"),
timeout=2,
headers={"Authorization": f"Bearer {admin_token}"},
)
assert response.status_code == HTTPStatus.OK
user_response = response.json()["data"]
found_user = next(
(
user
for user in user_response
if user["email"] == "forgot@integration.test"
),
None,
)
assert found_user is not None
# Get org ID
response = requests.get(
signoz.self.host_configs["8080"].get("/api/v2/sessions/context"),
params={
"email": "forgot@integration.test",
"ref": f"{signoz.self.host_configs['8080'].base()}",
},
timeout=5,
)
assert response.status_code == HTTPStatus.OK
org_id = response.json()["data"]["orgs"][0]["id"]
# Call forgot password to generate a new token
response = requests.post(
signoz.self.host_configs["8080"].get("/api/v2/factor_password/forgot"),
json={
"email": "forgot@integration.test",
"orgId": org_id,
"frontendBaseURL": signoz.self.host_configs["8080"].base(),
},
timeout=5,
)
assert response.status_code == HTTPStatus.NO_CONTENT
# Query the database to get the token and then expire it
reset_token = None
with signoz.sqlstore.conn.connect() as conn:
# First get the token
result = conn.execute(
sql.text("""
SELECT rpt.token, rpt.id
FROM reset_password_token rpt
JOIN factor_password fp ON rpt.password_id = fp.id
WHERE fp.user_id = :user_id
"""),
{"user_id": found_user["id"]},
)
row = result.fetchone()
assert row is not None, "Reset password token should exist"
reset_token = row[0]
token_id = row[1]
# Now expire the token by setting expires_at to a past time
conn.execute(
sql.text("""
UPDATE reset_password_token
SET expires_at = :expired_time
WHERE id = :token_id
"""),
{
"expired_time": "2020-01-01 00:00:00",
"token_id": token_id,
},
)
conn.commit()
assert reset_token is not None
# Try to use the expired token - should fail with 401 Unauthorized
response = requests.post(
signoz.self.host_configs["8080"].get("/api/v1/resetPassword"),
json={"password": "expiredTokenPassword123Z$!", "token": reset_token},
timeout=2,
)
assert response.status_code == HTTPStatus.UNAUTHORIZED