Compare commits


1 Commit

Author: Shivanshu Raj Shrivastava
SHA1: fc1483c56a
Message: chore: testing
Signed-off-by: Shivanshu Raj Shrivastava <shivanshu1333@gmail.com>
Date: 2025-04-29 00:02:02 +05:30
3352 changed files with 101089 additions and 458595 deletions

View File

@@ -1,136 +0,0 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
SigNoz is an open-source observability platform (APM, logs, metrics, traces) built on OpenTelemetry and ClickHouse. It provides a unified solution for monitoring applications with features including distributed tracing, log management, metrics dashboards, and alerting.
## Build and Development Commands
### Development Environment Setup
```bash
make devenv-up # Start ClickHouse and OTel Collector for local dev
make devenv-clickhouse # Start only ClickHouse
make devenv-signoz-otel-collector # Start only OTel Collector
make devenv-clickhouse-clean # Clean ClickHouse data
```
### Backend (Go)
```bash
make go-run-community # Run community backend server
make go-run-enterprise # Run enterprise backend server
make go-test # Run all Go unit tests
go test -race ./pkg/... # Run tests for specific package
go test -race ./pkg/querier/... # Example: run querier tests
```
### Integration Tests (Python)
```bash
cd tests/integration
uv sync # Install dependencies
make py-test-setup # Start test environment (keep running with --reuse)
make py-test # Run all integration tests
make py-test-teardown # Stop test environment
# Run specific test
uv run pytest --basetemp=./tmp/ -vv --reuse src/<suite>/<file>.py::test_name
```
### Code Quality
```bash
# Go linting (golangci-lint)
golangci-lint run
# Python formatting/linting
make py-fmt # Format with black
make py-lint # Run isort, autoflake, pylint
```
### OpenAPI Generation
```bash
go run cmd/enterprise/*.go generate openapi
```
## Architecture Overview
### Backend Structure
The Go backend follows a **provider pattern** for dependency injection:
- **`pkg/signoz/`** - IoC container that wires all providers together
- **`pkg/modules/`** - Business logic modules (user, organization, dashboard, etc.)
- **`pkg/<provider>/`** - Provider implementations following a consistent structure (see the example tree below):
  - `<name>.go` - Interface definition
  - `config.go` - Configuration (implements `factory.Config`)
  - `<implname><name>/provider.go` - Implementation
  - `<name>test/` - Mock implementations for testing
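For a hypothetical provider named `cache` with an in-memory implementation, this layout looks like:
```
pkg/cache/
├── cache.go              # interface definition
├── config.go             # configuration (implements factory.Config)
├── memorycache/
│   └── provider.go       # in-memory implementation
└── cachetest/
    └── mock.go           # mock implementation for testing
```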
### Key Packages
- **`pkg/querier/`** - Query engine for telemetry data (logs, traces, metrics)
- **`pkg/telemetrystore/`** - ClickHouse telemetry storage interface
- **`pkg/sqlstore/`** - Relational database (SQLite/PostgreSQL) for metadata
- **`pkg/apiserver/`** - HTTP API server with OpenAPI integration
- **`pkg/alertmanager/`** - Alert management
- **`pkg/authn/`, `pkg/authz/`** - Authentication and authorization
- **`pkg/flagger/`** - Feature flags (OpenFeature-based)
- **`pkg/errors/`** - Structured error handling
### Enterprise vs Community
- **`cmd/community/`** - Community edition entry point
- **`cmd/enterprise/`** - Enterprise edition entry point
- **`ee/`** - Enterprise-only features
## Code Conventions
### Error Handling
Use the custom `pkg/errors` package instead of the standard library:
```go
errors.New(typ, code, message) // Instead of errors.New()
errors.Newf(typ, code, message, args...) // Instead of fmt.Errorf()
errors.Wrapf(err, typ, code, msg) // Wrap with context
```
Define domain-specific error codes:
```go
var CodeThingNotFound = errors.MustNewCode("thing_not_found")
```
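For illustration, a module function might combine these - a sketch only; the `errors.TypeNotFound` constant and the `Dashboard` model are assumptions for illustration, not taken from the codebase:
```go
var CodeDashboardNotFound = errors.MustNewCode("dashboard_not_found")

func (m *module) Get(ctx context.Context, id string) (*Dashboard, error) {
	dashboard, err := m.store.Get(ctx, id)
	if err != nil {
		// attach a domain-specific type and code while keeping the cause
		return nil, errors.Wrapf(err, errors.TypeNotFound, CodeDashboardNotFound, "dashboard %s does not exist", id)
	}
	return dashboard, nil
}
```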
### HTTP Handlers
Handlers are thin adapters in modules that:
1. Extract auth context from request
2. Decode request body using `binding` package
3. Call module functions
4. Return responses using `render` package
Register routes in `pkg/apiserver/signozapiserver/` with `handler.New()` and `OpenAPIDef`.
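A minimal sketch of that shape - every helper here (`authtypes.ClaimsFromContext`, the `binding` and `render` calls) is illustrative and should be checked against the actual packages:
```go
// Sketch of a thin handler adapter. Helper names are assumptions for
// illustration, not verified APIs.
func (h *handler) Create(rw http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	// 1. extract auth context from the request (helper name assumed)
	claims, err := authtypes.ClaimsFromContext(ctx)
	if err != nil {
		render.Error(rw, err)
		return
	}

	// 2. decode the request body via the binding package (call shape assumed)
	var req CreateThingRequest
	if err := binding.JSON.BindBody(r.Body, &req); err != nil {
		render.Error(rw, err)
		return
	}

	// 3. delegate business logic to the module
	thing, err := h.module.Create(ctx, claims.OrgID, req)
	if err != nil {
		render.Error(rw, err)
		return
	}

	// 4. write the response via the render package (call shape assumed)
	render.Success(rw, http.StatusCreated, thing)
}
```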
### SQL/Database
- Use Bun ORM via `sqlstore.BunDBCtx(ctx)` (see the sketch after this list)
- Star schema with `organizations` as the central entity
- All tables have `id`, `created_at`, `updated_at`, `org_id` columns
- Write idempotent migrations in `pkg/sqlmigration/`
- No `ON DELETE CASCADE` constraints - handle cascading deletes in application logic
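A minimal sketch of a store method following these conventions - the `Thing` model and store struct are hypothetical; the chained calls are standard Bun query-builder API:
```go
// Sketch of a store method; "Thing" and the store struct are illustrative.
func (s *store) ListByOrg(ctx context.Context, orgID string) ([]*Thing, error) {
	things := make([]*Thing, 0)
	err := s.sqlstore.BunDBCtx(ctx).
		NewSelect().
		Model(&things).
		Where("org_id = ?", orgID). // every table carries org_id
		Scan(ctx)
	if err != nil {
		return nil, err
	}
	return things, nil
}
```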
### REST Endpoints
- Use plural resource names: `/v1/organizations`, `/v1/users`
- Use `me` for current user/org: `/v1/organizations/me/users`
- Follow RESTful conventions for CRUD operations
### Linting Rules (from .golangci.yml)
- Don't use `errors` package - use `pkg/errors`
- Don't use `zap` logger - use `slog`
- Don't use `fmt.Errorf` or `fmt.Print*`
## Testing
### Unit Tests
- Run with race detector: `go test -race ./...`
- Provider mocks are in `<provider>test/` packages
### Integration Tests
- Located in `tests/integration/`
- Use pytest with testcontainers
- Files prefixed with numbers for execution order (e.g., `01_database.py`)
- Always use `--reuse` flag during development
- Fixtures in `tests/integration/fixtures/`

View File

@@ -1,36 +0,0 @@
---
name: commit
description: Create a conventional commit with staged changes
disable-model-invocation: true
allowed-tools: Bash(git:*)
---
# Create Conventional Commit
Commit staged changes using conventional commit format: `type(scope): description`
## Types
- `feat:` - New feature
- `fix:` - Bug fix
- `chore:` - Maintenance/refactor/tooling
- `test:` - Tests only
- `docs:` - Documentation
## Process
1. Review staged changes: `git diff --cached`
2. Determine type, optional scope, and description (imperative, <70 chars)
3. Commit using HEREDOC:
```bash
git commit -m "$(cat <<'EOF'
type(scope): description
EOF
)"
```
4. Verify: `git log -1`
## Notes
- Description: imperative mood, lowercase, no period
- Body: explain WHY, not WHAT (code shows what)

View File

@@ -1,55 +0,0 @@
---
name: raise-pr
description: Create a pull request with auto-filled template. Pass 'commit' to commit staged changes first.
disable-model-invocation: true
allowed-tools: Bash(gh:*, git:*), Read
argument-hint: [commit?]
---
# Raise Pull Request
Create a PR with auto-filled template from commits after origin/main.
## Arguments
- No argument: Create PR with existing commits
- `commit`: Commit staged changes first, then create PR
## Process
1. **If `$ARGUMENTS` is "commit"**: Review staged changes and commit with descriptive message
- Check for staged changes: `git diff --cached --stat`
- If changes exist:
- Review the changes: `git diff --cached`
- Create a short and clear commit message based on the changes
- Commit command: `git commit -m "message"`
2. **Analyze commits since origin/main**:
- `git log origin/main..HEAD --pretty=format:"%s%n%b"` - get commit messages
- `git diff origin/main...HEAD --stat` - see changes
3. **Read template**: `.github/pull_request_template.md`
4. **Generate PR**:
- **Title**: Short (<70 chars), from commit messages or main change
- **Body**: Fill template sections based on commits/changes:
- Summary (why/what/approach) - end with `Closes #<issue_number>` if an issue number can be derived from the branch name (`git branch --show-current`)
- Change Type checkboxes
- Bug Context (if applicable)
- Testing Strategy
- Risk Assessment
- Changelog (if user-facing)
- Checklist
5. **Create PR**:
```bash
git push -u origin $(git branch --show-current)
gh pr create --base main --title "..." --body "..."
gh pr view
```
## Notes
- Analyze ALL commit messages from origin/main to HEAD
- Fill template sections based on code analysis
- Leave PR template sections as they are if you can't determine their content

View File

@@ -1,6 +1,7 @@
services:
clickhouse:
image: clickhouse/clickhouse-server:25.5.6
image: clickhouse/clickhouse-server:24.1.2-alpine
container_name: clickhouse
volumes:
- ${PWD}/fs/etc/clickhouse-server/config.d/config.xml:/etc/clickhouse-server/config.d/config.xml
@@ -23,10 +24,9 @@ services:
retries: 3
depends_on:
- zookeeper
environment:
- CLICKHOUSE_SKIP_USER_SETUP=1
zookeeper:
image: signoz/zookeeper:3.7.1
image: bitnami/zookeeper:3.7.1
container_name: zookeeper
volumes:
- ${PWD}/fs/tmp/zookeeper:/bitnami/zookeeper
@@ -41,8 +41,9 @@ services:
interval: 30s
timeout: 5s
retries: 3
schema-migrator-sync:
image: signoz/signoz-schema-migrator:v0.129.12
image: signoz/signoz-schema-migrator:0.111.29
container_name: schema-migrator-sync
command:
- sync
@@ -54,8 +55,9 @@ services:
clickhouse:
condition: service_healthy
restart: on-failure
schema-migrator-async:
image: signoz/signoz-schema-migrator:v0.129.12
image: signoz/signoz-schema-migrator:0.111.29
container_name: schema-migrator-async
command:
- async

View File

@@ -1,29 +0,0 @@
services:
signoz-otel-collector:
image: signoz/signoz-otel-collector:v0.129.6
container_name: signoz-otel-collector-dev
command:
- --config=/etc/otel-collector-config.yaml
- --feature-gates=-pkg.translator.prometheus.NormalizeName
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
- LOW_CARDINAL_EXCEPTION_GROUPING=false
ports:
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
- "13133:13133" # health check extension
healthcheck:
test:
- CMD
- wget
- --spider
- -q
- localhost:13133
interval: 30s
timeout: 5s
retries: 3
restart: unless-stopped
extra_hosts:
- "host.docker.internal:host-gateway"

View File

@@ -1,96 +0,0 @@
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
prometheus:
config:
global:
scrape_interval: 60s
scrape_configs:
- job_name: otel-collector
static_configs:
- targets:
- localhost:8888
labels:
job_name: otel-collector
processors:
batch:
send_batch_size: 10000
send_batch_max_size: 11000
timeout: 10s
resourcedetection:
# Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
detectors: [env, system]
timeout: 2s
signozspanmetrics/delta:
metrics_exporter: signozclickhousemetrics
metrics_flush_interval: 60s
latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
dimensions_cache_size: 100000
aggregation_temporality: AGGREGATION_TEMPORALITY_DELTA
enable_exp_histogram: true
dimensions:
- name: service.namespace
default: default
- name: deployment.environment
default: default
# This is added to ensure the uniqueness of the timeseries
# Otherwise, identical timeseries produced by multiple replicas of
# collectors result in incorrect APM metrics
- name: signoz.collector.id
- name: service.version
- name: browser.platform
- name: browser.mobile
- name: k8s.cluster.name
- name: k8s.node.name
- name: k8s.namespace.name
- name: host.name
- name: host.type
- name: container.name
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: 0.0.0.0:1777
exporters:
clickhousetraces:
datasource: tcp://host.docker.internal:9000/signoz_traces
low_cardinal_exception_grouping: ${env:LOW_CARDINAL_EXCEPTION_GROUPING}
use_new_schema: true
signozclickhousemetrics:
dsn: tcp://host.docker.internal:9000/signoz_metrics
clickhouselogsexporter:
dsn: tcp://host.docker.internal:9000/signoz_logs
timeout: 10s
use_new_schema: true
service:
telemetry:
logs:
encoding: json
extensions:
- health_check
- pprof
pipelines:
traces:
receivers: [otlp]
processors: [signozspanmetrics/delta, batch]
exporters: [clickhousetraces]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [signozclickhousemetrics]
metrics/prometheus:
receivers: [prometheus]
processors: [batch]
exporters: [signozclickhousemetrics]
logs:
receivers: [otlp]
processors: [batch]
exporters: [clickhouselogsexporter]

.github/CODEOWNERS vendored (134 lines changed)
View File

@@ -1,133 +1,13 @@
# CODEOWNERS info: https://help.github.com/en/articles/about-code-owners
# Owners are automatically requested for review for PRs that change code
# that they own.
/frontend/ @SigNoz/frontend-maintainers
# Onboarding
/frontend/src/container/OnboardingV2Container/onboarding-configs/onboarding-config-with-links.json @makeavish
/frontend/src/container/OnboardingV2Container/AddDataSource/AddDataSource.tsx @makeavish
/frontend/ @YounixM
/frontend/src/container/MetricsApplication @srikanthccv
/frontend/src/container/NewWidget/RightContainer/types.ts @srikanthccv
/deploy/ @SigNoz/devops
.github @SigNoz/devops
# Scaffold Owners
/pkg/config/ @vikrantgupta25
/pkg/errors/ @vikrantgupta25
/pkg/factory/ @vikrantgupta25
/pkg/types/ @vikrantgupta25
/pkg/valuer/ @vikrantgupta25
/cmd/ @vikrantgupta25
.golangci.yml @vikrantgupta25
# Zeus Owners
/pkg/zeus/ @vikrantgupta25
/ee/zeus/ @vikrantgupta25
/pkg/licensing/ @vikrantgupta25
/ee/licensing/ @vikrantgupta25
# SQL Owners
/pkg/sqlmigration/ @vikrantgupta25
/ee/sqlmigration/ @vikrantgupta25
/pkg/sqlschema/ @vikrantgupta25
/ee/sqlschema/ @vikrantgupta25
# Analytics Owners
/pkg/analytics/ @vikrantgupta25
/pkg/statsreporter/ @vikrantgupta25
# Querier Owners
/pkg/querier/ @srikanthccv
/pkg/variables/ @srikanthccv
/pkg/types/querybuildertypes/ @srikanthccv
/pkg/types/telemetrytypes/ @srikanthccv
/pkg/querybuilder/ @srikanthccv
/pkg/telemetrylogs/ @srikanthccv
/pkg/telemetrymetadata/ @srikanthccv
/pkg/telemetrymetrics/ @srikanthccv
/pkg/telemetrytraces/ @srikanthccv
# Metrics
/pkg/types/metrictypes/ @srikanthccv
/pkg/types/metricsexplorertypes/ @srikanthccv
/pkg/modules/metricsexplorer/ @srikanthccv
/pkg/prometheus/ @srikanthccv
# APM
/pkg/types/servicetypes/ @srikanthccv
/pkg/types/apdextypes/ @srikanthccv
/pkg/modules/apdex/ @srikanthccv
/pkg/modules/services/ @srikanthccv
# Dashboard
/pkg/types/dashboardtypes/ @srikanthccv
/pkg/modules/dashboard/ @srikanthccv
# Rule/Alertmanager
/pkg/types/ruletypes/ @srikanthccv
/pkg/types/alertmanagertypes @srikanthccv
/pkg/alertmanager/ @srikanthccv
/pkg/ruler/ @srikanthccv
# Correlation-adjacent
/pkg/contextlinks/ @srikanthccv
/pkg/types/parsertypes/ @srikanthccv
/pkg/queryparser/ @srikanthccv
# AuthN / AuthZ Owners
/pkg/authz/ @vikrantgupta25
/ee/authz/ @vikrantgupta25
/pkg/authn/ @vikrantgupta25
/ee/authn/ @vikrantgupta25
/pkg/modules/user/ @vikrantgupta25
/pkg/modules/session/ @vikrantgupta25
/pkg/modules/organization/ @vikrantgupta25
/pkg/modules/authdomain/ @vikrantgupta25
/pkg/modules/role/ @vikrantgupta25
# Integration tests
/tests/integration/ @vikrantgupta25
# OpenAPI types generator
/frontend/src/api @SigNoz/frontend-maintainers
# Dashboard Owners
/frontend/src/hooks/dashboard/ @SigNoz/pulse-frontend
## Dashboard Types
/frontend/src/api/types/dashboard/ @SigNoz/pulse-frontend
## Dashboard List
/frontend/src/pages/DashboardsListPage/ @SigNoz/pulse-frontend
/frontend/src/container/ListOfDashboard/ @SigNoz/pulse-frontend
## Dashboard Page
/frontend/src/pages/DashboardPage/ @SigNoz/pulse-frontend
/frontend/src/container/DashboardContainer/ @SigNoz/pulse-frontend
/frontend/src/container/GridCardLayout/ @SigNoz/pulse-frontend
/frontend/src/container/NewWidget/ @SigNoz/pulse-frontend
## Public Dashboard Page
/frontend/src/pages/PublicDashboard/ @SigNoz/pulse-frontend
/frontend/src/container/PublicDashboardContainer/ @SigNoz/pulse-frontend
/pkg/config/ @grandwizard28
/pkg/errors/ @grandwizard28
/pkg/factory/ @grandwizard28
/pkg/types/ @grandwizard28

View File

@@ -1,85 +1,17 @@
## Pull Request
### Summary
---
<!-- ✍️ A clear and concise description...-->
### 📄 Summary
> Why does this change exist?
> What problem does it solve, and why is this the right approach?
#### Related Issues / PR's
<!-- ✍️ Add the issues being resolved here and related PR's where applicable -->
#### Screenshots
#### Screenshots / Screen Recordings (if applicable)
> Include screenshots or screen recordings that clearly show the behavior before the change and the result after the change. This helps reviewers quickly understand the impact and verify the update.
NA
<!-- ✍️ Add screenshots of before and after changes where applicable-->
#### Issues closed by this PR
> Reference issues using `Closes #issue-number` to enable automatic closure on merge.
#### Affected Areas and Manually Tested Areas
---
### ✅ Change Type
_Select all that apply_
- [ ] ✨ Feature
- [ ] 🐛 Bug fix
- [ ] ♻️ Refactor
- [ ] 🛠️ Infra / Tooling
- [ ] 🧪 Test-only
---
### 🐛 Bug Context
> Required if this PR fixes a bug
#### Root Cause
> What caused the issue?
> Regression, faulty assumption, edge case, refactor, etc.
#### Fix Strategy
> How does this PR address the root cause?
---
### 🧪 Testing Strategy
> How was this change validated?
- Tests added/updated:
- Manual verification:
- Edge cases covered:
---
### ⚠️ Risk & Impact Assessment
> What could break? How do we recover?
- Blast radius:
- Potential regressions:
- Rollback plan:
---
### 📝 Changelog
> Fill only if this affects users, APIs, UI, or documented behavior
> Use **N/A** for internal or non-user-facing changes
| Field | Value |
|------|-------|
| Deployment Type | Cloud / OSS / Enterprise |
| Change Type | Feature / Bug Fix / Maintenance |
| Description | User-facing summary |
---
### 📋 Checklist
- [ ] Tests added or explicitly not required
- [ ] Manually tested
- [ ] Breaking changes documented
- [ ] Backward compatibility considered
---
## 👀 Notes for Reviewers
<!-- Anything reviewers should keep in mind while reviewing -->
---
<!-- ✍️ Add details of blast radius and dev testing areas where applicable-->

View File

@@ -3,8 +3,8 @@ name: build-community
on:
push:
tags:
- "v[0-9]+.[0-9]+.[0-9]+"
- "v[0-9]+.[0-9]+.[0-9]+-rc.[0-9]+"
- 'v[0-9]+.[0-9]+.[0-9]+'
- 'v[0-9]+.[0-9]+.[0-9]+-rc.[0-9]+'
defaults:
run:
@@ -62,21 +62,20 @@ jobs:
secrets: inherit
with:
PRIMUS_REF: main
GO_VERSION: 1.24
GO_NAME: signoz-community
GO_INPUT_ARTIFACT_CACHE_KEY: community-jsbuild-${{ github.sha }}
GO_INPUT_ARTIFACT_PATH: frontend/build
GO_BUILD_CONTEXT: ./cmd/community
GO_BUILD_CONTEXT: ./pkg/query-service
GO_BUILD_FLAGS: >-
-tags timetzdata
-ldflags='-s -w
-ldflags='-linkmode external -extldflags \"-static\" -s -w
-X github.com/SigNoz/signoz/pkg/version.version=${{ needs.prepare.outputs.version }}
-X github.com/SigNoz/signoz/pkg/version.variant=community
-X github.com/SigNoz/signoz/pkg/version.hash=${{ needs.prepare.outputs.hash }}
-X github.com/SigNoz/signoz/pkg/version.time=${{ needs.prepare.outputs.time }}
-X github.com/SigNoz/signoz/pkg/version.branch=${{ needs.prepare.outputs.branch }}
-X github.com/SigNoz/signoz/pkg/analytics.key=9kRrJ7oPCGPEJLF6QjMPLt5bljFhRQBr'
-X github.com/SigNoz/signoz/pkg/version.branch=${{ needs.prepare.outputs.branch }}'
GO_CGO_ENABLED: 1
DOCKER_BASE_IMAGES: '{"alpine": "alpine:3.20.3"}'
DOCKER_DOCKERFILE_PATH: ./cmd/community/Dockerfile.multi-arch
DOCKER_DOCKERFILE_PATH: ./pkg/query-service/Dockerfile.multi-arch
DOCKER_MANIFEST: true
DOCKER_PROVIDERS: dockerhub

View File

@@ -67,9 +67,8 @@ jobs:
echo 'TUNNEL_URL="${{ secrets.TUNNEL_URL }}"' >> frontend/.env
echo 'TUNNEL_DOMAIN="${{ secrets.TUNNEL_DOMAIN }}"' >> frontend/.env
echo 'POSTHOG_KEY="${{ secrets.POSTHOG_KEY }}"' >> frontend/.env
echo 'PYLON_APP_ID="${{ secrets.PYLON_APP_ID }}"' >> frontend/.env
echo 'APPCUES_APP_ID="${{ secrets.APPCUES_APP_ID }}"' >> frontend/.env
echo 'PYLON_IDENTITY_SECRET="${{ secrets.PYLON_IDENTITY_SECRET }}"' >> frontend/.env
echo 'CUSTOMERIO_ID="${{ secrets.CUSTOMERIO_ID }}"' >> frontend/.env
echo 'CUSTOMERIO_SITE_ID="${{ secrets.CUSTOMERIO_SITE_ID }}"' >> frontend/.env
- name: cache-dotenv
uses: actions/cache@v4
with:
@@ -85,7 +84,7 @@ jobs:
JS_INPUT_ARTIFACT_CACHE_KEY: enterprise-dotenv-${{ github.sha }}
JS_INPUT_ARTIFACT_PATH: frontend/.env
JS_OUTPUT_ARTIFACT_CACHE_KEY: enterprise-jsbuild-${{ github.sha }}
JS_OUTPUT_ARTIFACT_PATH: frontend/build
JS_OUTPUT_ARTIFACT_PATH: frontend/build
DOCKER_BUILD: false
DOCKER_MANIFEST: false
go-build:
@@ -94,23 +93,21 @@ jobs:
secrets: inherit
with:
PRIMUS_REF: main
GO_VERSION: 1.24
GO_INPUT_ARTIFACT_CACHE_KEY: enterprise-jsbuild-${{ github.sha }}
GO_INPUT_ARTIFACT_PATH: frontend/build
GO_BUILD_CONTEXT: ./cmd/enterprise
GO_BUILD_CONTEXT: ./ee/query-service
GO_BUILD_FLAGS: >-
-tags timetzdata
-ldflags='-s -w
-ldflags='-linkmode external -extldflags \"-static\" -s -w
-X github.com/SigNoz/signoz/pkg/version.version=${{ needs.prepare.outputs.version }}
-X github.com/SigNoz/signoz/pkg/version.variant=enterprise
-X github.com/SigNoz/signoz/pkg/version.hash=${{ needs.prepare.outputs.hash }}
-X github.com/SigNoz/signoz/pkg/version.time=${{ needs.prepare.outputs.time }}
-X github.com/SigNoz/signoz/pkg/version.branch=${{ needs.prepare.outputs.branch }}
-X github.com/SigNoz/signoz/ee/zeus.url=https://api.signoz.cloud
-X github.com/SigNoz/signoz/ee/zeus.deprecatedURL=https://license.signoz.io
-X github.com/SigNoz/signoz/ee/query-service/constants.LicenseSignozIo=https://license.signoz.io/api/v1
-X github.com/SigNoz/signoz/pkg/analytics.key=9kRrJ7oPCGPEJLF6QjMPLt5bljFhRQBr'
-X github.com/SigNoz/signoz/ee/query-service/constants.ZeusURL=https://api.signoz.cloud
-X github.com/SigNoz/signoz/ee/query-service/constants.LicenseSignozIo=https://license.signoz.io/api/v1'
GO_CGO_ENABLED: 1
DOCKER_BASE_IMAGES: '{"alpine": "alpine:3.20.3"}'
DOCKER_DOCKERFILE_PATH: ./cmd/enterprise/Dockerfile.multi-arch
DOCKER_DOCKERFILE_PATH: ./ee/query-service/Dockerfile.multi-arch
DOCKER_MANIFEST: true
DOCKER_PROVIDERS: ${{ needs.prepare.outputs.docker_providers }}

View File

@@ -64,11 +64,8 @@ jobs:
run: |
mkdir -p frontend
echo 'CI=1' > frontend/.env
echo 'TUNNEL_URL="${{ secrets.NP_TUNNEL_URL }}"' >> frontend/.env
echo 'TUNNEL_DOMAIN="${{ secrets.NP_TUNNEL_DOMAIN }}"' >> frontend/.env
echo 'PYLON_APP_ID="${{ secrets.NP_PYLON_APP_ID }}"' >> frontend/.env
echo 'APPCUES_APP_ID="${{ secrets.NP_APPCUES_APP_ID }}"' >> frontend/.env
echo 'PYLON_IDENTITY_SECRET="${{ secrets.NP_PYLON_IDENTITY_SECRET }}"' >> frontend/.env
echo 'TUNNEL_URL=https://telemetry.staging.signoz.cloud/tunnel' >> frontend/.env
echo 'TUNNEL_DOMAIN=https://telemetry.staging.signoz.cloud' >> frontend/.env
- name: cache-dotenv
uses: actions/cache@v4
with:
@@ -93,24 +90,22 @@ jobs:
secrets: inherit
with:
PRIMUS_REF: main
GO_VERSION: 1.24
GO_INPUT_ARTIFACT_CACHE_KEY: staging-jsbuild-${{ github.sha }}
GO_INPUT_ARTIFACT_PATH: frontend/build
GO_BUILD_CONTEXT: ./cmd/enterprise
GO_BUILD_CONTEXT: ./ee/query-service
GO_BUILD_FLAGS: >-
-tags timetzdata
-ldflags='-s -w
-ldflags='-linkmode external -extldflags \"-static\" -s -w
-X github.com/SigNoz/signoz/pkg/version.version=${{ needs.prepare.outputs.version }}
-X github.com/SigNoz/signoz/pkg/version.variant=enterprise
-X github.com/SigNoz/signoz/pkg/version.hash=${{ needs.prepare.outputs.hash }}
-X github.com/SigNoz/signoz/pkg/version.time=${{ needs.prepare.outputs.time }}
-X github.com/SigNoz/signoz/pkg/version.branch=${{ needs.prepare.outputs.branch }}
-X github.com/SigNoz/signoz/ee/zeus.url=https://api.staging.signoz.cloud
-X github.com/SigNoz/signoz/ee/zeus.deprecatedURL=https://license.staging.signoz.cloud
-X github.com/SigNoz/signoz/ee/query-service/constants.LicenseSignozIo=https://license.staging.signoz.cloud/api/v1
-X github.com/SigNoz/signoz/pkg/analytics.key=9kRrJ7oPCGPEJLF6QjMPLt5bljFhRQBr'
-X github.com/SigNoz/signoz/ee/query-service/constants.ZeusURL=https://api.staging.signoz.cloud
-X github.com/SigNoz/signoz/ee/query-service/constants.LicenseSignozIo=https://license.staging.signoz.cloud/api/v1'
GO_CGO_ENABLED: 1
DOCKER_BASE_IMAGES: '{"alpine": "alpine:3.20.3"}'
DOCKER_DOCKERFILE_PATH: ./cmd/enterprise/Dockerfile.multi-arch
DOCKER_DOCKERFILE_PATH: ./ee/query-service/Dockerfile.multi-arch
DOCKER_MANIFEST: true
DOCKER_PROVIDERS: gcp
staging:
@@ -124,4 +119,4 @@ jobs:
GITHUB_SILENT: true
GITHUB_REPOSITORY_NAME: charts-saas-v3-staging
GITHUB_EVENT_NAME: releaser
GITHUB_EVENT_PAYLOAD: '{"deployment": "${{ needs.prepare.outputs.deployment }}", "signoz_version": "${{ needs.prepare.outputs.version }}"}'
GITHUB_EVENT_PAYLOAD: "{\"deployment\": \"${{ needs.prepare.outputs.deployment }}\", \"signoz_version\": \"${{ needs.prepare.outputs.version }}\"}"

View File

@@ -25,10 +25,3 @@ jobs:
else
echo "No references to 'ee' packages found in 'pkg' directory"
fi
if grep -R --include="*.go" '.*/ee/.*' cmd/community/; then
echo "Error: Found references to 'ee' packages in 'cmd/community' directory"
exit 1
else
echo "No references to 'ee' packages found in 'cmd/community' directory"
fi

View File

@@ -18,7 +18,6 @@ jobs:
with:
PRIMUS_REF: main
GO_TEST_CONTEXT: ./...
GO_VERSION: 1.24
fmt:
if: |
(github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
@@ -27,7 +26,6 @@ jobs:
secrets: inherit
with:
PRIMUS_REF: main
GO_VERSION: 1.24
lint:
if: |
(github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
@@ -36,16 +34,6 @@ jobs:
secrets: inherit
with:
PRIMUS_REF: main
GO_VERSION: 1.24
deps:
if: |
(github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
uses: signoz/primus.workflows/.github/workflows/go-deps.yaml@main
secrets: inherit
with:
PRIMUS_REF: main
GO_VERSION: 1.24
build:
if: |
(github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
@@ -57,7 +45,7 @@ jobs:
- name: go-install
uses: actions/setup-go@v5
with:
go-version: "1.24"
go-version: "1.22"
- name: qemu-install
uses: docker/setup-qemu-action@v3
- name: aarch64-install
@@ -65,10 +53,6 @@ jobs:
set -ex
sudo apt-get update
sudo apt-get install -y gcc-aarch64-linux-gnu musl-tools
- name: node-install
uses: actions/setup-node@v5
with:
node-version: "22"
- name: docker-community
shell: bash
run: |
@@ -77,19 +61,3 @@ jobs:
shell: bash
run: |
make docker-build-enterprise
openapi:
if: |
(github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
runs-on: ubuntu-latest
steps:
- name: self-checkout
uses: actions/checkout@v4
- name: go-install
uses: actions/setup-go@v5
with:
go-version: "1.24"
- name: generate-openapi
run: |
go run cmd/enterprise/*.go generate openapi
git diff --compact-summary --exit-code || (echo; echo "Unexpected difference in openapi spec. Run go run cmd/enterprise/*.go generate openapi locally and commit."; exit 1)

View File

@@ -36,7 +36,7 @@ jobs:
- ubuntu-latest
- macos-latest
env:
CONFIG_PATH: cmd/community/.goreleaser.yaml
CONFIG_PATH: pkg/query-service/.goreleaser.yaml
runs-on: ${{ matrix.os }}
steps:
- name: checkout
@@ -58,7 +58,7 @@ jobs:
- name: setup-go
uses: actions/setup-go@v5
with:
go-version: "1.24"
go-version: "1.22"
- name: cross-compilation-tools
if: matrix.os == 'ubuntu-latest'
run: |
@@ -100,7 +100,7 @@ jobs:
needs: build
env:
DOCKER_CLI_EXPERIMENTAL: "enabled"
WORKDIR: cmd/community
WORKDIR: pkg/query-service
steps:
- name: checkout
uses: actions/checkout@v4
@@ -122,7 +122,7 @@ jobs:
- name: setup-go
uses: actions/setup-go@v5
with:
go-version: "1.24"
go-version: "1.22"
# copy the caches from build
- name: get-sha

View File

@@ -33,9 +33,8 @@ jobs:
echo 'TUNNEL_URL="${{ secrets.TUNNEL_URL }}"' >> .env
echo 'TUNNEL_DOMAIN="${{ secrets.TUNNEL_DOMAIN }}"' >> .env
echo 'POSTHOG_KEY="${{ secrets.POSTHOG_KEY }}"' >> .env
echo 'PYLON_APP_ID="${{ secrets.PYLON_APP_ID }}"' >> .env
echo 'APPCUES_APP_ID="${{ secrets.APPCUES_APP_ID }}"' >> .env
echo 'PYLON_IDENTITY_SECRET="${{ secrets.PYLON_IDENTITY_SECRET }}"' >> .env
echo 'CUSTOMERIO_ID="${{ secrets.CUSTOMERIO_ID }}"' >> .env
echo 'CUSTOMERIO_SITE_ID="${{ secrets.CUSTOMERIO_SITE_ID }}"' >> .env
- name: build-frontend
run: make js-build
- name: upload-frontend-artifact
@@ -51,7 +50,7 @@ jobs:
- ubuntu-latest
- macos-latest
env:
CONFIG_PATH: cmd/enterprise/.goreleaser.yaml
CONFIG_PATH: ee/query-service/.goreleaser.yaml
runs-on: ${{ matrix.os }}
steps:
- name: checkout
@@ -73,7 +72,7 @@ jobs:
- name: setup-go
uses: actions/setup-go@v5
with:
go-version: "1.24"
go-version: "1.22"
- name: cross-compilation-tools
if: matrix.os == 'ubuntu-latest'
run: |
@@ -136,7 +135,7 @@ jobs:
- name: setup-go
uses: actions/setup-go@v5
with:
go-version: "1.24"
go-version: "1.22"
# copy the caches from build
- name: get-sha

View File

@@ -9,51 +9,20 @@ on:
- labeled
jobs:
fmtlint:
if: |
((github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))) && contains(github.event.pull_request.labels.*.name, 'safe-to-integrate')
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v4
- name: python
uses: actions/setup-python@v5
with:
python-version: 3.13
- name: uv
uses: astral-sh/setup-uv@v4
- name: install
run: |
cd tests/integration && uv sync
- name: fmt
run: |
make py-fmt
- name: lint
run: |
make py-lint
test:
strategy:
fail-fast: false
matrix:
src:
- bootstrap
- passwordauthn
- callbackauthn
- cloudintegrations
- dashboard
- querier
- ttl
- preference
- logspipelines
sqlstore-provider:
- postgres
- sqlite
clickhouse-version:
- 25.5.6
- 25.10.1
- 24.1.2-alpine
- 24.12-alpine
schema-migrator-version:
- v0.129.7
- v0.111.38
postgres-version:
- 15
if: |
@@ -67,32 +36,15 @@ jobs:
uses: actions/setup-python@v5
with:
python-version: 3.13
- name: uv
uses: astral-sh/setup-uv@v4
- name: install
- name: poetry
run: |
cd tests/integration && uv sync
- name: webdriver
run: |
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
echo "deb http://dl.google.com/linux/chrome/deb/ stable main" | sudo tee -a /etc/apt/sources.list.d/google-chrome.list
sudo apt-get update -qqy
sudo apt-get -qqy install google-chrome-stable
CHROME_VERSION=$(google-chrome-stable --version)
CHROME_FULL_VERSION=${CHROME_VERSION%%.*}
CHROME_MAJOR_VERSION=${CHROME_FULL_VERSION//[!0-9]}
sudo rm /etc/apt/sources.list.d/google-chrome.list
export CHROMEDRIVER_VERSION=`curl -s https://googlechromelabs.github.io/chrome-for-testing/LATEST_RELEASE_${CHROME_MAJOR_VERSION%%.*}`
curl -L -O "https://storage.googleapis.com/chrome-for-testing-public/${CHROMEDRIVER_VERSION}/linux64/chromedriver-linux64.zip"
unzip chromedriver-linux64.zip
chmod +x chromedriver-linux64/chromedriver
sudo mv chromedriver-linux64/chromedriver /usr/local/bin/chromedriver
chromedriver -version
google-chrome-stable --version
python -m pip install poetry==2.1.2
python -m poetry config virtualenvs.in-project true
cd tests/integration && poetry install --no-root
- name: run
run: |
cd tests/integration && \
uv run pytest \
poetry run pytest \
--basetemp=./tmp/ \
src/${{matrix.src}} \
--sqlstore-provider ${{matrix.sqlstore-provider}} \

View File

@@ -17,23 +17,10 @@ jobs:
steps:
- name: checkout
uses: actions/checkout@v4
- name: setup node
uses: actions/setup-node@v5
with:
node-version: "22"
- name: install
run: cd frontend && yarn install
- name: tsc
run: cd frontend && yarn tsc
tsc2:
if: |
(github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
uses: signoz/primus.workflows/.github/workflows/js-tsc.yaml@main
secrets: inherit
with:
PRIMUS_REF: main
JS_SRC: frontend
test:
if: |
(github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||

View File

@@ -1,6 +1,10 @@
name: prereleaser
on:
# schedule every wednesday 6:30 AM UTC (12:00 PM IST)
schedule:
- cron: '30 6 * * 3'
# allow manual triggering of the workflow by a maintainer
workflow_dispatch:
inputs:

View File

@@ -1,62 +0,0 @@
name: e2eci
on:
workflow_dispatch:
inputs:
userRole:
description: "Role of the user (ADMIN, EDITOR, VIEWER)"
required: true
type: choice
options:
- ADMIN
- EDITOR
- VIEWER
jobs:
test:
name: Run Playwright Tests
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: lts/*
- name: Mask secrets and input
run: |
echo "::add-mask::${{ secrets.BASE_URL }}"
echo "::add-mask::${{ secrets.LOGIN_USERNAME }}"
echo "::add-mask::${{ secrets.LOGIN_PASSWORD }}"
echo "::add-mask::${{ github.event.inputs.userRole }}"
- name: Install dependencies
working-directory: frontend
run: |
npm install -g yarn
yarn
- name: Install Playwright Browsers
working-directory: frontend
run: yarn playwright install --with-deps
- name: Run Playwright Tests
working-directory: frontend
run: |
BASE_URL="${{ secrets.BASE_URL }}" \
LOGIN_USERNAME="${{ secrets.LOGIN_USERNAME }}" \
LOGIN_PASSWORD="${{ secrets.LOGIN_PASSWORD }}" \
USER_ROLE="${{ github.event.inputs.userRole }}" \
yarn playwright test
- name: Upload Playwright Report
uses: actions/upload-artifact@v4
if: always()
with:
name: playwright-report
path: frontend/playwright-report/
retention-days: 30

.gitignore vendored (18 lines changed)
View File

@@ -1,9 +1,6 @@
node_modules
.vscode
!.vscode/settings.json
deploy/docker/environment_tiny/common_test
frontend/node_modules
frontend/.pnp
@@ -15,6 +12,7 @@ frontend/coverage
# production
frontend/build
frontend/.vscode
frontend/.yarnclean
frontend/.temp_cache
frontend/test-results
@@ -33,6 +31,7 @@ frontend/src/constants/env.ts
.idea
**/.vscode
**/build
**/storage
**/locust-scripts/__pycache__/
@@ -50,7 +49,6 @@ ee/query-service/tests/test-deploy/data/
# local data
*.backup
*.db
**/db
/deploy/docker/clickhouse-setup/data/
/deploy/docker-swarm/clickhouse-setup/data/
bin/
@@ -62,13 +60,14 @@ ee/query-service/db
e2e/node_modules/
e2e/test-results/
e2e/playwright-report/
e2e/blob-report/
e2e/playwright/.cache/
e2e/.auth
# go
vendor/
**/main/**
__debug_bin**
# git-town
.git-branches.toml
@@ -88,8 +87,6 @@ queries.active
.devenv/**/tmp/**
.qodo
.dev
### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
@@ -107,6 +104,7 @@ dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
@@ -222,6 +220,8 @@ cython_debug/
#.idea/
### Python Patch ###
# Poetry local configuration file - https://python-poetry.org/docs/configuration/#local-configuration
poetry.toml
# ruff
.ruff_cache/
@@ -229,6 +229,4 @@ cython_debug/
# LSP config files
pyrightconfig.json
# cursor files
frontend/.cursor/
# End of https://www.toptal.com/developers/gitignore/api/python

View File

@@ -1,64 +0,0 @@
version: "2"
linters:
default: none
enable:
- bodyclose
- depguard
- errcheck
- forbidigo
- govet
- iface
- ineffassign
- misspell
- nilnil
- sloglint
- wastedassign
- unparam
- unused
settings:
depguard:
rules:
noerrors:
deny:
- pkg: errors
desc: Do not use errors package. Use github.com/SigNoz/signoz/pkg/errors instead.
nozap:
deny:
- pkg: go.uber.org/zap
desc: Do not use zap logger. Use slog instead.
forbidigo:
forbid:
- pattern: fmt.Errorf
- pattern: ^(fmt\.Print.*|print|println)$
iface:
enable:
- identical
sloglint:
no-mixed-args: true
kv-only: true
no-global: all
context: all
static-msg: true
key-naming-case: snake
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- pkg/query-service
- ee/query-service
- scripts/
- tmp/
- third_party$
- builtin$
- examples$
formatters:
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$

View File

@@ -1,17 +0,0 @@
# Link to template variables: https://pkg.go.dev/github.com/vektra/mockery/v3/config#TemplateData
template: testify
packages:
github.com/SigNoz/signoz/pkg/alertmanager:
config:
all: true
dir: '{{.InterfaceDir}}/alertmanagertest'
filename: "alertmanager.go"
structname: 'Mock{{.InterfaceName}}'
pkgname: '{{.SrcPackageName}}test'
github.com/SigNoz/signoz/pkg/tokenizer:
config:
all: true
dir: '{{.InterfaceDir}}/tokenizertest'
filename: "tokenizer.go"
structname: 'Mock{{.InterfaceName}}'
pkgname: '{{.SrcPackageName}}test'

.vscode/settings.json vendored (13 lines changed)
View File

@@ -1,13 +0,0 @@
{
"eslint.workingDirectories": ["./frontend"],
"editor.formatOnSave": true,
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.codeActionsOnSave": {
"source.fixAll.eslint": "explicit"
},
"prettier.requireConfig": true,
"[go]": {
"editor.formatOnSave": true,
"editor.defaultFormatter": "golang.go"
}
}

View File

@@ -1,62 +0,0 @@
# SigNoz Community Advocate Program
Our community is filled with passionate developers who love SigNoz and have been helping spread the word about observability across the world. The SigNoz Community Advocate Program is our way of recognizing these incredible community members and creating deeper collaboration opportunities.
## What is the SigNoz Community Advocate Program?
The SigNoz Community Advocate Program celebrates and supports community members who are already passionate about observability and helping fellow developers. If you're someone who loves discussing SigNoz, helping others with their implementations, or sharing knowledge about observability practices, this program is designed with you in mind.
Our advocates are the heart of the SigNoz community, helping other developers succeed with observability and providing valuable insights that help us build better products.
## What Do Advocates Do?
1. **Community Support**
- Help fellow developers in our Slack community and GitHub Discussions
- Answer questions and share solutions
- Guide newcomers through SigNoz self-host implementations
2. **Knowledge Sharing**
- Spread awareness about observability best practices on developer forums
- Create content like blog posts, social media posts, and videos
- Host local meetups and events in their regions
3. **Product Collaboration**
- Provide insights on features, changes, and improvements the community needs
- Beta test new features and provide early feedback
- Help us understand real-world use cases and pain points
## What's In It For You?
**Recognition & Swag**
- Official recognition as a SigNoz advocate
- Welcome hamper upon joining
- Exclusive swag box within your first 3 months
- Feature on our website (with your permission)
**Early Access**
- First look at new features and updates
- Direct line to the SigNoz team for feedback and suggestions
- Opportunity to influence product roadmap
**Community Impact**
- Help shape the observability landscape
- Build your reputation in the developer community
- Connect with like-minded developers globally
## How Does It Work?
Currently, the SigNoz Community Advocate Program is **invite-only**. We're starting with a small group of passionate community members who have already been making a difference.
We'll be working closely with our first advocates to shape the program details, benefits, and structure based on what works best for everyone involved.
If you're interested in learning more about the program or want to get more involved in the SigNoz community, join our [Slack community](https://signoz-community.slack.com/) and let us know!
---
*The SigNoz Community Advocate Program recognizes and celebrates the amazing community members who are already passionate about helping fellow developers succeed with observability.*

View File

@@ -78,5 +78,3 @@ Need assistance? Join our Slack community:
- Set up your [development environment](docs/contributing/development.md)
- Deploy and observe [SigNoz in action with OpenTelemetry Demo Application](docs/otel-demo-docs.md)
- Explore the [SigNoz Community Advocate Program](ADVOCATE.md), which recognises contributors who support the community, share their expertise, and help shape SigNoz's future.
- Write [integration tests](docs/contributing/go/integration.md)

View File

@@ -2,7 +2,7 @@ Copyright (c) 2020-present SigNoz Inc.
Portions of this software are licensed as follows:
* All content that resides under the "ee/" and the "cmd/enterprise/" directory of this repository, if that directory exists, is licensed under the license defined in "ee/LICENSE".
* All content that resides under the "ee/" directory of this repository, if that directory exists, is licensed under the license defined in "ee/LICENSE".
* All third party components incorporated into the SigNoz Software are licensed under the original license provided by the owner of the applicable component.
* Content outside of the above mentioned directories or restrictions above is available under the "MIT Expat" license as defined below.

View File

@@ -14,24 +14,24 @@ ARCHS ?= amd64 arm64
TARGET_DIR ?= $(shell pwd)/target
ZEUS_URL ?= https://api.signoz.cloud
GO_BUILD_LDFLAG_ZEUS_URL = -X github.com/SigNoz/signoz/ee/zeus.url=$(ZEUS_URL)
LICENSE_URL ?= https://license.signoz.io
GO_BUILD_LDFLAG_LICENSE_SIGNOZ_IO = -X github.com/SigNoz/signoz/ee/zeus.deprecatedURL=$(LICENSE_URL)
GO_BUILD_LDFLAG_ZEUS_URL = -X github.com/SigNoz/signoz/ee/query-service/constants.ZeusURL=$(ZEUS_URL)
LICENSE_URL ?= https://license.signoz.io/api/v1
GO_BUILD_LDFLAG_LICENSE_SIGNOZ_IO = -X github.com/SigNoz/signoz/ee/query-service/constants.LicenseSignozIo=$(LICENSE_URL)
GO_BUILD_VERSION_LDFLAGS = -X github.com/SigNoz/signoz/pkg/version.version=$(VERSION) -X github.com/SigNoz/signoz/pkg/version.hash=$(COMMIT_SHORT_SHA) -X github.com/SigNoz/signoz/pkg/version.time=$(TIMESTAMP) -X github.com/SigNoz/signoz/pkg/version.branch=$(BRANCH_NAME)
GO_BUILD_ARCHS_COMMUNITY = $(addprefix go-build-community-,$(ARCHS))
GO_BUILD_CONTEXT_COMMUNITY = $(SRC)/cmd/community
GO_BUILD_CONTEXT_COMMUNITY = $(SRC)/pkg/query-service
GO_BUILD_LDFLAGS_COMMUNITY = $(GO_BUILD_VERSION_LDFLAGS) -X github.com/SigNoz/signoz/pkg/version.variant=community
GO_BUILD_ARCHS_ENTERPRISE = $(addprefix go-build-enterprise-,$(ARCHS))
GO_BUILD_ARCHS_ENTERPRISE_RACE = $(addprefix go-build-enterprise-race-,$(ARCHS))
GO_BUILD_CONTEXT_ENTERPRISE = $(SRC)/cmd/enterprise
GO_BUILD_CONTEXT_ENTERPRISE = $(SRC)/ee/query-service
GO_BUILD_LDFLAGS_ENTERPRISE = $(GO_BUILD_VERSION_LDFLAGS) -X github.com/SigNoz/signoz/pkg/version.variant=enterprise $(GO_BUILD_LDFLAG_ZEUS_URL) $(GO_BUILD_LDFLAG_LICENSE_SIGNOZ_IO)
DOCKER_BUILD_ARCHS_COMMUNITY = $(addprefix docker-build-community-,$(ARCHS))
DOCKERFILE_COMMUNITY = $(SRC)/cmd/community/Dockerfile
DOCKERFILE_COMMUNITY = $(SRC)/pkg/query-service/Dockerfile
DOCKER_REGISTRY_COMMUNITY ?= docker.io/signoz/signoz-community
DOCKER_BUILD_ARCHS_ENTERPRISE = $(addprefix docker-build-enterprise-,$(ARCHS))
DOCKERFILE_ENTERPRISE = $(SRC)/cmd/enterprise/Dockerfile
DOCKERFILE_ENTERPRISE = $(SRC)/ee/query-service/Dockerfile
DOCKER_REGISTRY_ENTERPRISE ?= docker.io/signoz/signoz
JS_BUILD_CONTEXT = $(SRC)/frontend
@@ -61,23 +61,6 @@ devenv-postgres: ## Run postgres in devenv
@cd .devenv/docker/postgres; \
docker compose -f compose.yaml up -d
.PHONY: devenv-signoz-otel-collector
devenv-signoz-otel-collector: ## Run signoz-otel-collector in devenv (requires clickhouse to be running)
@cd .devenv/docker/signoz-otel-collector; \
docker compose -f compose.yaml up -d
.PHONY: devenv-up
devenv-up: devenv-clickhouse devenv-signoz-otel-collector ## Start both clickhouse and signoz-otel-collector for local development
@echo "Development environment is ready!"
@echo " - ClickHouse: http://localhost:8123"
@echo " - Signoz OTel Collector: grpc://localhost:4317, http://localhost:4318"
.PHONY: devenv-clickhouse-clean
devenv-clickhouse-clean: ## Clean all ClickHouse data from filesystem
@echo "Removing ClickHouse data..."
@rm -rf .devenv/docker/clickhouse/fs/tmp/*
@echo "ClickHouse data cleaned!"
##############################################################
# go commands
##############################################################
@@ -86,13 +69,16 @@ go-run-enterprise: ## Runs the enterprise go backend server
@SIGNOZ_INSTRUMENTATION_LOGS_LEVEL=debug \
SIGNOZ_SQLSTORE_SQLITE_PATH=signoz.db \
SIGNOZ_WEB_ENABLED=false \
SIGNOZ_TOKENIZER_JWT_SECRET=secret \
SIGNOZ_JWT_SECRET=secret \
SIGNOZ_ALERTMANAGER_PROVIDER=signoz \
SIGNOZ_TELEMETRYSTORE_PROVIDER=clickhouse \
SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://127.0.0.1:9000 \
SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER=cluster \
go run -race \
$(GO_BUILD_CONTEXT_ENTERPRISE)/*.go server
$(GO_BUILD_CONTEXT_ENTERPRISE)/main.go \
--config ./conf/prometheus.yml \
--cluster cluster \
--use-logs-new-schema true \
--use-trace-new-schema true
.PHONY: go-test
go-test: ## Runs go unit tests
@@ -103,13 +89,16 @@ go-run-community: ## Runs the community go backend server
@SIGNOZ_INSTRUMENTATION_LOGS_LEVEL=debug \
SIGNOZ_SQLSTORE_SQLITE_PATH=signoz.db \
SIGNOZ_WEB_ENABLED=false \
SIGNOZ_TOKENIZER_JWT_SECRET=secret \
SIGNOZ_JWT_SECRET=secret \
SIGNOZ_ALERTMANAGER_PROVIDER=signoz \
SIGNOZ_TELEMETRYSTORE_PROVIDER=clickhouse \
SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://127.0.0.1:9000 \
SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER=cluster \
go run -race \
$(GO_BUILD_CONTEXT_COMMUNITY)/*.go server
$(GO_BUILD_CONTEXT_COMMUNITY)/main.go \
--config ./conf/prometheus.yml \
--cluster cluster \
--use-logs-new-schema true \
--use-trace-new-schema true
.PHONY: go-build-community $(GO_BUILD_ARCHS_COMMUNITY)
go-build-community: ## Builds the go backend server for community
@@ -118,9 +107,9 @@ $(GO_BUILD_ARCHS_COMMUNITY): go-build-community-%: $(TARGET_DIR)
@mkdir -p $(TARGET_DIR)/$(OS)-$*
@echo ">> building binary $(TARGET_DIR)/$(OS)-$*/$(NAME)-community"
@if [ $* = "arm64" ]; then \
GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_COMMUNITY) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME)-community -ldflags "-s -w $(GO_BUILD_LDFLAGS_COMMUNITY)"; \
CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_COMMUNITY) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME)-community -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_COMMUNITY)"; \
else \
GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_COMMUNITY) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME)-community -ldflags "-s -w $(GO_BUILD_LDFLAGS_COMMUNITY)"; \
CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_COMMUNITY) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME)-community -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_COMMUNITY)"; \
fi
@@ -131,9 +120,9 @@ $(GO_BUILD_ARCHS_ENTERPRISE): go-build-enterprise-%: $(TARGET_DIR)
@mkdir -p $(TARGET_DIR)/$(OS)-$*
@echo ">> building binary $(TARGET_DIR)/$(OS)-$*/$(NAME)"
@if [ $* = "arm64" ]; then \
GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
else \
GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
fi
.PHONY: go-build-enterprise-race $(GO_BUILD_ARCHS_ENTERPRISE_RACE)
@@ -143,9 +132,9 @@ $(GO_BUILD_ARCHS_ENTERPRISE_RACE): go-build-enterprise-race-%: $(TARGET_DIR)
@mkdir -p $(TARGET_DIR)/$(OS)-$*
@echo ">> building binary $(TARGET_DIR)/$(OS)-$*/$(NAME)"
@if [ $* = "arm64" ]; then \
GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -race -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -race -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
else \
GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -race -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
CGO_ENABLED=1 GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -race -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-linkmode external -extldflags '-static' -s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
fi
##############################################################
@@ -202,40 +191,14 @@ docker-buildx-enterprise: go-build-enterprise js-build
##############################################################
.PHONY: py-fmt
py-fmt: ## Run black for integration tests
@cd tests/integration && uv run black .
@cd tests/integration && poetry run black .
.PHONY: py-lint
py-lint: ## Run lint for integration tests
@cd tests/integration && uv run isort .
@cd tests/integration && uv run autoflake .
@cd tests/integration && uv run pylint .
.PHONY: py-test-setup
py-test-setup: ## Runs integration tests
@cd tests/integration && uv run pytest --basetemp=./tmp/ -vv --reuse --capture=no src/bootstrap/setup.py::test_setup
.PHONY: py-test-teardown
py-test-teardown: ## Runs integration tests with teardown
@cd tests/integration && uv run pytest --basetemp=./tmp/ -vv --teardown --capture=no src/bootstrap/setup.py::test_teardown
@cd tests/integration && poetry run isort .
@cd tests/integration && poetry run autoflake .
@cd tests/integration && poetry run pylint .
.PHONY: py-test
py-test: ## Runs integration tests
@cd tests/integration && uv run pytest --basetemp=./tmp/ -vv --capture=no src/
.PHONY: py-clean
py-clean: ## Clear all pycache and pytest cache from tests directory recursively
@echo ">> cleaning python cache files from tests directory"
@find tests -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
@find tests -type d -name .pytest_cache -exec rm -rf {} + 2>/dev/null || true
@find tests -type f -name "*.pyc" -delete 2>/dev/null || true
@find tests -type f -name "*.pyo" -delete 2>/dev/null || true
@echo ">> python cache cleaned"
##############################################################
# generate commands
##############################################################
.PHONY: gen-mocks
gen-mocks:
@echo ">> Generating mocks"
@mockery --config .mockery.yml
@cd tests/integration && poetry run pytest --basetemp=./tmp/ -vv --capture=no src/

View File

@@ -8,6 +8,7 @@
<p align="center">All your logs, metrics, and traces in one place. Monitor your application, spot issues before they occur and troubleshoot downtime quickly with rich context. SigNoz is a cost-effective open-source alternative to Datadog and New Relic. Visit <a href="https://signoz.io" target="_blank">signoz.io</a> for the full documentation, tutorials, and guide.</p>
<p align="center">
<img alt="Downloads" src="https://img.shields.io/docker/pulls/signoz/query-service?label=Docker Downloads"> </a>
<img alt="GitHub issues" src="https://img.shields.io/github/issues/signoz/signoz"> </a>
<a href="https://twitter.com/intent/tweet?text=Monitor%20your%20applications%20and%20troubleshoot%20problems%20with%20SigNoz,%20an%20open-source%20alternative%20to%20DataDog,%20NewRelic.&url=https://signoz.io/&via=SigNozHQ&hashtags=opensource,signoz,observability">
<img alt="tweet" src="https://img.shields.io/twitter/url/http/shields.io.svg?style=social"> </a>
@@ -230,13 +231,11 @@ Not sure how to get started? Just ping us on `#contributing` in our [slack commu
- [Shaheer Kochai](https://github.com/ahmadshaheer)
- [Amlan Kumar Nandy](https://github.com/amlannandy)
- [Sahil Khan](https://github.com/sawhil)
- [Aditya Singh](https://github.com/aks07)
- [Abhi Kumar](https://github.com/ahrefabhi)
#### DevOps
- [Prashant Shahi](https://github.com/prashant-shahi)
- [Vibhu Pandey](https://github.com/therealpandey)
- [Vibhu Pandey](https://github.com/grandwizard28)
<br /><br />

View File

@@ -1,19 +0,0 @@
package main
import (
"log/slog"
"github.com/SigNoz/signoz/cmd"
"github.com/SigNoz/signoz/pkg/instrumentation"
)
func main() {
// initialize logger for logging in the cmd/ package. This logger is different from the logger used in the application.
logger := instrumentation.NewLogger(instrumentation.Config{Logs: instrumentation.LogsConfig{Level: slog.LevelInfo}})
// register a list of commands to the root command
registerServer(cmd.RootCmd, logger)
cmd.RegisterGenerate(cmd.RootCmd, logger)
cmd.Execute(logger)
}

View File

@@ -1,126 +0,0 @@
package main
import (
"context"
"log/slog"
"github.com/SigNoz/signoz/cmd"
"github.com/SigNoz/signoz/pkg/analytics"
"github.com/SigNoz/signoz/pkg/authn"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/authz/openfgaauthz"
"github.com/SigNoz/signoz/pkg/authz/openfgaschema"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/gateway"
"github.com/SigNoz/signoz/pkg/gateway/noopgateway"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/licensing/nooplicensing"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
"github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/query-service/app"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/sqlschema"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/version"
"github.com/SigNoz/signoz/pkg/zeus"
"github.com/SigNoz/signoz/pkg/zeus/noopzeus"
"github.com/spf13/cobra"
)
func registerServer(parentCmd *cobra.Command, logger *slog.Logger) {
var flags signoz.DeprecatedFlags
serverCmd := &cobra.Command{
Use: "server",
Short: "Run the SigNoz server",
FParseErrWhitelist: cobra.FParseErrWhitelist{UnknownFlags: true},
RunE: func(currCmd *cobra.Command, args []string) error {
config, err := cmd.NewSigNozConfig(currCmd.Context(), logger, flags)
if err != nil {
return err
}
return runServer(currCmd.Context(), config, logger)
},
}
flags.RegisterFlags(serverCmd)
parentCmd.AddCommand(serverCmd)
}
func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) error {
// print the version
version.Info.PrettyPrint(config.Version)
signoz, err := signoz.New(
ctx,
config,
zeus.Config{},
noopzeus.NewProviderFactory(),
licensing.Config{},
func(_ sqlstore.SQLStore, _ zeus.Zeus, _ organization.Getter, _ analytics.Analytics) factory.ProviderFactory[licensing.Licensing, licensing.Config] {
return nooplicensing.NewFactory()
},
signoz.NewEmailingProviderFactories(),
signoz.NewCacheProviderFactories(),
signoz.NewWebProviderFactories(),
func(sqlstore sqlstore.SQLStore) factory.NamedMap[factory.ProviderFactory[sqlschema.SQLSchema, sqlschema.Config]] {
return signoz.NewSQLSchemaProviderFactories(sqlstore)
},
signoz.NewSQLStoreProviderFactories(),
signoz.NewTelemetryStoreProviderFactories(),
func(ctx context.Context, providerSettings factory.ProviderSettings, store authtypes.AuthNStore, licensing licensing.Licensing) (map[authtypes.AuthNProvider]authn.AuthN, error) {
return signoz.NewAuthNs(ctx, providerSettings, store, licensing)
},
func(ctx context.Context, sqlstore sqlstore.SQLStore) factory.ProviderFactory[authz.AuthZ, authz.Config] {
return openfgaauthz.NewProviderFactory(sqlstore, openfgaschema.NewSchema().Get(ctx))
},
func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, _ role.Module, queryParser queryparser.QueryParser, _ querier.Querier, _ licensing.Licensing) dashboard.Module {
return impldashboard.NewModule(impldashboard.NewStore(store), settings, analytics, orgGetter, queryParser)
},
func(_ licensing.Licensing) factory.ProviderFactory[gateway.Gateway, gateway.Config] {
return noopgateway.NewProviderFactory()
},
)
if err != nil {
logger.ErrorContext(ctx, "failed to create signoz", "error", err)
return err
}
server, err := app.NewServer(config, signoz)
if err != nil {
logger.ErrorContext(ctx, "failed to create server", "error", err)
return err
}
if err := server.Start(ctx); err != nil {
logger.ErrorContext(ctx, "failed to start server", "error", err)
return err
}
signoz.Start(ctx)
if err := signoz.Wait(ctx); err != nil {
logger.ErrorContext(ctx, "failed to start signoz", "error", err)
return err
}
err = server.Stop(ctx)
if err != nil {
logger.ErrorContext(ctx, "failed to stop server", "error", err)
return err
}
err = signoz.Stop(ctx)
if err != nil {
logger.ErrorContext(ctx, "failed to stop signoz", "error", err)
return err
}
return nil
}

View File

@@ -1,31 +0,0 @@
package cmd
import (
"context"
"log/slog"
"github.com/SigNoz/signoz/pkg/config"
"github.com/SigNoz/signoz/pkg/config/envprovider"
"github.com/SigNoz/signoz/pkg/config/fileprovider"
"github.com/SigNoz/signoz/pkg/signoz"
)
func NewSigNozConfig(ctx context.Context, logger *slog.Logger, flags signoz.DeprecatedFlags) (signoz.Config, error) {
config, err := signoz.NewConfig(
ctx,
logger,
config.ResolverConfig{
Uris: []string{"env:"},
ProviderFactories: []config.ProviderFactory{
envprovider.NewFactory(),
fileprovider.NewFactory(),
},
},
flags,
)
if err != nil {
return signoz.Config{}, err
}
return config, nil
}

View File

@@ -1,47 +0,0 @@
FROM node:18-bullseye AS build
WORKDIR /opt/
COPY ./frontend/ ./
ENV NODE_OPTIONS=--max-old-space-size=8192
RUN CI=1 yarn install
RUN CI=1 yarn build
FROM golang:1.24-bullseye
ARG OS="linux"
ARG TARGETARCH
ARG ZEUSURL
# This path is important for stacktraces
WORKDIR $GOPATH/src/github.com/signoz/signoz
WORKDIR /root
RUN set -eux; \
apt-get update; \
apt-get install -y --no-install-recommends \
g++ \
gcc \
libc6-dev \
make \
pkg-config \
; \
rm -rf /var/lib/apt/lists/*
COPY go.mod go.sum ./
RUN go mod download
COPY ./cmd/ ./cmd/
COPY ./ee/ ./ee/
COPY ./pkg/ ./pkg/
COPY ./templates/email /root/templates
COPY Makefile Makefile
RUN TARGET_DIR=/root ARCHS=${TARGETARCH} ZEUS_URL=${ZEUSURL} LICENSE_URL=${ZEUSURL}/api/v1 make go-build-enterprise-race
RUN mv /root/linux-${TARGETARCH}/signoz /root/signoz
COPY --from=build /opt/build ./web/
RUN chmod 755 /root /root/signoz
ENTRYPOINT ["/root/signoz", "server"]

View File

@@ -1,19 +0,0 @@
package main
import (
"log/slog"
"github.com/SigNoz/signoz/cmd"
"github.com/SigNoz/signoz/pkg/instrumentation"
)
func main() {
// initialize logger for logging in the cmd/ package. This logger is different from the logger used in the application.
logger := instrumentation.NewLogger(instrumentation.Config{Logs: instrumentation.LogsConfig{Level: slog.LevelInfo}})
// register a list of commands to the root command
registerServer(cmd.RootCmd, logger)
cmd.RegisterGenerate(cmd.RootCmd, logger)
cmd.Execute(logger)
}

View File

@@ -1,165 +0,0 @@
package main
import (
"context"
"log/slog"
"time"
"github.com/SigNoz/signoz/cmd"
"github.com/SigNoz/signoz/ee/authn/callbackauthn/oidccallbackauthn"
"github.com/SigNoz/signoz/ee/authn/callbackauthn/samlcallbackauthn"
"github.com/SigNoz/signoz/ee/authz/openfgaauthz"
"github.com/SigNoz/signoz/ee/authz/openfgaschema"
"github.com/SigNoz/signoz/ee/gateway/httpgateway"
enterpriselicensing "github.com/SigNoz/signoz/ee/licensing"
"github.com/SigNoz/signoz/ee/licensing/httplicensing"
"github.com/SigNoz/signoz/ee/modules/dashboard/impldashboard"
enterpriseapp "github.com/SigNoz/signoz/ee/query-service/app"
"github.com/SigNoz/signoz/ee/sqlschema/postgressqlschema"
"github.com/SigNoz/signoz/ee/sqlstore/postgressqlstore"
enterprisezeus "github.com/SigNoz/signoz/ee/zeus"
"github.com/SigNoz/signoz/ee/zeus/httpzeus"
"github.com/SigNoz/signoz/pkg/analytics"
"github.com/SigNoz/signoz/pkg/authn"
"github.com/SigNoz/signoz/pkg/authz"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/gateway"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
pkgimpldashboard "github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/sqlschema"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/sqlstore/sqlstorehook"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/version"
"github.com/SigNoz/signoz/pkg/zeus"
"github.com/spf13/cobra"
)
func registerServer(parentCmd *cobra.Command, logger *slog.Logger) {
var flags signoz.DeprecatedFlags
serverCmd := &cobra.Command{
Use: "server",
Short: "Run the SigNoz server",
FParseErrWhitelist: cobra.FParseErrWhitelist{UnknownFlags: true},
RunE: func(currCmd *cobra.Command, args []string) error {
config, err := cmd.NewSigNozConfig(currCmd.Context(), logger, flags)
if err != nil {
return err
}
return runServer(currCmd.Context(), config, logger)
},
}
flags.RegisterFlags(serverCmd)
parentCmd.AddCommand(serverCmd)
}
func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) error {
// print the version
version.Info.PrettyPrint(config.Version)
// add enterprise sqlstore factories to the community sqlstore factories
sqlstoreFactories := signoz.NewSQLStoreProviderFactories()
if err := sqlstoreFactories.Add(postgressqlstore.NewFactory(sqlstorehook.NewLoggingFactory(), sqlstorehook.NewInstrumentationFactory())); err != nil {
logger.ErrorContext(ctx, "failed to add postgressqlstore factory", "error", err)
return err
}
signoz, err := signoz.New(
ctx,
config,
enterprisezeus.Config(),
httpzeus.NewProviderFactory(),
enterpriselicensing.Config(24*time.Hour, 3),
func(sqlstore sqlstore.SQLStore, zeus zeus.Zeus, orgGetter organization.Getter, analytics analytics.Analytics) factory.ProviderFactory[licensing.Licensing, licensing.Config] {
return httplicensing.NewProviderFactory(sqlstore, zeus, orgGetter, analytics)
},
signoz.NewEmailingProviderFactories(),
signoz.NewCacheProviderFactories(),
signoz.NewWebProviderFactories(),
func(sqlstore sqlstore.SQLStore) factory.NamedMap[factory.ProviderFactory[sqlschema.SQLSchema, sqlschema.Config]] {
existingFactories := signoz.NewSQLSchemaProviderFactories(sqlstore)
if err := existingFactories.Add(postgressqlschema.NewFactory(sqlstore)); err != nil {
panic(err)
}
return existingFactories
},
sqlstoreFactories,
signoz.NewTelemetryStoreProviderFactories(),
func(ctx context.Context, providerSettings factory.ProviderSettings, store authtypes.AuthNStore, licensing licensing.Licensing) (map[authtypes.AuthNProvider]authn.AuthN, error) {
samlCallbackAuthN, err := samlcallbackauthn.New(ctx, store, licensing)
if err != nil {
return nil, err
}
oidcCallbackAuthN, err := oidccallbackauthn.New(store, licensing, providerSettings)
if err != nil {
return nil, err
}
authNs, err := signoz.NewAuthNs(ctx, providerSettings, store, licensing)
if err != nil {
return nil, err
}
authNs[authtypes.AuthNProviderSAML] = samlCallbackAuthN
authNs[authtypes.AuthNProviderOIDC] = oidcCallbackAuthN
return authNs, nil
},
func(ctx context.Context, sqlstore sqlstore.SQLStore) factory.ProviderFactory[authz.AuthZ, authz.Config] {
return openfgaauthz.NewProviderFactory(sqlstore, openfgaschema.NewSchema().Get(ctx))
},
func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, role role.Module, queryParser queryparser.QueryParser, querier querier.Querier, licensing licensing.Licensing) dashboard.Module {
return impldashboard.NewModule(pkgimpldashboard.NewStore(store), settings, analytics, orgGetter, role, queryParser, querier, licensing)
},
func(licensing licensing.Licensing) factory.ProviderFactory[gateway.Gateway, gateway.Config] {
return httpgateway.NewProviderFactory(licensing)
},
)
if err != nil {
logger.ErrorContext(ctx, "failed to create signoz", "error", err)
return err
}
server, err := enterpriseapp.NewServer(config, signoz)
if err != nil {
logger.ErrorContext(ctx, "failed to create server", "error", err)
return err
}
if err := server.Start(ctx); err != nil {
logger.ErrorContext(ctx, "failed to start server", "error", err)
return err
}
signoz.Start(ctx)
if err := signoz.Wait(ctx); err != nil {
logger.ErrorContext(ctx, "failed to start signoz", "error", err)
return err
}
err = server.Stop(ctx)
if err != nil {
logger.ErrorContext(ctx, "failed to stop server", "error", err)
return err
}
err = signoz.Stop(ctx)
if err != nil {
logger.ErrorContext(ctx, "failed to stop signoz", "error", err)
return err
}
return nil
}

View File

@@ -1,21 +0,0 @@
package cmd
import (
"log/slog"
"github.com/spf13/cobra"
)
func RegisterGenerate(parentCmd *cobra.Command, logger *slog.Logger) {
var generateCmd = &cobra.Command{
Use: "generate",
Short: "Generate artifacts",
SilenceUsage: true,
SilenceErrors: true,
CompletionOptions: cobra.CompletionOptions{DisableDefaultCmd: true},
}
registerGenerateOpenAPI(generateCmd)
parentCmd.AddCommand(generateCmd)
}

View File

@@ -1,41 +0,0 @@
package cmd
import (
"context"
"log/slog"
"github.com/SigNoz/signoz/pkg/instrumentation"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/version"
"github.com/spf13/cobra"
)
func registerGenerateOpenAPI(parentCmd *cobra.Command) {
openapiCmd := &cobra.Command{
Use: "openapi",
Short: "Generate OpenAPI schema for SigNoz",
RunE: func(currCmd *cobra.Command, args []string) error {
return runGenerateOpenAPI(currCmd.Context())
},
}
parentCmd.AddCommand(openapiCmd)
}
func runGenerateOpenAPI(ctx context.Context) error {
instrumentation, err := instrumentation.New(ctx, instrumentation.Config{Logs: instrumentation.LogsConfig{Level: slog.LevelInfo}}, version.Info, "signoz")
if err != nil {
return err
}
openapi, err := signoz.NewOpenAPI(ctx, instrumentation)
if err != nil {
return err
}
if err := openapi.CreateAndWrite("docs/api/openapi.yml"); err != nil {
return err
}
return nil
}

View File

@@ -1,33 +0,0 @@
package cmd
import (
"log/slog"
"os"
"github.com/SigNoz/signoz/pkg/version"
"github.com/spf13/cobra"
"go.uber.org/zap" //nolint:depguard
)
var RootCmd = &cobra.Command{
Use: "signoz",
Short: "OpenTelemetry-Native Logs, Metrics and Traces in a single pane",
Version: version.Info.Version(),
SilenceUsage: true,
SilenceErrors: true,
CompletionOptions: cobra.CompletionOptions{DisableDefaultCmd: true},
}
func Execute(logger *slog.Logger) {
zapLogger := newZapLogger()
zap.ReplaceGlobals(zapLogger)
defer func() {
_ = zapLogger.Sync()
}()
err := RootCmd.Execute()
if err != nil {
logger.ErrorContext(RootCmd.Context(), "error running command", "error", err)
os.Exit(1)
}
}

View File

@@ -1,110 +0,0 @@
package cmd
import (
"context"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"go.uber.org/zap" //nolint:depguard
"go.uber.org/zap/zapcore" //nolint:depguard
)
// Deprecated: Use `NewLogger` from `pkg/instrumentation` instead.
func newZapLogger() *zap.Logger {
config := zap.NewProductionConfig()
config.EncoderConfig.TimeKey = "timestamp"
config.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder
// Extract sampling config before building the logger.
// We need to disable sampling in the config and apply it manually later
// to ensure correct core ordering. See filteringCore documentation for details.
samplerConfig := config.Sampling
config.Sampling = nil
logger, _ := config.Build()
// Wrap with custom core wrapping to filter certain log entries.
// The order of wrapping is important:
// 1. First wrap with filteringCore
// 2. Then wrap with sampler
//
// This creates the call chain: sampler -> filteringCore -> ioCore
//
// During logging:
// - sampler.Check decides whether to sample the log entry
// - If sampled, filteringCore.Check is called
// - filteringCore adds itself to CheckedEntry.cores
// - All cores in CheckedEntry.cores have their Write method called
// - filteringCore.Write can now filter the entry before passing to ioCore
//
// If we didn't disable the sampler above, filteringCore would have wrapped
// sampler. By calling sampler.Check we would have allowed it to call
// ioCore.Check that adds itself to CheckedEntry.cores. Then ioCore.Write
// would have bypassed our checks, making filtering impossible.
return logger.WithOptions(zap.WrapCore(func(core zapcore.Core) zapcore.Core {
core = &filteringCore{core}
if samplerConfig != nil {
core = zapcore.NewSamplerWithOptions(
core,
time.Second,
samplerConfig.Initial,
samplerConfig.Thereafter,
)
}
return core
}))
}
// filteringCore wraps a zapcore.Core to filter out log entries based on a
// custom logic.
//
// Note: This core must be positioned before the sampler in the core chain
// to ensure Write is called. See newZapLogger for ordering details.
type filteringCore struct {
zapcore.Core
}
// filter determines whether a log entry should be written based on its fields.
// Returns false if the entry should be suppressed, true otherwise.
//
// Current filters:
// - context.Canceled: These are expected errors from cancelled operations,
// and create noise in logs.
func (c *filteringCore) filter(fields []zapcore.Field) bool {
for _, field := range fields {
if field.Type == zapcore.ErrorType {
if loggedErr, ok := field.Interface.(error); ok {
// Suppress logs containing context.Canceled errors
if errors.Is(loggedErr, context.Canceled) {
return false
}
}
}
}
return true
}
// With implements zapcore.Core.With
// It returns a new copy with the added context.
func (c *filteringCore) With(fields []zapcore.Field) zapcore.Core {
return &filteringCore{c.Core.With(fields)}
}
// Check implements zapcore.Core.Check.
// It adds this core to the CheckedEntry if the log level is enabled,
// ensuring that Write will be called for this entry.
func (c *filteringCore) Check(ent zapcore.Entry, ce *zapcore.CheckedEntry) *zapcore.CheckedEntry {
if c.Enabled(ent.Level) {
return ce.AddCore(ent, c)
}
return ce
}
// Write implements zapcore.Core.Write.
// It filters log entries based on their fields before delegating to the wrapped core.
func (c *filteringCore) Write(ent zapcore.Entry, fields []zapcore.Field) error {
if !c.filter(fields) {
return nil
}
return c.Core.Write(ent, fields)
}

View File

@@ -1,15 +1,8 @@
##################### SigNoz Configuration Example #####################
#
#
# Do not modify this file
#
##################### Global #####################
global:
# the url under which the signoz apiserver is externally reachable.
external_url: <unset>
# the url where the SigNoz backend receives telemetry data (traces, metrics, logs) from instrumented applications.
ingestion_url: <unset>
##################### Version #####################
version:
banner:
@@ -54,10 +47,10 @@ cache:
provider: memory
# memory: Uses in-memory caching.
memory:
# The number of keys to track access frequency for (recommended: 10x the number of items in the cache)
num_counters: 100000
# The maximum total cost (in bytes) that the bounded cache can hold
max_cost: 67108864
# Time-to-live for cache entries in memory. Specify the duration in ns
ttl: 60000000000
# The interval at which the cache will be cleaned up
cleanupInterval: 1m
# redis: Uses Redis as the caching backend.
redis:
# The hostname or IP address of the Redis server.
@@ -65,7 +58,7 @@ cache:
# The port on which the Redis server is running. Default is usually 6379.
port: 6379
# The password for authenticating with the Redis server, if required.
password:
password:
# The Redis database number to use
db: 0
@@ -78,10 +71,6 @@ sqlstore:
sqlite:
# The path to the SQLite database file.
path: /var/lib/signoz/signoz.db
# Mode is the mode to use for the sqlite database.
mode: delete
# BusyTimeout is the timeout for the sqlite database to wait for a lock.
busy_timeout: 10s
##################### APIServer #####################
apiserver:
@@ -101,15 +90,6 @@ apiserver:
- /api/v1/version
- /
##################### Querier #####################
querier:
# The TTL for cached query results.
cache_ttl: 168h
# The interval for recent data that should not be cached.
flux_interval: 5m
# The maximum number of concurrent queries for missing ranges.
max_concurrent_queries: 4
##################### TelemetryStore #####################
telemetrystore:
# Maximum number of idle connections in the connection pool.
@@ -123,17 +103,13 @@ telemetrystore:
clickhouse:
# The DSN to use for clickhouse.
dsn: tcp://localhost:9000
# The cluster name to use for clickhouse.
cluster: cluster
# The query settings for clickhouse.
settings:
max_execution_time: 0
max_execution_time_leaf: 0
timeout_before_checking_execution_speed: 0
max_bytes_to_read: 0
max_result_rows: 0
ignore_data_skipping_indices: ""
secondary_indices_enable_bulk_filtering: false
max_result_rows_for_ch_query: 0
##################### Prometheus #####################
prometheus:
@@ -148,7 +124,10 @@ prometheus:
##################### Alertmanager #####################
alertmanager:
# Specifies the alertmanager provider to use.
provider: signoz
provider: legacy
legacy:
# The API URL (with prefix) of the legacy Alertmanager instance.
api_url: http://localhost:9093/api
signoz:
# The poll interval for periodically syncing the alertmanager with the config in the store.
poll_interval: 1m
@@ -185,109 +164,3 @@ alertmanager:
maintenance_interval: 15m
# Retention of the notification logs.
retention: 120h
##################### Emailing #####################
emailing:
# Whether to enable emailing.
enabled: false
templates:
# The directory containing the email templates. This directory should contain a list of files defined at pkg/types/emailtypes/template.go.
directory: /opt/signoz/conf/templates/email
smtp:
# The SMTP server address.
address: localhost:25
# The email address to use for the SMTP server.
from:
# The hello message to use for the SMTP server.
hello:
# The static headers to send with the email.
headers: {}
auth:
# The username to use for the SMTP server.
username:
# The password to use for the SMTP server.
password:
# The secret to use for the SMTP server.
secret:
# The identity to use for the SMTP server.
identity:
tls:
# Whether to enable TLS. It should be false in most cases since the authentication mechanism should use the STARTTLS extension instead.
enabled: false
# Whether to skip TLS verification.
insecure_skip_verify: false
# The path to the CA file.
ca_file_path:
# The path to the key file.
key_file_path:
# The path to the certificate file.
cert_file_path:
##################### Sharder (experimental) #####################
sharder:
# Specifies the sharder provider to use.
provider: noop
single:
# The org id to which this instance belongs to.
org_id: org_id
##################### Analytics #####################
analytics:
# Whether to enable analytics.
enabled: false
segment:
# The key to use for segment.
key: ""
##################### StatsReporter #####################
statsreporter:
# Whether to enable stats reporter. This is used to provide valuable insights to the SigNoz team. It does not collect any sensitive/PII data.
enabled: true
# The interval at which the stats are collected.
interval: 6h
collect:
# Whether to collect identities and traits (emails).
identities: true
##################### Gateway (License only) #####################
gateway:
# The URL of the gateway's api.
url: http://localhost:8080
##################### Tokenizer #####################
tokenizer:
# Specifies the tokenizer provider to use.
provider: jwt
lifetime:
# The duration for which a user can be idle before being required to authenticate.
idle: 168h
# The duration for which a user can remain logged in before being asked to login.
max: 720h
rotation:
# The interval to rotate tokens in.
interval: 30m
# The duration for which the previous token pair remains valid after a token pair is rotated.
duration: 60s
jwt:
# The secret to sign the JWT tokens.
secret: secret
opaque:
gc:
# The interval to perform garbage collection.
interval: 1h
token:
# The maximum number of tokens a user can have. This limits the number of concurrent sessions a user can have.
max_per_user: 5
##################### Flagger #####################
flagger:
# Config holds the overrides for the feature flags that come directly from the config file.
config:
boolean:
use_span_metrics: true
interpolation_enabled: false
kafka_span_eval: false
string:
float:
integer:
object:

View File

@@ -1,4 +0,0 @@
{
"url": "https://context7.com/signoz/signoz",
"public_key": "pk_6g9GfjdkuPEIDuTGAxnol"
}

View File

@@ -11,7 +11,7 @@ x-common: &common
max-file: "3"
x-clickhouse-defaults: &clickhouse-defaults
!!merge <<: *common
image: clickhouse/clickhouse-server:25.5.6
image: clickhouse/clickhouse-server:24.1.2-alpine
tty: true
deploy:
labels:
@@ -37,11 +37,9 @@ x-clickhouse-defaults: &clickhouse-defaults
nofile:
soft: 262144
hard: 262144
environment:
- CLICKHOUSE_SKIP_USER_SETUP=1
x-zookeeper-defaults: &zookeeper-defaults
!!merge <<: *common
image: signoz/zookeeper:3.7.1
image: bitnami/zookeeper:3.7.1
user: root
deploy:
labels:
@@ -65,7 +63,7 @@ x-db-depend: &db-depend
services:
init-clickhouse:
!!merge <<: *common
image: clickhouse/clickhouse-server:25.5.6
image: clickhouse/clickhouse-server:24.1.2-alpine
command:
- bash
- -c
@@ -176,9 +174,11 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.108.0
image: signoz/signoz:v0.80.0
command:
- --config=/root/config/prometheus.yml
- --use-logs-new-schema=true
- --use-trace-new-schema=true
ports:
- "8080:8080" # signoz port
# - "6060:6060" # pprof port
@@ -195,8 +195,7 @@ services:
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-swarm
- SIGNOZ_TOKENIZER_JWT_SECRET=secret
- DOT_METRICS_ENABLED=true
- SIGNOZ_JWT_SECRET=secret
healthcheck:
test:
- CMD
@@ -209,7 +208,7 @@ services:
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:v0.129.12
image: signoz/signoz-otel-collector:v0.111.39
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
@@ -233,7 +232,7 @@ services:
- signoz
schema-migrator:
!!merge <<: *common
image: signoz/signoz-schema-migrator:v0.129.12
image: signoz/signoz-schema-migrator:v0.111.39
deploy:
restart_policy:
condition: on-failure

View File

@@ -11,7 +11,7 @@ x-common: &common
max-file: "3"
x-clickhouse-defaults: &clickhouse-defaults
!!merge <<: *common
image: clickhouse/clickhouse-server:25.5.6
image: clickhouse/clickhouse-server:24.1.2-alpine
tty: true
deploy:
labels:
@@ -36,11 +36,9 @@ x-clickhouse-defaults: &clickhouse-defaults
nofile:
soft: 262144
hard: 262144
environment:
- CLICKHOUSE_SKIP_USER_SETUP=1
x-zookeeper-defaults: &zookeeper-defaults
!!merge <<: *common
image: signoz/zookeeper:3.7.1
image: bitnami/zookeeper:3.7.1
user: root
deploy:
labels:
@@ -62,7 +60,7 @@ x-db-depend: &db-depend
services:
init-clickhouse:
!!merge <<: *common
image: clickhouse/clickhouse-server:25.5.6
image: clickhouse/clickhouse-server:24.1.2-alpine
command:
- bash
- -c
@@ -102,32 +100,28 @@ services:
# - "9000:9000"
# - "8123:8123"
# - "9181:9181"
configs:
- source: clickhouse-config
target: /etc/clickhouse-server/config.xml
- source: clickhouse-users
target: /etc/clickhouse-server/users.xml
- source: clickhouse-custom-function
target: /etc/clickhouse-server/custom-function.xml
- source: clickhouse-cluster
target: /etc/clickhouse-server/config.d/cluster.xml
volumes:
- ../common/clickhouse/config.xml:/etc/clickhouse-server/config.xml
- ../common/clickhouse/users.xml:/etc/clickhouse-server/users.xml
- ../common/clickhouse/custom-function.xml:/etc/clickhouse-server/custom-function.xml
- ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
- ../common/clickhouse/cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
- clickhouse:/var/lib/clickhouse/
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:v0.108.0
image: signoz/signoz:v0.80.0
command:
- --config=/root/config/prometheus.yml
- --use-logs-new-schema=true
- --use-trace-new-schema=true
ports:
- "8080:8080" # signoz port
# - "6060:6060" # pprof port
volumes:
- ../common/signoz/prometheus.yml:/root/config/prometheus.yml
- ../common/dashboards:/root/config/dashboards
- sqlite:/var/lib/signoz/
configs:
- source: signoz-prometheus-config
target: /root/config/prometheus.yml
environment:
- SIGNOZ_ALERTMANAGER_PROVIDER=signoz
- SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://clickhouse:9000
@@ -137,7 +131,6 @@ services:
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-swarm
- DOT_METRICS_ENABLED=true
healthcheck:
test:
- CMD
@@ -150,17 +143,15 @@ services:
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:v0.129.12
image: signoz/signoz-otel-collector:v0.111.39
command:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
- --copy-path=/var/tmp/collector-config.yaml
- --feature-gates=-pkg.translator.prometheus.NormalizeName
configs:
- source: otel-collector-config
target: /etc/otel-collector-config.yaml
- source: otel-manager-config
target: /etc/manager-config.yaml
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ../common/signoz/otel-collector-opamp-config.yaml:/etc/manager-config.yaml
environment:
- OTEL_RESOURCE_ATTRIBUTES=host.name={{.Node.Hostname}},os.type={{.Node.Platform.OS}}
- LOW_CARDINAL_EXCEPTION_GROUPING=false
@@ -176,7 +167,7 @@ services:
- signoz
schema-migrator:
!!merge <<: *common
image: signoz/signoz-schema-migrator:v0.129.12
image: signoz/signoz-schema-migrator:v0.111.39
deploy:
restart_policy:
condition: on-failure
@@ -197,24 +188,3 @@ volumes:
name: signoz-sqlite
zookeeper-1:
name: signoz-zookeeper-1
configs:
clickhouse-config:
file: ../common/clickhouse/config.xml
clickhouse-users:
file: ../common/clickhouse/users.xml
clickhouse-custom-function:
file: ../common/clickhouse/custom-function.xml
clickhouse-cluster:
file: ../common/clickhouse/cluster.xml
signoz-prometheus-config:
file: ../common/signoz/prometheus.yml
# If you have multiple dashboard files, you can list them individually:
# dashboard-foo:
# file: ../common/dashboards/foo.json
# dashboard-bar:
# file: ../common/dashboards/bar.json
otel-collector-config:
file: ./otel-collector-config.yaml
otel-manager-config:
file: ../common/signoz/otel-collector-opamp-config.yaml

View File

@@ -1,10 +1,3 @@
connectors:
signozmeter:
metrics_flush_interval: 1h
dimensions:
- name: service.name
- name: deployment.environment
- name: host.name
receivers:
otlp:
protocols:
@@ -28,16 +21,12 @@ processors:
send_batch_size: 10000
send_batch_max_size: 11000
timeout: 10s
batch/meter:
send_batch_max_size: 25000
send_batch_size: 20000
timeout: 1s
resourcedetection:
# Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
detectors: [env, system]
timeout: 2s
signozspanmetrics/delta:
metrics_exporter: signozclickhousemetrics
metrics_exporter: clickhousemetricswrite
metrics_flush_interval: 60s
latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
dimensions_cache_size: 100000
@@ -71,21 +60,25 @@ exporters:
datasource: tcp://clickhouse:9000/signoz_traces
low_cardinal_exception_grouping: ${env:LOW_CARDINAL_EXCEPTION_GROUPING}
use_new_schema: true
clickhousemetricswrite:
endpoint: tcp://clickhouse:9000/signoz_metrics
resource_to_telemetry_conversion:
enabled: true
clickhousemetricswrite/prometheus:
endpoint: tcp://clickhouse:9000/signoz_metrics
signozclickhousemetrics:
dsn: tcp://clickhouse:9000/signoz_metrics
clickhouselogsexporter:
dsn: tcp://clickhouse:9000/signoz_logs
timeout: 10s
use_new_schema: true
signozclickhousemeter:
dsn: tcp://clickhouse:9000/signoz_meter
timeout: 45s
sending_queue:
enabled: false
# debug: {}
service:
telemetry:
logs:
encoding: json
metrics:
address: 0.0.0.0:8888
extensions:
- health_check
- pprof
@@ -93,20 +86,16 @@ service:
traces:
receivers: [otlp]
processors: [signozspanmetrics/delta, batch]
exporters: [clickhousetraces, signozmeter]
exporters: [clickhousetraces]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [signozclickhousemetrics, signozmeter]
exporters: [clickhousemetricswrite, signozclickhousemetrics]
metrics/prometheus:
receivers: [prometheus]
processors: [batch]
exporters: [signozclickhousemetrics, signozmeter]
exporters: [clickhousemetricswrite/prometheus, signozclickhousemetrics]
logs:
receivers: [otlp]
processors: [batch]
exporters: [clickhouselogsexporter, signozmeter]
metrics/meter:
receivers: [signozmeter]
processors: [batch/meter]
exporters: [signozclickhousemeter]
exporters: [clickhouselogsexporter]

View File

@@ -10,7 +10,7 @@ x-common: &common
x-clickhouse-defaults: &clickhouse-defaults
!!merge <<: *common
# adding a non-LTS version due to this fix https://github.com/ClickHouse/ClickHouse/commit/32caf8716352f45c1b617274c7508c86b7d1afab
image: clickhouse/clickhouse-server:25.5.6
image: clickhouse/clickhouse-server:24.1.2-alpine
tty: true
labels:
signoz.io/scrape: "true"
@@ -40,11 +40,9 @@ x-clickhouse-defaults: &clickhouse-defaults
nofile:
soft: 262144
hard: 262144
environment:
- CLICKHOUSE_SKIP_USER_SETUP=1
x-zookeeper-defaults: &zookeeper-defaults
!!merge <<: *common
image: signoz/zookeeper:3.7.1
image: bitnami/zookeeper:3.7.1
user: root
labels:
signoz.io/scrape: "true"
@@ -67,7 +65,7 @@ x-db-depend: &db-depend
services:
init-clickhouse:
!!merge <<: *common
image: clickhouse/clickhouse-server:25.5.6
image: clickhouse/clickhouse-server:24.1.2-alpine
container_name: signoz-init-clickhouse
command:
- bash
@@ -179,10 +177,12 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${VERSION:-v0.108.0}
image: signoz/signoz:${VERSION:-v0.80.0}
container_name: signoz
command:
- --config=/root/config/prometheus.yml
- --use-logs-new-schema=true
- --use-trace-new-schema=true
ports:
- "8080:8080" # signoz port
# - "6060:6060" # pprof port
@@ -199,7 +199,6 @@ services:
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-standalone-amd
- DOT_METRICS_ENABLED=true
healthcheck:
test:
- CMD
@@ -213,7 +212,7 @@ services:
# TODO: support otel-collector multiple replicas. Nginx/Traefik for loadbalancing?
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.129.12}
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.111.39}
container_name: signoz-otel-collector
command:
- --config=/etc/otel-collector-config.yaml
@@ -239,7 +238,7 @@ services:
condition: service_healthy
schema-migrator-sync:
!!merge <<: *common
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.12}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.39}
container_name: schema-migrator-sync
command:
- sync
@@ -250,7 +249,7 @@ services:
condition: service_healthy
schema-migrator-async:
!!merge <<: *db-depend
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.12}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.39}
container_name: schema-migrator-async
command:
- async

View File

@@ -9,7 +9,8 @@ x-common: &common
max-file: "3"
x-clickhouse-defaults: &clickhouse-defaults
!!merge <<: *common
image: clickhouse/clickhouse-server:25.5.6
# adding a non-LTS version due to this fix https://github.com/ClickHouse/ClickHouse/commit/32caf8716352f45c1b617274c7508c86b7d1afab
image: clickhouse/clickhouse-server:24.1.2-alpine
tty: true
labels:
signoz.io/scrape: "true"
@@ -35,11 +36,9 @@ x-clickhouse-defaults: &clickhouse-defaults
nofile:
soft: 262144
hard: 262144
environment:
- CLICKHOUSE_SKIP_USER_SETUP=1
x-zookeeper-defaults: &zookeeper-defaults
!!merge <<: *common
image: signoz/zookeeper:3.7.1
image: bitnami/zookeeper:3.7.1
user: root
labels:
signoz.io/scrape: "true"
@@ -62,7 +61,7 @@ x-db-depend: &db-depend
services:
init-clickhouse:
!!merge <<: *common
image: clickhouse/clickhouse-server:25.5.6
image: clickhouse/clickhouse-server:24.1.2-alpine
container_name: signoz-init-clickhouse
command:
- bash
@@ -111,10 +110,12 @@ services:
# - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
signoz:
!!merge <<: *db-depend
image: signoz/signoz:${VERSION:-v0.108.0}
image: signoz/signoz:${VERSION:-v0.80.0}
container_name: signoz
command:
- --config=/root/config/prometheus.yml
- --use-logs-new-schema=true
- --use-trace-new-schema=true
ports:
- "8080:8080" # signoz port
# - "6060:6060" # pprof port
@@ -131,7 +132,6 @@ services:
- GODEBUG=netdns=go
- TELEMETRY_ENABLED=true
- DEPLOYMENT_TYPE=docker-standalone-amd
- DOT_METRICS_ENABLED=true
healthcheck:
test:
- CMD
@@ -144,7 +144,7 @@ services:
retries: 3
otel-collector:
!!merge <<: *db-depend
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.129.12}
image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.111.39}
container_name: signoz-otel-collector
command:
- --config=/etc/otel-collector-config.yaml
@@ -166,7 +166,7 @@ services:
condition: service_healthy
schema-migrator-sync:
!!merge <<: *common
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.12}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.39}
container_name: schema-migrator-sync
command:
- sync
@@ -178,7 +178,7 @@ services:
restart: on-failure
schema-migrator-async:
!!merge <<: *db-depend
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.12}
image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.111.39}
container_name: schema-migrator-async
command:
- async

View File

@@ -1,10 +1,3 @@
connectors:
signozmeter:
metrics_flush_interval: 1h
dimensions:
- name: service.name
- name: deployment.environment
- name: host.name
receivers:
otlp:
protocols:
@@ -28,16 +21,12 @@ processors:
send_batch_size: 10000
send_batch_max_size: 11000
timeout: 10s
batch/meter:
send_batch_max_size: 25000
send_batch_size: 20000
timeout: 1s
resourcedetection:
# Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
detectors: [env, system]
timeout: 2s
signozspanmetrics/delta:
metrics_exporter: signozclickhousemetrics
metrics_exporter: clickhousemetricswrite
metrics_flush_interval: 60s
latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
dimensions_cache_size: 100000
@@ -71,21 +60,25 @@ exporters:
datasource: tcp://clickhouse:9000/signoz_traces
low_cardinal_exception_grouping: ${env:LOW_CARDINAL_EXCEPTION_GROUPING}
use_new_schema: true
clickhousemetricswrite:
endpoint: tcp://clickhouse:9000/signoz_metrics
resource_to_telemetry_conversion:
enabled: true
clickhousemetricswrite/prometheus:
endpoint: tcp://clickhouse:9000/signoz_metrics
signozclickhousemetrics:
dsn: tcp://clickhouse:9000/signoz_metrics
clickhouselogsexporter:
dsn: tcp://clickhouse:9000/signoz_logs
timeout: 10s
use_new_schema: true
signozclickhousemeter:
dsn: tcp://clickhouse:9000/signoz_meter
timeout: 45s
sending_queue:
enabled: false
# debug: {}
service:
telemetry:
logs:
encoding: json
metrics:
address: 0.0.0.0:8888
extensions:
- health_check
- pprof
@@ -93,20 +86,16 @@ service:
traces:
receivers: [otlp]
processors: [signozspanmetrics/delta, batch]
exporters: [clickhousetraces, signozmeter]
exporters: [clickhousetraces]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [signozclickhousemetrics, signozmeter]
exporters: [clickhousemetricswrite, signozclickhousemetrics]
metrics/prometheus:
receivers: [prometheus]
processors: [batch]
exporters: [signozclickhousemetrics, signozmeter]
exporters: [clickhousemetricswrite/prometheus, signozclickhousemetrics]
logs:
receivers: [otlp]
processors: [batch]
exporters: [clickhouselogsexporter, signozmeter]
metrics/meter:
receivers: [signozmeter]
processors: [batch/meter]
exporters: [signozclickhousemeter]
exporters: [clickhouselogsexporter]

View File

@@ -93,7 +93,7 @@ check_os() {
;;
Red\ Hat*)
desired_os=1
os="rhel"
os="red hat"
package_manager="yum"
;;
CentOS*)

File diff suppressed because it is too large

View File

@@ -13,6 +13,8 @@ Before diving in, make sure you have these tools installed:
- Download from [go.dev/dl](https://go.dev/dl/)
- Check [go.mod](../../go.mod#L3) for the minimum version
- **GCC** - Required for CGO dependencies
- Download from [gcc.gnu.org](https://gcc.gnu.org/)
- **Node** - Powers our frontend
- Download from [nodejs.org](https://nodejs.org)
@@ -42,35 +44,20 @@ Before diving in, make sure you have these tools installed:
SigNoz has three main components: Clickhouse, Backend, and Frontend. Let's set them up one by one.
### 1. Setting up ClickHouse
### 1. Setting up Clickhouse
First, we need to get ClickHouse running:
First, we need to get Clickhouse running:
```bash
make devenv-clickhouse
```
This command:
- Starts ClickHouse in a single-shard, single-replica cluster
- Starts Clickhouse in a single-shard, single-replica cluster
- Sets up Zookeeper
- Runs the latest schema migrations
### 2. Setting up SigNoz OpenTelemetry Collector
Next, start the OpenTelemetry Collector to receive telemetry data:
```bash
make devenv-signoz-otel-collector
```
This command:
- Starts the SigNoz OpenTelemetry Collector
- Listens on port 4317 (gRPC) and 4318 (HTTP) for incoming telemetry data
- Forwards data to ClickHouse for storage
> 💡 **Quick Setup**: Use `make devenv-up` to start both ClickHouse and OTel Collector together
### 3. Starting the Backend
### 2. Starting the Backend
1. Run the backend server:
```bash
@@ -86,24 +73,19 @@ This command:
> 💡 **Tip**: The API server runs at `http://localhost:8080/` by default
### 4. Setting up the Frontend
### 3. Setting up the Frontend
1. Navigate to the frontend directory:
```bash
cd frontend
```
2. Install dependencies:
1. Install dependencies:
```bash
yarn install
```
3. Create a `.env` file in this directory:
2. Create a `.env` file in the `frontend` directory:
```env
FRONTEND_API_ENDPOINT=http://localhost:8080
```
4. Start the development server:
3. Start the development server:
```bash
yarn dev
```
@@ -111,25 +93,3 @@ This command:
> 💡 **Tip**: `yarn dev` will automatically rebuild when you make changes to the code
Now you're all set to start developing! Happy coding! 🎉
## Verifying Your Setup
To verify everything is working correctly:
1. **Check ClickHouse**: `curl http://localhost:8123/ping` (should return "Ok.")
2. **Check OTel Collector**: `curl http://localhost:13133` (should return health status)
3. **Check Backend**: `curl http://localhost:8080/api/v1/health` (should return `{"status":"ok"}`)
4. **Check Frontend**: Open `http://localhost:3301` in your browser
## How to send test data?
You can now send telemetry data to your local SigNoz instance:
- **OTLP gRPC**: `localhost:4317`
- **OTLP HTTP**: `localhost:4318`
For example, using `curl` to send a test trace:
```bash
curl -X POST http://localhost:4318/v1/traces \
-H "Content-Type: application/json" \
-d '{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"test-service"}}]},"scopeSpans":[{"spans":[{"traceId":"12345678901234567890123456789012","spanId":"1234567890123456","name":"test-span","startTimeUnixNano":"1609459200000000000","endTimeUnixNano":"1609459201000000000"}]}]}]}'
```

View File

@@ -1,51 +0,0 @@
# Endpoint
This guide outlines the recommended approach for designing endpoints, with a focus on entity relationships, RESTful structure, and examples from the codebase.
## How do we design an endpoint?
### Understand the core entities and their relationships
Start with understanding the core entities and their relationships. For example:
- **Organization**: an organization can have multiple users
### Structure Endpoints RESTfully
Endpoints should reflect the resource hierarchy and follow RESTful conventions. Use clear, **pluralized resource names** and versioning. For example:
- `POST /v1/organizations` — Create an organization
- `GET /v1/organizations/:id` — Get an organization by id
- `DELETE /v1/organizations/:id` — Delete an organization by id
- `PUT /v1/organizations/:id` — Update an organization by id
- `GET /v1/organizations/:id/users` — Get all users in an organization
- `GET /v1/organizations/me/users` — Get all users in my organization
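To make this concrete, here is a minimal sketch of how such routes could be registered with Gorilla `mux` (the router used by the apiserver); the handler functions named below are hypothetical placeholders:
```go
r := mux.NewRouter()
r.HandleFunc("/v1/organizations", createOrganization).Methods(http.MethodPost)
// Register the literal "me" route before the "{id}" routes: mux matches routes
// in the order they were added, so "{id}" would otherwise capture "me".
r.HandleFunc("/v1/organizations/me/users", listMyOrganizationUsers).Methods(http.MethodGet)
r.HandleFunc("/v1/organizations/{id}", getOrganization).Methods(http.MethodGet)
r.HandleFunc("/v1/organizations/{id}", updateOrganization).Methods(http.MethodPut)
r.HandleFunc("/v1/organizations/{id}", deleteOrganization).Methods(http.MethodDelete)
r.HandleFunc("/v1/organizations/{id}/users", listOrganizationUsers).Methods(http.MethodGet)
```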
Think in terms of resource navigation in a file system. For example, to find your organization, you would navigate from the root of the file system into the `organizations` directory. To find a user in an organization, you would navigate into the `organizations` directory and then into the directory named after the organization's id.
```bash
v1/
├── organizations/
│   └── 123/
│       └── users/
```
`me` endpoints are special. They are used to determine the actual id via some auth/external mechanism. For `me` endpoints, think of the `me` directory being symlinked to your organization directory. For example, if you are a part of the organization `123`, the `me` directory will be symlinked to `/v1/organizations/123`:
```bash
v1/
├── organizations/
│   ├── me/ -> symlink to /v1/organizations/123
│   │   └── users/
│   └── 123/
│       └── users/
```
> 💡 **Note**: There are various ways to structure endpoints. Some prefer to use singular resource names instead of `me`. Others prefer to use singular resource names for all endpoints. We have, however, chosen to standardize our endpoints in the manner described above.
## What should I remember?
- Use clear, **plural resource names**
- Use `me` endpoints for determining the actual id via some auth mechanism
> 💡 **Note**: When in doubt, diagram the relationships and walk through the user flows as if navigating a file system. This will help you design endpoints that are both logical and user-friendly.

View File

@@ -1,103 +0,0 @@
# Errors
SigNoz includes its own structured [errors](/pkg/errors/errors.go) package. It's built on top of Go's `error` interface, extending it to add additional context that helps provide more meaningful error messages throughout the application.
## How to use it?
To use the SigNoz structured errors package, use these functions instead of the standard library alternatives:
```go
// Instead of errors.New()
errors.New(typ, code, message)
// Instead of fmt.Errorf()
errors.Newf(typ, code, message, args...)
```
### Typ
The Typ (read as Type, defined as `typ`) is used to categorize errors across the codebase and is loosely coupled with HTTP/GRPC status codes. All predefined types can be found in [pkg/errors/type.go](/pkg/errors/type.go). For example:
- `TypeInvalidInput` - Indicates invalid input was provided
- `TypeNotFound` - Indicates a resource was not found
By design, `typ` is unexported and cannot be declared outside of the [errors](/pkg/errors/errors.go) package. This ensures that it is consistent across the codebase and is used in a way that is meaningful.
### Code
Codes are used to provide more granular categorization within types. For instance, a type of `TypeInvalidInput` might have codes like `CodeInvalidEmail` or `CodeInvalidPassword`.
To create new error codes, use the `errors.MustNewCode` function:
```go
var (
CodeThingAlreadyExists = errors.MustNewCode("thing_already_exists")
CodeThingNotFound = errors.MustNewCode("thing_not_found")
)
```
> 💡 **Note**: Error codes must match the regex `^[a-z_]+$` otherwise the code will panic.
## Show me some examples
### Using the error
A basic example of using the error:
```go
var (
CodeThingAlreadyExists = errors.MustNewCode("thing_already_exists")
)
func CreateThing(id string) error {
_, err := thing.GetFromStore(id)
if err != nil {
if errors.Ast(err, errors.TypeNotFound) {
// thing was not found, create it
return thing.Create(id)
}
// something else went wrong, wrap the error with more context
return errors.Wrapf(err, errors.TypeInternal, errors.CodeUnknown, "failed to get thing from store")
}
return errors.Newf(errors.TypeAlreadyExists, CodeThingAlreadyExists, "thing with id %s already exists", id)
}
```
### Changing the error
Sometimes you may want to change the error while preserving the message:
```go
func GetUserSecurely(id string) (*User, error) {
user, err := repository.GetUser(id)
if err != nil {
if errors.Ast(err, errors.TypeNotFound) {
// Convert NotFound to Forbidden for security reasons
return nil, errors.New(errors.TypeForbidden, errors.CodeAccessDenied, "access denied to requested resource")
}
return nil, err
}
return user, nil
}
```
## Why do we need this?
In a large codebase like SigNoz, error handling is critical for maintaining reliability, debuggability, and a good user experience. We believe that it is the **responsibility of a function** to return **well-defined** errors that **accurately describe what went wrong**. With our structured error system:
- Functions can create precise errors with appropriate additional context
- Callers can make informed decisions based on the additional context
- Error context is preserved and enhanced as it moves up the call stack
The caller (which can be another function, an HTTP/gRPC handler, or something else entirely) can then choose to use this error to take appropriate actions, such as:
- A function can branch into different paths based on the context
- An HTTP/gRPC handler can derive the correct status code and message from the error and send it to the client
- Logging systems can capture structured error information for better diagnostics
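As an illustration, here is a minimal sketch of how an HTTP layer could derive a status code from the error's type; `statusCode` is a hypothetical helper, not the actual rendering code:
```go
func statusCode(err error) int {
	switch {
	case errors.Ast(err, errors.TypeInvalidInput):
		return http.StatusBadRequest
	case errors.Ast(err, errors.TypeNotFound):
		return http.StatusNotFound
	case errors.Ast(err, errors.TypeForbidden):
		return http.StatusForbidden
	case errors.Ast(err, errors.TypeAlreadyExists):
		return http.StatusConflict
	default:
		// Unrecognized types are treated as internal errors.
		return http.StatusInternalServerError
	}
}
```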
Although there might be cases where this might seem too verbose, it makes the code more maintainable and consistent. A little verbose code is better than clever code that doesn't provide enough context.
## What should I remember?
- Think about error handling as you write your code, not as an afterthought.
- Always use the [errors](/pkg/errors/errors.go) package instead of the standard library's `errors.New()` or `fmt.Errorf()`.
- Always assign appropriate codes to errors when creating them instead of using the "catch all" error codes defined in [pkg/errors/code.go](/pkg/errors/code.go).
- Use `errors.Wrapf()` to add context to errors while preserving the original when appropriate.

View File

@@ -1,134 +0,0 @@
# Flagger
Flagger is SigNoz's feature flagging system built on top of the [OpenFeature](https://openfeature.dev/) standard. It provides a unified interface for evaluating feature flags across the application, allowing features to be enabled, disabled, or configured dynamically without code changes.
> 💡 **Note**: OpenFeature is a CNCF project that provides a vendor-agnostic feature flagging API, making it easy to switch providers without changing application code.
## How does it work?
Flagger consists of three main components:
1. **Registry** (`pkg/flagger/registry.go`) - Contains all available feature flags with their metadata and default values
2. **Flagger** (`pkg/flagger/flagger.go`) - The consumer-facing interface for evaluating feature flags
3. **Providers** (`pkg/flagger/<provider>flagger/`) - Implementations that supply feature flag values (e.g., `configflagger` for config-based flags)
The evaluation flow works as follows:
1. The caller requests a feature flag value via the `Flagger` interface
2. Flagger checks the registry to validate the flag exists and get its default value
3. Each registered provider is queried for an override value
4. If a provider returns a value different from the default, that value is returned
5. Otherwise, the default value from the registry is returned
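A simplified, self-contained sketch of this resolution logic (the names are illustrative, not the actual flagger internals):
```go
// Each provider reports (value, found) for a flag; defaults come from the registry.
func evaluate(defaults map[string]bool, providers []func(string) (bool, bool), flag string) bool {
	def, known := defaults[flag]
	if !known {
		// The real implementation rejects unknown flags; this sketch falls back to false.
		return false
	}
	for _, provider := range providers {
		// The first provider returning a value different from the default wins.
		if value, found := provider(flag); found && value != def {
			return value
		}
	}
	return def
}
```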
## How to add a new feature flag?
### 1. Register the flag in the registry
Add your feature flag definition in `pkg/flagger/registry.go`:
```go
var (
// Export the feature name for use in evaluations
FeatureMyNewFeature = featuretypes.MustNewName("my_new_feature")
)
func MustNewRegistry() featuretypes.Registry {
registry, err := featuretypes.NewRegistry(
// ...existing features...
&featuretypes.Feature{
Name: FeatureMyNewFeature,
Kind: featuretypes.KindBoolean, // or KindString, KindFloat, KindInt, KindObject
Stage: featuretypes.StageStable, // or StageAlpha, StageBeta
Description: "Controls whether my new feature is enabled",
DefaultVariant: featuretypes.MustNewName("disabled"),
Variants: featuretypes.NewBooleanVariants(),
},
)
// ...
}
```
> 💡 **Note**: Feature names must match the regex `^[a-z_]+$` (lowercase letters and underscores only).
### 2. Configure the feature flag value (optional)
To override the default value, add an entry in your configuration file:
```yaml
flagger:
config:
boolean:
my_new_feature: true
```
Supported configuration types:
| Type | Config Key | Go Type |
|------|------------|---------|
| Boolean | `boolean` | `bool` |
| String | `string` | `string` |
| Float | `float` | `float64` |
| Integer | `integer` | `int64` |
| Object | `object` | `any` |
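For example, a single config file can override flags of several kinds at once; every flag name below except `my_new_feature` is purely illustrative:
```yaml
flagger:
  config:
    boolean:
      my_new_feature: true
    string:
      default_theme: "dark"
    float:
      sampling_ratio: 0.25
    integer:
      max_widgets: 50
```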
## How to evaluate a feature flag?
Use the `Flagger` interface to evaluate feature flags. The interface provides typed methods for each value type:
```go
import (
"github.com/SigNoz/signoz/pkg/flagger"
"github.com/SigNoz/signoz/pkg/types/featuretypes"
)
func DoSomething(ctx context.Context, f flagger.Flagger) error {
// Create an evaluation context (typically with org ID)
evalCtx := featuretypes.NewFlaggerEvaluationContext(orgID)
// Evaluate with error handling
enabled, err := f.Boolean(ctx, flagger.FeatureMyNewFeature, evalCtx)
if err != nil {
return err
}
if enabled {
// Feature is enabled
}
return nil
}
```
### Empty variants
For cases where you want to use a default value on error (and log the error), use the `*OrEmpty` methods:
```go
func DoSomething(ctx context.Context, f flagger.Flagger) {
evalCtx := featuretypes.NewFlaggerEvaluationContext(orgID)
// Returns false on error and logs the error
if f.BooleanOrEmpty(ctx, flagger.FeatureMyNewFeature, evalCtx) {
// Feature is enabled
}
}
```
### Available evaluation methods
| Method | Return Type | Empty Variant Default |
|--------|-------------|---------------------|
| `Boolean()` | `(bool, error)` | `false` |
| `String()` | `(string, error)` | `""` |
| `Float()` | `(float64, error)` | `0.0` |
| `Int()` | `(int64, error)` | `0` |
| `Object()` | `(any, error)` | `struct{}{}` |
## What should I remember?
- Always define feature flags in the registry (`pkg/flagger/registry.go`) before using them
- Use descriptive feature names that clearly indicate what the flag controls
- Prefer `*OrEmpty` methods for non-critical features to avoid error handling overhead
- Export feature name variables (e.g., `FeatureMyNewFeature`) for type-safe usage across packages
- Consider the feature's lifecycle stage (`Alpha`, `Beta`, `Stable`) when defining it
- Providers are evaluated in order; the first non-default value wins

View File

@@ -1,179 +0,0 @@
# Handler
Handlers in SigNoz are responsible for exposing module functionality over HTTP. They are thin adapters that:
- Decode incoming HTTP requests
- Call the appropriate module layer
- Return structured responses (or errors) in a consistent format
- Describe themselves for OpenAPI generation
They are **not** the place for complex business logic; that belongs in modules (for example, `pkg/modules/user`, `pkg/modules/session`, etc.).
## How are handlers structured?
At a high level, a typical flow looks like this:
1. A `Handler` interface is defined in the module (for example, `user.Handler`, `session.Handler`, `organization.Handler`).
2. The `apiserver` provider wires those handlers into HTTP routes using Gorilla `mux.Router`.
Each route wraps a module handler method with the following:
- Authorization middleware (from `pkg/http/middleware`)
- A generic HTTP `handler.Handler` (from `pkg/http/handler`)
- An `OpenAPIDef` that describes the operation for OpenAPI generation
For example, in `pkg/apiserver/signozapiserver`:
```go
if err := router.Handle("/api/v1/invite", handler.New(
provider.authZ.AdminAccess(provider.userHandler.CreateInvite),
handler.OpenAPIDef{
ID: "CreateInvite",
Tags: []string{"users"},
Summary: "Create invite",
Description: "This endpoint creates an invite for a user",
Request: new(types.PostableInvite),
RequestContentType: "application/json",
Response: new(types.Invite),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusCreated,
ErrorStatusCodes: []int{http.StatusBadRequest, http.StatusConflict},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleAdmin),
},
)).Methods(http.MethodPost).GetError(); err != nil {
return err
}
```
In this pattern:
- `provider.userHandler.CreateInvite` is a handler method.
- `provider.authZ.AdminAccess(...)` wraps that method with authorization checks and context setup.
- `handler.New` converts it into an HTTP handler and wires it to OpenAPI via the `OpenAPIDef`.
## How to write a new handler method?
When adding a new endpoint:
1. Add a method to the appropriate module `Handler` interface.
2. Implement that method in the module.
3. Register the method in `signozapiserver` with the correct route, HTTP method, auth, and `OpenAPIDef`.
### 1. Extend an existing `Handler` interface or create a new one
Find the module in `pkg/modules/<name>` and extend its `Handler` interface with a new method that receives an `http.ResponseWriter` and `*http.Request`. For example:
```go
type Handler interface {
// existing methods...
CreateThing(rw http.ResponseWriter, req *http.Request)
}
```
Keep the method focused on HTTP concerns and delegate business logic to the module.
### 2. Implement the handler method
In the module implementation, implement the new method. A typical implementation:
- Extracts authentication and organization context from `req.Context()`
- Decodes the request body into a `types.*` struct using the `binding` package
- Calls module functions
- Uses the `render` package to write responses or errors
```go
func (h *handler) CreateThing(rw http.ResponseWriter, req *http.Request) {
// Extract authentication and organization context from req.Context()
claims, err := authtypes.ClaimsFromContext(req.Context())
if err != nil {
render.Error(rw, err)
return
}
// Decode the request body into a `types.*` struct using the `binding` package
var in types.PostableThing
if err := binding.JSON.BindBody(req.Body, &in); err != nil {
render.Error(rw, err)
return
}
// Call module functions
out, err := h.module.CreateThing(req.Context(), claims.OrgID, &in)
if err != nil {
render.Error(rw, err)
return
}
// Use the `render` package to write responses or errors
render.Success(rw, http.StatusCreated, out)
}
```
### 3. Register the handler in `signozapiserver`
In `pkg/apiserver/signozapiserver`, add a route in the appropriate `add*Routes` function (`addUserRoutes`, `addSessionRoutes`, `addOrgRoutes`, etc.). The pattern is:
```go
if err := router.Handle("/api/v1/things", handler.New(
provider.authZ.AdminAccess(provider.thingHandler.CreateThing),
handler.OpenAPIDef{
ID: "CreateThing",
Tags: []string{"things"},
Summary: "Create thing",
Description: "This endpoint creates a thing",
Request: new(types.PostableThing),
RequestContentType: "application/json",
Response: new(types.GettableThing),
ResponseContentType: "application/json",
SuccessStatusCode: http.StatusCreated,
ErrorStatusCodes: []int{http.StatusBadRequest, http.StatusConflict},
Deprecated: false,
SecuritySchemes: newSecuritySchemes(types.RoleAdmin),
},
)).Methods(http.MethodPost).GetError(); err != nil {
return err
}
```
### 4. Update the OpenAPI spec
Run the following command to update the OpenAPI spec:
```bash
go run cmd/enterprise/*.go generate openapi
```
This will update the OpenAPI spec in `docs/api/openapi.yml` to reflect the new endpoint.
## How does OpenAPI integration work?
The `handler.New` function ties the HTTP handler to OpenAPI metadata via `OpenAPIDef`. This drives the generated OpenAPI document.
- **ID**: A unique identifier for the operation (used as the `operationId`).
- **Tags**: Logical grouping for the operation (for example, `"users"`, `"sessions"`, `"orgs"`).
- **Summary / Description**: Human-friendly documentation.
- **Request / RequestContentType**:
- `Request` is a Go type that describes the request body or form.
- `RequestContentType` is usually `"application/json"` or `"application/x-www-form-urlencoded"` (for callbacks like SAML).
- **Response / ResponseContentType**:
- `Response` is the Go type for the successful response payload.
- `ResponseContentType` is usually `"application/json"`; use `""` for responses without a body.
- **SuccessStatusCode**: The HTTP status for successful responses (for example, `http.StatusOK`, `http.StatusCreated`, `http.StatusNoContent`).
- **ErrorStatusCodes**: Additional error status codes beyond the standard ones automatically added by `handler.New`.
- **SecuritySchemes**: Auth mechanisms and scopes required by the operation.
The generic handler:
- Automatically appends `401`, `403`, and `500` to `ErrorStatusCodes` when appropriate.
- Registers request and response schemas with the OpenAPI reflector so they appear in `docs/api/openapi.yml`.
See existing examples in:
- `addUserRoutes` (for typical JSON request/response)
- `addSessionRoutes` (for form-encoded and redirect flows)
## What should I remember?
- **Keep handlers thin**: focus on HTTP concerns and delegate logic to modules/services.
- **Always register routes through `signozapiserver`** using `handler.New` and a complete `OpenAPIDef`.
- **Choose accurate request/response types** from the `types` packages so OpenAPI schemas are correct.

View File

@@ -1,215 +0,0 @@
# Integration Tests
SigNoz uses integration tests to verify that different components work together correctly in a real environment. These tests run against actual services (ClickHouse, PostgreSQL, etc.) to ensure end-to-end functionality.
## How to set up the integration test environment?
### Prerequisites
Before running integration tests, ensure you have the following installed:
- Python 3.13+
- [uv](https://docs.astral.sh/uv/getting-started/installation/)
- Docker (for containerized services)
### Initial Setup
1. Navigate to the integration tests directory:
```bash
cd tests/integration
```
2. Install dependencies using uv:
```bash
uv sync
```
> **_NOTE:_** the build backend could throw an error while installing `psycopg2`; please see https://www.psycopg.org/docs/install.html#build-prerequisites
### Starting the Test Environment
To spin up all the containers necessary for writing integration tests and keep them running:
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/bootstrap/setup.py::test_setup
```
This command will:
- Start all required services (ClickHouse, PostgreSQL, Zookeeper, etc.)
- Keep containers running due to the `--reuse` flag
- Verify that the setup is working correctly
### Stopping the Test Environment
When you're done writing integration tests, clean up the environment:
```bash
uv run pytest --basetemp=./tmp/ -vv --teardown -s src/bootstrap/setup.py::test_teardown
```
This will destroy the running integration test setup and clean up resources.
## Understanding the Integration Test Framework
Python and pytest form the foundation of the integration testing framework. Testcontainers are used to spin up disposable integration environments. Wiremock is used to spin up **test doubles** of other services.
- **Why Python/pytest?** It's expressive, low-boilerplate, and has powerful fixture capabilities that make integration testing straightforward. Extensive libraries for HTTP requests, JSON handling, and data analysis (numpy) make it easier to test APIs and verify data.
- **Why testcontainers?** They let us spin up isolated dependencies that match our production environment without complex setup.
- **Why wiremock?** It is well maintained, well documented, and extensible.
```
.
├── conftest.py
├── fixtures
│ ├── __init__.py
│ ├── auth.py
│ ├── clickhouse.py
│ ├── fs.py
│ ├── http.py
│ ├── migrator.py
│ ├── network.py
│ ├── postgres.py
│ ├── signoz.py
│ ├── sql.py
│ ├── sqlite.py
│ ├── types.py
│ └── zookeeper.py
├── uv.lock
├── pyproject.toml
└── src
└── bootstrap
├── __init__.py
├── 01_database.py
├── 02_register.py
└── 03_license.py
```
Each test suite follows some important principles:
1. **Organization**: Test suites live under `src/` in self-contained packages. Fixtures (a pytest concept) live inside `fixtures/`.
2. **Execution Order**: Files are prefixed with two-digit numbers (`01_`, `02_`, `03_`) to ensure sequential execution.
3. **Time Constraints**: Each suite should complete in under 10 minutes (setup takes ~4 mins).
### Test Suite Design
Test suites should target functional domains or subsystems within SigNoz. When designing a test suite, consider these principles:
- **Functional Cohesion**: Group tests around a specific capability or service boundary
- **Data Flow**: Follow the path of data through related components
- **Change Patterns**: Components frequently modified together should be tested together
The exact boundaries for modules are intentionally flexible, allowing teams to define logical groupings based on their specific context and knowledge of the system.
E.g., the **bootstrap** integration test suite validates core system functionality:
- Database initialization
- Version check
Other test suites include **pipelines**, **auth**, and **querier**.
## How to write an integration test?
Now let's write an integration test. Create a new file `src/bootstrap/05_version.py` and paste the following:
```python
import requests
from fixtures import types
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
def test_version(signoz: types.SigNoz) -> None:
    response = requests.get(
        signoz.self.host_configs["8080"].get("/api/v1/version"), timeout=2
    )
    logger.info(response)
    assert response.status_code == 200  # the running container should respond
```
We have written a simple test that calls the `version` endpoint of the SigNoz container. To run just this function, use the following command:
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/bootstrap/05_version.py::test_version
```
> Note: The `--reuse` flag is used to reuse the environment if it is already running. Always use this flag when writing and running integration tests. If you don't use this flag, the environment will be destroyed and recreated every time you run the test.
Here's another example of how to write a more comprehensive integration test:
```python
from http import HTTPStatus
import requests
from fixtures import types
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
def test_user_registration(signoz: types.SigNoz) -> None:
"""Test user registration functionality."""
response = requests.post(
signoz.self.host_configs["8080"].get("/api/v1/register"),
json={
"name": "testuser",
"orgId": "",
"orgName": "test.org",
"email": "test@example.com",
"password": "password123Z$",
},
timeout=2,
)
assert response.status_code == HTTPStatus.OK
assert response.json()["setupCompleted"] is True
```
## How to run integration tests?
### Running All Tests
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/
```
### Running Specific Test Categories
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/<suite>
# Run querier tests
uv run pytest --basetemp=./tmp/ -vv --reuse src/querier/
# Run auth tests
uv run pytest --basetemp=./tmp/ -vv --reuse src/auth/
```
### Running Individual Tests
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/<suite>/<file>.py::test_name
# Run test_register in file 01_register.py in passwordauthn suite
uv run pytest --basetemp=./tmp/ -vv --reuse src/passwordauthn/01_register.py::test_register
```
## How to configure different options for integration tests?
Tests can be configured using pytest options:
- `--sqlstore-provider` - Choose database provider (default: postgres)
- `--postgres-version` - PostgreSQL version (default: 15)
- `--clickhouse-version` - ClickHouse version (default: 25.5.6)
- `--zookeeper-version` - Zookeeper version (default: 3.7.1)
Example:
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse --sqlstore-provider=postgres --postgres-version=14 src/auth/
```
## What should I remember?
- **Always use the `--reuse` flag** when setting up the environment to keep containers running
- **Use the `--teardown` flag** when cleaning up to avoid resource leaks
- **Follow the naming convention** with two-digit numeric prefixes (`01_`, `02_`) for test execution order
- **Use proper timeouts** in HTTP requests to avoid hanging tests
- **Clean up test data** between tests to avoid interference
- **Use descriptive test names** that clearly indicate what is being tested
- **Leverage fixtures** for common setup and authentication
- **Test both success and failure scenarios** to ensure robust functionality

View File

@@ -1,106 +0,0 @@
# Provider
SigNoz is built on the provider pattern, a design approach where code is organized into providers that handle specific application responsibilities. Providers act as adapter components that integrate with external services and deliver required functionality to the application.
> 💡 **Note**: Coming from a DDD background? Providers are similar (though not identical) to adapter/infrastructure services.
## How to create a new provider?
To create a new provider, create a directory in the `pkg/` directory named after your provider. The provider package consists of four key components:
- **Interface** (`pkg/<name>/<name>.go`): Defines the provider's interface. Other packages should import this interface to use the provider.
- **Config** (`pkg/<name>/config.go`): Contains provider configuration, implementing the `factory.Config` interface from [factory/config.go](/pkg/factory/config.go).
- **Implementation** (`pkg/<name>/<implname><name>/provider.go`): Contains the provider implementation, including a `NewProvider` function that returns a `factory.Provider` interface from [factory/provider.go](/pkg/factory/provider.go).
- **Mock** (`pkg/<name>/<name>test/provider.go`): Provides mocks for the provider, typically used by dependent packages for unit testing.
For example, the [prometheus](/pkg/prometheus) provider delivers a prometheus engine to the application:
- `pkg/prometheus/prometheus.go` - Interface definition
- `pkg/prometheus/config.go` - Configuration
- `pkg/prometheus/clickhouseprometheus/provider.go` - Clickhouse-powered implementation
- `pkg/prometheus/prometheustest/provider.go` - Mock implementation
## How to wire it up?
The `pkg/signoz` package contains the inversion of control container responsible for wiring providers. It handles instantiation, configuration, and assembly of providers based on configuration metadata.
> 💡 **Note**: Coming from a Java background? Providers are similar to Spring beans.
Wiring up a provider involves three steps:
1. Wiring up the configuration
Add your config from `pkg/<name>/config.go` to the `pkg/signoz/config.Config` struct and register its config factory:
```go
type Config struct {
...
MyProvider myprovider.Config `mapstructure:"myprovider"`
...
}
func NewConfig(ctx context.Context, resolverConfig config.ResolverConfig, ...) (Config, error) {
...
configFactories := []factory.ConfigFactory{
myprovider.NewConfigFactory(),
}
...
}
```
2. Wiring up the provider
Add available provider implementations in `pkg/signoz/provider.go`:
```go
func NewMyProviderFactories() factory.NamedMap[factory.ProviderFactory[myprovider.MyProvider, myprovider.Config]] {
return factory.MustNewNamedMap(
myproviderone.NewFactory(),
myprovidertwo.NewFactory(),
)
}
```
3. Instantiate the provider by adding it to the `SigNoz` struct in `pkg/signoz/signoz.go`:
```go
type SigNoz struct {
...
MyProvider myprovider.MyProvider
...
}
func New(...) (*SigNoz, error) {
...
myprovider, err := myproviderone.New(ctx, settings, config.MyProvider, "one/two")
if err != nil {
return nil, err
}
...
}
```
## How to use it?
To use a provider, import its interface. For example, to use the prometheus provider, import `pkg/prometheus/prometheus.go`:
```go
import "github.com/SigNoz/signoz/pkg/prometheus/prometheus"
func CreateSomething(ctx context.Context, prometheus prometheus.Prometheus) {
...
prometheus.DoSomething()
...
}
```
## Why do we need this?
Like any dependency injection framework, providers decouple the codebase from implementation details. This is especially valuable in SigNoz's large codebase, where we need to swap implementations without changing dependent code. The provider pattern offers several benefits apart from the obvious one of decoupling:
- Configuration is **defined with each provider and centralized in one place**, making it easier to understand and manage through various methods (environment variables, config files, etc.)
- Provider mocking is **straightforward for unit testing**, with a consistent pattern for locating mocks
- **Multiple implementations** of the same provider are **supported**, as demonstrated by our sqlstore provider
## What should I remember?
- Use the provider pattern wherever applicable.
- Always create a provider **irrespective of the number of implementations**. This makes it easier to add new implementations in the future.

View File

@@ -1,11 +0,0 @@
# Go
This document provides an overview of contributing to the SigNoz backend written in Go. The SigNoz backend is built with Go, focusing on performance, maintainability, and developer experience. We strive for clean, idiomatic code that follows established Go practices while addressing the unique needs of an observability platform.
We adhere to three primary style guides as our foundation:
- [Effective Go](https://go.dev/doc/effective_go) - For writing idiomatic Go code
- [Code Review Comments](https://go.dev/wiki/CodeReviewComments) - For understanding common comments in code reviews
- [Google Style Guide](https://google.github.io/styleguide/go/) - Additional practices from Google
We **recommend** (almost enforce) reviewing these guides before contributing to the codebase. They provide valuable insights into writing idiomatic Go code and will help you understand our approach to backend development. We also have a few rules that make certain areas stricter than these guides; they can be found in area-specific files in this package.

View File

@@ -1,94 +0,0 @@
# SQL
SigNoz utilizes a relational database to store metadata including organization information, user data and other settings.
## How to use it?
The database interface is defined in [SQLStore](/pkg/sqlstore/sqlstore.go). SigNoz leverages the Bun ORM to interact with the underlying database. To access the database instance, use the `BunDBCtx` function. For operations that require transactions across multiple database operations, use the `RunInTxCtx` function. This function embeds a transaction in the context, which propagates through various functions in the callback.
```go
type Thing struct {
bun.BaseModel
ID types.Identifiable `bun:",embed"`
SomeColumn string `bun:"some_column"`
TimeAuditable types.TimeAuditable `bun:",embed"`
OrgID string `bun:"org_id"`
}
func GetThing(ctx context.Context, id string) (*Thing, error) {
thing := new(Thing)
err := sqlstore.
BunDBCtx(ctx).
NewSelect().
Model(thing).
Where("id = ?", id).
Scan(ctx)
return thing, err
}
func CreateThing(ctx context.Context, thing *Thing) error {
	// Exec returns (sql.Result, error), so the result must be discarded
	// explicitly rather than returned as the error.
	_, err := sqlstore.
		BunDBCtx(ctx).
		NewInsert().
		Model(thing).
		Exec(ctx)
	return err
}
```
> 💡 **Note**: Always use line breaks while working with SQL queries to enhance code readability.
> 💡 **Note**: Always use the `new` function to create new instances of structs.
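The example above uses `BunDBCtx` for single statements. For work that spans multiple statements, wrap a callback in `RunInTxCtx` so every helper that reads the database through the context joins the same transaction. A hedged sketch, assuming the callback signature shown here (the authoritative definition lives in [SQLStore](/pkg/sqlstore/sqlstore.go), and the `nil` options argument is an assumption):
```go
func CreateTwoThings(ctx context.Context, a *Thing, b *Thing) error {
	return sqlstore.RunInTxCtx(ctx, nil, func(ctx context.Context) error {
		// Both inserts see the transaction embedded in ctx by RunInTxCtx.
		if err := CreateThing(ctx, a); err != nil {
			return err
		}
		// If this insert fails, the first one is rolled back too.
		return CreateThing(ctx, b)
	})
}
```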
## What are hooks?
Hooks are user-defined functions that execute before and/or after specific database operations. These hooks are particularly useful for generating telemetry data such as logs, traces, and metrics, providing visibility into database interactions. Hooks are defined in the [SQLStoreHook](/pkg/sqlstore/sqlstore.go) interface.
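For orientation, here is a minimal sketch of a bun-style query hook that logs each statement's duration. The name `loggingHook` is illustrative; SigNoz's own hooks implement the `SQLStoreHook` interface rather than this exact shape:
```go
type loggingHook struct {
	logger *slog.Logger
}

func (h *loggingHook) BeforeQuery(ctx context.Context, event *bun.QueryEvent) context.Context {
	return ctx // nothing to record before execution in this sketch
}

func (h *loggingHook) AfterQuery(ctx context.Context, event *bun.QueryEvent) {
	// event.StartTime is set by bun when the query begins executing.
	h.logger.InfoContext(ctx, "sql query",
		"query", event.Query,
		"duration", time.Since(event.StartTime),
	)
}
```
Hooks like this are registered on the bun database handle with `AddQueryHook`.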
## How is the schema designed?
SigNoz implements a star schema design with the organizations table as the central entity. All other tables link to the organizations table via foreign key constraints on the `org_id` column. This design ensures that every entity within the system is either directly or indirectly associated with an organization.
```mermaid
erDiagram
ORGANIZATIONS {
string id PK
timestamp created_at
timestamp updated_at
}
ENTITY_A {
string id PK
timestamp created_at
timestamp updated_at
string org_id FK
}
ENTITY_B {
string id PK
timestamp created_at
timestamp updated_at
string org_id FK
}
ORGANIZATIONS ||--o{ ENTITY_A : contains
ORGANIZATIONS ||--o{ ENTITY_B : contains
```
> 💡 **Note**: There are rare exceptions to the above star schema design. Consult with the maintainers before deviating from the above design.
All tables follow a consistent primary key pattern using an `id` column (referenced by the `types.Identifiable` struct) and include `created_at` and `updated_at` columns (referenced by the `types.TimeAuditable` struct) for audit purposes.
## How to write migrations?
For schema migrations, use the [SQLMigration](/pkg/sqlmigration/sqlmigration.go) interface and write the migration in the same package. When creating migrations, adhere to these guidelines:
- Do not implement **`ON CASCADE` foreign key constraints**. Deletion operations should be handled explicitly in application logic rather than delegated to the database.
- Do not **import types from the types package** in the `sqlmigration` package. Instead, define the required types within the migration package itself. This practice ensures migration stability as the core types evolve over time.
- Do not implement **`Down` migrations**. As the codebase matures, we may introduce this capability, but for now, the `Down` function should remain empty.
- Always write **idempotent** migrations: running the same migration multiple times must not cause an error (see the sketch after this list).
- A migration which is **dependent on the underlying dialect** (sqlite, postgres, etc) should be written as part of the [SQLDialect](/pkg/sqlstore/sqlstore.go) interface. The implementation needs to go in the dialect specific package of the respective database.
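As a sketch of the idempotency guideline, here is a hypothetical migration. The type name, table, and column are illustrative, and the `Up` signature is assumed to receive a `*bun.DB`; the `IF NOT EXISTS` guard makes re-runs harmless on PostgreSQL, while a SQLite-compatible variant would go through the `SQLDialect` interface as noted above:
```go
type addSomeColumn struct{}

func (m *addSomeColumn) Up(ctx context.Context, db *bun.DB) error {
	// Safe to run repeatedly: the column is only added when it is missing.
	_, err := db.ExecContext(ctx,
		`ALTER TABLE thing ADD COLUMN IF NOT EXISTS some_column TEXT`)
	return err
}

func (m *addSomeColumn) Down(ctx context.Context, db *bun.DB) error {
	return nil // Down migrations stay empty, per the guidelines above
}
```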
## What should I remember?
- Use `BunDBCtx` and `RunInTxCtx` to access the database instance and execute transactions respectively.
- While designing new tables, keep the `id`, `created_at`, `updated_at`, and `org_id` columns consistent, with a foreign key constraint from `org_id` to the `organizations` table (unless the table is a transitive entity that is only indirectly associated with an organization).
- Implement deletion logic in the application rather than relying on cascading deletes in the database.
- While writing migrations, adhere to the guidelines mentioned above.

View File

@@ -1,301 +0,0 @@
# Onboarding Configuration Guide
This guide explains how to add new data sources to the SigNoz onboarding flow. The onboarding configuration controls the "New Source" / "Get Started" experience in SigNoz Cloud.
## Configuration File Location
The configuration is located at:
```
frontend/src/container/OnboardingV2Container/onboarding-configs/onboarding-config-with-links.json
```
## JSON Structure Overview
The configuration file is a JSON array containing data source objects. Each object represents a selectable option in the onboarding flow.
## Data Source Object Keys
### Required Keys
| Key | Type | Description |
|-----|------|-------------|
| `dataSource` | `string` | Unique identifier for the data source (kebab-case, e.g., `"aws-ec2"`) |
| `label` | `string` | Display name shown to users (e.g., `"AWS EC2"`) |
| `tags` | `string[]` | Array of category tags for grouping (e.g., `["AWS"]`, `["database"]`) |
| `module` | `string` | Destination module after onboarding completion |
| `imgUrl` | `string` | Path to the logo/icon (e.g., `"/Logos/ec2.svg"`) |
### Optional Keys
| Key | Type | Description |
|-----|------|-------------|
| `link` | `string` | Docs link to redirect to (e.g., `"/docs/aws-monitoring/ec2/"`) |
| `relatedSearchKeywords` | `string[]` | Array of keywords for search functionality |
| `question` | `object` | Nested question object for multi-step flows |
| `internalRedirect` | `boolean` | When `true`, navigates within the app instead of showing docs |
## Module Values
The `module` key determines where users are redirected after completing onboarding:
| Value | Destination |
|-------|-------------|
| `apm` | APM / Traces |
| `logs` | Logs Explorer |
| `metrics` | Metrics Explorer |
| `dashboards` | Dashboards |
| `infra-monitoring-hosts` | Infrastructure Monitoring - Hosts |
| `infra-monitoring-k8s` | Infrastructure Monitoring - Kubernetes |
| `messaging-queues-kafka` | Messaging Queues - Kafka |
| `messaging-queues-celery` | Messaging Queues - Celery |
| `integrations` | Integrations page |
| `home` | Home page |
| `api-monitoring` | API Monitoring |
## Question Object Structure
The `question` object enables multi-step selection flows:
```json
{
"question": {
"desc": "What would you like to monitor?",
"type": "select",
"helpText": "Choose the telemetry type you want to collect.",
"helpLink": "/docs/azure-monitoring/overview/",
"helpLinkText": "Read the guide →",
"options": [
{
"key": "logging",
"label": "Logs",
"imgUrl": "/Logos/azure-vm.svg",
"link": "/docs/azure-monitoring/app-service/logging/"
},
{
"key": "metrics",
"label": "Metrics",
"imgUrl": "/Logos/azure-vm.svg",
"link": "/docs/azure-monitoring/app-service/metrics/"
},
{
"key": "tracing",
"label": "Traces",
"imgUrl": "/Logos/azure-vm.svg",
"link": "/docs/azure-monitoring/app-service/tracing/"
}
]
}
}
```
### Question Keys
| Key | Type | Description |
|-----|------|-------------|
| `desc` | `string` | Question text displayed to the user |
| `type` | `string` | Currently only `"select"` is supported |
| `helpText` | `string` | (Optional) Additional help text below the question |
| `helpLink` | `string` | (Optional) Docs link for the help section |
| `helpLinkText` | `string` | (Optional) Text for the help link (default: "Learn more →") |
| `options` | `array` | Array of option objects |
## Option Object Structure
Options can be simple (direct link) or nested (with another question):
### Simple Option (Direct Link)
```json
{
"key": "aws-ec2-logs",
"label": "Logs",
"imgUrl": "/Logos/ec2.svg",
"link": "/docs/userguide/collect_logs_from_file/"
}
```
### Option with Internal Redirect
```json
{
"key": "aws-ec2-metrics-one-click",
"label": "One Click AWS",
"imgUrl": "/Logos/ec2.svg",
"link": "/integrations?integration=aws-integration&service=ec2",
"internalRedirect": true
}
```
> **Important**: Set `internalRedirect: true` only for internal app routes (like `/integrations?...`). Docs links should NOT have this flag.
### Nested Option (Multi-step Flow)
```json
{
"key": "aws-ec2-metrics",
"label": "Metrics",
"imgUrl": "/Logos/ec2.svg",
"question": {
"desc": "How would you like to set up monitoring?",
"helpText": "Choose your setup method.",
"options": [...]
}
}
```
## Examples
### Simple Data Source (Direct Link)
```json
{
"dataSource": "aws-elb",
"label": "AWS ELB",
"tags": ["AWS"],
"module": "logs",
"relatedSearchKeywords": [
"aws",
"aws elb",
"elb logs",
"elastic load balancer"
],
"imgUrl": "/Logos/elb.svg",
"link": "/docs/aws-monitoring/elb/"
}
```
### Data Source with Single Question Level
```json
{
"dataSource": "app-service",
"label": "App Service",
"imgUrl": "/Logos/azure-vm.svg",
"tags": ["Azure"],
"module": "apm",
"relatedSearchKeywords": ["azure", "app service"],
"question": {
"desc": "What telemetry data do you want to visualise?",
"type": "select",
"options": [
{
"key": "logging",
"label": "Logs",
"imgUrl": "/Logos/azure-vm.svg",
"link": "/docs/azure-monitoring/app-service/logging/"
},
{
"key": "metrics",
"label": "Metrics",
"imgUrl": "/Logos/azure-vm.svg",
"link": "/docs/azure-monitoring/app-service/metrics/"
},
{
"key": "tracing",
"label": "Traces",
"imgUrl": "/Logos/azure-vm.svg",
"link": "/docs/azure-monitoring/app-service/tracing/"
}
]
}
}
```
### Data Source with Nested Questions (2-3 Levels)
```json
{
"dataSource": "aws-ec2",
"label": "AWS EC2",
"tags": ["AWS"],
"module": "logs",
"relatedSearchKeywords": ["aws", "aws ec2", "ec2 logs", "ec2 metrics"],
"imgUrl": "/Logos/ec2.svg",
"question": {
"desc": "What would you like to monitor for AWS EC2?",
"type": "select",
"helpText": "Choose the type of telemetry data you want to collect.",
"options": [
{
"key": "aws-ec2-logs",
"label": "Logs",
"imgUrl": "/Logos/ec2.svg",
"link": "/docs/userguide/collect_logs_from_file/"
},
{
"key": "aws-ec2-metrics",
"label": "Metrics",
"imgUrl": "/Logos/ec2.svg",
"question": {
"desc": "How would you like to set up EC2 Metrics monitoring?",
"helpText": "One Click uses AWS CloudWatch integration. Manual setup uses OpenTelemetry.",
"helpLink": "/docs/aws-monitoring/one-click-vs-manual/",
"helpLinkText": "Read the comparison guide →",
"options": [
{
"key": "aws-ec2-metrics-one-click",
"label": "One Click AWS",
"imgUrl": "/Logos/ec2.svg",
"link": "/integrations?integration=aws-integration&service=ec2",
"internalRedirect": true
},
{
"key": "aws-ec2-metrics-manual",
"label": "Manual Setup",
"imgUrl": "/Logos/ec2.svg",
"link": "/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/"
}
]
}
}
]
}
}
```
## Best Practices
### 1. Tags
- Use existing tags when possible: `AWS`, `Azure`, `GCP`, `database`, `logs`, `apm/traces`, `infrastructure monitoring`, `LLM Monitoring`
- Tags are used for grouping in the sidebar
- Every data source must have at least one tag
### 2. Search Keywords
- Include variations of the name (e.g., `"aws ec2"`, `"ec2"`, `"ec2 logs"`)
- Include common misspellings or alternative names
- Keep keywords lowercase and alphabetically sorted
### 3. Logos
- Place logo files in `public/Logos/`
- Use SVG format
- Reference as `"/Logos/your-logo.svg"`
### 4. Links
- Docs links should start with `/docs/` (will be prefixed with DOCS_BASE_URL)
- Internal app links should start with `/integrations`, `/services`, etc.
- Only use `internalRedirect: true` for internal app routes
### 5. Keys
- Use kebab-case for `dataSource` and option `key` values
- Make keys descriptive and unique
- Follow the pattern: `{service}-{subtype}-{action}` (e.g., `aws-ec2-metrics-one-click`)
## Adding a New Data Source
1. Add your data source object to the JSON array
2. Ensure the logo exists in `public/Logos/`
3. Test the flow locally with `yarn dev`
4. Validation:
- Navigate to the [onboarding page](http://localhost:3301/get-started-with-signoz-cloud) on your local machine
- Data source appears in the list
- Search keywords work correctly
- All links redirect to the correct pages
- Questions display correct help text and links
- Tags are used for grouping in the UI sidebar
- Clicking on Configure redirects to the correct page

View File

@@ -1,292 +0,0 @@
# External API Monitoring - Developer Guide
## Overview
External API Monitoring tracks outbound HTTP calls from your services to external APIs. It groups spans by domain (e.g., `api.example.com`) and displays metrics like endpoint count, request rate, error rate, latency, and last seen time.
**Key Requirement**: Spans must have `kind_string = 'Client'`, at least one of `http.url`/`url.full`, and at least one of `net.peer.name`/`server.address` as attributes.
---
## Architecture Flow
```
Frontend (DomainList)
→ useListOverview hook
→ POST /api/v1/third-party-apis/overview/list
→ getDomainList handler
→ BuildDomainList (7 queries)
→ QueryRange (ClickHouse)
→ Post-processing (merge semconv, filter IPs)
→ formatDataForTable
→ UI Display
```
---
## Key APIs
### 1. Domain List API
**Endpoint**: `POST /api/v1/third-party-apis/overview/list`
**Request**:
```json
{
"start": 1699123456789, // Unix timestamp (ms)
"end": 1699127056789,
"show_ip": false, // Filter IP addresses
"filter": {
"expression": "kind_string = 'Client' AND service.name = 'api'"
}
}
```
**Response**: Table with columns:
- `net.peer.name` (domain name)
- `endpoints` (count_distinct with fallback: http.url or url.full)
- `rps` (rate())
- `error_rate` (formula: error/total_span * 100)
- `p99` (p99(duration_nano))
- `lastseen` (max(timestamp))
**Handler**: `pkg/query-service/app/http_handler.go::getDomainList()`
---
### 2. Domain Info API
**Endpoint**: `POST /api/v1/third-party-apis/overview/domain`
**Request**: Same as Domain List, but includes `domain` field
**Response**: Endpoint-level metrics for a specific domain
**Handler**: `pkg/query-service/app/http_handler.go::getDomainInfo()`
---
## Query Building
### Location
`pkg/modules/thirdpartyapi/translator.go`
### BuildDomainList() - Creates 7 Sub-queries
1. **endpoints**: `count_distinct(if(http.url exists, http.url, url.full))` - Unique endpoint count (handles both semconv attributes)
2. **lastseen**: `max(timestamp)` - Last access time
3. **rps**: `rate()` - Requests per second
4. **error**: `count() WHERE has_error = true` - Error count
5. **total_span**: `count()` - Total spans (for error rate)
6. **p99**: `p99(duration_nano)` - 99th percentile latency
7. **error_rate**: Formula `(error/total_span)*100`
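To make the formula concrete, here is a sketch of how the `error`, `total_span`, and `error_rate` trio can be expressed with the V5 query-builder types from `pkg/types/querybuildertypes/querybuildertypesv5` (aliased `qbtypes`). The real construction lives in `BuildDomainList`; the filter expression here is illustrative:
```go
queries := []qbtypes.QueryEnvelope{
	{
		Type: qbtypes.QueryTypeBuilder,
		Spec: qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
			Name:         "error",
			Signal:       telemetrytypes.SignalTraces,
			Filter:       &qbtypes.Filter{Expression: "has_error = true"},
			Aggregations: []qbtypes.TraceAggregation{{Expression: "count()"}},
		},
	},
	{
		Type: qbtypes.QueryTypeBuilder,
		Spec: qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
			Name:         "total_span",
			Signal:       telemetrytypes.SignalTraces,
			Aggregations: []qbtypes.TraceAggregation{{Expression: "count()"}},
		},
	},
	{
		Type: qbtypes.QueryTypeFormula,
		Spec: qbtypes.QueryBuilderFormula{
			Name:       "error_rate",
			Expression: "(error/total_span)*100", // references the two queries above by name
		},
	},
}
```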
### Base Filter
```go
"(http.url EXISTS OR url.full EXISTS) AND kind_string = 'Client'"
```
### GroupBy
- Groups by `server.address` + `net.peer.name` (dual semconv support)
---
## Key Files
### Frontend
| File | Purpose |
|------|---------|
| `frontend/src/container/ApiMonitoring/Explorer/Domains/DomainList.tsx` | Main list view component |
| `frontend/src/container/ApiMonitoring/Explorer/Domains/DomainDetails/DomainDetails.tsx` | Domain details drawer |
| `frontend/src/hooks/thirdPartyApis/useListOverview.ts` | Data fetching hook |
| `frontend/src/api/thirdPartyApis/listOverview.ts` | API client |
| `frontend/src/container/ApiMonitoring/utils.tsx` | Utilities (formatting, query building) |
### Backend
| File | Purpose |
|------|---------|
| `pkg/query-service/app/http_handler.go` | API handlers (`getDomainList`, `getDomainInfo`) |
| `pkg/modules/thirdpartyapi/translator.go` | Query builder & response processing |
| `pkg/types/thirdpartyapitypes/thirdpartyapi.go` | Request/response types |
---
## Data Tables
### Primary Table
- **Table**: `signoz_traces.distributed_signoz_index_v3`
- **Key Columns**:
- `kind_string` - Filter for `'Client'` spans
- `duration_nano` - For latency calculations
- `has_error` - For error rate
- `timestamp` - For last seen
- `attributes_string` - Map containing `http.url`, `net.peer.name`, etc.
- `resources_string` - Map containing `server.address`, `service.name`, etc.
### Attribute Access
```sql
-- Check existence
mapContains(attributes_string, 'http.url') = 1
-- Get value
attributes_string['http.url']
-- Materialized (if exists)
attribute_string_http$$url
```
---
## Post-Processing
### 1. MergeSemconvColumns()
- Merges `server.address` and `net.peer.name` into single column
- Location: `pkg/modules/thirdpartyapi/translator.go:117`
### 2. FilterIntermediateColumns()
- Removes intermediate formula columns from response
- Location: `pkg/modules/thirdpartyapi/translator.go:70`
### 3. FilterResponse()
- Filters out IP addresses if `show_ip = false`
- Uses `net.ParseIP()` to detect IPs
- Location: `pkg/modules/thirdpartyapi/translator.go:214`
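The IP check itself is simple. A minimal sketch of the predicate (the real `FilterResponse` applies it while walking result rows in `translator.go`):
```go
// keepDomain reports whether a domain row should stay in the response.
// net.ParseIP returns nil for hostnames, so only literal IPv4/IPv6
// addresses are dropped when show_ip is false.
func keepDomain(domain string, showIP bool) bool {
	return showIP || net.ParseIP(domain) == nil
}
```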
---
## Required Attributes
### For Domain Grouping
- `net.peer.name` OR `server.address` (required)
### For Filtering
- `http.url` OR `url.full` (required)
- `kind_string = 'Client'` (required)
### Not Required
- `http.target` - Not used in external API monitoring
### Known Bug
`buildEndpointsQuery()` uses `count_distinct(http.url)`, but the base filter also allows `url.full`. Spans that only have `url.full` pass the filter yet don't contribute to the endpoint count.
**Fix Needed**: Update aggregation to handle both attributes:
```go
// Current (buggy)
{Expression: "count_distinct(http.url)"}
// Should be
{Expression: "count_distinct(coalesce(http.url, url.full))"}
```
---
## Frontend Data Flow
### 1. Domain List View
```
DomainList component
→ useListOverview({ start, end, show_ip, filter })
→ listOverview API call
→ formatDataForTable(response)
→ Table display
```
### 2. Domain Details View
```
User clicks domain
→ DomainDetails drawer opens
→ Multiple queries:
- DomainMetrics (overview cards)
- AllEndpoints (endpoint table)
- TopErrors (error table)
- EndPointDetails (when endpoint selected)
```
### 3. Data Formatting
- `formatDataForTable()` - Converts API response to table format
- Handles `n/a` values, converts nanoseconds to milliseconds
- Maps column names to display fields
---
## Query Examples
### Domain List Query
```sql
SELECT
multiIf(
mapContains(attributes_string, 'server.address'),
attributes_string['server.address'],
mapContains(attributes_string, 'net.peer.name'),
attributes_string['net.peer.name'],
NULL
) AS domain,
count_distinct(attributes_string['http.url']) AS endpoints,
rate() AS rps,
p99(duration_nano) AS p99,
max(timestamp) AS lastseen
FROM signoz_traces.distributed_signoz_index_v3
WHERE
(mapContains(attributes_string, 'http.url') = 1
OR mapContains(attributes_string, 'url.full') = 1)
AND kind_string = 'Client'
AND timestamp >= ? AND timestamp < ?
GROUP BY domain
```
---
## Testing
### Key Test Files
- `frontend/src/container/ApiMonitoring/__tests__/AllEndpointsWidgetV5Migration.test.tsx`
- `frontend/src/container/ApiMonitoring/__tests__/EndpointDropdownV5Migration.test.tsx`
- `pkg/modules/thirdpartyapi/translator_test.go`
### Test Scenarios
1. Domain filtering with both semconv attributes
2. URL handling (http.url vs url.full)
3. IP address filtering
4. Error rate calculation
5. Empty state handling
---
## Common Issues
### Empty State
**Symptom**: No domains shown despite data existing
**Causes**:
1. Missing `net.peer.name` or `server.address`
2. Missing `http.url` or `url.full`
3. Spans not marked as `kind_string = 'Client'`
4. Bug: Only `url.full` present but query uses `count_distinct(http.url)`
### Performance
- Queries use `ts_bucket_start` for time partitioning
- Resource filtering uses separate `distributed_traces_v3_resource` table
- Materialized columns improve performance for common attributes
---
## Quick Start Checklist
- [ ] Understand trace table schema (`signoz_index_v3`)
- [ ] Review `BuildDomainList()` in `translator.go`
- [ ] Check `getDomainList()` handler in `http_handler.go`
- [ ] Review frontend `DomainList.tsx` component
- [ ] Understand semconv attribute mapping (legacy vs current)
- [ ] Test with spans that have required attributes
- [ ] Review post-processing functions (merge, filter)
---
## References
- **Trace Schema**: `pkg/telemetrytraces/field_mapper.go`
- **Query Builder**: `pkg/telemetrytraces/statement_builder.go`
- **API Routes**: `pkg/query-service/app/http_handler.go:2157`
- **Constants**: `pkg/modules/thirdpartyapi/translator.go:14-20`

View File

@@ -1,980 +0,0 @@
# Query Range API (V5) - Developer Guide
This document provides a comprehensive guide to the Query Range API (V5), which is the primary query endpoint for traces, logs, and metrics in SigNoz. It covers architecture, request/response models, code flows, and implementation details.
## Table of Contents
1. [Overview](#overview)
2. [API Endpoint](#api-endpoint)
3. [Request/Response Models](#requestresponse-models)
4. [Query Types](#query-types)
5. [Request Types](#request-types)
6. [Code Flow](#code-flow)
7. [Key Components](#key-components)
8. [Query Execution](#query-execution)
9. [Caching](#caching)
10. [Result Processing](#result-processing)
11. [Performance Considerations](#performance-considerations)
12. [Extending the API](#extending-the-api)
---
## Overview
The Query Range API (V5) is the unified query endpoint for all telemetry signals (traces, logs, metrics) in SigNoz. It provides:
- **Unified Interface**: Single endpoint for all signal types
- **Query Builder**: Visual query builder support
- **Multiple Query Types**: Builder queries, PromQL, ClickHouse SQL, Formulas, Trace Operators
- **Flexible Response Types**: Time series, scalar, raw data, trace-specific
- **Advanced Features**: Aggregations, filters, group by, ordering, pagination
- **Caching**: Intelligent caching for performance
### Key Technologies
- **Backend**: Go (Golang)
- **Storage**: ClickHouse (columnar database)
- **Query Language**: Custom query builder + PromQL + ClickHouse SQL
- **Protocol**: HTTP/REST API
---
## API Endpoint
### Endpoint Details
**URL**: `POST /api/v5/query_range`
**Handler**: `QuerierAPI.QueryRange` → `querier.QueryRange`
**Location**:
- Handler: `pkg/querier/querier.go:122`
- Route Registration: `pkg/query-service/app/http_handler.go:480`
**Authentication**: Requires ViewAccess permission
**Content-Type**: `application/json`
### Request Flow
```
HTTP Request (POST /api/v5/query_range)
  → HTTP Handler (QuerierAPI.QueryRange)
  → Querier.QueryRange (pkg/querier/querier.go)
  → Query Execution (Statement Builders → ClickHouse)
  → Result Processing & Merging
  → HTTP Response (QueryRangeResponse)
```
---
## Request/Response Models
### Request Model
**Location**: `pkg/types/querybuildertypes/querybuildertypesv5/req.go`
```go
type QueryRangeRequest struct {
Start uint64 // Start timestamp (milliseconds)
End uint64 // End timestamp (milliseconds)
RequestType RequestType // Response type (TimeSeries, Scalar, Raw, Trace)
Variables map[string]VariableItem // Template variables
CompositeQuery CompositeQuery // Container for queries
NoCache bool // Skip cache flag
}
```
### Composite Query
```go
type CompositeQuery struct {
Queries []QueryEnvelope // Array of queries to execute
}
```
### Query Envelope
```go
type QueryEnvelope struct {
Type QueryType // Query type (Builder, PromQL, ClickHouseSQL, Formula, TraceOperator)
Spec any // Query specification (type-specific)
}
```
### Response Model
**Location**: `pkg/types/querybuildertypes/querybuildertypesv5/req.go`
```go
type QueryRangeResponse struct {
Type RequestType // Response type
Data QueryData // Query results
Meta ExecStats // Execution statistics
Warning *QueryWarnData // Warnings (if any)
QBEvent *QBEvent // Query builder event metadata
}
type QueryData struct {
Results []any // Array of result objects (type depends on RequestType)
}
type ExecStats struct {
RowsScanned uint64 // Total rows scanned
BytesScanned uint64 // Total bytes scanned
DurationMS uint64 // Query duration in milliseconds
StepIntervals map[string]uint64 // Step intervals per query
}
```
---
## Query Types
The API supports multiple query types, each with its own specification format.
### 1. Builder Query (`QueryTypeBuilder`)
Visual query builder queries. Supports traces, logs, and metrics.
**Spec Type**: `QueryBuilderQuery[T]` where T is:
- `TraceAggregation` for traces
- `LogAggregation` for logs
- `MetricAggregation` for metrics
**Example**:
```go
QueryBuilderQuery[TraceAggregation] {
Name: "query_name",
Signal: SignalTraces,
Filter: &Filter {
Expression: "service.name = 'api' AND duration_nano > 1000000",
},
Aggregations: []TraceAggregation {
{Expression: "count()", Alias: "total"},
{Expression: "avg(duration_nano)", Alias: "avg_duration"},
},
GroupBy: []GroupByKey {...},
Order: []OrderBy {...},
Limit: 100,
}
```
**Key Files**:
- Traces: `pkg/telemetrytraces/statement_builder.go`
- Logs: `pkg/telemetrylogs/statement_builder.go`
- Metrics: `pkg/telemetrymetrics/statement_builder.go`
### 2. PromQL Query (`QueryTypePromQL`)
Prometheus Query Language queries for metrics.
**Spec Type**: `PromQuery`
**Example**:
```go
PromQuery {
Query: "rate(http_requests_total[5m])",
Step: Step{Duration: time.Minute},
}
```
**Key Files**: `pkg/querier/promql_query.go`
### 3. ClickHouse SQL Query (`QueryTypeClickHouseSQL`)
Direct ClickHouse SQL queries.
**Spec Type**: `ClickHouseQuery`
**Example**:
```go
ClickHouseQuery {
Query: "SELECT count() FROM signoz_traces.distributed_signoz_index_v3 WHERE ...",
}
```
**Key Files**: `pkg/querier/ch_sql_query.go`
### 4. Formula Query (`QueryTypeFormula`)
Mathematical formulas combining other queries.
**Spec Type**: `QueryBuilderFormula`
**Example**:
```go
QueryBuilderFormula {
Expression: "A / B * 100", // A and B are query names
}
```
**Key Files**: `pkg/querier/formula_query.go`
### 5. Trace Operator Query (`QueryTypeTraceOperator`)
Set operations on trace queries (AND, OR, NOT).
**Spec Type**: `QueryBuilderTraceOperator`
**Example**:
```go
QueryBuilderTraceOperator {
Expression: "A AND B", // A and B are query names
Filter: &Filter {...},
}
```
**Key Files**:
- `pkg/telemetrytraces/trace_operator_statement_builder.go`
- `pkg/querier/trace_operator_query.go`
---
## Request Types
The `RequestType` determines the format of the response data.
### 1. `RequestTypeTimeSeries`
Returns time series data for charts.
**Response Format**: `TimeSeriesData`
```go
type TimeSeriesData struct {
QueryName string
Aggregations []AggregationBucket
}
type AggregationBucket struct {
Index int
Series []TimeSeries
Alias string
Meta AggregationMeta
}
type TimeSeries struct {
Labels map[string]string
Values []TimeSeriesValue
}
type TimeSeriesValue struct {
Timestamp int64
Value float64
}
```
**Use Case**: Line charts, bar charts, area charts
### 2. `RequestTypeScalar`
Returns a single scalar value.
**Response Format**: `ScalarData`
```go
type ScalarData struct {
QueryName string
Data []ScalarValue
}
type ScalarValue struct {
Timestamp int64
Value float64
}
```
**Use Case**: Single value displays, stat panels
### 3. `RequestTypeRaw`
Returns raw data rows.
**Response Format**: `RawData`
```go
type RawData struct {
QueryName string
Columns []string
Rows []RawDataRow
}
type RawDataRow struct {
Timestamp time.Time
Data map[string]any
}
```
**Use Case**: Tables, logs viewer, trace lists
### 4. `RequestTypeTrace`
Returns trace-specific data structure.
**Response Format**: Trace-specific format (see traces documentation)
**Use Case**: Trace-specific visualizations
---
## Code Flow
### Complete Request Flow
```
1. HTTP Request
POST /api/v5/query_range
Body: QueryRangeRequest JSON
2. HTTP Handler
QuerierAPI.QueryRange (pkg/querier/querier.go)
- Validates request
- Extracts organization ID from auth context
3. Querier.QueryRange (pkg/querier/querier.go:122)
- Validates QueryRangeRequest
- Processes each query in CompositeQuery.Queries
- Identifies dependencies (e.g., trace operators, formulas)
- Calculates step intervals
- Fetches metric temporality if needed
4. Query Creation
For each QueryEnvelope:
a. Builder Query:
- newBuilderQuery() creates builderQuery instance
- Selects appropriate statement builder based on signal:
* Traces → traceStmtBuilder
* Logs → logStmtBuilder
* Metrics → metricStmtBuilder or meterStmtBuilder
b. PromQL Query:
- newPromqlQuery() creates promqlQuery instance
- Uses Prometheus engine
c. ClickHouse SQL Query:
- newchSQLQuery() creates chSQLQuery instance
- Direct SQL execution
d. Formula Query:
- newFormulaQuery() creates formulaQuery instance
- References other queries by name
e. Trace Operator Query:
- newTraceOperatorQuery() creates traceOperatorQuery instance
- Uses traceOperatorStmtBuilder
5. Statement Building (for Builder queries)
StatementBuilder.Build()
- Resolves field keys from metadata store
- Builds SQL based on request type:
* RequestTypeRaw → buildListQuery()
* RequestTypeTimeSeries → buildTimeSeriesQuery()
* RequestTypeScalar → buildScalarQuery()
* RequestTypeTrace → buildTraceQuery()
- Returns SQL statement with arguments
6. Query Execution
Query.Execute()
- Executes SQL/query against ClickHouse or Prometheus
- Processes results into response format
- Returns Result with data and statistics
7. Caching (if applicable)
- Checks bucket cache for time series queries
- Executes queries for missing time ranges
- Merges cached and fresh results
8. Result Processing
querier.run()
- Executes all queries (with dependency resolution)
- Collects results and warnings
- Merges results from multiple queries
9. Post-Processing
postProcessResults()
- Applies formulas if present
- Handles variable substitution
- Formats results for response
10. HTTP Response
- Returns QueryRangeResponse with results
- Includes execution statistics
- Includes warnings if any
```
### Key Decision Points
1. **Query Type Selection**: Based on `QueryEnvelope.Type`
2. **Signal Selection**: For builder queries, based on `Signal` field
3. **Request Type Handling**: Different SQL generation for different request types
4. **Caching Strategy**: Only for time series queries with valid fingerprints
5. **Dependency Resolution**: Trace operators and formulas resolve dependencies first
---
## Key Components
### 1. Querier
**Location**: `pkg/querier/querier.go`
**Purpose**: Orchestrates query execution, caching, and result merging
**Key Methods**:
- `QueryRange()`: Main entry point for query execution
- `run()`: Executes queries and merges results
- `executeWithCache()`: Handles caching logic
- `mergeResults()`: Merges cached and fresh results
- `postProcessResults()`: Applies formulas and variable substitution
**Key Features**:
- Query orchestration across multiple query types
- Intelligent caching with bucket-based strategy
- Result merging from multiple queries
- Formula evaluation
- Time range optimization
- Step interval calculation and validation
### 2. Statement Builder Interface
**Location**: `pkg/types/querybuildertypes/querybuildertypesv5/`
**Purpose**: Converts query builder specifications into executable queries
**Interface**:
```go
type StatementBuilder[T any] interface {
Build(
ctx context.Context,
start uint64,
end uint64,
requestType RequestType,
query QueryBuilderQuery[T],
variables map[string]VariableItem,
) (*Statement, error)
}
```
**Implementations**:
- `traceQueryStatementBuilder` - Traces (`pkg/telemetrytraces/statement_builder.go`)
- `logQueryStatementBuilder` - Logs (`pkg/telemetrylogs/statement_builder.go`)
- `metricQueryStatementBuilder` - Metrics (`pkg/telemetrymetrics/statement_builder.go`)
**Key Features**:
- Field resolution via metadata store
- SQL generation for different request types
- Filter, aggregation, group by, ordering support
- Time range optimization
### 3. Query Interface
**Location**: `pkg/types/querybuildertypes/querybuildertypesv5/`
**Purpose**: Represents an executable query
**Interface**:
```go
type Query interface {
Execute(ctx context.Context) (*Result, error)
Fingerprint() string // For caching
Window() (uint64, uint64) // Time range
}
```
**Implementations**:
- `builderQuery[T]` - Builder queries (`pkg/querier/builder_query.go`)
- `promqlQuery` - PromQL queries (`pkg/querier/promql_query.go`)
- `chSQLQuery` - ClickHouse SQL queries (`pkg/querier/ch_sql_query.go`)
- `formulaQuery` - Formula queries (`pkg/querier/formula_query.go`)
- `traceOperatorQuery` - Trace operator queries (`pkg/querier/trace_operator_query.go`)
### 4. Telemetry Store
**Location**: `pkg/telemetrystore/`
**Purpose**: Abstraction layer for ClickHouse database access
**Key Methods**:
- `Query()`: Execute SQL query
- `QueryRow()`: Execute query returning single row
- `Select()`: Execute query returning multiple rows
**Implementation**: `clickhouseTelemetryStore` (`pkg/telemetrystore/clickhousetelemetrystore/`)
### 5. Metadata Store
**Location**: `pkg/types/telemetrytypes/`
**Purpose**: Provides metadata about available fields, keys, and attributes
**Key Methods**:
- `GetKeysMulti()`: Get field keys for multiple selectors
- `FetchTemporalityMulti()`: Get metric temporality information
**Implementation**: `telemetryMetadataStore` (`pkg/telemetrymetadata/`)
### 6. Bucket Cache
**Location**: `pkg/querier/`
**Purpose**: Caches query results by time buckets for performance
**Key Methods**:
- `GetMissRanges()`: Get time ranges not in cache
- `Put()`: Store query result in cache
**Features**:
- Bucket-based caching (aligned to step intervals)
- Automatic cache invalidation
- Parallel query execution for missing ranges
---
## Query Execution
### Builder Query Execution
**Location**: `pkg/querier/builder_query.go`
**Process**:
1. Statement builder generates SQL
2. SQL executed against ClickHouse via TelemetryStore
3. Results processed based on RequestType:
- TimeSeries: Grouped by time buckets and labels
- Scalar: Single value extraction
- Raw: Row-by-row processing
4. Statistics collected (rows scanned, bytes scanned, duration)
### PromQL Query Execution
**Location**: `pkg/querier/promql_query.go`
**Process**:
1. Query parsed by Prometheus engine
2. Executed against Prometheus-compatible data
3. Results converted to QueryRangeResponse format
### ClickHouse SQL Query Execution
**Location**: `pkg/querier/ch_sql_query.go`
**Process**:
1. SQL query executed directly
2. Results processed based on RequestType
3. Variable substitution applied
### Formula Query Execution
**Location**: `pkg/querier/formula_query.go`
**Process**:
1. Referenced queries executed first
2. Formula expression evaluated using govaluate
3. Results computed from query results
### Trace Operator Query Execution
**Location**: `pkg/querier/trace_operator_query.go`
**Process**:
1. Expression parsed to find dependencies
2. Referenced queries executed
3. Set operations applied (INTERSECT, UNION, EXCEPT)
4. Results combined
---
## Caching
### Caching Strategy
**Location**: `pkg/querier/querier.go:642`
**When Caching Applies**:
- Time series queries only
- Queries with valid fingerprints
- `NoCache` flag not set
**How It Works**:
1. Query fingerprint generated (includes query structure, filters, time range)
2. Cache checked for existing results
3. Missing time ranges identified (sketched below)
4. Queries executed only for missing ranges (parallel execution)
5. Fresh results merged with cached results
6. Merged result stored in cache
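A minimal sketch of step 3, computing the missing sub-windows from sorted, non-overlapping cached ranges. This is not the actual `GetMissRanges` implementation, just the idea (uses Go 1.21's built-in `min`/`max`):
```go
type timeRange struct {
	Start, End uint64 // milliseconds, half-open [Start, End)
}

// missRanges returns the parts of [start, end) not covered by cached,
// which must be sorted by Start and non-overlapping.
func missRanges(cached []timeRange, start, end uint64) []timeRange {
	var miss []timeRange
	cur := start
	for _, c := range cached {
		if c.End <= cur {
			continue // entirely before the uncovered cursor
		}
		if c.Start >= end {
			break // entirely after the requested window
		}
		if c.Start > cur {
			miss = append(miss, timeRange{Start: cur, End: min(c.Start, end)})
		}
		cur = max(cur, c.End)
		if cur >= end {
			break
		}
	}
	if cur < end {
		miss = append(miss, timeRange{Start: cur, End: end})
	}
	return miss
}
```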
### Cache Key Generation
**Location**: `pkg/querier/builder_query.go:52`
The fingerprint includes:
- Signal type
- Source type
- Step interval
- Aggregations
- Filters
- Group by fields
- Time range (for cache key, not fingerprint)
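A hypothetical sketch of such a fingerprint: hash the reusable parts of the query and leave the time range out, so different windows can share cached buckets (the real implementation lives in `builder_query.go`):
```go
func fingerprint(signal, source, step string, aggregations, filters, groupBy []string) string {
	h := sha256.New()
	for _, section := range [][]string{
		{signal, source, step},
		aggregations,
		filters,
		groupBy,
	} {
		for _, part := range section {
			h.Write([]byte(part))
			h.Write([]byte{0}) // separator so "a","bc" hashes differently from "ab","c"
		}
	}
	return hex.EncodeToString(h.Sum(nil))
}
```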
### Cache Benefits
- **Performance**: Avoids re-executing identical queries
- **Efficiency**: Only queries missing time ranges
- **Parallelism**: Multiple missing ranges queried in parallel
---
## Result Processing
### Result Merging
**Location**: `pkg/querier/querier.go:795`
**Process**:
1. Results from multiple queries collected
2. For time series: Series merged by labels
3. For raw data: Rows combined
4. Statistics aggregated (rows scanned, bytes scanned, duration)
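A sketch of step 2, merging series by their label sets. The `TimeSeries` shape mirrors the response model shown earlier; the key function is illustrative, not SigNoz's actual merge code:
```go
// labelsKey builds a deterministic key so cached and fresh series with
// the same label set collapse into one series.
func labelsKey(labels map[string]string) string {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, k := range keys {
		b.WriteString(k)
		b.WriteByte('=')
		b.WriteString(labels[k])
		b.WriteByte(';')
	}
	return b.String()
}

func mergeSeries(batches ...[]TimeSeries) []TimeSeries {
	byKey := make(map[string]*TimeSeries)
	var order []string
	for _, batch := range batches {
		for _, s := range batch {
			k := labelsKey(s.Labels)
			if existing, ok := byKey[k]; ok {
				existing.Values = append(existing.Values, s.Values...)
				continue
			}
			copied := s
			byKey[k] = &copied
			order = append(order, k)
		}
	}
	merged := make([]TimeSeries, 0, len(order))
	for _, k := range order {
		s := byKey[k]
		// Keep points in time order after combining cached + fresh values.
		sort.Slice(s.Values, func(i, j int) bool {
			return s.Values[i].Timestamp < s.Values[j].Timestamp
		})
		merged = append(merged, *s)
	}
	return merged
}
```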
### Formula Evaluation
**Location**: `pkg/querier/formula_query.go`
**Process**:
1. Formula expression parsed
2. Referenced query results retrieved
3. Expression evaluated using the govaluate library (see the sketch below)
4. Result computed and formatted
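Step 3 in miniature, using the `govaluate` library (`github.com/Knetic/govaluate`) that the formula engine is built on. The values here are stand-ins for per-timestamp series values:
```go
func evalFormula() (interface{}, error) {
	expr, err := govaluate.NewEvaluableExpression("A / B * 100")
	if err != nil {
		return nil, err
	}
	// In practice A and B come from the referenced queries' results,
	// evaluated point by point across aligned timestamps.
	return expr.Evaluate(map[string]interface{}{"A": 12.0, "B": 48.0}) // → 25
}
```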
### Variable Substitution
**Location**: `pkg/querier/querier.go`
**Process**:
1. Variables extracted from request
2. Variable values substituted in queries
3. Applied to filters, aggregations, and other query parts
---
## Performance Considerations
### Query Optimization
1. **Time Range Optimization**:
- For trace queries with `trace_id` filter, query `trace_summary` first to narrow time range
- Use appropriate time ranges to limit data scanned
2. **Step Interval Calculation** (see the sketch after this list):
- Automatic step interval calculation based on time range
- Minimum step interval enforcement
- Warnings for suboptimal intervals
3. **Index Usage**:
- Queries use time bucket columns (`ts_bucket_start`) for efficient filtering
- Proper filter placement for index utilization
4. **Limit Enforcement**:
- Raw data queries should include limits
- Pagination support via offset/cursor
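A hypothetical illustration of the step-interval heuristic in item 2; the rounding and parameters are assumptions, not SigNoz's actual algorithm:
```go
// calcStep picks a step so a time-series query returns at most maxPoints
// points, rounded up to whole seconds and never below minStep.
func calcStep(startMs, endMs uint64, maxPoints int, minStep time.Duration) time.Duration {
	window := time.Duration(endMs-startMs) * time.Millisecond
	step := window / time.Duration(maxPoints)
	if rem := step % time.Second; rem != 0 {
		step += time.Second - rem // round up so buckets align cleanly
	}
	if step < minStep {
		step = minStep
	}
	return step
}
```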
### Best Practices
1. **Use Query Builder**: Prefer query builder over raw SQL for better optimization
2. **Limit Time Ranges**: Always specify reasonable time ranges
3. **Use Aggregations**: For large datasets, use aggregations instead of raw data
4. **Cache Awareness**: Be mindful of cache TTLs when testing
5. **Parallel Queries**: Multiple independent queries execute in parallel
6. **Step Intervals**: Let system calculate optimal step intervals
### Monitoring
Execution statistics are included in response:
- `RowsScanned`: Total rows scanned
- `BytesScanned`: Total bytes scanned
- `DurationMS`: Query execution time
- `StepIntervals`: Step intervals per query
---
## Extending the API
### Adding a New Query Type
1. **Define Query Type** (`pkg/types/querybuildertypes/querybuildertypesv5/query.go`):
```go
const (
QueryTypeMyNewType QueryType = "my_new_type"
)
```
2. **Define Query Spec**:
```go
type MyNewQuerySpec struct {
Name string
// ... your fields
}
```
3. **Update QueryEnvelope Unmarshaling** (`pkg/types/querybuildertypes/querybuildertypesv5/query.go`):
```go
case QueryTypeMyNewType:
var spec MyNewQuerySpec
if err := UnmarshalJSONWithContext(shadow.Spec, &spec, "my new query spec"); err != nil {
return wrapUnmarshalError(err, "invalid my new query spec: %v", err)
}
q.Spec = spec
```
4. **Implement Query Interface** (`pkg/querier/my_new_query.go`):
```go
type myNewQuery struct {
spec MyNewQuerySpec
// ... other fields
}
func (q *myNewQuery) Execute(ctx context.Context) (*qbtypes.Result, error) {
// Implementation
}
func (q *myNewQuery) Fingerprint() string {
// Generate fingerprint for caching
}
func (q *myNewQuery) Window() (uint64, uint64) {
// Return time range
}
```
5. **Update Querier** (`pkg/querier/querier.go`):
```go
case QueryTypeMyNewType:
myQuery, ok := query.Spec.(MyNewQuerySpec)
if !ok {
return nil, errors.NewInvalidInputf(...)
}
queries[myQuery.Name] = newMyNewQuery(myQuery, ...)
```
### Adding a New Request Type
1. **Define Request Type** (`pkg/types/querybuildertypes/querybuildertypesv5/req.go`):
```go
const (
RequestTypeMyNewType RequestType = "my_new_type"
)
```
2. **Update Statement Builders**: Add handling in `Build()` method
3. **Update Query Execution**: Add result processing for new type
4. **Update Response Models**: Add response data structure
### Adding a New Aggregation Function
1. **Update Aggregation Rewriter** (`pkg/querybuilder/agg_expr_rewriter.go`):
```go
func (r *aggExprRewriter) RewriteAggregation(expr string) (string, error) {
if strings.HasPrefix(expr, "my_function(") {
// Parse arguments
// Return ClickHouse SQL expression
return "myClickHouseFunction(...)", nil
}
// ... existing functions
}
```
2. **Update Documentation**: Document the new function
---
## Common Patterns
### Pattern 1: Simple Time Series Query
```go
req := qbtypes.QueryRangeRequest{
Start: startMs,
End: endMs,
RequestType: qbtypes.RequestTypeTimeSeries,
CompositeQuery: qbtypes.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "A",
Signal: telemetrytypes.SignalMetrics,
Aggregations: []qbtypes.MetricAggregation{
{Expression: "sum(rate)", Alias: "total"},
},
StepInterval: qbtypes.Step{Duration: time.Minute},
},
},
},
},
}
```
### Pattern 2: Query with Filter and Group By
```go
req := qbtypes.QueryRangeRequest{
Start: startMs,
End: endMs,
RequestType: qbtypes.RequestTypeTimeSeries,
CompositeQuery: qbtypes.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
Name: "A",
Signal: telemetrytypes.SignalTraces,
Filter: &qbtypes.Filter{
Expression: "service.name = 'api' AND duration_nano > 1000000",
},
Aggregations: []qbtypes.TraceAggregation{
{Expression: "count()", Alias: "total"},
},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{
Name: "service.name",
FieldContext: telemetrytypes.FieldContextResource,
}},
},
},
},
},
},
}
```
### Pattern 3: Formula Query
```go
req := qbtypes.QueryRangeRequest{
Start: startMs,
End: endMs,
RequestType: qbtypes.RequestTypeTimeSeries,
CompositeQuery: qbtypes.CompositeQuery{
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "A",
// ... query A definition
},
},
{
Type: qbtypes.QueryTypeBuilder,
Spec: qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation]{
Name: "B",
// ... query B definition
},
},
{
Type: qbtypes.QueryTypeFormula,
Spec: qbtypes.QueryBuilderFormula{
Name: "C",
Expression: "A / B * 100",
},
},
},
},
}
```
---
## Testing
### Unit Tests
- `pkg/querier/querier_test.go` - Querier tests
- `pkg/querier/builder_query_test.go` - Builder query tests
- `pkg/querier/formula_query_test.go` - Formula query tests
### Integration Tests
- `tests/integration/` - End-to-end API tests
### Running Tests
```bash
# Run all querier tests
go test ./pkg/querier/...
# Run with verbose output
go test -v ./pkg/querier/...
# Run specific test
go test -v ./pkg/querier/ -run TestQueryRange
```
---
## Debugging
### Enable Debug Logging
```go
// In querier.go
q.logger.DebugContext(ctx, "Executing query",
"query", queryName,
"start", start,
"end", end)
```
### Common Issues
1. **Query Not Found**: Check query name matches in CompositeQuery
2. **SQL Errors**: Check generated SQL in logs, verify ClickHouse syntax
3. **Performance**: Check execution statistics, optimize time ranges
4. **Cache Issues**: Set `NoCache: true` to bypass cache
5. **Formula Errors**: Check formula expression syntax and referenced query names
---
## References
### Key Files
- `pkg/querier/querier.go` - Main query orchestration
- `pkg/querier/builder_query.go` - Builder query execution
- `pkg/types/querybuildertypes/querybuildertypesv5/` - Request/response models
- `pkg/telemetrystore/` - ClickHouse interface
- `pkg/telemetrymetadata/` - Metadata store
### Signal-Specific Documentation
- [Traces Module](./TRACES_MODULE.md) - Trace-specific details
- Logs module documentation (when available)
- Metrics module documentation (when available)
### Related Documentation
- [ClickHouse Documentation](https://clickhouse.com/docs)
- [PromQL Documentation](https://prometheus.io/docs/prometheus/latest/querying/basics/)
---
## Contributing
When contributing to the Query Range API:
1. **Follow Existing Patterns**: Match the style of existing query types
2. **Add Tests**: Include unit tests for new functionality
3. **Update Documentation**: Update this doc for significant changes
4. **Consider Performance**: Optimize queries and use caching appropriately
5. **Handle Errors**: Provide meaningful error messages
For questions or help, reach out to the maintainers or open an issue.

View File

@@ -1,185 +0,0 @@
# SigNoz Span Metrics Processor
The `signozspanmetricsprocessor` is an OpenTelemetry Collector processor that intercepts trace data to generate RED metrics (Rate, Errors, Duration) from spans.
**Location:** `signoz-otel-collector/processor/signozspanmetricsprocessor/`
## Trace Interception
The processor implements `consumer.Traces` interface and sits in the traces pipeline:
```go
func (p *processorImp) ConsumeTraces(ctx context.Context, traces ptrace.Traces) error {
p.lock.Lock()
p.aggregateMetrics(traces)
p.lock.Unlock()
return p.tracesConsumer.ConsumeTraces(ctx, traces) // forward unchanged
}
```
All traces flow through this method. Metrics are aggregated, then traces are forwarded unmodified to the next consumer.
## Metrics Generated
| Metric | Type | Description |
|--------|------|-------------|
| `signoz_latency` | Histogram | Span latency by service/operation/kind/status |
| `signoz_calls_total` | Counter | Call count per service/operation/kind/status |
| `signoz_db_latency_sum/count` | Counter | DB call latency (spans with `db.system` attribute) |
| `signoz_external_call_latency_sum/count` | Counter | External call latency (client spans with remote address) |
### Dimensions
All metrics include these base dimensions:
- `service.name` - from resource attributes
- `operation` - span name
- `span.kind` - SPAN_KIND_SERVER, SPAN_KIND_CLIENT, etc.
- `status.code` - STATUS_CODE_OK, STATUS_CODE_ERROR, etc.
Additional dimensions can be configured.
## Aggregation Flow
```
traces pipeline
┌─────────────────────────────────────────────────────────┐
│ ConsumeTraces() │
│ │ │
│ ▼ │
│ aggregateMetrics(traces) │
│ │ │
│ ├── for each ResourceSpan │
│ │ extract service.name │
│ │ │ │
│ │ ├── for each Span │
│ │ │ │ │
│ │ │ ▼ │
│ │ │ aggregateMetricsForSpan() │
│ │ │ ├── skip stale spans (>24h) │
│ │ │ ├── skip excluded patterns │
│ │ │ ├── calculate latency │
│ │ │ ├── build metric key │
│ │ │ ├── update histograms │
│ │ │ └── cache dimensions │
│ │ │ │
│ ▼ │
│ forward traces to next consumer │
└─────────────────────────────────────────────────────────┘
```
### Periodic Export
A background goroutine exports aggregated metrics on a ticker interval:
```go
go func() {
for {
select {
case <-p.ticker.C:
p.exportMetrics(ctx) // build and send to metrics exporter
}
}
}()
```
## Key Design Features
### 1. Time Bucketing (Delta Temporality)
For delta temporality, metric keys include a time bucket prefix:
```go
if p.config.GetAggregationTemporality() == pmetric.AggregationTemporalityDelta {
p.AddTimeToKeyBuf(span.StartTimestamp().AsTime()) // truncated to interval
}
```
- Spans are grouped by time bucket (default: 1 minute; see the sketch below)
- After export, buckets are reset
- Memory-efficient for high-cardinality data
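A tiny self-contained sketch of the truncation (illustrative, not the processor's exact code):
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Truncate a span start time to its 1-minute bucket, mirroring how
	// delta-temporality metric keys are time-bucketed before export.
	interval := time.Minute
	start := time.Date(2024, 1, 1, 10, 3, 42, 0, time.UTC)
	fmt.Println(start.Truncate(interval)) // 2024-01-01 10:03:00 +0000 UTC
}
```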
### 2. LRU Dimension Caching
Dimension key-value maps are cached to avoid rebuilding:
```go
if _, has := p.metricKeyToDimensions.Get(k); !has {
p.metricKeyToDimensions.Add(k, p.buildDimensionKVs(...))
}
```
- Configurable cache size (`DimensionsCacheSize`)
- Evicted keys are also removed from histograms
### 3. Cardinality Protection
Prevents memory explosion from high cardinality:
```go
if len(p.serviceToOperations) > p.maxNumberOfServicesToTrack {
serviceName = "overflow_service"
}
if len(p.serviceToOperations[serviceName]) > p.maxNumberOfOperationsToTrackPerService {
spanName = "overflow_operation"
}
```
Excess services/operations are aggregated into overflow buckets.
### 4. Exemplars
Trace/span IDs attached to histogram samples for metric-to-trace correlation:
```go
histo.exemplarsData = append(histo.exemplarsData, exemplarData{
traceID: traceID,
spanID: spanID,
value: latency,
})
```
Enables "show me a trace that caused this latency spike" in UI.
## Configuration Options
| Option | Description | Default |
|--------|-------------|---------|
| `metrics_exporter` | Target exporter for generated metrics | required |
| `latency_histogram_buckets` | Custom histogram bucket boundaries | 2,4,6,8,10,50,100,200,400,800,1000,1400,2000,5000,10000,15000 ms |
| `dimensions` | Additional span/resource attributes to include | [] |
| `dimensions_cache_size` | LRU cache size for dimension maps | 1000 |
| `aggregation_temporality` | cumulative or delta | cumulative |
| `time_bucket_interval` | Bucket interval for delta temporality | 1m |
| `skip_spans_older_than` | Skip stale spans | 24h |
| `max_services_to_track` | Cardinality limit for services | - |
| `max_operations_to_track_per_service` | Cardinality limit for operations | - |
| `exclude_patterns` | Regex patterns to skip spans | [] |
## Pipeline Configuration Example
```yaml
processors:
signozspanmetrics:
metrics_exporter: clickhousemetricswrite
latency_histogram_buckets: [2ms, 4ms, 6ms, 8ms, 10ms, 50ms, 100ms, 200ms]
dimensions:
- name: http.method
- name: http.status_code
dimensions_cache_size: 10000
aggregation_temporality: delta
pipelines:
traces:
receivers: [otlp]
processors: [signozspanmetrics, batch]
exporters: [clickhousetraces]
metrics:
receivers: [otlp]
exporters: [clickhousemetricswrite]
```
The processor sits in the traces pipeline but exports to a metrics pipeline exporter.

View File

@@ -1,832 +0,0 @@
# SigNoz Traces Module - Developer Guide
This document provides a comprehensive guide to understanding and contributing to the traces module in SigNoz. It covers architecture, APIs, code flows, and implementation details.
## Table of Contents
1. [Overview](#overview)
2. [Architecture](#architecture)
3. [Data Models](#data-models)
4. [API Endpoints](#api-endpoints)
5. [Code Flows](#code-flows)
6. [Key Components](#key-components)
7. [Query Building System](#query-building-system)
8. [Storage Schema](#storage-schema)
9. [Extending the Traces Module](#extending-the-traces-module)
---
## Overview
The traces module in SigNoz handles distributed tracing data from OpenTelemetry. It provides:
- **Ingestion**: Receives traces via OpenTelemetry Collector
- **Storage**: Stores traces in ClickHouse
- **Querying**: Supports complex queries with filters, aggregations, and trace operators
- **Visualization**: Provides waterfall and flamegraph views
- **Trace Funnels**: Advanced analytics for multi-step trace analysis
### Key Technologies
- **Backend**: Go (Golang)
- **Storage**: ClickHouse (columnar database)
- **Protocol**: OpenTelemetry Protocol (OTLP)
- **Query Language**: Custom query builder + ClickHouse SQL
---
## Architecture
### High-Level Flow
```
Application → OpenTelemetry SDK → OTLP Receiver →
[Processors: signozspanmetrics, batch] →
ClickHouse Traces Exporter → ClickHouse Database
Query Service (Go)
Frontend (React/TypeScript)
```
### Component Architecture
```
┌─────────────────────────────────────────────────────────┐
│ Frontend (React) │
│ - TracesExplorer │
│ - TraceDetail (Waterfall/Flamegraph) │
│ - Query Builder UI │
└────────────────────┬────────────────────────────────────┘
│ HTTP/REST API
┌────────────────────▼────────────────────────────────────┐
│ Query Service (Go) │
│ ┌──────────────────────────────────────────────────┐ │
│ │ HTTP Handlers (http_handler.go) │ │
│ │ - QueryRangeV5 (Main query endpoint) │ │
│ │ - GetWaterfallSpansForTrace │ │
│ │ - GetFlamegraphSpansForTrace │ │
│ │ - Trace Fields API │ │
│ └──────────────────────────────────────────────────┘ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Querier (querier.go) │ │
│ │ - Query orchestration │ │
│ │ - Cache management │ │
│ │ - Result merging │ │
│ └──────────────────────────────────────────────────┘ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Statement Builders │ │
│ │ - traceQueryStatementBuilder │ │
│ │ - traceOperatorStatementBuilder │ │
│ │ - Builds ClickHouse SQL from query specs │ │
│ └──────────────────────────────────────────────────┘ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ ClickHouse Reader (clickhouseReader/) │ │
│ │ - Direct trace retrieval │ │
│ │ - Waterfall/Flamegraph data processing │ │
│ └──────────────────────────────────────────────────┘ │
└────────────────────┬────────────────────────────────────┘
│ ClickHouse Protocol
┌────────────────────▼────────────────────────────────────┐
│ ClickHouse Database │
│ - signoz_traces.distributed_signoz_index_v3 │
│ - signoz_traces.distributed_trace_summary │
│ - signoz_traces.distributed_tag_attributes_v2 │
└──────────────────────────────────────────────────────────┘
```
---
## Data Models
### Core Trace Models
**Location**: `pkg/query-service/model/trace.go`
### Query Request Models
**Location**: `pkg/types/querybuildertypes/querybuildertypesv5/`
- `QueryRangeRequest`: Main query request structure
- `QueryBuilderQuery[TraceAggregation]`: Query builder specification for traces
- `QueryBuilderTraceOperator`: Trace operator query specification
- `CompositeQuery`: Container for multiple queries
---
## API Endpoints
### 1. Query Range API (V5) - Primary Query Endpoint
**Endpoint**: `POST /api/v5/query_range`
**Handler**: `QuerierAPI.QueryRange` → `querier.QueryRange`
**Purpose**: Main query endpoint for traces, logs, and metrics. Supports:
- Query builder queries
- Trace operator queries
- Aggregations, filters, group by
- Time series, scalar, and raw data requests
> **Note**: For detailed information about the Query Range API, including request/response models, query types, and common code flows, see the [Query Range API Documentation](./QUERY_RANGE_API.md).
**Trace-Specific Details**:
- Uses `traceQueryStatementBuilder` for SQL generation
- Supports trace-specific aggregations (count, avg, p99, etc. on duration_nano)
- Trace operator queries combine multiple trace queries with set operations
- Time range optimization when `trace_id` filter is present
**Key Files**:
- `pkg/telemetrytraces/statement_builder.go` - Trace SQL generation
- `pkg/telemetrytraces/trace_operator_statement_builder.go` - Trace operator SQL
- `pkg/querier/trace_operator_query.go` - Trace operator execution
### 2. Waterfall View API
**Endpoint**: `POST /api/v2/traces/waterfall/{traceId}`
**Handler**: `GetWaterfallSpansForTraceWithMetadata`
**Purpose**: Retrieves spans for waterfall visualization with metadata
**Request Parameters**:
```go
type GetWaterfallSpansForTraceWithMetadataParams struct {
SelectedSpanID string // Selected span to focus on
IsSelectedSpanIDUnCollapsed bool // Whether selected span is expanded
UncollapsedSpans []string // List of expanded span IDs
}
```
**Response**:
```go
type GetWaterfallSpansForTraceWithMetadataResponse struct {
StartTimestampMillis uint64 // Trace start time
EndTimestampMillis uint64 // Trace end time
DurationNano uint64 // Total duration
RootServiceName string // Root service
RootServiceEntryPoint string // Entry point operation
TotalSpansCount uint64 // Total spans
TotalErrorSpansCount uint64 // Error spans
ServiceNameToTotalDurationMap map[string]uint64 // Service durations
Spans []*Span // Span tree
HasMissingSpans bool // Missing spans indicator
UncollapsedSpans []string // Expanded spans
}
```
**Code Flow**:
```
Handler → ClickHouseReader.GetWaterfallSpansForTraceWithMetadata
→ Query trace_summary for time range
→ Query spans from signoz_index_v3
→ Build span tree structure
→ Apply uncollapsed/selected span logic
→ Return filtered spans (500 span limit)
```
**Key Files**:
- `pkg/query-service/app/http_handler.go:1748` - Handler
- `pkg/query-service/app/clickhouseReader/reader.go:873` - Implementation
- `pkg/query-service/app/traces/tracedetail/waterfall.go` - Tree processing
### 3. Flamegraph View API
**Endpoint**: `POST /api/v2/traces/flamegraph/{traceId}`
**Handler**: `GetFlamegraphSpansForTrace`
**Purpose**: Retrieves spans organized by level for flamegraph visualization
**Request Parameters**:
```go
type GetFlamegraphSpansForTraceParams struct {
SelectedSpanID string // Selected span ID
}
```
**Response**:
```go
type GetFlamegraphSpansForTraceResponse struct {
StartTimestampMillis uint64 // Trace start
EndTimestampMillis uint64 // Trace end
DurationNano uint64 // Total duration
Spans [][]*FlamegraphSpan // Spans organized by level
}
```
**Code Flow**:
```
Handler → ClickHouseReader.GetFlamegraphSpansForTrace
→ Query trace_summary for time range
→ Query spans from signoz_index_v3
→ Build span tree
→ BFS traversal to organize by level
→ Sample spans (50 levels, 100 spans/level max)
→ Return level-organized spans
```
**Key Files**:
- `pkg/query-service/app/http_handler.go:1781` - Handler
- `pkg/query-service/app/clickhouseReader/reader.go:1091` - Implementation
- `pkg/query-service/app/traces/tracedetail/flamegraph.go` - BFS processing
### 4. Trace Fields API
**Endpoint**:
- `GET /api/v2/traces/fields` - Get available trace fields
- `POST /api/v2/traces/fields` - Update trace field metadata
**Handler**: `traceFields`, `updateTraceField`
**Purpose**: Manage trace field metadata for query builder
**Key Files**:
- `pkg/query-service/app/http_handler.go:4912` - Get handler
- `pkg/query-service/app/http_handler.go:4921` - Update handler
### 5. Trace Funnels API
**Endpoint**: `/api/v1/trace-funnels/*`
**Purpose**: Manage trace funnels (multi-step trace analysis)
**Endpoints**:
- `POST /api/v1/trace-funnels/new` - Create funnel
- `GET /api/v1/trace-funnels/list` - List funnels
- `GET /api/v1/trace-funnels/{funnel_id}` - Get funnel
- `PUT /api/v1/trace-funnels/{funnel_id}` - Update funnel
- `DELETE /api/v1/trace-funnels/{funnel_id}` - Delete funnel
- `POST /api/v1/trace-funnels/{funnel_id}/analytics/*` - Analytics endpoints
**Key Files**:
- `pkg/query-service/app/http_handler.go:5084` - Route registration
- `pkg/modules/tracefunnel/` - Funnel implementation
---
## Code Flows
### Flow 1: Query Range Request (V5)
This is the primary query flow for traces. For the complete flow covering all query types, see the [Query Range API Documentation](./QUERY_RANGE_API.md#code-flow).
**Trace-Specific Flow**:
```
1. HTTP Request
POST /api/v5/query_range
2. Querier.QueryRange (common flow - see QUERY_RANGE_API.md)
3. Trace Query Processing:
a. Builder Query (QueryTypeBuilder with SignalTraces):
- newBuilderQuery() creates builderQuery instance
- Uses traceStmtBuilder (traceQueryStatementBuilder)
b. Trace Operator Query (QueryTypeTraceOperator):
- newTraceOperatorQuery() creates traceOperatorQuery
- Uses traceOperatorStmtBuilder
4. Trace Statement Building
traceQueryStatementBuilder.Build() (pkg/telemetrytraces/statement_builder.go:58)
- Resolves trace field keys from metadata store
- Optimizes time range if trace_id filter present (queries trace_summary)
- Maps fields using traceFieldMapper
- Builds conditions using traceConditionBuilder
- Builds SQL based on request type:
* RequestTypeRaw → buildListQuery()
* RequestTypeTimeSeries → buildTimeSeriesQuery()
* RequestTypeScalar → buildScalarQuery()
* RequestTypeTrace → buildTraceQuery()
5. Query Execution
builderQuery.Execute() (pkg/querier/builder_query.go)
- Executes SQL against ClickHouse (signoz_traces database)
- Processes results into response format
6. Result Processing (common flow - see QUERY_RANGE_API.md)
- Merges results from multiple queries
- Applies formulas if present
- Handles caching
7. HTTP Response
- Returns QueryRangeResponse with trace results
```
**Trace-Specific Key Components**:
- `pkg/telemetrytraces/statement_builder.go` - Trace SQL generation
- `pkg/telemetrytraces/field_mapper.go` - Trace field mapping
- `pkg/telemetrytraces/condition_builder.go` - Trace filter building
- `pkg/telemetrytraces/trace_operator_statement_builder.go` - Trace operator SQL
### Flow 2: Waterfall View Request
```
1. HTTP Request
POST /api/v2/traces/waterfall/{traceId}
2. GetWaterfallSpansForTraceWithMetadata handler
- Extracts traceId from URL
- Parses request body for params
3. ClickHouseReader.GetWaterfallSpansForTraceWithMetadata
- Checks cache first (5 minute TTL)
4. If cache miss:
a. Query trace_summary table
SELECT * FROM distributed_trace_summary WHERE trace_id = ?
- Gets time range (start, end, num_spans)
b. Query spans table
SELECT ... FROM distributed_signoz_index_v3
WHERE trace_id = ?
AND ts_bucket_start >= ? AND ts_bucket_start <= ?
- Retrieves all spans for trace
c. Build span tree
- Parse references to build parent-child relationships
- Identify root spans (no parent)
- Calculate service durations
d. Cache result
5. Apply selection logic
tracedetail.GetSelectedSpans()
- Traverses tree based on uncollapsed spans
- Finds path to selected span
- Returns sliding window (500 spans max)
6. HTTP Response
- Returns spans with metadata
```
**Key Components**:
- `pkg/query-service/app/clickhouseReader/reader.go:873`
- `pkg/query-service/app/traces/tracedetail/waterfall.go`
- `pkg/query-service/model/trace.go`
### Flow 3: Trace Operator Query
Trace operators allow combining multiple trace queries with set operations; a sketch of the combined SQL follows the flow below.
```
1. QueryRangeRequest with QueryTypeTraceOperator
2. Querier identifies trace operator queries
- Parses expression to find dependencies
- Collects referenced queries
3. traceOperatorStatementBuilder.Build()
- Parses expression (e.g., "A AND B", "A OR B")
- Builds expression tree
4. traceOperatorCTEBuilder.build()
- Creates CTEs (Common Table Expressions) for each query
- Builds final query with set operations:
* AND → INTERSECT
* OR → UNION
* NOT → EXCEPT
5. Execute combined query
- Returns traces matching the operator expression
```
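For an expression like `A AND B`, the combined query is shaped roughly as follows (a sketch, not the exact generated SQL; the two filters are hypothetical):
```sql
WITH A AS (
    SELECT DISTINCT trace_id
    FROM signoz_traces.distributed_signoz_index_v3
    WHERE resource_string_service$$name = 'api'
),
B AS (
    SELECT DISTINCT trace_id
    FROM signoz_traces.distributed_signoz_index_v3
    WHERE has_error = true
)
SELECT trace_id FROM A
INTERSECT
SELECT trace_id FROM B
```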
**Key Components**:
- `pkg/telemetrytraces/trace_operator_statement_builder.go`
- `pkg/telemetrytraces/trace_operator_cte_builder.go`
- `pkg/querier/trace_operator_query.go`
---
## Key Components
> **Note**: For common components used across all signals (Querier, TelemetryStore, MetadataStore, etc.), see the [Query Range API Documentation](./QUERY_RANGE_API.md#key-components).
### 1. Trace Statement Builder
**Location**: `pkg/telemetrytraces/statement_builder.go`
**Purpose**: Converts trace query builder specifications into ClickHouse SQL
**Key Methods**:
- `Build()`: Main entry point, builds SQL statement
- `buildListQuery()`: Builds query for raw/list results
- `buildTimeSeriesQuery()`: Builds query for time series
- `buildScalarQuery()`: Builds query for scalar values
- `buildTraceQuery()`: Builds query for trace-specific results
**Key Features**:
- Trace field resolution via metadata store
- Time range optimization for trace_id filters (queries trace_summary first)
- Support for trace aggregations, filters, group by, ordering
- Calculated field support (http_method, db_name, has_error, etc.)
- Resource filter support via resourceFilterStmtBuilder
### 2. Trace Field Mapper
**Location**: `pkg/telemetrytraces/field_mapper.go`
**Purpose**: Maps trace query field names to ClickHouse column names
**Field Types**:
- **Intrinsic Fields**: Built-in fields (trace_id, span_id, duration_nano, name, kind_string, status_code_string, etc.)
- **Calculated Fields**: Derived fields (http_method, db_name, has_error, response_status_code, etc.)
- **Attribute Fields**: Dynamic span/resource attributes (accessed via attributes_string, attributes_number, attributes_bool, resources_string)
**Example Mapping**:
```
"service.name" → "resource_string_service$$name"
"http.method" → Calculated from attributes_string['http.method']
"duration_nano" → "duration_nano" (intrinsic)
"trace_id" → "trace_id" (intrinsic)
```
**Key Methods**:
- `MapField()`: Maps a field to ClickHouse expression
- `MapAttribute()`: Maps attribute fields
- `MapResource()`: Maps resource fields
### 3. Trace Condition Builder
**Location**: `pkg/telemetrytraces/condition_builder.go`
**Purpose**: Builds WHERE clause conditions from trace filter expressions
**Supported Operators**:
- `=`, `!=`, `IN`, `NOT IN`
- `>`, `>=`, `<`, `<=`
- `LIKE`, `NOT LIKE`, `ILIKE`
- `EXISTS`, `NOT EXISTS`
- `CONTAINS`, `NOT CONTAINS`
**Key Methods**:
- `BuildCondition()`: Builds condition from filter expression
- Handles attribute, resource, and intrinsic field filtering
### 4. Trace Operator Statement Builder
**Location**: `pkg/telemetrytraces/trace_operator_statement_builder.go`
**Purpose**: Builds SQL for trace operator queries (AND, OR, NOT operations on trace queries)
**Key Methods**:
- `Build()`: Builds CTE-based SQL for trace operators
- Uses `traceOperatorCTEBuilder` to create Common Table Expressions
**Features**:
- Parses operator expressions (e.g., "A AND B")
- Creates CTEs for each referenced query
- Combines results using INTERSECT, UNION, EXCEPT
### 5. ClickHouse Reader (Trace-Specific Methods)
**Location**: `pkg/query-service/app/clickhouseReader/reader.go`
**Purpose**: Direct trace data retrieval and processing (bypasses query builder)
**Key Methods**:
- `GetWaterfallSpansForTraceWithMetadata()`: Waterfall view data
- `GetFlamegraphSpansForTrace()`: Flamegraph view data
- `SearchTraces()`: Legacy trace search (still used for some flows)
- `GetMinAndMaxTimestampForTraceID()`: Time range optimization helper
**Caching**: Implements 5-minute cache for trace detail views
**Note**: These methods are used for trace-specific visualizations. For general trace queries, use the Query Range API.
---
## Query Building System
> **Note**: For general query building concepts and patterns, see the [Query Range API Documentation](./QUERY_RANGE_API.md). This section covers trace-specific aspects.
### Trace Query Builder Structure
A trace query consists of:
```go
QueryBuilderQuery[TraceAggregation] {
Name: "query_name",
Signal: SignalTraces,
Filter: &Filter {
Expression: "service.name = 'api' AND duration_nano > 1000000"
},
Aggregations: []TraceAggregation {
{Expression: "count()", Alias: "total"},
{Expression: "avg(duration_nano)", Alias: "avg_duration"},
{Expression: "p99(duration_nano)", Alias: "p99"},
},
GroupBy: []GroupByKey {
{TelemetryFieldKey: {Name: "service.name", ...}},
},
Order: []OrderBy {...},
Limit: 100,
}
```
### Trace-Specific SQL Generation Process
1. **Field Resolution**:
- Resolve trace field names using `traceFieldMapper`
- Handle intrinsic, calculated, and attribute fields
- Map to ClickHouse columns (e.g., `service.name` → `resource_string_service$$name`)
2. **Time Range Optimization**:
- If `trace_id` filter present, query `trace_summary` first
- Narrow time range based on trace start/end times
- Reduces data scanned significantly
3. **Filter Building**:
- Convert filter expression using `traceConditionBuilder`
- Handle attribute filters (attributes_string, attributes_number, attributes_bool)
- Handle resource filters (resources_string)
- Handle intrinsic field filters
4. **Aggregation Building**:
- Build SELECT with trace aggregations
- Support trace-specific functions (count, avg, p99, etc. on duration_nano)
5. **Group By Building**:
- Add GROUP BY clause with trace fields
- Support grouping by service.name, operation name, etc.
6. **Order Building**:
- Add ORDER BY clause
- Support ordering by duration, timestamp, etc.
7. **Limit/Offset**:
- Add pagination
### Example Generated SQL
For query: `count() WHERE service.name = 'api' GROUP BY service.name`
```sql
SELECT
count() AS total,
resource_string_service$$name AS service_name
FROM signoz_traces.distributed_signoz_index_v3
WHERE
timestamp >= toDateTime64(1234567890/1e9, 9)
AND timestamp <= toDateTime64(1234567899/1e9, 9)
AND ts_bucket_start >= toDateTime64(1234567890/1e9, 9)
AND ts_bucket_start <= toDateTime64(1234567899/1e9, 9)
AND resource_string_service$$name = 'api'
GROUP BY resource_string_service$$name
```
**Note**: The query uses `ts_bucket_start` for efficient time filtering (partitioning column).
---
## Storage Schema
### Main Tables
**Location**: `pkg/telemetrytraces/tables.go`
#### 1. `distributed_signoz_index_v3`
Main span index table. Stores all span data.
**Key Columns**:
- `timestamp`: Span timestamp
- `duration_nano`: Span duration
- `span_id`, `trace_id`: Identifiers
- `has_error`: Error indicator
- `kind`: Span kind
- `name`: Operation name
- `attributes_string`, `attributes_number`, `attributes_bool`: Attributes
- `resources_string`: Resource attributes
- `events`: Span events
- `status_code_string`, `status_message`: Status
- `ts_bucket_start`: Time bucket for partitioning
#### 2. `distributed_trace_summary`
Trace-level summary for quick lookups; used for the time range optimization sketched below.
**Columns**:
- `trace_id`: Trace identifier
- `start`: Earliest span timestamp
- `end`: Latest span timestamp
- `num_spans`: Total span count
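As a sketch, the optimization boils down to a summary lookup before the span index is queried (mirroring the query shown in the waterfall flow):
```sql
SELECT *
FROM signoz_traces.distributed_trace_summary
WHERE trace_id = ?
```
The returned `start`/`end` then bound the `ts_bucket_start` filter on `distributed_signoz_index_v3`.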
#### 3. `distributed_tag_attributes_v2`
Metadata table for attribute keys.
**Purpose**: Stores available attribute keys for autocomplete
#### 4. `distributed_span_attributes_keys`
Span attribute keys metadata.
**Purpose**: Tracks which attributes exist in spans
### Database
All trace tables are in the `signoz_traces` database.
---
## Extending the Traces Module
### Adding a New Calculated Field
1. **Define Field in Constants** (`pkg/telemetrytraces/const.go`):
```go
CalculatedFields = map[string]telemetrytypes.TelemetryFieldKey{
"my_new_field": {
Name: "my_new_field",
Description: "Description of the field",
Signal: telemetrytypes.SignalTraces,
FieldContext: telemetrytypes.FieldContextSpan,
FieldDataType: telemetrytypes.FieldDataTypeString,
},
}
```
2. **Implement Field Mapping** (`pkg/telemetrytraces/field_mapper.go`):
```go
func (fm *fieldMapper) MapField(field telemetrytypes.TelemetryFieldKey) (string, error) {
if field.Name == "my_new_field" {
// Return ClickHouse expression
return "attributes_string['my.attribute.key']", nil
}
// ... existing mappings
}
```
3. **Update Condition Builder** (if needed for filtering):
```go
// In condition_builder.go, add support for your field
```
### Adding a New API Endpoint
1. **Add Handler Method** (`pkg/query-service/app/http_handler.go`):
```go
func (aH *APIHandler) MyNewTraceHandler(w http.ResponseWriter, r *http.Request) {
// Extract parameters
// Call reader or querier
// Return response
}
```
2. **Register Route** (in `RegisterRoutes` or separate method):
```go
router.HandleFunc("/api/v2/traces/my-endpoint",
am.ViewAccess(aH.MyNewTraceHandler)).Methods(http.MethodPost)
```
3. **Implement Logic**:
- Add to `ClickHouseReader` if direct DB access needed
- Or use `Querier` for query builder queries
### Adding a New Aggregation Function
1. **Update Aggregation Rewriter** (`pkg/querybuilder/agg_expr_rewriter.go`):
```go
func (r *aggExprRewriter) RewriteAggregation(expr string) (string, error) {
// Add parsing for your function
if strings.HasPrefix(expr, "my_function(") {
// Return ClickHouse SQL expression
return "myClickHouseFunction(...)", nil
}
}
```
2. **Update Statement Builder** (if special handling needed):
```go
// In statement_builder.go, add special case if needed
```
### Adding Trace Operator Support
Trace operators are already extensible. To add a new operator:
1. **Update Grammar** (`grammar/TraceOperatorGrammar.g4`):
```antlr
operator: AND | OR | NOT | MY_NEW_OPERATOR;
```
2. **Update CTE Builder** (`pkg/telemetrytraces/trace_operator_cte_builder.go`):
```go
func (b *traceOperatorCTEBuilder) buildOperatorQuery(op TraceOperatorType) string {
switch op {
case TraceOperatorTypeMyNewOperator:
return "MY_CLICKHOUSE_OPERATION"
}
}
```
---
## Common Patterns
### Pattern 1: Query with Filter
```go
query := qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
Name: "filtered_traces",
Signal: telemetrytypes.SignalTraces,
Filter: &qbtypes.Filter{
Expression: "service.name = 'api' AND duration_nano > 1000000",
},
Aggregations: []qbtypes.TraceAggregation{
{Expression: "count()", Alias: "total"},
},
}
```
### Pattern 2: Time Series Query
```go
query := qbtypes.QueryBuilderQuery[qbtypes.TraceAggregation]{
Name: "time_series",
Signal: telemetrytypes.SignalTraces,
Aggregations: []qbtypes.TraceAggregation{
{Expression: "avg(duration_nano)", Alias: "avg_duration"},
},
GroupBy: []qbtypes.GroupByKey{
{TelemetryFieldKey: telemetrytypes.TelemetryFieldKey{
Name: "service.name",
FieldContext: telemetrytypes.FieldContextResource,
}},
},
StepInterval: qbtypes.Step{Duration: time.Minute},
}
```
### Pattern 3: Trace Operator Query
```go
query := qbtypes.QueryBuilderTraceOperator{
Name: "operator_query",
Expression: "A AND B", // A and B are query names
Filter: &qbtypes.Filter{
Expression: "duration_nano > 5000000",
},
}
```
---
## Performance Considerations
### Caching
- **Trace Detail Views**: 5-minute cache for waterfall/flamegraph
- **Query Results**: Bucket-based caching in querier
- **Metadata**: Cached attribute keys and field metadata
### Query Optimization
1. **Time Range Optimization**: When `trace_id` is in filter, query `trace_summary` first to narrow time range
2. **Index Usage**: Queries use `ts_bucket_start` for time filtering
3. **Limit Enforcement**: Waterfall and flamegraph views enforce limits (500 spans for waterfall; 50 levels with up to 100 spans per level for flamegraph)
### Best Practices
1. **Use Query Builder**: Prefer query builder over raw SQL for better optimization
2. **Limit Time Ranges**: Always specify reasonable time ranges
3. **Use Aggregations**: For large datasets, use aggregations instead of raw data
4. **Cache Awareness**: Be mindful of cache TTLs when testing
---
## References
### Key Files
- `pkg/telemetrytraces/` - Core trace query building
- `statement_builder.go` - Trace SQL generation
- `field_mapper.go` - Trace field mapping
- `condition_builder.go` - Trace filter building
- `trace_operator_statement_builder.go` - Trace operator SQL
- `pkg/query-service/app/clickhouseReader/reader.go` - Direct trace access
- `pkg/query-service/app/http_handler.go` - API handlers
- `pkg/query-service/model/trace.go` - Data models
### Related Documentation
- [Query Range API Documentation](./QUERY_RANGE_API.md) - Common query_range API details
- [OpenTelemetry Specification](https://opentelemetry.io/docs/specs/)
- [ClickHouse Documentation](https://clickhouse.com/docs)
- [Query Builder Guide](../contributing/go/query-builder.md)
---
## Contributing
When contributing to the traces module:
1. **Follow Existing Patterns**: Match the style of existing code
2. **Add Tests**: Include unit tests for new functionality
3. **Update Documentation**: Update this doc for significant changes
4. **Consider Performance**: Optimize queries and use caching appropriately
5. **Handle Errors**: Provide meaningful error messages
For questions or help, reach out to the maintainers or open an issue.

View File

@@ -16,7 +16,7 @@ __Table of Contents__
- [Prerequisites](#prerequisites-1)
- [Install Helm Repo and Charts](#install-helm-repo-and-charts)
- [Start the OpenTelemetry Demo App](#start-the-opentelemetry-demo-app-1)
- [Monitor with SigNoz (Kubernetes)](#monitor-with-signoz-kubernetes)
- [Monitor with SigNoz (Kubernetes)](#monitor-with-signoz-kubernetes)
- [What's next](#whats-next)
@@ -103,19 +103,9 @@ Remember to replace the region and ingestion key with proper values as obtained
Both SigNoz and the OTel demo app [the frontend-proxy service, to be accurate] default to port 8080. To prevent port allocation conflicts, modify the OTel demo application config to use port 8081 as the `ENVOY_PORT` value as shown below, and run the docker compose command.
Also, both SigNoz and the OTel Demo App have the same `PROMETHEUS_PORT` configured; by default both try to start at `9090`, which may cause either of them to fail depending upon which one acquires the port first. To prevent this, we need to modify the value of `PROMETHEUS_PORT` too.
```sh
ENVOY_PORT=8081 PROMETHEUS_PORT=9091 docker compose up -d
ENVOY_PORT=8081 docker compose up -d
```
Alternatively, we can modify these values using the `.env` file too, which reduces the command to just:
```sh
docker compose up -d
```
This spins up multiple microservices with OpenTelemetry instrumentation enabled. You can verify this by running:
```sh
docker compose ps -a

View File

@@ -1,34 +0,0 @@
package anomaly
import (
"context"
"github.com/SigNoz/signoz/pkg/valuer"
)
type DailyProvider struct {
BaseSeasonalProvider
}
var _ BaseProvider = (*DailyProvider)(nil)
func (dp *DailyProvider) GetBaseSeasonalProvider() *BaseSeasonalProvider {
return &dp.BaseSeasonalProvider
}
func NewDailyProvider(opts ...GenericProviderOption[*DailyProvider]) *DailyProvider {
dp := &DailyProvider{
BaseSeasonalProvider: BaseSeasonalProvider{},
}
for _, opt := range opts {
opt(dp)
}
return dp
}
func (p *DailyProvider) GetAnomalies(ctx context.Context, orgID valuer.UUID, req *AnomaliesRequest) (*AnomaliesResponse, error) {
req.Seasonality = SeasonalityDaily
return p.getAnomalies(ctx, orgID, req)
}
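// Example usage (an illustrative sketch; assumes a querier.Querier `q`, a
// *slog.Logger `logger`, an org ID, and query range params are in scope):
//
//	dp := NewDailyProvider(
//		WithQuerier[*DailyProvider](q),
//		WithLogger[*DailyProvider](logger),
//	)
//	resp, err := dp.GetAnomalies(ctx, orgID, &AnomaliesRequest{Params: params})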

View File

@@ -1,35 +0,0 @@
package anomaly
import (
"context"
"github.com/SigNoz/signoz/pkg/valuer"
)
type HourlyProvider struct {
BaseSeasonalProvider
}
var _ BaseProvider = (*HourlyProvider)(nil)
func (hp *HourlyProvider) GetBaseSeasonalProvider() *BaseSeasonalProvider {
return &hp.BaseSeasonalProvider
}
// NewHourlyProvider now uses the generic option type
func NewHourlyProvider(opts ...GenericProviderOption[*HourlyProvider]) *HourlyProvider {
hp := &HourlyProvider{
BaseSeasonalProvider: BaseSeasonalProvider{},
}
for _, opt := range opts {
opt(hp)
}
return hp
}
func (p *HourlyProvider) GetAnomalies(ctx context.Context, orgID valuer.UUID, req *AnomaliesRequest) (*AnomaliesResponse, error) {
req.Seasonality = SeasonalityHourly
return p.getAnomalies(ctx, orgID, req)
}

View File

@@ -1,223 +0,0 @@
package anomaly
import (
"time"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/valuer"
)
type Seasonality struct{ valuer.String }
var (
SeasonalityHourly = Seasonality{valuer.NewString("hourly")}
SeasonalityDaily = Seasonality{valuer.NewString("daily")}
SeasonalityWeekly = Seasonality{valuer.NewString("weekly")}
)
var (
oneWeekOffset = uint64(24 * 7 * time.Hour.Milliseconds())
oneDayOffset = uint64(24 * time.Hour.Milliseconds())
oneHourOffset = uint64(time.Hour.Milliseconds())
fiveMinOffset = uint64(5 * time.Minute.Milliseconds())
)
func (s Seasonality) IsValid() bool {
switch s {
case SeasonalityHourly, SeasonalityDaily, SeasonalityWeekly:
return true
default:
return false
}
}
type AnomaliesRequest struct {
Params qbtypes.QueryRangeRequest
Seasonality Seasonality
}
type AnomaliesResponse struct {
Results []*qbtypes.TimeSeriesData
}
// anomalyQueryParams holds the params for anomaly detection
// prediction = avg(past_period_query) + avg(current_season_query) - mean(past_season_query, past2_season_query, past3_season_query)
//
//                    ^                              ^
//                    |                              |
//   (rounded value for past period)       +        (seasonal growth)
//
// score = abs(value - prediction) / stddev(current_season_query)
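//
// Worked example with illustrative numbers: if the moving average of the past
// period is 100, the current season average is 110, and the mean of the three
// past-season averages is 105, then prediction = 100 + 110 - 105 = 105.
// With stddev(current_season_query) = 10 and an observed value of 130,
// score = |130 - 105| / 10 = 2.5.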
type anomalyQueryParams struct {
// CurrentPeriodQuery is the query range params for the period the user is looking at, i.e. the eval window
// Example: (now-5m, now), (now-30m, now), (now-1h, now)
// The results obtained from this query are used to compare with predicted values
// and to detect anomalies
CurrentPeriodQuery qbtypes.QueryRangeRequest
// PastPeriodQuery is the query range params for past period of seasonality
// Example: For weekly seasonality, (now-1w-5m, now-1w)
// : For daily seasonality, (now-1d-5m, now-1d)
// : For hourly seasonality, (now-1h-5m, now-1h)
PastPeriodQuery qbtypes.QueryRangeRequest
// CurrentSeasonQuery is the query range params for current period (seasonal)
// Example: For weekly seasonality, this is the query range params for the (now-1w-5m, now)
// : For daily seasonality, this is the query range params for the (now-1d-5m, now)
// : For hourly seasonality, this is the query range params for the (now-1h-5m, now)
CurrentSeasonQuery qbtypes.QueryRangeRequest
// PastSeasonQuery is the query range params for past seasonal period to the current season
// Example: For weekly seasonality, this is the query range params for the (now-2w-5m, now-1w)
// : For daily seasonality, this is the query range params for the (now-2d-5m, now-1d)
// : For hourly seasonality, this is the query range params for the (now-2h-5m, now-1h)
PastSeasonQuery qbtypes.QueryRangeRequest
// Past2SeasonQuery is the query range params for past 2 seasonal period to the current season
// Example: For weekly seasonality, this is the query range params for the (now-3w-5m, now-2w)
// : For daily seasonality, this is the query range params for the (now-3d-5m, now-2d)
// : For hourly seasonality, this is the query range params for the (now-3h-5m, now-2h)
Past2SeasonQuery qbtypes.QueryRangeRequest
// Past3SeasonQuery is the query range params for past 3 seasonal period to the current season
// Example: For weekly seasonality, this is the query range params for the (now-4w-5m, now-3w)
// : For daily seasonality, this is the query range params for the (now-4d-5m, now-3d)
// : For hourly seasonality, this is the query range params for the (now-4h-5m, now-3h)
Past3SeasonQuery qbtypes.QueryRangeRequest
}
func prepareAnomalyQueryParams(req qbtypes.QueryRangeRequest, seasonality Seasonality) *anomalyQueryParams {
start := req.Start
end := req.End
currentPeriodQuery := qbtypes.QueryRangeRequest{
Start: start,
End: end,
RequestType: qbtypes.RequestTypeTimeSeries,
CompositeQuery: req.CompositeQuery,
NoCache: false,
}
var pastPeriodStart, pastPeriodEnd uint64
switch seasonality {
// for one week period, we fetch the data from the past week with 5 min offset
case SeasonalityWeekly:
pastPeriodStart = start - oneWeekOffset - fiveMinOffset
pastPeriodEnd = end - oneWeekOffset
// for one day period, we fetch the data from the past day with 5 min offset
case SeasonalityDaily:
pastPeriodStart = start - oneDayOffset - fiveMinOffset
pastPeriodEnd = end - oneDayOffset
// for one hour period, we fetch the data from the past hour with 5 min offset
case SeasonalityHourly:
pastPeriodStart = start - oneHourOffset - fiveMinOffset
pastPeriodEnd = end - oneHourOffset
}
pastPeriodQuery := qbtypes.QueryRangeRequest{
Start: pastPeriodStart,
End: pastPeriodEnd,
RequestType: qbtypes.RequestTypeTimeSeries,
CompositeQuery: req.CompositeQuery,
NoCache: false,
}
// seasonality growth trend
var currentGrowthPeriodStart, currentGrowthPeriodEnd uint64
switch seasonality {
case SeasonalityWeekly:
currentGrowthPeriodStart = start - oneWeekOffset
currentGrowthPeriodEnd = start
case SeasonalityDaily:
currentGrowthPeriodStart = start - oneDayOffset
currentGrowthPeriodEnd = start
case SeasonalityHourly:
currentGrowthPeriodStart = start - oneHourOffset
currentGrowthPeriodEnd = start
}
currentGrowthQuery := qbtypes.QueryRangeRequest{
Start: currentGrowthPeriodStart,
End: currentGrowthPeriodEnd,
RequestType: qbtypes.RequestTypeTimeSeries,
CompositeQuery: req.CompositeQuery,
NoCache: false,
}
var pastGrowthPeriodStart, pastGrowthPeriodEnd uint64
switch seasonality {
case SeasonalityWeekly:
pastGrowthPeriodStart = start - 2*oneWeekOffset
pastGrowthPeriodEnd = start - 1*oneWeekOffset
case SeasonalityDaily:
pastGrowthPeriodStart = start - 2*oneDayOffset
pastGrowthPeriodEnd = start - 1*oneDayOffset
case SeasonalityHourly:
pastGrowthPeriodStart = start - 2*oneHourOffset
pastGrowthPeriodEnd = start - 1*oneHourOffset
}
pastGrowthQuery := qbtypes.QueryRangeRequest{
Start: pastGrowthPeriodStart,
End: pastGrowthPeriodEnd,
RequestType: qbtypes.RequestTypeTimeSeries,
CompositeQuery: req.CompositeQuery,
NoCache: false,
}
var past2GrowthPeriodStart, past2GrowthPeriodEnd uint64
switch seasonality {
case SeasonalityWeekly:
past2GrowthPeriodStart = start - 3*oneWeekOffset
past2GrowthPeriodEnd = start - 2*oneWeekOffset
case SeasonalityDaily:
past2GrowthPeriodStart = start - 3*oneDayOffset
past2GrowthPeriodEnd = start - 2*oneDayOffset
case SeasonalityHourly:
past2GrowthPeriodStart = start - 3*oneHourOffset
past2GrowthPeriodEnd = start - 2*oneHourOffset
}
past2GrowthQuery := qbtypes.QueryRangeRequest{
Start: past2GrowthPeriodStart,
End: past2GrowthPeriodEnd,
RequestType: qbtypes.RequestTypeTimeSeries,
CompositeQuery: req.CompositeQuery,
NoCache: false,
}
var past3GrowthPeriodStart, past3GrowthPeriodEnd uint64
switch seasonality {
case SeasonalityWeekly:
past3GrowthPeriodStart = start - 4*oneWeekOffset
past3GrowthPeriodEnd = start - 3*oneWeekOffset
case SeasonalityDaily:
past3GrowthPeriodStart = start - 4*oneDayOffset
past3GrowthPeriodEnd = start - 3*oneDayOffset
case SeasonalityHourly:
past3GrowthPeriodStart = start - 4*oneHourOffset
past3GrowthPeriodEnd = start - 3*oneHourOffset
}
past3GrowthQuery := qbtypes.QueryRangeRequest{
Start: past3GrowthPeriodStart,
End: past3GrowthPeriodEnd,
RequestType: qbtypes.RequestTypeTimeSeries,
CompositeQuery: req.CompositeQuery,
NoCache: false,
}
return &anomalyQueryParams{
CurrentPeriodQuery: currentPeriodQuery,
PastPeriodQuery: pastPeriodQuery,
CurrentSeasonQuery: currentGrowthQuery,
PastSeasonQuery: pastGrowthQuery,
Past2SeasonQuery: past2GrowthQuery,
Past3SeasonQuery: past3GrowthQuery,
}
}
type anomalyQueryResults struct {
CurrentPeriodResults []*qbtypes.TimeSeriesData
PastPeriodResults []*qbtypes.TimeSeriesData
CurrentSeasonResults []*qbtypes.TimeSeriesData
PastSeasonResults []*qbtypes.TimeSeriesData
Past2SeasonResults []*qbtypes.TimeSeriesData
Past3SeasonResults []*qbtypes.TimeSeriesData
}

View File

@@ -1,11 +0,0 @@
package anomaly
import (
"context"
"github.com/SigNoz/signoz/pkg/valuer"
)
type Provider interface {
GetAnomalies(ctx context.Context, orgID valuer.UUID, req *AnomaliesRequest) (*AnomaliesResponse, error)
}

View File

@@ -1,465 +0,0 @@
package anomaly
import (
"context"
"log/slog"
"math"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/valuer"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
)
var (
// TODO(srikanthccv): make this configurable?
movingAvgWindowSize = 7
)
// BaseProvider is an interface that includes common methods for all provider types
type BaseProvider interface {
GetBaseSeasonalProvider() *BaseSeasonalProvider
}
// GenericProviderOption is a generic type for provider options
type GenericProviderOption[T BaseProvider] func(T)
func WithQuerier[T BaseProvider](querier querier.Querier) GenericProviderOption[T] {
return func(p T) {
p.GetBaseSeasonalProvider().querier = querier
}
}
func WithLogger[T BaseProvider](logger *slog.Logger) GenericProviderOption[T] {
return func(p T) {
p.GetBaseSeasonalProvider().logger = logger
}
}
type BaseSeasonalProvider struct {
querier querier.Querier
logger *slog.Logger
}
func (p *BaseSeasonalProvider) getQueryParams(req *AnomaliesRequest) *anomalyQueryParams {
if !req.Seasonality.IsValid() {
req.Seasonality = SeasonalityDaily
}
return prepareAnomalyQueryParams(req.Params, req.Seasonality)
}
func (p *BaseSeasonalProvider) toTSResults(ctx context.Context, resp *qbtypes.QueryRangeResponse) []*qbtypes.TimeSeriesData {
tsData := []*qbtypes.TimeSeriesData{}
if resp == nil {
p.logger.InfoContext(ctx, "nil response from query range")
return tsData
}
for _, item := range resp.Data.Results {
if resultData, ok := item.(*qbtypes.TimeSeriesData); ok {
tsData = append(tsData, resultData)
}
}
return tsData
}
func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID, params *anomalyQueryParams) (*anomalyQueryResults, error) {
// TODO(srikanthccv): parallelize this?
p.logger.InfoContext(ctx, "fetching results for current period", "anomaly_current_period_query", params.CurrentPeriodQuery)
currentPeriodResults, err := p.querier.QueryRange(ctx, orgID, &params.CurrentPeriodQuery)
if err != nil {
return nil, err
}
p.logger.InfoContext(ctx, "fetching results for past period", "anomaly_past_period_query", params.PastPeriodQuery)
pastPeriodResults, err := p.querier.QueryRange(ctx, orgID, &params.PastPeriodQuery)
if err != nil {
return nil, err
}
p.logger.InfoContext(ctx, "fetching results for current season", "anomaly_current_season_query", params.CurrentSeasonQuery)
currentSeasonResults, err := p.querier.QueryRange(ctx, orgID, &params.CurrentSeasonQuery)
if err != nil {
return nil, err
}
p.logger.InfoContext(ctx, "fetching results for past season", "anomaly_past_season_query", params.PastSeasonQuery)
pastSeasonResults, err := p.querier.QueryRange(ctx, orgID, &params.PastSeasonQuery)
if err != nil {
return nil, err
}
p.logger.InfoContext(ctx, "fetching results for past 2 season", "anomaly_past_2season_query", params.Past2SeasonQuery)
past2SeasonResults, err := p.querier.QueryRange(ctx, orgID, &params.Past2SeasonQuery)
if err != nil {
return nil, err
}
p.logger.InfoContext(ctx, "fetching results for past 3 season", "anomaly_past_3season_query", params.Past3SeasonQuery)
past3SeasonResults, err := p.querier.QueryRange(ctx, orgID, &params.Past3SeasonQuery)
if err != nil {
return nil, err
}
return &anomalyQueryResults{
CurrentPeriodResults: p.toTSResults(ctx, currentPeriodResults),
PastPeriodResults: p.toTSResults(ctx, pastPeriodResults),
CurrentSeasonResults: p.toTSResults(ctx, currentSeasonResults),
PastSeasonResults: p.toTSResults(ctx, pastSeasonResults),
Past2SeasonResults: p.toTSResults(ctx, past2SeasonResults),
Past3SeasonResults: p.toTSResults(ctx, past3SeasonResults),
}, nil
}
// getMatchingSeries gets the matching series from the query result
// for the given series
func (p *BaseSeasonalProvider) getMatchingSeries(_ context.Context, queryResult *qbtypes.TimeSeriesData, series *qbtypes.TimeSeries) *qbtypes.TimeSeries {
if queryResult == nil || len(queryResult.Aggregations) == 0 || len(queryResult.Aggregations[0].Series) == 0 {
return nil
}
for _, curr := range queryResult.Aggregations[0].Series {
currLabelsKey := qbtypes.GetUniqueSeriesKey(curr.Labels)
seriesLabelsKey := qbtypes.GetUniqueSeriesKey(series.Labels)
if currLabelsKey == seriesLabelsKey {
return curr
}
}
return nil
}
func (p *BaseSeasonalProvider) getAvg(series *qbtypes.TimeSeries) float64 {
if series == nil || len(series.Values) == 0 {
return 0
}
var sum float64
for _, smpl := range series.Values {
sum += smpl.Value
}
return sum / float64(len(series.Values))
}
func (p *BaseSeasonalProvider) getStdDev(series *qbtypes.TimeSeries) float64 {
if series == nil || len(series.Values) == 0 {
return 0
}
avg := p.getAvg(series)
var sum float64
for _, smpl := range series.Values {
sum += math.Pow(smpl.Value-avg, 2)
}
return math.Sqrt(sum / float64(len(series.Values)))
}
// getMovingAvg gets the moving average for the given series
// for the given window size and start index
func (p *BaseSeasonalProvider) getMovingAvg(series *qbtypes.TimeSeries, movingAvgWindowSize, startIdx int) float64 {
if series == nil || len(series.Values) == 0 {
return 0
}
if startIdx >= len(series.Values)-movingAvgWindowSize {
startIdx = int(math.Max(0, float64(len(series.Values)-movingAvgWindowSize)))
}
var sum float64
points := series.Values[startIdx:]
windowSize := int(math.Min(float64(movingAvgWindowSize), float64(len(points))))
for i := 0; i < windowSize; i++ {
sum += points[i].Value
}
avg := sum / float64(windowSize)
return avg
}
func (p *BaseSeasonalProvider) getMean(floats ...float64) float64 {
if len(floats) == 0 {
return 0
}
var sum float64
for _, f := range floats {
sum += f
}
return sum / float64(len(floats))
}
func (p *BaseSeasonalProvider) getPredictedSeries(
ctx context.Context,
series, prevSeries, currentSeasonSeries, pastSeasonSeries, past2SeasonSeries, past3SeasonSeries *qbtypes.TimeSeries,
) *qbtypes.TimeSeries {
predictedSeries := &qbtypes.TimeSeries{
Labels: series.Labels,
Values: make([]*qbtypes.TimeSeriesValue, 0),
}
// for each point in the series, get the predicted value
// the predicted value is the moving average (with window size = 7) of the previous period series
// plus the average of the current season series
// minus the mean of the past season series, past2 season series and past3 season series
for idx, curr := range series.Values {
movingAvg := p.getMovingAvg(prevSeries, movingAvgWindowSize, idx)
avg := p.getAvg(currentSeasonSeries)
mean := p.getMean(p.getAvg(pastSeasonSeries), p.getAvg(past2SeasonSeries), p.getAvg(past3SeasonSeries))
predictedValue := movingAvg + avg - mean
if predictedValue < 0 {
// this should not happen (except when the data has extreme outliers)
// we will use the moving avg of the previous period series in this case
p.logger.WarnContext(ctx, "predicted value is less than 0 for series", "anomaly_predicted_value", predictedValue, "anomaly_labels", series.Labels)
predictedValue = p.getMovingAvg(prevSeries, movingAvgWindowSize, idx)
}
p.logger.DebugContext(ctx, "predicted value for series",
"anomaly_moving_avg", movingAvg,
"anomaly_avg", avg,
"anomaly_mean", mean,
"anomaly_labels", series.Labels,
"anomaly_predicted_value", predictedValue,
"anomaly_curr", curr.Value,
)
predictedSeries.Values = append(predictedSeries.Values, &qbtypes.TimeSeriesValue{
Timestamp: curr.Timestamp,
Value: predictedValue,
})
}
return predictedSeries
}
// getBounds gets the upper and lower bounds for the given series
// for the given z score threshold:
// moving avg of the predicted series + z score threshold * std dev of the season series
// moving avg of the predicted series - z score threshold * std dev of the season series
func (p *BaseSeasonalProvider) getBounds(
series, predictedSeries, weekSeries *qbtypes.TimeSeries,
zScoreThreshold float64,
) (*qbtypes.TimeSeries, *qbtypes.TimeSeries) {
upperBoundSeries := &qbtypes.TimeSeries{
Labels: series.Labels,
Values: make([]*qbtypes.TimeSeriesValue, 0),
}
lowerBoundSeries := &qbtypes.TimeSeries{
Labels: series.Labels,
Values: make([]*qbtypes.TimeSeriesValue, 0),
}
for idx, curr := range series.Values {
upperBound := p.getMovingAvg(predictedSeries, movingAvgWindowSize, idx) + zScoreThreshold*p.getStdDev(weekSeries)
lowerBound := p.getMovingAvg(predictedSeries, movingAvgWindowSize, idx) - zScoreThreshold*p.getStdDev(weekSeries)
upperBoundSeries.Values = append(upperBoundSeries.Values, &qbtypes.TimeSeriesValue{
Timestamp: curr.Timestamp,
Value: upperBound,
})
lowerBoundSeries.Values = append(lowerBoundSeries.Values, &qbtypes.TimeSeriesValue{
Timestamp: curr.Timestamp,
Value: math.Max(lowerBound, 0),
})
}
return upperBoundSeries, lowerBoundSeries
}
// getExpectedValue gets the expected value for the given series
// for the given index
// prevSeriesAvg + currentSeasonSeriesAvg - mean of past season series, past2 season series and past3 season series
func (p *BaseSeasonalProvider) getExpectedValue(
_, prevSeries, currentSeasonSeries, pastSeasonSeries, past2SeasonSeries, past3SeasonSeries *qbtypes.TimeSeries, idx int,
) float64 {
prevSeriesAvg := p.getMovingAvg(prevSeries, movingAvgWindowSize, idx)
currentSeasonSeriesAvg := p.getAvg(currentSeasonSeries)
pastSeasonSeriesAvg := p.getAvg(pastSeasonSeries)
past2SeasonSeriesAvg := p.getAvg(past2SeasonSeries)
past3SeasonSeriesAvg := p.getAvg(past3SeasonSeries)
return prevSeriesAvg + currentSeasonSeriesAvg - p.getMean(pastSeasonSeriesAvg, past2SeasonSeriesAvg, past3SeasonSeriesAvg)
}
// getScore gets the anomaly score for the given series
// for the given index
// (value - expectedValue) / std dev of the series
func (p *BaseSeasonalProvider) getScore(
series, prevSeries, weekSeries, weekPrevSeries, past2SeasonSeries, past3SeasonSeries *qbtypes.TimeSeries, value float64, idx int,
) float64 {
expectedValue := p.getExpectedValue(series, prevSeries, weekSeries, weekPrevSeries, past2SeasonSeries, past3SeasonSeries, idx)
if expectedValue < 0 {
expectedValue = p.getMovingAvg(prevSeries, movingAvgWindowSize, idx)
}
return (value - expectedValue) / p.getStdDev(weekSeries)
}
// getAnomalyScores gets the anomaly scores for the given series:
// for each point, (value - expectedValue) / std dev of the series
func (p *BaseSeasonalProvider) getAnomalyScores(
series, prevSeries, currentSeasonSeries, pastSeasonSeries, past2SeasonSeries, past3SeasonSeries *qbtypes.TimeSeries,
) *qbtypes.TimeSeries {
anomalyScoreSeries := &qbtypes.TimeSeries{
Labels: series.Labels,
Values: make([]*qbtypes.TimeSeriesValue, 0),
}
for idx, curr := range series.Values {
anomalyScore := p.getScore(series, prevSeries, currentSeasonSeries, pastSeasonSeries, past2SeasonSeries, past3SeasonSeries, curr.Value, idx)
anomalyScoreSeries.Values = append(anomalyScoreSeries.Values, &qbtypes.TimeSeriesValue{
Timestamp: curr.Timestamp,
Value: anomalyScore,
})
}
return anomalyScoreSeries
}
func (p *BaseSeasonalProvider) getAnomalies(ctx context.Context, orgID valuer.UUID, req *AnomaliesRequest) (*AnomaliesResponse, error) {
anomalyParams := p.getQueryParams(req)
anomalyQueryResults, err := p.getResults(ctx, orgID, anomalyParams)
if err != nil {
return nil, err
}
currentPeriodResults := make(map[string]*qbtypes.TimeSeriesData)
for _, result := range anomalyQueryResults.CurrentPeriodResults {
currentPeriodResults[result.QueryName] = result
}
pastPeriodResults := make(map[string]*qbtypes.TimeSeriesData)
for _, result := range anomalyQueryResults.PastPeriodResults {
pastPeriodResults[result.QueryName] = result
}
currentSeasonResults := make(map[string]*qbtypes.TimeSeriesData)
for _, result := range anomalyQueryResults.CurrentSeasonResults {
currentSeasonResults[result.QueryName] = result
}
pastSeasonResults := make(map[string]*qbtypes.TimeSeriesData)
for _, result := range anomalyQueryResults.PastSeasonResults {
pastSeasonResults[result.QueryName] = result
}
past2SeasonResults := make(map[string]*qbtypes.TimeSeriesData)
for _, result := range anomalyQueryResults.Past2SeasonResults {
past2SeasonResults[result.QueryName] = result
}
past3SeasonResults := make(map[string]*qbtypes.TimeSeriesData)
for _, result := range anomalyQueryResults.Past3SeasonResults {
past3SeasonResults[result.QueryName] = result
}
for _, result := range currentPeriodResults {
funcs := req.Params.FuncsForQuery(result.QueryName)
var zScoreThreshold float64
for _, f := range funcs {
if f.Name == qbtypes.FunctionNameAnomaly {
for _, arg := range f.Args {
if arg.Name != "z_score_threshold" {
continue
}
value, ok := arg.Value.(float64)
if ok {
zScoreThreshold = value
} else {
p.logger.InfoContext(ctx, "z_score_threshold not provided, defaulting")
zScoreThreshold = 3
}
break
}
}
}
pastPeriodResult, ok := pastPeriodResults[result.QueryName]
if !ok {
continue
}
currentSeasonResult, ok := currentSeasonResults[result.QueryName]
if !ok {
continue
}
pastSeasonResult, ok := pastSeasonResults[result.QueryName]
if !ok {
continue
}
past2SeasonResult, ok := past2SeasonResults[result.QueryName]
if !ok {
continue
}
past3SeasonResult, ok := past3SeasonResults[result.QueryName]
if !ok {
continue
}
// no data;
if len(result.Aggregations) == 0 {
continue
}
aggOfInterest := result.Aggregations[0]
for _, series := range aggOfInterest.Series {
pastPeriodSeries := p.getMatchingSeries(ctx, pastPeriodResult, series)
currentSeasonSeries := p.getMatchingSeries(ctx, currentSeasonResult, series)
pastSeasonSeries := p.getMatchingSeries(ctx, pastSeasonResult, series)
past2SeasonSeries := p.getMatchingSeries(ctx, past2SeasonResult, series)
past3SeasonSeries := p.getMatchingSeries(ctx, past3SeasonResult, series)
stdDev := p.getStdDev(currentSeasonSeries)
p.logger.InfoContext(ctx, "calculated standard deviation for series", "anomaly_std_dev", stdDev, "anomaly_labels", series.Labels)
prevSeriesAvg := p.getAvg(pastPeriodSeries)
currentSeasonSeriesAvg := p.getAvg(currentSeasonSeries)
pastSeasonSeriesAvg := p.getAvg(pastSeasonSeries)
past2SeasonSeriesAvg := p.getAvg(past2SeasonSeries)
past3SeasonSeriesAvg := p.getAvg(past3SeasonSeries)
p.logger.InfoContext(ctx, "calculated mean for series",
"anomaly_prev_series_avg", prevSeriesAvg,
"anomaly_current_season_series_avg", currentSeasonSeriesAvg,
"anomaly_past_season_series_avg", pastSeasonSeriesAvg,
"anomaly_past_2season_series_avg", past2SeasonSeriesAvg,
"anomaly_past_3season_series_avg", past3SeasonSeriesAvg,
"anomaly_labels", series.Labels,
)
predictedSeries := p.getPredictedSeries(
ctx,
series,
pastPeriodSeries,
currentSeasonSeries,
pastSeasonSeries,
past2SeasonSeries,
past3SeasonSeries,
)
aggOfInterest.PredictedSeries = append(aggOfInterest.PredictedSeries, predictedSeries)
upperBoundSeries, lowerBoundSeries := p.getBounds(
series,
predictedSeries,
currentSeasonSeries,
zScoreThreshold,
)
aggOfInterest.UpperBoundSeries = append(aggOfInterest.UpperBoundSeries, upperBoundSeries)
aggOfInterest.LowerBoundSeries = append(aggOfInterest.LowerBoundSeries, lowerBoundSeries)
anomalyScoreSeries := p.getAnomalyScores(
series,
pastPeriodSeries,
currentSeasonSeries,
pastSeasonSeries,
past2SeasonSeries,
past3SeasonSeries,
)
aggOfInterest.AnomalyScores = append(aggOfInterest.AnomalyScores, anomalyScoreSeries)
}
}
results := make([]*qbtypes.TimeSeriesData, 0, len(currentPeriodResults))
for _, result := range currentPeriodResults {
results = append(results, result)
}
return &AnomaliesResponse{
Results: results,
}, nil
}

View File

@@ -1,34 +0,0 @@
package anomaly
import (
"context"
"github.com/SigNoz/signoz/pkg/valuer"
)
type WeeklyProvider struct {
BaseSeasonalProvider
}
var _ BaseProvider = (*WeeklyProvider)(nil)
func (wp *WeeklyProvider) GetBaseSeasonalProvider() *BaseSeasonalProvider {
return &wp.BaseSeasonalProvider
}
func NewWeeklyProvider(opts ...GenericProviderOption[*WeeklyProvider]) *WeeklyProvider {
wp := &WeeklyProvider{
BaseSeasonalProvider: BaseSeasonalProvider{},
}
for _, opt := range opts {
opt(wp)
}
return wp
}
func (p *WeeklyProvider) GetAnomalies(ctx context.Context, orgID valuer.UUID, req *AnomaliesRequest) (*AnomaliesResponse, error) {
req.Seasonality = SeasonalityWeekly
return p.getAnomalies(ctx, orgID, req)
}

View File

@@ -1,240 +0,0 @@
package oidccallbackauthn
import (
"context"
"fmt"
"net/url"
"github.com/SigNoz/signoz/pkg/authn"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/http/client"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/coreos/go-oidc/v3/oidc"
"golang.org/x/oauth2"
)
const (
redirectPath string = "/api/v1/complete/oidc"
)
var defaultScopes []string = []string{"email", "profile", oidc.ScopeOpenID}
var _ authn.CallbackAuthN = (*AuthN)(nil)
type AuthN struct {
settings factory.ScopedProviderSettings
store authtypes.AuthNStore
licensing licensing.Licensing
httpClient *client.Client
}
func New(store authtypes.AuthNStore, licensing licensing.Licensing, providerSettings factory.ProviderSettings) (*AuthN, error) {
settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/ee/authn/callbackauthn/oidccallbackauthn")
httpClient, err := client.New(providerSettings.Logger, providerSettings.TracerProvider, providerSettings.MeterProvider)
if err != nil {
return nil, err
}
return &AuthN{
settings: settings,
store: store,
licensing: licensing,
httpClient: httpClient,
}, nil
}
func (a *AuthN) LoginURL(ctx context.Context, siteURL *url.URL, authDomain *authtypes.AuthDomain) (string, error) {
if authDomain.AuthDomainConfig().AuthNProvider != authtypes.AuthNProviderOIDC {
return "", errors.Newf(errors.TypeInternal, authtypes.ErrCodeAuthDomainMismatch, "domain type is not oidc")
}
_, oauth2Config, err := a.oidcProviderAndOAuth2Config(ctx, siteURL, authDomain)
if err != nil {
return "", err
}
return oauth2Config.AuthCodeURL(authtypes.NewState(siteURL, authDomain.StorableAuthDomain().ID).URL.String()), nil
}
func (a *AuthN) HandleCallback(ctx context.Context, query url.Values) (*authtypes.CallbackIdentity, error) {
if err := query.Get("error"); err != "" {
return nil, errors.Newf(errors.TypeInternal, errors.CodeInternal, "oidc: error while authenticating").WithAdditional(query.Get("error_description"))
}
state, err := authtypes.NewStateFromString(query.Get("state"))
if err != nil {
return nil, errors.Newf(errors.TypeInvalidInput, authtypes.ErrCodeInvalidState, "oidc: invalid state").WithAdditional(err.Error())
}
authDomain, err := a.store.GetAuthDomainFromID(ctx, state.DomainID)
if err != nil {
return nil, err
}
_, err = a.licensing.GetActive(ctx, authDomain.StorableAuthDomain().OrgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
oidcProvider, oauth2Config, err := a.oidcProviderAndOAuth2Config(ctx, state.URL, authDomain)
if err != nil {
return nil, err
}
ctx = context.WithValue(ctx, oauth2.HTTPClient, a.httpClient.Client())
token, err := oauth2Config.Exchange(ctx, query.Get("code"))
if err != nil {
var retrieveError *oauth2.RetrieveError
if errors.As(err, &retrieveError) {
return nil, errors.Newf(errors.TypeForbidden, errors.CodeForbidden, "oidc: failed to get token").WithAdditional(retrieveError.ErrorDescription).WithAdditional(string(retrieveError.Body))
}
return nil, errors.Newf(errors.TypeInternal, errors.CodeInternal, "oidc: failed to get token").WithAdditional(err.Error())
}
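// Prefer claims from the verified ID token; a TypeNotFound error (no id_token in the response) falls through to the UserInfo endpoint below when enabled.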
claims, err := a.claimsFromIDToken(ctx, authDomain, oidcProvider, token)
if err != nil && !errors.Ast(err, errors.TypeNotFound) {
return nil, err
}
if claims == nil && authDomain.AuthDomainConfig().OIDC.GetUserInfo {
claims, err = a.claimsFromUserInfo(ctx, oidcProvider, token)
if err != nil {
return nil, err
}
}
emailClaim, ok := claims[authDomain.AuthDomainConfig().OIDC.ClaimMapping.Email].(string)
if !ok {
return nil, errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "oidc: missing email in claims")
}
email, err := valuer.NewEmail(emailClaim)
if err != nil {
return nil, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "oidc: failed to parse email").WithAdditional(err.Error())
}
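// Unless verification is explicitly skipped for this domain, require the standard email_verified claim to be present and true.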
if !authDomain.AuthDomainConfig().OIDC.InsecureSkipEmailVerified {
emailVerifiedClaim, ok := claims["email_verified"].(bool)
if !ok {
return nil, errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "oidc: missing email_verified in claims")
}
if !emailVerifiedClaim {
return nil, errors.New(errors.TypeForbidden, errors.CodeForbidden, "oidc: email is not verified")
}
}
name := ""
if nameClaim := authDomain.AuthDomainConfig().OIDC.ClaimMapping.Name; nameClaim != "" {
if n, ok := claims[nameClaim].(string); ok {
name = n
}
}
var groups []string
if groupsClaim := authDomain.AuthDomainConfig().OIDC.ClaimMapping.Groups; groupsClaim != "" {
if claimValue, exists := claims[groupsClaim]; exists {
switch g := claimValue.(type) {
case []any:
for _, group := range g {
if gs, ok := group.(string); ok {
groups = append(groups, gs)
}
}
case string:
// Some IDPs return a single group as a string instead of an array
groups = append(groups, g)
default:
a.settings.Logger().WarnContext(ctx, "oidc: unsupported groups type", "type", fmt.Sprintf("%T", claimValue))
}
}
}
role := ""
if roleClaim := authDomain.AuthDomainConfig().OIDC.ClaimMapping.Role; roleClaim != "" {
if r, ok := claims[roleClaim].(string); ok {
role = r
}
}
return authtypes.NewCallbackIdentity(name, email, authDomain.StorableAuthDomain().OrgID, state, groups, role), nil
}
func (a *AuthN) ProviderInfo(ctx context.Context, authDomain *authtypes.AuthDomain) *authtypes.AuthNProviderInfo {
return &authtypes.AuthNProviderInfo{
RelayStatePath: nil,
}
}
func (a *AuthN) oidcProviderAndOAuth2Config(ctx context.Context, siteURL *url.URL, authDomain *authtypes.AuthDomain) (*oidc.Provider, *oauth2.Config, error) {
if authDomain.AuthDomainConfig().OIDC.IssuerAlias != "" {
ctx = oidc.InsecureIssuerURLContext(ctx, authDomain.AuthDomainConfig().OIDC.IssuerAlias)
}
oidcProvider, err := oidc.NewProvider(ctx, authDomain.AuthDomainConfig().OIDC.Issuer)
if err != nil {
return nil, nil, err
}
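// Copy the package-level defaults so per-domain scope additions do not mutate the shared slice.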
scopes := make([]string, len(defaultScopes))
copy(scopes, defaultScopes)
if authDomain.AuthDomainConfig().RoleMapping != nil && len(authDomain.AuthDomainConfig().RoleMapping.GroupMappings) > 0 {
scopes = append(scopes, "groups")
}
return oidcProvider, &oauth2.Config{
ClientID: authDomain.AuthDomainConfig().OIDC.ClientID,
ClientSecret: authDomain.AuthDomainConfig().OIDC.ClientSecret,
Endpoint: oidcProvider.Endpoint(),
Scopes: scopes,
RedirectURL: (&url.URL{
Scheme: siteURL.Scheme,
Host: siteURL.Host,
Path: redirectPath,
}).String(),
}, nil
}
func (a *AuthN) claimsFromIDToken(ctx context.Context, authDomain *authtypes.AuthDomain, provider *oidc.Provider, token *oauth2.Token) (map[string]any, error) {
rawIDToken, ok := token.Extra("id_token").(string)
if !ok {
return nil, errors.New(errors.TypeNotFound, errors.CodeNotFound, "oidc: no id_token in token response")
}
verifier := provider.Verifier(&oidc.Config{ClientID: authDomain.AuthDomainConfig().OIDC.ClientID})
idToken, err := verifier.Verify(ctx, rawIDToken)
if err != nil {
return nil, errors.Newf(errors.TypeForbidden, errors.CodeForbidden, "oidc: failed to verify token").WithAdditional(err.Error())
}
var claims map[string]any
if err := idToken.Claims(&claims); err != nil {
return nil, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "oidc: failed to decode claims").WithAdditional(err.Error())
}
return claims, nil
}
func (a *AuthN) claimsFromUserInfo(ctx context.Context, provider *oidc.Provider, token *oauth2.Token) (map[string]any, error) {
var claims map[string]any
userInfo, err := provider.UserInfo(ctx, oauth2.StaticTokenSource(&oauth2.Token{
AccessToken: token.AccessToken,
TokenType: "Bearer", // The UserInfo endpoint requires a bearer token as per RFC6750
}))
if err != nil {
return nil, errors.Newf(errors.TypeInternal, errors.CodeInternal, "oidc: failed to get user info").WithAdditional(err.Error())
}
if err := userInfo.Claims(&claims); err != nil {
return nil, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "oidc: failed to decode claims").WithAdditional(err.Error())
}
return claims, nil
}

View File

@@ -1,182 +0,0 @@
package samlcallbackauthn
import (
"context"
"crypto/x509"
"encoding/base64"
"encoding/pem"
"net/url"
"strings"
"github.com/SigNoz/signoz/pkg/authn"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/valuer"
saml2 "github.com/russellhaering/gosaml2"
dsig "github.com/russellhaering/goxmldsig"
)
const (
redirectPath string = "/api/v1/complete/saml"
)
var _ authn.CallbackAuthN = (*AuthN)(nil)
type AuthN struct {
store authtypes.AuthNStore
licensing licensing.Licensing
}
func New(ctx context.Context, store authtypes.AuthNStore, licensing licensing.Licensing) (*AuthN, error) {
return &AuthN{
store: store,
licensing: licensing,
}, nil
}
func (a *AuthN) LoginURL(ctx context.Context, siteURL *url.URL, authDomain *authtypes.AuthDomain) (string, error) {
if authDomain.AuthDomainConfig().AuthNProvider != authtypes.AuthNProviderSAML {
return "", errors.Newf(errors.TypeInternal, authtypes.ErrCodeAuthDomainMismatch, "saml: domain type is not saml")
}
sp, err := a.serviceProvider(siteURL, authDomain)
if err != nil {
return "", err
}
authURL, err := sp.BuildAuthURL(authtypes.NewState(siteURL, authDomain.StorableAuthDomain().ID).URL.String())
if err != nil {
return "", err
}
return authURL, nil
}
func (a *AuthN) HandleCallback(ctx context.Context, formValues url.Values) (*authtypes.CallbackIdentity, error) {
state, err := authtypes.NewStateFromString(formValues.Get("RelayState"))
if err != nil {
return nil, errors.New(errors.TypeInvalidInput, authtypes.ErrCodeInvalidState, "saml: invalid state").WithAdditional(err.Error())
}
authDomain, err := a.store.GetAuthDomainFromID(ctx, state.DomainID)
if err != nil {
return nil, err
}
_, err = a.licensing.GetActive(ctx, authDomain.StorableAuthDomain().OrgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
sp, err := a.serviceProvider(state.URL, authDomain)
if err != nil {
return nil, err
}
assertionInfo, err := sp.RetrieveAssertionInfo(formValues.Get("SAMLResponse"))
if err != nil {
if errors.As(err, &saml2.ErrVerification{}) {
return nil, errors.New(errors.TypeForbidden, errors.CodeForbidden, err.Error())
}
if errors.As(err, &saml2.ErrMissingElement{}) {
return nil, errors.New(errors.TypeNotFound, errors.CodeNotFound, err.Error())
}
return nil, err
}
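// gosaml2 surfaces time-window violations via WarningInfo rather than as an error from RetrieveAssertionInfo.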
if assertionInfo.WarningInfo.InvalidTime {
return nil, errors.New(errors.TypeForbidden, errors.CodeForbidden, "saml: expired saml response")
}
email, err := valuer.NewEmail(assertionInfo.NameID)
if err != nil {
return nil, errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "saml: invalid email").WithAdditional("The nameID assertion is used to retrieve the email address, please check your IDP configuration and try again.")
}
name := ""
if nameAttribute := authDomain.AuthDomainConfig().SAML.AttributeMapping.Name; nameAttribute != "" {
if val := assertionInfo.Values.Get(nameAttribute); val != "" {
name = val
}
}
var groups []string
if groupAttribute := authDomain.AuthDomainConfig().SAML.AttributeMapping.Groups; groupAttribute != "" {
groups = assertionInfo.Values.GetAll(groupAttribute)
}
role := ""
if roleAttribute := authDomain.AuthDomainConfig().SAML.AttributeMapping.Role; roleAttribute != "" {
if val := assertionInfo.Values.Get(roleAttribute); val != "" {
role = val
}
}
return authtypes.NewCallbackIdentity(name, email, authDomain.StorableAuthDomain().OrgID, state, groups, role), nil
}
func (a *AuthN) ProviderInfo(ctx context.Context, authDomain *authtypes.AuthDomain) *authtypes.AuthNProviderInfo {
state := authtypes.NewState(&url.URL{Path: "login"}, authDomain.StorableAuthDomain().ID).URL.String()
return &authtypes.AuthNProviderInfo{
RelayStatePath: &state,
}
}
func (a *AuthN) serviceProvider(siteURL *url.URL, authDomain *authtypes.AuthDomain) (*saml2.SAMLServiceProvider, error) {
certStore, err := a.getCertificateStore(authDomain)
if err != nil {
return nil, err
}
acsURL := &url.URL{Scheme: siteURL.Scheme, Host: siteURL.Host, Path: redirectPath}
// Note:
// For Keycloak, ServiceProviderIssuer is the client ID. Since we set it to the host here, the client ID in Keycloak must equal the host.
// For AWS SSO, this is the value of the Application SAML audience.
return &saml2.SAMLServiceProvider{
IdentityProviderSSOURL: authDomain.AuthDomainConfig().SAML.SamlIdp,
IdentityProviderIssuer: authDomain.AuthDomainConfig().SAML.SamlEntity,
ServiceProviderIssuer: siteURL.Host,
AssertionConsumerServiceURL: acsURL.String(),
SignAuthnRequests: !authDomain.AuthDomainConfig().SAML.InsecureSkipAuthNRequestsSigned,
AllowMissingAttributes: true,
IDPCertificateStore: certStore,
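// SPKeyStore generates a fresh signing key each time the service provider is built, so the IdP must not pin the SP signing certificate.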
SPKeyStore: dsig.RandomKeyStoreForTest(),
}, nil
}
func (a *AuthN) getCertificateStore(authDomain *authtypes.AuthDomain) (dsig.X509CertificateStore, error) {
certStore := &dsig.MemoryX509CertificateStore{
Roots: []*x509.Certificate{},
}
var certBytes []byte
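// The IdP certificate may be supplied as a PEM block or as raw base64-encoded DER; detect PEM by its header.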
if strings.Contains(authDomain.AuthDomainConfig().SAML.SamlCert, "-----BEGIN CERTIFICATE-----") {
block, _ := pem.Decode([]byte(authDomain.AuthDomainConfig().SAML.SamlCert))
if block == nil {
return certStore, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "no valid pem cert found")
}
certBytes = block.Bytes
} else {
certData, err := base64.StdEncoding.DecodeString(authDomain.AuthDomainConfig().SAML.SamlCert)
if err != nil {
return certStore, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "failed to read certificate: %s", err.Error())
}
certBytes = certData
}
idpCert, err := x509.ParseCertificate(certBytes)
if err != nil {
return certStore, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "failed to prepare saml request, invalid cert: %s", err.Error())
}
certStore.Roots = append(certStore.Roots, idpCert)
return certStore, nil
}

View File

@@ -1,98 +0,0 @@
package openfgaauthz
import (
"context"
"github.com/SigNoz/signoz/pkg/authz"
pkgopenfgaauthz "github.com/SigNoz/signoz/pkg/authz/openfgaauthz"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/valuer"
openfgav1 "github.com/openfga/api/proto/openfga/v1"
openfgapkgtransformer "github.com/openfga/language/pkg/go/transformer"
)
type provider struct {
pkgAuthzService authz.AuthZ
}
func NewProviderFactory(sqlstore sqlstore.SQLStore, openfgaSchema []openfgapkgtransformer.ModuleFile) factory.ProviderFactory[authz.AuthZ, authz.Config] {
return factory.NewProviderFactory(factory.MustNewName("openfga"), func(ctx context.Context, ps factory.ProviderSettings, config authz.Config) (authz.AuthZ, error) {
return newOpenfgaProvider(ctx, ps, config, sqlstore, openfgaSchema)
})
}
func newOpenfgaProvider(ctx context.Context, settings factory.ProviderSettings, config authz.Config, sqlstore sqlstore.SQLStore, openfgaSchema []openfgapkgtransformer.ModuleFile) (authz.AuthZ, error) {
pkgOpenfgaAuthzProvider := pkgopenfgaauthz.NewProviderFactory(sqlstore, openfgaSchema)
pkgAuthzService, err := pkgOpenfgaAuthzProvider.New(ctx, settings, config)
if err != nil {
return nil, err
}
return &provider{
pkgAuthzService: pkgAuthzService,
}, nil
}
func (provider *provider) Start(ctx context.Context) error {
return provider.pkgAuthzService.Start(ctx)
}
func (provider *provider) Stop(ctx context.Context) error {
return provider.pkgAuthzService.Stop(ctx)
}
func (provider *provider) Check(ctx context.Context, tuple *openfgav1.TupleKey) error {
return provider.pkgAuthzService.Check(ctx, tuple)
}
func (provider *provider) CheckWithTupleCreation(ctx context.Context, claims authtypes.Claims, orgID valuer.UUID, relation authtypes.Relation, _ authtypes.Relation, typeable authtypes.Typeable, selectors []authtypes.Selector) error {
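// Expand the (subject, relation, selectors) triple into concrete tuples for this typeable, then batch-check them against the store.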
subject, err := authtypes.NewSubject(authtypes.TypeableUser, claims.UserID, orgID, nil)
if err != nil {
return err
}
tuples, err := typeable.Tuples(subject, relation, selectors, orgID)
if err != nil {
return err
}
err = provider.BatchCheck(ctx, tuples)
if err != nil {
return err
}
return nil
}
func (provider *provider) CheckWithTupleCreationWithoutClaims(ctx context.Context, orgID valuer.UUID, relation authtypes.Relation, _ authtypes.Relation, typeable authtypes.Typeable, selectors []authtypes.Selector) error {
subject, err := authtypes.NewSubject(authtypes.TypeableAnonymous, authtypes.AnonymousUser.String(), orgID, nil)
if err != nil {
return err
}
tuples, err := typeable.Tuples(subject, relation, selectors, orgID)
if err != nil {
return err
}
err = provider.BatchCheck(ctx, tuples)
if err != nil {
return err
}
return nil
}
func (provider *provider) BatchCheck(ctx context.Context, tuples []*openfgav1.TupleKey) error {
return provider.pkgAuthzService.BatchCheck(ctx, tuples)
}
func (provider *provider) ListObjects(ctx context.Context, subject string, relation authtypes.Relation, typeable authtypes.Typeable) ([]*authtypes.Object, error) {
return provider.pkgAuthzService.ListObjects(ctx, subject, relation, typeable)
}
func (provider *provider) Write(ctx context.Context, additions []*openfgav1.TupleKey, deletions []*openfgav1.TupleKey) error {
return provider.pkgAuthzService.Write(ctx, additions, deletions)
}

View File

@@ -1,40 +0,0 @@
module base
type organisation
relations
define read: [user, role#assignee]
define update: [user, role#assignee]
type user
relations
define read: [user, role#assignee]
define update: [user, role#assignee]
define delete: [user, role#assignee]
type anonymous
type role
relations
define assignee: [user, anonymous]
define read: [user, role#assignee]
define update: [user, role#assignee]
define delete: [user, role#assignee]
type metaresources
relations
define create: [user, role#assignee]
define list: [user, role#assignee]
type metaresource
relations
define read: [user, anonymous, role#assignee]
define update: [user, role#assignee]
define delete: [user, role#assignee]
define block: [user, role#assignee]
type telemetryresource
relations
define read: [user, role#assignee]

View File

@@ -1,29 +0,0 @@
package openfgaschema
import (
"context"
_ "embed"
"github.com/SigNoz/signoz/pkg/authz"
openfgapkgtransformer "github.com/openfga/language/pkg/go/transformer"
)
var (
//go:embed base.fga
baseDSL string
)
type schema struct{}
func NewSchema() authz.Schema {
return &schema{}
}
func (schema *schema) Get(ctx context.Context) []openfgapkgtransformer.ModuleFile {
return []openfgapkgtransformer.ModuleFile{
{
Name: "base.fga",
Contents: baseDSL,
},
}
}

View File

@@ -1,282 +0,0 @@
package httpgateway
import (
"bytes"
"context"
"encoding/json"
"io"
"net/http"
"net/url"
"strconv"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/gateway"
"github.com/SigNoz/signoz/pkg/http/client"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/types/gatewaytypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/tidwall/gjson"
)
type Provider struct {
settings factory.ScopedProviderSettings
config gateway.Config
httpClient *client.Client
licensing licensing.Licensing
}
func NewProviderFactory(licensing licensing.Licensing) factory.ProviderFactory[gateway.Gateway, gateway.Config] {
return factory.NewProviderFactory(factory.MustNewName("http"), func(ctx context.Context, ps factory.ProviderSettings, c gateway.Config) (gateway.Gateway, error) {
return New(ctx, ps, c, licensing)
})
}
func New(ctx context.Context, providerSettings factory.ProviderSettings, config gateway.Config, licensing licensing.Licensing) (gateway.Gateway, error) {
settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/ee/gateway/httpgateway")
httpClient, err := client.New(
settings.Logger(),
providerSettings.TracerProvider,
providerSettings.MeterProvider,
client.WithRequestResponseLog(true),
client.WithRetryCount(3),
)
if err != nil {
return nil, err
}
return &Provider{
settings: settings,
config: config,
httpClient: httpClient,
licensing: licensing,
}, nil
}
func (provider *Provider) GetIngestionKeys(ctx context.Context, orgID valuer.UUID, page, perPage int) (*gatewaytypes.GettableIngestionKeys, error) {
qParams := url.Values{}
qParams.Add("page", strconv.Itoa(page))
qParams.Add("per_page", strconv.Itoa(perPage))
responseBody, err := provider.do(ctx, orgID, http.MethodGet, "/v1/workspaces/me/keys", qParams, nil)
if err != nil {
return nil, err
}
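// The gateway wraps the list payload in "data" and pagination metadata in "_pagination".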
var ingestionKeys []gatewaytypes.IngestionKey
if err := json.Unmarshal([]byte(gjson.GetBytes(responseBody, "data").String()), &ingestionKeys); err != nil {
return nil, err
}
var pagination gatewaytypes.Pagination
if err := json.Unmarshal([]byte(gjson.GetBytes(responseBody, "_pagination").String()), &pagination); err != nil {
return nil, err
}
return &gatewaytypes.GettableIngestionKeys{
Keys: ingestionKeys,
Pagination: pagination,
}, nil
}
func (provider *Provider) SearchIngestionKeysByName(ctx context.Context, orgID valuer.UUID, name string, page, perPage int) (*gatewaytypes.GettableIngestionKeys, error) {
qParams := url.Values{}
qParams.Add("name", name)
qParams.Add("page", strconv.Itoa(page))
qParams.Add("per_page", strconv.Itoa(perPage))
responseBody, err := provider.do(ctx, orgID, http.MethodGet, "/v1/workspaces/me/keys/search", qParams, nil)
if err != nil {
return nil, err
}
var ingestionKeys []gatewaytypes.IngestionKey
if err := json.Unmarshal([]byte(gjson.GetBytes(responseBody, "data").String()), &ingestionKeys); err != nil {
return nil, err
}
var pagination gatewaytypes.Pagination
if err := json.Unmarshal([]byte(gjson.GetBytes(responseBody, "_pagination").String()), &pagination); err != nil {
return nil, err
}
return &gatewaytypes.GettableIngestionKeys{
Keys: ingestionKeys,
Pagination: pagination,
}, nil
}
func (provider *Provider) CreateIngestionKey(ctx context.Context, orgID valuer.UUID, name string, tags []string, expiresAt time.Time) (*gatewaytypes.GettableCreatedIngestionKey, error) {
requestBody := gatewaytypes.PostableIngestionKey{
Name: name,
Tags: tags,
ExpiresAt: expiresAt,
}
requestBodyBytes, err := json.Marshal(requestBody)
if err != nil {
return nil, err
}
responseBody, err := provider.do(ctx, orgID, http.MethodPost, "/v1/workspaces/me/keys", nil, requestBodyBytes)
if err != nil {
return nil, err
}
var createdKeyResponse gatewaytypes.GettableCreatedIngestionKey
if err := json.Unmarshal([]byte(gjson.GetBytes(responseBody, "data").String()), &createdKeyResponse); err != nil {
return nil, err
}
return &createdKeyResponse, nil
}
func (provider *Provider) UpdateIngestionKey(ctx context.Context, orgID valuer.UUID, keyID string, name string, tags []string, expiresAt time.Time) error {
requestBody := gatewaytypes.PostableIngestionKey{
Name: name,
Tags: tags,
ExpiresAt: expiresAt,
}
requestBodyBytes, err := json.Marshal(requestBody)
if err != nil {
return err
}
_, err = provider.do(ctx, orgID, http.MethodPatch, "/v1/workspaces/me/keys/"+keyID, nil, requestBodyBytes)
if err != nil {
return err
}
return nil
}
func (provider *Provider) DeleteIngestionKey(ctx context.Context, orgID valuer.UUID, keyID string) error {
_, err := provider.do(ctx, orgID, http.MethodDelete, "/v1/workspaces/me/keys/"+keyID, nil, nil)
if err != nil {
return err
}
return nil
}
func (provider *Provider) CreateIngestionKeyLimit(ctx context.Context, orgID valuer.UUID, keyID string, signal string, limitConfig gatewaytypes.LimitConfig, tags []string) (*gatewaytypes.GettableCreatedIngestionKeyLimit, error) {
requestBody := gatewaytypes.PostableIngestionKeyLimit{
Signal: signal,
Config: limitConfig,
Tags: tags,
}
requestBodyBytes, err := json.Marshal(requestBody)
if err != nil {
return nil, err
}
responseBody, err := provider.do(ctx, orgID, http.MethodPost, "/v1/workspaces/me/keys/"+keyID+"/limits", nil, requestBodyBytes)
if err != nil {
return nil, err
}
var createdIngestionKeyLimitResponse gatewaytypes.GettableCreatedIngestionKeyLimit
if err := json.Unmarshal([]byte(gjson.GetBytes(responseBody, "data").String()), &createdIngestionKeyLimitResponse); err != nil {
return nil, err
}
return &createdIngestionKeyLimitResponse, nil
}
func (provider *Provider) UpdateIngestionKeyLimit(ctx context.Context, orgID valuer.UUID, limitID string, limitConfig gatewaytypes.LimitConfig, tags []string) error {
requestBody := gatewaytypes.UpdatableIngestionKeyLimit{
Config: limitConfig,
Tags: tags,
}
requestBodyBytes, err := json.Marshal(requestBody)
if err != nil {
return err
}
_, err = provider.do(ctx, orgID, http.MethodPatch, "/v1/workspaces/me/limits/"+limitID, nil, requestBodyBytes)
if err != nil {
return err
}
return nil
}
func (provider *Provider) DeleteIngestionKeyLimit(ctx context.Context, orgID valuer.UUID, limitID string) error {
_, err := provider.do(ctx, orgID, http.MethodDelete, "/v1/workspaces/me/limits/"+limitID, nil, nil)
if err != nil {
return err
}
return nil
}
func (provider *Provider) do(ctx context.Context, orgID valuer.UUID, method string, path string, queryParams url.Values, body []byte) ([]byte, error) {
license, err := provider.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "no valid license found").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
// build url
requestURL := provider.config.URL.JoinPath(path)
// add query params to the url
if queryParams != nil {
requestURL.RawQuery = queryParams.Encode()
}
// build request
request, err := http.NewRequestWithContext(ctx, method, requestURL.String(), bytes.NewBuffer(body))
if err != nil {
return nil, err
}
// add headers needed to call gateway
request.Header.Set("Content-Type", "application/json")
request.Header.Set("X-Signoz-Cloud-Api-Key", license.Key)
request.Header.Set("X-Consumer-Username", "lid:00000000-0000-0000-0000-000000000000")
request.Header.Set("X-Consumer-Groups", "ns:default")
// execute request
response, err := provider.httpClient.Do(request)
if err != nil {
return nil, err
}
// read response
defer response.Body.Close()
responseBody, err := io.ReadAll(response.Body)
if err != nil {
return nil, err
}
// only 2XX
if response.StatusCode/100 == 2 {
return responseBody, nil
}
errorMessage := gjson.GetBytes(responseBody, "error").String()
if errorMessage == "" {
errorMessage = "an unknown error occurred"
}
// return error for non 2XX
return nil, provider.errFromStatusCode(response.StatusCode, errorMessage)
}
func (provider *Provider) errFromStatusCode(code int, errorMessage string) error {
switch code {
case http.StatusBadRequest:
return errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, errorMessage)
case http.StatusUnauthorized:
return errors.New(errors.TypeUnauthenticated, errors.CodeUnauthenticated, errorMessage)
case http.StatusForbidden:
return errors.New(errors.TypeForbidden, errors.CodeForbidden, errorMessage)
case http.StatusNotFound:
return errors.New(errors.TypeNotFound, errors.CodeNotFound, errorMessage)
case http.StatusConflict:
return errors.New(errors.TypeAlreadyExists, errors.CodeAlreadyExists, errorMessage)
}
return errors.New(errors.TypeInternal, errors.CodeInternal, errorMessage)
}

91
ee/http/middleware/pat.go Normal file
View File

@@ -0,0 +1,91 @@
package middleware
import (
"net/http"
"time"
eeTypes "github.com/SigNoz/signoz/ee/types"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"go.uber.org/zap"
)
type Pat struct {
store sqlstore.SQLStore
uuid *authtypes.UUID
headers []string
}
func NewPat(store sqlstore.SQLStore, headers []string) *Pat {
return &Pat{store: store, uuid: authtypes.NewUUID(), headers: headers}
}
func (p *Pat) Wrap(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
var values []string
var patToken string
var pat eeTypes.StorablePersonalAccessToken
for _, header := range p.headers {
values = append(values, r.Header.Get(header))
}
ctx, err := p.uuid.ContextFromRequest(r.Context(), values...)
if err != nil {
next.ServeHTTP(w, r)
return
}
patToken, ok := authtypes.UUIDFromContext(ctx)
if !ok {
next.ServeHTTP(w, r)
return
}
err = p.store.BunDB().NewSelect().Model(&pat).Where("token = ?", patToken).Scan(r.Context())
if err != nil {
next.ServeHTTP(w, r)
return
}
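// ExpiresAt == 0 means the token never expires.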
if pat.ExpiresAt < time.Now().Unix() && pat.ExpiresAt != 0 {
next.ServeHTTP(w, r)
return
}
// get user from db
user := types.User{}
err = p.store.BunDB().NewSelect().Model(&user).Where("id = ?", pat.UserID).Scan(r.Context())
if err != nil {
next.ServeHTTP(w, r)
return
}
role, err := authtypes.NewRole(user.Role)
if err != nil {
next.ServeHTTP(w, r)
return
}
claims := authtypes.Claims{
UserID: user.ID,
Role: role,
Email: user.Email,
OrgID: user.OrgID,
}
ctx = authtypes.NewContextWithClaims(ctx, claims)
r = r.WithContext(ctx)
next.ServeHTTP(w, r)
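// Update last-used only after the response has been served, keeping the write off the request's critical path.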
pat.LastUsed = time.Now().Unix()
_, err = p.store.BunDB().NewUpdate().Model(&pat).Column("last_used").Where("token = ?", patToken).Where("revoked = false").Exec(r.Context())
if err != nil {
zap.L().Error("Failed to update PAT last used in db, err: %v", zap.Error(err))
}
})
}

View File

@@ -1,26 +0,0 @@
package licensing
import (
"sync"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/licensing"
)
var (
config licensing.Config
once sync.Once
)
// Config initializes the licensing configuration exactly once and returns it, panicking if the values are invalid.
func Config(pollInterval time.Duration, failureThreshold int) licensing.Config {
once.Do(func() {
config = licensing.Config{PollInterval: pollInterval, FailureThreshold: failureThreshold}
if err := config.Validate(); err != nil {
panic(errors.WrapInternalf(err, errors.CodeInternal, "invalid licensing config"))
}
})
return config
}

View File

@@ -1,168 +0,0 @@
package httplicensing
import (
"context"
"encoding/json"
"net/http"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/licensetypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type licensingAPI struct {
licensing licensing.Licensing
}
func NewLicensingAPI(licensing licensing.Licensing) licensing.API {
return &licensingAPI{licensing: licensing}
}
func (api *licensingAPI) Activate(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "orgId is invalid"))
return
}
req := new(licensetypes.PostableLicense)
err = json.NewDecoder(r.Body).Decode(&req)
if err != nil {
render.Error(rw, err)
return
}
err = api.licensing.Activate(ctx, orgID, req.Key)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusAccepted, nil)
}
func (api *licensingAPI) GetActive(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "orgId is invalid"))
return
}
license, err := api.licensing.GetActive(ctx, orgID)
if err != nil {
render.Error(rw, err)
return
}
gettableLicense := licensetypes.NewGettableLicense(license.Data, license.Key)
render.Success(rw, http.StatusOK, gettableLicense)
}
func (api *licensingAPI) Refresh(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "orgId is invalid"))
return
}
err = api.licensing.Refresh(ctx, orgID)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusNoContent, nil)
}
func (api *licensingAPI) Checkout(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "orgId is invalid"))
return
}
req := new(licensetypes.PostableSubscription)
if err := json.NewDecoder(r.Body).Decode(req); err != nil {
render.Error(rw, err)
return
}
gettableSubscription, err := api.licensing.Checkout(ctx, orgID, req)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusCreated, gettableSubscription)
}
func (api *licensingAPI) Portal(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "orgId is invalid"))
return
}
req := new(licensetypes.PostableSubscription)
if err := json.NewDecoder(r.Body).Decode(req); err != nil {
render.Error(rw, err)
return
}
gettableSubscription, err := api.licensing.Portal(ctx, orgID, req)
if err != nil {
render.Error(rw, err)
return
}
render.Success(rw, http.StatusCreated, gettableSubscription)
}

View File

@@ -1,249 +0,0 @@
package httplicensing
import (
"context"
"encoding/json"
"time"
"github.com/SigNoz/signoz/ee/licensing/licensingstore/sqllicensingstore"
"github.com/SigNoz/signoz/pkg/analytics"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types/analyticstypes"
"github.com/SigNoz/signoz/pkg/types/licensetypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/SigNoz/signoz/pkg/zeus"
"github.com/tidwall/gjson"
)
type provider struct {
store licensetypes.Store
zeus zeus.Zeus
config licensing.Config
settings factory.ScopedProviderSettings
orgGetter organization.Getter
analytics analytics.Analytics
stopChan chan struct{}
}
func NewProviderFactory(store sqlstore.SQLStore, zeus zeus.Zeus, orgGetter organization.Getter, analytics analytics.Analytics) factory.ProviderFactory[licensing.Licensing, licensing.Config] {
return factory.NewProviderFactory(factory.MustNewName("http"), func(ctx context.Context, providerSettings factory.ProviderSettings, config licensing.Config) (licensing.Licensing, error) {
return New(ctx, providerSettings, config, store, zeus, orgGetter, analytics)
})
}
func New(ctx context.Context, ps factory.ProviderSettings, config licensing.Config, sqlstore sqlstore.SQLStore, zeus zeus.Zeus, orgGetter organization.Getter, analytics analytics.Analytics) (licensing.Licensing, error) {
settings := factory.NewScopedProviderSettings(ps, "github.com/SigNoz/signoz/ee/licensing/httplicensing")
licensestore := sqllicensingstore.New(sqlstore)
return &provider{
store: licensestore,
zeus: zeus,
config: config,
settings: settings,
orgGetter: orgGetter,
stopChan: make(chan struct{}),
analytics: analytics,
}, nil
}
func (provider *provider) Start(ctx context.Context) error {
tick := time.NewTicker(provider.config.PollInterval)
defer tick.Stop()
err := provider.Validate(ctx)
if err != nil {
provider.settings.Logger().ErrorContext(ctx, "failed to validate license from upstream server", "error", err)
}
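// Poll until Stop closes stopChan; every tick revalidates licenses for all owned organizations.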
for {
select {
case <-provider.stopChan:
return nil
case <-tick.C:
err := provider.Validate(ctx)
if err != nil {
provider.settings.Logger().ErrorContext(ctx, "failed to validate license from upstream server", "error", err)
}
}
}
}
func (provider *provider) Stop(ctx context.Context) error {
provider.settings.Logger().DebugContext(ctx, "license validation stopped")
close(provider.stopChan)
return nil
}
func (provider *provider) Validate(ctx context.Context) error {
organizations, err := provider.orgGetter.ListByOwnedKeyRange(ctx)
if err != nil {
return err
}
for _, organization := range organizations {
err := provider.Refresh(ctx, organization.ID)
if err != nil {
return err
}
}
return nil
}
func (provider *provider) Activate(ctx context.Context, organizationID valuer.UUID, key string) error {
data, err := provider.zeus.GetLicense(ctx, key)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "unable to fetch license data with upstream server")
}
license, err := licensetypes.NewLicense(data, organizationID)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "failed to create license entity")
}
storableLicense := licensetypes.NewStorableLicenseFromLicense(license)
err = provider.store.Create(ctx, storableLicense)
if err != nil {
return err
}
return nil
}
func (provider *provider) GetActive(ctx context.Context, organizationID valuer.UUID) (*licensetypes.License, error) {
storableLicenses, err := provider.store.GetAll(ctx, organizationID)
if err != nil {
return nil, err
}
activeLicense, err := licensetypes.GetActiveLicenseFromStorableLicenses(storableLicenses, organizationID)
if err != nil {
return nil, err
}
return activeLicense, nil
}
func (provider *provider) Refresh(ctx context.Context, organizationID valuer.UUID) error {
activeLicense, err := provider.GetActive(ctx, organizationID)
if err != nil {
if errors.Ast(err, errors.TypeNotFound) {
return nil
}
provider.settings.Logger().ErrorContext(ctx, "license validation failed", "org_id", organizationID.StringValue())
return err
}
data, err := provider.zeus.GetLicense(ctx, activeLicense.Key)
if err != nil {
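// Upstream validation failed: tolerate failures within the threshold window, then degrade the license to the basic plan.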
if time.Since(activeLicense.LastValidatedAt) > time.Duration(provider.config.FailureThreshold)*provider.config.PollInterval {
activeLicense.UpdateFeatures(licensetypes.BasicPlan)
updatedStorableLicense := licensetypes.NewStorableLicenseFromLicense(activeLicense)
err = provider.store.Update(ctx, organizationID, updatedStorableLicense)
if err != nil {
return err
}
return nil
}
return err
}
err = activeLicense.Update(data)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "failed to create license entity from license data")
}
updatedStorableLicense := licensetypes.NewStorableLicenseFromLicense(activeLicense)
err = provider.store.Update(ctx, organizationID, updatedStorableLicense)
if err != nil {
return err
}
stats := licensetypes.NewStatsFromLicense(activeLicense)
provider.analytics.Send(ctx,
analyticstypes.Track{
UserId: "stats_" + organizationID.String(),
Event: "License Updated",
Properties: analyticstypes.NewPropertiesFromMap(stats),
Context: &analyticstypes.Context{
Extra: map[string]interface{}{
analyticstypes.KeyGroupID: organizationID.String(),
},
},
},
analyticstypes.Group{
UserId: "stats_" + organizationID.String(),
GroupId: organizationID.String(),
Traits: analyticstypes.NewTraitsFromMap(stats),
},
)
return nil
}
func (provider *provider) Checkout(ctx context.Context, organizationID valuer.UUID, postableSubscription *licensetypes.PostableSubscription) (*licensetypes.GettableSubscription, error) {
activeLicense, err := provider.GetActive(ctx, organizationID)
if err != nil {
return nil, err
}
body, err := json.Marshal(postableSubscription)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInvalidInput, errors.CodeInvalidInput, "failed to marshal checkout payload")
}
response, err := provider.zeus.GetCheckoutURL(ctx, activeLicense.Key, body)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "failed to generate checkout session")
}
return &licensetypes.GettableSubscription{RedirectURL: gjson.GetBytes(response, "url").String()}, nil
}
func (provider *provider) Portal(ctx context.Context, organizationID valuer.UUID, postableSubscription *licensetypes.PostableSubscription) (*licensetypes.GettableSubscription, error) {
activeLicense, err := provider.GetActive(ctx, organizationID)
if err != nil {
return nil, err
}
body, err := json.Marshal(postableSubscription)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInvalidInput, errors.CodeInvalidInput, "failed to marshal portal payload")
}
response, err := provider.zeus.GetPortalURL(ctx, activeLicense.Key, body)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "failed to generate portal session")
}
return &licensetypes.GettableSubscription{RedirectURL: gjson.GetBytes(response, "url").String()}, nil
}
func (provider *provider) GetFeatureFlags(ctx context.Context, organizationID valuer.UUID) ([]*licensetypes.Feature, error) {
license, err := provider.GetActive(ctx, organizationID)
if err != nil {
if errors.Ast(err, errors.TypeNotFound) {
return licensetypes.BasicPlan, nil
}
return nil, err
}
return license.Features, nil
}
func (provider *provider) Collect(ctx context.Context, orgID valuer.UUID) (map[string]any, error) {
activeLicense, err := provider.GetActive(ctx, orgID)
if err != nil {
if errors.Ast(err, errors.TypeNotFound) {
return map[string]any{}, nil
}
return nil, err
}
return licensetypes.NewStatsFromLicense(activeLicense), nil
}

View File

@@ -1,81 +0,0 @@
package sqllicensingstore
import (
"context"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types/licensetypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type store struct {
sqlstore sqlstore.SQLStore
}
func New(sqlstore sqlstore.SQLStore) licensetypes.Store {
return &store{sqlstore}
}
func (store *store) Create(ctx context.Context, storableLicense *licensetypes.StorableLicense) error {
_, err := store.
sqlstore.
BunDB().
NewInsert().
Model(storableLicense).
Exec(ctx)
if err != nil {
return store.sqlstore.WrapAlreadyExistsErrf(err, errors.CodeAlreadyExists, "license with ID: %s already exists", storableLicense.ID)
}
return nil
}
func (store *store) Get(ctx context.Context, organizationID valuer.UUID, licenseID valuer.UUID) (*licensetypes.StorableLicense, error) {
storableLicense := new(licensetypes.StorableLicense)
err := store.
sqlstore.
BunDB().
NewSelect().
Model(storableLicense).
Where("org_id = ?", organizationID).
Where("id = ?", licenseID).
Scan(ctx)
if err != nil {
return nil, store.sqlstore.WrapNotFoundErrf(err, errors.CodeNotFound, "license with ID: %s does not exist", licenseID)
}
return storableLicense, nil
}
func (store *store) GetAll(ctx context.Context, organizationID valuer.UUID) ([]*licensetypes.StorableLicense, error) {
storableLicenses := make([]*licensetypes.StorableLicense, 0)
err := store.
sqlstore.
BunDB().
NewSelect().
Model(&storableLicenses).
Where("org_id = ?", organizationID).
Scan(ctx)
if err != nil {
return nil, store.sqlstore.WrapNotFoundErrf(err, errors.CodeNotFound, "licenses for organizationID: %s do not exist", organizationID)
}
return storableLicenses, nil
}
func (store *store) Update(ctx context.Context, organizationID valuer.UUID, storableLicense *licensetypes.StorableLicense) error {
_, err := store.
sqlstore.
BunDB().
NewUpdate().
Model(storableLicense).
WherePK().
Where("org_id = ?", organizationID).
Exec(ctx)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "unable to update license with ID: %s", storableLicense.ID)
}
return nil
}

View File

@@ -1,297 +0,0 @@
package impldashboard
import (
"context"
"maps"
"github.com/SigNoz/signoz/pkg/analytics"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
pkgimpldashboard "github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/role"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/dashboardtypes"
"github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/roletypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type module struct {
pkgDashboardModule dashboard.Module
store dashboardtypes.Store
settings factory.ScopedProviderSettings
role role.Module
querier querier.Querier
licensing licensing.Licensing
}
func NewModule(store dashboardtypes.Store, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, role role.Module, queryParser queryparser.QueryParser, querier querier.Querier, licensing licensing.Licensing) dashboard.Module {
scopedProviderSettings := factory.NewScopedProviderSettings(settings, "github.com/SigNoz/signoz/ee/modules/dashboard/impldashboard")
pkgDashboardModule := pkgimpldashboard.NewModule(store, settings, analytics, orgGetter, queryParser)
return &module{
pkgDashboardModule: pkgDashboardModule,
store: store,
settings: scopedProviderSettings,
role: role,
querier: querier,
licensing: licensing,
}
}
func (module *module) CreatePublic(ctx context.Context, orgID valuer.UUID, publicDashboard *dashboardtypes.PublicDashboard) error {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
storablePublicDashboard, err := module.store.GetPublic(ctx, publicDashboard.DashboardID.StringValue())
if err != nil && !errors.Ast(err, errors.TypeNotFound) {
return err
}
if storablePublicDashboard != nil {
return errors.Newf(errors.TypeAlreadyExists, dashboardtypes.ErrCodePublicDashboardAlreadyExists, "dashboard with id %s is already public", storablePublicDashboard.DashboardID)
}
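// Making a dashboard public grants the managed anonymous role read access to the public dashboard object, then persists the mapping.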
role, err := module.role.GetOrCreate(ctx, roletypes.NewRole(roletypes.AnonymousUserRoleName, roletypes.AnonymousUserRoleDescription, roletypes.RoleTypeManaged.StringValue(), orgID))
if err != nil {
return err
}
err = module.role.Assign(ctx, role.ID, orgID, authtypes.MustNewSubject(authtypes.TypeableAnonymous, authtypes.AnonymousUser.StringValue(), orgID, nil))
if err != nil {
return err
}
additionObject := authtypes.MustNewObject(
authtypes.Resource{
Name: dashboardtypes.TypeableMetaResourcePublicDashboard.Name(),
Type: authtypes.TypeMetaResource,
},
authtypes.MustNewSelector(authtypes.TypeMetaResource, publicDashboard.ID.String()),
)
err = module.role.PatchObjects(ctx, orgID, role.ID, authtypes.RelationRead, []*authtypes.Object{additionObject}, nil)
if err != nil {
return err
}
err = module.store.CreatePublic(ctx, dashboardtypes.NewStorablePublicDashboardFromPublicDashboard(publicDashboard))
if err != nil {
return err
}
return nil
}
func (module *module) GetPublic(ctx context.Context, orgID valuer.UUID, dashboardID valuer.UUID) (*dashboardtypes.PublicDashboard, error) {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return nil, errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
storablePublicDashboard, err := module.store.GetPublic(ctx, dashboardID.StringValue())
if err != nil {
return nil, err
}
return dashboardtypes.NewPublicDashboardFromStorablePublicDashboard(storablePublicDashboard), nil
}
func (module *module) GetDashboardByPublicID(ctx context.Context, id valuer.UUID) (*dashboardtypes.Dashboard, error) {
storableDashboard, err := module.store.GetDashboardByPublicID(ctx, id.StringValue())
if err != nil {
return nil, err
}
return dashboardtypes.NewDashboardFromStorableDashboard(storableDashboard), nil
}
func (module *module) GetPublicDashboardSelectorsAndOrg(ctx context.Context, id valuer.UUID, orgs []*types.Organization) ([]authtypes.Selector, valuer.UUID, error) {
orgIDs := make([]string, len(orgs))
for idx, org := range orgs {
orgIDs[idx] = org.ID.StringValue()
}
storableDashboard, err := module.store.GetDashboardByOrgsAndPublicID(ctx, orgIDs, id.StringValue())
if err != nil {
return nil, valuer.UUID{}, err
}
return []authtypes.Selector{
authtypes.MustNewSelector(authtypes.TypeMetaResource, id.StringValue()),
}, storableDashboard.OrgID, nil
}
func (module *module) GetPublicWidgetQueryRange(ctx context.Context, id valuer.UUID, widgetIdx, startTime, endTime uint64) (*querybuildertypesv5.QueryRangeResponse, error) {
dashboard, err := module.GetDashboardByPublicID(ctx, id)
if err != nil {
return nil, err
}
query, err := dashboard.GetWidgetQuery(startTime, endTime, widgetIdx, module.settings.Logger())
if err != nil {
return nil, err
}
return module.querier.QueryRange(ctx, dashboard.OrgID, query)
}
func (module *module) UpdatePublic(ctx context.Context, orgID valuer.UUID, publicDashboard *dashboardtypes.PublicDashboard) error {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
return module.store.UpdatePublic(ctx, dashboardtypes.NewStorablePublicDashboardFromPublicDashboard(publicDashboard))
}
func (module *module) Delete(ctx context.Context, orgID valuer.UUID, id valuer.UUID) error {
dashboard, err := module.Get(ctx, orgID, id)
if err != nil {
return err
}
if dashboard.Locked {
return errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "dashboard is locked, please unlock the dashboard to delete it")
}
err = module.store.RunInTx(ctx, func(ctx context.Context) error {
err := module.deletePublic(ctx, orgID, id)
if err != nil && !errors.Ast(err, errors.TypeNotFound) {
return err
}
err = module.store.Delete(ctx, orgID, id)
if err != nil {
return err
}
return nil
})
if err != nil {
return err
}
return nil
}
func (module *module) DeletePublic(ctx context.Context, orgID valuer.UUID, dashboardID valuer.UUID) error {
_, err := module.licensing.GetActive(ctx, orgID)
if err != nil {
return errors.New(errors.TypeLicenseUnavailable, errors.CodeLicenseUnavailable, "a valid license is not available").WithAdditional("this feature requires a valid license").WithAdditional(err.Error())
}
publicDashboard, err := module.GetPublic(ctx, orgID, dashboardID)
if err != nil {
return err
}
role, err := module.role.GetOrCreate(ctx, roletypes.NewRole(roletypes.AnonymousUserRoleName, roletypes.AnonymousUserRoleDescription, roletypes.RoleTypeManaged.StringValue(), orgID))
if err != nil {
return err
}
deletionObject := authtypes.MustNewObject(
authtypes.Resource{
Name: dashboardtypes.TypeableMetaResourcePublicDashboard.Name(),
Type: authtypes.TypeMetaResource,
},
authtypes.MustNewSelector(authtypes.TypeMetaResource, publicDashboard.ID.String()),
)
err = module.role.PatchObjects(ctx, orgID, role.ID, authtypes.RelationRead, nil, []*authtypes.Object{deletionObject})
if err != nil {
return err
}
err = module.store.DeletePublic(ctx, dashboardID.StringValue())
if err != nil {
return err
}
return nil
}
func (module *module) Collect(ctx context.Context, orgID valuer.UUID) (map[string]any, error) {
dashboards, err := module.store.List(ctx, orgID)
if err != nil {
return nil, err
}
publicDashboards, err := module.store.ListPublic(ctx, orgID)
if err != nil {
return nil, err
}
stats := make(map[string]any)
maps.Copy(stats, dashboardtypes.NewStatsFromStorableDashboards(dashboards))
maps.Copy(stats, dashboardtypes.NewStatsFromStorablePublicDashboards(publicDashboards))
return stats, nil
}
func (module *module) Create(ctx context.Context, orgID valuer.UUID, createdBy string, creator valuer.UUID, data dashboardtypes.PostableDashboard) (*dashboardtypes.Dashboard, error) {
return module.pkgDashboardModule.Create(ctx, orgID, createdBy, creator, data)
}
func (module *module) Get(ctx context.Context, orgID valuer.UUID, id valuer.UUID) (*dashboardtypes.Dashboard, error) {
return module.pkgDashboardModule.Get(ctx, orgID, id)
}
func (module *module) GetByMetricNames(ctx context.Context, orgID valuer.UUID, metricNames []string) (map[string][]map[string]string, error) {
return module.pkgDashboardModule.GetByMetricNames(ctx, orgID, metricNames)
}
func (module *module) MustGetTypeables() []authtypes.Typeable {
return module.pkgDashboardModule.MustGetTypeables()
}
func (module *module) List(ctx context.Context, orgID valuer.UUID) ([]*dashboardtypes.Dashboard, error) {
return module.pkgDashboardModule.List(ctx, orgID)
}
func (module *module) Update(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, data dashboardtypes.UpdatableDashboard, diff int) (*dashboardtypes.Dashboard, error) {
return module.pkgDashboardModule.Update(ctx, orgID, id, updatedBy, data, diff)
}
func (module *module) LockUnlock(ctx context.Context, orgID valuer.UUID, id valuer.UUID, updatedBy string, role types.Role, lock bool) error {
return module.pkgDashboardModule.LockUnlock(ctx, orgID, id, updatedBy, role, lock)
}
func (module *module) deletePublic(ctx context.Context, orgID valuer.UUID, dashboardID valuer.UUID) error {
publicDashboard, err := module.store.GetPublic(ctx, dashboardID.String())
if err != nil {
return err
}
role, err := module.role.GetOrCreate(ctx, roletypes.NewRole(roletypes.AnonymousUserRoleName, roletypes.AnonymousUserRoleDescription, roletypes.RoleTypeManaged.StringValue(), orgID))
if err != nil {
return err
}
deletionObject := authtypes.MustNewObject(
authtypes.Resource{
Name: dashboardtypes.TypeableMetaResourcePublicDashboard.Name(),
Type: authtypes.TypeMetaResource,
},
authtypes.MustNewSelector(authtypes.TypeMetaResource, publicDashboard.ID.String()),
)
err = module.role.PatchObjects(ctx, orgID, role.ID, authtypes.RelationRead, nil, []*authtypes.Object{deletionObject})
if err != nil {
return err
}
err = module.store.DeletePublic(ctx, dashboardID.StringValue())
if err != nil {
return err
}
return nil
}

View File

@@ -0,0 +1,4 @@
.vscode
README.md
signoz.db
bin

View File

@@ -11,7 +11,13 @@ before:
builds:
- id: signoz
binary: bin/signoz
main: ./cmd/enterprise
main: ee/query-service/main.go
env:
- CGO_ENABLED=1
- >-
{{- if eq .Os "linux" }}
{{- if eq .Arch "arm64" }}CC=aarch64-linux-gnu-gcc{{- end }}
{{- end }}
goos:
- linux
- darwin
@@ -29,10 +35,10 @@ builds:
- -X github.com/SigNoz/signoz/pkg/version.hash={{ .ShortCommit }}
- -X github.com/SigNoz/signoz/pkg/version.time={{ .CommitTimestamp }}
- -X github.com/SigNoz/signoz/pkg/version.branch={{ .Branch }}
- -X github.com/SigNoz/signoz/ee/zeus.url=https://api.signoz.cloud
- -X github.com/SigNoz/signoz/ee/zeus.deprecatedURL=https://license.signoz.io
- -X github.com/SigNoz/signoz/ee/query-service/constants.ZeusURL=https://api.signoz.cloud
- -X github.com/SigNoz/signoz/ee/query-service/constants.LicenseSignozIo=https://license.signoz.io/api/v1
- -X github.com/SigNoz/signoz/pkg/analytics.key=9kRrJ7oPCGPEJLF6QjMPLt5bljFhRQBr
- >-
{{- if eq .Os "linux" }}-linkmode external -extldflags '-static'{{- end }}
mod_timestamp: "{{ .CommitTimestamp }}"
tags:
- timetzdata

View File

@@ -11,9 +11,11 @@ RUN apk update && \
COPY ./target/${OS}-${TARGETARCH}/signoz /root/signoz
COPY ./conf/prometheus.yml /root/config/prometheus.yml
COPY ./templates/email /root/templates
COPY frontend/build/ /etc/signoz/web/
RUN chmod 755 /root /root/signoz
ENTRYPOINT ["./signoz", "server"]
ENTRYPOINT ["./signoz"]
CMD ["-config", "/root/config/prometheus.yml"]

View File

@@ -1,4 +1,4 @@
FROM golang:1.24-bullseye
FROM golang:1.22-bullseye
ARG OS="linux"
ARG TARGETARCH
@@ -23,7 +23,6 @@ COPY go.mod go.sum ./
RUN go mod download
COPY ./cmd/ ./cmd/
COPY ./ee/ ./ee/
COPY ./pkg/ ./pkg/
COPY ./templates/email /root/templates
@@ -34,4 +33,4 @@ RUN mv /root/linux-${TARGETARCH}/signoz /root/signoz
RUN chmod 755 /root /root/signoz
ENTRYPOINT ["/root/signoz", "server"]
ENTRYPOINT ["/root/signoz"]

View File

@@ -12,9 +12,11 @@ RUN apk update && \
rm -rf /var/cache/apk/*
COPY ./target/${OS}-${ARCH}/signoz /root/signoz
COPY ./conf/prometheus.yml /root/config/prometheus.yml
COPY ./templates/email /root/templates
COPY frontend/build/ /etc/signoz/web/
RUN chmod 755 /root /root/signoz
ENTRYPOINT ["./signoz", "server"]
ENTRYPOINT ["./signoz"]
CMD ["-config", "/root/config/prometheus.yml"]

View File

@@ -5,7 +5,6 @@ import (
querierV2 "github.com/SigNoz/signoz/pkg/query-service/app/querier/v2"
"github.com/SigNoz/signoz/pkg/query-service/app/queryBuilder"
"github.com/SigNoz/signoz/pkg/valuer"
)
type DailyProvider struct {
@@ -38,7 +37,7 @@ func NewDailyProvider(opts ...GenericProviderOption[*DailyProvider]) *DailyProvi
return dp
}
func (p *DailyProvider) GetAnomalies(ctx context.Context, orgID valuer.UUID, req *GetAnomaliesRequest) (*GetAnomaliesResponse, error) {
func (p *DailyProvider) GetAnomalies(ctx context.Context, req *GetAnomaliesRequest) (*GetAnomaliesResponse, error) {
req.Seasonality = SeasonalityDaily
return p.getAnomalies(ctx, orgID, req)
return p.getAnomalies(ctx, req)
}

View File

@@ -5,7 +5,6 @@ import (
querierV2 "github.com/SigNoz/signoz/pkg/query-service/app/querier/v2"
"github.com/SigNoz/signoz/pkg/query-service/app/queryBuilder"
"github.com/SigNoz/signoz/pkg/valuer"
)
type HourlyProvider struct {
@@ -38,7 +37,7 @@ func NewHourlyProvider(opts ...GenericProviderOption[*HourlyProvider]) *HourlyPr
return hp
}
func (p *HourlyProvider) GetAnomalies(ctx context.Context, orgID valuer.UUID, req *GetAnomaliesRequest) (*GetAnomaliesResponse, error) {
func (p *HourlyProvider) GetAnomalies(ctx context.Context, req *GetAnomaliesRequest) (*GetAnomaliesResponse, error) {
req.Seasonality = SeasonalityHourly
return p.getAnomalies(ctx, orgID, req)
return p.getAnomalies(ctx, req)
}

View File

@@ -2,10 +2,8 @@ package anomaly
import (
"context"
"github.com/SigNoz/signoz/pkg/valuer"
)
type Provider interface {
GetAnomalies(ctx context.Context, orgID valuer.UUID, req *GetAnomaliesRequest) (*GetAnomaliesResponse, error)
GetAnomalies(ctx context.Context, req *GetAnomaliesRequest) (*GetAnomaliesResponse, error)
}
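
Since all three seasonal providers must track this `Provider` signature change, a compile-time assertion is a cheap way to catch drift; a minimal sketch of the idiom (not necessarily present in the repo):

```go
// Compile-time checks: each statement fails to build if the named
// provider stops satisfying the Provider interface.
var (
	_ Provider = (*DailyProvider)(nil)
	_ Provider = (*HourlyProvider)(nil)
	_ Provider = (*WeeklyProvider)(nil)
)
```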

View File

@@ -5,12 +5,11 @@ import (
"math"
"time"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/query-service/cache"
"github.com/SigNoz/signoz/pkg/query-service/interfaces"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/query-service/postprocess"
"github.com/SigNoz/signoz/pkg/query-service/utils/labels"
"github.com/SigNoz/signoz/pkg/valuer"
"go.uber.org/zap"
)
@@ -60,9 +59,9 @@ func (p *BaseSeasonalProvider) getQueryParams(req *GetAnomaliesRequest) *anomaly
return prepareAnomalyQueryParams(req.Params, req.Seasonality)
}
func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID, params *anomalyQueryParams) (*anomalyQueryResults, error) {
func (p *BaseSeasonalProvider) getResults(ctx context.Context, params *anomalyQueryParams) (*anomalyQueryResults, error) {
zap.L().Info("fetching results for current period", zap.Any("currentPeriodQuery", params.CurrentPeriodQuery))
currentPeriodResults, _, err := p.querierV2.QueryRange(ctx, orgID, params.CurrentPeriodQuery)
currentPeriodResults, _, err := p.querierV2.QueryRange(ctx, params.CurrentPeriodQuery)
if err != nil {
return nil, err
}
@@ -73,7 +72,7 @@ func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID
}
zap.L().Info("fetching results for past period", zap.Any("pastPeriodQuery", params.PastPeriodQuery))
pastPeriodResults, _, err := p.querierV2.QueryRange(ctx, orgID, params.PastPeriodQuery)
pastPeriodResults, _, err := p.querierV2.QueryRange(ctx, params.PastPeriodQuery)
if err != nil {
return nil, err
}
@@ -84,7 +83,7 @@ func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID
}
zap.L().Info("fetching results for current season", zap.Any("currentSeasonQuery", params.CurrentSeasonQuery))
currentSeasonResults, _, err := p.querierV2.QueryRange(ctx, orgID, params.CurrentSeasonQuery)
currentSeasonResults, _, err := p.querierV2.QueryRange(ctx, params.CurrentSeasonQuery)
if err != nil {
return nil, err
}
@@ -95,7 +94,7 @@ func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID
}
zap.L().Info("fetching results for past season", zap.Any("pastSeasonQuery", params.PastSeasonQuery))
pastSeasonResults, _, err := p.querierV2.QueryRange(ctx, orgID, params.PastSeasonQuery)
pastSeasonResults, _, err := p.querierV2.QueryRange(ctx, params.PastSeasonQuery)
if err != nil {
return nil, err
}
@@ -106,7 +105,7 @@ func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID
}
zap.L().Info("fetching results for past 2 season", zap.Any("past2SeasonQuery", params.Past2SeasonQuery))
past2SeasonResults, _, err := p.querierV2.QueryRange(ctx, orgID, params.Past2SeasonQuery)
past2SeasonResults, _, err := p.querierV2.QueryRange(ctx, params.Past2SeasonQuery)
if err != nil {
return nil, err
}
@@ -117,7 +116,7 @@ func (p *BaseSeasonalProvider) getResults(ctx context.Context, orgID valuer.UUID
}
zap.L().Info("fetching results for past 3 season", zap.Any("past3SeasonQuery", params.Past3SeasonQuery))
past3SeasonResults, _, err := p.querierV2.QueryRange(ctx, orgID, params.Past3SeasonQuery)
past3SeasonResults, _, err := p.querierV2.QueryRange(ctx, params.Past3SeasonQuery)
if err != nil {
return nil, err
}
@@ -336,9 +335,9 @@ func (p *BaseSeasonalProvider) getAnomalyScores(
return anomalyScoreSeries
}
func (p *BaseSeasonalProvider) getAnomalies(ctx context.Context, orgID valuer.UUID, req *GetAnomaliesRequest) (*GetAnomaliesResponse, error) {
func (p *BaseSeasonalProvider) getAnomalies(ctx context.Context, req *GetAnomaliesRequest) (*GetAnomaliesResponse, error) {
anomalyParams := p.getQueryParams(req)
anomalyQueryResults, err := p.getResults(ctx, orgID, anomalyParams)
anomalyQueryResults, err := p.getResults(ctx, anomalyParams)
if err != nil {
return nil, err
}
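
The six near-identical fetches above (current period, past period, and the four seasonal windows) could be folded into one loop. An illustrative sketch only, assuming the surrounding file's imports: `QueryRange`'s two-argument form follows one side of the diff, while the window struct and the `[]*v3.Result` result type are assumptions:

```go
// fetchWindow pairs a log label with its prepared query.
type fetchWindow struct {
	name  string
	query *v3.QueryRangeParamsV3 // assumed query type
}

// fetchAll runs QueryRange once per window, failing fast on error.
func (p *BaseSeasonalProvider) fetchAll(ctx context.Context, windows []fetchWindow) (map[string][]*v3.Result, error) {
	out := make(map[string][]*v3.Result, len(windows))
	for _, w := range windows {
		zap.L().Info("fetching results", zap.String("window", w.name), zap.Any("query", w.query))
		results, _, err := p.querierV2.QueryRange(ctx, w.query)
		if err != nil {
			return nil, err
		}
		out[w.name] = results
	}
	return out, nil
}
```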

View File

@@ -5,7 +5,6 @@ import (
querierV2 "github.com/SigNoz/signoz/pkg/query-service/app/querier/v2"
"github.com/SigNoz/signoz/pkg/query-service/app/queryBuilder"
"github.com/SigNoz/signoz/pkg/valuer"
)
type WeeklyProvider struct {
@@ -37,7 +36,7 @@ func NewWeeklyProvider(opts ...GenericProviderOption[*WeeklyProvider]) *WeeklyPr
return wp
}
func (p *WeeklyProvider) GetAnomalies(ctx context.Context, orgID valuer.UUID, req *GetAnomaliesRequest) (*GetAnomaliesResponse, error) {
func (p *WeeklyProvider) GetAnomalies(ctx context.Context, req *GetAnomaliesRequest) (*GetAnomaliesResponse, error) {
req.Seasonality = SeasonalityWeekly
return p.getAnomalies(ctx, orgID, req)
return p.getAnomalies(ctx, req)
}

View File

@@ -5,39 +5,47 @@ import (
"net/http/httputil"
"time"
"github.com/SigNoz/signoz/ee/licensing/httplicensing"
"github.com/SigNoz/signoz/ee/query-service/dao"
"github.com/SigNoz/signoz/ee/query-service/integrations/gateway"
"github.com/SigNoz/signoz/ee/query-service/interfaces"
"github.com/SigNoz/signoz/ee/query-service/license"
"github.com/SigNoz/signoz/ee/query-service/usage"
"github.com/SigNoz/signoz/pkg/alertmanager"
"github.com/SigNoz/signoz/pkg/apis/fields"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/http/middleware"
querierAPI "github.com/SigNoz/signoz/pkg/querier"
baseapp "github.com/SigNoz/signoz/pkg/query-service/app"
"github.com/SigNoz/signoz/pkg/query-service/app/cloudintegrations"
"github.com/SigNoz/signoz/pkg/query-service/app/integrations"
"github.com/SigNoz/signoz/pkg/query-service/app/logparsingpipeline"
"github.com/SigNoz/signoz/pkg/query-service/interfaces"
"github.com/SigNoz/signoz/pkg/query-service/cache"
baseint "github.com/SigNoz/signoz/pkg/query-service/interfaces"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
rules "github.com/SigNoz/signoz/pkg/query-service/rules"
"github.com/SigNoz/signoz/pkg/queryparser"
"github.com/SigNoz/signoz/pkg/signoz"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/version"
"github.com/gorilla/mux"
)
type APIHandlerOptions struct {
DataConnector interfaces.Reader
DataConnector interfaces.DataConnector
PreferSpanMetrics bool
AppDao dao.ModelDao
RulesManager *rules.Manager
UsageManager *usage.Manager
FeatureFlags baseint.FeatureLookup
LicenseManager *license.Manager
IntegrationsController *integrations.Controller
CloudIntegrationsController *cloudintegrations.Controller
LogsParsingPipelineController *logparsingpipeline.LogParsingPipelineController
Cache cache.Cache
Gateway *httputil.ReverseProxy
GatewayUrl string
// Querier Flux Interval
FluxInterval time.Duration
GlobalConfig global.Config
FluxInterval time.Duration
UseLogsNewSchema bool
UseTraceNewSchema bool
JWT *authtypes.JWT
}
type APIHandler struct {
@@ -49,17 +57,20 @@ type APIHandler struct {
func NewAPIHandler(opts APIHandlerOptions, signoz *signoz.SigNoz) (*APIHandler, error) {
baseHandler, err := baseapp.NewAPIHandler(baseapp.APIHandlerOpts{
Reader: opts.DataConnector,
PreferSpanMetrics: opts.PreferSpanMetrics,
AppDao: opts.AppDao,
RuleManager: opts.RulesManager,
FeatureFlags: opts.FeatureFlags,
IntegrationsController: opts.IntegrationsController,
CloudIntegrationsController: opts.CloudIntegrationsController,
LogsParsingPipelineController: opts.LogsParsingPipelineController,
Cache: opts.Cache,
FluxInterval: opts.FluxInterval,
UseLogsNewSchema: opts.UseLogsNewSchema,
UseTraceNewSchema: opts.UseTraceNewSchema,
AlertmanagerAPI: alertmanager.NewAPI(signoz.Alertmanager),
LicensingAPI: httplicensing.NewLicensingAPI(signoz.Licensing),
FieldsAPI: fields.NewAPI(signoz.Instrumentation.ToProviderSettings(), signoz.TelemetryStore),
FieldsAPI: fields.NewAPI(signoz.TelemetryStore),
Signoz: signoz,
QuerierAPI: querierAPI.NewAPI(signoz.Instrumentation.ToProviderSettings(), signoz.Querier, signoz.Analytics),
QueryParserAPI: queryparser.NewAPI(signoz.Instrumentation.ToProviderSettings(), signoz.QueryParser),
})
if err != nil {
@@ -73,45 +84,102 @@ func NewAPIHandler(opts APIHandlerOptions, signoz *signoz.SigNoz) (*APIHandler,
return ah, nil
}
func (ah *APIHandler) FF() baseint.FeatureLookup {
return ah.opts.FeatureFlags
}
func (ah *APIHandler) RM() *rules.Manager {
return ah.opts.RulesManager
}
func (ah *APIHandler) LM() *license.Manager {
return ah.opts.LicenseManager
}
func (ah *APIHandler) UM() *usage.Manager {
return ah.opts.UsageManager
}
func (ah *APIHandler) AppDao() dao.ModelDao {
return ah.opts.AppDao
}
func (ah *APIHandler) Gateway() *httputil.ReverseProxy {
return ah.opts.Gateway
}
func (ah *APIHandler) CheckFeature(f string) bool {
err := ah.FF().CheckFeature(f)
return err == nil
}
// RegisterRoutes registers routes for this handler on the given router
func (ah *APIHandler) RegisterRoutes(router *mux.Router, am *middleware.AuthZ) {
// note: add ee override methods first
// routes available only in ee version
router.HandleFunc("/api/v1/features", am.ViewAccess(ah.getFeatureFlags)).Methods(http.MethodGet)
router.HandleFunc("/api/v1/featureFlags",
am.OpenAccess(ah.getFeatureFlags)).
Methods(http.MethodGet)
router.HandleFunc("/api/v1/loginPrecheck",
am.OpenAccess(ah.precheckLogin)).
Methods(http.MethodGet)
// paid plans specific routes
router.HandleFunc("/api/v1/complete/saml",
am.OpenAccess(ah.receiveSAML)).
Methods(http.MethodPost)
router.HandleFunc("/api/v1/complete/google",
am.OpenAccess(ah.receiveGoogleAuth)).
Methods(http.MethodGet)
router.HandleFunc("/api/v1/orgs/{orgId}/domains",
am.AdminAccess(ah.listDomainsByOrg)).
Methods(http.MethodGet)
router.HandleFunc("/api/v1/domains",
am.AdminAccess(ah.postDomain)).
Methods(http.MethodPost)
router.HandleFunc("/api/v1/domains/{id}",
am.AdminAccess(ah.putDomain)).
Methods(http.MethodPut)
router.HandleFunc("/api/v1/domains/{id}",
am.AdminAccess(ah.deleteDomain)).
Methods(http.MethodDelete)
// base overrides
router.HandleFunc("/api/v1/version", am.OpenAccess(ah.getVersion)).Methods(http.MethodGet)
router.HandleFunc("/api/v1/invite/{token}", am.OpenAccess(ah.getInvite)).Methods(http.MethodGet)
router.HandleFunc("/api/v1/register", am.OpenAccess(ah.registerUser)).Methods(http.MethodPost)
router.HandleFunc("/api/v1/login", am.OpenAccess(ah.loginUser)).Methods(http.MethodPost)
router.HandleFunc("/api/v1/checkout", am.AdminAccess(ah.LicensingAPI.Checkout)).Methods(http.MethodPost)
// PAT APIs
router.HandleFunc("/api/v1/pats", am.AdminAccess(ah.createPAT)).Methods(http.MethodPost)
router.HandleFunc("/api/v1/pats", am.AdminAccess(ah.getPATs)).Methods(http.MethodGet)
router.HandleFunc("/api/v1/pats/{id}", am.AdminAccess(ah.updatePAT)).Methods(http.MethodPut)
router.HandleFunc("/api/v1/pats/{id}", am.AdminAccess(ah.revokePAT)).Methods(http.MethodDelete)
router.HandleFunc("/api/v1/checkout", am.AdminAccess(ah.checkout)).Methods(http.MethodPost)
router.HandleFunc("/api/v1/billing", am.AdminAccess(ah.getBilling)).Methods(http.MethodGet)
router.HandleFunc("/api/v1/portal", am.AdminAccess(ah.LicensingAPI.Portal)).Methods(http.MethodPost)
router.HandleFunc("/api/v1/portal", am.AdminAccess(ah.portalSession)).Methods(http.MethodPost)
router.HandleFunc("/api/v1/dashboards/{uuid}/lock", am.EditAccess(ah.lockDashboard)).Methods(http.MethodPut)
router.HandleFunc("/api/v1/dashboards/{uuid}/unlock", am.EditAccess(ah.unlockDashboard)).Methods(http.MethodPut)
// v3
router.HandleFunc("/api/v3/licenses", am.AdminAccess(ah.LicensingAPI.Activate)).Methods(http.MethodPost)
router.HandleFunc("/api/v3/licenses", am.AdminAccess(ah.LicensingAPI.Refresh)).Methods(http.MethodPut)
router.HandleFunc("/api/v3/licenses/active", am.ViewAccess(ah.LicensingAPI.GetActive)).Methods(http.MethodGet)
router.HandleFunc("/api/v3/licenses", am.ViewAccess(ah.listLicensesV3)).Methods(http.MethodGet)
router.HandleFunc("/api/v3/licenses", am.AdminAccess(ah.applyLicenseV3)).Methods(http.MethodPost)
router.HandleFunc("/api/v3/licenses", am.AdminAccess(ah.refreshLicensesV3)).Methods(http.MethodPut)
router.HandleFunc("/api/v3/licenses/active", am.ViewAccess(ah.getActiveLicenseV3)).Methods(http.MethodGet)
// v4
router.HandleFunc("/api/v4/query_range", am.ViewAccess(ah.queryRangeV4)).Methods(http.MethodPost)
// v5
router.HandleFunc("/api/v5/query_range", am.ViewAccess(ah.queryRangeV5)).Methods(http.MethodPost)
router.HandleFunc("/api/v5/substitute_vars", am.ViewAccess(ah.QuerierAPI.ReplaceVariables)).Methods(http.MethodPost)
// Gateway
router.PathPrefix(gateway.RoutePrefix).HandlerFunc(am.EditAccess(ah.ServeGatewayHTTP))
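
The "add ee override methods first" note above works because gorilla/mux matches routes in registration order: registering the EE handler for a path before the base handler makes the EE version win. A minimal, self-contained illustration (handlers and output are hypothetical):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	// EE override registered first...
	r.HandleFunc("/api/v1/version", func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprint(w, "ee")
	}).Methods(http.MethodGet)
	// ...base handler for the same path registered second.
	r.HandleFunc("/api/v1/version", func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprint(w, "base")
	}).Methods(http.MethodGet)

	req := httptest.NewRequest(http.MethodGet, "/api/v1/version", nil)
	rec := httptest.NewRecorder()
	r.ServeHTTP(rec, req)
	fmt.Println(rec.Body.String()) // prints "ee": the first registration wins
}
```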

View File

@@ -0,0 +1,341 @@
package api
import (
"context"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"github.com/gorilla/mux"
"go.uber.org/zap"
"github.com/SigNoz/signoz/ee/query-service/constants"
"github.com/SigNoz/signoz/ee/query-service/model"
baseauth "github.com/SigNoz/signoz/pkg/query-service/auth"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
)
func parseRequest(r *http.Request, req interface{}) error {
defer r.Body.Close()
requestBody, err := io.ReadAll(r.Body)
if err != nil {
return err
}
err = json.Unmarshal(requestBody, &req)
return err
}
// loginUser overrides base handler and considers SSO case.
func (ah *APIHandler) loginUser(w http.ResponseWriter, r *http.Request) {
req := basemodel.LoginRequest{}
err := parseRequest(r, &req)
if err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
ctx := context.Background()
if req.Email != "" && ah.CheckFeature(model.SSO) {
var apierr basemodel.BaseApiError
_, apierr = ah.AppDao().CanUsePassword(ctx, req.Email)
if apierr != nil && !apierr.IsNil() {
RespondError(w, apierr, nil)
return
}
}
// if all looks good, call auth
resp, err := baseauth.Login(ctx, &req, ah.opts.JWT)
if ah.HandleError(w, err, http.StatusUnauthorized) {
return
}
ah.WriteJSON(w, r, resp)
}
// registerUser registers a user and responds with a precheck
// so the front-end can decide the login method
func (ah *APIHandler) registerUser(w http.ResponseWriter, r *http.Request) {
if !ah.CheckFeature(model.SSO) {
ah.APIHandler.Register(w, r)
return
}
ctx := context.Background()
var req *baseauth.RegisterRequest
defer r.Body.Close()
requestBody, err := io.ReadAll(r.Body)
if err != nil {
zap.L().Error("received no input in api", zap.Error(err))
RespondError(w, model.BadRequest(err), nil)
return
}
err = json.Unmarshal(requestBody, &req)
if err != nil {
zap.L().Error("received invalid user registration request", zap.Error(err))
RespondError(w, model.BadRequest(fmt.Errorf("failed to register user")), nil)
return
}
// get invite object
invite, err := baseauth.ValidateInvite(ctx, req)
if err != nil {
zap.L().Error("failed to validate invite token", zap.Error(err))
RespondError(w, model.BadRequest(err), nil)
return
}
if invite == nil {
zap.L().Error("failed to validate invite token: it is either empty or invalid", zap.Error(err))
RespondError(w, model.BadRequest(basemodel.ErrSignupFailed{}), nil)
return
}
// get auth domain from email domain
domain, apierr := ah.AppDao().GetDomainByEmail(ctx, invite.Email)
if apierr != nil {
zap.L().Error("failed to get domain from email", zap.Error(apierr))
RespondError(w, model.InternalError(basemodel.ErrSignupFailed{}), nil)
return
}
precheckResp := &basemodel.PrecheckResponse{
SSO: false,
IsUser: false,
}
if domain != nil && domain.SsoEnabled {
// sso is enabled, create user and respond precheck data
user, apierr := baseauth.RegisterInvitedUser(ctx, req, true)
if apierr != nil {
RespondError(w, apierr, nil)
return
}
var precheckError basemodel.BaseApiError
precheckResp, precheckError = ah.AppDao().PrecheckLogin(ctx, user.Email, req.SourceUrl)
if precheckError != nil {
RespondError(w, precheckError, precheckResp)
return
}
} else {
// no-sso, validate password
if err := baseauth.ValidatePassword(req.Password); err != nil {
RespondError(w, model.InternalError(fmt.Errorf("password is not in a valid format")), nil)
return
}
_, registerError := baseauth.Register(ctx, req, ah.Signoz.Alertmanager, ah.Signoz.Modules.Organization)
if !registerError.IsNil() {
RespondError(w, registerError, nil)
return
}
precheckResp.IsUser = true
}
ah.Respond(w, precheckResp)
}
// getInvite returns the invite object details for the given invite token. We do not need to
// protect this API because invite token itself is meant to be private.
func (ah *APIHandler) getInvite(w http.ResponseWriter, r *http.Request) {
token := mux.Vars(r)["token"]
sourceUrl := r.URL.Query().Get("ref")
inviteObject, err := baseauth.GetInvite(r.Context(), token, ah.Signoz.Modules.Organization)
if err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
resp := model.GettableInvitation{
InvitationResponseObject: inviteObject,
}
precheck, apierr := ah.AppDao().PrecheckLogin(r.Context(), inviteObject.Email, sourceUrl)
resp.Precheck = precheck
if apierr != nil {
RespondError(w, apierr, resp)
return
}
ah.WriteJSON(w, r, resp)
}
// PrecheckLogin enables browser login page to display appropriate
// login methods
func (ah *APIHandler) precheckLogin(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
email := r.URL.Query().Get("email")
sourceUrl := r.URL.Query().Get("ref")
resp, apierr := ah.AppDao().PrecheckLogin(ctx, email, sourceUrl)
if apierr != nil {
RespondError(w, apierr, resp)
return
}
ah.Respond(w, resp)
}
func handleSsoError(w http.ResponseWriter, r *http.Request, redirectURL string) {
ssoError := []byte("Login failed. Please contact your system administrator")
dst := make([]byte, base64.StdEncoding.EncodedLen(len(ssoError)))
base64.StdEncoding.Encode(dst, ssoError)
http.Redirect(w, r, fmt.Sprintf("%s?ssoerror=%s", redirectURL, string(dst)), http.StatusSeeOther)
}
// receiveGoogleAuth completes google OAuth response and forwards a request
// to front-end to sign user in
func (ah *APIHandler) receiveGoogleAuth(w http.ResponseWriter, r *http.Request) {
redirectUri := constants.GetDefaultSiteURL()
ctx := context.Background()
if !ah.CheckFeature(model.SSO) {
zap.L().Error("[receiveGoogleAuth] sso requested but feature unavailable in org domain")
http.Redirect(w, r, fmt.Sprintf("%s?ssoerror=%s", redirectUri, "feature unavailable, please upgrade your billing plan to access this feature"), http.StatusMovedPermanently)
return
}
q := r.URL.Query()
if errType := q.Get("error"); errType != "" {
zap.L().Error("[receiveGoogleAuth] failed to login with google auth", zap.String("error", errType), zap.String("error_description", q.Get("error_description")))
http.Redirect(w, r, fmt.Sprintf("%s?ssoerror=%s", redirectUri, "failed to login through SSO "), http.StatusMovedPermanently)
return
}
relayState := q.Get("state")
zap.L().Debug("[receiveGoogleAuth] relay state received", zap.String("state", relayState))
parsedState, err := url.Parse(relayState)
if err != nil || relayState == "" {
zap.L().Error("[receiveGoogleAuth] failed to process response - invalid response from IDP", zap.Error(err), zap.Any("request", r))
handleSsoError(w, r, redirectUri)
return
}
// upgrade redirect url from the relay state for better accuracy
redirectUri = fmt.Sprintf("%s://%s%s", parsedState.Scheme, parsedState.Host, "/login")
// fetch domain by parsing relay state.
domain, err := ah.AppDao().GetDomainFromSsoResponse(ctx, parsedState)
if err != nil {
handleSsoError(w, r, redirectUri)
return
}
// now that we have domain, use domain to fetch sso settings.
// prepare google callback handler using parsedState -
// which contains redirect URL (front-end endpoint)
callbackHandler, err := domain.PrepareGoogleOAuthProvider(parsedState)
if err != nil {
zap.L().Error("[receiveGoogleAuth] failed to prepare google oauth provider", zap.String("domain", domain.String()), zap.Error(err))
handleSsoError(w, r, redirectUri)
return
}
identity, err := callbackHandler.HandleCallback(r)
if err != nil {
zap.L().Error("[receiveGoogleAuth] failed to process HandleCallback ", zap.String("domain", domain.String()), zap.Error(err))
handleSsoError(w, r, redirectUri)
return
}
nextPage, err := ah.AppDao().PrepareSsoRedirect(ctx, redirectUri, identity.Email, ah.opts.JWT)
if err != nil {
zap.L().Error("[receiveGoogleAuth] failed to generate redirect URI after successful login ", zap.String("domain", domain.String()), zap.Error(err))
handleSsoError(w, r, redirectUri)
return
}
http.Redirect(w, r, nextPage, http.StatusSeeOther)
}
// receiveSAML completes a SAML request and gets user logged in
func (ah *APIHandler) receiveSAML(w http.ResponseWriter, r *http.Request) {
// this is the source url that initiated the login request
redirectUri := constants.GetDefaultSiteURL()
ctx := context.Background()
if !ah.CheckFeature(model.SSO) {
zap.L().Error("[receiveSAML] sso requested but feature unavailable in org domain")
http.Redirect(w, r, fmt.Sprintf("%s?ssoerror=%s", redirectUri, "feature unavailable, please upgrade your billing plan to access this feature"), http.StatusMovedPermanently)
return
}
err := r.ParseForm()
if err != nil {
zap.L().Error("[receiveSAML] failed to process response - invalid response from IDP", zap.Error(err), zap.Any("request", r))
handleSsoError(w, r, redirectUri)
return
}
// the relay state is sent when a login request is submitted to
// Idp.
relayState := r.FormValue("RelayState")
zap.L().Debug("[receiveML] relay state", zap.String("relayState", relayState))
parsedState, err := url.Parse(relayState)
if err != nil || relayState == "" {
zap.L().Error("[receiveSAML] failed to process response - invalid response from IDP", zap.Error(err), zap.Any("request", r))
handleSsoError(w, r, redirectUri)
return
}
// upgrade redirect url from the relay state for better accuracy
redirectUri = fmt.Sprintf("%s://%s%s", parsedState.Scheme, parsedState.Host, "/login")
// fetch domain by parsing relay state.
domain, err := ah.AppDao().GetDomainFromSsoResponse(ctx, parsedState)
if err != nil {
handleSsoError(w, r, redirectUri)
return
}
sp, err := domain.PrepareSamlRequest(parsedState)
if err != nil {
zap.L().Error("[receiveSAML] failed to prepare saml request for domain", zap.String("domain", domain.String()), zap.Error(err))
handleSsoError(w, r, redirectUri)
return
}
assertionInfo, err := sp.RetrieveAssertionInfo(r.FormValue("SAMLResponse"))
if err != nil {
zap.L().Error("[receiveSAML] failed to retrieve assertion info from saml response", zap.String("domain", domain.String()), zap.Error(err))
handleSsoError(w, r, redirectUri)
return
}
if assertionInfo.WarningInfo.InvalidTime {
zap.L().Error("[receiveSAML] expired saml response", zap.String("domain", domain.String()), zap.Error(err))
handleSsoError(w, r, redirectUri)
return
}
email := assertionInfo.NameID
if email == "" {
zap.L().Error("[receiveSAML] invalid email in the SSO response", zap.String("domain", domain.String()))
handleSsoError(w, r, redirectUri)
return
}
nextPage, err := ah.AppDao().PrepareSsoRedirect(ctx, redirectUri, email, ah.opts.JWT)
if err != nil {
zap.L().Error("[receiveSAML] failed to generate redirect URI after successful login ", zap.String("domain", domain.String()), zap.Error(err))
handleSsoError(w, r, redirectUri)
return
}
http.Redirect(w, r, nextPage, http.StatusSeeOther)
}
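
`handleSsoError` above base64-encodes a human-readable message into the `?ssoerror=` query parameter before redirecting. A round-trip sketch of that encoding (the decoding side is an assumption about what the login page would do):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Same encoding as handleSsoError in the diff above.
	msg := []byte("Login failed. Please contact your system administrator")
	dst := make([]byte, base64.StdEncoding.EncodedLen(len(msg)))
	base64.StdEncoding.Encode(dst, msg)

	// The login page would decode the parameter before display.
	decoded, err := base64.StdEncoding.DecodeString(string(dst))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(decoded))
}
```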

View File

@@ -10,13 +10,14 @@ import (
"strings"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/ee/query-service/constants"
eeTypes "github.com/SigNoz/signoz/ee/types"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/modules/user"
"github.com/SigNoz/signoz/pkg/query-service/auth"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/google/uuid"
"github.com/gorilla/mux"
"go.uber.org/zap"
)
@@ -35,12 +36,6 @@ func (ah *APIHandler) CloudIntegrationsGenerateConnectionParams(w http.ResponseW
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(w, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "orgId is invalid"))
return
}
cloudProvider := mux.Vars(r)["cloudProvider"]
if cloudProvider != "aws" {
RespondError(w, basemodel.BadRequest(fmt.Errorf(
@@ -61,9 +56,11 @@ func (ah *APIHandler) CloudIntegrationsGenerateConnectionParams(w http.ResponseW
SigNozAPIKey: apiKey,
}
license, err := ah.Signoz.Licensing.GetActive(r.Context(), orgID)
if err != nil {
render.Error(w, err)
license, apiErr := ah.LM().GetRepo().GetActiveLicense(r.Context())
if apiErr != nil {
RespondError(w, basemodel.WrapApiError(
apiErr, "couldn't look for active license",
), nil)
return
}
@@ -76,7 +73,7 @@ func (ah *APIHandler) CloudIntegrationsGenerateConnectionParams(w http.ResponseW
return
}
signozApiUrl, apiErr := ah.getIngestionUrlAndSigNozAPIUrl(r.Context(), license.Key)
ingestionUrl, signozApiUrl, apiErr := getIngestionUrlAndSigNozAPIUrl(r.Context(), license.Key)
if apiErr != nil {
RespondError(w, basemodel.WrapApiError(
apiErr, "couldn't deduce ingestion url and signoz api url",
@@ -84,7 +81,7 @@ func (ah *APIHandler) CloudIntegrationsGenerateConnectionParams(w http.ResponseW
return
}
result.IngestionUrl = ah.opts.GlobalConfig.IngestionURL.String()
result.IngestionUrl = ingestionUrl
result.SigNozAPIUrl = signozApiUrl
gatewayUrl := ah.opts.GatewayUrl
@@ -119,14 +116,7 @@ func (ah *APIHandler) getOrCreateCloudIntegrationPAT(ctx context.Context, orgId
return "", apiErr
}
orgIdUUID, err := valuer.NewUUID(orgId)
if err != nil {
return "", basemodel.InternalError(fmt.Errorf(
"couldn't parse orgId: %w", err,
))
}
allPats, err := ah.Signoz.Modules.User.ListAPIKeys(ctx, orgIdUUID)
allPats, err := ah.AppDao().ListPATs(ctx, orgId)
if err != nil {
return "", basemodel.InternalError(fmt.Errorf(
"couldn't list PATs: %w", err,
@@ -143,90 +133,125 @@ func (ah *APIHandler) getOrCreateCloudIntegrationPAT(ctx context.Context, orgId
zap.String("cloudProvider", cloudProvider),
)
newPAT, err := types.NewStorableAPIKey(
newPAT := eeTypes.NewGettablePAT(
integrationPATName,
authtypes.RoleViewer.String(),
integrationUser.ID,
types.RoleViewer,
0,
)
integrationPAT, err := ah.AppDao().CreatePAT(ctx, orgId, newPAT)
if err != nil {
return "", basemodel.InternalError(fmt.Errorf(
"couldn't create cloud integration PAT: %w", err,
))
}
err = ah.Signoz.Modules.User.CreateAPIKey(ctx, newPAT)
if err != nil {
return "", basemodel.InternalError(fmt.Errorf(
"couldn't create cloud integration PAT: %w", err,
))
}
return newPAT.Token, nil
return integrationPAT.Token, nil
}
func (ah *APIHandler) getOrCreateCloudIntegrationUser(
ctx context.Context, orgId string, cloudProvider string,
) (*types.User, *basemodel.ApiError) {
cloudIntegrationUserName := fmt.Sprintf("%s-integration", cloudProvider)
email := valuer.MustNewEmail(fmt.Sprintf("%s@signoz.io", cloudIntegrationUserName))
cloudIntegrationUser := fmt.Sprintf("%s-integration", cloudProvider)
email := fmt.Sprintf("%s@signoz.io", cloudIntegrationUser)
cloudIntegrationUser, err := types.NewUser(cloudIntegrationUserName, email, types.RoleViewer, valuer.MustNewUUID(orgId))
if err != nil {
return nil, basemodel.InternalError(fmt.Errorf("couldn't create cloud integration user: %w", err))
// TODO(nitya): there should be orgId here
integrationUserResult, apiErr := ah.AppDao().GetUserByEmail(ctx, email)
if apiErr != nil {
return nil, basemodel.WrapApiError(apiErr, "couldn't look for integration user")
}
password := types.MustGenerateFactorPassword(cloudIntegrationUser.ID.StringValue())
cloudIntegrationUser, err = ah.Signoz.Modules.User.GetOrCreateUser(ctx, cloudIntegrationUser, user.WithFactorPassword(password))
if err != nil {
return nil, basemodel.InternalError(fmt.Errorf("couldn't look for integration user: %w", err))
if integrationUserResult != nil {
return &integrationUserResult.User, nil
}
return cloudIntegrationUser, nil
zap.L().Info(
"cloud integration user not found. Attempting to create the user",
zap.String("cloudProvider", cloudProvider),
)
newUser := &types.User{
ID: uuid.New().String(),
Name: cloudIntegrationUser,
Email: email,
TimeAuditable: types.TimeAuditable{
CreatedAt: time.Now(),
},
OrgID: orgId,
}
newUser.Role = authtypes.RoleViewer.String()
passwordHash, err := auth.PasswordHash(uuid.NewString())
if err != nil {
return nil, basemodel.InternalError(fmt.Errorf(
"couldn't hash random password for cloud integration user: %w", err,
))
}
newUser.Password = passwordHash
integrationUser, apiErr := ah.AppDao().CreateUser(ctx, newUser, false)
if apiErr != nil {
return nil, basemodel.WrapApiError(apiErr, "couldn't create cloud integration user")
}
return integrationUser, nil
}
func (ah *APIHandler) getIngestionUrlAndSigNozAPIUrl(ctx context.Context, licenseKey string) (
string, *basemodel.ApiError,
func getIngestionUrlAndSigNozAPIUrl(ctx context.Context, licenseKey string) (
string, string, *basemodel.ApiError,
) {
// TODO: remove this struct from here
url := fmt.Sprintf(
"%s%s",
strings.TrimSuffix(constants.ZeusURL, "/"),
"/v2/deployments/me",
)
type deploymentResponse struct {
Name string `json:"name"`
ClusterInfo struct {
Region struct {
DNS string `json:"dns"`
} `json:"region"`
} `json:"cluster"`
Status string `json:"status"`
Error string `json:"error"`
Data struct {
Name string `json:"name"`
ClusterInfo struct {
Region struct {
DNS string `json:"dns"`
} `json:"region"`
} `json:"cluster"`
} `json:"data"`
}
respBytes, err := ah.Signoz.Zeus.GetDeployment(ctx, licenseKey)
if err != nil {
return "", basemodel.InternalError(fmt.Errorf(
"couldn't query for deployment info: error: %w", err,
resp, apiErr := requestAndParseResponse[deploymentResponse](
ctx, url, map[string]string{"X-Signoz-Cloud-Api-Key": licenseKey}, nil,
)
if apiErr != nil {
return "", "", basemodel.WrapApiError(
apiErr, "couldn't query for deployment info",
)
}
if resp.Status != "success" {
return "", "", basemodel.InternalError(fmt.Errorf(
"couldn't query for deployment info: status: %s, error: %s",
resp.Status, resp.Error,
))
}
resp := new(deploymentResponse)
err = json.Unmarshal(respBytes, resp)
if err != nil {
return "", basemodel.InternalError(fmt.Errorf(
"couldn't unmarshal deployment info response: error: %w", err,
))
}
regionDns := resp.ClusterInfo.Region.DNS
deploymentName := resp.Name
regionDns := resp.Data.ClusterInfo.Region.DNS
deploymentName := resp.Data.Name
if len(regionDns) < 1 || len(deploymentName) < 1 {
// Fail early if actual response structure and expectation here ever diverge
return "", basemodel.InternalError(fmt.Errorf(
return "", "", basemodel.InternalError(fmt.Errorf(
"deployment info response not in expected shape. couldn't determine region dns and deployment name",
))
}
ingestionUrl := fmt.Sprintf("https://ingest.%s", regionDns)
signozApiUrl := fmt.Sprintf("https://%s.%s", deploymentName, regionDns)
return signozApiUrl, nil
return ingestionUrl, signozApiUrl, nil
}
type ingestionKey struct {
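
The URL derivation in `getIngestionUrlAndSigNozAPIUrl` above reduces to two format strings over the deployment name and region DNS returned by the deployments API. A standalone sketch with hypothetical values:

```go
package main

import "fmt"

// deriveUrls mirrors the formatting in the diff above: ingestion goes
// through an "ingest." subdomain of the region, the API through a
// deployment-named subdomain.
func deriveUrls(deploymentName, regionDNS string) (ingestionUrl, apiUrl string) {
	ingestionUrl = fmt.Sprintf("https://ingest.%s", regionDNS)
	apiUrl = fmt.Sprintf("https://%s.%s", deploymentName, regionDNS)
	return ingestionUrl, apiUrl
}

func main() {
	ingest, api := deriveUrls("acme", "us.signoz.cloud") // hypothetical values
	fmt.Println(ingest, api)
}
```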

View File

@@ -0,0 +1,63 @@
package api
import (
"net/http"
"strings"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/query-service/app/dashboards"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/gorilla/mux"
)
func (ah *APIHandler) lockDashboard(w http.ResponseWriter, r *http.Request) {
ah.lockUnlockDashboard(w, r, true)
}
func (ah *APIHandler) unlockDashboard(w http.ResponseWriter, r *http.Request) {
ah.lockUnlockDashboard(w, r, false)
}
func (ah *APIHandler) lockUnlockDashboard(w http.ResponseWriter, r *http.Request, lock bool) {
// Locking can only be done by the owner of the dashboard
// or an admin
// - Fetch the dashboard
// - Check if the user is the owner or an admin
// - If yes, lock/unlock the dashboard
// - If no, return 403
// Get the dashboard UUID from the request
uuid := mux.Vars(r)["uuid"]
if strings.HasPrefix(uuid, "integration") {
render.Error(w, errors.Newf(errors.TypeForbidden, errors.CodeForbidden, "dashboards created by integrations cannot be modified"))
return
}
claims, err := authtypes.ClaimsFromContext(r.Context())
if err != nil {
render.Error(w, errors.Newf(errors.TypeUnauthenticated, errors.CodeUnauthenticated, "unauthenticated"))
return
}
dashboard, err := dashboards.GetDashboard(r.Context(), claims.OrgID, uuid)
if err != nil {
render.Error(w, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "failed to get dashboard"))
return
}
if err := claims.IsAdmin(); err != nil && (dashboard.CreatedBy != claims.Email) {
render.Error(w, errors.Newf(errors.TypeForbidden, errors.CodeForbidden, "You are not authorized to lock/unlock this dashboard"))
return
}
// Lock/Unlock the dashboard
err = dashboards.LockUnlockDashboard(r.Context(), claims.OrgID, uuid, lock)
if err != nil {
render.Error(w, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "failed to lock/unlock dashboard"))
return
}
ah.Respond(w, "Dashboard updated successfully")
}
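
The authorization rule above (any admin, or the dashboard's creator, may toggle the lock) compresses to a single predicate; a sketch assuming the surrounding file's imports and, per the diff, that `claims.IsAdmin()` returns nil for admins:

```go
// canToggleLock reports whether the caller may lock/unlock a dashboard:
// admins always can; otherwise only the creator can.
func canToggleLock(claims authtypes.Claims, createdBy string) bool {
	return claims.IsAdmin() == nil || createdBy == claims.Email
}
```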

View File

@@ -0,0 +1,91 @@
package api
import (
"context"
"encoding/json"
"fmt"
"net/http"
"github.com/SigNoz/signoz/ee/query-service/model"
"github.com/SigNoz/signoz/ee/types"
"github.com/google/uuid"
"github.com/gorilla/mux"
)
func (ah *APIHandler) listDomainsByOrg(w http.ResponseWriter, r *http.Request) {
orgId := mux.Vars(r)["orgId"]
domains, apierr := ah.AppDao().ListDomains(context.Background(), orgId)
if apierr != nil {
RespondError(w, apierr, domains)
return
}
ah.Respond(w, domains)
}
func (ah *APIHandler) postDomain(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
req := types.GettableOrgDomain{}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
if err := req.ValidNew(); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
if apierr := ah.AppDao().CreateDomain(ctx, &req); apierr != nil {
RespondError(w, apierr, nil)
return
}
ah.Respond(w, &req)
}
func (ah *APIHandler) putDomain(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
domainIdStr := mux.Vars(r)["id"]
domainId, err := uuid.Parse(domainIdStr)
if err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
req := types.GettableOrgDomain{StorableOrgDomain: types.StorableOrgDomain{ID: domainId}}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
req.ID = domainId
if err := req.Valid(nil); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
if apierr := ah.AppDao().UpdateDomain(ctx, &req); apierr != nil {
RespondError(w, apierr, nil)
return
}
ah.Respond(w, &req)
}
func (ah *APIHandler) deleteDomain(w http.ResponseWriter, r *http.Request) {
domainIdStr := mux.Vars(r)["id"]
domainId, err := uuid.Parse(domainIdStr)
if err != nil {
RespondError(w, model.BadRequest(fmt.Errorf("invalid domain id")), nil)
return
}
apierr := ah.AppDao().DeleteDomain(context.Background(), domainId)
if apierr != nil {
RespondError(w, apierr, nil)
return
}
ah.Respond(w, nil)
}

View File

@@ -9,26 +9,13 @@ import (
"time"
"github.com/SigNoz/signoz/ee/query-service/constants"
"github.com/SigNoz/signoz/pkg/flagger"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/types/featuretypes"
"github.com/SigNoz/signoz/pkg/types/licensetypes"
"github.com/SigNoz/signoz/pkg/valuer"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
"go.uber.org/zap"
)
func (ah *APIHandler) getFeatureFlags(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(w, err)
return
}
orgID := valuer.MustNewUUID(claims.OrgID)
featureSet, err := ah.Signoz.Licensing.GetFeatureFlags(r.Context(), orgID)
featureSet, err := ah.FF().GetFeatureFlags()
if err != nil {
ah.HandleError(w, err, http.StatusInternalServerError)
return
@@ -36,7 +23,7 @@ func (ah *APIHandler) getFeatureFlags(w http.ResponseWriter, r *http.Request) {
if constants.FetchFeatures == "true" {
zap.L().Debug("fetching license")
license, err := ah.Signoz.Licensing.GetActive(ctx, orgID)
license, err := ah.LM().GetRepo().GetActiveLicense(ctx)
if err != nil {
zap.L().Error("failed to fetch license", zap.Error(err))
} else if license == nil {
@@ -56,20 +43,10 @@ func (ah *APIHandler) getFeatureFlags(w http.ResponseWriter, r *http.Request) {
}
}
evalCtx := featuretypes.NewFlaggerEvaluationContext(orgID)
useSpanMetrics := ah.Signoz.Flagger.BooleanOrEmpty(ctx, flagger.FeatureUseSpanMetrics, evalCtx)
featureSet = append(featureSet, &licensetypes.Feature{
Name: valuer.NewString(flagger.FeatureUseSpanMetrics.String()),
Active: useSpanMetrics,
Usage: 0,
UsageLimit: -1,
Route: "",
})
if constants.IsDotMetricsEnabled {
for idx, feature := range featureSet {
if feature.Name == licensetypes.DotMetricsEnabled {
if ah.opts.PreferSpanMetrics {
for idx := range featureSet {
feature := &featureSet[idx]
if feature.Name == basemodel.UseSpanMetrics {
featureSet[idx].Active = true
}
}
@@ -80,7 +57,7 @@ func (ah *APIHandler) getFeatureFlags(w http.ResponseWriter, r *http.Request) {
// fetchZeusFeatures makes an HTTP GET request to the /zeusFeatures endpoint
// and returns the FeatureSet.
func fetchZeusFeatures(url, licenseKey string) ([]*licensetypes.Feature, error) {
func fetchZeusFeatures(url, licenseKey string) (basemodel.FeatureSet, error) {
// Check if the URL is empty
if url == "" {
return nil, fmt.Errorf("url is empty")
@@ -139,28 +116,28 @@ func fetchZeusFeatures(url, licenseKey string) ([]*licensetypes.Feature, error)
}
type ZeusFeaturesResponse struct {
Status string `json:"status"`
Data []*licensetypes.Feature `json:"data"`
Status string `json:"status"`
Data basemodel.FeatureSet `json:"data"`
}
// MergeFeatureSets merges two FeatureSet arrays with precedence to zeusFeatures.
func MergeFeatureSets(zeusFeatures, internalFeatures []*licensetypes.Feature) []*licensetypes.Feature {
func MergeFeatureSets(zeusFeatures, internalFeatures basemodel.FeatureSet) basemodel.FeatureSet {
// Create a map to store the merged features
featureMap := make(map[string]*licensetypes.Feature)
featureMap := make(map[string]basemodel.Feature)
// Add all features from the otherFeatures set to the map
for _, feature := range internalFeatures {
featureMap[feature.Name.StringValue()] = feature
featureMap[feature.Name] = feature
}
// Add all features from the zeusFeatures set to the map
// If a feature already exists (i.e., same name), the zeusFeature will overwrite it
for _, feature := range zeusFeatures {
featureMap[feature.Name.StringValue()] = feature
featureMap[feature.Name] = feature
}
// Convert the map back to a FeatureSet slice
var mergedFeatures []*licensetypes.Feature
var mergedFeatures basemodel.FeatureSet
for _, feature := range featureMap {
mergedFeatures = append(mergedFeatures, feature)
}

View File

@@ -3,79 +3,78 @@ package api
import (
"testing"
"github.com/SigNoz/signoz/pkg/types/licensetypes"
"github.com/SigNoz/signoz/pkg/valuer"
basemodel "github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/stretchr/testify/assert"
)
func TestMergeFeatureSets(t *testing.T) {
tests := []struct {
name string
zeusFeatures []*licensetypes.Feature
internalFeatures []*licensetypes.Feature
expected []*licensetypes.Feature
zeusFeatures basemodel.FeatureSet
internalFeatures basemodel.FeatureSet
expected basemodel.FeatureSet
}{
{
name: "empty zeusFeatures and internalFeatures",
zeusFeatures: []*licensetypes.Feature{},
internalFeatures: []*licensetypes.Feature{},
expected: []*licensetypes.Feature{},
zeusFeatures: basemodel.FeatureSet{},
internalFeatures: basemodel.FeatureSet{},
expected: basemodel.FeatureSet{},
},
{
name: "non-empty zeusFeatures and empty internalFeatures",
zeusFeatures: []*licensetypes.Feature{
{Name: valuer.NewString("Feature1"), Active: true},
{Name: valuer.NewString("Feature2"), Active: false},
zeusFeatures: basemodel.FeatureSet{
{Name: "Feature1", Active: true},
{Name: "Feature2", Active: false},
},
internalFeatures: []*licensetypes.Feature{},
expected: []*licensetypes.Feature{
{Name: valuer.NewString("Feature1"), Active: true},
{Name: valuer.NewString("Feature2"), Active: false},
internalFeatures: basemodel.FeatureSet{},
expected: basemodel.FeatureSet{
{Name: "Feature1", Active: true},
{Name: "Feature2", Active: false},
},
},
{
name: "empty zeusFeatures and non-empty internalFeatures",
zeusFeatures: []*licensetypes.Feature{},
internalFeatures: []*licensetypes.Feature{
{Name: valuer.NewString("Feature1"), Active: true},
{Name: valuer.NewString("Feature2"), Active: false},
zeusFeatures: basemodel.FeatureSet{},
internalFeatures: basemodel.FeatureSet{
{Name: "Feature1", Active: true},
{Name: "Feature2", Active: false},
},
expected: []*licensetypes.Feature{
{Name: valuer.NewString("Feature1"), Active: true},
{Name: valuer.NewString("Feature2"), Active: false},
expected: basemodel.FeatureSet{
{Name: "Feature1", Active: true},
{Name: "Feature2", Active: false},
},
},
{
name: "non-empty zeusFeatures and non-empty internalFeatures with no conflicts",
zeusFeatures: []*licensetypes.Feature{
{Name: valuer.NewString("Feature1"), Active: true},
{Name: valuer.NewString("Feature3"), Active: false},
zeusFeatures: basemodel.FeatureSet{
{Name: "Feature1", Active: true},
{Name: "Feature3", Active: false},
},
internalFeatures: []*licensetypes.Feature{
{Name: valuer.NewString("Feature2"), Active: true},
{Name: valuer.NewString("Feature4"), Active: false},
internalFeatures: basemodel.FeatureSet{
{Name: "Feature2", Active: true},
{Name: "Feature4", Active: false},
},
expected: []*licensetypes.Feature{
{Name: valuer.NewString("Feature1"), Active: true},
{Name: valuer.NewString("Feature2"), Active: true},
{Name: valuer.NewString("Feature3"), Active: false},
{Name: valuer.NewString("Feature4"), Active: false},
expected: basemodel.FeatureSet{
{Name: "Feature1", Active: true},
{Name: "Feature2", Active: true},
{Name: "Feature3", Active: false},
{Name: "Feature4", Active: false},
},
},
{
name: "non-empty zeusFeatures and non-empty internalFeatures with conflicts",
zeusFeatures: []*licensetypes.Feature{
{Name: valuer.NewString("Feature1"), Active: true},
{Name: valuer.NewString("Feature2"), Active: false},
zeusFeatures: basemodel.FeatureSet{
{Name: "Feature1", Active: true},
{Name: "Feature2", Active: false},
},
internalFeatures: []*licensetypes.Feature{
{Name: valuer.NewString("Feature1"), Active: false},
{Name: valuer.NewString("Feature3"), Active: true},
internalFeatures: basemodel.FeatureSet{
{Name: "Feature1", Active: false},
{Name: "Feature3", Active: true},
},
expected: []*licensetypes.Feature{
{Name: valuer.NewString("Feature1"), Active: true},
{Name: valuer.NewString("Feature2"), Active: false},
{Name: valuer.NewString("Feature3"), Active: true},
expected: basemodel.FeatureSet{
{Name: "Feature1", Active: true},
{Name: "Feature2", Active: false},
{Name: "Feature3", Active: true},
},
},
}

View File

@@ -5,26 +5,10 @@ import (
"strings"
"github.com/SigNoz/signoz/ee/query-service/integrations/gateway"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/http/render"
"github.com/SigNoz/signoz/pkg/types/authtypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
func (ah *APIHandler) ServeGatewayHTTP(rw http.ResponseWriter, req *http.Request) {
ctx := req.Context()
claims, err := authtypes.ClaimsFromContext(ctx)
if err != nil {
render.Error(rw, err)
return
}
orgID, err := valuer.NewUUID(claims.OrgID)
if err != nil {
render.Error(rw, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "orgId is invalid"))
return
}
validPath := false
for _, allowedPrefix := range gateway.AllowedPrefix {
if strings.HasPrefix(req.URL.Path, gateway.RoutePrefix+allowedPrefix) {
@@ -38,9 +22,9 @@ func (ah *APIHandler) ServeGatewayHTTP(rw http.ResponseWriter, req *http.Request
return
}
license, err := ah.Signoz.Licensing.GetActive(ctx, orgID)
license, err := ah.LM().GetRepo().GetActiveLicense(ctx)
if err != nil {
render.Error(rw, err)
RespondError(rw, err, nil)
return
}

View File

@@ -6,7 +6,9 @@ import (
"net/http"
"github.com/SigNoz/signoz/ee/query-service/constants"
"github.com/SigNoz/signoz/ee/query-service/integrations/signozio"
"github.com/SigNoz/signoz/ee/query-service/model"
"github.com/SigNoz/signoz/pkg/http/render"
)
type DayWiseBreakdown struct {
@@ -45,6 +47,10 @@ type details struct {
BillTotal float64 `json:"billTotal"`
}
type Redirect struct {
RedirectURL string `json:"redirectURL"`
}
type billingDetails struct {
Status string `json:"status"`
Data struct {
@@ -56,6 +62,93 @@ type billingDetails struct {
} `json:"data"`
}
type ApplyLicenseRequest struct {
LicenseKey string `json:"key"`
}
func (ah *APIHandler) listLicensesV3(w http.ResponseWriter, r *http.Request) {
ah.listLicensesV2(w, r)
}
func (ah *APIHandler) getActiveLicenseV3(w http.ResponseWriter, r *http.Request) {
activeLicense, err := ah.LM().GetRepo().GetActiveLicenseV3(r.Context())
if err != nil {
RespondError(w, &model.ApiError{Typ: model.ErrorInternal, Err: err}, nil)
return
}
// return 404 not found if there is no active license
if activeLicense == nil {
RespondError(w, &model.ApiError{Typ: model.ErrorNotFound, Err: fmt.Errorf("no active license found")}, nil)
return
}
// TODO deprecate this when we move away from key for stripe
activeLicense.Data["key"] = activeLicense.Key
render.Success(w, http.StatusOK, activeLicense.Data)
}
// this function is called by zeus when inserting licenses in the query-service
func (ah *APIHandler) applyLicenseV3(w http.ResponseWriter, r *http.Request) {
var licenseKey ApplyLicenseRequest
if err := json.NewDecoder(r.Body).Decode(&licenseKey); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
if licenseKey.LicenseKey == "" {
RespondError(w, model.BadRequest(fmt.Errorf("license key is required")), nil)
return
}
_, apiError := ah.LM().ActivateV3(r.Context(), licenseKey.LicenseKey)
if apiError != nil {
RespondError(w, apiError, nil)
return
}
render.Success(w, http.StatusAccepted, nil)
}
func (ah *APIHandler) refreshLicensesV3(w http.ResponseWriter, r *http.Request) {
apiError := ah.LM().RefreshLicense(r.Context())
if apiError != nil {
RespondError(w, apiError, nil)
return
}
render.Success(w, http.StatusNoContent, nil)
}
func getCheckoutPortalResponse(redirectURL string) *Redirect {
return &Redirect{RedirectURL: redirectURL}
}
func (ah *APIHandler) checkout(w http.ResponseWriter, r *http.Request) {
checkoutRequest := &model.CheckoutRequest{}
if err := json.NewDecoder(r.Body).Decode(checkoutRequest); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
license := ah.LM().GetActiveLicense()
if license == nil {
RespondError(w, model.BadRequestStr("cannot proceed with checkout without license key"), nil)
return
}
redirectUrl, err := signozio.CheckoutSession(r.Context(), checkoutRequest, license.Key)
if err != nil {
RespondError(w, err, nil)
return
}
ah.Respond(w, getCheckoutPortalResponse(redirectUrl))
}
func (ah *APIHandler) getBilling(w http.ResponseWriter, r *http.Request) {
licenseKey := r.URL.Query().Get("licenseKey")
@@ -89,3 +182,72 @@ func (ah *APIHandler) getBilling(w http.ResponseWriter, r *http.Request) {
// TODO(srikanthccv):Fetch the current day usage and add it to the response
ah.Respond(w, billingResponse.Data)
}
func convertLicenseV3ToLicenseV2(licenses []*model.LicenseV3) []model.License {
licensesV2 := []model.License{}
for _, l := range licenses {
planKeyFromPlanName, ok := model.MapOldPlanKeyToNewPlanName[l.PlanName]
if !ok {
planKeyFromPlanName = model.Basic
}
licenseV2 := model.License{
Key: l.Key,
ActivationId: "",
PlanDetails: "",
FeatureSet: l.Features,
ValidationMessage: "",
IsCurrent: l.IsCurrent,
LicensePlan: model.LicensePlan{
PlanKey: planKeyFromPlanName,
ValidFrom: l.ValidFrom,
ValidUntil: l.ValidUntil,
Status: l.Status},
}
licensesV2 = append(licensesV2, licenseV2)
}
return licensesV2
}
func (ah *APIHandler) listLicensesV2(w http.ResponseWriter, r *http.Request) {
licensesV3, apierr := ah.LM().GetLicensesV3(r.Context())
if apierr != nil {
RespondError(w, apierr, nil)
return
}
licenses := convertLicenseV3ToLicenseV2(licensesV3)
resp := model.Licenses{
TrialStart: -1,
TrialEnd: -1,
OnTrial: false,
WorkSpaceBlock: false,
TrialConvertedToSubscription: false,
GracePeriodEnd: -1,
Licenses: licenses,
}
ah.Respond(w, resp)
}
func (ah *APIHandler) portalSession(w http.ResponseWriter, r *http.Request) {
portalRequest := &model.PortalRequest{}
if err := json.NewDecoder(r.Body).Decode(portalRequest); err != nil {
RespondError(w, model.BadRequest(err), nil)
return
}
license := ah.LM().GetActiveLicense()
if license == nil {
RespondError(w, model.BadRequestStr("cannot request the portal session without license key"), nil)
return
}
redirectUrl, err := signozio.PortalSession(r.Context(), portalRequest, license.Key)
if err != nil {
RespondError(w, err, nil)
return
}
ah.Respond(w, getCheckoutPortalResponse(redirectUrl))
}
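
Both `checkout` and `portalSession` above answer with the `Redirect` payload defined earlier in this file; a small round-trip sketch of the wire shape (the URL value is hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Redirect mirrors the struct from the diff above.
type Redirect struct {
	RedirectURL string `json:"redirectURL"`
}

func main() {
	body, err := json.Marshal(Redirect{RedirectURL: "https://billing.example/session"}) // hypothetical URL
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // {"redirectURL":"https://billing.example/session"}
}
```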

Some files were not shown because too many files have changed in this diff.