Mirror of https://github.com/SigNoz/signoz.git (synced 2026-02-17 14:42:12 +00:00)

Comparing 11 commits on `ns/claude-...alertmanag`: d05e279bcd, f1c5d873f7, aec239cc7c, 59e26652dc, e02afc5e97, 3eac8ac30b, 382c4f58e1, 73ea632a3f, 00fa8810c0, 6cee330d44, 871c6e642c
@@ -1,136 +0,0 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

SigNoz is an open-source observability platform (APM, logs, metrics, traces) built on OpenTelemetry and ClickHouse. It provides a unified solution for monitoring applications with features including distributed tracing, log management, metrics dashboards, and alerting.

## Build and Development Commands

### Development Environment Setup

```bash
make devenv-up                     # Start ClickHouse and OTel Collector for local dev
make devenv-clickhouse             # Start only ClickHouse
make devenv-signoz-otel-collector  # Start only OTel Collector
make devenv-clickhouse-clean       # Clean ClickHouse data
```

### Backend (Go)

```bash
make go-run-community            # Run community backend server
make go-run-enterprise           # Run enterprise backend server
make go-test                     # Run all Go unit tests
go test -race ./pkg/...          # Run tests for specific package
go test -race ./pkg/querier/...  # Example: run querier tests
```

### Integration Tests (Python)

```bash
cd tests/integration
uv sync                # Install dependencies
make py-test-setup     # Start test environment (keep running with --reuse)
make py-test           # Run all integration tests
make py-test-teardown  # Stop test environment

# Run specific test
uv run pytest --basetemp=./tmp/ -vv --reuse src/<suite>/<file>.py::test_name
```

### Code Quality

```bash
# Go linting (golangci-lint)
golangci-lint run

# Python formatting/linting
make py-fmt   # Format with black
make py-lint  # Run isort, autoflake, pylint
```

### OpenAPI Generation

```bash
go run cmd/enterprise/*.go generate openapi
```

## Architecture Overview

### Backend Structure

The Go backend follows a **provider pattern** for dependency injection:

- **`pkg/signoz/`** - IoC container that wires all providers together
- **`pkg/modules/`** - Business logic modules (user, organization, dashboard, etc.)
- **`pkg/<provider>/`** - Provider implementations following consistent structure:
  - `<name>.go` - Interface definition
  - `config.go` - Configuration (implements `factory.Config`)
  - `<implname><name>/provider.go` - Implementation
  - `<name>test/` - Mock implementations for testing

### Key Packages

- **`pkg/querier/`** - Query engine for telemetry data (logs, traces, metrics)
- **`pkg/telemetrystore/`** - ClickHouse telemetry storage interface
- **`pkg/sqlstore/`** - Relational database (SQLite/PostgreSQL) for metadata
- **`pkg/apiserver/`** - HTTP API server with OpenAPI integration
- **`pkg/alertmanager/`** - Alert management
- **`pkg/authn/`, `pkg/authz/`** - Authentication and authorization
- **`pkg/flagger/`** - Feature flags (OpenFeature-based)
- **`pkg/errors/`** - Structured error handling

### Enterprise vs Community

- **`cmd/community/`** - Community edition entry point
- **`cmd/enterprise/`** - Enterprise edition entry point
- **`ee/`** - Enterprise-only features

## Code Conventions

### Error Handling

Use the custom `pkg/errors` package instead of the standard library:

```go
errors.New(typ, code, message)            // Instead of errors.New()
errors.Newf(typ, code, message, args...)  // Instead of fmt.Errorf()
errors.Wrapf(err, typ, code, msg)         // Wrap with context
```

Define domain-specific error codes:

```go
var CodeThingNotFound = errors.MustNewCode("thing_not_found")
```
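
A minimal sketch of how the two combine in a module method; the `errors.TypeNotFound` constant and the store call are illustrative assumptions, not confirmed API:

```go
// Hypothetical lookup following the pkg/errors conventions above.
func (m *Module) GetThing(ctx context.Context, id string) (*Thing, error) {
	thing, err := m.store.Get(ctx, id)
	if err != nil {
		// Wrap the storage error with a type and domain-specific code.
		return nil, errors.Wrapf(err, errors.TypeNotFound, CodeThingNotFound,
			"thing with id %s does not exist", id)
	}
	return thing, nil
}
```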

### HTTP Handlers

Handlers are thin adapters in modules that:

1. Extract auth context from request
2. Decode request body using `binding` package
3. Call module functions
4. Return responses using `render` package

Register routes in `pkg/apiserver/signozapiserver/` with `handler.New()` and `OpenAPIDef`.
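
A rough sketch of the handler shape these four steps imply; the exact `binding` and `render` signatures (and the `authtypes` helper) are assumptions for illustration, not the real API:

```go
// Hypothetical thin handler following the four steps above.
func (h *handler) Create(rw http.ResponseWriter, r *http.Request) {
	claims, err := authtypes.ClaimsFromContext(r.Context()) // 1. auth context (assumed helper)
	if err != nil {
		render.Error(rw, err)
		return
	}

	var req PostableThing // 2. decode the body ("binding" signature assumed)
	if err := binding.JSON.BindBody(r.Body, &req); err != nil {
		render.Error(rw, err)
		return
	}

	thing, err := h.module.Create(r.Context(), claims.OrgID, &req) // 3. delegate to the module
	if err != nil {
		render.Error(rw, err)
		return
	}

	render.Success(rw, http.StatusCreated, thing) // 4. render the response
}
```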

### SQL/Database

- Use Bun ORM via `sqlstore.BunDBCtx(ctx)` (see the sketch below)
- Star schema with `organizations` as central entity
- All tables have `id`, `created_at`, `updated_at`, `org_id` columns
- Write idempotent migrations in `pkg/sqlmigration/`
- No `ON DELETE CASCADE`; handle cascading deletes in application logic
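
A minimal sketch of a Bun query under these conventions; the `Dashboard` model and method shape are illustrative assumptions:

```go
// Hypothetical list query scoped by org_id, per the star schema convention.
func (m *Module) ListByOrg(ctx context.Context, orgID string) ([]*Dashboard, error) {
	dashboards := make([]*Dashboard, 0)
	err := m.sqlstore.BunDBCtx(ctx).
		NewSelect().
		Model(&dashboards).
		Where("org_id = ?", orgID). // every table carries org_id
		Order("created_at DESC").
		Scan(ctx)
	if err != nil {
		return nil, err
	}
	return dashboards, nil
}
```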

### REST Endpoints

- Use plural resource names: `/v1/organizations`, `/v1/users`
- Use `me` for current user/org: `/v1/organizations/me/users`
- Follow RESTful conventions for CRUD operations

### Linting Rules (from .golangci.yml)

- Don't use `errors` package - use `pkg/errors`
- Don't use `zap` logger - use `slog`
- Don't use `fmt.Errorf` or `fmt.Print*`

## Testing

### Unit Tests

- Run with race detector: `go test -race ./...`
- Provider mocks are in `<provider>test/` packages

### Integration Tests

- Located in `tests/integration/`
- Use pytest with testcontainers
- Files prefixed with numbers for execution order (e.g., `01_database.py`)
- Always use `--reuse` flag during development
- Fixtures in `tests/integration/fixtures/`
@@ -1,15 +0,0 @@
{
  "permissions": {
    "allow": [
      "Read",
      "Glob",
      "Grep",
      "Bash(git *)",
      "Bash(make *)",
      "Bash(cd *)",
      "Bash(ls *)",
      "Bash(go run *)",
      "Bash(yarn run *)"
    ]
  }
}
@@ -1,21 +0,0 @@
---
description: Write optimised ClickHouse queries for SigNoz dashboards (traces, errors, logs)
user_invocable: true
---

# Writing ClickHouse Queries for SigNoz Dashboards

Read [clickhouse-traces-reference.md](./clickhouse-traces-reference.md) for full schema and query reference before writing any query. It covers:

- All table schemas (`distributed_signoz_index_v3`, `distributed_traces_v3_resource`, `distributed_signoz_error_index_v2`, etc.)
- The mandatory resource filter CTE pattern and timestamp bucketing
- Attribute access syntax (standard, indexed, resource)
- Dashboard panel query templates (timeseries, value, table)
- Real-world query examples (span counts, error rates, latency, event extraction)

## Workflow

1. **Understand the ask**: What metric/data does the user want? (e.g., error rate, latency, span count)
2. **Pick the panel type**: Timeseries (time-series chart), Value (single number), or Table (rows).
3. **Build the query** following the mandatory patterns from the reference doc.
4. **Validate** that the query uses all required optimizations (resource CTE, ts_bucket_start, indexed columns).
@@ -1,460 +0,0 @@
# ClickHouse Traces Query Reference for SigNoz

Source: https://signoz.io/docs/userguide/writing-clickhouse-traces-query/

All tables live in the `signoz_traces` database.

---

## Table Schemas

### distributed_signoz_index_v3 (Primary Spans Table)

The main table for querying span data. 30+ columns following OpenTelemetry conventions.

```sql
(
    `ts_bucket_start` UInt64 CODEC(DoubleDelta, LZ4),
    `resource_fingerprint` String CODEC(ZSTD(1)),
    `timestamp` DateTime64(9) CODEC(DoubleDelta, LZ4),
    `trace_id` FixedString(32) CODEC(ZSTD(1)),
    `span_id` String CODEC(ZSTD(1)),
    `trace_state` String CODEC(ZSTD(1)),
    `parent_span_id` String CODEC(ZSTD(1)),
    `flags` UInt32 CODEC(T64, ZSTD(1)),
    `name` LowCardinality(String) CODEC(ZSTD(1)),
    `kind` Int8 CODEC(T64, ZSTD(1)),
    `kind_string` String CODEC(ZSTD(1)),
    `duration_nano` UInt64 CODEC(T64, ZSTD(1)),
    `status_code` Int16 CODEC(T64, ZSTD(1)),
    `status_message` String CODEC(ZSTD(1)),
    `status_code_string` String CODEC(ZSTD(1)),
    `attributes_string` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    `attributes_number` Map(LowCardinality(String), Float64) CODEC(ZSTD(1)),
    `attributes_bool` Map(LowCardinality(String), Bool) CODEC(ZSTD(1)),
    `resources_string` Map(LowCardinality(String), String) CODEC(ZSTD(1)), -- deprecated
    `resource` JSON(max_dynamic_paths = 100) CODEC(ZSTD(1)),
    `events` Array(String) CODEC(ZSTD(2)),
    `links` String CODEC(ZSTD(1)),
    `response_status_code` LowCardinality(String) CODEC(ZSTD(1)),
    `external_http_url` LowCardinality(String) CODEC(ZSTD(1)),
    `http_url` LowCardinality(String) CODEC(ZSTD(1)),
    `external_http_method` LowCardinality(String) CODEC(ZSTD(1)),
    `http_method` LowCardinality(String) CODEC(ZSTD(1)),
    `http_host` LowCardinality(String) CODEC(ZSTD(1)),
    `db_name` LowCardinality(String) CODEC(ZSTD(1)),
    `db_operation` LowCardinality(String) CODEC(ZSTD(1)),
    `has_error` Bool CODEC(T64, ZSTD(1)),
    `is_remote` LowCardinality(String) CODEC(ZSTD(1)),
    -- Pre-indexed "selected" columns (use these instead of map access when available):
    `resource_string_service$$name` LowCardinality(String) DEFAULT resources_string['service.name'] CODEC(ZSTD(1)),
    `attribute_string_http$$route` LowCardinality(String) DEFAULT attributes_string['http.route'] CODEC(ZSTD(1)),
    `attribute_string_messaging$$system` LowCardinality(String) DEFAULT attributes_string['messaging.system'] CODEC(ZSTD(1)),
    `attribute_string_messaging$$operation` LowCardinality(String) DEFAULT attributes_string['messaging.operation'] CODEC(ZSTD(1)),
    `attribute_string_db$$system` LowCardinality(String) DEFAULT attributes_string['db.system'] CODEC(ZSTD(1)),
    `attribute_string_rpc$$system` LowCardinality(String) DEFAULT attributes_string['rpc.system'] CODEC(ZSTD(1)),
    `attribute_string_rpc$$service` LowCardinality(String) DEFAULT attributes_string['rpc.service'] CODEC(ZSTD(1)),
    `attribute_string_rpc$$method` LowCardinality(String) DEFAULT attributes_string['rpc.method'] CODEC(ZSTD(1)),
    `attribute_string_peer$$service` LowCardinality(String) DEFAULT attributes_string['peer.service'] CODEC(ZSTD(1))
)
ORDER BY (ts_bucket_start, resource_fingerprint, has_error, name, timestamp)
```

### distributed_traces_v3_resource (Resource Lookup Table)

Used in the resource filter CTE pattern for efficient filtering by resource attributes.

```sql
(
    `labels` String CODEC(ZSTD(5)),
    `fingerprint` String CODEC(ZSTD(1)),
    `seen_at_ts_bucket_start` Int64 CODEC(Delta(8), ZSTD(1))
)
```

### distributed_signoz_error_index_v2 (Error Events)

```sql
(
    `timestamp` DateTime64(9) CODEC(DoubleDelta, LZ4),
    `errorID` FixedString(32) CODEC(ZSTD(1)),
    `groupID` FixedString(32) CODEC(ZSTD(1)),
    `traceID` FixedString(32) CODEC(ZSTD(1)),
    `spanID` String CODEC(ZSTD(1)),
    `serviceName` LowCardinality(String) CODEC(ZSTD(1)),
    `exceptionType` LowCardinality(String) CODEC(ZSTD(1)),
    `exceptionMessage` String CODEC(ZSTD(1)),
    `exceptionStacktrace` String CODEC(ZSTD(1)),
    `exceptionEscaped` Bool CODEC(T64, ZSTD(1)),
    `resourceTagsMap` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    INDEX idx_error_id errorID TYPE bloom_filter GRANULARITY 4,
    INDEX idx_resourceTagsMapKeys mapKeys(resourceTagsMap) TYPE bloom_filter(0.01) GRANULARITY 64,
    INDEX idx_resourceTagsMapValues mapValues(resourceTagsMap) TYPE bloom_filter(0.01) GRANULARITY 64
)
```

### distributed_top_level_operations

```sql
(
    `name` LowCardinality(String) CODEC(ZSTD(1)),
    `serviceName` LowCardinality(String) CODEC(ZSTD(1))
)
```

### distributed_span_attributes_keys

```sql
(
    `tagKey` LowCardinality(String) CODEC(ZSTD(1)),
    `tagType` Enum8('tag' = 1, 'resource' = 2) CODEC(ZSTD(1)),
    `dataType` Enum8('string' = 1, 'bool' = 2, 'float64' = 3) CODEC(ZSTD(1)),
    `isColumn` Bool CODEC(ZSTD(1))
)
```

### distributed_span_attributes

```sql
(
    `timestamp` DateTime CODEC(DoubleDelta, ZSTD(1)),
    `tagKey` LowCardinality(String) CODEC(ZSTD(1)),
    `tagType` Enum8('tag' = 1, 'resource' = 2) CODEC(ZSTD(1)),
    `dataType` Enum8('string' = 1, 'bool' = 2, 'float64' = 3) CODEC(ZSTD(1)),
    `stringTagValue` String CODEC(ZSTD(1)),
    `float64TagValue` Nullable(Float64) CODEC(ZSTD(1)),
    `isColumn` Bool CODEC(ZSTD(1))
)
```

---

## Mandatory Optimization Patterns

### 1. Resource Filter CTE

**Always** use a CTE to pre-filter resource fingerprints when filtering by resource attributes (service.name, environment, etc.). This is the single most impactful optimization.

```sql
WITH __resource_filter AS (
    SELECT fingerprint
    FROM signoz_traces.distributed_traces_v3_resource
    WHERE (simpleJSONExtractString(labels, 'service.name') = 'myservice')
      AND seen_at_ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp
)
SELECT ...
FROM signoz_traces.distributed_signoz_index_v3
WHERE resource_fingerprint GLOBAL IN __resource_filter
  AND ...
```

- Multiple resource filters: chain with AND in the CTE WHERE clause.
- Use `simpleJSONExtractString(labels, '<key>')` to extract resource attribute values.

### 2. Timestamp Bucketing

**Always** include a `ts_bucket_start` filter alongside the `timestamp` filter. Data is bucketed in 30-minute (1800-second) intervals.

```sql
WHERE timestamp BETWEEN {{.start_datetime}} AND {{.end_datetime}}
  AND ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp
```

The `- 1800` on the start ensures spans at bucket boundaries are not missed: a span logged at 10:15 lives in the bucket that started at 10:00, so a query whose range starts at 10:15 must also scan the 10:00 bucket.

### 3. Use Indexed Columns Over Map Access

When a pre-indexed ("selected") column exists, use it instead of map access:

| Instead of | Use |
|---|---|
| `attributes_string['http.route']` | `attribute_string_http$$route` |
| `attributes_string['db.system']` | `attribute_string_db$$system` |
| `attributes_string['rpc.method']` | `attribute_string_rpc$$method` |
| `attributes_string['peer.service']` | `attribute_string_peer$$service` |
| `resources_string['service.name']` | `resource_string_service$$name` |

The naming convention: replace `.` with `$$` in the attribute name and prefix with `attribute_string_`, `attribute_number_`, or `attribute_bool_`.
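
For example, these two filters are logically equivalent, but the second reads a single dedicated column instead of the whole map (illustrative count queries; the time filters required by the patterns above are omitted for brevity):

```sql
-- Map access: forces reading the entire attributes_string map
SELECT count() FROM signoz_traces.distributed_signoz_index_v3
WHERE attributes_string['http.route'] = '/api/orders';

-- Indexed "selected" column: reads one LowCardinality column
SELECT count() FROM signoz_traces.distributed_signoz_index_v3
WHERE attribute_string_http$$route = '/api/orders';
```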

### 4. Use Pre-extracted Columns

These top-level columns are faster than map access:

- `http_method`, `http_url`, `http_host`
- `db_name`, `db_operation`
- `has_error`, `duration_nano`, `name`, `kind`
- `response_status_code`

---

## Attribute Access Syntax

### Standard (non-indexed) attributes

```sql
attributes_string['http.status_code']
attributes_number['response_time']
attributes_bool['is_error']
```

### Selected (indexed) attributes — direct column names

```sql
attribute_string_http$$route     -- for http.route
attribute_number_response$$time  -- for response.time
attribute_bool_is$$error         -- for is.error
```

### Resource attributes in SELECT / GROUP BY

```sql
resource.service.name::String
resource.environment::String
```

### Resource attributes in WHERE (via CTE)

```sql
simpleJSONExtractString(labels, 'service.name') = 'myservice'
```

### Checking attribute existence

```sql
mapContains(attributes_string, 'http.method')
```

---

## Dashboard Panel Query Templates

### Timeseries Panel

Aggregates data over time intervals for chart visualization.

```sql
WITH __resource_filter AS (
    SELECT fingerprint
    FROM signoz_traces.distributed_traces_v3_resource
    WHERE (simpleJSONExtractString(labels, 'service.name') = '{{service}}')
      AND seen_at_ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp
)
SELECT
    toStartOfInterval(timestamp, INTERVAL 1 MINUTE) AS ts,
    toFloat64(count()) AS value
FROM signoz_traces.distributed_signoz_index_v3
WHERE
    resource_fingerprint GLOBAL IN __resource_filter AND
    timestamp BETWEEN {{.start_datetime}} AND {{.end_datetime}} AND
    ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp
GROUP BY ts
ORDER BY ts ASC;
```

### Value Panel

Returns a single aggregated number. Wrap the timeseries query and reduce with `avg()`, `sum()`, `min()`, `max()`, or `any()`.

```sql
WITH __resource_filter AS (
    SELECT fingerprint
    FROM signoz_traces.distributed_traces_v3_resource
    WHERE (simpleJSONExtractString(labels, 'service.name') = '{{service}}')
      AND seen_at_ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp
)
SELECT
    avg(value) AS value,
    any(ts) AS ts
FROM (
    SELECT
        toStartOfInterval(timestamp, INTERVAL 1 MINUTE) AS ts,
        toFloat64(count()) AS value
    FROM signoz_traces.distributed_signoz_index_v3
    WHERE
        resource_fingerprint GLOBAL IN __resource_filter AND
        timestamp BETWEEN {{.start_datetime}} AND {{.end_datetime}} AND
        ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp
    GROUP BY ts
    ORDER BY ts ASC
)
```

### Table Panel

Rows grouped by dimensions. Use `now() as ts` instead of a time interval column.

```sql
WITH __resource_filter AS (
    SELECT fingerprint
    FROM signoz_traces.distributed_traces_v3_resource
    WHERE seen_at_ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp
)
SELECT
    now() AS ts,
    resource.service.name::String AS `service.name`,
    toFloat64(count()) AS value
FROM signoz_traces.distributed_signoz_index_v3
WHERE
    resource_fingerprint GLOBAL IN __resource_filter AND
    timestamp BETWEEN {{.start_datetime}} AND {{.end_datetime}} AND
    ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp AND
    `service.name` IS NOT NULL
GROUP BY `service.name`, ts
ORDER BY value DESC;
```

---

## Query Examples

### Timeseries — Error spans per service per minute

Shows `has_error` filtering, a resource attribute in SELECT, and multi-series grouping.

```sql
WITH __resource_filter AS (
    SELECT fingerprint
    FROM signoz_traces.distributed_traces_v3_resource
    WHERE seen_at_ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp
)
SELECT
    toStartOfInterval(timestamp, INTERVAL 1 MINUTE) AS ts,
    resource.service.name::String AS `service.name`,
    toFloat64(count()) AS value
FROM signoz_traces.distributed_signoz_index_v3
WHERE
    resource_fingerprint GLOBAL IN __resource_filter AND
    timestamp BETWEEN {{.start_datetime}} AND {{.end_datetime}} AND
    has_error = true AND
    `service.name` IS NOT NULL AND
    ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp
GROUP BY `service.name`, ts
ORDER BY ts ASC;
```

### Value — Average duration of GET requests

Shows the value-panel wrapping pattern (`avg(value)` / `any(ts)`) with a service resource filter.

```sql
WITH __resource_filter AS (
    SELECT fingerprint
    FROM signoz_traces.distributed_traces_v3_resource
    WHERE (simpleJSONExtractString(labels, 'service.name') = 'api-service')
      AND seen_at_ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp
)
SELECT
    avg(value) AS value,
    any(ts) AS ts
FROM (
    SELECT
        toStartOfInterval(timestamp, INTERVAL 1 MINUTE) AS ts,
        toFloat64(avg(duration_nano)) AS value
    FROM signoz_traces.distributed_signoz_index_v3
    WHERE
        resource_fingerprint GLOBAL IN __resource_filter AND
        timestamp BETWEEN {{.start_datetime}} AND {{.end_datetime}} AND
        ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp AND
        http_method = 'GET'
    GROUP BY ts
    ORDER BY ts ASC
)
```

### Table — Average duration by HTTP method

Shows the `now() as ts` pattern, pre-extracted column usage, and non-null filtering.

```sql
WITH __resource_filter AS (
    SELECT fingerprint
    FROM signoz_traces.distributed_traces_v3_resource
    WHERE (simpleJSONExtractString(labels, 'service.name') = 'api-gateway')
      AND seen_at_ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp
)
SELECT
    now() AS ts,
    http_method,
    toFloat64(avg(duration_nano)) AS avg_duration_nano
FROM signoz_traces.distributed_signoz_index_v3
WHERE
    resource_fingerprint GLOBAL IN __resource_filter AND
    timestamp BETWEEN {{.start_datetime}} AND {{.end_datetime}} AND
    ts_bucket_start BETWEEN $start_timestamp - 1800 AND $end_timestamp AND
    http_method IS NOT NULL AND http_method != ''
GROUP BY http_method, ts
ORDER BY avg_duration_nano DESC;
```

### Advanced — Extract values from span events

Shows the `arrayFilter`/`arrayMap` pattern for querying the `events` JSON array.

```sql
WITH arrayFilter(x -> JSONExtractString(x, 'name') = 'Getting customer', events) AS filteredEvents
SELECT
    toStartOfInterval(timestamp, INTERVAL 1 MINUTE) AS interval,
    toFloat64(count()) AS count,
    arrayJoin(arrayMap(x -> JSONExtractString(JSONExtractString(x, 'attributeMap'), 'customer_id'), filteredEvents)) AS resultArray
FROM signoz_traces.distributed_signoz_index_v3
WHERE not empty(filteredEvents)
  AND timestamp > toUnixTimestamp(now() - INTERVAL 30 MINUTE)
  AND ts_bucket_start >= toUInt64(toUnixTimestamp(now() - toIntervalMinute(30))) - 1800
GROUP BY (resultArray, interval)
ORDER BY (resultArray, interval) ASC;
```

### Advanced — Average latency between two specific spans

Shows cross-span latency calculation using `minIf()` and indexed service columns. (Note: the v3 table column is `trace_id`; it is aliased to `traceID` here for the outer queries.)

```sql
SELECT
    interval,
    round(avg(time_diff), 2) AS result
FROM
(
    SELECT
        interval,
        traceID,
        if(startTime1 != 0, if(startTime2 != 0, (toUnixTimestamp64Nano(startTime2) - toUnixTimestamp64Nano(startTime1)) / 1000000, nan), nan) AS time_diff
    FROM
    (
        SELECT
            toStartOfInterval(timestamp, toIntervalMinute(1)) AS interval,
            trace_id AS traceID,
            minIf(timestamp, if(resource_string_service$$name = 'driver', if(name = '/driver.DriverService/FindNearest', if((resources_string['component']) = 'gRPC', true, false), false), false)) AS startTime1,
            minIf(timestamp, if(resource_string_service$$name = 'route', if(name = 'HTTP GET /route', true, false), false)) AS startTime2
        FROM signoz_traces.distributed_signoz_index_v3
        WHERE (timestamp BETWEEN {{.start_datetime}} AND {{.end_datetime}})
          AND (ts_bucket_start BETWEEN {{.start_timestamp}} - 1800 AND {{.end_timestamp}})
          AND (resource_string_service$$name IN ('driver', 'route'))
        GROUP BY (interval, traceID)
        ORDER BY (interval, traceID) ASC
    )
)
WHERE isNaN(time_diff) = 0
GROUP BY interval
ORDER BY interval ASC;
```

---

## SigNoz Dashboard Variables

These template variables are automatically replaced by SigNoz when the query runs:

| Variable | Description |
|---|---|
| `{{.start_datetime}}` | Start of selected time range (DateTime64) |
| `{{.end_datetime}}` | End of selected time range (DateTime64) |
| `$start_timestamp` | Start as Unix timestamp (seconds) |
| `$end_timestamp` | End as Unix timestamp (seconds) |

---

## Query Optimization Checklist

Before finalizing any query, verify:

- [ ] **Resource filter CTE** is present when filtering by resource attributes (service.name, environment, etc.)
- [ ] **`ts_bucket_start`** filter is included alongside the `timestamp` filter, with `- 1800` on the start
- [ ] **`GLOBAL IN`** is used (not just `IN`) for the resource fingerprint subquery
- [ ] **Indexed columns** are used over map access where available (e.g., `http_method` over `attributes_string['http.method']`)
- [ ] **Pre-extracted columns** are used where available (`has_error`, `duration_nano`, `http_method`, `db_name`, etc.)
- [ ] **`seen_at_ts_bucket_start`** filter is included in the resource CTE
- [ ] Aggregation results are cast with `toFloat64()` for dashboard compatibility
- [ ] For timeseries: results are ordered by the time column ASC
- [ ] For table panels: `now() as ts` is used instead of time intervals
- [ ] For value panels: the outer query uses the `avg(value)` / `any(ts)` pattern
@@ -1,36 +0,0 @@
---
name: commit
description: Create a conventional commit with staged changes
allowed-tools: Bash(git:*)
---

# Create Conventional Commit

Commit staged changes using the conventional commit format: `type(scope): description`

## Types

- `feat:` - New feature
- `fix:` - Bug fix
- `chore:` - Maintenance/refactor/tooling
- `test:` - Tests only
- `docs:` - Documentation

## Process

1. Review staged changes: `git diff --cached`
2. Determine the type, optional scope, and description (imperative, <70 chars)
3. Commit using a HEREDOC:
```bash
git commit -m "$(cat <<'EOF'
type(scope): description
EOF
)"
```
4. Verify: `git log -1`

## Notes

- Description: imperative mood, lowercase, no period
- Body: explain WHY, not WHAT (the code shows what). Keep it concise.
- Do not include a Co-Authored-By Claude trailer in the commit message; ownership and accountability should remain with the human contributor
@@ -1,22 +0,0 @@
---
description: How to start SigNoz frontend and backend dev servers
---

# Dev Server Setup

Full guide: [development.md](../../docs/contributing/development.md)

## Start Order

1. **Infra**: Ensure the clickhouse container is running: `docker ps | grep clickhouse`
2. **Backend**: `make go-run-community` (serves at `localhost:8080`)
3. **Frontend**: `cd frontend && yarn install && yarn dev` (serves at `localhost:3301`)
   - Requires `frontend/.env` with `FRONTEND_API_ENDPOINT=http://localhost:8080`
   - For git worktrees, create `frontend/.env` with `cp frontend/example.env frontend/.env`

## Verify

- ClickHouse: `curl http://localhost:8123/ping` → "Ok."
- OTel Collector: `curl http://localhost:13133`
- Backend: `curl http://localhost:8080/api/v1/health` → `{"status":"ok"}`
- Frontend: `http://localhost:3301`
@@ -1,55 +0,0 @@
---
name: raise-pr
description: Create a pull request with an auto-filled template. Pass 'commit' to commit staged changes first.
allowed-tools: Bash(gh:*, git:*), Read
argument-hint: [commit?]
---

# Raise Pull Request

Create a PR with the template auto-filled from commits after origin/main.

## Arguments

- No argument: Create a PR from existing commits
- `commit`: Commit staged changes first, then create the PR

## Process

1. **If `$ARGUMENTS` is "commit"**: Review staged changes and commit with a descriptive message
   - Check for staged changes: `git diff --cached --stat`
   - If changes exist:
     - Review the changes: `git diff --cached`
     - Use the commit skill for making the commit, i.e. follow conventional commit practices
     - Commit command: `git commit -m "message"`

2. **Analyze commits since origin/main**:
   - `git log origin/main..HEAD --pretty=format:"%s%n%b"` - get commit messages
   - `git diff origin/main...HEAD --stat` - see changes

3. **Read the template**: `.github/pull_request_template.md`

4. **Generate the PR**:
   - **Title**: Short (<70 chars), from commit messages or the main change
   - **Body**: Fill template sections based on commits/changes:
     - Summary (why/what/approach) - end with "Closes #<issue_number>" if an issue number is available from the branch name (`git branch --show-current`)
     - Change Type checkboxes
     - Bug Context (if applicable)
     - Testing Strategy
     - Risk Assessment
     - Changelog (if user-facing)
     - Checklist

5. **Create the PR**:
```bash
git push -u origin $(git branch --show-current)
gh pr create --base main --title "..." --body "..."
gh pr view
```

## Notes

- Analyze ALL commit messages from origin/main to HEAD
- Fill template sections based on code analysis
- Leave template sections as they are if you can't determine the content
- Don't stage changes yourself; only commit or push whatever the user has already staged
@@ -1,254 +0,0 @@
---
name: review
description: Review code changes for bugs, performance issues, and SigNoz convention compliance
allowed-tools: Bash(git:*, gh:*), Read, Glob, Grep
---

# Review Command

Perform a thorough code review following SigNoz's coding conventions and contributing guidelines, and flag any potential bugs introduced.

## Usage

Invoke this command to review code changes, files, or pull requests with actionable and concise feedback.

## Process

1. **Determine scope**:
   - Ask the user what to review if not specified:
     - Specific files or directories
     - Current git diff (staged or unstaged)
     - Specific PR number or commit range
     - All changes since origin/main

2. **Gather context**:
```bash
# For current changes
git diff --cached  # Staged changes
git diff           # Unstaged changes

# For a commit range
git diff origin/main...HEAD  # All changes since main

# For the last commit only
git diff HEAD~1..HEAD

# For a specific PR
gh pr view <number> --json files,additions,deletions
gh pr diff <number>
```

3. **Read all relevant files thoroughly**:
   - Use the Read tool for modified files
   - Understand the context and purpose of the changes
   - Check surrounding code for context

4. **Review against SigNoz guidelines**:
   - **Frontend**: Check [Frontend Guidelines](../../frontend/CONTRIBUTIONS.md)
   - **Backend/Architecture**: Check [CLAUDE.md](../CLAUDE.md) for provider pattern, error handling, SQL, REST, and linting conventions
   - **General**: Check [Contributing Guidelines](../../CONTRIBUTING.md)
   - **Commits**: Verify [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/)

5. **Verify feature intent**:
   - Read the PR description, commit message, or linked issue to understand *what* the change claims to do
   - Trace the code path end-to-end to confirm the change actually achieves its stated goal
   - Check that the happy path works as described
   - Identify any scenarios where the feature silently does nothing or produces wrong results

6. **Review for bug introduction**:
   - **Regressions**: Does the change break existing behavior? Check callers of modified functions/interfaces
   - **Edge cases**: Empty inputs, nil/undefined values, boundary conditions, concurrent access
   - **Error paths**: Are all error cases handled? Can errors be swallowed silently?
   - **State management**: Are state transitions correct? Can state become inconsistent?
   - **Race conditions**: Shared mutable state, async operations, missing locks or guards
   - **Type mismatches**: Unsafe casts, implicit conversions, `any` usage hiding real types

7. **Review for performance implications**:
   - **Backend**: N+1 queries, missing indexes, unbounded result sets, large allocations in hot paths, unnecessary DB round-trips
   - **Frontend**: Unnecessary re-renders from inline objects/functions as props, missing memoization on expensive computations, large bundle imports that should be lazy-loaded, unthrottled event handlers
   - **General**: O(n²) or worse algorithms on potentially large datasets, unnecessary network calls, missing pagination or limits

8. **Provide actionable, concise feedback** in the structured format below

## Review Checklist

For coding conventions and style, refer to the linked guideline docs. This checklist focuses on **review-specific concerns** that guidelines alone don't catch.

### Correctness & Intent
- [ ] Change achieves what the PR/commit/issue describes
- [ ] Happy path works end-to-end
- [ ] Edge cases handled (empty, nil, boundary, concurrent)
- [ ] Error paths don't swallow failures silently
- [ ] No regressions to existing callers of modified code

### Security
- [ ] No exposed secrets, API keys, credentials
- [ ] No sensitive data in logs
- [ ] Input validation at system boundaries
- [ ] Authentication/authorization checked for new endpoints
- [ ] No SQL injection or XSS risks

### Performance
- [ ] No N+1 queries or unbounded result sets
- [ ] No unnecessary re-renders (inline objects/functions as props, missing memoization)
- [ ] No large imports that should be lazy-loaded
- [ ] No O(n²) on potentially large datasets
- [ ] Pagination/limits present where needed

### Testing
- [ ] New functionality has tests
- [ ] Edge cases and error paths tested
- [ ] Tests are deterministic (no flakiness)

### Git/Commits
- [ ] Commit messages follow `type(scope): description` ([Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/))
- [ ] Commits are atomic and logical

## Output Format

Provide feedback in this structured format:

````markdown
## Code Review

**Scope**: [What was reviewed]
**Overall**: [1-2 sentence summary and general sentiment]

---

### 🚨 Critical Issues (Must Fix)

1. **[Category]** `file:line`
   **Problem**: [What's wrong]
   **Why**: [Why it matters]
   **Fix**: [Specific solution]
   ```[language]
   // Example fix if helpful
   ```

### ⚠️ Suggestions (Should Consider)

1. **[Category]** `file:line`
   **Issue**: [What could be improved]
   **Suggestion**: [Concrete improvement]

### ✅ Positive Highlights

- [Good practice observed]
- [Well-implemented feature]

---

**References**:
- [Relevant guideline links]
````

## Review Categories

Use these categories for issues:

- **Bug / Regression**: Logic errors, edge cases, race conditions, broken existing behavior
- **Feature Gap**: Change doesn't fully achieve its stated intent
- **Security Risk**: Authentication, authorization, data exposure, injection
- **Performance Issue**: Inefficient queries, unnecessary re-renders, memory leaks, unbounded data
- **Convention Violation**: Style, patterns, architectural guidelines (link to the relevant guideline doc)
- **Code Quality**: Complexity, duplication, naming, type safety
- **Testing**: Missing tests, inadequate coverage, flaky tests

## Example Review

````markdown
## Code Review

**Scope**: Changes in `frontend/src/pages/TraceDetail/` (3 files, 245 additions)
**Overall**: Good implementation of the pagination feature. Found 2 critical issues and 3 suggestions.

---

### 🚨 Critical Issues (Must Fix)

1. **Security Risk** `TraceList.tsx:45`
   **Problem**: API token exposed in client-side code
   **Why**: Security vulnerability - tokens should never be in the frontend
   **Fix**: Move authentication to the backend, use session-based auth

2. **Performance Issue** `TraceList.tsx:89`
   **Problem**: Inline function passed as a prop causes unnecessary re-renders
   **Why**: Violates frontend guideline, degrades performance with large lists
   **Fix**:
   ```typescript
   const handleTraceClick = useCallback((traceId: string) => {
       navigate(`/trace/${traceId}`);
   }, [navigate]);
   ```

### ⚠️ Suggestions (Should Consider)

1. **Code Quality** `TraceList.tsx:120-180`
   **Issue**: Function exceeds the 40-line guideline
   **Suggestion**: Extract into smaller functions:
   - `filterTracesByTimeRange()`
   - `aggregateMetrics()`
   - `renderChartData()`

2. **Type Safety** `types.ts:23`
   **Issue**: Using `any` for trace attributes
   **Suggestion**: Define a proper interface for TraceAttributes

3. **Convention** `TraceList.tsx:12`
   **Issue**: File imports not organized
   **Suggestion**: Let simple-import-sort auto-organize (will happen on save)

### ✅ Positive Highlights

- Excellent use of virtualization for large trace lists
- Good error boundary implementation
- Well-structured component hierarchy
- Comprehensive unit tests included

---

**References**:
- [Frontend Guidelines](../../frontend/CONTRIBUTIONS.md)
- [useCallback best practices](https://kentcdodds.com/blog/usememo-and-usecallback)
````

## Tone Guidelines

- **Be respectful**: Focus on the code, not the person
- **Be specific**: Always reference exact file:line locations
- **Be concise**: Get to the point, avoid verbosity
- **Be actionable**: Every comment should have a clear resolution path
- **Be balanced**: Acknowledge good work alongside issues
- **Be educational**: Explain why something is an issue, link to guidelines

## Priority Levels

1. **Critical (🚨)**: Security, bugs, data corruption, crashes
2. **Important (⚠️)**: Performance, maintainability, convention violations
3. **Nice to have (💡)**: Style preferences, micro-optimizations

## Important Notes

- **Reference specific guidelines** from the docs when applicable
- **Provide code examples** for fixes when helpful
- **Ask questions** if code intent is unclear
- **Link to external resources** for educational value
- **Distinguish** must-fix from should-consider
- **Be concise** - reviewers value their time

## Critical Rules

- **NEVER** be vague - always specify the file and line number
- **NEVER** just point out problems - suggest solutions
- **NEVER** review without reading the actual code
- **ALWAYS** check against SigNoz's specific guidelines
- **ALWAYS** provide rationale for each comment
- **ALWAYS** be constructive and respectful

## Reference Documents

- [Frontend Guidelines](../../frontend/CONTRIBUTIONS.md) - React, TypeScript, styling
- [Contributing Guidelines](../../CONTRIBUTING.md) - Workflow, commit conventions
- [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) - Commit format
- [CLAUDE.md](../CLAUDE.md) - Project architecture and conventions
@@ -1,12 +0,0 @@
---
description: Architecture context for the traces module (trace detail, waterfall, flamegraph)
---

# Traces Module

Read [trace-detail-architecture.md](./trace-detail-architecture.md) for full context before working on this module. It covers:

- ClickHouse tables (`signoz_index_v3`, `trace_summary`) and their gotchas
- Backend API endpoints (waterfall + flamegraph) and processing pipelines
- Frontend component map, state flow, and API hooks
- Key file index for both backend and frontend
@@ -1,285 +0,0 @@
# Trace Detail Module Architecture

Use this document as context when working on the trace detail page — the waterfall view, flamegraph view, or any related backend/frontend code. This avoids expensive codebase searches.

---

## ClickHouse Tables

### `signoz_index_v3` (local) / `distributed_signoz_index_v3` (distributed)

- **Engine**: MergeTree (plain — no deduplication)
- **Location**: `signoz-otel-collector/cmd/signozschemamigrator/schema_migrator/traces_migrations.go`
- **Purpose**: Primary span storage. Every span is a row.
- **Key columns**: `ts_bucket_start` (UInt64), `timestamp` (DateTime64(9)), `trace_id` (FixedString(32)), `span_id`, `duration_nano`, `has_error`, `name`, `resource_string_service$$name` (service name), `links` (references/parent info), `attributes_string`, `events`
- **ORDER BY**: `(ts_bucket_start, resource_fingerprint, has_error, name, timestamp)`
- **Partition**: `toDate(timestamp)`
- **Important**: Since it's a plain MergeTree, duplicate rows for the same `span_id` can exist. Queries must use `DISTINCT ON (span_id)` to deduplicate.

### `trace_summary` (local) / `distributed_trace_summary` (distributed)

- **Engine**: AggregatingMergeTree
- **Purpose**: Pre-aggregated trace-level metadata (start time, end time, span count per trace).
- **Columns**:
  - `trace_id` (String) — ORDER BY key
  - `start` (SimpleAggregateFunction(min, DateTime64(9)))
  - `end` (SimpleAggregateFunction(max, DateTime64(9)))
  - `num_spans` (SimpleAggregateFunction(sum, UInt64))
- **Partition**: `toDate(end)`
- **Distributed sharding**: `cityHash64(trace_id)`
- **CRITICAL**: Because this is an AggregatingMergeTree, `SELECT *` without `FINAL` or `GROUP BY` can return multiple partial rows per trace_id (one per unmerged part). Always query with:
  ```sql
  SELECT trace_id, min(start) AS start, max(end) AS end, sum(num_spans) AS num_spans
  FROM distributed_trace_summary WHERE trace_id = $1 GROUP BY trace_id
  ```

### `trace_summary_mv` (Materialized View)

- **Source**: `signoz_index_v3`
- **Destination**: `trace_summary`
- **Query**: `SELECT trace_id, min(timestamp) AS start, max(timestamp) AS end, toUInt64(count()) AS num_spans FROM signoz_index_v3 GROUP BY trace_id`
- **How it works**: Acts as an INSERT trigger — it runs only on each new batch of rows inserted into `signoz_index_v3`, NOT on the full table. Each batch produces a partial aggregate row in `trace_summary`.
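
To illustrate with hypothetical values, two insert batches for the same trace produce two partial rows until a background merge runs:

```sql
-- Suppose two batches for trace 'T' produced two partial rows:
--   (T, start 10:00:01, end 10:00:05, num_spans 10)  -- from batch 1
--   (T, start 10:00:04, end 10:00:09, num_spans 5)   -- from batch 2
-- A raw read may return both rows until the parts are merged:
SELECT * FROM distributed_trace_summary WHERE trace_id = 'T';

-- The GROUP BY pattern collapses them into the correct single row
-- (T, 10:00:01, 10:00:09, 15):
SELECT trace_id, min(start) AS start, max(end) AS end, sum(num_spans) AS num_spans
FROM distributed_trace_summary
WHERE trace_id = 'T'
GROUP BY trace_id;
```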

---

## Data Flow: Ingestion to Storage

```
OTel Collector sends spans in batches
  → INSERT into signoz_index_v3 (raw span rows)
  → trace_summary_mv triggers on the batch
  → Computes min(timestamp), max(timestamp), count() per trace_id FOR THAT BATCH ONLY
  → Inserts a partial aggregate row into trace_summary
  → ClickHouse eventually merges partial rows (background, async, no timing guarantee)
```

---

## Backend API Endpoints

### POST `/api/v2/traces/waterfall/{traceId}`

- **Handler**: `pkg/query-service/app/http_handler.go` → `GetWaterfallSpansForTraceWithMetadata` (line ~1748)
- **Reader**: `pkg/query-service/app/clickhouseReader/reader.go` → `GetWaterfallSpansForTraceWithMetadata` (line ~873)
- **Request body** (`model.GetWaterfallSpansForTraceWithMetadataParams` in `pkg/query-service/model/queryParams.go:332`):
  ```json
  {
    "selectedSpanId": "abc123",
    "isSelectedSpanIDUnCollapsed": true,
    "uncollapsedSpans": ["span1", "span2"]
  }
  ```
- **Response** (`model.GetWaterfallSpansForTraceWithMetadataResponse` in `pkg/query-service/model/response.go:319`):
  ```json
  {
    "startTimestampMillis": 1707300000000,
    "endTimestampMillis": 1707302460000,
    "totalSpansCount": 166,
    "totalErrorSpansCount": 0,
    "rootServiceName": "frontend",
    "rootServiceEntryPoint": "GET /api/data",
    "serviceNameToTotalDurationMap": {"frontend": 5000, "backend": 3000},
    "spans": [/* flat list of Span objects */],
    "hasMissingSpans": true,
    "uncollapsedSpans": ["span1", "span2", "span3"]
  }
  ```

### POST `/api/v2/traces/flamegraph/{traceId}`

- **Handler**: `http_handler.go` → `GetFlamegraphSpansForTrace` (line ~1781)
- **Reader**: `reader.go` → `GetFlamegraphSpansForTrace` (line ~1091)
- **Request body** (`model.GetFlamegraphSpansForTraceParams` in `queryParams.go:338`):
  ```json
  { "selectedSpanId": "abc123" }
  ```
- **Response** (`model.GetFlamegraphSpansForTraceResponse` in `response.go:334`):
  ```json
  {
    "startTimestampMillis": 1707300000000,
    "endTimestampMillis": 1707302460000,
    "durationNano": 2460000000000,
    "spans": [/* 2D array: spans[level][index] = FlamegraphSpan */]
  }
  ```

---

## Backend Processing Pipeline (Waterfall)

### Step 1: Get spans from DB (`GetSpansForTrace`, reader.go:831)

1. Query `distributed_trace_summary` for the trace's start/end time range
2. Use that range to query `distributed_signoz_index_v3`:
   ```sql
   SELECT DISTINCT ON (span_id) timestamp, duration_nano, span_id, ...
   FROM distributed_signoz_index_v3
   WHERE trace_id = $1
     AND ts_bucket_start >= traceSummary.Start - 1800
     AND ts_bucket_start <= traceSummary.End
   ORDER BY timestamp ASC, name ASC
   ```
3. `totalSpans = len(results)` — count of actual DB spans

### Step 2: Build the span tree (reader.go:907-1017)

1. Create a `spanIdToSpanNodeMap` (map[spanID] → *Span)
2. For each span, find its parent via `References` (CHILD_OF ref type):
   - If the parent exists in the map → append as a child
   - If the parent is NOT in the map → create a **Missing Span** node, add it as a root
3. Spans with no parent reference and not already in roots → add as a root
4. Sort roots by timestamp (the whole pass is sketched below)
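
A simplified sketch of that tree-building pass; field and helper names here are illustrative assumptions, not the exact source:

```go
// Hypothetical condensed version of the tree building in reader.go.
spanIdToSpanNodeMap := make(map[string]*model.Span, len(spans))
for _, span := range spans {
	spanIdToSpanNodeMap[span.SpanID] = span
}

var traceRoots []*model.Span
for _, span := range spans {
	parentID := childOfParentID(span.References) // extract the CHILD_OF reference (assumed helper)
	if parentID == "" {
		traceRoots = append(traceRoots, span) // no parent reference → root
		continue
	}
	if parent, ok := spanIdToSpanNodeMap[parentID]; ok {
		parent.Children = append(parent.Children, span)
	} else {
		// Parent never arrived → synthesize a "missing span" node as a root.
		missing := &model.Span{SpanID: parentID}
		missing.Children = append(missing.Children, span)
		spanIdToSpanNodeMap[parentID] = missing
		traceRoots = append(traceRoots, missing)
	}
}

sort.Slice(traceRoots, func(i, j int) bool {
	return traceRoots[i].TimeUnixNano < traceRoots[j].TimeUnixNano // field name assumed
})
```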

### Step 3: Cache (reader.go:1029-1045)

- Cache key: `"getWaterfallSpansForTraceWithMetadata-{traceID}"`
- TTL: 5 minutes
- Cached data: the full tree (spanIdToSpanNodeMap, traceRoots) and metadata (totalSpans, startTime, etc.)
- **Flux interval** (default 2 minutes, config: `--flux-interval-for-trace-detail`): If the trace's end time is within this interval from now, the cache is SKIPPED — forcing a fresh DB query

### Step 4: Select visible spans (`GetSelectedSpans`, tracedetail/waterfall.go:159)

1. Find the path from the root to `selectedSpanID` → auto-uncollapse those nodes
2. Pre-order DFS traversal of the tree, only descending into uncollapsed nodes
3. For each visited node, compute `SubTreeNodeCount` (total descendants + self)
4. Apply a **sliding window** of 500 spans centered around the selected span (40% before, 60% after); see the sketch below
5. Return the windowed flat list + the updated uncollapsedSpans list

### Key Constants (waterfall.go)

- `SPAN_LIMIT_PER_REQUEST_FOR_WATERFALL = 500` — max spans per API response
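
The 40/60 window arithmetic, sketched under assumed shapes (not the exact implementation):

```go
// Hypothetical sketch: window up to 500 visible spans around the selected index,
// shifting unused budget to the other side at the edges of the flat list.
const spanLimit = 500 // SPAN_LIMIT_PER_REQUEST_FOR_WATERFALL

func windowBounds(selectedIdx, total int) (lo, hi int) {
	lo = selectedIdx - spanLimit*40/100 // 40% of the budget before the selected span
	hi = selectedIdx + spanLimit*60/100 // 60% after
	if lo < 0 {
		hi -= lo // shift the unused budget to the end
		lo = 0
	}
	if hi > total {
		lo -= hi - total // shift the unused budget to the start
		hi = total
	}
	if lo < 0 {
		lo = 0
	}
	return lo, hi
}
```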

---

## Backend Processing Pipeline (Flamegraph)

### Differences from Waterfall

- Uses `model.FlamegraphSpan` (a lighter model: no tagMap/statusMessage/etc.)
- Uses **BFS traversal** instead of DFS — organizes spans by level (depth)
- Returns spans as `[][]*FlamegraphSpan` — a 2D array where the index = tree level
- Applies **level sampling** when a level has > 100 spans (see the sketch below):
  - Keeps the top 5 by latency
  - Buckets the remaining spans into 50 timestamp buckets, keeps 2 per bucket
- Sliding window of 50 levels centered on the selected span

### Key Constants (flamegraph.go)

- `SPAN_LIMIT_PER_REQUEST_FOR_FLAMEGRAPH = 50` — max levels per response
- `SPAN_LIMIT_PER_LEVEL = 100` — triggers sampling within a level
- `TIMESTAMP_SAMPLING_BUCKET_COUNT = 50` — buckets for timestamp-based sampling
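
A sketch of that per-level sampling; the `DurationNano`/`TimestampNano` field names are assumptions for illustration, not the exact source:

```go
// Hypothetical sketch of sampling one flamegraph level down to a bounded size.
func sampleLevel(level []*model.FlamegraphSpan, traceStart, traceEnd int64) []*model.FlamegraphSpan {
	if len(level) <= 100 { // SPAN_LIMIT_PER_LEVEL
		return level
	}

	// Keep the top 5 spans by latency.
	sort.Slice(level, func(i, j int) bool { return level[i].DurationNano > level[j].DurationNano })
	sampled := append([]*model.FlamegraphSpan{}, level[:5]...)

	// Bucket the rest into 50 timestamp buckets, keeping at most 2 per bucket.
	bucketWidth := (traceEnd - traceStart) / 50 // TIMESTAMP_SAMPLING_BUCKET_COUNT
	if bucketWidth == 0 {
		bucketWidth = 1
	}
	perBucket := make(map[int64]int)
	for _, span := range level[5:] {
		bucket := (span.TimestampNano - traceStart) / bucketWidth
		if perBucket[bucket] < 2 {
			sampled = append(sampled, span)
			perBucket[bucket]++
		}
	}
	return sampled
}
```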
|
||||
|
||||
---
|
||||
|
||||
## Frontend Architecture
|
||||
|
||||
### Page: `frontend/src/pages/TraceDetailV2/TraceDetailV2.tsx`
|
||||
|
||||
The main trace detail page. Uses a resizable panel layout with trace content on the left and span details drawer on the right.
|
||||
|
||||
### Key State Flow
|
||||
```
|
||||
TraceDetailsV2
|
||||
├── uncollapsedNodes (string[]) — tracks which spans are expanded
|
||||
├── interestedSpanId — the span to focus on (from URL or user click)
|
||||
├── selectedSpan — currently selected span for detail drawer
|
||||
│
|
||||
├── useGetTraceV2 hook — calls waterfall API
|
||||
│ queryKey: [GET_TRACE_V2_WATERFALL, traceId, selectedSpanId, isUncollapsed]
|
||||
│ triggers: on traceId or interestedSpanId change
|
||||
│
|
||||
├── TraceMetadata — displays totalSpansCount, errorSpansCount, duration, etc.
|
||||
├── TraceFlamegraph — separate API call via useGetTraceFlamegraph hook
|
||||
└── TraceWaterfall → Success component → TableV3 (virtualized table)
|
||||
```
|
||||
|
||||
### Component Map
|
||||
|
||||
| Component | File | Purpose |
|
||||
|-----------|------|---------|
|
||||
| TraceDetailsV2 | `pages/TraceDetailV2/TraceDetailV2.tsx` | Page container, state management |
|
||||
| TraceMetadata | `container/TraceMetadata/TraceMetadata.tsx` | Top bar: trace ID, duration, total/error spans |
|
||||
| TraceWaterfall | `container/TraceWaterfall/TraceWaterfall.tsx` | State machine (loading/error/success) |
|
||||
| Success | `container/TraceWaterfall/TraceWaterfallStates/Success/Success.tsx` | Waterfall table with virtualized rows |
|
||||
| SpanOverview | (inside Success.tsx) | Left column: span name, service, collapse button |
|
||||
| SpanDuration | (inside Success.tsx) | Right column: span duration bar |
|
||||
| Filters | `container/TraceWaterfall/TraceWaterfallStates/Success/Filters/Filters.tsx` | Search/filter spans within trace |
|
||||
| TraceFlamegraph | `container/PaginatedTraceFlamegraph/PaginatedTraceFlamegraph.tsx` | Flamegraph visualization |
|
||||
| SpanDetailsDrawer | `container/SpanDetailsDrawer/SpanDetailsDrawer.tsx` | Right panel: selected span attributes |
|
||||
|
||||
### API Layer
|
||||
|
||||
| Hook | File | API |
|
||||
|------|------|-----|
|
||||
| useGetTraceV2 | `hooks/trace/useGetTraceV2.tsx` | POST `/api/v2/traces/waterfall/{traceId}` |
|
||||
| useGetTraceFlamegraph | `hooks/trace/useGetTraceFlamegraph.tsx` | POST `/api/v2/traces/flamegraph/{traceId}` |
|
||||
|
||||
### API Adapter
|
||||
- `frontend/src/api/trace/getTraceV2.tsx` — prepares POST body, filters uncollapsedSpans based on isSelectedSpanIDUnCollapsed
|
||||
|
||||
### Frontend Types
|
||||
- `frontend/src/types/api/trace/getTraceV2.ts` — Span, GetTraceV2SuccessResponse, GetTraceV2PayloadProps
|
||||
|
||||
---
|
||||
|
||||
## Models (Backend)
|
||||
|
||||
| Model | File | Used By |
|
||||
|-------|------|---------|
|
||||
| `Span` | `model/response.go:279` | Waterfall — includes Children, SubTreeNodeCount, Level, HasChildren, TagMap |
|
||||
| `FlamegraphSpan` | `model/response.go:305` | Flamegraph — lighter, includes Level, no TagMap |
|
||||
| `SpanItemV2` | `model/trace.go` | Raw DB scan result |
|
||||
| `TraceSummary` | `model/trace.go:26` | trace_summary table row: TraceID, Start, End, NumSpans |
|
||||
| `GetWaterfallSpansForTraceWithMetadataCache` | `model/cacheable.go:10` | Cache structure for waterfall |
|
||||
| `GetFlamegraphSpansForTraceCache` | `model/cacheable.go:51` | Cache structure for flamegraph |
|
||||
|
||||
---
|
||||
|
||||
## Known Gotchas
|
||||
|
||||
1. **trace_summary reads**: Always use `GROUP BY trace_id` when reading from `distributed_trace_summary`. Raw `SELECT *` can return partial unmerged rows from AggregatingMergeTree.
|
||||
|
||||
3. **Flux interval**: Traces whose end time is within 2 minutes of now always bypass cache, causing fresh DB queries on every interaction (collapse/expand/refresh).
|
||||
|
||||
4. **signoz_index_v3 has no dedup**: It's a plain MergeTree. Duplicate span rows can exist. The waterfall query uses `DISTINCT ON (span_id)` to handle this. The flamegraph query does NOT — it relies on `spanIdToSpanNodeMap` (map keyed by spanID) which naturally deduplicates but keeps the last-seen duplicate.
|
||||
|
||||
5. **SubTreeNodeCount is self-inclusive**: The count displayed next to a span's collapse button includes the span itself. For the root span, this equals the total number of nodes in the tree (real spans + missing spans).
|
||||
|
||||
6. **Waterfall pagination**: The API returns at most 500 spans per request (a sliding window). The frontend uses virtual scrolling and triggers new API calls when the user scrolls to the edges (`handleVirtualizerInstanceChanged` in Success.tsx).
|
||||
|
||||
---

## Key File Index

### Backend

| File | Purpose |
|------|---------|
| `pkg/query-service/app/http_handler.go:566-567` | Route registration |
| `pkg/query-service/app/http_handler.go:1748` | Waterfall handler |
| `pkg/query-service/app/http_handler.go:1781` | Flamegraph handler |
| `pkg/query-service/app/clickhouseReader/reader.go:831` | `GetSpansForTrace` — queries trace_summary + spans table |
| `pkg/query-service/app/clickhouseReader/reader.go:856` | Waterfall cache retrieval |
| `pkg/query-service/app/clickhouseReader/reader.go:873` | `GetWaterfallSpansForTraceWithMetadata` — main waterfall logic |
| `pkg/query-service/app/clickhouseReader/reader.go:1074` | Flamegraph cache retrieval |
| `pkg/query-service/app/clickhouseReader/reader.go:1091` | `GetFlamegraphSpansForTrace` — main flamegraph logic |
| `pkg/query-service/app/traces/tracedetail/waterfall.go` | Tree traversal, span selection, SubTreeNodeCount calculation |
| `pkg/query-service/app/traces/tracedetail/flamegraph.go` | BFS traversal, level-based organization, sampling |
| `pkg/query-service/model/response.go:279-339` | Span and FlamegraphSpan models |
| `pkg/query-service/model/queryParams.go:332-340` | Request param models |
| `pkg/query-service/model/cacheable.go` | Cache data structures |
### Frontend

| File | Purpose |
|------|---------|
| `frontend/src/pages/TraceDetailV2/TraceDetailV2.tsx` | Main page component |
| `frontend/src/container/TraceMetadata/TraceMetadata.tsx` | Header with total/error spans |
| `frontend/src/container/TraceWaterfall/TraceWaterfall.tsx` | Waterfall state machine |
| `frontend/src/container/TraceWaterfall/TraceWaterfallStates/Success/Success.tsx` | Waterfall table rendering |
| `frontend/src/container/TraceWaterfall/TraceWaterfallStates/Success/Filters/Filters.tsx` | Span search/filter |
| `frontend/src/container/PaginatedTraceFlamegraph/PaginatedTraceFlamegraph.tsx` | Flamegraph component |
| `frontend/src/hooks/trace/useGetTraceV2.tsx` | Waterfall API hook |
| `frontend/src/api/trace/getTraceV2.tsx` | Waterfall API adapter |
| `frontend/src/types/api/trace/getTraceV2.ts` | TypeScript types |
### Schema

| File | Purpose |
|------|---------|
| `signoz-otel-collector/cmd/signozschemamigrator/schema_migrator/traces_migrations.go:10-134` | signoz_index_v3 table DDL |
| `signoz-otel-collector/cmd/signozschemamigrator/schema_migrator/traces_migrations.go:271-348` | trace_summary + MV DDL |
@@ -1,71 +0,0 @@

```yaml
services:
  clickhouse:
    image: clickhouse/clickhouse-server:25.5.6
    container_name: clickhouse
    volumes:
      - ${PWD}/fs/etc/clickhouse-server/config.d/config.xml:/etc/clickhouse-server/config.d/config.xml
      - ${PWD}/fs/etc/clickhouse-server/users.d/users.xml:/etc/clickhouse-server/users.d/users.xml
      - ${PWD}/fs/tmp/var/lib/clickhouse/:/var/lib/clickhouse/
      - ${PWD}/fs/tmp/var/lib/clickhouse/user_scripts/:/var/lib/clickhouse/user_scripts/
    ports:
      - '127.0.0.1:8123:8123'
      - '127.0.0.1:9000:9000'
    tty: true
    healthcheck:
      test:
        - CMD
        - wget
        - --spider
        - -q
        - 0.0.0.0:8123/ping
      interval: 30s
      timeout: 5s
      retries: 3
    depends_on:
      - zookeeper
    environment:
      - CLICKHOUSE_SKIP_USER_SETUP=1
  zookeeper:
    image: signoz/zookeeper:3.7.1
    container_name: zookeeper
    volumes:
      - ${PWD}/fs/tmp/zookeeper:/bitnami/zookeeper
    ports:
      - '127.0.0.1:2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    healthcheck:
      test:
        - CMD-SHELL
        - curl -s -m 2 http://localhost:8080/commands/ruok | grep error | grep null
      interval: 30s
      timeout: 5s
      retries: 3
  schema-migrator-sync:
    image: signoz/signoz-schema-migrator:v0.129.12
    container_name: schema-migrator-sync
    command:
      - sync
      - --cluster-name=cluster
      - --dsn=tcp://clickhouse:9000
      - --replication=true
      - --up=
    depends_on:
      clickhouse:
        condition: service_healthy
    restart: on-failure
  schema-migrator-async:
    image: signoz/signoz-schema-migrator:v0.129.12
    container_name: schema-migrator-async
    command:
      - async
      - --cluster-name=cluster
      - --dsn=tcp://clickhouse:9000
      - --replication=true
      - --up=
    depends_on:
      clickhouse:
        condition: service_healthy
      schema-migrator-sync:
        condition: service_completed_successfully
    restart: on-failure
```
@@ -1,47 +0,0 @@

```xml
<clickhouse replace="true">
    <logger>
        <level>information</level>
        <formatting>
            <type>json</type>
        </formatting>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <display_name>cluster</display_name>
    <listen_host>0.0.0.0</listen_host>
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
    <user_directories>
        <users_xml>
            <path>users.xml</path>
        </users_xml>
        <local_directory>
            <path>/var/lib/clickhouse/access/</path>
        </local_directory>
    </user_directories>
    <distributed_ddl>
        <path>/clickhouse/task_queue/ddl</path>
    </distributed_ddl>
    <remote_servers>
        <cluster>
            <shard>
                <replica>
                    <host>clickhouse</host>
                    <port>9000</port>
                </replica>
            </shard>
        </cluster>
    </remote_servers>
    <zookeeper>
        <node>
            <host>zookeeper</host>
            <port>2181</port>
        </node>
    </zookeeper>
    <macros>
        <shard>01</shard>
        <replica>01</replica>
    </macros>
</clickhouse>
```
@@ -1,36 +0,0 @@

```xml
<?xml version="1.0"?>
<clickhouse replace="true">
    <profiles>
        <default>
            <max_memory_usage>10000000000</max_memory_usage>
            <use_uncompressed_cache>0</use_uncompressed_cache>
            <load_balancing>in_order</load_balancing>
            <log_queries>1</log_queries>
        </default>
    </profiles>
    <users>
        <default>
            <profile>default</profile>
            <networks>
                <ip>::/0</ip>
            </networks>
            <quota>default</quota>
            <access_management>1</access_management>
            <named_collection_control>1</named_collection_control>
            <show_named_collections>1</show_named_collections>
            <show_named_collections_secrets>1</show_named_collections_secrets>
        </default>
    </users>
    <quotas>
        <default>
            <interval>
                <duration>3600</duration>
                <queries>0</queries>
                <errors>0</errors>
                <result_rows>0</result_rows>
                <read_rows>0</read_rows>
                <execution_time>0</execution_time>
            </interval>
        </default>
    </quotas>
</clickhouse>
```
@@ -1,27 +0,0 @@

```yaml
services:

  postgres:
    image: postgres:15
    container_name: postgres
    environment:
      POSTGRES_DB: signoz
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    healthcheck:
      test:
        [
          "CMD",
          "pg_isready",
          "-d",
          "signoz",
          "-U",
          "postgres"
        ]
      interval: 30s
      timeout: 30s
      retries: 3
    restart: on-failure
    ports:
      - "127.0.0.1:5432:5432/tcp"
    volumes:
      - ${PWD}/fs/tmp/var/lib/postgresql/data/:/var/lib/postgresql/data/
```
@@ -1,29 +0,0 @@

```yaml
services:
  signoz-otel-collector:
    image: signoz/signoz-otel-collector:v0.129.6
    container_name: signoz-otel-collector-dev
    command:
      - --config=/etc/otel-collector-config.yaml
      - --feature-gates=-pkg.translator.prometheus.NormalizeName
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    environment:
      - OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
      - LOW_CARDINAL_EXCEPTION_GROUPING=false
    ports:
      - "4317:4317" # OTLP gRPC receiver
      - "4318:4318" # OTLP HTTP receiver
      - "13133:13133" # health check extension
    healthcheck:
      test:
        - CMD
        - wget
        - --spider
        - -q
        - localhost:13133
      interval: 30s
      timeout: 5s
      retries: 3
    restart: unless-stopped
    extra_hosts:
      - "host.docker.internal:host-gateway"
```
@@ -1,96 +0,0 @@

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        - job_name: otel-collector
          static_configs:
            - targets:
                - localhost:8888
              labels:
                job_name: otel-collector

processors:
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system]
    timeout: 2s
  signozspanmetrics/delta:
    metrics_exporter: signozclickhousemetrics
    metrics_flush_interval: 60s
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s]
    dimensions_cache_size: 100000
    aggregation_temporality: AGGREGATION_TEMPORALITY_DELTA
    enable_exp_histogram: true
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: signoz.collector.id
      - name: service.version
      - name: browser.platform
      - name: browser.mobile
      - name: k8s.cluster.name
      - name: k8s.node.name
      - name: k8s.namespace.name
      - name: host.name
      - name: host.type
      - name: container.name

extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  pprof:
    endpoint: 0.0.0.0:1777

exporters:
  clickhousetraces:
    datasource: tcp://host.docker.internal:9000/signoz_traces
    low_cardinal_exception_grouping: ${env:LOW_CARDINAL_EXCEPTION_GROUPING}
    use_new_schema: true
  signozclickhousemetrics:
    dsn: tcp://host.docker.internal:9000/signoz_metrics
  clickhouselogsexporter:
    dsn: tcp://host.docker.internal:9000/signoz_logs
    timeout: 10s
    use_new_schema: true

service:
  telemetry:
    logs:
      encoding: json
  extensions:
    - health_check
    - pprof
  pipelines:
    traces:
      receivers: [otlp]
      processors: [signozspanmetrics/delta, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [signozclickhousemetrics]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [signozclickhousemetrics]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhouselogsexporter]
```
@@ -1,7 +1,6 @@

```
.git
.github
.vscode
.devenv
README.md
deploy
sample-apps
```
**.github/CODEOWNERS** (vendored, 131 lines changed)

@@ -1,133 +1,10 @@

```
# CODEOWNERS info: https://help.github.com/en/articles/about-code-owners
# Owners are automatically requested for review for PRs that changes code
# that they own.
/frontend/ @SigNoz/frontend-maintainers

# Onboarding
/frontend/src/container/OnboardingV2Container/onboarding-configs/onboarding-config-with-links.json @makeavish
/frontend/src/container/OnboardingV2Container/AddDataSource/AddDataSource.tsx @makeavish

/frontend/ @YounixM
/frontend/src/container/MetricsApplication @srikanthccv
/frontend/src/container/NewWidget/RightContainer/types.ts @srikanthccv
/deploy/ @SigNoz/devops
/sample-apps/ @SigNoz/devops
.github @SigNoz/devops

# Scaffold Owners
/pkg/config/ @vikrantgupta25
/pkg/errors/ @vikrantgupta25
/pkg/factory/ @vikrantgupta25
/pkg/types/ @vikrantgupta25
/pkg/valuer/ @vikrantgupta25
/cmd/ @vikrantgupta25
.golangci.yml @vikrantgupta25

# Zeus Owners
/pkg/zeus/ @vikrantgupta25
/ee/zeus/ @vikrantgupta25
/pkg/licensing/ @vikrantgupta25
/ee/licensing/ @vikrantgupta25

# SQL Owners
/pkg/sqlmigration/ @vikrantgupta25
/ee/sqlmigration/ @vikrantgupta25
/pkg/sqlschema/ @vikrantgupta25
/ee/sqlschema/ @vikrantgupta25

# Analytics Owners
/pkg/analytics/ @vikrantgupta25
/pkg/statsreporter/ @vikrantgupta25

# Querier Owners
/pkg/querier/ @srikanthccv
/pkg/variables/ @srikanthccv
/pkg/types/querybuildertypes/ @srikanthccv
/pkg/types/telemetrytypes/ @srikanthccv
/pkg/querybuilder/ @srikanthccv
/pkg/telemetrylogs/ @srikanthccv
/pkg/telemetrymetadata/ @srikanthccv
/pkg/telemetrymetrics/ @srikanthccv
/pkg/telemetrytraces/ @srikanthccv

# Metrics
/pkg/types/metrictypes/ @srikanthccv
/pkg/types/metricsexplorertypes/ @srikanthccv
/pkg/modules/metricsexplorer/ @srikanthccv
/pkg/prometheus/ @srikanthccv

# APM
/pkg/types/servicetypes/ @srikanthccv
/pkg/types/apdextypes/ @srikanthccv
/pkg/modules/apdex/ @srikanthccv
/pkg/modules/services/ @srikanthccv

# Dashboard
/pkg/types/dashboardtypes/ @srikanthccv
/pkg/modules/dashboard/ @srikanthccv

# Rule/Alertmanager
/pkg/types/ruletypes/ @srikanthccv
/pkg/types/alertmanagertypes @srikanthccv
/pkg/alertmanager/ @srikanthccv
/pkg/ruler/ @srikanthccv

# Correlation-adjacent
/pkg/contextlinks/ @srikanthccv
/pkg/types/parsertypes/ @srikanthccv
/pkg/queryparser/ @srikanthccv

# AuthN / AuthZ Owners
/pkg/authz/ @vikrantgupta25
/ee/authz/ @vikrantgupta25
/pkg/authn/ @vikrantgupta25
/ee/authn/ @vikrantgupta25
/pkg/modules/user/ @vikrantgupta25
/pkg/modules/session/ @vikrantgupta25
/pkg/modules/organization/ @vikrantgupta25
/pkg/modules/authdomain/ @vikrantgupta25
/pkg/modules/role/ @vikrantgupta25

# Integration tests
/tests/integration/ @vikrantgupta25

# OpenAPI types generator
/frontend/src/api @SigNoz/frontend-maintainers

# Dashboard Owners
/frontend/src/hooks/dashboard/ @SigNoz/pulse-frontend

## Dashboard Types
/frontend/src/api/types/dashboard/ @SigNoz/pulse-frontend

## Dashboard List
/frontend/src/pages/DashboardsListPage/ @SigNoz/pulse-frontend
/frontend/src/container/ListOfDashboard/ @SigNoz/pulse-frontend

## Dashboard Page
/frontend/src/pages/DashboardPage/ @SigNoz/pulse-frontend
/frontend/src/container/DashboardContainer/ @SigNoz/pulse-frontend
/frontend/src/container/GridCardLayout/ @SigNoz/pulse-frontend
/frontend/src/container/NewWidget/ @SigNoz/pulse-frontend

## Public Dashboard Page
/frontend/src/pages/PublicDashboard/ @SigNoz/pulse-frontend
/frontend/src/container/PublicDashboardContainer/ @SigNoz/pulse-frontend
```
**.github/pull_request_template.md** (vendored, 86 lines changed)

@@ -1,85 +1,17 @@

```markdown
## Pull Request
### Summary

---
<!-- ✍️ A clear and concise description...-->

### 📄 Summary
> Why does this change exist?
> What problem does it solve, and why is this the right approach?
#### Related Issues / PR's

<!-- ✍️ Add the issues being resolved here and related PR's where applicable -->

#### Screenshots

#### Screenshots / Screen Recordings (if applicable)
> Include screenshots or screen recordings that clearly show the behavior before the change and the result after the change. This helps reviewers quickly understand the impact and verify the update.
NA

<!-- ✍️ Add screenshots of before and after changes where applicable-->

#### Issues closed by this PR
> Reference issues using `Closes #issue-number` to enable automatic closure on merge.
#### Affected Areas and Manually Tested Areas

---

### ✅ Change Type
_Select all that apply_

- [ ] ✨ Feature
- [ ] 🐛 Bug fix
- [ ] ♻️ Refactor
- [ ] 🛠️ Infra / Tooling
- [ ] 🧪 Test-only

---

### 🐛 Bug Context
> Required if this PR fixes a bug

#### Root Cause
> What caused the issue?
> Regression, faulty assumption, edge case, refactor, etc.

#### Fix Strategy
> How does this PR address the root cause?

---

### 🧪 Testing Strategy
> How was this change validated?

- Tests added/updated:
- Manual verification:
- Edge cases covered:

---

### ⚠️ Risk & Impact Assessment
> What could break? How do we recover?

- Blast radius:
- Potential regressions:
- Rollback plan:

---

### 📝 Changelog
> Fill only if this affects users, APIs, UI, or documented behavior
> Use **N/A** for internal or non-user-facing changes

| Field | Value |
|------|-------|
| Deployment Type | Cloud / OSS / Enterprise |
| Change Type | Feature / Bug Fix / Maintenance |
| Description | User-facing summary |

---

### 📋 Checklist
- [ ] Tests added or explicitly not required
- [ ] Manually tested
- [ ] Breaking changes documented
- [ ] Backward compatibility considered

---

## 👀 Notes for Reviewers

<!-- Anything reviewers should keep in mind while reviewing -->

---
<!-- ✍️ Add details of blast radius and dev testing areas where applicable-->
```
**.github/workflows/README.md** (vendored, new file, 42 lines)

@@ -0,0 +1,42 @@

```markdown
# GitHub actions

## Testing the UI manually on each PR

First we need to make sure the UI is ready:
* Check the `Start tunnel` step in the `e2e-k8s/deploy-on-k3s-cluster` job and make sure you see `your url is: https://pull-<number>-signoz.loca.lt`
* This job will run until the PR is merged or closed, to keep the local tunneling alive
  - GitHub will cancel this job if the PR wasn't merged after 6h
  - if the job was cancelled, go to the action and press `Re-run all jobs`

Now you can open your browser at https://pull-<number>-signoz.loca.lt and check the UI.

## Environment Variables

To run the GitHub workflows, a few environment variables need to be added to GitHub secrets

| Variables | Description | Example |
|-----------|-------------|---------|
| REPONAME | Provide the DockerHub user/organisation name of the image. | signoz |
| DOCKERHUB_USERNAME | Docker hub username | signoz |
| DOCKERHUB_TOKEN | Docker hub password/token with push permission | **** |
| SONAR_TOKEN | [SonarCloud](https://sonarcloud.io) token | **** |
```
**.github/workflows/build-community.yaml** (vendored, 82 lines changed)

@@ -1,82 +0,0 @@

```yaml
name: build-community

on:
  push:
    tags:
      - "v[0-9]+.[0-9]+.[0-9]+"
      - "v[0-9]+.[0-9]+.[0-9]+-rc.[0-9]+"

defaults:
  run:
    shell: bash

env:
  PRIMUS_HOME: .primus
  MAKE: make --no-print-directory --makefile=.primus/src/make/main.mk

jobs:
  prepare:
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.build-info.outputs.version }}
      hash: ${{ steps.build-info.outputs.hash }}
      time: ${{ steps.build-info.outputs.time }}
      branch: ${{ steps.build-info.outputs.branch }}
    steps:
      - name: self-checkout
        uses: actions/checkout@v4
      - id: token
        name: github-token-gen
        uses: actions/create-github-app-token@v1
        with:
          app-id: ${{ secrets.PRIMUS_APP_ID }}
          private-key: ${{ secrets.PRIMUS_PRIVATE_KEY }}
          owner: ${{ github.repository_owner }}
      - name: primus-checkout
        uses: actions/checkout@v4
        with:
          repository: signoz/primus
          ref: main
          path: .primus
          token: ${{ steps.token.outputs.token }}
      - name: build-info
        id: build-info # assumption: id added so the steps.build-info outputs above resolve
        run: |
          echo "version=$($MAKE info-version)" >> $GITHUB_OUTPUT
          echo "hash=$($MAKE info-commit-short)" >> $GITHUB_OUTPUT
          echo "time=$($MAKE info-timestamp)" >> $GITHUB_OUTPUT
          echo "branch=$($MAKE info-branch)" >> $GITHUB_OUTPUT
  js-build:
    uses: signoz/primus.workflows/.github/workflows/js-build.yaml@main
    needs: prepare
    secrets: inherit
    with:
      PRIMUS_REF: main
      JS_SRC: frontend
      JS_OUTPUT_ARTIFACT_CACHE_KEY: community-jsbuild-${{ github.sha }}
      JS_OUTPUT_ARTIFACT_PATH: frontend/build
      DOCKER_BUILD: false
      DOCKER_MANIFEST: false
  go-build:
    uses: signoz/primus.workflows/.github/workflows/go-build.yaml@main
    needs: [prepare, js-build]
    secrets: inherit
    with:
      PRIMUS_REF: main
      GO_VERSION: 1.24
      GO_NAME: signoz-community
      GO_INPUT_ARTIFACT_CACHE_KEY: community-jsbuild-${{ github.sha }}
      GO_INPUT_ARTIFACT_PATH: frontend/build
      GO_BUILD_CONTEXT: ./cmd/community
      GO_BUILD_FLAGS: >-
        -tags timetzdata
        -ldflags='-s -w
        -X github.com/SigNoz/signoz/pkg/version.version=${{ needs.prepare.outputs.version }}
        -X github.com/SigNoz/signoz/pkg/version.variant=community
        -X github.com/SigNoz/signoz/pkg/version.hash=${{ needs.prepare.outputs.hash }}
        -X github.com/SigNoz/signoz/pkg/version.time=${{ needs.prepare.outputs.time }}
        -X github.com/SigNoz/signoz/pkg/version.branch=${{ needs.prepare.outputs.branch }}
        -X github.com/SigNoz/signoz/pkg/analytics.key=9kRrJ7oPCGPEJLF6QjMPLt5bljFhRQBr'
      DOCKER_BASE_IMAGES: '{"alpine": "alpine:3.20.3"}'
      DOCKER_DOCKERFILE_PATH: ./cmd/community/Dockerfile.multi-arch
      DOCKER_MANIFEST: true
      DOCKER_PROVIDERS: dockerhub
```
**.github/workflows/build-enterprise.yaml** (vendored, 116 lines changed)

@@ -1,116 +0,0 @@

```yaml
name: build-enterprise

on:
  push:
    tags:
      - v*

defaults:
  run:
    shell: bash

env:
  PRIMUS_HOME: .primus
  MAKE: make --no-print-directory --makefile=.primus/src/make/main.mk

jobs:
  prepare:
    runs-on: ubuntu-latest
    outputs:
      docker_providers: ${{ steps.set-docker-providers.outputs.providers }}
      version: ${{ steps.build-info.outputs.version }}
      hash: ${{ steps.build-info.outputs.hash }}
      time: ${{ steps.build-info.outputs.time }}
      branch: ${{ steps.build-info.outputs.branch }}
    steps:
      - name: self-checkout
        uses: actions/checkout@v4
      - id: token
        name: github-token-gen
        uses: actions/create-github-app-token@v1
        with:
          app-id: ${{ secrets.PRIMUS_APP_ID }}
          private-key: ${{ secrets.PRIMUS_PRIVATE_KEY }}
          owner: ${{ github.repository_owner }}
      - name: primus-checkout
        uses: actions/checkout@v4
        with:
          repository: signoz/primus
          ref: main
          path: .primus
          token: ${{ steps.token.outputs.token }}
      - name: build-info
        id: build-info
        run: |
          echo "version=$($MAKE info-version)" >> $GITHUB_OUTPUT
          echo "hash=$($MAKE info-commit-short)" >> $GITHUB_OUTPUT
          echo "time=$($MAKE info-timestamp)" >> $GITHUB_OUTPUT
          echo "branch=$($MAKE info-branch)" >> $GITHUB_OUTPUT
      - name: set-docker-providers
        id: set-docker-providers
        run: |
          if [[ ${{ github.event.ref }} =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+$ || ${{ github.event.ref }} =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+-rc\.[0-9]+$ ]]; then
            echo "providers=dockerhub gcp" >> $GITHUB_OUTPUT
          else
            echo "providers=gcp" >> $GITHUB_OUTPUT
          fi
      - name: create-dotenv
        run: |
          mkdir -p frontend
          echo 'CI=1' > frontend/.env
          echo 'INTERCOM_APP_ID="${{ secrets.INTERCOM_APP_ID }}"' >> frontend/.env
          echo 'SEGMENT_ID="${{ secrets.SEGMENT_ID }}"' >> frontend/.env
          echo 'SENTRY_AUTH_TOKEN="${{ secrets.SENTRY_AUTH_TOKEN }}"' >> frontend/.env
          echo 'SENTRY_ORG="${{ secrets.SENTRY_ORG }}"' >> frontend/.env
          echo 'SENTRY_PROJECT_ID="${{ secrets.SENTRY_PROJECT_ID }}"' >> frontend/.env
          echo 'SENTRY_DSN="${{ secrets.SENTRY_DSN }}"' >> frontend/.env
          echo 'TUNNEL_URL="${{ secrets.TUNNEL_URL }}"' >> frontend/.env
          echo 'TUNNEL_DOMAIN="${{ secrets.TUNNEL_DOMAIN }}"' >> frontend/.env
          echo 'POSTHOG_KEY="${{ secrets.POSTHOG_KEY }}"' >> frontend/.env
          echo 'PYLON_APP_ID="${{ secrets.PYLON_APP_ID }}"' >> frontend/.env
          echo 'APPCUES_APP_ID="${{ secrets.APPCUES_APP_ID }}"' >> frontend/.env
          echo 'PYLON_IDENTITY_SECRET="${{ secrets.PYLON_IDENTITY_SECRET }}"' >> frontend/.env
      - name: cache-dotenv
        uses: actions/cache@v4
        with:
          path: frontend/.env
          key: enterprise-dotenv-${{ github.sha }}
  js-build:
    uses: signoz/primus.workflows/.github/workflows/js-build.yaml@main
    needs: prepare
    secrets: inherit
    with:
      PRIMUS_REF: main
      JS_SRC: frontend
      JS_INPUT_ARTIFACT_CACHE_KEY: enterprise-dotenv-${{ github.sha }}
      JS_INPUT_ARTIFACT_PATH: frontend/.env
      JS_OUTPUT_ARTIFACT_CACHE_KEY: enterprise-jsbuild-${{ github.sha }}
      JS_OUTPUT_ARTIFACT_PATH: frontend/build
      DOCKER_BUILD: false
      DOCKER_MANIFEST: false
  go-build:
    uses: signoz/primus.workflows/.github/workflows/go-build.yaml@main
    needs: [prepare, js-build]
    secrets: inherit
    with:
      PRIMUS_REF: main
      GO_VERSION: 1.24
      GO_INPUT_ARTIFACT_CACHE_KEY: enterprise-jsbuild-${{ github.sha }}
      GO_INPUT_ARTIFACT_PATH: frontend/build
      GO_BUILD_CONTEXT: ./cmd/enterprise
      GO_BUILD_FLAGS: >-
        -tags timetzdata
        -ldflags='-s -w
        -X github.com/SigNoz/signoz/pkg/version.version=${{ needs.prepare.outputs.version }}
        -X github.com/SigNoz/signoz/pkg/version.variant=enterprise
        -X github.com/SigNoz/signoz/pkg/version.hash=${{ needs.prepare.outputs.hash }}
        -X github.com/SigNoz/signoz/pkg/version.time=${{ needs.prepare.outputs.time }}
        -X github.com/SigNoz/signoz/pkg/version.branch=${{ needs.prepare.outputs.branch }}
        -X github.com/SigNoz/signoz/ee/zeus.url=https://api.signoz.cloud
        -X github.com/SigNoz/signoz/ee/zeus.deprecatedURL=https://license.signoz.io
        -X github.com/SigNoz/signoz/ee/query-service/constants.LicenseSignozIo=https://license.signoz.io/api/v1
        -X github.com/SigNoz/signoz/pkg/analytics.key=9kRrJ7oPCGPEJLF6QjMPLt5bljFhRQBr'
      DOCKER_BASE_IMAGES: '{"alpine": "alpine:3.20.3"}'
      DOCKER_DOCKERFILE_PATH: ./cmd/enterprise/Dockerfile.multi-arch
      DOCKER_MANIFEST: true
      DOCKER_PROVIDERS: ${{ needs.prepare.outputs.docker_providers }}
```
**.github/workflows/build-staging.yaml** (vendored, 127 lines changed)

@@ -1,127 +0,0 @@

```yaml
name: build-staging

on:
  push:
    branches:
      - main
  pull_request:
    types: [labeled]

defaults:
  run:
    shell: bash

env:
  PRIMUS_HOME: .primus
  MAKE: make --no-print-directory --makefile=.primus/src/make/main.mk

jobs:
  prepare:
    runs-on: ubuntu-latest
    if: ${{ contains(github.event.label.name, 'staging:') || github.event.ref == 'refs/heads/main' }}
    outputs:
      version: ${{ steps.build-info.outputs.version }}
      hash: ${{ steps.build-info.outputs.hash }}
      time: ${{ steps.build-info.outputs.time }}
      branch: ${{ steps.build-info.outputs.branch }}
      deployment: ${{ steps.build-info.outputs.deployment }}
    steps:
      - name: self-checkout
        uses: actions/checkout@v4
      - id: token
        name: github-token-gen
        uses: actions/create-github-app-token@v1
        with:
          app-id: ${{ secrets.PRIMUS_APP_ID }}
          private-key: ${{ secrets.PRIMUS_PRIVATE_KEY }}
          owner: ${{ github.repository_owner }}
      - name: primus-checkout
        uses: actions/checkout@v4
        with:
          repository: signoz/primus
          ref: main
          path: .primus
          token: ${{ steps.token.outputs.token }}
      - name: build-info
        id: build-info
        run: |
          echo "version=$($MAKE info-version)" >> $GITHUB_OUTPUT
          echo "hash=$($MAKE info-commit-short)" >> $GITHUB_OUTPUT
          echo "time=$($MAKE info-timestamp)" >> $GITHUB_OUTPUT
          echo "branch=$($MAKE info-branch)" >> $GITHUB_OUTPUT

          staging_label="${{ github.event.label.name }}"
          if [[ "${staging_label}" == "staging:"* ]]; then
            deployment=${staging_label#"staging:"}
          elif [[ "${{ github.event.ref }}" == "refs/heads/main" ]]; then
            deployment="staging"
          else
            echo "error: not able to determine deployment - please verify the PR label or the branch"
            exit 1
          fi
          echo "deployment=${deployment}" >> $GITHUB_OUTPUT
      - name: create-dotenv
        run: |
          mkdir -p frontend
          echo 'CI=1' > frontend/.env
          echo 'TUNNEL_URL="${{ secrets.NP_TUNNEL_URL }}"' >> frontend/.env
          echo 'TUNNEL_DOMAIN="${{ secrets.NP_TUNNEL_DOMAIN }}"' >> frontend/.env
          echo 'PYLON_APP_ID="${{ secrets.NP_PYLON_APP_ID }}"' >> frontend/.env
          echo 'APPCUES_APP_ID="${{ secrets.NP_APPCUES_APP_ID }}"' >> frontend/.env
          echo 'PYLON_IDENTITY_SECRET="${{ secrets.NP_PYLON_IDENTITY_SECRET }}"' >> frontend/.env
      - name: cache-dotenv
        uses: actions/cache@v4
        with:
          path: frontend/.env
          key: staging-dotenv-${{ github.sha }}
  js-build:
    uses: signoz/primus.workflows/.github/workflows/js-build.yaml@main
    needs: prepare
    secrets: inherit
    with:
      PRIMUS_REF: main
      JS_SRC: frontend
      JS_INPUT_ARTIFACT_CACHE_KEY: staging-dotenv-${{ github.sha }}
      JS_INPUT_ARTIFACT_PATH: frontend/.env
      JS_OUTPUT_ARTIFACT_CACHE_KEY: staging-jsbuild-${{ github.sha }}
      JS_OUTPUT_ARTIFACT_PATH: frontend/build
      DOCKER_BUILD: false
      DOCKER_MANIFEST: false
  go-build:
    uses: signoz/primus.workflows/.github/workflows/go-build.yaml@main
    needs: [prepare, js-build]
    secrets: inherit
    with:
      PRIMUS_REF: main
      GO_VERSION: 1.24
      GO_INPUT_ARTIFACT_CACHE_KEY: staging-jsbuild-${{ github.sha }}
      GO_INPUT_ARTIFACT_PATH: frontend/build
      GO_BUILD_CONTEXT: ./cmd/enterprise
      GO_BUILD_FLAGS: >-
        -tags timetzdata
        -ldflags='-s -w
        -X github.com/SigNoz/signoz/pkg/version.version=${{ needs.prepare.outputs.version }}
        -X github.com/SigNoz/signoz/pkg/version.variant=enterprise
        -X github.com/SigNoz/signoz/pkg/version.hash=${{ needs.prepare.outputs.hash }}
        -X github.com/SigNoz/signoz/pkg/version.time=${{ needs.prepare.outputs.time }}
        -X github.com/SigNoz/signoz/pkg/version.branch=${{ needs.prepare.outputs.branch }}
        -X github.com/SigNoz/signoz/ee/zeus.url=https://api.staging.signoz.cloud
        -X github.com/SigNoz/signoz/ee/zeus.deprecatedURL=https://license.staging.signoz.cloud
        -X github.com/SigNoz/signoz/ee/query-service/constants.LicenseSignozIo=https://license.staging.signoz.cloud/api/v1
        -X github.com/SigNoz/signoz/pkg/analytics.key=9kRrJ7oPCGPEJLF6QjMPLt5bljFhRQBr'
      DOCKER_BASE_IMAGES: '{"alpine": "alpine:3.20.3"}'
      DOCKER_DOCKERFILE_PATH: ./cmd/enterprise/Dockerfile.multi-arch
      DOCKER_MANIFEST: true
      DOCKER_PROVIDERS: gcp
  staging:
    if: ${{ contains(github.event.label.name, 'staging:') || github.event.ref == 'refs/heads/main' }}
    uses: signoz/primus.workflows/.github/workflows/github-trigger.yaml@main
    secrets: inherit
    needs: [prepare, go-build]
    with:
      PRIMUS_REF: main
      GITHUB_ENVIRONMENT: staging
      GITHUB_SILENT: true
      GITHUB_REPOSITORY_NAME: charts-saas-v3-staging
      GITHUB_EVENT_NAME: releaser
      GITHUB_EVENT_PAYLOAD: '{"deployment": "${{ needs.prepare.outputs.deployment }}", "signoz_version": "${{ needs.prepare.outputs.version }}"}'
```
**.github/workflows/build.yaml** (vendored, new file, 89 lines)

@@ -0,0 +1,89 @@

```yaml
name: build-pipeline

on:
  pull_request:
    branches:
      - main
      - release/v*

jobs:
  check-no-ee-references:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run check
        run: make check-no-ee-references

  build-frontend:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Install dependencies
        run: cd frontend && yarn install
      - name: Run ESLint
        run: cd frontend && npm run lint
      - name: Run Jest
        run: cd frontend && npm run jest
      - name: TSC
        run: yarn tsc
        working-directory: ./frontend
      - name: Build frontend docker image
        shell: bash
        run: |
          make build-frontend-amd64

  build-frontend-ee:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Create .env file
        run: |
          echo 'INTERCOM_APP_ID="${{ secrets.INTERCOM_APP_ID }}"' > frontend/.env
          echo 'SEGMENT_ID="${{ secrets.SEGMENT_ID }}"' >> frontend/.env
      - name: Install dependencies
        run: cd frontend && yarn install
      - name: Run ESLint
        run: cd frontend && npm run lint
      - name: Run Jest
        run: cd frontend && npm run jest
      - name: TSC
        run: yarn tsc
        working-directory: ./frontend
      - name: Build frontend docker image
        shell: bash
        run: |
          make build-frontend-amd64

  build-query-service:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup golang
        uses: actions/setup-go@v4
        with:
          go-version: "1.21"
      - name: Run tests
        shell: bash
        run: |
          make test
      - name: Build query-service image
        shell: bash
        run: |
          make build-query-service-amd64

  build-ee-query-service:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup golang
        uses: actions/setup-go@v4
        with:
          go-version: "1.21"
      - name: Build EE query-service image
        shell: bash
        run: |
          make build-ee-query-service-amd64
```
**.github/workflows/codeball.yml** (vendored, new file, 17 lines)

@@ -0,0 +1,17 @@

```yaml
name: Codeball
on: [pull_request]

jobs:
  codeball_job:
    runs-on: ubuntu-latest
    name: Codeball
    steps:
      # Run Codeball on all new Pull Requests 🚀
      # For customizations and more documentation, see https://github.com/sturdy-dev/codeball-action
      - name: Codeball
        uses: sturdy-dev/codeball-action@v2
        with:
          approvePullRequests: "true"
          labelPullRequestsWhenApproved: "true"
          labelPullRequestsWhenReviewNeeded: "false"
          failJobsWhenReviewNeeded: "false"
```
**.github/workflows/codeql.yaml** (vendored, new file, 71 lines)

@@ -0,0 +1,71 @@

```yaml
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"

on:
  push:
    branches: [ main, v* ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ main ]
  schedule:
    - cron: '32 5 * * 5'

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: [ 'go', 'javascript', 'python' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
        # Learn more:
        # https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v2
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.
          # queries: ./path/to/local/query, your-org/your-repo/queries@main

      # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
      # If this step fails, then you should remove it and run the build manually (see below)
      - name: Autobuild
        uses: github/codeql-action/autobuild@v2

      # ℹ️ Command-line programs to run using the OS shell.
      # 📚 https://git.io/JvXDl

      # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
      # and modify them (or add more) to build your code if your project
      # uses a compiled language

      #- run: |
      #   make bootstrap
      #   make release

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v2
```
**.github/workflows/commitci.yaml** (vendored, 34 lines changed)

@@ -1,34 +0,0 @@

```yaml
name: commitci

on:
  pull_request:
    branches:
      - main
  pull_request_target:
    types:
      - labeled

jobs:
  refcheck:
    if: |
      (github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v4
      - name: check
        run: |
          if grep -R --include="*.go" '.*/ee/.*' pkg/; then
            echo "Error: Found references to 'ee' packages in 'pkg' directory"
            exit 1
          else
            echo "No references to 'ee' packages found in 'pkg' directory"
          fi

          if grep -R --include="*.go" '.*/ee/.*' cmd/community/; then
            echo "Error: Found references to 'ee' packages in 'cmd/community' directory"
            exit 1
          else
            echo "No references to 'ee' packages found in 'cmd/community' directory"
          fi
```
**.github/workflows/commitlint.yml** (vendored, new file, 13 lines)

@@ -0,0 +1,13 @@

```yaml
name: commitlint
on: [pull_request]
defaults:
  run:
    working-directory: frontend
jobs:
  lint-commits:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: wagoid/commitlint-github-action@v5
```
**.github/workflows/create-issue-on-pr-merge.yml** (vendored, new file, 27 lines)

@@ -0,0 +1,27 @@

```yaml
on:
  pull_request_target:
    types:
      - closed

env:
  GITHUB_ACCESS_TOKEN: ${{ secrets.CI_BOT_TOKEN }}
  PR_NUMBER: ${{ github.event.number }}
jobs:
  create_issue_on_merge:
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Codebase
        uses: actions/checkout@v4
        with:
          repository: signoz/gh-bot
      - name: Use Node v16
        uses: actions/setup-node@v4
        with:
          node-version: 16
      - name: Setup Cache & Install Dependencies
        uses: bahmutov/npm-install@v1
        with:
          install-command: yarn --frozen-lockfile
      - name: Comment on PR
        run: node create-issue.js
```
**.github/workflows/dependency-review.yml** (vendored, new file, 22 lines)

@@ -0,0 +1,22 @@

```yaml
# Dependency Review Action
#
# This Action will scan dependency manifest files that change as part of a Pull Request, surfacing known-vulnerable versions of the packages declared or updated in the PR. Once installed, if the workflow run is marked as required, PRs introducing known-vulnerable packages will be blocked from merging.
#
# Source repository: https://github.com/actions/dependency-review-action
# Public documentation: https://docs.github.com/en/code-security/supply-chain-security/understanding-your-software-supply-chain/about-dependency-review#dependency-review-enforcement
name: 'Dependency Review'
on: [pull_request]

permissions:
  contents: read

jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - name: 'Checkout Repository'
        uses: actions/checkout@v4
      - name: 'Dependency Review'
        with:
          fail-on-severity: high
        uses: actions/dependency-review-action@v3
```
**.github/workflows/e2e-k3s.yaml** (vendored, new file, 93 lines)

@@ -0,0 +1,93 @@

```yaml
name: e2e-k3s

on:
  pull_request:
    types: [labeled]

jobs:

  e2e-k3s:
    runs-on: ubuntu-latest
    if: ${{ github.event.label.name == 'ok-to-test' }}
    env:
      DOCKER_TAG: pull-${{ github.event.number }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup golang
        uses: actions/setup-go@v4
        with:
          go-version: "1.21"

      - name: Build query-service image
        env:
          DEV_BUILD: 1
        run: make build-ee-query-service-amd64

      - name: Build frontend image
        run: make build-frontend-amd64

      - name: Create a k3s cluster
        uses: AbsaOSS/k3d-action@v2
        with:
          cluster-name: "signoz"

      - name: Inject the images to the cluster
        run: k3d image import signoz/query-service:$DOCKER_TAG signoz/frontend:$DOCKER_TAG -c signoz

      - name: Set up HotROD sample-app
        run: |
          # create sample-application namespace
          kubectl create ns sample-application

          # apply hotrod k8s manifest file
          kubectl -n sample-application apply -f https://raw.githubusercontent.com/SigNoz/signoz/main/sample-apps/hotrod/hotrod.yaml

          # wait for all deployments in sample-application namespace to be READY
          kubectl -n sample-application get deploy --output name | xargs -r -n1 -t kubectl -n sample-application rollout status --timeout=300s

      - name: Deploy the app
        run: |
          # add signoz helm repository
          helm repo add signoz https://charts.signoz.io

          # create platform namespace
          kubectl create ns platform

          # installing signoz using helm
          helm install my-release signoz/signoz -n platform \
            --wait \
            --timeout 10m0s \
            --set frontend.service.type=LoadBalancer \
            --set queryService.image.tag=$DOCKER_TAG \
            --set frontend.image.tag=$DOCKER_TAG

          # get pods, services and the container images
          kubectl get pods -n platform
          kubectl get svc -n platform

      - name: Kick off a sample-app workload
        run: |
          # start the locust swarm
          kubectl --namespace sample-application run strzal --image=djbingham/curl \
            --restart='OnFailure' -i --tty --rm --command -- curl -X POST -F \
            'user_count=6' -F 'spawn_rate=2' http://locust-master:8089/swarm

      - name: Get short commit SHA, display tunnel URL and IP Address of the worker node
        id: get-subdomain
        run: |
          subdomain="pr-$(git rev-parse --short HEAD)"
          echo "URL for tunnelling: https://$subdomain.loca.lt"
          echo "subdomain=$subdomain" >> $GITHUB_OUTPUT
          worker_ip="$(curl -4 -s ipconfig.io/ip)"
          echo "Worker node IP address: $worker_ip"

      - name: Start tunnel
        env:
          SUBDOMAIN: ${{ steps.get-subdomain.outputs.subdomain }}
        run: |
          npm install -g localtunnel
          host=$(kubectl get svc -n platform | grep frontend | tr -s ' ' | cut -d" " -f4)
          port=$(kubectl get svc -n platform | grep frontend | tr -s ' ' | cut -d" " -f5 | cut -d":" -f1)
          lt -p $port -l $host -s $SUBDOMAIN
```
**.github/workflows/goci.yaml** (vendored, 95 lines changed)

@@ -1,95 +0,0 @@

```yaml
name: goci

on:
  pull_request:
    branches:
      - main
  pull_request_target:
    types:
      - labeled

jobs:
  test:
    if: |
      (github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
    uses: signoz/primus.workflows/.github/workflows/go-test.yaml@main
    secrets: inherit
    with:
      PRIMUS_REF: main
      GO_TEST_CONTEXT: ./...
      GO_VERSION: 1.24
  fmt:
    if: |
      (github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
    uses: signoz/primus.workflows/.github/workflows/go-fmt.yaml@main
    secrets: inherit
    with:
      PRIMUS_REF: main
      GO_VERSION: 1.24
  lint:
    if: |
      (github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
    uses: signoz/primus.workflows/.github/workflows/go-lint.yaml@main
    secrets: inherit
    with:
      PRIMUS_REF: main
      GO_VERSION: 1.24
  deps:
    if: |
      (github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
    uses: signoz/primus.workflows/.github/workflows/go-deps.yaml@main
    secrets: inherit
    with:
      PRIMUS_REF: main
      GO_VERSION: 1.24
  build:
    if: |
      (github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
    runs-on: ubuntu-latest
    steps:
      - name: self-checkout
        uses: actions/checkout@v4
      - name: go-install
        uses: actions/setup-go@v5
        with:
          go-version: "1.24"
      - name: qemu-install
        uses: docker/setup-qemu-action@v3
      - name: aarch64-install
        run: |
          set -ex
          sudo apt-get update
          sudo apt-get install -y gcc-aarch64-linux-gnu musl-tools
      - name: node-install
        uses: actions/setup-node@v5
        with:
          node-version: "22"
      - name: docker-community
        shell: bash
        run: |
          make docker-build-community
      - name: docker-enterprise
        shell: bash
        run: |
          make docker-build-enterprise
  openapi:
    if: |
      (github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
    runs-on: ubuntu-latest
    steps:
      - name: self-checkout
        uses: actions/checkout@v4
      - name: go-install
        uses: actions/setup-go@v5
        with:
          go-version: "1.24"
      - name: generate-openapi
        run: |
          go run cmd/enterprise/*.go generate openapi
          git diff --compact-summary --exit-code || (echo; echo "Unexpected difference in openapi spec. Run go run cmd/enterprise/*.go generate openapi locally and commit."; exit 1)
```
**.github/workflows/gor-signoz-community.yaml** (vendored, 155 lines changed)

@@ -1,155 +0,0 @@

```yaml
name: gor-signoz-community

on:
  push:
    tags:
      - 'v[0-9]+.[0-9]+.[0-9]+'
      - 'v[0-9]+.[0-9]+.[0-9]+-rc.[0-9]+'

permissions:
  contents: write

jobs:
  prepare:
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: get-sha
        shell: bash
        run: |
          echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_ENV
      - name: build-frontend
        run: make js-build
      - name: upload-frontend-artifact
        uses: actions/upload-artifact@v4
        with:
          name: community-frontend-build-${{ env.sha_short }}
          path: frontend/build
  build:
    needs: prepare
    strategy:
      matrix:
        os:
          - ubuntu-latest
          - macos-latest
    env:
      CONFIG_PATH: cmd/community/.goreleaser.yaml
    runs-on: ${{ matrix.os }}
    steps:
      - name: checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: setup-qemu
        uses: docker/setup-qemu-action@v3
        if: matrix.os == 'ubuntu-latest'
      - name: setup-buildx
        uses: docker/setup-buildx-action@v3
        if: matrix.os == 'ubuntu-latest'
      - name: ghcr-login
        uses: docker/login-action@v3
        if: matrix.os != 'macos-latest'
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: setup-go
        uses: actions/setup-go@v5
        with:
          go-version: "1.24"
      - name: cross-compilation-tools
        if: matrix.os == 'ubuntu-latest'
        run: |
          sudo apt-get update
          sudo apt-get install -y gcc-aarch64-linux-gnu musl-tools
      - name: get-sha
        shell: bash
        run: |
          echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_ENV
      - name: download-frontend-artifact
        uses: actions/download-artifact@v4
        with:
          name: community-frontend-build-${{ env.sha_short }}
          path: frontend/build
      - name: cache-linux
        uses: actions/cache@v4
        if: matrix.os == 'ubuntu-latest'
        with:
          path: dist/linux
          key: signoz-community-linux-${{ env.sha_short }}
      - name: cache-darwin
        uses: actions/cache@v4
        if: matrix.os == 'macos-latest'
        with:
          path: dist/darwin
          key: signoz-community-darwin-${{ env.sha_short }}
      - name: release
        uses: goreleaser/goreleaser-action@v6
        with:
          distribution: goreleaser-pro
          version: '~> v2'
          args: release --config ${{ env.CONFIG_PATH }} --clean --split
          workdir: .
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GORELEASER_KEY: ${{ secrets.GORELEASER_KEY }}
  release:
    runs-on: ubuntu-latest
    needs: build
    env:
      DOCKER_CLI_EXPERIMENTAL: "enabled"
      WORKDIR: cmd/community
    steps:
      - name: checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: setup-qemu
        uses: docker/setup-qemu-action@v3
      - name: setup-buildx
        uses: docker/setup-buildx-action@v3
      - name: cosign-installer
        uses: sigstore/cosign-installer@v3.8.1
      - name: download-syft
        uses: anchore/sbom-action/download-syft@v0.18.0
      - name: ghcr-login
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: setup-go
        uses: actions/setup-go@v5
        with:
          go-version: "1.24"

      # copy the caches from build
      - name: get-sha
        shell: bash
        run: |
          echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_ENV
      - name: cache-linux
        id: cache-linux
        uses: actions/cache@v4
        with:
          path: dist/linux
          key: signoz-community-linux-${{ env.sha_short }}
      - name: cache-darwin
        id: cache-darwin
        uses: actions/cache@v4
        with:
          path: dist/darwin
          key: signoz-community-darwin-${{ env.sha_short }}

      # release
      - uses: goreleaser/goreleaser-action@v6
        if: steps.cache-linux.outputs.cache-hit == 'true' && steps.cache-darwin.outputs.cache-hit == 'true' # only run if caches hit
        with:
          distribution: goreleaser-pro
          version: '~> v2'
          args: continue --merge
          workdir: .
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GORELEASER_KEY: ${{ secrets.GORELEASER_KEY }}
```
169
.github/workflows/gor-signoz.yaml
vendored
169
.github/workflows/gor-signoz.yaml
vendored
@@ -1,169 +0,0 @@
|
||||
name: gor-signoz

on:
  push:
    tags:
      - 'v[0-9]+.[0-9]+.[0-9]+'
      - 'v[0-9]+.[0-9]+.[0-9]+-rc.[0-9]+'

permissions:
  contents: write

jobs:
  prepare:
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: get-sha
        shell: bash
        run: |
          echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_ENV
      - name: dotenv-frontend
        working-directory: frontend
        run: |
          echo 'INTERCOM_APP_ID="${{ secrets.INTERCOM_APP_ID }}"' > .env
          echo 'SEGMENT_ID="${{ secrets.SEGMENT_ID }}"' >> .env
          echo 'SENTRY_AUTH_TOKEN="${{ secrets.SENTRY_AUTH_TOKEN }}"' >> .env
          echo 'SENTRY_ORG="${{ secrets.SENTRY_ORG }}"' >> .env
          echo 'SENTRY_PROJECT_ID="${{ secrets.SENTRY_PROJECT_ID }}"' >> .env
          echo 'SENTRY_DSN="${{ secrets.SENTRY_DSN }}"' >> .env
          echo 'TUNNEL_URL="${{ secrets.TUNNEL_URL }}"' >> .env
          echo 'TUNNEL_DOMAIN="${{ secrets.TUNNEL_DOMAIN }}"' >> .env
          echo 'POSTHOG_KEY="${{ secrets.POSTHOG_KEY }}"' >> .env
          echo 'PYLON_APP_ID="${{ secrets.PYLON_APP_ID }}"' >> .env
          echo 'APPCUES_APP_ID="${{ secrets.APPCUES_APP_ID }}"' >> .env
          echo 'PYLON_IDENTITY_SECRET="${{ secrets.PYLON_IDENTITY_SECRET }}"' >> .env
      - name: build-frontend
        run: make js-build
      - name: upload-frontend-artifact
        uses: actions/upload-artifact@v4
        with:
          name: frontend-build-${{ env.sha_short }}
          path: frontend/build
  build:
    needs: prepare
    strategy:
      matrix:
        os:
          - ubuntu-latest
          - macos-latest
    env:
      CONFIG_PATH: cmd/enterprise/.goreleaser.yaml
    runs-on: ${{ matrix.os }}
    steps:
      - name: checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: setup-qemu
        uses: docker/setup-qemu-action@v3
        if: matrix.os == 'ubuntu-latest'
      - name: setup-buildx
        uses: docker/setup-buildx-action@v3
        if: matrix.os == 'ubuntu-latest'
      - name: ghcr-login
        uses: docker/login-action@v3
        if: matrix.os != 'macos-latest'
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: setup-go
        uses: actions/setup-go@v5
        with:
          go-version: "1.24"
      - name: cross-compilation-tools
        if: matrix.os == 'ubuntu-latest'
        run: |
          sudo apt-get update
          sudo apt-get install -y gcc-aarch64-linux-gnu musl-tools
      - name: get-sha
        shell: bash
        run: |
          echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_ENV
      - name: download-frontend-artifact
        uses: actions/download-artifact@v4
        with:
          name: frontend-build-${{ env.sha_short }}
          path: frontend/build
      - name: cache-linux
        uses: actions/cache@v4
        if: matrix.os == 'ubuntu-latest'
        with:
          path: dist/linux
          key: signoz-linux-${{ env.sha_short }}
      - name: cache-darwin
        uses: actions/cache@v4
        if: matrix.os == 'macos-latest'
        with:
          path: dist/darwin
          key: signoz-darwin-${{ env.sha_short }}
      - name: release
        uses: goreleaser/goreleaser-action@v6
        with:
          distribution: goreleaser-pro
          version: '~> v2'
          args: release --config ${{ env.CONFIG_PATH }} --clean --split
          workdir: .
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GORELEASER_KEY: ${{ secrets.GORELEASER_KEY }}
  release:
    runs-on: ubuntu-latest
    needs: build
    env:
      DOCKER_CLI_EXPERIMENTAL: "enabled"
    steps:
      - name: checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: setup-qemu
        uses: docker/setup-qemu-action@v3
      - name: setup-buildx
        uses: docker/setup-buildx-action@v3
      - name: cosign-installer
        uses: sigstore/cosign-installer@v3.8.1
      - name: download-syft
        uses: anchore/sbom-action/download-syft@v0.18.0
      - name: ghcr-login
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: setup-go
        uses: actions/setup-go@v5
        with:
          go-version: "1.24"

      # copy the caches from build
      - name: get-sha
        shell: bash
        run: |
          echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_ENV
      - name: cache-linux
        id: cache-linux
        uses: actions/cache@v4
        with:
          path: dist/linux
          key: signoz-linux-${{ env.sha_short }}
      - name: cache-darwin
        id: cache-darwin
        uses: actions/cache@v4
        with:
          path: dist/darwin
          key: signoz-darwin-${{ env.sha_short }}

      # release
      - uses: goreleaser/goreleaser-action@v6
        if: steps.cache-linux.outputs.cache-hit == 'true' && steps.cache-darwin.outputs.cache-hit == 'true' # only run if caches hit
        with:
          distribution: goreleaser-pro
          version: '~> v2'
          args: continue --merge
          workdir: .
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GORELEASER_KEY: ${{ secrets.GORELEASER_KEY }}
@@ -1,8 +1,9 @@
-name: gor-histogramquantile
+name: goreleaser
 
 on:
   push:
     tags:
       - v*
+      - histogram-quantile/v*
 
 permissions:
.github/workflows/integrationci.yaml (vendored, 101 lines deleted)
@@ -1,101 +0,0 @@
name: integrationci

on:
  pull_request:
    types:
      - labeled
  pull_request_target:
    types:
      - labeled

jobs:
  fmtlint:
    if: |
      ((github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))) && contains(github.event.pull_request.labels.*.name, 'safe-to-integrate')
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v4
      - name: python
        uses: actions/setup-python@v5
        with:
          python-version: 3.13
      - name: uv
        uses: astral-sh/setup-uv@v4
      - name: install
        run: |
          cd tests/integration && uv sync
      - name: fmt
        run: |
          make py-fmt
      - name: lint
        run: |
          make py-lint
  test:
    strategy:
      fail-fast: false
      matrix:
        src:
          - bootstrap
          - passwordauthn
          - callbackauthn
          - cloudintegrations
          - dashboard
          - querier
          - ttl
          - preference
          - logspipelines
        sqlstore-provider:
          - postgres
          - sqlite
        clickhouse-version:
          - 25.5.6
          - 25.10.1
        schema-migrator-version:
          - v0.129.7
        postgres-version:
          - 15
    if: |
      ((github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))) && contains(github.event.pull_request.labels.*.name, 'safe-to-integrate')
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v4
      - name: python
        uses: actions/setup-python@v5
        with:
          python-version: 3.13
      - name: uv
        uses: astral-sh/setup-uv@v4
      - name: install
        run: |
          cd tests/integration && uv sync
      - name: webdriver
        run: |
          wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
          echo "deb http://dl.google.com/linux/chrome/deb/ stable main" | sudo tee -a /etc/apt/sources.list.d/google-chrome.list
          sudo apt-get update -qqy
          sudo apt-get -qqy install google-chrome-stable
          CHROME_VERSION=$(google-chrome-stable --version)
          CHROME_FULL_VERSION=${CHROME_VERSION%%.*}
          CHROME_MAJOR_VERSION=${CHROME_FULL_VERSION//[!0-9]}
          sudo rm /etc/apt/sources.list.d/google-chrome.list
          export CHROMEDRIVER_VERSION=`curl -s https://googlechromelabs.github.io/chrome-for-testing/LATEST_RELEASE_${CHROME_MAJOR_VERSION%%.*}`
          curl -L -O "https://storage.googleapis.com/chrome-for-testing-public/${CHROMEDRIVER_VERSION}/linux64/chromedriver-linux64.zip"
          unzip chromedriver-linux64.zip
          chmod +x chromedriver-linux64/chromedriver
          sudo mv chromedriver-linux64/chromedriver /usr/local/bin/chromedriver
          chromedriver -version
          google-chrome-stable --version
      - name: run
        run: |
          cd tests/integration && \
          uv run pytest \
            --basetemp=./tmp/ \
            src/${{matrix.src}} \
            --sqlstore-provider ${{matrix.sqlstore-provider}} \
            --postgres-version ${{matrix.postgres-version}} \
            --clickhouse-version ${{matrix.clickhouse-version}} \
            --schema-migrator-version ${{matrix.schema-migrator-version}}
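Since each matrix cell is just one pytest invocation, a failing combination can be replayed locally. A sketch with one set of matrix values from the workflow substituted in (it assumes the integration fixtures can start their backing services, e.g. via Docker):

```bash
cd tests/integration && \
uv run pytest \
  --basetemp=./tmp/ \
  src/querier \
  --sqlstore-provider sqlite \
  --postgres-version 15 \
  --clickhouse-version 25.5.6 \
  --schema-migrator-version v0.129.7
```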
.github/workflows/jest-coverage-changes.yml (vendored, new file, 32 lines)
@@ -0,0 +1,32 @@
name: Jest Coverage - changed files

on:
  pull_request:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          ref: "refs/heads/main"
          token: ${{ secrets.GITHUB_TOKEN }} # Provide the GitHub token for authentication

      - name: Fetch branch
        run: git fetch origin ${{ github.event.pull_request.head.ref }}

      - run: |
          git checkout ${{ github.event.pull_request.head.sha }}

      - uses: actions/setup-node@v4
        with:
          node-version: lts/*

      - name: Install dependencies
        run: cd frontend && npm install -g yarn && yarn

      - name: npm run test:changedsince
        run: cd frontend && npm run i18n:generate-hash && npm run test:changedsince
.github/workflows/jsci.yaml (vendored, 63 lines deleted)
@@ -1,63 +0,0 @@
name: jsci

on:
  pull_request:
    branches:
      - main
  pull_request_target:
    types:
      - labeled

jobs:
  tsc:
    if: |
      (github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v4
      - name: setup node
        uses: actions/setup-node@v5
        with:
          node-version: "22"
      - name: install
        run: cd frontend && yarn install
      - name: tsc
        run: cd frontend && yarn tsc
  tsc2:
    if: |
      (github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
    uses: signoz/primus.workflows/.github/workflows/js-tsc.yaml@main
    secrets: inherit
    with:
      PRIMUS_REF: main
      JS_SRC: frontend
  test:
    if: |
      (github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
    uses: signoz/primus.workflows/.github/workflows/js-test.yaml@main
    secrets: inherit
    with:
      PRIMUS_REF: main
      JS_SRC: frontend
  fmt:
    if: |
      (github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
    uses: signoz/primus.workflows/.github/workflows/js-fmt.yaml@main
    secrets: inherit
    with:
      PRIMUS_REF: main
      JS_SRC: frontend
  lint:
    if: |
      (github.event_name == 'pull_request' && ! github.event.pull_request.head.repo.fork && github.event.pull_request.user.login != 'dependabot[bot]' && ! contains(github.event.pull_request.labels.*.name, 'safe-to-test')) ||
      (github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'safe-to-test'))
    uses: signoz/primus.workflows/.github/workflows/js-lint.yaml@main
    secrets: inherit
    with:
      PRIMUS_REF: main
      JS_SRC: frontend
.github/workflows/playwright.yaml (vendored, new file, 24 lines)
@@ -0,0 +1,24 @@
name: Playwright Tests
on: [pull_request]

jobs:
  playwright:
    defaults:
      run:
        working-directory: frontend
    timeout-minutes: 60
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "16.x"
      - name: Install dependencies
        run: CI=1 yarn install
      - name: Install Playwright
        run: npx playwright install --with-deps
      - name: Run Playwright tests
        run: yarn playwright
        env:
          # This might depend on your test-runner/language binding
          PLAYWRIGHT_TEST_BASE_URL: ${{ secrets.PLAYWRIGHT_TEST_BASE_URL }}
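The suite reads its target from `PLAYWRIGHT_TEST_BASE_URL`, so it can be pointed at any deployment locally. A sketch; the localhost URL is only an example, not a value the workflow defines:

```bash
cd frontend
CI=1 yarn install
npx playwright install --with-deps
# assumed local SigNoz UI endpoint; substitute your own deployment
PLAYWRIGHT_TEST_BASE_URL="http://localhost:3301" yarn playwright
```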
.github/workflows/postci.yaml (vendored, 18 lines deleted)
@@ -1,18 +0,0 @@
name: postci

on:
  pull_request_target:
    branches:
      - main
    types:
      - synchronize

jobs:
  remove:
    if: github.event.pull_request.head.repo.fork || github.event.pull_request.user.login == 'dependabot[bot]'
    uses: signoz/primus.workflows/.github/workflows/github-label.yaml@main
    secrets: inherit
    with:
      PRIMUS_REF: main
      GITHUB_LABEL_ACTION: remove
      GITHUB_LABEL_NAME: safe-to-test
.github/workflows/prereleaser.yaml (vendored, 4 lines changed)
@@ -1,6 +1,10 @@
 name: prereleaser
 
 on:
+  # schedule every wednesday 9:30 AM UTC (3pm IST)
+  schedule:
+    - cron: '30 9 * * 3'
+
   # allow manual triggering of the workflow by a maintainer
   workflow_dispatch:
     inputs:
.github/workflows/push.yaml (vendored, new file, 209 lines)
@@ -0,0 +1,209 @@
name: push

on:
  push:
    branches:
      - main
    tags:
      - v*

jobs:
  image-build-and-push-query-service:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup golang
        uses: actions/setup-go@v4
        with:
          go-version: "1.21"
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          version: latest
      - name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: benjlevesque/short-sha@v2.2
        id: short-sha
      - name: Get branch name
        id: branch-name
        uses: tj-actions/branch-names@v7.0.7
      - name: Set docker tag environment
        run: |
          if [ '${{ steps.branch-name.outputs.is_tag }}' == 'true' ]; then
            tag="${{ steps.branch-name.outputs.tag }}"
            tag="${tag:1}"
            echo "DOCKER_TAG=${tag}-oss" >> $GITHUB_ENV
          elif [ '${{ steps.branch-name.outputs.current_branch }}' == 'main' ]; then
            echo "DOCKER_TAG=latest-oss" >> $GITHUB_ENV
          else
            echo "DOCKER_TAG=${{ steps.branch-name.outputs.current_branch }}-oss" >> $GITHUB_ENV
          fi
      - name: Install cross-compilation tools
        run: |
          set -ex
          sudo apt-get update
          sudo apt-get install -y gcc-aarch64-linux-gnu musl-tools
      - name: Build and push docker image
        run: make build-push-query-service

  image-build-and-push-ee-query-service:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Create .env file
        run: |
          echo 'INTERCOM_APP_ID="${{ secrets.INTERCOM_APP_ID }}"' > frontend/.env
          echo 'SEGMENT_ID="${{ secrets.SEGMENT_ID }}"' >> frontend/.env
          echo 'SENTRY_AUTH_TOKEN="${{ secrets.SENTRY_AUTH_TOKEN }}"' >> frontend/.env
          echo 'SENTRY_ORG="${{ secrets.SENTRY_ORG }}"' >> frontend/.env
          echo 'SENTRY_PROJECT_ID="${{ secrets.SENTRY_PROJECT_ID }}"' >> frontend/.env
          echo 'SENTRY_DSN="${{ secrets.SENTRY_DSN }}"' >> frontend/.env
          echo 'TUNNEL_URL="${{ secrets.TUNNEL_URL }}"' >> frontend/.env
          echo 'TUNNEL_DOMAIN="${{ secrets.TUNNEL_DOMAIN }}"' >> frontend/.env
          echo 'POSTHOG_KEY="${{ secrets.POSTHOG_KEY }}"' >> frontend/.env
          echo 'CUSTOMERIO_ID="${{ secrets.CUSTOMERIO_ID }}"' >> frontend/.env
          echo 'CUSTOMERIO_SITE_ID="${{ secrets.CUSTOMERIO_SITE_ID }}"' >> frontend/.env
      - name: Setup golang
        uses: actions/setup-go@v4
        with:
          go-version: "1.21"
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          version: latest
      - name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: benjlevesque/short-sha@v2.2
        id: short-sha
      - name: Get branch name
        id: branch-name
        uses: tj-actions/branch-names@v7.0.7
      - name: Set docker tag environment
        run: |
          if [ '${{ steps.branch-name.outputs.is_tag }}' == 'true' ]; then
            tag="${{ steps.branch-name.outputs.tag }}"
            tag="${tag:1}"
            echo "DOCKER_TAG=$tag" >> $GITHUB_ENV
          elif [ '${{ steps.branch-name.outputs.current_branch }}' == 'main' ]; then
            echo "DOCKER_TAG=latest" >> $GITHUB_ENV
          else
            echo "DOCKER_TAG=${{ steps.branch-name.outputs.current_branch }}" >> $GITHUB_ENV
          fi
      - name: Install cross-compilation tools
        run: |
          set -ex
          sudo apt-get update
          sudo apt-get install -y gcc-aarch64-linux-gnu musl-tools
      - name: Build and push docker image
        run: make build-push-ee-query-service

  image-build-and-push-frontend:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Install dependencies
        working-directory: frontend
        run: yarn install
      - name: Run Prettier
        working-directory: frontend
        run: npm run prettify
        continue-on-error: true
      - name: Run ESLint
        working-directory: frontend
        run: npm run lint
        continue-on-error: true
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          version: latest
      - name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: benjlevesque/short-sha@v2.2
        id: short-sha
      - name: Get branch name
        id: branch-name
        uses: tj-actions/branch-names@v7.0.7
      - name: Set docker tag environment
        run: |
          if [ '${{ steps.branch-name.outputs.is_tag }}' == 'true' ]; then
            tag="${{ steps.branch-name.outputs.tag }}"
            tag="${tag:1}"
            echo "DOCKER_TAG=$tag" >> $GITHUB_ENV
          elif [ '${{ steps.branch-name.outputs.current_branch }}' == 'main' ]; then
            echo "DOCKER_TAG=latest" >> $GITHUB_ENV
          else
            echo "DOCKER_TAG=${{ steps.branch-name.outputs.current_branch }}" >> $GITHUB_ENV
          fi
      - name: Build and push docker image
        run: make build-push-frontend

  image-build-and-push-frontend-ee:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Create .env file
        run: |
          echo 'INTERCOM_APP_ID="${{ secrets.INTERCOM_APP_ID }}"' > frontend/.env
          echo 'SEGMENT_ID="${{ secrets.SEGMENT_ID }}"' >> frontend/.env
          echo 'SENTRY_AUTH_TOKEN="${{ secrets.SENTRY_AUTH_TOKEN }}"' >> frontend/.env
          echo 'SENTRY_ORG="${{ secrets.SENTRY_ORG }}"' >> frontend/.env
          echo 'SENTRY_PROJECT_ID="${{ secrets.SENTRY_PROJECT_ID }}"' >> frontend/.env
          echo 'SENTRY_DSN="${{ secrets.SENTRY_DSN }}"' >> frontend/.env
          echo 'TUNNEL_URL="${{ secrets.TUNNEL_URL }}"' >> frontend/.env
          echo 'TUNNEL_DOMAIN="${{ secrets.TUNNEL_DOMAIN }}"' >> frontend/.env
          echo 'POSTHOG_KEY="${{ secrets.POSTHOG_KEY }}"' >> frontend/.env
      - name: Install dependencies
        working-directory: frontend
        run: yarn install
      - name: Run Prettier
        working-directory: frontend
        run: npm run prettify
        continue-on-error: true
      - name: Run ESLint
        working-directory: frontend
        run: npm run lint
        continue-on-error: true
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          version: latest
      - name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: benjlevesque/short-sha@v2.2
        id: short-sha
      - name: Get branch name
        id: branch-name
        uses: tj-actions/branch-names@v7.0.7
      - name: Set docker tag environment
        run: |
          if [ '${{ steps.branch-name.outputs.is_tag }}' == 'true' ]; then
            tag="${{ steps.branch-name.outputs.tag }}"
            tag="${tag:1}"
            echo "DOCKER_TAG=${tag}-ee" >> $GITHUB_ENV
          elif [ '${{ steps.branch-name.outputs.current_branch }}' == 'main' ]; then
            echo "DOCKER_TAG=latest-ee" >> $GITHUB_ENV
          else
            echo "DOCKER_TAG=${{ steps.branch-name.outputs.current_branch }}-ee" >> $GITHUB_ENV
          fi
      - name: Build and push docker image
        run: make build-push-frontend
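All four jobs above share one tag-derivation rule and differ only in the suffix (`-oss`, empty, or `-ee`). A condensed sketch of that rule; the function name and sample inputs are illustrative, not part of the workflow:

```bash
#!/usr/bin/env bash
# Derive DOCKER_TAG the way the push jobs above do.
#   $1: git ref name (a tag like v1.2.3, or a branch name)
#   $2: optional suffix (-oss, -ee, or empty)
docker_tag() {
  local ref="$1" suffix="${2:-}"
  if [[ "$ref" == v* ]]; then
    echo "${ref:1}${suffix}"   # strip the leading v: v1.2.3 -> 1.2.3-oss etc.
  elif [[ "$ref" == main ]]; then
    echo "latest${suffix}"     # main -> latest / latest-oss / latest-ee
  else
    echo "${ref}${suffix}"     # any other branch keeps its name
  fi
}

docker_tag v1.2.3 -oss   # -> 1.2.3-oss
docker_tag main -ee      # -> latest-ee
docker_tag my-branch     # -> my-branch
```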
.github/workflows/remove-label.yaml (vendored, new file, 22 lines)
@@ -0,0 +1,22 @@
name: remove-label

on:
  pull_request_target:
    types: [synchronize]

jobs:
  remove:
    runs-on: ubuntu-latest
    steps:
      - name: Remove label ok-to-test from PR
        uses: buildsville/add-remove-label@v2.0.0
        with:
          label: ok-to-test
          type: remove
          token: ${{ secrets.GITHUB_TOKEN }}
      - name: Remove label testing-deploy from PR
        uses: buildsville/add-remove-label@v2.0.0
        with:
          label: testing-deploy
          type: remove
          token: ${{ secrets.GITHUB_TOKEN }}
.github/workflows/run-e2e.yaml (vendored, 62 lines deleted)
@@ -1,62 +0,0 @@
name: e2eci

on:
  workflow_dispatch:
    inputs:
      userRole:
        description: "Role of the user (ADMIN, EDITOR, VIEWER)"
        required: true
        type: choice
        options:
          - ADMIN
          - EDITOR
          - VIEWER

jobs:
  test:
    name: Run Playwright Tests
    runs-on: ubuntu-latest
    timeout-minutes: 60

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: lts/*

      - name: Mask secrets and input
        run: |
          echo "::add-mask::${{ secrets.BASE_URL }}"
          echo "::add-mask::${{ secrets.LOGIN_USERNAME }}"
          echo "::add-mask::${{ secrets.LOGIN_PASSWORD }}"
          echo "::add-mask::${{ github.event.inputs.userRole }}"

      - name: Install dependencies
        working-directory: frontend
        run: |
          npm install -g yarn
          yarn

      - name: Install Playwright Browsers
        working-directory: frontend
        run: yarn playwright install --with-deps

      - name: Run Playwright Tests
        working-directory: frontend
        run: |
          BASE_URL="${{ secrets.BASE_URL }}" \
          LOGIN_USERNAME="${{ secrets.LOGIN_USERNAME }}" \
          LOGIN_PASSWORD="${{ secrets.LOGIN_PASSWORD }}" \
          USER_ROLE="${{ github.event.inputs.userRole }}" \
          yarn playwright test

      - name: Upload Playwright Report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: frontend/playwright-report/
          retention-days: 30
.github/workflows/sonar.yml (vendored, new file, 25 lines)
@@ -0,0 +1,25 @@
name: sonar
on:
  pull_request:
    branches:
      - main
    paths:
      - 'frontend/**'
defaults:
  run:
    working-directory: frontend
jobs:
  sonar-analysis:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Sonar analysis
        uses: sonarsource/sonarcloud-github-action@master
        with:
          projectBaseDir: frontend
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
.github/workflows/staging-deployment.yaml (vendored, new file, 56 lines)
@@ -0,0 +1,56 @@
name: staging-deployment
# Trigger deployment only on push to main branch
on:
  push:
    branches:
      - main
jobs:
  deploy:
    name: Deploy latest main branch to staging
    runs-on: ubuntu-latest
    environment: staging
    permissions:
      contents: 'read'
      id-token: 'write'
    steps:
      - id: 'auth'
        uses: 'google-github-actions/auth@v2'
        with:
          workload_identity_provider: ${{ secrets.GCP_WORKLOAD_IDENTITY_PROVIDER }}
          service_account: ${{ secrets.GCP_SERVICE_ACCOUNT }}

      - name: 'sdk'
        uses: 'google-github-actions/setup-gcloud@v2'

      - name: 'ssh'
        shell: bash
        env:
          GITHUB_BRANCH: ${{ github.head_ref || github.ref_name }}
          GITHUB_SHA: ${{ github.sha }}
          GCP_PROJECT: ${{ secrets.GCP_PROJECT }}
          GCP_ZONE: ${{ secrets.GCP_ZONE }}
          GCP_INSTANCE: ${{ secrets.GCP_INSTANCE }}
          CLOUDSDK_CORE_DISABLE_PROMPTS: 1
        run: |
          read -r -d '' COMMAND <<EOF || true
          echo "GITHUB_BRANCH: ${GITHUB_BRANCH}"
          echo "GITHUB_SHA: ${GITHUB_SHA}"
          export DOCKER_TAG="${GITHUB_SHA:0:7}" # needed for child process to access it
          export OTELCOL_TAG="main"
          export PATH="/usr/local/go/bin/:$PATH" # needed for Golang to work
          export KAFKA_SPAN_EVAL="true"
          docker system prune --force
          docker pull signoz/signoz-otel-collector:main
          docker pull signoz/signoz-schema-migrator:main
          cd ~/signoz
          git status
          git add .
          git stash push -m "stashed on $(date --iso-8601=seconds)"
          git fetch origin
          git checkout ${GITHUB_BRANCH}
          git pull
          make build-ee-query-service-amd64
          make build-frontend-amd64
          make run-testing
          EOF
          gcloud beta compute ssh ${GCP_INSTANCE} --zone ${GCP_ZONE} --ssh-key-expire-after=15m --tunnel-through-iap --project ${GCP_PROJECT} --command "${COMMAND}"
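One subtlety in the `ssh` step: `read -r -d ''` slurps the heredoc into `COMMAND`, but it exits non-zero when it reaches end of input without finding the NUL delimiter, so the `|| true` is what keeps the step from failing before `gcloud` ever runs. A standalone illustration of the idiom:

```bash
#!/usr/bin/env bash
set -euo pipefail

# read -r -d '' returns 1 at EOF, so guard it with || true under set -e.
read -r -d '' COMMAND <<EOF || true
echo "hello from the remote host"
EOF

echo "would run remotely: ${COMMAND}"
```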
.github/workflows/testing-deployment.yaml (vendored, new file, 56 lines)
@@ -0,0 +1,56 @@
name: testing-deployment
# Trigger deployment only on testing-deploy label on pull request
on:
  pull_request:
    types: [labeled]
jobs:
  deploy:
    name: Deploy PR branch to testing
    runs-on: ubuntu-latest
    environment: testing
    if: ${{ github.event.label.name == 'testing-deploy' }}
    permissions:
      contents: 'read'
      id-token: 'write'
    steps:
      - id: 'auth'
        uses: 'google-github-actions/auth@v2'
        with:
          workload_identity_provider: ${{ secrets.GCP_WORKLOAD_IDENTITY_PROVIDER }}
          service_account: ${{ secrets.GCP_SERVICE_ACCOUNT }}

      - name: 'sdk'
        uses: 'google-github-actions/setup-gcloud@v2'

      - name: 'ssh'
        shell: bash
        env:
          GITHUB_BRANCH: ${{ github.head_ref || github.ref_name }}
          GITHUB_SHA: ${{ github.sha }}
          GCP_PROJECT: ${{ secrets.GCP_PROJECT }}
          GCP_ZONE: ${{ secrets.GCP_ZONE }}
          GCP_INSTANCE: ${{ secrets.GCP_INSTANCE }}
          CLOUDSDK_CORE_DISABLE_PROMPTS: 1
        run: |
          read -r -d '' COMMAND <<EOF || true
          echo "GITHUB_BRANCH: ${GITHUB_BRANCH}"
          echo "GITHUB_SHA: ${GITHUB_SHA}"
          export DOCKER_TAG="${GITHUB_SHA:0:7}" # needed for child process to access it
          export DEV_BUILD="1"
          export PATH="/usr/local/go/bin/:$PATH" # needed for Golang to work
          docker system prune --force
          cd ~/signoz
          git status
          git add .
          git stash push -m "stashed on $(date --iso-8601=seconds)"
          git fetch origin
          git checkout main
          git pull
          # This is added to cover the scenario when a new commit in the PR is force-pushed
          git branch -D ${GITHUB_BRANCH}
          git checkout --track origin/${GITHUB_BRANCH}
          make build-ee-query-service-amd64
          make build-frontend-amd64
          make run-testing
          EOF
          gcloud beta compute ssh ${GCP_INSTANCE} --zone ${GCP_ZONE} --ssh-key-expire-after=15m --tunnel-through-iap --project ${GCP_PROJECT} --command "${COMMAND}"
.gitignore (vendored, 164 lines changed)
@@ -1,9 +1,6 @@

node_modules

.vscode
!.vscode/settings.json

deploy/docker/environment_tiny/common_test
frontend/node_modules
frontend/.pnp
@@ -15,6 +12,7 @@ frontend/coverage

# production
frontend/build
frontend/.vscode
frontend/.yarnclean
frontend/.temp_cache
frontend/test-results
@@ -33,6 +31,7 @@ frontend/src/constants/env.ts

.idea

**/.vscode
**/build
**/storage
**/locust-scripts/__pycache__/
@@ -50,25 +49,24 @@ ee/query-service/tests/test-deploy/data/
# local data
*.backup
*.db
**/db
/deploy/docker/clickhouse-setup/data/
/deploy/docker-swarm/clickhouse-setup/data/
bin/
.local/
*/query-service/queries.active
ee/query-service/db

# e2e

e2e/node_modules/
e2e/test-results/
e2e/playwright-report/
e2e/blob-report/
e2e/playwright/.cache/
e2e/.auth

# go
vendor/
**/main/**
__debug_bin**

# git-town
.git-branches.toml
@@ -78,157 +76,5 @@ dist/

# ignore user_scripts that is fetched by init-clickhouse
deploy/common/clickhouse/user_scripts/

# queries.active
queries.active

# tmp
**/tmp/**

# .devenv tmp files
.devenv/**/tmp/**
.qodo

.dev

### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

### Python Patch ###

# ruff
.ruff_cache/

# LSP config files
pyrightconfig.json


# cursor files
frontend/.cursor/
@@ -16,8 +16,10 @@ tasks:
       yarn dev
 
 ports:
-  - port: 8080
+  - port: 3301
     onOpen: open-browser
+  - port: 8080
+    onOpen: ignore
   - port: 9000
     onOpen: ignore
   - port: 8123
@@ -1,64 +0,0 @@
version: "2"
linters:
  default: none
  enable:
    - bodyclose
    - depguard
    - errcheck
    - forbidigo
    - govet
    - iface
    - ineffassign
    - misspell
    - nilnil
    - sloglint
    - wastedassign
    - unparam
    - unused
  settings:
    depguard:
      rules:
        noerrors:
          deny:
            - pkg: errors
              desc: Do not use errors package. Use github.com/SigNoz/signoz/pkg/errors instead.
        nozap:
          deny:
            - pkg: go.uber.org/zap
              desc: Do not use zap logger. Use slog instead.
    forbidigo:
      forbid:
        - pattern: fmt.Errorf
        - pattern: ^(fmt\.Print.*|print|println)$
    iface:
      enable:
        - identical
    sloglint:
      no-mixed-args: true
      kv-only: true
      no-global: all
      context: all
      static-msg: true
      key-naming-case: snake
  exclusions:
    generated: lax
    presets:
      - comments
      - common-false-positives
      - legacy
      - std-error-handling
    paths:
      - pkg/query-service
      - ee/query-service
      - scripts/
      - tmp/
      - third_party$
      - builtin$
      - examples$
formatters:
  exclusions:
    generated: lax
    paths:
      - third_party$
      - builtin$
      - examples$
.mockery.yml (17 lines deleted)
@@ -1,17 +0,0 @@
# Link to template variables: https://pkg.go.dev/github.com/vektra/mockery/v3/config#TemplateData
template: testify
packages:
  github.com/SigNoz/signoz/pkg/alertmanager:
    config:
      all: true
      dir: '{{.InterfaceDir}}/alertmanagertest'
      filename: "alertmanager.go"
      structname: 'Mock{{.InterfaceName}}'
      pkgname: '{{.SrcPackageName}}test'
  github.com/SigNoz/signoz/pkg/tokenizer:
    config:
      all: true
      dir: '{{.InterfaceDir}}/tokenizertest'
      filename: "tokenizer.go"
      structname: 'Mock{{.InterfaceName}}'
      pkgname: '{{.SrcPackageName}}test'
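With this config, the mocks under `alertmanagertest/` and `tokenizertest/` are regenerated by running the mockery CLI from the repository root. A sketch, assuming mockery v3 is installed; the version pin is illustrative:

```bash
# Install the v3 CLI (pin is illustrative; any v3 release should do).
go install github.com/vektra/mockery/v3@v3.0.0

# mockery reads .mockery.yml from the working directory and writes
# Mock<Interface> implementations into the configured *test packages.
mockery
```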
@@ -1,17 +0,0 @@
#### Auto generated by make docker-version-alpine. DO NOT EDIT! ####
amd64=029a752048e32e843bd6defe3841186fb8d19a28dae8ec287f433bb9d6d1ad85
unknown=5fea95373b9ec85974843f31446fa6a9df4492dddae4e1cb056193c34a20a5be
arm=b4aef1a899e0271f06d948c9a8fa626ecdb2202d3a178bc14775dd559e23df8e
unknown=a4d1e27e63a9d6353046eb25a2f0ec02945012b217f4364cd83a73fe6dfb0b15
arm=4fdafe217d0922f3c3e2b4f64cf043f8403a4636685cd9c51fea2cbd1f419740
unknown=7f21ac2018d95b2c51a5779c1d5ca6c327504adc3b0fdc747a6725d30b3f13c2
arm64=ea3c5a9671f7b3f7eb47eab06f73bc6591df978b0d5955689a9e6f943aa368c0
unknown=a8ba68c1a9e6eea8041b4b8f996c235163440808b9654a865976fdcbede0f433
386=dea9f02e103e837849f984d5679305c758aba7fea1b95b7766218597f61a05ab
unknown=3c6629bec05c8273a927d46b77428bf4a378dad911a0ae284887becdc149b734
ppc64le=0880443bffa028dfbbc4094a32dd6b7ac25684e4c0a3d50da9e0acae355c5eaf
unknown=bb48308f976b266e3ab39bbf9af84521959bd9c295d3c763690cf41f8df2a626
riscv64=d76e6fbe348ff20c2931bb7f101e49379648e026de95dd37f96e00ce1909dcf7
unknown=dd807544365f6dc187cbe6de0806adce2ea9de3e7124717d1d8e8b7a18b77b64
s390x=b815fadf80495594eb6296a6af0bc647ae5f193e0044e07acec7e5b378c9ce2d
unknown=74681be74a280a88abb53ff1e048eb1fb624b30d0066730df6d8afd02ba82e01
.vscode/settings.json (vendored, 13 lines deleted)
@@ -1,13 +0,0 @@
{
  "eslint.workingDirectories": ["./frontend"],
  "editor.formatOnSave": true,
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": "explicit"
  },
  "prettier.requireConfig": true,
  "[go]": {
    "editor.formatOnSave": true,
    "editor.defaultFormatter": "golang.go"
  }
}
ADVOCATE.md (62 lines deleted)
@@ -1,62 +0,0 @@
# SigNoz Community Advocate Program

Our community is filled with passionate developers who love SigNoz and have been helping spread the word about observability across the world. The SigNoz Community Advocate Program is our way of recognizing these incredible community members and creating deeper collaboration opportunities.

## What is the SigNoz Community Advocate Program?

The SigNoz Community Advocate Program celebrates and supports community members who are already passionate about observability and helping fellow developers. If you're someone who loves discussing SigNoz, helping others with their implementations, or sharing knowledge about observability practices, this program is designed with you in mind.

Our advocates are the heart of the SigNoz community, helping other developers succeed with observability and providing valuable insights that help us build better products.

## What Do Advocates Do?

1. **Community Support**

   - Help fellow developers in our Slack community and GitHub Discussions
   - Answer questions and share solutions
   - Guide newcomers through SigNoz self-host implementations

2. **Knowledge Sharing**

   - Spread awareness about observability best practices on developer forums
   - Create content like blog posts, social media posts, and videos
   - Host local meetups and events in their regions

3. **Product Collaboration**

   - Provide insights on features, changes, and improvements the community needs
   - Beta test new features and provide early feedback
   - Help us understand real-world use cases and pain points

## What's In It For You?

**Recognition & Swag**

- Official recognition as a SigNoz advocate
- Welcome hamper upon joining
- Exclusive swag box within your first 3 months
- Feature on our website (with your permission)

**Early Access**

- First look at new features and updates
- Direct line to the SigNoz team for feedback and suggestions
- Opportunity to influence product roadmap

**Community Impact**

- Help shape the observability landscape
- Build your reputation in the developer community
- Connect with like-minded developers globally

## How Does It Work?

Currently, the SigNoz Community Advocate Program is **invite-only**. We're starting with a small group of passionate community members who have already been making a difference.

We'll be working closely with our first advocates to shape the program details, benefits, and structure based on what works best for everyone involved.

If you're interested in learning more about the program or want to get more involved in the SigNoz community, join our [Slack community](https://signoz-community.slack.com/) and let us know!

---

*The SigNoz Community Advocate Program recognizes and celebrates the amazing community members who are already passionate about helping fellow developers succeed with observability.*
CONTRIBUTING.md (421 lines changed)
@@ -1,82 +1,389 @@
# Contributing Guidelines
|
||||
|
||||
Thank you for your interest in contributing to our project! We greatly value feedback and contributions from our community. This document will guide you through the contribution process.
|
||||
## Welcome to SigNoz Contributing section 🎉
|
||||
|
||||
## How can I contribute?
|
||||
Hi there! We're thrilled that you'd like to contribute to this project, thank you for your interest. Whether it's a bug report, new feature, correction, or additional documentation, we greatly value feedback and contributions from our community.
|
||||
|
||||
### Finding Issues to Work On
|
||||
- Check our [existing open issues](https://github.com/SigNoz/signoz/issues?q=is%3Aopen+is%3Aissue)
|
||||
- Look for [good first issues](https://github.com/SigNoz/signoz/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) to start with
|
||||
- Review [recently closed issues](https://github.com/SigNoz/signoz/issues?q=is%3Aissue+is%3Aclosed) to avoid duplicates
|
||||
Please read through this document before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your bug report or contribution.
|
||||
|
||||
### Types of Contributions
|
||||
- We accept contributions made to the [SigNoz `develop` branch]()
|
||||
- Find all SigNoz Docker Hub images here
|
||||
- [signoz/frontend](https://hub.docker.com/r/signoz/frontend)
|
||||
- [signoz/query-service](https://hub.docker.com/r/signoz/query-service)
|
||||
- [signoz/otelcontribcol](https://hub.docker.com/r/signoz/otelcontribcol)
|
||||
|
||||
1. **Report Bugs**: Use our [Bug Report template](https://github.com/SigNoz/signoz/issues/new?assignees=&labels=&template=bug_report.md&title=)
|
||||
2. **Request Features**: Submit using [Feature Request template](https://github.com/SigNoz/signoz/issues/new?assignees=&labels=&template=feature_request.md&title=)
|
||||
3. **Improve Documentation**: Create an issue with the `documentation` label
|
||||
4. **Report Performance Issues**: Use our [Performance Issue template](https://github.com/SigNoz/signoz/issues/new?assignees=&labels=&template=performance-issue-report.md&title=)
|
||||
5. **Request Dashboards**: Submit using [Dashboard Request template](https://github.com/SigNoz/signoz/issues/new?assignees=&labels=dashboard-template&projects=&template=request_dashboard.md&title=%5BDashboard+Request%5D+)
|
||||
6. **Report Security Issues**: Follow our [Security Policy](https://github.com/SigNoz/signoz/security/policy)
|
||||
7. **Join Discussions**: Participate in [project discussions](https://github.com/SigNoz/signoz/discussions)
|
||||
## Finding contributions to work on 💬
|
||||
|
||||
### Creating Helpful Issues
|
||||
Looking at the existing issues is a great way to find something to contribute on.
|
||||
Also, have a look at these [good first issues label](https://github.com/SigNoz/signoz/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) to start with.
|
||||
|
||||
When creating issues, include:
|
||||
|
||||
- **For Feature Requests**:
|
||||
- Clear use case and requirements
|
||||
- Proposed solution or improvement
|
||||
- Any open questions or considerations
|
||||
## Sections:
|
||||
- [General Instructions](#1-general-instructions-)
|
||||
- [For Creating Issue(s)](#11-for-creating-issues)
|
||||
- [For Pull Requests(s)](#12-for-pull-requests)
|
||||
- [How to Contribute](#2-how-to-contribute-%EF%B8%8F)
|
||||
- [Develop Frontend](#3-develop-frontend-)
|
||||
- [Contribute to Frontend with Docker installation of SigNoz](#31-contribute-to-frontend-with-docker-installation-of-signoz)
|
||||
- [Contribute to Frontend without installing SigNoz backend](#32-contribute-to-frontend-without-installing-signoz-backend)
|
||||
- [Contribute to Backend (Query-Service)](#4-contribute-to-backend-query-service-)
|
||||
- [To run ClickHouse setup](#41-to-run-clickhouse-setup-recommended-for-local-development)
|
||||
- [Contribute to SigNoz Helm Chart](#5-contribute-to-signoz-helm-chart-)
|
||||
- [To run helm chart for local development](#51-to-run-helm-chart-for-local-development)
|
||||
- [Contribute to Dashboards](#6-contribute-to-dashboards-)
|
||||
- [Other Ways to Contribute](#other-ways-to-contribute)
|
||||
|
||||
- **For Bug Reports**:
|
||||
- Step-by-step reproduction steps
|
||||
- Version information
|
||||
- Relevant environment details
|
||||
- Any modifications you've made
|
||||
- Expected vs actual behavior
|
||||
# 1. General Instructions 📝
|
||||
|
||||
### Submitting Pull Requests
|
||||
## 1.1 For Creating Issue(s)
|
||||
Before making any significant changes and before filing a new issue, please check [existing open](https://github.com/SigNoz/signoz/issues?q=is%3Aopen+is%3Aissue), or [recently closed](https://github.com/SigNoz/signoz/issues?q=is%3Aissue+is%3Aclosed) issues to make sure somebody else hasn't already reported the issue. Please try to include as much information as you can.
|
||||
|
||||
1. **Development**:
|
||||
- Setup your [development environment](docs/contributing/development.md)
|
||||
- Work against the latest `main` branch
|
||||
- Focus on specific changes
|
||||
- Ensure all tests pass locally
|
||||
- Follow our [commit convention](#commit-convention)
|
||||
**Issue Types** - [Bug Report](https://github.com/SigNoz/signoz/issues/new?assignees=&labels=&template=bug_report.md&title=) | [Feature Request](https://github.com/SigNoz/signoz/issues/new?assignees=&labels=&template=feature_request.md&title=) | [Performance Issue Report](https://github.com/SigNoz/signoz/issues/new?assignees=&labels=&template=performance-issue-report.md&title=) | [Request Dashboard](https://github.com/SigNoz/signoz/issues/new?assignees=&labels=dashboard-template&projects=&template=request_dashboard.md&title=%5BDashboard+Request%5D+) | [Report a Security Vulnerability](https://github.com/SigNoz/signoz/security/policy)
|
||||
|
||||
2. **Submit PR**:
|
||||
- Ensure your branch can be auto-merged
|
||||
- Address any CI failures
|
||||
- Respond to review comments promptly
|
||||
#### Details like these are incredibly useful:
|
||||
|
||||
For substantial changes, please split your contribution into multiple PRs:
|
||||
- **Requirement** - what kind of use case are you trying to solve?
|
||||
- **Proposal** - what do you suggest to solve the problem or improve the existing
|
||||
situation?
|
||||
- Any open questions to address❓
|
||||
|
||||
1. First PR: Overall structure (README, configurations, interfaces)
|
||||
2. Second PR: Core implementation (split further if needed)
|
||||
3. Final PR: Documentation updates and end-to-end tests
|
||||
#### If you are reporting a bug, details like these are incredibly useful:
|
||||
|
||||
### Commit Convention
|
||||
- A reproducible test case or series of steps.
|
||||
- The version of our code being used.
|
||||
- Any modifications you've made relevant to the bug🐞.
|
||||
- Anything unusual about your environment or deployment.
|
||||
|
||||
We follow [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/). All commits and PRs should include type specifiers (e.g., `feat:`, `fix:`, `docs:`, etc.).
|
||||
Discussing your proposed changes ahead of time will make the contribution
|
||||
process smooth for everyone 🙌.
|
||||
|
||||
## How can I contribute to other repositories?
|
||||
**[`^top^`](#contributing-guidelines)**
|
||||
|
||||
<hr>
|
||||
|
||||
You can find other repositories in the [SigNoz](https://github.com/SigNoz) organization to contribute to. Here is a list of **highlighted** repositories:
|
||||
## 1.2 For Pull Request(s)
|
||||
|
||||
- [charts](https://github.com/SigNoz/charts)
|
||||
- [dashboards](https://github.com/SigNoz/dashboards)
|
||||
Contributions via pull requests are much appreciated. Once the approach is agreed upon ✅, make your changes and open a Pull Request(s).
|
||||
Before sending us a pull request, please ensure that,
|
||||
|
||||
Each repository has its own contributing guidelines. Please refer to the guidelines of the repository you want to contribute to.
|
||||
- Fork the SigNoz repo on GitHub, clone it on your machine.
|
||||
- Create a branch with your changes.
|
||||
- You are working against the latest source on the `develop` branch.
|
||||
- Modify the source; please focus only on the specific change you are contributing.
|
||||
- Ensure local tests pass.
|
||||
- Commit to your fork using clear commit messages.
|
||||
- Send us a pull request, answering any default questions in the pull request interface.
|
||||
- Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation
|
||||
- Once you've pushed your commits to GitHub, make sure that your branch can be auto-merged (there are no merge conflicts). If not, on your computer, merge main into your branch, resolve any merge conflicts, make sure everything still runs correctly and passes all the tests, and then push up those changes.
|
||||
- Once the change has been approved and merged, we will inform you in a comment.
|
||||
|
||||
## How can I get help?
|
||||
|
||||
Need assistance? Join our Slack community:
|
||||
- [`#contributing`](https://signoz-community.slack.com/archives/C01LWQ8KS7M)
|
||||
- [`#contributing-frontend`](https://signoz-community.slack.com/archives/C027134DM8B)
|
||||
GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
|
||||
[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
|
||||
|
||||
## Where do I go from here?
|
||||
**Note:** Unless your change is small, **please** consider submitting different Pull Request(s):
|
||||
|
||||
- Set up your [development environment](docs/contributing/development.md)
|
||||
- Deploy and observe [SigNoz in action with OpenTelemetry Demo Application](docs/otel-demo-docs.md)
|
||||
- Explore the [SigNoz Community Advocate Program](ADVOCATE.md), which recognises contributors who support the community, share their expertise, and help shape SigNoz's future.
|
||||
- Write [integration tests](docs/contributing/go/integration.md)
|
||||
* 1️⃣ First PR should include the overall structure of the new component:
|
||||
* Readme, configuration, interfaces or base classes, etc...
|
||||
* This PR is usually trivial to review, so the size limit does not apply to
|
||||
it.
|
||||
* 2️⃣ Second PR should include the concrete implementation of the component. If the
|
||||
size of this PR is larger than the recommended size, consider **splitting** ⚔️ it into
|
||||
multiple PRs.
|
||||
* If there are multiple sub-component then ideally each one should be implemented as
|
||||
a **separate** pull request.
|
||||
* Last PR should include changes to **any user-facing documentation.** And should include
|
||||
end-to-end tests if applicable. The component must be enabled
|
||||
only after sufficient testing, and there is enough confidence in the
|
||||
stability and quality of the component.
|
||||
|
||||
|
||||
You can always reach out to `ankit@signoz.io` to understand more about the repo and product. We are very responsive over email and [slack community](https://signoz.io/slack).
|
||||
|
||||
### Pointers:
|
||||
- If you find any **bugs** → please create an [**issue.**](https://github.com/SigNoz/signoz/issues/new?assignees=&labels=&template=bug_report.md&title=)
|
||||
- If you find anything **missing** in documentation → you can create an issue with the label **`documentation`**.
|
||||
- If you want to build any **new feature** → please create an [issue with the label **`enhancement`**.](https://github.com/SigNoz/signoz/issues/new?assignees=&labels=&template=feature_request.md&title=)
|
||||
- If you want to **discuss** something about the product, start a new [**discussion**.](https://github.com/SigNoz/signoz/discussions)
|
||||
- If you want to request a new **dashboard template** → please create an issue [here](https://github.com/SigNoz/signoz/issues/new?assignees=&labels=dashboard-template&projects=&template=request_dashboard.md&title=%5BDashboard+Request%5D+).
|
||||
|
||||
<hr>
|
||||
|
||||
### Conventions to follow when submitting Commits and Pull Request(s).
|
||||
|
||||
We try to follow [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/), more specifically the commits and PRs **should have type specifiers** prefixed in the name. [This](https://www.conventionalcommits.org/en/v1.0.0/#specification) should give you a better idea.
|
||||
|
||||
e.g. If you are submitting a fix for an issue in frontend, the PR name should be prefixed with **`fix(FE):`**
|
||||
|
||||
- Follow [GitHub Flow](https://guides.github.com/introduction/flow/) guidelines for your contribution flows.
|
||||
|
||||
- Feel free to ping us on [`#contributing`](https://signoz-community.slack.com/archives/C01LWQ8KS7M) or [`#contributing-frontend`](https://signoz-community.slack.com/archives/C027134DM8B) on our slack community if you need any help on this :)
|
||||
|
||||
**[`^top^`](#contributing-guidelines)**
|
||||
|
||||
<hr>
|
||||
|
||||
# 2. How to Contribute 🙋🏻♂️
|
||||
|
||||
#### There are primarily 2 areas in which you can contribute to SigNoz
|
||||
|
||||
- [**Frontend**](#3-develop-frontend-) (Written in Typescript, React)
|
||||
- [**Backend**](#4-contribute-to-backend-query-service-) (Query Service, written in Go)
|
||||
- [**Dashboard Templates**](#6-contribute-to-dashboards-) (JSON dashboard templates built with SigNoz)
|
||||
|
||||
Depending upon your area of expertise & interest, you can choose one or more to contribute. Below are detailed instructions to contribute in each area.
|
||||
|
||||
**Please note:** If you want to work on an issue, please add a brief description of your solution on the issue before starting work on it.
|
||||
|
||||
**[`^top^`](#contributing-guidelines)**
|
||||
|
||||
<hr>
|
||||
|
||||
# 3. Develop Frontend 🌚
|
||||
|
||||
**Need to Update: [https://github.com/SigNoz/signoz/tree/main/frontend](https://github.com/SigNoz/signoz/tree/main/frontend)**
|
||||
|
||||
Also, have a look at [Frontend README.md](https://github.com/SigNoz/signoz/blob/main/frontend/README.md) sections for more info on how to setup SigNoz frontend locally (with and without Docker).
|
||||
|
||||
## 3.1 Contribute to Frontend with Docker installation of SigNoz
|
||||
|
||||
- Clone the SigNoz repository and cd into signoz directory,
|
||||
```
|
||||
git clone https://github.com/SigNoz/signoz.git && cd signoz
|
||||
```
|
||||
- Comment out `frontend` service section at [`deploy/docker/docker-compose.yaml#L68`](https://github.com/SigNoz/signoz/blob/main/deploy/docker/docker-compose.yaml#L68)
|
||||
|
||||

|
||||
|
||||
|
||||
- run `cd deploy` to move to deploy directory,
|
||||
- Install signoz locally **without** the frontend,
|
||||
- Add / Uncomment the below configuration to query-service section at [`deploy/docker/docker-compose.yaml#L47`](https://github.com/SigNoz/signoz/blob/main/deploy/docker/docker-compose.yaml#L47)
|
||||
```
|
||||
ports:
|
||||
- "8080:8080"
|
||||
```
|
||||
<img width="869" alt="query service" src="https://user-images.githubusercontent.com/52788043/179010251-8489be31-04ca-42f8-b30d-ef0bb6accb6b.png">
|
||||
|
||||
- Next run,
|
||||
```
|
||||
cd deploy/docker
|
||||
sudo docker compose up -d
|
||||
```
- Run `cd ../../frontend` to move back to the frontend directory, and change the baseURL used in [`frontend/src/constants/env.ts#L2`](https://github.com/SigNoz/signoz/blob/main/frontend/src/constants/env.ts#L2). To do that, create a `.env` file in the `frontend` directory with the `FRONTEND_API_ENDPOINT` environment variable matching your configuration.

If you have the backend API exposed via the frontend nginx:

```
FRONTEND_API_ENDPOINT=http://localhost:3301
```

If not:

```
FRONTEND_API_ENDPOINT=http://localhost:8080
```
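For instance, from the repository root you could create the file like this (pick the endpoint that matches your setup; the value below assumes the nginx-exposed default):

```bash
echo "FRONTEND_API_ENDPOINT=http://localhost:3301" > frontend/.env
```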
- Next,

```
yarn install
yarn dev
```
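Before loading the UI, it can help to confirm the backend is reachable. A minimal check, assuming the port mapping above and that the health endpoint lives at the usual query-service path `/api/v1/health`:

```bash
# Prints the HTTP status code; expect 200 once query-service is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/api/v1/health
```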
## 3.2 Contribute to Frontend without installing SigNoz backend

If you don't want to install the SigNoz backend just for doing frontend development, we can provide you with test environments that you can use as the backend.

- Clone the SigNoz repository and cd into the signoz/frontend directory,

```
git clone https://github.com/SigNoz/signoz.git && cd signoz/frontend
```
- Create a file `.env` in the `frontend` directory with `FRONTEND_API_ENDPOINT=<test environment URL>`

- Next,

```
yarn install
yarn dev
```

Please ping us in the [`#contributing`](https://signoz-community.slack.com/archives/C01LWQ8KS7M) channel or ask `@Prashant Shahi` in our [Slack Community](https://signoz.io/slack) and we will DM you the `<test environment URL>`.

**Frontend should now be accessible at** [`http://localhost:3301/services`](http://localhost:3301/services)
**[`^top^`](#contributing-guidelines)**

<hr>

# 4. Contribute to Backend (Query-Service) 🌑

**Need to Update: [https://github.com/SigNoz/signoz/tree/main/pkg/query-service](https://github.com/SigNoz/signoz/tree/main/pkg/query-service)**

## 4.1 Prerequisites

### 4.1.1 Install SQLite3

- Run the `sqlite3` command to check whether SQLite3 is already installed on your machine.
- If it is not already installed, install it using the commands below
  - on Linux
    - on Debian / Ubuntu

```
sudo apt install sqlite3
```

  - on CentOS / Fedora / RedHat

```
sudo yum install sqlite
```
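A quick sanity check that works on either family of distros:

```bash
# Print the installed SQLite version, or a hint if it is missing
command -v sqlite3 >/dev/null && sqlite3 --version || echo "sqlite3 not found, install it first"
```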
## 4.2 To run ClickHouse setup (recommended for local development)

- Clone the SigNoz repository and cd into the signoz directory,

```
git clone https://github.com/SigNoz/signoz.git && cd signoz
```
- Run `sudo make dev-setup` to configure the local setup to run query-service,

- Comment out the `frontend` service section at [`deploy/docker/docker-compose.yaml#L68`](https://github.com/SigNoz/signoz/blob/main/deploy/docker/docker-compose.yaml#L68)

<img width="982" alt="develop-frontend" src="https://user-images.githubusercontent.com/52788043/179043977-012be8b0-a2ed-40d1-b2e6-2ab72d7989c0.png">

- Comment out the `query-service` section at [`deploy/docker/docker-compose.yaml#L41`](https://github.com/SigNoz/signoz/blob/main/deploy/docker/docker-compose.yaml#L41)

<img width="1068" alt="Screenshot 2022-07-14 at 22 48 07" src="https://user-images.githubusercontent.com/52788043/179044151-a65ba571-db0b-4a16-b64b-ca3fadcf3af0.png">

- Add the below configuration to the `clickhouse` section at [`deploy/docker/docker-compose.yaml`](https://github.com/SigNoz/signoz/blob/main/deploy/docker/docker-compose.yaml)

```
ports:
  - 9001:9000
```

<img width="1013" alt="Screenshot 2022-07-14 at 22 50 37" src="https://user-images.githubusercontent.com/52788043/179044544-a293d3bc-4c4f-49ea-a276-505a381de67d.png">
- Run `cd pkg/query-service/` to move to the `query-service` directory,

- Then, create a `.env` file with the following environment variable

```
SIGNOZ_SQLSTORE_SQLITE_PATH="./signoz.db"
```

This points your local environment at the right relational datasource path (`RELATIONAL_DATASOURCE_PATH`), as referenced in [`./constants/constants.go#L38`](https://github.com/SigNoz/signoz/blob/main/pkg/query-service/constants/constants.go#L38).
- Now, install SigNoz locally **without** the `frontend` and `query-service`,

  - If you are using `x86_64` processors (all Intel/AMD processors), run `sudo make run-x86`

  - If you are on `arm64` processors (Apple M1 Macs), run `sudo make run-arm` (a small helper for picking the right target is sketched below)
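A hypothetical convenience wrapper, not part of the Makefile, that picks the target from the machine architecture:

```bash
# Dispatch to the right make target based on `uname -m`
case "$(uname -m)" in
  x86_64)        sudo make run-x86 ;;
  arm64|aarch64) sudo make run-arm ;;
  *) echo "unsupported architecture: $(uname -m)" >&2 ;;
esac
```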
#### Run locally

```
ClickHouseUrl=tcp://localhost:9001 STORAGE=clickhouse go run main.go
```
#### Build and Run locally

```
cd pkg/query-service
go build -o build/query-service main.go
ClickHouseUrl=tcp://localhost:9001 STORAGE=clickhouse build/query-service
```
#### Docker Images

The Docker images of query-service are available at https://hub.docker.com/r/signoz/query-service

```
docker pull signoz/query-service
docker pull signoz/query-service:latest
docker pull signoz/query-service:develop
```
### Important Note:

**Query Service should now be available at** [`http://localhost:8080`](http://localhost:8080)

If you want to see how the frontend plays with query service, you can also run the frontend in your local environment with the baseURL changed to `http://localhost:8080` in [`frontend/src/constants/env.ts`](https://github.com/SigNoz/signoz/blob/main/frontend/src/constants/env.ts), since the `query-service` is now running at port `8080`.
<!-- Instead of configuring a local setup, you can also use [Gitpod](https://www.gitpod.io/), a VSCode-based Web IDE.

Click the button below. A workspace with all required environments will be created.

[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/SigNoz/signoz)

> To use it on your forked repo, edit the 'Open in Gitpod' button URL to `https://gitpod.io/#https://github.com/<your-github-username>/signoz` -->

**[`^top^`](#contributing-guidelines)**

<hr>
# 5. Contribute to SigNoz Helm Chart 📊

**Need to Update: [https://github.com/SigNoz/charts](https://github.com/SigNoz/charts).**

## 5.1 To run helm chart for local development

- Clone the SigNoz charts repository and cd into the charts directory,

```
git clone https://github.com/SigNoz/charts.git && cd charts
```

- It is recommended to use a lightweight Kubernetes (k8s) cluster for local development:

  - [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
  - [k3d](https://k3d.io/#installation)
  - [minikube](https://minikube.sigs.k8s.io/docs/start/)
- Create a k8s cluster and make sure `kubectl` points to the locally created cluster (see the sketch after this list for an example with kind),

- Run `make dev-install` to install the SigNoz chart with the `my-release` release name in the `platform` namespace,

- Next, run

```
kubectl -n platform port-forward svc/my-release-signoz-frontend 3301:3301
```

to make the SigNoz UI available at [localhost:3301](http://localhost:3301)
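A minimal end-to-end sketch using kind (the cluster name is arbitrary):

```bash
kind create cluster --name signoz
kubectl config current-context           # should print kind-signoz
make dev-install                         # run from the charts directory
kubectl -n platform get pods -w          # wait until all pods are Running
kubectl -n platform port-forward svc/my-release-signoz-frontend 3301:3301
```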
**5.1.1 To install the HotROD sample app:**

```bash
curl -sL https://github.com/SigNoz/signoz/raw/main/sample-apps/hotrod/hotrod-install.sh \
  | HELM_RELEASE=my-release SIGNOZ_NAMESPACE=platform bash
```
**5.1.2 To load data with the HotROD sample app:**

```bash
kubectl -n sample-application run strzal --image=djbingham/curl \
  --restart='OnFailure' -i --tty --rm --command -- curl -X POST -F \
  'user_count=6' -F 'spawn_rate=2' http://locust-master:8089/swarm
```
**5.1.3 To stop the load generation:**

```bash
kubectl -n sample-application run strzal --image=djbingham/curl \
  --restart='OnFailure' -i --tty --rm --command -- curl \
  http://locust-master:8089/stop
```
**5.1.4 To delete the HotROD sample app:**

```bash
curl -sL https://github.com/SigNoz/signoz/raw/main/sample-apps/hotrod/hotrod-delete.sh \
  | HOTROD_NAMESPACE=sample-application bash
```
**[`^top^`](#contributing-guidelines)**

---
# 6. Contribute to Dashboards 📈

**Need to Update: [https://github.com/SigNoz/dashboards](https://github.com/SigNoz/dashboards)**

To contribute a new dashboard template for any service, follow the contribution guidelines in the [Dashboard Contributing Guide](https://github.com/SigNoz/dashboards/blob/main/CONTRIBUTING.md). In brief:

1. Create a dashboard JSON file.
2. Add a README file explaining the dashboard, the metrics ingested, and the configurations needed.
3. Include screenshots of the dashboard in the `assets/` directory.
4. Submit a pull request for review.
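Before opening the PR, it can be worth validating the template locally. A sketch, assuming `jq` is installed and using an illustrative file name:

```bash
jq empty my-dashboard.json && echo "valid JSON"
ls assets/   # confirm the screenshots referenced in the README are present
```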
## Other Ways to Contribute

There are many other ways to get involved with the community and to participate in this project:

- Use the product, submitting GitHub issues when a problem is found.
- Help code review pull requests and participate in issue threads.
- Submit a new feature request as an issue.
- Help answer questions on forums such as Stack Overflow and the [SigNoz Community Slack Channel](https://signoz.io/slack).
- Tell others about the project on Twitter, your blog, etc.

Again, feel free to ping us on [`#contributing`](https://signoz-community.slack.com/archives/C01LWQ8KS7M) or [`#contributing-frontend`](https://signoz-community.slack.com/archives/C027134DM8B) on our Slack community if you need any help with this :)

Thank You!
2 LICENSE
@@ -2,7 +2,7 @@ Copyright (c) 2020-present SigNoz Inc.
Portions of this software are licensed as follows:

* All content that resides under the "ee/" and the "cmd/enterprise/" directory of this repository, if that directory exists, is licensed under the license defined in "ee/LICENSE".
* All content that resides under the "ee/" directory of this repository, if that directory exists, is licensed under the license defined in "ee/LICENSE".
* All third party components incorporated into the SigNoz Software are licensed under the original license provided by the owner of the applicable component.
* Content outside of the above mentioned directories or restrictions above is available under the "MIT Expat" license as defined below.
395 Makefile
@@ -1,241 +1,196 @@
##############################################################
# variables
##############################################################
SHELL := /bin/bash
SRC ?= $(shell pwd)
NAME ?= signoz
OS ?= $(shell uname -s | tr '[A-Z]' '[a-z]')
ARCH ?= $(shell uname -m | sed 's/x86_64/amd64/g' | sed 's/aarch64/arm64/g')
COMMIT_SHORT_SHA ?= $(shell git rev-parse --short HEAD)
BRANCH_NAME ?= $(subst /,-,$(shell git rev-parse --abbrev-ref HEAD))
VERSION ?= $(BRANCH_NAME)-$(COMMIT_SHORT_SHA)
TIMESTAMP ?= $(shell date -u +"%Y-%m-%dT%H:%M:%SZ")
ARCHS ?= amd64 arm64
TARGET_DIR ?= $(shell pwd)/target
#
# Reference Guide - https://www.gnu.org/software/make/manual/make.html
#

ZEUS_URL ?= https://api.signoz.cloud
GO_BUILD_LDFLAG_ZEUS_URL = -X github.com/SigNoz/signoz/ee/zeus.url=$(ZEUS_URL)
LICENSE_URL ?= https://license.signoz.io
GO_BUILD_LDFLAG_LICENSE_SIGNOZ_IO = -X github.com/SigNoz/signoz/ee/zeus.deprecatedURL=$(LICENSE_URL)
# Build variables
BUILD_VERSION ?= $(shell git describe --always --tags)
BUILD_HASH ?= $(shell git rev-parse --short HEAD)
BUILD_TIME ?= $(shell date -u +"%Y-%m-%dT%H:%M:%SZ")
BUILD_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
DEV_LICENSE_SIGNOZ_IO ?= https://staging-license.signoz.io/api/v1
ZEUS_URL ?= https://api.signoz.cloud
DEV_BUILD ?= "" # set to any non-empty value to enable dev build

GO_BUILD_VERSION_LDFLAGS = -X github.com/SigNoz/signoz/pkg/version.version=$(VERSION) -X github.com/SigNoz/signoz/pkg/version.hash=$(COMMIT_SHORT_SHA) -X github.com/SigNoz/signoz/pkg/version.time=$(TIMESTAMP) -X github.com/SigNoz/signoz/pkg/version.branch=$(BRANCH_NAME)
GO_BUILD_ARCHS_COMMUNITY = $(addprefix go-build-community-,$(ARCHS))
GO_BUILD_CONTEXT_COMMUNITY = $(SRC)/cmd/community
GO_BUILD_LDFLAGS_COMMUNITY = $(GO_BUILD_VERSION_LDFLAGS) -X github.com/SigNoz/signoz/pkg/version.variant=community
GO_BUILD_ARCHS_ENTERPRISE = $(addprefix go-build-enterprise-,$(ARCHS))
GO_BUILD_ARCHS_ENTERPRISE_RACE = $(addprefix go-build-enterprise-race-,$(ARCHS))
GO_BUILD_CONTEXT_ENTERPRISE = $(SRC)/cmd/enterprise
GO_BUILD_LDFLAGS_ENTERPRISE = $(GO_BUILD_VERSION_LDFLAGS) -X github.com/SigNoz/signoz/pkg/version.variant=enterprise $(GO_BUILD_LDFLAG_ZEUS_URL) $(GO_BUILD_LDFLAG_LICENSE_SIGNOZ_IO)
# Internal variables or constants.
FRONTEND_DIRECTORY ?= frontend
QUERY_SERVICE_DIRECTORY ?= pkg/query-service
EE_QUERY_SERVICE_DIRECTORY ?= ee/query-service
STANDALONE_DIRECTORY ?= deploy/docker
SWARM_DIRECTORY ?= deploy/docker-swarm
CH_HISTOGRAM_QUANTILE_DIRECTORY ?= scripts/clickhouse/histogramquantile

DOCKER_BUILD_ARCHS_COMMUNITY = $(addprefix docker-build-community-,$(ARCHS))
DOCKERFILE_COMMUNITY = $(SRC)/cmd/community/Dockerfile
DOCKER_REGISTRY_COMMUNITY ?= docker.io/signoz/signoz-community
DOCKER_BUILD_ARCHS_ENTERPRISE = $(addprefix docker-build-enterprise-,$(ARCHS))
DOCKERFILE_ENTERPRISE = $(SRC)/cmd/enterprise/Dockerfile
DOCKER_REGISTRY_ENTERPRISE ?= docker.io/signoz/signoz
JS_BUILD_CONTEXT = $(SRC)/frontend
GOOS ?= $(shell go env GOOS)
GOARCH ?= $(shell go env GOARCH)
GOPATH ?= $(shell go env GOPATH)

##############################################################
# directories
##############################################################
$(TARGET_DIR):
    mkdir -p $(TARGET_DIR)
REPONAME ?= signoz
DOCKER_TAG ?= $(subst v,,$(BUILD_VERSION))
FRONTEND_DOCKER_IMAGE ?= frontend
QUERY_SERVICE_DOCKER_IMAGE ?= query-service

##############################################################
# common commands
##############################################################
.PHONY: help
help: ## Displays help.
    @awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n\nTargets:\n"} /^[a-z0-9A-Z_-]+:.*?##/ { printf " \033[36m%-40s\033[0m %s\n", $$1, $$2 }' $(MAKEFILE_LIST)
# Build-time Go variables
PACKAGE?=go.signoz.io/signoz
buildVersion=${PACKAGE}/pkg/query-service/version.buildVersion
buildHash=${PACKAGE}/pkg/query-service/version.buildHash
buildTime=${PACKAGE}/pkg/query-service/version.buildTime
gitBranch=${PACKAGE}/pkg/query-service/version.gitBranch
licenseSignozIo=${PACKAGE}/ee/query-service/constants.LicenseSignozIo
zeusURL=${PACKAGE}/ee/query-service/constants.ZeusURL

##############################################################
# devenv commands
##############################################################
.PHONY: devenv-clickhouse
devenv-clickhouse: ## Run clickhouse in devenv
    @cd .devenv/docker/clickhouse; \
    docker compose -f compose.yaml up -d
LD_FLAGS=-X ${buildHash}=${BUILD_HASH} -X ${buildTime}=${BUILD_TIME} -X ${buildVersion}=${BUILD_VERSION} -X ${gitBranch}=${BUILD_BRANCH} -X ${zeusURL}=${ZEUS_URL}
DEV_LD_FLAGS=-X ${licenseSignozIo}=${DEV_LICENSE_SIGNOZ_IO}

.PHONY: devenv-postgres
devenv-postgres: ## Run postgres in devenv
    @cd .devenv/docker/postgres; \
    docker compose -f compose.yaml up -d
all: build-push-frontend build-push-query-service

.PHONY: devenv-signoz-otel-collector
devenv-signoz-otel-collector: ## Run signoz-otel-collector in devenv (requires clickhouse to be running)
    @cd .devenv/docker/signoz-otel-collector; \
    docker compose -f compose.yaml up -d
# Steps to build static files of frontend
build-frontend-static:
    @echo "------------------"
    @echo "--> Building frontend static files"
    @echo "------------------"
    @cd $(FRONTEND_DIRECTORY) && \
    rm -rf build && \
    CI=1 yarn install && \
    yarn build && \
    ls -l build

.PHONY: devenv-up
devenv-up: devenv-clickhouse devenv-signoz-otel-collector ## Start both clickhouse and signoz-otel-collector for local development
    @echo "Development environment is ready!"
    @echo " - ClickHouse: http://localhost:8123"
    @echo " - Signoz OTel Collector: grpc://localhost:4317, http://localhost:4318"
# Steps to build and push docker image of frontend
.PHONY: build-frontend-amd64 build-push-frontend
# Step to build docker image of frontend in amd64 (used in build pipeline)
build-frontend-amd64: build-frontend-static
    @echo "------------------"
    @echo "--> Building frontend docker image for amd64"
    @echo "------------------"
    @cd $(FRONTEND_DIRECTORY) && \
    docker build --file Dockerfile -t $(REPONAME)/$(FRONTEND_DOCKER_IMAGE):$(DOCKER_TAG) \
    --build-arg TARGETPLATFORM="linux/amd64" .

.PHONY: devenv-clickhouse-clean
devenv-clickhouse-clean: ## Clean all ClickHouse data from filesystem
    @echo "Removing ClickHouse data..."
    @rm -rf .devenv/docker/clickhouse/fs/tmp/*
    @echo "ClickHouse data cleaned!"
# Step to build and push docker image of frontend(used in push pipeline)
build-push-frontend: build-frontend-static
    @echo "------------------"
    @echo "--> Building and pushing frontend docker image"
    @echo "------------------"
    @cd $(FRONTEND_DIRECTORY) && \
    docker buildx build --file Dockerfile --progress plain --push --platform linux/arm64,linux/amd64 \
    --tag $(REPONAME)/$(FRONTEND_DOCKER_IMAGE):$(DOCKER_TAG) .

##############################################################
# go commands
##############################################################
.PHONY: go-run-enterprise
go-run-enterprise: ## Runs the enterprise go backend server
    @SIGNOZ_INSTRUMENTATION_LOGS_LEVEL=debug \
    SIGNOZ_SQLSTORE_SQLITE_PATH=signoz.db \
    SIGNOZ_WEB_ENABLED=false \
    SIGNOZ_TOKENIZER_JWT_SECRET=secret \
    SIGNOZ_ALERTMANAGER_PROVIDER=signoz \
    SIGNOZ_TELEMETRYSTORE_PROVIDER=clickhouse \
    SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://127.0.0.1:9000 \
    SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER=cluster \
    go run -race \
    $(GO_BUILD_CONTEXT_ENTERPRISE)/*.go server

.PHONY: go-test
go-test: ## Runs go unit tests
    @go test -race ./...

.PHONY: go-run-community
go-run-community: ## Runs the community go backend server
    @SIGNOZ_INSTRUMENTATION_LOGS_LEVEL=debug \
    SIGNOZ_SQLSTORE_SQLITE_PATH=signoz.db \
    SIGNOZ_WEB_ENABLED=false \
    SIGNOZ_TOKENIZER_JWT_SECRET=secret \
    SIGNOZ_ALERTMANAGER_PROVIDER=signoz \
    SIGNOZ_TELEMETRYSTORE_PROVIDER=clickhouse \
    SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://127.0.0.1:9000 \
    SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_CLUSTER=cluster \
    go run -race \
    $(GO_BUILD_CONTEXT_COMMUNITY)/*.go server

.PHONY: go-build-community $(GO_BUILD_ARCHS_COMMUNITY)
go-build-community: ## Builds the go backend server for community
go-build-community: $(GO_BUILD_ARCHS_COMMUNITY)
$(GO_BUILD_ARCHS_COMMUNITY): go-build-community-%: $(TARGET_DIR)
    @mkdir -p $(TARGET_DIR)/$(OS)-$*
    @echo ">> building binary $(TARGET_DIR)/$(OS)-$*/$(NAME)-community"
    @if [ $* = "arm64" ]; then \
        GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_COMMUNITY) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME)-community -ldflags "-s -w $(GO_BUILD_LDFLAGS_COMMUNITY)"; \
# Steps to build static binary of query service
.PHONY: build-query-service-static
build-query-service-static:
    @echo "------------------"
    @echo "--> Building query-service static binary"
    @echo "------------------"
    @if [ $(DEV_BUILD) != "" ]; then \
        cd $(QUERY_SERVICE_DIRECTORY) && \
        CGO_ENABLED=1 go build -tags timetzdata -a -o ./bin/query-service-${GOOS}-${GOARCH} \
        -ldflags "-linkmode external -extldflags '-static' -s -w ${LD_FLAGS} ${DEV_LD_FLAGS}"; \
    else \
        GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_COMMUNITY) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME)-community -ldflags "-s -w $(GO_BUILD_LDFLAGS_COMMUNITY)"; \
        cd $(QUERY_SERVICE_DIRECTORY) && \
        CGO_ENABLED=1 go build -tags timetzdata -a -o ./bin/query-service-${GOOS}-${GOARCH} \
        -ldflags "-linkmode external -extldflags '-static' -s -w ${LD_FLAGS}"; \
    fi

.PHONY: build-query-service-static-amd64
build-query-service-static-amd64:
    make GOARCH=amd64 build-query-service-static

.PHONY: go-build-enterprise $(GO_BUILD_ARCHS_ENTERPRISE)
go-build-enterprise: ## Builds the go backend server for enterprise
go-build-enterprise: $(GO_BUILD_ARCHS_ENTERPRISE)
$(GO_BUILD_ARCHS_ENTERPRISE): go-build-enterprise-%: $(TARGET_DIR)
    @mkdir -p $(TARGET_DIR)/$(OS)-$*
    @echo ">> building binary $(TARGET_DIR)/$(OS)-$*/$(NAME)"
    @if [ $* = "arm64" ]; then \
        GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
.PHONY: build-query-service-static-arm64
build-query-service-static-arm64:
    make CC=aarch64-linux-gnu-gcc GOARCH=arm64 build-query-service-static

# Steps to build static binary of query service for all platforms
.PHONY: build-query-service-static-all
build-query-service-static-all: build-query-service-static-amd64 build-query-service-static-arm64 build-frontend-static

# Steps to build and push docker image of query service
.PHONY: build-query-service-amd64 build-push-query-service
# Step to build docker image of query service in amd64 (used in build pipeline)
build-query-service-amd64: build-query-service-static-amd64 build-frontend-static
    @echo "------------------"
    @echo "--> Building query-service docker image for amd64"
    @echo "------------------"
    @docker build --file $(QUERY_SERVICE_DIRECTORY)/Dockerfile \
    --tag $(REPONAME)/$(QUERY_SERVICE_DOCKER_IMAGE):$(DOCKER_TAG) \
    --build-arg TARGETPLATFORM="linux/amd64" .

# Step to build and push docker image of query in amd64 and arm64 (used in push pipeline)
build-push-query-service: build-query-service-static-all
    @echo "------------------"
    @echo "--> Building and pushing query-service docker image"
    @echo "------------------"
    @docker buildx build --file $(QUERY_SERVICE_DIRECTORY)/Dockerfile --progress plain \
    --push --platform linux/arm64,linux/amd64 \
    --tag $(REPONAME)/$(QUERY_SERVICE_DOCKER_IMAGE):$(DOCKER_TAG) .

# Step to build EE docker image of query service in amd64 (used in build pipeline)
build-ee-query-service-amd64:
    @echo "------------------"
    @echo "--> Building query-service docker image for amd64"
    @echo "------------------"
    make QUERY_SERVICE_DIRECTORY=${EE_QUERY_SERVICE_DIRECTORY} build-query-service-amd64

# Step to build and push EE docker image of query in amd64 and arm64 (used in push pipeline)
build-push-ee-query-service:
    @echo "------------------"
    @echo "--> Building and pushing query-service docker image"
    @echo "------------------"
    make QUERY_SERVICE_DIRECTORY=${EE_QUERY_SERVICE_DIRECTORY} build-push-query-service

dev-setup:
    mkdir -p /var/lib/signoz
    sqlite3 /var/lib/signoz/signoz.db "VACUUM";
    mkdir -p pkg/query-service/config/dashboards
    @echo "------------------"
    @echo "--> Local Setup completed"
    @echo "------------------"

pull-signoz:
    @docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.yaml pull

run-signoz:
    @docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.yaml up --build -d

run-testing:
    @docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.testing.yaml up --build -d

down-signoz:
    @docker-compose -f $(STANDALONE_DIRECTORY)/docker-compose.yaml down -v

clear-standalone-data:
    @docker run --rm -v "$(PWD)/$(STANDALONE_DIRECTORY)/data:/pwd" busybox \
    sh -c "cd /pwd && rm -rf alertmanager/* clickhouse*/* signoz/* zookeeper-*/*"

clear-swarm-data:
    @docker run --rm -v "$(PWD)/$(SWARM_DIRECTORY)/data:/pwd" busybox \
    sh -c "cd /pwd && rm -rf alertmanager/* clickhouse*/* signoz/* zookeeper-*/*"

clear-standalone-ch:
    @docker run --rm -v "$(PWD)/$(STANDALONE_DIRECTORY)/data:/pwd" busybox \
    sh -c "cd /pwd && rm -rf clickhouse*/* zookeeper-*/*"

clear-swarm-ch:
    @docker run --rm -v "$(PWD)/$(SWARM_DIRECTORY)/data:/pwd" busybox \
    sh -c "cd /pwd && rm -rf clickhouse*/* zookeeper-*/*"

check-no-ee-references:
    @echo "Checking for 'ee' package references in 'pkg' directory..."
    @if grep -R --include="*.go" '.*/ee/.*' pkg/; then \
        echo "Error: Found references to 'ee' packages in 'pkg' directory"; \
        exit 1; \
    else \
        GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
        echo "No references to 'ee' packages found in 'pkg' directory"; \
    fi

.PHONY: go-build-enterprise-race $(GO_BUILD_ARCHS_ENTERPRISE_RACE)
go-build-enterprise-race: ## Builds the go backend server for enterprise with race
go-build-enterprise-race: $(GO_BUILD_ARCHS_ENTERPRISE_RACE)
$(GO_BUILD_ARCHS_ENTERPRISE_RACE): go-build-enterprise-race-%: $(TARGET_DIR)
    @mkdir -p $(TARGET_DIR)/$(OS)-$*
    @echo ">> building binary $(TARGET_DIR)/$(OS)-$*/$(NAME)"
    @if [ $* = "arm64" ]; then \
        GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -race -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
test:
    go test ./pkg/...

goreleaser-snapshot:
    @if [[ ${GORELEASER_WORKDIR} ]]; then \
        cd ${GORELEASER_WORKDIR} && \
        goreleaser release --clean --snapshot; \
        cd -; \
    else \
        GOARCH=$* GOOS=$(OS) go build -C $(GO_BUILD_CONTEXT_ENTERPRISE) -race -tags timetzdata -o $(TARGET_DIR)/$(OS)-$*/$(NAME) -ldflags "-s -w $(GO_BUILD_LDFLAGS_ENTERPRISE)"; \
        goreleaser release --clean --snapshot; \
    fi

##############################################################
# js commands
##############################################################
.PHONY: js-build
js-build: ## Builds the js frontend
    @echo ">> building js frontend"
    @cd $(JS_BUILD_CONTEXT) && CI=1 yarn install && yarn build

##############################################################
# docker commands
##############################################################
.PHONY: docker-build-community $(DOCKER_BUILD_ARCHS_COMMUNITY)
docker-build-community: ## Builds the docker image for community
docker-build-community: $(DOCKER_BUILD_ARCHS_COMMUNITY)
$(DOCKER_BUILD_ARCHS_COMMUNITY): docker-build-community-%: go-build-community-% js-build
    @echo ">> building docker image for $(NAME)-community"
    @docker build -t "$(DOCKER_REGISTRY_COMMUNITY):$(VERSION)-$*" \
    --build-arg TARGETARCH="$*" \
    -f $(DOCKERFILE_COMMUNITY) $(SRC)

.PHONY: docker-buildx-community
docker-buildx-community: ## Builds the docker image for community using buildx
docker-buildx-community: go-build-community js-build
    @echo ">> building docker image for $(NAME)-community"
    @docker buildx build --file $(DOCKERFILE_COMMUNITY) \
    --progress plain \
    --platform linux/arm64,linux/amd64 \
    --push \
    --tag $(DOCKER_REGISTRY_COMMUNITY):$(VERSION) $(SRC)

.PHONY: docker-build-enterprise $(DOCKER_BUILD_ARCHS_ENTERPRISE)
docker-build-enterprise: ## Builds the docker image for enterprise
docker-build-enterprise: $(DOCKER_BUILD_ARCHS_ENTERPRISE)
$(DOCKER_BUILD_ARCHS_ENTERPRISE): docker-build-enterprise-%: go-build-enterprise-% js-build
    @echo ">> building docker image for $(NAME)"
    @docker build -t "$(DOCKER_REGISTRY_ENTERPRISE):$(VERSION)-$*" \
    --build-arg TARGETARCH="$*" \
    -f $(DOCKERFILE_ENTERPRISE) $(SRC)

.PHONY: docker-buildx-enterprise
docker-buildx-enterprise: ## Builds the docker image for enterprise using buildx
docker-buildx-enterprise: go-build-enterprise js-build
    @echo ">> building docker image for $(NAME)"
    @docker buildx build --file $(DOCKERFILE_ENTERPRISE) \
    --progress plain \
    --platform linux/arm64,linux/amd64 \
    --push \
    --tag $(DOCKER_REGISTRY_ENTERPRISE):$(VERSION) $(SRC)

##############################################################
# python commands
##############################################################
.PHONY: py-fmt
py-fmt: ## Run black for integration tests
    @cd tests/integration && uv run black .

.PHONY: py-lint
py-lint: ## Run lint for integration tests
    @cd tests/integration && uv run isort .
    @cd tests/integration && uv run autoflake .
    @cd tests/integration && uv run pylint .

.PHONY: py-test-setup
py-test-setup: ## Runs integration tests
    @cd tests/integration && uv run pytest --basetemp=./tmp/ -vv --reuse --capture=no src/bootstrap/setup.py::test_setup

.PHONY: py-test-teardown
py-test-teardown: ## Runs integration tests with teardown
    @cd tests/integration && uv run pytest --basetemp=./tmp/ -vv --teardown --capture=no src/bootstrap/setup.py::test_teardown

.PHONY: py-test
py-test: ## Runs integration tests
    @cd tests/integration && uv run pytest --basetemp=./tmp/ -vv --capture=no src/

.PHONY: py-clean
py-clean: ## Clear all pycache and pytest cache from tests directory recursively
    @echo ">> cleaning python cache files from tests directory"
    @find tests -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
    @find tests -type d -name .pytest_cache -exec rm -rf {} + 2>/dev/null || true
    @find tests -type f -name "*.pyc" -delete 2>/dev/null || true
    @find tests -type f -name "*.pyo" -delete 2>/dev/null || true
    @echo ">> python cache cleaned"

##############################################################
# generate commands
##############################################################
.PHONY: gen-mocks
gen-mocks:
    @echo ">> Generating mocks"
    @mockery --config .mockery.yml
goreleaser-snapshot-histogram-quantile:
    make GORELEASER_WORKDIR=$(CH_HISTOGRAM_QUANTILE_DIRECTORY) goreleaser-snapshot
11 README.md
@@ -8,6 +8,7 @@
<p align="center">All your logs, metrics, and traces in one place. Monitor your application, spot issues before they occur and troubleshoot downtime quickly with rich context. SigNoz is a cost-effective open-source alternative to Datadog and New Relic. Visit <a href="https://signoz.io" target="_blank">signoz.io</a> for the full documentation, tutorials, and guide.</p>

<p align="center">
<img alt="Downloads" src="https://img.shields.io/docker/pulls/signoz/query-service?label=Docker Downloads"> </a>
<img alt="GitHub issues" src="https://img.shields.io/github/issues/signoz/signoz"> </a>
<a href="https://twitter.com/intent/tweet?text=Monitor%20your%20applications%20and%20troubleshoot%20problems%20with%20SigNoz,%20an%20open-source%20alternative%20to%20DataDog,%20NewRelic.&url=https://signoz.io/&via=SigNozHQ&hashtags=opensource,signoz,observability">
<img alt="tweet" src="https://img.shields.io/twitter/url/http/shields.io.svg?style=social"> </a>

@@ -218,25 +219,17 @@ Not sure how to get started? Just ping us on `#contributing` in our [slack commu

- [Nityananda Gohain](https://github.com/nityanandagohain)
- [Srikanth Chekuri](https://github.com/srikanthccv)
- [Vishal Sharma](https://github.com/makeavish)
- [Shivanshu Raj Shrivastava](https://github.com/shivanshuraj1333)
- [Ekansh Gupta](https://github.com/eKuG)
- [Aniket Agarwal](https://github.com/aniketio-ctrl)

#### Frontend

- [Yunus M](https://github.com/YounixM)
- [Vikrant Gupta](https://github.com/vikrantgupta25)
- [Sagar Rajput](https://github.com/SagarRajput-7)
- [Shaheer Kochai](https://github.com/ahmadshaheer)
- [Amlan Kumar Nandy](https://github.com/amlannandy)
- [Sahil Khan](https://github.com/sawhil)
- [Aditya Singh](https://github.com/aks07)
- [Abhi Kumar](https://github.com/ahrefabhi)

#### DevOps

- [Prashant Shahi](https://github.com/prashant-shahi)
- [Vibhu Pandey](https://github.com/therealpandey)
- [Vibhu Pandey](https://github.com/grandwizard28)

<br /><br />
@@ -1,59 +0,0 @@
# yaml-language-server: $schema=https://goreleaser.com/static/schema-pro.json
# vim: set ts=2 sw=2 tw=0 fo=cnqoj
version: 2

project_name: signoz-community

before:
  hooks:
    - go mod tidy

builds:
  - id: signoz
    binary: bin/signoz
    main: ./cmd/community
    goos:
      - linux
      - darwin
    goarch:
      - amd64
      - arm64
    goamd64:
      - v1
    goarm64:
      - v8.0
    ldflags:
      - -s -w
      - -X github.com/SigNoz/signoz/pkg/version.version=v{{ .Version }}
      - -X github.com/SigNoz/signoz/pkg/version.variant=community
      - -X github.com/SigNoz/signoz/pkg/version.hash={{ .ShortCommit }}
      - -X github.com/SigNoz/signoz/pkg/version.time={{ .CommitTimestamp }}
      - -X github.com/SigNoz/signoz/pkg/version.branch={{ .Branch }}
      - -X github.com/SigNoz/signoz/pkg/analytics.key=9kRrJ7oPCGPEJLF6QjMPLt5bljFhRQBr
    mod_timestamp: "{{ .CommitTimestamp }}"
    tags:
      - timetzdata

archives:
  - formats:
      - tar.gz
    name_template: >-
      {{ .ProjectName }}_{{- .Os }}_{{- .Arch }}
    wrap_in_directory: true
    strip_binary_directory: false
    files:
      - src: README.md
        dst: README.md
      - src: LICENSE
        dst: LICENSE
      - src: frontend/build
        dst: web
      - src: conf
        dst: conf
      - src: templates
        dst: templates

release:
  name_template: "v{{ .Version }}"
  draft: false
  prerelease: auto
@@ -1,19 +0,0 @@
FROM alpine:3.20.3
LABEL maintainer="signoz"
WORKDIR /root

ARG OS="linux"
ARG TARGETARCH

RUN apk update && \
    apk add ca-certificates && \
    rm -rf /var/cache/apk/*

COPY ./target/${OS}-${TARGETARCH}/signoz-community /root/signoz
COPY ./templates/email /root/templates
COPY frontend/build/ /etc/signoz/web/

RUN chmod 755 /root /root/signoz

ENTRYPOINT ["./signoz", "server"]
@@ -1,20 +0,0 @@
ARG ALPINE_SHA="pass-a-valid-docker-sha-otherwise-this-will-fail"

FROM alpine@sha256:${ALPINE_SHA}
LABEL maintainer="signoz"
WORKDIR /root

ARG OS="linux"
ARG ARCH

RUN apk update && \
    apk add ca-certificates && \
    rm -rf /var/cache/apk/*

COPY ./target/${OS}-${ARCH}/signoz-community /root/signoz-community
COPY ./templates/email /root/templates
COPY frontend/build/ /etc/signoz/web/

RUN chmod 755 /root /root/signoz-community

ENTRYPOINT ["./signoz-community", "server"]
@@ -1,19 +0,0 @@
package main

import (
    "log/slog"

    "github.com/SigNoz/signoz/cmd"
    "github.com/SigNoz/signoz/pkg/instrumentation"
)

func main() {
    // initialize logger for logging in the cmd/ package. This logger is different from the logger used in the application.
    logger := instrumentation.NewLogger(instrumentation.Config{Logs: instrumentation.LogsConfig{Level: slog.LevelInfo}})

    // register a list of commands to the root command
    registerServer(cmd.RootCmd, logger)
    cmd.RegisterGenerate(cmd.RootCmd, logger)

    cmd.Execute(logger)
}
@@ -1,126 +0,0 @@
package main

import (
    "context"
    "log/slog"

    "github.com/SigNoz/signoz/cmd"
    "github.com/SigNoz/signoz/pkg/analytics"
    "github.com/SigNoz/signoz/pkg/authn"
    "github.com/SigNoz/signoz/pkg/authz"
    "github.com/SigNoz/signoz/pkg/authz/openfgaauthz"
    "github.com/SigNoz/signoz/pkg/authz/openfgaschema"
    "github.com/SigNoz/signoz/pkg/factory"
    "github.com/SigNoz/signoz/pkg/gateway"
    "github.com/SigNoz/signoz/pkg/gateway/noopgateway"
    "github.com/SigNoz/signoz/pkg/licensing"
    "github.com/SigNoz/signoz/pkg/licensing/nooplicensing"
    "github.com/SigNoz/signoz/pkg/modules/dashboard"
    "github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
    "github.com/SigNoz/signoz/pkg/modules/organization"
    "github.com/SigNoz/signoz/pkg/modules/role"
    "github.com/SigNoz/signoz/pkg/querier"
    "github.com/SigNoz/signoz/pkg/query-service/app"
    "github.com/SigNoz/signoz/pkg/queryparser"
    "github.com/SigNoz/signoz/pkg/signoz"
    "github.com/SigNoz/signoz/pkg/sqlschema"
    "github.com/SigNoz/signoz/pkg/sqlstore"
    "github.com/SigNoz/signoz/pkg/types/authtypes"
    "github.com/SigNoz/signoz/pkg/version"
    "github.com/SigNoz/signoz/pkg/zeus"
    "github.com/SigNoz/signoz/pkg/zeus/noopzeus"
    "github.com/spf13/cobra"
)

func registerServer(parentCmd *cobra.Command, logger *slog.Logger) {
    var flags signoz.DeprecatedFlags

    serverCmd := &cobra.Command{
        Use:                "server",
        Short:              "Run the SigNoz server",
        FParseErrWhitelist: cobra.FParseErrWhitelist{UnknownFlags: true},
        RunE: func(currCmd *cobra.Command, args []string) error {
            config, err := cmd.NewSigNozConfig(currCmd.Context(), logger, flags)
            if err != nil {
                return err
            }

            return runServer(currCmd.Context(), config, logger)
        },
    }

    flags.RegisterFlags(serverCmd)
    parentCmd.AddCommand(serverCmd)
}

func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) error {
    // print the version
    version.Info.PrettyPrint(config.Version)

    signoz, err := signoz.New(
        ctx,
        config,
        zeus.Config{},
        noopzeus.NewProviderFactory(),
        licensing.Config{},
        func(_ sqlstore.SQLStore, _ zeus.Zeus, _ organization.Getter, _ analytics.Analytics) factory.ProviderFactory[licensing.Licensing, licensing.Config] {
            return nooplicensing.NewFactory()
        },
        signoz.NewEmailingProviderFactories(),
        signoz.NewCacheProviderFactories(),
        signoz.NewWebProviderFactories(),
        func(sqlstore sqlstore.SQLStore) factory.NamedMap[factory.ProviderFactory[sqlschema.SQLSchema, sqlschema.Config]] {
            return signoz.NewSQLSchemaProviderFactories(sqlstore)
        },
        signoz.NewSQLStoreProviderFactories(),
        signoz.NewTelemetryStoreProviderFactories(),
        func(ctx context.Context, providerSettings factory.ProviderSettings, store authtypes.AuthNStore, licensing licensing.Licensing) (map[authtypes.AuthNProvider]authn.AuthN, error) {
            return signoz.NewAuthNs(ctx, providerSettings, store, licensing)
        },
        func(ctx context.Context, sqlstore sqlstore.SQLStore) factory.ProviderFactory[authz.AuthZ, authz.Config] {
            return openfgaauthz.NewProviderFactory(sqlstore, openfgaschema.NewSchema().Get(ctx))
        },
        func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, _ role.Module, queryParser queryparser.QueryParser, _ querier.Querier, _ licensing.Licensing) dashboard.Module {
            return impldashboard.NewModule(impldashboard.NewStore(store), settings, analytics, orgGetter, queryParser)
        },
        func(_ licensing.Licensing) factory.ProviderFactory[gateway.Gateway, gateway.Config] {
            return noopgateway.NewProviderFactory()
        },
    )
    if err != nil {
        logger.ErrorContext(ctx, "failed to create signoz", "error", err)
        return err
    }

    server, err := app.NewServer(config, signoz)
    if err != nil {
        logger.ErrorContext(ctx, "failed to create server", "error", err)
        return err
    }

    if err := server.Start(ctx); err != nil {
        logger.ErrorContext(ctx, "failed to start server", "error", err)
        return err
    }

    signoz.Start(ctx)

    if err := signoz.Wait(ctx); err != nil {
        logger.ErrorContext(ctx, "failed to start signoz", "error", err)
        return err
    }

    err = server.Stop(ctx)
    if err != nil {
        logger.ErrorContext(ctx, "failed to stop server", "error", err)
        return err
    }

    err = signoz.Stop(ctx)
    if err != nil {
        logger.ErrorContext(ctx, "failed to stop signoz", "error", err)
        return err
    }

    return nil
}
@@ -1,31 +0,0 @@
package cmd

import (
    "context"
    "log/slog"

    "github.com/SigNoz/signoz/pkg/config"
    "github.com/SigNoz/signoz/pkg/config/envprovider"
    "github.com/SigNoz/signoz/pkg/config/fileprovider"
    "github.com/SigNoz/signoz/pkg/signoz"
)

func NewSigNozConfig(ctx context.Context, logger *slog.Logger, flags signoz.DeprecatedFlags) (signoz.Config, error) {
    config, err := signoz.NewConfig(
        ctx,
        logger,
        config.ResolverConfig{
            Uris: []string{"env:"},
            ProviderFactories: []config.ProviderFactory{
                envprovider.NewFactory(),
                fileprovider.NewFactory(),
            },
        },
        flags,
    )
    if err != nil {
        return signoz.Config{}, err
    }

    return config, nil
}
@@ -1,62 +0,0 @@
# yaml-language-server: $schema=https://goreleaser.com/static/schema-pro.json
# vim: set ts=2 sw=2 tw=0 fo=cnqoj
version: 2

project_name: signoz

before:
  hooks:
    - go mod tidy

builds:
  - id: signoz
    binary: bin/signoz
    main: ./cmd/enterprise
    goos:
      - linux
      - darwin
    goarch:
      - amd64
      - arm64
    goamd64:
      - v1
    goarm64:
      - v8.0
    ldflags:
      - -s -w
      - -X github.com/SigNoz/signoz/pkg/version.version=v{{ .Version }}
      - -X github.com/SigNoz/signoz/pkg/version.variant=enterprise
      - -X github.com/SigNoz/signoz/pkg/version.hash={{ .ShortCommit }}
      - -X github.com/SigNoz/signoz/pkg/version.time={{ .CommitTimestamp }}
      - -X github.com/SigNoz/signoz/pkg/version.branch={{ .Branch }}
      - -X github.com/SigNoz/signoz/ee/zeus.url=https://api.signoz.cloud
      - -X github.com/SigNoz/signoz/ee/zeus.deprecatedURL=https://license.signoz.io
      - -X github.com/SigNoz/signoz/ee/query-service/constants.LicenseSignozIo=https://license.signoz.io/api/v1
      - -X github.com/SigNoz/signoz/pkg/analytics.key=9kRrJ7oPCGPEJLF6QjMPLt5bljFhRQBr
    mod_timestamp: "{{ .CommitTimestamp }}"
    tags:
      - timetzdata

archives:
  - formats:
      - tar.gz
    name_template: >-
      {{ .ProjectName }}_{{- .Os }}_{{- .Arch }}
    wrap_in_directory: true
    strip_binary_directory: false
    files:
      - src: README.md
        dst: README.md
      - src: LICENSE
        dst: LICENSE
      - src: frontend/build
        dst: web
      - src: conf
        dst: conf
      - src: templates
        dst: templates

release:
  name_template: "v{{ .Version }}"
  draft: false
  prerelease: auto
@@ -1,19 +0,0 @@
FROM alpine:3.20.3
LABEL maintainer="signoz"
WORKDIR /root

ARG OS="linux"
ARG TARGETARCH

RUN apk update && \
    apk add ca-certificates && \
    rm -rf /var/cache/apk/*

COPY ./target/${OS}-${TARGETARCH}/signoz /root/signoz
COPY ./templates/email /root/templates
COPY frontend/build/ /etc/signoz/web/

RUN chmod 755 /root /root/signoz

ENTRYPOINT ["./signoz", "server"]
@@ -1,37 +0,0 @@
FROM golang:1.24-bullseye

ARG OS="linux"
ARG TARGETARCH
ARG ZEUSURL

# This path is important for stacktraces
WORKDIR $GOPATH/src/github.com/signoz/signoz
WORKDIR /root

RUN set -eux; \
    apt-get update; \
    apt-get install -y --no-install-recommends \
        g++ \
        gcc \
        libc6-dev \
        make \
        pkg-config \
    ; \
    rm -rf /var/lib/apt/lists/*

COPY go.mod go.sum ./

RUN go mod download

COPY ./cmd/ ./cmd/
COPY ./ee/ ./ee/
COPY ./pkg/ ./pkg/
COPY ./templates/email /root/templates

COPY Makefile Makefile
RUN TARGET_DIR=/root ARCHS=${TARGETARCH} ZEUS_URL=${ZEUSURL} LICENSE_URL=${ZEUSURL}/api/v1 make go-build-enterprise-race
RUN mv /root/linux-${TARGETARCH}/signoz /root/signoz

RUN chmod 755 /root /root/signoz

ENTRYPOINT ["/root/signoz", "server"]
@@ -1,20 +0,0 @@
ARG ALPINE_SHA="pass-a-valid-docker-sha-otherwise-this-will-fail"

FROM alpine@sha256:${ALPINE_SHA}
LABEL maintainer="signoz"
WORKDIR /root

ARG OS="linux"
ARG ARCH

RUN apk update && \
    apk add ca-certificates && \
    rm -rf /var/cache/apk/*

COPY ./target/${OS}-${ARCH}/signoz /root/signoz
COPY ./templates/email /root/templates
COPY frontend/build/ /etc/signoz/web/

RUN chmod 755 /root /root/signoz

ENTRYPOINT ["./signoz", "server"]
@@ -1,47 +0,0 @@
FROM node:18-bullseye AS build

WORKDIR /opt/
COPY ./frontend/ ./
ENV NODE_OPTIONS=--max-old-space-size=8192
RUN CI=1 yarn install
RUN CI=1 yarn build

FROM golang:1.24-bullseye

ARG OS="linux"
ARG TARGETARCH
ARG ZEUSURL

# This path is important for stacktraces
WORKDIR $GOPATH/src/github.com/signoz/signoz
WORKDIR /root

RUN set -eux; \
    apt-get update; \
    apt-get install -y --no-install-recommends \
        g++ \
        gcc \
        libc6-dev \
        make \
        pkg-config \
    ; \
    rm -rf /var/lib/apt/lists/*

COPY go.mod go.sum ./

RUN go mod download

COPY ./cmd/ ./cmd/
COPY ./ee/ ./ee/
COPY ./pkg/ ./pkg/
COPY ./templates/email /root/templates

COPY Makefile Makefile
RUN TARGET_DIR=/root ARCHS=${TARGETARCH} ZEUS_URL=${ZEUSURL} LICENSE_URL=${ZEUSURL}/api/v1 make go-build-enterprise-race
RUN mv /root/linux-${TARGETARCH}/signoz /root/signoz

COPY --from=build /opt/build ./web/

RUN chmod 755 /root /root/signoz

ENTRYPOINT ["/root/signoz", "server"]
@@ -1,19 +0,0 @@
package main

import (
    "log/slog"

    "github.com/SigNoz/signoz/cmd"
    "github.com/SigNoz/signoz/pkg/instrumentation"
)

func main() {
    // initialize logger for logging in the cmd/ package. This logger is different from the logger used in the application.
    logger := instrumentation.NewLogger(instrumentation.Config{Logs: instrumentation.LogsConfig{Level: slog.LevelInfo}})

    // register a list of commands to the root command
    registerServer(cmd.RootCmd, logger)
    cmd.RegisterGenerate(cmd.RootCmd, logger)

    cmd.Execute(logger)
}
@@ -1,165 +0,0 @@
package main

import (
    "context"
    "log/slog"
    "time"

    "github.com/SigNoz/signoz/cmd"
    "github.com/SigNoz/signoz/ee/authn/callbackauthn/oidccallbackauthn"
    "github.com/SigNoz/signoz/ee/authn/callbackauthn/samlcallbackauthn"
    "github.com/SigNoz/signoz/ee/authz/openfgaauthz"
    "github.com/SigNoz/signoz/ee/authz/openfgaschema"
    "github.com/SigNoz/signoz/ee/gateway/httpgateway"
    enterpriselicensing "github.com/SigNoz/signoz/ee/licensing"
    "github.com/SigNoz/signoz/ee/licensing/httplicensing"
    "github.com/SigNoz/signoz/ee/modules/dashboard/impldashboard"
    enterpriseapp "github.com/SigNoz/signoz/ee/query-service/app"
    "github.com/SigNoz/signoz/ee/sqlschema/postgressqlschema"
    "github.com/SigNoz/signoz/ee/sqlstore/postgressqlstore"
    enterprisezeus "github.com/SigNoz/signoz/ee/zeus"
    "github.com/SigNoz/signoz/ee/zeus/httpzeus"
    "github.com/SigNoz/signoz/pkg/analytics"
    "github.com/SigNoz/signoz/pkg/authn"
    "github.com/SigNoz/signoz/pkg/authz"
    "github.com/SigNoz/signoz/pkg/factory"
    "github.com/SigNoz/signoz/pkg/gateway"
    "github.com/SigNoz/signoz/pkg/licensing"
    "github.com/SigNoz/signoz/pkg/modules/dashboard"
    pkgimpldashboard "github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
    "github.com/SigNoz/signoz/pkg/modules/organization"
    "github.com/SigNoz/signoz/pkg/modules/role"
    "github.com/SigNoz/signoz/pkg/querier"
    "github.com/SigNoz/signoz/pkg/queryparser"
    "github.com/SigNoz/signoz/pkg/signoz"
    "github.com/SigNoz/signoz/pkg/sqlschema"
    "github.com/SigNoz/signoz/pkg/sqlstore"
    "github.com/SigNoz/signoz/pkg/sqlstore/sqlstorehook"
    "github.com/SigNoz/signoz/pkg/types/authtypes"
    "github.com/SigNoz/signoz/pkg/version"
    "github.com/SigNoz/signoz/pkg/zeus"
    "github.com/spf13/cobra"
)

func registerServer(parentCmd *cobra.Command, logger *slog.Logger) {
    var flags signoz.DeprecatedFlags

    serverCmd := &cobra.Command{
        Use:                "server",
        Short:              "Run the SigNoz server",
        FParseErrWhitelist: cobra.FParseErrWhitelist{UnknownFlags: true},
        RunE: func(currCmd *cobra.Command, args []string) error {
            config, err := cmd.NewSigNozConfig(currCmd.Context(), logger, flags)
            if err != nil {
                return err
            }

            return runServer(currCmd.Context(), config, logger)
        },
    }

    flags.RegisterFlags(serverCmd)
    parentCmd.AddCommand(serverCmd)
}

func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) error {
    // print the version
    version.Info.PrettyPrint(config.Version)

    // add enterprise sqlstore factories to the community sqlstore factories
    sqlstoreFactories := signoz.NewSQLStoreProviderFactories()
    if err := sqlstoreFactories.Add(postgressqlstore.NewFactory(sqlstorehook.NewLoggingFactory(), sqlstorehook.NewInstrumentationFactory())); err != nil {
        logger.ErrorContext(ctx, "failed to add postgressqlstore factory", "error", err)
        return err
    }

    signoz, err := signoz.New(
        ctx,
        config,
        enterprisezeus.Config(),
        httpzeus.NewProviderFactory(),
        enterpriselicensing.Config(24*time.Hour, 3),
        func(sqlstore sqlstore.SQLStore, zeus zeus.Zeus, orgGetter organization.Getter, analytics analytics.Analytics) factory.ProviderFactory[licensing.Licensing, licensing.Config] {
            return httplicensing.NewProviderFactory(sqlstore, zeus, orgGetter, analytics)
        },
        signoz.NewEmailingProviderFactories(),
        signoz.NewCacheProviderFactories(),
        signoz.NewWebProviderFactories(),
        func(sqlstore sqlstore.SQLStore) factory.NamedMap[factory.ProviderFactory[sqlschema.SQLSchema, sqlschema.Config]] {
            existingFactories := signoz.NewSQLSchemaProviderFactories(sqlstore)
            if err := existingFactories.Add(postgressqlschema.NewFactory(sqlstore)); err != nil {
                panic(err)
            }

            return existingFactories
        },
        sqlstoreFactories,
        signoz.NewTelemetryStoreProviderFactories(),
        func(ctx context.Context, providerSettings factory.ProviderSettings, store authtypes.AuthNStore, licensing licensing.Licensing) (map[authtypes.AuthNProvider]authn.AuthN, error) {
            samlCallbackAuthN, err := samlcallbackauthn.New(ctx, store, licensing)
            if err != nil {
                return nil, err
            }

            oidcCallbackAuthN, err := oidccallbackauthn.New(store, licensing, providerSettings)
            if err != nil {
                return nil, err
            }

            authNs, err := signoz.NewAuthNs(ctx, providerSettings, store, licensing)
            if err != nil {
                return nil, err
            }

            authNs[authtypes.AuthNProviderSAML] = samlCallbackAuthN
            authNs[authtypes.AuthNProviderOIDC] = oidcCallbackAuthN

            return authNs, nil
        },
        func(ctx context.Context, sqlstore sqlstore.SQLStore) factory.ProviderFactory[authz.AuthZ, authz.Config] {
            return openfgaauthz.NewProviderFactory(sqlstore, openfgaschema.NewSchema().Get(ctx))
        },
        func(store sqlstore.SQLStore, settings factory.ProviderSettings, analytics analytics.Analytics, orgGetter organization.Getter, role role.Module, queryParser queryparser.QueryParser, querier querier.Querier, licensing licensing.Licensing) dashboard.Module {
            return impldashboard.NewModule(pkgimpldashboard.NewStore(store), settings, analytics, orgGetter, role, queryParser, querier, licensing)
        },
        func(licensing licensing.Licensing) factory.ProviderFactory[gateway.Gateway, gateway.Config] {
            return httpgateway.NewProviderFactory(licensing)
        },
    )
    if err != nil {
        logger.ErrorContext(ctx, "failed to create signoz", "error", err)
        return err
    }

    server, err := enterpriseapp.NewServer(config, signoz)
    if err != nil {
        logger.ErrorContext(ctx, "failed to create server", "error", err)
        return err
    }

    if err := server.Start(ctx); err != nil {
        logger.ErrorContext(ctx, "failed to start server", "error", err)
        return err
    }

    signoz.Start(ctx)

    if err := signoz.Wait(ctx); err != nil {
        logger.ErrorContext(ctx, "failed to start signoz", "error", err)
        return err
    }

    err = server.Stop(ctx)
    if err != nil {
        logger.ErrorContext(ctx, "failed to stop server", "error", err)
        return err
    }

    err = signoz.Stop(ctx)
    if err != nil {
        logger.ErrorContext(ctx, "failed to stop signoz", "error", err)
        return err
    }

    return nil
}
@@ -1,21 +0,0 @@
package cmd

import (
    "log/slog"

    "github.com/spf13/cobra"
)

func RegisterGenerate(parentCmd *cobra.Command, logger *slog.Logger) {
    var generateCmd = &cobra.Command{
        Use:               "generate",
        Short:             "Generate artifacts",
        SilenceUsage:      true,
        SilenceErrors:     true,
        CompletionOptions: cobra.CompletionOptions{DisableDefaultCmd: true},
    }

    registerGenerateOpenAPI(generateCmd)

    parentCmd.AddCommand(generateCmd)
}
@@ -1,41 +0,0 @@
package cmd

import (
	"context"
	"log/slog"

	"github.com/SigNoz/signoz/pkg/instrumentation"
	"github.com/SigNoz/signoz/pkg/signoz"
	"github.com/SigNoz/signoz/pkg/version"
	"github.com/spf13/cobra"
)

func registerGenerateOpenAPI(parentCmd *cobra.Command) {
	openapiCmd := &cobra.Command{
		Use:   "openapi",
		Short: "Generate OpenAPI schema for SigNoz",
		RunE: func(currCmd *cobra.Command, args []string) error {
			return runGenerateOpenAPI(currCmd.Context())
		},
	}

	parentCmd.AddCommand(openapiCmd)
}

func runGenerateOpenAPI(ctx context.Context) error {
	instrumentation, err := instrumentation.New(ctx, instrumentation.Config{Logs: instrumentation.LogsConfig{Level: slog.LevelInfo}}, version.Info, "signoz")
	if err != nil {
		return err
	}

	openapi, err := signoz.NewOpenAPI(ctx, instrumentation)
	if err != nil {
		return err
	}

	if err := openapi.CreateAndWrite("docs/api/openapi.yml"); err != nil {
		return err
	}

	return nil
}
cmd/root.go
@@ -1,33 +0,0 @@
package cmd

import (
	"log/slog"
	"os"

	"github.com/SigNoz/signoz/pkg/version"
	"github.com/spf13/cobra"
	"go.uber.org/zap" //nolint:depguard
)

var RootCmd = &cobra.Command{
	Use:               "signoz",
	Short:             "OpenTelemetry-Native Logs, Metrics and Traces in a single pane",
	Version:           version.Info.Version(),
	SilenceUsage:      true,
	SilenceErrors:     true,
	CompletionOptions: cobra.CompletionOptions{DisableDefaultCmd: true},
}

func Execute(logger *slog.Logger) {
	zapLogger := newZapLogger()
	zap.ReplaceGlobals(zapLogger)
	defer func() {
		_ = zapLogger.Sync()
	}()

	err := RootCmd.Execute()
	if err != nil {
		logger.ErrorContext(RootCmd.Context(), "error running command", "error", err)
		os.Exit(1)
	}
}
cmd/zap.go
@@ -1,110 +0,0 @@
package cmd

import (
	"context"
	"time"

	"github.com/SigNoz/signoz/pkg/errors"
	"go.uber.org/zap"     //nolint:depguard
	"go.uber.org/zap/zapcore" //nolint:depguard
)

// Deprecated: Use `NewLogger` from `pkg/instrumentation` instead.
func newZapLogger() *zap.Logger {
	config := zap.NewProductionConfig()
	config.EncoderConfig.TimeKey = "timestamp"
	config.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder

	// Extract sampling config before building the logger.
	// We need to disable sampling in the config and apply it manually later
	// to ensure correct core ordering. See filteringCore documentation for details.
	samplerConfig := config.Sampling
	config.Sampling = nil

	logger, _ := config.Build()

	// Wrap with a custom core to filter certain log entries.
	// The order of wrapping is important:
	// 1. First wrap with filteringCore
	// 2. Then wrap with sampler
	//
	// This creates the call chain: sampler -> filteringCore -> ioCore
	//
	// During logging:
	// - sampler.Check decides whether to sample the log entry
	// - If sampled, filteringCore.Check is called
	// - filteringCore adds itself to CheckedEntry.cores
	// - All cores in CheckedEntry.cores have their Write method called
	// - filteringCore.Write can now filter the entry before passing to ioCore
	//
	// If we didn't disable the sampler above, filteringCore would have wrapped
	// sampler. By calling sampler.Check we would have allowed it to call
	// ioCore.Check, which adds itself to CheckedEntry.cores. Then ioCore.Write
	// would have bypassed our checks, making filtering impossible.
	return logger.WithOptions(zap.WrapCore(func(core zapcore.Core) zapcore.Core {
		core = &filteringCore{core}
		if samplerConfig != nil {
			core = zapcore.NewSamplerWithOptions(
				core,
				time.Second,
				samplerConfig.Initial,
				samplerConfig.Thereafter,
			)
		}
		return core
	}))
}

// filteringCore wraps a zapcore.Core to filter out log entries based on
// custom logic.
//
// Note: This core must be positioned before the sampler in the core chain
// to ensure Write is called. See newZapLogger for ordering details.
type filteringCore struct {
	zapcore.Core
}

// filter determines whether a log entry should be written based on its fields.
// Returns false if the entry should be suppressed, true otherwise.
//
// Current filters:
// - context.Canceled: These are expected errors from cancelled operations,
//   and create noise in logs.
func (c *filteringCore) filter(fields []zapcore.Field) bool {
	for _, field := range fields {
		if field.Type == zapcore.ErrorType {
			if loggedErr, ok := field.Interface.(error); ok {
				// Suppress logs containing context.Canceled errors
				if errors.Is(loggedErr, context.Canceled) {
					return false
				}
			}
		}
	}
	return true
}

// With implements zapcore.Core.With.
// It returns a new copy with the added context.
func (c *filteringCore) With(fields []zapcore.Field) zapcore.Core {
	return &filteringCore{c.Core.With(fields)}
}

// Check implements zapcore.Core.Check.
// It adds this core to the CheckedEntry if the log level is enabled,
// ensuring that Write will be called for this entry.
func (c *filteringCore) Check(ent zapcore.Entry, ce *zapcore.CheckedEntry) *zapcore.CheckedEntry {
	if c.Enabled(ent.Level) {
		return ce.AddCore(ent, c)
	}
	return ce
}

// Write implements zapcore.Core.Write.
// It filters log entries based on their fields before delegating to the wrapped core.
func (c *filteringCore) Write(ent zapcore.Entry, fields []zapcore.Field) error {
	if !c.filter(fields) {
		return nil
	}
	return c.Core.Write(ent, fields)
}
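The ordering constraint described in those comments is easiest to see in runnable form. Below is a minimal, self-contained sketch (not part of the diff) of the same sampler -> filter -> ioCore chain; `dropCanceled` is a hypothetical, trimmed-down stand-in for `filteringCore`:

```go
package main

import (
	"context"
	"errors"
	"time"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// dropCanceled suppresses entries whose error field wraps context.Canceled.
type dropCanceled struct{ zapcore.Core }

func (c dropCanceled) With(fields []zapcore.Field) zapcore.Core {
	return dropCanceled{c.Core.With(fields)}
}

// Check adds this core to the CheckedEntry so its Write method runs.
func (c dropCanceled) Check(ent zapcore.Entry, ce *zapcore.CheckedEntry) *zapcore.CheckedEntry {
	if c.Enabled(ent.Level) {
		return ce.AddCore(ent, c)
	}
	return ce
}

func (c dropCanceled) Write(ent zapcore.Entry, fields []zapcore.Field) error {
	for _, f := range fields {
		if f.Type == zapcore.ErrorType {
			if err, ok := f.Interface.(error); ok && errors.Is(err, context.Canceled) {
				return nil // filtered before the ioCore ever writes it
			}
		}
	}
	return c.Core.Write(ent, fields)
}

func main() {
	cfg := zap.NewProductionConfig()
	cfg.Sampling = nil // sampling is applied manually below, after the filter
	logger, _ := cfg.Build()

	logger = logger.WithOptions(zap.WrapCore(func(core zapcore.Core) zapcore.Core {
		// sampler -> filter -> ioCore: the chain the comments above require
		return zapcore.NewSamplerWithOptions(dropCanceled{core}, time.Second, 100, 100)
	}))
	defer func() { _ = logger.Sync() }()

	logger.Error("canceled request", zap.Error(context.Canceled)) // suppressed
	logger.Error("real failure", zap.Error(errors.New("boom")))   // written
}
```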
@@ -1,21 +1,8 @@
##################### SigNoz Configuration Example #####################
#
#
# Do not modify this file
#

##################### Global #####################
global:
  # The URL under which the SigNoz API server is externally reachable.
  external_url: <unset>
  # The URL where the SigNoz backend receives telemetry data (traces, metrics, logs) from instrumented applications.
  ingestion_url: <unset>

##################### Version #####################
version:
  banner:
    # Whether to enable the version banner on startup.
    enabled: true

##################### Instrumentation #####################
instrumentation:
  logs:
@@ -54,10 +41,10 @@ cache:
  provider: memory
  # memory: Uses in-memory caching.
  memory:
    # Max items for the in-memory cache (10x the number of entries)
    num_counters: 100000
    # Total cost in bytes allocated to the bounded cache
    max_cost: 67108864
    # Time-to-live for cache entries in memory. Specify the duration in ns.
    ttl: 60000000000
    # The interval at which the cache will be cleaned up.
    cleanupInterval: 1m
  # redis: Uses Redis as the caching backend.
  redis:
    # The hostname or IP address of the Redis server.
@@ -65,7 +52,7 @@ cache:
    # The port on which the Redis server is running. Default is usually 6379.
    port: 6379
    # The password for authenticating with the Redis server, if required.
    password:
    password:
    # The Redis database number to use.
    db: 0
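The `num_counters`/`max_cost` pair maps onto an admission-controlled cache in the style of dgraph-io/ristretto; that backing library is an assumption here, though the field names and the "10x the entries" guidance match its config exactly. A minimal sketch of the same sizing in Go:

```go
package main

import (
	"fmt"
	"time"

	"github.com/dgraph-io/ristretto"
)

func main() {
	// Mirrors the memory cache settings above: 100k counters track admission
	// frequency for ~10k live entries; MaxCost bounds total size to 64 MiB.
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 100000,   // ~10x the expected number of cached items
		MaxCost:     67108864, // 64 MiB total budget, same as max_cost
		BufferItems: 64,       // ristretto's recommended default
	})
	if err != nil {
		panic(err)
	}

	// Each entry carries a cost (its size in bytes) and the 60s TTL from above.
	cache.SetWithTTL("query-result", []byte("..."), 3, 60*time.Second)
	cache.Wait() // Set is asynchronous; wait for the entry to be admitted

	if v, ok := cache.Get("query-result"); ok {
		fmt.Println(v)
	}
}
```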
@@ -78,216 +65,31 @@ sqlstore:
  sqlite:
    # The path to the SQLite database file.
    path: /var/lib/signoz/signoz.db
    # Mode is the mode to use for the sqlite database.
    mode: delete
    # BusyTimeout is the timeout for the sqlite database to wait for a lock.
    busy_timeout: 10s

##################### APIServer #####################
apiserver:
  timeout:
    # Default request timeout.
    default: 60s
    # Maximum request timeout.
    max: 600s
    # List of routes to exclude from request timeout.
    excluded_routes:
      - /api/v1/logs/tail
      - /api/v3/logs/livetail
  logging:
    # List of routes to exclude from request/response logging.
    excluded_routes:
      - /api/v1/health
      - /api/v1/version
      - /

##################### Querier #####################
querier:
  # The TTL for cached query results.
  cache_ttl: 168h
  # The interval for recent data that should not be cached.
  flux_interval: 5m
  # The maximum number of concurrent queries for missing ranges.
  max_concurrent_queries: 4

##################### TelemetryStore #####################
telemetrystore:
  # Specifies the telemetrystore provider to use.
  provider: clickhouse
  clickhouse:
    # The DSN to use for ClickHouse.
    dsn: http://localhost:9000
    # Maximum number of idle connections in the connection pool.
    max_idle_conns: 50
    # Maximum number of open connections to the database.
    max_open_conns: 100
    # Maximum time to wait for a connection to be established.
    dial_timeout: 5s
  # Specifies the telemetrystore provider to use.
  provider: clickhouse
  clickhouse:
    # The DSN to use for clickhouse.
    dsn: tcp://localhost:9000
    # The cluster name to use for clickhouse.
    cluster: cluster
    # The query settings for clickhouse.
    settings:
      max_execution_time: 0
      max_execution_time_leaf: 0
      timeout_before_checking_execution_speed: 0
      max_bytes_to_read: 0
      max_result_rows: 0
      ignore_data_skipping_indices: ""
      secondary_indices_enable_bulk_filtering: false

##################### Prometheus #####################
prometheus:
  active_query_tracker:
    # Whether to enable the active query tracker.
    enabled: true
    # The path to use for the active query tracker.
    path: ""
    # The maximum number of concurrent queries.
    max_concurrent: 20

##################### Alertmanager #####################
alertmanager:
  # Specifies the alertmanager provider to use.
  provider: signoz
  signoz:
    # The poll interval for periodically syncing the alertmanager with the config in the store.
    poll_interval: 1m
    # The URL under which Alertmanager is externally reachable (for example, if Alertmanager is served via a reverse proxy). Used for generating relative and absolute links back to Alertmanager itself.
    external_url: http://localhost:8080
    # The global configuration for the alertmanager. The exhaustive list of fields can be found upstream: https://github.com/prometheus/alertmanager/blob/efa05feffd644ba4accb526e98a8c6545d26a783/config/config.go#L833
    global:
      # ResolveTimeout is the time after which an alert is declared resolved if it has not been updated.
      resolve_timeout: 5m
    route:
      # GroupByStr is the list of labels to group alerts by.
      group_by:
        - alertname
      # GroupInterval is the interval at which alerts are grouped.
      group_interval: 1m
      # GroupWait is the time to wait before sending alerts to receivers.
      group_wait: 1m
      # RepeatInterval is the interval at which alerts are repeated.
      repeat_interval: 1h
    alerts:
      # Interval between garbage collection of alerts.
      gc_interval: 30m
    silences:
      # Maximum number of silences, including expired silences. If negative or zero, no limit is set.
      max: 0
      # Maximum size of the silences in bytes. If negative or zero, no limit is set.
      max_size_bytes: 0
      # Interval between garbage collection and snapshotting of the silences. The snapshot will be stored in the state store.
      maintenance_interval: 15m
      # Retention of the silences.
      retention: 120h
    nflog:
      # Interval between garbage collection and snapshotting of the notification logs. The snapshot will be stored in the state store.
      maintenance_interval: 15m
      # Retention of the notification logs.
      retention: 120h

##################### Emailing #####################
emailing:
  # Whether to enable emailing.
  enabled: false
  templates:
    # The directory containing the email templates. This directory should contain a list of files defined at pkg/types/emailtypes/template.go.
    directory: /opt/signoz/conf/templates/email
  smtp:
    # The SMTP server address.
    address: localhost:25
    # The email address to use for the SMTP server.
    from:
    # The hello message to use for the SMTP server.
    hello:
    # The static headers to send with the email.
    headers: {}
    auth:
      # The username to use for the SMTP server.
      username:
      # The password to use for the SMTP server.
      password:
      # The secret to use for the SMTP server.
      secret:
      # The identity to use for the SMTP server.
      identity:
    tls:
      # Whether to enable TLS. It should be false in most cases since the authentication mechanism should use the STARTTLS extension instead.
      enabled: false
      # Whether to skip TLS verification.
      insecure_skip_verify: false
      # The path to the CA file.
      ca_file_path:
      # The path to the key file.
      key_file_path:
      # The path to the certificate file.
      cert_file_path:

##################### Sharder (experimental) #####################
sharder:
  # Specifies the sharder provider to use.
  provider: noop
  single:
    # The org id to which this instance belongs.
    org_id: org_id

##################### Analytics #####################
analytics:
  # Whether to enable analytics.
  enabled: false
  segment:
    # The key to use for segment.
    key: ""

##################### StatsReporter #####################
statsreporter:
  # Whether to enable the stats reporter. This is used to provide valuable insights to the SigNoz team. It does not collect any sensitive/PII data.
  enabled: true
  # The interval at which the stats are collected.
  interval: 6h
  collect:
    # Whether to collect identities and traits (emails).
    identities: true

##################### Gateway (License only) #####################
gateway:
  # The URL of the gateway's API.
  url: http://localhost:8080

##################### Tokenizer #####################
tokenizer:
  # Specifies the tokenizer provider to use.
  provider: jwt
  lifetime:
    # The duration for which a user can be idle before being required to authenticate.
    idle: 168h
    # The duration for which a user can remain logged in before being asked to log in.
    max: 720h
  rotation:
    # The interval to rotate tokens in.
    interval: 30m
    # The duration for which the previous token pair remains valid after a token pair is rotated.
    duration: 60s
  jwt:
    # The secret to sign the JWT tokens.
    secret: secret
  opaque:
    gc:
      # The interval to perform garbage collection.
      interval: 1h
    token:
      # The maximum number of tokens a user can have. This limits the number of concurrent sessions a user can have.
      max_per_user: 5

##################### Flagger #####################
flagger:
  # Config are the overrides for the feature flags which come directly from the config file.
  config:
    boolean:
      use_span_metrics: true
      interpolation_enabled: false
      kafka_span_eval: false
    string:
    float:
    integer:
    object:
dial_timeout: 5s
@@ -1,4 +0,0 @@
{
  "url": "https://context7.com/signoz/signoz",
  "public_key": "pk_6g9GfjdkuPEIDuTGAxnol"
}
@@ -26,7 +26,7 @@ cd deploy/docker
docker compose up -d
```

Open http://localhost:8080 in your favourite browser.
Open http://localhost:3301 in your favourite browser.

To start collecting logs and metrics from your infrastructure, run the following command:

@@ -55,7 +55,7 @@ cd deploy/docker-swarm
docker stack deploy -c docker-compose.yaml signoz
```

Open http://localhost:8080 in your favourite browser.
Open http://localhost:3301 in your favourite browser.

To start collecting logs and metrics from your infrastructure, run the following command:
deploy/common/signoz/nginx-config.conf
@@ -0,0 +1,64 @@
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 3301;
    server_name _;

    gzip on;
    gzip_static on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_proxied any;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;

    # to avoid 414 (URI Too Long) errors from nginx
    client_max_body_size 24M;
    large_client_header_buffers 8 128k;

    location / {
        if ( $uri = '/index.html' ) {
            add_header Cache-Control no-store always;
        }
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    location ~ ^/api/(v1|v3)/logs/(tail|livetail) {
        proxy_pass http://query-service:8080;
        proxy_http_version 1.1;

        # the connection is closed if no data is read for 600s between successive read operations
        proxy_read_timeout 600s;

        # don't buffer the data; send it directly to the client
        proxy_buffering off;
        proxy_cache off;
    }

    location /api {
        proxy_pass http://query-service:8080/api;
        # the connection is closed if no data is read for 600s between successive read operations
        proxy_read_timeout 600s;
    }

    location /ws {
        proxy_pass http://query-service:8080/ws;
        proxy_http_version 1.1;
        proxy_set_header Upgrade "websocket";
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
@@ -1 +1 @@
server_endpoint: ws://signoz:4320/v1/opamp
server_endpoint: ws://query-service:4320/v1/opamp
@@ -11,7 +11,7 @@ x-common: &common
      max-file: "3"
x-clickhouse-defaults: &clickhouse-defaults
  !!merge <<: *common
  image: clickhouse/clickhouse-server:25.5.6
  image: clickhouse/clickhouse-server:24.1.2-alpine
  tty: true
  deploy:
    labels:
@@ -37,11 +37,9 @@ x-clickhouse-defaults: &clickhouse-defaults
    nofile:
      soft: 262144
      hard: 262144
  environment:
    - CLICKHOUSE_SKIP_USER_SETUP=1
x-zookeeper-defaults: &zookeeper-defaults
  !!merge <<: *common
  image: signoz/zookeeper:3.7.1
  image: bitnami/zookeeper:3.7.1
  user: root
  deploy:
    labels:
@@ -65,7 +63,7 @@ x-db-depend: &db-depend
services:
  init-clickhouse:
    !!merge <<: *common
    image: clickhouse/clickhouse-server:25.5.6
    image: clickhouse/clickhouse-server:24.1.2-alpine
    command:
      - bash
      - -c
@@ -78,9 +76,6 @@ services:
        wget -O histogram-quantile.tar.gz "https://github.com/SigNoz/signoz/releases/download/histogram-quantile%2F$${version}/histogram-quantile_$${node_os}_$${node_arch}.tar.gz"
        tar -xvzf histogram-quantile.tar.gz
        mv histogram-quantile /var/lib/clickhouse/user_scripts/histogramQuantile
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
  zookeeper-1:
@@ -174,29 +169,39 @@ services:
      - ../common/clickhouse/cluster.ha.xml:/etc/clickhouse-server/config.d/cluster.xml
      - ./clickhouse-setup/data/clickhouse-3/:/var/lib/clickhouse/
      # - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
  signoz:
  alertmanager:
    !!merge <<: *common
    image: signoz/alertmanager:0.23.7
    command:
      - --queryService.url=http://query-service:8085
      - --storage.path=/data
    volumes:
      - ./clickhouse-setup/data/alertmanager:/data
    depends_on:
      - query-service
  query-service:
    !!merge <<: *db-depend
    image: signoz/signoz:v0.108.0
    image: signoz/query-service:0.72.0
    command:
      - --config=/root/config/prometheus.yml
    ports:
      - "8080:8080" # signoz port
      - --use-logs-new-schema=true
      - --use-trace-new-schema=true
    # ports:
    #   - "8080:8080" # signoz port
    #   - "6060:6060" # pprof port
    volumes:
      - ../common/signoz/prometheus.yml:/root/config/prometheus.yml
      - ../common/dashboards:/root/config/dashboards
      - ./clickhouse-setup/data/signoz/:/var/lib/signoz/
    environment:
      - SIGNOZ_ALERTMANAGER_PROVIDER=signoz
      - SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://clickhouse:9000
      - ClickHouseUrl=tcp://clickhouse:9000
      - ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
      - SIGNOZ_SQLSTORE_SQLITE_PATH=/var/lib/signoz/signoz.db
      - DASHBOARDS_PATH=/root/config/dashboards
      - STORAGE=clickhouse
      - GODEBUG=netdns=go
      - TELEMETRY_ENABLED=true
      - DEPLOYMENT_TYPE=docker-swarm
      - SIGNOZ_TOKENIZER_JWT_SECRET=secret
      - DOT_METRICS_ENABLED=true
    healthcheck:
      test:
        - CMD
@@ -207,9 +212,19 @@ services:
      interval: 30s
      timeout: 5s
      retries: 3
  frontend:
    !!merge <<: *common
    image: signoz/frontend:0.72.0
    depends_on:
      - alertmanager
      - query-service
    ports:
      - "3301:3301"
    volumes:
      - ../common/signoz/nginx-config.conf:/etc/nginx/conf.d/default.conf
  otel-collector:
    !!merge <<: *db-depend
    image: signoz/signoz-otel-collector:v0.129.12
    image: signoz/signoz-otel-collector:0.111.27
    command:
      - --config=/etc/otel-collector-config.yaml
      - --manager-config=/etc/manager-config.yaml
@@ -230,10 +245,10 @@ services:
    depends_on:
      - clickhouse
      - schema-migrator
      - signoz
      - query-service
  schema-migrator:
    !!merge <<: *common
    image: signoz/signoz-schema-migrator:v0.129.12
    image: signoz/signoz-schema-migrator:0.111.24
    deploy:
      restart_policy:
        condition: on-failure
@@ -248,6 +263,8 @@ networks:
  signoz-net:
    name: signoz-net
volumes:
  alertmanager:
    name: signoz-alertmanager
  clickhouse:
    name: signoz-clickhouse
  clickhouse-2:
@@ -11,7 +11,7 @@ x-common: &common
      max-file: "3"
x-clickhouse-defaults: &clickhouse-defaults
  !!merge <<: *common
  image: clickhouse/clickhouse-server:25.5.6
  image: clickhouse/clickhouse-server:24.1.2-alpine
  tty: true
  deploy:
    labels:
@@ -36,11 +36,9 @@ x-clickhouse-defaults: &clickhouse-defaults
    nofile:
      soft: 262144
      hard: 262144
  environment:
    - CLICKHOUSE_SKIP_USER_SETUP=1
x-zookeeper-defaults: &zookeeper-defaults
  !!merge <<: *common
  image: signoz/zookeeper:3.7.1
  image: bitnami/zookeeper:3.7.1
  user: root
  deploy:
    labels:
@@ -62,7 +60,7 @@ x-db-depend: &db-depend
services:
  init-clickhouse:
    !!merge <<: *common
    image: clickhouse/clickhouse-server:25.5.6
    image: clickhouse/clickhouse-server:24.1.2-alpine
    command:
      - bash
      - -c
@@ -75,9 +73,6 @@ services:
        wget -O histogram-quantile.tar.gz "https://github.com/SigNoz/signoz/releases/download/histogram-quantile%2F$${version}/histogram-quantile_$${node_os}_$${node_arch}.tar.gz"
        tar -xvzf histogram-quantile.tar.gz
        mv histogram-quantile /var/lib/clickhouse/user_scripts/histogramQuantile
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
  zookeeper-1:
@@ -102,42 +97,47 @@ services:
    # - "9000:9000"
    # - "8123:8123"
    # - "9181:9181"

    configs:
      - source: clickhouse-config
        target: /etc/clickhouse-server/config.xml
      - source: clickhouse-users
        target: /etc/clickhouse-server/users.xml
      - source: clickhouse-custom-function
        target: /etc/clickhouse-server/custom-function.xml
      - source: clickhouse-cluster
        target: /etc/clickhouse-server/config.d/cluster.xml
    volumes:
      - ../common/clickhouse/config.xml:/etc/clickhouse-server/config.xml
      - ../common/clickhouse/users.xml:/etc/clickhouse-server/users.xml
      - ../common/clickhouse/custom-function.xml:/etc/clickhouse-server/custom-function.xml
      - ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
      - ../common/clickhouse/cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
      - clickhouse:/var/lib/clickhouse/
      # - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
  signoz:
  alertmanager:
    !!merge <<: *common
    image: signoz/alertmanager:0.23.7
    command:
      - --queryService.url=http://query-service:8085
      - --storage.path=/data
    volumes:
      - alertmanager:/data
    depends_on:
      - query-service
  query-service:
    !!merge <<: *db-depend
    image: signoz/signoz:v0.108.0
    image: signoz/query-service:0.72.0
    command:
      - --config=/root/config/prometheus.yml
    ports:
      - "8080:8080" # signoz port
      - --use-logs-new-schema=true
      - --use-trace-new-schema=true
    # ports:
    #   - "8080:8080" # signoz port
    #   - "6060:6060" # pprof port
    volumes:
      - ../common/signoz/prometheus.yml:/root/config/prometheus.yml
      - ../common/dashboards:/root/config/dashboards
      - sqlite:/var/lib/signoz/
    configs:
      - source: signoz-prometheus-config
        target: /root/config/prometheus.yml
    environment:
      - SIGNOZ_ALERTMANAGER_PROVIDER=signoz
      - SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://clickhouse:9000
      - ClickHouseUrl=tcp://clickhouse:9000
      - ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
      - SIGNOZ_SQLSTORE_SQLITE_PATH=/var/lib/signoz/signoz.db
      - DASHBOARDS_PATH=/root/config/dashboards
      - STORAGE=clickhouse
      - GODEBUG=netdns=go
      - TELEMETRY_ENABLED=true
      - DEPLOYMENT_TYPE=docker-swarm
      - DOT_METRICS_ENABLED=true
    healthcheck:
      test:
        - CMD
@@ -148,19 +148,27 @@ services:
      interval: 30s
      timeout: 5s
      retries: 3
  frontend:
    !!merge <<: *common
    image: signoz/frontend:0.72.0
    depends_on:
      - alertmanager
      - query-service
    ports:
      - "3301:3301"
    volumes:
      - ../common/signoz/nginx-config.conf:/etc/nginx/conf.d/default.conf
  otel-collector:
    !!merge <<: *db-depend
    image: signoz/signoz-otel-collector:v0.129.12
    image: signoz/signoz-otel-collector:0.111.27
    command:
      - --config=/etc/otel-collector-config.yaml
      - --manager-config=/etc/manager-config.yaml
      - --copy-path=/var/tmp/collector-config.yaml
      - --feature-gates=-pkg.translator.prometheus.NormalizeName
    configs:
      - source: otel-collector-config
        target: /etc/otel-collector-config.yaml
      - source: otel-manager-config
        target: /etc/manager-config.yaml
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
      - ../common/signoz/otel-collector-opamp-config.yaml:/etc/manager-config.yaml
    environment:
      - OTEL_RESOURCE_ATTRIBUTES=host.name={{.Node.Hostname}},os.type={{.Node.Platform.OS}}
      - LOW_CARDINAL_EXCEPTION_GROUPING=false
@@ -173,10 +181,10 @@ services:
    depends_on:
      - clickhouse
      - schema-migrator
      - signoz
      - query-service
  schema-migrator:
    !!merge <<: *common
    image: signoz/signoz-schema-migrator:v0.129.12
    image: signoz/signoz-schema-migrator:0.111.24
    deploy:
      restart_policy:
        condition: on-failure
@@ -191,30 +199,11 @@ networks:
  signoz-net:
    name: signoz-net
volumes:
  alertmanager:
    name: signoz-alertmanager
  clickhouse:
    name: signoz-clickhouse
  sqlite:
    name: signoz-sqlite
  zookeeper-1:
    name: signoz-zookeeper-1
configs:
  clickhouse-config:
    file: ../common/clickhouse/config.xml
  clickhouse-users:
    file: ../common/clickhouse/users.xml
  clickhouse-custom-function:
    file: ../common/clickhouse/custom-function.xml
  clickhouse-cluster:
    file: ../common/clickhouse/cluster.xml
  signoz-prometheus-config:
    file: ../common/signoz/prometheus.yml
  # If you have multiple dashboard files, you can list them individually:
  # dashboard-foo:
  #   file: ../common/dashboards/foo.json
  # dashboard-bar:
  #   file: ../common/dashboards/bar.json

  otel-collector-config:
    file: ./otel-collector-config.yaml
  otel-manager-config:
    file: ../common/signoz/otel-collector-opamp-config.yaml
@@ -42,7 +42,7 @@ receivers:
      # please remove names from below if you want to collect logs from them
      - type: filter
        id: signoz_logs_filter
        expr: 'attributes.container_name matches "^(signoz_(logspout|signoz|otel-collector|clickhouse|zookeeper))|(infra_(logspout|otel-agent|otel-metrics)).*"'
        expr: 'attributes.container_name matches "^(signoz_(logspout|alertmanager|query-service|otel-collector|clickhouse|zookeeper))|(infra_(logspout|otel-agent|otel-metrics)).*"'
processors:
  batch:
    send_batch_size: 10000
@@ -1,10 +1,3 @@
connectors:
  signozmeter:
    metrics_flush_interval: 1h
    dimensions:
      - name: service.name
      - name: deployment.environment
      - name: host.name
receivers:
  otlp:
    protocols:
@@ -28,16 +21,12 @@ processors:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  batch/meter:
    send_batch_max_size: 25000
    send_batch_size: 20000
    timeout: 1s
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system]
    timeout: 2s
  signozspanmetrics/delta:
    metrics_exporter: signozclickhousemetrics
    metrics_exporter: clickhousemetricswrite
    metrics_flush_interval: 60s
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s]
    dimensions_cache_size: 100000
@@ -71,21 +60,25 @@ exporters:
    datasource: tcp://clickhouse:9000/signoz_traces
    low_cardinal_exception_grouping: ${env:LOW_CARDINAL_EXCEPTION_GROUPING}
    use_new_schema: true
  clickhousemetricswrite:
    endpoint: tcp://clickhouse:9000/signoz_metrics
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint: tcp://clickhouse:9000/signoz_metrics
  signozclickhousemetrics:
    dsn: tcp://clickhouse:9000/signoz_metrics
  clickhouselogsexporter:
    dsn: tcp://clickhouse:9000/signoz_logs
    timeout: 10s
    use_new_schema: true
  signozclickhousemeter:
    dsn: tcp://clickhouse:9000/signoz_meter
    timeout: 45s
    sending_queue:
      enabled: false
  # debug: {}
service:
  telemetry:
    logs:
      encoding: json
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - pprof
@@ -93,20 +86,16 @@ service:
    traces:
      receivers: [otlp]
      processors: [signozspanmetrics/delta, batch]
      exporters: [clickhousetraces, signozmeter]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [signozclickhousemetrics, signozmeter]
      exporters: [clickhousemetricswrite, signozclickhousemetrics]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [signozclickhousemetrics, signozmeter]
      exporters: [clickhousemetricswrite/prometheus, signozclickhousemetrics]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhouselogsexporter, signozmeter]
    metrics/meter:
      receivers: [signozmeter]
      processors: [batch/meter]
      exporters: [signozclickhousemeter]
      exporters: [clickhouselogsexporter]
@@ -2,7 +2,7 @@ version: "3"
x-common: &common
  networks:
    - signoz-net
  restart: unless-stopped
  restart: on-failure
  logging:
    options:
      max-size: 50m
@@ -10,7 +10,7 @@ x-common: &common
x-clickhouse-defaults: &clickhouse-defaults
  !!merge <<: *common
  # adding non-LTS version due to this fix https://github.com/ClickHouse/ClickHouse/commit/32caf8716352f45c1b617274c7508c86b7d1afab
  image: clickhouse/clickhouse-server:25.5.6
  image: clickhouse/clickhouse-server:24.1.2-alpine
  tty: true
  labels:
    signoz.io/scrape: "true"
@@ -40,11 +40,9 @@ x-clickhouse-defaults: &clickhouse-defaults
    nofile:
      soft: 262144
      hard: 262144
  environment:
    - CLICKHOUSE_SKIP_USER_SETUP=1
x-zookeeper-defaults: &zookeeper-defaults
  !!merge <<: *common
  image: signoz/zookeeper:3.7.1
  image: bitnami/zookeeper:3.7.1
  user: root
  labels:
    signoz.io/scrape: "true"
@@ -67,7 +65,7 @@ x-db-depend: &db-depend
services:
  init-clickhouse:
    !!merge <<: *common
    image: clickhouse/clickhouse-server:25.5.6
    image: clickhouse/clickhouse-server:24.1.2-alpine
    container_name: signoz-init-clickhouse
    command:
      - bash
@@ -81,7 +79,6 @@ services:
        wget -O histogram-quantile.tar.gz "https://github.com/SigNoz/signoz/releases/download/histogram-quantile%2F$${version}/histogram-quantile_$${node_os}_$${node_arch}.tar.gz"
        tar -xvzf histogram-quantile.tar.gz
        mv histogram-quantile /var/lib/clickhouse/user_scripts/histogramQuantile
    restart: on-failure
    volumes:
      - ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
  zookeeper-1:
@@ -177,29 +174,42 @@ services:
      - ../common/clickhouse/cluster.ha.xml:/etc/clickhouse-server/config.d/cluster.xml
      - clickhouse-3:/var/lib/clickhouse/
      # - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
  signoz:
  alertmanager:
    !!merge <<: *common
    image: signoz/alertmanager:${ALERTMANAGER_TAG:-0.23.7}
    container_name: signoz-alertmanager
    command:
      - --queryService.url=http://query-service:8085
      - --storage.path=/data
    volumes:
      - alertmanager:/data
    depends_on:
      query-service:
        condition: service_healthy
  query-service:
    !!merge <<: *db-depend
    image: signoz/signoz:${VERSION:-v0.108.0}
    container_name: signoz
    image: signoz/query-service:${DOCKER_TAG:-0.72.0}
    container_name: signoz-query-service
    command:
      - --config=/root/config/prometheus.yml
    ports:
      - "8080:8080" # signoz port
      - --use-logs-new-schema=true
      - --use-trace-new-schema=true
    # ports:
    #   - "3301:8080" # signoz port
    #   - "6060:6060" # pprof port
    volumes:
      - ../common/signoz/prometheus.yml:/root/config/prometheus.yml
      - ../common/dashboards:/root/config/dashboards
      - sqlite:/var/lib/signoz/
    environment:
      - SIGNOZ_ALERTMANAGER_PROVIDER=signoz
      - SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://clickhouse:9000
      - ClickHouseUrl=tcp://clickhouse:9000
      - ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
      - SIGNOZ_SQLSTORE_SQLITE_PATH=/var/lib/signoz/signoz.db
      - DASHBOARDS_PATH=/root/config/dashboards
      - STORAGE=clickhouse
      - GODEBUG=netdns=go
      - TELEMETRY_ENABLED=true
      - DEPLOYMENT_TYPE=docker-standalone-amd
      - DOT_METRICS_ENABLED=true
    healthcheck:
      test:
        - CMD
@@ -210,10 +220,21 @@ services:
      interval: 30s
      timeout: 5s
      retries: 3
  frontend:
    !!merge <<: *common
    image: signoz/frontend:${DOCKER_TAG:-0.72.0}
    container_name: signoz-frontend
    depends_on:
      - alertmanager
      - query-service
    ports:
      - "3301:3301"
    volumes:
      - ../common/signoz/nginx-config.conf:/etc/nginx/conf.d/default.conf
  # TODO: support otel-collector multiple replicas. Nginx/Traefik for loadbalancing?
  otel-collector:
    !!merge <<: *db-depend
    image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.129.12}
    image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.111.27}
    container_name: signoz-otel-collector
    command:
      - --config=/etc/otel-collector-config.yaml
@@ -235,11 +256,11 @@ services:
        condition: service_healthy
      schema-migrator-sync:
        condition: service_completed_successfully
      signoz:
      query-service:
        condition: service_healthy
  schema-migrator-sync:
    !!merge <<: *common
    image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.12}
    image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.24}
    container_name: schema-migrator-sync
    command:
      - sync
@@ -250,17 +271,18 @@ services:
        condition: service_healthy
  schema-migrator-async:
    !!merge <<: *db-depend
    image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.12}
    image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.24}
    container_name: schema-migrator-async
    command:
      - async
      - --dsn=tcp://clickhouse:9000
      - --up=
    restart: on-failure
networks:
  signoz-net:
    name: signoz-net
volumes:
  alertmanager:
    name: signoz-alertmanager
  clickhouse:
    name: signoz-clickhouse
  clickhouse-2:
deploy/docker/docker-compose.testing.yaml
@@ -0,0 +1,221 @@
version: "3"
x-common: &common
  networks:
    - signoz-net
  restart: on-failure
  logging:
    options:
      max-size: 50m
      max-file: "3"
x-clickhouse-defaults: &clickhouse-defaults
  !!merge <<: *common
  # adding non-LTS version due to this fix https://github.com/ClickHouse/ClickHouse/commit/32caf8716352f45c1b617274c7508c86b7d1afab
  image: clickhouse/clickhouse-server:24.1.2-alpine
  tty: true
  labels:
    signoz.io/scrape: "true"
    signoz.io/port: "9363"
    signoz.io/path: "/metrics"
  depends_on:
    init-clickhouse:
      condition: service_completed_successfully
    zookeeper-1:
      condition: service_healthy
  healthcheck:
    test:
      - CMD
      - wget
      - --spider
      - -q
      - 0.0.0.0:8123/ping
    interval: 30s
    timeout: 5s
    retries: 3
  ulimits:
    nproc: 65535
    nofile:
      soft: 262144
      hard: 262144
x-zookeeper-defaults: &zookeeper-defaults
  !!merge <<: *common
  image: bitnami/zookeeper:3.7.1
  user: root
  labels:
    signoz.io/scrape: "true"
    signoz.io/port: "9141"
    signoz.io/path: "/metrics"
  healthcheck:
    test:
      - CMD-SHELL
      - curl -s -m 2 http://localhost:8080/commands/ruok | grep error | grep null
    interval: 30s
    timeout: 5s
    retries: 3
x-db-depend: &db-depend
  !!merge <<: *common
  depends_on:
    clickhouse:
      condition: service_healthy
    schema-migrator-sync:
      condition: service_completed_successfully
services:
  init-clickhouse:
    !!merge <<: *common
    image: clickhouse/clickhouse-server:24.1.2-alpine
    container_name: signoz-init-clickhouse
    command:
      - bash
      - -c
      - |
        version="v0.0.1"
        node_os=$$(uname -s | tr '[:upper:]' '[:lower:]')
        node_arch=$$(uname -m | sed s/aarch64/arm64/ | sed s/x86_64/amd64/)
        echo "Fetching histogram-binary for $${node_os}/$${node_arch}"
        cd /tmp
        wget -O histogram-quantile.tar.gz "https://github.com/SigNoz/signoz/releases/download/histogram-quantile%2F$${version}/histogram-quantile_$${node_os}_$${node_arch}.tar.gz"
        tar -xvzf histogram-quantile.tar.gz
        mv histogram-quantile /var/lib/clickhouse/user_scripts/histogramQuantile
    volumes:
      - ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
  zookeeper-1:
    !!merge <<: *zookeeper-defaults
    container_name: signoz-zookeeper-1
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    volumes:
      - zookeeper-1:/bitnami/zookeeper
    environment:
      - ZOO_SERVER_ID=1
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOO_AUTOPURGE_INTERVAL=1
      - ZOO_ENABLE_PROMETHEUS_METRICS=yes
      - ZOO_PROMETHEUS_METRICS_PORT_NUMBER=9141
  clickhouse:
    !!merge <<: *clickhouse-defaults
    container_name: signoz-clickhouse
    ports:
      - "9000:9000"
      - "8123:8123"
      - "9181:9181"
    volumes:
      - ../common/clickhouse/config.xml:/etc/clickhouse-server/config.xml
      - ../common/clickhouse/users.xml:/etc/clickhouse-server/users.xml
      - ../common/clickhouse/custom-function.xml:/etc/clickhouse-server/custom-function.xml
      - ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
      - ../common/clickhouse/cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
      - clickhouse:/var/lib/clickhouse/
      # - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
  alertmanager:
    !!merge <<: *common
    image: signoz/alertmanager:${ALERTMANAGER_TAG:-0.23.7}
    container_name: signoz-alertmanager
    command:
      - --queryService.url=http://query-service:8085
      - --storage.path=/data
    volumes:
      - alertmanager:/data
    depends_on:
      query-service:
        condition: service_healthy
  query-service:
    !!merge <<: *db-depend
    image: signoz/query-service:${DOCKER_TAG:-0.72.0}
    container_name: signoz-query-service
    command:
      - --config=/root/config/prometheus.yml
      - --gateway-url=https://api.staging.signoz.cloud
      - --use-logs-new-schema=true
      - --use-trace-new-schema=true
    # ports:
    #   - "8080:8080" # signoz port
    #   - "6060:6060" # pprof port
    volumes:
      - ../common/signoz/prometheus.yml:/root/config/prometheus.yml
      - ../common/dashboards:/root/config/dashboards
      - sqlite:/var/lib/signoz/
    environment:
      - ClickHouseUrl=tcp://clickhouse:9000
      - ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
      - SIGNOZ_SQLSTORE_SQLITE_PATH=/var/lib/signoz/signoz.db
      - DASHBOARDS_PATH=/root/config/dashboards
      - STORAGE=clickhouse
      - GODEBUG=netdns=go
      - TELEMETRY_ENABLED=true
      - DEPLOYMENT_TYPE=docker-standalone-amd
      - KAFKA_SPAN_EVAL=${KAFKA_SPAN_EVAL:-false}
    healthcheck:
      test:
        - CMD
        - wget
        - --spider
        - -q
        - localhost:8080/api/v1/health
      interval: 30s
      timeout: 5s
      retries: 3
  frontend:
    !!merge <<: *common
    image: signoz/frontend:${DOCKER_TAG:-0.72.0}
    container_name: signoz-frontend
    depends_on:
      - alertmanager
      - query-service
    ports:
      - "3301:3301"
    volumes:
      - ../common/signoz/nginx-config.conf:/etc/nginx/conf.d/default.conf
  otel-collector:
    !!merge <<: *db-depend
    image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.111.27}
    container_name: signoz-otel-collector
    command:
      - --config=/etc/otel-collector-config.yaml
      - --manager-config=/etc/manager-config.yaml
      - --copy-path=/var/tmp/collector-config.yaml
      - --feature-gates=-pkg.translator.prometheus.NormalizeName
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
      - ../common/signoz/otel-collector-opamp-config.yaml:/etc/manager-config.yaml
    environment:
      - OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
      - LOW_CARDINAL_EXCEPTION_GROUPING=false
    ports:
      # - "1777:1777" # pprof extension
      - "4317:4317" # OTLP gRPC receiver
      - "4318:4318" # OTLP HTTP receiver
    depends_on:
      query-service:
        condition: service_healthy
  schema-migrator-sync:
    !!merge <<: *common
    image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.24}
    container_name: schema-migrator-sync
    command:
      - sync
      - --dsn=tcp://clickhouse:9000
      - --up=
    depends_on:
      clickhouse:
        condition: service_healthy
  schema-migrator-async:
    !!merge <<: *db-depend
    image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.24}
    container_name: schema-migrator-async
    command:
      - async
      - --dsn=tcp://clickhouse:9000
      - --up=
networks:
  signoz-net:
    name: signoz-net
volumes:
  alertmanager:
    name: signoz-alertmanager
  clickhouse:
    name: signoz-clickhouse
  sqlite:
    name: signoz-sqlite
  zookeeper-1:
    name: signoz-zookeeper-1
@@ -2,14 +2,15 @@ version: "3"
x-common: &common
  networks:
    - signoz-net
  restart: unless-stopped
  restart: on-failure
  logging:
    options:
      max-size: 50m
      max-file: "3"
x-clickhouse-defaults: &clickhouse-defaults
  !!merge <<: *common
  image: clickhouse/clickhouse-server:25.5.6
  # adding non-LTS version due to this fix https://github.com/ClickHouse/ClickHouse/commit/32caf8716352f45c1b617274c7508c86b7d1afab
  image: clickhouse/clickhouse-server:24.1.2-alpine
  tty: true
  labels:
    signoz.io/scrape: "true"
@@ -35,11 +36,9 @@ x-clickhouse-defaults: &clickhouse-defaults
    nofile:
      soft: 262144
      hard: 262144
  environment:
    - CLICKHOUSE_SKIP_USER_SETUP=1
x-zookeeper-defaults: &zookeeper-defaults
  !!merge <<: *common
  image: signoz/zookeeper:3.7.1
  image: bitnami/zookeeper:3.7.1
  user: root
  labels:
    signoz.io/scrape: "true"
@@ -62,7 +61,7 @@ x-db-depend: &db-depend
services:
  init-clickhouse:
    !!merge <<: *common
    image: clickhouse/clickhouse-server:25.5.6
    image: clickhouse/clickhouse-server:24.1.2-alpine
    container_name: signoz-init-clickhouse
    command:
      - bash
@@ -76,7 +75,6 @@ services:
        wget -O histogram-quantile.tar.gz "https://github.com/SigNoz/signoz/releases/download/histogram-quantile%2F$${version}/histogram-quantile_$${node_os}_$${node_arch}.tar.gz"
        tar -xvzf histogram-quantile.tar.gz
        mv histogram-quantile /var/lib/clickhouse/user_scripts/histogramQuantile
    restart: on-failure
    volumes:
      - ../common/clickhouse/user_scripts:/var/lib/clickhouse/user_scripts/
  zookeeper-1:
@@ -109,29 +107,42 @@ services:
      - ../common/clickhouse/cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
      - clickhouse:/var/lib/clickhouse/
      # - ../common/clickhouse/storage.xml:/etc/clickhouse-server/config.d/storage.xml
  signoz:
  alertmanager:
    !!merge <<: *common
    image: signoz/alertmanager:${ALERTMANAGER_TAG:-0.23.7}
    container_name: signoz-alertmanager
    command:
      - --queryService.url=http://query-service:8085
      - --storage.path=/data
    volumes:
      - alertmanager:/data
    depends_on:
      query-service:
        condition: service_healthy
  query-service:
    !!merge <<: *db-depend
    image: signoz/signoz:${VERSION:-v0.108.0}
    container_name: signoz
    image: signoz/query-service:${DOCKER_TAG:-0.72.0}
    container_name: signoz-query-service
    command:
      - --config=/root/config/prometheus.yml
    ports:
      - "8080:8080" # signoz port
      - --use-logs-new-schema=true
      - --use-trace-new-schema=true
    # ports:
    #   - "3301:8080" # signoz port
    #   - "6060:6060" # pprof port
    volumes:
      - ../common/signoz/prometheus.yml:/root/config/prometheus.yml
      - ../common/dashboards:/root/config/dashboards
      - sqlite:/var/lib/signoz/
    environment:
      - SIGNOZ_ALERTMANAGER_PROVIDER=signoz
      - SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://clickhouse:9000
      - ClickHouseUrl=tcp://clickhouse:9000
      - ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
      - SIGNOZ_SQLSTORE_SQLITE_PATH=/var/lib/signoz/signoz.db
      - DASHBOARDS_PATH=/root/config/dashboards
      - STORAGE=clickhouse
      - GODEBUG=netdns=go
      - TELEMETRY_ENABLED=true
      - DEPLOYMENT_TYPE=docker-standalone-amd
      - DOT_METRICS_ENABLED=true
    healthcheck:
      test:
        - CMD
@@ -142,9 +153,20 @@ services:
      interval: 30s
      timeout: 5s
      retries: 3
  frontend:
    !!merge <<: *common
    image: signoz/frontend:${DOCKER_TAG:-0.72.0}
    container_name: signoz-frontend
    depends_on:
      - alertmanager
      - query-service
    ports:
      - "3301:3301"
    volumes:
      - ../common/signoz/nginx-config.conf:/etc/nginx/conf.d/default.conf
  otel-collector:
    !!merge <<: *db-depend
    image: signoz/signoz-otel-collector:${OTELCOL_TAG:-v0.129.12}
    image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.111.27}
    container_name: signoz-otel-collector
    command:
      - --config=/etc/otel-collector-config.yaml
@@ -162,11 +184,11 @@ services:
      - "4317:4317" # OTLP gRPC receiver
      - "4318:4318" # OTLP HTTP receiver
    depends_on:
      signoz:
      query-service:
        condition: service_healthy
  schema-migrator-sync:
    !!merge <<: *common
    image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.12}
    image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.24}
    container_name: schema-migrator-sync
    command:
      - sync
@@ -175,20 +197,20 @@ services:
    depends_on:
      clickhouse:
        condition: service_healthy
    restart: on-failure
  schema-migrator-async:
    !!merge <<: *db-depend
    image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-v0.129.12}
    image: signoz/signoz-schema-migrator:${OTELCOL_TAG:-0.111.24}
    container_name: schema-migrator-async
    command:
      - async
      - --dsn=tcp://clickhouse:9000
      - --up=
    restart: on-failure
networks:
  signoz-net:
    name: signoz-net
volumes:
  alertmanager:
    name: signoz-alertmanager
  clickhouse:
    name: signoz-clickhouse
  sqlite:
@@ -8,7 +8,7 @@ x-common: &common
    options:
      max-size: 50m
      max-file: "3"
  restart: unless-stopped
  restart: on-failure
services:
  hotrod:
    <<: *common

@@ -8,7 +8,7 @@ x-common: &common
    options:
      max-size: 50m
      max-file: "3"
  restart: unless-stopped
  restart: on-failure
services:
  otel-agent:
    <<: *common
@@ -79,7 +79,7 @@ receivers:
      # please remove names from below if you want to collect logs from them
      - type: filter
        id: signoz_logs_filter
        expr: 'attributes.container_name matches "^signoz|(signoz-(|otel-collector|clickhouse|zookeeper))|(infra-(logspout|otel-agent)-.*)"'
        expr: 'attributes.container_name matches "^(signoz-(|alertmanager|query-service|otel-collector|clickhouse|zookeeper))|(infra-(logspout|otel-agent)-.*)"'
processors:
  batch:
    send_batch_size: 10000
@@ -1,10 +1,3 @@
connectors:
  signozmeter:
    metrics_flush_interval: 1h
    dimensions:
      - name: service.name
      - name: deployment.environment
      - name: host.name
receivers:
  otlp:
    protocols:
@@ -28,16 +21,12 @@ processors:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  batch/meter:
    send_batch_max_size: 25000
    send_batch_size: 20000
    timeout: 1s
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system]
    timeout: 2s
  signozspanmetrics/delta:
    metrics_exporter: signozclickhousemetrics
    metrics_exporter: clickhousemetricswrite
    metrics_flush_interval: 60s
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s]
    dimensions_cache_size: 100000
@@ -71,21 +60,25 @@ exporters:
    datasource: tcp://clickhouse:9000/signoz_traces
    low_cardinal_exception_grouping: ${env:LOW_CARDINAL_EXCEPTION_GROUPING}
    use_new_schema: true
  clickhousemetricswrite:
    endpoint: tcp://clickhouse:9000/signoz_metrics
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint: tcp://clickhouse:9000/signoz_metrics
  signozclickhousemetrics:
    dsn: tcp://clickhouse:9000/signoz_metrics
  clickhouselogsexporter:
    dsn: tcp://clickhouse:9000/signoz_logs
    timeout: 10s
    use_new_schema: true
  signozclickhousemeter:
    dsn: tcp://clickhouse:9000/signoz_meter
    timeout: 45s
    sending_queue:
      enabled: false
  # debug: {}
service:
  telemetry:
    logs:
      encoding: json
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - pprof
@@ -93,20 +86,16 @@ service:
    traces:
      receivers: [otlp]
      processors: [signozspanmetrics/delta, batch]
      exporters: [clickhousetraces, signozmeter]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [signozclickhousemetrics, signozmeter]
      exporters: [clickhousemetricswrite, signozclickhousemetrics]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [signozclickhousemetrics, signozmeter]
      exporters: [clickhousemetricswrite/prometheus, signozclickhousemetrics]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhouselogsexporter, signozmeter]
    metrics/meter:
      receivers: [signozmeter]
      processors: [batch/meter]
      exporters: [signozclickhousemeter]
      exporters: [clickhouselogsexporter]
@@ -93,7 +93,7 @@ check_os() {
         ;;
     Red\ Hat*)
         desired_os=1
-        os="rhel"
+        os="red hat"
         package_manager="yum"
         ;;
     CentOS*)
@@ -127,7 +127,7 @@ check_os() {
 # The script should error out in case they aren't available
 check_ports_occupied() {
     local port_check_output
-    local ports_pattern="8080|4317"
+    local ports_pattern="3301|4317"

     if is_mac; then
         port_check_output="$(netstat -anp tcp | awk '$6 == "LISTEN" && $4 ~ /^.*\.('"$ports_pattern"')$/')"
@@ -144,7 +144,7 @@ check_ports_occupied() {
         send_event "port_not_available"

         echo "+++++++++++ ERROR ++++++++++++++++++++++"
-        echo "SigNoz requires ports 8080 & 4317 to be open. Please shut down any other service(s) that may be running on these ports."
+        echo "SigNoz requires ports 3301 & 4317 to be open. Please shut down any other service(s) that may be running on these ports."
         echo "You can run SigNoz on another port following this guide https://signoz.io/docs/install/troubleshooting/"
         echo "++++++++++++++++++++++++++++++++++++++++"
         echo ""
@@ -248,7 +248,7 @@ wait_for_containers_start() {

     # The while loop is important because for-loops don't work for dynamic values
     while [[ $timeout -gt 0 ]]; do
-        status_code="$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:8080/api/v1/health?live=1" || true)"
+        status_code="$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:3301/api/v1/health?live=1" || true)"
         if [[ status_code -eq 200 ]]; then
             break
         else
@@ -484,7 +484,7 @@ pushd "${BASE_DIR}/${DOCKER_STANDALONE_DIR}" > /dev/null 2>&1

 # check for open ports, if signoz is not installed
 if is_command_present docker-compose; then
-    if $sudo_cmd $docker_compose_cmd ps | grep "signoz" | grep -q "healthy" > /dev/null 2>&1; then
+    if $sudo_cmd $docker_compose_cmd ps | grep "signoz-query-service" | grep -q "healthy" > /dev/null 2>&1; then
         echo "SigNoz already installed, skipping the occupied ports check"
     else
         check_ports_occupied
@@ -533,7 +533,7 @@ else
     echo ""
     echo "🟢 Your installation is complete!"
     echo ""
-    echo -e "🟢 SigNoz is running on http://localhost:8080"
+    echo -e "🟢 Your frontend is running on http://localhost:3301"
     echo ""
     echo "ℹ️ By default, retention period is set to 15 days for logs and traces, and 30 days for metrics."
     echo -e "To change this, navigate to the General tab on the Settings page of SigNoz UI. For more details, refer to https://signoz.io/docs/userguide/retention-period \n"
docs/api/openapi.yml (4063 changed lines)
File diff suppressed because it is too large
@@ -1,135 +0,0 @@
# Development Guide

Welcome! This guide will help you set up your local development environment for SigNoz. Let's get you started! 🚀

## What do I need?

Before diving in, make sure you have these tools installed:

- **Git** - Our version control system
  - Download from [git-scm.com](https://git-scm.com/)

- **Go** - Powers our backend
  - Download from [go.dev/dl](https://go.dev/dl/)
  - Check [go.mod](../../go.mod#L3) for the minimum version

- **Node** - Powers our frontend
  - Download from [nodejs.org](https://nodejs.org)
  - Check [.nvmrc](../../frontend/.nvmrc) for the version

- **Yarn** - Our frontend package manager
  - Follow the [installation guide](https://yarnpkg.com/getting-started/install)

- **Docker** - For running ClickHouse and Postgres locally
  - Get it from [docs.docker.com/get-docker](https://docs.docker.com/get-docker/)

> 💡 **Tip**: Run `make help` to see all available commands with descriptions

## How do I get the code?

1. Open your terminal
2. Clone the repository:
   ```bash
   git clone https://github.com/SigNoz/signoz.git
   ```
3. Navigate to the project:
   ```bash
   cd signoz
   ```

## How do I run it locally?

SigNoz has four main components to run locally: ClickHouse, the OpenTelemetry Collector, the Backend, and the Frontend. Let's set them up one by one.

### 1. Setting up ClickHouse

First, we need to get ClickHouse running:

```bash
make devenv-clickhouse
```

This command:
- Starts ClickHouse in a single-shard, single-replica cluster
- Sets up Zookeeper
- Runs the latest schema migrations

### 2. Setting up SigNoz OpenTelemetry Collector

Next, start the OpenTelemetry Collector to receive telemetry data:

```bash
make devenv-signoz-otel-collector
```

This command:
- Starts the SigNoz OpenTelemetry Collector
- Listens on ports 4317 (gRPC) and 4318 (HTTP) for incoming telemetry data
- Forwards data to ClickHouse for storage

> 💡 **Quick Setup**: Use `make devenv-up` to start both ClickHouse and the OTel Collector together

### 3. Starting the Backend

1. Run the backend server:
   ```bash
   make go-run-community
   ```

2. Verify it's working:
   ```bash
   curl http://localhost:8080/api/v1/health
   ```

   You should see: `{"status":"ok"}`

> 💡 **Tip**: The API server runs at `http://localhost:8080/` by default

### 4. Setting up the Frontend

1. Navigate to the frontend directory:
   ```bash
   cd frontend
   ```

2. Install dependencies:
   ```bash
   yarn install
   ```

3. Create a `.env` file in this directory:
   ```env
   FRONTEND_API_ENDPOINT=http://localhost:8080
   ```

4. Start the development server:
   ```bash
   yarn dev
   ```

> 💡 **Tip**: `yarn dev` will automatically rebuild when you make changes to the code

Now you're all set to start developing! Happy coding! 🎉

## Verifying Your Setup

To verify everything is working correctly:

1. **Check ClickHouse**: `curl http://localhost:8123/ping` (should return "Ok.")
2. **Check OTel Collector**: `curl http://localhost:13133` (should return health status)
3. **Check Backend**: `curl http://localhost:8080/api/v1/health` (should return `{"status":"ok"}`)
4. **Check Frontend**: Open `http://localhost:3301` in your browser

## How to send test data?

You can now send telemetry data to your local SigNoz instance:

- **OTLP gRPC**: `localhost:4317`
- **OTLP HTTP**: `localhost:4318`

For example, using `curl` to send a test trace:

```bash
curl -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"test-service"}}]},"scopeSpans":[{"spans":[{"traceId":"12345678901234567890123456789012","spanId":"1234567890123456","name":"test-span","startTimeUnixNano":"1609459200000000000","endTimeUnixNano":"1609459201000000000"}]}]}]}'
```
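
If you'd rather emit the test span from code, here is a minimal sketch using the OpenTelemetry Go SDK. It assumes you `go get` the SDK modules first; the `test-service` name, the semconv version, and the `dev-guide` tracer name are illustrative choices, not conventions from this repo.

```go
package main

import (
	"context"
	"log"
	"time"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
)

func main() {
	ctx := context.Background()

	// Exporter pointed at the local collector's OTLP gRPC endpoint.
	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("localhost:4317"),
		otlptracegrpc.WithInsecure(),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Tracer provider with a service.name so the data shows up
	// under "test-service" in the UI.
	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exp),
		sdktrace.WithResource(resource.NewWithAttributes(
			semconv.SchemaURL,
			semconv.ServiceName("test-service"),
		)),
	)
	defer func() { _ = tp.Shutdown(ctx) }()

	// Emit a single test span.
	_, span := tp.Tracer("dev-guide").Start(ctx, "test-span")
	time.Sleep(100 * time.Millisecond)
	span.End()
}
```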
@@ -1,51 +0,0 @@
# Endpoint

This guide outlines the recommended approach for designing endpoints, with a focus on entity relationships, RESTful structure, and examples from the codebase.

## How do we design an endpoint?

### Understand the core entities and their relationships

Start by understanding the core entities and their relationships. For example:

- **Organization**: an organization can have multiple users

### Structure Endpoints RESTfully

Endpoints should reflect the resource hierarchy and follow RESTful conventions. Use clear, **pluralized resource names** and versioning. For example:

- `POST /v1/organizations` — Create an organization
- `GET /v1/organizations/:id` — Get an organization by id
- `DELETE /v1/organizations/:id` — Delete an organization by id
- `PUT /v1/organizations/:id` — Update an organization by id
- `GET /v1/organizations/:id/users` — Get all users in an organization
- `GET /v1/organizations/me/users` — Get all users in my organization

Think in terms of resource navigation in a file system. For example, to find your organization, you would navigate to the root of the file system and then to the `organizations` directory. To find a user in an organization, you would navigate to the `organizations` directory and then to that organization's `id` directory.

```bash
v1/
├── organizations/
│   └── 123/
│       └── users/
```

`me` endpoints are special. They are used to determine the actual id via some auth/external mechanism. For `me` endpoints, think of the `me` directory being symlinked to your organization directory. For example, if you are a part of the organization `123`, the `me` directory will be symlinked to `/v1/organizations/123`:

```bash
v1/
├── organizations/
│   ├── me/ -> symlink to /v1/organizations/123
│   │   └── users/
│   └── 123/
│       └── users/
```

> 💡 **Note**: There are various ways to structure endpoints. Some prefer to use singular resource names instead of `me`. Others prefer to use singular resource names for all endpoints. We have, however, chosen to standardize our endpoints in the manner described above.
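
To make the conventions concrete, here is a hedged sketch of how such routes might be registered with Gorilla `mux` (which the apiserver uses for routing). The `OrganizationHandler` type and its methods are illustrative stand-ins, not actual names from the codebase:

```go
package main

import (
	"net/http"

	"github.com/gorilla/mux"
)

// OrganizationHandler is a hypothetical handler, used only for illustration.
type OrganizationHandler struct{}

func (h *OrganizationHandler) Create(rw http.ResponseWriter, req *http.Request)    {}
func (h *OrganizationHandler) Get(rw http.ResponseWriter, req *http.Request)       {}
func (h *OrganizationHandler) ListUsers(rw http.ResponseWriter, req *http.Request) {}

func registerRoutes(router *mux.Router, h *OrganizationHandler) {
	// Plural resource names on versioned paths.
	router.HandleFunc("/v1/organizations", h.Create).Methods(http.MethodPost)
	router.HandleFunc("/v1/organizations/{id}", h.Get).Methods(http.MethodGet)
	router.HandleFunc("/v1/organizations/{id}/users", h.ListUsers).Methods(http.MethodGet)

	// "me" resolves the organization id from the request's auth context:
	// conceptually a symlink to /v1/organizations/<your-org-id>.
	router.HandleFunc("/v1/organizations/me/users", h.ListUsers).Methods(http.MethodGet)
}

func main() {
	router := mux.NewRouter()
	registerRoutes(router, &OrganizationHandler{})
	_ = http.ListenAndServe(":8080", router)
}
```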

## What should I remember?

- Use clear, **plural resource names**
- Use `me` endpoints for determining the actual id via some auth mechanism

> 💡 **Note**: When in doubt, diagram the relationships and walk through the user flows as if navigating a file system. This will help you design endpoints that are both logical and user-friendly.
@@ -1,103 +0,0 @@
# Errors

SigNoz includes its own structured [errors](/pkg/errors/errors.go) package. It's built on top of Go's `error` interface, extending it to add additional context that helps provide more meaningful error messages throughout the application.

## How to use it?

To use the SigNoz structured errors package, use these functions instead of the standard library alternatives:

```go
// Instead of errors.New()
errors.New(typ, code, message)

// Instead of fmt.Errorf()
errors.Newf(typ, code, message, args...)
```

### Typ
The Typ (read as Type, defined as `typ`) is used to categorize errors across the codebase and is loosely coupled with HTTP/GRPC status codes. All predefined types can be found in [pkg/errors/type.go](/pkg/errors/type.go). For example:

- `TypeInvalidInput` - Indicates invalid input was provided
- `TypeNotFound` - Indicates a resource was not found

By design, `typ` is unexported and cannot be declared outside of the [errors](/pkg/errors/errors.go) package. This ensures that it is consistent across the codebase and is used in a way that is meaningful.

### Code
Codes are used to provide more granular categorization within types. For instance, a type of `TypeInvalidInput` might have codes like `CodeInvalidEmail` or `CodeInvalidPassword`.

To create new error codes, use the `errors.MustNewCode` function:

```go
var (
	CodeThingAlreadyExists = errors.MustNewCode("thing_already_exists")
	CodeThingNotFound      = errors.MustNewCode("thing_not_found")
)
```

> 💡 **Note**: Error codes must match the regex `^[a-z_]+$`, otherwise the code will panic.

## Show me some examples

### Using the error
A basic example of using the error:

```go
var (
	CodeThingAlreadyExists = errors.MustNewCode("thing_already_exists")
)

func CreateThing(id string) error {
	_, err := thing.GetFromStore(id)
	if err != nil {
		if errors.Ast(err, errors.TypeNotFound) {
			// thing was not found, create it
			return thing.Create(id)
		}

		// something else went wrong, wrap the error with more context
		return errors.Wrapf(err, errors.TypeInternal, errors.CodeUnknown, "failed to get thing from store")
	}

	return errors.Newf(errors.TypeAlreadyExists, CodeThingAlreadyExists, "thing with id %s already exists", id)
}
```

### Changing the error
Sometimes you may want to change the error while preserving the message:

```go
func GetUserSecurely(id string) (*User, error) {
	user, err := repository.GetUser(id)
	if err != nil {
		if errors.Ast(err, errors.TypeNotFound) {
			// Convert NotFound to Forbidden for security reasons
			return nil, errors.New(errors.TypeForbidden, errors.CodeAccessDenied, "access denied to requested resource")
		}
		return nil, err
	}
	return user, nil
}
```

## Why do we need this?

In a large codebase like SigNoz, error handling is critical for maintaining reliability, debuggability, and a good user experience. We believe that it is the **responsibility of a function** to return **well-defined** errors that **accurately describe what went wrong**. With our structured error system:

- Functions can create precise errors with appropriate additional context
- Callers can make informed decisions based on the additional context
- Error context is preserved and enhanced as it moves up the call stack

The caller (which can be another function, an HTTP/gRPC handler, or something else entirely) can then choose to use this error to take appropriate actions such as:

- A function can branch into different paths based on the context
- An HTTP/gRPC handler can derive the correct status code and message from the error and send it to the client (see the sketch after this list)
- Logging systems can capture structured error information for better diagnostics
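
As a sketch of the handler-side mapping: in SigNoz that translation is driven by the error's `typ`, but the snippet below fakes the categories with plain standard-library sentinel errors purely to stay self-contained. The sentinels and the `statusFromError` helper are illustrative, not the actual mechanism:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// Hypothetical sentinels standing in for typed errors; the real SigNoz
// package carries the type on the error value itself.
var (
	errNotFound     = errors.New("not found")
	errInvalidInput = errors.New("invalid input")
)

// statusFromError derives an HTTP status code from the error's category,
// mirroring how a handler can translate typed errors for clients.
func statusFromError(err error) int {
	switch {
	case errors.Is(err, errNotFound):
		return http.StatusNotFound
	case errors.Is(err, errInvalidInput):
		return http.StatusBadRequest
	default:
		return http.StatusInternalServerError
	}
}

func main() {
	fmt.Println(statusFromError(errNotFound))     // 404
	fmt.Println(statusFromError(errInvalidInput)) // 400
}
```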

Although there might be cases where this seems too verbose, it makes the code more maintainable and consistent. Slightly verbose code is better than clever code that doesn't provide enough context.

## What should I remember?

- Think about error handling as you write your code, not as an afterthought.
- Always use the [errors](/pkg/errors/errors.go) package instead of the standard library's `errors.New()` or `fmt.Errorf()`.
- Always assign appropriate codes to errors when creating them instead of using the "catch all" error codes defined in [pkg/errors/code.go](/pkg/errors/code.go).
- Use `errors.Wrapf()` to add context to errors while preserving the original when appropriate.
@@ -1,134 +0,0 @@
# Flagger

Flagger is SigNoz's feature flagging system built on top of the [OpenFeature](https://openfeature.dev/) standard. It provides a unified interface for evaluating feature flags across the application, allowing features to be enabled, disabled, or configured dynamically without code changes.

> 💡 **Note**: OpenFeature is a CNCF project that provides a vendor-agnostic feature flagging API, making it easy to switch providers without changing application code.

## How does it work?

Flagger consists of three main components:

1. **Registry** (`pkg/flagger/registry.go`) - Contains all available feature flags with their metadata and default values
2. **Flagger** (`pkg/flagger/flagger.go`) - The consumer-facing interface for evaluating feature flags
3. **Providers** (`pkg/flagger/<provider>flagger/`) - Implementations that supply feature flag values (e.g., `configflagger` for config-based flags)

The evaluation flow works as follows (see the sketch after this list):

1. The caller requests a feature flag value via the `Flagger` interface
2. Flagger checks the registry to validate the flag exists and get its default value
3. Each registered provider is queried for an override value
4. If a provider returns a value different from the default, that value is returned
5. Otherwise, the default value from the registry is returned
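
A minimal sketch of that resolution loop, with deliberately simplified `Provider` and flag shapes (the real interfaces in `pkg/flagger` and `featuretypes` differ):

```go
package main

import "fmt"

// Provider is a simplified stand-in for a flagger provider: it may return
// an override value for a flag, or report that it has none.
type Provider interface {
	Boolean(flag string) (value bool, ok bool)
}

type staticProvider map[string]bool

func (p staticProvider) Boolean(flag string) (bool, bool) {
	v, ok := p[flag]
	return v, ok
}

// evaluate walks the registered providers in order and returns the first
// value that differs from the registry default; otherwise the default wins.
func evaluate(flag string, defaultValue bool, providers []Provider) bool {
	for _, p := range providers {
		if v, ok := p.Boolean(flag); ok && v != defaultValue {
			return v
		}
	}
	return defaultValue
}

func main() {
	providers := []Provider{
		staticProvider{},                       // knows nothing about the flag
		staticProvider{"my_new_feature": true}, // overrides the default
	}
	fmt.Println(evaluate("my_new_feature", false, providers)) // true
}
```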

## How to add a new feature flag?

### 1. Register the flag in the registry

Add your feature flag definition in `pkg/flagger/registry.go`:

```go
var (
	// Export the feature name for use in evaluations
	FeatureMyNewFeature = featuretypes.MustNewName("my_new_feature")
)

func MustNewRegistry() featuretypes.Registry {
	registry, err := featuretypes.NewRegistry(
		// ...existing features...
		&featuretypes.Feature{
			Name:           FeatureMyNewFeature,
			Kind:           featuretypes.KindBoolean, // or KindString, KindFloat, KindInt, KindObject
			Stage:          featuretypes.StageStable, // or StageAlpha, StageBeta
			Description:    "Controls whether my new feature is enabled",
			DefaultVariant: featuretypes.MustNewName("disabled"),
			Variants:       featuretypes.NewBooleanVariants(),
		},
	)
	// ...
}
```

> 💡 **Note**: Feature names must match the regex `^[a-z_]+$` (lowercase letters and underscores only).

### 2. Configure the feature flag value (optional)

To override the default value, add an entry in your configuration file:

```yaml
flagger:
  config:
    boolean:
      my_new_feature: true
```

Supported configuration types:

| Type | Config Key | Go Type |
|------|------------|---------|
| Boolean | `boolean` | `bool` |
| String | `string` | `string` |
| Float | `float` | `float64` |
| Integer | `integer` | `int64` |
| Object | `object` | `any` |

## How to evaluate a feature flag?

Use the `Flagger` interface to evaluate feature flags. The interface provides typed methods for each value type:

```go
import (
	"github.com/SigNoz/signoz/pkg/flagger"
	"github.com/SigNoz/signoz/pkg/types/featuretypes"
)

func DoSomething(ctx context.Context, f flagger.Flagger) error {
	// Create an evaluation context (typically with org ID)
	evalCtx := featuretypes.NewFlaggerEvaluationContext(orgID)

	// Evaluate with error handling
	enabled, err := f.Boolean(ctx, flagger.FeatureMyNewFeature, evalCtx)
	if err != nil {
		return err
	}

	if enabled {
		// Feature is enabled
	}

	return nil
}
```

### Empty variants

For cases where you want to use a default value on error (and log the error), use the `*OrEmpty` methods:

```go
func DoSomething(ctx context.Context, f flagger.Flagger) {
	evalCtx := featuretypes.NewFlaggerEvaluationContext(orgID)

	// Returns false on error and logs the error
	if f.BooleanOrEmpty(ctx, flagger.FeatureMyNewFeature, evalCtx) {
		// Feature is enabled
	}
}
```

### Available evaluation methods

| Method | Return Type | Empty Variant Default |
|--------|-------------|-----------------------|
| `Boolean()` | `(bool, error)` | `false` |
| `String()` | `(string, error)` | `""` |
| `Float()` | `(float64, error)` | `0.0` |
| `Int()` | `(int64, error)` | `0` |
| `Object()` | `(any, error)` | `struct{}{}` |

## What should I remember?

- Always define feature flags in the registry (`pkg/flagger/registry.go`) before using them
- Use descriptive feature names that clearly indicate what the flag controls
- Prefer `*OrEmpty` methods for non-critical features to avoid error handling overhead
- Export feature name variables (e.g., `FeatureMyNewFeature`) for type-safe usage across packages
- Consider the feature's lifecycle stage (`Alpha`, `Beta`, `Stable`) when defining it
- Providers are evaluated in order; the first non-default value wins
@@ -1,179 +0,0 @@
# Handler

Handlers in SigNoz are responsible for exposing module functionality over HTTP. They are thin adapters that:

- Decode incoming HTTP requests
- Call the appropriate module layer
- Return structured responses (or errors) in a consistent format
- Describe themselves for OpenAPI generation

They are **not** the place for complex business logic; that belongs in modules (for example, `pkg/modules/user`, `pkg/modules/session`, etc.).

## How are handlers structured?

At a high level, a typical flow looks like this:

1. A `Handler` interface is defined in the module (for example, `user.Handler`, `session.Handler`, `organization.Handler`).
2. The `apiserver` provider wires those handlers into HTTP routes using Gorilla `mux.Router`.

Each route wraps a module handler method with the following:
- Authorization middleware (from `pkg/http/middleware`)
- A generic HTTP `handler.Handler` (from `pkg/http/handler`)
- An `OpenAPIDef` that describes the operation for OpenAPI generation

For example, in `pkg/apiserver/signozapiserver`:

```go
if err := router.Handle("/api/v1/invite", handler.New(
	provider.authZ.AdminAccess(provider.userHandler.CreateInvite),
	handler.OpenAPIDef{
		ID:                  "CreateInvite",
		Tags:                []string{"users"},
		Summary:             "Create invite",
		Description:         "This endpoint creates an invite for a user",
		Request:             new(types.PostableInvite),
		RequestContentType:  "application/json",
		Response:            new(types.Invite),
		ResponseContentType: "application/json",
		SuccessStatusCode:   http.StatusCreated,
		ErrorStatusCodes:    []int{http.StatusBadRequest, http.StatusConflict},
		Deprecated:          false,
		SecuritySchemes:     newSecuritySchemes(types.RoleAdmin),
	},
)).Methods(http.MethodPost).GetError(); err != nil {
	return err
}
```

In this pattern:

- `provider.userHandler.CreateInvite` is a handler method.
- `provider.authZ.AdminAccess(...)` wraps that method with authorization checks and context setup.
- `handler.New` converts it into an HTTP handler and wires it to OpenAPI via the `OpenAPIDef`.

## How to write a new handler method?

When adding a new endpoint:

1. Add a method to the appropriate module `Handler` interface.
2. Implement that method in the module.
3. Register the method in `signozapiserver` with the correct route, HTTP method, auth, and `OpenAPIDef`.

### 1. Extend an existing `Handler` interface or create a new one

Find the module in `pkg/modules/<name>` and extend its `Handler` interface with a new method that receives an `http.ResponseWriter` and `*http.Request`. For example:

```go
type Handler interface {
	// existing methods...
	CreateThing(rw http.ResponseWriter, req *http.Request)
}
```

Keep the method focused on HTTP concerns and delegate business logic to the module.

### 2. Implement the handler method

In the module implementation, implement the new method. A typical implementation:

- Extracts authentication and organization context from `req.Context()`
- Decodes the request body into a `types.*` struct using the `binding` package
- Calls module functions
- Uses the `render` package to write responses or errors

```go
func (h *handler) CreateThing(rw http.ResponseWriter, req *http.Request) {
	// Extract authentication and organization context from req.Context()
	claims, err := authtypes.ClaimsFromContext(req.Context())
	if err != nil {
		render.Error(rw, err)
		return
	}

	// Decode the request body into a `types.*` struct using the `binding` package
	var in types.PostableThing
	if err := binding.JSON.BindBody(req.Body, &in); err != nil {
		render.Error(rw, err)
		return
	}

	// Call module functions
	out, err := h.module.CreateThing(req.Context(), claims.OrgID, &in)
	if err != nil {
		render.Error(rw, err)
		return
	}

	// Use the `render` package to write responses or errors
	render.Success(rw, http.StatusCreated, out)
}
```

### 3. Register the handler in `signozapiserver`

In `pkg/apiserver/signozapiserver`, add a route in the appropriate `add*Routes` function (`addUserRoutes`, `addSessionRoutes`, `addOrgRoutes`, etc.). The pattern is:

```go
if err := router.Handle("/api/v1/things", handler.New(
	provider.authZ.AdminAccess(provider.thingHandler.CreateThing),
	handler.OpenAPIDef{
		ID:                  "CreateThing",
		Tags:                []string{"things"},
		Summary:             "Create thing",
		Description:         "This endpoint creates a thing",
		Request:             new(types.PostableThing),
		RequestContentType:  "application/json",
		Response:            new(types.GettableThing),
		ResponseContentType: "application/json",
		SuccessStatusCode:   http.StatusCreated,
		ErrorStatusCodes:    []int{http.StatusBadRequest, http.StatusConflict},
		Deprecated:          false,
		SecuritySchemes:     newSecuritySchemes(types.RoleAdmin),
	},
)).Methods(http.MethodPost).GetError(); err != nil {
	return err
}
```

### 4. Update the OpenAPI spec

Run the following command to update the OpenAPI spec:

```bash
go run cmd/enterprise/*.go generate openapi
```

This will update the OpenAPI spec in `docs/api/openapi.yml` to reflect the new endpoint.

## How does OpenAPI integration work?

The `handler.New` function ties the HTTP handler to OpenAPI metadata via `OpenAPIDef`. This drives the generated OpenAPI document.

- **ID**: A unique identifier for the operation (used as the `operationId`).
- **Tags**: Logical grouping for the operation (for example, `"users"`, `"sessions"`, `"orgs"`).
- **Summary / Description**: Human-friendly documentation.
- **Request / RequestContentType**:
  - `Request` is a Go type that describes the request body or form.
  - `RequestContentType` is usually `"application/json"` or `"application/x-www-form-urlencoded"` (for callbacks like SAML).
- **Response / ResponseContentType**:
  - `Response` is the Go type for the successful response payload.
  - `ResponseContentType` is usually `"application/json"`; use `""` for responses without a body.
- **SuccessStatusCode**: The HTTP status for successful responses (for example, `http.StatusOK`, `http.StatusCreated`, `http.StatusNoContent`).
- **ErrorStatusCodes**: Additional error status codes beyond the standard ones automatically added by `handler.New`.
- **SecuritySchemes**: Auth mechanisms and scopes required by the operation.

The generic handler:

- Automatically appends `401`, `403`, and `500` to `ErrorStatusCodes` when appropriate.
- Registers request and response schemas with the OpenAPI reflector so they appear in `docs/api/openapi.yml`.

See existing examples in:

- `addUserRoutes` (for typical JSON request/response)
- `addSessionRoutes` (for form-encoded and redirect flows)

## What should I remember?

- **Keep handlers thin**: focus on HTTP concerns and delegate logic to modules/services.
- **Always register routes through `signozapiserver`** using `handler.New` and a complete `OpenAPIDef`.
- **Choose accurate request/response types** from the `types` packages so OpenAPI schemas are correct.
@@ -1,215 +0,0 @@
# Integration Tests

SigNoz uses integration tests to verify that different components work together correctly in a real environment. These tests run against actual services (ClickHouse, PostgreSQL, etc.) to ensure end-to-end functionality.

## How to set up the integration test environment?

### Prerequisites

Before running integration tests, ensure you have the following installed:

- Python 3.13+
- [uv](https://docs.astral.sh/uv/getting-started/installation/)
- Docker (for containerized services)

### Initial Setup

1. Navigate to the integration tests directory:
   ```bash
   cd tests/integration
   ```

2. Install dependencies using uv:
   ```bash
   uv sync
   ```

> **_NOTE:_** The build backend could throw an error while installing `psycopg2`; please see https://www.psycopg.org/docs/install.html#build-prerequisites

### Starting the Test Environment

To spin up all the containers necessary for writing integration tests and keep them running:

```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/bootstrap/setup.py::test_setup
```

This command will:
- Start all required services (ClickHouse, PostgreSQL, Zookeeper, etc.)
- Keep containers running due to the `--reuse` flag
- Verify that the setup is working correctly

### Stopping the Test Environment

When you're done writing integration tests, clean up the environment:

```bash
uv run pytest --basetemp=./tmp/ -vv --teardown -s src/bootstrap/setup.py::test_teardown
```

This will destroy the running integration test setup and clean up resources.

## Understanding the Integration Test Framework

Python and pytest form the foundation of the integration testing framework. Testcontainers are used to spin up disposable integration environments. Wiremock is used to spin up **test doubles** of other services.

- **Why Python/pytest?** It's expressive, low-boilerplate, and has powerful fixture capabilities that make integration testing straightforward. Extensive libraries for HTTP requests, JSON handling, and data analysis (numpy) make it easier to test APIs and verify data.
- **Why testcontainers?** They let us spin up isolated dependencies that match our production environment without complex setup.
- **Why wiremock?** Well maintained, documented, and extensible.

```
.
├── conftest.py
├── fixtures
│   ├── __init__.py
│   ├── auth.py
│   ├── clickhouse.py
│   ├── fs.py
│   ├── http.py
│   ├── migrator.py
│   ├── network.py
│   ├── postgres.py
│   ├── signoz.py
│   ├── sql.py
│   ├── sqlite.py
│   ├── types.py
│   └── zookeeper.py
├── uv.lock
├── pyproject.toml
└── src
    └── bootstrap
        ├── __init__.py
        ├── 01_database.py
        ├── 02_register.py
        └── 03_license.py
```

Each test suite follows some important principles:

1. **Organization**: Test suites live under `src/` in self-contained packages. Fixtures (a pytest concept) live inside `fixtures/`.
2. **Execution Order**: Files are prefixed with two-digit numbers (`01_`, `02_`, `03_`) to ensure sequential execution.
3. **Time Constraints**: Each suite should complete in under 10 minutes (setup takes ~4 mins).

### Test Suite Design

Test suites should target functional domains or subsystems within SigNoz. When designing a test suite, consider these principles:

- **Functional Cohesion**: Group tests around a specific capability or service boundary
- **Data Flow**: Follow the path of data through related components
- **Change Patterns**: Components frequently modified together should be tested together

The exact boundaries for modules are intentionally flexible, allowing teams to define logical groupings based on their specific context and knowledge of the system.

E.g., the **bootstrap** integration test suite validates core system functionality:

- Database initialization
- Version check

Other test suites can be **pipelines, auth, querier**.

## How to write an integration test?

Now start writing an integration test. Create a new file `src/bootstrap/05_version.py` and paste the following:

```python
import requests

from fixtures import types
from fixtures.logger import setup_logger

logger = setup_logger(__name__)


def test_version(signoz: types.SigNoz) -> None:
    response = requests.get(signoz.self.host_config.get("/api/v1/version"), timeout=2)
    logger.info(response)
```

We have written a simple test which calls the `version` endpoint of the container from step 1. To run just this function, use the following command:

```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/bootstrap/05_version.py::test_version
```

> Note: The `--reuse` flag is used to reuse the environment if it is already running. Always use this flag when writing and running integration tests. If you don't use this flag, the environment will be destroyed and recreated every time you run the test.

Here's another example of how to write a more comprehensive integration test:

```python
from http import HTTPStatus

import requests

from fixtures import types
from fixtures.logger import setup_logger

logger = setup_logger(__name__)


def test_user_registration(signoz: types.SigNoz) -> None:
    """Test user registration functionality."""
    response = requests.post(
        signoz.self.host_configs["8080"].get("/api/v1/register"),
        json={
            "name": "testuser",
            "orgId": "",
            "orgName": "test.org",
            "email": "test@example.com",
            "password": "password123Z$",
        },
        timeout=2,
    )

    assert response.status_code == HTTPStatus.OK
    assert response.json()["setupCompleted"] is True
```

## How to run integration tests?

### Running All Tests

```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/
```

### Running Specific Test Categories

```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/<suite>

# Run querier tests
uv run pytest --basetemp=./tmp/ -vv --reuse src/querier/

# Run auth tests
uv run pytest --basetemp=./tmp/ -vv --reuse src/auth/
```

### Running Individual Tests

```bash
uv run pytest --basetemp=./tmp/ -vv --reuse src/<suite>/<file>.py::test_name

# Run test_register in file 01_register.py in the passwordauthn suite
uv run pytest --basetemp=./tmp/ -vv --reuse src/passwordauthn/01_register.py::test_register
```

## How to configure different options for integration tests?

Tests can be configured using pytest options:

- `--sqlstore-provider` - Choose database provider (default: postgres)
- `--postgres-version` - PostgreSQL version (default: 15)
- `--clickhouse-version` - ClickHouse version (default: 25.5.6)
- `--zookeeper-version` - Zookeeper version (default: 3.7.1)

Example:
```bash
uv run pytest --basetemp=./tmp/ -vv --reuse --sqlstore-provider=postgres --postgres-version=14 src/auth/
```

## What should I remember?

- **Always use the `--reuse` flag** when setting up the environment to keep containers running
- **Use the `--teardown` flag** when cleaning up to avoid resource leaks
- **Follow the naming convention** with two-digit numeric prefixes (`01_`, `02_`) for test execution order
- **Use proper timeouts** in HTTP requests to avoid hanging tests
- **Clean up test data** between tests to avoid interference
- **Use descriptive test names** that clearly indicate what is being tested
- **Leverage fixtures** for common setup and authentication
- **Test both success and failure scenarios** to ensure robust functionality
@@ -1,106 +0,0 @@
# Provider

SigNoz is built on the provider pattern, a design approach where code is organized into providers that handle specific application responsibilities. Providers act as adapter components that integrate with external services and deliver required functionality to the application.

> 💡 **Note**: Coming from a DDD background? Providers are similar (though not exactly the same) to adapter/infrastructure services.

## How to create a new provider?

To create a new provider, create a directory in the `pkg/` directory named after your provider. The provider package consists of four key components:

- **Interface** (`pkg/<name>/<name>.go`): Defines the provider's interface. Other packages should import this interface to use the provider.
- **Config** (`pkg/<name>/config.go`): Contains provider configuration, implementing the `factory.Config` interface from [factory/config.go](/pkg/factory/config.go).
- **Implementation** (`pkg/<name>/<implname><name>/provider.go`): Contains the provider implementation, including a `NewProvider` function that returns a `factory.Provider` interface from [factory/provider.go](/pkg/factory/provider.go).
- **Mock** (`pkg/<name>/<name>test/`): Provides mocks for the provider, typically used by dependent packages for unit testing.

For example, the [prometheus](/pkg/prometheus) provider delivers a prometheus engine to the application (a generic skeleton follows the list):

- `pkg/prometheus/prometheus.go` - Interface definition
- `pkg/prometheus/config.go` - Configuration
- `pkg/prometheus/clickhouseprometheus/provider.go` - ClickHouse-powered implementation
- `pkg/prometheus/prometheustest/provider.go` - Mock implementation
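
For orientation, here is a hedged skeleton of what a brand-new provider's interface and config files could look like. The `cache` domain, its methods, and the reduced `Validate` shape shown for the config are illustrative stand-ins, not actual SigNoz code:

```go
// Hypothetical pkg/cache/cache.go: the interface other packages import.
package cache

import (
	"context"
	"time"
)

// Cache is an illustrative provider interface; real SigNoz providers
// follow the same shape with their own domain methods.
type Cache interface {
	Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
	Get(ctx context.Context, key string) ([]byte, error)
}

// Config (normally in pkg/cache/config.go) carries the provider's settings
// and would implement the factory.Config interface; only a rough shape of
// the validation hook is sketched here.
type Config struct {
	TTL time.Duration `mapstructure:"ttl"`
}

func (c Config) Validate() error {
	// Validation logic for the configuration goes here.
	return nil
}
```

An implementation such as a hypothetical `pkg/cache/rediscache/provider.go` would then supply a `NewProvider` factory for this interface, and `pkg/cache/cachetest/` would hold the mock.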

## How to wire it up?

The `pkg/signoz` package contains the inversion of control container responsible for wiring providers. It handles instantiation, configuration, and assembly of providers based on configuration metadata.

> 💡 **Note**: Coming from a Java background? Providers are similar to Spring beans.

Wiring up a provider involves three steps:

1. Wiring up the configuration

   Add your config from `pkg/<name>/config.go` to the `pkg/signoz/config.Config` struct and to the config factories:

   ```go
   type Config struct {
       ...
       MyProvider myprovider.Config `mapstructure:"myprovider"`
       ...
   }

   func NewConfig(ctx context.Context, resolverConfig config.ResolverConfig, ...) (Config, error) {
       ...
       configFactories := []factory.ConfigFactory{
           myprovider.NewConfigFactory(),
       }
       ...
   }
   ```

2. Wiring up the provider

   Add available provider implementations in `pkg/signoz/provider.go`:

   ```go
   func NewMyProviderFactories() factory.NamedMap[factory.ProviderFactory[myprovider.MyProvider, myprovider.Config]] {
       return factory.MustNewNamedMap(
           myproviderone.NewFactory(),
           myprovidertwo.NewFactory(),
       )
   }
   ```

3. Instantiate the provider by adding it to the `SigNoz` struct in `pkg/signoz/signoz.go`:

   ```go
   type SigNoz struct {
       ...
       MyProvider myprovider.MyProvider
       ...
   }

   func New(...) (*SigNoz, error) {
       ...
       myprovider, err := myproviderone.New(ctx, settings, config.MyProvider, "one/two")
       if err != nil {
           return nil, err
       }
       ...
   }
   ```

## How to use it?

To use a provider, import the package that defines its interface. For example, to use the prometheus provider, import `pkg/prometheus` (whose interface lives in `pkg/prometheus/prometheus.go`):

```go
import "github.com/SigNoz/signoz/pkg/prometheus"

func CreateSomething(ctx context.Context, prometheus prometheus.Prometheus) {
	...
	prometheus.DoSomething()
	...
}
```

## Why do we need this?

Like any dependency injection framework, providers decouple the codebase from implementation details. This is especially valuable in SigNoz's large codebase, where we need to swap implementations without changing dependent code. The provider pattern offers several benefits apart from the obvious one of decoupling:

- Configuration is **defined with each provider and centralized in one place**, making it easier to understand and manage through various methods (environment variables, config files, etc.)
- Provider mocking is **straightforward for unit testing**, with a consistent pattern for locating mocks
- **Multiple implementations** of the same provider are **supported**, as demonstrated by our sqlstore provider

## What should I remember?

- Use the provider pattern wherever applicable.
- Always create a provider **irrespective of the number of implementations**. This makes it easier to add new implementations in the future.
Some files were not shown because too many files have changed in this diff.