Compare commits


1 Commit

Author SHA1 Message Date
srikanthccv
54b429acae chore: perses research 2026-02-15 00:51:42 +05:30
7 changed files with 3743 additions and 546 deletions

File diff suppressed because it is too large

@@ -0,0 +1,436 @@
# Feasibility Analysis: Adopting Perses.dev Specification for SigNoz Dashboards
## Executive Summary
SigNoz's dashboard JSON is today a free-form `map[string]interface{}` with no schema enforcement. This document evaluates adopting [Perses.dev](https://perses.dev/) (a CNCF Sandbox project) as a structured dashboard specification. The conclusion is that **wholesale adoption is not recommended**, but several Perses design patterns should be borrowed into a SigNoz-native v6 dashboard schema.
---
## 1. Current State: SigNoz Dashboard JSON
### 1.1 Structure Overview
The dashboard is stored as `StorableDashboardData = map[string]interface{}` in Go (see `pkg/types/dashboardtypes/dashboard.go`). Top-level fields:
| Field | Type | Description |
|-------|------|-------------|
| `title` | `string` | Dashboard display name |
| `description` | `string` | Dashboard description text |
| `tags` | `string[]` | Categorization tags (e.g., `["redis", "database"]`) |
| `image` | `string` | Base64 or URL-encoded SVG for dashboard icon/thumbnail |
| `version` | `string` | Schema version: `"v4"` or `"v5"` |
| `layout` | `Layout[]` | React-grid-layout positioning for each widget |
| `panelMap` | `Record<string, {widgets: Layout[], collapsed: boolean}>` | Groups panels under row widgets for collapsible sections |
| `widgets` | `Widget[]` | Array of panel/widget definitions |
| `variables` | `Record<string, IDashboardVariable>` | Dashboard variable definitions |
| `uploadedGrafana` | `boolean` | Flag indicating if imported from Grafana |
### 1.2 Panel/Widget Types
| Panel Type | Constant | API Request Type |
|-----------|----------|-----------------|
| Time Series | `graph` | `time_series` |
| Bar Chart | `bar` | `time_series` |
| Table | `table` | `scalar` |
| Pie Chart | `pie` | `scalar` |
| Single Value | `value` | `scalar` |
| List/Logs | `list` | `raw` |
| Trace | `trace` | `trace` |
| Histogram | `histogram` | `distribution` |
| Row (group header) | `row` | N/A |
### 1.3 Query System
Each widget carries a `query` object with three modes simultaneously:
```json
{
  "queryType": "builder|clickhouse_sql|promql",
  "builder": {
    "queryData": [],
    "queryFormulas": []
  },
  "clickhouse_sql": [],
  "promql": []
}
```
Only `queryType` determines which mode is active; the other sections carry empty placeholder defaults.
#### Builder Query v4 (legacy, widely used)
- `aggregateOperator` + `aggregateAttribute` as separate fields
- `filters` with structured `items[]` containing key objects with synthetic IDs
- `having: []` as array
#### Builder Query v5 (newer, integration dashboards)
- `aggregations[]` array with `metricName`, `timeAggregation`, `spaceAggregation` combined
- `filter` with expression string (e.g., `"host_name IN $host_name"`)
- `having: {expression: ""}` as object
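To make the difference concrete, the two shapes can be sketched as Go structs; these are illustrative only and do not mirror SigNoz's actual type definitions:
```go
// Illustrative shapes only; not the actual SigNoz struct definitions.

// v4: operator and attribute are separate fields; filters are structured
// items carrying key objects with synthetic IDs; having is an array.
type builderQueryV4 struct {
	AggregateOperator  string           `json:"aggregateOperator"`  // e.g. "rate"
	AggregateAttribute map[string]any   `json:"aggregateAttribute"` // key object with a synthetic id
	Filters            map[string]any   `json:"filters"`            // {"op": "AND", "items": [...]}
	Having             []map[string]any `json:"having"`
}

// v5: aggregation fields are combined into one entry; filter and having
// are expression strings.
type builderQueryV5 struct {
	Aggregations []struct {
		MetricName       string `json:"metricName"`
		TimeAggregation  string `json:"timeAggregation"`
		SpaceAggregation string `json:"spaceAggregation"`
	} `json:"aggregations"`
	Filter struct {
		Expression string `json:"expression"` // e.g. "host_name IN $host_name"
	} `json:"filter"`
	Having struct {
		Expression string `json:"expression"`
	} `json:"having"`
}
```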
### 1.4 Query Range V5 API
The V5 execution API wraps queries in a `QueryEnvelope` discriminated union:
```
QueryEnvelope = {type: QueryType, spec: any}
```
Seven query types: `builder_query`, `builder_formula`, `builder_sub_query`, `builder_join`, `builder_trace_operator`, `promql`, `clickhouse_sql`
Six request types: `scalar`, `time_series`, `raw`, `raw_stream`, `trace`, `distribution`
Three signals with distinct aggregation models:
- **Metrics**: `metricName + temporality + timeAggregation + spaceAggregation`
- **Traces**: Expression-based (e.g., `"COUNT()"`, `"p99(duration_nano)"`)
- **Logs**: Expression-based (e.g., `"COUNT()"`, `"count_distinct(host.name)"`)
The API also supports 17 post-processing functions, server-side formula evaluation, SQL-style joins, and trace span relationship operators.
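A minimal sketch of how such a `{type, spec}` discriminated union is typically decoded on the Go side, dispatching on `type` and deferring `spec`; the names below are illustrative, not SigNoz's actual implementation:
```go
package query

import (
	"encoding/json"
	"fmt"
)

// Illustrative sketch of decoding a {type, spec} envelope; not the actual
// SigNoz QueryEnvelope implementation.
type queryEnvelope struct {
	Type string          `json:"type"`
	Spec json.RawMessage `json:"spec"` // decoded later, based on Type
}

func decodeQuery(raw []byte) (any, error) {
	var env queryEnvelope
	if err := json.Unmarshal(raw, &env); err != nil {
		return nil, err
	}
	switch env.Type {
	case "promql", "clickhouse_sql":
		var spec struct {
			Query string `json:"query"`
		}
		return spec, json.Unmarshal(env.Spec, &spec)
	case "builder_query", "builder_formula", "builder_sub_query",
		"builder_join", "builder_trace_operator":
		// each of these would decode into its own spec struct
		var spec map[string]any
		return spec, json.Unmarshal(env.Spec, &spec)
	default:
		return nil, fmt.Errorf("unknown query type %q", env.Type)
	}
}
```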
### 1.5 Documented Pain Points
1. **`StorableDashboardData` is `map[string]interface{}`**: All nested property access requires manual type assertions with fragile `ok` checks (see the sketch after this list).
2. **Two incompatible query schema versions**: v4 and v5 query formats coexist in the same codebase. The backend migration layer (`pkg/transition/migrate_common.go`) converts between them at execution time.
3. **Massive boilerplate**: Every widget carries `selectedLogFields` and `selectedTracesFields` arrays even for metrics-only panels, with the same 5-element arrays copy-pasted hundreds of times.
4. **Duplicate query slots**: Every widget carries all three query types (`builder`, `clickhouse_sql`, `promql`) with empty placeholders for inactive types.
5. **Variable key inconsistency**: Variables keyed by human-readable name (e.g., `"Account"`) OR UUID depending on dashboard.
6. **Variables coupled to ClickHouse SQL**: Variable queries use raw `SELECT ... FROM signoz_metrics.distributed_time_series_v4_1day`, coupling dashboard definitions to internal storage schema.
7. **Redundant synthetic IDs**: Filter keys contain derived `id` fields like `"cloud_account_id--string--tag--false"`.
8. **Spelling errors baked in**: `"timePreferance"` (misspelled) is embedded in the serialized JSON contract.
9. **Implicit layout/widget coupling**: Layout items reference widgets by matching `i` to `id`, with no schema enforcement. `panelMap` adds another implicit layer.
10. **No schema validation**: Dashboard data has no Go struct for validation. Relies entirely on frontend TypeScript types with extensive optional markers.
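As a concrete sketch of pain point 1, reading even a single nested field out of the untyped map requires defensive assertions at every level (illustrative code, not the actual SigNoz implementation):
```go
// Illustrative sketch of pain point 1; not the actual SigNoz code.
type StorableDashboardData = map[string]interface{}

// widgetTitles collects widget titles, checking every level manually
// because nothing about the shape is enforced by a schema.
func widgetTitles(d StorableDashboardData) []string {
	var titles []string
	widgets, ok := d["widgets"].([]interface{})
	if !ok {
		return nil // "widgets" missing or not an array
	}
	for _, w := range widgets {
		widget, ok := w.(map[string]interface{})
		if !ok {
			continue
		}
		if title, ok := widget["title"].(string); ok {
			titles = append(titles, title)
		}
	}
	return titles
}
```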
---
## 2. Perses.dev Specification Overview
### 2.1 What is Perses?
Perses is a CNCF Sandbox project providing:
- An **open dashboard specification** (implemented in Go, CUE, TypeScript)
- A **plugin-based extension model** for panels, queries, datasources, and variables
- **Dashboard-as-Code** via CUE and Go SDKs
- **Static validation** via `percli` CLI
- **Grafana migration** tooling
Adopters include Chronosphere, Red Hat, SAP, and Amadeus.
### 2.2 Dashboard Structure
```yaml
kind: Dashboard
metadata:
  name: "..."
  project: "..."
spec:
  display: {name, description}
  datasources: {name: DatasourceSpec}  # inline or referenced
  variables: [Variable]                # ordered list
  panels: {id: Panel}                  # map of panel definitions
  layouts: [Layout]                    # separate from panels
  duration: "5m"
  refreshInterval: "30s"
```
### 2.3 Core Design: Plugin = `{kind: string, spec: any}`
The universal extension point. Panels, queries, datasources, and variables are all plugins:
```go
type Plugin struct {
	Kind     string          `json:"kind"`
	Metadata *PluginMetadata `json:"metadata,omitempty"`
	Spec     any             `json:"spec"`
}
```
### 2.4 Panel Structure
```yaml
kind: Panel
spec:
  display: {name, description}
  plugin: {kind: "TimeSeriesChart", spec: {...}}
  queries:
    - kind: TimeSeriesQuery
      spec:
        plugin:
          kind: PrometheusTimeSeriesQuery
          spec: {query: "up", datasource: "$ds"}
```
### 2.5 Layout System
Panels are kept separate from layout. Grids position items and reference panels via JSON `$ref` pointers:
```yaml
kind: Grid
spec:
  display:
    title: "Section Name"
    collapse: {open: true}
  items:
    - x: 0
      y: 0
      width: 6
      height: 6
      content: {"$ref": "#/spec/panels/my_panel"}
```
### 2.6 Supported Datasources
| Datasource | Plugin Kind | Protocol |
|---|---|---|
| Prometheus | `PrometheusDatasource` | PromQL |
| Tempo | `TempoDatasource` | TraceQL |
| Loki | `LokiDatasource` | LogQL |
| Pyroscope | `PyroscopeDatasource` | Pyroscope API |
| ClickHouse | Community plugin | SQL |
| VictoriaLogs | Community plugin | VictoriaLogs API |
### 2.7 Plugin System
There are five plugin categories: datasource, query, panel, variable, and explore. Each is distributed as a compressed archive containing CUE schemas, React components (loaded via module federation), and optional Grafana migration logic.
---
## 3. Feasibility Assessment
### 3.1 Support for Logs/Metrics/Traces/Events/Profiles
| Signal | Perses Status | SigNoz Requirement | Gap |
|---|---|---|---|
| **Metrics** | Prometheus plugin (mature) | ClickHouse-backed with dual aggregation model (time + space) | **Significant** - Perses assumes PromQL |
| **Traces** | Tempo plugin (exists) | ClickHouse-backed with trace operators, span-level queries, joins | **Significant** - Perses Tempo does basic TraceQL |
| **Logs** | Loki plugin (exists) | ClickHouse-backed with builder queries, raw list views, streaming | **Moderate** - Perses Loki uses LogQL |
| **Profiles** | Pyroscope plugin (exists) | Not yet core in SigNoz dashboards | Low gap |
| **Events** | No plugin | Future SigNoz need | Would require custom plugin |
Perses has plugins for all four pillars, but each assumes a specific backend protocol (PromQL, TraceQL, LogQL). SigNoz uses a **unified query builder** abstracting over ClickHouse. This is a fundamental architectural mismatch.
### 3.2 Extensibility
Perses's plugin architecture is genuinely extensible. SigNoz could create custom plugins (`SigNozDatasource`, `SigNozBuilderQuery`, etc.). However, this means:
- Writing and maintaining a **full Perses plugin ecosystem** for SigNoz
- Plugins that handle all 7 query types and 3 signal types
- CUE schema definitions for all SigNoz query structures
- Tracking Perses upstream changes (still a Sandbox project, not graduated)
### 3.3 Coupling Analysis
| Dimension | Current SigNoz | With Perses | Assessment |
|---|---|---|---|
| Dashboard to Storage | Variables use raw ClickHouse SQL | Would need SigNoz query plugin | Improvement possible |
| Dashboard to Frontend | Widget types tightly coupled to React | Perses separates panel spec from rendering | Improvement |
| Dashboard to Query API | Widgets carry full query objects | Plugin-typed, referenced via datasource | Improvement, but adds indirection |
| Dashboard to Perses | N/A | Depends on Perses versioning, plugin compat, CUE toolchain | **New coupling** |
### 3.4 Support for Query Range V5
This is the **most critical gap**:
| SigNoz V5 Feature | Perses Equivalent | Plugin Solvable? |
|---|---|---|
| `builder_query` with signal-specific aggregation | Plugin `spec: any` | Yes, but SigNoz-specific |
| `builder_formula` (cross-query math) | No formula concept | **Partially** - needs custom panel logic |
| `builder_join` (SQL-style cross-signal joins) | No equivalent | **No** - fundamentally different model |
| `builder_trace_operator` (span relationships) | No equivalent | **No** - unique to SigNoz |
| `builder_sub_query` (nested queries) | No equivalent | Would need plugin extension |
| Multiple query types per panel | Single-typed queries | Would need wrapper plugin |
| Post-processing functions (ewma, anomaly, timeShift) | No equivalent | Would need to be in plugin spec |
---
## 4. Why NOT Adopt Perses Wholesale
### 4.1 SigNoz-inside-Perses
Every SigNoz query feature would live inside `spec: any` blobs within Perses plugin wrappers:
```json
{
  "kind": "TimeSeriesQuery",
  "spec": {
    "plugin": {
      "kind": "SigNozBuilderQuery",
      "spec": { /* ALL SigNoz-specific content here */ }
    }
  }
}
```
Perses validates the envelope (`kind: TimeSeriesQuery` exists, `plugin.kind` is registered). But the actual content is opaque to Perses. You'd still need your own validation for everything inside `spec`.
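In practice, SigNoz would still end up owning validation code along these lines, where the spec Perses hands over is just raw JSON to be decoded and checked (hypothetical names throughout):
```go
package signozplugin

import (
	"encoding/json"
	"errors"
)

// Hypothetical sketch: Perses guarantees only that plugin.kind is registered;
// everything inside spec must be decoded and validated by SigNoz itself.
type builderQuerySpec struct {
	Name   string `json:"name"`
	Signal string `json:"signal"`
}

func validateBuilderQuerySpec(spec json.RawMessage) error {
	var s builderQuerySpec
	if err := json.Unmarshal(spec, &s); err != nil {
		return err
	}
	if s.Name == "" {
		return errors.New("builder query: name is required")
	}
	switch s.Signal {
	case "metrics", "traces", "logs":
		return nil
	default:
		return errors.New("builder query: unknown signal " + s.Signal)
	}
}
```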
### 4.2 Formulas, Joins, and Trace Operators Have No Home
Perses model: `Panel -> queries[] -> Query`. Each query is independent.
SigNoz model: `Panel -> compositeQuery -> {queries[], formulas[], joins[], traceOperators[]}`. Queries reference each other by name. Formulas combine results. Joins cross signals.
You'd have to either:
- Shove the entire compositeQuery into a single plugin spec (making Perses query structure meaningless)
- Fork/extend Perses core spec (ongoing merge conflicts)
### 4.3 Plugin Maintenance Burden
Required custom plugins:
| Plugin | Purpose |
|---|---|
| `SigNozDatasource` | Points to SigNoz query-service |
| `SigNozBuilderQuery` | Wraps v5 builder queries for metrics/logs/traces |
| `SigNozFormulaQuery` | Wraps formula evaluation |
| `SigNozTraceOperatorQuery` | Wraps trace structural operators |
| `SigNozJoinQuery` | Wraps cross-signal joins |
| `SigNozSubQuery` | Wraps nested queries |
| `SigNozAttributeValuesVariable` | Variable from attribute values |
| `SigNozQueryVariable` | Variable from query results |
Each needs: CUE schema, React component, Grafana migration handler, and tests. Every v5 feature addition requires plugin schema updates.
### 4.4 Community Mismatch
Perses adopters are primarily Prometheus-centric. A `SigNozDatasource` plugin is useful only to SigNoz. You'd be the sole maintainer of the plugin suite.
### 4.5 The Counterargument
The one strong argument FOR wholesale adoption: **you get out of the "dashboard spec" business entirely**. Even if 80% is SigNoz-specific plugins, the 20% Perses handles (metadata, layout, display, variable ordering, versioning, RBAC scoping) is real work you don't have to maintain. If Perses graduates from CNCF sandbox, ecosystem benefits compound.
This trade-off doesn't justify the ongoing plugin maintenance tax, especially since those patterns are straightforward to implement natively. However, if SigNoz plans to eventually expose PromQL/TraceQL/LogQL-compatible endpoints, the calculus changes significantly.
---
## 5. Recommendation: Borrow Patterns, Build Native
### 5.1 Patterns to Adopt from Perses
| Perses Pattern | SigNoz Adoption |
|---|---|
| **`kind` + `spec` envelope** | Use for query types, panel types, variables. Consistent with v5 `QueryEnvelope`. |
| **Panels separated from Layout** | `panels: {}` map + `layouts: []` referencing by ID. |
| **Ordered variables as array** | Move from `variables: {name: {...}}` to `variables: [...]`. |
| **CUE or JSON Schema validation** | Define formal schema for dashboards. Use for CI and import/export. |
| **Dashboard-as-Code SDK** | Go/TypeScript SDK for programmatic dashboard generation. |
| **Metadata structure** | `kind` + `apiVersion` + `metadata` + `spec` top-level (Kubernetes-style). |
### 5.2 Proposed v6 Dashboard Structure
```json
{
  "kind": "Dashboard",
  "apiVersion": "signoz.io/v1",
  "metadata": {
    "name": "redis-overview",
    "title": "Redis Overview",
    "description": "...",
    "tags": ["redis", "database"],
    "image": "..."
  },
  "spec": {
    "defaults": {
      "timeRange": "5m",
      "refreshInterval": "30s"
    },
    "variables": [
      {
        "kind": "QueryVariable",
        "spec": {
          "name": "host_name",
          "signal": "metrics",
          "attributeName": "host_name",
          "multiSelect": true
        }
      }
    ],
    "panels": {
      "hit_rate": {
        "kind": "TimeSeriesPanel",
        "spec": {
          "title": "Hit Rate",
          "description": "Cache hit rate across hosts",
          "display": {
            "yAxisUnit": "percent",
            "legend": {"position": "bottom"}
          },
          "query": {
            "type": "composite",
            "spec": {
              "queries": [
                {
                  "type": "builder_query",
                  "spec": {
                    "name": "A",
                    "signal": "metrics",
                    "aggregations": [{"metricName": "redis_keyspace_hits", "timeAggregation": "rate", "spaceAggregation": "sum"}],
                    "filter": {"expression": "host_name IN $host_name"}
                  }
                },
                {
                  "type": "builder_query",
                  "spec": {
                    "name": "B",
                    "signal": "metrics",
                    "aggregations": [{"metricName": "redis_keyspace_misses", "timeAggregation": "rate", "spaceAggregation": "sum"}],
                    "filter": {"expression": "host_name IN $host_name"}
                  }
                }
              ],
              "formulas": [
                {"type": "builder_formula", "spec": {"expression": "A / (A + B) * 100"}}
              ]
            }
          }
        }
      }
    },
    "layouts": [
      {
        "kind": "Grid",
        "spec": {
          "title": "Overview",
          "collapsible": true,
          "collapsed": false,
          "items": [
            {"panel": "hit_rate", "x": 0, "y": 0, "w": 6, "h": 6}
          ]
        }
      }
    ]
  }
}
```
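The same structure lends itself to the dashboard-as-code SDK suggested in section 5.1. A hypothetical Go sketch of what programmatic generation could look like (the builder API and type names are invented for illustration):
```go
package dashboardv6

// Hypothetical dashboard-as-code sketch for the proposed v6 format;
// this is not an existing SigNoz SDK.
type Dashboard struct {
	Kind       string   `json:"kind"`
	APIVersion string   `json:"apiVersion"`
	Metadata   Metadata `json:"metadata"`
	Spec       Spec     `json:"spec"`
}

type Metadata struct {
	Name  string   `json:"name"`
	Title string   `json:"title"`
	Tags  []string `json:"tags,omitempty"`
}

type Spec struct {
	Panels  map[string]any `json:"panels"`
	Layouts []any          `json:"layouts"`
}

// New returns an empty v6 dashboard envelope.
func New(name, title string, tags ...string) *Dashboard {
	return &Dashboard{
		Kind:       "Dashboard",
		APIVersion: "signoz.io/v1",
		Metadata:   Metadata{Name: name, Title: title, Tags: tags},
		Spec:       Spec{Panels: map[string]any{}},
	}
}

// AddPanel registers a typed panel under the given ID.
func (d *Dashboard) AddPanel(id, kind string, spec map[string]any) *Dashboard {
	d.Spec.Panels[id] = map[string]any{"kind": kind, "spec": spec}
	return d
}
```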
### 5.3 Key Improvements Over Current Format
| Issue | Current | v6 |
|---|---|---|
| No validation | `map[string]interface{}` | JSON Schema enforced |
| Boilerplate | Every widget has selectedLogFields, all query modes | Only active query mode stored, display options per panel type |
| Variable ordering | `order` field inside map entries | Array position |
| Variable keys | Name or UUID inconsistently | `name` field in spec, array position |
| Layout coupling | Implicit `i` matches `id` | Explicit `panel` reference in layout items |
| Spelling errors | `timePreferance` | `timePreference` (fixed) |
| Query structure | Flat list of all queries + formulas | Typed envelope matching v5 API |
| Row grouping | Separate `panelMap` with duplicate layout entries | Integrated into `layouts[]` with `collapsible` flag |
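To make the "JSON Schema enforced" row concrete, server-side validation on the create/update path could look roughly like the sketch below, assuming a bundled v6 schema file and a JSON Schema library such as `gojsonschema` (the schema path and function name are illustrative):
```go
package dashboardv6

import (
	"fmt"

	"github.com/xeipuuv/gojsonschema"
)

// Illustrative sketch: validate an incoming dashboard body against a
// bundled v6 JSON Schema before persisting it.
func ValidateV6(body []byte) error {
	schema := gojsonschema.NewReferenceLoader("file:///schemas/dashboard-v6.schema.json")
	document := gojsonschema.NewStringLoader(string(body))

	result, err := gojsonschema.Validate(schema, document)
	if err != nil {
		return fmt.Errorf("running schema validation: %w", err)
	}
	if !result.Valid() {
		return fmt.Errorf("invalid v6 dashboard: %v", result.Errors())
	}
	return nil
}
```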
### 5.4 Migration Path
1. **v4/v5 to v6 migration**: Build a Go migration function that transforms existing dashboards to v6 format, handling both v4 and v5 query formats (a sketch follows this list).
2. **Backward compatibility**: Support reading v4/v5 dashboards with automatic upgrade to v6 on save.
3. **Frontend**: Update TypeScript interfaces to match v6 schema. Remove legacy response converters once v5 API is fully adopted.
4. **Validation**: Add JSON Schema validation on dashboard create/update API endpoints.
5. **Integration dashboards**: Regenerate all dashboards in `SigNoz/dashboards` repo using v6 format.
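A rough sketch of the migration entry point from step 1, with hypothetical function and field handling (a real implementation would need to cover every widget, layout, and variable field):
```go
package dashboardmigration

import "fmt"

// Hypothetical sketch of a v4/v5 -> v6 migration entry point; the handling
// shown is illustrative, not existing SigNoz code.
func MigrateToV6(old map[string]any) (map[string]any, error) {
	version, _ := old["version"].(string)

	v6 := map[string]any{
		"kind":       "Dashboard",
		"apiVersion": "signoz.io/v1",
		"metadata": map[string]any{
			"title":       old["title"],
			"description": old["description"],
			"tags":        old["tags"],
		},
		"spec": map[string]any{},
	}

	switch version {
	case "v4":
		// v4 builder queries: rewrite aggregateOperator/aggregateAttribute
		// into v5-style aggregations before emitting the v6 typed envelope.
	case "v5":
		// v5 queries map onto the v6 envelope mostly one-to-one.
	default:
		return nil, fmt.Errorf("unsupported dashboard version %q", version)
	}

	// widgets, layout, and panelMap would be folded into spec.panels{} and
	// spec.layouts[]; the result should then pass the v6 schema validation.
	return v6, nil
}
```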
---
## References
- [Perses Homepage](https://perses.dev/)
- [Perses Dashboard API](https://perses.dev/perses/docs/api/dashboard/)
- [Perses Open Specification](https://perses.dev/perses/docs/concepts/open-specification/)
- [Perses Plugin Creation](https://perses.dev/perses/docs/plugins/creation/)
- [Perses Prometheus Plugin Model](https://perses.dev/plugins/docs/prometheus/model/)
- [Perses GitHub Repository](https://github.com/perses/perses)
- [SigNoz Dashboards Repository](https://github.com/SigNoz/dashboards)
- SigNoz source: `pkg/types/dashboardtypes/dashboard.go`
- SigNoz source: `pkg/types/querybuildertypes/querybuildertypesv5/`
- SigNoz source: `frontend/src/types/api/dashboard/getAll.ts`
- SigNoz source: `frontend/src/types/api/queryBuilder/queryBuilderData.ts`
- SigNoz source: `frontend/src/types/api/v5/queryRange.ts`
- SigNoz source: `pkg/transition/migrate_common.go`

File diff suppressed because it is too large

@@ -0,0 +1,378 @@
// SigNoz Perses Plugin Schemas
//
// CUE schemas for all SigNoz-specific Perses plugin types.
// These define the `spec` shape inside Perses Plugin wrappers:
// { "kind": "<PluginKind>", "spec": <validated by this file> }
//
// Perses core types (Dashboard, Panel, Layout, Variable envelopes)
// are validated by Perses itself. This file only covers SigNoz plugins.
//
// Usage:
// percli lint --schema-dir ./signoz-perses-plugins dashboard.json
//
// Reference: https://perses.dev/perses/docs/api/plugin/
package signoz
// ============================================================================
// Datasource Plugin
// ============================================================================
// SigNozDatasource configures the connection to a SigNoz query service.
// Used inside: DatasourceSpec.plugin { kind: "SigNozDatasource", spec: ... }
#SigNozDatasource: {
// Direct URL for embedded mode (SigNoz serves its own dashboards).
// The frontend calls SigNoz APIs directly at this base URL.
directUrl?: string & =~"^/|^https?://"
// HTTP proxy config for external mode (standalone Perses connecting to SigNoz).
proxy?: #HTTPProxy
}
#HTTPProxy: {
kind: "HTTPProxy"
spec: {
url: string & =~"^https?://"
allowedEndpoints?: [...#AllowedEndpoint]
}
}
#AllowedEndpoint: {
endpointPattern: string
method: "GET" | "POST" | "PUT" | "DELETE"
}
// ============================================================================
// Query Plugins
// ============================================================================
// SigNozBuilderQuery is a single builder query for one signal.
// Used inside: TimeSeriesQuery.spec.plugin { kind: "SigNozBuilderQuery", spec: ... }
// Use this when the panel has independent queries (no formulas, joins, or trace operators).
#SigNozBuilderQuery: {
name: #QueryName
signal: #Signal
source?: string
disabled?: bool | *false
// Metrics use structured aggregations; traces/logs use expression aggregations
aggregations?: [...#MetricAggregation | #ExpressionAggregation]
filter?: #Filter
groupBy?: [...#GroupByKey]
order?: [...#OrderBy]
having?: #Having
selectFields?: [...#TelemetryFieldKey]
limit?: int & >=0 & <=10000
limitBy?: #LimitBy
offset?: int & >=0
cursor?: string
secondaryAggregations?: [...#SecondaryAggregation]
functions?: [...#PostProcessingFunction]
legend?: string
reduceTo?: #ReduceTo
stepInterval?: #StepInterval
}
// SigNozCompositeQuery bundles multiple queries with formulas, joins, or trace operators.
// Used inside: TimeSeriesQuery.spec.plugin { kind: "SigNozCompositeQuery", spec: ... }
// Use this when a panel needs cross-query references (A/B formulas, joins, trace operators).
#SigNozCompositeQuery: {
queries: [...#CompositeQueryEntry] & [_, ...] // at least one query
formulas?: [...#FormulaEntry]
joins?: [...#JoinEntry]
traceOperators?: [...#TraceOperatorEntry]
}
// A query within a composite. Same shape as SigNozBuilderQuery.
#CompositeQueryEntry: #SigNozBuilderQuery
// A formula referencing other queries by name.
#FormulaEntry: {
name: #QueryName
expression: string & !="" // e.g. "A/B * 100"
disabled?: bool | *false
order?: [...#OrderBy]
limit?: int & >=0 & <=10000
having?: #Having
functions?: [...#PostProcessingFunction]
legend?: string
}
// A join combining results from two queries.
#JoinEntry: {
name: #QueryName
disabled?: bool | *false
left: #QueryRef
right: #QueryRef
joinType: "inner" | "left" | "right" | "full" | "cross"
on: string & !="" // join condition expression
aggregations?: [...#MetricAggregation | #ExpressionAggregation]
selectFields?: [...#TelemetryFieldKey]
filter?: #Filter
groupBy?: [...#GroupByKey]
having?: #Having
order?: [...#OrderBy]
limit?: int & >=0 & <=10000
secondaryAggregations?: [...#SecondaryAggregation]
functions?: [...#PostProcessingFunction]
}
// A trace operator expressing span relationships.
#TraceOperatorEntry: {
name: #QueryName
disabled?: bool | *false
// Operators: => (direct descendant), -> (indirect descendant),
// && (AND), || (OR), NOT (exclude). Example: "A => B && C"
expression: string & !=""
filter?: #Filter
returnSpansFrom?: string
order?: [...#TraceOrderBy]
aggregations?: [...#ExpressionAggregation]
stepInterval?: #StepInterval
groupBy?: [...#GroupByKey]
having?: #Having
limit?: int & >=0 & <=10000
offset?: int & >=0
cursor?: string
legend?: string
selectFields?: [...#TelemetryFieldKey]
functions?: [...#PostProcessingFunction]
}
// SigNozPromQL wraps a raw PromQL query.
// Used inside: TimeSeriesQuery.spec.plugin { kind: "SigNozPromQL", spec: ... }
#SigNozPromQL: {
name: string
query: string & !=""
disabled?: bool | *false
step?: #StepInterval
stats?: bool | *false
legend?: string
}
// SigNozClickHouseSQL wraps a raw ClickHouse SQL query.
// Used inside: TimeSeriesQuery.spec.plugin { kind: "SigNozClickHouseSQL", spec: ... }
#SigNozClickHouseSQL: {
name: string
query: string & !=""
disabled?: bool | *false
legend?: string
}
// ============================================================================
// Variable Plugins
// ============================================================================
// SigNozQueryVariable resolves variable values using a builder query.
// Used inside: ListVariable.spec.plugin { kind: "SigNozQueryVariable", spec: ... }
#SigNozQueryVariable: {
// The query that produces variable values.
// Uses the same builder query model as panels.
query: #SigNozBuilderQuery | #SigNozCompositeQuery | #SigNozPromQL | #SigNozClickHouseSQL
}
// SigNozAttributeValues resolves variable values from attribute autocomplete.
// Used inside: ListVariable.spec.plugin { kind: "SigNozAttributeValues", spec: ... }
// This is a simpler alternative to SigNozQueryVariable for common cases
// like "list all values of host.name for metric X".
#SigNozAttributeValues: {
signal: #Signal
metricName?: string // required when signal is "metrics"
attributeName: string & !=""
filter?: #Filter // optional pre-filter
}
// ============================================================================
// Signals
// ============================================================================
#Signal: "metrics" | "traces" | "logs"
// Extensible to "events" | "profiles" in future
// ============================================================================
// Aggregations
// ============================================================================
// MetricAggregation defines the two-level aggregation model for metrics:
// time aggregation (within each time bucket) then space aggregation (across dimensions).
#MetricAggregation: {
metricName: string & !=""
temporality?: #MetricTemporality
timeAggregation?: #TimeAggregation
spaceAggregation?: #SpaceAggregation
reduceTo?: #ReduceTo
}
// ExpressionAggregation uses ClickHouse aggregate function syntax for traces/logs.
// Examples: "count()", "sum(item_price)", "p99(duration_nano)", "countIf(day > 10)"
#ExpressionAggregation: {
expression: string & !=""
alias?: string
}
#MetricTemporality: "delta" | "cumulative" | "unspecified"
#TimeAggregation:
"latest" | "sum" | "avg" | "min" | "max" |
"count" | "count_distinct" | "rate" | "increase"
#SpaceAggregation:
"sum" | "avg" | "min" | "max" | "count" |
"p50" | "p75" | "p90" | "p95" | "p99"
#ReduceTo: "sum" | "count" | "avg" | "min" | "max" | "last" | "median"
// ============================================================================
// Filters
// ============================================================================
// Filter uses expression syntax instead of structured items with synthetic IDs.
// Supports: =, !=, >, >=, <, <=, IN, NOT IN, LIKE, NOT LIKE, ILIKE, NOT ILIKE,
// BETWEEN, NOT BETWEEN, EXISTS, NOT EXISTS, REGEXP, NOT REGEXP, CONTAINS, NOT CONTAINS.
// Variable interpolation: $variable_name
#Filter: {
expression: string
}
// ============================================================================
// Group By, Order By, Having
// ============================================================================
#GroupByKey: {
name: string & !=""
signal?: #Signal
fieldContext?: #FieldContext
fieldDataType?: #FieldDataType
}
#OrderBy: {
key: #OrderByKey
direction: "asc" | "desc"
}
#OrderByKey: {
name: string & !=""
signal?: #Signal
fieldContext?: #FieldContext
fieldDataType?: #FieldDataType
}
#TraceOrderBy: {
key: {
name: "span_count" | "trace_duration"
}
direction: "asc" | "desc"
}
// Having applies a post-aggregation filter.
// Example: "count() > 100"
#Having: {
expression: string & !=""
}
// ============================================================================
// Secondary Aggregations & Limits
// ============================================================================
#SecondaryAggregation: {
expression: string
alias?: string
stepInterval?: #StepInterval
groupBy?: [...#GroupByKey]
order?: [...#OrderBy]
limit?: int & >=0 & <=10000
limitBy?: #LimitBy
}
#LimitBy: {
keys: [...string] & [_, ...] // at least one key
value: string // max rows per group (string for compatibility)
}
// ============================================================================
// Post-Processing Functions
// ============================================================================
#PostProcessingFunction: {
name: #FunctionName
args?: [...#FunctionArg]
}
#FunctionName:
// Threshold functions
"cutOffMin" | "cutOffMax" | "clampMin" | "clampMax" |
// Math functions
"absolute" | "runningDiff" | "log2" | "log10" | "cumulativeSum" |
// Smoothing functions (exponentially weighted moving average)
"ewma3" | "ewma5" | "ewma7" |
// Smoothing functions (sliding median window)
"median3" | "median5" | "median7" |
// Time functions
"timeShift" |
// Analysis functions
"anomaly" |
// Gap filling
"fillZero"
#FunctionArg: {
name?: string
value: number | string | bool
}
// ============================================================================
// Telemetry Field References
// ============================================================================
#TelemetryFieldKey: {
name: string & !=""
description?: string
unit?: string
signal?: #Signal
fieldContext?: #FieldContext
fieldDataType?: #FieldDataType
}
#FieldContext:
"resource" | "scope" | "span" | "event" | "link" |
"log" | "metric" | "body" | "trace"
#FieldDataType:
"string" | "int64" | "float64" | "bool" |
"array(string)" | "array(int64)" | "array(float64)" | "array(bool)"
// ============================================================================
// Common Types
// ============================================================================
// QueryName must be a valid identifier: starts with letter, contains letters/digits/underscores.
#QueryName: =~"^[A-Za-z][A-Za-z0-9_]*$"
// QueryRef references another query by name within a composite query.
#QueryRef: {
name: string & !=""
}
// StepInterval accepts seconds (numeric) or duration string ("15s", "1m", "1h").
#StepInterval: number & >=0 | =~"^[0-9]+(ns|us|ms|s|m|h)$"


@@ -1,117 +0,0 @@
import { AlignedData } from 'uplot';
import { getInitialStackedBands, stack } from '../stackUtils';
describe('stackUtils', () => {
describe('stack', () => {
const neverOmit = (): boolean => false;
it('preserves time axis as first row', () => {
const data: AlignedData = [
[100, 200, 300],
[1, 2, 3],
[4, 5, 6],
];
const { data: result } = stack(data, neverOmit);
expect(result[0]).toEqual([100, 200, 300]);
});
it('stacks value series cumulatively (last = raw, first = total)', () => {
// Time, then 3 value series. Stack order: last series stays raw, then we add upward.
const data: AlignedData = [
[0, 1, 2],
[1, 2, 3], // series 1
[4, 5, 6], // series 2
[7, 8, 9], // series 3
];
const { data: result } = stack(data, neverOmit);
// result[1] = s1+s2+s3, result[2] = s2+s3, result[3] = s3
expect(result[1]).toEqual([12, 15, 18]); // 1+4+7, 2+5+8, 3+6+9
expect(result[2]).toEqual([11, 13, 15]); // 4+7, 5+8, 6+9
expect(result[3]).toEqual([7, 8, 9]);
});
it('treats null values as 0 when stacking', () => {
const data: AlignedData = [
[0, 1],
[1, null],
[null, 10],
];
const { data: result } = stack(data, neverOmit);
expect(result[1]).toEqual([1, 10]); // total
expect(result[2]).toEqual([0, 10]); // last series with null→0
});
it('copies omitted series as-is without accumulating', () => {
// Omit series 2 (index 2); series 1 and 3 are stacked.
const data: AlignedData = [
[0, 1],
[10, 20], // series 1
[100, 200], // series 2 - omitted
[1, 2], // series 3
];
const omitSeries2 = (i: number): boolean => i === 2;
const { data: result } = stack(data, omitSeries2);
// series 3 raw: [1, 2]; series 2 omitted: [100, 200] as-is; series 1 stacked with s3: [11, 22]
expect(result[1]).toEqual([11, 22]); // 10+1, 20+2
expect(result[2]).toEqual([100, 200]); // copied, not stacked
expect(result[3]).toEqual([1, 2]);
});
it('returns bands between consecutive visible series when none omitted', () => {
const data: AlignedData = [
[0, 1],
[1, 2],
[3, 4],
[5, 6],
];
const { bands } = stack(data, neverOmit);
expect(bands).toEqual([{ series: [1, 2] }, { series: [2, 3] }]);
});
it('returns bands only between visible series when some are omitted', () => {
// 4 value series; omit index 2. Visible: 1, 3, 4. Bands: [1,3], [3,4]
const data: AlignedData = [[0], [1], [2], [3], [4]];
const omitSeries2 = (i: number): boolean => i === 2;
const { bands } = stack(data, omitSeries2);
expect(bands).toEqual([{ series: [1, 3] }, { series: [3, 4] }]);
});
it('returns empty bands when only one value series', () => {
const data: AlignedData = [
[0, 1],
[1, 2],
];
const { bands } = stack(data, neverOmit);
expect(bands).toEqual([]);
});
});
describe('getInitialStackedBands', () => {
it('returns one band between each consecutive pair for seriesCount 3', () => {
expect(getInitialStackedBands(3)).toEqual([
{ series: [1, 2] },
{ series: [2, 3] },
]);
});
it('returns empty array for seriesCount 0 or 1', () => {
expect(getInitialStackedBands(0)).toEqual([]);
expect(getInitialStackedBands(1)).toEqual([]);
});
it('returns single band for seriesCount 2', () => {
expect(getInitialStackedBands(2)).toEqual([{ series: [1, 2] }]);
});
it('returns bands [1,2], [2,3], ..., [n-1, n] for seriesCount n', () => {
const bands = getInitialStackedBands(5);
expect(bands).toEqual([
{ series: [1, 2] },
{ series: [2, 3] },
{ series: [3, 4] },
{ series: [4, 5] },
]);
});
});
});


@@ -1,116 +0,0 @@
import uPlot, { AlignedData } from 'uplot';
/**
* Stack data cumulatively (top-down: first series = top, last = bottom).
* When `omit(seriesIndex)` returns true, that series is excluded from stacking.
*/
export function stack(
data: AlignedData,
omit: (seriesIndex: number) => boolean,
): { data: AlignedData; bands: uPlot.Band[] } {
const timeAxis = data[0];
const pointCount = timeAxis.length;
const valueSeriesCount = data.length - 1; // exclude time axis
const stackedSeries = buildStackedSeries({
data,
valueSeriesCount,
pointCount,
omit,
});
const bands = buildFillBands(valueSeriesCount + 1, omit); // +1 for 1-based series indices
return {
data: [timeAxis, ...stackedSeries] as AlignedData,
bands,
};
}
interface BuildStackedSeriesParams {
data: AlignedData;
valueSeriesCount: number;
pointCount: number;
omit: (seriesIndex: number) => boolean;
}
/**
* Accumulate from last series upward: last series = raw values, first = total.
* Omitted series are copied as-is (no accumulation).
*/
function buildStackedSeries({
data,
valueSeriesCount,
pointCount,
omit,
}: BuildStackedSeriesParams): (number | null)[][] {
const stackedSeries: (number | null)[][] = Array(valueSeriesCount);
const cumulativeSums = Array(pointCount).fill(0) as number[];
for (let seriesIndex = valueSeriesCount; seriesIndex >= 1; seriesIndex--) {
const rawValues = data[seriesIndex] as (number | null)[];
if (omit(seriesIndex)) {
stackedSeries[seriesIndex - 1] = rawValues;
} else {
stackedSeries[seriesIndex - 1] = rawValues.map((rawValue, pointIndex) => {
const numericValue = rawValue == null ? 0 : Number(rawValue);
return (cumulativeSums[pointIndex] += numericValue);
});
}
}
return stackedSeries;
}
/**
* Bands define fill between consecutive visible series for stacked appearance.
* uPlot format: [upperSeriesIdx, lowerSeriesIdx].
*/
function buildFillBands(
seriesLength: number,
omit: (seriesIndex: number) => boolean,
): uPlot.Band[] {
const bands: uPlot.Band[] = [];
for (let seriesIndex = 1; seriesIndex < seriesLength; seriesIndex++) {
if (omit(seriesIndex)) {
continue;
}
const nextVisibleSeriesIndex = findNextVisibleSeriesIndex(
seriesLength,
seriesIndex,
omit,
);
if (nextVisibleSeriesIndex !== -1) {
bands.push({ series: [seriesIndex, nextVisibleSeriesIndex] });
}
}
return bands;
}
function findNextVisibleSeriesIndex(
seriesLength: number,
afterIndex: number,
omit: (seriesIndex: number) => boolean,
): number {
for (let i = afterIndex + 1; i < seriesLength; i++) {
if (!omit(i)) {
return i;
}
}
return -1;
}
/**
* Returns band indices for initial stacked state (no series omitted).
* Top-down: first series at top, band fills between consecutive series.
* uPlot band format: [upperSeriesIdx, lowerSeriesIdx].
*/
export function getInitialStackedBands(seriesCount: number): uPlot.Band[] {
const bands: uPlot.Band[] = [];
for (let seriesIndex = 1; seriesIndex < seriesCount; seriesIndex++) {
bands.push({ series: [seriesIndex, seriesIndex + 1] });
}
return bands;
}


@@ -1,313 +0,0 @@
import { renderHook } from '@testing-library/react';
import uPlot from 'uplot';
import type { UseBarChartStackingParams } from '../useBarChartStacking';
import { useBarChartStacking } from '../useBarChartStacking';
type MockConfig = { addHook: jest.Mock };
function asConfig(c: MockConfig): UseBarChartStackingParams['config'] {
return (c as unknown) as UseBarChartStackingParams['config'];
}
function createMockConfig(): {
config: MockConfig;
invokeSetData: (plot: uPlot) => void;
invokeSetSeries: (
plot: uPlot,
seriesIndex: number | null,
opts: Partial<uPlot.Series> & { focus?: boolean },
) => void;
removeSetData: jest.Mock;
removeSetSeries: jest.Mock;
} {
let setDataHandler: ((plot: uPlot) => void) | null = null;
let setSeriesHandler:
| ((plot: uPlot, seriesIndex: number | null, opts: uPlot.Series) => void)
| null = null;
const removeSetData = jest.fn();
const removeSetSeries = jest.fn();
const addHook = jest.fn(
(
hookName: string,
handler: (plot: uPlot, ...args: unknown[]) => void,
): (() => void) => {
if (hookName === 'setData') {
setDataHandler = handler as (plot: uPlot) => void;
return removeSetData;
}
if (hookName === 'setSeries') {
setSeriesHandler = handler as (
plot: uPlot,
seriesIndex: number | null,
opts: uPlot.Series,
) => void;
return removeSetSeries;
}
return jest.fn();
},
);
const config: MockConfig = { addHook };
const invokeSetData = (plot: uPlot): void => {
setDataHandler?.(plot);
};
const invokeSetSeries = (
plot: uPlot,
seriesIndex: number | null,
opts: Partial<uPlot.Series> & { focus?: boolean },
): void => {
setSeriesHandler?.(plot, seriesIndex, opts as uPlot.Series);
};
return {
config,
invokeSetData,
invokeSetSeries,
removeSetData,
removeSetSeries,
};
}
function createMockPlot(overrides: Partial<uPlot> = {}): uPlot {
return ({
data: [
[0, 1, 2],
[1, 2, 3],
[4, 5, 6],
],
series: [{ show: true }, { show: true }, { show: true }],
delBand: jest.fn(),
addBand: jest.fn(),
setData: jest.fn(),
...overrides,
} as unknown) as uPlot;
}
describe('useBarChartStacking', () => {
it('returns data as-is when isStackedBarChart is false', () => {
const data: uPlot.AlignedData = [
[100, 200],
[1, 2],
[3, 4],
];
const { result } = renderHook(() =>
useBarChartStacking({
data,
isStackedBarChart: false,
config: null,
}),
);
expect(result.current).toBe(data);
});
it('returns data as-is when config is null and isStackedBarChart is true', () => {
const data: uPlot.AlignedData = [
[0, 1],
[1, 2],
[4, 5],
];
const { result } = renderHook(() =>
useBarChartStacking({
data,
isStackedBarChart: true,
config: null,
}),
);
// Still returns stacked data (computed in useMemo); no hooks registered
expect(result.current[0]).toEqual([0, 1]);
expect(result.current[1]).toEqual([5, 7]); // stacked
expect(result.current[2]).toEqual([4, 5]);
});
it('returns stacked data when isStackedBarChart is true and multiple value series', () => {
const data: uPlot.AlignedData = [
[0, 1, 2],
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
];
const { result } = renderHook(() =>
useBarChartStacking({
data,
isStackedBarChart: true,
config: null,
}),
);
expect(result.current[0]).toEqual([0, 1, 2]);
expect(result.current[1]).toEqual([12, 15, 18]); // s1+s2+s3
expect(result.current[2]).toEqual([11, 13, 15]); // s2+s3
expect(result.current[3]).toEqual([7, 8, 9]);
});
it('returns data as-is when only one value series (no stacking needed)', () => {
const data: uPlot.AlignedData = [
[0, 1],
[1, 2],
];
const { result } = renderHook(() =>
useBarChartStacking({
data,
isStackedBarChart: true,
config: null,
}),
);
expect(result.current).toEqual(data);
});
it('registers setData and setSeries hooks when isStackedBarChart and config provided', () => {
const { config } = createMockConfig();
const data: uPlot.AlignedData = [
[0, 1],
[1, 2],
[3, 4],
];
renderHook(() =>
useBarChartStacking({
data,
isStackedBarChart: true,
config: asConfig(config),
}),
);
expect(config.addHook).toHaveBeenCalledWith('setData', expect.any(Function));
expect(config.addHook).toHaveBeenCalledWith(
'setSeries',
expect.any(Function),
);
});
it('does not register hooks when isStackedBarChart is false', () => {
const { config } = createMockConfig();
const data: uPlot.AlignedData = [
[0, 1],
[1, 2],
[3, 4],
];
renderHook(() =>
useBarChartStacking({
data,
isStackedBarChart: false,
config: asConfig(config),
}),
);
expect(config.addHook).not.toHaveBeenCalled();
});
it('calls cleanup when unmounted', () => {
const { config, removeSetData, removeSetSeries } = createMockConfig();
const data: uPlot.AlignedData = [
[0, 1],
[1, 2],
[3, 4],
];
const { unmount } = renderHook(() =>
useBarChartStacking({
data,
isStackedBarChart: true,
config: asConfig(config),
}),
);
unmount();
expect(removeSetData).toHaveBeenCalled();
expect(removeSetSeries).toHaveBeenCalled();
});
it('re-stacks and updates plot when setData hook is invoked', () => {
const { config, invokeSetData } = createMockConfig();
const data: uPlot.AlignedData = [
[0, 1, 2],
[1, 2, 3],
[4, 5, 6],
];
const plot = createMockPlot({
data: [
[0, 1, 2],
[5, 7, 9],
[4, 5, 6],
],
});
renderHook(() =>
useBarChartStacking({
data,
isStackedBarChart: true,
config: asConfig(config),
}),
);
invokeSetData(plot);
expect(plot.delBand).toHaveBeenCalledWith(null);
expect(plot.addBand).toHaveBeenCalled();
expect(plot.setData).toHaveBeenCalledWith(
expect.arrayContaining([
[0, 1, 2],
expect.any(Array), // stacked row 1
expect.any(Array), // stacked row 2
]),
);
});
it('re-stacks when setSeries hook is invoked (e.g. legend toggle)', () => {
const { config, invokeSetSeries } = createMockConfig();
const data: uPlot.AlignedData = [
[0, 1],
[10, 20],
[5, 10],
];
// Plot data must match unstacked length so canApplyStacking passes
const plot = createMockPlot({
data: [
[0, 1],
[15, 30],
[5, 10],
],
});
renderHook(() =>
useBarChartStacking({
data,
isStackedBarChart: true,
config: asConfig(config),
}),
);
invokeSetSeries(plot, 1, { show: false });
expect(plot.setData).toHaveBeenCalled();
});
it('does not re-stack when setSeries is called with focus option', () => {
const { config, invokeSetSeries } = createMockConfig();
const data: uPlot.AlignedData = [
[0, 1],
[1, 2],
[3, 4],
];
const plot = createMockPlot();
renderHook(() =>
useBarChartStacking({
data,
isStackedBarChart: true,
config: asConfig(config),
}),
);
(plot.setData as jest.Mock).mockClear();
invokeSetSeries(plot, 1, { focus: true } as uPlot.Series);
expect(plot.setData).not.toHaveBeenCalled();
});
});