Compare commits

...

16 Commits

Author SHA1 Message Date
Srikanth Chekuri
25e0c19ddb Merge branch 'main' into remove-v4-support-rules 2026-02-14 23:51:21 +05:30
srikanthccv
8a6abb2f09 chore: remove support for older versions in rules 2026-02-14 23:48:09 +05:30
Abhishek Kumar Singh
4bbe5ead07 test(integration): alerts e2e test cases with basic rule manager alerts (#10163) 2026-02-14 22:32:50 +05:30
Yunus M
e36689ecba fix: show ip addresses toggle and add regression test (#10251) 2026-02-14 19:47:16 +05:30
Abhi kumar
2c948ef9f6 feat: added new barpanel component (#10266)
* chore: refactored the config builder and added base config builder

* chore: added a common chart wrapper

* chore: tsc fix

* fix: pr review changes

* fix: pr review changes

* chore: added different tooltips

* chore: removed dayjs extension

* feat: added new barpanel component

* fix: added fix for pr review changes

* chore: added support for bar alignment configuration

* chore: updated structure for bar panel

* chore: pr review fix
2026-02-13 23:22:52 +05:30
Abhi kumar
3c30114642 feat: added option to copy legend text (#10294)
* feat: added option to copy legend text

* chore: added test for legend copy action

* chore: updated legend styles

* chore: added check icon when legend copied

* chore: added copytoclipboard hook

* chore: removed copytoclipboard options
2026-02-13 08:08:04 +00:00
Ashwin Bhatkal
d042fad1e3 chore: shared utils update + API plumbing (#10257)
* chore: shared utils update + API plumbing

* chore: add tests
2026-02-13 06:05:44 +00:00
Abhi kumar
235c606b44 test: added tests for utils + components (#10281)
* test: added tests for utils + components

* fix: added fix for legend test

* chore: pr review comments

* chore: fixed plotcontext test

* fix: added test failure handling + moved from map-based approach to array

* fix: updated the way we used to consume graph visibility state

* fix: fixed label spelling
2026-02-13 05:45:35 +00:00
Abhi kumar
a49d7e1662 chore: resetting spangaps to old default state in the new timeseries chart + added thresholds in scale computation (#10287)
* chore: moved spangaps to old default state in the new timeseries chart

* chore: added thresholds in scale builder
2026-02-12 22:22:41 +05:30
Abhi kumar
b3e41b5520 feat: enabled new time-series panel (#10273) 2026-02-12 21:20:22 +05:30
Abhi kumar
83bb97cc58 test: added unit tests for uplot config builders (#10220)
* test: added unit tests for uplot config builders

* test: added more tests

* test: updated tests

* fix: updated tests

* fix: fixed failing test

* fix: updated tests

* test: added test for axis

* chore: added test for thresholds with different scale key

* chore: pr review comments

* chore: pr review comments

* fix: fixed axis tests

---------

Co-authored-by: Ashwin Bhatkal <ashwin96@gmail.com>
2026-02-12 15:22:58 +00:00
swapnil-signoz
68ea28cf6b test(cloudintegration): add tests for cloudintegrations (#10237)
* feat: adding tests for cloudintegration api

* refactor: updating method

* ci: updated logs

* ci: fmt fixes

* refactor: removing unused vars

* refactor: updating test assertion

* refactor: worked on review comments

* fix: fixing tests

* refactor: using yield

* refactor: fixing lint issues

* refactor: cleaning tests

* refactor: review comments

* refactor: removing unused imports

* refactor: updating fixture name

---------

Co-authored-by: Vikrant Gupta <vikrant@signoz.io>
2026-02-12 13:43:29 +05:30
Naman Verma
3726c0aac1 feat: add support for simultaneous delta and cumulative temporality (#10202) 2026-02-12 06:36:07 +00:00
Nikhil Soni
dae2d3239b fix: guide user on empty external api monitoring page (#10133)
* fix: guide user on empty external api monitoring page

Fixes #3703

* fix: change language of msg to handle empty query result

* fix: hide table header for empty result

* fix: add styling for empty state message
2026-02-12 05:54:02 +00:00
Ishan
0c660f8618 feat: Faster way to view associated logs for a trace in logs explorer (#10242)
* feat: replace filter feature

* feat: updated parsing field value

* feat: pr comments fix
2026-02-12 08:40:29 +05:30
Nageshbansal
97cffbc20a fix: remove unused flag for otel-col (#10275) 2026-02-12 01:34:32 +05:30
145 changed files with 10719 additions and 2375 deletions

View File

@@ -4,7 +4,6 @@ services:
container_name: signoz-otel-collector-dev
command:
- --config=/etc/otel-collector-config.yaml
- --feature-gates=-pkg.translator.prometheus.NormalizeName
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
environment:

View File

@@ -214,7 +214,6 @@ services:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
- --copy-path=/var/tmp/collector-config.yaml
- --feature-gates=-pkg.translator.prometheus.NormalizeName
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ../common/signoz/otel-collector-opamp-config.yaml:/etc/manager-config.yaml

View File

@@ -155,7 +155,6 @@ services:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
- --copy-path=/var/tmp/collector-config.yaml
- --feature-gates=-pkg.translator.prometheus.NormalizeName
configs:
- source: otel-collector-config
target: /etc/otel-collector-config.yaml

View File

@@ -219,7 +219,6 @@ services:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
- --copy-path=/var/tmp/collector-config.yaml
- --feature-gates=-pkg.translator.prometheus.NormalizeName
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ../common/signoz/otel-collector-opamp-config.yaml:/etc/manager-config.yaml

View File

@@ -150,7 +150,6 @@ services:
- --config=/etc/otel-collector-config.yaml
- --manager-config=/etc/manager-config.yaml
- --copy-path=/var/tmp/collector-config.yaml
- --feature-gates=-pkg.translator.prometheus.NormalizeName
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ../common/signoz/otel-collector-opamp-config.yaml:/etc/manager-config.yaml

View File

@@ -5,23 +5,16 @@ import (
"encoding/json"
"fmt"
"log/slog"
"math"
"strings"
"sync"
"time"
"github.com/SigNoz/signoz/ee/query-service/anomaly"
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/query-service/common"
"github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/pkg/transition"
"github.com/SigNoz/signoz/pkg/types/ruletypes"
"github.com/SigNoz/signoz/pkg/valuer"
querierV2 "github.com/SigNoz/signoz/pkg/query-service/app/querier/v2"
"github.com/SigNoz/signoz/pkg/query-service/app/queryBuilder"
"github.com/SigNoz/signoz/pkg/query-service/interfaces"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/query-service/utils/labels"
"github.com/SigNoz/signoz/pkg/query-service/utils/times"
"github.com/SigNoz/signoz/pkg/query-service/utils/timestamp"
@@ -32,6 +25,8 @@ import (
querierV5 "github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/ee/query-service/anomaly"
anomalyV2 "github.com/SigNoz/signoz/ee/anomaly"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
@@ -48,17 +43,12 @@ type AnomalyRule struct {
reader interfaces.Reader
// querierV2 is used for alerts created after the introduction of new metrics query builder
querierV2 interfaces.Querier
// querierV5 is used for alerts migrated after the introduction of new query builder
// querierV5 is the query builder v5 querier used for all alert rule evaluation
querierV5 querierV5.Querier
provider anomaly.Provider
providerV2 anomalyV2.Provider
version string
logger *slog.Logger
logger *slog.Logger
seasonality anomaly.Seasonality
}
@@ -102,34 +92,6 @@ func NewAnomalyRule(
logger.Info("using seasonality", "seasonality", t.seasonality.String())
querierOptsV2 := querierV2.QuerierOptions{
Reader: reader,
Cache: cache,
KeyGenerator: queryBuilder.NewKeyGenerator(),
}
t.querierV2 = querierV2.NewQuerier(querierOptsV2)
t.reader = reader
if t.seasonality == anomaly.SeasonalityHourly {
t.provider = anomaly.NewHourlyProvider(
anomaly.WithCache[*anomaly.HourlyProvider](cache),
anomaly.WithKeyGenerator[*anomaly.HourlyProvider](queryBuilder.NewKeyGenerator()),
anomaly.WithReader[*anomaly.HourlyProvider](reader),
)
} else if t.seasonality == anomaly.SeasonalityDaily {
t.provider = anomaly.NewDailyProvider(
anomaly.WithCache[*anomaly.DailyProvider](cache),
anomaly.WithKeyGenerator[*anomaly.DailyProvider](queryBuilder.NewKeyGenerator()),
anomaly.WithReader[*anomaly.DailyProvider](reader),
)
} else if t.seasonality == anomaly.SeasonalityWeekly {
t.provider = anomaly.NewWeeklyProvider(
anomaly.WithCache[*anomaly.WeeklyProvider](cache),
anomaly.WithKeyGenerator[*anomaly.WeeklyProvider](queryBuilder.NewKeyGenerator()),
anomaly.WithReader[*anomaly.WeeklyProvider](reader),
)
}
if t.seasonality == anomaly.SeasonalityHourly {
t.providerV2 = anomalyV2.NewHourlyProvider(
anomalyV2.WithQuerier[*anomalyV2.HourlyProvider](querierV5),
@@ -148,7 +110,7 @@ func NewAnomalyRule(
}
t.querierV5 = querierV5
t.version = p.Version
t.reader = reader
t.logger = logger
return &t, nil
}
@@ -157,36 +119,9 @@ func (r *AnomalyRule) Type() ruletypes.RuleType {
return RuleTypeAnomaly
}
func (r *AnomalyRule) prepareQueryRange(ctx context.Context, ts time.Time) (*v3.QueryRangeParamsV3, error) {
func (r *AnomalyRule) prepareQueryRange(ctx context.Context, ts time.Time) (*qbtypes.QueryRangeRequest, error) {
r.logger.InfoContext(
ctx, "prepare query range request v4", "ts", ts.UnixMilli(), "eval_window", r.EvalWindow().Milliseconds(), "eval_delay", r.EvalDelay().Milliseconds(),
)
st, en := r.Timestamps(ts)
start := st.UnixMilli()
end := en.UnixMilli()
compositeQuery := r.Condition().CompositeQuery
if compositeQuery.PanelType != v3.PanelTypeGraph {
compositeQuery.PanelType = v3.PanelTypeGraph
}
// default mode
return &v3.QueryRangeParamsV3{
Start: start,
End: end,
Step: int64(math.Max(float64(common.MinAllowedStepInterval(start, end)), 60)),
CompositeQuery: compositeQuery,
Variables: make(map[string]interface{}, 0),
NoCache: false,
}, nil
}
func (r *AnomalyRule) prepareQueryRangeV5(ctx context.Context, ts time.Time) (*qbtypes.QueryRangeRequest, error) {
r.logger.InfoContext(ctx, "prepare query range request v5", "ts", ts.UnixMilli(), "eval_window", r.EvalWindow().Milliseconds(), "eval_delay", r.EvalDelay().Milliseconds())
r.logger.InfoContext(ctx, "prepare query range request", "ts", ts.UnixMilli(), "eval_window", r.EvalWindow().Milliseconds(), "eval_delay", r.EvalDelay().Milliseconds())
startTs, endTs := r.Timestamps(ts)
start, end := startTs.UnixMilli(), endTs.UnixMilli()
@@ -215,60 +150,6 @@ func (r *AnomalyRule) buildAndRunQuery(ctx context.Context, orgID valuer.UUID, t
if err != nil {
return nil, err
}
err = r.PopulateTemporality(ctx, orgID, params)
if err != nil {
return nil, fmt.Errorf("internal error while setting temporality")
}
anomalies, err := r.provider.GetAnomalies(ctx, orgID, &anomaly.GetAnomaliesRequest{
Params: params,
Seasonality: r.seasonality,
})
if err != nil {
return nil, err
}
var queryResult *v3.Result
for _, result := range anomalies.Results {
if result.QueryName == r.GetSelectedQuery() {
queryResult = result
break
}
}
hasData := len(queryResult.AnomalyScores) > 0
if missingDataAlert := r.HandleMissingDataAlert(ctx, ts, hasData); missingDataAlert != nil {
return ruletypes.Vector{*missingDataAlert}, nil
}
var resultVector ruletypes.Vector
scoresJSON, _ := json.Marshal(queryResult.AnomalyScores)
r.logger.InfoContext(ctx, "anomaly scores", "scores", string(scoresJSON))
for _, series := range queryResult.AnomalyScores {
if !r.Condition().ShouldEval(series) {
r.logger.InfoContext(ctx, "not enough data points to evaluate series, skipping", "ruleid", r.ID(), "numPoints", len(series.Points), "requiredPoints", r.Condition().RequiredNumPoints)
continue
}
results, err := r.Threshold.Eval(*series, r.Unit(), ruletypes.EvalData{
ActiveAlerts: r.ActiveAlertsLabelFP(),
SendUnmatched: r.ShouldSendUnmatched(),
})
if err != nil {
return nil, err
}
resultVector = append(resultVector, results...)
}
return resultVector, nil
}
func (r *AnomalyRule) buildAndRunQueryV5(ctx context.Context, orgID valuer.UUID, ts time.Time) (ruletypes.Vector, error) {
params, err := r.prepareQueryRangeV5(ctx, ts)
if err != nil {
return nil, err
}
anomalies, err := r.providerV2.GetAnomalies(ctx, orgID, &anomalyV2.AnomaliesRequest{
Params: *params,
@@ -290,20 +171,25 @@ func (r *AnomalyRule) buildAndRunQueryV5(ctx context.Context, orgID valuer.UUID,
r.logger.WarnContext(ctx, "nil qb result", "ts", ts.UnixMilli())
}
queryResult := transition.ConvertV5TimeSeriesDataToV4Result(qbResult)
var anomalyScores []*qbtypes.TimeSeries
if qbResult != nil {
for _, bucket := range qbResult.Aggregations {
anomalyScores = append(anomalyScores, bucket.AnomalyScores...)
}
}
hasData := len(queryResult.AnomalyScores) > 0
hasData := len(anomalyScores) > 0
if missingDataAlert := r.HandleMissingDataAlert(ctx, ts, hasData); missingDataAlert != nil {
return ruletypes.Vector{*missingDataAlert}, nil
}
var resultVector ruletypes.Vector
scoresJSON, _ := json.Marshal(queryResult.AnomalyScores)
scoresJSON, _ := json.Marshal(anomalyScores)
r.logger.InfoContext(ctx, "anomaly scores", "scores", string(scoresJSON))
// Filter out new series if newGroupEvalDelay is configured
seriesToProcess := queryResult.AnomalyScores
seriesToProcess := anomalyScores
if r.ShouldSkipNewGroups() {
filteredSeries, filterErr := r.BaseRule.FilterNewSeries(ctx, ts, seriesToProcess)
// In case of error we log the error and continue with the original series
@@ -316,7 +202,7 @@ func (r *AnomalyRule) buildAndRunQueryV5(ctx context.Context, orgID valuer.UUID,
for _, series := range seriesToProcess {
if !r.Condition().ShouldEval(series) {
r.logger.InfoContext(ctx, "not enough data points to evaluate series, skipping", "ruleid", r.ID(), "numPoints", len(series.Points), "requiredPoints", r.Condition().RequiredNumPoints)
r.logger.InfoContext(ctx, "not enough data points to evaluate series, skipping", "ruleid", r.ID(), "numPoints", len(series.Values), "requiredPoints", r.Condition().RequiredNumPoints)
continue
}
results, err := r.Threshold.Eval(*series, r.Unit(), ruletypes.EvalData{
@@ -337,16 +223,7 @@ func (r *AnomalyRule) Eval(ctx context.Context, ts time.Time) (int, error) {
valueFormatter := formatter.FromUnit(r.Unit())
var res ruletypes.Vector
var err error
if r.version == "v5" {
r.logger.InfoContext(ctx, "running v5 query")
res, err = r.buildAndRunQueryV5(ctx, r.OrgID(), ts)
} else {
r.logger.InfoContext(ctx, "running v4 query")
res, err = r.buildAndRunQuery(ctx, r.OrgID(), ts)
}
res, err := r.buildAndRunQuery(ctx, r.OrgID(), ts)
if err != nil {
return 0, err
}

View File

@@ -8,28 +8,27 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/SigNoz/signoz/ee/query-service/anomaly"
anomalyV2 "github.com/SigNoz/signoz/ee/anomaly"
"github.com/SigNoz/signoz/pkg/instrumentation/instrumentationtest"
"github.com/SigNoz/signoz/pkg/query-service/app/clickhouseReader"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/telemetrystore/telemetrystoretest"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/ruletypes"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/valuer"
)
// mockAnomalyProvider is a mock implementation of anomaly.Provider for testing.
// We need this because the anomaly provider makes 6 different queries for various
// time periods (current, past period, current season, past season, past 2 seasons,
// past 3 seasons), making it cumbersome to create mock data.
type mockAnomalyProvider struct {
responses []*anomaly.GetAnomaliesResponse
// mockAnomalyProviderV2 is a mock implementation of anomalyV2.Provider for testing.
type mockAnomalyProviderV2 struct {
responses []*anomalyV2.AnomaliesResponse
callCount int
}
func (m *mockAnomalyProvider) GetAnomalies(ctx context.Context, orgID valuer.UUID, req *anomaly.GetAnomaliesRequest) (*anomaly.GetAnomaliesResponse, error) {
func (m *mockAnomalyProviderV2) GetAnomalies(ctx context.Context, orgID valuer.UUID, req *anomalyV2.AnomaliesRequest) (*anomalyV2.AnomaliesResponse, error) {
if m.callCount >= len(m.responses) {
return &anomaly.GetAnomaliesResponse{Results: []*v3.Result{}}, nil
return &anomalyV2.AnomaliesResponse{Results: []*qbtypes.TimeSeriesData{}}, nil
}
resp := m.responses[m.callCount]
m.callCount++
@@ -82,11 +81,11 @@ func TestAnomalyRule_NoData_AlertOnAbsent(t *testing.T) {
},
}
responseNoData := &anomaly.GetAnomaliesResponse{
Results: []*v3.Result{
responseNoData := &anomalyV2.AnomaliesResponse{
Results: []*qbtypes.TimeSeriesData{
{
QueryName: "A",
AnomalyScores: []*v3.Series{},
QueryName: "A",
Aggregations: []*qbtypes.AggregationBucket{},
},
},
}
@@ -129,8 +128,8 @@ func TestAnomalyRule_NoData_AlertOnAbsent(t *testing.T) {
)
require.NoError(t, err)
rule.provider = &mockAnomalyProvider{
responses: []*anomaly.GetAnomaliesResponse{responseNoData},
rule.providerV2 = &mockAnomalyProviderV2{
responses: []*anomalyV2.AnomaliesResponse{responseNoData},
}
alertsFound, err := rule.Eval(context.Background(), evalTime)
@@ -190,11 +189,11 @@ func TestAnomalyRule_NoData_AbsentFor(t *testing.T) {
},
}
responseNoData := &anomaly.GetAnomaliesResponse{
Results: []*v3.Result{
responseNoData := &anomalyV2.AnomaliesResponse{
Results: []*qbtypes.TimeSeriesData{
{
QueryName: "A",
AnomalyScores: []*v3.Series{},
QueryName: "A",
Aggregations: []*qbtypes.AggregationBucket{},
},
},
}
@@ -228,16 +227,22 @@ func TestAnomalyRule_NoData_AbsentFor(t *testing.T) {
t1 := baseTime.Add(5 * time.Minute)
t2 := t1.Add(c.timeBetweenEvals)
responseWithData := &anomaly.GetAnomaliesResponse{
Results: []*v3.Result{
responseWithData := &anomalyV2.AnomaliesResponse{
Results: []*qbtypes.TimeSeriesData{
{
QueryName: "A",
AnomalyScores: []*v3.Series{
Aggregations: []*qbtypes.AggregationBucket{
{
Labels: map[string]string{"test": "label"},
Points: []v3.Point{
{Timestamp: baseTime.UnixMilli(), Value: 1.0},
{Timestamp: baseTime.Add(time.Minute).UnixMilli(), Value: 1.5},
AnomalyScores: []*qbtypes.TimeSeries{
{
Labels: []*qbtypes.Label{
{Key: telemetrytypes.TelemetryFieldKey{Name: "test"}, Value: "label"},
},
Values: []*qbtypes.TimeSeriesValue{
{Timestamp: baseTime.UnixMilli(), Value: 1.0},
{Timestamp: baseTime.Add(time.Minute).UnixMilli(), Value: 1.5},
},
},
},
},
},
@@ -252,8 +257,8 @@ func TestAnomalyRule_NoData_AbsentFor(t *testing.T) {
rule, err := NewAnomalyRule("test-anomaly-rule", valuer.GenerateUUID(), &postableRule, reader, nil, logger, nil)
require.NoError(t, err)
rule.provider = &mockAnomalyProvider{
responses: []*anomaly.GetAnomaliesResponse{responseWithData, responseNoData},
rule.providerV2 = &mockAnomalyProviderV2{
responses: []*anomalyV2.AnomaliesResponse{responseWithData, responseNoData},
}
alertsFound1, err := rule.Eval(context.Background(), t1)

View File

@@ -12,6 +12,8 @@ export interface MockUPlotInstance {
export interface MockUPlotPaths {
spline: jest.Mock;
bars: jest.Mock;
linear: jest.Mock;
stepped: jest.Mock;
}
// Create mock instance methods
@@ -23,10 +25,23 @@ const createMockUPlotInstance = (): MockUPlotInstance => ({
setSeries: jest.fn(),
});
// Create mock paths
const mockPaths: MockUPlotPaths = {
spline: jest.fn(),
bars: jest.fn(),
// Path builder: (self, seriesIdx, idx0, idx1) => paths or null
const createMockPathBuilder = (name: string): jest.Mock =>
jest.fn(() => ({
name, // To test if the correct pathBuilder is used
stroke: jest.fn(),
fill: jest.fn(),
clip: jest.fn(),
}));
// Create mock paths - linear, spline, stepped needed by UPlotSeriesBuilder.getPathBuilder
const mockPaths = {
spline: jest.fn(() => createMockPathBuilder('spline')),
bars: jest.fn(() => createMockPathBuilder('bars')),
linear: jest.fn(() => createMockPathBuilder('linear')),
stepped: jest.fn((opts?: { align?: number }) =>
createMockPathBuilder(`stepped-(${opts?.align ?? 0})`),
),
};
// Mock static methods
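These mocks stand in for uPlot's real path-builder factories. A minimal sketch of the API being mocked (uPlot marks the factories as optional in its typings, hence the non-null assertions; the align option is an assumption inferred from the stepped mock above):

import uPlot from 'uplot';

// Each factory returns a path builder that uPlot invokes per series draw:
// (self, seriesIdx, idx0, idx1) => paths or null
const linear = uPlot.paths.linear!();
const spline = uPlot.paths.spline!();
const stepped = uPlot.paths.stepped!({ align: 1 });
const bars = uPlot.paths.bars!();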

View File

@@ -11,6 +11,7 @@ import {
const dashboardVariablesQuery = async (
props: Props,
signal?: AbortSignal,
): Promise<SuccessResponse<VariableResponseProps> | ErrorResponse> => {
try {
const { globalTime } = store.getState();
@@ -32,7 +33,7 @@ const dashboardVariablesQuery = async (
payload.variables = { ...payload.variables, ...timeVariables };
const response = await axios.post(`/variables/query`, payload);
const response = await axios.post(`/variables/query`, payload, { signal });
return {
statusCode: 200,

View File

@@ -19,6 +19,7 @@ export const getFieldValues = async (
startUnixMilli?: number,
endUnixMilli?: number,
existingQuery?: string,
abortSignal?: AbortSignal,
): Promise<SuccessResponseV2<FieldValueResponse>> => {
const params: Record<string, string> = {};
@@ -47,7 +48,10 @@ export const getFieldValues = async (
}
try {
const response = await axios.get('/fields/values', { params });
const response = await axios.get('/fields/values', {
params,
signal: abortSignal,
});
// Normalize values from different types (stringValues, boolValues, etc.)
if (response.data?.data?.values) {

View File

@@ -73,6 +73,7 @@ const CustomMultiSelect: React.FC<CustomMultiSelectProps> = ({
enableRegexOption = false,
isDynamicVariable = false,
showRetryButton = true,
waitingMessage,
...rest
}) => {
// ===== State & Refs =====
@@ -1681,6 +1682,7 @@ const CustomMultiSelect: React.FC<CustomMultiSelectProps> = ({
{!loading &&
!errorMessage &&
!noDataMessage &&
!waitingMessage &&
!(showIncompleteDataMessage && isScrolledToBottom) && (
<section className="navigate">
<ArrowDown size={8} className="icons" />
@@ -1698,7 +1700,17 @@ const CustomMultiSelect: React.FC<CustomMultiSelectProps> = ({
<div className="navigation-text">Refreshing values...</div>
</div>
)}
{errorMessage && !loading && (
{!loading && waitingMessage && (
<div className="navigation-loading">
<div className="navigation-icons">
<LoadingOutlined />
</div>
<div className="navigation-text" title={waitingMessage}>
{waitingMessage}
</div>
</div>
)}
{errorMessage && !loading && !waitingMessage && (
<div className="navigation-error">
<div className="navigation-text">
{errorMessage || SOMETHING_WENT_WRONG}
@@ -1720,6 +1732,7 @@ const CustomMultiSelect: React.FC<CustomMultiSelectProps> = ({
{showIncompleteDataMessage &&
isScrolledToBottom &&
!loading &&
!waitingMessage &&
!errorMessage && (
<div className="navigation-text-incomplete">
Don&apos;t see the value? Use search
@@ -1762,6 +1775,7 @@ const CustomMultiSelect: React.FC<CustomMultiSelectProps> = ({
isDarkMode,
isDynamicVariable,
showRetryButton,
waitingMessage,
]);
// Custom handler for dropdown visibility changes

View File

@@ -63,6 +63,7 @@ const CustomSelect: React.FC<CustomSelectProps> = ({
showIncompleteDataMessage = false,
showRetryButton = true,
isDynamicVariable = false,
waitingMessage,
...rest
}) => {
// ===== State & Refs =====
@@ -568,6 +569,7 @@ const CustomSelect: React.FC<CustomSelectProps> = ({
{!loading &&
!errorMessage &&
!noDataMessage &&
!waitingMessage &&
!(showIncompleteDataMessage && isScrolledToBottom) && (
<section className="navigate">
<ArrowDown size={8} className="icons" />
@@ -583,6 +585,16 @@ const CustomSelect: React.FC<CustomSelectProps> = ({
<div className="navigation-text">Refreshing values...</div>
</div>
)}
{!loading && waitingMessage && (
<div className="navigation-loading">
<div className="navigation-icons">
<LoadingOutlined />
</div>
<div className="navigation-text" title={waitingMessage}>
{waitingMessage}
</div>
</div>
)}
{errorMessage && !loading && (
<div className="navigation-error">
<div className="navigation-text">
@@ -605,6 +617,7 @@ const CustomSelect: React.FC<CustomSelectProps> = ({
{showIncompleteDataMessage &&
isScrolledToBottom &&
!loading &&
!waitingMessage &&
!errorMessage && (
<div className="navigation-text-incomplete">
Don&apos;t see the value? Use search
@@ -641,6 +654,7 @@ const CustomSelect: React.FC<CustomSelectProps> = ({
showRetryButton,
isDarkMode,
isDynamicVariable,
waitingMessage,
]);
// Handle dropdown visibility changes

View File

@@ -30,6 +30,7 @@ export interface CustomSelectProps extends Omit<SelectProps, 'options'> {
showIncompleteDataMessage?: boolean;
showRetryButton?: boolean;
isDynamicVariable?: boolean;
waitingMessage?: string;
}
export interface CustomTagProps {
@@ -66,4 +67,5 @@ export interface CustomMultiSelectProps
enableRegexOption?: boolean;
isDynamicVariable?: boolean;
showRetryButton?: boolean;
waitingMessage?: string;
}

View File

@@ -282,11 +282,11 @@ export default function QuickFilters(props: IQuickFiltersProps): JSX.Element {
size="small"
style={{ marginLeft: 'auto' }}
checked={showIP ?? true}
onClick={(): void => {
onChange={(checked): void => {
logEvent('API Monitoring: Show IP addresses clicked', {
showIP: !(showIP ?? true),
showIP: checked,
});
setParams({ showIP });
setParams({ showIP: checked });
}}
/>
</div>

View File

@@ -1,4 +1,8 @@
import { ENVIRONMENT } from 'constants/env';
import {
ApiMonitoringParams,
useApiMonitoringParams,
} from 'container/ApiMonitoring/queryParams';
import { useQueryBuilder } from 'hooks/queryBuilder/useQueryBuilder';
import {
otherFiltersResponse,
@@ -18,10 +22,15 @@ import { QuickFiltersConfig } from './constants';
jest.mock('hooks/queryBuilder/useQueryBuilder', () => ({
useQueryBuilder: jest.fn(),
}));
jest.mock('container/ApiMonitoring/queryParams');
const handleFilterVisibilityChange = jest.fn();
const redirectWithQueryBuilderData = jest.fn();
const putHandler = jest.fn();
const mockSetApiMonitoringParams = jest.fn() as jest.MockedFunction<
(newParams: Partial<ApiMonitoringParams>, replace?: boolean) => void
>;
const mockUseApiMonitoringParams = jest.mocked(useApiMonitoringParams);
const BASE_URL = ENVIRONMENT.baseURL;
const SIGNAL = SignalType.LOGS;
@@ -84,6 +93,28 @@ TestQuickFilters.defaultProps = {
config: QuickFiltersConfig,
};
function TestQuickFiltersApiMonitoring({
signal = SignalType.LOGS,
config = QuickFiltersConfig,
}: {
signal?: SignalType;
config?: IQuickFiltersConfig[];
}): JSX.Element {
return (
<QuickFilters
source={QuickFiltersSource.API_MONITORING}
config={config}
handleFilterVisibilityChange={handleFilterVisibilityChange}
signal={signal}
/>
);
}
TestQuickFiltersApiMonitoring.defaultProps = {
signal: '',
config: QuickFiltersConfig,
};
beforeAll(() => {
server.listen();
});
@@ -112,6 +143,10 @@ beforeEach(() => {
lastUsedQuery: 0,
redirectWithQueryBuilderData,
});
mockUseApiMonitoringParams.mockReturnValue([
{ showIP: true } as ApiMonitoringParams,
mockSetApiMonitoringParams,
]);
setupServer();
});
@@ -251,6 +286,24 @@ describe('Quick Filters', () => {
);
});
});
it('toggles Show IP addresses and updates API Monitoring params', async () => {
const user = userEvent.setup({ pointerEventsCheck: 0 });
render(<TestQuickFiltersApiMonitoring />);
// Switch should be rendered and initially checked
expect(screen.getByText('Show IP addresses')).toBeInTheDocument();
const toggle = screen.getByRole('switch');
expect(toggle).toHaveAttribute('aria-checked', 'true');
await user.click(toggle);
await waitFor(() => {
expect(mockSetApiMonitoringParams).toHaveBeenCalledWith(
expect.objectContaining({ showIP: false }),
);
});
});
});
describe('Quick Filters with custom filters', () => {

View File

@@ -1,7 +1,7 @@
import { useCallback, useEffect, useMemo, useState } from 'react';
import { useSelector } from 'react-redux';
import { LoadingOutlined } from '@ant-design/icons';
import { Spin, Table, Typography } from 'antd';
import { Spin, Table } from 'antd';
import logEvent from 'api/common/logEvent';
import cx from 'classnames';
import QuerySearch from 'components/QueryBuilderV2/QueryV2/QuerySearch/QuerySearch';
@@ -14,11 +14,13 @@ import { useQueryOperations } from 'hooks/queryBuilder/useQueryBuilderOperations
import { useShareBuilderUrl } from 'hooks/queryBuilder/useShareBuilderUrl';
import { useListOverview } from 'hooks/thirdPartyApis/useListOverview';
import { get } from 'lodash-es';
import { MoveUpRight } from 'lucide-react';
import { AppState } from 'store/reducers';
import { BaseAutocompleteData } from 'types/api/queryBuilder/queryAutocompleteResponse';
import { HandleChangeQueryDataV5 } from 'types/common/operations.types';
import { DataSource } from 'types/common/queryBuilder';
import { GlobalReducer } from 'types/reducer/globalTime';
import DOCLINKS from 'utils/docLinks';
import { ApiMonitoringHardcodedAttributeKeys } from '../../constants';
import { DEFAULT_PARAMS, useApiMonitoringParams } from '../../queryParams';
@@ -125,51 +127,67 @@ function DomainList(): JSX.Element {
hardcodedAttributeKeys={ApiMonitoringHardcodedAttributeKeys}
/>
</div>
<Table
className={cx('api-monitoring-domain-list-table')}
dataSource={isFetching || isLoading ? [] : formattedDataForTable}
columns={columnsConfig}
loading={{
spinning: isFetching || isLoading,
indicator: <Spin indicator={<LoadingOutlined size={14} spin />} />,
}}
locale={{
emptyText:
isFetching || isLoading ? null : (
<div className="no-filtered-domains-message-container">
<div className="no-filtered-domains-message-content">
<img
src="/Icons/emptyState.svg"
alt="thinking-emoji"
className="empty-state-svg"
/>
{!isFetching && !isLoading && formattedDataForTable.length === 0 && (
<div className="no-filtered-domains-message-container">
<div className="no-filtered-domains-message-content">
<img
src="/Icons/emptyState.svg"
alt="thinking-emoji"
className="empty-state-svg"
/>
<Typography.Text className="no-filtered-domains-message">
This query had no results. Edit your query and try again!
</Typography.Text>
</div>
<div className="no-filtered-domains-message">
<div className="no-domain-title">
No External API calls detected with applied filters.
</div>
),
}}
scroll={{ x: true }}
tableLayout="fixed"
onRow={(record, index): { onClick: () => void; className: string } => ({
onClick: (): void => {
if (index !== undefined) {
const dataIndex = formattedDataForTable.findIndex(
(item) => item.key === record.key,
);
setSelectedDomainIndex(dataIndex);
setParams({ selectedDomain: record.domainName });
logEvent('API Monitoring: Domain name row clicked', {});
}
},
className: 'expanded-clickable-row',
})}
rowClassName={(_, index): string =>
index % 2 === 0 ? 'table-row-dark' : 'table-row-light'
}
/>
<div className="no-domain-subtitle">
Ensure all HTTP client spans are being sent with kind as{' '}
<span className="attribute">Client</span> and url set in{' '}
<span className="attribute">url.full</span> or{' '}
<span className="attribute">http.url</span> attribute.
</div>
<a
href={DOCLINKS.EXTERNAL_API_MONITORING}
target="_blank"
rel="noreferrer"
className="external-api-doc-link"
>
Learn how External API monitoring works in SigNoz{' '}
<MoveUpRight size={14} />
</a>
</div>
</div>
</div>
)}
{(isFetching || isLoading || formattedDataForTable.length > 0) && (
<Table
className="api-monitoring-domain-list-table"
dataSource={isFetching || isLoading ? [] : formattedDataForTable}
columns={columnsConfig}
loading={{
spinning: isFetching || isLoading,
indicator: <Spin indicator={<LoadingOutlined size={14} spin />} />,
}}
scroll={{ x: true }}
tableLayout="fixed"
onRow={(record, index): { onClick: () => void; className: string } => ({
onClick: (): void => {
if (index !== undefined) {
const dataIndex = formattedDataForTable.findIndex(
(item) => item.key === record.key,
);
setSelectedDomainIndex(dataIndex);
setParams({ selectedDomain: record.domainName });
logEvent('API Monitoring: Domain name row clicked', {});
}
},
className: 'expanded-clickable-row',
})}
rowClassName={(_, index): string =>
index % 2 === 0 ? 'table-row-dark' : 'table-row-light'
}
/>
)}
{selectedDomainIndex !== -1 && (
<DomainDetails
domainData={formattedDataForTable[selectedDomainIndex]}

View File

@@ -180,10 +180,59 @@
.no-filtered-domains-message {
margin-top: 8px;
display: flex;
gap: 8px;
flex-direction: column;
.no-domain-title {
color: var(--bg-vanilla-100, #fff);
font-family: Inter;
font-size: 14px;
font-style: normal;
font-weight: 500;
line-height: 20px; /* 142.857% */
}
.no-domain-subtitle {
color: var(--bg-vanilla-400, #c0c1c3);
font-family: Inter;
font-size: 14px;
font-style: normal;
font-weight: 400;
line-height: 20px; /* 142.857% */
.attribute {
font-family: 'Space Mono';
}
}
.external-api-doc-link {
display: flex;
flex-direction: row;
gap: 4px;
align-items: center;
}
}
}
.lightMode {
.no-filtered-domains-message-container {
.no-filtered-domains-message-content {
.no-filtered-domains-message {
.no-domain-title {
color: var(--text-ink-500);
}
.no-domain-subtitle {
color: var(--text-ink-400);
.attribute {
font-family: 'Space Mono';
}
}
}
}
}
.api-monitoring-domain-list-table {
.ant-table {
.ant-table-thead > tr > th {

View File

@@ -0,0 +1,45 @@
import { useCallback } from 'react';
import ChartWrapper from 'container/DashboardContainer/visualization/charts/ChartWrapper/ChartWrapper';
import BarChartTooltip from 'lib/uPlotV2/components/Tooltip/BarChartTooltip';
import {
BarTooltipProps,
TooltipRenderArgs,
} from 'lib/uPlotV2/components/types';
import { useBarChartStacking } from '../../hooks/useBarChartStacking';
import { BarChartProps } from '../types';
export default function BarChart(props: BarChartProps): JSX.Element {
const { children, isStackedBarChart, config, data, ...rest } = props;
const chartData = useBarChartStacking({
data,
isStackedBarChart,
config,
});
const renderTooltip = useCallback(
(props: TooltipRenderArgs): React.ReactNode => {
const tooltipProps: BarTooltipProps = {
...props,
timezone: rest.timezone,
yAxisUnit: rest.yAxisUnit,
decimalPrecision: rest.decimalPrecision,
isStackedBarChart: isStackedBarChart,
};
return <BarChartTooltip {...tooltipProps} />;
},
[rest.timezone, rest.yAxisUnit, rest.decimalPrecision, isStackedBarChart],
);
return (
<ChartWrapper
{...rest}
config={config}
data={chartData}
renderTooltip={renderTooltip}
>
{children}
</ChartWrapper>
);
}

View File

@@ -0,0 +1,116 @@
import uPlot, { AlignedData } from 'uplot';
/**
* Stack data cumulatively (top-down: first series = top, last = bottom).
* When `omit(seriesIndex)` returns true, that series is excluded from stacking.
*/
export function stackSeries(
data: AlignedData,
omit: (seriesIndex: number) => boolean,
): { data: AlignedData; bands: uPlot.Band[] } {
const timeAxis = data[0];
const pointCount = timeAxis.length;
const valueSeriesCount = data.length - 1; // exclude time axis
const stackedSeries = buildStackedSeries({
data,
valueSeriesCount,
pointCount,
omit,
});
const bands = buildFillBands(valueSeriesCount + 1, omit); // +1 for 1-based series indices
return {
data: [timeAxis, ...stackedSeries] as AlignedData,
bands,
};
}
interface BuildStackedSeriesParams {
data: AlignedData;
valueSeriesCount: number;
pointCount: number;
omit: (seriesIndex: number) => boolean;
}
/**
* Accumulate from last series upward: last series = raw values, first = total.
* Omitted series are copied as-is (no accumulation).
*/
function buildStackedSeries({
data,
valueSeriesCount,
pointCount,
omit,
}: BuildStackedSeriesParams): (number | null)[][] {
const stackedSeries: (number | null)[][] = Array(valueSeriesCount);
const cumulativeSums = Array(pointCount).fill(0) as number[];
for (let seriesIndex = valueSeriesCount; seriesIndex >= 1; seriesIndex--) {
const rawValues = data[seriesIndex] as (number | null)[];
if (omit(seriesIndex)) {
stackedSeries[seriesIndex - 1] = rawValues;
} else {
stackedSeries[seriesIndex - 1] = rawValues.map((rawValue, pointIndex) => {
const numericValue = rawValue == null ? 0 : Number(rawValue);
return (cumulativeSums[pointIndex] += numericValue);
});
}
}
return stackedSeries;
}
/**
* Bands define fill between consecutive visible series for stacked appearance.
* uPlot format: [upperSeriesIdx, lowerSeriesIdx].
*/
function buildFillBands(
seriesLength: number,
omit: (seriesIndex: number) => boolean,
): uPlot.Band[] {
const bands: uPlot.Band[] = [];
for (let seriesIndex = 1; seriesIndex < seriesLength; seriesIndex++) {
if (omit(seriesIndex)) {
continue;
}
const nextVisibleSeriesIndex = findNextVisibleSeriesIndex(
seriesLength,
seriesIndex,
omit,
);
if (nextVisibleSeriesIndex !== -1) {
bands.push({ series: [seriesIndex, nextVisibleSeriesIndex] });
}
}
return bands;
}
function findNextVisibleSeriesIndex(
seriesLength: number,
afterIndex: number,
omit: (seriesIndex: number) => boolean,
): number {
for (let i = afterIndex + 1; i < seriesLength; i++) {
if (!omit(i)) {
return i;
}
}
return -1;
}
/**
* Returns band indices for initial stacked state (no series omitted).
* Top-down: first series at top, band fills between consecutive series.
* uPlot band format: [upperSeriesIdx, lowerSeriesIdx].
*/
export function getInitialStackedBands(seriesCount: number): uPlot.Band[] {
const bands: uPlot.Band[] = [];
for (let seriesIndex = 1; seriesIndex < seriesCount; seriesIndex++) {
bands.push({ series: [seriesIndex, seriesIndex + 1] });
}
return bands;
}
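A worked example (not part of the diff) of the top-down accumulation above, with no series omitted:

import uPlot from 'uplot';
import { stackSeries } from 'container/DashboardContainer/visualization/charts/utils/stackSeriesUtils';

// Two value series over three timestamps.
const data: uPlot.AlignedData = [
  [1, 2, 3], // timestamps
  [10, 20, 30], // series 1: drawn on top, accumulates the running total
  [1, 2, 3], // series 2: the last series keeps its raw values
];
const { data: stacked, bands } = stackSeries(data, () => false);
// stacked => [[1, 2, 3], [11, 22, 33], [1, 2, 3]]
// bands   => [{ series: [1, 2] }], i.e. fill between upper series 1 and lower series 2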

View File

@@ -0,0 +1,125 @@
import {
MutableRefObject,
useCallback,
useLayoutEffect,
useMemo,
useRef,
} from 'react';
import { UPlotConfigBuilder } from 'lib/uPlotV2/config/UPlotConfigBuilder';
import { has } from 'lodash-es';
import uPlot from 'uplot';
import { stackSeries } from '../charts/utils/stackSeriesUtils';
/** Returns true if the series at the given index is hidden (e.g. via legend toggle). */
function isSeriesHidden(plot: uPlot, seriesIndex: number): boolean {
return !plot.series[seriesIndex]?.show;
}
function canApplyStacking(
unstackedData: uPlot.AlignedData | null,
plot: uPlot,
isUpdating: boolean,
): boolean {
return (
!isUpdating &&
!!unstackedData &&
!!plot.data &&
unstackedData[0]?.length === plot.data[0]?.length
);
}
function setupStackingHooks(
config: UPlotConfigBuilder,
applyStackingToChart: (plot: uPlot) => void,
isUpdatingRef: MutableRefObject<boolean>,
): () => void {
const onDataChange = (plot: uPlot): void => {
if (!isUpdatingRef.current) {
applyStackingToChart(plot);
}
};
const onSeriesVisibilityChange = (
plot: uPlot,
_seriesIdx: number | null,
opts: uPlot.Series,
): void => {
if (!has(opts, 'focus')) {
applyStackingToChart(plot);
}
};
const removeSetDataHook = config.addHook('setData', onDataChange);
const removeSetSeriesHook = config.addHook(
'setSeries',
onSeriesVisibilityChange,
);
return (): void => {
removeSetDataHook?.();
removeSetSeriesHook?.();
};
}
export interface UseBarChartStackingParams {
data: uPlot.AlignedData;
isStackedBarChart?: boolean;
config: UPlotConfigBuilder | null;
}
/**
* Handles stacking for bar charts: computes initial stacked data and re-stacks
* when data or series visibility changes (e.g. legend toggles).
*/
export function useBarChartStacking({
data,
isStackedBarChart = false,
config,
}: UseBarChartStackingParams): uPlot.AlignedData {
// Store unstacked source data so uPlot hooks can access it (hooks run outside React's render cycle)
const unstackedDataRef = useRef<uPlot.AlignedData | null>(null);
unstackedDataRef.current = isStackedBarChart ? data : null;
// Prevents re-entrant calls when we update chart data (avoids infinite loop in setData hook)
const isUpdatingChartRef = useRef(false);
const chartData = useMemo((): uPlot.AlignedData => {
if (!isStackedBarChart || !data || data.length < 2) {
return data;
}
const noSeriesHidden = (): boolean => false; // include all series in initial stack
const { data: stacked } = stackSeries(data, noSeriesHidden);
return stacked;
}, [data, isStackedBarChart]);
const applyStackingToChart = useCallback((plot: uPlot): void => {
const unstacked = unstackedDataRef.current;
if (
!unstacked ||
!canApplyStacking(unstacked, plot, isUpdatingChartRef.current)
) {
return;
}
const shouldExcludeSeries = (idx: number): boolean =>
isSeriesHidden(plot, idx);
const { data: stacked, bands } = stackSeries(unstacked, shouldExcludeSeries);
plot.delBand(null);
bands.forEach((band: uPlot.Band) => plot.addBand(band));
isUpdatingChartRef.current = true;
plot.setData(stacked);
isUpdatingChartRef.current = false;
}, []);
useLayoutEffect(() => {
if (!isStackedBarChart || !config) {
return undefined;
}
return setupStackingHooks(config, applyStackingToChart, isUpdatingChartRef);
}, [isStackedBarChart, config, applyStackingToChart]);
return chartData;
}

View File

@@ -0,0 +1,160 @@
import { useCallback, useEffect, useMemo, useRef, useState } from 'react';
import { PanelWrapperProps } from 'container/PanelWrapper/panelWrapper.types';
import { useIsDarkMode } from 'hooks/useDarkMode';
import { useResizeObserver } from 'hooks/useDimensions';
import { LegendPosition } from 'lib/uPlotV2/components/types';
import ContextMenu from 'periscope/components/ContextMenu';
import { useDashboard } from 'providers/Dashboard/Dashboard';
import { useTimezone } from 'providers/Timezone';
import { MetricRangePayloadProps } from 'types/api/metrics/getQueryRange';
import uPlot from 'uplot';
import { getTimeRange } from 'utils/getTimeRange';
import BarChart from '../../charts/BarChart/BarChart';
import ChartManager from '../../components/ChartManager/ChartManager';
import { usePanelContextMenu } from '../../hooks/usePanelContextMenu';
import { prepareBarPanelConfig, prepareBarPanelData } from './utils';
import '../Panel.styles.scss';
function BarPanel(props: PanelWrapperProps): JSX.Element {
const {
panelMode,
queryResponse,
widget,
onDragSelect,
isFullViewMode,
onToggleModelHandler,
} = props;
const uPlotRef = useRef<uPlot | null>(null);
const { toScrollWidgetId, setToScrollWidgetId } = useDashboard();
const graphRef = useRef<HTMLDivElement>(null);
const [minTimeScale, setMinTimeScale] = useState<number>();
const [maxTimeScale, setMaxTimeScale] = useState<number>();
const containerDimensions = useResizeObserver(graphRef);
const isDarkMode = useIsDarkMode();
const { timezone } = useTimezone();
useEffect(() => {
if (toScrollWidgetId === widget.id) {
graphRef.current?.scrollIntoView({
behavior: 'smooth',
block: 'center',
});
graphRef.current?.focus();
setToScrollWidgetId('');
}
}, [toScrollWidgetId, setToScrollWidgetId, widget.id]);
useEffect((): void => {
const { startTime, endTime } = getTimeRange(queryResponse);
setMinTimeScale(startTime);
setMaxTimeScale(endTime);
}, [queryResponse]);
const {
coordinates,
popoverPosition,
onClose,
menuItemsConfig,
clickHandlerWithContextMenu,
} = usePanelContextMenu({
widget,
queryResponse,
});
const config = useMemo(() => {
return prepareBarPanelConfig({
widget,
isDarkMode,
currentQuery: widget.query,
onClick: clickHandlerWithContextMenu,
onDragSelect,
apiResponse: queryResponse?.data?.payload as MetricRangePayloadProps,
timezone,
panelMode,
minTimeScale: minTimeScale,
maxTimeScale: maxTimeScale,
});
}, [
widget,
isDarkMode,
queryResponse?.data?.payload,
clickHandlerWithContextMenu,
onDragSelect,
minTimeScale,
maxTimeScale,
timezone,
panelMode,
]);
const chartData = useMemo(() => {
if (!queryResponse?.data?.payload) {
return [];
}
return prepareBarPanelData(queryResponse?.data?.payload);
}, [queryResponse?.data?.payload]);
const layoutChildren = useMemo(() => {
if (!isFullViewMode) {
return null;
}
return (
<ChartManager
config={config}
alignedData={chartData}
yAxisUnit={widget.yAxisUnit}
onCancel={onToggleModelHandler}
/>
);
}, [
isFullViewMode,
config,
chartData,
widget.yAxisUnit,
onToggleModelHandler,
]);
const onPlotDestroy = useCallback(() => {
uPlotRef.current = null;
}, []);
const onPlotRef = useCallback((plot: uPlot | null): void => {
uPlotRef.current = plot;
}, []);
return (
<div className="panel-container" ref={graphRef}>
{containerDimensions.width > 0 && containerDimensions.height > 0 && (
<BarChart
config={config}
legendConfig={{
position: widget?.legendPosition ?? LegendPosition.BOTTOM,
}}
plotRef={onPlotRef}
onDestroy={onPlotDestroy}
yAxisUnit={widget.yAxisUnit}
decimalPrecision={widget.decimalPrecision}
timezone={timezone.value}
data={chartData as uPlot.AlignedData}
width={containerDimensions.width}
height={containerDimensions.height}
layoutChildren={layoutChildren}
isStackedBarChart={widget.stackedBarChart ?? false}
>
<ContextMenu
coordinates={coordinates}
popoverPosition={popoverPosition}
title={menuItemsConfig.header as string}
items={menuItemsConfig.items}
onClose={onClose}
/>
</BarChart>
)}
</div>
);
}
export default BarPanel;

View File

@@ -0,0 +1,108 @@
import { Timezone } from 'components/CustomTimePicker/timezoneUtils';
import { PANEL_TYPES } from 'constants/queryBuilder';
import { getInitialStackedBands } from 'container/DashboardContainer/visualization/charts/utils/stackSeriesUtils';
import { getLegend } from 'lib/dashboard/getQueryResults';
import getLabelName from 'lib/getLabelName';
import { OnClickPluginOpts } from 'lib/uPlotLib/plugins/onClickPlugin';
import {
DrawStyle,
LineInterpolation,
LineStyle,
VisibilityMode,
} from 'lib/uPlotV2/config/types';
import { UPlotConfigBuilder } from 'lib/uPlotV2/config/UPlotConfigBuilder';
import { Widgets } from 'types/api/dashboard/getAll';
import { MetricRangePayloadProps } from 'types/api/metrics/getQueryRange';
import { Query } from 'types/api/queryBuilder/queryBuilderData';
import { QueryData } from 'types/api/widgets/getQuery';
import { AlignedData } from 'uplot';
import { PanelMode } from '../types';
import { fillMissingXAxisTimestamps, getXAxisTimestamps } from '../utils';
import { buildBaseConfig } from '../utils/baseConfigBuilder';
export function prepareBarPanelData(
apiResponse: MetricRangePayloadProps,
): AlignedData {
const seriesList = apiResponse?.data?.result || [];
const timestampArr = getXAxisTimestamps(seriesList);
const yAxisValuesArr = fillMissingXAxisTimestamps(timestampArr, seriesList);
return [timestampArr, ...yAxisValuesArr];
}
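prepareBarPanelData returns uPlot's column-oriented AlignedData layout. A sketch of the shape (values illustrative; how gaps are filled depends on fillMissingXAxisTimestamps, which is not part of this diff):

import { AlignedData } from 'uplot';

// One shared x-axis column, then one value column per series, index-aligned.
const aligned: AlignedData = [
  [1700000000, 1700000060, 1700000120], // union of timestamps across series
  [12, 18, 9], // series 1
  [5, null, 7], // series 2; a timestamp it lacked shows as a gap
];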
export function prepareBarPanelConfig({
widget,
isDarkMode,
currentQuery,
onClick,
onDragSelect,
apiResponse,
timezone,
panelMode,
minTimeScale,
maxTimeScale,
}: {
widget: Widgets;
isDarkMode: boolean;
currentQuery: Query;
onClick: OnClickPluginOpts['onClick'];
onDragSelect: (startTime: number, endTime: number) => void;
apiResponse: MetricRangePayloadProps;
timezone: Timezone;
panelMode: PanelMode;
minTimeScale?: number;
maxTimeScale?: number;
}): UPlotConfigBuilder {
const builder = buildBaseConfig({
widget,
isDarkMode,
onClick,
onDragSelect,
apiResponse,
timezone,
panelMode,
panelType: PANEL_TYPES.BAR,
minTimeScale,
maxTimeScale,
});
builder.setCursor({
focus: {
prox: 1e3,
},
});
if (widget.stackedBarChart) {
const seriesCount = (apiResponse?.data?.result?.length ?? 0) + 1; // +1 for 1-based uPlot series indices
builder.setBands(getInitialStackedBands(seriesCount));
}
const seriesList: QueryData[] = apiResponse?.data?.result || [];
seriesList.forEach((series) => {
const baseLabelName = getLabelName(
series.metric,
series.queryName || '', // query
series.legend || '',
);
const label = currentQuery
? getLegend(series, currentQuery, baseLabelName)
: baseLabelName;
builder.addSeries({
scaleKey: 'y',
drawStyle: DrawStyle.Bar,
panelType: PANEL_TYPES.BAR,
label: label,
colorMapping: widget.customLegendColors ?? {},
spanGaps: false,
lineStyle: LineStyle.Solid,
lineInterpolation: LineInterpolation.Spline,
showPoints: VisibilityMode.Never,
pointSize: 5,
isDarkMode,
});
});
return builder;
}

View File

@@ -83,7 +83,7 @@ export const prepareUPlotConfig = ({
drawStyle: DrawStyle.Line,
label: label,
colorMapping: widget.customLegendColors ?? {},
spanGaps: false,
spanGaps: true,
lineStyle: LineStyle.Solid,
lineInterpolation: LineInterpolation.Spline,
showPoints: VisibilityMode.Never,

View File

@@ -14,6 +14,11 @@ export interface GraphVisibilityState {
dataIndex: SeriesVisibilityItem[];
}
export interface SeriesVisibilityState {
labels: string[];
visibility: boolean[];
}
/**
* Context in which a panel is rendered. Used to vary behavior (e.g. persistence,
* interactions) per context.

View File

@@ -0,0 +1,271 @@
import { LOCALSTORAGE } from 'constants/localStorage';
import type { GraphVisibilityState } from '../../types';
import {
getStoredSeriesVisibility,
updateSeriesVisibilityToLocalStorage,
} from '../legendVisibilityUtils';
describe('legendVisibilityUtils', () => {
const storageKey = LOCALSTORAGE.GRAPH_VISIBILITY_STATES;
beforeEach(() => {
localStorage.clear();
jest.spyOn(window.localStorage.__proto__, 'getItem');
jest.spyOn(window.localStorage.__proto__, 'setItem');
});
afterEach(() => {
jest.restoreAllMocks();
});
describe('getStoredSeriesVisibility', () => {
it('returns null when there is no stored visibility state', () => {
const result = getStoredSeriesVisibility('widget-1');
expect(result).toBeNull();
expect(localStorage.getItem).toHaveBeenCalledWith(storageKey);
});
it('returns null when widget has no stored dataIndex', () => {
const stored: GraphVisibilityState[] = [
{
name: 'widget-1',
dataIndex: [],
},
];
localStorage.setItem(storageKey, JSON.stringify(stored));
const result = getStoredSeriesVisibility('widget-1');
expect(result).toBeNull();
});
it('returns visibility array by index when widget state exists', () => {
const stored: GraphVisibilityState[] = [
{
name: 'widget-1',
dataIndex: [
{ label: 'CPU', show: true },
{ label: 'Memory', show: false },
],
},
{
name: 'widget-2',
dataIndex: [{ label: 'Errors', show: true }],
},
];
localStorage.setItem(storageKey, JSON.stringify(stored));
const result = getStoredSeriesVisibility('widget-1');
expect(result).not.toBeNull();
expect(result).toEqual({
labels: ['CPU', 'Memory'],
visibility: [true, false],
});
});
it('returns visibility by index including duplicate labels', () => {
const stored: GraphVisibilityState[] = [
{
name: 'widget-1',
dataIndex: [
{ label: 'CPU', show: true },
{ label: 'CPU', show: false },
{ label: 'Memory', show: false },
],
},
];
localStorage.setItem(storageKey, JSON.stringify(stored));
const result = getStoredSeriesVisibility('widget-1');
expect(result).not.toBeNull();
expect(result).toEqual({
labels: ['CPU', 'CPU', 'Memory'],
visibility: [true, false, false],
});
});
it('returns null on malformed JSON in localStorage', () => {
localStorage.setItem(storageKey, '{invalid-json');
const result = getStoredSeriesVisibility('widget-1');
expect(result).toBeNull();
});
it('returns null when widget id is not found', () => {
const stored: GraphVisibilityState[] = [
{
name: 'another-widget',
dataIndex: [{ label: 'CPU', show: true }],
},
];
localStorage.setItem(storageKey, JSON.stringify(stored));
const result = getStoredSeriesVisibility('widget-1');
expect(result).toBeNull();
});
});
describe('updateSeriesVisibilityToLocalStorage', () => {
it('creates new visibility state when none exists', () => {
const seriesVisibility = [
{ label: 'CPU', show: true },
{ label: 'Memory', show: false },
];
updateSeriesVisibilityToLocalStorage('widget-1', seriesVisibility);
const stored = getStoredSeriesVisibility('widget-1');
expect(stored).not.toBeNull();
expect(stored).toEqual({
labels: ['CPU', 'Memory'],
visibility: [true, false],
});
});
it('adds a new widget entry when other widgets already exist', () => {
const existing: GraphVisibilityState[] = [
{
name: 'widget-existing',
dataIndex: [{ label: 'Errors', show: true }],
},
];
localStorage.setItem(storageKey, JSON.stringify(existing));
const newVisibility = [{ label: 'CPU', show: false }];
updateSeriesVisibilityToLocalStorage('widget-new', newVisibility);
const stored = getStoredSeriesVisibility('widget-new');
expect(stored).not.toBeNull();
expect(stored).toEqual({ labels: ['CPU'], visibility: [false] });
});
it('updates existing widget visibility when entry already exists', () => {
const initialVisibility: GraphVisibilityState[] = [
{
name: 'widget-1',
dataIndex: [
{ label: 'CPU', show: true },
{ label: 'Memory', show: true },
],
},
];
localStorage.setItem(storageKey, JSON.stringify(initialVisibility));
const updatedVisibility = [
{ label: 'CPU', show: false },
{ label: 'Memory', show: true },
];
updateSeriesVisibilityToLocalStorage('widget-1', updatedVisibility);
const stored = getStoredSeriesVisibility('widget-1');
expect(stored).not.toBeNull();
expect(stored).toEqual({
labels: ['CPU', 'Memory'],
visibility: [false, true],
});
});
it('silently handles malformed existing JSON without throwing', () => {
localStorage.setItem(storageKey, '{invalid-json');
expect(() =>
updateSeriesVisibilityToLocalStorage('widget-1', [
{ label: 'CPU', show: true },
]),
).not.toThrow();
});
it('when existing JSON is malformed, overwrites with valid data for the widget', () => {
localStorage.setItem(storageKey, '{invalid-json');
updateSeriesVisibilityToLocalStorage('widget-1', [
{ label: 'x-axis', show: true },
{ label: 'CPU', show: false },
]);
const stored = getStoredSeriesVisibility('widget-1');
expect(stored).not.toBeNull();
expect(stored).toEqual({
labels: ['x-axis', 'CPU'],
visibility: [true, false],
});
const expected = [
{
name: 'widget-1',
dataIndex: [
{ label: 'x-axis', show: true },
{ label: 'CPU', show: false },
],
},
];
expect(localStorage.setItem).toHaveBeenCalledWith(
storageKey,
JSON.stringify(expected),
);
});
it('preserves other widgets when updating one widget', () => {
const existing: GraphVisibilityState[] = [
{ name: 'widget-a', dataIndex: [{ label: 'A', show: true }] },
{ name: 'widget-b', dataIndex: [{ label: 'B', show: false }] },
];
localStorage.setItem(storageKey, JSON.stringify(existing));
updateSeriesVisibilityToLocalStorage('widget-b', [
{ label: 'B', show: true },
]);
expect(getStoredSeriesVisibility('widget-a')).toEqual({
labels: ['A'],
visibility: [true],
});
expect(getStoredSeriesVisibility('widget-b')).toEqual({
labels: ['B'],
visibility: [true],
});
});
it('calls setItem with storage key and stringified visibility states', () => {
updateSeriesVisibilityToLocalStorage('widget-1', [
{ label: 'CPU', show: true },
]);
expect(localStorage.setItem).toHaveBeenCalledTimes(1);
expect(localStorage.setItem).toHaveBeenCalledWith(
storageKey,
expect.any(String),
);
const [_, value] = (localStorage.setItem as jest.Mock).mock.calls[0];
expect((): void => JSON.parse(value)).not.toThrow();
expect(JSON.parse(value)).toEqual([
{ name: 'widget-1', dataIndex: [{ label: 'CPU', show: true }] },
]);
});
it('stores empty dataIndex when seriesVisibility is empty', () => {
updateSeriesVisibilityToLocalStorage('widget-1', []);
const raw = localStorage.getItem(storageKey);
expect(raw).not.toBeNull();
const parsed = JSON.parse(raw ?? '[]');
expect(parsed).toEqual([{ name: 'widget-1', dataIndex: [] }]);
expect(getStoredSeriesVisibility('widget-1')).toBeNull();
});
});
});

View File

@@ -88,7 +88,7 @@ export function buildBaseConfig({
max: undefined,
softMin: widget.softMin ?? undefined,
softMax: widget.softMax ?? undefined,
// thresholds,
thresholds: thresholdOptions,
logBase: widget.isLogScale ? 10 : undefined,
distribution: widget.isLogScale
? DistributionType.Logarithmic

View File

@@ -1,15 +1,20 @@
import { LOCALSTORAGE } from 'constants/localStorage';
import { GraphVisibilityState, SeriesVisibilityItem } from '../types';
import {
GraphVisibilityState,
SeriesVisibilityItem,
SeriesVisibilityState,
} from '../types';
/**
* Retrieves the visibility map for a specific widget from localStorage
* Retrieves the stored series visibility for a specific widget from localStorage by index.
* Index 0 is the x-axis (time); indices 1, 2, ... are data series (same order as uPlot plot.series).
* @param widgetId - The unique identifier of the widget
* @returns A Map of series labels to their visibility state, or null if not found
* @returns visibility[i] = show state for series at index i, or null if not found
*/
export function getStoredSeriesVisibility(
widgetId: string,
): Map<string, boolean> | null {
): SeriesVisibilityState | null {
try {
const storedData = localStorage.getItem(LOCALSTORAGE.GRAPH_VISIBILITY_STATES);
@@ -24,8 +29,15 @@ export function getStoredSeriesVisibility(
return null;
}
return new Map(widgetState.dataIndex.map((item) => [item.label, item.show]));
} catch {
return {
labels: widgetState.dataIndex.map((item) => item.label),
visibility: widgetState.dataIndex.map((item) => item.show),
};
} catch (error) {
if (error instanceof SyntaxError) {
// If the stored data is malformed, remove it
localStorage.removeItem(LOCALSTORAGE.GRAPH_VISIBILITY_STATES);
}
// Silently handle parsing errors - fall back to default visibility
return null;
}
@@ -35,40 +47,31 @@ export function updateSeriesVisibilityToLocalStorage(
widgetId: string,
seriesVisibility: SeriesVisibilityItem[],
): void {
let visibilityStates: GraphVisibilityState[] = [];
try {
const storedData = localStorage.getItem(LOCALSTORAGE.GRAPH_VISIBILITY_STATES);
let visibilityStates: GraphVisibilityState[];
if (!storedData) {
visibilityStates = [
{
name: widgetId,
dataIndex: seriesVisibility,
},
];
} else {
visibilityStates = JSON.parse(storedData);
visibilityStates = JSON.parse(storedData || '[]');
} catch (error) {
if (error instanceof SyntaxError) {
visibilityStates = [];
}
const widgetState = visibilityStates.find((state) => state.name === widgetId);
if (!widgetState) {
visibilityStates = [
...visibilityStates,
{
name: widgetId,
dataIndex: seriesVisibility,
},
];
} else {
widgetState.dataIndex = seriesVisibility;
}
localStorage.setItem(
LOCALSTORAGE.GRAPH_VISIBILITY_STATES,
JSON.stringify(visibilityStates),
);
} catch {
// Silently handle parsing errors - fall back to default visibility
}
const widgetState = visibilityStates.find((state) => state.name === widgetId);
if (widgetState) {
widgetState.dataIndex = seriesVisibility;
} else {
visibilityStates = [
...visibilityStates,
{
name: widgetId,
dataIndex: seriesVisibility,
},
];
}
localStorage.setItem(
LOCALSTORAGE.GRAPH_VISIBILITY_STATES,
JSON.stringify(visibilityStates),
);
}
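
Taken together, the rewritten helpers round-trip an index-ordered visibility array through the single GRAPH_VISIBILITY_STATES key. A minimal usage sketch, using only the exports shown in this diff (the widget id and labels are illustrative):

import {
	getStoredSeriesVisibility,
	updateSeriesVisibilityToLocalStorage,
} from 'container/DashboardContainer/visualization/panels/utils/legendVisibilityUtils';

// Persist the current legend toggles; index 0 is the x-axis (time) slot.
updateSeriesVisibilityToLocalStorage('widget-1', [
	{ label: 'x-axis', show: true },
	{ label: 'CPU', show: false },
	{ label: 'Memory', show: true },
]);

// Later (e.g. on chart re-mount) restore by index instead of by label.
const stored = getStoredSeriesVisibility('widget-1');
// stored => { labels: ['x-axis', 'CPU', 'Memory'], visibility: [true, false, true] }
const isCpuVisible = stored?.visibility[1] ?? true; // default to visible when nothing is stored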

View File

@@ -6,7 +6,7 @@ import logEvent from 'api/common/logEvent';
import PromQLIcon from 'assets/Dashboard/PromQl';
import { QueryBuilderV2 } from 'components/QueryBuilderV2/QueryBuilderV2';
import { ALERTS_DATA_SOURCE_MAP } from 'constants/alerts';
import { ENTITY_VERSION_V4 } from 'constants/app';
import { ENTITY_VERSION_V5 } from 'constants/app';
import { PANEL_TYPES } from 'constants/queryBuilder';
import { QBShortcuts } from 'constants/shortcuts/QBShortcuts';
import RunQueryBtn from 'container/QueryBuilder/components/RunQueryBtn/RunQueryBtn';
@@ -63,12 +63,8 @@ function QuerySection({
signalSource: signalSource === 'meter' ? 'meter' : '',
}}
showTraceOperator={alertType === AlertTypes.TRACES_BASED_ALERT}
showFunctions={
(alertType === AlertTypes.METRICS_BASED_ALERT &&
alertDef.version === ENTITY_VERSION_V4) ||
alertType === AlertTypes.LOGS_BASED_ALERT
}
version={alertDef.version || 'v3'}
showFunctions
version={ENTITY_VERSION_V5}
onSignalSourceChange={handleSignalSourceChange}
signalSourceChangeEnabled
/>

View File

@@ -35,10 +35,10 @@
box-shadow: 4px 10px 16px 2px rgba(0, 0, 0, 0.2);
backdrop-filter: blur(20px);
padding: 0px;
.group-by-clause {
.more-filter-actions {
display: flex;
align-items: center;
gap: 4px;
gap: 8px;
color: var(--bg-vanilla-400);
font-family: Inter;
font-size: 14px;
@@ -53,7 +53,7 @@
}
}
.group-by-clause:hover {
.more-filter-actions:hover {
background-color: unset !important;
}
}
@@ -65,7 +65,7 @@
border: 1px solid var(--bg-vanilla-400);
background: var(--bg-vanilla-100) !important;
.group-by-clause {
.more-filter-actions {
color: var(--bg-ink-400);
}
}

View File

@@ -1,4 +1,5 @@
/* eslint-disable sonarjs/no-duplicate-string */
/* eslint-disable sonarjs/cognitive-complexity */
import React, { useCallback, useMemo, useState } from 'react';
import { useLocation } from 'react-router-dom';
import { Color } from '@signozhq/design-tokens';
@@ -16,7 +17,12 @@ import { MetricsType } from 'container/MetricsApplication/constant';
import { useGetSearchQueryParam } from 'hooks/queryBuilder/useGetSearchQueryParam';
import { useQueryBuilder } from 'hooks/queryBuilder/useQueryBuilder';
import { ICurrentQueryData } from 'hooks/useHandleExplorerTabChange';
import { ArrowDownToDot, ArrowUpFromDot, Ellipsis } from 'lucide-react';
import {
ArrowDownToDot,
ArrowUpFromDot,
Ellipsis,
RefreshCw,
} from 'lucide-react';
import { ExplorerViews } from 'pages/LogsExplorer/utils';
import { useTimezone } from 'providers/Timezone';
import {
@@ -205,6 +211,70 @@ export default function TableViewActions(
viewName,
]);
const handleReplaceFilter = useCallback((): void => {
if (!stagedQuery) {
return;
}
const normalizedDataType: DataTypes | undefined =
dataType && Object.values(DataTypes).includes(dataType as DataTypes)
? (dataType as DataTypes)
: undefined;
const updatedQuery = updateQueriesData(
stagedQuery,
'queryData',
(item, index) => {
// Only replace filters for index 0
if (index === 0) {
const newFilterItem: BaseAutocompleteData = {
key: fieldFilterKey,
type: fieldType || '',
dataType: normalizedDataType,
};
// Create a new filter items array with a single IN filter
const newFilters = {
items: [
{
id: '',
key: newFilterItem,
op: OPERATORS.IN,
value: [parseFieldValue(fieldData.value)],
},
],
op: 'AND',
};
// Clear the expression and update filters
return {
...item,
filters: newFilters,
filter: { expression: '' },
};
}
return item;
},
);
const queryData: ICurrentQueryData = {
name: viewName,
id: updatedQuery.id,
query: updatedQuery,
};
handleChangeSelectedView?.(ExplorerViews.LIST, queryData);
}, [
stagedQuery,
updateQueriesData,
fieldFilterKey,
fieldType,
dataType,
fieldData,
handleChangeSelectedView,
viewName,
]);
// Memoize textToCopy computation
const textToCopy = useMemo(() => {
let text = fieldData.value;
@@ -327,13 +397,21 @@ export default function TableViewActions(
content={
<div>
<Button
className="group-by-clause"
className="more-filter-actions"
type="text"
icon={<GroupByIcon />}
onClick={handleGroupByAttribute}
>
Group By Attribute
</Button>
<Button
className="more-filter-actions"
type="text"
icon={<RefreshCw size={14} />}
onClick={handleReplaceFilter}
>
Replace filters with this value
</Button>
</div>
}
rootClassName="table-view-actions-content"
@@ -405,13 +483,21 @@ export default function TableViewActions(
content={
<div>
<Button
className="group-by-clause"
className="more-filter-actions"
type="text"
icon={<GroupByIcon />}
onClick={handleGroupByAttribute}
>
Group By Attribute
</Button>
<Button
className="more-filter-actions"
type="text"
icon={<RefreshCw size={14} />}
onClick={handleReplaceFilter}
>
Replace filters with this value
</Button>
</div>
}
rootClassName="table-view-actions-content"
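
The interesting part of handleReplaceFilter is the object it swaps onto the first query: everything previously filtered on is dropped in favour of one IN clause for the clicked field, and the raw expression is cleared so it cannot contradict the structured filter. A minimal sketch of that shape, with a hypothetical field name and value (the component itself derives them from fieldFilterKey and fieldData):

type SketchFilters = {
	items: Array<{
		id: string;
		key: { key: string; type: string; dataType?: string };
		op: string;
		value: string[];
	}>;
	op: 'AND';
};

const newFilters: SketchFilters = {
	items: [
		{
			id: '',
			key: { key: 'service.name', type: '', dataType: 'string' }, // hypothetical field
			op: 'IN', // the component uses OPERATORS.IN
			value: ['frontend'], // hypothetical clicked value
		},
	],
	op: 'AND',
};

// Applied to queryData index 0 only; other entries pass through unchanged.
const patchFirstQuery = <T extends object>(item: T, index: number): T =>
	index === 0 ? { ...item, filters: newFilters, filter: { expression: '' } } : item;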

View File

@@ -1,5 +1,6 @@
import { PANEL_TYPES } from 'constants/queryBuilder';
import TimeSeriesPanel from '../DashboardContainer/visualization/panels/TimeSeriesPanel/TimeSeriesPanel';
import HistogramPanelWrapper from './HistogramPanelWrapper';
import ListPanelWrapper from './ListPanelWrapper';
import PiePanelWrapper from './PiePanelWrapper';
@@ -8,7 +9,7 @@ import UplotPanelWrapper from './UplotPanelWrapper';
import ValuePanelWrapper from './ValuePanelWrapper';
export const PanelTypeVsPanelWrapper = {
[PANEL_TYPES.TIME_SERIES]: UplotPanelWrapper,
[PANEL_TYPES.TIME_SERIES]: TimeSeriesPanel,
[PANEL_TYPES.TABLE]: TablePanelWrapper,
[PANEL_TYPES.LIST]: ListPanelWrapper,
[PANEL_TYPES.VALUE]: ValuePanelWrapper,

View File

@@ -0,0 +1,61 @@
import { useCallback, useEffect, useRef, useState } from 'react';
const DEFAULT_COPIED_RESET_MS = 2000;
export interface UseCopyToClipboardOptions {
/** How long (ms) to keep "copied" state before resetting. Default 2000. */
copiedResetMs?: number;
}
export type ID = number | string | null;
export interface UseCopyToClipboardReturn {
/** Copy text to clipboard. Pass an optional id to track which item was copied (e.g. seriesIndex). */
copyToClipboard: (text: string, id?: ID) => void;
/** True when something was just copied and still within the reset threshold. */
isCopied: boolean;
/** The id passed to the last successful copy, or null after reset. Use to show "copied" state for a specific item (e.g. copiedId === item.seriesIndex). */
id: ID;
}
export function useCopyToClipboard(
options: UseCopyToClipboardOptions = {},
): UseCopyToClipboardReturn {
const { copiedResetMs = DEFAULT_COPIED_RESET_MS } = options;
const [state, setState] = useState<{ isCopied: boolean; id: ID }>({
isCopied: false,
id: null,
});
const timeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null);
useEffect(() => {
return (): void => {
if (timeoutRef.current) {
clearTimeout(timeoutRef.current);
timeoutRef.current = null;
}
};
}, []);
const copyToClipboard = useCallback(
(text: string, id?: ID): void => {
navigator.clipboard.writeText(text).then(() => {
if (timeoutRef.current) {
clearTimeout(timeoutRef.current);
}
setState({ isCopied: true, id: id ?? null });
timeoutRef.current = setTimeout(() => {
setState({ isCopied: false, id: null });
timeoutRef.current = null;
}, copiedResetMs);
});
},
[copiedResetMs],
);
return {
copyToClipboard,
isCopied: state.isCopied,
id: state.id,
};
}
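
A minimal consumer sketch for the hook, assuming a list where only the most recently copied row should show its copied state (the 1500 ms override is just to demonstrate the option):

import { useCopyToClipboard } from 'hooks/useCopyToClipboard';

function CopyableList({ items }: { items: string[] }): JSX.Element {
	const { copyToClipboard, isCopied, id: copiedId } = useCopyToClipboard({
		copiedResetMs: 1500, // optional; defaults to 2000
	});

	return (
		<ul>
			{items.map((label, index) => (
				<li key={label}>
					{label}
					<button type="button" onClick={(): void => copyToClipboard(label, index)}>
						{isCopied && copiedId === index ? 'Copied' : 'Copy'}
					</button>
				</li>
			))}
		</ul>
	);
}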

View File

@@ -0,0 +1,445 @@
import { IBuilderQuery, Query } from 'types/api/queryBuilder/queryBuilderData';
import { EQueryType } from 'types/common/dashboard';
import {
buildVariableReferencePattern,
extractQueryTextStrings,
getVariableReferencesInQuery,
textContainsVariableReference,
} from './variableReference';
describe('buildVariableReferencePattern', () => {
const varName = 'deployment_environment';
it.each([
['{{.deployment_environment}}', '{{.var}} syntax'],
['{{ .deployment_environment }}', '{{.var}} with spaces'],
['{{deployment_environment}}', '{{var}} syntax'],
['{{ deployment_environment }}', '{{var}} with spaces'],
['$deployment_environment', '$var syntax'],
['[[deployment_environment]]', '[[var]] syntax'],
['[[ deployment_environment ]]', '[[var]] with spaces'],
])('matches %s (%s)', (text) => {
expect(buildVariableReferencePattern(varName).test(text)).toBe(true);
});
it('does not match partial variable names', () => {
const pattern = buildVariableReferencePattern('env');
// $env should match at word boundary, but $environment should not match $env
expect(pattern.test('$environment')).toBe(false);
});
it('matches $var at word boundary within larger text', () => {
const pattern = buildVariableReferencePattern('env');
expect(pattern.test('SELECT * WHERE x = $env')).toBe(true);
expect(pattern.test('$env AND y = 1')).toBe(true);
});
});
describe('textContainsVariableReference', () => {
describe('guard clauses', () => {
it('returns false for empty text', () => {
expect(textContainsVariableReference('', 'var')).toBe(false);
});
it('returns false for empty variable name', () => {
expect(textContainsVariableReference('some text', '')).toBe(false);
});
});
describe('all syntax formats', () => {
const varName = 'service_name';
it('detects {{.var}} format', () => {
const query = "SELECT * FROM table WHERE service = '{{.service_name}}'";
expect(textContainsVariableReference(query, varName)).toBe(true);
});
it('detects {{var}} format', () => {
const query = "SELECT * FROM table WHERE service = '{{service_name}}'";
expect(textContainsVariableReference(query, varName)).toBe(true);
});
it('detects $var format', () => {
const query = "SELECT * FROM table WHERE service = '$service_name'";
expect(textContainsVariableReference(query, varName)).toBe(true);
});
it('detects [[var]] format', () => {
const query = "SELECT * FROM table WHERE service = '[[service_name]]'";
expect(textContainsVariableReference(query, varName)).toBe(true);
});
});
describe('embedded in larger text', () => {
it('finds variable in a multi-line query', () => {
const query = `SELECT JSONExtractString(labels, 'k8s_node_name') AS k8s_node_name
FROM signoz_metrics.distributed_time_series_v4_1day
WHERE metric_name = 'k8s_node_cpu_time' AND JSONExtractString(labels, 'k8s_cluster_name') = {{.k8s_cluster_name}}
GROUP BY k8s_node_name`;
expect(textContainsVariableReference(query, 'k8s_cluster_name')).toBe(true);
expect(textContainsVariableReference(query, 'k8s_node_name')).toBe(false); // plain text, not a variable reference
});
});
describe('no false positives', () => {
it('does not match substring of a longer variable name', () => {
expect(
textContainsVariableReference('$service_name_v2', 'service_name'),
).toBe(false);
});
it('does not match plain text that happens to contain the name', () => {
expect(
textContainsVariableReference(
'the service_name column is important',
'service_name',
),
).toBe(false);
});
});
});
// ---- Query text extraction & variable reference detection ----
const baseQuery: Query = {
id: 'test-query',
queryType: EQueryType.QUERY_BUILDER,
promql: [],
builder: { queryData: [], queryFormulas: [], queryTraceOperator: [] },
clickhouse_sql: [],
};
describe('extractQueryTextStrings', () => {
it('returns empty array for query builder with no data', () => {
expect(extractQueryTextStrings(baseQuery)).toEqual([]);
});
it('extracts string values from query builder filter items', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.QUERY_BUILDER,
builder: {
queryData: [
({
filters: {
items: [
{ id: '1', op: '=', value: ['$service_name', 'hardcoded'] },
{ id: '2', op: '=', value: '$env' },
],
op: 'AND',
},
} as unknown) as IBuilderQuery,
],
queryFormulas: [],
queryTraceOperator: [],
},
};
const texts = extractQueryTextStrings(query);
expect(texts).toEqual(['$service_name', 'hardcoded', '$env']);
});
it('extracts filter expression from query builder', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.QUERY_BUILDER,
builder: {
queryData: [
({
filters: { items: [], op: 'AND' },
filter: { expression: 'env = $deployment_environment' },
} as unknown) as IBuilderQuery,
],
queryFormulas: [],
queryTraceOperator: [],
},
};
const texts = extractQueryTextStrings(query);
expect(texts).toEqual(['env = $deployment_environment']);
});
it('skips non-string filter values', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.QUERY_BUILDER,
builder: {
queryData: [
({
filters: {
items: [{ id: '1', op: '=', value: [42, true] }],
op: 'AND',
},
} as unknown) as IBuilderQuery,
],
queryFormulas: [],
queryTraceOperator: [],
},
};
expect(extractQueryTextStrings(query)).toEqual([]);
});
it('extracts promql query strings', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.PROM,
promql: [
{ name: 'A', query: 'up{env="$env"}', legend: '', disabled: false },
{ name: 'B', query: 'cpu{ns="$namespace"}', legend: '', disabled: false },
],
};
expect(extractQueryTextStrings(query)).toEqual([
'up{env="$env"}',
'cpu{ns="$namespace"}',
]);
});
it('extracts clickhouse sql query strings', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.CLICKHOUSE,
clickhouse_sql: [
{
name: 'A',
query: 'SELECT * WHERE env = {{.env}}',
legend: '',
disabled: false,
},
],
};
expect(extractQueryTextStrings(query)).toEqual([
'SELECT * WHERE env = {{.env}}',
]);
});
it('accumulates texts across multiple queryData entries', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.QUERY_BUILDER,
builder: {
queryData: [
({
filters: {
items: [{ id: '1', op: '=', value: '$env' }],
op: 'AND',
},
} as unknown) as IBuilderQuery,
({
filters: {
items: [{ id: '2', op: '=', value: ['$service_name'] }],
op: 'AND',
},
} as unknown) as IBuilderQuery,
],
queryFormulas: [],
queryTraceOperator: [],
},
};
expect(extractQueryTextStrings(query)).toEqual(['$env', '$service_name']);
});
it('collects both filter items and filter expression from the same queryData', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.QUERY_BUILDER,
builder: {
queryData: [
({
filters: {
items: [{ id: '1', op: '=', value: '$service_name' }],
op: 'AND',
},
filter: { expression: 'env = $deployment_environment' },
} as unknown) as IBuilderQuery,
],
queryFormulas: [],
queryTraceOperator: [],
},
};
expect(extractQueryTextStrings(query)).toEqual([
'$service_name',
'env = $deployment_environment',
]);
});
it('skips promql entries with empty query strings', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.PROM,
promql: [
{ name: 'A', query: '', legend: '', disabled: false },
{ name: 'B', query: 'up{env="$env"}', legend: '', disabled: false },
],
};
expect(extractQueryTextStrings(query)).toEqual(['up{env="$env"}']);
});
it('skips clickhouse entries with empty query strings', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.CLICKHOUSE,
clickhouse_sql: [
{ name: 'A', query: '', legend: '', disabled: false },
{
name: 'B',
query: 'SELECT * WHERE x = {{.env}}',
legend: '',
disabled: false,
},
],
};
expect(extractQueryTextStrings(query)).toEqual([
'SELECT * WHERE x = {{.env}}',
]);
});
it('returns empty array for unknown query type', () => {
const query = {
...baseQuery,
queryType: ('unknown' as unknown) as EQueryType,
};
expect(extractQueryTextStrings(query)).toEqual([]);
});
});
describe('getVariableReferencesInQuery', () => {
const variableNames = [
'deployment_environment',
'service_name',
'endpoint',
'unused_var',
];
it('returns empty array when query has no text', () => {
expect(getVariableReferencesInQuery(baseQuery, variableNames)).toEqual([]);
});
it('detects variables referenced in query builder filters', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.QUERY_BUILDER,
builder: {
queryData: [
({
filters: {
items: [
{ id: '1', op: '=', value: '$service_name' },
{ id: '2', op: 'IN', value: ['$deployment_environment'] },
],
op: 'AND',
},
} as unknown) as IBuilderQuery,
],
queryFormulas: [],
queryTraceOperator: [],
},
};
const result = getVariableReferencesInQuery(query, variableNames);
expect(result).toEqual(['deployment_environment', 'service_name']);
});
it('detects variables in promql queries', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.PROM,
promql: [
{
name: 'A',
query:
'http_requests{env="{{.deployment_environment}}", endpoint="$endpoint"}',
legend: '',
disabled: false,
},
],
};
const result = getVariableReferencesInQuery(query, variableNames);
expect(result).toEqual(['deployment_environment', 'endpoint']);
});
it('detects variables in clickhouse sql queries', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.CLICKHOUSE,
clickhouse_sql: [
{
name: 'A',
query: 'SELECT * FROM table WHERE service = [[service_name]]',
legend: '',
disabled: false,
},
],
};
const result = getVariableReferencesInQuery(query, variableNames);
expect(result).toEqual(['service_name']);
});
it('detects variables spread across multiple queryData entries', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.QUERY_BUILDER,
builder: {
queryData: [
({
filters: {
items: [{ id: '1', op: '=', value: '$service_name' }],
op: 'AND',
},
} as unknown) as IBuilderQuery,
({
filter: { expression: 'env = $deployment_environment' },
} as unknown) as IBuilderQuery,
],
queryFormulas: [],
queryTraceOperator: [],
},
};
const result = getVariableReferencesInQuery(query, variableNames);
expect(result).toEqual(['deployment_environment', 'service_name']);
});
it('returns empty array when no variables are referenced', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.PROM,
promql: [
{
name: 'A',
query: 'up{job="api"}',
legend: '',
disabled: false,
},
],
};
expect(getVariableReferencesInQuery(query, variableNames)).toEqual([]);
});
it('returns empty array when variableNames list is empty', () => {
const query: Query = {
...baseQuery,
queryType: EQueryType.PROM,
promql: [
{
name: 'A',
query: 'up{env="$deployment_environment"}',
legend: '',
disabled: false,
},
],
};
expect(getVariableReferencesInQuery(query, [])).toEqual([]);
});
});

View File

@@ -0,0 +1,136 @@
import { isArray } from 'lodash-es';
import { Query, TagFilterItem } from 'types/api/queryBuilder/queryBuilderData';
import { EQueryType } from 'types/common/dashboard';
/**
* Builds a RegExp that matches any recognized variable reference syntax:
* {{.variableName}} — dot prefix, optional whitespace
* {{variableName}} — no dot, optional whitespace
* $variableName — dollar prefix, word-boundary terminated
* [[variableName]] — square brackets, optional whitespace
*/
export function buildVariableReferencePattern(variableName: string): RegExp {
const patterns = [
`\\{\\{\\s*?\\.${variableName}\\s*?\\}\\}`,
`\\{\\{\\s*${variableName}\\s*\\}\\}`,
`\\$${variableName}\\b`,
`\\[\\[\\s*${variableName}\\s*\\]\\]`,
];
return new RegExp(patterns.join('|'));
}
/**
* Returns true if `text` contains a reference to `variableName` in any of the
* recognized variable syntaxes.
*/
export function textContainsVariableReference(
text: string,
variableName: string,
): boolean {
if (!text || !variableName) {
return false;
}
return buildVariableReferencePattern(variableName).test(text);
}
/**
 * Extracts candidate text strings from a QUERY_BUILDER query:
 * filter items' string values plus each queryData entry's filter.expression.
 */
function extractQueryBuilderTexts(query: Query): string[] {
const texts: string[] = [];
const queryDataList = query.builder?.queryData;
if (isArray(queryDataList)) {
queryDataList.forEach((queryData) => {
// Collect string values from filter items
queryData.filters?.items?.forEach((filter: TagFilterItem) => {
if (isArray(filter.value)) {
filter.value.forEach((v) => {
if (typeof v === 'string') {
texts.push(v);
}
});
} else if (typeof filter.value === 'string') {
texts.push(filter.value);
}
});
// Collect filter expression
if (queryData.filter?.expression) {
texts.push(queryData.filter.expression);
}
});
}
return texts;
}
function extractPromQLTexts(query: Query): string[] {
const texts: string[] = [];
if (isArray(query.promql)) {
query.promql.forEach((promqlQuery) => {
if (promqlQuery.query) {
texts.push(promqlQuery.query);
}
});
}
return texts;
}
function extractClickhouseSQLTexts(query: Query): string[] {
const texts: string[] = [];
if (isArray(query.clickhouse_sql)) {
query.clickhouse_sql.forEach((clickhouseQuery) => {
if (clickhouseQuery.query) {
texts.push(clickhouseQuery.query);
}
});
}
return texts;
}
/**
* Extracts all text strings from a widget Query that could contain variable
* references. Covers:
* - QUERY_BUILDER: filter items' string values + filter.expression
* - PROM: each promql[].query
* - CLICKHOUSE: each clickhouse_sql[].query
*/
export function extractQueryTextStrings(query: Query): string[] {
if (query.queryType === EQueryType.QUERY_BUILDER) {
return extractQueryBuilderTexts(query);
}
if (query.queryType === EQueryType.PROM) {
return extractPromQLTexts(query);
}
if (query.queryType === EQueryType.CLICKHOUSE) {
return extractClickhouseSQLTexts(query);
}
return [];
}
/**
* Given a widget Query and an array of variable names, returns the subset of
* variable names that are referenced in the query text.
*
* This performs text-based detection only. Structural checks (like
* filter.key.key matching a variable attribute) are NOT included.
*/
export function getVariableReferencesInQuery(
query: Query,
variableNames: string[],
): string[] {
const texts = extractQueryTextStrings(query);
if (texts.length === 0) {
return [];
}
return variableNames.filter((name) =>
texts.some((text) => textContainsVariableReference(text, name)),
);
}
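
A usage sketch tying the exports together; the Query literal mirrors the baseQuery shape used in the tests above, and the variable names are illustrative:

import { Query } from 'types/api/queryBuilder/queryBuilderData';
import { EQueryType } from 'types/common/dashboard';
import {
	buildVariableReferencePattern,
	getVariableReferencesInQuery,
	textContainsVariableReference,
} from './variableReference';

// Pattern-level checks.
buildVariableReferencePattern('env').test('up{env="$env"}'); // true ($env at a word boundary)
textContainsVariableReference('rate(http_requests[$__interval])', 'env'); // false

// Query-level check: which dashboard variables does this PromQL panel reference?
const query: Query = {
	id: 'example',
	queryType: EQueryType.PROM,
	promql: [{ name: 'A', query: 'up{env="$env"}', legend: '', disabled: false }],
	builder: { queryData: [], queryFormulas: [], queryTraceOperator: [] },
	clickhouse_sql: [],
};
getVariableReferencesInQuery(query, ['env', 'service_name']); // ['env']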

View File

@@ -128,6 +128,15 @@
opacity: 1;
}
.legend-item-label-trigger {
display: flex;
align-items: center;
gap: 6px;
flex: 1;
min-width: 0;
cursor: pointer;
}
.legend-marker {
border-width: 2px;
border-radius: 50%;
@@ -157,10 +166,34 @@
text-overflow: ellipsis;
white-space: nowrap;
min-width: 0;
user-select: none;
}
.legend-copy-button {
display: none;
align-items: center;
justify-content: center;
flex-shrink: 0;
padding: 2px;
margin: 0;
border: none;
color: var(--bg-vanilla-400);
cursor: pointer;
border-radius: 4px;
opacity: 1;
transition: opacity 0.15s ease, color 0.15s ease;
&:hover {
color: var(--bg-vanilla-100);
}
}
&:hover {
background: rgba(255, 255, 255, 0.05);
.legend-copy-button {
display: flex;
opacity: 1;
}
}
}
@@ -172,4 +205,17 @@
}
}
}
.legend-item {
&:hover {
background: rgba(0, 0, 0, 0.05);
}
.legend-copy-button {
color: var(--bg-ink-400);
&:hover {
color: var(--bg-ink-500);
}
}
}
}

View File

@@ -2,11 +2,13 @@ import { useCallback, useMemo, useRef, useState } from 'react';
import { VirtuosoGrid } from 'react-virtuoso';
import { Input, Tooltip as AntdTooltip } from 'antd';
import cx from 'classnames';
import { useCopyToClipboard } from 'hooks/useCopyToClipboard';
import { LegendItem } from 'lib/uPlotV2/config/types';
import useLegendsSync from 'lib/uPlotV2/hooks/useLegendsSync';
import { Check, Copy } from 'lucide-react';
import { useLegendActions } from '../../hooks/useLegendActions';
import { LegendPosition, LegendProps } from '../types';
import { useLegendActions } from './useLegendActions';
import './Legend.styles.scss';
@@ -32,6 +34,7 @@ export default function Legend({
});
const legendContainerRef = useRef<HTMLDivElement | null>(null);
const [legendSearchQuery, setLegendSearchQuery] = useState('');
const { copyToClipboard, id: copiedId } = useCopyToClipboard();
const legendItems = useMemo(() => Object.values(legendItemsMap), [
legendItemsMap,
@@ -59,26 +62,53 @@ export default function Legend({
);
}, [position, legendSearchQuery, legendItems]);
const handleCopyLegendItem = useCallback(
(e: React.MouseEvent, seriesIndex: number, label: string): void => {
e.stopPropagation();
copyToClipboard(label, seriesIndex);
},
[copyToClipboard],
);
const renderLegendItem = useCallback(
(item: LegendItem): JSX.Element => (
<AntdTooltip key={item.seriesIndex} title={item.label}>
(item: LegendItem): JSX.Element => {
const isCopied = copiedId === item.seriesIndex;
return (
<div
key={item.seriesIndex}
data-legend-item-id={item.seriesIndex}
className={cx('legend-item', `legend-item-${position.toLowerCase()}`, {
'legend-item-off': !item.show,
'legend-item-focused': focusedSeriesIndex === item.seriesIndex,
})}
>
<div
className="legend-marker"
style={{ borderColor: String(item.color) }}
data-is-legend-marker={true}
/>
<span className="legend-label">{item.label}</span>
<AntdTooltip title={item.label}>
<div className="legend-item-label-trigger">
<div
className="legend-marker"
style={{ borderColor: String(item.color) }}
data-is-legend-marker={true}
/>
<span className="legend-label">{item.label}</span>
</div>
</AntdTooltip>
<AntdTooltip title={isCopied ? 'Copied' : 'Copy'}>
<button
type="button"
className="legend-copy-button"
onClick={(e): void =>
handleCopyLegendItem(e, item.seriesIndex, item.label ?? '')
}
aria-label={`Copy ${item.label}`}
data-testid="legend-copy"
>
{isCopied ? <Check size={12} /> : <Copy size={12} />}
</button>
</AntdTooltip>
</div>
</AntdTooltip>
),
[focusedSeriesIndex, position],
);
},
[copiedId, focusedSeriesIndex, handleCopyLegendItem, position],
);
const isEmptyState = useMemo(() => {
@@ -106,6 +136,7 @@ export default function Legend({
placeholder="Search..."
value={legendSearchQuery}
onChange={(e): void => setLegendSearchQuery(e.target.value)}
data-testid="legend-search-input"
className="legend-search-input"
/>
</div>

View File

@@ -0,0 +1,31 @@
import { useMemo } from 'react';
import { BarTooltipProps, TooltipContentItem } from '../types';
import Tooltip from './Tooltip';
import { buildTooltipContent } from './utils';
export default function BarChartTooltip(props: BarTooltipProps): JSX.Element {
const content = useMemo(
(): TooltipContentItem[] =>
buildTooltipContent({
data: props.uPlotInstance.data,
series: props.uPlotInstance.series,
dataIndexes: props.dataIndexes,
activeSeriesIndex: props.seriesIndex,
uPlotInstance: props.uPlotInstance,
yAxisUnit: props.yAxisUnit ?? '',
decimalPrecision: props.decimalPrecision,
isStackedBarChart: props.isStackedBarChart,
}),
[
props.uPlotInstance,
props.seriesIndex,
props.dataIndexes,
props.yAxisUnit,
props.decimalPrecision,
props.isStackedBarChart,
],
);
return <Tooltip {...props} content={content} />;
}

View File

@@ -25,16 +25,28 @@ export function getTooltipBaseValue({
index,
dataIndex,
isStackedBarChart,
series,
}: {
data: AlignedData;
index: number;
dataIndex: number;
isStackedBarChart?: boolean;
series?: Series[];
}): number | null {
let baseValue = data[index][dataIndex] ?? null;
if (isStackedBarChart && index + 1 < data.length && baseValue !== null) {
const nextValue = data[index + 1][dataIndex] ?? null;
if (nextValue !== null) {
// Top-down stacking (first series at top): raw = stacked[i] - stacked[nextVisible].
// When series are hidden, we must use the next *visible* series, not index+1,
// since hidden series keep raw values and would produce negative/wrong results.
if (isStackedBarChart && baseValue !== null && series) {
let nextVisibleIdx = -1;
for (let j = index + 1; j < series.length; j++) {
if (series[j]?.show) {
nextVisibleIdx = j;
break;
}
}
if (nextVisibleIdx >= 1) {
const nextValue = data[nextVisibleIdx][dataIndex] ?? 0;
baseValue = baseValue - nextValue;
}
}
@@ -80,6 +92,7 @@ export function buildTooltipContent({
index,
dataIndex,
isStackedBarChart,
series,
});
const isActive = index === activeSeriesIndex;
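
A worked example of the hidden-series fix, assuming getTooltipBaseValue is exported from the same tooltip utils module as buildTooltipContent. With top-down stacking, stacked[1] = 10 includes everything below it; series 2 is hidden (and keeps its raw value 7), so the next visible series is index 3 with stacked value 3:

import type { AlignedData, Series } from 'uplot';
import { getTooltipBaseValue } from './utils'; // assumed export location

const data = ([[0], [10], [7], [3]] as unknown) as AlignedData;
const series = ([
	{ show: true }, // 0: x-axis
	{ show: true }, // 1: active series, stacked value 10
	{ show: false }, // 2: hidden, raw value 7 — must be skipped
	{ show: true }, // 3: next visible, stacked value 3
] as unknown) as Series[];

// New logic: 10 - 3 = 7, the raw value of series 1.
// The old index + 1 logic would have computed 10 - 7 = 3, which is wrong.
getTooltipBaseValue({ data, index: 1, dataIndex: 0, isStackedBarChart: true, series });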

View File

@@ -0,0 +1,280 @@
import React from 'react';
import {
fireEvent,
render,
RenderResult,
screen,
within,
} from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { LegendItem } from 'lib/uPlotV2/config/types';
import useLegendsSync from 'lib/uPlotV2/hooks/useLegendsSync';
import { useLegendActions } from '../../hooks/useLegendActions';
import Legend from '../Legend/Legend';
import { LegendPosition } from '../types';
const mockWriteText = jest.fn().mockResolvedValue(undefined);
let clipboardSpy: jest.SpyInstance | undefined;
jest.mock('react-virtuoso', () => ({
VirtuosoGrid: ({
data,
itemContent,
className,
}: {
data: LegendItem[];
itemContent: (index: number, item: LegendItem) => React.ReactNode;
className?: string;
}): JSX.Element => (
<div data-testid="virtuoso-grid" className={className}>
{data.map((item, index) => (
<div key={item.seriesIndex ?? index} data-testid="legend-item-wrapper">
{itemContent(index, item)}
</div>
))}
</div>
),
}));
jest.mock('lib/uPlotV2/hooks/useLegendsSync');
jest.mock('lib/uPlotV2/hooks/useLegendActions');
const mockUseLegendsSync = useLegendsSync as jest.MockedFunction<
typeof useLegendsSync
>;
const mockUseLegendActions = useLegendActions as jest.MockedFunction<
typeof useLegendActions
>;
describe('Legend', () => {
beforeAll(() => {
// JSDOM does not define navigator.clipboard; add it so we can spy on writeText
Object.defineProperty(navigator, 'clipboard', {
value: { writeText: () => Promise.resolve() },
writable: true,
configurable: true,
});
});
const baseLegendItemsMap = {
0: {
seriesIndex: 0,
label: 'A',
show: true,
color: '#ff0000',
},
1: {
seriesIndex: 1,
label: 'B',
show: false,
color: '#00ff00',
},
2: {
seriesIndex: 2,
label: 'C',
show: true,
color: '#0000ff',
},
};
let onLegendClick: jest.Mock;
let onLegendMouseMove: jest.Mock;
let onLegendMouseLeave: jest.Mock;
let onFocusSeries: jest.Mock;
beforeEach(() => {
onLegendClick = jest.fn();
onLegendMouseMove = jest.fn();
onLegendMouseLeave = jest.fn();
onFocusSeries = jest.fn();
mockWriteText.mockClear();
clipboardSpy = jest
.spyOn(navigator.clipboard, 'writeText')
.mockImplementation(mockWriteText);
mockUseLegendsSync.mockReturnValue({
legendItemsMap: baseLegendItemsMap,
focusedSeriesIndex: 1,
setFocusedSeriesIndex: jest.fn(),
});
mockUseLegendActions.mockReturnValue({
onLegendClick,
onLegendMouseMove,
onLegendMouseLeave,
onFocusSeries,
});
});
afterEach(() => {
clipboardSpy?.mockRestore();
jest.clearAllMocks();
});
const renderLegend = (position?: LegendPosition): RenderResult =>
render(
<Legend
position={position}
// config is not used directly in the component; it's consumed by the mocked hook
config={{} as any}
/>,
);
describe('layout and position', () => {
it('renders search input when legend position is RIGHT', () => {
renderLegend(LegendPosition.RIGHT);
expect(screen.getByTestId('legend-search-input')).toBeInTheDocument();
});
it('does not render search input when legend position is BOTTOM (default)', () => {
renderLegend();
expect(screen.queryByTestId('legend-search-input')).not.toBeInTheDocument();
});
it('renders the marker with the correct border color', () => {
renderLegend(LegendPosition.RIGHT);
const legendMarker = document.querySelector(
'[data-legend-item-id="0"] [data-is-legend-marker="true"]',
) as HTMLElement;
expect(legendMarker).toHaveStyle({
'border-color': '#ff0000',
});
});
it('renders all legend items in the grid by default', () => {
renderLegend(LegendPosition.RIGHT);
expect(screen.getByTestId('virtuoso-grid')).toBeInTheDocument();
expect(screen.getByText('A')).toBeInTheDocument();
expect(screen.getByText('B')).toBeInTheDocument();
expect(screen.getByText('C')).toBeInTheDocument();
});
});
describe('search behavior (RIGHT position)', () => {
it('filters legend items based on search query (case-insensitive)', async () => {
const user = userEvent.setup();
renderLegend(LegendPosition.RIGHT);
const searchInput = screen.getByTestId('legend-search-input');
await user.type(searchInput, 'A');
expect(screen.getByText('A')).toBeInTheDocument();
expect(screen.queryByText('B')).not.toBeInTheDocument();
expect(screen.queryByText('C')).not.toBeInTheDocument();
});
it('shows empty state when no legend items match the search query', async () => {
const user = userEvent.setup();
renderLegend(LegendPosition.RIGHT);
const searchInput = screen.getByTestId('legend-search-input');
await user.type(searchInput, 'network');
expect(
screen.getByText(/No series found matching "network"/i),
).toBeInTheDocument();
expect(screen.queryByTestId('virtuoso-grid')).not.toBeInTheDocument();
});
it('does not filter or show empty state when search query is empty or only whitespace', async () => {
const user = userEvent.setup();
renderLegend(LegendPosition.RIGHT);
const searchInput = screen.getByTestId('legend-search-input');
await user.type(searchInput, ' ');
expect(
screen.queryByText(/No series found matching/i),
).not.toBeInTheDocument();
expect(screen.getByText('A')).toBeInTheDocument();
expect(screen.getByText('B')).toBeInTheDocument();
expect(screen.getByText('C')).toBeInTheDocument();
});
});
describe('legend actions', () => {
it('calls onLegendClick when a legend item is clicked', async () => {
const user = userEvent.setup();
renderLegend(LegendPosition.RIGHT);
await user.click(screen.getByText('A'));
expect(onLegendClick).toHaveBeenCalledTimes(1);
});
it('calls mouseMove when the mouse moves over a legend item', async () => {
const user = userEvent.setup();
renderLegend(LegendPosition.RIGHT);
const legendItem = document.querySelector(
'[data-legend-item-id="0"]',
) as HTMLElement;
await user.hover(legendItem);
expect(onLegendMouseMove).toHaveBeenCalledTimes(1);
});
it('calls onLegendMouseLeave when the mouse leaves the legend container', async () => {
const user = userEvent.setup();
renderLegend(LegendPosition.RIGHT);
const container = document.querySelector('.legend-container') as HTMLElement;
await user.hover(container);
await user.unhover(container);
expect(onLegendMouseLeave).toHaveBeenCalledTimes(1);
});
});
describe('copy action', () => {
it('copies the legend label to clipboard when copy button is clicked', () => {
renderLegend(LegendPosition.RIGHT);
const firstLegendItem = document.querySelector(
'[data-legend-item-id="0"]',
) as HTMLElement;
const copyButton = within(firstLegendItem).getByTestId('legend-copy');
fireEvent.click(copyButton);
expect(mockWriteText).toHaveBeenCalledTimes(1);
expect(mockWriteText).toHaveBeenCalledWith('A');
});
it('copies the correct label when copy is clicked on a different legend item', () => {
renderLegend(LegendPosition.RIGHT);
const thirdLegendItem = document.querySelector(
'[data-legend-item-id="2"]',
) as HTMLElement;
const copyButton = within(thirdLegendItem).getByTestId('legend-copy');
fireEvent.click(copyButton);
expect(mockWriteText).toHaveBeenCalledTimes(1);
expect(mockWriteText).toHaveBeenCalledWith('C');
});
it('does not call onLegendClick when copy button is clicked', () => {
renderLegend(LegendPosition.RIGHT);
const firstLegendItem = document.querySelector(
'[data-legend-item-id="0"]',
) as HTMLElement;
const copyButton = within(firstLegendItem).getByTestId('legend-copy');
fireEvent.click(copyButton);
expect(onLegendClick).not.toHaveBeenCalled();
});
});
});

View File

@@ -4,12 +4,12 @@ import uPlot, { Axis } from 'uplot';
import { uPlotXAxisValuesFormat } from '../../uPlotLib/utils/constants';
import getGridColor from '../../uPlotLib/utils/getGridColor';
import { buildYAxisSizeCalculator } from '../utils/axis';
import { AxisProps, ConfigBuilder } from './types';
const PANEL_TYPES_WITH_X_AXIS_DATETIME_FORMAT = [
PANEL_TYPES.TIME_SERIES,
PANEL_TYPES.BAR,
PANEL_TYPES.PIE,
];
/**
@@ -114,81 +114,6 @@ export class UPlotAxisBuilder extends ConfigBuilder<AxisProps, Axis> {
: undefined;
}
/**
* Calculate axis size from existing size property
*/
private getExistingAxisSize(
self: uPlot,
axis: Axis,
values: string[] | undefined,
axisIdx: number,
cycleNum: number,
): number {
const internalSize = (axis as { _size?: number })._size;
if (internalSize !== undefined) {
return internalSize;
}
const existingSize = axis.size;
if (typeof existingSize === 'function') {
return existingSize(self, values ?? [], axisIdx, cycleNum);
}
return existingSize ?? 0;
}
/**
* Calculate text width for longest value
*/
private calculateTextWidth(
self: uPlot,
axis: Axis,
values: string[] | undefined,
): number {
if (!values || values.length === 0) {
return 0;
}
// Find longest value
const longestVal = values.reduce(
(acc, val) => (val.length > acc.length ? val : acc),
'',
);
if (longestVal === '' || !axis.font?.[0]) {
return 0;
}
// eslint-disable-next-line prefer-destructuring, no-param-reassign
self.ctx.font = axis.font[0];
return self.ctx.measureText(longestVal).width / devicePixelRatio;
}
/**
* Build Y-axis dynamic size calculator
*/
private buildYAxisSizeCalculator(): uPlot.Axis.Size {
return (
self: uPlot,
values: string[] | undefined,
axisIdx: number,
cycleNum: number,
): number => {
const axis = self.axes[axisIdx];
// Bail out, force convergence
if (cycleNum > 1) {
return this.getExistingAxisSize(self, axis, values, axisIdx, cycleNum);
}
const gap = this.props.gap ?? 5;
let axisSize = (axis.ticks?.size ?? 0) + gap;
axisSize += this.calculateTextWidth(self, axis, values);
return Math.ceil(axisSize);
};
}
/**
* Build dynamic size calculator for Y-axis
*/
@@ -202,7 +127,7 @@ export class UPlotAxisBuilder extends ConfigBuilder<AxisProps, Axis> {
// Y-axis needs dynamic sizing based on text width
if (scaleKey === 'y') {
return this.buildYAxisSizeCalculator();
return buildYAxisSizeCalculator(this.props.gap ?? 5);
}
return undefined;
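
The three private methods deleted above were folded into a buildYAxisSizeCalculator factory in utils/axis. Its implementation isn't shown in this diff; a sketch reconstructed from the removed code, so details may differ from the real module:

import uPlot from 'uplot';

export function buildYAxisSizeCalculator(gap: number): uPlot.Axis.Size {
	return (self, values, axisIdx, cycleNum): number => {
		const axis = self.axes[axisIdx];

		// Bail out after the first cycle to force convergence, reusing
		// whatever size was already settled on.
		if (cycleNum > 1) {
			const internalSize = (axis as { _size?: number })._size;
			if (internalSize !== undefined) {
				return internalSize;
			}
			const existing = axis.size;
			return typeof existing === 'function'
				? existing(self, values ?? [], axisIdx, cycleNum)
				: existing ?? 0;
		}

		// Tick length + gap, plus the pixel width of the longest label.
		let size = (axis.ticks?.size ?? 0) + gap;
		const longest = (values ?? []).reduce(
			(acc, val) => (val.length > acc.length ? val : acc),
			'',
		);
		if (longest !== '' && axis.font?.[0]) {
			// eslint-disable-next-line no-param-reassign
			self.ctx.font = axis.font[0];
			size += self.ctx.measureText(longest).width / devicePixelRatio;
		}
		return Math.ceil(size);
	};
}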

View File

@@ -1,3 +1,4 @@
import { SeriesVisibilityState } from 'container/DashboardContainer/visualization/panels/types';
import { getStoredSeriesVisibility } from 'container/DashboardContainer/visualization/panels/utils/legendVisibilityUtils';
import { ThresholdsDrawHookOptions } from 'lib/uPlotV2/hooks/types';
import { thresholdsDrawHook } from 'lib/uPlotV2/hooks/useThresholdsDrawHook';
@@ -235,9 +236,9 @@ export class UPlotConfigBuilder extends ConfigBuilder<
}
/**
* Returns stored series visibility map from localStorage when preferences source is LOCAL_STORAGE, otherwise null.
* Returns stored series visibility by index from localStorage when preferences source is LOCAL_STORAGE, otherwise null.
*/
private getStoredVisibilityMap(): Map<string, boolean> | null {
private getStoredVisibility(): SeriesVisibilityState | null {
if (
this.widgetId &&
this.selectionPreferencesSource === SelectionPreferencesSource.LOCAL_STORAGE
@@ -251,22 +252,23 @@ export class UPlotConfigBuilder extends ConfigBuilder<
* Get legend items with visibility state restored from localStorage if available
*/
getLegendItems(): Record<number, LegendItem> {
const visibilityMap = this.getStoredVisibilityMap();
const isAnySeriesHidden = !!(
visibilityMap && Array.from(visibilityMap.values()).some((show) => !show)
const seriesVisibilityState = this.getStoredVisibility();
const isAnySeriesHidden = !!seriesVisibilityState?.visibility?.some(
(show) => !show,
);
return this.series.reduce((acc, s: UPlotSeriesBuilder, index: number) => {
const seriesConfig = s.getConfig();
const label = seriesConfig.label ?? '';
const seriesIndex = index + 1; // +1 because the first series is the timestamp
const show = resolveSeriesVisibility(
label,
seriesConfig.show,
visibilityMap,
// +1 because uPlot series 0 is x-axis/time; data series are at 1, 2, ... (also matches stored visibility[0]=time, visibility[1]=first data, ...)
const seriesIndex = index + 1;
const show = resolveSeriesVisibility({
seriesIndex,
seriesShow: seriesConfig.show,
seriesLabel: label,
seriesVisibilityState,
isAnySeriesHidden,
);
});
acc[seriesIndex] = {
seriesIndex,
@@ -294,22 +296,23 @@ export class UPlotConfigBuilder extends ConfigBuilder<
...DEFAULT_PLOT_CONFIG,
};
const visibilityMap = this.getStoredVisibilityMap();
const isAnySeriesHidden = !!(
visibilityMap && Array.from(visibilityMap.values()).some((show) => !show)
const seriesVisibilityState = this.getStoredVisibility();
const isAnySeriesHidden = !!seriesVisibilityState?.visibility?.some(
(show) => !show,
);
config.series = [
{ value: (): string => '' }, // Base series for timestamp
...this.series.map((s) => {
...this.series.map((s, index) => {
const series = s.getConfig();
const label = series.label ?? '';
const visible = resolveSeriesVisibility(
label,
series.show,
visibilityMap,
// Stored visibility[0] is x-axis/time; data series start at visibility[1]
const visible = resolveSeriesVisibility({
seriesIndex: index + 1,
seriesShow: series.show,
seriesLabel: series.label ?? '',
seriesVisibilityState,
isAnySeriesHidden,
);
});
return {
...series,
show: visible,
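
resolveSeriesVisibility itself isn't part of this hunk, only its new object-style call signature. A plausible sketch of the index-first resolution those call sites imply; the label sanity check is an assumption, not confirmed by the diff:

import { SeriesVisibilityState } from 'container/DashboardContainer/visualization/panels/types';

interface ResolveSeriesVisibilityArgs {
	seriesIndex: number; // 1-based; stored visibility[0] is the x-axis/time slot
	seriesShow?: boolean;
	seriesLabel: string;
	seriesVisibilityState: SeriesVisibilityState | null;
	isAnySeriesHidden: boolean;
}

function resolveSeriesVisibility({
	seriesIndex,
	seriesShow,
	seriesLabel,
	seriesVisibilityState,
	isAnySeriesHidden,
}: ResolveSeriesVisibilityArgs): boolean {
	// No stored state, or nothing hidden: fall back to the series config.
	if (!seriesVisibilityState || !isAnySeriesHidden) {
		return seriesShow ?? true;
	}
	const { labels, visibility } = seriesVisibilityState;
	// Index-based lookup, with a label check so a reordered query does not
	// silently hide the wrong series (assumption).
	if (seriesIndex < visibility.length && labels[seriesIndex] === seriesLabel) {
		return visibility[seriesIndex];
	}
	return seriesShow ?? true;
}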

View File

@@ -1,8 +1,10 @@
import { PANEL_TYPES } from 'constants/queryBuilder';
import { themeColors } from 'constants/theme';
import { generateColor } from 'lib/uPlotLib/utils/generateColor';
import uPlot, { Series } from 'uplot';
import {
BarAlignment,
ConfigBuilder,
DrawStyle,
LineInterpolation,
@@ -15,20 +17,41 @@ import {
* Builder for uPlot series configuration
* Handles creation of series settings
*/
/**
* Path builders are static and shared across all instances of UPlotSeriesBuilder
*/
let builders: PathBuilders | null = null;
export class UPlotSeriesBuilder extends ConfigBuilder<SeriesProps, Series> {
constructor(props: SeriesProps) {
super(props);
const pathBuilders = uPlot.paths;
if (!builders) {
const linearBuilder = pathBuilders.linear;
const splineBuilder = pathBuilders.spline;
const steppedBuilder = pathBuilders.stepped;
if (!linearBuilder || !splineBuilder || !steppedBuilder) {
throw new Error('Required uPlot path builders are not available');
}
builders = {
linear: linearBuilder(),
spline: splineBuilder(),
stepBefore: steppedBuilder({ align: -1 }),
stepAfter: steppedBuilder({ align: 1 }),
};
}
}
private buildLineConfig({
lineColor,
lineWidth,
lineStyle,
lineCap,
resolvedLineColor,
}: {
lineColor: string;
lineWidth?: number;
lineStyle?: LineStyle;
lineCap?: Series.Cap;
resolvedLineColor: string;
}): Partial<Series> {
const { lineWidth, lineStyle, lineCap } = this.props;
const lineConfig: Partial<Series> = {
stroke: lineColor,
stroke: resolvedLineColor,
width: lineWidth ?? 2,
};
@@ -39,21 +62,26 @@ export class UPlotSeriesBuilder extends ConfigBuilder<SeriesProps, Series> {
if (lineCap) {
lineConfig.cap = lineCap;
}
if (this.props.panelType === PANEL_TYPES.BAR) {
lineConfig.fill = resolvedLineColor;
}
return lineConfig;
}
/**
* Build path configuration
*/
private buildPathConfig({
pathBuilder,
drawStyle,
lineInterpolation,
}: {
pathBuilder?: Series.PathBuilder | null;
drawStyle: DrawStyle;
lineInterpolation?: LineInterpolation;
}): Partial<Series> {
private buildPathConfig(): Partial<Series> {
const {
pathBuilder,
drawStyle,
lineInterpolation,
barAlignment,
barMaxWidth,
barWidthFactor,
} = this.props;
if (pathBuilder) {
return { paths: pathBuilder };
}
@@ -70,7 +98,13 @@ export class UPlotSeriesBuilder extends ConfigBuilder<SeriesProps, Series> {
idx0: number,
idx1: number,
): Series.Paths | null => {
const pathsBuilder = getPathBuilder(drawStyle, lineInterpolation);
const pathsBuilder = getPathBuilder({
drawStyle,
lineInterpolation,
barAlignment,
barMaxWidth,
barWidthFactor,
});
return pathsBuilder(self, seriesIdx, idx0, idx1);
},
@@ -84,25 +118,21 @@ export class UPlotSeriesBuilder extends ConfigBuilder<SeriesProps, Series> {
* Build points configuration
*/
private buildPointsConfig({
lineColor,
lineWidth,
pointSize,
pointsBuilder,
pointsFilter,
drawStyle,
showPoints,
resolvedLineColor,
}: {
lineColor: string;
lineWidth?: number;
pointSize?: number;
pointsBuilder: Series.Points.Show | null;
pointsFilter: Series.Points.Filter | null;
drawStyle: DrawStyle;
showPoints?: VisibilityMode;
resolvedLineColor: string;
}): Partial<Series.Points> {
const {
lineWidth,
pointSize,
pointsBuilder,
pointsFilter,
drawStyle,
showPoints,
} = this.props;
const pointsConfig: Partial<Series.Points> = {
stroke: lineColor,
fill: lineColor,
stroke: resolvedLineColor,
fill: resolvedLineColor,
size: !pointSize || pointSize < (lineWidth ?? 2) ? undefined : pointSize,
filter: pointsFilter || undefined,
};
@@ -136,44 +166,16 @@ export class UPlotSeriesBuilder extends ConfigBuilder<SeriesProps, Series> {
}
getConfig(): Series {
const {
drawStyle,
pathBuilder,
pointsBuilder,
pointsFilter,
lineInterpolation,
lineWidth,
lineStyle,
lineCap,
showPoints,
pointSize,
scaleKey,
label,
spanGaps,
show = true,
} = this.props;
const { scaleKey, label, spanGaps, show = true } = this.props;
const lineColor = this.getLineColor();
const resolvedLineColor = this.getLineColor();
const lineConfig = this.buildLineConfig({
lineColor,
lineWidth,
lineStyle,
lineCap,
});
const pathConfig = this.buildPathConfig({
pathBuilder,
drawStyle,
lineInterpolation,
resolvedLineColor,
});
const pathConfig = this.buildPathConfig();
const pointsConfig = this.buildPointsConfig({
lineColor,
lineWidth,
pointSize,
pointsBuilder: pointsBuilder ?? null,
pointsFilter: pointsFilter ?? null,
drawStyle,
showPoints,
resolvedLineColor,
});
return {
@@ -198,35 +200,39 @@ interface PathBuilders {
[key: string]: Series.PathBuilder;
}
let builders: PathBuilders | null = null;
/**
* Get path builder based on draw style and interpolation
*/
function getPathBuilder(
style: DrawStyle,
lineInterpolation?: LineInterpolation,
): Series.PathBuilder {
const pathBuilders = uPlot.paths;
function getPathBuilder({
drawStyle,
lineInterpolation,
barAlignment = BarAlignment.Center,
barWidthFactor = 0.6,
barMaxWidth = 200,
}: {
drawStyle: DrawStyle;
lineInterpolation?: LineInterpolation;
barAlignment?: BarAlignment;
barMaxWidth?: number;
barWidthFactor?: number;
}): Series.PathBuilder {
if (!builders) {
const linearBuilder = pathBuilders.linear;
const splineBuilder = pathBuilders.spline;
const steppedBuilder = pathBuilders.stepped;
if (!linearBuilder || !splineBuilder || !steppedBuilder) {
throw new Error('Required uPlot path builders are not available');
}
builders = {
linear: linearBuilder(),
spline: splineBuilder(),
stepBefore: steppedBuilder({ align: -1 }),
stepAfter: steppedBuilder({ align: 1 }),
};
throw new Error('Required uPlot path builders are not available');
}
if (style === DrawStyle.Line) {
if (drawStyle === DrawStyle.Bar) {
const pathBuilders = uPlot.paths;
const barsConfigKey = `bars|${barAlignment}|${barWidthFactor}|${barMaxWidth}`;
if (!builders[barsConfigKey] && pathBuilders.bars) {
builders[barsConfigKey] = pathBuilders.bars({
size: [barWidthFactor, barMaxWidth],
align: barAlignment,
});
}
return builders[barsConfigKey];
}
if (drawStyle === DrawStyle.Line) {
if (lineInterpolation === LineInterpolation.StepBefore) {
return builders.stepBefore;
}
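
Because the cache is keyed on the bar options, distinct bar configurations coexist alongside the shared line/spline/stepped builders. A usage sketch with the props named in this diff (the concrete sizing values are illustrative, import paths assumed; BarAlignment.Center is the default):

import { PANEL_TYPES } from 'constants/queryBuilder';
import { BarAlignment, DrawStyle } from './types';
import { UPlotSeriesBuilder } from './UPlotSeriesBuilder';

// This combination is cached once under a key of the form
// `bars|<alignment>|0.75|150` and reused by every series that shares it.
const barSeries = new UPlotSeriesBuilder({
	scaleKey: 'y',
	label: 'Requests',
	colorMapping: {},
	panelType: PANEL_TYPES.BAR,
	drawStyle: DrawStyle.Bar,
	barAlignment: BarAlignment.Center,
	barWidthFactor: 0.75,
	barMaxWidth: 150,
});

const seriesConfig = barSeries.getConfig(); // paths are built lazily per draw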

View File

@@ -0,0 +1,393 @@
import { getToolTipValue } from 'components/Graph/yAxisConfig';
import { PANEL_TYPES } from 'constants/queryBuilder';
import { uPlotXAxisValuesFormat } from 'lib/uPlotLib/utils/constants';
import type uPlot from 'uplot';
import type { AxisProps } from '../types';
import { UPlotAxisBuilder } from '../UPlotAxisBuilder';
jest.mock('components/Graph/yAxisConfig', () => ({
getToolTipValue: jest.fn(),
}));
const createAxisProps = (overrides: Partial<AxisProps> = {}): AxisProps => ({
scaleKey: 'x',
label: 'Time',
isDarkMode: false,
show: true,
...overrides,
});
describe('UPlotAxisBuilder', () => {
beforeEach(() => {
jest.clearAllMocks();
});
it('builds basic axis config with defaults', () => {
const builder = new UPlotAxisBuilder(
createAxisProps({
scaleKey: 'x',
label: 'Time',
}),
);
const config = builder.getConfig();
expect(config.scale).toBe('x');
expect(config.label).toBe('Time');
expect(config.show).toBe(true);
expect(config.side).toBe(2);
expect(config.gap).toBe(5);
// Default grid and ticks are created
expect(config.grid).toEqual({
stroke: 'rgba(0,0,0,0.5)',
width: 0.2,
show: true,
});
expect(config.ticks).toEqual({
width: 0.3,
show: true,
});
});
it('sets config values when provided', () => {
const builder = new UPlotAxisBuilder(
createAxisProps({
scaleKey: 'x',
label: 'Time',
show: false,
side: 0,
gap: 10,
grid: {
stroke: '#ff0000',
width: 1,
show: false,
},
ticks: {
stroke: '#00ff00',
width: 1,
show: false,
size: 10,
},
values: ['1', '2', '3'],
space: 20,
size: 100,
stroke: '#0000ff',
}),
);
const config = builder.getConfig();
expect(config.scale).toBe('x');
expect(config.label).toBe('Time');
expect(config.show).toBe(false);
expect(config.gap).toBe(10);
expect(config.grid).toEqual({
stroke: '#ff0000',
width: 1,
show: false,
});
expect(config.ticks).toEqual({
stroke: '#00ff00',
width: 1,
show: false,
size: 10,
});
expect(config.values).toEqual(['1', '2', '3']);
expect(config.space).toBe(20);
expect(config.size).toBe(100);
expect(config.stroke).toBe('#0000ff');
});
it('merges custom grid config over defaults and respects isDarkMode and isLogScale', () => {
const builder = new UPlotAxisBuilder(
createAxisProps({
isDarkMode: true,
isLogScale: true,
grid: {
width: 1,
},
}),
);
const config = builder.getConfig();
expect(config.grid).toEqual({
// stroke falls back to theme-based default when not provided
stroke: 'rgba(231,233,237,0.3)',
// provided width overrides default
width: 1,
// show falls back to default when not provided
show: true,
});
});
it('uses provided ticks config when present and falls back to defaults otherwise', () => {
const customTicks = { width: 1, show: false };
const withTicks = new UPlotAxisBuilder(
createAxisProps({
ticks: customTicks,
}),
);
const withoutTicks = new UPlotAxisBuilder(createAxisProps());
expect(withTicks.getConfig().ticks).toBe(customTicks);
expect(withoutTicks.getConfig().ticks).toEqual({
width: 0.3,
show: true,
});
});
it('uses time-based X-axis values formatter for time-series like panels', () => {
const builder = new UPlotAxisBuilder(
createAxisProps({
scaleKey: 'x',
panelType: PANEL_TYPES.TIME_SERIES,
}),
);
const config = builder.getConfig();
expect(config.values).toBe(uPlotXAxisValuesFormat);
});
it('does not attach X-axis datetime formatter when panel type is not supported', () => {
const builder = new UPlotAxisBuilder(
createAxisProps({
scaleKey: 'x',
panelType: PANEL_TYPES.LIST, // not in PANEL_TYPES_WITH_X_AXIS_DATETIME_FORMAT
}),
);
const config = builder.getConfig();
expect(config.values).toBeUndefined();
});
it('builds Y-axis values formatter that delegates to getToolTipValue', () => {
const yBuilder = new UPlotAxisBuilder(
createAxisProps({
scaleKey: 'y',
yAxisUnit: 'ms',
decimalPrecision: 3,
}),
);
const config = yBuilder.getConfig();
expect(typeof config.values).toBe('function');
(getToolTipValue as jest.Mock).mockImplementation(
(value: string, unit?: string, precision?: unknown) =>
`formatted:${value}:${unit}:${precision}`,
);
// Simulate uPlot calling the values formatter
const valuesFn = (config.values as unknown) as (
self: uPlot,
vals: unknown[],
) => string[];
const result = valuesFn({} as uPlot, [1, null, 2, Number.NaN]);
expect(getToolTipValue).toHaveBeenCalledTimes(2);
expect(getToolTipValue).toHaveBeenNthCalledWith(1, '1', 'ms', 3);
expect(getToolTipValue).toHaveBeenNthCalledWith(2, '2', 'ms', 3);
// Null/NaN values should map to empty strings
expect(result).toEqual(['formatted:1:ms:3', '', 'formatted:2:ms:3', '']);
});
it('adds dynamic size calculator only for Y-axis when size is not provided', () => {
const yBuilder = new UPlotAxisBuilder(
createAxisProps({
scaleKey: 'y',
}),
);
const xBuilder = new UPlotAxisBuilder(
createAxisProps({
scaleKey: 'x',
}),
);
const yConfig = yBuilder.getConfig();
const xConfig = xBuilder.getConfig();
expect(typeof yConfig.size).toBe('function');
expect(xConfig.size).toBeUndefined();
});
it('uses explicit size function when provided', () => {
const sizeFn: uPlot.Axis.Size = jest.fn(() => 100) as uPlot.Axis.Size;
const builder = new UPlotAxisBuilder(
createAxisProps({
scaleKey: 'y',
size: sizeFn,
}),
);
const config = builder.getConfig();
expect(config.size).toBe(sizeFn);
});
it('builds stroke color based on stroke and isDarkMode', () => {
const explicitStroke = new UPlotAxisBuilder(
createAxisProps({
stroke: '#ff0000',
}),
);
const darkStroke = new UPlotAxisBuilder(
createAxisProps({
stroke: undefined,
isDarkMode: true,
}),
);
const lightStroke = new UPlotAxisBuilder(
createAxisProps({
stroke: undefined,
isDarkMode: false,
}),
);
expect(explicitStroke.getConfig().stroke).toBe('#ff0000');
expect(darkStroke.getConfig().stroke).toBe('white');
expect(lightStroke.getConfig().stroke).toBe('black');
});
it('uses explicit values formatter when provided', () => {
const customValues: uPlot.Axis.Values = jest.fn(() => ['a', 'b', 'c']);
const builder = new UPlotAxisBuilder(
createAxisProps({
scaleKey: 'y',
values: customValues,
}),
);
const config = builder.getConfig();
expect(config.values).toBe(customValues);
});
it('returns undefined values for scaleKey neither x nor y', () => {
const builder = new UPlotAxisBuilder(createAxisProps({ scaleKey: 'custom' }));
const config = builder.getConfig();
expect(config.values).toBeUndefined();
});
it('includes space in config when provided', () => {
const builder = new UPlotAxisBuilder(
createAxisProps({ scaleKey: 'y', space: 50 }),
);
const config = builder.getConfig();
expect(config.space).toBe(50);
});
it('includes PANEL_TYPES.BAR and PANEL_TYPES.TIME_SERIES in X-axis datetime formatter', () => {
const barBuilder = new UPlotAxisBuilder(
createAxisProps({
scaleKey: 'x',
panelType: PANEL_TYPES.BAR,
}),
);
expect(barBuilder.getConfig().values).toBe(uPlotXAxisValuesFormat);
const timeSeriesBuilder = new UPlotAxisBuilder(
createAxisProps({
scaleKey: 'x',
panelType: PANEL_TYPES.TIME_SERIES,
}),
);
expect(timeSeriesBuilder.getConfig().values).toBe(uPlotXAxisValuesFormat);
});
it('should return the existing size when cycleNum > 1', () => {
const builder = new UPlotAxisBuilder(createAxisProps({ scaleKey: 'y' }));
const config = builder.getConfig();
const sizeFn = config.size;
expect(typeof sizeFn).toBe('function');
const mockAxis = {
_size: 80,
ticks: { size: 10 },
font: ['12px sans-serif'],
};
const mockSelf = ({
axes: [mockAxis],
ctx: { measureText: jest.fn(() => ({ width: 60 })), font: '' },
} as unknown) as uPlot;
const result = (sizeFn as (
s: uPlot,
v: string[],
a: number,
c: number,
) => number)(
mockSelf,
['100', '200'],
0,
2, // cycleNum > 1
);
expect(result).toBe(80);
});
it('should invoke the size calculator and compute from text width when cycleNum <= 1', () => {
const builder = new UPlotAxisBuilder(
createAxisProps({ scaleKey: 'y', gap: 8 }),
);
const config = builder.getConfig();
const sizeFn = config.size;
expect(typeof sizeFn).toBe('function');
const mockAxis = {
ticks: { size: 12 },
font: ['12px sans-serif'],
};
const measureText = jest.fn(() => ({ width: 48 }));
const mockSelf = ({
axes: [mockAxis],
ctx: {
measureText,
get font() {
return '';
},
set font(_v: string) {
/* noop */
},
},
} as unknown) as uPlot;
const result = (sizeFn as (
s: uPlot,
v: string[],
a: number,
c: number,
) => number)(
mockSelf,
['10', '2000ms'],
0,
0, // cycleNum <= 1
);
expect(measureText).toHaveBeenCalledWith('2000ms');
expect(result).toBeGreaterThanOrEqual(12 + 8);
});
it('merge updates axis props', () => {
const builder = new UPlotAxisBuilder(
createAxisProps({ scaleKey: 'y', label: 'Original' }),
);
builder.merge({ label: 'Merged', yAxisUnit: 'bytes' });
const config = builder.getConfig();
expect(config.label).toBe('Merged');
expect(config.values).toBeDefined();
});
});
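Read together, these cases sketch the intended call pattern for the axis builder. A minimal usage sketch, assuming the props shape that the tests' `createAxisProps` helper fills in and the `lib/uPlotV2/config` module path used elsewhere in this diff (label and unit values are illustrative):

import uPlot from 'uplot';

import { UPlotAxisBuilder } from 'lib/uPlotV2/config/UPlotAxisBuilder';

// Build a y-axis, refine it later via merge(), and hand the result to uPlot.
// Required props that createAxisProps defaults in the tests are elided here too.
const yAxis = new UPlotAxisBuilder({
	scaleKey: 'y',
	label: 'Latency',
	isDarkMode: true, // stroke resolves to 'white'
});

yAxis.merge({ label: 'Latency (p99)', yAxisUnit: 'bytes' });

const opts = {
	axes: [yAxis.getConfig()], // size is the auto-measuring function for 'y' scales
} as Partial<uPlot.Options>;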


@@ -0,0 +1,337 @@
import { PANEL_TYPES } from 'constants/queryBuilder';
import uPlot from 'uplot';
import type { SeriesProps } from '../types';
import { DrawStyle, SelectionPreferencesSource } from '../types';
import { UPlotConfigBuilder } from '../UPlotConfigBuilder';
// Mock only the real boundary that hits localStorage
jest.mock(
'container/DashboardContainer/visualization/panels/utils/legendVisibilityUtils',
() => ({
getStoredSeriesVisibility: jest.fn(),
}),
);
const getStoredSeriesVisibilityMock = jest.requireMock(
'container/DashboardContainer/visualization/panels/utils/legendVisibilityUtils',
) as {
getStoredSeriesVisibility: jest.Mock;
};
describe('UPlotConfigBuilder', () => {
beforeEach(() => {
jest.clearAllMocks();
});
const createSeriesProps = (
overrides: Partial<SeriesProps> = {},
): SeriesProps => ({
scaleKey: 'y',
label: 'Requests',
colorMapping: {},
drawStyle: DrawStyle.Line,
panelType: PANEL_TYPES.TIME_SERIES,
...overrides,
});
it('returns correct save selection preference flag from constructor args', () => {
const builder = new UPlotConfigBuilder({
shouldSaveSelectionPreference: true,
});
expect(builder.getShouldSaveSelectionPreference()).toBe(true);
});
it('returns widgetId from constructor args', () => {
const builder = new UPlotConfigBuilder({ widgetId: 'widget-123' });
expect(builder.getWidgetId()).toBe('widget-123');
});
it('sets tzDate from constructor and includes it in config', () => {
const tzDate = (ts: number): Date => new Date(ts);
const builder = new UPlotConfigBuilder({ tzDate });
const config = builder.getConfig();
expect(config.tzDate).toBe(tzDate);
});
it('does not call onDragSelect for click without drag (width === 0)', () => {
const onDragSelect = jest.fn();
const builder = new UPlotConfigBuilder({ onDragSelect });
const config = builder.getConfig();
const setSelectHooks = config.hooks?.setSelect ?? [];
expect(setSelectHooks.length).toBe(1);
const uplotInstance = ({
select: { left: 10, width: 0 },
posToVal: jest.fn(),
} as unknown) as uPlot;
// Simulate uPlot calling the hook
const setSelectHook = setSelectHooks[0];
setSelectHook!(uplotInstance);
expect(onDragSelect).not.toHaveBeenCalled();
});
it('calls onDragSelect with start and end times in milliseconds for a drag selection', () => {
const onDragSelect = jest.fn();
const builder = new UPlotConfigBuilder({ onDragSelect });
const config = builder.getConfig();
const setSelectHooks = config.hooks?.setSelect ?? [];
expect(setSelectHooks.length).toBe(1);
const posToVal = jest
.fn()
// left position
.mockReturnValueOnce(100)
// left + width
.mockReturnValueOnce(110);
const uplotInstance = ({
select: { left: 50, width: 20 },
posToVal,
} as unknown) as uPlot;
const setSelectHook = setSelectHooks[0];
setSelectHook!(uplotInstance);
expect(onDragSelect).toHaveBeenCalledTimes(1);
// 100 and 110 seconds converted to milliseconds
expect(onDragSelect).toHaveBeenCalledWith(100_000, 110_000);
});
it('adds and removes hooks via addHook, and exposes them through getConfig', () => {
const builder = new UPlotConfigBuilder();
const drawHook = jest.fn();
const remove = builder.addHook('draw', drawHook as uPlot.Hooks.Defs['draw']);
let config = builder.getConfig();
expect(config.hooks?.draw).toContain(drawHook);
// Remove and ensure it no longer appears in config
remove();
config = builder.getConfig();
expect(config.hooks?.draw ?? []).not.toContain(drawHook);
});
it('adds axes, scales, and series and wires them into the final config', () => {
const builder = new UPlotConfigBuilder();
// Add axis and scale
builder.addAxis({ scaleKey: 'y', label: 'Requests' });
builder.addScale({ scaleKey: 'y' });
// Add two series; legend indices start at 1 (index 0 is the timestamp series)
builder.addSeries(createSeriesProps({ label: 'Requests' }));
builder.addSeries(createSeriesProps({ label: 'Errors' }));
const config = builder.getConfig();
// Axes
expect(config.axes).toHaveLength(1);
expect(config.axes?.[0].scale).toBe('y');
// Scales are returned as an object keyed by scaleKey
expect(config.scales).toBeDefined();
expect(Object.keys(config.scales ?? {})).toContain('y');
// Series: base timestamp + 2 data series
expect(config.series).toHaveLength(3);
// Base series (index 0) has a value formatter that returns empty string
const baseSeries = config.series?.[0] as { value?: () => string };
expect(typeof baseSeries?.value).toBe('function');
expect(baseSeries?.value?.()).toBe('');
// Legend items align with series and carry label and color from series config
const legendItems = builder.getLegendItems();
expect(Object.keys(legendItems)).toEqual(['1', '2']);
expect(legendItems[1].seriesIndex).toBe(1);
expect(legendItems[1].label).toBe('Requests');
expect(legendItems[2].label).toBe('Errors');
});
it('merges axis when addAxis is called twice with same scaleKey', () => {
const builder = new UPlotConfigBuilder();
builder.addAxis({ scaleKey: 'y', label: 'Requests' });
builder.addAxis({ scaleKey: 'y', label: 'Updated Label', show: false });
const config = builder.getConfig();
expect(config.axes).toHaveLength(1);
expect(config.axes?.[0].label).toBe('Updated Label');
expect(config.axes?.[0].show).toBe(false);
});
it('merges scale when addScale is called twice with same scaleKey', () => {
const builder = new UPlotConfigBuilder();
builder.addScale({ scaleKey: 'y', min: 0 });
builder.addScale({ scaleKey: 'y', max: 100 });
const config = builder.getConfig();
// Only one scale entry for 'y' (merge path used, no duplicate added)
expect(config.scales).toBeDefined();
const scales = config.scales ?? {};
expect(Object.keys(scales)).toEqual(['y']);
expect(scales.y?.range).toBeDefined();
});
it('restores visibility state from localStorage when selectionPreferencesSource is LOCAL_STORAGE', () => {
// Index 0 = x-axis/time; indices 1,2 = data series (Requests, Errors). resolveSeriesVisibility matches by seriesIndex + seriesLabel.
getStoredSeriesVisibilityMock.getStoredSeriesVisibility.mockReturnValue({
labels: ['x-axis', 'Requests', 'Errors'],
visibility: [true, true, false],
});
const builder = new UPlotConfigBuilder({
widgetId: 'widget-1',
selectionPreferencesSource: SelectionPreferencesSource.LOCAL_STORAGE,
});
builder.addSeries(createSeriesProps({ label: 'Requests' }));
builder.addSeries(createSeriesProps({ label: 'Errors' }));
const legendItems = builder.getLegendItems();
// When any series is hidden, legend visibility is driven by the stored map
expect(legendItems[1].show).toBe(true);
expect(legendItems[2].show).toBe(false);
const config = builder.getConfig();
const [, firstSeries, secondSeries] = config.series ?? [];
expect(firstSeries?.show).toBe(true);
expect(secondSeries?.show).toBe(false);
});
it('does not attempt to read stored visibility when using in-memory preferences', () => {
const builder = new UPlotConfigBuilder({
widgetId: 'widget-1',
selectionPreferencesSource: SelectionPreferencesSource.IN_MEMORY,
});
builder.addSeries(createSeriesProps({ label: 'Requests' }));
builder.getLegendItems();
builder.getConfig();
expect(
getStoredSeriesVisibilityMock.getStoredSeriesVisibility,
).not.toHaveBeenCalled();
});
it('adds thresholds only once per scale key', () => {
const builder = new UPlotConfigBuilder();
const thresholdsOptions = {
scaleKey: 'y',
thresholds: [{ thresholdValue: 100 }],
};
builder.addThresholds(thresholdsOptions);
builder.addThresholds(thresholdsOptions);
const config = builder.getConfig();
const drawHooks = config.hooks?.draw ?? [];
// Only a single draw hook should be registered for the same scaleKey
expect(drawHooks.length).toBe(1);
});
it('adds multiple thresholds when scale key is different', () => {
const builder = new UPlotConfigBuilder();
const thresholdsOptions = {
scaleKey: 'y',
thresholds: [{ thresholdValue: 100 }],
};
builder.addThresholds(thresholdsOptions);
const thresholdsOptions2 = {
scaleKey: 'y2',
thresholds: [{ thresholdValue: 200 }],
};
builder.addThresholds(thresholdsOptions2);
const config = builder.getConfig();
const drawHooks = config.hooks?.draw ?? [];
// Two draw hooks should be registered for different scaleKeys
expect(drawHooks.length).toBe(2);
});
it('merges cursor configuration with defaults instead of replacing them', () => {
const builder = new UPlotConfigBuilder();
builder.setCursor({
drag: { setScale: false },
});
const config = builder.getConfig();
expect(config.cursor?.drag?.setScale).toBe(false);
// Points configuration from DEFAULT_CURSOR_CONFIG should still be present
expect(config.cursor?.points).toBeDefined();
});
it('adds plugins and includes them in config', () => {
const builder = new UPlotConfigBuilder();
const plugin: uPlot.Plugin = {
opts: (): void => {},
hooks: {},
};
builder.addPlugin(plugin);
const config = builder.getConfig();
expect(config.plugins).toContain(plugin);
});
it('sets padding, legend, focus, select, tzDate, bands and includes them in config', () => {
const tzDate = (ts: number): Date => new Date(ts);
const builder = new UPlotConfigBuilder();
const bands: uPlot.Band[] = [{ series: [1, 2], fill: (): string => '#000' }];
builder.setBands(bands);
builder.setPadding([10, 20, 30, 40]);
builder.setLegend({ show: true, live: true });
builder.setFocus({ alpha: 0.5 });
builder.setSelect({ left: 0, width: 0, top: 0, height: 0 });
builder.setTzDate(tzDate);
const config = builder.getConfig();
expect(config.bands).toEqual(bands);
expect(config.padding).toEqual([10, 20, 30, 40]);
expect(config.legend).toEqual({ show: true, live: true });
expect(config.focus).toEqual({ alpha: 0.5 });
expect(config.select).toEqual({ left: 0, width: 0, top: 0, height: 0 });
expect(config.tzDate).toBe(tzDate);
});
it('does not include plugins when none added', () => {
const builder = new UPlotConfigBuilder();
const config = builder.getConfig();
expect(config.plugins).toBeUndefined();
});
it('does not include bands when empty', () => {
const builder = new UPlotConfigBuilder();
const config = builder.getConfig();
expect(config.bands).toBeUndefined();
});
});
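The cases above exercise the whole builder surface; a minimal composition sketch based only on those assertions (the widget id and the callback body are illustrative):

import { PANEL_TYPES } from 'constants/queryBuilder';
import { DrawStyle } from 'lib/uPlotV2/config/types';
import { UPlotConfigBuilder } from 'lib/uPlotV2/config/UPlotConfigBuilder';

const builder = new UPlotConfigBuilder({
	widgetId: 'widget-123', // illustrative id
	// Receives the dragged window converted to milliseconds (see the drag test above)
	onDragSelect: (startMs: number, endMs: number): void => {
		console.log('zoom to', startMs, endMs);
	},
});

builder.addScale({ scaleKey: 'y' });
builder.addAxis({ scaleKey: 'y', label: 'Requests' });
// Data series start at index 1; index 0 is the hidden timestamp series.
builder.addSeries({
	scaleKey: 'y',
	label: 'Requests',
	colorMapping: {},
	drawStyle: DrawStyle.Line,
	panelType: PANEL_TYPES.TIME_SERIES,
});

const config = builder.getConfig(); // options object for `new uPlot(...)`
const legendItems = builder.getLegendItems(); // keyed '1', '2', ... per the tests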


@@ -0,0 +1,236 @@
import type uPlot from 'uplot';
import * as scaleUtils from '../../utils/scale';
import type { ScaleProps } from '../types';
import { DistributionType } from '../types';
import { UPlotScaleBuilder } from '../UPlotScaleBuilder';
const createScaleProps = (overrides: Partial<ScaleProps> = {}): ScaleProps => ({
scaleKey: 'y',
time: false,
auto: undefined,
min: undefined,
max: undefined,
softMin: undefined,
softMax: undefined,
distribution: DistributionType.Linear,
...overrides,
});
describe('UPlotScaleBuilder', () => {
const getFallbackMinMaxSpy = jest.spyOn(
scaleUtils,
'getFallbackMinMaxTimeStamp',
);
beforeEach(() => {
jest.clearAllMocks();
});
it('initializes softMin/softMax correctly when both are 0 (treated as unset)', () => {
const builder = new UPlotScaleBuilder(
createScaleProps({
softMin: 0,
softMax: 0,
}),
);
// Non-time scale so config path uses thresholds pipeline; we just care that
// adjustSoftLimitsWithThresholds receives null soft limits instead of 0/0.
const adjustSpy = jest.spyOn(scaleUtils, 'adjustSoftLimitsWithThresholds');
builder.getConfig();
expect(adjustSpy).toHaveBeenCalledWith(null, null, undefined, undefined);
});
it('handles time scales using explicit min/max and rounds max down to the previous minute', () => {
const min = 1_700_000_000; // seconds
const max = 1_700_000_600; // seconds
const builder = new UPlotScaleBuilder(
createScaleProps({
scaleKey: 'x',
time: true,
min,
max,
}),
);
const config = builder.getConfig();
const xScale = config.x;
expect(xScale.time).toBe(true);
expect(xScale.auto).toBe(false);
expect(Array.isArray(xScale.range)).toBe(true);
const [resolvedMin, resolvedMax] = xScale.range as [number, number];
// min is passed through
expect(resolvedMin).toBe(min);
// max is coerced to "endTime - 1 minute" and rounded down to minute precision
const oneMinuteAgoTimestamp = (max - 60) * 1000;
const currentDate = new Date(oneMinuteAgoTimestamp);
currentDate.setSeconds(0);
currentDate.setMilliseconds(0);
const expectedMax = Math.floor(currentDate.getTime() / 1000);
expect(resolvedMax).toBe(expectedMax);
});
it('falls back to getFallbackMinMaxTimeStamp when time scale has no min/max', () => {
getFallbackMinMaxSpy.mockReturnValue({
fallbackMin: 100,
fallbackMax: 200,
});
const builder = new UPlotScaleBuilder(
createScaleProps({
scaleKey: 'x',
time: true,
min: undefined,
max: undefined,
}),
);
const config = builder.getConfig();
const [resolvedMin, resolvedMax] = config.x.range as [number, number];
expect(getFallbackMinMaxSpy).toHaveBeenCalled();
expect(resolvedMin).toBe(100);
// max is aligned to "fallbackMax - 60 seconds" minute boundary
expect(resolvedMax).toBeLessThanOrEqual(200);
expect(resolvedMax).toBeGreaterThan(100);
});
it('pipes limits through soft-limit adjustment and log-scale normalization before range config', () => {
const adjustSpy = jest.spyOn(scaleUtils, 'adjustSoftLimitsWithThresholds');
const normalizeSpy = jest.spyOn(scaleUtils, 'normalizeLogScaleLimits');
const getRangeConfigSpy = jest.spyOn(scaleUtils, 'getRangeConfig');
const thresholds = {
scaleKey: 'y',
thresholds: [{ thresholdValue: 10 }],
yAxisUnit: 'ms',
};
const builder = new UPlotScaleBuilder(
createScaleProps({
softMin: 1,
softMax: 5,
min: 0,
max: 100,
distribution: DistributionType.Logarithmic,
thresholds,
logBase: 2,
padMinBy: 0.1,
padMaxBy: 0.2,
}),
);
builder.getConfig();
expect(adjustSpy).toHaveBeenCalledWith(1, 5, thresholds.thresholds, 'ms');
expect(normalizeSpy).toHaveBeenCalledWith({
distr: DistributionType.Logarithmic,
logBase: 2,
limits: {
min: 0,
max: 100,
softMin: expect.anything(),
softMax: expect.anything(),
},
});
expect(getRangeConfigSpy).toHaveBeenCalled();
});
it('computes distribution config for non-time scales and wires range function when range is not provided', () => {
const createRangeFnSpy = jest.spyOn(scaleUtils, 'createRangeFunction');
const builder = new UPlotScaleBuilder(
createScaleProps({
scaleKey: 'y',
time: false,
distribution: DistributionType.Linear,
}),
);
const config = builder.getConfig();
const yScale = config.y;
expect(createRangeFnSpy).toHaveBeenCalled();
// range should be a function when not provided explicitly
expect(typeof yScale.range).toBe('function');
// distribution config should be applied
expect(yScale.distr).toBeDefined();
expect(yScale.log).toBeDefined();
});
it('respects explicit range function when provided on props', () => {
const explicitRange: uPlot.Scale.Range = jest.fn(() => [
0,
10,
]) as uPlot.Scale.Range;
const builder = new UPlotScaleBuilder(
createScaleProps({
scaleKey: 'y',
range: explicitRange,
}),
);
const config = builder.getConfig();
const yScale = config.y;
expect(yScale.range).toBe(explicitRange);
});
it('derives auto flag when not explicitly provided, based on hasFixedRange and time', () => {
const getRangeConfigSpy = jest.spyOn(scaleUtils, 'getRangeConfig');
const builder = new UPlotScaleBuilder(
createScaleProps({
min: 0,
max: 100,
time: false,
}),
);
const config = builder.getConfig();
const yScale = config.y;
expect(getRangeConfigSpy).toHaveBeenCalled();
// For non-time scale with fixed min/max, hasFixedRange is true → auto should remain false
expect(yScale.auto).toBe(false);
});
it('merge updates internal min/max/soft limits while preserving other props', () => {
const builder = new UPlotScaleBuilder(
createScaleProps({
scaleKey: 'y',
min: 0,
max: 10,
softMin: 1,
softMax: 9,
time: false,
}),
);
builder.merge({
min: 2,
softMax: undefined,
});
expect(builder.props.min).toBe(2);
expect(builder.props.softMax).toBe(undefined);
expect(builder.props.max).toBe(10);
expect(builder.props.softMin).toBe(1);
expect(builder.props.time).toBe(false);
expect(builder.props.scaleKey).toBe('y');
expect(builder.props.distribution).toBe(DistributionType.Linear);
expect(builder.props.thresholds).toBe(undefined);
});
});
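The time-scale case above encodes a specific rounding rule for the resolved max. A small sketch that mirrors the arithmetic the test asserts (an assumption about the scale builder's internals, derived from the expectation itself):

// Resolved max for a time scale with an explicit end: step back one minute,
// truncate to minute precision, return epoch seconds.
function resolveTimeScaleMax(maxSeconds: number): number {
	const oneMinuteAgo = new Date((maxSeconds - 60) * 1000);
	oneMinuteAgo.setSeconds(0);
	oneMinuteAgo.setMilliseconds(0);
	return Math.floor(oneMinuteAgo.getTime() / 1000);
}

// resolveTimeScaleMax(1_700_000_600) equals the expectedMax computed in the test above.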


@@ -0,0 +1,295 @@
import { PANEL_TYPES } from 'constants/queryBuilder';
import { themeColors } from 'constants/theme';
import uPlot from 'uplot';
import type { SeriesProps } from '../types';
import {
DrawStyle,
LineInterpolation,
LineStyle,
VisibilityMode,
} from '../types';
import { UPlotSeriesBuilder } from '../UPlotSeriesBuilder';
const createBaseProps = (
overrides: Partial<SeriesProps> = {},
): SeriesProps => ({
scaleKey: 'y',
label: 'Requests',
colorMapping: {},
drawStyle: DrawStyle.Line,
isDarkMode: false,
panelType: PANEL_TYPES.TIME_SERIES,
...overrides,
});
interface MockPath extends uPlot.Series.Paths {
name?: string;
}
describe('UPlotSeriesBuilder', () => {
it('maps basic props into uPlot series config', () => {
const builder = new UPlotSeriesBuilder(
createBaseProps({
label: 'Latency',
spanGaps: true,
show: false,
}),
);
const config = builder.getConfig();
expect(config.scale).toBe('y');
expect(config.label).toBe('Latency');
expect(config.spanGaps).toBe(true);
expect(config.show).toBe(false);
expect(config.pxAlign).toBe(true);
expect(typeof config.value).toBe('function');
});
it('uses explicit lineColor when provided, regardless of mapping', () => {
const builder = new UPlotSeriesBuilder(
createBaseProps({
lineColor: '#ff00ff',
colorMapping: { Requests: '#00ff00' },
}),
);
const config = builder.getConfig();
expect(config.stroke).toBe('#ff00ff');
});
it('falls back to theme colors when no label is provided', () => {
const darkBuilder = new UPlotSeriesBuilder(
createBaseProps({
label: undefined,
isDarkMode: true,
lineColor: undefined,
}),
);
const lightBuilder = new UPlotSeriesBuilder(
createBaseProps({
label: undefined,
isDarkMode: false,
lineColor: undefined,
}),
);
const darkConfig = darkBuilder.getConfig();
const lightConfig = lightBuilder.getConfig();
expect(darkConfig.stroke).toBe(themeColors.white);
expect(lightConfig.stroke).toBe(themeColors.black);
});
it('uses colorMapping when available and no explicit lineColor is provided', () => {
const builder = new UPlotSeriesBuilder(
createBaseProps({
label: 'Requests',
colorMapping: { Requests: '#123456' },
lineColor: undefined,
}),
);
const config = builder.getConfig();
expect(config.stroke).toBe('#123456');
});
it('passes through a custom pathBuilder when provided', () => {
const customPaths = (jest.fn() as unknown) as uPlot.Series.PathBuilder;
const builder = new UPlotSeriesBuilder(
createBaseProps({
pathBuilder: customPaths,
}),
);
const config = builder.getConfig();
expect(config.paths).toBe(customPaths);
});
it('does not build line paths when drawStyle is Points, but still renders points', () => {
const builder = new UPlotSeriesBuilder(
createBaseProps({
drawStyle: DrawStyle.Points,
pointSize: 4,
lineWidth: 2,
lineColor: '#aa00aa',
}),
);
const config = builder.getConfig();
expect(typeof config.paths).toBe('function');
expect(config.paths && config.paths({} as uPlot, 1, 0, 10)).toBeNull();
expect(config.points).toBeDefined();
expect(config.points?.stroke).toBe('#aa00aa');
expect(config.points?.fill).toBe('#aa00aa');
expect(config.points?.show).toBe(true);
expect(config.points?.size).toBe(4);
});
it('derives point size based on lineWidth and pointSize', () => {
const smallPointsBuilder = new UPlotSeriesBuilder(
createBaseProps({
lineWidth: 4,
pointSize: 2,
}),
);
const largePointsBuilder = new UPlotSeriesBuilder(
createBaseProps({
lineWidth: 2,
pointSize: 4,
}),
);
const smallConfig = smallPointsBuilder.getConfig();
const largeConfig = largePointsBuilder.getConfig();
expect(smallConfig.points?.size).toBeUndefined();
expect(largeConfig.points?.size).toBe(4);
});
it('uses pointsBuilder when provided instead of default visibility logic', () => {
const pointsBuilder: uPlot.Series.Points.Show = jest.fn(
() => true,
) as uPlot.Series.Points.Show;
const builder = new UPlotSeriesBuilder(
createBaseProps({
pointsBuilder,
drawStyle: DrawStyle.Line,
}),
);
const config = builder.getConfig();
expect(config.points?.show).toBe(pointsBuilder);
});
it('respects VisibilityMode for point visibility when no custom pointsBuilder is given', () => {
const neverPointsBuilder = new UPlotSeriesBuilder(
createBaseProps({
drawStyle: DrawStyle.Line,
showPoints: VisibilityMode.Never,
}),
);
const alwaysPointsBuilder = new UPlotSeriesBuilder(
createBaseProps({
drawStyle: DrawStyle.Line,
showPoints: VisibilityMode.Always,
}),
);
const neverConfig = neverPointsBuilder.getConfig();
const alwaysConfig = alwaysPointsBuilder.getConfig();
expect(neverConfig.points?.show).toBe(false);
expect(alwaysConfig.points?.show).toBe(true);
});
it('applies LineStyle.Dashed and lineCap to line config', () => {
const builder = new UPlotSeriesBuilder(
createBaseProps({
lineStyle: LineStyle.Dashed,
lineCap: 'round' as CanvasLineCap,
}),
);
const config = builder.getConfig();
expect(config.dash).toEqual([10, 10]);
expect(config.cap).toBe('round');
});
it('builds default paths for Line drawStyle and invokes the path builder', () => {
const builder = new UPlotSeriesBuilder(
createBaseProps({
drawStyle: DrawStyle.Line,
lineInterpolation: LineInterpolation.Linear,
}),
);
const config = builder.getConfig();
const result = config.paths?.({} as uPlot, 1, 0, 10);
expect((result as MockPath).name).toBe('linear');
});
it('uses StepBefore and StepAfter interpolation for line paths', () => {
const stepBeforeBuilder = new UPlotSeriesBuilder(
createBaseProps({
drawStyle: DrawStyle.Line,
lineInterpolation: LineInterpolation.StepBefore,
}),
);
const stepAfterBuilder = new UPlotSeriesBuilder(
createBaseProps({
drawStyle: DrawStyle.Line,
lineInterpolation: LineInterpolation.StepAfter,
}),
);
const stepBeforeConfig = stepBeforeBuilder.getConfig();
const stepAfterConfig = stepAfterBuilder.getConfig();
const stepBeforePath = stepBeforeConfig.paths?.({} as uPlot, 1, 0, 5);
const stepAfterPath = stepAfterConfig.paths?.({} as uPlot, 1, 0, 5);
expect((stepBeforePath as MockPath).name).toBe('stepped-(-1)');
expect((stepAfterPath as MockPath).name).toBe('stepped-(1)');
});
it('defaults to spline interpolation when lineInterpolation is Spline or undefined', () => {
const splineBuilder = new UPlotSeriesBuilder(
createBaseProps({
drawStyle: DrawStyle.Line,
lineInterpolation: LineInterpolation.Spline,
}),
);
const defaultBuilder = new UPlotSeriesBuilder(
createBaseProps({ drawStyle: DrawStyle.Line }),
);
const splineConfig = splineBuilder.getConfig();
const defaultConfig = defaultBuilder.getConfig();
const splinePath = splineConfig.paths?.({} as uPlot, 1, 0, 10);
const defaultPath = defaultConfig.paths?.({} as uPlot, 1, 0, 10);
expect((splinePath as MockPath).name).toBe('spline');
expect((defaultPath as MockPath).name).toBe('spline');
});
it('uses generateColor when label has no colorMapping and no lineColor', () => {
const builder = new UPlotSeriesBuilder(
createBaseProps({
label: 'CustomSeries',
colorMapping: {},
lineColor: undefined,
}),
);
const config = builder.getConfig();
expect(config.stroke).toBe('#E64A3C');
});
it('passes through pointsFilter when provided', () => {
const pointsFilter: uPlot.Series.Points.Filter = jest.fn(
(_self, _seriesIdx, _show) => null,
);
const builder = new UPlotSeriesBuilder(
createBaseProps({
pointsFilter,
drawStyle: DrawStyle.Line,
}),
);
const config = builder.getConfig();
expect(config.points?.filter).toBe(pointsFilter);
});
});
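The color cases imply a precedence order for the series stroke. A hedged sketch of that resolution; the helper and its `generateColor` parameter are illustrative, not the builder's actual internals:

import { themeColors } from 'constants/theme';

// Assumed helper mirroring the asserted precedence:
// explicit lineColor, then colorMapping[label], then generateColor(label), then theme fallback.
function resolveStroke(
	lineColor: string | undefined,
	label: string | undefined,
	colorMapping: Record<string, string>,
	isDarkMode: boolean,
	generateColor: (label: string) => string,
): string {
	if (lineColor) return lineColor;
	if (label && colorMapping[label]) return colorMapping[label];
	if (label) return generateColor(label);
	return isDarkMode ? themeColors.white : themeColors.black;
}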


@@ -126,7 +126,45 @@ export enum VisibilityMode {
 	Never = 'never',
 }

-export interface SeriesProps {
+/**
+ * Props for configuring lines
+ */
+export interface LineConfig {
+	lineColor?: string;
+	lineInterpolation?: LineInterpolation;
+	lineStyle?: LineStyle;
+	lineWidth?: number;
+	lineCap?: Series.Cap;
+}
+
+/**
+ * Alignment of bars
+ */
+export enum BarAlignment {
+	After = 1,
+	Before = -1,
+	Center = 0,
+}
+
+/**
+ * Props for configuring bars
+ */
+export interface BarConfig {
+	barAlignment?: BarAlignment;
+	barMaxWidth?: number;
+	barWidthFactor?: number;
+}
+
+/**
+ * Props for configuring points
+ */
+export interface PointsConfig {
+	pointColor?: string;
+	pointSize?: number;
+	showPoints?: VisibilityMode;
+}
+
+export interface SeriesProps extends LineConfig, PointsConfig, BarConfig {
 	scaleKey: string;
 	label?: string;
 	panelType: PANEL_TYPES;
@@ -137,20 +175,7 @@ export interface SeriesProps {
 	pointsBuilder?: Series.Points.Show;
 	show?: boolean;
 	spanGaps?: boolean;
 	isDarkMode?: boolean;
-	// Line config
-	lineColor?: string;
-	lineInterpolation?: LineInterpolation;
-	lineStyle?: LineStyle;
-	lineWidth?: number;
-	lineCap?: Series.Cap;
-	// Points config
-	pointColor?: string;
-	pointSize?: number;
-	showPoints?: VisibilityMode;
 }
 export interface LegendItem {
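With the config split, a `SeriesProps` literal composes all three groups plus the base fields; a sketch with illustrative values (the import path matches the one the hook tests in this diff use):

import { PANEL_TYPES } from 'constants/queryBuilder';
import {
	BarAlignment,
	DrawStyle,
	LineStyle,
	SeriesProps,
	VisibilityMode,
} from 'lib/uPlotV2/config/types';

const props: SeriesProps = {
	// Base fields
	scaleKey: 'y',
	label: 'Requests',
	panelType: PANEL_TYPES.TIME_SERIES,
	colorMapping: {},
	drawStyle: DrawStyle.Line,
	// LineConfig
	lineWidth: 2,
	lineStyle: LineStyle.Dashed,
	// PointsConfig
	showPoints: VisibilityMode.Always,
	// BarConfig: only meaningful for PANEL_TYPES.BAR panels
	barAlignment: BarAlignment.Center,
};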


@@ -0,0 +1,395 @@
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { updateSeriesVisibilityToLocalStorage } from 'container/DashboardContainer/visualization/panels/utils/legendVisibilityUtils';
import {
PlotContextProvider,
usePlotContext,
} from 'lib/uPlotV2/context/PlotContext';
import type uPlot from 'uplot';
jest.mock(
'container/DashboardContainer/visualization/panels/utils/legendVisibilityUtils',
() => ({
updateSeriesVisibilityToLocalStorage: jest.fn(),
}),
);
const mockUpdateSeriesVisibilityToLocalStorage = updateSeriesVisibilityToLocalStorage as jest.MockedFunction<
typeof updateSeriesVisibilityToLocalStorage
>;
interface MockSeries extends Partial<uPlot.Series> {
label?: string;
show?: boolean;
}
const createMockPlot = (series: MockSeries[] = []): uPlot =>
(({
series,
batch: jest.fn((fn: () => void) => fn()),
setSeries: jest.fn(),
} as unknown) as uPlot);
interface TestComponentProps {
plot?: uPlot;
widgetId?: string;
shouldSaveSelectionPreference?: boolean;
}
const TestComponent = ({
plot,
widgetId,
shouldSaveSelectionPreference,
}: TestComponentProps): JSX.Element => {
const {
setPlotContextInitialState,
syncSeriesVisibilityToLocalStorage,
onToggleSeriesVisibility,
onToggleSeriesOnOff,
onFocusSeries,
} = usePlotContext();
const handleInit = (): void => {
if (
!plot ||
!widgetId ||
typeof shouldSaveSelectionPreference !== 'boolean'
) {
return;
}
setPlotContextInitialState({
uPlotInstance: plot,
widgetId,
shouldSaveSelectionPreference,
});
};
return (
<div>
<button type="button" data-testid="init" onClick={handleInit}>
Init
</button>
<button
type="button"
data-testid="sync-visibility"
onClick={(): void => syncSeriesVisibilityToLocalStorage()}
>
Sync visibility
</button>
<button
type="button"
data-testid="toggle-visibility"
onClick={(): void => onToggleSeriesVisibility(1)}
>
Toggle visibility
</button>
<button
type="button"
data-testid="toggle-on-off-1"
onClick={(): void => onToggleSeriesOnOff(1)}
>
Toggle on/off 1
</button>
<button
type="button"
data-testid="toggle-on-off-5"
onClick={(): void => onToggleSeriesOnOff(5)}
>
Toggle on/off 5
</button>
<button
type="button"
data-testid="focus-series"
onClick={(): void => onFocusSeries(1)}
>
Focus series
</button>
</div>
);
};
describe('PlotContext', () => {
afterEach(() => {
jest.clearAllMocks();
});
it('throws when usePlotContext is used outside provider', () => {
const Consumer = (): JSX.Element => {
// eslint-disable-next-line react-hooks/rules-of-hooks
usePlotContext();
return <div />;
};
expect(() => render(<Consumer />)).toThrow(
'Should be used inside the context',
);
});
it('syncSeriesVisibilityToLocalStorage does nothing without plot or widgetId', async () => {
const user = userEvent.setup();
render(
<PlotContextProvider>
<TestComponent />
</PlotContextProvider>,
);
await user.click(screen.getByTestId('sync-visibility'));
expect(mockUpdateSeriesVisibilityToLocalStorage).not.toHaveBeenCalled();
});
it('syncSeriesVisibilityToLocalStorage serializes series visibility to localStorage helper', async () => {
const user = userEvent.setup();
const plot = createMockPlot([
{ label: 'x-axis', show: true },
{ label: 'CPU', show: true },
{ label: 'Memory', show: false },
]);
render(
<PlotContextProvider>
<TestComponent
plot={plot}
widgetId="widget-123"
shouldSaveSelectionPreference
/>
</PlotContextProvider>,
);
await user.click(screen.getByTestId('init'));
await user.click(screen.getByTestId('sync-visibility'));
expect(mockUpdateSeriesVisibilityToLocalStorage).toHaveBeenCalledTimes(1);
expect(mockUpdateSeriesVisibilityToLocalStorage).toHaveBeenCalledWith(
'widget-123',
[
{ label: 'x-axis', show: true },
{ label: 'CPU', show: true },
{ label: 'Memory', show: false },
],
);
});
describe('onToggleSeriesVisibility', () => {
it('does nothing when plot instance is not set', async () => {
const user = userEvent.setup();
render(
<PlotContextProvider>
<TestComponent />
</PlotContextProvider>,
);
await user.click(screen.getByTestId('toggle-visibility'));
// No errors and no calls to localStorage helper
expect(mockUpdateSeriesVisibilityToLocalStorage).not.toHaveBeenCalled();
});
it('highlights a single series and saves visibility when preferences are enabled', async () => {
const user = userEvent.setup();
const series: MockSeries[] = [
{ label: 'x-axis', show: true },
{ label: 'CPU', show: true },
{ label: 'Memory', show: true },
];
const plot = createMockPlot(series);
render(
<PlotContextProvider>
<TestComponent
plot={plot}
widgetId="widget-visibility"
shouldSaveSelectionPreference
/>
</PlotContextProvider>,
);
await user.click(screen.getByTestId('init'));
await user.click(screen.getByTestId('toggle-visibility'));
const setSeries = (plot.setSeries as jest.Mock).mock.calls;
// index 0 is skipped, so we expect calls for 1 and 2
expect(setSeries).toEqual([
[1, { show: true }],
[2, { show: false }],
]);
expect(mockUpdateSeriesVisibilityToLocalStorage).toHaveBeenCalledTimes(1);
expect(mockUpdateSeriesVisibilityToLocalStorage).toHaveBeenCalledWith(
'widget-visibility',
[
{ label: 'x-axis', show: true },
{ label: 'CPU', show: true },
{ label: 'Memory', show: true },
],
);
});
it('resets visibility for all series when toggling the same index again', async () => {
const user = userEvent.setup();
const series: MockSeries[] = [
{ label: 'x-axis', show: true },
{ label: 'CPU', show: true },
{ label: 'Memory', show: true },
];
const plot = createMockPlot(series);
render(
<PlotContextProvider>
<TestComponent
plot={plot}
widgetId="widget-reset"
shouldSaveSelectionPreference
/>
</PlotContextProvider>,
);
await user.click(screen.getByTestId('init'));
await user.click(screen.getByTestId('toggle-visibility'));
(plot.setSeries as jest.Mock).mockClear();
await user.click(screen.getByTestId('toggle-visibility'));
const setSeries = (plot.setSeries as jest.Mock).mock.calls;
// After reset, all non-zero series should be shown
expect(setSeries).toEqual([
[1, { show: true }],
[2, { show: true }],
]);
});
});
describe('onToggleSeriesOnOff', () => {
it('does nothing when plot instance is not set', async () => {
const user = userEvent.setup();
render(
<PlotContextProvider>
<TestComponent />
</PlotContextProvider>,
);
await user.click(screen.getByTestId('toggle-on-off-1'));
expect(mockUpdateSeriesVisibilityToLocalStorage).not.toHaveBeenCalled();
});
it('toggles series show flag and saves visibility when preferences are enabled', async () => {
const user = userEvent.setup();
const series: MockSeries[] = [
{ label: 'x-axis', show: true },
{ label: 'CPU', show: true },
];
const plot = createMockPlot(series);
render(
<PlotContextProvider>
<TestComponent
plot={plot}
widgetId="widget-toggle"
shouldSaveSelectionPreference
/>
</PlotContextProvider>,
);
await user.click(screen.getByTestId('init'));
await user.click(screen.getByTestId('toggle-on-off-1'));
expect(plot.setSeries).toHaveBeenCalledWith(1, { show: false });
expect(mockUpdateSeriesVisibilityToLocalStorage).toHaveBeenCalledTimes(1);
expect(mockUpdateSeriesVisibilityToLocalStorage).toHaveBeenCalledWith(
'widget-toggle',
expect.any(Array),
);
});
it('does not toggle when target series does not exist', async () => {
const user = userEvent.setup();
const series: MockSeries[] = [{ label: 'x-axis', show: true }];
const plot = createMockPlot(series);
render(
<PlotContextProvider>
<TestComponent
plot={plot}
widgetId="widget-missing-series"
shouldSaveSelectionPreference
/>
</PlotContextProvider>,
);
await user.click(screen.getByTestId('init'));
await user.click(screen.getByTestId('toggle-on-off-5'));
expect(plot.setSeries).not.toHaveBeenCalled();
expect(mockUpdateSeriesVisibilityToLocalStorage).not.toHaveBeenCalled();
});
it('does not persist visibility when preferences flag is disabled', async () => {
const user = userEvent.setup();
const series: MockSeries[] = [
{ label: 'x-axis', show: true },
{ label: 'CPU', show: true },
];
const plot = createMockPlot(series);
render(
<PlotContextProvider>
<TestComponent
plot={plot}
widgetId="widget-no-persist"
shouldSaveSelectionPreference={false}
/>
</PlotContextProvider>,
);
await user.click(screen.getByTestId('init'));
await user.click(screen.getByTestId('toggle-on-off-1'));
expect(plot.setSeries).toHaveBeenCalledWith(1, { show: false });
expect(mockUpdateSeriesVisibilityToLocalStorage).not.toHaveBeenCalled();
});
});
describe('onFocusSeries', () => {
it('does nothing when plot instance is not set', async () => {
const user = userEvent.setup();
render(
<PlotContextProvider>
<TestComponent />
</PlotContextProvider>,
);
await user.click(screen.getByTestId('focus-series'));
});
it('sets focus on the given series index', async () => {
const user = userEvent.setup();
const plot = createMockPlot([
{ label: 'x-axis', show: true },
{ label: 'CPU', show: true },
]);
render(
<PlotContextProvider>
<TestComponent
plot={plot}
widgetId="widget-focus"
shouldSaveSelectionPreference={false}
/>
</PlotContextProvider>,
);
await user.click(screen.getByTestId('init'));
await user.click(screen.getByTestId('focus-series'));
expect(plot.setSeries).toHaveBeenCalledWith(1, { focus: true }, false);
});
});
});
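A minimal consumer wiring the two toggle semantics verified above (solo on label click, plain on/off on marker click); the component shape is illustrative:

import { usePlotContext } from 'lib/uPlotV2/context/PlotContext';

// Must render inside <PlotContextProvider>; the hook throws otherwise.
function LegendEntry({
	seriesIndex,
	label,
}: {
	seriesIndex: number;
	label: string;
}): JSX.Element {
	const { onToggleSeriesVisibility, onToggleSeriesOnOff } = usePlotContext();
	return (
		<div>
			{/* Marker click: flip only this series on or off */}
			<button type="button" onClick={(): void => onToggleSeriesOnOff(seriesIndex)}>
				●
			</button>
			{/* Label click: solo this series; a second click restores all */}
			<button
				type="button"
				onClick={(): void => onToggleSeriesVisibility(seriesIndex)}
			>
				{label}
			</button>
		</div>
	);
}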


@@ -0,0 +1,201 @@
import { renderHook } from '@testing-library/react';
import { usePlotContext } from 'lib/uPlotV2/context/PlotContext';
import { useLegendActions } from 'lib/uPlotV2/hooks/useLegendActions';
jest.mock('lib/uPlotV2/context/PlotContext');
const mockUsePlotContext = usePlotContext as jest.MockedFunction<
typeof usePlotContext
>;
describe('useLegendActions', () => {
let onToggleSeriesVisibility: jest.Mock;
let onToggleSeriesOnOff: jest.Mock;
let onFocusSeriesPlot: jest.Mock;
let setPlotContextInitialState: jest.Mock;
let syncSeriesVisibilityToLocalStorage: jest.Mock;
let setFocusedSeriesIndexMock: jest.Mock;
let cancelAnimationFrameSpy: jest.SpyInstance<void, [handle: number]>;
beforeAll(() => {
jest
.spyOn(global, 'requestAnimationFrame')
.mockImplementation((cb: FrameRequestCallback): number => {
cb(0);
return 1;
});
cancelAnimationFrameSpy = jest
.spyOn(global, 'cancelAnimationFrame')
.mockImplementation(() => {});
});
afterAll(() => {
jest.restoreAllMocks();
});
beforeEach(() => {
onToggleSeriesVisibility = jest.fn();
onToggleSeriesOnOff = jest.fn();
onFocusSeriesPlot = jest.fn();
setPlotContextInitialState = jest.fn();
syncSeriesVisibilityToLocalStorage = jest.fn();
setFocusedSeriesIndexMock = jest.fn();
mockUsePlotContext.mockReturnValue({
onToggleSeriesVisibility,
onToggleSeriesOnOff,
onFocusSeries: onFocusSeriesPlot,
setPlotContextInitialState,
syncSeriesVisibilityToLocalStorage,
});
cancelAnimationFrameSpy.mockClear();
});
const createMouseEvent = (options: {
legendItemId?: number;
isMarker?: boolean;
}): any => {
const { legendItemId, isMarker = false } = options;
return {
target: {
dataset: {
...(isMarker ? { isLegendMarker: 'true' } : {}),
},
closest: jest.fn(() =>
legendItemId !== undefined
? { dataset: { legendItemId: String(legendItemId) } }
: null,
),
},
};
};
describe('onLegendClick', () => {
it('toggles series visibility when clicking on legend label', async () => {
const { result } = renderHook(() =>
useLegendActions({
setFocusedSeriesIndex: setFocusedSeriesIndexMock,
focusedSeriesIndex: null,
}),
);
result.current.onLegendClick(createMouseEvent({ legendItemId: 0 }));
expect(onToggleSeriesVisibility).toHaveBeenCalledTimes(1);
expect(onToggleSeriesVisibility).toHaveBeenCalledWith(0);
expect(onToggleSeriesOnOff).not.toHaveBeenCalled();
});
it('toggles series on/off when clicking on marker', async () => {
const { result } = renderHook(() =>
useLegendActions({
setFocusedSeriesIndex: setFocusedSeriesIndexMock,
focusedSeriesIndex: null,
}),
);
result.current.onLegendClick(
createMouseEvent({ legendItemId: 0, isMarker: true }),
);
expect(onToggleSeriesOnOff).toHaveBeenCalledTimes(1);
expect(onToggleSeriesOnOff).toHaveBeenCalledWith(0);
expect(onToggleSeriesVisibility).not.toHaveBeenCalled();
});
it('does nothing when click target is not inside a legend item', async () => {
const { result } = renderHook(() =>
useLegendActions({
setFocusedSeriesIndex: setFocusedSeriesIndexMock,
focusedSeriesIndex: null,
}),
);
result.current.onLegendClick(createMouseEvent({}));
expect(onToggleSeriesOnOff).not.toHaveBeenCalled();
expect(onToggleSeriesVisibility).not.toHaveBeenCalled();
});
});
describe('onFocusSeries', () => {
it('schedules focus update and calls plot focus handler via mouse move', async () => {
const { result } = renderHook(() =>
useLegendActions({
setFocusedSeriesIndex: setFocusedSeriesIndexMock,
focusedSeriesIndex: null,
}),
);
result.current.onLegendMouseMove(createMouseEvent({ legendItemId: 0 }));
expect(setFocusedSeriesIndexMock).toHaveBeenCalledWith(0);
expect(onFocusSeriesPlot).toHaveBeenCalledWith(0);
});
it('cancels previous animation frame before scheduling new one on subsequent mouse moves', async () => {
const { result } = renderHook(() =>
useLegendActions({
setFocusedSeriesIndex: setFocusedSeriesIndexMock,
focusedSeriesIndex: null,
}),
);
result.current.onLegendMouseMove(createMouseEvent({ legendItemId: 0 }));
result.current.onLegendMouseMove(createMouseEvent({ legendItemId: 1 }));
expect(cancelAnimationFrameSpy).toHaveBeenCalled();
});
});
describe('onLegendMouseMove', () => {
it('focuses new series when hovering over different legend item', async () => {
const { result } = renderHook(() =>
useLegendActions({
setFocusedSeriesIndex: setFocusedSeriesIndexMock,
focusedSeriesIndex: 0,
}),
);
result.current.onLegendMouseMove(createMouseEvent({ legendItemId: 1 }));
expect(setFocusedSeriesIndexMock).toHaveBeenCalledWith(1);
expect(onFocusSeriesPlot).toHaveBeenCalledWith(1);
});
it('does nothing when hovering over already focused series', async () => {
const { result } = renderHook(() =>
useLegendActions({
setFocusedSeriesIndex: setFocusedSeriesIndexMock,
focusedSeriesIndex: 1,
}),
);
result.current.onLegendMouseMove(createMouseEvent({ legendItemId: 1 }));
expect(setFocusedSeriesIndexMock).not.toHaveBeenCalled();
expect(onFocusSeriesPlot).not.toHaveBeenCalled();
});
});
describe('onLegendMouseLeave', () => {
it('cancels pending animation frame and clears focus state', async () => {
const { result } = renderHook(() =>
useLegendActions({
setFocusedSeriesIndex: setFocusedSeriesIndexMock,
focusedSeriesIndex: null,
}),
);
result.current.onLegendMouseMove(createMouseEvent({ legendItemId: 0 }));
result.current.onLegendMouseLeave();
expect(cancelAnimationFrameSpy).toHaveBeenCalled();
expect(setFocusedSeriesIndexMock).toHaveBeenCalledWith(null);
expect(onFocusSeriesPlot).toHaveBeenCalledWith(null);
});
});
});
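The mock events above mirror the DOM contract the hook expects: a `data-legend-item-id` attribute on (or above) the click target and `data-is-legend-marker` on the marker element. A sketch of that wiring; the component is illustrative:

import { useState } from 'react';

import { useLegendActions } from 'lib/uPlotV2/hooks/useLegendActions';

function Legend({ labels }: { labels: string[] }): JSX.Element {
	const [focusedSeriesIndex, setFocusedSeriesIndex] = useState<number | null>(
		null,
	);
	const {
		onLegendClick,
		onLegendMouseMove,
		onLegendMouseLeave,
	} = useLegendActions({ focusedSeriesIndex, setFocusedSeriesIndex });
	return (
		<div
			onClick={onLegendClick}
			onMouseMove={onLegendMouseMove}
			onMouseLeave={onLegendMouseLeave}
		>
			{labels.map((label, idx) => (
				<span key={label} data-legend-item-id={idx}>
					<i data-is-legend-marker="true" />
					{label}
				</span>
			))}
		</div>
	);
}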


@@ -0,0 +1,192 @@
import { act, cleanup, renderHook } from '@testing-library/react';
import type { LegendItem } from 'lib/uPlotV2/config/types';
import type { UPlotConfigBuilder } from 'lib/uPlotV2/config/UPlotConfigBuilder';
import useLegendsSync from 'lib/uPlotV2/hooks/useLegendsSync';
import type uPlot from 'uplot';
describe('useLegendsSync', () => {
let requestAnimationFrameSpy: jest.SpyInstance<
number,
[callback: FrameRequestCallback]
>;
let cancelAnimationFrameSpy: jest.SpyInstance<void, [handle: number]>;
beforeAll(() => {
requestAnimationFrameSpy = jest
.spyOn(global, 'requestAnimationFrame')
.mockImplementation((cb: FrameRequestCallback): number => {
cb(0);
return 1;
});
cancelAnimationFrameSpy = jest
.spyOn(global, 'cancelAnimationFrame')
.mockImplementation(() => {});
});
afterEach(() => {
jest.clearAllMocks();
cleanup();
});
afterAll(() => {
jest.restoreAllMocks();
});
const createMockConfig = (
legendItems: Record<number, LegendItem>,
): {
config: UPlotConfigBuilder;
invokeSetSeries: (
seriesIndex: number | null,
opts: { show?: boolean; focus?: boolean },
fireHook?: boolean,
) => void;
} => {
let setSeriesHandler:
| ((u: uPlot, seriesIndex: number | null, opts: uPlot.Series) => void)
| null = null;
const config = ({
getLegendItems: jest.fn(() => legendItems),
addHook: jest.fn(
(
hookName: string,
handler: (
u: uPlot,
seriesIndex: number | null,
opts: uPlot.Series,
) => void,
) => {
if (hookName === 'setSeries') {
setSeriesHandler = handler;
}
return (): void => {
setSeriesHandler = null;
};
},
),
} as unknown) as UPlotConfigBuilder;
const invokeSetSeries = (
seriesIndex: number | null,
opts: { show?: boolean; focus?: boolean },
): void => {
if (setSeriesHandler) {
setSeriesHandler({} as uPlot, seriesIndex, { ...opts });
}
};
return { config, invokeSetSeries };
};
it('initializes legend items from config', () => {
const initialItems: Record<number, LegendItem> = {
1: { seriesIndex: 1, label: 'CPU', show: true, color: '#f00' },
2: { seriesIndex: 2, label: 'Memory', show: false, color: '#0f0' },
};
const { config } = createMockConfig(initialItems);
const { result } = renderHook(() => useLegendsSync({ config }));
expect(config.getLegendItems).toHaveBeenCalledTimes(1);
expect(config.addHook).toHaveBeenCalledWith(
'setSeries',
expect.any(Function),
);
expect(result.current.legendItemsMap).toEqual(initialItems);
});
it('updates focusedSeriesIndex when a series gains focus via setSeries by default', async () => {
const initialItems: Record<number, LegendItem> = {
1: { seriesIndex: 1, label: 'CPU', show: true, color: '#f00' },
};
const { config, invokeSetSeries } = createMockConfig(initialItems);
const { result } = renderHook(() => useLegendsSync({ config }));
expect(result.current.focusedSeriesIndex).toBeNull();
await act(async () => {
invokeSetSeries(1, { focus: true });
});
expect(result.current.focusedSeriesIndex).toBe(1);
});
it('does not update focusedSeriesIndex when subscribeToFocusChange is false', () => {
const initialItems: Record<number, LegendItem> = {
1: { seriesIndex: 1, label: 'CPU', show: true, color: '#f00' },
};
const { config, invokeSetSeries } = createMockConfig(initialItems);
const { result } = renderHook(() =>
useLegendsSync({ config, subscribeToFocusChange: false }),
);
invokeSetSeries(1, { focus: true });
expect(result.current.focusedSeriesIndex).toBeNull();
});
it('updates legendItemsMap visibility when show changes for a series', async () => {
const initialItems: Record<number, LegendItem> = {
0: { seriesIndex: 0, label: 'x-axis', show: true, color: '#000' },
1: { seriesIndex: 1, label: 'CPU', show: true, color: '#f00' },
};
const { config, invokeSetSeries } = createMockConfig(initialItems);
const { result } = renderHook(() => useLegendsSync({ config }));
// Toggle visibility of series 1
await act(async () => {
invokeSetSeries(1, { show: false });
});
expect(result.current.legendItemsMap[1].show).toBe(false);
});
it('ignores visibility updates for unknown legend items or unchanged show values', () => {
const initialItems: Record<number, LegendItem> = {
1: { seriesIndex: 1, label: 'CPU', show: true, color: '#f00' },
};
const { config, invokeSetSeries } = createMockConfig(initialItems);
const { result } = renderHook(() => useLegendsSync({ config }));
const before = result.current.legendItemsMap;
// Unknown series index
invokeSetSeries(5, { show: false });
// Unchanged visibility for existing item
invokeSetSeries(1, { show: true });
const after = result.current.legendItemsMap;
expect(after).toEqual(before);
});
it('cancels pending visibility RAF on unmount', () => {
const initialItems: Record<number, LegendItem> = {
1: { seriesIndex: 1, label: 'CPU', show: true, color: '#f00' },
};
const { config, invokeSetSeries } = createMockConfig(initialItems);
// Override RAF to not immediately invoke callback so we can assert cancellation
requestAnimationFrameSpy.mockImplementationOnce(() => 42);
const { unmount } = renderHook(() => useLegendsSync({ config }));
invokeSetSeries(1, { show: false });
unmount();
expect(cancelAnimationFrameSpy).toHaveBeenCalledWith(42);
});
});
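A sketch of a consumer, using only the fields these tests establish on `LegendItem` (`seriesIndex`, `label`, `show`, `color`); the component is illustrative:

import type { UPlotConfigBuilder } from 'lib/uPlotV2/config/UPlotConfigBuilder';
import useLegendsSync from 'lib/uPlotV2/hooks/useLegendsSync';

function LegendList({ config }: { config: UPlotConfigBuilder }): JSX.Element {
	// legendItemsMap mirrors config.getLegendItems() and follows uPlot 'setSeries'
	// events; focusedSeriesIndex tracks focus unless subscribeToFocusChange is false.
	const { legendItemsMap, focusedSeriesIndex } = useLegendsSync({ config });
	return (
		<ul>
			{Object.values(legendItemsMap).map((item) => (
				<li
					key={item.seriesIndex}
					style={{
						color: item.color,
						opacity: item.show ? 1 : 0.4,
						fontWeight: item.seriesIndex === focusedSeriesIndex ? 700 : 400,
					}}
				>
					{item.label}
				</li>
			))}
		</ul>
	);
}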


@@ -0,0 +1,218 @@
import type uPlot from 'uplot';
import { Axis } from 'uplot';
import {
buildYAxisSizeCalculator,
calculateTextWidth,
getExistingAxisSize,
} from '../axis';
describe('axis utils', () => {
describe('calculateTextWidth', () => {
it('returns 0 when values are undefined or empty', () => {
const mockSelf = ({
ctx: {
measureText: jest.fn(),
font: '',
},
} as unknown) as uPlot;
// uPlot's Axis type declares font as a string, but at runtime it is an array of strings
const mockAxis: Axis = { font: (['12px sans-serif'] as unknown) as string };
expect(calculateTextWidth(mockSelf, mockAxis, undefined)).toBe(0);
expect(calculateTextWidth(mockSelf, mockAxis, [])).toBe(0);
});
it('returns 0 when longest value is empty string or axis has no usable font', () => {
const mockSelf = ({
ctx: {
measureText: jest.fn(),
font: '',
},
} as unknown) as uPlot;
const axisWithoutFont: Axis = { font: '' };
const axisWithEmptyFontArray: Axis = { font: ([] as unknown) as string };
expect(calculateTextWidth(mockSelf, axisWithoutFont, [''])).toBe(0);
expect(
calculateTextWidth(mockSelf, axisWithEmptyFontArray, ['a', 'bb']),
).toBe(0);
});
it('measures longest value using canvas context and axis font', () => {
const measureText = jest.fn(() => ({ width: 100 }));
const mockSelf = ({
ctx: {
font: '',
measureText,
},
} as unknown) as uPlot;
const mockAxis: Axis = { font: (['14px Arial'] as unknown) as string };
const values = ['1', '1234', '12'];
const dpr =
((global as unknown) as { devicePixelRatio?: number }).devicePixelRatio ??
1;
const result = calculateTextWidth(mockSelf, mockAxis, values);
expect(measureText).toHaveBeenCalledWith('1234');
expect(mockSelf.ctx.font).toBe('14px Arial');
expect(result).toBe(100 / dpr);
});
});
describe('getExistingAxisSize', () => {
it('returns internal _size when present', () => {
const axis: any = {
_size: 42,
size: 10,
};
const result = getExistingAxisSize({
uplotInstance: ({} as unknown) as uPlot,
axis,
axisIdx: 0,
cycleNum: 0,
});
expect(result).toBe(42);
});
it('invokes size function when _size is not set', () => {
const sizeFn = jest.fn(() => 24);
const axis: Axis = { size: sizeFn };
const instance = ({} as unknown) as uPlot;
const result = getExistingAxisSize({
uplotInstance: instance,
axis,
values: ['10', '20'],
axisIdx: 1,
cycleNum: 2,
});
expect(sizeFn).toHaveBeenCalledWith(instance, ['10', '20'], 1, 2);
expect(result).toBe(24);
});
it('returns numeric size or 0 when neither _size nor size are provided', () => {
const axisWithSize: Axis = { size: 16 };
const axisWithoutSize: Axis = {};
const instance = ({} as unknown) as uPlot;
expect(
getExistingAxisSize({
uplotInstance: instance,
axis: axisWithSize,
axisIdx: 0,
cycleNum: 0,
}),
).toBe(16);
expect(
getExistingAxisSize({
uplotInstance: instance,
axis: axisWithoutSize,
axisIdx: 0,
cycleNum: 0,
}),
).toBe(0);
});
});
describe('buildYAxisSizeCalculator', () => {
it('delegates to getExistingAxisSize when cycleNum > 1', () => {
const sizeCalculator = buildYAxisSizeCalculator(5);
const axis: any = {
_size: 80,
ticks: { size: 10 },
font: ['12px sans-serif'],
};
const measureText = jest.fn(() => ({ width: 60 }));
const self = ({
axes: [axis],
ctx: {
font: '',
measureText,
},
} as unknown) as uPlot;
if (typeof sizeCalculator === 'number') {
throw new Error('Size calculator is a number');
}
const result = sizeCalculator(self, ['10', '20'], 0, 2);
expect(result).toBe(80);
expect(measureText).not.toHaveBeenCalled();
});
it('computes size from ticks, gap and text width when cycleNum <= 1', () => {
const gap = 7;
const sizeCalculator = buildYAxisSizeCalculator(gap);
const axis: Axis = {
ticks: { size: 12 },
font: (['12px sans-serif'] as unknown) as string,
};
const measureText = jest.fn(() => ({ width: 50 }));
const self = ({
axes: [axis],
ctx: {
font: '',
measureText,
},
} as unknown) as uPlot;
const dpr =
((global as unknown) as { devicePixelRatio?: number }).devicePixelRatio ??
1;
const expected = Math.ceil(12 + gap + 50 / dpr);
if (typeof sizeCalculator === 'number') {
throw new Error('Size calculator is a number');
}
const result = sizeCalculator(self, ['short', 'the-longest'], 0, 0);
expect(measureText).toHaveBeenCalledWith('the-longest');
expect(result).toBe(expected);
});
it('uses 0 ticks size when ticks are not defined', () => {
const gap = 4;
const sizeCalculator = buildYAxisSizeCalculator(gap);
const axis: Axis = {
font: (['12px sans-serif'] as unknown) as string,
};
const measureText = jest.fn(() => ({ width: 40 }));
const self = ({
axes: [axis],
ctx: {
font: '',
measureText,
},
} as unknown) as uPlot;
const dpr =
((global as unknown) as { devicePixelRatio?: number }).devicePixelRatio ??
1;
const expected = Math.ceil(gap + 40 / dpr);
if (typeof sizeCalculator === 'number') {
throw new Error('Size calculator is a number');
}
const result = sizeCalculator(self, ['1', '123'], 0, 1);
expect(result).toBe(expected);
});
});
});

View File

@@ -0,0 +1,80 @@
import uPlot, { Axis } from 'uplot';
/**
* Calculate text width for longest value
*/
export function calculateTextWidth(
self: uPlot,
axis: Axis,
values: string[] | undefined,
): number {
if (!values || values.length === 0) {
return 0;
}
// Find longest value
const longestVal = values.reduce(
(acc, val) => (val.length > acc.length ? val : acc),
'',
);
if (longestVal === '' || !axis.font?.[0]) {
return 0;
}
self.ctx.font = axis.font[0];
return self.ctx.measureText(longestVal).width / devicePixelRatio;
}
export function getExistingAxisSize({
uplotInstance,
axis,
values,
axisIdx,
cycleNum,
}: {
uplotInstance: uPlot;
axis: Axis;
values?: string[];
axisIdx: number;
cycleNum: number;
}): number {
const internalSize = (axis as { _size?: number })._size;
if (internalSize !== undefined) {
return internalSize;
}
const existingSize = axis.size;
if (typeof existingSize === 'function') {
return existingSize(uplotInstance, values ?? [], axisIdx, cycleNum);
}
return existingSize ?? 0;
}
export function buildYAxisSizeCalculator(gap: number): uPlot.Axis.Size {
return (
self: uPlot,
values: string[] | undefined,
axisIdx: number,
cycleNum: number,
): number => {
const axis = self.axes[axisIdx];
// Bail out, force convergence
if (cycleNum > 1) {
return getExistingAxisSize({
uplotInstance: self,
axis,
values,
axisIdx,
cycleNum,
});
}
let axisSize = (axis.ticks?.size ?? 0) + gap;
axisSize += calculateTextWidth(self, axis, values);
return Math.ceil(axisSize);
};
}
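A usage sketch: hand the calculator to a y-axis so the measured width converges after the first layout cycle instead of oscillating (the import path is an assumption; data here is empty for brevity):

import uPlot from 'uplot';

import { buildYAxisSizeCalculator } from 'lib/uPlotV2/utils/axis';

const opts: uPlot.Options = {
	width: 600,
	height: 300,
	series: [{}, { stroke: 'red' }],
	axes: [
		{}, // x-axis keeps uPlot defaults
		// y-axis width = ticks size + 8px gap + widest label width
		{ scale: 'y', size: buildYAxisSizeCalculator(8) },
	],
};

new uPlot(opts, ([[], []] as unknown) as uPlot.AlignedData, document.body);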


@@ -1,11 +1,25 @@
-export function resolveSeriesVisibility(
-	label: string,
-	seriesShow: boolean | undefined | null,
-	visibilityMap: Map<string, boolean> | null,
-	isAnySeriesHidden: boolean,
-): boolean {
-	if (isAnySeriesHidden) {
-		return visibilityMap?.get(label) ?? false;
+import { SeriesVisibilityState } from 'container/DashboardContainer/visualization/panels/types';
+
+export function resolveSeriesVisibility({
+	seriesIndex,
+	seriesShow,
+	seriesLabel,
+	seriesVisibilityState,
+	isAnySeriesHidden,
+}: {
+	seriesIndex: number;
+	seriesShow: boolean | undefined | null;
+	seriesLabel: string;
+	seriesVisibilityState: SeriesVisibilityState | null;
+	isAnySeriesHidden: boolean;
+}): boolean {
+	if (
+		isAnySeriesHidden &&
+		seriesVisibilityState?.visibility &&
+		seriesVisibilityState.labels.length > seriesIndex &&
+		seriesVisibilityState.labels[seriesIndex] === seriesLabel
+	) {
+		return seriesVisibilityState.visibility[seriesIndex] ?? false;
 	}
 	return seriesShow ?? true;
 }
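A usage sketch, with stored state shaped like the LOCAL_STORAGE fixture in the builder tests earlier in this diff (the import path is an assumption):

import { resolveSeriesVisibility } from 'container/DashboardContainer/visualization/panels/utils/seriesVisibilityUtils';

const stored = {
	labels: ['x-axis', 'Requests', 'Errors'],
	visibility: [true, true, false],
};

// Index and label must both line up with the stored arrays for the override to apply.
resolveSeriesVisibility({
	seriesIndex: 2,
	seriesShow: true,
	seriesLabel: 'Errors',
	seriesVisibilityState: stored,
	isAnySeriesHidden: true,
}); // → false: the stored visibility wins

resolveSeriesVisibility({
	seriesIndex: 2,
	seriesShow: true,
	seriesLabel: 'Renamed',
	seriesVisibilityState: stored,
	isAnySeriesHidden: true,
}); // → true: a label mismatch falls back to seriesShow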


@@ -6,6 +6,8 @@ const DOCLINKS = {
'https://signoz.io/docs/product-features/trace-explorer/?utm_source=product&utm_medium=traces-explorer-trace-tab#traces-view',
METRICS_EXPLORER_EMPTY_STATE:
'https://signoz.io/docs/userguide/send-metrics-cloud/',
EXTERNAL_API_MONITORING:
'https://signoz.io/docs/external-api-monitoring/overview/',
};
export default DOCLINKS;


@@ -278,7 +278,7 @@ func (q *querier) QueryRange(ctx context.Context, orgID valuer.UUID, req *qbtype
 	var metricTemporality map[string]metrictypes.Temporality
 	if len(metricNames) > 0 {
 		var err error
-		metricTemporality, err = q.metadataStore.FetchTemporalityMulti(ctx, metricNames...)
+		metricTemporality, err = q.metadataStore.FetchTemporalityMulti(ctx, req.Start, req.End, metricNames...)
 		if err != nil {
 			q.logger.WarnContext(ctx, "failed to fetch metric temporality", "error", err, "metrics", metricNames)
 			// Continue without temporality - statement builder will handle unspecified


@@ -625,7 +625,7 @@ func (r *BaseRule) extractMetricAndGroupBys(ctx context.Context) (map[string][]s
 // FilterNewSeries filters out items that are too new based on metadata first_seen timestamps.
 // Returns the filtered series (old ones) excluding new series that are still within the grace period.
-func (r *BaseRule) FilterNewSeries(ctx context.Context, ts time.Time, series []*v3.Series) ([]*v3.Series, error) {
+func (r *BaseRule) FilterNewSeries(ctx context.Context, ts time.Time, series []*qbtypes.TimeSeries) ([]*qbtypes.TimeSeries, error) {
 	// Extract metric names and groupBy keys
 	metricToGroupedFields, err := r.extractMetricAndGroupBys(ctx)
 	if err != nil {
@@ -642,7 +642,7 @@ func (r *BaseRule) FilterNewSeries(ctx context.Context, ts time.Time, series []*
 	seriesIdxToLookupKeys := make(map[int][]telemetrytypes.MetricMetadataLookupKey) // series index -> lookup keys
 	for i := 0; i < len(series); i++ {
-		metricLabelMap := series[i].Labels
+		metricLabelMap := series[i].LabelsMap()
 		// Collect groupBy attribute-value pairs for this series
 		seriesKeys := make([]telemetrytypes.MetricMetadataLookupKey, 0)
@@ -689,7 +689,7 @@ func (r *BaseRule) FilterNewSeries(ctx context.Context, ts time.Time, series []*
 	}
 	// Filter series based on first_seen + delay
-	filteredSeries := make([]*v3.Series, 0, len(series))
+	filteredSeries := make([]*qbtypes.TimeSeries, 0, len(series))
 	evalTimeMs := ts.UnixMilli()
 	newGroupEvalDelayMs := r.newGroupEvalDelay.Milliseconds()
@@ -727,7 +727,7 @@ func (r *BaseRule) FilterNewSeries(ctx context.Context, ts time.Time, series []*
 		// Check if first_seen + delay has passed
 		if maxFirstSeen+newGroupEvalDelayMs > evalTimeMs {
 			// Still within grace period, skip this series
-			r.logger.InfoContext(ctx, "Skipping new series", "rule_name", r.Name(), "series_idx", i, "max_first_seen", maxFirstSeen, "eval_time_ms", evalTimeMs, "delay_ms", newGroupEvalDelayMs, "labels", series[i].Labels)
+			r.logger.InfoContext(ctx, "Skipping new series", "rule_name", r.Name(), "series_idx", i, "max_first_seen", maxFirstSeen, "eval_time_ms", evalTimeMs, "delay_ms", newGroupEvalDelayMs, "labels", series[i].LabelsMap())
 			continue
 		}


@@ -26,33 +26,33 @@ import (
"github.com/SigNoz/signoz/pkg/valuer"
)
-// createTestSeries creates a *v3.Series with the given labels and optional points
+// createTestSeries creates a *qbtypes.TimeSeries with the given labels and optional values
 // We don't strictly need points on the series: only the labels determine whether a series is new or old.
 // The labels form the lookup key whose first_seen timestamp is then checked in the metadata table.
-func createTestSeries(labels map[string]string, points []v3.Point) *v3.Series {
+func createTestSeries(labels map[string]string, points []*qbtypes.TimeSeriesValue) *qbtypes.TimeSeries {
 	if points == nil {
-		points = []v3.Point{}
+		points = []*qbtypes.TimeSeriesValue{}
 	}
-	return &v3.Series{
-		Labels: labels,
-		Points: points,
+	lbls := make([]*qbtypes.Label, 0, len(labels))
+	for k, v := range labels {
+		lbls = append(lbls, &qbtypes.Label{Key: telemetrytypes.TelemetryFieldKey{Name: k}, Value: v})
+	}
+	return &qbtypes.TimeSeries{
+		Labels: lbls,
+		Values: points,
 	}
 }

-// seriesEqual compares two v3.Series by their labels
+// seriesEqual compares two *qbtypes.TimeSeries by their labels
 // Returns true if the series have the same labels (order doesn't matter)
-func seriesEqual(s1, s2 *v3.Series) bool {
-	if s1 == nil && s2 == nil {
-		return true
-	}
-	if s1 == nil || s2 == nil {
+func seriesEqual(s1, s2 *qbtypes.TimeSeries) bool {
+	m1 := s1.LabelsMap()
+	m2 := s2.LabelsMap()
+	if len(m1) != len(m2) {
 		return false
 	}
-	if len(s1.Labels) != len(s2.Labels) {
-		return false
-	}
-	for k, v := range s1.Labels {
-		if s2.Labels[k] != v {
+	for k, v := range m1 {
+		if m2[k] != v {
 			return false
 		}
 	}
@@ -149,11 +149,11 @@ func createPostableRule(compositeQuery *v3.CompositeQuery) ruletypes.PostableRul
type filterNewSeriesTestCase struct {
name string
compositeQuery *v3.CompositeQuery
-series []*v3.Series
+series []*qbtypes.TimeSeries
firstSeenMap map[telemetrytypes.MetricMetadataLookupKey]int64
newGroupEvalDelay valuer.TextDuration
evalTime time.Time
-expectedFiltered []*v3.Series // series that should be in the final filtered result (old enough)
+expectedFiltered []*qbtypes.TimeSeries // series that should be in the final filtered result (old enough)
expectError bool
}
@@ -193,7 +193,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{
series: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc-old", "env": "prod"}, nil),
createTestSeries(map[string]string{"service_name": "svc-new", "env": "prod"}, nil),
createTestSeries(map[string]string{"service_name": "svc-missing", "env": "stage"}, nil),
@@ -205,7 +205,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
),
newGroupEvalDelay: defaultNewGroupEvalDelay,
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{
expectedFiltered: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc-old", "env": "prod"}, nil),
createTestSeries(map[string]string{"service_name": "svc-missing", "env": "stage"}, nil),
}, // svc-old and svc-missing should be included; svc-new is filtered out
@@ -227,7 +227,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{
series: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc-new1", "env": "prod"}, nil),
createTestSeries(map[string]string{"service_name": "svc-new2", "env": "stage"}, nil),
},
@@ -237,7 +237,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
),
newGroupEvalDelay: defaultNewGroupEvalDelay,
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{}, // all should be filtered out (new series)
expectedFiltered: []*qbtypes.TimeSeries{}, // all should be filtered out (new series)
},
{
name: "all old series - ClickHouse query",
@@ -254,7 +254,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{
series: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc-old1", "env": "prod"}, nil),
createTestSeries(map[string]string{"service_name": "svc-old2", "env": "stage"}, nil),
},
@@ -264,7 +264,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
),
newGroupEvalDelay: defaultNewGroupEvalDelay,
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{
expectedFiltered: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc-old1", "env": "prod"}, nil),
createTestSeries(map[string]string{"service_name": "svc-old2", "env": "stage"}, nil),
}, // all should be included (old series)
@@ -292,13 +292,13 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{
series: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
},
firstSeenMap: make(map[telemetrytypes.MetricMetadataLookupKey]int64),
newGroupEvalDelay: defaultNewGroupEvalDelay,
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{
expectedFiltered: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
}, // early return, no filtering - all series included
},
@@ -322,13 +322,13 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{
series: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
},
firstSeenMap: make(map[telemetrytypes.MetricMetadataLookupKey]int64),
newGroupEvalDelay: defaultNewGroupEvalDelay,
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{
expectedFiltered: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
}, // early return, no filtering - all series included
},
@@ -358,13 +358,13 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{
series: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"status": "200"}, nil), // no service_name or env
},
firstSeenMap: make(map[telemetrytypes.MetricMetadataLookupKey]int64),
newGroupEvalDelay: defaultNewGroupEvalDelay,
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{
expectedFiltered: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"status": "200"}, nil),
}, // series included as we can't decide if it's new or old
},
@@ -385,7 +385,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{
series: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc-old", "env": "prod"}, nil),
createTestSeries(map[string]string{"service_name": "svc-no-metadata", "env": "prod"}, nil),
},
@@ -393,7 +393,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
// svc-no-metadata has no entry in firstSeenMap
newGroupEvalDelay: defaultNewGroupEvalDelay,
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{
expectedFiltered: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc-old", "env": "prod"}, nil),
createTestSeries(map[string]string{"service_name": "svc-no-metadata", "env": "prod"}, nil),
}, // both should be included - svc-old is old, svc-no-metadata can't be decided
@@ -413,7 +413,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{
series: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc-partial", "env": "prod"}, nil),
},
// Only provide metadata for service_name, not env
@@ -423,7 +423,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
newGroupEvalDelay: defaultNewGroupEvalDelay,
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{
expectedFiltered: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc-partial", "env": "prod"}, nil),
}, // has some metadata, uses max first_seen which is old
},
@@ -453,11 +453,11 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{},
series: []*qbtypes.TimeSeries{},
firstSeenMap: make(map[telemetrytypes.MetricMetadataLookupKey]int64),
newGroupEvalDelay: defaultNewGroupEvalDelay,
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{},
expectedFiltered: []*qbtypes.TimeSeries{},
},
{
name: "zero delay - Builder",
@@ -485,13 +485,13 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{
series: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
},
firstSeenMap: createFirstSeenMap("request_total", defaultGroupByFields, defaultEvalTime, defaultDelay, true, "svc1", "prod"),
newGroupEvalDelay: valuer.TextDuration{}, // zero delay
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{
expectedFiltered: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
}, // with zero delay, all series pass
},
@@ -526,7 +526,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{
series: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
},
firstSeenMap: mergeFirstSeenMaps(
@@ -535,7 +535,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
),
newGroupEvalDelay: defaultNewGroupEvalDelay,
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{
expectedFiltered: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
},
},
@@ -565,7 +565,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{
series: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc1", "env": "prod"}, nil),
},
// service_name is old, env is new - should use max (new)
@@ -575,7 +575,7 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
),
newGroupEvalDelay: defaultNewGroupEvalDelay,
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{}, // max first_seen is new, so should be filtered out
expectedFiltered: []*qbtypes.TimeSeries{}, // max first_seen is new, so should be filtered out
},
{
name: "Logs query - should skip filtering and return empty skip indexes",
@@ -600,14 +600,14 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{
series: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc1"}, nil),
createTestSeries(map[string]string{"service_name": "svc2"}, nil),
},
firstSeenMap: make(map[telemetrytypes.MetricMetadataLookupKey]int64),
newGroupEvalDelay: defaultNewGroupEvalDelay,
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{
expectedFiltered: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc1"}, nil),
createTestSeries(map[string]string{"service_name": "svc2"}, nil),
}, // Logs queries should return early, no filtering - all included
@@ -635,14 +635,14 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
},
},
},
series: []*v3.Series{
series: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc1"}, nil),
createTestSeries(map[string]string{"service_name": "svc2"}, nil),
},
firstSeenMap: make(map[telemetrytypes.MetricMetadataLookupKey]int64),
newGroupEvalDelay: defaultNewGroupEvalDelay,
evalTime: defaultEvalTime,
expectedFiltered: []*v3.Series{
expectedFiltered: []*qbtypes.TimeSeries{
createTestSeries(map[string]string{"service_name": "svc1"}, nil),
createTestSeries(map[string]string{"service_name": "svc2"}, nil),
}, // Traces queries should return early, no filtering - all included
@@ -724,14 +724,14 @@ func TestBaseRule_FilterNewSeries(t *testing.T) {
// Build a map to count occurrences of each unique label combination in expected series
expectedCounts := make(map[string]int)
for _, expected := range tt.expectedFiltered {
key := labelsKey(expected.Labels)
key := labelsKey(expected.LabelsMap())
expectedCounts[key]++
}
// Build a map to count occurrences of each unique label combination in filtered series
actualCounts := make(map[string]int)
for _, filtered := range filteredSeries {
key := labelsKey(filtered.Labels)
key := labelsKey(filtered.LabelsMap())
actualCounts[key]++
}


@@ -12,19 +12,18 @@ import (
"github.com/SigNoz/signoz/pkg/query-service/formatter"
"github.com/SigNoz/signoz/pkg/query-service/interfaces"
"github.com/SigNoz/signoz/pkg/query-service/model"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
qslabels "github.com/SigNoz/signoz/pkg/query-service/utils/labels"
"github.com/SigNoz/signoz/pkg/query-service/utils/times"
"github.com/SigNoz/signoz/pkg/query-service/utils/timestamp"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/ruletypes"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/prometheus/prometheus/promql"
)
type PromRule struct {
*BaseRule
version string
prometheus prometheus.Prometheus
}
@@ -48,7 +47,6 @@ func NewPromRule(
p := PromRule{
BaseRule: baseRule,
version: postableRule.Version,
prometheus: prometheus,
}
p.logger = logger
@@ -83,48 +81,30 @@ func (r *PromRule) GetSelectedQuery() string {
}
func (r *PromRule) getPqlQuery() (string, error) {
if r.version == "v5" {
if len(r.ruleCondition.CompositeQuery.Queries) > 0 {
selectedQuery := r.GetSelectedQuery()
for _, item := range r.ruleCondition.CompositeQuery.Queries {
switch item.Type {
case qbtypes.QueryTypePromQL:
promQuery, ok := item.Spec.(qbtypes.PromQuery)
if !ok {
return "", errors.NewInvalidInputf(errors.CodeInvalidInput, "invalid promql query spec %T", item.Spec)
}
if promQuery.Name == selectedQuery {
return promQuery.Query, nil
}
if len(r.ruleCondition.CompositeQuery.Queries) > 0 {
selectedQuery := r.GetSelectedQuery()
for _, item := range r.ruleCondition.CompositeQuery.Queries {
switch item.Type {
case qbtypes.QueryTypePromQL:
promQuery, ok := item.Spec.(qbtypes.PromQuery)
if !ok {
return "", errors.NewInvalidInputf(errors.CodeInvalidInput, "invalid promql query spec %T", item.Spec)
}
if promQuery.Name == selectedQuery {
return promQuery.Query, nil
}
}
}
return "", fmt.Errorf("invalid promql rule setup")
}
if r.ruleCondition.CompositeQuery.QueryType == v3.QueryTypePromQL {
if len(r.ruleCondition.CompositeQuery.PromQueries) > 0 {
selectedQuery := r.GetSelectedQuery()
if promQuery, ok := r.ruleCondition.CompositeQuery.PromQueries[selectedQuery]; ok {
query := promQuery.Query
if query == "" {
return query, fmt.Errorf("a promquery needs to be set for this rule to function")
}
return query, nil
}
}
}
return "", fmt.Errorf("invalid promql rule query")
return "", fmt.Errorf("invalid promql rule setup")
}
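A hedged sketch of the resolution path: getPqlQuery walks the v5 envelopes and returns the query string of the PromQL spec whose Name matches the selected query (the query text here is illustrative):

queries := []qbtypes.QueryEnvelope{
	{
		Type: qbtypes.QueryTypePromQL,
		Spec: qbtypes.PromQuery{Name: "A", Query: "sum(rate(http_requests_total[5m]))"},
	},
}
// With GetSelectedQuery() == "A", getPqlQuery() returns
// "sum(rate(http_requests_total[5m]))"; with no matching envelope it
// returns the "invalid promql rule setup" error instead.
_ = queries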
func (r *PromRule) matrixToV3Series(res promql.Matrix) []*v3.Series {
v3Series := make([]*v3.Series, 0, len(res))
func matrixToTimeSeries(res promql.Matrix) []*qbtypes.TimeSeries {
result := make([]*qbtypes.TimeSeries, 0, len(res))
for _, series := range res {
commonSeries := toCommonSeries(series)
v3Series = append(v3Series, &commonSeries)
result = append(result, promSeriesToTimeSeries(series))
}
return v3Series
return result
}
func (r *PromRule) buildAndRunQuery(ctx context.Context, ts time.Time) (ruletypes.Vector, error) {
@@ -143,31 +123,31 @@ func (r *PromRule) buildAndRunQuery(ctx context.Context, ts time.Time) (ruletype
return nil, err
}
matrixToProcess := r.matrixToV3Series(res)
seriesToProcess := matrixToTimeSeries(res)
hasData := len(matrixToProcess) > 0
hasData := len(seriesToProcess) > 0
if missingDataAlert := r.HandleMissingDataAlert(ctx, ts, hasData); missingDataAlert != nil {
return ruletypes.Vector{*missingDataAlert}, nil
}
// Filter out new series if newGroupEvalDelay is configured
if r.ShouldSkipNewGroups() {
filteredSeries, filterErr := r.BaseRule.FilterNewSeries(ctx, ts, matrixToProcess)
filteredSeries, filterErr := r.BaseRule.FilterNewSeries(ctx, ts, seriesToProcess)
// In case of error we log the error and continue with the original series
if filterErr != nil {
r.logger.ErrorContext(ctx, "Error filtering new series, ", "error", filterErr, "rule_name", r.Name())
} else {
matrixToProcess = filteredSeries
seriesToProcess = filteredSeries
}
}
var resultVector ruletypes.Vector
for _, series := range matrixToProcess {
for _, series := range seriesToProcess {
if !r.Condition().ShouldEval(series) {
r.logger.InfoContext(
ctx, "not enough data points to evaluate series, skipping",
"rule_id", r.ID(), "num_points", len(series.Points), "required_points", r.Condition().RequiredNumPoints,
"rule_id", r.ID(), "num_points", len(series.Values), "required_points", r.Condition().RequiredNumPoints,
)
continue
}
@@ -454,26 +434,25 @@ func (r *PromRule) RunAlertQuery(ctx context.Context, qs string, start, end time
}
}
func toCommonSeries(series promql.Series) v3.Series {
commonSeries := v3.Series{
Labels: make(map[string]string),
LabelsArray: make([]map[string]string, 0),
Points: make([]v3.Point, 0),
func promSeriesToTimeSeries(series promql.Series) *qbtypes.TimeSeries {
ts := &qbtypes.TimeSeries{
Labels: make([]*qbtypes.Label, 0, len(series.Metric)),
Values: make([]*qbtypes.TimeSeriesValue, 0, len(series.Floats)),
}
for _, lbl := range series.Metric {
commonSeries.Labels[lbl.Name] = lbl.Value
commonSeries.LabelsArray = append(commonSeries.LabelsArray, map[string]string{
lbl.Name: lbl.Value,
ts.Labels = append(ts.Labels, &qbtypes.Label{
Key: telemetrytypes.TelemetryFieldKey{Name: lbl.Name},
Value: lbl.Value,
})
}
for _, f := range series.Floats {
commonSeries.Points = append(commonSeries.Points, v3.Point{
ts.Values = append(ts.Values, &qbtypes.TimeSeriesValue{
Timestamp: f.T,
Value: f.F,
})
}
return commonSeries
return ts
}
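A short usage sketch of the conversion (assuming github.com/prometheus/prometheus/model/labels is imported as labels; values illustrative):

in := promql.Series{
	Metric: labels.FromStrings("service_name", "svc1"),
	Floats: []promql.FPoint{{T: 1700000000000, F: 0.5}},
}
out := promSeriesToTimeSeries(in)
// out.Labels[0].Key.Name == "service_name"
// out.Values[0].Timestamp == 1700000000000, out.Values[0].Value == 0.5
_ = out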


@@ -20,6 +20,7 @@ import (
qslabels "github.com/SigNoz/signoz/pkg/query-service/utils/labels"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/telemetrystore/telemetrystoretest"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/ruletypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
@@ -47,9 +48,13 @@ func TestPromRuleEval(t *testing.T) {
RuleCondition: &ruletypes.RuleCondition{
CompositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypePromQL,
PromQueries: map[string]*v3.PromQuery{
"A": {
Query: "dummy_query", // This is not used in the test
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{
Name: "A",
Query: "dummy_query", // This is not used in the test
},
},
},
},
@@ -62,7 +67,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp string
matchType string
target float64
expectedAlertSample v3.Point
expectedAlertSample float64
expectedVectorValues []float64 // Expected values in result vector
}{
// Test cases for Equals Always
@@ -80,7 +85,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "3", // Equals
matchType: "2", // Always
target: 0.0,
expectedAlertSample: v3.Point{Value: 0.0},
expectedAlertSample: 0.0,
expectedVectorValues: []float64{0.0},
},
{
@@ -145,7 +150,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "3", // Equals
matchType: "1", // Once
target: 0.0,
expectedAlertSample: v3.Point{Value: 0.0},
expectedAlertSample: 0.0,
expectedVectorValues: []float64{0.0},
},
{
@@ -162,7 +167,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "3", // Equals
matchType: "1", // Once
target: 0.0,
expectedAlertSample: v3.Point{Value: 0.0},
expectedAlertSample: 0.0,
},
{
values: pql.Series{
@@ -178,7 +183,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "3", // Equals
matchType: "1", // Once
target: 0.0,
expectedAlertSample: v3.Point{Value: 0.0},
expectedAlertSample: 0.0,
},
{
values: pql.Series{
@@ -211,7 +216,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "1", // Greater Than
matchType: "2", // Always
target: 1.5,
expectedAlertSample: v3.Point{Value: 2.0},
expectedAlertSample: 2.0,
expectedVectorValues: []float64{2.0},
},
{
@@ -228,7 +233,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "1", // Above
matchType: "2", // Always
target: 2.0,
expectedAlertSample: v3.Point{Value: 3.0},
expectedAlertSample: 3.0,
},
{
values: pql.Series{
@@ -244,7 +249,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "2", // Below
matchType: "2", // Always
target: 13.0,
expectedAlertSample: v3.Point{Value: 12.0},
expectedAlertSample: 12.0,
},
{
values: pql.Series{
@@ -276,7 +281,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "1", // Greater Than
matchType: "1", // Once
target: 4.5,
expectedAlertSample: v3.Point{Value: 10.0},
expectedAlertSample: 10.0,
expectedVectorValues: []float64{10.0},
},
{
@@ -339,7 +344,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "4", // Not Equals
matchType: "2", // Always
target: 0.0,
expectedAlertSample: v3.Point{Value: 1.0},
expectedAlertSample: 1.0,
},
{
values: pql.Series{
@@ -371,7 +376,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "4", // Not Equals
matchType: "1", // Once
target: 0.0,
expectedAlertSample: v3.Point{Value: 1.0},
expectedAlertSample: 1.0,
},
{
values: pql.Series{
@@ -402,7 +407,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "4", // Not Equals
matchType: "1", // Once
target: 0.0,
expectedAlertSample: v3.Point{Value: 1.0},
expectedAlertSample: 1.0,
},
{
values: pql.Series{
@@ -418,7 +423,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "4", // Not Equals
matchType: "1", // Once
target: 0.0,
expectedAlertSample: v3.Point{Value: 1.0},
expectedAlertSample: 1.0,
},
// Test cases for Less Than Always
{
@@ -435,7 +440,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "2", // Less Than
matchType: "2", // Always
target: 4,
expectedAlertSample: v3.Point{Value: 1.5},
expectedAlertSample: 1.5,
},
{
values: pql.Series{
@@ -467,7 +472,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "2", // Less Than
matchType: "1", // Once
target: 4,
expectedAlertSample: v3.Point{Value: 2.5},
expectedAlertSample: 2.5,
},
{
values: pql.Series{
@@ -499,7 +504,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "3", // Equals
matchType: "3", // OnAverage
target: 6.0,
expectedAlertSample: v3.Point{Value: 6.0},
expectedAlertSample: 6.0,
},
{
values: pql.Series{
@@ -530,7 +535,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "4", // Not Equals
matchType: "3", // OnAverage
target: 4.5,
expectedAlertSample: v3.Point{Value: 6.0},
expectedAlertSample: 6.0,
},
{
values: pql.Series{
@@ -561,7 +566,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "1", // Greater Than
matchType: "3", // OnAverage
target: 4.5,
expectedAlertSample: v3.Point{Value: 6.0},
expectedAlertSample: 6.0,
},
{
values: pql.Series{
@@ -577,7 +582,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "2", // Less Than
matchType: "3", // OnAverage
target: 12.0,
expectedAlertSample: v3.Point{Value: 6.0},
expectedAlertSample: 6.0,
},
// Test cases for InTotal
{
@@ -594,7 +599,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "3", // Equals
matchType: "4", // InTotal
target: 30.0,
expectedAlertSample: v3.Point{Value: 30.0},
expectedAlertSample: 30.0,
},
{
values: pql.Series{
@@ -621,7 +626,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "4", // Not Equals
matchType: "4", // InTotal
target: 9.0,
expectedAlertSample: v3.Point{Value: 10.0},
expectedAlertSample: 10.0,
},
{
values: pql.Series{
@@ -645,7 +650,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "1", // Greater Than
matchType: "4", // InTotal
target: 10.0,
expectedAlertSample: v3.Point{Value: 20.0},
expectedAlertSample: 20.0,
},
{
values: pql.Series{
@@ -670,7 +675,7 @@ func TestPromRuleEval(t *testing.T) {
compareOp: "2", // Less Than
matchType: "4", // InTotal
target: 30.0,
expectedAlertSample: v3.Point{Value: 20.0},
expectedAlertSample: 20.0,
},
{
values: pql.Series{
@@ -708,7 +713,7 @@ func TestPromRuleEval(t *testing.T) {
assert.NoError(t, err)
}
resultVectors, err := rule.Threshold.Eval(toCommonSeries(c.values), rule.Unit(), ruletypes.EvalData{})
resultVectors, err := rule.Threshold.Eval(*promSeriesToTimeSeries(c.values), rule.Unit(), ruletypes.EvalData{})
assert.NoError(t, err)
// Compare full result vector with expected vector
@@ -724,12 +729,12 @@ func TestPromRuleEval(t *testing.T) {
if len(resultVectors) > 0 {
found := false
for _, sample := range resultVectors {
if sample.V == c.expectedAlertSample.Value {
if sample.V == c.expectedAlertSample {
found = true
break
}
}
assert.True(t, found, "Expected alert sample value %.2f not found in result vectors for case %d. Got values: %v", c.expectedAlertSample.Value, idx, actualValues)
assert.True(t, found, "Expected alert sample value %.2f not found in result vectors for case %d. Got values: %v", c.expectedAlertSample, idx, actualValues)
}
} else {
assert.Empty(t, resultVectors, "Expected no alert but got result vectors for case %d", idx)
@@ -754,9 +759,13 @@ func TestPromRuleUnitCombinations(t *testing.T) {
RuleCondition: &ruletypes.RuleCondition{
CompositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypePromQL,
PromQueries: map[string]*v3.PromQuery{
"A": {
Query: "test_metric",
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{
Name: "A",
Query: "test_metric",
},
},
},
},
@@ -1013,9 +1022,13 @@ func _Enable_this_after_9146_issue_fix_is_merged_TestPromRuleNoData(t *testing.T
RuleCondition: &ruletypes.RuleCondition{
CompositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypePromQL,
PromQueries: map[string]*v3.PromQuery{
"A": {
Query: "test_metric",
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{
Name: "A",
Query: "test_metric",
},
},
},
},
@@ -1124,9 +1137,13 @@ func TestMultipleThresholdPromRule(t *testing.T) {
RuleCondition: &ruletypes.RuleCondition{
CompositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypePromQL,
PromQueries: map[string]*v3.PromQuery{
"A": {
Query: "test_metric",
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{
Name: "A",
Query: "test_metric",
},
},
},
},
@@ -1361,8 +1378,11 @@ func TestPromRule_NoData(t *testing.T) {
MatchType: ruletypes.AtleastOnce,
CompositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypePromQL,
PromQueries: map[string]*v3.PromQuery{
"A": {Query: "test_metric"},
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{Name: "A", Query: "test_metric"},
},
},
},
Thresholds: &ruletypes.RuleThresholdData{
@@ -1486,8 +1506,11 @@ func TestPromRule_NoData_AbsentFor(t *testing.T) {
Target: &target,
CompositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypePromQL,
PromQueries: map[string]*v3.PromQuery{
"A": {Query: "test_metric"},
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{Name: "A", Query: "test_metric"},
},
},
},
Thresholds: &ruletypes.RuleThresholdData{
@@ -1635,8 +1658,11 @@ func TestPromRuleEval_RequireMinPoints(t *testing.T) {
MatchType: ruletypes.AtleastOnce,
CompositeQuery: &v3.CompositeQuery{
QueryType: v3.QueryTypePromQL,
PromQueries: map[string]*v3.PromQuery{
"A": {Query: "test_metric"},
Queries: []qbtypes.QueryEnvelope{
{
Type: qbtypes.QueryTypePromQL,
Spec: qbtypes.PromQuery{Name: "A", Query: "test_metric"},
},
},
},
},


@@ -1,38 +1,24 @@
package rules
import (
"bytes"
"context"
"encoding/json"
"fmt"
"log/slog"
"math"
"net/url"
"reflect"
"text/template"
"time"
"github.com/SigNoz/signoz/pkg/contextlinks"
"github.com/SigNoz/signoz/pkg/query-service/common"
"github.com/SigNoz/signoz/pkg/query-service/model"
"github.com/SigNoz/signoz/pkg/query-service/postprocess"
"github.com/SigNoz/signoz/pkg/transition"
"github.com/SigNoz/signoz/pkg/types/ruletypes"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/SigNoz/signoz/pkg/query-service/app/querier"
querierV2 "github.com/SigNoz/signoz/pkg/query-service/app/querier/v2"
"github.com/SigNoz/signoz/pkg/query-service/app/queryBuilder"
"github.com/SigNoz/signoz/pkg/query-service/interfaces"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/query-service/utils/labels"
querytemplate "github.com/SigNoz/signoz/pkg/query-service/utils/queryTemplate"
"github.com/SigNoz/signoz/pkg/query-service/utils/times"
"github.com/SigNoz/signoz/pkg/query-service/utils/timestamp"
logsv3 "github.com/SigNoz/signoz/pkg/query-service/app/logs/v3"
tracesV4 "github.com/SigNoz/signoz/pkg/query-service/app/traces/v4"
"github.com/SigNoz/signoz/pkg/query-service/formatter"
querierV5 "github.com/SigNoz/signoz/pkg/querier"
@@ -42,23 +28,9 @@ import (
type ThresholdRule struct {
*BaseRule
// Ever since we introduced the new metrics query builder, the version is "v4"
// for all the rules
// if the version is "v3", then we use the old querier
// if the version is "v4", then we use the new querierV2
version string
// querier is used for alerts created before the introduction of new metrics query builder
querier interfaces.Querier
// querierV2 is used for alerts created after the introduction of new metrics query builder
querierV2 interfaces.Querier
// querierV5 is used for alerts migrated after the introduction of new query builder
// querierV5 is the query builder v5 querier used for all alert rule evaluation
querierV5 querierV5.Querier
// used for attribute metadata enrichment for logs and traces
logsKeys map[string]v3.AttributeKey
spansKeys map[string]v3.AttributeKey
}
var _ Rule = (*ThresholdRule)(nil)
@@ -82,25 +54,10 @@ func NewThresholdRule(
}
t := ThresholdRule{
BaseRule: baseRule,
version: p.Version,
BaseRule: baseRule,
querierV5: querierV5,
}
querierOption := querier.QuerierOptions{
Reader: reader,
Cache: nil,
KeyGenerator: queryBuilder.NewKeyGenerator(),
}
querierOptsV2 := querierV2.QuerierOptions{
Reader: reader,
Cache: nil,
KeyGenerator: queryBuilder.NewKeyGenerator(),
}
t.querier = querier.NewQuerier(querierOption)
t.querierV2 = querierV2.NewQuerier(querierOptsV2)
t.querierV5 = querierV5
t.reader = reader
return &t, nil
}
@@ -120,169 +77,9 @@ func (r *ThresholdRule) Type() ruletypes.RuleType {
return ruletypes.RuleTypeThreshold
}
func (r *ThresholdRule) prepareQueryRange(ctx context.Context, ts time.Time) (*v3.QueryRangeParamsV3, error) {
func (r *ThresholdRule) prepareQueryRange(ctx context.Context, ts time.Time) (*qbtypes.QueryRangeRequest, error) {
r.logger.InfoContext(
ctx, "prepare query range request v4", "ts", ts.UnixMilli(), "eval_window", r.evalWindow.Milliseconds(), "eval_delay", r.evalDelay.Milliseconds(),
)
startTs, endTs := r.Timestamps(ts)
start, end := startTs.UnixMilli(), endTs.UnixMilli()
if r.ruleCondition.QueryType() == v3.QueryTypeClickHouseSQL {
params := &v3.QueryRangeParamsV3{
Start: start,
End: end,
Step: int64(math.Max(float64(common.MinAllowedStepInterval(start, end)), 60)),
CompositeQuery: &v3.CompositeQuery{
QueryType: r.ruleCondition.CompositeQuery.QueryType,
PanelType: r.ruleCondition.CompositeQuery.PanelType,
BuilderQueries: make(map[string]*v3.BuilderQuery),
ClickHouseQueries: make(map[string]*v3.ClickHouseQuery),
PromQueries: make(map[string]*v3.PromQuery),
Unit: r.ruleCondition.CompositeQuery.Unit,
},
Variables: make(map[string]interface{}),
NoCache: true,
}
querytemplate.AssignReservedVarsV3(params)
for name, chQuery := range r.ruleCondition.CompositeQuery.ClickHouseQueries {
if chQuery.Disabled {
continue
}
tmpl := template.New("clickhouse-query")
tmpl, err := tmpl.Parse(chQuery.Query)
if err != nil {
return nil, err
}
var query bytes.Buffer
err = tmpl.Execute(&query, params.Variables)
if err != nil {
return nil, err
}
params.CompositeQuery.ClickHouseQueries[name] = &v3.ClickHouseQuery{
Query: query.String(),
Disabled: chQuery.Disabled,
Legend: chQuery.Legend,
}
}
return params, nil
}
if r.ruleCondition.CompositeQuery != nil && r.ruleCondition.CompositeQuery.BuilderQueries != nil {
for _, q := range r.ruleCondition.CompositeQuery.BuilderQueries {
// If the step interval is less than the minimum allowed step interval, set it to the minimum allowed step interval
if minStep := common.MinAllowedStepInterval(start, end); q.StepInterval < minStep {
q.StepInterval = minStep
}
q.SetShiftByFromFunc()
if q.DataSource == v3.DataSourceMetrics {
// if the time range is greater than 1 day and less than 1 week, set the step interval to a multiple of 5 minutes
// if the time range is greater than 1 week, set the step interval to a multiple of 30 minutes
if end-start >= 24*time.Hour.Milliseconds() && end-start < 7*24*time.Hour.Milliseconds() {
q.StepInterval = int64(math.Round(float64(q.StepInterval)/300)) * 300
} else if end-start >= 7*24*time.Hour.Milliseconds() {
q.StepInterval = int64(math.Round(float64(q.StepInterval)/1800)) * 1800
}
}
}
}
if r.ruleCondition.CompositeQuery.PanelType != v3.PanelTypeGraph {
r.ruleCondition.CompositeQuery.PanelType = v3.PanelTypeGraph
}
// default mode
return &v3.QueryRangeParamsV3{
Start: start,
End: end,
Step: int64(math.Max(float64(common.MinAllowedStepInterval(start, end)), 60)),
CompositeQuery: r.ruleCondition.CompositeQuery,
Variables: make(map[string]interface{}),
NoCache: true,
}, nil
}
func (r *ThresholdRule) prepareLinksToLogs(ctx context.Context, ts time.Time, lbls labels.Labels) string {
if r.version == "v5" {
return r.prepareLinksToLogsV5(ctx, ts, lbls)
}
selectedQuery := r.GetSelectedQuery()
qr, err := r.prepareQueryRange(ctx, ts)
if err != nil {
return ""
}
start := time.UnixMilli(qr.Start)
end := time.UnixMilli(qr.End)
// TODO(srikanthccv): handle formula queries
if selectedQuery < "A" || selectedQuery > "Z" {
return ""
}
q := r.ruleCondition.CompositeQuery.BuilderQueries[selectedQuery]
if q == nil {
return ""
}
if q.DataSource != v3.DataSourceLogs {
return ""
}
queryFilter := []v3.FilterItem{}
if q.Filters != nil {
queryFilter = q.Filters.Items
}
filterItems := contextlinks.PrepareFilters(lbls.Map(), queryFilter, q.GroupBy, r.logsKeys)
return contextlinks.PrepareLinksToLogs(start, end, filterItems)
}
func (r *ThresholdRule) prepareLinksToTraces(ctx context.Context, ts time.Time, lbls labels.Labels) string {
if r.version == "v5" {
return r.prepareLinksToTracesV5(ctx, ts, lbls)
}
selectedQuery := r.GetSelectedQuery()
qr, err := r.prepareQueryRange(ctx, ts)
if err != nil {
return ""
}
start := time.UnixMilli(qr.Start)
end := time.UnixMilli(qr.End)
// TODO(srikanthccv): handle formula queries
if selectedQuery < "A" || selectedQuery > "Z" {
return ""
}
q := r.ruleCondition.CompositeQuery.BuilderQueries[selectedQuery]
if q == nil {
return ""
}
if q.DataSource != v3.DataSourceTraces {
return ""
}
queryFilter := []v3.FilterItem{}
if q.Filters != nil {
queryFilter = q.Filters.Items
}
filterItems := contextlinks.PrepareFilters(lbls.Map(), queryFilter, q.GroupBy, r.spansKeys)
return contextlinks.PrepareLinksToTraces(start, end, filterItems)
}
func (r *ThresholdRule) prepareQueryRangeV5(ctx context.Context, ts time.Time) (*qbtypes.QueryRangeRequest, error) {
r.logger.InfoContext(
ctx, "prepare query range request v5", "ts", ts.UnixMilli(), "eval_window", r.evalWindow.Milliseconds(), "eval_delay", r.evalDelay.Milliseconds(),
ctx, "prepare query range request", "ts", ts.UnixMilli(), "eval_window", r.evalWindow.Milliseconds(), "eval_delay", r.evalDelay.Milliseconds(),
)
startTs, endTs := r.Timestamps(ts)
@@ -302,10 +99,10 @@ func (r *ThresholdRule) prepareQueryRangeV5(ctx context.Context, ts time.Time) (
return req, nil
}
func (r *ThresholdRule) prepareLinksToLogsV5(ctx context.Context, ts time.Time, lbls labels.Labels) string {
func (r *ThresholdRule) prepareLinksToLogs(ctx context.Context, ts time.Time, lbls labels.Labels) string {
selectedQuery := r.GetSelectedQuery()
qr, err := r.prepareQueryRangeV5(ctx, ts)
qr, err := r.prepareQueryRange(ctx, ts)
if err != nil {
return ""
}
@@ -342,10 +139,10 @@ func (r *ThresholdRule) prepareLinksToLogsV5(ctx context.Context, ts time.Time,
return contextlinks.PrepareLinksToLogsV5(start, end, whereClause)
}
func (r *ThresholdRule) prepareLinksToTracesV5(ctx context.Context, ts time.Time, lbls labels.Labels) string {
func (r *ThresholdRule) prepareLinksToTraces(ctx context.Context, ts time.Time, lbls labels.Labels) string {
selectedQuery := r.GetSelectedQuery()
qr, err := r.prepareQueryRangeV5(ctx, ts)
qr, err := r.prepareQueryRange(ctx, ts)
if err != nil {
return ""
}
@@ -391,115 +188,6 @@ func (r *ThresholdRule) buildAndRunQuery(ctx context.Context, orgID valuer.UUID,
if err != nil {
return nil, err
}
err = r.PopulateTemporality(ctx, orgID, params)
if err != nil {
return nil, fmt.Errorf("internal error while setting temporality")
}
if params.CompositeQuery.QueryType == v3.QueryTypeBuilder {
hasLogsQuery := false
hasTracesQuery := false
for _, query := range params.CompositeQuery.BuilderQueries {
if query.DataSource == v3.DataSourceLogs {
hasLogsQuery = true
}
if query.DataSource == v3.DataSourceTraces {
hasTracesQuery = true
}
}
if hasLogsQuery {
// check if any enrichment is required for logs; if yes, enrich them
if logsv3.EnrichmentRequired(params) {
logsFields, apiErr := r.reader.GetLogFieldsFromNames(ctx, logsv3.GetFieldNames(params.CompositeQuery))
if apiErr != nil {
return nil, apiErr.ToError()
}
logsKeys := model.GetLogFieldsV3(ctx, params, logsFields)
r.logsKeys = logsKeys
logsv3.Enrich(params, logsKeys)
}
}
if hasTracesQuery {
spanKeys, err := r.reader.GetSpanAttributeKeysByNames(ctx, logsv3.GetFieldNames(params.CompositeQuery))
if err != nil {
return nil, err
}
r.spansKeys = spanKeys
tracesV4.Enrich(params, spanKeys)
}
}
var results []*v3.Result
var queryErrors map[string]error
if r.version == "v4" {
results, queryErrors, err = r.querierV2.QueryRange(ctx, orgID, params)
} else {
results, queryErrors, err = r.querier.QueryRange(ctx, orgID, params)
}
if err != nil {
r.logger.ErrorContext(ctx, "failed to get alert query range result", "rule_name", r.Name(), "error", err, "query_errors", queryErrors)
return nil, fmt.Errorf("internal error while querying")
}
if params.CompositeQuery.QueryType == v3.QueryTypeBuilder {
results, err = postprocess.PostProcessResult(results, params)
if err != nil {
r.logger.ErrorContext(ctx, "failed to post process result", "rule_name", r.Name(), "error", err)
return nil, fmt.Errorf("internal error while post processing")
}
}
selectedQuery := r.GetSelectedQuery()
var queryResult *v3.Result
for _, res := range results {
if res.QueryName == selectedQuery {
queryResult = res
break
}
}
hasData := queryResult != nil && len(queryResult.Series) > 0
if missingDataAlert := r.HandleMissingDataAlert(ctx, ts, hasData); missingDataAlert != nil {
return ruletypes.Vector{*missingDataAlert}, nil
}
var resultVector ruletypes.Vector
if queryResult == nil {
r.logger.WarnContext(ctx, "query result is nil", "rule_name", r.Name(), "query_name", selectedQuery)
return resultVector, nil
}
for _, series := range queryResult.Series {
if !r.Condition().ShouldEval(series) {
r.logger.InfoContext(ctx, "not enough data points to evaluate series, skipping", "ruleid", r.ID(), "numPoints", len(series.Points), "requiredPoints", r.Condition().RequiredNumPoints)
continue
}
resultSeries, err := r.Threshold.Eval(*series, r.Unit(), ruletypes.EvalData{
ActiveAlerts: r.ActiveAlertsLabelFP(),
SendUnmatched: r.ShouldSendUnmatched(),
})
if err != nil {
return nil, err
}
resultVector = append(resultVector, resultSeries...)
}
return resultVector, nil
}
func (r *ThresholdRule) buildAndRunQueryV5(ctx context.Context, orgID valuer.UUID, ts time.Time) (ruletypes.Vector, error) {
params, err := r.prepareQueryRangeV5(ctx, ts)
if err != nil {
return nil, err
}
var results []*v3.Result
v5Result, err := r.querierV5.QueryRange(ctx, orgID, params)
if err != nil {
@@ -507,26 +195,24 @@ func (r *ThresholdRule) buildAndRunQueryV5(ctx context.Context, orgID valuer.UUI
return nil, fmt.Errorf("internal error while querying")
}
for _, item := range v5Result.Data.Results {
if tsData, ok := item.(*qbtypes.TimeSeriesData); ok {
results = append(results, transition.ConvertV5TimeSeriesDataToV4Result(tsData))
} else {
// NOTE: this should not happen, but log it so we don't miss it if it does
r.logger.WarnContext(ctx, "expected qbtypes.TimeSeriesData but got", "item_type", reflect.TypeOf(item))
}
}
selectedQuery := r.GetSelectedQuery()
var queryResult *v3.Result
for _, res := range results {
if res.QueryName == selectedQuery {
queryResult = res
var queryResult *qbtypes.TimeSeriesData
for _, item := range v5Result.Data.Results {
if tsData, ok := item.(*qbtypes.TimeSeriesData); ok && tsData.QueryName == selectedQuery {
queryResult = tsData
break
}
}
hasData := queryResult != nil && len(queryResult.Series) > 0
var allSeries []*qbtypes.TimeSeries
if queryResult != nil {
for _, bucket := range queryResult.Aggregations {
allSeries = append(allSeries, bucket.Series...)
}
}
hasData := len(allSeries) > 0
if missingDataAlert := r.HandleMissingDataAlert(ctx, ts, hasData); missingDataAlert != nil {
return ruletypes.Vector{*missingDataAlert}, nil
}
@@ -539,7 +225,7 @@ func (r *ThresholdRule) buildAndRunQueryV5(ctx context.Context, orgID valuer.UUI
}
// Filter out new series if newGroupEvalDelay is configured
seriesToProcess := queryResult.Series
seriesToProcess := allSeries
if r.ShouldSkipNewGroups() {
filteredSeries, filterErr := r.BaseRule.FilterNewSeries(ctx, ts, seriesToProcess)
// In case of error we log the error and continue with the original series
@@ -552,7 +238,7 @@ func (r *ThresholdRule) buildAndRunQueryV5(ctx context.Context, orgID valuer.UUI
for _, series := range seriesToProcess {
if !r.Condition().ShouldEval(series) {
r.logger.InfoContext(ctx, "not enough data points to evaluate series, skipping", "ruleid", r.ID(), "numPoints", len(series.Points), "requiredPoints", r.Condition().RequiredNumPoints)
r.logger.InfoContext(ctx, "not enough data points to evaluate series, skipping", "ruleid", r.ID(), "numPoints", len(series.Values), "requiredPoints", r.Condition().RequiredNumPoints)
continue
}
resultSeries, err := r.Threshold.Eval(*series, r.Unit(), ruletypes.EvalData{
@@ -573,16 +259,7 @@ func (r *ThresholdRule) Eval(ctx context.Context, ts time.Time) (int, error) {
valueFormatter := formatter.FromUnit(r.Unit())
var res ruletypes.Vector
var err error
if r.version == "v5" {
r.logger.InfoContext(ctx, "running v5 query")
res, err = r.buildAndRunQueryV5(ctx, r.orgID, ts)
} else {
r.logger.InfoContext(ctx, "running v4 query")
res, err = r.buildAndRunQuery(ctx, r.orgID, ts)
}
res, err := r.buildAndRunQuery(ctx, r.orgID, ts)
if err != nil {
return 0, err

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -167,6 +167,7 @@ func NewSQLMigrationProviderFactories(
sqlmigration.NewMigrateRbacToAuthzFactory(sqlstore),
sqlmigration.NewMigratePublicDashboardsFactory(sqlstore),
sqlmigration.NewAddAnonymousPublicDashboardTransactionFactory(sqlstore),
sqlmigration.NewMigrateRulesV4ToV5Factory(sqlstore, telemetryStore),
)
}


@@ -0,0 +1,194 @@
package sqlmigration
import (
"context"
"database/sql"
"encoding/json"
"log/slog"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/transition"
"github.com/uptrace/bun"
"github.com/uptrace/bun/migrate"
)
type migrateRulesV4ToV5 struct {
store sqlstore.SQLStore
telemetryStore telemetrystore.TelemetryStore
logger *slog.Logger
}
func NewMigrateRulesV4ToV5Factory(
store sqlstore.SQLStore,
telemetryStore telemetrystore.TelemetryStore,
) factory.ProviderFactory[SQLMigration, Config] {
return factory.NewProviderFactory(
factory.MustNewName("migrate_rules_v4_to_v5"),
func(ctx context.Context, ps factory.ProviderSettings, c Config) (SQLMigration, error) {
return &migrateRulesV4ToV5{
store: store,
telemetryStore: telemetryStore,
logger: ps.Logger,
}, nil
})
}
func (migration *migrateRulesV4ToV5) Register(migrations *migrate.Migrations) error {
if err := migrations.Register(migration.Up, migration.Down); err != nil {
return err
}
return nil
}
func (migration *migrateRulesV4ToV5) getLogDuplicateKeys(ctx context.Context) ([]string, error) {
query := `
SELECT name
FROM (
SELECT DISTINCT name FROM signoz_logs.distributed_logs_attribute_keys
INTERSECT
SELECT DISTINCT name FROM signoz_logs.distributed_logs_resource_keys
)
ORDER BY name
`
rows, err := migration.telemetryStore.ClickhouseDB().Query(ctx, query)
if err != nil {
migration.logger.WarnContext(ctx, "failed to query log duplicate keys", "error", err)
return nil, nil
}
defer rows.Close()
var keys []string
for rows.Next() {
var key string
if err := rows.Scan(&key); err != nil {
migration.logger.WarnContext(ctx, "failed to scan log duplicate key", "error", err)
continue
}
keys = append(keys, key)
}
return keys, nil
}
func (migration *migrateRulesV4ToV5) getTraceDuplicateKeys(ctx context.Context) ([]string, error) {
query := `
SELECT tagKey
FROM signoz_traces.distributed_span_attributes_keys
WHERE tagType IN ('tag', 'resource')
GROUP BY tagKey
HAVING COUNT(DISTINCT tagType) > 1
ORDER BY tagKey
`
rows, err := migration.telemetryStore.ClickhouseDB().Query(ctx, query)
if err != nil {
migration.logger.WarnContext(ctx, "failed to query trace duplicate keys", "error", err)
return nil, nil
}
defer rows.Close()
var keys []string
for rows.Next() {
var key string
if err := rows.Scan(&key); err != nil {
migration.logger.WarnContext(ctx, "failed to scan trace duplicate key", "error", err)
continue
}
keys = append(keys, key)
}
return keys, nil
}
func (migration *migrateRulesV4ToV5) Up(ctx context.Context, db *bun.DB) error {
logsKeys, err := migration.getLogDuplicateKeys(ctx)
if err != nil {
return err
}
tracesKeys, err := migration.getTraceDuplicateKeys(ctx)
if err != nil {
return err
}
tx, err := db.BeginTx(ctx, nil)
if err != nil {
return err
}
defer func() {
_ = tx.Rollback()
}()
var rules []struct {
ID string `bun:"id"`
Data map[string]any `bun:"data"`
}
err = tx.NewSelect().
Table("rule").
Column("id", "data").
Scan(ctx, &rules)
if err != nil {
if err == sql.ErrNoRows {
return nil
}
return err
}
alertsMigrator := transition.NewAlertMigrateV5(migration.logger, logsKeys, tracesKeys)
for _, rule := range rules {
version, _ := rule.Data["version"].(string)
if version == "v5" {
continue
}
migration.logger.InfoContext(ctx, "migrating rule v4 to v5", "rule_id", rule.ID, "current_version", version)
// Check if the queries envelope already exists and is non-empty
hasQueriesEnvelope := false
if condition, ok := rule.Data["condition"].(map[string]any); ok {
if compositeQuery, ok := condition["compositeQuery"].(map[string]any); ok {
if queries, ok := compositeQuery["queries"].([]any); ok && len(queries) > 0 {
hasQueriesEnvelope = true
}
}
}
if hasQueriesEnvelope {
// Case 2: Already has queries envelope, just bump version
migration.logger.InfoContext(ctx, "rule already has queries envelope, bumping version", "rule_id", rule.ID)
rule.Data["version"] = "v5"
} else {
// Case 1: Old format, run full migration
migration.logger.InfoContext(ctx, "rule has old format, running full migration", "rule_id", rule.ID)
alertsMigrator.Migrate(ctx, rule.Data)
// Force version to v5 regardless of Migrate return value
rule.Data["version"] = "v5"
}
dataJSON, err := json.Marshal(rule.Data)
if err != nil {
return err
}
_, err = tx.NewUpdate().
Table("rule").
Set("data = ?", string(dataJSON)).
Where("id = ?", rule.ID).
Exec(ctx)
if err != nil {
return err
}
}
return tx.Commit()
}
func (migration *migrateRulesV4ToV5) Down(ctx context.Context, db *bun.DB) error {
return nil
}
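Two hypothetical rule payloads illustrating the cases the Up migration distinguishes:

// Case 2: a non-empty queries envelope already exists, so only the
// version field is bumped to "v5".
withEnvelope := map[string]any{
	"version": "v4",
	"condition": map[string]any{
		"compositeQuery": map[string]any{
			"queries": []any{map[string]any{"type": "promql"}},
		},
	},
}

// Case 1: old format without an envelope; the full
// alertsMigrator.Migrate path runs before the version is forced to "v5".
oldFormat := map[string]any{
	"version": "v4",
	"condition": map[string]any{
		"compositeQuery": map[string]any{
			"promQueries": map[string]any{"A": map[string]any{"query": "up"}},
		},
	},
}
_, _ = withEnvelope, oldFormat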


@@ -1597,12 +1597,12 @@ func (t *telemetryMetaStore) GetAllValues(ctx context.Context, fieldValueSelecto
return values, complete, nil
}
func (t *telemetryMetaStore) FetchTemporality(ctx context.Context, metricName string) (metrictypes.Temporality, error) {
func (t *telemetryMetaStore) FetchTemporality(ctx context.Context, queryTimeRangeStartTs, queryTimeRangeEndTs uint64, metricName string) (metrictypes.Temporality, error) {
if metricName == "" {
return metrictypes.Unknown, errors.Newf(errors.TypeInternal, errors.CodeInternal, "metric name cannot be empty")
}
temporalityMap, err := t.FetchTemporalityMulti(ctx, metricName)
temporalityMap, err := t.FetchTemporalityMulti(ctx, queryTimeRangeStartTs, queryTimeRangeEndTs, metricName)
if err != nil {
return metrictypes.Unknown, err
}
@@ -1615,13 +1615,13 @@ func (t *telemetryMetaStore) FetchTemporality(ctx context.Context, metricName st
return temporality, nil
}
func (t *telemetryMetaStore) FetchTemporalityMulti(ctx context.Context, metricNames ...string) (map[string]metrictypes.Temporality, error) {
func (t *telemetryMetaStore) FetchTemporalityMulti(ctx context.Context, queryTimeRangeStartTs, queryTimeRangeEndTs uint64, metricNames ...string) (map[string]metrictypes.Temporality, error) {
if len(metricNames) == 0 {
return make(map[string]metrictypes.Temporality), nil
}
result := make(map[string]metrictypes.Temporality)
metricsTemporality, err := t.fetchMetricsTemporality(ctx, metricNames...)
metricsTemporality, err := t.fetchMetricsTemporality(ctx, queryTimeRangeStartTs, queryTimeRangeEndTs, metricNames...)
if err != nil {
return nil, err
}
@@ -1630,8 +1630,12 @@ func (t *telemetryMetaStore) FetchTemporalityMulti(ctx context.Context, metricNa
// For metrics not found in the database, set to Unknown
for _, metricName := range metricNames {
if temporality, exists := metricsTemporality[metricName]; exists {
result[metricName] = temporality
if temporality, exists := metricsTemporality[metricName]; exists && len(temporality) > 0 {
if len(temporality) > 1 {
result[metricName] = metrictypes.Multiple
} else {
result[metricName] = temporality[0]
}
continue
}
if temporality, exists := meterMetricsTemporality[metricName]; exists {
@@ -1644,8 +1648,10 @@ func (t *telemetryMetaStore) FetchTemporalityMulti(ctx context.Context, metricNa
return result, nil
}
func (t *telemetryMetaStore) fetchMetricsTemporality(ctx context.Context, metricNames ...string) (map[string]metrictypes.Temporality, error) {
result := make(map[string]metrictypes.Temporality)
func (t *telemetryMetaStore) fetchMetricsTemporality(ctx context.Context, queryTimeRangeStartTs, queryTimeRangeEndTs uint64, metricNames ...string) (map[string][]metrictypes.Temporality, error) {
result := make(map[string][]metrictypes.Temporality)
adjustedStartTs, adjustedEndTs, tsTableName, _ := telemetrymetrics.WhichTSTableToUse(queryTimeRangeStartTs, queryTimeRangeEndTs, nil)
// Build query to fetch temporality for all metrics
// We use attr_string_value where attr_name = '__temporality__'
@@ -1653,14 +1659,18 @@ func (t *telemetryMetaStore) fetchMetricsTemporality(ctx context.Context, metric
// and metric_name column contains temporality value, so we use the correct mapping
sb := sqlbuilder.Select(
"metric_name",
"argMax(temporality, last_reported_unix_milli) as temporality",
).From(t.metricsDBName + "." + t.metricsFieldsTblName)
"temporality",
).
From(t.metricsDBName + "." + tsTableName)
// Filter by metric names (in the temporality column due to data mix-up)
sb.Where(sb.In("metric_name", metricNames))
sb.Where(
sb.In("metric_name", metricNames),
sb.GTE("unix_milli", adjustedStartTs),
sb.LT("unix_milli", adjustedEndTs),
)
// Group by metric name to get one temporality per metric
sb.GroupBy("metric_name")
sb.GroupBy("metric_name", "temporality")
query, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
@@ -1692,8 +1702,12 @@ func (t *telemetryMetaStore) fetchMetricsTemporality(ctx context.Context, metric
// Unknown or empty temporality
temporality = metrictypes.Unknown
}
result[metricName] = temporality
if temporality != metrictypes.Unknown {
result[metricName] = append(result[metricName], temporality)
}
}
if err := rows.Err(); err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "error iterating over metrics temporality rows")
}
return result, nil
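The collapsing rule used by FetchTemporalityMulti above, as a standalone sketch (metric names illustrative): more than one distinct temporality observed in the queried window maps to metrictypes.Multiple.

raw := map[string][]metrictypes.Temporality{
	"http_requests_total": {metrictypes.Delta, metrictypes.Cumulative},
	"cpu_usage":           {metrictypes.Cumulative},
}
collapsed := make(map[string]metrictypes.Temporality)
for name, list := range raw {
	if len(list) > 1 {
		collapsed[name] = metrictypes.Multiple // mixed temporality in the time range
	} else {
		collapsed[name] = list[0]
	}
}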


@@ -21,6 +21,10 @@ const (
RateWithoutNegative = `If((per_series_value - lagInFrame(per_series_value, 1, 0) OVER rate_window) < 0, per_series_value / (ts - lagInFrame(ts, 1, toDateTime(fromUnixTimestamp64Milli(%d))) OVER rate_window), (per_series_value - lagInFrame(per_series_value, 1, 0) OVER rate_window) / (ts - lagInFrame(ts, 1, toDateTime(fromUnixTimestamp64Milli(%d))) OVER rate_window))`
IncreaseWithoutNegative = `If((per_series_value - lagInFrame(per_series_value, 1, 0) OVER rate_window) < 0, per_series_value, ((per_series_value - lagInFrame(per_series_value, 1, 0) OVER rate_window) / (ts - lagInFrame(ts, 1, toDateTime(fromUnixTimestamp64Milli(%d))) OVER rate_window)) * (ts - lagInFrame(ts, 1, toDateTime(fromUnixTimestamp64Milli(%d))) OVER rate_window))`
RateWithoutNegativeMultiTemporality = `IF(LOWER(temporality) LIKE LOWER('delta'), %s, IF((%s - lagInFrame(%s, 1, 0) OVER rate_window) < 0, %s / (ts - lagInFrame(ts, 1, toDateTime(fromUnixTimestamp64Milli(%d))) OVER rate_window), (%s - lagInFrame(%s, 1, 0) OVER rate_window) / (ts - lagInFrame(ts, 1, toDateTime(fromUnixTimestamp64Milli(%d))) OVER rate_window))) AS per_series_value`
IncreaseWithoutNegativeMultiTemporality = `IF(LOWER(temporality) LIKE LOWER('delta'), %s, IF((%s - lagInFrame(%s, 1, 0) OVER rate_window) < 0, %s, ((%s - lagInFrame(%s, 1, 0) OVER rate_window) / (ts - lagInFrame(ts, 1, toDateTime(fromUnixTimestamp64Milli(%d))) OVER rate_window)) * (ts - lagInFrame(ts, 1, toDateTime(fromUnixTimestamp64Milli(%d))) OVER rate_window))) AS per_series_value`
OthersMultiTemporality = `IF(LOWER(temporality) LIKE LOWER('delta'), %s, %s) AS per_series_value`
RateWithInterpolation = `
CASE
WHEN row_number() OVER rate_window = 1 THEN
@@ -377,7 +381,7 @@ func (b *MetricQueryStatementBuilder) buildTimeSeriesCTE(
sb.LTE("unix_milli", end),
)
if query.Aggregations[0].Temporality != metrictypes.Unknown {
if query.Aggregations[0].Temporality != metrictypes.Multiple && query.Aggregations[0].Temporality != metrictypes.Unknown {
sb.Where(sb.ILike("temporality", query.Aggregations[0].Temporality.StringValue()))
}
@@ -407,8 +411,10 @@ func (b *MetricQueryStatementBuilder) buildTemporalAggregationCTE(
) (string, []any, error) {
if query.Aggregations[0].Temporality == metrictypes.Delta {
return b.buildTemporalAggDelta(ctx, start, end, query, timeSeriesCTE, timeSeriesCTEArgs)
} else if query.Aggregations[0].Temporality != metrictypes.Multiple {
return b.buildTemporalAggCumulativeOrUnspecified(ctx, start, end, query, timeSeriesCTE, timeSeriesCTEArgs)
}
return b.buildTemporalAggCumulativeOrUnspecified(ctx, start, end, query, timeSeriesCTE, timeSeriesCTEArgs)
return b.buildTemporalAggForMultipleTemporalities(ctx, start, end, query, timeSeriesCTE, timeSeriesCTEArgs)
}
func (b *MetricQueryStatementBuilder) buildTemporalAggDelta(
@@ -528,6 +534,58 @@ func (b *MetricQueryStatementBuilder) buildTemporalAggCumulativeOrUnspecified(
}
}
// RateWithInterpolation is not enabled anywhere due to gaps in its cache-handling logic, so it has not been considered for the multi-temporality path
func (b *MetricQueryStatementBuilder) buildTemporalAggForMultipleTemporalities(
_ context.Context,
start, end uint64,
query qbtypes.QueryBuilderQuery[qbtypes.MetricAggregation],
timeSeriesCTE string,
timeSeriesCTEArgs []any,
) (string, []any, error) {
stepSec := int64(query.StepInterval.Seconds())
sb := sqlbuilder.NewSelectBuilder()
sb.SelectMore(fmt.Sprintf(
"toStartOfInterval(toDateTime(intDiv(unix_milli, 1000)), toIntervalSecond(%d)) AS ts",
stepSec,
))
for _, g := range query.GroupBy {
sb.SelectMore(fmt.Sprintf("`%s`", g.TelemetryFieldKey.Name))
}
aggForDeltaTemporality := AggregationColumnForSamplesTable(start, end, query.Aggregations[0].Type, metrictypes.Delta, query.Aggregations[0].TimeAggregation, query.Aggregations[0].TableHints)
aggForCumulativeTemporality := AggregationColumnForSamplesTable(start, end, query.Aggregations[0].Type, metrictypes.Cumulative, query.Aggregations[0].TimeAggregation, query.Aggregations[0].TableHints)
if query.Aggregations[0].TimeAggregation == metrictypes.TimeAggregationRate {
aggForDeltaTemporality = fmt.Sprintf("%s/%d", aggForDeltaTemporality, stepSec)
}
switch query.Aggregations[0].TimeAggregation {
case metrictypes.TimeAggregationRate:
rateExpr := fmt.Sprintf(RateWithoutNegativeMultiTemporality, aggForDeltaTemporality, aggForCumulativeTemporality, aggForCumulativeTemporality, aggForCumulativeTemporality, start, aggForCumulativeTemporality, aggForCumulativeTemporality, start)
sb.SelectMore(rateExpr)
case metrictypes.TimeAggregationIncrease:
increaseExpr := fmt.Sprintf(IncreaseWithoutNegativeMultiTemporality, aggForDeltaTemporality, aggForCumulativeTemporality, aggForCumulativeTemporality, aggForCumulativeTemporality, aggForCumulativeTemporality, aggForCumulativeTemporality, start, start)
sb.SelectMore(increaseExpr)
default:
expr := fmt.Sprintf(OthersMultiTemporality, aggForDeltaTemporality, aggForCumulativeTemporality)
sb.SelectMore(expr)
}
tbl := WhichSamplesTableToUse(start, end, query.Aggregations[0].Type, query.Aggregations[0].TimeAggregation, query.Aggregations[0].TableHints)
sb.From(fmt.Sprintf("%s.%s AS points", DBName, tbl))
sb.JoinWithOption(sqlbuilder.InnerJoin, timeSeriesCTE, "points.fingerprint = filtered_time_series.fingerprint")
sb.Where(
sb.In("metric_name", query.Aggregations[0].MetricName),
sb.GTE("unix_milli", start),
sb.LT("unix_milli", end),
)
sb.GroupBy("fingerprint", "ts", "temporality")
sb.GroupBy(querybuilder.GroupByKeys(query.GroupBy)...)
queryWithoutWindow, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse, timeSeriesCTEArgs...)
queryWithWindowAndOrder := queryWithoutWindow + " WINDOW rate_window AS (PARTITION BY fingerprint ORDER BY fingerprint ASC, ts ASC) ORDER BY ts"
return fmt.Sprintf("__temporal_aggregation_cte AS (%s)", queryWithWindowAndOrder), args, nil
}
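For the non-rate, non-increase branch, the OthersMultiTemporality template simply switches the aggregation expression on the temporality column. A sketch with illustrative column expressions:

deltaAgg := "sum(value)"      // illustrative delta-branch aggregation
cumulativeAgg := "max(value)" // illustrative cumulative-branch aggregation
expr := fmt.Sprintf(OthersMultiTemporality, deltaAgg, cumulativeAgg)
// expr == "IF(LOWER(temporality) LIKE LOWER('delta'), sum(value), max(value)) AS per_series_value"
_ = expr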
func (b *MetricQueryStatementBuilder) buildSpatialAggregationCTE(
_ context.Context,
_ uint64,


@@ -1,102 +0,0 @@
package transition
import (
"fmt"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
)
// ConvertV5TimeSeriesDataToV4Result converts v5 TimeSeriesData to v4 Result
func ConvertV5TimeSeriesDataToV4Result(v5Data *qbtypes.TimeSeriesData) *v3.Result {
if v5Data == nil {
return nil
}
result := &v3.Result{
QueryName: v5Data.QueryName,
Series: make([]*v3.Series, 0),
}
toV4Series := func(ts *qbtypes.TimeSeries) *v3.Series {
series := &v3.Series{
Labels: make(map[string]string),
LabelsArray: make([]map[string]string, 0),
Points: make([]v3.Point, 0, len(ts.Values)),
}
for _, label := range ts.Labels {
valueStr := fmt.Sprintf("%v", label.Value)
series.Labels[label.Key.Name] = valueStr
}
if len(series.Labels) > 0 {
series.LabelsArray = append(series.LabelsArray, series.Labels)
}
for _, tsValue := range ts.Values {
if tsValue.Partial {
continue
}
point := v3.Point{
Timestamp: tsValue.Timestamp,
Value: tsValue.Value,
}
series.Points = append(series.Points, point)
}
return series
}
for _, aggBucket := range v5Data.Aggregations {
for _, ts := range aggBucket.Series {
result.Series = append(result.Series, toV4Series(ts))
}
if len(aggBucket.AnomalyScores) != 0 {
result.AnomalyScores = make([]*v3.Series, 0)
for _, ts := range aggBucket.AnomalyScores {
result.AnomalyScores = append(result.AnomalyScores, toV4Series(ts))
}
}
if len(aggBucket.PredictedSeries) != 0 {
result.PredictedSeries = make([]*v3.Series, 0)
for _, ts := range aggBucket.PredictedSeries {
result.PredictedSeries = append(result.PredictedSeries, toV4Series(ts))
}
}
if len(aggBucket.LowerBoundSeries) != 0 {
result.LowerBoundSeries = make([]*v3.Series, 0)
for _, ts := range aggBucket.LowerBoundSeries {
result.LowerBoundSeries = append(result.LowerBoundSeries, toV4Series(ts))
}
}
if len(aggBucket.UpperBoundSeries) != 0 {
result.UpperBoundSeries = make([]*v3.Series, 0)
for _, ts := range aggBucket.UpperBoundSeries {
result.UpperBoundSeries = append(result.UpperBoundSeries, toV4Series(ts))
}
}
}
return result
}
// ConvertV5TimeSeriesDataSliceToV4Results converts a slice of v5 TimeSeriesData to v4 QueryRangeResponse
func ConvertV5TimeSeriesDataSliceToV4Results(v5DataSlice []*qbtypes.TimeSeriesData) *v3.QueryRangeResponse {
response := &v3.QueryRangeResponse{
ResultType: "matrix", // Time series data is typically "matrix" type
Result: make([]*v3.Result, 0, len(v5DataSlice)),
}
for _, v5Data := range v5DataSlice {
if result := ConvertV5TimeSeriesDataToV4Result(v5Data); result != nil {
response.Result = append(response.Result, result)
}
}
return response
}

View File

@@ -151,7 +151,8 @@ func (c *AccountConfig) Value() (driver.Value, error) {
if err != nil {
return nil, errors.WrapInternalf(err, errors.CodeInternal, "couldn't serialize cloud account config to JSON")
}
return serialized, nil
// Return as string instead of []byte to ensure PostgreSQL stores as text, not bytea
return string(serialized), nil
}
type AgentReport struct {
@@ -186,7 +187,8 @@ func (r *AgentReport) Value() (driver.Value, error) {
err, errors.CodeInternal, "couldn't serialize agent report to JSON",
)
}
return serialized, nil
// Return as string instead of []byte to ensure PostgreSQL stores as text, not bytea
return string(serialized), nil
}
type CloudIntegrationService struct {
@@ -240,5 +242,6 @@ func (c *CloudServiceConfig) Value() (driver.Value, error) {
err, errors.CodeInternal, "couldn't serialize cloud service config to JSON",
)
}
return serialized, nil
// Return as string instead of []byte to ensure PostgreSQL stores as text, not bytea
return string(serialized), nil
}
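All three Valuer implementations get the same one-line fix: returning string instead of []byte makes the driver bind the value as text rather than bytea, per the comments above. A minimal self-contained sketch of the pattern; demoConfig is a hypothetical stand-in for the config types:

package main

import (
	"database/sql/driver"
	"encoding/json"
	"fmt"
)

// demoConfig is a hypothetical stand-in for the cloud integration config types.
type demoConfig struct {
	Regions []string `json:"regions"`
}

// Value implements driver.Valuer. Returning string (not []byte) ensures the
// value is bound as text, so PostgreSQL stores readable JSON rather than a
// bytea blob, as the comments in the diff above note.
func (c demoConfig) Value() (driver.Value, error) {
	serialized, err := json.Marshal(c)
	if err != nil {
		return nil, err
	}
	return string(serialized), nil
}

func main() {
	v, _ := demoConfig{Regions: []string{"us-east-1"}}.Value()
	fmt.Printf("%T: %v\n", v, v) // string: {"regions":["us-east-1"]}
}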

View File

@@ -19,6 +19,7 @@ var (
Cumulative = Temporality{valuer.NewString("cumulative")}
Unspecified = Temporality{valuer.NewString("unspecified")}
Unknown = Temporality{valuer.NewString("")}
Multiple = Temporality{valuer.NewString("__multiple__")}
)
func (t Temporality) Value() (driver.Value, error) {

View File

@@ -76,6 +76,33 @@ type TimeSeries struct {
Values []*TimeSeriesValue `json:"values"`
}
// LabelsMap converts the label slice to a map[string]string for use in
// alert evaluation helpers that operate on flat label maps.
func (ts *TimeSeries) LabelsMap() map[string]string {
if ts == nil {
return nil
}
m := make(map[string]string, len(ts.Labels))
for _, l := range ts.Labels {
m[l.Key.Name] = fmt.Sprintf("%v", l.Value)
}
return m
}
// NonPartialValues returns only the values where Partial is false.
func (ts *TimeSeries) NonPartialValues() []*TimeSeriesValue {
if ts == nil {
return nil
}
result := make([]*TimeSeriesValue, 0, len(ts.Values))
for _, v := range ts.Values {
if !v.Partial {
result = append(result, v)
}
}
return result
}
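An in-package sketch of the two new helpers; the labels and values are illustrative:

func exampleTimeSeriesHelpers() {
	ts := &TimeSeries{
		Labels: []*Label{
			{Key: telemetrytypes.TelemetryFieldKey{Name: "service"}, Value: "api"},
		},
		Values: []*TimeSeriesValue{
			{Timestamp: 1000, Value: 0.2},
			{Timestamp: 2000, Value: 0.4, Partial: true}, // trailing partial bucket
		},
	}
	_ = ts.LabelsMap()        // map[string]string{"service": "api"}
	_ = ts.NonPartialValues() // keeps only the t=1000 value
}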
type Label struct {
Key telemetrytypes.TelemetryFieldKey `json:"key"`
Value any `json:"value"`

View File

@@ -203,11 +203,11 @@ func (rc *RuleCondition) IsValid() bool {
}
// ShouldEval checks if the further series should be evaluated at all for alerts.
func (rc *RuleCondition) ShouldEval(series *v3.Series) bool {
func (rc *RuleCondition) ShouldEval(series *qbtypes.TimeSeries) bool {
if rc == nil {
return true
}
return !rc.RequireMinPoints || len(series.Points) >= rc.RequiredNumPoints
return !rc.RequireMinPoints || len(series.NonPartialValues()) >= rc.RequiredNumPoints
}
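An in-package sketch of the gate now that partial values are excluded from the count; the threshold numbers are illustrative:

func exampleShouldEval() bool {
	rc := &RuleCondition{RequireMinPoints: true, RequiredNumPoints: 3}
	series := &qbtypes.TimeSeries{
		Values: []*qbtypes.TimeSeriesValue{
			{Timestamp: 1000, Value: 1.0},
			{Timestamp: 2000, Value: 2.0, Partial: true}, // excluded from the count
			{Timestamp: 3000, Value: 3.0},
		},
	}
	// Returns false: only two non-partial values, fewer than the 3 required.
	return rc.ShouldEval(series)
}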
// QueryType is a shorthand method to get query type

View File

@@ -355,6 +355,14 @@ func (r *PostableRule) validate() error {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "composite query is required"))
}
if r.Version != "" && r.Version != "v5" {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "only version v5 is supported, got %q", r.Version))
}
if r.Version == "v5" && r.RuleCondition.CompositeQuery != nil && len(r.RuleCondition.CompositeQuery.Queries) == 0 {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "queries envelope is required in compositeQuery"))
}
if isAllQueriesDisabled(r.RuleCondition.CompositeQuery) {
errs = append(errs, signozError.NewInvalidInputf(signozError.CodeInvalidInput, "all queries are disabled in rule condition"))
}

View File

@@ -8,6 +8,8 @@ import (
"github.com/stretchr/testify/assert"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
)
func TestIsAllQueriesDisabled(t *testing.T) {
@@ -621,9 +623,9 @@ func TestParseIntoRuleThresholdGeneration(t *testing.T) {
}
// Test that threshold can evaluate properly
vector, err := threshold.Eval(v3.Series{
Points: []v3.Point{{Value: 0.15, Timestamp: 1000}}, // 150ms in seconds
Labels: map[string]string{"test": "label"},
vector, err := threshold.Eval(qbtypes.TimeSeries{
Values: []*qbtypes.TimeSeriesValue{{Value: 0.15, Timestamp: 1000}}, // 150ms in seconds
Labels: []*qbtypes.Label{{Key: telemetrytypes.TelemetryFieldKey{Name: "test"}, Value: "label"}},
}, "", EvalData{})
if err != nil {
t.Fatalf("Unexpected error in shouldAlert: %v", err)
@@ -698,9 +700,9 @@ func TestParseIntoRuleMultipleThresholds(t *testing.T) {
}
// Test with a value that should trigger both WARNING and CRITICAL thresholds
vector, err := threshold.Eval(v3.Series{
Points: []v3.Point{{Value: 95.0, Timestamp: 1000}}, // 95% CPU usage
Labels: map[string]string{"service": "test"},
vector, err := threshold.Eval(qbtypes.TimeSeries{
Values: []*qbtypes.TimeSeriesValue{{Value: 95.0, Timestamp: 1000}}, // 95% CPU usage
Labels: []*qbtypes.Label{{Key: telemetrytypes.TelemetryFieldKey{Name: "service"}, Value: "test"}},
}, "", EvalData{})
if err != nil {
t.Fatalf("Unexpected error in shouldAlert: %v", err)
@@ -708,9 +710,9 @@ func TestParseIntoRuleMultipleThresholds(t *testing.T) {
assert.Equal(t, 2, len(vector))
vector, err = threshold.Eval(v3.Series{
Points: []v3.Point{{Value: 75.0, Timestamp: 1000}}, // 75% CPU usage
Labels: map[string]string{"service": "test"},
vector, err = threshold.Eval(qbtypes.TimeSeries{
Values: []*qbtypes.TimeSeriesValue{{Value: 75.0, Timestamp: 1000}}, // 75% CPU usage
Labels: []*qbtypes.Label{{Key: telemetrytypes.TelemetryFieldKey{Name: "service"}, Value: "test"}},
}, "", EvalData{})
if err != nil {
t.Fatalf("Unexpected error in shouldAlert: %v", err)
@@ -723,7 +725,7 @@ func TestAnomalyNegationEval(t *testing.T) {
tests := []struct {
name string
ruleJSON []byte
series v3.Series
series qbtypes.TimeSeries
shouldAlert bool
expectedValue float64
}{
@@ -751,9 +753,9 @@ func TestAnomalyNegationEval(t *testing.T) {
"selectedQuery": "A"
}
}`),
series: v3.Series{
Labels: map[string]string{"host": "server1"},
Points: []v3.Point{
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{{Key: telemetrytypes.TelemetryFieldKey{Name: "host"}, Value: "server1"}},
Values: []*qbtypes.TimeSeriesValue{
{Timestamp: 1000, Value: -2.1}, // below & at least once, should alert
{Timestamp: 2000, Value: -2.3},
},
@@ -785,9 +787,9 @@ func TestAnomalyNegationEval(t *testing.T) {
"selectedQuery": "A"
}
}`), // below & at least once, no value below -2.0
series: v3.Series{
Labels: map[string]string{"host": "server1"},
Points: []v3.Point{
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{{Key: telemetrytypes.TelemetryFieldKey{Name: "host"}, Value: "server1"}},
Values: []*qbtypes.TimeSeriesValue{
{Timestamp: 1000, Value: -1.9},
{Timestamp: 2000, Value: -1.8},
},
@@ -818,9 +820,9 @@ func TestAnomalyNegationEval(t *testing.T) {
"selectedQuery": "A"
}
}`), // above & at least once, should alert
series: v3.Series{
Labels: map[string]string{"host": "server1"},
Points: []v3.Point{
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{{Key: telemetrytypes.TelemetryFieldKey{Name: "host"}, Value: "server1"}},
Values: []*qbtypes.TimeSeriesValue{
{Timestamp: 1000, Value: 2.1}, // above 2.0, should alert
{Timestamp: 2000, Value: 2.2},
},
@@ -852,9 +854,9 @@ func TestAnomalyNegationEval(t *testing.T) {
"selectedQuery": "A"
}
}`),
series: v3.Series{
Labels: map[string]string{"host": "server1"},
Points: []v3.Point{
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{{Key: telemetrytypes.TelemetryFieldKey{Name: "host"}, Value: "server1"}},
Values: []*qbtypes.TimeSeriesValue{
{Timestamp: 1000, Value: 1.1},
{Timestamp: 2000, Value: 1.2},
},
@@ -885,9 +887,9 @@ func TestAnomalyNegationEval(t *testing.T) {
"selectedQuery": "A"
}
}`), // below and all the times
series: v3.Series{
Labels: map[string]string{"host": "server1"},
Points: []v3.Point{
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{{Key: telemetrytypes.TelemetryFieldKey{Name: "host"}, Value: "server1"}},
Values: []*qbtypes.TimeSeriesValue{
{Timestamp: 1000, Value: -2.1}, // all below -2
{Timestamp: 2000, Value: -2.2},
{Timestamp: 3000, Value: -2.5},
@@ -920,9 +922,9 @@ func TestAnomalyNegationEval(t *testing.T) {
"selectedQuery": "A"
}
}`),
series: v3.Series{
Labels: map[string]string{"host": "server1"},
Points: []v3.Point{
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{{Key: telemetrytypes.TelemetryFieldKey{Name: "host"}, Value: "server1"}},
Values: []*qbtypes.TimeSeriesValue{
{Timestamp: 1000, Value: -3.0},
{Timestamp: 2000, Value: -1.0}, // above -2, breaks condition
{Timestamp: 3000, Value: -2.5},
@@ -954,10 +956,10 @@ func TestAnomalyNegationEval(t *testing.T) {
"selectedQuery": "A"
}
}`),
series: v3.Series{
Labels: map[string]string{"host": "server1"},
Points: []v3.Point{
{Timestamp: 1000, Value: -8.0}, // abs(8) >= 7, alert
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{{Key: telemetrytypes.TelemetryFieldKey{Name: "host"}, Value: "server1"}},
Values: []*qbtypes.TimeSeriesValue{
{Timestamp: 1000, Value: -8.0}, // abs(-8) >= 7, alert
{Timestamp: 2000, Value: 5.0},
},
},
@@ -988,9 +990,9 @@ func TestAnomalyNegationEval(t *testing.T) {
"selectedQuery": "A"
}
}`),
series: v3.Series{
Labels: map[string]string{"host": "server1"},
Points: []v3.Point{
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{{Key: telemetrytypes.TelemetryFieldKey{Name: "host"}, Value: "server1"}},
Values: []*qbtypes.TimeSeriesValue{
{Timestamp: 1000, Value: 80.0}, // below 90, should alert
{Timestamp: 2000, Value: 85.0},
},
@@ -1022,9 +1024,9 @@ func TestAnomalyNegationEval(t *testing.T) {
"selectedQuery": "A"
}
}`),
series: v3.Series{
Labels: map[string]string{"host": "server1"},
Points: []v3.Point{
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{{Key: telemetrytypes.TelemetryFieldKey{Name: "host"}, Value: "server1"}},
Values: []*qbtypes.TimeSeriesValue{
{Timestamp: 1000, Value: 60.0}, // below, should alert
{Timestamp: 2000, Value: 90.0},
},

View File

@@ -8,8 +8,8 @@ import (
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/query-service/converter"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/query-service/utils/labels"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/valuer"
)
@@ -83,7 +83,7 @@ func (eval EvalData) HasActiveAlert(sampleLabelFp uint64) bool {
type RuleThreshold interface {
// Eval runs the given series through the threshold rules
// using the given EvalData and returns the matching series
Eval(series v3.Series, unit string, evalData EvalData) (Vector, error)
Eval(series qbtypes.TimeSeries, unit string, evalData EvalData) (Vector, error)
GetRuleReceivers() []RuleReceivers
}
@@ -122,10 +122,11 @@ func (r BasicRuleThresholds) Validate() error {
return errors.Join(errs...)
}
func (r BasicRuleThresholds) Eval(series v3.Series, unit string, evalData EvalData) (Vector, error) {
func (r BasicRuleThresholds) Eval(series qbtypes.TimeSeries, unit string, evalData EvalData) (Vector, error) {
var resultVector Vector
thresholds := []BasicRuleThreshold(r)
sortThresholds(thresholds)
seriesLabels := series.LabelsMap()
for _, threshold := range thresholds {
smpl, shouldAlert := threshold.shouldAlert(series, unit)
if shouldAlert {
@@ -137,15 +138,15 @@ func (r BasicRuleThresholds) Eval(series v3.Series, unit string, evalData EvalDa
resultVector = append(resultVector, smpl)
continue
} else if evalData.SendUnmatched {
// Sanitise the series points to remove any NaN or Inf values
series.Points = removeGroupinSetPoints(series)
if len(series.Points) == 0 {
// Sanitise the series values to remove any NaN, Inf, or partial values
values := filterValidValues(series.Values)
if len(values) == 0 {
continue
}
// prepare the sample with the first point of the series
// prepare the sample with the first value of the series
smpl := Sample{
Point: Point{T: series.Points[0].Timestamp, V: series.Points[0].Value},
Metric: PrepareSampleLabelsForRule(series.Labels, threshold.Name),
Point: Point{T: values[0].Timestamp, V: values[0].Value},
Metric: PrepareSampleLabelsForRule(seriesLabels, threshold.Name),
Target: *threshold.TargetValue,
TargetUnit: threshold.TargetUnit,
}
@@ -160,7 +161,7 @@ func (r BasicRuleThresholds) Eval(series v3.Series, unit string, evalData EvalDa
if threshold.RecoveryTarget == nil {
continue
}
sampleLabels := PrepareSampleLabelsForRule(series.Labels, threshold.Name)
sampleLabels := PrepareSampleLabelsForRule(seriesLabels, threshold.Name)
alertHash := sampleLabels.Hash()
// check if alert is active and then check if recovery threshold matches
if evalData.HasActiveAlert(alertHash) {
@@ -255,18 +256,23 @@ func (b BasicRuleThreshold) Validate() error {
return errors.Join(errs...)
}
func (b BasicRuleThreshold) matchesRecoveryThreshold(series v3.Series, ruleUnit string) (Sample, bool) {
func (b BasicRuleThreshold) matchesRecoveryThreshold(series qbtypes.TimeSeries, ruleUnit string) (Sample, bool) {
return b.shouldAlertWithTarget(series, b.recoveryTarget(ruleUnit))
}
func (b BasicRuleThreshold) shouldAlert(series v3.Series, ruleUnit string) (Sample, bool) {
func (b BasicRuleThreshold) shouldAlert(series qbtypes.TimeSeries, ruleUnit string) (Sample, bool) {
return b.shouldAlertWithTarget(series, b.target(ruleUnit))
}
func removeGroupinSetPoints(series v3.Series) []v3.Point {
var result []v3.Point
for _, s := range series.Points {
if s.Timestamp >= 0 && !math.IsNaN(s.Value) && !math.IsInf(s.Value, 0) {
result = append(result, s)
// filterValidValues returns only the values that are valid for alert evaluation:
// non-partial, non-NaN, non-Inf, and with non-negative timestamps.
func filterValidValues(values []*qbtypes.TimeSeriesValue) []*qbtypes.TimeSeriesValue {
var result []*qbtypes.TimeSeriesValue
for _, v := range values {
if v.Partial {
continue
}
if v.Timestamp >= 0 && !math.IsNaN(v.Value) && !math.IsInf(v.Value, 0) {
result = append(result, v)
}
}
return result
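An in-package sketch of what survives the filter; the values are illustrative:

func exampleFilterValidValues() {
	vals := []*qbtypes.TimeSeriesValue{
		{Timestamp: 1000, Value: 1.5},                // kept
		{Timestamp: 2000, Value: math.NaN()},         // dropped: NaN
		{Timestamp: 3000, Value: math.Inf(1)},        // dropped: Inf
		{Timestamp: 4000, Value: 2.5, Partial: true}, // dropped: partial bucket
	}
	kept := filterValidValues(vals)
	_ = kept // only the t=1000 value remains
}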
@@ -284,15 +290,15 @@ func PrepareSampleLabelsForRule(seriesLabels map[string]string, thresholdName st
return lb.Labels()
}
func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float64) (Sample, bool) {
func (b BasicRuleThreshold) shouldAlertWithTarget(series qbtypes.TimeSeries, target float64) (Sample, bool) {
var shouldAlert bool
var alertSmpl Sample
lbls := PrepareSampleLabelsForRule(series.Labels, b.Name)
lbls := PrepareSampleLabelsForRule(series.LabelsMap(), b.Name)
series.Points = removeGroupinSetPoints(series)
values := filterValidValues(series.Values)
// nothing to evaluate
if len(series.Points) == 0 {
if len(values) == 0 {
return alertSmpl, false
}
@@ -300,7 +306,7 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
case AtleastOnce:
// If any sample matches the condition, the rule is firing.
if b.CompareOp == ValueIsAbove {
for _, smpl := range series.Points {
for _, smpl := range values {
if smpl.Value > target {
alertSmpl = Sample{Point: Point{V: smpl.Value}, Metric: lbls}
shouldAlert = true
@@ -308,7 +314,7 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
}
}
} else if b.CompareOp == ValueIsBelow {
for _, smpl := range series.Points {
for _, smpl := range values {
if smpl.Value < target {
alertSmpl = Sample{Point: Point{V: smpl.Value}, Metric: lbls}
shouldAlert = true
@@ -316,7 +322,7 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
}
}
} else if b.CompareOp == ValueIsEq {
for _, smpl := range series.Points {
for _, smpl := range values {
if smpl.Value == target {
alertSmpl = Sample{Point: Point{V: smpl.Value}, Metric: lbls}
shouldAlert = true
@@ -324,7 +330,7 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
}
}
} else if b.CompareOp == ValueIsNotEq {
for _, smpl := range series.Points {
for _, smpl := range values {
if smpl.Value != target {
alertSmpl = Sample{Point: Point{V: smpl.Value}, Metric: lbls}
shouldAlert = true
@@ -332,7 +338,7 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
}
}
} else if b.CompareOp == ValueOutsideBounds {
for _, smpl := range series.Points {
for _, smpl := range values {
if math.Abs(smpl.Value) >= target {
alertSmpl = Sample{Point: Point{V: smpl.Value}, Metric: lbls}
shouldAlert = true
@@ -345,7 +351,7 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
shouldAlert = true
alertSmpl = Sample{Point: Point{V: target}, Metric: lbls}
if b.CompareOp == ValueIsAbove {
for _, smpl := range series.Points {
for _, smpl := range values {
if smpl.Value <= target {
shouldAlert = false
break
@@ -354,7 +360,7 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
// use min value from the series
if shouldAlert {
var minValue float64 = math.Inf(1)
for _, smpl := range series.Points {
for _, smpl := range values {
if smpl.Value < minValue {
minValue = smpl.Value
}
@@ -362,7 +368,7 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
alertSmpl = Sample{Point: Point{V: minValue}, Metric: lbls}
}
} else if b.CompareOp == ValueIsBelow {
for _, smpl := range series.Points {
for _, smpl := range values {
if smpl.Value >= target {
shouldAlert = false
break
@@ -370,7 +376,7 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
}
if shouldAlert {
var maxValue float64 = math.Inf(-1)
for _, smpl := range series.Points {
for _, smpl := range values {
if smpl.Value > maxValue {
maxValue = smpl.Value
}
@@ -378,14 +384,14 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
alertSmpl = Sample{Point: Point{V: maxValue}, Metric: lbls}
}
} else if b.CompareOp == ValueIsEq {
for _, smpl := range series.Points {
for _, smpl := range values {
if smpl.Value != target {
shouldAlert = false
break
}
}
} else if b.CompareOp == ValueIsNotEq {
for _, smpl := range series.Points {
for _, smpl := range values {
if smpl.Value == target {
shouldAlert = false
break
@@ -393,7 +399,7 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
}
// use any non-inf or nan value from the series
if shouldAlert {
for _, smpl := range series.Points {
for _, smpl := range values {
if !math.IsInf(smpl.Value, 0) && !math.IsNaN(smpl.Value) {
alertSmpl = Sample{Point: Point{V: smpl.Value}, Metric: lbls}
break
@@ -401,7 +407,7 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
}
}
} else if b.CompareOp == ValueOutsideBounds {
for _, smpl := range series.Points {
for _, smpl := range values {
if math.Abs(smpl.Value) < target {
alertSmpl = Sample{Point: Point{V: smpl.Value}, Metric: lbls}
shouldAlert = false
@@ -412,7 +418,7 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
case OnAverage:
// If the average of all samples matches the condition, the rule is firing.
var sum, count float64
for _, smpl := range series.Points {
for _, smpl := range values {
if math.IsNaN(smpl.Value) || math.IsInf(smpl.Value, 0) {
continue
}
@@ -446,7 +452,7 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
// If the sum of all samples matches the condition, the rule is firing.
var sum float64
for _, smpl := range series.Points {
for _, smpl := range values {
if math.IsNaN(smpl.Value) || math.IsInf(smpl.Value, 0) {
continue
}
@@ -477,21 +483,22 @@ func (b BasicRuleThreshold) shouldAlertWithTarget(series v3.Series, target float
case Last:
// If the last sample matches the condition, the rule is firing.
shouldAlert = false
alertSmpl = Sample{Point: Point{V: series.Points[len(series.Points)-1].Value}, Metric: lbls}
lastValue := values[len(values)-1].Value
alertSmpl = Sample{Point: Point{V: lastValue}, Metric: lbls}
if b.CompareOp == ValueIsAbove {
if series.Points[len(series.Points)-1].Value > target {
if lastValue > target {
shouldAlert = true
}
} else if b.CompareOp == ValueIsBelow {
if series.Points[len(series.Points)-1].Value < target {
if lastValue < target {
shouldAlert = true
}
} else if b.CompareOp == ValueIsEq {
if series.Points[len(series.Points)-1].Value == target {
if lastValue == target {
shouldAlert = true
}
} else if b.CompareOp == ValueIsNotEq {
if series.Points[len(series.Points)-1].Value != target {
if lastValue != target {
shouldAlert = true
}
}
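An in-package sketch of the Last semantics; the threshold and values are illustrative. An earlier breach does not matter, only the final value is compared:

func exampleLastMatch() bool {
	b := BasicRuleThreshold{Name: "critical", MatchType: Last, CompareOp: ValueIsAbove}
	series := qbtypes.TimeSeries{
		Values: []*qbtypes.TimeSeriesValue{
			{Timestamp: 1000, Value: 150}, // crossed the target earlier
			{Timestamp: 2000, Value: 90},  // but only the last value counts
		},
	}
	_, shouldAlert := b.shouldAlertWithTarget(series, 100)
	return shouldAlert // false: the last value 90 is not above 100
}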

View File

@@ -6,16 +6,24 @@ import (
"github.com/stretchr/testify/assert"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
)
func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
target := 100.0
makeLabel := func(name, value string) *qbtypes.Label {
return &qbtypes.Label{Key: telemetrytypes.TelemetryFieldKey{Name: name}, Value: value}
}
makeValue := func(value float64, ts int64) *qbtypes.TimeSeriesValue {
return &qbtypes.TimeSeriesValue{Value: value, Timestamp: ts}
}
tests := []struct {
name string
threshold BasicRuleThreshold
series v3.Series
series qbtypes.TimeSeries
ruleUnit string
shouldAlert bool
}{
@@ -28,11 +36,9 @@ func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
MatchType: AtleastOnce,
CompareOp: ValueIsAbove,
},
series: v3.Series{
Labels: map[string]string{"service": "test"},
Points: []v3.Point{
{Value: 0.15, Timestamp: 1000}, // 150ms in seconds
},
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{makeLabel("service", "test")},
Values: []*qbtypes.TimeSeriesValue{makeValue(0.15, 1000)}, // 150ms in seconds
},
ruleUnit: "s",
shouldAlert: true,
@@ -46,11 +52,9 @@ func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
MatchType: AtleastOnce,
CompareOp: ValueIsAbove,
},
series: v3.Series{
Labels: map[string]string{"service": "test"},
Points: []v3.Point{
{Value: 0.05, Timestamp: 1000}, // 50ms in seconds
},
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{makeLabel("service", "test")},
Values: []*qbtypes.TimeSeriesValue{makeValue(0.05, 1000)}, // 50ms in seconds
},
ruleUnit: "s",
shouldAlert: false,
@@ -64,11 +68,9 @@ func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
MatchType: AtleastOnce,
CompareOp: ValueIsAbove,
},
series: v3.Series{
Labels: map[string]string{"service": "test"},
Points: []v3.Point{
{Value: 150000, Timestamp: 1000}, // 150000ms = 150s
},
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{makeLabel("service", "test")},
Values: []*qbtypes.TimeSeriesValue{makeValue(150000, 1000)}, // 150000ms = 150s
},
ruleUnit: "ms",
shouldAlert: true,
@@ -83,11 +85,9 @@ func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
MatchType: AtleastOnce,
CompareOp: ValueIsAbove,
},
series: v3.Series{
Labels: map[string]string{"service": "test"},
Points: []v3.Point{
{Value: 0.15, Timestamp: 1000}, // 0.15KiB ≈ 153.6 bytes
},
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{makeLabel("service", "test")},
Values: []*qbtypes.TimeSeriesValue{makeValue(0.15, 1000)}, // 0.15KiB ≈ 153.6 bytes
},
ruleUnit: "kbytes",
shouldAlert: true,
@@ -101,11 +101,9 @@ func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
MatchType: AtleastOnce,
CompareOp: ValueIsAbove,
},
series: v3.Series{
Labels: map[string]string{"service": "test"},
Points: []v3.Point{
{Value: 0.15, Timestamp: 1000},
},
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{makeLabel("service", "test")},
Values: []*qbtypes.TimeSeriesValue{makeValue(0.15, 1000)},
},
ruleUnit: "mbytes",
shouldAlert: true,
@@ -120,11 +118,9 @@ func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
MatchType: AtleastOnce,
CompareOp: ValueIsBelow,
},
series: v3.Series{
Labels: map[string]string{"service": "test"},
Points: []v3.Point{
{Value: 0.05, Timestamp: 1000}, // 50ms in seconds
},
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{makeLabel("service", "test")},
Values: []*qbtypes.TimeSeriesValue{makeValue(0.05, 1000)}, // 50ms in seconds
},
ruleUnit: "s",
shouldAlert: true,
@@ -138,12 +134,12 @@ func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
MatchType: OnAverage,
CompareOp: ValueIsAbove,
},
series: v3.Series{
Labels: map[string]string{"service": "test"},
Points: []v3.Point{
{Value: 0.08, Timestamp: 1000}, // 80ms
{Value: 0.12, Timestamp: 2000}, // 120ms
{Value: 0.15, Timestamp: 3000}, // 150ms
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{makeLabel("service", "test")},
Values: []*qbtypes.TimeSeriesValue{
makeValue(0.08, 1000), // 80ms
makeValue(0.12, 2000), // 120ms
makeValue(0.15, 3000), // 150ms
},
},
ruleUnit: "s",
@@ -158,12 +154,12 @@ func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
MatchType: InTotal,
CompareOp: ValueIsAbove,
},
series: v3.Series{
Labels: map[string]string{"service": "test"},
Points: []v3.Point{
{Value: 0.04, Timestamp: 1000}, // 40MB
{Value: 0.05, Timestamp: 2000}, // 50MB
{Value: 0.03, Timestamp: 3000}, // 30MB
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{makeLabel("service", "test")},
Values: []*qbtypes.TimeSeriesValue{
makeValue(0.04, 1000), // 40MB
makeValue(0.05, 2000), // 50MB
makeValue(0.03, 3000), // 30MB
},
},
ruleUnit: "decgbytes",
@@ -178,12 +174,12 @@ func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
MatchType: AllTheTimes,
CompareOp: ValueIsAbove,
},
series: v3.Series{
Labels: map[string]string{"service": "test"},
Points: []v3.Point{
{Value: 0.11, Timestamp: 1000}, // 110ms
{Value: 0.12, Timestamp: 2000}, // 120ms
{Value: 0.15, Timestamp: 3000}, // 150ms
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{makeLabel("service", "test")},
Values: []*qbtypes.TimeSeriesValue{
makeValue(0.11, 1000), // 110ms
makeValue(0.12, 2000), // 120ms
makeValue(0.15, 3000), // 150ms
},
},
ruleUnit: "s",
@@ -198,11 +194,11 @@ func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
MatchType: Last,
CompareOp: ValueIsAbove,
},
series: v3.Series{
Labels: map[string]string{"service": "test"},
Points: []v3.Point{
{Value: 0.15, Timestamp: 1000}, // 150kB
{Value: 0.05, Timestamp: 2000}, // 50kB (last value)
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{makeLabel("service", "test")},
Values: []*qbtypes.TimeSeriesValue{
makeValue(0.15, 1000), // 150kB
makeValue(0.05, 2000), // 50kB (last value)
},
},
ruleUnit: "decmbytes",
@@ -218,11 +214,9 @@ func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
MatchType: AtleastOnce,
CompareOp: ValueIsAbove,
},
series: v3.Series{
Labels: map[string]string{"service": "test"},
Points: []v3.Point{
{Value: 0.15, Timestamp: 1000},
},
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{makeLabel("service", "test")},
Values: []*qbtypes.TimeSeriesValue{makeValue(0.15, 1000)},
},
ruleUnit: "KBs",
shouldAlert: true,
@@ -237,11 +231,9 @@ func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
MatchType: AtleastOnce,
CompareOp: ValueIsAbove,
},
series: v3.Series{
Labels: map[string]string{"service": "test"},
Points: []v3.Point{
{Value: 150, Timestamp: 1000}, // 150ms
},
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{makeLabel("service", "test")},
Values: []*qbtypes.TimeSeriesValue{makeValue(150, 1000)}, // 150ms
},
ruleUnit: "ms",
shouldAlert: true,
@@ -256,11 +248,9 @@ func TestBasicRuleThresholdEval_UnitConversion(t *testing.T) {
MatchType: AtleastOnce,
CompareOp: ValueIsAbove,
},
series: v3.Series{
Labels: map[string]string{"service": "test"},
Points: []v3.Point{
{Value: 150, Timestamp: 1000}, // 150 (unitless)
},
series: qbtypes.TimeSeries{
Labels: []*qbtypes.Label{makeLabel("service", "test")},
Values: []*qbtypes.TimeSeriesValue{makeValue(150, 1000)}, // 150 (unitless)
},
ruleUnit: "",
shouldAlert: true,

View File

@@ -27,10 +27,10 @@ type MetadataStore interface {
GetAllValues(ctx context.Context, fieldValueSelector *FieldValueSelector) (*TelemetryFieldValues, bool, error)
// FetchTemporality fetches the temporality for metric
FetchTemporality(ctx context.Context, metricName string) (metrictypes.Temporality, error)
FetchTemporality(ctx context.Context, queryTimeRangeStartTs, queryTimeRangeEndTs uint64, metricName string) (metrictypes.Temporality, error)
// FetchTemporalityMulti fetches the temporality for multiple metrics
FetchTemporalityMulti(ctx context.Context, metricNames ...string) (map[string]metrictypes.Temporality, error)
FetchTemporalityMulti(ctx context.Context, queryTimeRangeStartTs, queryTimeRangeEndTs uint64, metricNames ...string) (map[string]metrictypes.Temporality, error)
// ListLogsJSONIndexes lists the JSON indexes for the logs table.
ListLogsJSONIndexes(ctx context.Context, filters ...string) (map[string][]schemamigrator.Index, error)
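A hypothetical caller sketch for the new signature. The start/end arguments are presumably unix milliseconds, matching the unix_milli columns used in the sample queries above, so the store can resolve the temporality in effect for the queried window; the metric name is made up:

func resolveTemporality(ctx context.Context, store MetadataStore, startMs, endMs uint64) (metrictypes.Temporality, error) {
	// Scope the lookup to the window actually being queried.
	return store.FetchTemporality(ctx, startMs, endMs, "http_server_duration")
}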

View File

@@ -265,7 +265,7 @@ func (m *MockMetadataStore) SetAllValues(lookupKey string, values *telemetrytype
}
// FetchTemporality fetches the temporality for a metric
func (m *MockMetadataStore) FetchTemporality(ctx context.Context, metricName string) (metrictypes.Temporality, error) {
func (m *MockMetadataStore) FetchTemporality(ctx context.Context, queryTimeRangeStartTs, queryTimeRangeEndTs uint64, metricName string) (metrictypes.Temporality, error) {
if temporality, exists := m.TemporalityMap[metricName]; exists {
return temporality, nil
}
@@ -273,7 +273,7 @@ func (m *MockMetadataStore) FetchTemporality(ctx context.Context, metricName str
}
// FetchTemporalityMulti fetches the temporality for multiple metrics
func (m *MockMetadataStore) FetchTemporalityMulti(ctx context.Context, metricNames ...string) (map[string]metrictypes.Temporality, error) {
func (m *MockMetadataStore) FetchTemporalityMulti(ctx context.Context, queryTimeRangeStartTs, queryTimeRangeEndTs uint64, metricNames ...string) (map[string]metrictypes.Temporality, error) {
result := make(map[string]metrictypes.Temporality)
for _, metricName := range metricNames {

View File

@@ -20,6 +20,7 @@ pytest_plugins = [
"fixtures.idputils",
"fixtures.notification_channel",
"fixtures.alerts",
"fixtures.cloudintegrations",
]

View File

@@ -112,7 +112,7 @@ def verify_webhook_alert_expectation(
break
# wait for some time before checking again
time.sleep(10)
time.sleep(1)
# We've waited but we didn't get the expected number of alerts
@@ -133,3 +133,15 @@ def verify_webhook_alert_expectation(
)
return True # should not reach here
def update_rule_channel_name(rule_data: dict, channel_name: str):
"""
updates the channel name in the thresholds
so alert notifications are sent to the given channel
"""
thresholds = rule_data["condition"]["thresholds"]
if "kind" in thresholds and thresholds["kind"] == "basic":
# loop over all the specs and update the channels
for spec in thresholds["spec"]:
spec["channels"] = [channel_name]

View File

@@ -0,0 +1,86 @@
"""Fixtures for cloud integration tests."""
from http import HTTPStatus
from typing import Callable, Optional
import pytest
import requests
from fixtures import types
from fixtures.auth import USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
@pytest.fixture(scope="function")
def create_cloud_integration_account(
request: pytest.FixtureRequest,
signoz: types.SigNoz,
) -> Callable[[str, str], dict]:
created_account_id: Optional[str] = None
cloud_provider_used: Optional[str] = None
def _create(
admin_token: str,
cloud_provider: str = "aws",
) -> dict:
nonlocal created_account_id, cloud_provider_used
endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/accounts/generate-connection-url"
request_payload = {
"account_config": {"regions": ["us-east-1"]},
"agent_config": {
"region": "us-east-1",
"ingestion_url": "https://ingest.test.signoz.cloud",
"ingestion_key": "test-ingestion-key-123456",
"signoz_api_url": "https://test-deployment.test.signoz.cloud",
"signoz_api_key": "test-api-key-789",
"version": "v0.0.8",
},
}
response = requests.post(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
json=request_payload,
timeout=10,
)
assert (
response.status_code == HTTPStatus.OK
), f"Failed to create test account: {response.status_code}"
data = response.json().get("data", response.json())
created_account_id = data.get("account_id")
cloud_provider_used = cloud_provider
return data
def _disconnect(admin_token: str, cloud_provider: str) -> requests.Response:
assert created_account_id
disconnect_endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/accounts/{created_account_id}/disconnect"
return requests.post(
signoz.self.host_configs["8080"].get(disconnect_endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
# Yield factory to the test
yield _create
# Post-test cleanup: generate admin token and disconnect the created account
if created_account_id and cloud_provider_used:
get_token = request.getfixturevalue("get_token")
try:
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
r = _disconnect(admin_token, cloud_provider_used)
if r.status_code != HTTPStatus.OK:
logger.info(
"Disconnect cleanup returned %s for account %s",
r.status_code,
created_account_id,
)
logger.info("Cleaned up test account: %s", created_account_id)
except Exception as exc: # pylint: disable=broad-except
logger.info("Post-test disconnect cleanup failed: %s", exc)

View File

@@ -0,0 +1,47 @@
"""Fixtures for cloud integration tests."""
from http import HTTPStatus
import requests
from fixtures import types
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
def simulate_agent_checkin(
signoz: types.SigNoz,
admin_token: str,
cloud_provider: str,
account_id: str,
cloud_account_id: str,
) -> dict:
endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/agent-check-in"
checkin_payload = {
"account_id": account_id,
"cloud_account_id": cloud_account_id,
"data": {},
}
response = requests.post(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
json=checkin_payload,
timeout=10,
)
if response.status_code != HTTPStatus.OK:
logger.error(
"Agent check-in failed: %s, response: %s",
response.status_code,
response.text,
)
assert (
response.status_code == HTTPStatus.OK
), f"Agent check-in failed: {response.status_code}"
response_data = response.json()
return response_data.get("data", response_data)

View File

@@ -43,6 +43,10 @@ class MetricsTimeSeries(ABC):
resource_attrs: dict[str, str] = {},
scope_attrs: dict[str, str] = {},
) -> None:
# Create a copy of labels to avoid mutating the caller's dictionary
labels = dict(labels)
# Add metric_name to the labels to support promql queries
labels["__name__"] = metric_name
self.env = env
self.metric_name = metric_name
self.temporality = temporality

View File

@@ -52,7 +52,7 @@ def build_builder_query(
time_aggregation: str,
space_aggregation: str,
*,
temporality: str = "cumulative",
temporality: Optional[str] = None,
step_interval: int = DEFAULT_STEP_INTERVAL,
group_by: Optional[List[str]] = None,
filter_expression: Optional[str] = None,
@@ -65,7 +65,6 @@ def build_builder_query(
"aggregations": [
{
"metricName": metric_name,
"temporality": temporality,
"timeAggregation": time_aggregation,
"spaceAggregation": space_aggregation,
}
@@ -73,6 +72,8 @@ def build_builder_query(
"stepInterval": step_interval,
"disabled": disabled,
}
if temporality:
spec["aggregations"][0]["temporality"] = temporality
if group_by:
spec["groupBy"] = [

View File

@@ -69,6 +69,10 @@ def signoz( # pylint: disable=too-many-arguments,too-many-positional-arguments
"SIGNOZ_GLOBAL_INGESTION__URL": "https://ingest.test.signoz.cloud",
"SIGNOZ_USER_PASSWORD_RESET_ALLOW__SELF": True,
"SIGNOZ_USER_PASSWORD_RESET_MAX__TOKEN__LIFETIME": "6h",
"RULES_EVAL_DELAY": "0s",
"SIGNOZ_ALERTMANAGER_SIGNOZ_POLL__INTERVAL": "5s",
"SIGNOZ_ALERTMANAGER_SIGNOZ_ROUTE_GROUP__WAIT": "1s",
"SIGNOZ_ALERTMANAGER_SIGNOZ_ROUTE_GROUP__INTERVAL": "5s",
}
| sqlstore.env
| clickhouse.env

View File

@@ -191,3 +191,15 @@ class AlertExpectation:
# seconds to wait for the alerts to be fired, if no
# alerts are fired in the expected time, the test will fail
wait_time_seconds: int
@dataclass(frozen=True)
class AlertTestCase:
# name of the test case
name: str
# path to the rule file in testdata directory
rule_path: str
# list of alert data that will be inserted into the database
alert_data: List[AlertData]
# list of alert expectations for the test case
alert_expectation: AlertExpectation

View File

@@ -8,6 +8,9 @@ from wiremock.client import HttpMethods, Mapping, MappingRequest, MappingRespons
from fixtures import types
from fixtures.auth import USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
def test_webhook_notification_channel(
@@ -20,6 +23,7 @@ def test_webhook_notification_channel(
"""
Tests the creation and delivery of test alerts on the created notification channel
"""
logger.info("Setting up notification channel")
# Prepare notification channel name and webhook endpoint
notification_channel_name = f"notification-channel-{uuid.uuid4()}"
@@ -55,10 +59,10 @@ def test_webhook_notification_channel(
)
# TODO: @abhishekhugetech # pylint: disable=W0511
# Time required for Org to be registered
# in the alertmanager, default 1m.
# Time required for a newly created Org to be registered in the alertmanager is 5 seconds (set in signoz.py)
# this will be fixed after [https://github.com/SigNoz/engineering-pod/issues/3800]
time.sleep(65)
# 10 seconds is a safe buffer for the org to be registered in the alertmanager
time.sleep(10)
# Call test API for the notification channel
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)

View File

@@ -0,0 +1,671 @@
import json
import uuid
from datetime import datetime, timedelta, timezone
from typing import Callable, List
import pytest
from wiremock.client import HttpMethods, Mapping, MappingRequest, MappingResponse
from fixtures import types
from fixtures.alertutils import (
update_rule_channel_name,
verify_webhook_alert_expectation,
)
from fixtures.logger import setup_logger
from fixtures.utils import get_testdata_file_path
# test cases for match type and compare operators have a wait time of 30 seconds to verify the alert expectation.
# we've positioned the alert data to fire the alert on the first eval of the rule manager; the eval frequency
# for most alert rules is set to 15s, so accounting for this delay plus some delay from the alert manager's
# group_wait and group_interval, even in the worst case most alerts should be triggered within about 30 seconds
TEST_RULES_MATCH_TYPE_AND_COMPARE_OPERATORS = [
types.AlertTestCase(
name="test_threshold_above_at_least_once",
rule_path="alerts/test_scenarios/threshold_above_at_least_once/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/threshold_above_at_least_once/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_above_at_least_once",
"threshold.name": "critical",
}
),
],
),
),
types.AlertTestCase(
name="test_threshold_above_all_the_time",
rule_path="alerts/test_scenarios/threshold_above_all_the_time/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/threshold_above_all_the_time/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_above_all_the_time",
"threshold.name": "critical",
}
),
],
),
),
types.AlertTestCase(
name="test_threshold_above_in_total",
rule_path="alerts/test_scenarios/threshold_above_in_total/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/threshold_above_in_total/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_above_in_total",
"threshold.name": "critical",
"service": "server",
},
),
types.FiringAlert(
labels={
"alertname": "threshold_above_in_total",
"threshold.name": "critical",
"service": "api",
}
),
],
),
),
types.AlertTestCase(
name="test_threshold_above_average",
rule_path="alerts/test_scenarios/threshold_above_average/rule.json",
alert_data=[
types.AlertData(
type="traces",
data_path="alerts/test_scenarios/threshold_above_average/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_above_average",
"threshold.name": "critical",
}
),
],
),
),
# TODO: @abhishekhugetech enable the test for matchType last, pylint: disable=W0511
# after the [issue](https://github.com/SigNoz/engineering-pod/issues/3801) with matchType last is fixed
# types.AlertTestCase(
# name="test_threshold_above_last",
# rule_path="alerts/test_scenarios/threshold_above_last/rule.json",
# alert_data=[
# types.AlertData(
# type="metrics",
# data_path="alerts/test_scenarios/threshold_above_last/alert_data.jsonl",
# ),
# ],
# alert_expectation=types.AlertExpectation(
# should_alert=True,
# wait_time_seconds=30,
# expected_alerts=[
# types.FiringAlert(
# labels={
# "alertname": "threshold_above_last",
# "threshold.name": "critical",
# }
# ),
# ],
# ),
# ),
types.AlertTestCase(
name="test_threshold_below_at_least_once",
rule_path="alerts/test_scenarios/threshold_below_at_least_once/rule.json",
alert_data=[
types.AlertData(
type="logs",
data_path="alerts/test_scenarios/threshold_below_at_least_once/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_below_at_least_once",
"threshold.name": "critical",
}
),
],
),
),
types.AlertTestCase(
name="test_threshold_below_all_the_time",
rule_path="alerts/test_scenarios/threshold_below_all_the_time/rule.json",
alert_data=[
types.AlertData(
type="logs",
data_path="alerts/test_scenarios/threshold_below_all_the_time/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_below_all_the_time",
"threshold.name": "critical",
}
),
],
),
),
types.AlertTestCase(
name="test_threshold_below_in_total",
rule_path="alerts/test_scenarios/threshold_below_in_total/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/threshold_below_in_total/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_below_in_total",
"threshold.name": "critical",
}
),
],
),
),
types.AlertTestCase(
name="test_threshold_below_average",
rule_path="alerts/test_scenarios/threshold_below_average/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/threshold_below_average/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_below_average",
"threshold.name": "critical",
}
),
],
),
),
# TODO: @abhishekhugetech enable the test for matchType last,
# after the [issue](https://github.com/SigNoz/engineering-pod/issues/3801) with matchType last is fixed, pylint: disable=W0511
# types.AlertTestCase(
# name="test_threshold_below_last",
# rule_path="alerts/test_scenarios/threshold_below_last/rule.json",
# alert_data=[
# types.AlertData(
# type="metrics",
# data_path="alerts/test_scenarios/threshold_below_last/alert_data.jsonl",
# ),
# ],
# alert_expectation=types.AlertExpectation(
# should_alert=True,
# wait_time_seconds=30,
# expected_alerts=[
# types.FiringAlert(
# labels={
# "alertname": "threshold_below_last",
# "threshold.name": "critical",
# }
# ),
# ],
# ),
# ),
types.AlertTestCase(
name="test_threshold_equal_to_at_least_once",
rule_path="alerts/test_scenarios/threshold_equal_to_at_least_once/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/threshold_equal_to_at_least_once/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_equal_to_at_least_once",
"threshold.name": "critical",
}
),
],
),
),
types.AlertTestCase(
name="test_threshold_equal_to_all_the_time",
rule_path="alerts/test_scenarios/threshold_equal_to_all_the_time/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/threshold_equal_to_all_the_time/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_equal_to_all_the_time",
"threshold.name": "critical",
}
),
],
),
),
types.AlertTestCase(
name="test_threshold_equal_to_in_total",
rule_path="alerts/test_scenarios/threshold_equal_to_in_total/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/threshold_equal_to_in_total/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_equal_to_in_total",
"threshold.name": "critical",
}
),
],
),
),
types.AlertTestCase(
name="test_threshold_equal_to_average",
rule_path="alerts/test_scenarios/threshold_equal_to_average/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/threshold_equal_to_average/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_equal_to_average",
"threshold.name": "critical",
}
),
],
),
),
# TODO: @abhishekhugetech enable the test for matchType last,
# after the [issue](https://github.com/SigNoz/engineering-pod/issues/3801) with matchType last is fixed, pylint: disable=W0511
# types.AlertTestCase(
# name="test_threshold_equal_to_last",
# rule_path="alerts/test_scenarios/threshold_equal_to_last/rule.json",
# alert_data=[
# types.AlertData(
# type="metrics",
# data_path="alerts/test_scenarios/threshold_equal_to_last/alert_data.jsonl",
# ),
# ],
# alert_expectation=types.AlertExpectation(
# should_alert=True,
# wait_time_seconds=30,
# expected_alerts=[
# types.FiringAlert(
# labels={
# "alertname": "threshold_equal_to_last",
# "threshold.name": "critical",
# }
# ),
# ],
# ),
# ),
types.AlertTestCase(
name="test_threshold_not_equal_to_at_least_once",
rule_path="alerts/test_scenarios/threshold_not_equal_to_at_least_once/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/threshold_not_equal_to_at_least_once/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_not_equal_to_at_least_once",
"threshold.name": "critical",
}
),
],
),
),
types.AlertTestCase(
name="test_threshold_not_equal_to_all_the_time",
rule_path="alerts/test_scenarios/threshold_not_equal_to_all_the_time/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/threshold_not_equal_to_all_the_time/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_not_equal_to_all_the_time",
"threshold.name": "critical",
}
),
],
),
),
types.AlertTestCase(
name="test_threshold_not_equal_to_in_total",
rule_path="alerts/test_scenarios/threshold_not_equal_to_in_total/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/threshold_not_equal_to_in_total/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_not_equal_to_in_total",
"threshold.name": "critical",
}
),
],
),
),
types.AlertTestCase(
name="test_threshold_not_equal_to_average",
rule_path="alerts/test_scenarios/threshold_not_equal_to_average/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/threshold_not_equal_to_average/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "threshold_not_equal_to_average",
"threshold.name": "critical",
}
),
],
),
),
# TODO: @abhishekhugetech enable the test for matchType last,
# after the [issue](https://github.com/SigNoz/engineering-pod/issues/3801) with matchType last is fixed, pylint: disable=W0511
# types.AlertTestCase(
# name="test_threshold_not_equal_to_last",
# rule_path="alerts/test_scenarios/threshold_not_equal_to_last/rule.json",
# alert_data=[
# types.AlertData(
# type="metrics",
# data_path="alerts/test_scenarios/threshold_not_equal_to_last/alert_data.jsonl",
# ),
# ],
# alert_expectation=types.AlertExpectation(
# should_alert=True,
# wait_time_seconds=30,
# expected_alerts=[
# types.FiringAlert(
# labels={
# "alertname": "threshold_not_equal_to_last",
# "threshold.name": "critical",
# }
# ),
# ],
# ),
# ),
]
# test cases for unit conversion
TEST_RULES_UNIT_CONVERSION = [
types.AlertTestCase(
name="test_unit_conversion_bytes_to_mb",
rule_path="alerts/test_scenarios/unit_conversion_bytes_to_mb/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/unit_conversion_bytes_to_mb/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "unit_conversion_bytes_to_mb",
"threshold.name": "critical",
}
),
],
),
),
types.AlertTestCase(
name="test_unit_conversion_ms_to_second",
rule_path="alerts/test_scenarios/unit_conversion_ms_to_second/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/unit_conversion_ms_to_second/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "unit_conversion_ms_to_second",
"threshold.name": "critical",
}
),
],
),
),
]
# miscellaneous test cases: no data and multi threshold
TEST_RULES_MISCELLANEOUS = [
types.AlertTestCase(
name="test_no_data_rule_test",
rule_path="alerts/test_scenarios/no_data_rule_test/rule.json",
alert_data=[
types.AlertData(
type="metrics",
data_path="alerts/test_scenarios/no_data_rule_test/alert_data.jsonl",
),
],
alert_expectation=types.AlertExpectation(
should_alert=True,
wait_time_seconds=30,
expected_alerts=[
types.FiringAlert(
labels={
"alertname": "[No data] no_data_rule_test",
"nodata": "true",
}
),
],
),
),
# TODO: @abhishekhugetech enable the test for multi threshold rule, pylint: disable=W0511
# after the [issue](https://github.com/SigNoz/engineering-pod/issues/3934) with alertManager is resolved
# types.AlertTestCase(
# name="test_multi_threshold_rule_test",
# rule_path="alerts/test_scenarios/multi_threshold_rule_test/rule.json",
# alert_data=[
# types.AlertData(
# type="metrics",
# data_path="alerts/test_scenarios/multi_threshold_rule_test/alert_data.jsonl",
# ),
# ],
# alert_expectation=types.AlertExpectation(
# should_alert=True,
# # the second alert will be fired with some delay from alert manager's group_interval
# # so taking this into consideration, the wait time is 90 seconds (30s + 30s for the next alert + 30s buffer)
# wait_time_seconds=90,
# expected_alerts=[
# types.FiringAlert(
# labels={
# "alertname": "multi_threshold_rule_test",
# "threshold.name": "info",
# }
# ),
# types.FiringAlert(
# labels={
# "alertname": "multi_threshold_rule_test",
# "threshold.name": "warning",
# }
# ),
# ],
# ),
# ),
]
logger = setup_logger(__name__)
@pytest.mark.parametrize(
"alert_test_case",
TEST_RULES_MATCH_TYPE_AND_COMPARE_OPERATORS
+ TEST_RULES_UNIT_CONVERSION
+ TEST_RULES_MISCELLANEOUS,
ids=lambda alert_test_case: alert_test_case.name,
)
def test_basic_alert_rule_conditions(
# Notification channel related fixtures
notification_channel: types.TestContainerDocker,
make_http_mocks: Callable[[types.TestContainerDocker, List[Mapping]], None],
create_webhook_notification_channel: Callable[[str, str, dict, bool], str],
# Alert rule related fixtures
create_alert_rule: Callable[[dict], str],
# Alert data insertion related fixtures
insert_alert_data: Callable[[List[types.AlertData], datetime], None],
alert_test_case: types.AlertTestCase,
):
# Prepare notification channel name and webhook endpoint
notification_channel_name = str(uuid.uuid4())
webhook_endpoint_path = f"/alert/{notification_channel_name}"
notification_url = notification_channel.container_configs["8080"].get(
webhook_endpoint_path
)
logger.info("notification_url: %s", {"notification_url": notification_url})
# register the mock endpoint in notification channel
make_http_mocks(
notification_channel,
[
Mapping(
request=MappingRequest(
method=HttpMethods.POST,
url=webhook_endpoint_path,
),
response=MappingResponse(
status=200,
json_body={},
),
persistent=False,
)
],
)
# Create an alert channel using the given route
create_webhook_notification_channel(
channel_name=notification_channel_name,
webhook_url=notification_url,
http_config={},
send_resolved=False,
)
logger.info(
"alert channel created with name: %s",
{"notification_channel_name": notification_channel_name},
)
# Insert alert data
insert_alert_data(
alert_test_case.alert_data,
base_time=datetime.now(tz=timezone.utc) - timedelta(minutes=5),
)
# Create Alert Rule
rule_path = get_testdata_file_path(alert_test_case.rule_path)
with open(rule_path, "r", encoding="utf-8") as f:
rule_data = json.loads(f.read())
# Update the channel name in the rule data
update_rule_channel_name(rule_data, notification_channel_name)
rule_id = create_alert_rule(rule_data)
logger.info(
"rule created with id: %s",
{"rule_id": rule_id, "rule_name": rule_data["alert"]},
)
# Verify alert expectation
verify_webhook_alert_expectation(
notification_channel,
notification_channel_name,
alert_test_case.alert_expectation,
)

View File

@@ -0,0 +1,144 @@
from http import HTTPStatus
from typing import Callable
import requests
from fixtures import types
from fixtures.auth import USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
def test_generate_connection_url(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
) -> None:
"""Test to generate connection URL for AWS CloudFormation stack deployment."""
# Get authentication token for admin user
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
cloud_provider = "aws"
endpoint = (
f"/api/v1/cloud-integrations/{cloud_provider}/accounts/generate-connection-url"
)
# Prepare request payload
request_payload = {
"account_config": {"regions": ["us-east-1", "us-west-2"]},
"agent_config": {
"region": "us-east-1",
"ingestion_url": "https://ingest.test.signoz.cloud",
"ingestion_key": "test-ingestion-key-123456",
"signoz_api_url": "https://test-deployment.test.signoz.cloud",
"signoz_api_key": "test-api-key-789",
"version": "v0.0.8",
},
}
response = requests.post(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
json=request_payload,
timeout=10,
)
# Assert successful response
assert (
response.status_code == HTTPStatus.OK
), f"Expected 200, got {response.status_code}: {response.text}"
# Parse response JSON
response_data = response.json()
# Assert response structure contains expected data
assert "data" in response_data, "Response should contain 'data' field"
# Assert required fields in the response data
expected_fields = ["account_id", "connection_url"]
for field in expected_fields:
assert (
field in response_data["data"]
), f"Response data should contain '{field}' field"
data = response_data["data"]
# Assert account_id is a valid UUID format
assert len(data["account_id"]) > 0, "account_id should be a non-empty string (UUID)"
# Assert connection_url contains expected CloudFormation parameters
connection_url = data["connection_url"]
# Verify it's an AWS CloudFormation URL
assert (
"console.aws.amazon.com/cloudformation" in connection_url
), "connection_url should be an AWS CloudFormation URL"
# Verify region is included
assert (
"region=us-east-1" in connection_url
), "connection_url should contain the specified region"
# Verify required parameters are in the URL
required_params = [
"param_SigNozIntegrationAgentVersion=v0.0.8",
"param_SigNozApiUrl=https%3A%2F%2Ftest-deployment.test.signoz.cloud",
"param_SigNozApiKey=test-api-key-789",
"param_SigNozAccountId=", # Will be a UUID
"param_IngestionUrl=https%3A%2F%2Fingest.test.signoz.cloud",
"param_IngestionKey=test-ingestion-key-123456",
"stackName=signoz-integration",
"templateURL=https%3A%2F%2Fsignoz-integrations.s3.us-east-1.amazonaws.com%2Faws-quickcreate-template-v0.0.8.json",
]
for param in required_params:
assert (
param in connection_url
), f"connection_url should contain parameter: {param}"
def test_generate_connection_url_unsupported_provider(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
) -> None:
"""Test that unsupported cloud providers return an error."""
# Get authentication token for admin user
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
# Try with GCP (unsupported)
cloud_provider = "gcp"
endpoint = (
f"/api/v1/cloud-integrations/{cloud_provider}/accounts/generate-connection-url"
)
request_payload = {
"account_config": {"regions": ["us-central1"]},
"agent_config": {
"region": "us-central1",
"ingestion_url": "https://ingest.test.signoz.cloud",
"ingestion_key": "test-ingestion-key-123456",
"signoz_api_url": "https://test-deployment.test.signoz.cloud",
"signoz_api_key": "test-api-key-789",
},
}
response = requests.post(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
json=request_payload,
timeout=10,
)
# Should return Bad Request for unsupported provider
assert (
response.status_code == HTTPStatus.BAD_REQUEST
), f"Expected 400 for unsupported provider, got {response.status_code}"
response_data = response.json()
assert "error" in response_data, "Response should contain 'error' field"
assert (
"unsupported cloud provider" in response_data["error"].lower()
), "Error message should indicate unsupported provider"

View File

@@ -0,0 +1,312 @@
import uuid
from http import HTTPStatus
from typing import Callable
import requests
from fixtures import types
from fixtures.auth import USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD
from fixtures.cloudintegrationsutils import simulate_agent_checkin
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
def test_list_connected_accounts_empty(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
) -> None:
"""Test listing connected accounts when there are none."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
cloud_provider = "aws"
endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/accounts"
response = requests.get(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
assert (
response.status_code == HTTPStatus.OK
), f"Expected 200, got {response.status_code}"
response_data = response.json()
data = response_data.get("data", response_data)
assert "accounts" in data, "Response should contain 'accounts' field"
assert isinstance(data["accounts"], list), "Accounts should be a list"
assert (
len(data["accounts"]) == 0
), "Accounts list should be empty when no accounts are connected"
def test_list_connected_accounts_with_account(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
create_cloud_integration_account: Callable,
) -> None:
"""Test listing connected accounts after creating one."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
# Create a test account
cloud_provider = "aws"
account_data = create_cloud_integration_account(admin_token, cloud_provider)
account_id = account_data["account_id"]
# Simulate agent check-in to mark as connected
cloud_account_id = str(uuid.uuid4())
simulate_agent_checkin(
signoz, admin_token, cloud_provider, account_id, cloud_account_id
)
# List accounts
endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/accounts"
response = requests.get(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
assert (
response.status_code == HTTPStatus.OK
), f"Expected 200, got {response.status_code}"
response_data = response.json()
data = response_data.get("data", response_data)
assert "accounts" in data, "Response should contain 'accounts' field"
assert isinstance(data["accounts"], list), "Accounts should be a list"
# Find our account in the list (there may be leftover accounts from previous test runs)
account = next((a for a in data["accounts"] if a["id"] == account_id), None)
assert account is not None, f"Account {account_id} should be found in list"
assert account["id"] == account_id, "Account ID should match"
assert "config" in account, "Account should have config field"
assert "status" in account, "Account should have status field"
def test_get_account_status(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
create_cloud_integration_account: Callable,
) -> None:
"""Test getting the status of a specific account."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
# Create a test account (no check-in needed for status check)
cloud_provider = "aws"
account_data = create_cloud_integration_account(admin_token, cloud_provider)
account_id = account_data["account_id"]
# Get account status
endpoint = (
f"/api/v1/cloud-integrations/{cloud_provider}/accounts/{account_id}/status"
)
response = requests.get(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
assert (
response.status_code == HTTPStatus.OK
), f"Expected 200, got {response.status_code}"
response_data = response.json()
data = response_data.get("data", response_data)
assert "id" in data, "Response should contain 'id' field"
assert data["id"] == account_id, "Account ID should match"
assert "status" in data, "Response should contain 'status' field"
assert "integration" in data["status"], "Status should contain 'integration' field"
def test_get_account_status_not_found(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
) -> None:
"""Test getting status for a non-existent account."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
cloud_provider = "aws"
fake_account_id = "00000000-0000-0000-0000-000000000000"
endpoint = (
f"/api/v1/cloud-integrations/{cloud_provider}/accounts/{fake_account_id}/status"
)
response = requests.get(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
assert (
response.status_code == HTTPStatus.NOT_FOUND
), f"Expected 404, got {response.status_code}"
def test_update_account_config(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
create_cloud_integration_account: Callable,
) -> None:
"""Test updating account configuration."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
# Create a test account
cloud_provider = "aws"
account_data = create_cloud_integration_account(admin_token, cloud_provider)
account_id = account_data["account_id"]
# Simulate agent check-in to mark as connected
cloud_account_id = str(uuid.uuid4())
simulate_agent_checkin(
signoz, admin_token, cloud_provider, account_id, cloud_account_id
)
# Update account configuration
endpoint = (
f"/api/v1/cloud-integrations/{cloud_provider}/accounts/{account_id}/config"
)
updated_config = {"config": {"regions": ["us-east-1", "us-west-2", "eu-west-1"]}}
response = requests.post(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
json=updated_config,
timeout=10,
)
assert (
response.status_code == HTTPStatus.OK
), f"Expected 200, got {response.status_code}"
response_data = response.json()
data = response_data.get("data", response_data)
assert "id" in data, "Response should contain 'id' field"
assert data["id"] == account_id, "Account ID should match"
# Verify the update by listing accounts
list_endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/accounts"
list_response = requests.get(
signoz.self.host_configs["8080"].get(list_endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
list_response_data = list_response.json()
list_data = list_response_data.get("data", list_response_data)
account = next((a for a in list_data["accounts"] if a["id"] == account_id), None)
assert account is not None, "Account should be found in list"
assert "config" in account, "Account should have config"
assert "regions" in account["config"], "Config should have regions"
assert len(account["config"]["regions"]) == 3, "Should have 3 regions"
assert set(account["config"]["regions"]) == {
"us-east-1",
"us-west-2",
"eu-west-1",
}, "Regions should match updated config"
def test_disconnect_account(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
create_cloud_integration_account: Callable,
) -> None:
"""Test disconnecting an account."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
# Create a test account
cloud_provider = "aws"
account_data = create_cloud_integration_account(admin_token, cloud_provider)
account_id = account_data["account_id"]
# Simulate agent check-in to mark as connected
cloud_account_id = str(uuid.uuid4())
simulate_agent_checkin(
signoz, admin_token, cloud_provider, account_id, cloud_account_id
)
# Disconnect the account
endpoint = (
f"/api/v1/cloud-integrations/{cloud_provider}/accounts/{account_id}/disconnect"
)
response = requests.post(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
assert (
response.status_code == HTTPStatus.OK
), f"Expected 200, got {response.status_code}"
# Verify our specific account is no longer in the connected list
list_endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/accounts"
list_response = requests.get(
signoz.self.host_configs["8080"].get(list_endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
list_response_data = list_response.json()
list_data = list_response_data.get("data", list_response_data)
# Check that our specific account is not in the list
disconnected_account = next(
(a for a in list_data["accounts"] if a["id"] == account_id), None
)
assert (
disconnected_account is None
), f"Account {account_id} should be removed from connected accounts"
def test_disconnect_account_not_found(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
) -> None:
"""Test disconnecting a non-existent account."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
cloud_provider = "aws"
fake_account_id = "00000000-0000-0000-0000-000000000000"
endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/accounts/{fake_account_id}/disconnect"
response = requests.post(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
assert (
response.status_code == HTTPStatus.NOT_FOUND
), f"Expected 404, got {response.status_code}"
def test_list_accounts_unsupported_provider(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
) -> None:
"""Test listing accounts for an unsupported cloud provider."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
cloud_provider = "gcp" # Unsupported provider
endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/accounts"
response = requests.get(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
assert (
response.status_code == HTTPStatus.BAD_REQUEST
), f"Expected 400, got {response.status_code}"

View File

@@ -0,0 +1,473 @@
import uuid
from http import HTTPStatus
from typing import Callable
import requests
from fixtures import types
from fixtures.auth import USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD
from fixtures.cloudintegrationsutils import simulate_agent_checkin
from fixtures.logger import setup_logger
logger = setup_logger(__name__)
def test_list_services_without_account(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
) -> None:
"""Test listing available services without specifying an account."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
cloud_provider = "aws"
endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/services"
response = requests.get(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
assert (
response.status_code == HTTPStatus.OK
), f"Expected 200, got {response.status_code}"
response_data = response.json()
data = response_data.get("data", response_data)
assert "services" in data, "Response should contain 'services' field"
assert isinstance(data["services"], list), "Services should be a list"
assert len(data["services"]) > 0, "Should have at least one service available"
# Verify service structure
service = data["services"][0]
assert "id" in service, "Service should have 'id' field"
assert "title" in service, "Service should have 'title' field"
assert "icon" in service, "Service should have 'icon' field"
def test_list_services_with_account(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
create_cloud_integration_account: Callable,
) -> None:
"""Test listing services for a specific connected account."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
# Create a test account and do check-in
cloud_provider = "aws"
account_data = create_cloud_integration_account(admin_token, cloud_provider)
account_id = account_data["account_id"]
cloud_account_id = str(uuid.uuid4())
simulate_agent_checkin(
signoz, admin_token, cloud_provider, account_id, cloud_account_id
)
# List services for the account
endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/services?cloud_account_id={cloud_account_id}"
response = requests.get(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
assert (
response.status_code == HTTPStatus.OK
), f"Expected 200, got {response.status_code}"
response_data = response.json()
data = response_data.get("data", response_data)
assert "services" in data, "Response should contain 'services' field"
assert isinstance(data["services"], list), "Services should be a list"
assert len(data["services"]) > 0, "Should have at least one service available"
# Services may also carry a config field (null if not configured); the basic fields are asserted below
service = data["services"][0]
assert "id" in service, "Service should have 'id' field"
assert "title" in service, "Service should have 'title' field"
assert "icon" in service, "Service should have 'icon' field"
def test_get_service_details_without_account(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
) -> None:
"""Test getting service details without specifying an account."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
cloud_provider = "aws"
# First get the list of services to get a valid service ID
list_endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/services"
list_response = requests.get(
signoz.self.host_configs["8080"].get(list_endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
list_data = list_response.json().get("data", list_response.json())
assert len(list_data["services"]) > 0, "Should have at least one service"
service_id = list_data["services"][0]["id"]
# Get service details
endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/services/{service_id}"
response = requests.get(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
assert (
response.status_code == HTTPStatus.OK
), f"Expected 200, got {response.status_code}"
response_data = response.json()
data = response_data.get("data", response_data)
# Verify service details structure
assert "id" in data, "Service details should have 'id' field"
assert data["id"] == service_id, "Service ID should match requested ID"
assert "title" in data, "Service details should have 'title' field"
assert "overview" in data, "Service details should have 'overview' field"
# assert that assets contains the list of dashboards
assert "assets" in data, "Service details should have 'assets' field"
assert isinstance(data["assets"], dict), "Assets should be a dictionary"
def test_get_service_details_with_account(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
create_cloud_integration_account: Callable,
) -> None:
"""Test getting service details for a specific connected account."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
# Create a test account and do check-in
cloud_provider = "aws"
account_data = create_cloud_integration_account(admin_token, cloud_provider)
account_id = account_data["account_id"]
cloud_account_id = str(uuid.uuid4())
simulate_agent_checkin(
signoz, admin_token, cloud_provider, account_id, cloud_account_id
)
# Get list of services first
list_endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/services"
list_response = requests.get(
signoz.self.host_configs["8080"].get(list_endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
list_data = list_response.json().get("data", list_response.json())
assert len(list_data["services"]) > 0, "Should have at least one service"
service_id = list_data["services"][0]["id"]
# Get service details with account
endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/services/{service_id}?cloud_account_id={cloud_account_id}"
response = requests.get(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
assert (
response.status_code == HTTPStatus.OK
), f"Expected 200, got {response.status_code}"
response_data = response.json()
data = response_data.get("data", response_data)
# Verify service details structure
assert "id" in data, "Service details should have 'id' field"
assert data["id"] == service_id, "Service ID should match requested ID"
assert "title" in data, "Service details should have 'title' field"
assert "overview" in data, "Service details should have 'overview' field"
assert "assets" in data, "Service details should have 'assets' field"
assert "config" in data, "Service details should have 'config' field"
assert "status" in data, "Config should have 'status' field"
def test_get_service_details_invalid_service(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
) -> None:
"""Test getting details for a non-existent service."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
cloud_provider = "aws"
fake_service_id = "non-existent-service"
endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/services/{fake_service_id}"
response = requests.get(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
assert (
response.status_code == HTTPStatus.NOT_FOUND
), f"Expected 404, got {response.status_code}"
def test_list_services_unsupported_provider(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
) -> None:
"""Test listing services for an unsupported cloud provider."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
cloud_provider = "gcp" # Unsupported provider
endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/services"
response = requests.get(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
assert (
response.status_code == HTTPStatus.BAD_REQUEST
), f"Expected 400, got {response.status_code}"
def test_update_service_config(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
create_cloud_integration_account: Callable,
) -> None:
"""Test updating service configuration for a connected account."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
# Create a test account and do check-in
cloud_provider = "aws"
account_data = create_cloud_integration_account(admin_token, cloud_provider)
account_id = account_data["account_id"]
cloud_account_id = str(uuid.uuid4())
simulate_agent_checkin(
signoz, admin_token, cloud_provider, account_id, cloud_account_id
)
# Get list of services to pick a valid service ID
list_endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/services"
list_response = requests.get(
signoz.self.host_configs["8080"].get(list_endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
list_data = list_response.json().get("data", list_response.json())
assert len(list_data["services"]) > 0, "Should have at least one service"
service_id = list_data["services"][0]["id"]
# Update service configuration
endpoint = (
f"/api/v1/cloud-integrations/{cloud_provider}/services/{service_id}/config"
)
config_payload = {
"cloud_account_id": cloud_account_id,
"config": {
"metrics": {"enabled": True},
"logs": {"enabled": True},
},
}
response = requests.post(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
json=config_payload,
timeout=10,
)
assert (
response.status_code == HTTPStatus.OK
), f"Expected 200, got {response.status_code}"
response_data = response.json()
data = response_data.get("data", response_data)
# Verify response structure
assert "id" in data, "Response should contain 'id' field"
assert data["id"] == service_id, "Service ID should match"
assert "config" in data, "Response should contain 'config' field"
assert "metrics" in data["config"], "Config should contain 'metrics' field"
assert "logs" in data["config"], "Config should contain 'logs' field"
def test_update_service_config_without_account(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
) -> None:
"""Test updating service config without a connected account should fail."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
cloud_provider = "aws"
# Get a valid service ID
list_endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/services"
list_response = requests.get(
signoz.self.host_configs["8080"].get(list_endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
list_data = list_response.json().get("data", list_response.json())
service_id = list_data["services"][0]["id"]
# Try to update config with non-existent account
endpoint = (
f"/api/v1/cloud-integrations/{cloud_provider}/services/{service_id}/config"
)
fake_cloud_account_id = str(uuid.uuid4())
config_payload = {
"cloud_account_id": fake_cloud_account_id,
"config": {
"metrics": {"enabled": True},
},
}
response = requests.post(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
json=config_payload,
timeout=10,
)
assert (
response.status_code == HTTPStatus.INTERNAL_SERVER_ERROR
), f"Expected 500 for non-existent account, got {response.status_code}"
def test_update_service_config_invalid_service(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
create_cloud_integration_account: Callable,
) -> None:
"""Test updating config for a non-existent service should fail."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
# Create a test account and do check-in
cloud_provider = "aws"
account_data = create_cloud_integration_account(admin_token, cloud_provider)
account_id = account_data["account_id"]
cloud_account_id = str(uuid.uuid4())
simulate_agent_checkin(
signoz, admin_token, cloud_provider, account_id, cloud_account_id
)
# Try to update config for invalid service
fake_service_id = "non-existent-service"
endpoint = (
f"/api/v1/cloud-integrations/{cloud_provider}/services/{fake_service_id}/config"
)
config_payload = {
"cloud_account_id": cloud_account_id,
"config": {
"metrics": {"enabled": True},
},
}
response = requests.post(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
json=config_payload,
timeout=10,
)
assert (
response.status_code == HTTPStatus.NOT_FOUND
), f"Expected 404 for invalid service, got {response.status_code}"
def test_update_service_config_disable_service(
signoz: types.SigNoz,
create_user_admin: types.Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
create_cloud_integration_account: Callable,
) -> None:
"""Test disabling a service by updating config with enabled=false."""
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
# Create a test account and do check-in
cloud_provider = "aws"
account_data = create_cloud_integration_account(admin_token, cloud_provider)
account_id = account_data["account_id"]
cloud_account_id = str(uuid.uuid4())
simulate_agent_checkin(
signoz, admin_token, cloud_provider, account_id, cloud_account_id
)
# Get a valid service
list_endpoint = f"/api/v1/cloud-integrations/{cloud_provider}/services"
list_response = requests.get(
signoz.self.host_configs["8080"].get(list_endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
timeout=10,
)
list_data = list_response.json().get("data", list_response.json())
service_id = list_data["services"][0]["id"]
# First enable the service
endpoint = (
f"/api/v1/cloud-integrations/{cloud_provider}/services/{service_id}/config"
)
enable_payload = {
"cloud_account_id": cloud_account_id,
"config": {
"metrics": {"enabled": True},
},
}
enable_response = requests.post(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
json=enable_payload,
timeout=10,
)
assert enable_response.status_code == HTTPStatus.OK, "Failed to enable service"
# Now disable the service
disable_payload = {
"cloud_account_id": cloud_account_id,
"config": {
"metrics": {"enabled": False},
"logs": {"enabled": False},
},
}
disable_response = requests.post(
signoz.self.host_configs["8080"].get(endpoint),
headers={"Authorization": f"Bearer {admin_token}"},
json=disable_payload,
timeout=10,
)
assert (
disable_response.status_code == HTTPStatus.OK
), f"Expected 200, got {disable_response.status_code}"
response_data = disable_response.json()
data = response_data.get("data", response_data)
# Verify service is disabled
assert data["config"]["metrics"]["enabled"] is False, "Metrics should be disabled"
assert data["config"]["logs"]["enabled"] is False, "Logs should be disabled"

View File

@@ -0,0 +1,418 @@
"""
Look at the multi_temporality_counters_1h.jsonl file for the relevant data
"""
import random
from datetime import datetime, timedelta, timezone
from http import HTTPStatus
from typing import Callable, List
import pytest
from fixtures import types
from fixtures.auth import USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD
from fixtures.metrics import Metrics
from fixtures.querier import (
build_builder_query,
get_all_series,
get_series_values,
make_query_request,
)
from fixtures.utils import get_testdata_file_path
MULTI_TEMPORALITY_FILE = get_testdata_file_path("multi_temporality_counters_1h.jsonl")
MULTI_TEMPORALITY_FILE_10h = get_testdata_file_path(
"multi_temporality_counters_10h.jsonl"
)
MULTI_TEMPORALITY_FILE_24h = get_testdata_file_path(
"multi_temporality_counters_24h.jsonl"
)
@pytest.mark.parametrize(
"time_aggregation, expected_value_at_31st_minute, expected_value_at_32nd_minute, steady_value",
[
("rate", 0.0167, 0.133, 0.0833),
("increase", 2, 8, 5),
],
)
def test_with_steady_values_and_reset(
signoz: types.SigNoz,
create_user_admin: None, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
insert_metrics: Callable[[List[Metrics]], None],
time_aggregation: str,
expected_value_at_31st_minute: float,
expected_value_at_32nd_minute: float,
steady_value: float,
) -> None:
now = datetime.now(tz=timezone.utc).replace(second=0, microsecond=0)
start_ms = int((now - timedelta(minutes=65)).timestamp() * 1000)
end_ms = int(now.timestamp() * 1000)
metric_name = f"test_{time_aggregation}_stale"
metrics = Metrics.load_from_file(
MULTI_TEMPORALITY_FILE,
base_time=now - timedelta(minutes=60),
metric_name_override=metric_name,
)
insert_metrics(metrics)
token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
query = build_builder_query(
"A",
metric_name,
time_aggregation,
"sum",
filter_expression='endpoint = "/orders"',
)
response = make_query_request(signoz, token, start_ms, end_ms, [query])
assert response.status_code == HTTPStatus.OK
data = response.json()
result_values = sorted(get_series_values(data, "A"), key=lambda x: x["timestamp"])
assert len(result_values) >= 59
# the counter reset happened at the 31st minute
assert result_values[30]["value"] == expected_value_at_31st_minute
assert result_values[31]["value"] == expected_value_at_32nd_minute
assert (
result_values[39]["value"] == steady_value
) # 39th minute is when cumulative shifts to delta
count_of_steady_rate = sum(1 for v in result_values if v["value"] == steady_value)
assert (
count_of_steady_rate >= 56
) # 59 - (1 reset + 1 high rate + 1 at the beginning)
# All rates should be non-negative (stale periods = 0 rate)
for v in result_values:
assert (
v["value"] >= 0
), f"{time_aggregation} should not be negative: {v['value']}"
@pytest.mark.parametrize(
"time_aggregation, stable_health_value, stable_products_value, stable_checkout_value, spike_checkout_value, stable_orders_value, spike_users_value",
[
("rate", 0.167, 0.333, 0.0167, 0.833, 0.0833, 0.0167),
("increase", 10, 20, 1, 50, 5, 1),
],
)
def test_group_by_endpoint(
signoz: types.SigNoz,
create_user_admin: None, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
insert_metrics: Callable[[List[Metrics]], None],
time_aggregation: str,
stable_health_value: float,
stable_products_value: float,
stable_checkout_value: float,
spike_checkout_value: float,
stable_orders_value: float,
spike_users_value: float,
) -> None:
now = datetime.now(tz=timezone.utc).replace(second=0, microsecond=0)
start_ms = int((now - timedelta(minutes=65)).timestamp() * 1000)
end_ms = int(now.timestamp() * 1000)
metric_name = f"test_{time_aggregation}_groupby"
metrics = Metrics.load_from_file(
MULTI_TEMPORALITY_FILE,
base_time=now - timedelta(minutes=60),
metric_name_override=metric_name,
)
insert_metrics(metrics)
token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
query = build_builder_query(
"A",
metric_name,
time_aggregation,
"sum",
group_by=["endpoint"],
)
response = make_query_request(signoz, token, start_ms, end_ms, [query])
assert response.status_code == HTTPStatus.OK
data = response.json()
all_series = get_all_series(data, "A")
# Should have 5 different endpoints
assert (
len(all_series) == 5
), f"Expected 5 series for 5 endpoints, got {len(all_series)}"
# endpoint -> values
endpoint_values = {}
for series in all_series:
endpoint = series.get("labels", [{}])[0].get("value", "unknown")
values = sorted(series.get("values", []), key=lambda x: x["timestamp"])
endpoint_values[endpoint] = values
expected_endpoints = {"/products", "/health", "/checkout", "/orders", "/users"}
assert (
set(endpoint_values.keys()) == expected_endpoints
), f"Expected endpoints {expected_endpoints}, got {set(endpoint_values.keys())}"
# the rate should never be negative at any point
for endpoint, values in endpoint_values.items():
for v in values:
assert (
v["value"] >= 0
), f"Rate for {endpoint} should not be negative: {v['value']}"
# /health: 60 data points (t01-t60), steady +10/min
health_values = endpoint_values["/health"]
assert (
len(health_values) >= 58
), f"Expected >= 58 values for /health, got {len(health_values)}"
count_steady_health = sum(
1 for v in health_values if v["value"] == stable_health_value
)
assert (
count_steady_health >= 57
), f"Expected >= 57 steady rate values ({stable_health_value}) for /health, got {count_steady_health}"
# all /health rates should be stable except possibly first/last due to boundaries
for v in health_values[1:-1]:
assert (
v["value"] == stable_health_value
), f"Expected /health rate {stable_health_value}, got {v['value']}"
# /products: 51 data points with 10-minute gap (t20-t29 missing), steady +20/min
products_values = endpoint_values["/products"]
assert (
len(products_values) >= 49
), f"Expected >= 49 values for /products, got {len(products_values)}"
count_steady_products = sum(
1 for v in products_values if v["value"] == stable_products_value
)
# most values should be stable, some boundary values differ due to 10-min gap
assert (
count_steady_products >= 46
), f"Expected >= 46 steady rate values ({stable_products_value}) for /products, got {count_steady_products}"
# check that non-stable values are due to gap averaging (should be lower)
gap_boundary_values = [
v["value"] for v in products_values if v["value"] != stable_products_value
]
for val in gap_boundary_values:
assert (
0 < val < stable_products_value
), f"Gap boundary values should be between 0 and {stable_products_value}, got {val}"
# /checkout: 61 data points (t00-t60), +1/min normal, +50/min spike at t40-t44
checkout_values = endpoint_values["/checkout"]
assert (
len(checkout_values) >= 59
), f"Expected >= 59 values for /checkout, got {len(checkout_values)}"
count_steady_checkout = sum(
1 for v in checkout_values if v["value"] == stable_checkout_value
)
assert (
count_steady_checkout >= 53
), f"Expected >= 53 steady {time_aggregation} values ({stable_checkout_value}) for /checkout, got {count_steady_checkout}"
# check that spike values exist (traffic spike +50/min at t40-t44)
count_spike_checkout = sum(
1 for v in checkout_values if v["value"] == spike_checkout_value
)
assert (
count_spike_checkout >= 4
), f"Expected >= 4 spike {time_aggregation} values ({spike_checkout_value}) for /checkout, got {count_spike_checkout}"
# spike values should be consecutive
spike_indices = [
i for i, v in enumerate(checkout_values) if v["value"] == spike_checkout_value
]
assert len(spike_indices) >= 4, f"Expected >= 4 spike indices, got {spike_indices}"
# verify the indices are consecutive
for i in range(1, len(spike_indices)):
assert (
spike_indices[i] == spike_indices[i - 1] + 1
), f"Spike indices should be consecutive, got {spike_indices}"
# /orders: 60 data points (t00-t60) with gap at t30, counter reset at t31 (150->2)
# reset at t31 causes: rate/increase at t30 includes gap (lower), t31 has high rate after reset
orders_values = endpoint_values["/orders"]
assert (
len(orders_values) >= 58
), f"Expected >= 58 values for /orders, got {len(orders_values)}"
count_steady_orders = sum(
1 for v in orders_values if v["value"] == stable_orders_value
)
assert (
count_steady_orders >= 55
), f"Expected >= 55 steady {time_aggregation} values ({stable_orders_value}) for /orders, got {count_steady_orders}"
# check for counter reset effects - there should be some non-standard values
non_standard_orders = [
v["value"] for v in orders_values if v["value"] != stable_orders_value
]
assert (
len(non_standard_orders) >= 2
), f"Expected >= 2 non-standard values due to counter reset, got {non_standard_orders}"
# post-reset value should be higher (new counter value / interval)
high_rate_orders = [v for v in non_standard_orders if v > stable_orders_value]
assert (
len(high_rate_orders) >= 1
), f"Expected at least one high {time_aggregation} value after counter reset, got {non_standard_orders}"
# /users: 56 data points (t05-t60), sparse +1 every 5 minutes
users_values = endpoint_values["/users"]
assert (
len(users_values) >= 54
), f"Expected >= 54 values for /users, got {len(users_values)}"
count_zero_users = sum(1 for v in users_values if v["value"] == 0)
# most values should be 0 (flat periods between increments)
assert (
count_zero_users >= 40
), f"Expected >= 40 zero {time_aggregation} values for /users (sparse data), got {count_zero_users}"
# non-zero values should equal the per-increment value (0.0167 for rate, 1 for increase)
non_zero_users = [v["value"] for v in users_values if v["value"] != 0]
count_increment_rate = sum(1 for v in non_zero_users if v == spike_users_value)
assert (
count_increment_rate >= 8
), f"Expected >= 8 increment {time_aggregation} values ({spike_users_value}) for /users, got {count_increment_rate}"
@pytest.mark.parametrize(
"time_aggregation, expected_value_at_30th_minute, expected_value_at_31st_minute, value_at_switch",
[
("rate", 0.183, 0.183, 0.25),
("increase", 11, 12, 15),
],
)
def test_for_service_with_switch(
signoz: types.SigNoz,
create_user_admin: None, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
insert_metrics: Callable[[List[Metrics]], None],
time_aggregation: str,
expected_value_at_30th_minute: float,
expected_value_at_31st_minute: float,
value_at_switch: float,
) -> None:
now = datetime.now(tz=timezone.utc).replace(second=0, microsecond=0)
start_ms = int((now - timedelta(minutes=65)).timestamp() * 1000)
end_ms = int(now.timestamp() * 1000)
metric_name = f"test_{time_aggregation}_count"
metrics = Metrics.load_from_file(
MULTI_TEMPORALITY_FILE,
base_time=now - timedelta(minutes=60),
metric_name_override=metric_name,
)
insert_metrics(metrics)
token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
query = build_builder_query(
"A",
metric_name,
time_aggregation,
"sum",
filter_expression='service = "api"',
)
response = make_query_request(signoz, token, start_ms, end_ms, [query])
assert response.status_code == HTTPStatus.OK
data = response.json()
result_values = sorted(get_series_values(data, "A"), key=lambda x: x["timestamp"])
assert len(result_values) >= 60
assert result_values[30]["value"] == expected_value_at_30th_minute # 0.183
assert result_values[31]["value"] == expected_value_at_31st_minute # 0.183
assert result_values[38]["value"] == value_at_switch # 0.25
assert (
result_values[39]["value"] == value_at_switch # 0.25
) # 39th minute is when cumulative shifts to delta
# All rates should be non-negative (stale periods = 0 rate)
for v in result_values:
assert (
v["value"] >= 0
), f"{time_aggregation} should not be negative: {v['value']}"
@pytest.mark.parametrize(
"time_aggregation, expected_value",
[
("rate", 0.0122),
("increase", 22),
],
)
def test_for_week_long_time_range(
signoz: types.SigNoz,
create_user_admin: None, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
insert_metrics: Callable[[List[Metrics]], None],
time_aggregation: str,
expected_value: float,
) -> None:
now = datetime.now(tz=timezone.utc).replace(second=0, microsecond=0)
start_ms = int((now - timedelta(days=7)).timestamp() * 1000)
end_ms = int(now.timestamp() * 1000)
metric_name = f"test_{time_aggregation}_" + hex(random.getrandbits(32))[2:]
metrics = Metrics.load_from_file(
MULTI_TEMPORALITY_FILE_10h,
base_time=now - timedelta(minutes=600),
metric_name_override=metric_name,
)
insert_metrics(metrics)
token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
query = build_builder_query(
"A",
metric_name,
time_aggregation,
"sum",
)
response = make_query_request(signoz, token, start_ms, end_ms, [query])
assert response.status_code == HTTPStatus.OK
data = response.json()
result_values = sorted(get_series_values(data, "A"), key=lambda x: x["timestamp"])
# the zeroth value may not cover the whole time range
for value in result_values[1:]:
assert value["value"] == expected_value
@pytest.mark.parametrize(
"time_aggregation, expected_value",
[
("rate", 0.0122),
("increase", 110),
],
)
def test_for_month_long_time_range(
signoz: types.SigNoz,
create_user_admin: None, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
insert_metrics: Callable[[List[Metrics]], None],
time_aggregation: str,
expected_value: float,
) -> None:
now = datetime.now(tz=timezone.utc).replace(second=0, microsecond=0)
start_ms = int((now - timedelta(days=31)).timestamp() * 1000)
end_ms = int(now.timestamp() * 1000)
metric_name = f"test_{time_aggregation}_" + hex(random.getrandbits(32))[2:]
metrics = Metrics.load_from_file(
MULTI_TEMPORALITY_FILE_24h,
base_time=now - timedelta(minutes=1441),
metric_name_override=metric_name,
)
insert_metrics(metrics)
token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
query = build_builder_query(
"A",
metric_name,
time_aggregation,
"sum",
)
response = make_query_request(signoz, token, start_ms, end_ms, [query])
assert response.status_code == HTTPStatus.OK
data = response.json()
result_values = sorted(get_series_values(data, "A"), key=lambda x: x["timestamp"])
# the zeroth and last values may not cover the whole 9000 seconds
for value in result_values[1:-1]:
assert value["value"] == expected_value

View File

@@ -1,23 +1,23 @@
from http import HTTPStatus
from typing import Callable
import pytest
import requests
from sqlalchemy import sql
from fixtures.auth import USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD
from fixtures.types import Operation, SigNoz
ANONYMOUS_USER_ID = "00000000-0000-0000-0000-000000000000"
def test_managed_roles_create_on_register(
signoz: SigNoz,
create_user_admin: Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
):
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
# get the list of all roles.
response = requests.get(
signoz.self.host_configs["8080"].get("/api/v1/roles"),
@@ -32,18 +32,22 @@ def test_managed_roles_create_on_register(
# since this check happens immediately post registration, all the managed roles should be present.
assert len(data) == 4
role_names = {role["name"] for role in data}
expected_names = {"signoz-admin", "signoz-viewer", "signoz-editor", "signoz-anonymous"}
expected_names = {
"signoz-admin",
"signoz-viewer",
"signoz-editor",
"signoz-anonymous",
}
# compare as sets since ordering is not guaranteed; a direct list match would be order-sensitive.
assert set(role_names) == expected_names
def test_root_user_signoz_admin_assignment(
request: pytest.FixtureRequest,
signoz: SigNoz,
create_user_admin: Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
):
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
# Get the user from the /user/me endpoint and extract the id
@@ -64,14 +68,16 @@ def test_root_user_signoz_admin_assignment(
# this validates to some extent that the role assignment is complete under the assumption that middleware is functioning as expected.
assert response.status_code == HTTPStatus.OK
assert response.json()["status"] == "success"
# Loop over the roles and get the org_id and id for signoz-admin role
roles = response.json()["data"]
admin_role_entry = next(
(role for role in roles if role["name"] == "signoz-admin"), None
)
assert admin_role_entry is not None
org_id = admin_role_entry["orgId"]
# to be super sure of authorization server, let's validate the tuples in DB as well.
# todo[@vikrantgupta25]: replace this with role members handler once built.
with signoz.sqlstore.conn.connect() as conn:
# verify the entry present for role assignment
@@ -80,15 +86,14 @@ def test_root_user_signoz_admin_assignment(
sql.text("SELECT * FROM tuple WHERE object_id = :object_id"),
{"object_id": tuple_object_id},
)
tuple_row = tuple_result.mappings().fetchone()
assert tuple_row is not None
# check that the tuple is for role assignment
assert tuple_row["object_type"] == "role"
assert tuple_row["relation"] == "assignee"
if request.config.getoption("--sqlstore-provider") == "sqlite":
user_object_id = f"organization/{org_id}/user/{user_id}"
assert tuple_row["user_object_type"] == "user"
assert tuple_row["user_object_id"] == user_object_id
@@ -97,13 +102,13 @@ def test_root_user_signoz_admin_assignment(
assert tuple_row["user_type"] == "user"
assert tuple_row["_user"] == _user
def test_anonymous_user_signoz_anonymous_assignment(
request: pytest.FixtureRequest,
signoz: SigNoz,
create_user_admin: Operation, # pylint: disable=unused-argument
get_token: Callable[[str, str], str],
):
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
response = requests.get(
@@ -115,14 +120,16 @@ def test_anonymous_user_signoz_anonymous_assignment(
# this validates to some extent that the role assignment is complete under the assumption that middleware is functioning as expected.
assert response.status_code == HTTPStatus.OK
assert response.json()["status"] == "success"
# Loop over the roles and get the org_id and id for the signoz-anonymous role
roles = response.json()["data"]
admin_role_entry = next(
(role for role in roles if role["name"] == "signoz-anonymous"), None
)
assert admin_role_entry is not None
org_id = admin_role_entry["orgId"]
# to be super sure of authorization server, let's validate the tuples in DB as well.
# todo[@vikrantgupta25]: replace this with role members handler once built.
with signoz.sqlstore.conn.connect() as conn:
# verify the entry present for role assignment
@@ -131,15 +138,14 @@ def test_anonymous_user_signoz_anonymous_assignment(
sql.text("SELECT * FROM tuple WHERE object_id = :object_id"),
{"object_id": tuple_object_id},
)
tuple_row = tuple_result.mappings().fetchone()
assert tuple_row is not None
# check that the tuple is for role assignment
assert tuple_row["object_type"] == "role"
assert tuple_row["relation"] == "assignee"
if request.config.getoption("--sqlstore-provider") == "sqlite":
user_object_id = f"organization/{org_id}/anonymous/{ANONYMOUS_USER_ID}"
assert tuple_row["user_object_type"] == "anonymous"
assert tuple_row["user_object_id"] == user_object_id
@@ -147,5 +153,3 @@ def test_anonymous_user_signoz_anonymous_assignment(
_user = f"anonymous:organization/{org_id}/anonymous/{ANONYMOUS_USER_ID}"
assert tuple_row["user_type"] == "user"
assert tuple_row["_user"] == _user

View File

@@ -1,11 +1,16 @@
from http import HTTPStatus
from typing import Callable
import pytest
import requests
from sqlalchemy import sql
from fixtures.auth import (
USER_ADMIN_EMAIL,
USER_ADMIN_PASSWORD,
USER_EDITOR_EMAIL,
USER_EDITOR_PASSWORD,
)
from fixtures.types import Operation, SigNoz
@@ -16,7 +21,7 @@ def test_user_invite_accept_role_grant(
get_token: Callable[[str, str], str],
):
admin_token = get_token(USER_ADMIN_EMAIL, USER_ADMIN_PASSWORD)
# invite a user as editor
invite_payload = {
"email": USER_EDITOR_EMAIL,
@@ -30,7 +35,7 @@ def test_user_invite_accept_role_grant(
)
assert invite_response.status_code == HTTPStatus.CREATED
invite_token = invite_response.json()["data"]["token"]
# accept the invite for editor
accept_payload = {
"token": invite_token,
@@ -40,7 +45,7 @@ def test_user_invite_accept_role_grant(
signoz.self.host_configs["8080"].get("/api/v1/invite/accept"),
json=accept_payload,
timeout=2,
)
assert accept_response.status_code == HTTPStatus.CREATED
# Login with editor email and password
@@ -53,7 +58,6 @@ def test_user_invite_accept_role_grant(
assert user_me_response.status_code == HTTPStatus.OK
editor_id = user_me_response.json()["data"]["id"]
# check the forbidden response for admin api for editor user
admin_roles_response = requests.get(
signoz.self.host_configs["8080"].get("/api/v1/roles"),
@@ -79,11 +83,11 @@ def test_user_invite_accept_role_grant(
)
tuple_row = tuple_result.mappings().fetchone()
assert tuple_row is not None
assert tuple_row["object_type"] == "role"
assert tuple_row["relation"] == "assignee"
# verify the user tuple details depending on db provider
if request.config.getoption("--sqlstore-provider") == "sqlite":
user_object_id = f"organization/{org_id}/user/{editor_id}"
assert tuple_row["user_object_type"] == "user"
assert tuple_row["user_object_id"] == user_object_id
@@ -93,7 +97,6 @@ def test_user_invite_accept_role_grant(
assert tuple_row["_user"] == _user
def test_user_update_role_grant(
request: pytest.FixtureRequest,
signoz: SigNoz,
@@ -122,9 +125,7 @@ def test_user_update_role_grant(
org_id = roles_data[0]["orgId"]
# Update the user's role to viewer
update_payload = {"role": "VIEWER"}
update_response = requests.put(
signoz.self.host_configs["8080"].get(f"/api/v1/user/{editor_id}"),
json=update_payload,
@@ -139,7 +140,9 @@ def test_user_update_role_grant(
viewer_tuple_object_id = f"organization/{org_id}/role/signoz-viewer"
# Check there is no tuple for signoz-editor assignment
editor_tuple_result = conn.execute(
sql.text("SELECT * FROM tuple WHERE object_id = :object_id AND relation = 'assignee'"),
sql.text(
"SELECT * FROM tuple WHERE object_id = :object_id AND relation = 'assignee'"
),
{"object_id": editor_tuple_object_id},
)
for row in editor_tuple_result.mappings().fetchall():
@@ -152,13 +155,15 @@ def test_user_update_role_grant(
# Check that a tuple exists for signoz-viewer assignment
viewer_tuple_result = conn.execute(
sql.text("SELECT * FROM tuple WHERE object_id = :object_id AND relation = 'assignee'"),
sql.text(
"SELECT * FROM tuple WHERE object_id = :object_id AND relation = 'assignee'"
),
{"object_id": viewer_tuple_object_id},
)
row = viewer_tuple_result.mappings().fetchone()
assert row is not None
assert row["object_type"] == "role"
assert row["relation"] == "assignee"
if request.config.getoption("--sqlstore-provider") == "sqlite":
user_object_id = f"organization/{org_id}/user/{editor_id}"
assert row["user_object_type"] == "user"
@@ -168,6 +173,7 @@ def test_user_update_role_grant(
assert row["user_type"] == "user"
assert row["_user"] == _user
def test_user_delete_role_revoke(
request: pytest.FixtureRequest,
signoz: SigNoz,
@@ -205,10 +211,12 @@ def test_user_delete_role_revoke(
with signoz.sqlstore.conn.connect() as conn:
tuple_result = conn.execute(
sql.text("SELECT * FROM tuple WHERE object_id = :object_id AND relation = 'assignee'"),
sql.text(
"SELECT * FROM tuple WHERE object_id = :object_id AND relation = 'assignee'"
),
{"object_id": tuple_object_id},
)
# there should NOT be any tuple for the current user assignment
tuple_rows = tuple_result.mappings().fetchall()
for row in tuple_rows:
@@ -217,4 +225,4 @@ def test_user_delete_role_revoke(
assert row["user_object_id"] != user_object_id
else:
_user = f"user:organization/{org_id}/user/{editor_id}"
assert row["_user"] != _user
assert row["_user"] != _user

View File

@@ -0,0 +1,12 @@
{"metric_name":"cpu_percent_multi_threshold_rule_test","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:01:00+00:00","value":2,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_multi_threshold_rule_test","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:02:00+00:00","value":15,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_multi_threshold_rule_test","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:03:00+00:00","value":14,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_multi_threshold_rule_test","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:04:00+00:00","value":3,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_multi_threshold_rule_test","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:05:00+00:00","value":6,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_multi_threshold_rule_test","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:06:00+00:00","value":25,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_multi_threshold_rule_test","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:07:00+00:00","value":25,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_multi_threshold_rule_test","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:08:00+00:00","value":25,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_multi_threshold_rule_test","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:09:00+00:00","value":10,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_multi_threshold_rule_test","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:10:00+00:00","value":12,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_multi_threshold_rule_test","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:11:00+00:00","value":8,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_multi_threshold_rule_test","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:12:00+00:00","value":15,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}

View File

@@ -0,0 +1,83 @@
{
  "alert": "multi_threshold_rule_test",
  "ruleType": "threshold_rule",
  "alertType": "METRIC_BASED_ALERT",
  "condition": {
    "thresholds": {
      "kind": "basic",
      "spec": [
        {
          "name": "critical",
          "target": 30,
          "matchType": "1",
          "op": "1",
          "channels": [
            "test channel"
          ]
        },
        {
          "name": "warning",
          "target": 20,
          "matchType": "1",
          "op": "1",
          "channels": [
            "test channel"
          ]
        },
        {
          "name": "info",
          "target": 10,
          "matchType": "1",
          "op": "1",
          "channels": [
            "test channel"
          ]
        }
      ]
    },
    "compositeQuery": {
      "queryType": "builder",
      "panelType": "graph",
      "queries": [
        {
          "type": "builder_query",
          "spec": {
            "name": "A",
            "signal": "metrics",
            "aggregations": [
              {
                "metricName": "cpu_percent_multi_threshold_rule_test",
                "timeAggregation": "avg",
                "spaceAggregation": "max"
              }
            ]
          }
        }
      ]
    },
    "selectedQueryName": "A"
  },
  "evaluation": {
    "kind": "rolling",
    "spec": {
      "evalWindow": "5m0s",
      "frequency": "15s"
    }
  },
  "labels": {},
  "annotations": {
    "description": "This alert is fired when the defined metric (current value: {{$value}}) crosses the threshold ({{$threshold}})",
    "summary": "This alert is fired when the defined metric (current value: {{$value}}) crosses the threshold ({{$threshold}})"
  },
  "notificationSettings": {
    "groupBy": [],
    "usePolicy": false,
    "renotify": {
      "enabled": false,
      "interval": "30m",
      "alertStates": []
    }
  },
  "version": "v5",
  "schemaVersion": "v2alpha1"
}
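
Read against the fixture: the rule averages cpu_percent_multi_threshold_rule_test over a rolling 5m window and compares it to three severities. Assuming the conventional encoding of op "1" as value-above and matchType "1" as at-least-once (worth verifying against the rule manager), the visible samples peaking at 25 should trip the warning (target 20) and info (target 10) thresholds but not critical (target 30). A minimal sketch of that check under those assumed semantics:

def severities_fired(window_values, thresholds):
    """Return the names of thresholds crossed at least once in the window.

    Assumes op "1" means value-above and matchType "1" means at-least-once;
    both are assumptions about the encoding, not verified here.
    """
    return [t["name"] for t in thresholds
            if any(v > t["target"] for v in window_values)]

# Visible fixture values peak at 25: warning (20) and info (10) fire, critical (30) does not.
print(severities_fired([6, 25, 25, 25, 10, 12, 8, 15], [
    {"name": "critical", "target": 30},
    {"name": "warning", "target": 20},
    {"name": "info", "target": 10},
]))  # ['warning', 'info']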

@@ -0,0 +1,12 @@
{"metric_name":"request_total_no_data_rule_test","labels":{"service":"api","endpoint":"/health","status_code":"200"},"timestamp":"2026-01-29T10:01:00+00:00","value":12,"temporality":"Cumulative","type_":"Sum","is_monotonic":true,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"request_total_no_data_rule_test","labels":{"service":"api","endpoint":"/health","status_code":"200"},"timestamp":"2026-01-29T10:02:00+00:00","value":26,"temporality":"Cumulative","type_":"Sum","is_monotonic":true,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"request_total_no_data_rule_test","labels":{"service":"api","endpoint":"/health","status_code":"200"},"timestamp":"2026-01-29T10:03:00+00:00","value":41,"temporality":"Cumulative","type_":"Sum","is_monotonic":true,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"request_total_no_data_rule_test","labels":{"service":"api","endpoint":"/health","status_code":"200"},"timestamp":"2026-01-29T10:04:00+00:00","value":56,"temporality":"Cumulative","type_":"Sum","is_monotonic":true,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"request_total_no_data_rule_test","labels":{"service":"api","endpoint":"/health","status_code":"200"},"timestamp":"2026-01-29T10:05:00+00:00","value":71,"temporality":"Cumulative","type_":"Sum","is_monotonic":true,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"request_total_no_data_rule_test","labels":{"service":"api","endpoint":"/health","status_code":"200"},"timestamp":"2026-01-29T10:06:00+00:00","value":86,"temporality":"Cumulative","type_":"Sum","is_monotonic":true,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"request_total_no_data_rule_test","labels":{"service":"api","endpoint":"/health","status_code":"200"},"timestamp":"2026-01-29T10:07:00+00:00","value":101,"temporality":"Cumulative","type_":"Sum","is_monotonic":true,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"request_total_no_data_rule_test","labels":{"service":"api","endpoint":"/health","status_code":"200"},"timestamp":"2026-01-29T10:08:00+00:00","value":116,"temporality":"Cumulative","type_":"Sum","is_monotonic":true,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"request_total_no_data_rule_test","labels":{"service":"api","endpoint":"/health","status_code":"200"},"timestamp":"2026-01-29T10:09:00+00:00","value":131,"temporality":"Cumulative","type_":"Sum","is_monotonic":true,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"request_total_no_data_rule_test","labels":{"service":"api","endpoint":"/health","status_code":"200"},"timestamp":"2026-01-29T10:10:00+00:00","value":146,"temporality":"Cumulative","type_":"Sum","is_monotonic":true,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"request_total_no_data_rule_test","labels":{"service":"api","endpoint":"/health","status_code":"200"},"timestamp":"2026-01-29T10:11:00+00:00","value":161,"temporality":"Cumulative","type_":"Sum","is_monotonic":true,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"request_total_no_data_rule_test","labels":{"service":"api","endpoint":"/health","status_code":"200"},"timestamp":"2026-01-29T10:12:00+00:00","value":176,"temporality":"Cumulative","type_":"Sum","is_monotonic":true,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}

@@ -0,0 +1,70 @@
{
  "alert": "no_data_rule_test",
  "ruleType": "threshold_rule",
  "alertType": "METRIC_BASED_ALERT",
  "condition": {
    "thresholds": {
      "kind": "basic",
      "spec": [
        {
          "name": "critical",
          "target": 10,
          "matchType": "1",
          "op": "1",
          "channels": [
            "test channel"
          ]
        }
      ]
    },
    "compositeQuery": {
      "queryType": "builder",
      "panelType": "graph",
      "queries": [
        {
          "type": "builder_query",
          "spec": {
            "name": "A",
            "signal": "metrics",
            "filter": {
              "expression": "service = 'server'"
            },
            "aggregations": [
              {
                "metricName": "request_total_no_data_rule_test",
                "timeAggregation": "rate",
                "spaceAggregation": "sum"
              }
            ]
          }
        }
      ]
    },
    "selectedQueryName": "A",
    "alertOnAbsent": true,
    "absentFor": 1
  },
  "evaluation": {
    "kind": "rolling",
    "spec": {
      "evalWindow": "5m0s",
      "frequency": "15s"
    }
  },
  "labels": {},
  "annotations": {
    "description": "This alert is fired when the defined metric (current value: {{$value}}) crosses the threshold ({{$threshold}})",
    "summary": "This alert is fired when the defined metric (current value: {{$value}}) crosses the threshold ({{$threshold}})"
  },
  "notificationSettings": {
    "groupBy": [],
    "usePolicy": false,
    "renotify": {
      "enabled": false,
      "interval": "30m",
      "alertStates": []
    }
  },
  "version": "v5",
  "schemaVersion": "v2alpha1"
}
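
Note the deliberate mismatch in this rule: the filter asks for service = 'server' while the fixture only emits service = 'api', so the query should match no series at all — which appears to be what exercises the alertOnAbsent / absentFor path. A minimal sketch of that absent-data check, assuming absentFor is expressed in minutes (the field's unit is an assumption here):

def should_fire_absent(matched_series, alert_on_absent, absent_for, minutes_without_data):
    """Fire the no-data alert when the rule opts in and the query has
    returned nothing for at least absent_for minutes (assumed unit)."""
    if not alert_on_absent:
        return False
    return not matched_series and minutes_without_data >= absent_for

# The filter matches nothing, so after one data-less minute the alert fires.
print(should_fire_absent([], True, 1, 1))  # True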

@@ -0,0 +1,12 @@
{"metric_name":"cpu_percent_threshold_above_all_the_time","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:01:00+00:00","value":12,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_threshold_above_all_the_time","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:02:00+00:00","value":14,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_threshold_above_all_the_time","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:03:00+00:00","value":15,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_threshold_above_all_the_time","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:04:00+00:00","value":15,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_threshold_above_all_the_time","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:05:00+00:00","value":15,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_threshold_above_all_the_time","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:06:00+00:00","value":15,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_threshold_above_all_the_time","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:07:00+00:00","value":15,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_threshold_above_all_the_time","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:08:00+00:00","value":15,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_threshold_above_all_the_time","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:09:00+00:00","value":15,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_threshold_above_all_the_time","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:10:00+00:00","value":15,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_threshold_above_all_the_time","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:11:00+00:00","value":15,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}
{"metric_name":"cpu_percent_threshold_above_all_the_time","labels":{"host":"server-01","cpu":"cpu0"},"timestamp":"2026-01-29T10:12:00+00:00","value":15,"temporality":"Unspecified","type_":"Gauge","is_monotonic":false,"flags":0,"description":"","unit":"","env":"default","resource_attrs":{},"scope_attrs":{}}

@@ -0,0 +1,58 @@
{
  "alert": "threshold_above_all_the_time",
  "ruleType": "promql_rule",
  "alertType": "METRIC_BASED_ALERT",
  "condition": {
    "thresholds": {
      "kind": "basic",
      "spec": [
        {
          "name": "critical",
          "target": 10,
          "matchType": "2",
          "op": "1",
          "channels": [
            "test channel"
          ]
        }
      ]
    },
    "compositeQuery": {
      "queryType": "promql",
      "panelType": "graph",
      "queries": [
        {
          "type": "promql",
          "spec": {
            "name": "A",
            "query": "{\"cpu_percent_threshold_above_all_the_time\"}"
          }
        }
      ]
    },
    "selectedQueryName": "A"
  },
  "evaluation": {
    "kind": "rolling",
    "spec": {
      "evalWindow": "5m0s",
      "frequency": "15s"
    }
  },
  "labels": {},
  "annotations": {
    "description": "This alert is fired when the defined metric (current value: {{$value}}) crosses the threshold ({{$threshold}})",
    "summary": "This alert is fired when the defined metric (current value: {{$value}}) crosses the threshold ({{$threshold}})"
  },
  "notificationSettings": {
    "groupBy": [],
    "usePolicy": false,
    "renotify": {
      "enabled": false,
      "interval": "30m",
      "alertStates": []
    }
  },
  "version": "v5",
  "schemaVersion": "v2alpha1"
}
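
This last rule differs from the others on two axes: it is a promql_rule whose query selects the metric by its quoted name inside the braces, and its matchType is "2", which conventionally means all-the-time — every sample in the evaluation window must satisfy the comparison, not just one. The fixture's values (12 through 15) all sit above the target of 10, so the alert should fire. A minimal sketch under that assumed matchType encoding:

def fires_all_the_time(window_values, target):
    """matchType "2" (assumed: all-the-time): every sample in the window
    must exceed the target; an empty window fires nothing."""
    return bool(window_values) and all(v > target for v in window_values)

print(fires_all_the_time([12, 14, 15, 15, 15], 10))  # True: every value exceeds 10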

Some files were not shown because too many files have changed in this diff.