Compare commits


1 Commit

Author SHA1 Message Date
Karan Balani
0766ab31c0 feat: meter reporter for new billing infra (#11016)
* feat: meter reporter for new billing infra

* feat(meterreporter): simplify code, add metric meters, dry-run zeus call

* feat(meterreporter): add traces meters

* chore: update interval validation to allow min 5 mins interval for testing

* feat: add telemetry for collect and ship durations & improve comments

* feat(meterreporter): sealed-range catch-up and today-partial ticks

* chore: intermediate commit

* feat: improve retention period queries based on workspace ids for logs only for now

* chore: skip meter checkpoint call temporarily

* feat(meterreporter): bootstrap from data floor, emit sentinel zero-readings

* chore: lower HistoricalBackfillDays

* fix(meterreporter): pin retention type

* refactor(meterreporter): remove unused retry config

* refactor: add retentiontypes

* chore: intermediate commit

* feat(meterreporter): add metric and trace meters

* refactor: cleanup comments

* refactor: remove HistoricalBackfillDays

* refactor: move a few things to ee package

* refactor: simplify some sections of tick

* refactor: push meters in batch for each day

* chore: add tracing and logging

* feat: make retention buckets generic

* feat(metercollector): add MeterCollector interface and split type packages

* feat(metercollector/retention): add narrow retention slice loader and SQL helpers

* refactor(meterreporter): wire http collectors

* chore(meterreporter): trim comments

* test(metercollector): add collector coverage

* chore(meterreporter): increase catchup window

* fix: ci lint and flag default value

* refactor(meters): align retention and zeus

* refactor(retention): move ttl types

* refactor(meters): rename platform fee collector

* refactor(meters): add meter constructor

* refactor(meters): add window constructor

* refactor(meters): consolidate zeus meter types

* refactor(meters): centralize meter metadata

* refactor(retention): add getter module

* refactor(retention): consolidate ttl types

* chore: use int64 instead of float64 as meter value

* chore: int64 conversion in clickhouse query too

* chore: error log - make failed meter collection louder

* chore: start sending data to zeus

* chore: add debug statement for logging meter data

* chore: simplify meter query only use org id and retention duration

* chore: remove unused functions from retention module and move sqlbuilder function too

* chore: remove unused code

* chore: switch to info context log for testing

* refactor(meterreporter): consolidate collectors and push origin into source

Replaces six near-duplicate collector packages with two parametrized,
factory-shaped ones: telemetrymetercollector for the ClickHouse-backed
meters (log size, span size, datapoint count) and staticmetercollector
for fixed-value meters (base platform fee). Each meter is now a Config
entry in cmd/enterprise/meter.go, materialized by iterating the factory.

Pushes the catchup floor concept out of the reporter and into each
collector via a new Origin method. Telemetry collectors return per-meter
min(unix_milli) FROM signoz_meter.samples; static collectors return
todayStart. The reporter now computes per-meter next-day-to-report and
only invokes a collector for days at/after its own next, eliminating
the over-emit + dropCheckpointed dance.

Other tightening: typed Meter.MeterName with JSON marshalers; Meter
dimensions built via attribute.Key-based zeustypes.NewDimensions;
license flows into Collect from the reporter (collectors stop fetching
it themselves); providerSettings plumbed into the meterreporter
factory closure for harness-style provider construction.

* refactor(meterreporter): per-collector Origin, simpler tick, semconv metrics

Pushes the catchup-floor concept out of the reporter and into each
collector via MeterCollector.Origin. Telemetry collectors return per-
meter min(unix_milli) FROM signoz_meter.samples; static collectors
return today. The reporter computes per-meter next-reportable-day,
iterates the day-loop globally, and only invokes a collector for days
at/after its own next — eliminating the over-emit + dropCheckpointed
dance entirely.

collectOrg is split into three named helpers: provider.checkpoints
(Zeus call + index), provider.nextDays (per-meter origin + checkpoint
max), and pure backfillRange (start/end clamped to yesterday + cap).
collectOrg itself reads as a five-step recipe.

Provider stores collectors as map[MeterName]MeterCollector keyed by
name; the slice + sort.Slice scaffolding is gone, validation moves
into newProvider. eligibleCollectors and report take the map directly.

Start matches the opaquetokenizer pattern: synchronous select+ticker,
sharder + per-org loop with license check (skipping orgs with no
active license), per-tick span scoped via an IIFE so defer span.End()
fires once per tick. goroutinesWg removed.

Config drops Timeout. CatchupMaxDaysPerTick renamed to MaxBackfillDays.
runPhase renamed to report. telemetryStore injection removed (no
longer used after dataFloor moved into the telemetry collector).

Metrics rebuilt around OTel semconv: signoz.meterreporter.checkpoints,
.reports, .collections, .meters — each bumped on success and failure,
with error.type set on failure via a new errors.TypeAttr helper in
pkg/errors. collections also carries signoz.meter.name.

* refactor(meterreporter): rename base platform fee meter, add metric units

Renames signoz.meter.base.platform.fee to signoz.meter.platform.active.
The new name matches the per-service template signoz.meter.<service>
.active that scales for future per-service billing meters; "active"
fits the billing-eligibility semantic (org's platform subscription
is active for the period) without conflating with operational
liveness conventions like Prometheus's `up`.

Adds UCUM annotated-count units to each reporter counter:
  - signoz.meterreporter.checkpoints  -> {checkpoint}
  - signoz.meterreporter.reports      -> {report}
  - signoz.meterreporter.collections  -> {collection}
  - signoz.meterreporter.meters       -> {meter}

* chore: stop leaking collectors if flag is false and address comments

* fix(meterreporter): correct startup and retention metadata

* fix(meterreporter): recover static meter backfill

* chore: address review comments

* chore: move flag evaluation into reporter

* refactor: fix retention origin for staticmeter collectors

* fix(meterreporter): gate backfill by license day

Replace max_backfill_days with a backfill switch.
Clamp sealed-day catch-up to the license creation day.

Send retention duration dimensions in seconds.

* fix(meterreporter): anchor backfill to license day

* chore: address review comments

* chore: drop unrelated authz schema diff

---------

Co-authored-by: Karan Balani <29383381+balanikaran@users.noreply.github.com>
Co-authored-by: grandwizard28 <vibhupandey28@gmail.com>
2026-05-11 17:47:29 +00:00
48 changed files with 1903 additions and 254 deletions


@@ -18,16 +18,19 @@ import (
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/flagger"
"github.com/SigNoz/signoz/pkg/gateway"
"github.com/SigNoz/signoz/pkg/gateway/noopgateway"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/licensing/nooplicensing"
"github.com/SigNoz/signoz/pkg/meterreporter"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration/implcloudintegration"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
"github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/retention"
"github.com/SigNoz/signoz/pkg/modules/rulestatehistory"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount"
"github.com/SigNoz/signoz/pkg/prometheus"
@@ -109,6 +112,9 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
func(_ licensing.Licensing) factory.NamedMap[factory.ProviderFactory[auditor.Auditor, auditor.Config]] {
return signoz.NewAuditorProviderFactories()
},
func(_ context.Context, _ factory.ProviderSettings, _ flagger.Flagger, _ licensing.Licensing, _ telemetrystore.TelemetryStore, _ retention.Getter, _ organization.Getter, _ zeus.Zeus) (factory.NamedMap[factory.ProviderFactory[meterreporter.Reporter, meterreporter.Config]], string) {
return signoz.NewMeterReporterProviderFactories(), "noop"
},
func(ps factory.ProviderSettings, q querier.Querier, a analytics.Analytics) querier.Handler {
return querier.NewHandler(ps, q, a)
},

cmd/enterprise/meter.go (new file, 55 lines)

@@ -0,0 +1,55 @@
package main
import (
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/telemetrylogs"
"github.com/SigNoz/signoz/pkg/telemetrymetrics"
"github.com/SigNoz/signoz/pkg/telemetrytraces"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/types/zeustypes"
)
var meterConfigs = []metercollector.Config{
{
Provider: metercollector.ProviderStatic,
Static: metercollector.StaticConfig{
Name: zeustypes.MeterPlatformActive,
Unit: zeustypes.MeterUnitCount,
Aggregation: zeustypes.MeterAggregationMax,
Value: 1,
},
},
{
Provider: metercollector.ProviderTelemetry,
Telemetry: metercollector.TelemetryConfig{
Name: zeustypes.MeterLogSize,
Unit: zeustypes.MeterUnitBytes,
Aggregation: zeustypes.MeterAggregationSum,
DBName: telemetrylogs.DBName,
TableName: telemetrylogs.LogsV2LocalTableName,
DefaultRetentionDays: retentiontypes.DefaultLogsRetentionDays,
},
},
{
Provider: metercollector.ProviderTelemetry,
Telemetry: metercollector.TelemetryConfig{
Name: zeustypes.MeterSpanSize,
Unit: zeustypes.MeterUnitBytes,
Aggregation: zeustypes.MeterAggregationSum,
DBName: telemetrytraces.DBName,
TableName: telemetrytraces.SpanIndexV3LocalTableName,
DefaultRetentionDays: retentiontypes.DefaultTracesRetentionDays,
},
},
{
Provider: metercollector.ProviderTelemetry,
Telemetry: metercollector.TelemetryConfig{
Name: zeustypes.MeterDatapointCount,
Unit: zeustypes.MeterUnitCount,
Aggregation: zeustypes.MeterAggregationSum,
DBName: telemetrymetrics.DBName,
TableName: telemetrymetrics.SamplesV4LocalTableName,
DefaultRetentionDays: retentiontypes.DefaultMetricsRetentionDays,
},
},
}


@@ -18,6 +18,9 @@ import (
"github.com/SigNoz/signoz/ee/gateway/httpgateway"
enterpriselicensing "github.com/SigNoz/signoz/ee/licensing"
"github.com/SigNoz/signoz/ee/licensing/httplicensing"
"github.com/SigNoz/signoz/ee/metercollector/staticmetercollector"
"github.com/SigNoz/signoz/ee/metercollector/telemetrymetercollector"
"github.com/SigNoz/signoz/ee/meterreporter/httpmeterreporter"
"github.com/SigNoz/signoz/ee/modules/cloudintegration/implcloudintegration"
"github.com/SigNoz/signoz/ee/modules/cloudintegration/implcloudintegration/implcloudprovider"
"github.com/SigNoz/signoz/ee/modules/dashboard/impldashboard"
@@ -36,14 +39,17 @@ import (
"github.com/SigNoz/signoz/pkg/cache"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
pkgflagger "github.com/SigNoz/signoz/pkg/flagger"
"github.com/SigNoz/signoz/pkg/gateway"
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/meterreporter"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration"
pkgcloudintegration "github.com/SigNoz/signoz/pkg/modules/cloudintegration/implcloudintegration"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
pkgimpldashboard "github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/retention"
"github.com/SigNoz/signoz/pkg/modules/rulestatehistory"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount"
"github.com/SigNoz/signoz/pkg/prometheus"
@@ -161,6 +167,20 @@ func runServer(ctx context.Context, config signoz.Config, logger *slog.Logger) e
}
return factories
},
func(ctx context.Context, providerSettings factory.ProviderSettings, flagger pkgflagger.Flagger, licensing licensing.Licensing, telemetryStore telemetrystore.TelemetryStore, retentionGetter retention.Getter, orgGetter organization.Getter, zeus zeus.Zeus) (factory.NamedMap[factory.ProviderFactory[meterreporter.Reporter, meterreporter.Config]], string) {
factories := signoz.NewMeterReporterProviderFactories()
collectorFactories := factory.MustNewNamedMap(
staticmetercollector.NewFactory(),
telemetrymetercollector.NewFactory(telemetryStore, retentionGetter),
)
if err := factories.Add(httpmeterreporter.NewFactory(collectorFactories, meterConfigs, flagger, licensing, orgGetter, zeus)); err != nil {
panic(err)
}
return factories, "http"
},
func(ps factory.ProviderSettings, q querier.Querier, a analytics.Analytics) querier.Handler {
communityHandler := querier.NewHandler(ps, q, a)
return eequerier.NewHandler(ps, q, communityHandler)


@@ -429,3 +429,10 @@ authz:
openfga:
# maximum tuples allowed per openfga write operation.
max_tuples_per_write: 300
##################### Meter Reporter #####################
meterreporter:
# The interval between collection ticks. Minimum 5m.
interval: 6h
# Whether to backfill sealed days from the license creation day.
backfill: true


@@ -0,0 +1,61 @@
// Package staticmetercollector emits a fixed-value meter reading per org per window.
package staticmetercollector
import (
"context"
"time"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/types/licensetypes"
"github.com/SigNoz/signoz/pkg/types/zeustypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
var _ metercollector.MeterCollector = (*Provider)(nil)
type Provider struct {
settings factory.ScopedProviderSettings
config metercollector.StaticConfig
}
func NewFactory() factory.ProviderFactory[metercollector.MeterCollector, metercollector.Config] {
return factory.NewProviderFactory(factory.MustNewName(metercollector.ProviderStatic), func(ctx context.Context, providerSettings factory.ProviderSettings, config metercollector.Config) (metercollector.MeterCollector, error) {
return newProvider(providerSettings, config.Static), nil
},
)
}
func newProvider(providerSettings factory.ProviderSettings, config metercollector.StaticConfig) *Provider {
settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/ee/metercollector/staticmetercollector")
return &Provider{
settings: settings,
config: config,
}
}
func (provider *Provider) Name() zeustypes.MeterName { return provider.config.Name }
func (provider *Provider) Unit() zeustypes.MeterUnit { return provider.config.Unit }
func (provider *Provider) Aggregation() zeustypes.MeterAggregation {
return provider.config.Aggregation
}
func (provider *Provider) Origin(_ context.Context, _ valuer.UUID, license *licensetypes.License, _ time.Time) (time.Time, error) {
if license == nil || license.CreatedAt.IsZero() {
return time.Time{}, nil
}
createdAt := license.CreatedAt.UTC()
return time.Date(createdAt.Year(), createdAt.Month(), createdAt.Day(), 0, 0, 0, 0, time.UTC), nil
}
func (provider *Provider) Collect(_ context.Context, orgID valuer.UUID, license *licensetypes.License, window zeustypes.MeterWindow) ([]zeustypes.Meter, error) {
if license == nil || license.Key == "" {
return nil, nil
}
return []zeustypes.Meter{
zeustypes.NewMeter(provider.config.Name, provider.config.Value, provider.config.Unit, provider.config.Aggregation, window, zeustypes.NewDimensions(zeustypes.OrganizationID.String(orgID.StringValue()))),
}, nil
}


@@ -0,0 +1,247 @@
// Package telemetrymetercollector collects telemetry meters (logs, traces, metrics)
// by retention. One Provider materializes per TelemetryConfig.
package telemetrymetercollector
import (
"context"
"fmt"
"regexp"
"strconv"
"strings"
"time"
"github.com/huandu/go-sqlbuilder"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/modules/retention"
"github.com/SigNoz/signoz/pkg/telemetrymeter"
"github.com/SigNoz/signoz/pkg/telemetrystore"
"github.com/SigNoz/signoz/pkg/types/licensetypes"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/types/zeustypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
var (
labelKeyPattern = regexp.MustCompile(`^[A-Za-z0-9_.\-]+$`)
labelValuePattern = regexp.MustCompile(`^[A-Za-z0-9_.\-:]+$`)
)
var _ metercollector.MeterCollector = (*Provider)(nil)
type Provider struct {
settings factory.ScopedProviderSettings
config metercollector.TelemetryConfig
telemetryStore telemetrystore.TelemetryStore
retentionGetter retention.Getter
}
func NewFactory(telemetryStore telemetrystore.TelemetryStore, retentionGetter retention.Getter) factory.ProviderFactory[metercollector.MeterCollector, metercollector.Config] {
return factory.NewProviderFactory(factory.MustNewName(metercollector.ProviderTelemetry), func(ctx context.Context, providerSettings factory.ProviderSettings, config metercollector.Config) (metercollector.MeterCollector, error) {
return newProvider(providerSettings, config.Telemetry, telemetryStore, retentionGetter), nil
},
)
}
func newProvider(
providerSettings factory.ProviderSettings,
config metercollector.TelemetryConfig,
telemetryStore telemetrystore.TelemetryStore,
retentionGetter retention.Getter,
) *Provider {
settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/ee/metercollector/telemetrymetercollector")
return &Provider{
settings: settings,
config: config,
telemetryStore: telemetryStore,
retentionGetter: retentionGetter,
}
}
func (provider *Provider) Name() zeustypes.MeterName { return provider.config.Name }
func (provider *Provider) Unit() zeustypes.MeterUnit { return provider.config.Unit }
func (provider *Provider) Aggregation() zeustypes.MeterAggregation {
return provider.config.Aggregation
}
func (provider *Provider) Origin(ctx context.Context, _ valuer.UUID, _ *licensetypes.License, todayStart time.Time) (time.Time, error) {
query, args := buildOriginQuery(provider.config.Name.String())
var minMs int64
if err := provider.telemetryStore.ClickhouseDB().QueryRow(ctx, query, args...).Scan(&minMs); err != nil {
return time.Time{}, err
}
if minMs == 0 {
return todayStart, nil
}
minDay := time.UnixMilli(minMs).UTC()
return time.Date(minDay.Year(), minDay.Month(), minDay.Day(), 0, 0, 0, 0, time.UTC), nil
}
func (provider *Provider) Collect(
ctx context.Context,
orgID valuer.UUID,
_ *licensetypes.License,
window zeustypes.MeterWindow,
) ([]zeustypes.Meter, error) {
meterName := provider.config.Name.String()
segments, err := provider.retentionGetter.GetRetentionPolicySegments(
ctx,
orgID,
provider.config.DBName,
provider.config.TableName,
provider.config.DefaultRetentionDays,
window.StartUnixMilli,
window.EndUnixMilli,
)
if err != nil {
return nil, err
}
valuesByRetentionDays := make(map[int]int64)
for _, segment := range segments {
query, args, err := buildQuery(meterName, segment)
if err != nil {
return nil, err
}
rows, err := provider.telemetryStore.ClickhouseDB().Query(ctx, query, args...)
if err != nil {
return nil, err
}
if err := func() error {
defer rows.Close()
for rows.Next() {
var retentionDays int32
var value int64
if err := rows.Scan(&retentionDays, &value); err != nil {
return err
}
valuesByRetentionDays[int(retentionDays)] += value
}
if err := rows.Err(); err != nil {
return err
}
return nil
}(); err != nil {
return nil, err
}
}
meters := make([]zeustypes.Meter, 0, len(valuesByRetentionDays))
for retentionDays, value := range valuesByRetentionDays {
meters = append(meters, zeustypes.NewMeter(provider.config.Name, value, provider.config.Unit, provider.config.Aggregation, window, buildDimensions(orgID, retentionDays)))
}
// Empty windows still emit a sentinel so checkpoints can advance.
if len(meters) == 0 && len(segments) > 0 {
meters = append(meters, zeustypes.NewMeter(provider.config.Name, 0, provider.config.Unit, provider.config.Aggregation, window, buildDimensions(orgID, segments[len(segments)-1].DefaultDays)))
}
return meters, nil
}
func buildOriginQuery(meterName string) (string, []any) {
sb := sqlbuilder.NewSelectBuilder()
sb.Select("toInt64(ifNull(min(unix_milli), 0))")
sb.From(telemetrymeter.DBName + "." + telemetrymeter.SamplesTableName)
sb.Where(sb.Equal("metric_name", meterName))
return sb.BuildWithFlavor(sqlbuilder.ClickHouse)
}
func buildQuery(meterName string, segment *retentiontypes.RetentionPolicySegment) (string, []any, error) {
retentionExpr, err := buildRetentionMultiIfSQL(segment.Rules, segment.DefaultDays)
if err != nil {
return "", nil, err
}
selects := []string{
retentionExpr + " AS retention_days",
"toInt64(ifNull(sum(value), 0)) AS value",
}
sb := sqlbuilder.NewSelectBuilder()
sb.Select(selects...)
sb.From(telemetrymeter.DBName + "." + telemetrymeter.SamplesTableName)
sb.Where(
sb.Equal("metric_name", meterName),
sb.GTE("unix_milli", segment.StartMs),
sb.LT("unix_milli", segment.EndMs),
)
sb.GroupBy("retention_days")
query, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse)
return query, args, nil
}
func buildRetentionMultiIfSQL(rules []retentiontypes.CustomRetentionRule, defaultDays int) (string, error) {
if defaultDays <= 0 {
return "", errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeMeterCollectorInvalidCustomRetentionRule, "non-positive default retention %d", defaultDays)
}
if len(rules) == 0 {
return "toInt32(" + strconv.Itoa(defaultDays) + ")", nil
}
arms := make([]string, 0, 2*len(rules)+1)
for ruleIndex, rule := range rules {
if rule.TTLDays <= 0 {
return "", errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeMeterCollectorInvalidCustomRetentionRule, "rule %d has non-positive ttl_days %d", ruleIndex, rule.TTLDays)
}
conditionExpr, err := buildRuleConditionSQL(ruleIndex, rule)
if err != nil {
return "", err
}
arms = append(arms, conditionExpr)
arms = append(arms, strconv.Itoa(rule.TTLDays))
}
arms = append(arms, strconv.Itoa(defaultDays))
return "toInt32(multiIf(" + strings.Join(arms, ", ") + "))", nil
}
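For intuition, the expression this function assembles for one rule plus the default looks like the following. This is a simplified, self-contained mirror that skips validation; `retentionMultiIf` and the example rule are illustrative, not the real helper:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// retentionMultiIf mirrors the arm layout above: each rule contributes a
// (condition, ttl) pair, and the default retention is the final arm.
func retentionMultiIf(arms [][2]string, defaultDays int) string {
	if len(arms) == 0 {
		return "toInt32(" + strconv.Itoa(defaultDays) + ")"
	}
	parts := make([]string, 0, 2*len(arms)+1)
	for _, arm := range arms {
		parts = append(parts, arm[0], arm[1])
	}
	parts = append(parts, strconv.Itoa(defaultDays))
	return "toInt32(multiIf(" + strings.Join(parts, ", ") + "))"
}

func main() {
	sql := retentionMultiIf([][2]string{
		{"JSONExtractString(labels, 'env') IN ('prod')", "90"},
	}, 30)
	fmt.Println(sql)
	// toInt32(multiIf(JSONExtractString(labels, 'env') IN ('prod'), 90, 30))
}
```

ClickHouse's multiIf evaluates condition/value pairs in order and falls through to the last argument, so the default retention naturally becomes the final arm.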
func buildRuleConditionSQL(ruleIndex int, rule retentiontypes.CustomRetentionRule) (string, error) {
if len(rule.Filters) == 0 {
return "", errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeMeterCollectorInvalidCustomRetentionRule, "rule %d has no filters", ruleIndex)
}
filterExprs := make([]string, 0, len(rule.Filters))
for filterIndex, filter := range rule.Filters {
if !labelKeyPattern.MatchString(filter.Key) {
return "", errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeMeterCollectorInvalidCustomRetentionRule, "rule %d filter %d has invalid key %q", ruleIndex, filterIndex, filter.Key)
}
if len(filter.Values) == 0 {
return "", errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeMeterCollectorInvalidCustomRetentionRule, "rule %d filter %d has no values", ruleIndex, filterIndex)
}
quoted := make([]string, len(filter.Values))
for valueIndex, value := range filter.Values {
if !labelValuePattern.MatchString(value) {
return "", errors.Newf(errors.TypeInvalidInput, metercollector.ErrCodeMeterCollectorInvalidCustomRetentionRule, "rule %d filter %d value %d is invalid %q", ruleIndex, filterIndex, valueIndex, value)
}
quoted[valueIndex] = "'" + value + "'"
}
filterExprs = append(filterExprs, fmt.Sprintf("JSONExtractString(labels, '%s') IN (%s)", filter.Key, strings.Join(quoted, ", ")))
}
return strings.Join(filterExprs, " AND "), nil
}
func buildDimensions(orgID valuer.UUID, retentionDays int) map[string]string {
retentionDurationSeconds := int64(retentionDays) * 24 * 60 * 60 // seconds
return zeustypes.NewDimensions(
zeustypes.OrganizationID.String(orgID.StringValue()),
zeustypes.RetentionDuration.String(strconv.FormatInt(retentionDurationSeconds, 10)),
)
}


@@ -0,0 +1,318 @@
package httpmeterreporter
import (
"context"
"encoding/json"
"fmt"
"log/slog"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/flagger"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/metercollector"
"github.com/SigNoz/signoz/pkg/meterreporter"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/types"
"github.com/SigNoz/signoz/pkg/types/featuretypes"
"github.com/SigNoz/signoz/pkg/types/licensetypes"
"github.com/SigNoz/signoz/pkg/types/zeustypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/SigNoz/signoz/pkg/zeus"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/trace"
)
var _ factory.ServiceWithHealthy = (*Provider)(nil)
type Provider struct {
settings factory.ScopedProviderSettings
config meterreporter.Config
collectorsByName map[zeustypes.MeterName]metercollector.MeterCollector
flagger flagger.Flagger
licensing licensing.Licensing
orgGetter organization.Getter
zeus zeus.Zeus
healthyC chan struct{}
stopC chan struct{}
metrics *reporterMetrics
}
func NewFactory(collectorFactories factory.NamedMap[factory.ProviderFactory[metercollector.MeterCollector, metercollector.Config]], collectorConfigs []metercollector.Config, flagger flagger.Flagger, licensing licensing.Licensing, orgGetter organization.Getter, zeus zeus.Zeus) factory.ProviderFactory[meterreporter.Reporter, meterreporter.Config] {
return factory.NewProviderFactory(factory.MustNewName("http"), func(ctx context.Context, providerSettings factory.ProviderSettings, config meterreporter.Config) (meterreporter.Reporter, error) {
return newProvider(ctx, providerSettings, config, collectorFactories, collectorConfigs, flagger, licensing, orgGetter, zeus)
},
)
}
func newProvider(
ctx context.Context,
providerSettings factory.ProviderSettings,
config meterreporter.Config,
collectorFactories factory.NamedMap[factory.ProviderFactory[metercollector.MeterCollector, metercollector.Config]],
collectorConfigs []metercollector.Config,
flagger flagger.Flagger,
licensing licensing.Licensing,
orgGetter organization.Getter,
zeus zeus.Zeus,
) (*Provider, error) {
settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/ee/meterreporter/httpmeterreporter")
collectorsByName := map[zeustypes.MeterName]metercollector.MeterCollector{}
for _, collectorConfig := range collectorConfigs {
collector, err := factory.NewProviderFromNamedMap(ctx, providerSettings, collectorConfig, collectorFactories, collectorConfig.Provider)
if err != nil {
return nil, err
}
if _, exists := collectorsByName[collector.Name()]; exists {
return nil, errors.Newf(errors.TypeAlreadyExists, errors.CodeAlreadyExists, "duplicate meter collector %q", collector.Name())
}
collectorsByName[collector.Name()] = collector
}
metrics, err := newReporterMetrics(settings.Meter())
if err != nil {
return nil, err
}
return &Provider{
settings: settings,
config: config,
collectorsByName: collectorsByName,
flagger: flagger,
licensing: licensing,
orgGetter: orgGetter,
zeus: zeus,
healthyC: make(chan struct{}),
stopC: make(chan struct{}),
metrics: metrics,
}, nil
}
func (provider *Provider) Start(ctx context.Context) error {
close(provider.healthyC)
provider.collect(ctx)
ticker := time.NewTicker(provider.config.Interval)
defer ticker.Stop()
for {
select {
case <-provider.stopC:
return nil
case <-ticker.C:
provider.collect(ctx)
}
}
}
func (provider *Provider) collect(ctx context.Context) {
ctx, span := provider.settings.Tracer().Start(ctx, "meterreporter.Collect", trace.WithAttributes(attribute.String("meterreporter.provider", "http")))
defer span.End()
orgs, err := provider.orgGetter.ListByOwnedKeyRange(ctx)
if err != nil {
span.RecordError(err)
provider.settings.Logger().ErrorContext(ctx, "failed to get orgs data", errors.Attr(err))
return
}
for _, org := range orgs {
evalCtx := featuretypes.NewFlaggerEvaluationContext(org.ID)
if !provider.flagger.BooleanOrEmpty(ctx, flagger.FeatureUseMeterReporter, evalCtx) {
provider.settings.Logger().DebugContext(ctx, "meter reporter disabled for org, skipping reporting", slog.String("org_id", org.ID.StringValue()))
continue
}
license, err := provider.licensing.GetActive(ctx, org.ID)
if err != nil {
if errors.Ast(err, errors.TypeNotFound) {
provider.settings.Logger().DebugContext(ctx, "no active license found for org, skipping reporting", slog.String("org_id", org.ID.StringValue()))
continue
}
span.RecordError(err)
provider.settings.Logger().ErrorContext(ctx, "failed to fetch active license for org", errors.Attr(err), slog.String("org_id", org.ID.StringValue()))
return
}
if err := provider.collectOrg(ctx, org, license); err != nil {
span.RecordError(err)
provider.settings.Logger().ErrorContext(ctx, "failed to collect meters", errors.Attr(err), slog.String("org_id", org.ID.StringValue()))
}
}
}
func (provider *Provider) Stop(ctx context.Context) error {
close(provider.stopC)
return nil
}
func (provider *Provider) Healthy() <-chan struct{} {
return provider.healthyC
}
func (provider *Provider) collectOrg(ctx context.Context, org *types.Organization, license *licensetypes.License) error {
now := time.Now().UTC()
// Use one timestamp so a tick cannot straddle midnight.
todayStart := time.Date(now.Year(), now.Month(), now.Day(), 0, 0, 0, 0, time.UTC)
if provider.config.Backfill {
checkpointsByMeter, err := provider.checkpoints(ctx, license.Key)
if err != nil {
return err
}
nextByCollector := provider.nextDays(license, todayStart, checkpointsByMeter)
start, end, ok := backfillRange(nextByCollector, todayStart)
if ok {
for day := start; !day.After(end); day = day.AddDate(0, 0, 1) {
eligible := eligibleCollectors(provider.collectorsByName, nextByCollector, day)
if len(eligible) == 0 {
continue
}
window, err := zeustypes.NewMeterWindow(day.UnixMilli(), day.AddDate(0, 0, 1).UnixMilli(), true)
if err != nil {
return err
}
if err := provider.report(ctx, org.ID, license, window, eligible); err != nil {
provider.settings.Logger().WarnContext(ctx, "failed to backfill for day", errors.Attr(err), slog.String("date", day.Format("2006-01-02")))
return err
}
}
}
}
// Today's partial window: every collector is always eligible (next <= today).
if now.UnixMilli() > todayStart.UnixMilli() {
todayWindow, err := zeustypes.NewMeterWindow(todayStart.UnixMilli(), now.UnixMilli(), false)
if err != nil {
return err
}
return provider.report(ctx, org.ID, license, todayWindow, provider.collectorsByName)
}
return nil
}
func (provider *Provider) checkpoints(ctx context.Context, licenseKey string) (map[string]time.Time, error) {
list, err := provider.zeus.ListMeterCheckpoints(ctx, licenseKey)
if err != nil {
provider.metrics.checkpoints.Add(ctx, 1, metric.WithAttributes(errors.TypeAttr(err)))
return nil, err
}
provider.metrics.checkpoints.Add(ctx, 1)
checkpointsByMeter := make(map[string]time.Time, len(list))
for _, checkpoint := range list {
checkpointsByMeter[checkpoint.Name] = checkpoint.StartDate.UTC()
}
return checkpointsByMeter, nil
}
func (provider *Provider) nextDays(license *licensetypes.License, todayStart time.Time, checkpointsByMeter map[string]time.Time) map[zeustypes.MeterName]time.Time {
nextByCollector := make(map[zeustypes.MeterName]time.Time, len(provider.collectorsByName))
licenseCreatedAt := license.CreatedAt.UTC()
licenseCreatedAtDay := time.Date(licenseCreatedAt.Year(), licenseCreatedAt.Month(), licenseCreatedAt.Day(), 0, 0, 0, 0, time.UTC)
for _, collector := range provider.collectorsByName {
checkpoint, hasCheckpoint := checkpointsByMeter[collector.Name().String()]
nextByCollector[collector.Name()] = nextReportableDay(licenseCreatedAtDay, todayStart, checkpoint, hasCheckpoint)
}
return nextByCollector
}
func nextReportableDay(licenseCreatedAtDay time.Time, todayStart time.Time, checkpoint time.Time, hasCheckpoint bool) time.Time {
next := licenseCreatedAtDay
if next.IsZero() {
next = todayStart
}
if hasCheckpoint {
checkpointNext := checkpoint.AddDate(0, 0, 1)
if checkpointNext.After(next) {
next = checkpointNext
}
}
return next
}
func (provider *Provider) report(ctx context.Context, orgID valuer.UUID, license *licensetypes.License, window zeustypes.MeterWindow, collectors map[zeustypes.MeterName]metercollector.MeterCollector) error {
date := time.UnixMilli(window.StartUnixMilli).UTC().Format("2006-01-02")
meters := make([]zeustypes.Meter, 0, len(collectors))
for _, collector := range collectors {
meterAttr := attribute.String("signoz.meter.name", collector.Name().String())
collectedReadings, err := collector.Collect(ctx, orgID, license, window)
if err != nil {
provider.metrics.collections.Add(ctx, 1, metric.WithAttributes(meterAttr, errors.TypeAttr(err)))
continue
}
provider.metrics.collections.Add(ctx, 1, metric.WithAttributes(meterAttr))
meters = append(meters, collectedReadings...)
}
if len(meters) == 0 {
return nil
}
idempotencyKey := fmt.Sprintf("meterreporter:%s", date)
body, err := json.Marshal(meters)
if err != nil {
provider.metrics.reports.Add(ctx, 1, metric.WithAttributes(errors.TypeAttr(err)))
return err
}
if err := provider.zeus.PutMetersV3(ctx, license.Key, idempotencyKey, body); err != nil {
provider.metrics.reports.Add(ctx, 1, metric.WithAttributes(errors.TypeAttr(err)))
return err
}
provider.metrics.reports.Add(ctx, 1)
provider.metrics.meters.Add(ctx, int64(len(meters)))
return nil
}
// backfillRange returns the inclusive sealed-day range ending at yesterday.
func backfillRange(nextByCollector map[zeustypes.MeterName]time.Time, todayStart time.Time) (start, end time.Time, ok bool) {
yesterday := todayStart.AddDate(0, 0, -1)
for _, next := range nextByCollector {
if !next.Before(todayStart) {
continue
}
if start.IsZero() || next.Before(start) {
start = next
}
}
if start.IsZero() || start.After(yesterday) {
return time.Time{}, time.Time{}, false
}
return start, yesterday, true
}
func eligibleCollectors(collectors map[zeustypes.MeterName]metercollector.MeterCollector, nextByCollector map[zeustypes.MeterName]time.Time, day time.Time) map[zeustypes.MeterName]metercollector.MeterCollector {
eligible := make(map[zeustypes.MeterName]metercollector.MeterCollector, len(collectors))
for name, collector := range collectors {
if !nextByCollector[name].After(day) {
eligible[name] = collector
}
}
return eligible
}

View File

@@ -0,0 +1,48 @@
package httpmeterreporter
import (
"github.com/SigNoz/signoz/pkg/errors"
"go.opentelemetry.io/otel/metric"
)
type reporterMetrics struct {
checkpoints metric.Int64Counter
reports metric.Int64Counter
collections metric.Int64Counter
meters metric.Int64Counter
}
func newReporterMetrics(meter metric.Meter) (*reporterMetrics, error) {
var errs error
checkpoints, err := meter.Int64Counter("signoz.meterreporter.checkpoints", metric.WithDescription("Zeus meter checkpoint fetches."), metric.WithUnit("{checkpoint}"))
if err != nil {
errs = errors.Join(errs, err)
}
reports, err := meter.Int64Counter("signoz.meterreporter.reports", metric.WithDescription("Meter reports shipped to Zeus."), metric.WithUnit("{report}"))
if err != nil {
errs = errors.Join(errs, err)
}
collections, err := meter.Int64Counter("signoz.meterreporter.collections", metric.WithDescription("Per-meter collect calls."), metric.WithUnit("{collection}"))
if err != nil {
errs = errors.Join(errs, err)
}
meters, err := meter.Int64Counter("signoz.meterreporter.meters", metric.WithDescription("Meter readings shipped to Zeus."), metric.WithUnit("{meter}"))
if err != nil {
errs = errors.Join(errs, err)
}
if errs != nil {
return nil, errs
}
return &reporterMetrics{
checkpoints: checkpoints,
reports: reports,
collections: collections,
meters: meters,
}, nil
}
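`newReporterMetrics` batches instrument-creation failures with `errors.Join` so one bad counter does not mask the others, and only returns the struct once every counter exists. A stdlib-only sketch of that accumulate-then-check pattern (the OTel meter is replaced by a hypothetical `makeCounter` helper for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// makeCounter stands in for meter.Int64Counter: it fails on an empty name.
func makeCounter(name string) (string, error) {
	if name == "" {
		return "", errors.New("counter name must not be empty")
	}
	return name, nil
}

// newAll creates every counter, joining all failures instead of returning
// on the first one, then succeeds only if no creation failed.
func newAll(names ...string) ([]string, error) {
	var errs error
	out := make([]string, 0, len(names))
	for _, n := range names {
		c, err := makeCounter(n)
		if err != nil {
			errs = errors.Join(errs, err)
			continue
		}
		out = append(out, c)
	}
	if errs != nil {
		return nil, errs
	}
	return out, nil
}

func main() {
	counters, err := newAll("checkpoints", "reports", "collections", "meters")
	fmt.Println(len(counters), err)
}
```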

View File

@@ -150,6 +150,72 @@ func (provider *Provider) PutMetersV2(ctx context.Context, key string, data []by
return err
}
func (provider *Provider) PutMetersV3(ctx context.Context, key string, idempotencyKey string, data []byte) error {
headers := http.Header{}
if idempotencyKey != "" {
headers.Set("X-Idempotency-Key", idempotencyKey)
}
_, err := provider.doWithHeaders(
ctx,
provider.config.URL.JoinPath("/v2/meters"),
http.MethodPost,
key,
data,
headers,
)
return err
}
func (provider *Provider) ListMeterCheckpoints(ctx context.Context, key string) ([]zeustypes.MeterCheckpoint, error) {
response, err := provider.do(
ctx,
provider.config.URL.JoinPath("/v2/meters/checkpoints"),
http.MethodGet,
key,
nil,
)
if err != nil {
return nil, err
}
checkpointValues := gjson.GetBytes(response, "data")
if !checkpointValues.Exists() || checkpointValues.Type == gjson.Null {
return nil, errors.Newf(errors.TypeInternal, zeus.ErrCodeResponseMalformed, "meter checkpoints are required")
}
if !checkpointValues.IsArray() {
return nil, errors.Newf(errors.TypeInternal, zeus.ErrCodeResponseMalformed, "meter checkpoints must be an array")
}
checkpointResults := checkpointValues.Array()
checkpoints := make([]zeustypes.MeterCheckpoint, 0, len(checkpointResults))
for _, checkpointValue := range checkpointResults {
name := checkpointValue.Get("name").String()
if name == "" {
return nil, errors.Newf(errors.TypeInternal, zeus.ErrCodeResponseMalformed, "meter checkpoint name is required")
}
startDateString := checkpointValue.Get("start_date").String()
if startDateString == "" {
return nil, errors.Newf(errors.TypeInternal, zeus.ErrCodeResponseMalformed, "meter checkpoint start_date is required for %q", name)
}
startDate, err := time.Parse("2006-01-02", startDateString)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, zeus.ErrCodeResponseMalformed, "parse meter checkpoint start_date %q for %q", startDateString, name)
}
checkpoints = append(checkpoints, zeustypes.MeterCheckpoint{
Name: name,
StartDate: startDate,
})
}
return checkpoints, nil
}
func (provider *Provider) PutProfile(ctx context.Context, key string, profile *zeustypes.PostableProfile) error {
body, err := json.Marshal(profile)
if err != nil {
@@ -185,12 +251,21 @@ func (provider *Provider) PutHost(ctx context.Context, key string, host *zeustyp
}
func (provider *Provider) do(ctx context.Context, url *url.URL, method string, key string, requestBody []byte) ([]byte, error) {
return provider.doWithHeaders(ctx, url, method, key, requestBody, nil)
}
func (provider *Provider) doWithHeaders(ctx context.Context, url *url.URL, method string, key string, requestBody []byte, extraHeaders http.Header) ([]byte, error) {
request, err := http.NewRequestWithContext(ctx, method, url.String(), bytes.NewBuffer(requestBody))
if err != nil {
return nil, err
}
request.Header.Set("X-Signoz-Cloud-Api-Key", key)
request.Header.Set("Content-Type", "application/json")
for k, vs := range extraHeaders {
for _, v := range vs {
request.Header.Add(k, v)
}
}
response, err := provider.httpClient.Do(request)
if err != nil {

View File

@@ -4,6 +4,8 @@ import (
"errors" //nolint:depguard
"fmt"
"log/slog"
"go.opentelemetry.io/otel/attribute"
)
// base is the fundamental struct that implements the error interface.
@@ -253,3 +255,10 @@ func NewTimeoutf(code Code, format string, args ...any) *base {
func Attr(err error) slog.Attr {
return slog.Any("exception", err)
}
// TypeAttr returns an OTel attribute.KeyValue with the "error.type" semconv key
// set to the error's type string.
func TypeAttr(err error) attribute.KeyValue {
t, _, _, _, _, _ := Unwrapb(err)
return attribute.String("error.type", t.String())
}

View File

@@ -8,6 +8,7 @@ var (
FeatureHideRootUser = featuretypes.MustNewName("hide_root_user")
FeatureGetMetersFromZeus = featuretypes.MustNewName("get_meters_from_zeus")
FeaturePutMetersInZeus = featuretypes.MustNewName("put_meters_in_zeus")
FeatureUseMeterReporter = featuretypes.MustNewName("use_meter_reporter")
FeatureUseJSONBody = featuretypes.MustNewName("use_json_body")
)
@@ -53,6 +54,14 @@ func MustNewRegistry() featuretypes.Registry {
DefaultVariant: featuretypes.MustNewName("disabled"),
Variants: featuretypes.NewBooleanVariants(),
},
&featuretypes.Feature{
Name: FeatureUseMeterReporter,
Kind: featuretypes.KindBoolean,
Stage: featuretypes.StageExperimental,
Description: "Controls whether the enterprise meter reporter runs instead of the noop reporter",
DefaultVariant: featuretypes.MustNewName("disabled"),
Variants: featuretypes.NewBooleanVariants(),
},
&featuretypes.Feature{
Name: FeatureUseJSONBody,
Kind: featuretypes.KindBoolean,

View File

@@ -0,0 +1,68 @@
package metercollector
import (
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/types/zeustypes"
)
const (
ProviderStatic = "static"
ProviderTelemetry = "telemetry"
)
type Config struct {
Provider string `mapstructure:"provider"`
Telemetry TelemetryConfig `mapstructure:"telemetry"`
Static StaticConfig `mapstructure:"static"`
}
func (c Config) Validate() error {
switch c.Provider {
case ProviderStatic:
return c.Static.Validate()
case ProviderTelemetry:
return c.Telemetry.Validate()
default:
return errors.Newf(errors.TypeInvalidInput, ErrCodeInvalidConfig, "meter collector: unknown provider %q", c.Provider)
}
}
type TelemetryConfig struct {
Name zeustypes.MeterName
Unit zeustypes.MeterUnit
Aggregation zeustypes.MeterAggregation
DBName string
TableName string
DefaultRetentionDays int
}
type StaticConfig struct {
Name zeustypes.MeterName
Unit zeustypes.MeterUnit
Aggregation zeustypes.MeterAggregation
Value int64
}
func (c StaticConfig) Validate() error {
if c.Name.IsZero() {
return errors.New(errors.TypeInvalidInput, ErrCodeInvalidConfig, "static meter collector: name must be set")
}
return nil
}
func (c TelemetryConfig) Validate() error {
if c.Name.IsZero() {
return errors.New(errors.TypeInvalidInput, ErrCodeInvalidConfig, "telemetry meter collector: name must be set")
}
if c.DBName == "" || c.TableName == "" {
return errors.New(errors.TypeInvalidInput, ErrCodeInvalidConfig, "telemetry meter collector: db_name and table_name are required")
}
if c.DefaultRetentionDays <= 0 {
return errors.New(errors.TypeInvalidInput, ErrCodeInvalidConfig, "telemetry meter collector: default_retention_days must be positive")
}
return nil
}

View File

@@ -0,0 +1,25 @@
package metercollector
import (
"context"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/types/licensetypes"
"github.com/SigNoz/signoz/pkg/types/zeustypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
var (
ErrCodeMeterCollectorCollectFailed = errors.MustNewCode("meter_collector_collect_failed")
ErrCodeMeterCollectorInvalidCustomRetentionRule = errors.MustNewCode("meter_collector_invalid_custom_retention_rule")
ErrCodeInvalidConfig = errors.MustNewCode("meter_collector_invalid_config")
)
type MeterCollector interface {
Name() zeustypes.MeterName
Unit() zeustypes.MeterUnit
Aggregation() zeustypes.MeterAggregation
Origin(ctx context.Context, orgID valuer.UUID, license *licensetypes.License, todayStart time.Time) (time.Time, error)
Collect(ctx context.Context, orgID valuer.UUID, license *licensetypes.License, window zeustypes.MeterWindow) ([]zeustypes.Meter, error)
}

View File

@@ -0,0 +1,37 @@
package meterreporter
import (
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
)
var _ factory.Config = (*Config)(nil)
type Config struct {
// Interval is how often the reporter collects and ships meters.
Interval time.Duration `mapstructure:"interval"`
// Backfill enables sealed-day catch-up from the license creation day.
Backfill bool `mapstructure:"backfill"`
}
func newConfig() factory.Config {
return Config{
Interval: 6 * time.Hour,
Backfill: true,
}
}
func NewConfigFactory() factory.ConfigFactory {
return factory.NewConfigFactory(factory.MustNewName("meterreporter"), newConfig)
}
func (c Config) Validate() error {
if c.Interval < 5*time.Minute || c.Interval > 24*time.Hour {
return errors.New(errors.TypeInvalidInput, ErrCodeInvalidInput, "meterreporter::interval must be between 5m and 24h")
}
return nil
}

View File

@@ -0,0 +1,14 @@
package meterreporter
import (
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
)
var (
ErrCodeInvalidInput = errors.MustNewCode("meterreporter_invalid_input")
)
type Reporter interface {
factory.ServiceWithHealthy
}

View File

@@ -0,0 +1,39 @@
package noopmeterreporter
import (
"context"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/meterreporter"
)
type provider struct {
healthyC chan struct{}
stopC chan struct{}
}
func NewFactory() factory.ProviderFactory[meterreporter.Reporter, meterreporter.Config] {
return factory.NewProviderFactory(factory.MustNewName("noop"), New)
}
func New(_ context.Context, _ factory.ProviderSettings, _ meterreporter.Config) (meterreporter.Reporter, error) {
return &provider{
healthyC: make(chan struct{}),
stopC: make(chan struct{}),
}, nil
}
func (p *provider) Start(_ context.Context) error {
close(p.healthyC)
<-p.stopC
return nil
}
func (p *provider) Stop(_ context.Context) error {
close(p.stopC)
return nil
}
func (p *provider) Healthy() <-chan struct{} {
return p.healthyC
}

View File

@@ -0,0 +1,52 @@
package implretention
import (
"context"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/modules/retention"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type getter struct {
store retentiontypes.Store
}
// NewGetter creates a retention getter backed by the retention store.
func NewGetter(store retentiontypes.Store) retention.Getter {
return &getter{
store: store,
}
}
// GetRetentionPolicySegments loads successful TTL changes and converts them into retention policy segments.
func (getter *getter) GetRetentionPolicySegments(
ctx context.Context,
orgID valuer.UUID,
dbName string,
tableName string,
fallbackDefaultDays int,
startMs int64,
endMs int64,
) ([]*retentiontypes.RetentionPolicySegment, error) {
if startMs >= endMs {
return nil, nil
}
if dbName == "" {
return nil, errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "dbName is empty")
}
if tableName == "" {
return nil, errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "tableName is empty")
}
if fallbackDefaultDays <= 0 {
return nil, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "non-positive fallbackDefaultDays %d", fallbackDefaultDays)
}
rows, err := getter.store.ListTTLSettingsByTableNameAndBeforeCreatedAt(ctx, orgID, dbName+"."+tableName, endMs)
if err != nil {
return nil, err
}
return retentiontypes.BuildRetentionPolicySegmentsFromRows(rows, fallbackDefaultDays, startMs, endMs)
}
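`BuildRetentionPolicySegmentsFromRows` itself is not shown in this diff, but the getter's contract implies the shape: each successful TTL change defines the retention in effect from its creation time until the next change, clipped to the half-open `[startMs, endMs)` window, with the fallback default covering the span before the first recorded change. A sketch of one plausible implementation under those inferred assumptions:

```go
package main

import (
	"fmt"
	"sort"
)

// segment is one retention policy slice over a half-open [StartMs, EndMs) range.
type segment struct {
	StartMs, EndMs int64
	RetentionDays  int
}

// ttlRow is a successful TTL change, effective from CreatedMs onward.
type ttlRow struct {
	CreatedMs int64
	TTLDays   int
}

// buildSegments walks rows ordered by creation time and emits the retention
// in effect over each sub-range of [startMs, endMs), using fallbackDays
// before the first recorded change.
func buildSegments(rows []ttlRow, fallbackDays int, startMs, endMs int64) []segment {
	sort.Slice(rows, func(i, j int) bool { return rows[i].CreatedMs < rows[j].CreatedMs })
	segs := []segment{}
	cur, curStart := fallbackDays, startMs
	for _, r := range rows {
		if r.CreatedMs <= curStart {
			cur = r.TTLDays // change predates the window: just update the active policy
			continue
		}
		if r.CreatedMs >= endMs {
			break
		}
		segs = append(segs, segment{curStart, r.CreatedMs, cur})
		cur, curStart = r.TTLDays, r.CreatedMs
	}
	if curStart < endMs {
		segs = append(segs, segment{curStart, endMs, cur})
	}
	return segs
}

func main() {
	// Default 15d retention until a change to 30d at t=500, over window [0, 1000).
	fmt.Println(buildSegments([]ttlRow{{CreatedMs: 500, TTLDays: 30}}, 15, 0, 1000))
}
```

This is why the store orders rows `created_at ASC` and filters to `created_at < endMs`: the segment builder only needs the changes that could affect the window.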

View File

@@ -0,0 +1,41 @@
package implretention
import (
"context"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/sqlstore"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
type store struct {
sqlstore sqlstore.SQLStore
}
// NewStore creates a SQL-backed retention store.
func NewStore(sqlstore sqlstore.SQLStore) retentiontypes.Store {
return &store{sqlstore: sqlstore}
}
// ListTTLSettingsByTableNameAndBeforeCreatedAt returns successful TTL settings before the given timestamp.
func (store *store) ListTTLSettingsByTableNameAndBeforeCreatedAt(ctx context.Context, orgID valuer.UUID, tableName string, beforeMs int64) ([]*retentiontypes.TTLSetting, error) {
rows := []*retentiontypes.TTLSetting{}
err := store.
sqlstore.
BunDB().
NewSelect().
Model(&rows).
Where("table_name = ?", tableName).
Where("org_id = ?", orgID.StringValue()).
Where("status = ?", retentiontypes.TTLSettingStatusSuccess).
Where("created_at < ?", time.UnixMilli(beforeMs).UTC()).
OrderExpr("created_at ASC").
Scan(ctx)
if err != nil {
return nil, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "load ttl_setting rows for org %q table %q", orgID.StringValue(), tableName)
}
return rows, nil
}

View File

@@ -0,0 +1,14 @@
package retention
import (
"context"
"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
)
// Getter resolves retention data and expressions for read paths.
type Getter interface {
// GetRetentionPolicySegments returns retention policy segments active over a half-open meter window.
GetRetentionPolicySegments(ctx context.Context, orgID valuer.UUID, dbName string, tableName string, fallbackDefaultDays int, startMs int64, endMs int64) ([]*retentiontypes.RetentionPolicySegment, error)
}

View File

@@ -1343,7 +1343,7 @@ func getLocalTableName(tableName string) string {
}
func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params *model.TTLParams) (*model.SetTTLResponseItem, *model.ApiError) {
func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params *retentiontypes.TTLParams) (*retentiontypes.SetTTLResponseItem, *model.ApiError) {
ctx = ctxtypes.NewContextWithCommentVals(ctx, map[string]string{
instrumentationtypes.TelemetrySignal: telemetrytypes.SignalLogs.StringValue(),
instrumentationtypes.CodeNamespace: "clickhouse-reader",
@@ -1378,7 +1378,7 @@ func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params
if apiErr != nil {
return nil, &model.ApiError{Typ: model.ErrorExec, Err: fmt.Errorf("error in processing ttl_status check sql query")}
}
if statusItem.Status == constants.StatusPending {
if statusItem.Status == retentiontypes.TTLSettingStatusPending {
return nil, &model.ApiError{Typ: model.ErrorConflict, Err: fmt.Errorf("TTL is already running")}
}
}
@@ -1437,7 +1437,7 @@ func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params
TransactionID: uuid,
TableName: tableName,
TTL: int(params.DelDuration),
Status: constants.StatusPending,
Status: retentiontypes.TTLSettingStatusPending,
ColdStorageTTL: coldStorageDuration,
OrgID: orgID,
}
@@ -1463,7 +1463,7 @@ func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", constants.StatusFailed).
Set("status = ?", retentiontypes.TTLSettingStatusFailed).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -1483,7 +1483,7 @@ func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", constants.StatusFailed).
Set("status = ?", retentiontypes.TTLSettingStatusFailed).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -1498,7 +1498,7 @@ func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", constants.StatusSuccess).
Set("status = ?", retentiontypes.TTLSettingStatusSuccess).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -1508,10 +1508,10 @@ func (r *ClickHouseReader) setTTLLogs(ctx context.Context, orgID string, params
}
}(ttlPayload)
return &model.SetTTLResponseItem{Message: "move ttl has been successfully set up"}, nil
return &retentiontypes.SetTTLResponseItem{Message: "move ttl has been successfully set up"}, nil
}
func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, params *model.TTLParams) (*model.SetTTLResponseItem, *model.ApiError) {
func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, params *retentiontypes.TTLParams) (*retentiontypes.SetTTLResponseItem, *model.ApiError) {
ctx = ctxtypes.NewContextWithCommentVals(ctx, map[string]string{
instrumentationtypes.TelemetrySignal: telemetrytypes.SignalTraces.StringValue(),
instrumentationtypes.CodeNamespace: "clickhouse-reader",
@@ -1541,7 +1541,7 @@ func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, param
if apiErr != nil {
return nil, &model.ApiError{Typ: model.ErrorExec, Err: fmt.Errorf("error in processing ttl_status check sql query")}
}
if statusItem.Status == constants.StatusPending {
if statusItem.Status == retentiontypes.TTLSettingStatusPending {
return nil, &model.ApiError{Typ: model.ErrorConflict, Err: fmt.Errorf("TTL is already running")}
}
}
@@ -1575,7 +1575,7 @@ func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, param
TransactionID: uuid,
TableName: tableName,
TTL: int(params.DelDuration),
Status: constants.StatusPending,
Status: retentiontypes.TTLSettingStatusPending,
ColdStorageTTL: coldStorageDuration,
OrgID: orgID,
}
@@ -1613,7 +1613,7 @@ func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, param
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", constants.StatusFailed).
Set("status = ?", retentiontypes.TTLSettingStatusFailed).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -1634,7 +1634,7 @@ func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, param
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", constants.StatusFailed).
Set("status = ?", retentiontypes.TTLSettingStatusFailed).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -1649,7 +1649,7 @@ func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, param
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Set("updated_at = ?", time.Now()).
Set("status = ?", constants.StatusSuccess).
Set("status = ?", retentiontypes.TTLSettingStatusSuccess).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -1658,7 +1658,7 @@ func (r *ClickHouseReader) setTTLTraces(ctx context.Context, orgID string, param
}
}(distributedTableName)
}
return &model.SetTTLResponseItem{Message: "move ttl has been successfully set up"}, nil
return &retentiontypes.SetTTLResponseItem{Message: "move ttl has been successfully set up"}, nil
}
func (r *ClickHouseReader) hasCustomRetentionColumn(ctx context.Context) (bool, error) {
@@ -1687,7 +1687,7 @@ func (r *ClickHouseReader) hasCustomRetentionColumn(ctx context.Context) (bool,
return true, nil
}
func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *model.CustomRetentionTTLParams) (*model.CustomRetentionTTLResponse, error) {
func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *retentiontypes.CustomRetentionTTLParams) (*retentiontypes.CustomRetentionTTLResponse, error) {
ctx = ctxtypes.NewContextWithCommentVals(ctx, map[string]string{
instrumentationtypes.TelemetrySignal: telemetrytypes.SignalLogs.StringValue(),
@@ -1702,7 +1702,7 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *m
if !hasCustomRetention {
r.logger.Info("Custom retention not supported, falling back to standard TTL method", "orgID", orgID)
ttlParams := &model.TTLParams{
ttlParams := &retentiontypes.TTLParams{
Type: params.Type,
DelDuration: int64(params.DefaultTTLDays * 24 * 3600),
}
@@ -1723,7 +1723,7 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *m
return nil, errorsV2.Wrapf(apiErr.Err, errorsV2.TypeInternal, errorsV2.CodeInternal, "failed to set standard TTL")
}
return &model.CustomRetentionTTLResponse{
return &retentiontypes.CustomRetentionTTLResponse{
Message: fmt.Sprintf("Custom retention not supported, applied standard TTL of %d days. %s", params.DefaultTTLDays, ttlResult.Message),
}, nil
}
@@ -1734,7 +1734,7 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *m
uuidWithHyphen := valuer.GenerateUUID()
uuid := strings.Replace(uuidWithHyphen.String(), "-", "", -1)
if params.Type != constants.LogsTTL {
if params.Type != retentiontypes.LogsTTL {
return nil, errorsV2.Newf(errorsV2.TypeInternal, errorsV2.CodeInternal, "custom retention TTL only supported for logs")
}
@@ -1765,7 +1765,7 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *m
if apiErr != nil {
return nil, errorsV2.Newf(errorsV2.TypeInternal, errorsV2.CodeInternal, "error in processing custom_retention_ttl_status check sql query")
}
if statusItem.Status == constants.StatusPending {
if statusItem.Status == retentiontypes.TTLSettingStatusPending {
return nil, errorsV2.Newf(errorsV2.TypeInternal, errorsV2.CodeInternal, "custom retention TTL is already running")
}
}
@@ -1851,7 +1851,7 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *m
TableName: tableName,
TTL: params.DefaultTTLDays,
Condition: string(ttlConditionsJSON),
Status: constants.StatusPending,
Status: retentiontypes.TTLSettingStatusPending,
ColdStorageTTL: coldStorageDuration,
OrgID: orgID,
}
@@ -1867,7 +1867,7 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *m
err := r.setColdStorage(ctx, tableName, params.ColdStorageVolume)
if err != nil {
r.logger.Error("error in setting cold storage", errorsV2.Attr(err))
r.updateCustomRetentionTTLStatus(ctx, orgID, tableName, constants.StatusFailed)
r.updateCustomRetentionTTLStatus(ctx, orgID, tableName, retentiontypes.TTLSettingStatusFailed)
return nil, errorsV2.Wrapf(err.Err, errorsV2.TypeInternal, errorsV2.CodeInternal, "error setting cold storage for table %s", tableName)
}
}
@@ -1876,21 +1876,21 @@ func (r *ClickHouseReader) SetTTLV2(ctx context.Context, orgID string, params *m
r.logger.Debug("Executing custom retention TTL request: ", "request", query, "step", i+1)
if err := r.db.Exec(ctx, query); err != nil {
r.logger.Error("error while setting custom retention ttl", errorsV2.Attr(err))
r.updateCustomRetentionTTLStatus(ctx, orgID, tableName, constants.StatusFailed)
r.updateCustomRetentionTTLStatus(ctx, orgID, tableName, retentiontypes.TTLSettingStatusFailed)
return nil, errorsV2.Wrapf(err, errorsV2.TypeInternal, errorsV2.CodeInternal, "error setting custom retention TTL for table %s, query: %s", tableName, query)
}
}
r.updateCustomRetentionTTLStatus(ctx, orgID, tableName, constants.StatusSuccess)
r.updateCustomRetentionTTLStatus(ctx, orgID, tableName, retentiontypes.TTLSettingStatusSuccess)
}
return &model.CustomRetentionTTLResponse{
return &retentiontypes.CustomRetentionTTLResponse{
Message: "custom retention TTL has been successfully set up",
}, nil
}
// New method to build multiIf expressions with support for multiple AND conditions
func (r *ClickHouseReader) buildMultiIfExpression(ttlConditions []model.CustomRetentionRule, defaultTTLDays int, isResourceTable bool) string {
func (r *ClickHouseReader) buildMultiIfExpression(ttlConditions []retentiontypes.CustomRetentionRule, defaultTTLDays int, isResourceTable bool) string {
var conditions []string
for i, rule := range ttlConditions {
@@ -1962,7 +1962,7 @@ func (r *ClickHouseReader) buildMultiIfExpression(ttlConditions []model.CustomRe
return result
}
func (r *ClickHouseReader) GetCustomRetentionTTL(ctx context.Context, orgID string) (*model.GetCustomRetentionTTLResponse, error) {
func (r *ClickHouseReader) GetCustomRetentionTTL(ctx context.Context, orgID string) (*retentiontypes.GetCustomRetentionTTLResponse, error) {
// Check if V2 (custom retention) is supported
hasCustomRetention, err := r.hasCustomRetentionColumn(ctx)
if err != nil {
@@ -1971,7 +1971,7 @@ func (r *ClickHouseReader) GetCustomRetentionTTL(ctx context.Context, orgID stri
hasCustomRetention = false
}
response := &model.GetCustomRetentionTTLResponse{}
response := &retentiontypes.GetCustomRetentionTTLResponse{}
if hasCustomRetention {
// V2 - Custom retention is supported
@@ -1994,19 +1994,19 @@ func (r *ClickHouseReader) GetCustomRetentionTTL(ctx context.Context, orgID stri
if err == sql.ErrNoRows {
// No V2 configuration found, return defaults
response.DefaultTTLDays = 15
response.TTLConditions = []model.CustomRetentionRule{}
response.Status = constants.StatusSuccess
response.DefaultTTLDays = retentiontypes.DefaultLogsRetentionDays
response.TTLConditions = []retentiontypes.CustomRetentionRule{}
response.Status = retentiontypes.TTLSettingStatusSuccess
response.ColdStorageTTLDays = -1
return response, nil
}
// Parse TTL conditions from Condition
var ttlConditions []model.CustomRetentionRule
var ttlConditions []retentiontypes.CustomRetentionRule
if customTTL.Condition != "" {
if err := json.Unmarshal([]byte(customTTL.Condition), &ttlConditions); err != nil {
r.logger.Error("Error parsing TTL conditions", errorsV2.Attr(err))
ttlConditions = []model.CustomRetentionRule{}
ttlConditions = []retentiontypes.CustomRetentionRule{}
}
}
@@ -2020,8 +2020,8 @@ func (r *ClickHouseReader) GetCustomRetentionTTL(ctx context.Context, orgID stri
response.Version = "v1"
// Get V1 TTL configuration
ttlParams := &model.GetTTLParams{
Type: constants.LogsTTL,
ttlParams := &retentiontypes.GetTTLParams{
Type: retentiontypes.LogsTTL,
}
ttlResult, apiErr := r.GetTTL(ctx, orgID, ttlParams)
@@ -2041,7 +2041,7 @@ func (r *ClickHouseReader) GetCustomRetentionTTL(ctx context.Context, orgID stri
}
// For V1, we don't have TTL conditions
response.TTLConditions = []model.CustomRetentionRule{}
response.TTLConditions = []retentiontypes.CustomRetentionRule{}
}
return response, nil
@@ -2081,7 +2081,7 @@ func (r *ClickHouseReader) updateCustomRetentionTTLStatus(ctx context.Context, o
}
// Enhanced validation function with duplicate detection and efficient key validation
func (r *ClickHouseReader) validateTTLConditions(ctx context.Context, ttlConditions []model.CustomRetentionRule) error {
func (r *ClickHouseReader) validateTTLConditions(ctx context.Context, ttlConditions []retentiontypes.CustomRetentionRule) error {
ctx = ctxtypes.NewContextWithCommentVals(ctx, map[string]string{
instrumentationtypes.CodeNamespace: "clickhouse-reader",
instrumentationtypes.CodeFunctionName: "validateTTLConditions",
@@ -2185,16 +2185,16 @@ func (r *ClickHouseReader) validateTTLConditions(ctx context.Context, ttlConditi
// SetTTL sets the TTL for traces or metrics or logs tables.
// This is an async API which creates goroutines to set TTL.
// Status of TTL update is tracked with ttl_status table in sqlite db.
func (r *ClickHouseReader) SetTTL(ctx context.Context, orgID string, params *model.TTLParams) (*model.SetTTLResponseItem, *model.ApiError) {
func (r *ClickHouseReader) SetTTL(ctx context.Context, orgID string, params *retentiontypes.TTLParams) (*retentiontypes.SetTTLResponseItem, *model.ApiError) {
// Keep only latest 100 transactions/requests
r.deleteTtlTransactions(ctx, orgID, 100)
switch params.Type {
case constants.TraceTTL:
case retentiontypes.TraceTTL:
return r.setTTLTraces(ctx, orgID, params)
case constants.MetricsTTL:
case retentiontypes.MetricsTTL:
return r.setTTLMetrics(ctx, orgID, params)
case constants.LogsTTL:
case retentiontypes.LogsTTL:
return r.setTTLLogs(ctx, orgID, params)
default:
return nil, &model.ApiError{Typ: model.ErrorExec, Err: fmt.Errorf("error while setting ttl. ttl type should be <metrics|traces|logs>, got %v", params.Type)}
@@ -2202,7 +2202,7 @@ func (r *ClickHouseReader) SetTTL(ctx context.Context, orgID string, params *mod
}
func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, params *model.TTLParams) (*model.SetTTLResponseItem, *model.ApiError) {
func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, params *retentiontypes.TTLParams) (*retentiontypes.SetTTLResponseItem, *model.ApiError) {
ctx = ctxtypes.NewContextWithCommentVals(ctx, map[string]string{
instrumentationtypes.TelemetrySignal: telemetrytypes.SignalMetrics.StringValue(),
instrumentationtypes.CodeNamespace: "clickhouse-reader",
@@ -2231,7 +2231,7 @@ func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, para
if apiErr != nil {
return nil, &model.ApiError{Typ: model.ErrorExec, Err: fmt.Errorf("error in processing ttl_status check sql query")}
}
if statusItem.Status == constants.StatusPending {
if statusItem.Status == retentiontypes.TTLSettingStatusPending {
return nil, &model.ApiError{Typ: model.ErrorConflict, Err: fmt.Errorf("TTL is already running")}
}
}
@@ -2247,7 +2247,7 @@ func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, para
TransactionID: uuid,
TableName: tableName,
TTL: int(params.DelDuration),
-Status: constants.StatusPending,
+Status: retentiontypes.TTLSettingStatusPending,
ColdStorageTTL: coldStorageDuration,
OrgID: orgID,
}
@@ -2285,7 +2285,7 @@ func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, para
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Set("updated_at = ?", time.Now()).
-Set("status = ?", constants.StatusFailed).
+Set("status = ?", retentiontypes.TTLSettingStatusFailed).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -2306,7 +2306,7 @@ func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, para
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Set("updated_at = ?", time.Now()).
-Set("status = ?", constants.StatusFailed).
+Set("status = ?", retentiontypes.TTLSettingStatusFailed).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -2321,7 +2321,7 @@ func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, para
NewUpdate().
Model(new(retentiontypes.TTLSetting)).
Set("updated_at = ?", time.Now()).
-Set("status = ?", constants.StatusSuccess).
+Set("status = ?", retentiontypes.TTLSettingStatusSuccess).
Where("id = ?", statusItem.ID.StringValue()).
Exec(ctx)
if dbErr != nil {
@@ -2332,7 +2332,7 @@ func (r *ClickHouseReader) setTTLMetrics(ctx context.Context, orgID string, para
for _, tableName := range tableNames {
go metricTTL(tableName)
}
-return &model.SetTTLResponseItem{Message: "move ttl has been successfully set up"}, nil
+return &retentiontypes.SetTTLResponseItem{Message: "move ttl has been successfully set up"}, nil
}
func (r *ClickHouseReader) deleteTtlTransactions(ctx context.Context, orgID string, numberOfTransactionsStore int) {
@@ -2389,7 +2389,7 @@ func (r *ClickHouseReader) checkTTLStatusItem(ctx context.Context, orgID string,
// getTTLQueryStatus fetches ttl_status table status from DB
func (r *ClickHouseReader) getTTLQueryStatus(ctx context.Context, orgID string, tableNameArray []string) (string, *model.ApiError) {
failFlag := false
-status := constants.StatusSuccess
+status := retentiontypes.TTLSettingStatusSuccess
for _, tableName := range tableNameArray {
statusItem, apiErr := r.checkTTLStatusItem(ctx, orgID, tableName)
emptyStatusStruct := new(retentiontypes.TTLSetting)
@@ -2399,16 +2399,16 @@ func (r *ClickHouseReader) getTTLQueryStatus(ctx context.Context, orgID string,
if apiErr != nil {
return "", &model.ApiError{Typ: model.ErrorExec, Err: fmt.Errorf("error in processing ttl_status check sql query")}
}
-if statusItem.Status == constants.StatusPending && statusItem.UpdatedAt.Unix()-time.Now().Unix() < 3600 {
-status = constants.StatusPending
+if statusItem.Status == retentiontypes.TTLSettingStatusPending && statusItem.UpdatedAt.Unix()-time.Now().Unix() < 3600 {
+status = retentiontypes.TTLSettingStatusPending
return status, nil
}
-if statusItem.Status == constants.StatusFailed {
+if statusItem.Status == retentiontypes.TTLSettingStatusFailed {
failFlag = true
}
}
if failFlag {
-status = constants.StatusFailed
+status = retentiontypes.TTLSettingStatusFailed
}
return status, nil
@@ -2461,7 +2461,7 @@ func getLocalTableNameArray(tableNames []string) []string {
}
// GetTTL returns current ttl, expected ttl and past setTTL status for metrics/traces/logs.
-func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *model.GetTTLParams) (*model.GetTTLResponseItem, *model.ApiError) {
+func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *retentiontypes.GetTTLParams) (*retentiontypes.GetTTLResponseItem, *model.ApiError) {
ctx = ctxtypes.NewContextWithCommentVals(ctx, map[string]string{
instrumentationtypes.CodeNamespace: "clickhouse-reader",
@@ -2496,8 +2496,8 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
return delTTL, moveTTL
}
-getMetricsTTL := func() (*model.DBResponseTTL, *model.ApiError) {
-var dbResp []model.DBResponseTTL
+getMetricsTTL := func() (*retentiontypes.DBResponseTTL, *model.ApiError) {
+var dbResp []retentiontypes.DBResponseTTL
query := fmt.Sprintf("SELECT engine_full FROM system.tables WHERE name='%v'", signozSampleLocalTableName)
@@ -2514,8 +2514,8 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
}
}
-getTracesTTL := func() (*model.DBResponseTTL, *model.ApiError) {
-var dbResp []model.DBResponseTTL
+getTracesTTL := func() (*retentiontypes.DBResponseTTL, *model.ApiError) {
+var dbResp []retentiontypes.DBResponseTTL
query := fmt.Sprintf("SELECT engine_full FROM system.tables WHERE name='%v' AND database='%v'", r.traceLocalTableName, signozTraceDBName)
@@ -2532,8 +2532,8 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
}
}
-getLogsTTL := func() (*model.DBResponseTTL, *model.ApiError) {
-var dbResp []model.DBResponseTTL
+getLogsTTL := func() (*retentiontypes.DBResponseTTL, *model.ApiError) {
+var dbResp []retentiontypes.DBResponseTTL
query := fmt.Sprintf("SELECT engine_full FROM system.tables WHERE name='%v' AND database='%v'", r.logsLocalTableName, r.logsDB)
@@ -2551,7 +2551,7 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
}
switch ttlParams.Type {
-case constants.TraceTTL:
+case retentiontypes.TraceTTL:
tableNameArray := []string{
r.TraceDB + "." + r.traceTableName,
r.TraceDB + "." + r.traceResourceTableV3,
@@ -2579,9 +2579,9 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
}
delTTL, moveTTL := parseTTL(dbResp.EngineFull)
-return &model.GetTTLResponseItem{TracesTime: delTTL, TracesMoveTime: moveTTL, ExpectedTracesTime: ttlQuery.TTL, ExpectedTracesMoveTime: ttlQuery.ColdStorageTTL, Status: status}, nil
+return &retentiontypes.GetTTLResponseItem{TracesTime: delTTL, TracesMoveTime: moveTTL, ExpectedTracesTime: ttlQuery.TTL, ExpectedTracesMoveTime: ttlQuery.ColdStorageTTL, Status: status}, nil
-case constants.MetricsTTL:
+case retentiontypes.MetricsTTL:
tableNameArray := []string{signozMetricDBName + "." + signozSampleTableName}
tableNameArray = getLocalTableNameArray(tableNameArray)
status, apiErr := r.getTTLQueryStatus(ctx, orgID, tableNameArray)
@@ -2602,9 +2602,9 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
}
delTTL, moveTTL := parseTTL(dbResp.EngineFull)
-return &model.GetTTLResponseItem{MetricsTime: delTTL, MetricsMoveTime: moveTTL, ExpectedMetricsTime: ttlQuery.TTL, ExpectedMetricsMoveTime: ttlQuery.ColdStorageTTL, Status: status}, nil
+return &retentiontypes.GetTTLResponseItem{MetricsTime: delTTL, MetricsMoveTime: moveTTL, ExpectedMetricsTime: ttlQuery.TTL, ExpectedMetricsMoveTime: ttlQuery.ColdStorageTTL, Status: status}, nil
-case constants.LogsTTL:
+case retentiontypes.LogsTTL:
tableNameArray := []string{r.logsDB + "." + r.logsTableName}
tableNameArray = getLocalTableNameArray(tableNameArray)
status, apiErr := r.getTTLQueryStatus(ctx, orgID, tableNameArray)
@@ -2625,7 +2625,7 @@ func (r *ClickHouseReader) GetTTL(ctx context.Context, orgID string, ttlParams *
}
delTTL, moveTTL := parseTTL(dbResp.EngineFull)
-return &model.GetTTLResponseItem{LogsTime: delTTL, LogsMoveTime: moveTTL, ExpectedLogsTime: ttlQuery.TTL, ExpectedLogsMoveTime: ttlQuery.ColdStorageTTL, Status: status}, nil
+return &retentiontypes.GetTTLResponseItem{LogsTime: delTTL, LogsMoveTime: moveTTL, ExpectedLogsTime: ttlQuery.TTL, ExpectedLogsMoveTime: ttlQuery.ColdStorageTTL, Status: status}, nil
default:
return nil, &model.ApiError{Typ: model.ErrorExec, Err: fmt.Errorf("error while getting ttl. ttl type should be metrics|traces|logs, got %v",


@@ -34,6 +34,7 @@ import (
"github.com/SigNoz/signoz/pkg/query-service/app/cloudintegrations/services"
"github.com/SigNoz/signoz/pkg/query-service/app/integrations"
"github.com/SigNoz/signoz/pkg/signoz"
+"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/gorilla/mux"
@@ -1677,7 +1678,7 @@ func (aH *APIHandler) setCustomRetentionTTL(w http.ResponseWriter, r *http.Reque
return
}
-var params model.CustomRetentionTTLParams
+var params retentiontypes.CustomRetentionTTLParams
if err := json.NewDecoder(r.Body).Decode(&params); err != nil {
render.Error(w, errorsV2.Newf(errorsV2.TypeInvalidInput, errorsV2.CodeInvalidInput, "Invalid data"))
return


@@ -40,6 +40,7 @@ import (
"github.com/SigNoz/signoz/pkg/query-service/postprocess"
"github.com/SigNoz/signoz/pkg/query-service/utils"
querytemplate "github.com/SigNoz/signoz/pkg/query-service/utils/queryTemplate"
+"github.com/SigNoz/signoz/pkg/types/retentiontypes"
chVariables "github.com/SigNoz/signoz/pkg/variables/clickhouse"
)
@@ -419,7 +420,7 @@ func parseTime(param string, r *http.Request) (*time.Time, error) {
}
-func parseTTLParams(r *http.Request) (*model.TTLParams, error) {
+func parseTTLParams(r *http.Request) (*retentiontypes.TTLParams, error) {
// make sure either of the query params are present
typeTTL := r.URL.Query().Get("type")
@@ -432,7 +433,7 @@ func parseTTLParams(r *http.Request) (*model.TTLParams, error) {
}
// Validate the type parameter
-if typeTTL != baseconstants.TraceTTL && typeTTL != baseconstants.MetricsTTL && typeTTL != baseconstants.LogsTTL {
+if typeTTL != retentiontypes.TraceTTL && typeTTL != retentiontypes.MetricsTTL && typeTTL != retentiontypes.LogsTTL {
return nil, fmt.Errorf("type param should be metrics|traces|logs, got %v", typeTTL)
}
@@ -455,7 +456,7 @@ func parseTTLParams(r *http.Request) (*model.TTLParams, error) {
}
}
-return &model.TTLParams{
+return &retentiontypes.TTLParams{
Type: typeTTL,
DelDuration: int64(durationParsed.Seconds()),
ColdStorageVolume: coldStorage,
@@ -463,7 +464,7 @@ func parseTTLParams(r *http.Request) (*model.TTLParams, error) {
}, nil
}
-func parseGetTTL(r *http.Request) (*model.GetTTLParams, error) {
+func parseGetTTL(r *http.Request) (*retentiontypes.GetTTLParams, error) {
typeTTL := r.URL.Query().Get("type")
@@ -471,12 +472,12 @@ func parseGetTTL(r *http.Request) (*model.GetTTLParams, error) {
return nil, fmt.Errorf("type param cannot be empty from the query")
} else {
// Validate the type parameter
-if typeTTL != baseconstants.TraceTTL && typeTTL != baseconstants.MetricsTTL && typeTTL != baseconstants.LogsTTL {
+if typeTTL != retentiontypes.TraceTTL && typeTTL != retentiontypes.MetricsTTL && typeTTL != retentiontypes.LogsTTL {
return nil, fmt.Errorf("type param should be metrics|traces|logs, got %v", typeTTL)
}
}
-return &model.GetTTLParams{Type: typeTTL}, nil
+return &retentiontypes.GetTTLParams{Type: typeTTL}, nil
}
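Both parseTTLParams and parseGetTTL gate the `type` query param against the same three constants before doing anything else. A minimal standalone sketch of that guard (constant values are copied from this diff; the `validTTLType` helper name is hypothetical, not part of the PR):

```go
package main

import "fmt"

// Constant values copied from the retentiontypes block in this diff.
const (
	TraceTTL   = "traces"
	MetricsTTL = "metrics"
	LogsTTL    = "logs"
)

// validTTLType mirrors the validation in parseTTLParams and parseGetTTL:
// anything other than the three known signal types is rejected.
func validTTLType(t string) error {
	if t != TraceTTL && t != MetricsTTL && t != LogsTTL {
		return fmt.Errorf("type param should be metrics|traces|logs, got %v", t)
	}
	return nil
}

func main() {
	fmt.Println(validTTLType("logs"))     // accepted
	fmt.Println(validTTLType("profiles")) // rejected with an error
}
```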
func parseAggregateAttributeRequest(r *http.Request) (*v3.AggregateAttributeRequest, error) {


@@ -19,10 +19,6 @@ const (
const MaxAllowedPointsInTimeSeries = 300
-const TraceTTL = "traces"
-const MetricsTTL = "metrics"
-const LogsTTL = "logs"
const SpanSearchScopeRoot = "isroot"
const SpanSearchScopeEntryPoint = "isentrypoint"
const OrderBySpanCount = "span_count"


@@ -7,6 +7,7 @@ import (
"github.com/SigNoz/signoz/pkg/query-service/model"
v3 "github.com/SigNoz/signoz/pkg/query-service/model/v3"
"github.com/SigNoz/signoz/pkg/query-service/querycache"
+"github.com/SigNoz/signoz/pkg/types/retentiontypes"
"github.com/SigNoz/signoz/pkg/valuer"
"github.com/prometheus/prometheus/promql"
"github.com/prometheus/prometheus/util/stats"
@@ -23,8 +24,8 @@ type Reader interface {
GetServicesList(ctx context.Context) (*[]string, error)
GetDependencyGraph(ctx context.Context, query *model.GetServicesParams) (*[]model.ServiceMapDependencyResponseItem, error)
-GetTTL(ctx context.Context, orgID string, ttlParams *model.GetTTLParams) (*model.GetTTLResponseItem, *model.ApiError)
-GetCustomRetentionTTL(ctx context.Context, orgID string) (*model.GetCustomRetentionTTLResponse, error)
+GetTTL(ctx context.Context, orgID string, ttlParams *retentiontypes.GetTTLParams) (*retentiontypes.GetTTLResponseItem, *model.ApiError)
+GetCustomRetentionTTL(ctx context.Context, orgID string) (*retentiontypes.GetCustomRetentionTTLResponse, error)
// GetDisks returns a list of disks configured in the underlying DB. It is supported by
// clickhouse only.
@@ -46,8 +47,8 @@ type Reader interface {
GetFlamegraphSpansForTrace(ctx context.Context, orgID valuer.UUID, traceID string, req *model.GetFlamegraphSpansForTraceParams) (*model.GetFlamegraphSpansForTraceResponse, error)
// Setter Interfaces
-SetTTL(ctx context.Context, orgID string, ttlParams *model.TTLParams) (*model.SetTTLResponseItem, *model.ApiError)
-SetTTLV2(ctx context.Context, orgID string, params *model.CustomRetentionTTLParams) (*model.CustomRetentionTTLResponse, error)
+SetTTL(ctx context.Context, orgID string, ttlParams *retentiontypes.TTLParams) (*retentiontypes.SetTTLResponseItem, *model.ApiError)
+SetTTLV2(ctx context.Context, orgID string, params *retentiontypes.CustomRetentionTTLParams) (*retentiontypes.CustomRetentionTTLResponse, error)
FetchTemporality(ctx context.Context, orgID valuer.UUID, metricNames []string) (map[string]map[v3.Temporality]bool, error)
GetMetricAggregateAttributes(ctx context.Context, orgID valuer.UUID, req *v3.AggregateAttributeRequest, skipSignozMetrics bool) (*v3.AggregateAttributeResponse, error)


@@ -404,56 +404,6 @@ type TagKey struct {
Type TagDataType `json:"type"`
}
-type TTLParams struct {
-Type string // It can be one of {traces, metrics}.
-ColdStorageVolume string // Name of the cold storage volume.
-ToColdStorageDuration int64 // Seconds after which data will be moved to cold storage.
-DelDuration int64 // Seconds after which data will be deleted.
-}
-type CustomRetentionTTLParams struct {
-Type string `json:"type"`
-DefaultTTLDays int `json:"defaultTTLDays"`
-TTLConditions []CustomRetentionRule `json:"ttlConditions"`
-ColdStorageVolume string `json:"coldStorageVolume,omitempty"`
-ToColdStorageDurationDays int64 `json:"coldStorageDurationDays,omitempty"`
-}
-type CustomRetentionRule struct {
-Filters []FilterCondition `json:"conditions"`
-TTLDays int `json:"ttlDays"`
-}
-type FilterCondition struct {
-Key string `json:"key"`
-Values []string `json:"values"`
-}
-type GetCustomRetentionTTLResponse struct {
-Version string `json:"version"`
-Status string `json:"status"`
-// V1 fields
-// LogsTime int `json:"logs_ttl_duration_hrs,omitempty"`
-// LogsMoveTime int `json:"logs_move_ttl_duration_hrs,omitempty"`
-ExpectedLogsTime int `json:"expected_logs_ttl_duration_hrs,omitempty"`
-ExpectedLogsMoveTime int `json:"expected_logs_move_ttl_duration_hrs,omitempty"`
-// V2 fields
-DefaultTTLDays int `json:"default_ttl_days,omitempty"`
-TTLConditions []CustomRetentionRule `json:"ttl_conditions,omitempty"`
-ColdStorageVolume string `json:"cold_storage_volume,omitempty"`
-ColdStorageTTLDays int `json:"cold_storage_ttl_days,omitempty"`
-}
-type CustomRetentionTTLResponse struct {
-Message string `json:"message"`
-}
-type GetTTLParams struct {
-Type string
-}
type ListErrorsParams struct {
StartStr string `json:"start"`
EndStr string `json:"end"`


@@ -150,16 +150,6 @@ type RuleResponseItem struct {
Data string `json:"data" db:"data"`
}
-type TTLStatusItem struct {
-Id int `json:"id" db:"id"`
-UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
-CreatedAt time.Time `json:"created_at" db:"created_at"`
-TableName string `json:"table_name" db:"table_name"`
-TTL int `json:"ttl" db:"ttl"`
-Status string `json:"status" db:"status"`
-ColdStorageTtl int `json:"cold_storage_ttl" db:"cold_storage_ttl"`
-}
type ChannelItem struct {
Id int `json:"id" db:"id"`
CreatedAt time.Time `json:"created_at" db:"created_at"`
@@ -462,35 +452,11 @@ type SpanAggregatesDBResponseItem struct {
GroupBy string `ch:"groupBy"`
}
-type SetTTLResponseItem struct {
-Message string `json:"message"`
-}
type DiskItem struct {
Name string `json:"name,omitempty" ch:"name"`
Type string `json:"type,omitempty" ch:"type"`
}
-type DBResponseTTL struct {
-EngineFull string `ch:"engine_full"`
-}
-type GetTTLResponseItem struct {
-MetricsTime int `json:"metrics_ttl_duration_hrs,omitempty"`
-MetricsMoveTime int `json:"metrics_move_ttl_duration_hrs,omitempty"`
-TracesTime int `json:"traces_ttl_duration_hrs,omitempty"`
-TracesMoveTime int `json:"traces_move_ttl_duration_hrs,omitempty"`
-LogsTime int `json:"logs_ttl_duration_hrs,omitempty"`
-LogsMoveTime int `json:"logs_move_ttl_duration_hrs,omitempty"`
-ExpectedMetricsTime int `json:"expected_metrics_ttl_duration_hrs,omitempty"`
-ExpectedMetricsMoveTime int `json:"expected_metrics_move_ttl_duration_hrs,omitempty"`
-ExpectedTracesTime int `json:"expected_traces_ttl_duration_hrs,omitempty"`
-ExpectedTracesMoveTime int `json:"expected_traces_move_ttl_duration_hrs,omitempty"`
-ExpectedLogsTime int `json:"expected_logs_ttl_duration_hrs,omitempty"`
-ExpectedLogsMoveTime int `json:"expected_logs_move_ttl_duration_hrs,omitempty"`
-Status string `json:"status"`
-}
type DBResponseServiceName struct {
ServiceName string `ch:"serviceName"`
Count uint64 `ch:"count"`


@@ -23,6 +23,7 @@ import (
"github.com/SigNoz/signoz/pkg/global"
"github.com/SigNoz/signoz/pkg/identn"
"github.com/SigNoz/signoz/pkg/instrumentation"
+"github.com/SigNoz/signoz/pkg/meterreporter"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration"
"github.com/SigNoz/signoz/pkg/modules/inframonitoring"
"github.com/SigNoz/signoz/pkg/modules/metricsexplorer"
@@ -135,6 +136,9 @@ type Config struct {
// Auditor config
Auditor auditor.Config `mapstructure:"auditor"`
+// MeterReporter config
+MeterReporter meterreporter.Config `mapstructure:"meterreporter"`
// CloudIntegration config
CloudIntegration cloudintegration.Config `mapstructure:"cloudintegration"`
@@ -175,6 +179,7 @@ func NewConfig(ctx context.Context, logger *slog.Logger, resolverConfig config.R
identn.NewConfigFactory(),
serviceaccount.NewConfigFactory(),
auditor.NewConfigFactory(),
+meterreporter.NewConfigFactory(),
cloudintegration.NewConfigFactory(),
tracedetail.NewConfigFactory(),
authz.NewConfigFactory(),


@@ -16,6 +16,7 @@ import (
"github.com/SigNoz/signoz/pkg/instrumentation/instrumentationtest"
"github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization/implorganization"
+"github.com/SigNoz/signoz/pkg/modules/retention/implretention"
"github.com/SigNoz/signoz/pkg/modules/user/impluser"
"github.com/SigNoz/signoz/pkg/querier"
"github.com/SigNoz/signoz/pkg/queryparser"
@@ -52,7 +53,8 @@ func TestNewHandlers(t *testing.T) {
userRoleStore := impluser.NewUserRoleStore(sqlstore, providerSettings)
userGetter := impluser.NewGetter(impluser.NewStore(sqlstore, providerSettings), userRoleStore, flagger)
-modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, nil, nil, nil, nil, nil, nil, nil, queryParser, Config{}, dashboardModule, userGetter, userRoleStore, nil, nil, flagger)
+retentionGetter := implretention.NewGetter(implretention.NewStore(sqlstore))
+modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, nil, nil, nil, nil, nil, nil, nil, queryParser, Config{}, dashboardModule, userGetter, userRoleStore, nil, nil, retentionGetter, flagger)
querierHandler := querier.NewHandler(providerSettings, nil, nil)
registryHandler := factory.NewHandler(nil)


@@ -29,6 +29,7 @@ import (
"github.com/SigNoz/signoz/pkg/modules/quickfilter/implquickfilter"
"github.com/SigNoz/signoz/pkg/modules/rawdataexport"
"github.com/SigNoz/signoz/pkg/modules/rawdataexport/implrawdataexport"
+"github.com/SigNoz/signoz/pkg/modules/retention"
"github.com/SigNoz/signoz/pkg/modules/rulestatehistory"
"github.com/SigNoz/signoz/pkg/modules/rulestatehistory/implrulestatehistory"
"github.com/SigNoz/signoz/pkg/modules/savedview"
@@ -63,6 +64,7 @@ type Modules struct {
Preference preference.Module
UserSetter user.Setter
UserGetter user.Getter
+RetentionGetter retention.Getter
SavedView savedview.Module
Apdex apdex.Module
Dashboard dashboard.Module
@@ -103,6 +105,7 @@ func NewModules(
userRoleStore authtypes.UserRoleStore,
serviceAccount serviceaccount.Module,
cloudIntegrationModule cloudintegration.Module,
+retentionGetter retention.Getter,
fl flagger.Flagger,
) Modules {
quickfilter := implquickfilter.NewModule(implquickfilter.NewStore(sqlstore))
@@ -119,6 +122,7 @@ func NewModules(
Dashboard: dashboard,
UserSetter: userSetter,
UserGetter: userGetter,
+RetentionGetter: retentionGetter,
QuickFilter: quickfilter,
TraceFunnel: impltracefunnel.NewModule(impltracefunnel.NewStore(sqlstore)),
RawDataExport: implrawdataexport.NewModule(querier),


@@ -16,6 +16,7 @@ import (
"github.com/SigNoz/signoz/pkg/modules/cloudintegration/implcloudintegration"
"github.com/SigNoz/signoz/pkg/modules/dashboard/impldashboard"
"github.com/SigNoz/signoz/pkg/modules/organization/implorganization"
+"github.com/SigNoz/signoz/pkg/modules/retention/implretention"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount/implserviceaccount"
"github.com/SigNoz/signoz/pkg/modules/user/impluser"
@@ -56,7 +57,8 @@ func TestNewModules(t *testing.T) {
serviceAccount := implserviceaccount.NewModule(implserviceaccount.NewStore(sqlstore), nil, nil, nil, providerSettings, serviceaccount.Config{})
-modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, nil, nil, nil, nil, nil, nil, nil, queryParser, Config{}, dashboardModule, userGetter, userRoleStore, serviceAccount, implcloudintegration.NewModule(), flagger)
+retentionGetter := implretention.NewGetter(implretention.NewStore(sqlstore))
+modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, nil, nil, nil, nil, nil, nil, nil, queryParser, Config{}, dashboardModule, userGetter, userRoleStore, serviceAccount, implcloudintegration.NewModule(), retentionGetter, flagger)
reflectVal := reflect.ValueOf(modules)
for i := 0; i < reflectVal.NumField(); i++ {


@@ -28,6 +28,8 @@ import (
"github.com/SigNoz/signoz/pkg/identn/apikeyidentn"
"github.com/SigNoz/signoz/pkg/identn/impersonationidentn"
"github.com/SigNoz/signoz/pkg/identn/tokenizeridentn"
+"github.com/SigNoz/signoz/pkg/meterreporter"
+"github.com/SigNoz/signoz/pkg/meterreporter/noopmeterreporter"
"github.com/SigNoz/signoz/pkg/modules/authdomain/implauthdomain"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/organization/implorganization"
@@ -319,6 +321,12 @@ func NewAuditorProviderFactories() factory.NamedMap[factory.ProviderFactory[audi
)
}
+func NewMeterReporterProviderFactories() factory.NamedMap[factory.ProviderFactory[meterreporter.Reporter, meterreporter.Config]] {
+return factory.MustNewNamedMap(
+noopmeterreporter.NewFactory(),
+)
+}
func NewFlaggerProviderFactories(registry featuretypes.Registry) factory.NamedMap[factory.ProviderFactory[flagger.FlaggerProvider, flagger.Config]] {
return factory.MustNewNamedMap(
configflagger.NewFactory(registry),


@@ -22,10 +22,13 @@ import (
"github.com/SigNoz/signoz/pkg/identn"
"github.com/SigNoz/signoz/pkg/instrumentation"
"github.com/SigNoz/signoz/pkg/licensing"
+"github.com/SigNoz/signoz/pkg/meterreporter"
"github.com/SigNoz/signoz/pkg/modules/cloudintegration"
"github.com/SigNoz/signoz/pkg/modules/dashboard"
"github.com/SigNoz/signoz/pkg/modules/organization"
"github.com/SigNoz/signoz/pkg/modules/organization/implorganization"
+"github.com/SigNoz/signoz/pkg/modules/retention"
+"github.com/SigNoz/signoz/pkg/modules/retention/implretention"
"github.com/SigNoz/signoz/pkg/modules/rulestatehistory"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount"
"github.com/SigNoz/signoz/pkg/modules/serviceaccount/implserviceaccount"
@@ -84,6 +87,7 @@ type SigNoz struct {
Flagger flagger.Flagger
Gateway gateway.Gateway
Auditor auditor.Auditor
+MeterReporter meterreporter.Reporter
}
func New(
@@ -104,6 +108,7 @@ func New(
dashboardModuleCallback func(sqlstore.SQLStore, factory.ProviderSettings, analytics.Analytics, organization.Getter, queryparser.QueryParser, querier.Querier, licensing.Licensing) dashboard.Module,
gatewayProviderFactory func(licensing.Licensing) factory.ProviderFactory[gateway.Gateway, gateway.Config],
auditorProviderFactories func(licensing.Licensing) factory.NamedMap[factory.ProviderFactory[auditor.Auditor, auditor.Config]],
+meterReporterProviderFactories func(context.Context, factory.ProviderSettings, flagger.Flagger, licensing.Licensing, telemetrystore.TelemetryStore, retention.Getter, organization.Getter, zeus.Zeus) (factory.NamedMap[factory.ProviderFactory[meterreporter.Reporter, meterreporter.Config]], string),
querierHandlerCallback func(factory.ProviderSettings, querier.Querier, analytics.Analytics) querier.Handler,
cloudIntegrationCallback func(sqlstore.SQLStore, global.Global, zeus.Zeus, gateway.Gateway, licensing.Licensing, serviceaccount.Module, cloudintegration.Config) (cloudintegration.Module, error),
rulerProviderFactories func(cache.Cache, alertmanager.Alertmanager, sqlstore.SQLStore, telemetrystore.TelemetryStore, telemetrytypes.MetadataStore, prometheus.Prometheus, organization.Getter, rulestatehistory.Module, querier.Querier, queryparser.QueryParser) factory.NamedMap[factory.ProviderFactory[ruler.Ruler, ruler.Config]],
@@ -228,6 +233,8 @@ func New(
return nil, err
}
+retentionGetter := implretention.NewGetter(implretention.NewStore(sqlstore))
// Initialize prometheus from the available prometheus provider factories
prometheus, err := factory.NewProviderFromNamedMap(
ctx,
@@ -386,6 +393,13 @@ func New(
return nil, err
}
+// Initialize meter reporter from the variant-specific provider factories
+meterReporterFactories, meterReporterProvider := meterReporterProviderFactories(ctx, providerSettings, flagger, licensing, telemetrystore, retentionGetter, orgGetter, zeus)
+meterReporter, err := factory.NewProviderFromNamedMap(ctx, providerSettings, config.MeterReporter, meterReporterFactories, meterReporterProvider)
+if err != nil {
+return nil, err
+}
// Initialize authns
store := sqlauthnstore.NewStore(sqlstore)
authNs, err := authNsCallback(ctx, providerSettings, store, licensing)
@@ -441,7 +455,7 @@ func New(
}
// Initialize all modules
-modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, analytics, querier, telemetrystore, telemetryMetadataStore, authNs, authz, cache, queryParser, config, dashboard, userGetter, userRoleStore, serviceAccount, cloudIntegrationModule, flagger)
+modules := NewModules(sqlstore, tokenizer, emailing, providerSettings, orgGetter, alertmanager, analytics, querier, telemetrystore, telemetryMetadataStore, authNs, authz, cache, queryParser, config, dashboard, userGetter, userRoleStore, serviceAccount, cloudIntegrationModule, retentionGetter, flagger)
// Initialize ruler from the variant-specific provider factories
rulerInstance, err := factory.NewProviderFromNamedMap(ctx, providerSettings, config.Ruler, rulerProviderFactories(cache, alertmanager, sqlstore, telemetrystore, telemetryMetadataStore, prometheus, orgGetter, modules.RuleStateHistory, querier, queryParser), "signoz")
@@ -501,6 +515,7 @@ func New(
factory.NewNamedService(factory.MustNewName("authz"), authz),
factory.NewNamedService(factory.MustNewName("user"), userService, factory.MustNewName("authz")),
factory.NewNamedService(factory.MustNewName("auditor"), auditor),
+factory.NewNamedService(factory.MustNewName("meterreporter"), meterReporter, factory.MustNewName("licensing")),
factory.NewNamedService(factory.MustNewName("ruler"), rulerInstance),
)
if err != nil {
@@ -550,5 +565,6 @@ func New(
Flagger: flagger,
Gateway: gateway,
Auditor: auditor,
+MeterReporter: meterReporter,
}, nil
}


@@ -0,0 +1,12 @@
+package retentiontypes
+import (
+"context"
+"github.com/SigNoz/signoz/pkg/valuer"
+)
+type Store interface {
+// ListTTLSettingsByTableNameAndBeforeCreatedAt returns successful TTL settings before the given timestamp.
+ListTTLSettingsByTableNameAndBeforeCreatedAt(ctx context.Context, orgID valuer.UUID, tableName string, beforeMs int64) ([]*TTLSetting, error)
+}


@@ -1,10 +1,141 @@
package retentiontypes
import (
"encoding/json"
"time"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/types"
"github.com/uptrace/bun"
)
+const secondsPerDay = 24 * 60 * 60
+const (
+DefaultLogsRetentionDays = 15
+DefaultMetricsRetentionDays = 30
+DefaultTracesRetentionDays = 15
+)
+const (
+TraceTTL = "traces"
+MetricsTTL = "metrics"
+LogsTTL = "logs"
+)
+const (
+TTLSettingStatusPending = "pending"
+TTLSettingStatusFailed = "failed"
+TTLSettingStatusSuccess = "success"
+)
+// RetentionPolicySegment is a half-open time range using one retention policy.
+type RetentionPolicySegment struct {
+StartMs int64
+EndMs int64
+Rules []CustomRetentionRule
+DefaultDays int
+}
+// NewRetentionPolicySegment creates a retention policy segment for a half-open time range.
+func NewRetentionPolicySegment(startMs int64, endMs int64, rules []CustomRetentionRule, defaultDays int) *RetentionPolicySegment {
+return &RetentionPolicySegment{
+StartMs: startMs,
+EndMs: endMs,
+Rules: rules,
+DefaultDays: defaultDays,
+}
+}
+// BuildRetentionPolicySegmentsFromRows converts successful TTL settings into retention policy segments.
+func BuildRetentionPolicySegmentsFromRows(rows []*TTLSetting, fallbackDefaultDays int, startMs, endMs int64) ([]*RetentionPolicySegment, error) {
+if startMs >= endMs {
+return nil, nil
+}
+var activeAtStart *TTLSetting
+inWindow := make([]*TTLSetting, 0, len(rows))
+for _, row := range rows {
+rowMs := row.CreatedAt.UnixMilli()
+if rowMs <= startMs {
+activeAtStart = row
+continue
+}
+if rowMs >= endMs {
+continue
+}
+inWindow = append(inWindow, row)
+}
+activeRules, activeDefault, err := parseTTLSetting(activeAtStart, fallbackDefaultDays)
+if err != nil {
+return nil, err
+}
+segments := make([]*RetentionPolicySegment, 0, len(inWindow)+1)
+cursor := startMs
+for _, row := range inWindow {
+rowMs := row.CreatedAt.UnixMilli()
+if rowMs <= cursor {
+activeRules, activeDefault, err = parseTTLSetting(row, fallbackDefaultDays)
+if err != nil {
+return nil, err
+}
+continue
+}
+segments = append(segments, NewRetentionPolicySegment(cursor, rowMs, activeRules, activeDefault))
+cursor = rowMs
+activeRules, activeDefault, err = parseTTLSetting(row, fallbackDefaultDays)
+if err != nil {
+return nil, err
+}
+}
+if cursor < endMs {
+segments = append(segments, NewRetentionPolicySegment(cursor, endMs, activeRules, activeDefault))
+}
+return segments, nil
+}
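BuildRetentionPolicySegmentsFromRows walks a cursor across the query window, cutting a new half-open segment wherever the retention policy changed, while changes at or before the window start only decide the initially active policy. A simplified, self-contained sketch of that cursor walk (all names here are hypothetical, and per-rule conditions are reduced to a single `defaultDays` value for brevity):

```go
package main

import "fmt"

// policyChange marks a new default retention taking effect at atMs.
type policyChange struct {
	atMs        int64
	defaultDays int
}

// segment is a half-open [startMs, endMs) range governed by defaultDays.
type segment struct {
	startMs, endMs int64
	defaultDays    int
}

// splitWindow cuts [startMs, endMs) at every policy change inside the
// window; changes at or before startMs only update the policy that is
// active when the window opens. changes must be sorted by atMs.
func splitWindow(startMs, endMs int64, fallbackDays int, changes []policyChange) []segment {
	if startMs >= endMs {
		return nil
	}
	active := fallbackDays
	var segs []segment
	cursor := startMs
	for _, c := range changes {
		if c.atMs <= cursor {
			// Change happened before the cursor: it only updates the active policy.
			active = c.defaultDays
			continue
		}
		if c.atMs >= endMs {
			continue
		}
		// Close the segment governed by the previous policy, then advance.
		segs = append(segs, segment{cursor, c.atMs, active})
		cursor = c.atMs
		active = c.defaultDays
	}
	if cursor < endMs {
		segs = append(segs, segment{cursor, endMs, active})
	}
	return segs
}

func main() {
	// Policy set to 30 days before the window, changed to 7 days mid-window.
	segs := splitWindow(0, 100, 15, []policyChange{{atMs: -10, defaultDays: 30}, {atMs: 40, defaultDays: 7}})
	for _, s := range segs {
		fmt.Printf("[%d,%d) -> %d days\n", s.startMs, s.endMs, s.defaultDays)
	}
}
```

Note the half-open convention: a segment's end is exactly the next segment's start, so the window is covered with no gaps or double counting.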
+func parseTTLSetting(row *TTLSetting, fallbackDefaultDays int) ([]CustomRetentionRule, int, error) {
+if row == nil {
+return nil, fallbackDefaultDays, nil
+}
+defaultDays := row.TTL
+if row.Condition == "" {
+defaultDays = (row.TTL + secondsPerDay - 1) / secondsPerDay
+}
+if defaultDays <= 0 {
+defaultDays = fallbackDefaultDays
+}
+if row.Condition == "" {
+return nil, defaultDays, nil
+}
+var rules []CustomRetentionRule
+if err := json.Unmarshal([]byte(row.Condition), &rules); err != nil {
+return nil, 0, errors.Wrapf(err, errors.TypeInternal, errors.CodeInternal, "parse ttl_setting condition for row %q", row.ID.StringValue())
+}
+return rules, defaultDays, nil
+}
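For condition-less rows, `(row.TTL + secondsPerDay - 1) / secondsPerDay` converts a seconds-based TTL into whole days, rounding up with pure integer arithmetic. A standalone check of that rounding (the `ceilDays` helper is hypothetical, extracted only to illustrate the expression):

```go
package main

import "fmt"

const secondsPerDay = 24 * 60 * 60

// ceilDays rounds a TTL expressed in seconds up to whole days, mirroring
// the integer arithmetic in parseTTLSetting: adding secondsPerDay-1 before
// truncating division makes any partial day count as a full day.
func ceilDays(ttlSeconds int) int {
	return (ttlSeconds + secondsPerDay - 1) / secondsPerDay
}

func main() {
	fmt.Println(ceilDays(secondsPerDay))      // exactly one day -> 1
	fmt.Println(ceilDays(secondsPerDay + 1))  // one second over -> rounds up to 2
	fmt.Println(ceilDays(15 * secondsPerDay)) // 15 whole days -> 15
}
```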
+// CustomRetentionRule is one custom retention rule as stored in ttl_setting.condition.
+// Rules are evaluated in declaration order; the first matching rule wins.
+type CustomRetentionRule struct {
+Filters []FilterCondition `json:"conditions"`
+TTLDays int `json:"ttlDays"`
+}
+// FilterCondition is one label-key, allowed-values condition inside a retention rule.
+type FilterCondition struct {
+Key string `json:"key"`
+Values []string `json:"values"`
+}
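The comments above pin down rule ordering (first match wins) but not the matching semantics of a rule's conditions. A self-contained sketch of one plausible evaluator, assuming AND across a rule's conditions and OR across a condition's values (that semantics, and the `matches`/`resolveTTLDays` helpers, are assumptions, not part of this PR):

```go
package main

import "fmt"

// Struct shapes mirror the retentiontypes definitions in this diff.
type FilterCondition struct {
	Key    string
	Values []string
}

type CustomRetentionRule struct {
	Filters []FilterCondition
	TTLDays int
}

// matches reports whether every condition accepts the labels: the key must
// be present with one of its allowed values (AND across conditions, OR
// across values). This semantics is an assumption for illustration.
func matches(rule CustomRetentionRule, labels map[string]string) bool {
	for _, f := range rule.Filters {
		ok := false
		for _, v := range f.Values {
			if labels[f.Key] == v {
				ok = true
				break
			}
		}
		if !ok {
			return false
		}
	}
	return true
}

// resolveTTLDays applies rules in declaration order; the first matching
// rule wins, otherwise the default retention applies.
func resolveTTLDays(rules []CustomRetentionRule, labels map[string]string, defaultDays int) int {
	for _, r := range rules {
		if matches(r, labels) {
			return r.TTLDays
		}
	}
	return defaultDays
}

func main() {
	rules := []CustomRetentionRule{
		{Filters: []FilterCondition{{Key: "service", Values: []string{"checkout"}}}, TTLDays: 90},
		{Filters: []FilterCondition{{Key: "env", Values: []string{"dev", "staging"}}}, TTLDays: 7},
	}
	fmt.Println(resolveTTLDays(rules, map[string]string{"service": "checkout", "env": "dev"}, 15)) // first match wins
	fmt.Println(resolveTTLDays(rules, map[string]string{"env": "staging"}, 15))
	fmt.Println(resolveTTLDays(rules, map[string]string{"env": "prod"}, 15)) // no match -> default
}
```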
type TTLSetting struct {
bun.BaseModel `bun:"table:ttl_setting"`
types.Identifiable
@@ -17,3 +148,73 @@ type TTLSetting struct {
OrgID string `json:"-" bun:"org_id,notnull"`
Condition string `bun:"condition,type:text"`
}
+type TTLParams struct {
+Type string
+ColdStorageVolume string
+ToColdStorageDuration int64
+DelDuration int64
+}
+type CustomRetentionTTLParams struct {
+Type string `json:"type"`
+DefaultTTLDays int `json:"defaultTTLDays"`
+TTLConditions []CustomRetentionRule `json:"ttlConditions"`
+ColdStorageVolume string `json:"coldStorageVolume,omitempty"`
+ToColdStorageDurationDays int64 `json:"coldStorageDurationDays,omitempty"`
+}
+type GetTTLParams struct {
+Type string
+}
+type GetCustomRetentionTTLResponse struct {
+Version string `json:"version"`
+Status string `json:"status"`
+ExpectedLogsTime int `json:"expected_logs_ttl_duration_hrs,omitempty"`
+ExpectedLogsMoveTime int `json:"expected_logs_move_ttl_duration_hrs,omitempty"`
+DefaultTTLDays int `json:"default_ttl_days,omitempty"`
+TTLConditions []CustomRetentionRule `json:"ttl_conditions,omitempty"`
+ColdStorageVolume string `json:"cold_storage_volume,omitempty"`
+ColdStorageTTLDays int `json:"cold_storage_ttl_days,omitempty"`
+}
+type CustomRetentionTTLResponse struct {
+Message string `json:"message"`
+}
+type TTLStatusItem struct {
+Id int `json:"id" db:"id"`
+UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
+CreatedAt time.Time `json:"created_at" db:"created_at"`
+TableName string `json:"table_name" db:"table_name"`
+TTL int `json:"ttl" db:"ttl"`
+Status string `json:"status" db:"status"`
+ColdStorageTtl int `json:"cold_storage_ttl" db:"cold_storage_ttl"`
+}
+type SetTTLResponseItem struct {
+Message string `json:"message"`
+}
+type DBResponseTTL struct {
+EngineFull string `ch:"engine_full"`
+}
+type GetTTLResponseItem struct {
+MetricsTime int `json:"metrics_ttl_duration_hrs,omitempty"`
+MetricsMoveTime int `json:"metrics_move_ttl_duration_hrs,omitempty"`
+TracesTime int `json:"traces_ttl_duration_hrs,omitempty"`
+TracesMoveTime int `json:"traces_move_ttl_duration_hrs,omitempty"`
+LogsTime int `json:"logs_ttl_duration_hrs,omitempty"`
+LogsMoveTime int `json:"logs_move_ttl_duration_hrs,omitempty"`
+ExpectedMetricsTime int `json:"expected_metrics_ttl_duration_hrs,omitempty"`
+ExpectedMetricsMoveTime int `json:"expected_metrics_move_ttl_duration_hrs,omitempty"`
+ExpectedTracesTime int `json:"expected_traces_ttl_duration_hrs,omitempty"`
+ExpectedTracesMoveTime int `json:"expected_traces_move_ttl_duration_hrs,omitempty"`
+ExpectedLogsTime int `json:"expected_logs_ttl_duration_hrs,omitempty"`
+ExpectedLogsMoveTime int `json:"expected_logs_move_ttl_duration_hrs,omitempty"`
+Status string `json:"status"`
+}


@@ -0,0 +1,83 @@
package retentiontypes

import (
	"encoding/json"
	"testing"
	"time"

	"github.com/SigNoz/signoz/pkg/types"
	"github.com/stretchr/testify/require"
)

func TestBuildRetentionPolicySegmentsFromRows(t *testing.T) {
	start := time.Date(2026, 5, 4, 0, 0, 0, 0, time.UTC)
	end := start.AddDate(0, 0, 1)
	ruleA := CustomRetentionRule{
		Filters: []FilterCondition{{Key: "service.name", Values: []string{"api"}}},
		TTLDays: 7,
	}
	ruleB := CustomRetentionRule{
		Filters: []FilterCondition{{Key: "env", Values: []string{"prod"}}},
		TTLDays: 15,
	}

	t.Run("row before window is active at start", func(t *testing.T) {
		segments, err := BuildRetentionPolicySegmentsFromRows(
			[]*TTLSetting{
				ttlSetting(t, start.Add(-time.Hour), 45, []CustomRetentionRule{ruleA}),
			},
			30,
			start.UnixMilli(),
			end.UnixMilli(),
		)
		require.NoError(t, err)
		require.Equal(t, []*RetentionPolicySegment{
			NewRetentionPolicySegment(start.UnixMilli(), end.UnixMilli(), []CustomRetentionRule{ruleA}, 45),
		}, segments)
	})

	t.Run("row inside window splits segments", func(t *testing.T) {
		firstChange := start.Add(6 * time.Hour)
		secondChange := start.Add(18 * time.Hour)
		segments, err := BuildRetentionPolicySegmentsFromRows(
			[]*TTLSetting{
				ttlSetting(t, firstChange, 21, []CustomRetentionRule{ruleA}),
				ttlSetting(t, secondChange, 14, []CustomRetentionRule{ruleB}),
			},
			30,
			start.UnixMilli(),
			end.UnixMilli(),
		)
		require.NoError(t, err)
		require.Equal(t, []*RetentionPolicySegment{
			NewRetentionPolicySegment(start.UnixMilli(), firstChange.UnixMilli(), nil, 30),
			NewRetentionPolicySegment(firstChange.UnixMilli(), secondChange.UnixMilli(), []CustomRetentionRule{ruleA}, 21),
			NewRetentionPolicySegment(secondChange.UnixMilli(), end.UnixMilli(), []CustomRetentionRule{ruleB}, 14),
		}, segments)
	})

	t.Run("no rows uses fallback", func(t *testing.T) {
		segments, err := BuildRetentionPolicySegmentsFromRows(nil, 30, start.UnixMilli(), end.UnixMilli())
		require.NoError(t, err)
		require.Equal(t, []*RetentionPolicySegment{
			NewRetentionPolicySegment(start.UnixMilli(), end.UnixMilli(), nil, 30),
		}, segments)
	})
}

func ttlSetting(t *testing.T, createdAt time.Time, ttlDays int, rules []CustomRetentionRule) *TTLSetting {
	t.Helper()
	condition, err := json.Marshal(rules)
	require.NoError(t, err)
	return &TTLSetting{
		TimeAuditable: types.TimeAuditable{
			CreatedAt: createdAt,
		},
		TTL:       ttlDays,
		Condition: string(condition),
	}
}


@@ -0,0 +1,20 @@
package zeustypes

import "go.opentelemetry.io/otel/attribute"

var (
	// Identifies the organization.
	OrganizationID = attribute.Key("signoz.organization.id")
	// Identifies the retention bucket a meter belongs to.
	RetentionDuration = attribute.Key("signoz.retention.duration")
)

func NewDimensions(kvs ...attribute.KeyValue) map[string]string {
	dimensions := map[string]string{}
	for _, kv := range kvs {
		dimensions[string(kv.Key)] = kv.Value.AsString()
	}
	return dimensions
}


@@ -0,0 +1,29 @@
package zeustypes

import (
	"encoding/json"
)

type GettableDeployment struct {
	ID      string `json:"id"`
	Name    string `json:"name"`
	Cluster struct {
		ID     string `json:"id"`
		Name   string `json:"name"`
		Region struct {
			ID   string `json:"id"`
			Name string `json:"name"`
			DNS  string `json:"dns"`
		} `json:"region"`
	} `json:"cluster"`
}

func NewGettableDeployment(data []byte) (*GettableDeployment, error) {
	deployment := new(GettableDeployment)
	err := json.Unmarshal(data, deployment)
	if err != nil {
		return nil, err
	}
	return deployment, nil
}


@@ -0,0 +1,46 @@
package zeustypes

import (
	"net/url"

	"github.com/tidwall/gjson"
)

type Host struct {
	Name      string `json:"name" required:"true"`
	IsDefault bool   `json:"is_default" required:"true"`
	URL       string `json:"url" required:"true"`
}

type GettableHost struct {
	Name  string `json:"name" required:"true"`
	State string `json:"state" required:"true"`
	Tier  string `json:"tier" required:"true"`
	Hosts []Host `json:"hosts" required:"true"`
}

type PostableHost struct {
	Name string `json:"name" required:"true"`
}

func NewGettableHost(data []byte) *GettableHost {
	parsed := gjson.ParseBytes(data)
	dns := parsed.Get("cluster.region.dns").String()
	hostResults := parsed.Get("hosts").Array()
	hosts := make([]Host, len(hostResults))
	for i, h := range hostResults {
		name := h.Get("name").String()
		hosts[i].Name = name
		hosts[i].IsDefault = h.Get("is_default").Bool()
		hosts[i].URL = (&url.URL{Scheme: "https", Host: name + "." + dns}).String()
	}
	return &GettableHost{
		Name:  parsed.Get("name").String(),
		State: parsed.Get("state").String(),
		Tier:  parsed.Get("tier").String(),
		Hosts: hosts,
	}
}


@@ -0,0 +1,54 @@
package zeustypes

import "time"

type MeterCheckpoint struct {
	Name      string
	StartDate time.Time
}

type Meter struct {
	// MeterName is the fully-qualified meter identifier.
	MeterName MeterName `json:"name"`
	// Value is the aggregated integer scalar for this meter over the reporting window.
	Value int64 `json:"value"`
	// Unit is the metric unit for this meter.
	Unit MeterUnit `json:"unit"`
	// Aggregation names the aggregation applied to produce Value.
	Aggregation MeterAggregation `json:"aggregation"`
	// StartUnixMilli is the inclusive window start in epoch milliseconds.
	StartUnixMilli int64 `json:"start_unix_milli"`
	// EndUnixMilli is the exclusive window end in epoch milliseconds.
	EndUnixMilli int64 `json:"end_unix_milli"`
	// IsCompleted is false for the current day's partial value.
	IsCompleted bool `json:"is_completed"`
	// Dimensions is the per-meter label set.
	Dimensions map[string]string `json:"dimensions"`
}

func NewMeter(
	name MeterName,
	value int64,
	unit MeterUnit,
	aggregation MeterAggregation,
	window MeterWindow,
	dimensions map[string]string,
) Meter {
	return Meter{
		MeterName:      name,
		Value:          value,
		Unit:           unit,
		Aggregation:    aggregation,
		StartUnixMilli: window.StartUnixMilli,
		EndUnixMilli:   window.EndUnixMilli,
		IsCompleted:    window.IsCompleted,
		Dimensions:     dimensions,
	}
}


@@ -0,0 +1,12 @@
package zeustypes

import "github.com/SigNoz/signoz/pkg/valuer"

type MeterAggregation struct {
	valuer.String
}

var (
	MeterAggregationSum = MeterAggregation{valuer.NewString("sum")}
	MeterAggregationMax = MeterAggregation{valuer.NewString("max")}
)


@@ -0,0 +1,66 @@
package zeustypes

import (
	"encoding/json"
	"regexp"

	"github.com/SigNoz/signoz/pkg/errors"
)

var (
	MeterSpanSize       = MustNewMeterName("signoz.meter.span.size")
	MeterLogSize        = MustNewMeterName("signoz.meter.log.size")
	MeterDatapointCount = MustNewMeterName("signoz.meter.metric.datapoint.count")
	MeterPlatformActive = MustNewMeterName("signoz.meter.platform.active")
)

var meterNameRegex = regexp.MustCompile(`^[a-z][a-z0-9_.]+$`)

// MeterName is a validated dotted Zeus meter name.
type MeterName struct {
	s string
}

func NewMeterName(s string) (MeterName, error) {
	if !meterNameRegex.MatchString(s) {
		return MeterName{}, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "invalid meter name: %s", s)
	}
	return MeterName{s: s}, nil
}

func MustNewMeterName(s string) MeterName {
	name, err := NewMeterName(s)
	if err != nil {
		panic(err)
	}
	return name
}

func (n MeterName) String() string {
	return n.s
}

func (n MeterName) IsZero() bool {
	return n.s == ""
}

func (n MeterName) MarshalJSON() ([]byte, error) {
	return json.Marshal(n.s)
}

func (n *MeterName) UnmarshalJSON(data []byte) error {
	var s string
	if err := json.Unmarshal(data, &s); err != nil {
		return err
	}
	name, err := NewMeterName(s)
	if err != nil {
		return err
	}
	*n = name
	return nil
}


@@ -0,0 +1,30 @@
package zeustypes

import (
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

func TestNewMeterWindow(t *testing.T) {
	start := time.Date(2026, 5, 4, 0, 0, 0, 0, time.UTC)

	window, err := NewMeterWindow(start.UnixMilli(), start.AddDate(0, 0, 1).UnixMilli(), true)
	require.NoError(t, err)
	require.Equal(t, start.UnixMilli(), window.StartUnixMilli)
	require.Equal(t, start.AddDate(0, 0, 1).UnixMilli(), window.EndUnixMilli)
	require.True(t, window.IsCompleted)

	_, err = NewMeterWindow(0, start.UnixMilli(), true)
	require.Error(t, err)

	_, err = NewMeterWindow(start.UnixMilli(), start.UnixMilli(), false)
	require.Error(t, err)
}

func TestMustNewMeterWindowPanicsForInvalidWindow(t *testing.T) {
	require.Panics(t, func() {
		MustNewMeterWindow(0, 0, true)
	})
}


@@ -0,0 +1,12 @@
package zeustypes

import "github.com/SigNoz/signoz/pkg/valuer"

type MeterUnit struct {
	valuer.String
}

var (
	MeterUnitCount = MeterUnit{valuer.NewString("count")}
	MeterUnitBytes = MeterUnit{valuer.NewString("bytes")}
)


@@ -0,0 +1,43 @@
package zeustypes

import (
	"time"

	"github.com/SigNoz/signoz/pkg/errors"
)

type MeterWindow struct {
	StartUnixMilli int64
	EndUnixMilli   int64
	IsCompleted    bool
}

func NewMeterWindow(startUnixMilli, endUnixMilli int64, isCompleted bool) (MeterWindow, error) {
	if startUnixMilli <= 0 {
		return MeterWindow{}, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "meter window start must be positive: %d", startUnixMilli)
	}
	if endUnixMilli <= startUnixMilli {
		return MeterWindow{}, errors.Newf(errors.TypeInvalidInput, errors.CodeInvalidInput, "meter window end must be after start: [%d, %d)", startUnixMilli, endUnixMilli)
	}
	return MeterWindow{
		StartUnixMilli: startUnixMilli,
		EndUnixMilli:   endUnixMilli,
		IsCompleted:    isCompleted,
	}, nil
}

func MustNewMeterWindow(startUnixMilli, endUnixMilli int64, isCompleted bool) MeterWindow {
	window, err := NewMeterWindow(startUnixMilli, endUnixMilli, isCompleted)
	if err != nil {
		panic(err)
	}
	return window
}

// Day returns the UTC midnight of the calendar day the window starts on.
func (w MeterWindow) Day() time.Time {
	t := time.UnixMilli(w.StartUnixMilli).UTC()
	return time.Date(t.Year(), t.Month(), t.Day(), 0, 0, 0, 0, time.UTC)
}


@@ -0,0 +1,13 @@
package zeustypes

type PostableProfile struct {
	UsesOtel                     bool     `json:"uses_otel" required:"true"`
	HasExistingObservabilityTool bool     `json:"has_existing_observability_tool" required:"true"`
	ExistingObservabilityTool    string   `json:"existing_observability_tool" required:"true"`
	ReasonsForInterestInSigNoz   []string `json:"reasons_for_interest_in_signoz" required:"true"`
	LogsScalePerDayInGB          int64    `json:"logs_scale_per_day_in_gb" required:"true"`
	NumberOfServices             int64    `json:"number_of_services" required:"true"`
	NumberOfHosts                int64    `json:"number_of_hosts" required:"true"`
	WhereDidYouDiscoverSigNoz    string   `json:"where_did_you_discover_signoz" required:"true"`
	TimelineForMigratingToSigNoz string   `json:"timeline_for_migrating_to_signoz" required:"true"`
}


@@ -1,87 +0,0 @@
package zeustypes

import (
	"encoding/json"
	"net/url"

	"github.com/SigNoz/signoz/pkg/errors"
	"github.com/tidwall/gjson"
)

type PostableHost struct {
	Name string `json:"name" required:"true"`
}

type PostableProfile struct {
	UsesOtel                     bool     `json:"uses_otel" required:"true"`
	HasExistingObservabilityTool bool     `json:"has_existing_observability_tool" required:"true"`
	ExistingObservabilityTool    string   `json:"existing_observability_tool" required:"true"`
	ReasonsForInterestInSigNoz   []string `json:"reasons_for_interest_in_signoz" required:"true"`
	LogsScalePerDayInGB          int64    `json:"logs_scale_per_day_in_gb" required:"true"`
	NumberOfServices             int64    `json:"number_of_services" required:"true"`
	NumberOfHosts                int64    `json:"number_of_hosts" required:"true"`
	WhereDidYouDiscoverSigNoz    string   `json:"where_did_you_discover_signoz" required:"true"`
	TimelineForMigratingToSigNoz string   `json:"timeline_for_migrating_to_signoz" required:"true"`
}

type GettableHost struct {
	Name  string `json:"name" required:"true"`
	State string `json:"state" required:"true"`
	Tier  string `json:"tier" required:"true"`
	Hosts []Host `json:"hosts" required:"true"`
}

type Host struct {
	Name      string `json:"name" required:"true"`
	IsDefault bool   `json:"is_default" required:"true"`
	URL       string `json:"url" required:"true"`
}

func NewGettableHost(data []byte) *GettableHost {
	parsed := gjson.ParseBytes(data)
	dns := parsed.Get("cluster.region.dns").String()
	hostResults := parsed.Get("hosts").Array()
	hosts := make([]Host, len(hostResults))
	for i, h := range hostResults {
		name := h.Get("name").String()
		hosts[i].Name = name
		hosts[i].IsDefault = h.Get("is_default").Bool()
		hosts[i].URL = (&url.URL{Scheme: "https", Host: name + "." + dns}).String()
	}
	return &GettableHost{
		Name:  parsed.Get("name").String(),
		State: parsed.Get("state").String(),
		Tier:  parsed.Get("tier").String(),
		Hosts: hosts,
	}
}

// GettableDeployment represents the parsed deployment info from zeus.GetDeployment.
// NOTE: this is not a full response structure, add more fields from actual response as per requirement.
type GettableDeployment struct {
	ID      string `json:"id"`
	Name    string `json:"name"`
	Cluster struct {
		ID     string `json:"id"`
		Name   string `json:"name"`
		Region struct {
			ID   string `json:"id"`
			Name string `json:"name"`
			DNS  string `json:"dns"`
		} `json:"region"`
	} `json:"cluster"`
}

// NewGettableDeployment parses raw GetDeployment bytes into a GettableDeployment.
func NewGettableDeployment(data []byte) (*GettableDeployment, error) {
	deployment := new(GettableDeployment)
	err := json.Unmarshal(data, deployment)
	if err != nil {
		return nil, errors.WrapInternalf(err, errors.CodeInternal, "failed to unmarshal deployment response")
	}
	return deployment, nil
}


@@ -49,6 +49,14 @@ func (provider *provider) PutMetersV2(_ context.Context, _ string, _ []byte) err
	return errors.New(errors.TypeUnsupported, zeus.ErrCodeUnsupported, "putting meters v2 is not supported")
}

func (provider *provider) PutMetersV3(_ context.Context, _ string, _ string, _ []byte) error {
	return errors.New(errors.TypeUnsupported, zeus.ErrCodeUnsupported, "putting meters v3 is not supported")
}

func (provider *provider) ListMeterCheckpoints(_ context.Context, _ string) ([]zeustypes.MeterCheckpoint, error) {
	return nil, errors.New(errors.TypeUnsupported, zeus.ErrCodeUnsupported, "list meter checkpoints is not supported")
}

func (provider *provider) PutProfile(_ context.Context, _ string, _ *zeustypes.PostableProfile) error {
	return errors.New(errors.TypeUnsupported, zeus.ErrCodeUnsupported, "putting profile is not supported")
}


@@ -35,6 +35,16 @@ type Zeus interface {
	// Puts the meters for the given license key using Zeus.
	PutMetersV2(context.Context, string, []byte) error

	// PutMetersV3 ships one day's raw JSON array of meter readings to the
	// v2/meters endpoint. The idempotencyKey is propagated as X-Idempotency-Key
	// so Zeus can upsert on retries.
	PutMetersV3(ctx context.Context, licenseKey string, idempotencyKey string, body []byte) error

	// ListMeterCheckpoints returns the latest sealed (is_completed=true) UTC day
	// Zeus has stored for each billing meter name. Missing meter names are
	// treated by the cron as bootstrap cases.
	ListMeterCheckpoints(ctx context.Context, licenseKey string) ([]zeustypes.MeterCheckpoint, error)

	// Put profile for the given license key.
	PutProfile(context.Context, string, *zeustypes.PostableProfile) error