Compare commits

..

22 Commits

Author SHA1 Message Date
Srikanth Chekuri
b866bb62b3 Merge branch 'main' into tvats-add-validation-in-having-expression 2026-04-01 04:13:37 +05:30
Tushar Vats
fa0c505503 fix: added more unit tests, handle white space difference in aggregation exp and having exp 2026-04-01 03:58:30 +05:30
Vinicius Lourenço
e2cd203c8f test(k8s-volume-list): mark test as skip due to flakiness (#10787)
2026-03-31 16:57:38 +00:00
Vikrant Gupta
a4c6394542 feat(sqlstore): add support for transaction modes (#10781)
2026-03-31 13:40:06 +00:00
Pandey
71a13b4818 feat(audit): enterprise auditor with licensing gate and OTLP HTTP export (#10776)
* feat(audit): add enterprise auditor with licensing gate and OTLP HTTP export

Implements the enterprise auditor at ee/auditor/otlphttpauditor/.
Composes auditorserver.Server for batching with licensing gate,
OTel SDK LoggerProvider for InstrumentationScope, and otlploghttp
exporter for OTLP HTTP transport.

* fix(audit): address PR review — inline factory, move body+logrecord to audittypes

- Inline NewFactory closure, remove newProviderFunc
- Move buildBody to audittypes.BuildBody
- Move eventToLogRecord to audittypes.ToLogRecord
- go mod tidy for newly direct otel/log deps

* fix(audit): address PR review round 2

- Make ToLogRecord a method on AuditEvent returning sdklog.Record
- Fold buildBody into ToLogRecord as unexported helper
- Remove accumulatingProcessor and LoggerProvider — export directly
  via otlploghttp.Exporter
- Delete body.go and processor.go

* fix(audit): address PR review round 3

- Merge export.go into provider.go
- Add severity/severityText fields to Outcome struct
- Rename buildBody to setBody method on AuditEvent
- Add appendStringIfNotEmpty helper to reduce duplication

* feat(audit): switch to plog + direct HTTP POST for OTLP export

Replace otlploghttp.Exporter + sdklog.Record with plog data model,
ProtoMarshaler, and direct HTTP POST. This properly sets
InstrumentationScope.Name = "signoz.audit" and Resource attributes
on the OTLP payload.

* fix(audit): adopt collector otlphttpexporter pattern for HTTP export

Model the send function after the OTel Collector's otlphttpexporter:
- Bounded response body reads (64KB max)
- Protobuf-encoded Status parsing from error responses
- Proper response body draining on defer
- Detailed error messages with endpoint URL and status details

* refactor(audit): split export logic into export.go, add throttle retry

- Move export, send, and HTTP response helpers to export.go
- Add exporterhelper.NewThrottleRetry for 429/503 with Retry-After
- Parse Retry-After as delay-seconds or HTTP-date per RFC 7231
- Keep provider.go focused on Auditor interface and lifecycle

* feat(audit): add partial success handler and internal retry with backoff

- Parse ExportLogsServiceResponse on 2xx for partial success, log
  warning if log records were rejected
- Internal retry loop with exponential backoff for retryable status
  codes (429, 502, 503, 504) using RetryConfig from auditor config
- Honour Retry-After header (delay-seconds and HTTP-date)
- Store full auditor.Config on provider struct
- Replace exporterhelper.NewThrottleRetry with local retryableError
  type consumed by the internal retry loop

* fix(audit): fix lint — use pkg/errors, remove stdlib errors and fmt.Errorf

* refactor(audit): use provider as receiver name instead of p

* refactor(audit): clean up enterprise auditor implementation

- Extract retry logic into retry.go
- Move NewPLogsFromAuditEvents and ToLogRecord into event.go
- Add ErrCodeAuditExportFailed to auditor package
- Add version.Build to provider for service.version attribute
- Simplify sendOnce, split response handling into onSuccess/onErr
- Use PrincipalOrgID as valuer.UUID directly
- Use OTLPHTTP.Endpoint as URL type
- Remove gzip compression handling
- Delete logrecord.go

* refactor(audit): use pkg/http/client instead of bare http.Client

Use the standard SigNoz HTTP client with OTel instrumentation.
Disable heimdall retries (count=0) since we have our own
OTLP-aware retry loop that understands Retry-After headers.

* refactor(audit): use heimdall Retriable for retry instead of manual loop

- Implement retrier with exponential backoff from auditor RetryConfig
- Compute retry count from MaxElapsedTime and backoff intervals
- Pass retrier and retry count to pkg/http/client via WithRetriable
- Remove manual retry loop, retryableError type, and Retry-After parsing
- Heimdall handles retries on >= 500 status codes automatically

* refactor(audit): rename retry.go to retrier.go
2026-03-31 13:37:20 +00:00
Vinicius Lourenço
a8e2155bb6 fix(app-routes-redirect): redirects when workspaceBlocked & onboarding having wrong redirects (#10738)
* fix(private): issues with redirect when onboarding & workspace locked

* test(app-routes): add tests for private route
2026-03-31 11:07:42 +00:00
Vinicius Lourenço
a9cbf9a4df fix(infra-monitoring): request loop when click to visualize volume (#10657)
* fix(infra-monitoring): request loop when click to visualize volume

* test(k8s-volume-list): fix tests broken due to nuqs
2026-03-31 10:56:09 +00:00
Srikanth Chekuri
f1bdc94096 Merge branch 'main' into tvats-add-validation-in-having-expression 2026-03-31 07:10:39 +05:30
Tushar Vats
c5039c74e4 fix: typo 2026-03-31 01:27:07 +05:30
Tushar Vats
680902ef1b fix: added suggestion for having expression 2026-03-31 00:53:42 +05:30
Tushar Vats
4d3c7956ac fix: allow bare not in expression 2026-03-15 18:37:46 +05:30
Srikanth Chekuri
478958c129 Merge branch 'main' into tvats-add-validation-in-having-expression 2026-02-26 08:09:35 +05:30
Tushar Vats
7ab091f02d fix: support implicit and 2026-02-25 19:44:18 +05:30
Tushar Vats
c173fe1cf9 fix: use std lib sorting instead of selection sort 2026-02-25 19:07:22 +05:30
Tushar Vats
1555153156 fix: added command to scripts for generating lexer 2026-02-20 15:00:25 +05:30
Tushar Vats
23a6801646 fix: edge cases 2026-02-20 14:49:11 +05:30
Tushar Vats
26a651667d fix: generated lexer files and added more unit tests 2026-02-20 05:04:20 +05:30
Tushar Vats
9102e4ccc6 fix: removed validation on having in range request validations 2026-02-19 21:04:38 +05:30
Tushar Vats
74b1df2941 fix: added more unit tests 2026-02-19 20:54:38 +05:30
Tushar Vats
a8348b6395 fix: added antlr based parsing for validation 2026-02-19 17:47:24 +05:30
Tushar Vats
2aedf5f7e6 fix: added extra validation and unit tests 2026-02-19 15:49:37 +05:30
Tushar Vats
a77a4d4daa fix: added validations for having expression 2026-02-12 20:14:51 +05:30
63 changed files with 8913 additions and 1149 deletions

View File

@@ -85,10 +85,12 @@ sqlstore:
sqlite:
# The path to the SQLite database file.
path: /var/lib/signoz/signoz.db
# Mode is the mode to use for the sqlite database.
# The journal mode for the sqlite database. Supported values: delete, wal.
mode: delete
# BusyTimeout is the timeout for the sqlite database to wait for a lock.
# The timeout for the sqlite database to wait for a lock.
busy_timeout: 10s
# The default transaction locking behavior. Supported values: deferred, immediate, exclusive.
transaction_mode: deferred
##################### APIServer #####################
apiserver:

View File

@@ -0,0 +1,143 @@
package otlphttpauditor
import (
"bytes"
"context"
"io"
"log/slog"
"net/http"
"github.com/SigNoz/signoz/pkg/auditor"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/types/audittypes"
collogspb "go.opentelemetry.io/proto/otlp/collector/logs/v1"
"google.golang.org/protobuf/proto"
spb "google.golang.org/genproto/googleapis/rpc/status"
)
const (
maxHTTPResponseReadBytes int64 = 64 * 1024
protobufContentType string = "application/x-protobuf"
)
func (provider *provider) export(ctx context.Context, events []audittypes.AuditEvent) error {
logs := audittypes.NewPLogsFromAuditEvents(events, "signoz", provider.build.Version(), "signoz.audit")
request, err := provider.marshaler.MarshalLogs(logs)
if err != nil {
return errors.Wrapf(err, errors.TypeInternal, auditor.ErrCodeAuditExportFailed, "failed to marshal audit logs")
}
if err := provider.send(ctx, request); err != nil {
provider.settings.Logger().ErrorContext(ctx, "audit export failed", errors.Attr(err), slog.Int("dropped_log_records", len(events)))
return err
}
return nil
}
// Posts a protobuf-encoded OTLP request to the configured endpoint.
// Retries are handled by the underlying heimdall HTTP client.
// Ref: https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlphttpexporter/otlp.go
func (provider *provider) send(ctx context.Context, body []byte) error {
req, err := http.NewRequestWithContext(ctx, http.MethodPost, provider.config.OTLPHTTP.Endpoint.String(), bytes.NewReader(body))
if err != nil {
return err
}
req.Header.Set("Content-Type", protobufContentType)
res, err := provider.httpClient.Do(req)
if err != nil {
return err
}
defer func() {
_, _ = io.CopyN(io.Discard, res.Body, maxHTTPResponseReadBytes)
res.Body.Close()
}()
if res.StatusCode >= 200 && res.StatusCode <= 299 {
provider.onSuccess(ctx, res)
return nil
}
return provider.onErr(res)
}
// Ref: https://github.com/open-telemetry/opentelemetry-collector/blob/01b07fcbb7a253bd996c290dcae6166e71d13732/exporter/otlphttpexporter/otlp.go#L403.
func (provider *provider) onSuccess(ctx context.Context, res *http.Response) {
resBytes, err := readResponseBody(res)
if err != nil || resBytes == nil {
return
}
exportResponse := &collogspb.ExportLogsServiceResponse{}
if err := proto.Unmarshal(resBytes, exportResponse); err != nil {
return
}
ps := exportResponse.GetPartialSuccess()
if ps == nil {
return
}
if ps.GetErrorMessage() != "" || ps.GetRejectedLogRecords() != 0 {
provider.settings.Logger().WarnContext(ctx, "partial success response", slog.String("message", ps.GetErrorMessage()), slog.Int64("dropped_log_records", ps.GetRejectedLogRecords()))
}
}
func (provider *provider) onErr(res *http.Response) error {
status := readResponseStatus(res)
if status != nil {
return errors.Newf(errors.TypeInternal, auditor.ErrCodeAuditExportFailed, "request to %s responded with status code %d, Message=%s, Details=%v", provider.config.OTLPHTTP.Endpoint.String(), res.StatusCode, status.Message, status.Details)
}
return errors.Newf(errors.TypeInternal, auditor.ErrCodeAuditExportFailed, "request to %s responded with status code %d", provider.config.OTLPHTTP.Endpoint.String(), res.StatusCode)
}
// Reads at most maxHTTPResponseReadBytes from the response body.
// Ref: https://github.com/open-telemetry/opentelemetry-collector/blob/01b07fcbb7a253bd996c290dcae6166e71d13732/exporter/otlphttpexporter/otlp.go#L275.
func readResponseBody(resp *http.Response) ([]byte, error) {
if resp.ContentLength == 0 {
return nil, nil
}
maxRead := resp.ContentLength
if maxRead == -1 || maxRead > maxHTTPResponseReadBytes {
maxRead = maxHTTPResponseReadBytes
}
protoBytes := make([]byte, maxRead)
n, err := io.ReadFull(resp.Body, protoBytes)
if n == 0 && (err == nil || errors.Is(err, io.EOF)) {
return nil, nil
}
if err != nil && !errors.Is(err, io.ErrUnexpectedEOF) {
return nil, err
}
return protoBytes[:n], nil
}
// Decodes a protobuf-encoded Status from 4xx/5xx response bodies. Returns nil if the response is empty or cannot be decoded.
// Ref: https://github.com/open-telemetry/opentelemetry-collector/blob/01b07fcbb7a253bd996c290dcae6166e71d13732/exporter/otlphttpexporter/otlp.go#L310.
func readResponseStatus(resp *http.Response) *spb.Status {
if resp.StatusCode < 400 || resp.StatusCode > 599 {
return nil
}
respBytes, err := readResponseBody(resp)
if err != nil || respBytes == nil {
return nil
}
respStatus := &spb.Status{}
if err := proto.Unmarshal(respBytes, respStatus); err != nil {
return nil
}
return respStatus
}

View File

@@ -0,0 +1,97 @@
package otlphttpauditor
import (
"context"
"github.com/SigNoz/signoz/pkg/auditor"
"github.com/SigNoz/signoz/pkg/auditor/auditorserver"
"github.com/SigNoz/signoz/pkg/factory"
client "github.com/SigNoz/signoz/pkg/http/client"
"github.com/SigNoz/signoz/pkg/licensing"
"github.com/SigNoz/signoz/pkg/types/audittypes"
"github.com/SigNoz/signoz/pkg/version"
"go.opentelemetry.io/collector/pdata/plog"
)
var _ auditor.Auditor = (*provider)(nil)
type provider struct {
settings factory.ScopedProviderSettings
config auditor.Config
licensing licensing.Licensing
build version.Build
server *auditorserver.Server
marshaler plog.ProtoMarshaler
httpClient *client.Client
}
func NewFactory(licensing licensing.Licensing, build version.Build) factory.ProviderFactory[auditor.Auditor, auditor.Config] {
return factory.NewProviderFactory(factory.MustNewName("otlphttp"), func(ctx context.Context, providerSettings factory.ProviderSettings, config auditor.Config) (auditor.Auditor, error) {
return newProvider(ctx, providerSettings, config, licensing, build)
})
}
func newProvider(_ context.Context, providerSettings factory.ProviderSettings, config auditor.Config, licensing licensing.Licensing, build version.Build) (auditor.Auditor, error) {
settings := factory.NewScopedProviderSettings(providerSettings, "github.com/SigNoz/signoz/ee/auditor/otlphttpauditor")
httpClient, err := client.New(
settings.Logger(),
providerSettings.TracerProvider,
providerSettings.MeterProvider,
client.WithTimeout(config.OTLPHTTP.Timeout),
client.WithRetryCount(retryCountFromConfig(config.OTLPHTTP.Retry)),
retrierOption(config.OTLPHTTP.Retry),
)
if err != nil {
return nil, err
}
provider := &provider{
settings: settings,
config: config,
licensing: licensing,
build: build,
marshaler: plog.ProtoMarshaler{},
httpClient: httpClient,
}
server, err := auditorserver.New(settings,
auditorserver.Config{
BufferSize: config.BufferSize,
BatchSize: config.BatchSize,
FlushInterval: config.FlushInterval,
},
provider.export,
)
if err != nil {
return nil, err
}
provider.server = server
return provider, nil
}
func (provider *provider) Start(ctx context.Context) error {
return provider.server.Start(ctx)
}
func (provider *provider) Audit(ctx context.Context, event audittypes.AuditEvent) {
if event.PrincipalOrgID.IsZero() {
provider.settings.Logger().WarnContext(ctx, "audit event dropped as org_id is zero")
return
}
if _, err := provider.licensing.GetActive(ctx, event.PrincipalOrgID); err != nil {
return
}
provider.server.Add(ctx, event)
}
func (provider *provider) Healthy() <-chan struct{} {
return provider.server.Healthy()
}
func (provider *provider) Stop(ctx context.Context) error {
return provider.server.Stop(ctx)
}

View File

@@ -0,0 +1,52 @@
package otlphttpauditor
import (
"time"
"github.com/SigNoz/signoz/pkg/auditor"
client "github.com/SigNoz/signoz/pkg/http/client"
)
// retrier implements client.Retriable with exponential backoff
// derived from auditor.RetryConfig.
type retrier struct {
initialInterval time.Duration
maxInterval time.Duration
}
func newRetrier(cfg auditor.RetryConfig) *retrier {
return &retrier{
initialInterval: cfg.InitialInterval,
maxInterval: cfg.MaxInterval,
}
}
// NextInterval returns the backoff duration for the given retry attempt.
// Uses exponential backoff: initialInterval * 2^retry, capped at maxInterval.
func (r *retrier) NextInterval(retry int) time.Duration {
interval := r.initialInterval
for range retry {
interval *= 2
}
return min(interval, r.maxInterval)
}
func retrierOption(cfg auditor.RetryConfig) client.Option {
return client.WithRetriable(newRetrier(cfg))
}
func retryCountFromConfig(cfg auditor.RetryConfig) int {
if !cfg.Enabled || cfg.MaxElapsedTime <= 0 {
return 0
}
count := 0
elapsed := time.Duration(0)
interval := cfg.InitialInterval
for elapsed < cfg.MaxElapsedTime {
elapsed += interval
interval = min(interval*2, cfg.MaxInterval)
count++
}
return count
}

View File

@@ -101,6 +101,22 @@ function PrivateRoute({ children }: PrivateRouteProps): JSX.Element {
preference.name === ORG_PREFERENCES.ORG_ONBOARDING,
)?.value;
// Don't redirect to onboarding if workspace has issues (blocked, suspended, or restricted)
// User needs access to settings/billing to fix payment issues
const isWorkspaceBlocked = trialInfo?.workSpaceBlock;
const isWorkspaceSuspended = activeLicense?.state === LicenseState.DEFAULTED;
const isWorkspaceAccessRestricted =
activeLicense?.state === LicenseState.TERMINATED ||
activeLicense?.state === LicenseState.EXPIRED ||
activeLicense?.state === LicenseState.CANCELLED;
const hasWorkspaceIssue =
isWorkspaceBlocked || isWorkspaceSuspended || isWorkspaceAccessRestricted;
if (hasWorkspaceIssue) {
return;
}
const isFirstUser = checkFirstTimeUser();
if (
isFirstUser &&
@@ -119,40 +135,36 @@ function PrivateRoute({ children }: PrivateRouteProps): JSX.Element {
orgPreferences,
usersData,
pathname,
trialInfo?.workSpaceBlock,
activeLicense?.state,
]);
const navigateToWorkSpaceBlocked = (route: any): void => {
const { path } = route;
const navigateToWorkSpaceBlocked = useCallback((): void => {
const isRouteEnabledForWorkspaceBlockedState =
isAdmin &&
(path === ROUTES.SETTINGS ||
path === ROUTES.ORG_SETTINGS ||
path === ROUTES.MEMBERS_SETTINGS ||
path === ROUTES.BILLING ||
path === ROUTES.MY_SETTINGS);
(pathname === ROUTES.SETTINGS ||
pathname === ROUTES.ORG_SETTINGS ||
pathname === ROUTES.MEMBERS_SETTINGS ||
pathname === ROUTES.BILLING ||
pathname === ROUTES.MY_SETTINGS);
if (
path &&
path !== ROUTES.WORKSPACE_LOCKED &&
pathname &&
pathname !== ROUTES.WORKSPACE_LOCKED &&
!isRouteEnabledForWorkspaceBlockedState
) {
history.push(ROUTES.WORKSPACE_LOCKED);
}
};
}, [isAdmin, pathname]);
const navigateToWorkSpaceAccessRestricted = (route: any): void => {
const { path } = route;
if (path && path !== ROUTES.WORKSPACE_ACCESS_RESTRICTED) {
const navigateToWorkSpaceAccessRestricted = useCallback((): void => {
if (pathname && pathname !== ROUTES.WORKSPACE_ACCESS_RESTRICTED) {
history.push(ROUTES.WORKSPACE_ACCESS_RESTRICTED);
}
};
}, [pathname]);
useEffect(() => {
if (!isFetchingActiveLicense && activeLicense) {
const currentRoute = mapRoutes.get('current');
const isTerminated = activeLicense.state === LicenseState.TERMINATED;
const isExpired = activeLicense.state === LicenseState.EXPIRED;
const isCancelled = activeLicense.state === LicenseState.CANCELLED;
@@ -161,61 +173,53 @@ function PrivateRoute({ children }: PrivateRouteProps): JSX.Element {
const { platform } = activeLicense;
if (
isWorkspaceAccessRestricted &&
platform === LicensePlatform.CLOUD &&
currentRoute
) {
navigateToWorkSpaceAccessRestricted(currentRoute);
if (isWorkspaceAccessRestricted && platform === LicensePlatform.CLOUD) {
navigateToWorkSpaceAccessRestricted();
}
}
}, [isFetchingActiveLicense, activeLicense, mapRoutes, pathname]);
}, [
isFetchingActiveLicense,
activeLicense,
navigateToWorkSpaceAccessRestricted,
]);
useEffect(() => {
if (!isFetchingActiveLicense) {
const currentRoute = mapRoutes.get('current');
const shouldBlockWorkspace = trialInfo?.workSpaceBlock;
if (
shouldBlockWorkspace &&
currentRoute &&
activeLicense?.platform === LicensePlatform.CLOUD
) {
navigateToWorkSpaceBlocked(currentRoute);
navigateToWorkSpaceBlocked();
}
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [
isFetchingActiveLicense,
trialInfo?.workSpaceBlock,
activeLicense?.platform,
mapRoutes,
pathname,
navigateToWorkSpaceBlocked,
]);
const navigateToWorkSpaceSuspended = (route: any): void => {
const { path } = route;
if (path && path !== ROUTES.WORKSPACE_SUSPENDED) {
const navigateToWorkSpaceSuspended = useCallback((): void => {
if (pathname && pathname !== ROUTES.WORKSPACE_SUSPENDED) {
history.push(ROUTES.WORKSPACE_SUSPENDED);
}
};
}, [pathname]);
useEffect(() => {
if (!isFetchingActiveLicense && activeLicense) {
const currentRoute = mapRoutes.get('current');
const shouldSuspendWorkspace =
activeLicense.state === LicenseState.DEFAULTED;
if (
shouldSuspendWorkspace &&
currentRoute &&
activeLicense.platform === LicensePlatform.CLOUD
) {
navigateToWorkSpaceSuspended(currentRoute);
navigateToWorkSpaceSuspended();
}
}
}, [isFetchingActiveLicense, activeLicense, mapRoutes, pathname]);
}, [isFetchingActiveLicense, activeLicense, navigateToWorkSpaceSuspended]);
useEffect(() => {
if (org && org.length > 0 && org[0].id !== undefined) {

File diff suppressed because it is too large

View File

@@ -48,6 +48,8 @@ import {
formatDataForTable,
getK8sVolumesListColumns,
getK8sVolumesListQuery,
getVolumeListGroupedByRowDataQueryKey,
getVolumesListQueryKey,
K8sVolumesRowData,
} from './utils';
import VolumeDetails from './VolumeDetails';
@@ -167,6 +169,26 @@ function K8sVolumesList({
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [minTime, maxTime, orderBy, selectedRowData, groupBy]);
const groupedByRowDataQueryKey = useMemo(
() =>
getVolumeListGroupedByRowDataQueryKey(
selectedRowData?.groupedByMeta,
queryFilters,
orderBy,
groupBy,
minTime,
maxTime,
),
[
selectedRowData?.groupedByMeta,
queryFilters,
orderBy,
groupBy,
minTime,
maxTime,
],
);
const {
data: groupedByRowData,
isFetching: isFetchingGroupedByRowData,
@@ -176,7 +198,7 @@ function K8sVolumesList({
} = useGetK8sVolumesList(
fetchGroupedByRowDataQuery as K8sVolumesListPayload,
{
queryKey: ['volumeList', fetchGroupedByRowDataQuery],
queryKey: groupedByRowDataQueryKey,
enabled: !!fetchGroupedByRowDataQuery && !!selectedRowData,
},
undefined,
@@ -221,6 +243,28 @@ function K8sVolumesList({
return queryPayload;
}, [pageSize, currentPage, queryFilters, minTime, maxTime, orderBy, groupBy]);
const volumesListQueryKey = useMemo(() => {
return getVolumesListQueryKey(
selectedVolumeUID,
pageSize,
currentPage,
queryFilters,
orderBy,
groupBy,
minTime,
maxTime,
);
}, [
selectedVolumeUID,
pageSize,
currentPage,
queryFilters,
groupBy,
orderBy,
minTime,
maxTime,
]);
const formattedGroupedByVolumesData = useMemo(
() =>
formatDataForTable(groupedByRowData?.payload?.data?.records || [], groupBy),
@@ -237,7 +281,7 @@ function K8sVolumesList({
const { data, isFetching, isLoading, isError } = useGetK8sVolumesList(
query as K8sVolumesListPayload,
{
queryKey: ['volumeList', query],
queryKey: volumesListQueryKey,
enabled: !!query,
},
undefined,

View File

@@ -77,6 +77,74 @@ export const getK8sVolumesListQuery = (): K8sVolumesListPayload => ({
orderBy: { columnName: 'usage', order: 'desc' },
});
export const getVolumeListGroupedByRowDataQueryKey = (
groupedByMeta: K8sVolumesData['meta'] | undefined,
queryFilters: IBuilderQuery['filters'],
orderBy: { columnName: string; order: 'asc' | 'desc' } | null,
groupBy: IBuilderQuery['groupBy'],
minTime: number,
maxTime: number,
): (string | undefined)[] => {
// When we have grouped by metadata defined
// We need to leave out the min/max time
// Otherwise it will cause a loop
const groupedByMetaStr = JSON.stringify(groupedByMeta || undefined) ?? '';
if (groupedByMetaStr) {
return [
'volumeList',
JSON.stringify(queryFilters),
JSON.stringify(orderBy),
JSON.stringify(groupBy),
groupedByMetaStr,
];
}
return [
'volumeList',
JSON.stringify(queryFilters),
JSON.stringify(orderBy),
JSON.stringify(groupBy),
groupedByMetaStr,
String(minTime),
String(maxTime),
];
};
export const getVolumesListQueryKey = (
selectedVolumeUID: string | null,
pageSize: number,
currentPage: number,
queryFilters: IBuilderQuery['filters'],
orderBy: { columnName: string; order: 'asc' | 'desc' } | null,
groupBy: IBuilderQuery['groupBy'],
minTime: number,
maxTime: number,
): (string | undefined)[] => {
// When selected volume is defined
// We need to leave out the min/max time
// Otherwise it will cause a loop
if (selectedVolumeUID) {
return [
'volumeList',
String(pageSize),
String(currentPage),
JSON.stringify(queryFilters),
JSON.stringify(orderBy),
JSON.stringify(groupBy),
];
}
return [
'volumeList',
String(pageSize),
String(currentPage),
JSON.stringify(queryFilters),
JSON.stringify(orderBy),
JSON.stringify(groupBy),
String(minTime),
String(maxTime),
];
};
const columnsConfig = [
{
title: <div className="column-header-left pvc-name-header">PVC Name</div>,

View File

@@ -1,29 +1,93 @@
import setupCommonMocks from '../commonMocks';
setupCommonMocks();
import { QueryClient, QueryClientProvider } from 'react-query';
// eslint-disable-next-line no-restricted-imports
import { MemoryRouter } from 'react-router-dom';
import { render, waitFor } from '@testing-library/react';
import { MemoryRouter } from 'react-router-dom-v5-compat';
import { FeatureKeys } from 'constants/features';
import K8sVolumesList from 'container/InfraMonitoringK8s/Volumes/K8sVolumesList';
import { rest, server } from 'mocks-server/server';
import { NuqsTestingAdapter } from 'nuqs/adapters/testing';
import { IAppContext, IUser } from 'providers/App/types';
import { QueryBuilderProvider } from 'providers/QueryBuilder';
// eslint-disable-next-line no-restricted-imports
import { applyMiddleware, legacy_createStore as createStore } from 'redux';
import thunk from 'redux-thunk';
import reducers from 'store/reducers';
import { act, render, screen, userEvent, waitFor } from 'tests/test-utils';
import { UPDATE_TIME_INTERVAL } from 'types/actions/globalTime';
import { LicenseResModel } from 'types/api/licensesV3/getActive';
const queryClient = new QueryClient({
defaultOptions: {
queries: {
retry: false,
cacheTime: 0,
},
},
});
import { INFRA_MONITORING_K8S_PARAMS_KEYS } from '../../constants';
const SERVER_URL = 'http://localhost/api';
// jsdom does not implement IntersectionObserver — provide a no-op stub
const mockObserver = {
observe: jest.fn(),
unobserve: jest.fn(),
disconnect: jest.fn(),
};
global.IntersectionObserver = jest
.fn()
.mockImplementation(() => mockObserver) as any;
const mockVolume = {
persistentVolumeClaimName: 'test-pvc',
volumeAvailable: 1000000,
volumeCapacity: 5000000,
volumeInodes: 100,
volumeInodesFree: 50,
volumeInodesUsed: 50,
volumeUsage: 4000000,
meta: {
k8s_cluster_name: 'test-cluster',
k8s_namespace_name: 'test-namespace',
k8s_node_name: 'test-node',
k8s_persistentvolumeclaim_name: 'test-pvc',
k8s_pod_name: 'test-pod',
k8s_pod_uid: 'test-pod-uid',
k8s_statefulset_name: '',
},
};
const mockVolumesResponse = {
status: 'success',
data: {
type: '',
records: [mockVolume],
groups: null,
total: 1,
sentAnyHostMetricsData: false,
isSendingK8SAgentMetrics: false,
},
};
/** Renders K8sVolumesList with a real Redux store so dispatched actions affect state. */
function renderWithRealStore(
initialEntries?: Record<string, any>,
): { testStore: ReturnType<typeof createStore> } {
const testStore = createStore(reducers, applyMiddleware(thunk as any));
const queryClient = new QueryClient({
defaultOptions: { queries: { retry: false } },
});
render(
<NuqsTestingAdapter searchParams={initialEntries}>
<QueryClientProvider client={queryClient}>
<QueryBuilderProvider>
<MemoryRouter>
<K8sVolumesList
isFiltersVisible={false}
handleFilterVisibilityChange={jest.fn()}
quickFiltersLastUpdated={-1}
/>
</MemoryRouter>
</QueryBuilderProvider>
</QueryClientProvider>
</NuqsTestingAdapter>,
);
return { testStore };
}
describe('K8sVolumesList - useGetAggregateKeys Category Regression', () => {
let requestsMade: Array<{
url: string;
@@ -33,7 +97,6 @@ describe('K8sVolumesList - useGetAggregateKeys Category Regression', () => {
beforeEach(() => {
requestsMade = [];
queryClient.clear();
server.use(
rest.get(`${SERVER_URL}/v3/autocomplete/attribute_keys`, (req, res, ctx) => {
@@ -79,19 +142,7 @@ describe('K8sVolumesList - useGetAggregateKeys Category Regression', () => {
});
it('should call aggregate keys API with k8s_volume_capacity', async () => {
render(
<NuqsTestingAdapter>
<QueryClientProvider client={queryClient}>
<MemoryRouter>
<K8sVolumesList
isFiltersVisible={false}
handleFilterVisibilityChange={jest.fn()}
quickFiltersLastUpdated={-1}
/>
</MemoryRouter>
</QueryClientProvider>
</NuqsTestingAdapter>,
);
renderWithRealStore();
await waitFor(() => {
expect(requestsMade.length).toBeGreaterThan(0);
@@ -128,19 +179,7 @@ describe('K8sVolumesList - useGetAggregateKeys Category Regression', () => {
activeLicense: (null as unknown) as LicenseResModel,
} as IAppContext);
render(
<NuqsTestingAdapter>
<QueryClientProvider client={queryClient}>
<MemoryRouter>
<K8sVolumesList
isFiltersVisible={false}
handleFilterVisibilityChange={jest.fn()}
quickFiltersLastUpdated={-1}
/>
</MemoryRouter>
</QueryClientProvider>
</NuqsTestingAdapter>,
);
renderWithRealStore();
await waitFor(() => {
expect(requestsMade.length).toBeGreaterThan(0);
@@ -159,3 +198,193 @@ describe('K8sVolumesList - useGetAggregateKeys Category Regression', () => {
expect(aggregateAttribute).toBe('k8s.volume.capacity');
});
});
describe('K8sVolumesList', () => {
beforeEach(() => {
server.use(
rest.post('http://localhost/api/v1/pvcs/list', (_req, res, ctx) =>
res(ctx.status(200), ctx.json(mockVolumesResponse)),
),
rest.get(
'http://localhost/api/v3/autocomplete/attribute_keys',
(_req, res, ctx) =>
res(ctx.status(200), ctx.json({ data: { attributeKeys: [] } })),
),
);
});
it('renders volume rows from API response', async () => {
renderWithRealStore();
await waitFor(async () => {
const elements = await screen.findAllByText('test-pvc');
expect(elements.length).toBeGreaterThan(0);
});
});
it('opens VolumeDetails when a volume row is clicked', async () => {
const user = userEvent.setup();
renderWithRealStore();
const pvcCells = await screen.findAllByText('test-pvc');
expect(pvcCells.length).toBeGreaterThan(0);
const row = pvcCells[0].closest('tr');
expect(row).not.toBeNull();
await user.click(row!);
await waitFor(async () => {
const cells = await screen.findAllByText('test-pvc');
expect(cells.length).toBeGreaterThan(1);
});
});
it.skip('closes VolumeDetails when the close button is clicked', async () => {
const user = userEvent.setup();
renderWithRealStore();
const pvcCells = await screen.findAllByText('test-pvc');
expect(pvcCells.length).toBeGreaterThan(0);
const row = pvcCells[0].closest('tr');
await user.click(row!);
await waitFor(() => {
expect(screen.getByRole('button', { name: 'Close' })).toBeInTheDocument();
});
await user.click(screen.getByRole('button', { name: 'Close' }));
await waitFor(() => {
expect(
screen.queryByRole('button', { name: 'Close' }),
).not.toBeInTheDocument();
});
});
it('does not re-fetch the volumes list when time range changes after selecting a volume', async () => {
const user = userEvent.setup();
let apiCallCount = 0;
server.use(
rest.post('http://localhost/api/v1/pvcs/list', async (_req, res, ctx) => {
apiCallCount += 1;
return res(ctx.status(200), ctx.json(mockVolumesResponse));
}),
);
const { testStore } = renderWithRealStore();
await waitFor(() => expect(apiCallCount).toBe(1));
const pvcCells = await screen.findAllByText('test-pvc');
const row = pvcCells[0].closest('tr');
await user.click(row!);
await waitFor(async () => {
const cells = await screen.findAllByText('test-pvc');
expect(cells.length).toBeGreaterThan(1);
});
// Wait for nuqs URL state to fully propagate to the component
// The selectedVolumeUID is managed via nuqs (async URL state),
// so we need to ensure the state has settled before dispatching time changes
await act(async () => {
await new Promise((resolve) => {
setTimeout(resolve, 0);
});
});
const countAfterClick = apiCallCount;
// There's a specific component causing the min/max time to be updated
// After the volume loads, it triggers the change again
// And then the query to fetch data for the selected volume enters in a loop
act(() => {
testStore.dispatch({
type: UPDATE_TIME_INTERVAL,
payload: {
minTime: Date.now() * 1000000 - 30 * 60 * 1000 * 1000000,
maxTime: Date.now() * 1000000,
selectedTime: '30m',
},
} as any);
});
// Allow any potential re-fetch to settle
await new Promise((resolve) => {
setTimeout(resolve, 500);
});
expect(apiCallCount).toBe(countAfterClick);
});
it('does not re-fetch groupedByRowData when time range changes after expanding a volume row with groupBy', async () => {
const user = userEvent.setup();
const groupByValue = [{ key: 'k8s_namespace_name' }];
let groupedByRowDataCallCount = 0;
server.use(
rest.post('http://localhost/api/v1/pvcs/list', async (req, res, ctx) => {
const body = await req.json();
// Check for both underscore and dot notation keys since dotMetricsEnabled
// may be true or false depending on test order
const isGroupedByRowDataRequest = body.filters?.items?.some(
(item: { key?: { key?: string }; value?: string }) =>
(item.key?.key === 'k8s_namespace_name' ||
item.key?.key === 'k8s.namespace.name') &&
item.value === 'test-namespace',
);
if (isGroupedByRowDataRequest) {
groupedByRowDataCallCount += 1;
}
return res(ctx.status(200), ctx.json(mockVolumesResponse));
}),
rest.get(
'http://localhost/api/v3/autocomplete/attribute_keys',
(_req, res, ctx) =>
res(
ctx.status(200),
ctx.json({
data: {
attributeKeys: [{ key: 'k8s_namespace_name', dataType: 'string' }],
},
}),
),
),
);
const { testStore } = renderWithRealStore({
[INFRA_MONITORING_K8S_PARAMS_KEYS.GROUP_BY]: JSON.stringify(groupByValue),
});
await waitFor(async () => {
const elements = await screen.findAllByText('test-namespace');
return expect(elements.length).toBeGreaterThan(0);
});
const row = (await screen.findAllByText('test-namespace'))[0].closest('tr');
expect(row).not.toBeNull();
await user.click(row as HTMLElement);
await waitFor(() => expect(groupedByRowDataCallCount).toBe(1));
const countAfterExpand = groupedByRowDataCallCount;
act(() => {
testStore.dispatch({
type: UPDATE_TIME_INTERVAL,
payload: {
minTime: Date.now() * 1000000 - 30 * 60 * 1000 * 1000000,
maxTime: Date.now() * 1000000,
selectedTime: '30m',
},
} as any);
});
// Allow any potential re-fetch to settle
await new Promise((resolve) => {
setTimeout(resolve, 500);
});
expect(groupedByRowDataCallCount).toBe(countAfterExpand);
});
});

go.mod

@@ -372,7 +372,7 @@ require (
go.opentelemetry.io/otel/log v0.15.0 // indirect
go.opentelemetry.io/otel/sdk/log v0.14.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.40.0
go.opentelemetry.io/proto/otlp v1.9.0 // indirect
go.opentelemetry.io/proto/otlp v1.9.0
go.uber.org/atomic v1.11.0 // indirect
go.uber.org/mock v0.6.0 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
@@ -381,7 +381,7 @@ require (
golang.org/x/time v0.14.0 // indirect
golang.org/x/tools v0.41.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20260203192932-546029d2fa20 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20260128011058-8636f8732409 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20260128011058-8636f8732409
google.golang.org/grpc v1.78.0 // indirect
gopkg.in/telebot.v3 v3.3.8 // indirect
k8s.io/client-go v0.35.0 // indirect

grammar/HavingExpression.g4 (new file)

@@ -0,0 +1,229 @@
grammar HavingExpression;
/*
* Parser Rules
*/
query
: expression EOF
;
// Expression with standard boolean precedence:
// - parentheses > NOT > AND > OR
expression
: orExpression
;
// OR expressions
orExpression
: andExpression ( OR andExpression )*
;
// AND expressions; adjacent primaries with no explicit AND between them
// are joined by an implicit AND
andExpression
: primary ( AND primary | primary )*
;
// Primary: an optionally negated expression.
// NOT can be applied to a parenthesized expression or a bare comparison.
// E.g.: NOT (count() > 100 AND sum(bytes) < 500)
// NOT count() > 100
primary
: NOT? LPAREN orExpression RPAREN
| NOT? comparison
;
/*
* Comparison between two arithmetic operands.
* E.g.: count() > 100, total_duration >= 500, __result_0 != 0
*/
comparison
: operand compOp operand
;
compOp
: EQUALS
| NOT_EQUALS
| NEQ
| LT
| LE
| GT
| GE
;
/*
* Operands support additive arithmetic (+/-).
* E.g.: sum(a) + sum(b) > 1000, count() - 10 > 0
*/
operand
: operand (PLUS | MINUS) term
| term
;
/*
* Terms support multiplicative arithmetic (*, /, %)
* E.g.: count() * 2 > 100, sum(bytes) / 1024 > 10
*/
term
: term (STAR | SLASH | PERCENT) factor
| factor
;
/*
* Factors: atoms, parenthesized operands, or unary-signed sub-factors.
* E.g.: (sum(a) + sum(b)) * 2 > 100, -count() > 0, -(avg(x) + 1) > 0
*
* Note: the lexer's NUMBER rule includes an optional SIGN prefix, so a bare
* negative literal like -10 is a single NUMBER token and is handled by atom,
* not by the unary rule here. Unary sign applies when the operand following
 * the sign is a non-literal: a function call, an identifier, or a parenthesized
* expression. As a result, `count()-10 > 0` is always rejected: after `count()`
* the lexer produces NUMBER(-10), which is not a valid compOp or binary operator.
*/
factor
: (PLUS | MINUS) factor
| LPAREN operand RPAREN
| atom
;
/*
* Atoms are the basic building blocks of arithmetic operands:
* - aggregate function calls: count(), sum(bytes), avg(duration)
* - identifier references: aliases, result refs (__result, __result_0, __result0)
* - numeric literals: 100, 0.5, 1e6
* - string literals: 'xyz' — recognized so we can give a friendly error
*
* String literals in HAVING are always invalid (aggregator results are numeric),
* but we accept them here so the visitor can produce a clear error message instead
* of a raw syntax error.
*/
atom
: functionCall
| identifier
| NUMBER
| STRING
;
/*
* Aggregate function calls, e.g.:
* count(), sum(bytes), avg(duration_nano)
* countIf(level='error'), sumIf(bytes, status > 400)
* p99(duration), avg(sum(cpu_usage))
*
* Function arguments are parsed as a permissive token sequence (funcArgToken+)
* so that complex aggregation expressions — including nested function calls and
* filter predicates with string literals — can be referenced verbatim in the
* HAVING expression. The visitor looks up the full call text (whitespace-free,
* via ctx.GetText()) in the column map, which stores normalized (space-stripped)
* aggregation expression keys.
*/
functionCall
: IDENTIFIER LPAREN functionArgList? RPAREN
;
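Since `ctx.GetText()` returns the call text with all whitespace already removed, the column-map keys must be normalized the same way. A minimal sketch of that normalization (the helper name `normalizeAggExpr` is illustrative, not from the PR, and the real code may normalize differently, e.g. inside string literals):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeAggExpr strips all whitespace so that "sum( bytes )" in the
// aggregation list and "sum(bytes)" in the HAVING clause map to the same
// column key.
func normalizeAggExpr(expr string) string {
	return strings.Join(strings.Fields(expr), "")
}

func main() {
	// Keys are stored space-stripped when the column map is built...
	columns := map[string]string{
		normalizeAggExpr("sum( bytes )"): "__result_0",
	}
	// ...so the whitespace-free text from ctx.GetText() looks them up directly.
	fmt.Println(columns[normalizeAggExpr("sum(bytes)")]) // → __result_0
}
```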
functionArgList
: funcArg ( COMMA funcArg )*
;
/*
* A single function argument is one or more consecutive arg-tokens.
* Commas at the top level separate arguments; closing parens terminate the list.
*/
funcArg
: funcArgToken+
;
/*
* Permissive token set for function argument content. Covers:
* - simple identifiers: bytes, duration
* - string literals: 'error', "info"
* - numeric literals: 200, 3.14
* - comparison operators: level='error', status > 400
* - arithmetic operators: x + y
* - boolean connectives: level='error' AND status=200
* - balanced parens: nested calls like sum(duration)
*/
funcArgToken
: IDENTIFIER
| STRING
| NUMBER
| BOOL
| EQUALS | NOT_EQUALS | NEQ | LT | LE | GT | GE
| PLUS | MINUS | STAR | SLASH | PERCENT
| NOT | AND | OR
| LPAREN funcArgToken* RPAREN
;
// Identifier references: aliases, field names, result references
// Examples: total_logs, error_count, __result, __result_0, __result0, p99
identifier
: IDENTIFIER
;
/*
* Lexer Rules
*/
// Punctuation
LPAREN : '(' ;
RPAREN : ')' ;
COMMA : ',' ;
// Comparison operators
EQUALS : '=' | '==' ;
NOT_EQUALS : '!=' ;
NEQ : '<>' ; // alternate not-equals operator
LT : '<' ;
LE : '<=' ;
GT : '>' ;
GE : '>=' ;
// Arithmetic operators
PLUS : '+' ;
MINUS : '-' ;
STAR : '*' ;
SLASH : '/' ;
PERCENT : '%' ;
// Boolean logic (case-insensitive)
NOT : [Nn][Oo][Tt] ;
AND : [Aa][Nn][Dd] ;
OR : [Oo][Rr] ;
// Boolean constants (case-insensitive)
BOOL
: [Tt][Rr][Uu][Ee]
| [Ff][Aa][Ll][Ss][Ee]
;
fragment SIGN : [+-] ;
// Numbers: optional sign, digits, optional decimal, optional scientific notation
// E.g.: 100, -10, 0.5, 1.5e3, .75, -3.14
NUMBER
: SIGN? DIGIT+ ('.' DIGIT*)? ([eE] SIGN? DIGIT+)?
| SIGN? '.' DIGIT+ ([eE] SIGN? DIGIT+)?
;
// Identifiers: start with a letter or underscore, followed by alphanumeric/underscores.
// Optionally dotted for nested field paths.
// Covers: count, sum, p99, total_logs, error_count, __result, __result_0, __result0,
// service.name, span.duration
IDENTIFIER
: [a-zA-Z_] [a-zA-Z0-9_]* ( '.' [a-zA-Z_] [a-zA-Z0-9_]* )*
;
// Quoted string literals (single or double-quoted).
// These are valid tokens inside function arguments (e.g. countIf(level='error'))
// but are always rejected in comparison-operand position by the visitor.
STRING
: '\'' (~'\'')* '\''
| '"' (~'"')* '"'
;
// Skip whitespace
WS
: [ \t\r\n]+ -> skip
;
fragment DIGIT : [0-9] ;


@@ -3,10 +3,15 @@ package auditor
import (
"context"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/SigNoz/signoz/pkg/types/audittypes"
)
var (
ErrCodeAuditExportFailed = errors.MustNewCode("audit_export_failed")
)
type Auditor interface {
factory.ServiceWithHealthy


@@ -1,6 +1,7 @@
package auditor
import (
"net/url"
"time"
"github.com/SigNoz/signoz/pkg/errors"
@@ -10,6 +11,7 @@ import (
var _ factory.Config = (*Config)(nil)
type Config struct {
// Provider specifies the audit export implementation to use.
Provider string `mapstructure:"provider"`
// BufferSize is the async channel capacity for audit events.
@@ -28,18 +30,12 @@ type Config struct {
// OTLPHTTPConfig holds configuration for the OTLP HTTP exporter provider.
// Fields map to go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp options.
type OTLPHTTPConfig struct {
// Endpoint is the target host:port (without scheme or path).
Endpoint string `mapstructure:"endpoint"`
// URLPath overrides the default URL path (/v1/logs).
URLPath string `mapstructure:"url_path"`
// Endpoint is the target scheme://host:port of the OTLP HTTP endpoint.
Endpoint *url.URL `mapstructure:"endpoint"`
// Insecure disables TLS, using HTTP instead of HTTPS.
Insecure bool `mapstructure:"insecure"`
// Compression sets the compression strategy. Supported: "none", "gzip".
Compression string `mapstructure:"compression"`
// Timeout is the maximum duration for an export attempt.
Timeout time.Duration `mapstructure:"timeout"`
@@ -71,10 +67,12 @@ func newConfig() factory.Config {
BatchSize: 100,
FlushInterval: time.Second,
OTLPHTTP: OTLPHTTPConfig{
Endpoint: "localhost:4318",
URLPath: "/v1/logs",
Compression: "none",
Timeout: 10 * time.Second,
Endpoint: &url.URL{
Scheme: "http",
Host: "localhost:4318",
Path: "/v1/logs",
},
Timeout: 10 * time.Second,
Retry: RetryConfig{
Enabled: true,
InitialInterval: 5 * time.Second,
@@ -93,14 +91,24 @@ func (c Config) Validate() error {
if c.BufferSize <= 0 {
return errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "auditor::buffer_size must be greater than 0")
}
if c.BatchSize <= 0 {
return errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "auditor::batch_size must be greater than 0")
}
if c.FlushInterval <= 0 {
return errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "auditor::flush_interval must be greater than 0")
}
if c.BatchSize > c.BufferSize {
return errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "auditor::batch_size must not exceed auditor::buffer_size")
}
if c.Provider == "otlphttp" {
if c.OTLPHTTP.Endpoint == nil {
return errors.New(errors.TypeInvalidInput, errors.CodeInvalidInput, "auditor::otlphttp::endpoint must be set when provider is otlphttp")
}
}
return nil
}
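Putting the defaults and the validation rules together, a configuration satisfying `Validate` might look like the fragment below. This is a sketch only: the key names are inferred from the `mapstructure` tags in the diff, the top-level `auditor` key and the ability to decode a URL string into the `*url.URL` endpoint are assumptions, and `buffer_size` and the retry settings are chosen arbitrarily within the validated constraints (`batch_size` must not exceed `buffer_size`, and `endpoint` must be set when the provider is `otlphttp`).

```yaml
auditor:
  provider: otlphttp
  buffer_size: 1000       # > 0 and >= batch_size
  batch_size: 100
  flush_interval: 1s
  otlphttp:
    endpoint: http://localhost:4318/v1/logs  # full scheme://host:port URL
    timeout: 10s
```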


@@ -5,7 +5,7 @@ import (
"slices"
"strings"
parser "github.com/SigNoz/signoz/pkg/parser/grammar"
parser "github.com/SigNoz/signoz/pkg/parser/filterquery/grammar"
"github.com/antlr4-go/antlr/v4"
"golang.org/x/exp/maps"


@@ -15,12 +15,16 @@ import (
type Instrumentation interface {
// Logger returns the Slog logger.
Logger() *slog.Logger
// MeterProvider returns the OpenTelemetry meter provider.
MeterProvider() sdkmetric.MeterProvider
// TracerProvider returns the OpenTelemetry tracer provider.
TracerProvider() sdktrace.TracerProvider
// PrometheusRegisterer returns the Prometheus registerer.
PrometheusRegisterer() prometheus.Registerer
// ToProviderSettings converts instrumentation to provider settings.
ToProviderSettings() factory.ProviderSettings
}

File diff suppressed because one or more lines are too long


@@ -0,0 +1,37 @@
LPAREN=1
RPAREN=2
COMMA=3
EQUALS=4
NOT_EQUALS=5
NEQ=6
LT=7
LE=8
GT=9
GE=10
PLUS=11
MINUS=12
STAR=13
SLASH=14
PERCENT=15
NOT=16
AND=17
OR=18
BOOL=19
NUMBER=20
IDENTIFIER=21
STRING=22
WS=23
'('=1
')'=2
','=3
'!='=5
'<>'=6
'<'=7
'<='=8
'>'=9
'>='=10
'+'=11
'-'=12
'*'=13
'/'=14
'%'=15

File diff suppressed because one or more lines are too long


@@ -0,0 +1,37 @@
LPAREN=1
RPAREN=2
COMMA=3
EQUALS=4
NOT_EQUALS=5
NEQ=6
LT=7
LE=8
GT=9
GE=10
PLUS=11
MINUS=12
STAR=13
SLASH=14
PERCENT=15
NOT=16
AND=17
OR=18
BOOL=19
NUMBER=20
IDENTIFIER=21
STRING=22
WS=23
'('=1
')'=2
','=3
'!='=5
'<>'=6
'<'=7
'<='=8
'>'=9
'>='=10
'+'=11
'-'=12
'*'=13
'/'=14
'%'=15


@@ -0,0 +1,118 @@
// Code generated from grammar/HavingExpression.g4 by ANTLR 4.13.2. DO NOT EDIT.
package parser // HavingExpression
import "github.com/antlr4-go/antlr/v4"
// BaseHavingExpressionListener is a complete listener for a parse tree produced by HavingExpressionParser.
type BaseHavingExpressionListener struct{}
var _ HavingExpressionListener = &BaseHavingExpressionListener{}
// VisitTerminal is called when a terminal node is visited.
func (s *BaseHavingExpressionListener) VisitTerminal(node antlr.TerminalNode) {}
// VisitErrorNode is called when an error node is visited.
func (s *BaseHavingExpressionListener) VisitErrorNode(node antlr.ErrorNode) {}
// EnterEveryRule is called when any rule is entered.
func (s *BaseHavingExpressionListener) EnterEveryRule(ctx antlr.ParserRuleContext) {}
// ExitEveryRule is called when any rule is exited.
func (s *BaseHavingExpressionListener) ExitEveryRule(ctx antlr.ParserRuleContext) {}
// EnterQuery is called when production query is entered.
func (s *BaseHavingExpressionListener) EnterQuery(ctx *QueryContext) {}
// ExitQuery is called when production query is exited.
func (s *BaseHavingExpressionListener) ExitQuery(ctx *QueryContext) {}
// EnterExpression is called when production expression is entered.
func (s *BaseHavingExpressionListener) EnterExpression(ctx *ExpressionContext) {}
// ExitExpression is called when production expression is exited.
func (s *BaseHavingExpressionListener) ExitExpression(ctx *ExpressionContext) {}
// EnterOrExpression is called when production orExpression is entered.
func (s *BaseHavingExpressionListener) EnterOrExpression(ctx *OrExpressionContext) {}
// ExitOrExpression is called when production orExpression is exited.
func (s *BaseHavingExpressionListener) ExitOrExpression(ctx *OrExpressionContext) {}
// EnterAndExpression is called when production andExpression is entered.
func (s *BaseHavingExpressionListener) EnterAndExpression(ctx *AndExpressionContext) {}
// ExitAndExpression is called when production andExpression is exited.
func (s *BaseHavingExpressionListener) ExitAndExpression(ctx *AndExpressionContext) {}
// EnterPrimary is called when production primary is entered.
func (s *BaseHavingExpressionListener) EnterPrimary(ctx *PrimaryContext) {}
// ExitPrimary is called when production primary is exited.
func (s *BaseHavingExpressionListener) ExitPrimary(ctx *PrimaryContext) {}
// EnterComparison is called when production comparison is entered.
func (s *BaseHavingExpressionListener) EnterComparison(ctx *ComparisonContext) {}
// ExitComparison is called when production comparison is exited.
func (s *BaseHavingExpressionListener) ExitComparison(ctx *ComparisonContext) {}
// EnterCompOp is called when production compOp is entered.
func (s *BaseHavingExpressionListener) EnterCompOp(ctx *CompOpContext) {}
// ExitCompOp is called when production compOp is exited.
func (s *BaseHavingExpressionListener) ExitCompOp(ctx *CompOpContext) {}
// EnterOperand is called when production operand is entered.
func (s *BaseHavingExpressionListener) EnterOperand(ctx *OperandContext) {}
// ExitOperand is called when production operand is exited.
func (s *BaseHavingExpressionListener) ExitOperand(ctx *OperandContext) {}
// EnterTerm is called when production term is entered.
func (s *BaseHavingExpressionListener) EnterTerm(ctx *TermContext) {}
// ExitTerm is called when production term is exited.
func (s *BaseHavingExpressionListener) ExitTerm(ctx *TermContext) {}
// EnterFactor is called when production factor is entered.
func (s *BaseHavingExpressionListener) EnterFactor(ctx *FactorContext) {}
// ExitFactor is called when production factor is exited.
func (s *BaseHavingExpressionListener) ExitFactor(ctx *FactorContext) {}
// EnterAtom is called when production atom is entered.
func (s *BaseHavingExpressionListener) EnterAtom(ctx *AtomContext) {}
// ExitAtom is called when production atom is exited.
func (s *BaseHavingExpressionListener) ExitAtom(ctx *AtomContext) {}
// EnterFunctionCall is called when production functionCall is entered.
func (s *BaseHavingExpressionListener) EnterFunctionCall(ctx *FunctionCallContext) {}
// ExitFunctionCall is called when production functionCall is exited.
func (s *BaseHavingExpressionListener) ExitFunctionCall(ctx *FunctionCallContext) {}
// EnterFunctionArgList is called when production functionArgList is entered.
func (s *BaseHavingExpressionListener) EnterFunctionArgList(ctx *FunctionArgListContext) {}
// ExitFunctionArgList is called when production functionArgList is exited.
func (s *BaseHavingExpressionListener) ExitFunctionArgList(ctx *FunctionArgListContext) {}
// EnterFuncArg is called when production funcArg is entered.
func (s *BaseHavingExpressionListener) EnterFuncArg(ctx *FuncArgContext) {}
// ExitFuncArg is called when production funcArg is exited.
func (s *BaseHavingExpressionListener) ExitFuncArg(ctx *FuncArgContext) {}
// EnterFuncArgToken is called when production funcArgToken is entered.
func (s *BaseHavingExpressionListener) EnterFuncArgToken(ctx *FuncArgTokenContext) {}
// ExitFuncArgToken is called when production funcArgToken is exited.
func (s *BaseHavingExpressionListener) ExitFuncArgToken(ctx *FuncArgTokenContext) {}
// EnterIdentifier is called when production identifier is entered.
func (s *BaseHavingExpressionListener) EnterIdentifier(ctx *IdentifierContext) {}
// ExitIdentifier is called when production identifier is exited.
func (s *BaseHavingExpressionListener) ExitIdentifier(ctx *IdentifierContext) {}


@@ -0,0 +1,73 @@
// Code generated from grammar/HavingExpression.g4 by ANTLR 4.13.2. DO NOT EDIT.
package parser // HavingExpression
import "github.com/antlr4-go/antlr/v4"
type BaseHavingExpressionVisitor struct {
*antlr.BaseParseTreeVisitor
}
func (v *BaseHavingExpressionVisitor) VisitQuery(ctx *QueryContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitExpression(ctx *ExpressionContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitOrExpression(ctx *OrExpressionContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitAndExpression(ctx *AndExpressionContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitPrimary(ctx *PrimaryContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitComparison(ctx *ComparisonContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitCompOp(ctx *CompOpContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitOperand(ctx *OperandContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitTerm(ctx *TermContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitFactor(ctx *FactorContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitAtom(ctx *AtomContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitFunctionCall(ctx *FunctionCallContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitFunctionArgList(ctx *FunctionArgListContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitFuncArg(ctx *FuncArgContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitFuncArgToken(ctx *FuncArgTokenContext) interface{} {
return v.VisitChildren(ctx)
}
func (v *BaseHavingExpressionVisitor) VisitIdentifier(ctx *IdentifierContext) interface{} {
return v.VisitChildren(ctx)
}


@@ -0,0 +1,224 @@
// Code generated from grammar/HavingExpression.g4 by ANTLR 4.13.2. DO NOT EDIT.
package parser
import (
"fmt"
"github.com/antlr4-go/antlr/v4"
"sync"
"unicode"
)
// Suppress unused import error
var _ = fmt.Printf
var _ = sync.Once{}
var _ = unicode.IsLetter
type HavingExpressionLexer struct {
*antlr.BaseLexer
channelNames []string
modeNames []string
// TODO: EOF string
}
var HavingExpressionLexerLexerStaticData struct {
once sync.Once
serializedATN []int32
ChannelNames []string
ModeNames []string
LiteralNames []string
SymbolicNames []string
RuleNames []string
PredictionContextCache *antlr.PredictionContextCache
atn *antlr.ATN
decisionToDFA []*antlr.DFA
}
func havingexpressionlexerLexerInit() {
staticData := &HavingExpressionLexerLexerStaticData
staticData.ChannelNames = []string{
"DEFAULT_TOKEN_CHANNEL", "HIDDEN",
}
staticData.ModeNames = []string{
"DEFAULT_MODE",
}
staticData.LiteralNames = []string{
"", "'('", "')'", "','", "", "'!='", "'<>'", "'<'", "'<='", "'>'", "'>='",
"'+'", "'-'", "'*'", "'/'", "'%'",
}
staticData.SymbolicNames = []string{
"", "LPAREN", "RPAREN", "COMMA", "EQUALS", "NOT_EQUALS", "NEQ", "LT",
"LE", "GT", "GE", "PLUS", "MINUS", "STAR", "SLASH", "PERCENT", "NOT",
"AND", "OR", "BOOL", "NUMBER", "IDENTIFIER", "STRING", "WS",
}
staticData.RuleNames = []string{
"LPAREN", "RPAREN", "COMMA", "EQUALS", "NOT_EQUALS", "NEQ", "LT", "LE",
"GT", "GE", "PLUS", "MINUS", "STAR", "SLASH", "PERCENT", "NOT", "AND",
"OR", "BOOL", "SIGN", "NUMBER", "IDENTIFIER", "STRING", "WS", "DIGIT",
}
staticData.PredictionContextCache = antlr.NewPredictionContextCache()
staticData.serializedATN = []int32{
4, 0, 23, 209, 6, -1, 2, 0, 7, 0, 2, 1, 7, 1, 2, 2, 7, 2, 2, 3, 7, 3, 2,
4, 7, 4, 2, 5, 7, 5, 2, 6, 7, 6, 2, 7, 7, 7, 2, 8, 7, 8, 2, 9, 7, 9, 2,
10, 7, 10, 2, 11, 7, 11, 2, 12, 7, 12, 2, 13, 7, 13, 2, 14, 7, 14, 2, 15,
7, 15, 2, 16, 7, 16, 2, 17, 7, 17, 2, 18, 7, 18, 2, 19, 7, 19, 2, 20, 7,
20, 2, 21, 7, 21, 2, 22, 7, 22, 2, 23, 7, 23, 2, 24, 7, 24, 1, 0, 1, 0,
1, 1, 1, 1, 1, 2, 1, 2, 1, 3, 1, 3, 1, 3, 3, 3, 61, 8, 3, 1, 4, 1, 4, 1,
4, 1, 5, 1, 5, 1, 5, 1, 6, 1, 6, 1, 7, 1, 7, 1, 7, 1, 8, 1, 8, 1, 9, 1,
9, 1, 9, 1, 10, 1, 10, 1, 11, 1, 11, 1, 12, 1, 12, 1, 13, 1, 13, 1, 14,
1, 14, 1, 15, 1, 15, 1, 15, 1, 15, 1, 16, 1, 16, 1, 16, 1, 16, 1, 17, 1,
17, 1, 17, 1, 18, 1, 18, 1, 18, 1, 18, 1, 18, 1, 18, 1, 18, 1, 18, 1, 18,
3, 18, 109, 8, 18, 1, 19, 1, 19, 1, 20, 3, 20, 114, 8, 20, 1, 20, 4, 20,
117, 8, 20, 11, 20, 12, 20, 118, 1, 20, 1, 20, 5, 20, 123, 8, 20, 10, 20,
12, 20, 126, 9, 20, 3, 20, 128, 8, 20, 1, 20, 1, 20, 3, 20, 132, 8, 20,
1, 20, 4, 20, 135, 8, 20, 11, 20, 12, 20, 136, 3, 20, 139, 8, 20, 1, 20,
3, 20, 142, 8, 20, 1, 20, 1, 20, 4, 20, 146, 8, 20, 11, 20, 12, 20, 147,
1, 20, 1, 20, 3, 20, 152, 8, 20, 1, 20, 4, 20, 155, 8, 20, 11, 20, 12,
20, 156, 3, 20, 159, 8, 20, 3, 20, 161, 8, 20, 1, 21, 1, 21, 5, 21, 165,
8, 21, 10, 21, 12, 21, 168, 9, 21, 1, 21, 1, 21, 1, 21, 5, 21, 173, 8,
21, 10, 21, 12, 21, 176, 9, 21, 5, 21, 178, 8, 21, 10, 21, 12, 21, 181,
9, 21, 1, 22, 1, 22, 5, 22, 185, 8, 22, 10, 22, 12, 22, 188, 9, 22, 1,
22, 1, 22, 1, 22, 5, 22, 193, 8, 22, 10, 22, 12, 22, 196, 9, 22, 1, 22,
3, 22, 199, 8, 22, 1, 23, 4, 23, 202, 8, 23, 11, 23, 12, 23, 203, 1, 23,
1, 23, 1, 24, 1, 24, 0, 0, 25, 1, 1, 3, 2, 5, 3, 7, 4, 9, 5, 11, 6, 13,
7, 15, 8, 17, 9, 19, 10, 21, 11, 23, 12, 25, 13, 27, 14, 29, 15, 31, 16,
33, 17, 35, 18, 37, 19, 39, 0, 41, 20, 43, 21, 45, 22, 47, 23, 49, 0, 1,
0, 18, 2, 0, 78, 78, 110, 110, 2, 0, 79, 79, 111, 111, 2, 0, 84, 84, 116,
116, 2, 0, 65, 65, 97, 97, 2, 0, 68, 68, 100, 100, 2, 0, 82, 82, 114, 114,
2, 0, 85, 85, 117, 117, 2, 0, 69, 69, 101, 101, 2, 0, 70, 70, 102, 102,
2, 0, 76, 76, 108, 108, 2, 0, 83, 83, 115, 115, 2, 0, 43, 43, 45, 45, 3,
0, 65, 90, 95, 95, 97, 122, 4, 0, 48, 57, 65, 90, 95, 95, 97, 122, 1, 0,
39, 39, 1, 0, 34, 34, 3, 0, 9, 10, 13, 13, 32, 32, 1, 0, 48, 57, 228, 0,
1, 1, 0, 0, 0, 0, 3, 1, 0, 0, 0, 0, 5, 1, 0, 0, 0, 0, 7, 1, 0, 0, 0, 0,
9, 1, 0, 0, 0, 0, 11, 1, 0, 0, 0, 0, 13, 1, 0, 0, 0, 0, 15, 1, 0, 0, 0,
0, 17, 1, 0, 0, 0, 0, 19, 1, 0, 0, 0, 0, 21, 1, 0, 0, 0, 0, 23, 1, 0, 0,
0, 0, 25, 1, 0, 0, 0, 0, 27, 1, 0, 0, 0, 0, 29, 1, 0, 0, 0, 0, 31, 1, 0,
0, 0, 0, 33, 1, 0, 0, 0, 0, 35, 1, 0, 0, 0, 0, 37, 1, 0, 0, 0, 0, 41, 1,
0, 0, 0, 0, 43, 1, 0, 0, 0, 0, 45, 1, 0, 0, 0, 0, 47, 1, 0, 0, 0, 1, 51,
1, 0, 0, 0, 3, 53, 1, 0, 0, 0, 5, 55, 1, 0, 0, 0, 7, 60, 1, 0, 0, 0, 9,
62, 1, 0, 0, 0, 11, 65, 1, 0, 0, 0, 13, 68, 1, 0, 0, 0, 15, 70, 1, 0, 0,
0, 17, 73, 1, 0, 0, 0, 19, 75, 1, 0, 0, 0, 21, 78, 1, 0, 0, 0, 23, 80,
1, 0, 0, 0, 25, 82, 1, 0, 0, 0, 27, 84, 1, 0, 0, 0, 29, 86, 1, 0, 0, 0,
31, 88, 1, 0, 0, 0, 33, 92, 1, 0, 0, 0, 35, 96, 1, 0, 0, 0, 37, 108, 1,
0, 0, 0, 39, 110, 1, 0, 0, 0, 41, 160, 1, 0, 0, 0, 43, 162, 1, 0, 0, 0,
45, 198, 1, 0, 0, 0, 47, 201, 1, 0, 0, 0, 49, 207, 1, 0, 0, 0, 51, 52,
5, 40, 0, 0, 52, 2, 1, 0, 0, 0, 53, 54, 5, 41, 0, 0, 54, 4, 1, 0, 0, 0,
55, 56, 5, 44, 0, 0, 56, 6, 1, 0, 0, 0, 57, 61, 5, 61, 0, 0, 58, 59, 5,
61, 0, 0, 59, 61, 5, 61, 0, 0, 60, 57, 1, 0, 0, 0, 60, 58, 1, 0, 0, 0,
61, 8, 1, 0, 0, 0, 62, 63, 5, 33, 0, 0, 63, 64, 5, 61, 0, 0, 64, 10, 1,
0, 0, 0, 65, 66, 5, 60, 0, 0, 66, 67, 5, 62, 0, 0, 67, 12, 1, 0, 0, 0,
68, 69, 5, 60, 0, 0, 69, 14, 1, 0, 0, 0, 70, 71, 5, 60, 0, 0, 71, 72, 5,
61, 0, 0, 72, 16, 1, 0, 0, 0, 73, 74, 5, 62, 0, 0, 74, 18, 1, 0, 0, 0,
75, 76, 5, 62, 0, 0, 76, 77, 5, 61, 0, 0, 77, 20, 1, 0, 0, 0, 78, 79, 5,
43, 0, 0, 79, 22, 1, 0, 0, 0, 80, 81, 5, 45, 0, 0, 81, 24, 1, 0, 0, 0,
82, 83, 5, 42, 0, 0, 83, 26, 1, 0, 0, 0, 84, 85, 5, 47, 0, 0, 85, 28, 1,
0, 0, 0, 86, 87, 5, 37, 0, 0, 87, 30, 1, 0, 0, 0, 88, 89, 7, 0, 0, 0, 89,
90, 7, 1, 0, 0, 90, 91, 7, 2, 0, 0, 91, 32, 1, 0, 0, 0, 92, 93, 7, 3, 0,
0, 93, 94, 7, 0, 0, 0, 94, 95, 7, 4, 0, 0, 95, 34, 1, 0, 0, 0, 96, 97,
7, 1, 0, 0, 97, 98, 7, 5, 0, 0, 98, 36, 1, 0, 0, 0, 99, 100, 7, 2, 0, 0,
100, 101, 7, 5, 0, 0, 101, 102, 7, 6, 0, 0, 102, 109, 7, 7, 0, 0, 103,
104, 7, 8, 0, 0, 104, 105, 7, 3, 0, 0, 105, 106, 7, 9, 0, 0, 106, 107,
7, 10, 0, 0, 107, 109, 7, 7, 0, 0, 108, 99, 1, 0, 0, 0, 108, 103, 1, 0,
0, 0, 109, 38, 1, 0, 0, 0, 110, 111, 7, 11, 0, 0, 111, 40, 1, 0, 0, 0,
112, 114, 3, 39, 19, 0, 113, 112, 1, 0, 0, 0, 113, 114, 1, 0, 0, 0, 114,
116, 1, 0, 0, 0, 115, 117, 3, 49, 24, 0, 116, 115, 1, 0, 0, 0, 117, 118,
1, 0, 0, 0, 118, 116, 1, 0, 0, 0, 118, 119, 1, 0, 0, 0, 119, 127, 1, 0,
0, 0, 120, 124, 5, 46, 0, 0, 121, 123, 3, 49, 24, 0, 122, 121, 1, 0, 0,
0, 123, 126, 1, 0, 0, 0, 124, 122, 1, 0, 0, 0, 124, 125, 1, 0, 0, 0, 125,
128, 1, 0, 0, 0, 126, 124, 1, 0, 0, 0, 127, 120, 1, 0, 0, 0, 127, 128,
1, 0, 0, 0, 128, 138, 1, 0, 0, 0, 129, 131, 7, 7, 0, 0, 130, 132, 3, 39,
19, 0, 131, 130, 1, 0, 0, 0, 131, 132, 1, 0, 0, 0, 132, 134, 1, 0, 0, 0,
133, 135, 3, 49, 24, 0, 134, 133, 1, 0, 0, 0, 135, 136, 1, 0, 0, 0, 136,
134, 1, 0, 0, 0, 136, 137, 1, 0, 0, 0, 137, 139, 1, 0, 0, 0, 138, 129,
1, 0, 0, 0, 138, 139, 1, 0, 0, 0, 139, 161, 1, 0, 0, 0, 140, 142, 3, 39,
19, 0, 141, 140, 1, 0, 0, 0, 141, 142, 1, 0, 0, 0, 142, 143, 1, 0, 0, 0,
143, 145, 5, 46, 0, 0, 144, 146, 3, 49, 24, 0, 145, 144, 1, 0, 0, 0, 146,
147, 1, 0, 0, 0, 147, 145, 1, 0, 0, 0, 147, 148, 1, 0, 0, 0, 148, 158,
1, 0, 0, 0, 149, 151, 7, 7, 0, 0, 150, 152, 3, 39, 19, 0, 151, 150, 1,
0, 0, 0, 151, 152, 1, 0, 0, 0, 152, 154, 1, 0, 0, 0, 153, 155, 3, 49, 24,
0, 154, 153, 1, 0, 0, 0, 155, 156, 1, 0, 0, 0, 156, 154, 1, 0, 0, 0, 156,
157, 1, 0, 0, 0, 157, 159, 1, 0, 0, 0, 158, 149, 1, 0, 0, 0, 158, 159,
1, 0, 0, 0, 159, 161, 1, 0, 0, 0, 160, 113, 1, 0, 0, 0, 160, 141, 1, 0,
0, 0, 161, 42, 1, 0, 0, 0, 162, 166, 7, 12, 0, 0, 163, 165, 7, 13, 0, 0,
164, 163, 1, 0, 0, 0, 165, 168, 1, 0, 0, 0, 166, 164, 1, 0, 0, 0, 166,
167, 1, 0, 0, 0, 167, 179, 1, 0, 0, 0, 168, 166, 1, 0, 0, 0, 169, 170,
5, 46, 0, 0, 170, 174, 7, 12, 0, 0, 171, 173, 7, 13, 0, 0, 172, 171, 1,
0, 0, 0, 173, 176, 1, 0, 0, 0, 174, 172, 1, 0, 0, 0, 174, 175, 1, 0, 0,
0, 175, 178, 1, 0, 0, 0, 176, 174, 1, 0, 0, 0, 177, 169, 1, 0, 0, 0, 178,
181, 1, 0, 0, 0, 179, 177, 1, 0, 0, 0, 179, 180, 1, 0, 0, 0, 180, 44, 1,
0, 0, 0, 181, 179, 1, 0, 0, 0, 182, 186, 5, 39, 0, 0, 183, 185, 8, 14,
0, 0, 184, 183, 1, 0, 0, 0, 185, 188, 1, 0, 0, 0, 186, 184, 1, 0, 0, 0,
186, 187, 1, 0, 0, 0, 187, 189, 1, 0, 0, 0, 188, 186, 1, 0, 0, 0, 189,
199, 5, 39, 0, 0, 190, 194, 5, 34, 0, 0, 191, 193, 8, 15, 0, 0, 192, 191,
1, 0, 0, 0, 193, 196, 1, 0, 0, 0, 194, 192, 1, 0, 0, 0, 194, 195, 1, 0,
0, 0, 195, 197, 1, 0, 0, 0, 196, 194, 1, 0, 0, 0, 197, 199, 5, 34, 0, 0,
198, 182, 1, 0, 0, 0, 198, 190, 1, 0, 0, 0, 199, 46, 1, 0, 0, 0, 200, 202,
7, 16, 0, 0, 201, 200, 1, 0, 0, 0, 202, 203, 1, 0, 0, 0, 203, 201, 1, 0,
0, 0, 203, 204, 1, 0, 0, 0, 204, 205, 1, 0, 0, 0, 205, 206, 6, 23, 0, 0,
206, 48, 1, 0, 0, 0, 207, 208, 7, 17, 0, 0, 208, 50, 1, 0, 0, 0, 23, 0,
60, 108, 113, 118, 124, 127, 131, 136, 138, 141, 147, 151, 156, 158, 160,
166, 174, 179, 186, 194, 198, 203, 1, 6, 0, 0,
}
deserializer := antlr.NewATNDeserializer(nil)
staticData.atn = deserializer.Deserialize(staticData.serializedATN)
atn := staticData.atn
staticData.decisionToDFA = make([]*antlr.DFA, len(atn.DecisionToState))
decisionToDFA := staticData.decisionToDFA
for index, state := range atn.DecisionToState {
decisionToDFA[index] = antlr.NewDFA(state, index)
}
}
// HavingExpressionLexerInit initializes any static state used to implement HavingExpressionLexer. By default the
// static state used to implement the lexer is lazily initialized during the first call to
// NewHavingExpressionLexer(). You can call this function if you wish to initialize the static state ahead
// of time.
func HavingExpressionLexerInit() {
staticData := &HavingExpressionLexerLexerStaticData
staticData.once.Do(havingexpressionlexerLexerInit)
}
// NewHavingExpressionLexer produces a new lexer instance for the optional input antlr.CharStream.
func NewHavingExpressionLexer(input antlr.CharStream) *HavingExpressionLexer {
HavingExpressionLexerInit()
l := new(HavingExpressionLexer)
l.BaseLexer = antlr.NewBaseLexer(input)
staticData := &HavingExpressionLexerLexerStaticData
l.Interpreter = antlr.NewLexerATNSimulator(l, staticData.atn, staticData.decisionToDFA, staticData.PredictionContextCache)
l.channelNames = staticData.ChannelNames
l.modeNames = staticData.ModeNames
l.RuleNames = staticData.RuleNames
l.LiteralNames = staticData.LiteralNames
l.SymbolicNames = staticData.SymbolicNames
l.GrammarFileName = "HavingExpression.g4"
// TODO: l.EOF = antlr.TokenEOF
return l
}
// HavingExpressionLexer tokens.
const (
HavingExpressionLexerLPAREN = 1
HavingExpressionLexerRPAREN = 2
HavingExpressionLexerCOMMA = 3
HavingExpressionLexerEQUALS = 4
HavingExpressionLexerNOT_EQUALS = 5
HavingExpressionLexerNEQ = 6
HavingExpressionLexerLT = 7
HavingExpressionLexerLE = 8
HavingExpressionLexerGT = 9
HavingExpressionLexerGE = 10
HavingExpressionLexerPLUS = 11
HavingExpressionLexerMINUS = 12
HavingExpressionLexerSTAR = 13
HavingExpressionLexerSLASH = 14
HavingExpressionLexerPERCENT = 15
HavingExpressionLexerNOT = 16
HavingExpressionLexerAND = 17
HavingExpressionLexerOR = 18
HavingExpressionLexerBOOL = 19
HavingExpressionLexerNUMBER = 20
HavingExpressionLexerIDENTIFIER = 21
HavingExpressionLexerSTRING = 22
HavingExpressionLexerWS = 23
)


@@ -0,0 +1,106 @@
// Code generated from grammar/HavingExpression.g4 by ANTLR 4.13.2. DO NOT EDIT.
package parser // HavingExpression
import "github.com/antlr4-go/antlr/v4"
// HavingExpressionListener is a complete listener for a parse tree produced by HavingExpressionParser.
type HavingExpressionListener interface {
antlr.ParseTreeListener
// EnterQuery is called when entering the query production.
EnterQuery(c *QueryContext)
// EnterExpression is called when entering the expression production.
EnterExpression(c *ExpressionContext)
// EnterOrExpression is called when entering the orExpression production.
EnterOrExpression(c *OrExpressionContext)
// EnterAndExpression is called when entering the andExpression production.
EnterAndExpression(c *AndExpressionContext)
// EnterPrimary is called when entering the primary production.
EnterPrimary(c *PrimaryContext)
// EnterComparison is called when entering the comparison production.
EnterComparison(c *ComparisonContext)
// EnterCompOp is called when entering the compOp production.
EnterCompOp(c *CompOpContext)
// EnterOperand is called when entering the operand production.
EnterOperand(c *OperandContext)
// EnterTerm is called when entering the term production.
EnterTerm(c *TermContext)
// EnterFactor is called when entering the factor production.
EnterFactor(c *FactorContext)
// EnterAtom is called when entering the atom production.
EnterAtom(c *AtomContext)
// EnterFunctionCall is called when entering the functionCall production.
EnterFunctionCall(c *FunctionCallContext)
// EnterFunctionArgList is called when entering the functionArgList production.
EnterFunctionArgList(c *FunctionArgListContext)
// EnterFuncArg is called when entering the funcArg production.
EnterFuncArg(c *FuncArgContext)
// EnterFuncArgToken is called when entering the funcArgToken production.
EnterFuncArgToken(c *FuncArgTokenContext)
// EnterIdentifier is called when entering the identifier production.
EnterIdentifier(c *IdentifierContext)
// ExitQuery is called when exiting the query production.
ExitQuery(c *QueryContext)
// ExitExpression is called when exiting the expression production.
ExitExpression(c *ExpressionContext)
// ExitOrExpression is called when exiting the orExpression production.
ExitOrExpression(c *OrExpressionContext)
// ExitAndExpression is called when exiting the andExpression production.
ExitAndExpression(c *AndExpressionContext)
// ExitPrimary is called when exiting the primary production.
ExitPrimary(c *PrimaryContext)
// ExitComparison is called when exiting the comparison production.
ExitComparison(c *ComparisonContext)
// ExitCompOp is called when exiting the compOp production.
ExitCompOp(c *CompOpContext)
// ExitOperand is called when exiting the operand production.
ExitOperand(c *OperandContext)
// ExitTerm is called when exiting the term production.
ExitTerm(c *TermContext)
// ExitFactor is called when exiting the factor production.
ExitFactor(c *FactorContext)
// ExitAtom is called when exiting the atom production.
ExitAtom(c *AtomContext)
// ExitFunctionCall is called when exiting the functionCall production.
ExitFunctionCall(c *FunctionCallContext)
// ExitFunctionArgList is called when exiting the functionArgList production.
ExitFunctionArgList(c *FunctionArgListContext)
// ExitFuncArg is called when exiting the funcArg production.
ExitFuncArg(c *FuncArgContext)
// ExitFuncArgToken is called when exiting the funcArgToken production.
ExitFuncArgToken(c *FuncArgTokenContext)
// ExitIdentifier is called when exiting the identifier production.
ExitIdentifier(c *IdentifierContext)
}

File diff suppressed because it is too large


@@ -0,0 +1,58 @@
// Code generated from grammar/HavingExpression.g4 by ANTLR 4.13.2. DO NOT EDIT.
package parser // HavingExpression
import "github.com/antlr4-go/antlr/v4"
// A complete Visitor for a parse tree produced by HavingExpressionParser.
type HavingExpressionVisitor interface {
antlr.ParseTreeVisitor
// Visit a parse tree produced by HavingExpressionParser#query.
VisitQuery(ctx *QueryContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#expression.
VisitExpression(ctx *ExpressionContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#orExpression.
VisitOrExpression(ctx *OrExpressionContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#andExpression.
VisitAndExpression(ctx *AndExpressionContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#primary.
VisitPrimary(ctx *PrimaryContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#comparison.
VisitComparison(ctx *ComparisonContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#compOp.
VisitCompOp(ctx *CompOpContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#operand.
VisitOperand(ctx *OperandContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#term.
VisitTerm(ctx *TermContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#factor.
VisitFactor(ctx *FactorContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#atom.
VisitAtom(ctx *AtomContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#functionCall.
VisitFunctionCall(ctx *FunctionCallContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#functionArgList.
VisitFunctionArgList(ctx *FunctionArgListContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#funcArg.
VisitFuncArg(ctx *FuncArgContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#funcArgToken.
VisitFuncArgToken(ctx *FuncArgTokenContext) interface{}
// Visit a parse tree produced by HavingExpressionParser#identifier.
VisitIdentifier(ctx *IdentifierContext) interface{}
}


@@ -277,18 +277,6 @@ func DataTypeCollisionHandledFieldName(key *telemetrytypes.TelemetryFieldKey, va
tblFieldName, value = castString(tblFieldName), toStrings(v)
}
}
case telemetrytypes.FieldDataTypeArrayDynamic:
switch v := value.(type) {
case string:
tblFieldName = castString(tblFieldName)
case float64:
tblFieldName = accurateCastFloat(tblFieldName)
case bool:
tblFieldName = castBool(tblFieldName)
case []any:
// dynamic array elements will be default casted to string
tblFieldName, value = castString(tblFieldName), toStrings(v)
}
}
return tblFieldName, value
}
@@ -296,10 +284,6 @@ func DataTypeCollisionHandledFieldName(key *telemetrytypes.TelemetryFieldKey, va
func castFloat(col string) string { return fmt.Sprintf("toFloat64OrNull(%s)", col) }
func castFloatHack(col string) string { return fmt.Sprintf("toFloat64(%s)", col) }
func castString(col string) string { return fmt.Sprintf("toString(%s)", col) }
func castBool(col string) string { return fmt.Sprintf("accurateCastOrNull(%s, 'Bool')", col) }
func accurateCastFloat(col string) string {
return fmt.Sprintf("accurateCastOrNull(%s, 'Float64')", col)
}
func allFloats(in []any) bool {
for _, x := range in {


@@ -88,6 +88,8 @@ func (e *SyntaxErr) Error() string {
exp := ""
if len(e.Expected) > 0 {
exp = "expecting one of {" + strings.Join(e.Expected, ", ") + "}" + " but got " + e.TokenTxt
} else if e.Msg != "" {
exp = e.Msg
}
return fmt.Sprintf("line %d:%d %s", e.Line, e.Col, exp)
}


@@ -0,0 +1,385 @@
package querybuilder
import (
"sort"
"strings"
"github.com/SigNoz/signoz/pkg/errors"
grammar "github.com/SigNoz/signoz/pkg/parser/havingexpression/grammar"
"github.com/antlr4-go/antlr/v4"
"github.com/huandu/go-sqlbuilder"
)
// havingExpressionRewriteVisitor walks the parse tree of a HavingExpression in a single
// pass, simultaneously rewriting user-facing references to their SQL column names and
// collecting any references that could not be resolved.
//
// Each visit method reconstructs the expression string for its subtree:
// - Structural nodes (orExpression, andExpression, comparison, arithmetic) are
// reconstructed with canonical spacing.
// - andExpression joins ALL primaries with " AND ", which naturally normalises any
// implicit-AND adjacency (the old normalizeImplicitAND step).
// - IdentifierContext looks the name up in columnMap; if found the SQL column name is
// returned. If the name is already a valid SQL column (TO side of columnMap) it is
// passed through unchanged. Otherwise it is added to invalid.
// - FunctionCallContext looks the full call text (without whitespace, since WS is
// skipped) up in columnMap; if found the SQL column name is returned, otherwise the
// function name is added to invalid without recursing into its arguments.
// The grammar now accepts complex function arguments (nested calls, string predicates),
// so all aggregation expression forms can be looked up directly via ctx.GetText().
// - STRING atoms (string literals in comparison position) set hasStringLiteral so a
// friendly "aggregator results are numeric" error can be returned.
type havingExpressionRewriteVisitor struct {
columnMap map[string]string
validColumns map[string]bool // TO-side values; identifiers already in SQL form pass through
invalid []string
seen map[string]bool
hasStringLiteral bool
sb *sqlbuilder.SelectBuilder
}
func newHavingExpressionRewriteVisitor(columnMap map[string]string) *havingExpressionRewriteVisitor {
validColumns := make(map[string]bool, len(columnMap))
for _, col := range columnMap {
validColumns[col] = true
}
return &havingExpressionRewriteVisitor{
columnMap: columnMap,
validColumns: validColumns,
seen: make(map[string]bool),
sb: sqlbuilder.NewSelectBuilder(),
}
}
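
As a standalone illustration of the pass-through behaviour described above, here is a minimal sketch of the identifier-resolution rule (the `resolve` helper and the example names are hypothetical, not part of the actual visitor):

```go
package main

import "fmt"

// resolve mirrors the visitor's identifier lookup: FROM-side names map to
// their SQL columns, TO-side names pass through unchanged, and anything
// else is reported as an unresolved reference.
func resolve(name string, columnMap map[string]string, validColumns map[string]bool) (string, bool) {
	if col, ok := columnMap[name]; ok {
		return col, true
	}
	if validColumns[name] {
		return name, true // user wrote the SQL column name directly
	}
	return name, false // unresolved; the visitor would record this as invalid
}

func main() {
	columnMap := map[string]string{"total": "__result_0"}
	valid := map[string]bool{"__result_0": true}
	fmt.Println(resolve("total", columnMap, valid))      // __result_0 true
	fmt.Println(resolve("__result_0", columnMap, valid)) // __result_0 true
	fmt.Println(resolve("bogus", columnMap, valid))      // bogus false
}
```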
func (v *havingExpressionRewriteVisitor) visitQuery(ctx grammar.IQueryContext) string {
if ctx.Expression() == nil {
return ""
}
return v.visitExpression(ctx.Expression())
}
func (v *havingExpressionRewriteVisitor) visitExpression(ctx grammar.IExpressionContext) string {
return v.visitOrExpression(ctx.OrExpression())
}
func (v *havingExpressionRewriteVisitor) visitOrExpression(ctx grammar.IOrExpressionContext) string {
andExprs := ctx.AllAndExpression()
parts := make([]string, len(andExprs))
for i, ae := range andExprs {
parts[i] = v.visitAndExpression(ae)
}
if len(parts) == 1 {
return parts[0]
}
return v.sb.Or(parts...)
}
// visitAndExpression joins ALL primaries with " AND ".
// The grammar rule `primary ( AND primary | primary )*` allows adjacent primaries
// without an explicit AND (implicit AND). Joining all of them with " AND " here is
// equivalent to the old normalizeImplicitAND step.
func (v *havingExpressionRewriteVisitor) visitAndExpression(ctx grammar.IAndExpressionContext) string {
primaries := ctx.AllPrimary()
parts := make([]string, len(primaries))
for i, p := range primaries {
parts[i] = v.visitPrimary(p)
}
if len(parts) == 1 {
return parts[0]
}
return v.sb.And(parts...)
}
func (v *havingExpressionRewriteVisitor) visitPrimary(ctx grammar.IPrimaryContext) string {
if ctx.OrExpression() != nil {
inner := v.visitOrExpression(ctx.OrExpression())
if ctx.NOT() != nil {
return v.sb.Not(inner)
}
return v.sb.And(inner)
}
if ctx.Comparison() == nil {
return ""
}
inner := v.visitComparison(ctx.Comparison())
if ctx.NOT() != nil {
return v.sb.Not(inner)
}
return inner
}
func (v *havingExpressionRewriteVisitor) visitComparison(ctx grammar.IComparisonContext) string {
lhs := v.visitOperand(ctx.Operand(0))
op := ctx.CompOp().GetText()
rhs := v.visitOperand(ctx.Operand(1))
return lhs + " " + op + " " + rhs
}
func (v *havingExpressionRewriteVisitor) visitOperand(ctx grammar.IOperandContext) string {
if ctx.Operand() != nil {
left := v.visitOperand(ctx.Operand())
right := v.visitTerm(ctx.Term())
op := "+"
if ctx.MINUS() != nil {
op = "-"
}
return left + " " + op + " " + right
}
return v.visitTerm(ctx.Term())
}
func (v *havingExpressionRewriteVisitor) visitTerm(ctx grammar.ITermContext) string {
if ctx.Term() != nil {
left := v.visitTerm(ctx.Term())
right := v.visitFactor(ctx.Factor())
op := "*"
if ctx.SLASH() != nil {
op = "/"
} else if ctx.PERCENT() != nil {
op = "%"
}
return left + " " + op + " " + right
}
return v.visitFactor(ctx.Factor())
}
func (v *havingExpressionRewriteVisitor) visitFactor(ctx grammar.IFactorContext) string {
if ctx.Factor() != nil {
// Unary sign: (PLUS | MINUS) factor
sign := "+"
if ctx.MINUS() != nil {
sign = "-"
}
return sign + v.visitFactor(ctx.Factor())
}
if ctx.Operand() != nil {
return v.sb.And(v.visitOperand(ctx.Operand()))
}
if ctx.Atom() == nil {
return ""
}
return v.visitAtom(ctx.Atom())
}
func (v *havingExpressionRewriteVisitor) visitAtom(ctx grammar.IAtomContext) string {
if ctx.FunctionCall() != nil {
return v.visitFunctionCall(ctx.FunctionCall())
}
if ctx.Identifier() != nil {
return v.visitIdentifier(ctx.Identifier())
}
if ctx.STRING() != nil {
// String literals are never valid aggregation results; flag for a friendly error.
v.hasStringLiteral = true
return ctx.STRING().GetText()
}
return ctx.NUMBER().GetText()
}
// visitFunctionCall looks the full call text up in columnMap. WS tokens are skipped by
// the lexer, so ctx.GetText() returns the expression with all whitespace removed
// (e.g. "countIf(level='error')", "avg(sum(cpu_usage))", "count_distinct(a,b)").
// The column map stores both the original expression and a space-stripped version as
// keys, so the lookup is whitespace-insensitive regardless of how the user typed it.
// If not found, the function name is recorded as invalid.
func (v *havingExpressionRewriteVisitor) visitFunctionCall(ctx grammar.IFunctionCallContext) string {
fullText := ctx.GetText()
if col, ok := v.columnMap[fullText]; ok {
return col
}
funcName := ctx.IDENTIFIER().GetText()
if !v.seen[funcName] {
v.invalid = append(v.invalid, funcName)
v.seen[funcName] = true
}
return fullText
}
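
The whitespace-insensitive lookup works because the column map is seeded with two keys per aggregation, while `ctx.GetText()` is always whitespace-free. A minimal sketch of that pairing (the `register` helper is hypothetical; it condenses the logic from the buildTraceColumnMap/buildLogColumnMap diffs below):

```go
package main

import (
	"fmt"
	"strings"
)

// register stores an aggregation expression under both its original and its
// space-stripped form, so either spelling resolves to the same SQL column.
func register(columnMap map[string]string, expr, sqlColumn string) {
	columnMap[expr] = sqlColumn
	if normalized := strings.ReplaceAll(expr, " ", ""); normalized != expr {
		columnMap[normalized] = sqlColumn
	}
}

func main() {
	columnMap := map[string]string{}
	register(columnMap, "count_distinct(a, b)", "__result_0")

	// visitFunctionCall sees ctx.GetText() with all WS tokens skipped, so the
	// lookup hits the normalized key no matter how the user spaced the HAVING
	// reference.
	fmt.Println(columnMap["count_distinct(a,b)"]) // __result_0
}
```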
// visitIdentifier looks the identifier up in columnMap. If found, returns the SQL
// column name. If the name is already a valid SQL column (validColumns), it is passed
// through unchanged — this handles cases where the user writes the SQL column name
// directly (e.g. __result_0). Otherwise records it as invalid.
func (v *havingExpressionRewriteVisitor) visitIdentifier(ctx grammar.IIdentifierContext) string {
name := ctx.IDENTIFIER().GetText()
if col, ok := v.columnMap[name]; ok {
return col
}
if v.validColumns[name] {
return name
}
if !v.seen[name] {
v.invalid = append(v.invalid, name)
v.seen[name] = true
}
return name
}
// rewriteAndValidate is the single-pass implementation used by all RewriteFor* methods.
//
// Validation layers:
// 1. The visitor runs on the parse tree, rewriting and collecting invalid references.
// Unknown references (including unrecognised function calls) → error listing the valid references.
// The grammar now supports complex function arguments (nested calls, string predicates)
// so all aggregation expression forms are handled directly by the parser without any
// regex pre-substitution.
// 2. String literals in comparison-operand position → descriptive error
// ("aggregator results are numeric").
// 3. ANTLR syntax errors → error with messages referencing the original token names.
func (r *HavingExpressionRewriter) rewriteAndValidate(expression string) (string, error) {
original := strings.TrimSpace(expression)
// Parse the expression once.
input := antlr.NewInputStream(expression)
lexer := grammar.NewHavingExpressionLexer(input)
lexerErrListener := NewErrorListener()
lexer.RemoveErrorListeners()
lexer.AddErrorListener(lexerErrListener)
tokens := antlr.NewCommonTokenStream(lexer, antlr.TokenDefaultChannel)
p := grammar.NewHavingExpressionParser(tokens)
parserErrListener := NewErrorListener()
p.RemoveErrorListeners()
p.AddErrorListener(parserErrListener)
tree := p.Query()
// Layer 1: run the combined visitor and report any unresolved references.
// This runs before the syntax error check so that expressions with recoverable
// parse errors (e.g. sum(count())) still produce an actionable "invalid reference"
// message rather than a raw syntax error.
v := newHavingExpressionRewriteVisitor(r.columnMap)
result := v.visitQuery(tree)
// Layer 2: string literals in comparison-operand position (atom rule).
// The grammar accepts STRING tokens in atom so the parser can recover and continue,
// but the visitor flags them; aggregator results are always numeric.
// This is checked before invalid references so that "contains string literals" takes
// priority when a bare string literal is also an unresolvable operand.
if v.hasStringLiteral {
return "", errors.NewInvalidInputf(
errors.CodeInvalidInput,
"`Having` expression contains string literals",
).WithAdditional("Aggregator results are numeric")
}
if len(v.invalid) > 0 {
sort.Strings(v.invalid)
validKeys := make([]string, 0, len(r.columnMap))
for k := range r.columnMap {
validKeys = append(validKeys, k)
}
sort.Strings(validKeys)
return "", errors.NewInvalidInputf(
errors.CodeInvalidInput,
"Invalid references in `Having` expression: [%s]",
strings.Join(v.invalid, ", "),
).WithAdditional("Valid references are: [" + strings.Join(validKeys, ", ") + "]")
}
// Layer 3: ANTLR syntax errors. We parse the original expression, so error messages
// already reference the user's own token names; no re-parsing is needed.
allSyntaxErrors := append(lexerErrListener.SyntaxErrors, parserErrListener.SyntaxErrors...)
if len(allSyntaxErrors) > 0 {
msgs := make([]string, 0, len(allSyntaxErrors))
for _, se := range allSyntaxErrors {
if m := se.Error(); m != "" {
msgs = append(msgs, m)
}
}
detail := strings.Join(msgs, "; ")
if detail == "" {
detail = "check the expression syntax"
}
additional := []string{detail}
// For single-error expressions, try to produce an actionable suggestion.
if len(allSyntaxErrors) == 1 {
if s := havingSuggestion(allSyntaxErrors[0], original); s != "" {
additional = append(additional, "Suggestion: `"+s+"`")
}
}
return "", errors.NewInvalidInputf(
errors.CodeInvalidInput,
"Syntax error in `Having` expression",
).WithAdditional(additional...)
}
return result, nil
}
// havingSuggestion returns a corrected expression string to show as a suggestion when
// the error matches a well-known single-mistake pattern, or "" when no suggestion
// can be formed. Only call this when there is exactly one syntax error.
//
// Recognised patterns (all produce a minimal, valid completion):
// 1. Bare aggregation — comparison operator expected at EOF: count() → count() > 0
// 2. Missing right operand after comparison op at EOF: count() > → count() > 0
// 3. Unclosed parenthesis — only ) expected at EOF: (total > 100 → (total > 100)
// 4. Dangling AND/OR at end of expression: total > 100 AND → total > 100
// 5. Leading OR at position 0: OR total > 100 → total > 100
func havingSuggestion(se *SyntaxErr, original string) string {
trimmed := strings.TrimSpace(original)
upper := strings.ToUpper(trimmed)
if se.TokenTxt == "EOF" {
// Pattern 1: bare aggregation reference — comparison operator is expected.
// e.g. count() → count() > 0
if expectedContains(se, ">") {
return trimmed + " > 0"
}
// Pattern 2: comparison operator already written but right operand missing.
// e.g. count() > → count() > 0
if expectedContains(se, "number") && endsWithComparisonOp(trimmed) {
return trimmed + " 0"
}
// Pattern 3: unclosed parenthesis — only ) (and possibly ,) expected.
// e.g. (total > 100 AND count() < 500 → (total > 100 AND count() < 500)
if expectedContains(se, ")") && !expectedContains(se, "number") {
return trimmed + ")"
}
// Pattern 4: dangling AND or OR at end of expression.
// e.g. total > 100 AND → total > 100
if strings.HasSuffix(upper, " AND") {
return strings.TrimSpace(trimmed[:len(trimmed)-4])
}
if strings.HasSuffix(upper, " OR") {
return strings.TrimSpace(trimmed[:len(trimmed)-3])
}
return ""
}
// Pattern 5: leading OR at position 0.
// e.g. OR total > 100 → total > 100
if se.TokenTxt == "'OR'" && se.Col == 0 && strings.HasPrefix(upper, "OR ") {
return strings.TrimSpace(trimmed[3:])
}
return ""
}
// expectedContains reports whether label is present in se.Expected.
func expectedContains(se *SyntaxErr, label string) bool {
for _, e := range se.Expected {
if e == label {
return true
}
}
return false
}
// endsWithComparisonOp reports whether s ends with a comparison operator token
// (longer operators are checked first to avoid ">=" being matched by ">").
func endsWithComparisonOp(s string) bool {
for _, op := range []string{">=", "<=", "!=", "<>", "==", ">", "<", "="} {
if strings.HasSuffix(s, op) {
return true
}
}
return false
}
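
Suggestion pattern 4 above relies on plain suffix arithmetic: drop the trailing ` AND` (4 characters) or ` OR` (3 characters) and re-trim. A minimal self-contained sketch of just that step (the `trimDanglingBool` name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// trimDanglingBool mirrors suggestion pattern 4: when an expression ends with
// a dangling AND/OR, suggest the expression without it. The uppercase copy
// makes the suffix check case-insensitive while the original casing is kept
// in the returned suggestion.
func trimDanglingBool(expr string) string {
	trimmed := strings.TrimSpace(expr)
	upper := strings.ToUpper(trimmed)
	if strings.HasSuffix(upper, " AND") {
		return strings.TrimSpace(trimmed[:len(trimmed)-4])
	}
	if strings.HasSuffix(upper, " OR") {
		return strings.TrimSpace(trimmed[:len(trimmed)-3])
	}
	return trimmed
}

func main() {
	fmt.Println(trimDanglingBool("total > 100 AND")) // total > 100
	fmt.Println(trimDanglingBool("total > 100 or"))  // total > 100
}
```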


@@ -2,7 +2,6 @@ package querybuilder
import (
"fmt"
"regexp"
"strings"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
@@ -19,19 +18,31 @@ func NewHavingExpressionRewriter() *HavingExpressionRewriter {
}
}
func (r *HavingExpressionRewriter) RewriteForTraces(expression string, aggregations []qbtypes.TraceAggregation) string {
// RewriteForTraces rewrites and validates the HAVING expression for a traces query.
func (r *HavingExpressionRewriter) RewriteForTraces(expression string, aggregations []qbtypes.TraceAggregation) (string, error) {
if len(strings.TrimSpace(expression)) == 0 {
return "", nil
}
r.buildTraceColumnMap(aggregations)
return r.rewriteExpression(expression)
return r.rewriteAndValidate(expression)
}
func (r *HavingExpressionRewriter) RewriteForLogs(expression string, aggregations []qbtypes.LogAggregation) string {
// RewriteForLogs rewrites and validates the HAVING expression for a logs query.
func (r *HavingExpressionRewriter) RewriteForLogs(expression string, aggregations []qbtypes.LogAggregation) (string, error) {
if len(strings.TrimSpace(expression)) == 0 {
return "", nil
}
r.buildLogColumnMap(aggregations)
return r.rewriteExpression(expression)
return r.rewriteAndValidate(expression)
}
func (r *HavingExpressionRewriter) RewriteForMetrics(expression string, aggregations []qbtypes.MetricAggregation) string {
// RewriteForMetrics rewrites and validates the HAVING expression for a metrics query.
func (r *HavingExpressionRewriter) RewriteForMetrics(expression string, aggregations []qbtypes.MetricAggregation) (string, error) {
if len(strings.TrimSpace(expression)) == 0 {
return "", nil
}
r.buildMetricColumnMap(aggregations)
return r.rewriteExpression(expression)
return r.rewriteAndValidate(expression)
}
func (r *HavingExpressionRewriter) buildTraceColumnMap(aggregations []qbtypes.TraceAggregation) {
@@ -45,6 +56,9 @@ func (r *HavingExpressionRewriter) buildTraceColumnMap(aggregations []qbtypes.Tr
}
r.columnMap[agg.Expression] = sqlColumn
if normalized := strings.ReplaceAll(agg.Expression, " ", ""); normalized != agg.Expression {
r.columnMap[normalized] = sqlColumn
}
r.columnMap[fmt.Sprintf("__result%d", idx)] = sqlColumn
@@ -65,6 +79,9 @@ func (r *HavingExpressionRewriter) buildLogColumnMap(aggregations []qbtypes.LogA
}
r.columnMap[agg.Expression] = sqlColumn
if normalized := strings.ReplaceAll(agg.Expression, " ", ""); normalized != agg.Expression {
r.columnMap[normalized] = sqlColumn
}
r.columnMap[fmt.Sprintf("__result%d", idx)] = sqlColumn
@@ -102,52 +119,3 @@ func (r *HavingExpressionRewriter) buildMetricColumnMap(aggregations []qbtypes.M
r.columnMap[fmt.Sprintf("__result%d", idx)] = sqlColumn
}
}
func (r *HavingExpressionRewriter) rewriteExpression(expression string) string {
quotedStrings := make(map[string]string)
quotePattern := regexp.MustCompile(`'[^']*'|"[^"]*"`)
quotedIdx := 0
expression = quotePattern.ReplaceAllStringFunc(expression, func(match string) string {
placeholder := fmt.Sprintf("__QUOTED_%d__", quotedIdx)
quotedStrings[placeholder] = match
quotedIdx++
return placeholder
})
type mapping struct {
from string
to string
}
mappings := make([]mapping, 0, len(r.columnMap))
for from, to := range r.columnMap {
mappings = append(mappings, mapping{from: from, to: to})
}
for i := 0; i < len(mappings); i++ {
for j := i + 1; j < len(mappings); j++ {
if len(mappings[j].from) > len(mappings[i].from) {
mappings[i], mappings[j] = mappings[j], mappings[i]
}
}
}
for _, m := range mappings {
if strings.Contains(m.from, "(") {
// escape special regex characters in the function name
escapedFrom := regexp.QuoteMeta(m.from)
pattern := regexp.MustCompile(`\b` + escapedFrom)
expression = pattern.ReplaceAllString(expression, m.to)
} else {
pattern := regexp.MustCompile(`\b` + regexp.QuoteMeta(m.from) + `\b`)
expression = pattern.ReplaceAllString(expression, m.to)
}
}
for placeholder, original := range quotedStrings {
expression = strings.Replace(expression, placeholder, original, 1)
}
return expression
}


@@ -0,0 +1,916 @@
package querybuilder
import (
"testing"
"github.com/SigNoz/signoz/pkg/errors"
"github.com/SigNoz/signoz/pkg/types/metrictypes"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func toTraceAggregations(logs []qbtypes.LogAggregation) []qbtypes.TraceAggregation {
out := make([]qbtypes.TraceAggregation, len(logs))
for i, l := range logs {
out[i] = qbtypes.TraceAggregation{Expression: l.Expression, Alias: l.Alias}
}
return out
}
type logsAndTracesTestCase struct {
name string
expression string
aggregations []qbtypes.LogAggregation
wantExpression string
wantErr bool
wantErrMsg string
wantAdditional []string
}
func runLogsAndTracesTests(t *testing.T, tests []logsAndTracesTestCase) {
t.Helper()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
r := NewHavingExpressionRewriter()
traceAggs := toTraceAggregations(tt.aggregations)
gotLogs, errLogs := r.RewriteForLogs(tt.expression, tt.aggregations)
r2 := NewHavingExpressionRewriter()
gotTraces, errTraces := r2.RewriteForTraces(tt.expression, traceAggs)
if tt.wantErr {
require.Error(t, errLogs)
assert.ErrorContains(t, errLogs, tt.wantErrMsg)
_, _, _, _, _, additionalLogs := errors.Unwrapb(errLogs)
assert.Equal(t, tt.wantAdditional, additionalLogs)
require.Error(t, errTraces)
assert.ErrorContains(t, errTraces, tt.wantErrMsg)
_, _, _, _, _, additionalTraces := errors.Unwrapb(errTraces)
assert.Equal(t, tt.wantAdditional, additionalTraces)
} else {
require.NoError(t, errLogs)
assert.Equal(t, tt.wantExpression, gotLogs)
require.NoError(t, errTraces)
assert.Equal(t, tt.wantExpression, gotTraces)
}
})
}
}
// TestRewriteForLogsAndTraces_ReferenceTypes covers the different ways an aggregation
// result can be referenced in a HAVING expression: by alias, by expression text, by
// __result shorthand, and by __resultN index.
func TestRewriteForLogsAndTraces_ReferenceTypes(t *testing.T) {
runLogsAndTracesTests(t, []logsAndTracesTestCase{
{
name: "alias reference",
expression: "total_logs > 1000",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total_logs"},
},
wantExpression: "__result_0 > 1000",
},
{
name: "expression reference",
expression: "sum(bytes) > 1024000",
aggregations: []qbtypes.LogAggregation{
{Expression: "sum(bytes)"},
},
wantExpression: "__result_0 > 1024000",
},
{
name: "__result reference for single aggregation",
expression: "__result > 500",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantExpression: "__result_0 > 500",
},
{
name: "__result0 indexed reference",
expression: "__result0 > 100 AND __result1 < 1000",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
{Expression: "sum(bytes)"},
},
wantExpression: "(__result_0 > 100 AND __result_1 < 1000)",
},
{
name: "__result_0 underscore indexed reference",
expression: "__result_0 > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantExpression: "__result_0 > 100",
},
{
name: "reserved keyword as alias",
expression: "sum > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "sum"},
},
wantExpression: "__result_0 > 100",
},
{
name: "comparison between two aggregation references",
expression: "error_count > warn_count AND errors = warnings",
aggregations: []qbtypes.LogAggregation{
{Expression: "sum(errors)", Alias: "error_count"},
{Expression: "sum(warnings)", Alias: "warn_count"},
{Expression: "sum(errors)", Alias: "errors"},
{Expression: "sum(warnings)", Alias: "warnings"},
},
wantExpression: "(__result_0 > __result_1 AND __result_2 = __result_3)",
},
{
name: "mixed alias and expression reference",
expression: "error_count > 10 AND count() < 1000",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
{Expression: "countIf(level='error')", Alias: "error_count"},
},
wantExpression: "(__result_1 > 10 AND __result_0 < 1000)",
},
})
}
// TestRewriteForLogsAndTraces_WhitespaceNormalization verifies that HAVING expression
// references are matched against aggregation expressions in a whitespace-insensitive way.
//
// The column map stores both the original expression and a fully space-stripped version
// as keys. The ANTLR visitor uses ctx.GetText() which also strips all whitespace (WS
// tokens are on a hidden channel). Together these ensure that any spacing difference
// between the aggregation definition and the HAVING reference is tolerated.
func TestRewriteForLogsAndTraces_WhitespaceNormalization(t *testing.T) {
runLogsAndTracesTests(t, []logsAndTracesTestCase{
{
// Aggregation has space after comma; HAVING reference omits it.
name: "space after comma in multi-arg function",
expression: "count_distinct(a,b) > 10",
aggregations: []qbtypes.LogAggregation{
{Expression: "count_distinct(a, b)"},
},
wantExpression: "__result_0 > 10",
},
{
// Aggregation has inconsistent spacing around operators; HAVING reference has different inconsistent spacing.
name: "spaces around operators in filter predicate",
expression: "sumIf(a= 'x' or b ='y') > 10",
aggregations: []qbtypes.LogAggregation{
{Expression: "sumIf(a ='x' or b= 'y')"},
},
wantExpression: "__result_0 > 10",
},
{
// Aggregation has extra spaces inside parens; HAVING reference has none.
name: "spaces in nested function call",
expression: "avg(sum(duration)) > 500",
aggregations: []qbtypes.LogAggregation{
{Expression: "avg(sum( duration ))"},
},
wantExpression: "__result_0 > 500",
},
{
// Aggregation has no spaces; HAVING reference adds spaces around args.
name: "having adds spaces where aggregation has none",
expression: "count_distinct( a, b ) > 0",
aggregations: []qbtypes.LogAggregation{
{Expression: "count_distinct(a,b)"},
},
wantExpression: "__result_0 > 0",
},
{
// Multi-arg countIf with a complex AND predicate; both sides use different spacing.
name: "countIf with spaced AND predicate",
expression: "countIf(status='error'AND level='critical') > 0",
aggregations: []qbtypes.LogAggregation{
{Expression: "countIf(status = 'error' AND level = 'critical')"},
},
wantExpression: "__result_0 > 0",
},
{
// Boolean literals are valid inside function call arguments.
name: "bool literal inside function arg",
expression: "countIf(active=true) > 0",
aggregations: []qbtypes.LogAggregation{
{Expression: "countIf(active = true)"},
},
wantExpression: "__result_0 > 0",
},
})
}
// TestRewriteForLogsAndTraces_BooleanOperators covers explicit AND/OR/NOT, implicit AND
// (adjacent comparisons), parenthesised groups, and associated error cases.
func TestRewriteForLogsAndTraces_BooleanOperators(t *testing.T) {
runLogsAndTracesTests(t, []logsAndTracesTestCase{
{
name: "implicit AND between two comparisons",
expression: "total > 100 count() < 500",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total"},
},
wantExpression: "(__result_0 > 100 AND __result_0 < 500)",
},
{
name: "complex boolean with parentheses",
expression: "(total > 100 AND avg_duration < 500) OR total > 10000",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total"},
{Expression: "avg(duration)", Alias: "avg_duration"},
},
wantExpression: "(((__result_0 > 100 AND __result_1 < 500)) OR __result_0 > 10000)",
},
{
name: "OR with three operands",
expression: "a > 1 OR b > 2 OR c > 3",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "a"},
{Expression: "count()", Alias: "b"},
{Expression: "count()", Alias: "c"},
},
wantExpression: "(__result_0 > 1 OR __result_1 > 2 OR __result_2 > 3)",
},
{
name: "nested parentheses with OR and AND",
expression: "(a > 10 OR b > 20) AND c > 5",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "a"},
{Expression: "count()", Alias: "b"},
{Expression: "count()", Alias: "c"},
},
wantExpression: "(((__result_0 > 10 OR __result_1 > 20)) AND __result_2 > 5)",
},
{
name: "NOT on grouped expression",
expression: "NOT (__result_0 > 100 AND __result_1 < 500)",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
{Expression: "sum(bytes)"},
},
wantExpression: "NOT (__result_0 > 100 AND __result_1 < 500)",
},
{
name: "NOT with single comparison",
expression: "NOT (total > 100)",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total"},
},
wantExpression: "NOT __result_0 > 100",
},
{
name: "NOT without parentheses on function call",
expression: "NOT count() > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantExpression: "NOT __result_0 > 100",
},
{
name: "NOT without parentheses on alias",
expression: "NOT total > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total"},
},
wantExpression: "NOT __result_0 > 100",
},
// Error cases
{
name: "double NOT without valid grouping",
expression: "NOT NOT (count() > 100)",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:4 expecting one of {'*', '+', '-', (, ), AND, IDENTIFIER, NOT, STRING, number} but got 'NOT'"},
},
{
name: "dangling AND at end",
expression: "total_logs > 100 AND",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total_logs"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:20 expecting one of {'*', '+', '-', (, ), AND, IDENTIFIER, NOT, STRING, number} but got EOF", "Suggestion: `total_logs > 100`"},
},
{
name: "dangling OR at start",
expression: "OR total_logs > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total_logs"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:0 expecting one of {'*', '+', '-', (, ), AND, IDENTIFIER, NOT, STRING, number} but got 'OR'", "Suggestion: `total_logs > 100`"},
},
{
name: "dangling OR at end",
expression: "total > 100 OR",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:14 expecting one of {'*', '+', '-', (, ), AND, IDENTIFIER, NOT, STRING, number} but got EOF", "Suggestion: `total > 100`"},
},
{
name: "consecutive AND operators",
expression: "total_logs > 100 AND AND count() < 500",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total_logs"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:21 expecting one of {'*', '+', '-', (, ), AND, IDENTIFIER, NOT, STRING, number} but got 'AND'"},
},
{
name: "AND followed immediately by OR",
expression: "total > 100 AND OR count() < 50",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:16 expecting one of {'*', '+', '-', (, ), AND, IDENTIFIER, NOT, STRING, number} but got 'OR'"},
},
})
}

// TestRewriteForLogsAndTraces_UnarySigns covers unary +/- applied to non-literal
// operands (function calls, identifiers, parenthesised groups).
//
// Negative numeric literals (e.g. -10) are handled by the lexer's NUMBER rule
// (SIGN? DIGIT+) and do not use the unary grammar rule. As a consequence,
// `count()-10 > 0` is always a syntax error: after `count()` the lexer produces
// NUMBER(-10) rather than a separate MINUS token, so the parser sees an unexpected
// number where a comparison operator is expected.
func TestRewriteForLogsAndTraces_UnarySigns(t *testing.T) {
runLogsAndTracesTests(t, []logsAndTracesTestCase{
{
name: "unary minus on function call",
expression: "-count() > 0",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantExpression: "-__result_0 > 0",
},
{
name: "unary plus on function call",
expression: "+count() > 0",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantExpression: "+__result_0 > 0",
},
{
name: "unary minus on identifier alias",
expression: "-total > 0",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total"},
},
wantExpression: "-__result_0 > 0",
},
{
name: "unary minus on parenthesised arithmetic",
expression: "-(sum_a + sum_b) > 0",
aggregations: []qbtypes.LogAggregation{
{Expression: "sum(a)", Alias: "sum_a"},
{Expression: "sum(b)", Alias: "sum_b"},
},
wantExpression: "-(__result_0 + __result_1) > 0",
},
// count()-10 is rejected: the lexer produces NUMBER(-10) after RPAREN,
// which is not a valid comparison operator.
{
name: "adjacent minus-literal without space is rejected",
expression: "count()-10 > 0",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:7 expecting one of {!=, '+', <, <=, <>, =, >, >=} but got '-10'"},
},
})
}

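The lexer behaviour described in the comment above can be illustrated with a toy tokenizer. This is a sketch only — the real lexer is ANTLR-generated — but the regex below mimics a `SIGN? DIGIT+` NUMBER rule and shows why `count()-10` never yields a MINUS token:

```go
package main

import (
	"fmt"
	"regexp"
)

// tokenize mimics a lexer whose NUMBER rule greedily consumes an optional
// leading sign (SIGN? DIGIT+). Illustrative sketch, not the actual grammar.
func tokenize(s string) []string {
	re := regexp.MustCompile(`[A-Za-z_]\w*|[-+]?\d+(?:\.\d+)?|[()]|[<>=!]+|[%*/+-]`)
	return re.FindAllString(s, -1)
}

func main() {
	// After ')' the lexer emits NUMBER(-10), not MINUS then NUMBER(10), so
	// the parser sees a number where a comparison operator is expected.
	fmt.Println(tokenize("count()-10 > 0")) // → [count ( ) -10 > 0]
	// With a space, '-' stands alone and parses as subtraction.
	fmt.Println(tokenize("count() - 10 > 0")) // → [count ( ) - 10 > 0]
}
```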
// TestRewriteForLogsAndTraces_Arithmetic covers arithmetic operators (+, -, *, /, %),
// all comparison operators, and numeric literal forms.
func TestRewriteForLogsAndTraces_Arithmetic(t *testing.T) {
runLogsAndTracesTests(t, []logsAndTracesTestCase{
{
name: "arithmetic on aggregations",
expression: "sum_a + sum_b > 1000",
aggregations: []qbtypes.LogAggregation{
{Expression: "sum(a)", Alias: "sum_a"},
{Expression: "sum(b)", Alias: "sum_b"},
},
wantExpression: "__result_0 + __result_1 > 1000",
},
{
name: "comparison operators != <> ==",
expression: "total != 0 AND count() <> 0 AND total == 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total"},
},
wantExpression: "(__result_0 != 0 AND __result_0 <> 0 AND __result_0 == 100)",
},
{
name: "comparison operators < <= > >=",
expression: "total < 100 AND total <= 500 AND total > 10 AND total >= 50",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total"},
},
wantExpression: "(__result_0 < 100 AND __result_0 <= 500 AND __result_0 > 10 AND __result_0 >= 50)",
},
{
name: "numeric literals: negative, float, scientific notation",
expression: "total > -10 AND total > 500.5 AND total > 1e6",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total"},
},
wantExpression: "(__result_0 > -10 AND __result_0 > 500.5 AND __result_0 > 1e6)",
},
{
name: "arithmetic: modulo, subtraction, division, multiplication",
expression: "cnt % 10 = 0 AND cnt - 10 > 0 AND errors / total > 0.05 AND cnt * 2 > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "cnt"},
{Expression: "sum(errors)", Alias: "errors"},
{Expression: "count()", Alias: "total"},
},
wantExpression: "(__result_0 % 10 = 0 AND __result_0 - 10 > 0 AND __result_1 / __result_2 > 0.05 AND __result_0 * 2 > 100)",
},
})
}

// TestRewriteForLogsAndTraces_QuotedStringKeys covers aggregation expressions that
// contain quoted string arguments (e.g. countIf(level='error')). These keys cannot
// be parsed by the ANTLR grammar directly and are pre-substituted before parsing.
// Both single-quoted and double-quoted variants are tested, including implicit AND.
func TestRewriteForLogsAndTraces_QuotedStringKeys(t *testing.T) {
runLogsAndTracesTests(t, []logsAndTracesTestCase{
{
// Implicit AND: two adjacent comparisons are joined with AND by the grammar.
// Both single-quoted and double-quoted strings in aggregation expressions are pre-substituted.
name: "quoted string in aggregation expression referenced directly in having",
expression: "countIf(level='error') > 0 countIf(level=\"info\") > 0",
aggregations: []qbtypes.LogAggregation{
{Expression: "countIf(level='error')"},
{Expression: `countIf(level="info")`},
},
wantExpression: "(__result_0 > 0 AND __result_1 > 0)",
},
})
}

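The pre-substitution step described above can be sketched in a few lines. This is assumed mechanics, not the actual implementation (which also normalizes whitespace differences); `preSubstitute` is a hypothetical helper:

```go
package main

import (
	"fmt"
	"strings"
)

// preSubstitute replaces each aggregation expression that appears verbatim in
// the having expression with its __result_N placeholder, so quoted string
// arguments never reach the ANTLR parser. Hypothetical sketch only.
func preSubstitute(having string, aggExprs []string) string {
	for i, expr := range aggExprs {
		having = strings.ReplaceAll(having, expr, fmt.Sprintf("__result_%d", i))
	}
	return having
}

func main() {
	fmt.Println(preSubstitute(
		`countIf(level='error') > 0 countIf(level="info") > 0`,
		[]string{`countIf(level='error')`, `countIf(level="info")`},
	))
	// → __result_0 > 0 __result_1 > 0
}
```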
// TestRewriteForLogsAndTraces_EdgeCases covers empty and whitespace-only expressions.
func TestRewriteForLogsAndTraces_EdgeCases(t *testing.T) {
runLogsAndTracesTests(t, []logsAndTracesTestCase{
{
name: "empty expression",
expression: "",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantExpression: "",
},
{
name: "whitespace only expression",
expression: " ",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantExpression: "",
},
})
}

// TestRewriteForLogsAndTraces_ErrorInvalidReferences covers errors produced when the
// expression contains identifiers or function calls that do not match any aggregation.
func TestRewriteForLogsAndTraces_ErrorInvalidReferences(t *testing.T) {
runLogsAndTracesTests(t, []logsAndTracesTestCase{
{
name: "unknown identifier",
expression: "unknown_alias > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total"},
},
wantErr: true,
wantErrMsg: "Invalid references in `Having` expression: [unknown_alias]",
wantAdditional: []string{"Valid references are: [__result, __result0, count(), total]"},
},
{
name: "expression not in column map",
expression: "sum(missing_field) > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Invalid references in `Having` expression: [sum]",
wantAdditional: []string{"Valid references are: [__result, __result0, count()]"},
},
{
name: "one valid one invalid reference",
expression: "total > 100 AND ghost > 50",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total"},
},
wantErr: true,
wantErrMsg: "Invalid references in `Having` expression: [ghost]",
wantAdditional: []string{"Valid references are: [__result, __result0, count(), total]"},
},
{
name: "__result ambiguous with multiple aggregations",
expression: "__result > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
{Expression: "sum(bytes)"},
},
wantErr: true,
wantErrMsg: "Invalid references in `Having` expression: [__result]",
wantAdditional: []string{"Valid references are: [__result0, __result1, count(), sum(bytes)]"},
},
{
name: "out-of-range __result_N index",
expression: "__result_9 > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Invalid references in `Having` expression: [__result_9]",
wantAdditional: []string{"Valid references are: [__result, __result0, count()]"},
},
{
name: "__result_1 out of range for single aggregation",
expression: "__result_1 > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Invalid references in `Having` expression: [__result_1]",
wantAdditional: []string{"Valid references are: [__result, __result0, count()]"},
},
{
name: "cascaded function calls",
expression: "sum(count()) > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Invalid references in `Having` expression: [sum]",
wantAdditional: []string{"Valid references are: [__result, __result0, count()]"},
},
{
name: "function call with multiple args not in column map",
expression: "sum(a, b) > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "sum(a)"},
},
wantErr: true,
wantErrMsg: "Invalid references in `Having` expression: [sum]",
wantAdditional: []string{"Valid references are: [__result, __result0, sum(a)]"},
},
{
name: "unquoted string value treated as unknown identifier",
expression: "sum(bytes) = xyz",
aggregations: []qbtypes.LogAggregation{
{Expression: "sum(bytes)"},
},
wantErr: true,
wantErrMsg: "Invalid references in `Having` expression: [xyz]",
wantAdditional: []string{"Valid references are: [__result, __result0, sum(bytes)]"},
},
})
}

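The validity rules exercised by these error cases can be summarized in a standalone sketch (not the actual rewriter — just the reference semantics the tests encode): a bare `__result` is valid only with exactly one aggregation, `__resultN`/`__result_N` must be in range, and anything else must match an alias.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// resolveReference maps a having-expression identifier to an aggregation
// index, mirroring the test semantics above. Illustrative sketch only.
func resolveReference(ident string, aliases []string) (int, bool) {
	if ident == "__result" {
		if len(aliases) == 1 {
			return 0, true
		}
		return 0, false // ambiguous with multiple aggregations
	}
	for _, prefix := range []string{"__result_", "__result"} {
		if !strings.HasPrefix(ident, prefix) {
			continue
		}
		if n, err := strconv.Atoi(strings.TrimPrefix(ident, prefix)); err == nil {
			return n, n >= 0 && n < len(aliases) // e.g. __result_9 is out of range
		}
	}
	for i, alias := range aliases {
		if alias == ident {
			return i, true
		}
	}
	return 0, false // unknown identifier
}

func main() {
	fmt.Println(resolveReference("total", []string{"total"}))      // 0 true
	fmt.Println(resolveReference("__result_1", []string{"total"})) // 1 false
}
```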
// TestRewriteForLogsAndTraces_ErrorSyntax covers expressions that produce syntax errors:
// malformed structure (bare operands, mismatched parentheses, missing operators) and
// invalid operand types (string literals, boolean literals).
func TestRewriteForLogsAndTraces_ErrorSyntax(t *testing.T) {
runLogsAndTracesTests(t, []logsAndTracesTestCase{
// Bare operands
{
name: "bare function call without comparison",
expression: "count()",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total_logs"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:7 expecting one of {!=, '+', <, <=, <>, =, >, >=} but got EOF", "Suggestion: `count() > 0`"},
},
{
name: "bare identifier without comparison",
expression: "total_logs",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total_logs"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:10 expecting one of {!=, '+', <, <=, <>, =, >, >=} but got EOF", "Suggestion: `total_logs > 0`"},
},
// Parenthesis mismatches
{
name: "unclosed parenthesis",
expression: "(total_logs > 100 AND count() < 500",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total_logs"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:35 expecting one of {), ,} but got EOF", "Suggestion: `(total_logs > 100 AND count() < 500)`"},
},
{
name: "unexpected closing parenthesis",
expression: "total_logs > 100)",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total_logs"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:16 extraneous input ')' expecting <EOF>"},
},
{
name: "only opening parenthesis",
expression: "(",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:1 expecting one of {'*', '+', '-', (, ), AND, IDENTIFIER, NOT, STRING, number} but got EOF"},
},
{
name: "only closing parenthesis",
expression: ")",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:0 expecting one of {'*', '+', '-', (, ), AND, IDENTIFIER, NOT, STRING, number} but got ')'"},
},
{
name: "empty parentheses",
expression: "()",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:1 expecting one of {'*', '+', '-', (, ), AND, IDENTIFIER, NOT, STRING, number} but got ')'"},
},
// Missing operands or operator
{
name: "missing left operand",
expression: "> 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:0 expecting one of {'*', '+', '-', (, ), AND, IDENTIFIER, NOT, STRING, number} but got '>'; line 1:5 expecting one of {!=, '+', <, <=, <>, =, >, >=} but got EOF"},
},
{
name: "missing right operand",
expression: "count() >",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:9 expecting one of {'*', '+', '-', (, ), IDENTIFIER, STRING, number} but got EOF", "Suggestion: `count() > 0`"},
},
{
name: "missing comparison operator",
expression: "count() 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:8 expecting one of {!=, '+', <, <=, <>, =, >, >=} but got '100'"},
},
// Invalid operand types
{
// BOOL is valid only inside function call args, never as a bare comparison operand.
name: "boolean literal as comparison value",
expression: "count() > true",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:10 expecting one of {'*', '+', '-', (, ), IDENTIFIER, STRING, number} but got 'true'"},
},
{
// false is equally invalid as a comparison operand.
name: "false boolean literal as comparison value",
expression: "count() = false",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()"},
},
wantErr: true,
wantErrMsg: "Syntax error in `Having` expression",
wantAdditional: []string{"line 1:10 expecting one of {'*', '+', '-', (, ), IDENTIFIER, STRING, number} but got 'false'"},
},
{
name: "single-quoted string literal as comparison value",
expression: "sum(bytes) = 'xyz'",
aggregations: []qbtypes.LogAggregation{
{Expression: "sum(bytes)"},
},
wantErr: true,
wantErrMsg: "`Having` expression contains string literals",
wantAdditional: []string{"Aggregator results are numeric"},
},
{
name: "double-quoted string literal as comparison value",
expression: `total > "threshold"`,
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total"},
},
wantErr: true,
wantErrMsg: "`Having` expression contains string literals",
wantAdditional: []string{"Aggregator results are numeric"},
},
})
}

func TestRewriteForMetrics(t *testing.T) {
tests := []struct {
name string
expression string
aggregations []qbtypes.MetricAggregation
wantExpression string
wantErr bool
wantErrMsg string
wantAdditional []string
}{
// --- Happy path: reference types (time/space aggregation, __result, bare metric) ---
{
name: "time aggregation reference",
expression: "sum(cpu_usage) > 80",
aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
TimeAggregation: metrictypes.TimeAggregationSum,
SpaceAggregation: metrictypes.SpaceAggregationUnspecified,
},
},
wantExpression: "value > 80",
},
{
name: "space aggregation reference",
expression: "avg(cpu_usage) > 50",
aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
SpaceAggregation: metrictypes.SpaceAggregationAvg,
TimeAggregation: metrictypes.TimeAggregationUnspecified,
},
},
wantExpression: "value > 50",
},
{
name: "__result reference",
expression: "__result > 90",
aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
TimeAggregation: metrictypes.TimeAggregationSum,
SpaceAggregation: metrictypes.SpaceAggregationUnspecified,
},
},
wantExpression: "value > 90",
},
{
name: "bare metric name when no aggregations set",
expression: "cpu_usage > 80",
aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
TimeAggregation: metrictypes.TimeAggregationUnspecified,
SpaceAggregation: metrictypes.SpaceAggregationUnspecified,
},
},
wantExpression: "value > 80",
},
{
name: "combined space and time aggregation",
expression: "avg(sum(cpu_usage)) > 50",
aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
TimeAggregation: metrictypes.TimeAggregationSum,
SpaceAggregation: metrictypes.SpaceAggregationAvg,
},
},
wantExpression: "value > 50",
},
// --- Happy path: comparison operators and arithmetic ---
{
name: "comparison operators and arithmetic",
expression: "sum(cpu_usage) < 100 AND sum(cpu_usage) * 2 > 50",
aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
TimeAggregation: metrictypes.TimeAggregationSum,
SpaceAggregation: metrictypes.SpaceAggregationUnspecified,
},
},
wantExpression: "(value < 100 AND value * 2 > 50)",
},
// --- Happy path: empty or bare operand ---
{
name: "empty expression",
expression: "",
aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
TimeAggregation: metrictypes.TimeAggregationSum,
SpaceAggregation: metrictypes.SpaceAggregationUnspecified,
},
},
wantExpression: "",
},
// --- Error: invalid or unknown metric reference ---
{
name: "unknown metric reference",
expression: "wrong_metric > 80",
aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
TimeAggregation: metrictypes.TimeAggregationSum,
SpaceAggregation: metrictypes.SpaceAggregationUnspecified,
},
},
wantErr: true,
wantErrMsg: "Invalid references in `Having` expression: [wrong_metric]",
wantAdditional: []string{"Valid references are: [__result, __result0, sum(cpu_usage)]"},
},
// --- Error: string literal (not allowed in HAVING) ---
{
name: "string literal rejected",
expression: "cpu_usage = 'high'",
aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
TimeAggregation: metrictypes.TimeAggregationSum,
SpaceAggregation: metrictypes.SpaceAggregationUnspecified,
},
},
wantErr: true,
wantErrMsg: "`Having` expression contains string literals",
wantAdditional: []string{"Aggregator results are numeric"},
},
// --- Error: bare operand (no comparison) ---
{
name: "bare operand without comparison",
expression: "cpu_usage",
aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
TimeAggregation: metrictypes.TimeAggregationSum,
SpaceAggregation: metrictypes.SpaceAggregationUnspecified,
},
},
wantErr: true,
wantErrMsg: "Invalid references in `Having` expression: [cpu_usage]",
wantAdditional: []string{"Valid references are: [__result, __result0, sum(cpu_usage)]"},
},
// --- Error: aggregation not in column map ---
{
name: "aggregation not in column map",
expression: "count(cpu_usage) > 10",
aggregations: []qbtypes.MetricAggregation{
{
MetricName: "cpu_usage",
TimeAggregation: metrictypes.TimeAggregationSum,
SpaceAggregation: metrictypes.SpaceAggregationUnspecified,
},
},
wantErr: true,
wantErrMsg: "Invalid references in `Having` expression: [count]",
wantAdditional: []string{"Valid references are: [__result, __result0, sum(cpu_usage)]"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
r := NewHavingExpressionRewriter()
got, err := r.RewriteForMetrics(tt.expression, tt.aggregations)
if tt.wantErr {
require.Error(t, err)
assert.ErrorContains(t, err, tt.wantErrMsg)
_, _, _, _, _, additional := errors.Unwrapb(err)
assert.Equal(t, tt.wantAdditional, additional)
} else {
require.NoError(t, err)
assert.Equal(t, tt.wantExpression, got)
}
})
}
}
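The reference forms accepted in the metrics cases above — `sum(m)` for time aggregation, `avg(m)` for space aggregation, `avg(sum(m))` when both are set, the bare metric name when neither is, plus `__result`/`__result0` — can be summarized in a small sketch (`validMetricRefs` is a hypothetical helper; the real rewriter derives these from `qbtypes.MetricAggregation`):

```go
package main

import "fmt"

// validMetricRefs lists the having-expression spellings that all rewrite to
// the single result column "value" for one metric aggregation. Sketch only.
func validMetricRefs(metric, timeAgg, spaceAgg string) []string {
	refs := []string{"__result", "__result0"}
	switch {
	case timeAgg != "" && spaceAgg != "":
		refs = append(refs, fmt.Sprintf("%s(%s(%s))", spaceAgg, timeAgg, metric))
	case timeAgg != "":
		refs = append(refs, fmt.Sprintf("%s(%s)", timeAgg, metric))
	case spaceAgg != "":
		refs = append(refs, fmt.Sprintf("%s(%s)", spaceAgg, metric))
	default:
		refs = append(refs, metric) // bare metric name when no aggregations set
	}
	return refs
}

func main() {
	fmt.Println(validMetricRefs("cpu_usage", "sum", "avg")) // [__result __result0 avg(sum(cpu_usage))]
}
```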


@@ -6,7 +6,7 @@ import (
 	"strings"

 	"github.com/SigNoz/signoz/pkg/errors"
-	grammar "github.com/SigNoz/signoz/pkg/parser/grammar"
+	grammar "github.com/SigNoz/signoz/pkg/parser/filterquery/grammar"
 	qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
 	"github.com/antlr4-go/antlr/v4"
 )


@@ -1,7 +1,7 @@
 package querybuilder

 import (
-	grammar "github.com/SigNoz/signoz/pkg/parser/grammar"
+	grammar "github.com/SigNoz/signoz/pkg/parser/filterquery/grammar"
 	"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
 	"github.com/antlr4-go/antlr/v4"
 )


@@ -9,7 +9,7 @@ import (
 	"strings"

 	"github.com/SigNoz/signoz/pkg/errors"
-	grammar "github.com/SigNoz/signoz/pkg/parser/grammar"
+	grammar "github.com/SigNoz/signoz/pkg/parser/filterquery/grammar"
 	qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
 	"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
 	"github.com/antlr4-go/antlr/v4"


@@ -7,7 +7,7 @@ import (
 	"strings"
 	"testing"

-	grammar "github.com/SigNoz/signoz/pkg/parser/grammar"
+	grammar "github.com/SigNoz/signoz/pkg/parser/filterquery/grammar"
 	qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
 	"github.com/SigNoz/signoz/pkg/types/telemetrytypes"
 	"github.com/antlr4-go/antlr/v4"


@@ -26,11 +26,14 @@ type SqliteConfig struct {
 	// Path is the path to the sqlite database.
 	Path string `mapstructure:"path"`
-	// Mode is the mode to use for the sqlite database.
+	// Mode is the journal mode for the sqlite database.
 	Mode string `mapstructure:"mode"`
 	// BusyTimeout is the timeout for the sqlite database to wait for a lock.
 	BusyTimeout time.Duration `mapstructure:"busy_timeout"`
+	// TransactionMode is the default transaction locking behavior for the sqlite database.
+	TransactionMode string `mapstructure:"transaction_mode"`
 }

 type ConnectionConfig struct {
@@ -49,9 +52,10 @@ func newConfig() factory.Config {
 			MaxOpenConns: 100,
 		},
 		Sqlite: SqliteConfig{
-			Path:        "/var/lib/signoz/signoz.db",
-			Mode:        "delete",
-			BusyTimeout: 10000 * time.Millisecond, // increasing the defaults from https://github.com/mattn/go-sqlite3/blob/master/sqlite3.go#L1098 because of transpilation from C to GO
+			Path:            "/var/lib/signoz/signoz.db",
+			Mode:            "delete",
+			BusyTimeout:     10000 * time.Millisecond, // increasing the defaults from https://github.com/mattn/go-sqlite3/blob/master/sqlite3.go#L1098 because of transpilation from C to GO
+			TransactionMode: "deferred",
 		},
 	}


@@ -0,0 +1,56 @@
package sqlstore
import (
"context"
"testing"
"time"
"github.com/SigNoz/signoz/pkg/config"
"github.com/SigNoz/signoz/pkg/config/envprovider"
"github.com/SigNoz/signoz/pkg/factory"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNewWithEnvProvider(t *testing.T) {
t.Setenv("SIGNOZ_SQLSTORE_PROVIDER", "sqlite")
t.Setenv("SIGNOZ_SQLSTORE_SQLITE_PATH", "/tmp/test.db")
t.Setenv("SIGNOZ_SQLSTORE_SQLITE_MODE", "wal")
t.Setenv("SIGNOZ_SQLSTORE_SQLITE_BUSY__TIMEOUT", "5s")
t.Setenv("SIGNOZ_SQLSTORE_SQLITE_TRANSACTION__MODE", "immediate")
t.Setenv("SIGNOZ_SQLSTORE_MAX__OPEN__CONNS", "50")
conf, err := config.New(
context.Background(),
config.ResolverConfig{
Uris: []string{"env:"},
ProviderFactories: []config.ProviderFactory{
envprovider.NewFactory(),
},
},
[]factory.ConfigFactory{
NewConfigFactory(),
},
)
require.NoError(t, err)
actual := &Config{}
err = conf.Unmarshal("sqlstore", actual)
require.NoError(t, err)
expected := &Config{
Provider: "sqlite",
Connection: ConnectionConfig{
MaxOpenConns: 50,
},
Sqlite: SqliteConfig{
Path: "/tmp/test.db",
Mode: "wal",
BusyTimeout: 5 * time.Second,
TransactionMode: "immediate",
},
}
assert.Equal(t, expected, actual)
assert.NoError(t, actual.Validate())
}
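The env variable names in this test follow a convention that can be sketched as follows. This is an assumption inferred from the names, not taken from the envprovider source: a single underscore separates nesting levels, while a double underscore encodes a literal underscore inside one key segment (`BUSY__TIMEOUT` → `busy_timeout`). `envKeyToPath` is a hypothetical helper:

```go
package main

import (
	"fmt"
	"strings"
)

// envKeyToPath converts SIGNOZ_SQLSTORE_SQLITE_BUSY__TIMEOUT style keys into
// a dotted config path. Assumed convention, illustrative only.
func envKeyToPath(key string) string {
	key = strings.TrimPrefix(key, "SIGNOZ_")
	key = strings.ToLower(key)
	key = strings.ReplaceAll(key, "__", "\x00") // protect literal underscores
	key = strings.ReplaceAll(key, "_", ".")     // single '_' separates levels
	return strings.ReplaceAll(key, "\x00", "_")
}

func main() {
	fmt.Println(envKeyToPath("SIGNOZ_SQLSTORE_SQLITE_BUSY__TIMEOUT"))
	// → sqlstore.sqlite.busy_timeout
}
```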


@@ -49,6 +49,7 @@ func New(ctx context.Context, providerSettings factory.ProviderSettings, config
 	connectionParams.Add("_pragma", fmt.Sprintf("busy_timeout(%d)", config.Sqlite.BusyTimeout.Milliseconds()))
 	connectionParams.Add("_pragma", fmt.Sprintf("journal_mode(%s)", config.Sqlite.Mode))
 	connectionParams.Add("_pragma", "foreign_keys(1)")
+	connectionParams.Set("_txlock", config.Sqlite.TransactionMode)
 	sqldb, err := sql.Open("sqlite", "file:"+config.Sqlite.Path+"?"+connectionParams.Encode())
 	if err != nil {
 		return nil, err

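With the new `_txlock` parameter, the resulting sqlite DSN takes roughly this shape. A sketch using the default config values; `buildDSN` is a hypothetical helper that mirrors the assembly in `New` above:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildDSN mirrors how New assembles the sqlite connection string from the
// default config values. Illustrative only.
func buildDSN() string {
	params := url.Values{}
	params.Add("_pragma", "busy_timeout(10000)")
	params.Add("_pragma", "journal_mode(delete)")
	params.Add("_pragma", "foreign_keys(1)")
	params.Set("_txlock", "deferred") // new: default transaction locking mode
	return "file:/var/lib/signoz/signoz.db?" + params.Encode()
}

func main() {
	// url.Values.Encode sorts keys and percent-encodes the parentheses.
	fmt.Println(buildDSN())
}
```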

@@ -3,7 +3,6 @@ package telemetrylogs
import (
"fmt"
"slices"
"strings"
schemamigrator "github.com/SigNoz/signoz-otel-collector/cmd/signozschemamigrator/schema_migrator"
"github.com/SigNoz/signoz/pkg/errors"
@@ -34,34 +33,182 @@ func NewJSONConditionBuilder(key *telemetrytypes.TelemetryFieldKey, valueType te
// BuildCondition builds the full WHERE condition for body_v2 JSON paths.
func (c *jsonConditionBuilder) buildJSONCondition(operator qbtypes.FilterOperator, value any, sb *sqlbuilder.SelectBuilder) (string, error) {
-	baseCond, err := c.emitPlannedCondition(operator, value, sb)
-	if err != nil {
-		return "", err
+	conditions := []string{}
+	for _, node := range c.key.JSONPlan {
+		condition, err := c.emitPlannedCondition(node, operator, value, sb)
+		if err != nil {
+			return "", err
+		}
+		conditions = append(conditions, condition)
 	}
+	baseCond := sb.Or(conditions...)
// path index
if operator.AddDefaultExistsFilter() {
pathIndex := fmt.Sprintf(`has(%s, '%s')`, schemamigrator.JSONPathsIndexExpr(LogsV2BodyV2Column), c.key.ArrayParentPaths()[0])
return sb.And(baseCond, pathIndex), nil
}
return baseCond, nil
}
func (c *jsonConditionBuilder) emitPlannedCondition(operator qbtypes.FilterOperator, value any, sb *sqlbuilder.SelectBuilder) (string, error) {
// emitPlannedCondition handles paths with array traversal.
func (c *jsonConditionBuilder) emitPlannedCondition(node *telemetrytypes.JSONAccessNode, operator qbtypes.FilterOperator, value any, sb *sqlbuilder.SelectBuilder) (string, error) {
// Build traversal + terminal recursively per-hop
conditions := []string{}
for _, node := range c.key.JSONPlan {
condition, err := c.recurseArrayHops(node, operator, value, sb)
compiled, err := c.recurseArrayHops(node, operator, value, sb)
if err != nil {
return "", err
}
return compiled, nil
}
// buildTerminalCondition creates the innermost condition.
func (c *jsonConditionBuilder) buildTerminalCondition(node *telemetrytypes.JSONAccessNode, operator qbtypes.FilterOperator, value any, sb *sqlbuilder.SelectBuilder) (string, error) {
if node.TerminalConfig.ElemType.IsArray {
conditions := []string{}
// if the value type is not an array
// TODO(piyush): Confirm the Query built for Array case and add testcases for it later
if !c.valueType.IsArray {
// if the operator is a string-search operator, build an additional string
// comparison condition alongside the strict match condition
if operator.IsStringSearchOperator() {
formattedValue := querybuilder.FormatValueForContains(value)
arrayCond, err := c.buildArrayMembershipCondition(node, operator, formattedValue, sb)
if err != nil {
return "", err
}
conditions = append(conditions, arrayCond)
}
// switch operator for array membership checks
switch operator {
case qbtypes.FilterOperatorContains:
operator = qbtypes.FilterOperatorEqual
case qbtypes.FilterOperatorNotContains:
operator = qbtypes.FilterOperatorNotEqual
}
}
arrayCond, err := c.buildArrayMembershipCondition(node, operator, value, sb)
if err != nil {
return "", err
}
conditions = append(conditions, condition)
conditions = append(conditions, arrayCond)
// or the conditions together
return sb.Or(conditions...), nil
}
return sb.Or(conditions...), nil
return c.buildPrimitiveTerminalCondition(node, operator, value, sb)
}
// buildPrimitiveTerminalCondition builds the condition if the terminal node is a primitive type
// it handles the data type collisions and utilizes indexes for the condition if available.
func (c *jsonConditionBuilder) buildPrimitiveTerminalCondition(node *telemetrytypes.JSONAccessNode, operator qbtypes.FilterOperator, value any, sb *sqlbuilder.SelectBuilder) (string, error) {
fieldPath := node.FieldPath()
conditions := []string{}
var formattedValue = value
if operator.IsStringSearchOperator() {
formattedValue = querybuilder.FormatValueForContains(value)
}
elemType := node.TerminalConfig.ElemType
fieldExpr := fmt.Sprintf("dynamicElement(%s, '%s')", fieldPath, elemType.StringValue())
fieldExpr, formattedValue = querybuilder.DataTypeCollisionHandledFieldName(node.TerminalConfig.Key, formattedValue, fieldExpr, operator)
// utilize indexes for the condition if available
indexed := slices.ContainsFunc(node.TerminalConfig.Key.Indexes, func(index telemetrytypes.JSONDataTypeIndex) bool {
return index.Type == elemType && index.ColumnExpression == fieldPath
})
if elemType.IndexSupported && indexed {
indexedExpr := assumeNotNull(fieldPath, elemType)
emptyValue := func() any {
switch elemType {
case telemetrytypes.String:
return ""
case telemetrytypes.Int64, telemetrytypes.Float64, telemetrytypes.Bool:
return 0
default:
return nil
}
}()
// switch the operator and value for exists and not exists
switch operator {
case qbtypes.FilterOperatorExists:
operator = qbtypes.FilterOperatorNotEqual
value = emptyValue
case qbtypes.FilterOperatorNotExists:
operator = qbtypes.FilterOperatorEqual
value = emptyValue
default:
// do nothing
}
indexedExpr, indexedComparisonValue := querybuilder.DataTypeCollisionHandledFieldName(node.TerminalConfig.Key, formattedValue, indexedExpr, operator)
cond, err := c.applyOperator(sb, indexedExpr, operator, indexedComparisonValue)
if err != nil {
return "", err
}
// if qb has a definitive value, we can skip adding a condition to
// check the existence of the path in the json column
if value != emptyValue {
return cond, nil
}
conditions = append(conditions, cond)
// Indexed paths are assumeNotNull, so indexed columns always carry a default value.
// Flip the operator to Exists and keep only the rows that actually have the value.
operator = qbtypes.FilterOperatorExists
}
cond, err := c.applyOperator(sb, fieldExpr, operator, formattedValue)
if err != nil {
return "", err
}
conditions = append(conditions, cond)
if len(conditions) > 1 {
return sb.And(conditions...), nil
}
return conditions[0], nil
}
// buildArrayMembershipCondition handles array membership checks.
func (c *jsonConditionBuilder) buildArrayMembershipCondition(node *telemetrytypes.JSONAccessNode, operator qbtypes.FilterOperator, value any, sb *sqlbuilder.SelectBuilder) (string, error) {
arrayPath := node.FieldPath()
localKeyCopy := *node.TerminalConfig.Key
// create typed array out of a dynamic array
filteredDynamicExpr := func() string {
// Change the field data type from Array(Dynamic) to the value's array type,
// since the dynamic array has been filtered down to elements of the value type.
localKeyCopy.FieldDataType = telemetrytypes.MappingJSONDataTypeToFieldDataType[telemetrytypes.ScalerTypeToArrayType[c.valueType]]
baseArrayDynamicExpr := fmt.Sprintf("dynamicElement(%s, 'Array(Dynamic)')", arrayPath)
return fmt.Sprintf("arrayMap(x->dynamicElement(x, '%s'), arrayFilter(x->(dynamicType(x) = '%s'), %s))",
c.valueType.StringValue(),
c.valueType.StringValue(),
baseArrayDynamicExpr)
}
typedArrayExpr := func() string {
return fmt.Sprintf("dynamicElement(%s, '%s')", arrayPath, node.TerminalConfig.ElemType.StringValue())
}
var arrayExpr string
if node.TerminalConfig.ElemType == telemetrytypes.ArrayDynamic {
arrayExpr = filteredDynamicExpr()
} else {
arrayExpr = typedArrayExpr()
}
key := "x"
fieldExpr, value := querybuilder.DataTypeCollisionHandledFieldName(&localKeyCopy, value, key, operator)
op, err := c.applyOperator(sb, fieldExpr, operator, value)
if err != nil {
return "", err
}
return fmt.Sprintf("arrayExists(%s -> %s, %s)", key, op, arrayExpr), nil
}
// recurseArrayHops recursively builds array traversal conditions.
func (c *jsonConditionBuilder) recurseArrayHops(current *telemetrytypes.JSONAccessNode, operator qbtypes.FilterOperator, value any, sb *sqlbuilder.SelectBuilder) (string, error) {
if current == nil {
return "", errors.NewInternalf(CodeArrayNavigationFailed, "navigation failed, current node is nil")
@@ -75,33 +222,6 @@ func (c *jsonConditionBuilder) recurseArrayHops(current *telemetrytypes.JSONAcce
return terminalCond, nil
}
// apply NOT at the top-level arrayExists so that a failure of any nested arrayExists counts as true (a matching log)
yes, operator := applyNotCondition(operator)
condition, err := c.buildAccessNodeBranches(current, operator, value, sb)
if err != nil {
return "", err
}
if yes {
return sb.Not(condition), nil
}
return condition, nil
}
func applyNotCondition(operator qbtypes.FilterOperator) (bool, qbtypes.FilterOperator) {
if operator.IsNegativeOperator() {
return true, operator.Inverse()
}
return false, operator
}
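The invert-then-NOT pattern above can be shown with a minimal self-contained sketch. The operator names here are stand-ins, not the real qbtypes constants:

```go
package main

import "fmt"

type filterOp int

const (
	opEqual filterOp = iota
	opNotEqual
)

// inverse mirrors qbtypes.FilterOperator.Inverse for this two-operator toy.
func (o filterOp) inverse() filterOp {
	if o == opEqual {
		return opNotEqual
	}
	return opEqual
}

func (o filterOp) isNegative() bool { return o == opNotEqual }

// applyNot mirrors applyNotCondition: a negative operator is rewritten as
// its positive inverse plus a flag telling the caller to wrap in NOT.
func applyNot(o filterOp) (bool, filterOp) {
	if o.isNegative() {
		return true, o.inverse()
	}
	return false, o
}

func main() {
	negate, op := applyNot(opNotEqual)
	fmt.Println(negate, op == opEqual)
}
```

The caller builds the positive condition and wraps it with `sb.Not` only when the flag is set, which keeps the NOT at the outermost arrayExists.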
// buildAccessNodeBranches builds conditions for each branch of the access node
func (c *jsonConditionBuilder) buildAccessNodeBranches(current *telemetrytypes.JSONAccessNode, operator qbtypes.FilterOperator, value any, sb *sqlbuilder.SelectBuilder) (string, error) {
if current == nil {
return "", errors.NewInternalf(CodeArrayNavigationFailed, "navigation failed, current node is nil")
}
currAlias := current.Alias()
fieldPath := current.FieldPath()
// Determine availability of Array(JSON) and Array(Dynamic) at this hop
@@ -136,200 +256,6 @@ func (c *jsonConditionBuilder) buildAccessNodeBranches(current *telemetrytypes.J
return sb.Or(branches...), nil
}
// buildTerminalCondition creates the innermost condition
func (c *jsonConditionBuilder) buildTerminalCondition(node *telemetrytypes.JSONAccessNode, operator qbtypes.FilterOperator, value any, sb *sqlbuilder.SelectBuilder) (string, error) {
if node.TerminalConfig.ElemType.IsArray {
// Note: applyNotCondition returns true here only if the top-level path is an array and the operator is negative.
// Otherwise this code is reached via buildAccessNodeBranches, where the operator has already been inverted if needed.
yes, operator := applyNotCondition(operator)
cond, err := c.buildTerminalArrayCondition(node, operator, value, sb)
if err != nil {
return "", err
}
if yes {
return sb.Not(cond), nil
}
return cond, nil
}
return c.buildPrimitiveTerminalCondition(node, operator, value, sb)
}
func getEmptyValue(elemType telemetrytypes.JSONDataType) any {
switch elemType {
case telemetrytypes.String:
return ""
case telemetrytypes.Int64, telemetrytypes.Float64, telemetrytypes.Bool:
return 0
default:
return nil
}
}
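A self-contained sketch of the default-value logic in getEmptyValue, with string type tags standing in for the telemetrytypes constants: these are the values an assumeNotNull'd indexed column holds when the JSON path is absent.

```go
package main

import "fmt"

// emptyValueFor mirrors getEmptyValue: the placeholder an indexed,
// non-nullable column carries for rows where the path does not exist.
// The type names are stand-ins for telemetrytypes.JSONDataType values.
func emptyValueFor(elemType string) any {
	switch elemType {
	case "String":
		return ""
	case "Int64", "Float64", "Bool":
		return 0
	default:
		return nil // arrays and other types have no indexed empty value
	}
}

func main() {
	fmt.Printf("%q %v %v\n", emptyValueFor("String"), emptyValueFor("Int64"), emptyValueFor("Array(Int64)"))
}
```

This is why EXISTS/NOT EXISTS on an indexed path are rewritten as `!=`/`=` against the empty value: the column itself is never NULL.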
func (c *jsonConditionBuilder) terminalIndexedCondition(node *telemetrytypes.JSONAccessNode, operator qbtypes.FilterOperator, value any, sb *sqlbuilder.SelectBuilder) (string, error) {
fieldPath := node.FieldPath()
if strings.Contains(fieldPath, telemetrytypes.ArraySepSuffix) {
return "", errors.NewInternalf(CodeArrayNavigationFailed, "cannot build index condition for array field %s", fieldPath)
}
elemType := node.TerminalConfig.ElemType
dynamicExpr := fmt.Sprintf("dynamicElement(%s, '%s')", fieldPath, elemType.StringValue())
indexedExpr := assumeNotNull(dynamicExpr)
// switch the operator and value for exists and not exists
switch operator {
case qbtypes.FilterOperatorExists:
operator = qbtypes.FilterOperatorNotEqual
value = getEmptyValue(elemType)
case qbtypes.FilterOperatorNotExists:
operator = qbtypes.FilterOperatorEqual
value = getEmptyValue(elemType)
default:
// do nothing
}
indexedExpr, formattedValue := querybuilder.DataTypeCollisionHandledFieldName(node.TerminalConfig.Key, value, indexedExpr, operator)
cond, err := c.applyOperator(sb, indexedExpr, operator, formattedValue)
if err != nil {
return "", err
}
return cond, nil
}
// buildPrimitiveTerminalCondition builds the condition when the terminal node is a primitive type.
// It handles data type collisions and uses indexes for the condition when available.
func (c *jsonConditionBuilder) buildPrimitiveTerminalCondition(node *telemetrytypes.JSONAccessNode, operator qbtypes.FilterOperator, value any, sb *sqlbuilder.SelectBuilder) (string, error) {
fieldPath := node.FieldPath()
conditions := []string{}
// utilize indexes for the condition if available
//
// Note: the indexing path is never taken for nested array fields because they cannot be indexed
indexed := slices.ContainsFunc(node.TerminalConfig.Key.Indexes, func(index telemetrytypes.JSONDataTypeIndex) bool {
return index.Type == node.TerminalConfig.ElemType
})
if node.TerminalConfig.ElemType.IndexSupported && indexed {
indexCond, err := c.terminalIndexedCondition(node, operator, value, sb)
if err != nil {
return "", err
}
// if the query has a definitive (non-empty) value, we can skip adding a
// condition that checks the existence of the path in the json column
if value != nil && value != getEmptyValue(node.TerminalConfig.ElemType) {
return indexCond, nil
}
conditions = append(conditions, indexCond)
// Switch the operator to EXISTS (except when it is NOT EXISTS): indexed
// paths are wrapped in assumeNotNull, so indexed columns always hold a
// default value. We flip to EXISTS and keep only the rows that
// actually have the value
if operator != qbtypes.FilterOperatorNotExists {
operator = qbtypes.FilterOperatorExists
}
}
var formattedValue any = value
if operator.IsStringSearchOperator() {
formattedValue = querybuilder.FormatValueForContains(value)
}
fieldExpr := fmt.Sprintf("dynamicElement(%s, '%s')", fieldPath, node.TerminalConfig.ElemType.StringValue())
// if the operator is negative and compares a value (i.e. anything other than EXISTS and NOT EXISTS), we must assume the field exists everywhere
//
// Note: applyNotCondition returns true here only if the top-level path is being queried and the operator is negative.
// Otherwise this code is reached via buildAccessNodeBranches, where the operator has already been inverted if needed.
if node.IsNonNestedPath() {
yes, _ := applyNotCondition(operator)
if yes {
switch operator {
case qbtypes.FilterOperatorNotExists:
// skip
default:
fieldExpr = assumeNotNull(fieldExpr)
}
}
}
fieldExpr, formattedValue = querybuilder.DataTypeCollisionHandledFieldName(node.TerminalConfig.Key, formattedValue, fieldExpr, operator)
cond, err := c.applyOperator(sb, fieldExpr, operator, formattedValue)
if err != nil {
return "", err
}
conditions = append(conditions, cond)
if len(conditions) > 1 {
return sb.And(conditions...), nil
}
return conditions[0], nil
}
func (c *jsonConditionBuilder) buildTerminalArrayCondition(node *telemetrytypes.JSONAccessNode, operator qbtypes.FilterOperator, value any, sb *sqlbuilder.SelectBuilder) (string, error) {
conditions := []string{}
// if the operator is a string search operator, we build an additional string comparison condition alongside the strict match condition
if operator.IsStringSearchOperator() {
formattedValue := querybuilder.FormatValueForContains(value)
arrayCond, err := c.buildArrayMembershipCondition(node, operator, formattedValue, sb)
if err != nil {
return "", err
}
conditions = append(conditions, arrayCond)
// switch operator for array membership checks
switch operator {
case qbtypes.FilterOperatorContains:
operator = qbtypes.FilterOperatorEqual
case qbtypes.FilterOperatorNotContains:
operator = qbtypes.FilterOperatorNotEqual
}
}
arrayCond, err := c.buildArrayMembershipCondition(node, operator, value, sb)
if err != nil {
return "", err
}
conditions = append(conditions, arrayCond)
if len(conditions) > 1 {
return sb.Or(conditions...), nil
}
return conditions[0], nil
}
// buildArrayMembershipCondition builds the condition at the point where the array resolves to a
// primitive-typed array, e.g. [300, 404, 500]; value operations then apply to the array elements
func (c *jsonConditionBuilder) buildArrayMembershipCondition(node *telemetrytypes.JSONAccessNode, operator qbtypes.FilterOperator, value any, sb *sqlbuilder.SelectBuilder) (string, error) {
arrayPath := node.FieldPath()
// create typed array out of a dynamic array
filteredDynamicExpr := func() string {
baseArrayDynamicExpr := fmt.Sprintf("dynamicElement(%s, 'Array(Dynamic)')", arrayPath)
return fmt.Sprintf("arrayFilter(x->(dynamicType(x) IN ('String', 'Int64', 'Float64', 'Bool')), %s)",
baseArrayDynamicExpr)
}
typedArrayExpr := func() string {
return fmt.Sprintf("dynamicElement(%s, '%s')", arrayPath, node.TerminalConfig.ElemType.StringValue())
}
var arrayExpr string
if node.TerminalConfig.ElemType == telemetrytypes.ArrayDynamic {
arrayExpr = filteredDynamicExpr()
} else {
arrayExpr = typedArrayExpr()
}
key := "x"
fieldExpr, value := querybuilder.DataTypeCollisionHandledFieldName(node.TerminalConfig.Key, value, key, operator)
op, err := c.applyOperator(sb, fieldExpr, operator, value)
if err != nil {
return "", err
}
return fmt.Sprintf("arrayExists(%s -> %s, %s)", key, op, arrayExpr), nil
}
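The shape of the SQL this function emits can be sketched with plain string formatting. The column name (`body_v2`) and path below are illustrative assumptions, not values taken from the diff:

```go
package main

import "fmt"

// membershipCondition sketches the expression buildArrayMembershipCondition
// produces for a typed array path: extract the typed array from the Dynamic
// column, then test each element with arrayExists.
func membershipCondition(arrayPath, elemType, elemCond string) string {
	// typedArrayExpr: pull the typed array out of the Dynamic-typed path.
	arrayExpr := fmt.Sprintf("dynamicElement(%s, '%s')", arrayPath, elemType)
	// arrayExists applies the per-element condition built by applyOperator.
	return fmt.Sprintf("arrayExists(x -> %s, %s)", elemCond, arrayExpr)
}

func main() {
	fmt.Println(membershipCondition("body_v2.`education`.`scores`", "Array(Int64)", "x = 100"))
}
```

For the `ArrayDynamic` case the inner expression is instead an `arrayFilter` over `dynamicType(x)`, as shown in `filteredDynamicExpr` above.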
func (c *jsonConditionBuilder) applyOperator(sb *sqlbuilder.SelectBuilder, fieldExpr string, operator qbtypes.FilterOperator, value any) (string, error) {
switch operator {
case qbtypes.FilterOperatorEqual:
@@ -391,6 +317,6 @@ func (c *jsonConditionBuilder) applyOperator(sb *sqlbuilder.SelectBuilder, field
}
}
func assumeNotNull(fieldExpr string) string {
return fmt.Sprintf("assumeNotNull(%s)", fieldExpr)
func assumeNotNull(column string, elemType telemetrytypes.JSONDataType) string {
return fmt.Sprintf("assumeNotNull(dynamicElement(%s, '%s'))", column, elemType.StringValue())
}
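The signature change above moves the `dynamicElement` wrapping into the helper itself. A standalone sketch of the new behavior, with a plain string standing in for `telemetrytypes.JSONDataType` and an illustrative column name:

```go
package main

import "fmt"

// assumeNotNull mirrors the new helper: given the raw column and element
// type, it emits both the dynamicElement extraction and the assumeNotNull
// wrapper in one expression.
func assumeNotNull(column, elemType string) string {
	return fmt.Sprintf("assumeNotNull(dynamicElement(%s, '%s'))", column, elemType)
}

func main() {
	fmt.Println(assumeNotNull("body_v2.`user`.`age`", "Int64"))
}
```

Callers that previously built the `dynamicElement(...)` expression themselves now pass the bare field path plus the element type.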

File diff suppressed because one or more lines are too long

View File

@@ -427,7 +427,10 @@ func (b *logQueryStatementBuilder) buildTimeSeriesQuery(
if query.Having != nil && query.Having.Expression != "" {
// Rewrite having expression to use SQL column names
rewriter := querybuilder.NewHavingExpressionRewriter()
rewrittenExpr := rewriter.RewriteForLogs(query.Having.Expression, query.Aggregations)
rewrittenExpr, err := rewriter.RewriteForLogs(query.Having.Expression, query.Aggregations)
if err != nil {
return nil, err
}
sb.Having(rewrittenExpr)
}
@@ -453,7 +456,10 @@ func (b *logQueryStatementBuilder) buildTimeSeriesQuery(
sb.GroupBy(querybuilder.GroupByKeys(query.GroupBy)...)
if query.Having != nil && query.Having.Expression != "" {
rewriter := querybuilder.NewHavingExpressionRewriter()
rewrittenExpr := rewriter.RewriteForLogs(query.Having.Expression, query.Aggregations)
rewrittenExpr, err := rewriter.RewriteForLogs(query.Having.Expression, query.Aggregations)
if err != nil {
return nil, err
}
sb.Having(rewrittenExpr)
}
@@ -560,7 +566,10 @@ func (b *logQueryStatementBuilder) buildScalarQuery(
// Add having clause if needed
if query.Having != nil && query.Having.Expression != "" {
rewriter := querybuilder.NewHavingExpressionRewriter()
rewrittenExpr := rewriter.RewriteForLogs(query.Having.Expression, query.Aggregations)
rewrittenExpr, err := rewriter.RewriteForLogs(query.Having.Expression, query.Aggregations)
if err != nil {
return nil, err
}
sb.Having(rewrittenExpr)
}

View File

@@ -6,6 +6,7 @@ import (
"github.com/SigNoz/signoz/pkg/querybuilder"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestHavingExpressionRewriter_LogQueries(t *testing.T) {
@@ -14,6 +15,7 @@ func TestHavingExpressionRewriter_LogQueries(t *testing.T) {
havingExpression string
aggregations []qbtypes.LogAggregation
expectedExpression string
wantErr bool
}{
{
name: "single aggregation with alias",
@@ -30,7 +32,7 @@ func TestHavingExpressionRewriter_LogQueries(t *testing.T) {
{Expression: "count()", Alias: "total"},
{Expression: "avg(duration)", Alias: "avg_duration"},
},
expectedExpression: "(__result_0 > 100 AND __result_1 < 500) OR __result_0 > 10000",
expectedExpression: "(((__result_0 > 100 AND __result_1 < 500)) OR __result_0 > 10000)",
},
{
name: "__result reference for single aggregation",
@@ -55,7 +57,7 @@ func TestHavingExpressionRewriter_LogQueries(t *testing.T) {
{Expression: "count()", Alias: ""},
{Expression: "sum(bytes)", Alias: ""},
},
expectedExpression: "__result_0 > 100 AND __result_1 < 1000",
expectedExpression: "(__result_0 > 100 AND __result_1 < 1000)",
},
{
name: "mixed aliases and expressions",
@@ -64,15 +66,36 @@ func TestHavingExpressionRewriter_LogQueries(t *testing.T) {
{Expression: "count()", Alias: ""},
{Expression: "countIf(level='error')", Alias: "error_count"},
},
expectedExpression: "__result_1 > 10 AND __result_0 < 1000",
expectedExpression: "(__result_1 > 10 AND __result_0 < 1000)",
},
{
name: "string literal in having expression",
havingExpression: "count() > 'threshold'",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: ""},
},
wantErr: true,
},
{
name: "unknown reference",
havingExpression: "no_such_alias > 100",
aggregations: []qbtypes.LogAggregation{
{Expression: "count()", Alias: "total_logs"},
},
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
rewriter := querybuilder.NewHavingExpressionRewriter()
result := rewriter.RewriteForLogs(tt.havingExpression, tt.aggregations)
assert.Equal(t, tt.expectedExpression, result)
result, err := rewriter.RewriteForLogs(tt.havingExpression, tt.aggregations)
if tt.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
assert.Equal(t, tt.expectedExpression, result)
}
})
}
}

View File

@@ -574,7 +574,10 @@ func (b *MetricQueryStatementBuilder) BuildFinalSelect(
sb.GroupBy("ts")
if query.Having != nil && query.Having.Expression != "" {
rewriter := querybuilder.NewHavingExpressionRewriter()
rewrittenExpr := rewriter.RewriteForMetrics(query.Having.Expression, query.Aggregations)
rewrittenExpr, err := rewriter.RewriteForMetrics(query.Having.Expression, query.Aggregations)
if err != nil {
return nil, err
}
sb.Having(rewrittenExpr)
}
} else if metricType == metrictypes.HistogramType && spaceAgg == metrictypes.SpaceAggregationCount && query.Aggregations[0].ComparisonSpaceAggregationParam != nil {
@@ -597,7 +600,10 @@ func (b *MetricQueryStatementBuilder) BuildFinalSelect(
if query.Having != nil && query.Having.Expression != "" {
rewriter := querybuilder.NewHavingExpressionRewriter()
rewrittenExpr := rewriter.RewriteForMetrics(query.Having.Expression, query.Aggregations)
rewrittenExpr, err := rewriter.RewriteForMetrics(query.Having.Expression, query.Aggregations)
if err != nil {
return nil, err
}
sb.Having(rewrittenExpr)
}
} else {
@@ -606,7 +612,10 @@ func (b *MetricQueryStatementBuilder) BuildFinalSelect(
sb.From("__spatial_aggregation_cte")
if query.Having != nil && query.Having.Expression != "" {
rewriter := querybuilder.NewHavingExpressionRewriter()
rewrittenExpr := rewriter.RewriteForMetrics(query.Having.Expression, query.Aggregations)
rewrittenExpr, err := rewriter.RewriteForMetrics(query.Having.Expression, query.Aggregations)
if err != nil {
return nil, err
}
sb.Where(rewrittenExpr)
}
}

View File

@@ -552,7 +552,10 @@ func (b *traceQueryStatementBuilder) buildTimeSeriesQuery(
sb.GroupBy(querybuilder.GroupByKeys(query.GroupBy)...)
if query.Having != nil && query.Having.Expression != "" {
rewriter := querybuilder.NewHavingExpressionRewriter()
rewrittenExpr := rewriter.RewriteForTraces(query.Having.Expression, query.Aggregations)
rewrittenExpr, err := rewriter.RewriteForTraces(query.Having.Expression, query.Aggregations)
if err != nil {
return nil, err
}
sb.Having(rewrittenExpr)
}
@@ -578,7 +581,10 @@ func (b *traceQueryStatementBuilder) buildTimeSeriesQuery(
sb.GroupBy(querybuilder.GroupByKeys(query.GroupBy)...)
if query.Having != nil && query.Having.Expression != "" {
rewriter := querybuilder.NewHavingExpressionRewriter()
rewrittenExpr := rewriter.RewriteForTraces(query.Having.Expression, query.Aggregations)
rewrittenExpr, err := rewriter.RewriteForTraces(query.Having.Expression, query.Aggregations)
if err != nil {
return nil, err
}
sb.Having(rewrittenExpr)
}
@@ -684,7 +690,10 @@ func (b *traceQueryStatementBuilder) buildScalarQuery(
// Add having clause if needed
if query.Having != nil && query.Having.Expression != "" && !skipHaving {
rewriter := querybuilder.NewHavingExpressionRewriter()
rewrittenExpr := rewriter.RewriteForTraces(query.Having.Expression, query.Aggregations)
rewrittenExpr, err := rewriter.RewriteForTraces(query.Having.Expression, query.Aggregations)
if err != nil {
return nil, err
}
sb.Having(rewrittenExpr)
}

View File

@@ -3,7 +3,7 @@ package telemetrytraces
import (
"strings"
grammar "github.com/SigNoz/signoz/pkg/parser/grammar"
grammar "github.com/SigNoz/signoz/pkg/parser/filterquery/grammar"
"github.com/antlr4-go/antlr/v4"
)

View File

@@ -624,7 +624,10 @@ func (b *traceOperatorCTEBuilder) buildTimeSeriesQuery(ctx context.Context, sele
combinedArgs := append(allGroupByArgs, allAggChArgs...)
// Add HAVING clause if specified
b.addHavingClause(sb)
err = b.addHavingClause(sb)
if err != nil {
return nil, err
}
sql, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse, combinedArgs...)
return &qbtypes.Statement{
@@ -740,7 +743,10 @@ func (b *traceOperatorCTEBuilder) buildTraceQuery(ctx context.Context, selectFro
sb.GroupBy(groupByKeys...)
}
b.addHavingClause(sb)
err = b.addHavingClause(sb)
if err != nil {
return nil, err
}
orderApplied := false
for _, orderBy := range b.operator.Order {
@@ -883,7 +889,10 @@ func (b *traceOperatorCTEBuilder) buildScalarQuery(ctx context.Context, selectFr
combinedArgs := append(allGroupByArgs, allAggChArgs...)
// Add HAVING clause if specified
b.addHavingClause(sb)
err = b.addHavingClause(sb)
if err != nil {
return nil, err
}
sql, args := sb.BuildWithFlavor(sqlbuilder.ClickHouse, combinedArgs...)
return &qbtypes.Statement{
@@ -892,12 +901,16 @@ func (b *traceOperatorCTEBuilder) buildScalarQuery(ctx context.Context, selectFr
}, nil
}
func (b *traceOperatorCTEBuilder) addHavingClause(sb *sqlbuilder.SelectBuilder) {
func (b *traceOperatorCTEBuilder) addHavingClause(sb *sqlbuilder.SelectBuilder) error {
if b.operator.Having != nil && b.operator.Having.Expression != "" {
rewriter := querybuilder.NewHavingExpressionRewriter()
rewrittenExpr := rewriter.RewriteForTraces(b.operator.Having.Expression, b.operator.Aggregations)
rewrittenExpr, err := rewriter.RewriteForTraces(b.operator.Having.Expression, b.operator.Aggregations)
if err != nil {
return err
}
sb.Having(rewrittenExpr)
}
return nil
}
func (b *traceOperatorCTEBuilder) addCTE(name, sql string, args []any, dependsOn []string) {

View File

@@ -1,8 +1,14 @@
package audittypes
import (
"context"
"encoding/hex"
"fmt"
"time"
"github.com/SigNoz/signoz/pkg/valuer"
"go.opentelemetry.io/collector/pdata/pcommon"
"go.opentelemetry.io/collector/pdata/plog"
semconv "go.opentelemetry.io/otel/semconv/v1.10.0"
)
// AuditEvent represents a single audit log event.
@@ -16,10 +22,10 @@ type AuditEvent struct {
EventName EventName `json:"eventName"`
// Audit attributes — Principal (Who)
PrincipalID string `json:"principalId"`
PrincipalEmail string `json:"principalEmail"`
PrincipalID valuer.UUID `json:"principalId"`
PrincipalEmail valuer.Email `json:"principalEmail"`
PrincipalType PrincipalType `json:"principalType"`
PrincipalOrgID string `json:"principalOrgId"`
PrincipalOrgID valuer.UUID `json:"principalOrgId"`
IdentNProvider string `json:"identnProvider,omitempty"`
// Audit attributes — Action (What)
@@ -45,7 +51,105 @@ type AuditEvent struct {
UserAgent string `json:"userAgent,omitempty"`
}
// Store is the minimal interface for emitting audit events.
type Store interface {
Emit(ctx context.Context, event AuditEvent) error
func NewPLogsFromAuditEvents(events []AuditEvent, name string, version string, scope string) plog.Logs {
logs := plog.NewLogs()
resourceLogs := logs.ResourceLogs().AppendEmpty()
resourceLogs.Resource().Attributes().PutStr(string(semconv.ServiceNameKey), name)
resourceLogs.Resource().Attributes().PutStr(string(semconv.ServiceVersionKey), version)
scopeLogs := resourceLogs.ScopeLogs().AppendEmpty()
scopeLogs.Scope().SetName(scope)
for i := range events {
events[i].ToLogRecord(scopeLogs.LogRecords().AppendEmpty())
}
return logs
}
func (event AuditEvent) ToLogRecord(dest plog.LogRecord) {
dest.SetTimestamp(pcommon.NewTimestampFromTime(event.Timestamp))
dest.SetObservedTimestamp(pcommon.NewTimestampFromTime(event.Timestamp))
dest.Body().SetStr(event.setBody())
dest.SetEventName(event.EventName.String())
dest.SetSeverityNumber(event.Outcome.Severity())
dest.SetSeverityText(event.Outcome.SeverityText())
if tid, ok := parseTraceID(event.TraceID); ok {
dest.SetTraceID(tid)
}
if sid, ok := parseSpanID(event.SpanID); ok {
dest.SetSpanID(sid)
}
attrs := dest.Attributes()
// Principal attributes
attrs.PutStr("signoz.audit.principal.id", event.PrincipalID.StringValue())
attrs.PutStr("signoz.audit.principal.email", event.PrincipalEmail.String())
attrs.PutStr("signoz.audit.principal.type", event.PrincipalType.StringValue())
attrs.PutStr("signoz.audit.principal.org_id", event.PrincipalOrgID.StringValue())
putStrIfNotEmpty(attrs, "signoz.audit.identn_provider", event.IdentNProvider)
// Action attributes
attrs.PutStr("signoz.audit.action", event.Action.StringValue())
attrs.PutStr("signoz.audit.action_category", event.ActionCategory.StringValue())
attrs.PutStr("signoz.audit.outcome", event.Outcome.StringValue())
// Resource attributes
attrs.PutStr("signoz.audit.resource.name", event.ResourceName)
putStrIfNotEmpty(attrs, "signoz.audit.resource.id", event.ResourceID)
// Error attributes (on failure)
putStrIfNotEmpty(attrs, "signoz.audit.error.type", event.ErrorType)
putStrIfNotEmpty(attrs, "signoz.audit.error.code", event.ErrorCode)
putStrIfNotEmpty(attrs, "signoz.audit.error.message", event.ErrorMessage)
// Transport context attributes
putStrIfNotEmpty(attrs, "http.request.method", event.HTTPMethod)
putStrIfNotEmpty(attrs, "http.route", event.HTTPRoute)
if event.HTTPStatusCode != 0 {
attrs.PutInt("http.response.status_code", int64(event.HTTPStatusCode))
}
putStrIfNotEmpty(attrs, "url.path", event.URLPath)
putStrIfNotEmpty(attrs, "client.address", event.ClientAddress)
putStrIfNotEmpty(attrs, "user_agent.original", event.UserAgent)
}
func (event AuditEvent) setBody() string {
if event.Outcome == OutcomeSuccess {
return fmt.Sprintf("%s (%s) %s %s %s", event.PrincipalEmail, event.PrincipalID, event.Action.PastTense(), event.ResourceName, event.ResourceID)
}
return fmt.Sprintf("%s (%s) failed to %s %s %s: %s (%s)", event.PrincipalEmail, event.PrincipalID, event.Action.StringValue(), event.ResourceName, event.ResourceID, event.ErrorType, event.ErrorCode)
}
func putStrIfNotEmpty(attrs pcommon.Map, key, value string) {
if value != "" {
attrs.PutStr(key, value)
}
}
func parseTraceID(s string) (pcommon.TraceID, bool) {
b, err := hex.DecodeString(s)
if err != nil || len(b) != 16 {
return pcommon.TraceID{}, false
}
var tid pcommon.TraceID
copy(tid[:], b)
return tid, true
}
func parseSpanID(s string) (pcommon.SpanID, bool) {
b, err := hex.DecodeString(s)
if err != nil || len(b) != 8 {
return pcommon.SpanID{}, false
}
var sid pcommon.SpanID
copy(sid[:], b)
return sid, true
}
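Both parsers above follow the same decode-and-check-length pattern: hex-decode the string and require exactly 16 bytes (trace ID) or 8 bytes (span ID). A minimal stdlib-only sketch of that shared shape:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// parseID mirrors parseTraceID/parseSpanID: hex-decode and require an
// exact byte length — 16 for trace IDs, 8 for span IDs. Non-hex input
// or a wrong length yields ok=false rather than an error.
func parseID(s string, want int) ([]byte, bool) {
	b, err := hex.DecodeString(s)
	if err != nil || len(b) != want {
		return nil, false
	}
	return b, true
}

func main() {
	_, ok := parseID("0123456789abcdef0123456789abcdef", 16) // 32 hex chars = 16 bytes
	fmt.Println(ok)
	_, ok = parseID("abcd", 16) // too short
	fmt.Println(ok)
}
```

Returning a boolean instead of an error lets ToLogRecord silently skip setting trace/span IDs when the audit event carries none.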

View File

@@ -1,13 +1,20 @@
package audittypes
import "github.com/SigNoz/signoz/pkg/valuer"
import (
"github.com/SigNoz/signoz/pkg/valuer"
"go.opentelemetry.io/collector/pdata/plog"
)
// Outcome represents the result of an audited operation.
type Outcome struct{ valuer.String }
type Outcome struct {
valuer.String
severity plog.SeverityNumber
severityText string
}
var (
OutcomeSuccess = Outcome{valuer.NewString("success")}
OutcomeFailure = Outcome{valuer.NewString("failure")}
OutcomeSuccess = Outcome{valuer.NewString("success"), plog.SeverityNumberInfo, "INFO"}
OutcomeFailure = Outcome{valuer.NewString("failure"), plog.SeverityNumberError, "ERROR"}
)
func (Outcome) Enum() []any {
@@ -16,3 +23,13 @@ func (Outcome) Enum() []any {
OutcomeFailure,
}
}
// Severity returns the plog severity number for this outcome.
func (o Outcome) Severity() plog.SeverityNumber {
return o.severity
}
// SeverityText returns the severity text for this outcome.
func (o Outcome) SeverityText() string {
return o.severityText
}

View File

@@ -113,29 +113,6 @@ const (
FilterOperatorNotContains
)
var operatorInverseMapping = map[FilterOperator]FilterOperator{
FilterOperatorEqual: FilterOperatorNotEqual,
FilterOperatorNotEqual: FilterOperatorEqual,
FilterOperatorGreaterThan: FilterOperatorLessThanOrEq,
FilterOperatorGreaterThanOrEq: FilterOperatorLessThan,
FilterOperatorLessThan: FilterOperatorGreaterThanOrEq,
FilterOperatorLessThanOrEq: FilterOperatorGreaterThan,
FilterOperatorLike: FilterOperatorNotLike,
FilterOperatorNotLike: FilterOperatorLike,
FilterOperatorILike: FilterOperatorNotILike,
FilterOperatorNotILike: FilterOperatorILike,
FilterOperatorBetween: FilterOperatorNotBetween,
FilterOperatorNotBetween: FilterOperatorBetween,
FilterOperatorIn: FilterOperatorNotIn,
FilterOperatorNotIn: FilterOperatorIn,
FilterOperatorExists: FilterOperatorNotExists,
FilterOperatorNotExists: FilterOperatorExists,
FilterOperatorRegexp: FilterOperatorNotRegexp,
FilterOperatorNotRegexp: FilterOperatorRegexp,
FilterOperatorContains: FilterOperatorNotContains,
FilterOperatorNotContains: FilterOperatorContains,
}
// AddDefaultExistsFilter returns true if an additional exists filter should be added to the query
// For the negative predicates, we don't want to add the exists filter. Why?
// Say for example, user adds a filter `service.name != "redis"`, we can't interpret it
@@ -185,10 +162,6 @@ func (f FilterOperator) IsNegativeOperator() bool {
return true
}
func (f FilterOperator) Inverse() FilterOperator {
return operatorInverseMapping[f]
}
func (f FilterOperator) IsComparisonOperator() bool {
switch f {
case FilterOperatorGreaterThan, FilterOperatorGreaterThanOrEq, FilterOperatorLessThan, FilterOperatorLessThanOrEq:

View File

@@ -264,6 +264,12 @@ func (q *QueryBuilderQuery[T]) validateAggregations(cfg validationConfig) error
}
aliases[v.Alias] = true
}
if strings.Contains(strings.ToLower(v.Expression), " as ") {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"aliasing is not allowed in expression. Use `alias` field instead",
)
}
case LogAggregation:
if v.Expression == "" {
aggId := fmt.Sprintf("aggregation #%d", i+1)
@@ -286,6 +292,12 @@ func (q *QueryBuilderQuery[T]) validateAggregations(cfg validationConfig) error
}
aliases[v.Alias] = true
}
if strings.Contains(strings.ToLower(v.Expression), " as ") {
return errors.NewInvalidInputf(
errors.CodeInvalidInput,
"aliasing is not allowed in expression. Use `alias` field instead",
)
}
}
}
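The validation added in both branches is a case-insensitive substring check for `" as "`. A self-contained sketch of the predicate:

```go
package main

import (
	"fmt"
	"strings"
)

// containsAlias mirrors the check above: reject aggregation expressions
// that embed SQL aliasing (e.g. "count() AS total"), since the query
// builder expects the alias in the separate `alias` field.
func containsAlias(expr string) bool {
	return strings.Contains(strings.ToLower(expr), " as ")
}

func main() {
	fmt.Println(containsAlias("count() AS total"))
	fmt.Println(containsAlias("count()"))
}
```

Being a plain substring check, it would also flag `" as "` occurring inside a quoted literal within the expression; the validation accepts that trade-off in exchange for simplicity.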

View File

@@ -90,11 +90,6 @@ func (n *JSONAccessNode) FieldPath() string {
return n.Parent.Alias() + "." + key
}
// Returns true if the current node is a non-nested path
func (n *JSONAccessNode) IsNonNestedPath() bool {
return !strings.Contains(n.FieldPath(), ArraySep)
}
func (n *JSONAccessNode) BranchesInOrder() []JSONAccessBranchType {
return slices.SortedFunc(maps.Keys(n.Branches), func(a, b JSONAccessBranchType) int {
return strings.Compare(b.StringValue(), a.StringValue())

View File

@@ -8,101 +8,65 @@ package telemetrytypes
// This represents the type information available in the test JSON structure.
func TestJSONTypeSet() (map[string][]JSONDataType, MetadataStore) {
types := map[string][]JSONDataType{
// ── user (primitives) ─────────────────────────────────────────────
"user.name": {String},
"user.permissions": {ArrayString},
"user.age": {Int64, String}, // Int64/String ambiguity
"user.height": {Float64},
"user.active": {Bool}, // Bool — not IndexSupported
// Deeper non-array nesting (a.b.c — no array hops)
"user.address.zip": {Int64},
// ── education[] ───────────────────────────────────────────────────
// Pattern: x[].y
"education": {ArrayJSON},
"education[].name": {String},
"education[].type": {String, Int64},
"education[].year": {Int64},
"education[].scores": {ArrayInt64},
"education[].parameters": {ArrayFloat64, ArrayDynamic},
// Pattern: x[].y[]
"education[].awards": {ArrayDynamic, ArrayJSON},
// Pattern: x[].y[].z
"education[].awards[].name": {String},
"education[].awards[].type": {String},
"education[].awards[].semester": {Int64},
// Pattern: x[].y[].z[]
"education[].awards[].participated": {ArrayDynamic, ArrayJSON},
// Pattern: x[].y[].z[].w
"education[].awards[].participated[].members": {ArrayString},
// Pattern: x[].y[].z[].w[]
"education[].awards[].participated[].team": {ArrayJSON},
// Pattern: x[].y[].z[].w[].v
"education[].awards[].participated[].team[].branch": {String},
// ── interests[] ───────────────────────────────────────────────────
"interests": {ArrayJSON},
"interests[].entities": {ArrayJSON},
"interests[].entities[].reviews": {ArrayJSON},
"interests[].entities[].reviews[].entries": {ArrayJSON},
"interests[].entities[].reviews[].entries[].metadata": {ArrayJSON},
"interests[].entities[].reviews[].entries[].metadata[].positions": {ArrayJSON},
"interests[].entities[].reviews[].entries[].metadata[].positions[].name": {String},
"interests[].entities[].reviews[].entries[].metadata[].positions[].ratings": {ArrayInt64, ArrayString},
"http-events": {ArrayJSON},
"http-events[].request-info.host": {String},
"ids": {ArrayDynamic},
// ── top-level primitives ──────────────────────────────────────────
"message": {String},
"http-status": {Int64, String}, // hyphen in root key, ambiguous
// ── top-level nested objects (no array hops) ───────────────────────
"response.time-taken": {Float64}, // hyphen inside nested key
"user.name": {String},
"user.permissions": {ArrayString},
"user.age": {Int64, String},
"user.height": {Float64},
"education": {ArrayJSON},
"education[].name": {String},
"education[].type": {String, Int64},
"education[].internal_type": {String},
"education[].metadata.location": {String},
"education[].parameters": {ArrayFloat64, ArrayDynamic},
"education[].duration": {String},
"education[].mode": {String},
"education[].year": {Int64},
"education[].field": {String},
"education[].awards": {ArrayDynamic, ArrayJSON},
"education[].awards[].name": {String},
"education[].awards[].rank": {Int64},
"education[].awards[].medal": {String},
"education[].awards[].type": {String},
"education[].awards[].semester": {Int64},
"education[].awards[].participated": {ArrayDynamic, ArrayJSON},
"education[].awards[].participated[].type": {String},
"education[].awards[].participated[].field": {String},
"education[].awards[].participated[].project_type": {String},
"education[].awards[].participated[].project_name": {String},
"education[].awards[].participated[].race_type": {String},
"education[].awards[].participated[].team_based": {Bool},
"education[].awards[].participated[].team_name": {String},
"education[].awards[].participated[].team": {ArrayJSON},
"education[].awards[].participated[].members": {ArrayString},
"education[].awards[].participated[].team[].name": {String},
"education[].awards[].participated[].team[].branch": {String},
"education[].awards[].participated[].team[].semester": {Int64},
"interests": {ArrayJSON},
"interests[].type": {String},
"interests[].entities": {ArrayJSON},
"interests[].entities.application_date": {String},
"interests[].entities[].reviews": {ArrayJSON},
"interests[].entities[].reviews[].given_by": {String},
"interests[].entities[].reviews[].remarks": {String},
"interests[].entities[].reviews[].weight": {Float64},
"interests[].entities[].reviews[].passed": {Bool},
"interests[].entities[].reviews[].type": {String},
"interests[].entities[].reviews[].analysis_type": {Int64},
"interests[].entities[].reviews[].entries": {ArrayJSON},
"interests[].entities[].reviews[].entries[].subject": {String},
"interests[].entities[].reviews[].entries[].status": {String},
"interests[].entities[].reviews[].entries[].metadata": {ArrayJSON},
"interests[].entities[].reviews[].entries[].metadata[].company": {String},
"interests[].entities[].reviews[].entries[].metadata[].experience": {Int64},
"interests[].entities[].reviews[].entries[].metadata[].unit": {String},
"interests[].entities[].reviews[].entries[].metadata[].positions": {ArrayJSON},
"interests[].entities[].reviews[].entries[].metadata[].positions[].name": {String},
"interests[].entities[].reviews[].entries[].metadata[].positions[].duration": {Int64, Float64},
"interests[].entities[].reviews[].entries[].metadata[].positions[].unit": {String},
"interests[].entities[].reviews[].entries[].metadata[].positions[].ratings": {ArrayInt64, ArrayString},
"message": {String},
"tags": {ArrayString},
}
return types, nil
}

// TestIndexedPathEntry is a path + JSON type pair representing a field
// backed by a ClickHouse skip index in the test data.
//
// Only non-array paths with IndexSupported types (String, Int64, Float64)
// are valid entries — arrays and Bool cannot carry a skip index.
//
// The ColumnExpression for each entry is computed at test-setup time from
// the access plan, since it depends on the column name (e.g. body_v2)
// which is unknown to this package.
type TestIndexedPathEntry struct {
Path string
Type JSONDataType
}

// TestIndexedPaths lists path+type pairs from TestJSONTypeSet that are
// backed by a JSON data type index. Test setup uses this to populate
// key.Indexes after calling SetJSONAccessPlan.
//
// Intentionally excluded:
// - user.active → Bool, IndexSupported=false
var TestIndexedPaths = []TestIndexedPathEntry{
// user primitives
{Path: "user.name", Type: String},
// user.address — deeper non-array nesting
{Path: "user.address.zip", Type: Int64},
// root-level with special characters
{Path: "http-status", Type: Int64},
{Path: "http-status", Type: String},
// root-level nested objects (no array hops)
{Path: "response.time-taken", Type: Float64},
}
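The doc comment on TestIndexedPathEntry notes that each entry's ColumnExpression is computed at test-setup time from the access plan, because it depends on a column name (e.g. body_v2) this package does not know. A minimal, self-contained sketch of that derivation; `buildColumnExpr` and the expression format are hypothetical stand-ins, since the real format is produced by SetJSONAccessPlan:

```go
package main

import "fmt"

// JSONDataType and TestIndexedPathEntry are minimal stand-ins for the
// package's real types, redeclared here only to keep the sketch
// self-contained.
type JSONDataType string

type TestIndexedPathEntry struct {
	Path string
	Type JSONDataType
}

// buildColumnExpr is a hypothetical helper illustrating how a
// ColumnExpression could be assembled at test-setup time from the
// storage column name plus the path and JSON type. The actual
// expression shape comes from the access plan, not this function.
func buildColumnExpr(column, path string, t JSONDataType) string {
	return fmt.Sprintf("%s.`%s`:%s", column, path, t)
}

func main() {
	entries := []TestIndexedPathEntry{
		{Path: "user.name", Type: "String"},
		{Path: "http-status", Type: "Int64"},
	}
	for _, e := range entries {
		fmt.Println(buildColumnExpr("body_v2", e.Path, e.Type))
	}
}
```

Test setup would loop over TestIndexedPaths the same way, storing the computed expression into key.Indexes for each path+type pair.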


@@ -6,7 +6,7 @@ import (
"strings"
"github.com/SigNoz/signoz/pkg/errors"
grammar "github.com/SigNoz/signoz/pkg/parser/grammar"
grammar "github.com/SigNoz/signoz/pkg/parser/filterquery/grammar"
qbtypes "github.com/SigNoz/signoz/pkg/types/querybuildertypes/querybuildertypesv5"
"github.com/antlr4-go/antlr/v4"
)


@@ -6,6 +6,7 @@ echo "Generating Go parser..."
mkdir -p pkg/parser
# Generate Go parser
-antlr -visitor -Dlanguage=Go -o pkg/parser grammar/FilterQuery.g4
+antlr -visitor -Dlanguage=Go -o pkg/parser/filterquery grammar/FilterQuery.g4
+antlr -visitor -Dlanguage=Go -o pkg/parser/havingexpression grammar/HavingExpression.g4
echo "Go parser generation complete"