* fix: add missing IP address filtering for scalar data
In the domain listing API for external API monitoring,
we have an option to filter by IP address, but
it only handles timeseries and raw type data, while
the domain list handler returns scalar data.
* fix: switch to new derived attributes for ip filtering
---------
Co-authored-by: Nityananda Gohain <nityanandagohain@gmail.com>
* fix: limit value size and count to pointers with omitempty
* fix: openapi specs backend
* fix: openapi specs frontend
* chore: add go tests for limits validations
* fix: linting issues
* test: remove go test and add gateway integration tests with mocked gateway for all gateway apis
* feat: add gateway in integration ci src matrix
* chore: divide tests into multiple files for keys and limits and utilities
* fix: create ingestion key returns 201, check for actual values in tests
* fix: create ingestion key gateway api mock status code as 201
### 📄 Summary
- Expose Zeus PutProfile, PutHost and GetHost APIs as first-class OpenAPI-spec endpoints, replacing the previous proxy-based approach
- Introduce typed request structs (PostableProfile, PostableHost) instead of raw []byte for type safety and OpenAPI documentation
- Wire Zeus handler through the standard dependency chain: handler interface, handler implementation, Handlers struct, signozapiserver provider
#### Changes
- PUT /api/v2/zeus/profiles - saves deployment profile to Zeus
- PUT /api/v2/zeus/hosts - saves deployment host to Zeus
- GET /api/v2/zeus/hosts - gets the deployment host from Zeus
- All of the above new APIs require Admin access
Also:
- httpzeus provider — marshaling now happens in the provider; upstream error messages are passed through instead of being swallowed; fixes wrong upstream path (/hosts → /host); adds 409 Conflict mapping; replaces errors.Newf with errors.New
#### Issues closed by this PR
Closes https://github.com/SigNoz/platform-pod/issues/1722
* fix: fix inconsistent use of HTTP attributes in ext. api
HTTP attributes like http.url and url.full, along with server.name and net.peer.name,
were used inconsistently, leading to bugs in the aggregation query, and they were
expensive to query as well, since these attributes are stored as JSON instead of
direct columns. Using columns like http_url optimises these queries since
the column gets populated from all relevant attributes during ingestion itself.
* fix: switch to using http_host instead of external_http_url
external_http_url stores the hostname, but the name
is confusing, so switching to http_host
* fix: use constants defined where possible
* fix: fix old attribute usage in tests
## Summary
- Adds root user support with environment-based provisioning, protection guards, and automatic reconciliation. A root user is a special admin user that is provisioned via configuration (environment variables) rather than the UI, designed for automated/headless deployments.
## Key Features
- Environment-based provisioning: Configure root user via user.root.enabled, user.root.email, user.root.password, and user.root.org_name settings
- Automatic reconciliation: A background service runs on startup that:
- Looks up the organization by configured org_name
- If no matching org exists, creates the organization and root user via CreateFirstUser
- If the org exists, reconciles the root user (creates, promotes existing user, or updates email/password to match config)
- Retries every 10 seconds until successful
- Protection guards: Root users cannot be:
- Updated or deleted through the API
- Invited or have their password changed through the UI
- Authenticated via SSO/SAML (password-only authentication enforced)
- Self-registration disabled: When root user provisioning is enabled, the self-registration endpoint (/register) is blocked to prevent creating duplicate organizations
- Idempotent password sync: On every reconciliation, the root user's password is synced with the configured value — if it differs, it's updated; if it matches, no-op
Using FINAL in a ClickHouse query triggers the aggregation
merge of data at read time; using GROUP BY instead is more efficient.
It's recommended in the docs as well - https://clickhouse.com/docs/engines/table-engines/mergetree-family/aggregatingmergetree#select-and-insert
If multiple batches are inserted with the same trace_id, the
trace_summary table can have multiple rows before they are
aggregated by ClickHouse. The query to get the time range from
trace_summary was assuming a single row, which was creating
unpredictable behaviour, since any random row could be returned.
* feat(authz): remove unnecessary dependency injection for role setter
* feat(authz): deprecate role module
* feat(authz): deprecate role module
* feat(authz): split between server and sql actions
* feat(authz): add bootstrap for managed role transactions
* feat(authz): update and add integration tests
* feat(authz): match names for factory and migration
* feat(authz): fix integration tests
* feat(authz): reduce calls on organisation creation
* fix: include size and count in JSON even when zero
* fix: make forgot password api fields required
* fix: openapi spec
* fix: error message casing for frontend
* chore: fix openapi spec
* fix: openapi specs