Indexes are located on the Configuration page, in the Indexes section. Double-click an index, or click its edit button, to see more information about the number of logs indexed over the past 3 days, as well as the retention period for those logs:
By default, Log Explorer has a single log index, but Datadog also offers multiple indexes if you require them:
Index filters allow dynamic control over which logs flow into which indexes. For example, if you create a first index filtered on the status:notice attribute, a second index filtered on the status:error attribute, and a final one without any filter (the equivalent of *), all your status:notice logs would go to the first index, all your status:error logs to the second index, and the rest would go to the final one.
Note: Logs enter the first index whose filter they match. Use drag and drop on the list of indexes to reorder them according to your use case.
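The first-match routing described above can be sketched as follows. The predicate-based filters here are a simplification for illustration, not Datadog's actual index filter query syntax:

```python
# Sketch: a log enters the first index whose filter it matches.
# Predicates stand in for real index filter queries.

def route_log(log, indexes):
    """Return the name of the first index whose filter matches the log."""
    for name, predicate in indexes:
        if predicate(log):
            return name
    return None  # no index matched; the log is not indexed

indexes = [
    ("notice-index", lambda log: log.get("status") == "notice"),
    ("error-index",  lambda log: log.get("status") == "error"),
    ("catch-all",    lambda log: True),  # equivalent of the * filter
]

print(route_log({"status": "error"}, indexes))  # error-index
print(route_log({"status": "info"}, indexes))   # catch-all
```

Because only the first match counts, reordering the list changes where logs land, which is why index order matters.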
By default, log indexes have no exclusion filters: that is, all logs matching the index filter are indexed.
Because not all your logs are equally valuable, exclusion filters control which logs flowing into your index should be removed. Excluded logs are discarded from indexes, but still flow through the Live Tail, can be used to generate metrics, and can be archived.
Exclusion filters are defined by a query, a sampling rule, and an active/inactive toggle:
The default query is *, meaning all logs flowing into the index would be excluded. Scope the exclusion filter down to only a subset of logs with a log query.
The default rule excludes 100% of logs matching the query. Adjust the sampling rate from 0% to 100%, and decide whether the sampling applies to individual logs or to groups of logs defined by the unique values of any attribute.
Note: Logs are processed only by the first active exclusion filter they match. If a log matches an exclusion filter (even if the log is not sampled out), it ignores all following exclusion filters in the sequence.
Use drag and drop on the list of exclusion filters to reorder them according to your use case.
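The exclusion-filter behavior described above (the first active matching filter wins, then its sampling rate decides) can be sketched like this. The filter shape and predicates are illustrative, not Datadog's actual configuration schema:

```python
import random

def should_index(log, exclusion_filters, rng=random.random):
    """Decide whether a log is indexed. Only the first active exclusion
    filter that matches the log is applied; later filters are ignored."""
    for f in exclusion_filters:
        if not f["is_enabled"]:
            continue
        if f["matches"](log):
            # This filter decides the log's fate.
            # sample_rate 1.0 means exclude 100% of matching logs.
            return rng() >= f["sample_rate"]
    return True  # no active filter matched: the log is indexed

filters = [
    {"is_enabled": True, "sample_rate": 1.0,
     "matches": lambda log: log.get("status") == "debug"},
    {"is_enabled": True, "sample_rate": 0.95,
     "matches": lambda log: 200 <= log.get("http.status_code", 0) < 300},
]

print(should_index({"status": "debug"}, filters))  # False: 100% excluded
print(should_index({"status": "error"}, filters))  # True: no filter matched
```

Note how a log matching the first filter never reaches the second one, mirroring the first-match rule in the note above.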
You might not need your DEBUG logs until your platform undergoes an incident, or until you want to carefully observe the deployment of a critical version of your application. Set up a 100% exclusion filter on status:DEBUG, and toggle it on and off from the Datadog UI or through the API when required.
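Toggling such a filter programmatically amounts to updating the index configuration. The sketch below only builds the updated configuration locally; the endpoint named in the comment and the exact payload shape are assumptions, so check the Logs Indexes API reference before relying on them:

```python
# Sketch: flip the is_enabled flag on one exclusion filter in an index
# configuration. The payload shape and the endpoint in the comment below
# are assumptions, not a verified API contract.
import json

def toggle_exclusion_filter(index_config, filter_name, enabled):
    """Return a copy of the index config with one exclusion filter toggled."""
    updated = json.loads(json.dumps(index_config))  # cheap deep copy
    for f in updated.get("exclusion_filters", []):
        if f["name"] == filter_name:
            f["is_enabled"] = enabled
    return updated

index = {
    "filter": {"query": "*"},
    "exclusion_filters": [
        {"name": "drop-debug", "is_enabled": False,
         "filter": {"query": "status:DEBUG", "sample_rate": 1.0}},
    ],
}

payload = toggle_exclusion_filter(index, "drop-debug", True)
# The payload would then be sent to the Logs Indexes API (assumed endpoint),
# e.g. PUT https://api.datadoghq.com/api/v1/logs/config/indexes/<index_name>
# with DD-API-KEY and DD-APPLICATION-KEY headers.
```

Building the full configuration first, rather than patching a single field, matches the update-the-whole-index style of configuration APIs.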
Let’s say you don’t want to keep all the logs from your web access server requests. You could choose to index all 3xx, 4xx, and 5xx logs, but exclude 95% of the 2xx logs: set an exclusion filter on source:nginx AND http.status_code:[200 TO 299] with a 95% sampling rate to keep track of the trends.
Tip: Transform web access logs into meaningful KPIs with a metric generated from your logs, counting the number of requests tagged by status code, browser, and country.
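As a rough illustration of what such a log-based metric reports, here is a local aggregation counting requests per status code (tagging by browser and country would work the same way, with extra grouping keys):

```python
from collections import Counter

def count_requests_by_status(logs):
    """Aggregate access logs into a per-status-code request count,
    mimicking what a log-based metric tagged by status code reports."""
    return Counter(log["http.status_code"] for log in logs)

logs = [
    {"http.status_code": 200}, {"http.status_code": 200},
    {"http.status_code": 404}, {"http.status_code": 500},
]
print(count_requests_by_status(logs))  # Counter({200: 2, 404: 1, 500: 1})
```

Because the metric is generated before exclusion filters drop the 2xx logs, the counts stay accurate even while 95% of those logs are excluded from the index.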
You have millions of users connecting to your website every day. Although you don’t need observability on every single user, you still want to keep the full picture for some. Set up an exclusion filter applying to all production logs (env:production) and exclude logs for 90% of users, sampling on the unique values of your user ID attribute.
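One way to reason about sampling on the unique values of an attribute is deterministic hashing: every log from a kept user is kept, every log from a dropped user is dropped, so kept users retain their full picture. This is a sketch of the idea, not Datadog's implementation:

```python
import hashlib

def keep_user(user_id, keep_fraction=0.10):
    """Deterministically keep ~10% of users: hash the user ID into [0, 1)
    and keep those below the threshold. The same user ID always gets the
    same decision, so a kept user's logs are never partially sampled."""
    digest = hashlib.sha256(str(user_id).encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < keep_fraction
```

Hashing rather than random sampling is what makes per-group sampling useful: the decision is stable across logs, hosts, and time for the same attribute value.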
You can use APM in conjunction with Logs, thanks to trace ID injection in logs. As with users, you don’t need to keep all your logs, but making sure that logs always give the full picture for a trace is critical for troubleshooting.
Set up an exclusion filter applied to logs from your instrumented service (service:my_python_app) and exclude logs for 50% of trace IDs. Make sure to use the trace ID remapper upstream in your pipelines.
You can set a daily quota to hard-limit the number of logs stored in an index per day. The quota applies to all logs that would otherwise have been stored (that is, after exclusion filters are applied). Once the daily quota is reached, logs are no longer indexed, but are still available in the Live Tail, sent to your archives, and usable to generate metrics from logs.
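The quota mechanics can be sketched as a simple counter. In this toy model, "dropped" only means not indexed; in Datadog those logs would still reach the Live Tail, archives, and log-based metrics:

```python
class DailyQuota:
    """Toy model of a daily index quota: logs are indexed until the
    limit is hit, then every further log is rejected for the day."""

    def __init__(self, limit):
        self.limit = limit
        self.indexed = 0
        self.dropped = 0  # not indexed, but not lost in Datadog's model

    def try_index(self, n=1):
        """Attempt to index n logs; return True if they fit in the quota."""
        if self.indexed + n <= self.limit:
            self.indexed += n
            return True
        self.dropped += n
        return False

quota = DailyQuota(limit=3)
results = [quota.try_index() for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

A real implementation would also reset the counters at the daily rollover time, as the note below describes.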
Update or remove this quota at any time when editing the Index:
Note: Index daily quotas reset automatically at 2:00pm UTC (4:00pm CET, 10:00am EDT, 7:00am PDT).