ddsource
string
The integration name associated with your log: the technology from which the log originated.
When it matches an integration name, Datadog automatically installs the corresponding parsers and facets.
See reserved attributes.
ddtags
string
Tags associated with your logs.
hostname
string
The name of the originating host of the log.
message
string
The message reserved attribute
of your log. By default, Datadog ingests the value of the message attribute as the body of the log entry.
That value is then highlighted and displayed in the Logstream, where it is indexed for full text search.
service
string
The name of the application or service generating the log events.
It is used to switch from Logs to APM, so make sure you define the same value when you use both products.
See reserved attributes.
## Multi JSON Messages
# Pass multiple log objects at once.
See one of the other client libraries for an example of sending deflate-compressed data.
## Simple JSON Message
# Log attributes can be passed as `key:value` pairs in valid JSON messages.
See one of the other client libraries for an example of sending deflate-compressed data.
## Multi Logplex Messages
# Submit log messages.
See one of the other client libraries for an example of sending deflate-compressed data.
## Simple Logplex Message
# Submit log string.
See one of the other client libraries for an example of sending deflate-compressed data.
## Multi Raw Messages
# Submit log string.
See one of the other client libraries for an example of sending deflate-compressed data.
## Simple Raw Message
# Submit log string. Log attributes can be passed as query parameters in the URL. This enables the addition of tags or the source by using the `ddtags` and `ddsource` parameters: `?host=my-hostname&service=my-service&ddsource=my-source&ddtags=env:prod,user:my-user`.
See one of the other client libraries for an example of sending deflate-compressed data.
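As a quick sketch of how those query parameters combine into an intake URL (the host below is an assumption; substitute the log intake endpoint for your Datadog site):

```python
from urllib.parse import urlencode

# Hypothetical intake host; the parameter names come from the section above.
base = "https://http-intake.logs.datadoghq.com/api/v2/logs"
params = {
    "host": "my-hostname",
    "service": "my-service",
    "ddsource": "my-source",
    "ddtags": "env:prod,user:my-user",
}
# safe=":," keeps the tag separators readable, matching the example above
url = f"{base}?{urlencode(params, safe=':,')}"
print(url)
# The raw log line then goes in the POST body, for example:
#   curl -X POST "$url" -H "DD-API-KEY: <API-KEY>" -d 'Hello World'
```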
[{"ddsource":"nginx","ddtags":"env:staging,version:5.1","hostname":"i-012345678","message":"2019-11-19T14:37:58,995 INFO [process.name][20081] Hello World","service":"payment"}]
[{"ddsource":"nginx","ddtags":"env:staging,version:5.1","hostname":"i-012345678","message":"2019-11-19T14:37:58,995 INFO [process.name][20081] Hello World","service":"payment","status":"info"}]
The list of metrics or timeseries to compute for the retrieved buckets.
aggregation [required]
enum
An aggregation function
Allowed enum values: count,cardinality,pc75,pc90,pc95,pc98,pc99,sum,min,max,avg,median
interval
string
The time buckets' size (only used for type=timeseries)
Defaults to a resolution of 150 points
metric
string
The metric to use
type
enum
The type of compute
Allowed enum values: timeseries,total
default: total
filter
object
The search and filter query settings
from
string
The minimum time for the requested logs, supports date math and regular timestamps (milliseconds).
default: now-15m
indexes
[string]
For customers with multiple indexes, the indexes to search. Defaults to ['*'] which means all indexes.
default: *
query
string
The search query - following the log search syntax.
default: *
storage_tier
enum
Specifies the storage type: indexes, online-archives, or flex.
Allowed enum values: indexes,online-archives,flex
default: indexes
to
string
The maximum time for the requested logs, supports date math and regular timestamps (milliseconds).
default: now
group_by
[object]
The rules for the group by
facet [required]
string
The name of the facet to use (required)
histogram
object
Used to perform a histogram computation (only for measure facets).
Note: at most 100 buckets are allowed, the number of buckets is (max - min)/interval.
interval [required]
double
The bin size of the histogram buckets
max [required]
double
The maximum value for the measure used in the histogram
(values greater than this one are filtered out)
min [required]
double
The minimum value for the measure used in the histogram
(values smaller than this one are filtered out)
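The bucket-count rule above can be checked with a few lines (a hypothetical helper, not part of any client library):

```python
def histogram_bucket_count(minimum: float, maximum: float, interval: float) -> int:
    """Bucket count per the (max - min) / interval rule noted above."""
    buckets = (maximum - minimum) / interval
    if buckets > 100:
        # At most 100 buckets are allowed by the API.
        raise ValueError("at most 100 buckets are allowed, got %d" % buckets)
    return int(buckets)

# For example, @duration from 0 to 100 ms in 10 ms bins gives 10 buckets.
print(histogram_bucket_count(0, 100, 10))  # → 10
```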
limit
int64
The maximum buckets to return for this group by. Note: at most 10000 buckets are allowed.
If grouping by multiple facets, the product of limits must not exceed 10000.
default: 10
missing
<oneOf>
The value to use for logs that don't have the facet used to group by
Option 1
string
The missing value to use for a string-valued facet.
Option 2
double
The missing value to use for a number-valued facet.
sort
object
A sort rule
aggregation
enum
An aggregation function
Allowed enum values: count,cardinality,pc75,pc90,pc95,pc98,pc99,sum,min,max,avg,median
metric
string
The metric to sort by (only used for type=measure)
order
enum
The order to use, ascending or descending
Allowed enum values: asc,desc
type
enum
The type of sorting algorithm
Allowed enum values: alphabetical,measure
default: alphabetical
total
<oneOf>
A resulting object to put the given computes in over all the matching records.
Option 1
boolean
If set to true, creates an additional bucket labeled "$facet_total"
Option 2
string
A string to use as the key value for the total bucket
Option 3
double
A number to use as the key value for the total bucket
options
object
Global query options that are used during the query.
Note: you should supply either timezone or time offset, but not both. Otherwise, the query will fail.
timeOffset
int64
The time offset (in seconds) to apply to the query.
timezone
string
The timezone can be specified as GMT, UTC, an offset from UTC (like UTC+1), or as a Timezone Database identifier (like America/New_York).
default: UTC
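As a minimal illustration of the two mutually exclusive option styles (plain dicts for illustration, not client-library models):

```python
# A real request would carry exactly one of these under "options".
opts_tz = {"timezone": "America/New_York"}  # named zone (or GMT, UTC, UTC+1)
opts_off = {"timeOffset": -5 * 3600}        # offset in seconds from UTC
# Supplying both timezone and timeOffset in one request makes the query fail.
print(opts_tz, opts_off)
```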
page
object
Paging settings
cursor
string
The returned paging point to use to get the next results. Note: at most 1000 results can be paged.
The response object for the logs aggregate API endpoint
Field
Type
Description
data
object
The query results
buckets
[object]
The list of matching buckets, one item per bucket
by
object
The key, value pairs for each group by
<any-key>
The values for each group by
computes
object
A map from metric name to its value for a regular compute, or to a list of values for a timeseries.
<any-key>
<oneOf>
A bucket value, can be either a timeseries or a single value
Option 1
string
A single string value
Option 2
double
A single number value
Option 3
[object]
A timeseries array
time
string
The time value for this point
value
double
The value for this point
meta
object
The metadata associated with a request
elapsed
int64
The time elapsed in milliseconds
page
object
Paging attributes.
after
string
The cursor to use to get the next results, if any. To make the next request,
use the same parameters and set page[cursor] to this value.
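The cursor flow can be sketched as follows; `fetch_page` is a hypothetical stand-in for the HTTP call, so only the pagination loop itself is meaningful:

```python
def fetch_page(body: dict) -> dict:
    """Hypothetical stand-in for POSTing the aggregate request.

    Returns canned responses shaped like the aggregate response,
    where meta.page.after holds the next cursor (absent on the last page).
    """
    cursor = body.get("page", {}).get("cursor")
    pages = {
        None: {"data": {"buckets": [1]}, "meta": {"page": {"after": "c2"}}},
        "c2": {"data": {"buckets": [2]}, "meta": {"page": {"after": "c3"}}},
        "c3": {"data": {"buckets": [3]}, "meta": {}},  # last page: no cursor
    }
    return pages[cursor]

def aggregate_all(body: dict) -> list:
    """Collect buckets from every page by following meta.page.after."""
    buckets, cursor = [], None
    while True:
        if cursor is not None:
            body = {**body, "page": {"cursor": cursor}}
        resp = fetch_page(body)
        buckets.extend(resp["data"]["buckets"])
        cursor = resp.get("meta", {}).get("page", {}).get("after")
        if cursor is None:
            return buckets

print(aggregate_all({"filter": {"query": "*"}}))  # → [1, 2, 3]
```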
request_id
string
The identifier of the request
status
enum
The status of the response
Allowed enum values: done,timeout
warnings
[object]
A list of warnings (non-fatal errors) encountered. Partial results might be
returned if warnings are present in the response.
code
string
A unique code for this type of warning
detail
string
A detailed explanation of this specific warning
title
string
A short human-readable summary of the warning
{"data":{"buckets":[{"by":{"<any-key>":"undefined"},"computes":{"<any-key>":{"description":"undefined","type":"undefined"}}}]},"meta":{"elapsed":132,"page":{"after":"eyJzdGFydEF0IjoiQVFBQUFYS2tMS3pPbm40NGV3QUFBQUJCV0V0clRFdDZVbG8zY3pCRmNsbHJiVmxDWlEifQ=="},"request_id":"MWlFUjVaWGZTTTZPYzM0VXp1OXU2d3xLSVpEMjZKQ0VKUTI0dEYtM3RSOFVR","status":"done","warnings":[{"code":"unknown_index","detail":"indexes: foo, bar","title":"One or several indexes are missing or invalid, results hold data from the other indexes"}]}}
// Aggregate compute events returns "OK" response
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	body := datadogV2.LogsAggregateRequest{
		Compute: []datadogV2.LogsCompute{
			{
				Aggregation: datadogV2.LOGSAGGREGATIONFUNCTION_COUNT,
				Interval:    datadog.PtrString("5m"),
				Type:        datadogV2.LOGSCOMPUTETYPE_TIMESERIES.Ptr(),
			},
		},
		Filter: &datadogV2.LogsQueryFilter{
			From:    datadog.PtrString("now-15m"),
			Indexes: []string{"main"},
			Query:   datadog.PtrString("*"),
			To:      datadog.PtrString("now"),
		},
	}
	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewLogsApi(apiClient)
	resp, r, err := api.AggregateLogs(ctx, body)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `LogsApi.AggregateLogs`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Fprintf(os.Stdout, "Response from `LogsApi.AggregateLogs`:\n%s\n", responseContent)
}
// Aggregate compute events with group by returns "OK" response
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	body := datadogV2.LogsAggregateRequest{
		Compute: []datadogV2.LogsCompute{
			{
				Aggregation: datadogV2.LOGSAGGREGATIONFUNCTION_COUNT,
				Interval:    datadog.PtrString("5m"),
				Type:        datadogV2.LOGSCOMPUTETYPE_TIMESERIES.Ptr(),
			},
		},
		Filter: &datadogV2.LogsQueryFilter{
			From:    datadog.PtrString("now-15m"),
			Indexes: []string{"main"},
			Query:   datadog.PtrString("*"),
			To:      datadog.PtrString("now"),
		},
		GroupBy: []datadogV2.LogsGroupBy{
			{
				Facet: "host",
				Missing: &datadogV2.LogsGroupByMissing{
					LogsGroupByMissingString: datadog.PtrString("miss"),
				},
				Sort: &datadogV2.LogsAggregateSort{
					Type:        datadogV2.LOGSAGGREGATESORTTYPE_MEASURE.Ptr(),
					Order:       datadogV2.LOGSSORTORDER_ASCENDING.Ptr(),
					Aggregation: datadogV2.LOGSAGGREGATIONFUNCTION_PERCENTILE_90.Ptr(),
					Metric:      datadog.PtrString("@duration"),
				},
				Total: &datadogV2.LogsGroupByTotal{
					LogsGroupByTotalString: datadog.PtrString("recall"),
				},
			},
		},
	}
	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewLogsApi(apiClient)
	resp, r, err := api.AggregateLogs(ctx, body)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `LogsApi.AggregateLogs`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Fprintf(os.Stdout, "Response from `LogsApi.AggregateLogs`:\n%s\n", responseContent)
}
// Aggregate events returns "OK" response
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	body := datadogV2.LogsAggregateRequest{
		Filter: &datadogV2.LogsQueryFilter{
			From:    datadog.PtrString("now-15m"),
			Indexes: []string{"main"},
			Query:   datadog.PtrString("*"),
			To:      datadog.PtrString("now"),
		},
	}
	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewLogsApi(apiClient)
	resp, r, err := api.AggregateLogs(ctx, body)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `LogsApi.AggregateLogs`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Fprintf(os.Stdout, "Response from `LogsApi.AggregateLogs`:\n%s\n", responseContent)
}
// Aggregate compute events with group by returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.LogsApi;
import com.datadog.api.client.v2.model.LogsAggregateRequest;
import com.datadog.api.client.v2.model.LogsAggregateResponse;
import com.datadog.api.client.v2.model.LogsAggregateSort;
import com.datadog.api.client.v2.model.LogsAggregateSortType;
import com.datadog.api.client.v2.model.LogsAggregationFunction;
import com.datadog.api.client.v2.model.LogsCompute;
import com.datadog.api.client.v2.model.LogsComputeType;
import com.datadog.api.client.v2.model.LogsGroupBy;
import com.datadog.api.client.v2.model.LogsGroupByMissing;
import com.datadog.api.client.v2.model.LogsGroupByTotal;
import com.datadog.api.client.v2.model.LogsQueryFilter;
import com.datadog.api.client.v2.model.LogsSortOrder;
import java.util.Collections;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    LogsApi apiInstance = new LogsApi(defaultClient);

    LogsAggregateRequest body =
        new LogsAggregateRequest()
            .compute(
                Collections.singletonList(
                    new LogsCompute()
                        .aggregation(LogsAggregationFunction.COUNT)
                        .interval("5m")
                        .type(LogsComputeType.TIMESERIES)))
            .filter(
                new LogsQueryFilter()
                    .from("now-15m")
                    .indexes(Collections.singletonList("main"))
                    .query("*")
                    .to("now"))
            .groupBy(
                Collections.singletonList(
                    new LogsGroupBy()
                        .facet("host")
                        .missing(new LogsGroupByMissing("miss"))
                        .sort(
                            new LogsAggregateSort()
                                .type(LogsAggregateSortType.MEASURE)
                                .order(LogsSortOrder.ASCENDING)
                                .aggregation(LogsAggregationFunction.PERCENTILE_90)
                                .metric("@duration"))
                        .total(new LogsGroupByTotal("recall"))));

    try {
      LogsAggregateResponse result = apiInstance.aggregateLogs(body);
      System.out.println(result);
    } catch (ApiException e) {
      System.err.println("Exception when calling LogsApi#aggregateLogs");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}
"""
Aggregate compute events with group by returns "OK" response
"""

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.logs_api import LogsApi
from datadog_api_client.v2.model.logs_aggregate_request import LogsAggregateRequest
from datadog_api_client.v2.model.logs_aggregate_sort import LogsAggregateSort
from datadog_api_client.v2.model.logs_aggregate_sort_type import LogsAggregateSortType
from datadog_api_client.v2.model.logs_aggregation_function import LogsAggregationFunction
from datadog_api_client.v2.model.logs_compute import LogsCompute
from datadog_api_client.v2.model.logs_compute_type import LogsComputeType
from datadog_api_client.v2.model.logs_group_by import LogsGroupBy
from datadog_api_client.v2.model.logs_query_filter import LogsQueryFilter
from datadog_api_client.v2.model.logs_sort_order import LogsSortOrder

body = LogsAggregateRequest(
    compute=[
        LogsCompute(
            aggregation=LogsAggregationFunction.COUNT,
            interval="5m",
            type=LogsComputeType.TIMESERIES,
        ),
    ],
    filter=LogsQueryFilter(
        _from="now-15m",
        indexes=["main"],
        query="*",
        to="now",
    ),
    group_by=[
        LogsGroupBy(
            facet="host",
            missing="miss",
            sort=LogsAggregateSort(
                type=LogsAggregateSortType.MEASURE,
                order=LogsSortOrder.ASCENDING,
                aggregation=LogsAggregationFunction.PERCENTILE_90,
                metric="@duration",
            ),
            total="recall",
        ),
    ],
)

configuration = Configuration()
with ApiClient(configuration) as api_client:
    api_instance = LogsApi(api_client)
    response = api_instance.aggregate_logs(body=body)
    print(response)
# Aggregate compute events with group by returns "OK" response

require "datadog_api_client"

api_instance = DatadogAPIClient::V2::LogsAPI.new
body = DatadogAPIClient::V2::LogsAggregateRequest.new({
  compute: [
    DatadogAPIClient::V2::LogsCompute.new({
      aggregation: DatadogAPIClient::V2::LogsAggregationFunction::COUNT,
      interval: "5m",
      type: DatadogAPIClient::V2::LogsComputeType::TIMESERIES,
    }),
  ],
  filter: DatadogAPIClient::V2::LogsQueryFilter.new({
    from: "now-15m",
    indexes: ["main"],
    query: "*",
    to: "now",
  }),
  group_by: [
    DatadogAPIClient::V2::LogsGroupBy.new({
      facet: "host",
      missing: "miss",
      sort: DatadogAPIClient::V2::LogsAggregateSort.new({
        type: DatadogAPIClient::V2::LogsAggregateSortType::MEASURE,
        order: DatadogAPIClient::V2::LogsSortOrder::ASCENDING,
        aggregation: DatadogAPIClient::V2::LogsAggregationFunction::PERCENTILE_90,
        metric: "@duration",
      }),
      total: "recall",
    }),
  ],
})
p api_instance.aggregate_logs(body)
// Aggregate compute events with group by returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_logs::LogsAPI;
use datadog_api_client::datadogV2::model::LogsAggregateRequest;
use datadog_api_client::datadogV2::model::LogsAggregateSort;
use datadog_api_client::datadogV2::model::LogsAggregateSortType;
use datadog_api_client::datadogV2::model::LogsAggregationFunction;
use datadog_api_client::datadogV2::model::LogsCompute;
use datadog_api_client::datadogV2::model::LogsComputeType;
use datadog_api_client::datadogV2::model::LogsGroupBy;
use datadog_api_client::datadogV2::model::LogsGroupByMissing;
use datadog_api_client::datadogV2::model::LogsGroupByTotal;
use datadog_api_client::datadogV2::model::LogsQueryFilter;
use datadog_api_client::datadogV2::model::LogsSortOrder;

#[tokio::main]
async fn main() {
    let body = LogsAggregateRequest::new()
        .compute(vec![LogsCompute::new(LogsAggregationFunction::COUNT)
            .interval("5m".to_string())
            .type_(LogsComputeType::TIMESERIES)])
        .filter(
            LogsQueryFilter::new()
                .from("now-15m".to_string())
                .indexes(vec!["main".to_string()])
                .query("*".to_string())
                .to("now".to_string()),
        )
        .group_by(vec![LogsGroupBy::new("host".to_string())
            .missing(LogsGroupByMissing::LogsGroupByMissingString(
                "miss".to_string(),
            ))
            .sort(
                LogsAggregateSort::new()
                    .aggregation(LogsAggregationFunction::PERCENTILE_90)
                    .metric("@duration".to_string())
                    .order(LogsSortOrder::ASCENDING)
                    .type_(LogsAggregateSortType::MEASURE),
            )
            .total(LogsGroupByTotal::LogsGroupByTotalString(
                "recall".to_string(),
            ))]);
    let configuration = datadog::Configuration::new();
    let api = LogsAPI::with_config(configuration);
    let resp = api.aggregate_logs(body).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}
# Set DD_SITE to your Datadog site: datadoghq.com, us3.datadoghq.com,
# us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, or ddog-gov.com
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
/**
* Aggregate compute events returns "OK" response
 */
import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v2.LogsApi(configuration);

const params: v2.LogsApiAggregateLogsRequest = {
  body: {
    compute: [
      {
        aggregation: "count",
        interval: "5m",
        type: "timeseries",
      },
    ],
    filter: {
      from: "now-15m",
      indexes: ["main"],
      query: "*",
      to: "now",
    },
  },
};

apiInstance
  .aggregateLogs(params)
  .then((data: v2.LogsAggregateResponse) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
/**
* Aggregate compute events with group by returns "OK" response
 */
import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v2.LogsApi(configuration);

const params: v2.LogsApiAggregateLogsRequest = {
  body: {
    compute: [
      {
        aggregation: "count",
        interval: "5m",
        type: "timeseries",
      },
    ],
    filter: {
      from: "now-15m",
      indexes: ["main"],
      query: "*",
      to: "now",
    },
    groupBy: [
      {
        facet: "host",
        missing: "miss",
        sort: {
          type: "measure",
          order: "asc",
          aggregation: "pc90",
          metric: "@duration",
        },
        total: "recall",
      },
    ],
  },
};

apiInstance
  .aggregateLogs(params)
  .then((data: v2.LogsAggregateResponse) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
/**
* Aggregate events returns "OK" response
 */
import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v2.LogsApi(configuration);

const params: v2.LogsApiAggregateLogsRequest = {
  body: {
    filter: {
      from: "now-15m",
      indexes: ["main"],
      query: "*",
      to: "now",
    },
  },
};

apiInstance
  .aggregateLogs(params)
  .then((data: v2.LogsAggregateResponse) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
The list endpoint returns the logs that match a log search query.
Results are paginated.
If you are considering archiving logs for your organization,
use Datadog's log archiving features rather than the logs list API.
See the Datadog log archives documentation.
This endpoint requires the logs_read_data permission.
The log index on which the request is performed. For multi-index organizations,
the default is all live indexes. Historical indexes of rehydrated logs must be specified.
limit
int32
Number of logs returned in the response.
query
string
The search query - following the log search syntax.
sort
enum
Time-ascending asc or time-descending desc results.
Allowed enum values: asc,desc
startAt
string
Hash identifier of the first log to return in the list, available in a log id attribute.
This parameter is used for the pagination feature.
Note: This parameter is ignored if the corresponding log
is out of the scope of the specified time window.
time [required]
object
Timeframe to retrieve the log from.
from [required]
date-time
Minimum timestamp for requested logs.
timezone
string
The timezone can be specified either as an offset (for example, "UTC+03:00")
or as a regional zone (for example, "Europe/Paris").
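Putting the request fields above together, a list-request body might look like the following (field names are taken from this section; the values are illustrative):

```python
import json

# Illustrative body for the logs list endpoint; only fields documented
# above are used, and the query/time values are placeholders.
body = {
    "query": "service:agent status:error",
    "sort": "desc",
    "limit": 50,
    "time": {
        "from": "2020-05-26T13:00:00Z",
        "timezone": "Europe/Paris",
    },
}
print(json.dumps(body, indent=2))
```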
Response object with all logs matching the request and pagination information.
Field
Type
Description
logs
[object]
Array of logs matching the request and the nextLogId if sent.
content
object
JSON object containing all log attributes and their associated values.
attributes
object
JSON object of attributes from your log.
host
string
Name of the machine from where the logs are being sent.
message
string
The message reserved attribute
of your log. By default, Datadog ingests the value of the message attribute as the body of the log entry.
That value is then highlighted and displayed in the Logstream, where it is indexed for full text search.
service
string
The name of the application or service generating the log events.
It is used to switch from Logs to APM, so make sure you define the same
value when you use both products.
tags
[string]
Array of tags associated with your log.
timestamp
date-time
Timestamp of your log.
id
string
ID of the log.
nextLogId
string
Hash identifier of the next log to return in the list.
This parameter is used for the pagination feature.
status
string
Status of the response.
{"logs":[{"content":{"attributes":{"customAttribute":123,"duration":2345},"host":"i-0123","message":"Host connected to remote","service":"agent","tags":["team:A"],"timestamp":"2020-05-26T13:36:14Z"},"id":"AAAAAWgN8Xwgr1vKDQAAAABBV2dOOFh3ZzZobm1mWXJFYTR0OA"}],"nextLogId":"string","status":"string"}