---
title: aws_rekognition_stream_processor
description: Datadog, the leading service for cloud-scale monitoring.
---

# aws_rekognition_stream_processor{% #aws_rekognition_stream_processor %}

## `account_id`{% #account_id %}

**Type**: `STRING`

## `creation_timestamp`{% #creation_timestamp %}

**Type**: `TIMESTAMP`
**Provider name**: `CreationTimestamp`
**Description**: Date and time the stream processor was created.

## `data_sharing_preference`{% #data_sharing_preference %}

**Type**: `STRUCT`
**Provider name**: `DataSharingPreference`
**Description**: Shows whether you are sharing data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis. Note that if you opt out at the account level this setting is ignored on individual streams.

- `opt_in`
  **Type**: `BOOLEAN`
  **Provider name**: `OptIn`
  **Description**: If this option is set to true, you choose to share data with Rekognition to improve model performance.

## `input`{% #input %}

**Type**: `STRUCT`
**Provider name**: `Input`
**Description**: Kinesis video stream that provides the source streaming video.

- `kinesis_video_stream`
  **Type**: `STRUCT`
  **Provider name**: `KinesisVideoStream`
  **Description**: The Kinesis video stream input stream for the source streaming video.
  - `arn`
    **Type**: `STRING`
    **Provider name**: `Arn`
    **Description**: ARN of the Kinesis video stream that streams the source video.

## `kms_key_id`{% #kms_key_id %}

**Type**: `STRING`
**Provider name**: `KmsKeyId`
**Description**: The identifier for your AWS Key Management Service key (AWS KMS key). This is an optional parameter for label detection stream processors.

## `last_update_timestamp`{% #last_update_timestamp %}

**Type**: `TIMESTAMP`
**Provider name**: `LastUpdateTimestamp`
**Description**: The time, in Unix format, the stream processor was last updated. For example, when the stream processor moves from a running state to a failed state, or when the user starts or stops the stream processor.

## `name`{% #name %}

**Type**: `STRING`
**Provider name**: `Name`
**Description**: Name of the stream processor.

## `notification_channel`{% #notification_channel %}

**Type**: `STRUCT`
**Provider name**: `NotificationChannel`

- `sns_topic_arn`
  **Type**: `STRING`
  **Provider name**: `SNSTopicArn`
  **Description**: The Amazon Resource Name (ARN) of the Amazon Simple Notification Service topic to which Amazon Rekognition posts the completion status.

## `output`{% #output %}

**Type**: `STRUCT`
**Provider name**: `Output`
**Description**: Kinesis data stream to which Amazon Rekognition Video puts the analysis results.

- `kinesis_data_stream`
  **Type**: `STRUCT`
  **Provider name**: `KinesisDataStream`
  **Description**: The Amazon Kinesis Data Streams stream to which the Amazon Rekognition stream processor streams the analysis results.
  - `arn`
    **Type**: `STRING`
    **Provider name**: `Arn`
    **Description**: ARN of the output Amazon Kinesis Data Streams stream.
- `s3_destination`
  **Type**: `STRUCT`
  **Provider name**: `S3Destination`
  **Description**: The Amazon S3 bucket location to which Amazon Rekognition publishes the detailed inference results of a video analysis operation.
  - `bucket`
    **Type**: `STRING`
    **Provider name**: `Bucket`
    **Description**: The name of the Amazon S3 bucket you want to associate with the streaming video project. You must be the owner of the Amazon S3 bucket.
  - `key_prefix`
    **Type**: `STRING`
    **Provider name**: `KeyPrefix`
    **Description**: The prefix value of the location within the bucket that you want the information to be published to. For more information, see [Using prefixes](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html).
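Results for a stream land under `bucket` plus `key_prefix`. A minimal sketch of assembling the resulting S3 location from those two fields (the helper name and example values are illustrative, not part of any AWS or Datadog API):

```python
def s3_output_uri(bucket: str, key_prefix: str = "") -> str:
    """Build the S3 URI where results are published, from the
    `s3_destination` fields above. Hypothetical convenience helper."""
    prefix = key_prefix.strip("/")
    return f"s3://{bucket}/{prefix}/" if prefix else f"s3://{bucket}/"

print(s3_output_uri("my-rekognition-results", "videos/analysis"))
# s3://my-rekognition-results/videos/analysis/
```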

## `regions_of_interest`{% #regions_of_interest %}

**Type**: `UNORDERED_LIST_STRUCT`
**Provider name**: `RegionsOfInterest`
**Description**: Specifies locations in the frames where Amazon Rekognition checks for objects or people. This is an optional parameter for label detection stream processors.

- `bounding_box`
  **Type**: `STRUCT`
  **Provider name**: `BoundingBox`
  **Description**: The box representing a region of interest on screen.
  - `height`
    **Type**: `FLOAT`
    **Provider name**: `Height`
    **Description**: Height of the bounding box as a ratio of the overall image height.
  - `left`
    **Type**: `FLOAT`
    **Provider name**: `Left`
    **Description**: Left coordinate of the bounding box as a ratio of overall image width.
  - `top`
    **Type**: `FLOAT`
    **Provider name**: `Top`
    **Description**: Top coordinate of the bounding box as a ratio of overall image height.
  - `width`
    **Type**: `FLOAT`
    **Provider name**: `Width`
    **Description**: Width of the bounding box as a ratio of the overall image width.
- `polygon`
  **Type**: `UNORDERED_LIST_STRUCT`
  **Provider name**: `Polygon`
  **Description**: Specifies a shape made up of up to 10 `Point` objects to define a region of interest.
  - `x`
    **Type**: `FLOAT`
    **Provider name**: `X`
    **Description**: The value of the X coordinate for a point on a `Polygon`.
  - `y`
    **Type**: `FLOAT`
    **Provider name**: `Y`
    **Description**: The value of the Y coordinate for a point on a `Polygon`.
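The `bounding_box` fields above are expressed as ratios of the frame dimensions, not pixels. A minimal sketch of converting them to pixel coordinates (the helper name and dict shape are illustrative, not part of any AWS or Datadog API):

```python
def bounding_box_to_pixels(box: dict, frame_width: int, frame_height: int) -> dict:
    """Convert the ratio-based `left`/`top`/`width`/`height` fields of a
    `bounding_box` into pixel coordinates for a given frame size."""
    return {
        "left": round(box["left"] * frame_width),
        "top": round(box["top"] * frame_height),
        "width": round(box["width"] * frame_width),
        "height": round(box["height"] * frame_height),
    }

# A region covering the right half of a 1920x1080 frame:
print(bounding_box_to_pixels(
    {"left": 0.5, "top": 0.0, "width": 0.5, "height": 1.0}, 1920, 1080))
# {'left': 960, 'top': 0, 'width': 960, 'height': 1080}
```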

## `role_arn`{% #role_arn %}

**Type**: `STRING`
**Provider name**: `RoleArn`
**Description**: ARN of the IAM role that allows access to the stream processor.

## `settings`{% #settings %}

**Type**: `STRUCT`
**Provider name**: `Settings`
**Description**: Input parameters used in a streaming video analyzed by a stream processor. You can use `FaceSearch` to recognize faces in a streaming video, or you can use `ConnectedHome` to detect labels.

- `connected_home`
  **Type**: `STRUCT`
  **Provider name**: `ConnectedHome`
  - `labels`
    **Type**: `UNORDERED_LIST_STRING`
    **Provider name**: `Labels`
    **Description**: Specifies what you want to detect in the video, such as people, packages, or pets. The current valid labels you can include in this list are: "PERSON", "PET", "PACKAGE", and "ALL".
  - `min_confidence`
    **Type**: `FLOAT`
    **Provider name**: `MinConfidence`
    **Description**: The minimum confidence required to label an object in the video.
- `face_search`
  **Type**: `STRUCT`
  **Provider name**: `FaceSearch`
  **Description**: Face search settings to use on a streaming video.
  - `collection_id`
    **Type**: `STRING`
    **Provider name**: `CollectionId`
    **Description**: The ID of a collection that contains faces that you want to search for.
  - `face_match_threshold`
    **Type**: `FLOAT`
    **Provider name**: `FaceMatchThreshold`
    **Description**: Minimum face match confidence score that must be met to return a result for a recognized face. The default is 80. 0 is the lowest confidence. 100 is the highest confidence. Values between 0 and 100 are accepted, and values lower than 80 are set to 80.

## `status`{% #status %}

**Type**: `STRING`
**Provider name**: `Status`
**Description**: Current status of the stream processor.

## `status_message`{% #status_message %}

**Type**: `STRING`
**Provider name**: `StatusMessage`
**Description**: Detailed status message about the stream processor.

## `stream_processor_arn`{% #stream_processor_arn %}

**Type**: `STRING`
**Provider name**: `StreamProcessorArn`
**Description**: ARN of the stream processor.

## `tags`{% #tags %}

**Type**: `UNORDERED_LIST_STRING`
