Sensitive Data Scanner Processor

Available for:

Logs

Overview

The Sensitive Data Scanner processor scans logs to detect and redact or hash sensitive information such as PII, PCI, and custom sensitive data. You can pick from Datadog’s library of predefined rules, or input custom Regex rules to scan for sensitive data.

You can set up the pipeline and processor using the UI, the API, or Terraform.

Set up the processor in the UI

To set up the processor:

  1. Define a filter query. Only logs that match the specified filter query are scanned and processed. All logs are sent to the next step in the pipeline, regardless of whether they match the filter query. See Search Syntax for more information.
  2. Click Add Scanning Rule.
  3. Select whether to add a scanning rule from the Datadog library or to add a custom scanning rule.

  To add a library scanning rule:

  1. In the dropdown menu, select the library rule you want to use.
  2. Recommended keywords are automatically added based on the library rule selected. After the scanning rule has been added, you can add additional keywords or remove recommended keywords.
  3. In the Define rule target and action section, select if you want to scan the Entire Event, Specific Attributes, or Exclude Attributes in the dropdown menu.
    • If you are scanning the entire event, you can optionally exclude specific attributes from being scanned. Use path notation (outer_key.inner_key) to access nested keys. For specified attributes with nested data, all nested data is excluded.
    • If you are scanning specific attributes, specify which attributes you want to scan. Use path notation (outer_key.inner_key) to access nested keys. For specified attributes with nested data, all nested data is scanned.
  4. For Define actions on match, select the action you want to take for the matched information. Note: Redaction, partial redaction, and hashing are all irreversible actions.
    • Redact: Replaces all matching values with the text you specify in the Replacement text field.
    • Partially Redact: Replaces a specified portion of all matched data. In the Redact section, specify the number of characters you want to redact and which part of the matched data to redact.
    • Hash: Replaces all matched data with a unique identifier. The UTF-8 bytes of the match are hashed with the 64-bit fingerprint of FarmHash.
  5. Optionally, click Add Field to add tags you want to associate with the matched events.
  6. Add a name for the scanning rule.
  7. Optionally, add a description for the rule.
  8. Click Save.
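As an illustration of the partial redaction action described above, the following Python sketch (illustrative only, not Datadog's implementation) replaces a specified number of characters at the start or end of each regex match:

```python
import re

def partial_redact(text, pattern, n_chars, direction="first", mask="*"):
    """Replace the first or last n_chars characters of each regex match
    with the mask character. Illustrative sketch only."""
    def _redact(m):
        s = m.group(0)
        n = min(n_chars, len(s))
        if direction == "first":
            return mask * n + s[n:]
        return s[:-n] + mask * n
    return re.sub(pattern, _redact, text)

# Redact all but the last 4 digits of a 16-digit card number:
log = "card=4111111111111111 status=ok"
print(partial_redact(log, r"\b\d{16}\b", 12, direction="first"))
# card=************1111 status=ok
```

This mirrors the UI options: the number of characters to redact and which part of the matched data (first or last) to redact.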

Add additional keywords

After adding scanning rules from the library, you can edit each rule separately and add additional keywords to the keyword dictionary.

  1. Navigate to your pipeline.
  2. In the Sensitive Data Scanner processor with the rule you want to edit, click Manage Scanning Rules.
  3. Toggle Use recommended keywords if you want the rule to use them. Otherwise, add your own keywords to the Create keyword dictionary field. You can also require that these keywords be within a specified number of characters of a match. By default, keywords must be within 30 characters before a matched value.
  4. Click Update.
  To add a custom scanning rule:

  1. In the Define match conditions section, specify the regex pattern to use for matching against events in the Define the regex field. See Writing Effective Grok Parsing Rules with Regular Expressions for more information. Sensitive Data Scanner supports Perl Compatible Regular Expressions (PCRE), but the following patterns are not supported:
    • Backreferences and capturing sub-expressions
    • Arbitrary zero-width assertions (lookarounds)
    • Subroutine references and recursive patterns
    • Conditional patterns
    • Backtracking control verbs
    • The \C “single-byte” directive (which breaks UTF-8 sequences)
    • The \R newline match
    • The \K start of match reset directive
    • Callouts and embedded code
    • Atomic grouping and possessive quantifiers
  2. Enter sample data in the Add sample data field to verify that your regex pattern is valid.
  3. For Create keyword dictionary, add keywords to refine detection accuracy when matching regex conditions. For example, if you are scanning for a sixteen-digit Visa credit card number, you can add keywords like visa, credit, and card. You can also require that these keywords be within a specified number of characters of a match. By default, keywords must be within 30 characters before a matched value.
  4. In the Define rule target and action section, select if you want to scan the Entire Event, Specific Attributes, or Exclude Attributes in the dropdown menu.
    • If you are scanning the entire event, you can optionally exclude specific attributes from being scanned. Use path notation (outer_key.inner_key) to access nested keys. For specified attributes with nested data, all nested data is excluded.
    • If you are scanning specific attributes, specify which attributes you want to scan. Use path notation (outer_key.inner_key) to access nested keys. For specified attributes with nested data, all nested data is scanned.
  5. For Define actions on match, select the action you want to take for the matched information. Note: Redaction, partial redaction, and hashing are all irreversible actions.
    • Redact: Replaces all matching values with the text you specify in the Replacement text field.
    • Partially Redact: Replaces a specified portion of all matched data. In the Redact section, specify the number of characters you want to redact and which part of the matched data to redact.
    • Hash: Replaces all matched data with a unique identifier. The UTF-8 bytes of the match are hashed with the 64-bit fingerprint of FarmHash.
  6. Optionally, click Add Field to add tags you want to associate with the matched events.
  7. Add a name for the scanning rule.
  8. Optionally, add a description for the rule.
  9. Click Add Rule.
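The keyword-dictionary behavior described in the steps above can be sketched as follows: a regex match only counts if one of the keywords appears within the proximity window (by default, 30 characters) before the matched value. This Python snippet is illustrative only, not Datadog's implementation:

```python
import re

def match_with_keywords(text, pattern, keywords, proximity=30):
    """Keep only regex matches preceded by one of the keywords within
    `proximity` characters (case-insensitive). Illustrative sketch only."""
    hits = []
    for m in re.finditer(pattern, text):
        window = text[max(0, m.start() - proximity):m.start()].lower()
        if any(k.lower() in window for k in keywords):
            hits.append(m.group(0))
    return hits

log = "visa card number: 4111111111111111, order id: 9999888877776666"
print(match_with_keywords(log, r"\b\d{16}\b", ["visa", "credit", "card"]))
# ['4111111111111111'] -- the order id has no nearby keyword
```

Keywords sharpen precision: without them, any sixteen-digit number would match the credit card pattern, including order IDs or timestamps.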

Path notation example

For the following message structure:

{
    "outer_key": {
        "inner_key": "inner_value",
        "a": {
            "double_inner_key": "double_inner_value",
            "b": "b value"
        },
        "c": "c value"
    },
    "d": "d value"
}
  • Use outer_key.inner_key to refer to the key with the value inner_value.
  • Use outer_key.a.double_inner_key to refer to the key with the value double_inner_value.
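Path notation resolves each dot-separated segment one level deeper into the nested structure. The following Python sketch (illustrative only) shows the lookup against the message structure above:

```python
def resolve_path(event, path):
    """Resolve dotted path notation (outer_key.inner_key) against
    nested event data. Illustrative sketch only."""
    node = event
    for key in path.split("."):
        node = node[key]
    return node

event = {
    "outer_key": {
        "inner_key": "inner_value",
        "a": {"double_inner_key": "double_inner_value", "b": "b value"},
        "c": "c value",
    },
    "d": "d value",
}

print(resolve_path(event, "outer_key.inner_key"))           # inner_value
print(resolve_path(event, "outer_key.a.double_inner_key"))  # double_inner_value
```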

Set up the processor using Terraform

You can use the Datadog Observability Pipeline Terraform resource to set up a pipeline with the Sensitive Data Scanner processor. To add a rule to the Sensitive Data Scanner processor using Terraform:

  1. Use the Datadog Sensitive Data Scanner Standard Pattern data source to retrieve the rule ID of the Sensitive Data Scanner library rule.

    data "datadog_sensitive_data_scanner_standard_pattern" "<RULE_IDENTIFIER>" {
      filter = "<RULE_NAME>"
    }
       

    Replace the placeholders:

    • <RULE_IDENTIFIER> with a name to use when you later set up the Sensitive Data Scanner processor in the Observability Pipeline resource.
    • <RULE_NAME> with the exact name of the rule. See Library Rules for the full list of rules.

    For example, if you want to use the AWS Access Key ID Scanner, configure the data source as follows:

    data "datadog_sensitive_data_scanner_standard_pattern" "aws_access_key" {
      filter = "AWS Access Key ID Scanner"
    }
       
    See the full configuration example to learn how to add data sources for multiple rules.

  2. Add a rule block in your Observability Pipeline resource for the library rule.

    ...
      sensitive_data_scanner {
        rule {
          name = "<YOUR_RULE_NAME>"
          tags = []
          on_match {
            redact {
              replace = "***"
            }
          }
          pattern {
            library {
              id                       = data.datadog_sensitive_data_scanner_standard_pattern.<RULE_IDENTIFIER>.id
              use_recommended_keywords = true
            }
          }
          scope {
            all = true
          }
        }
      }
       

    Replace the placeholders:

    • <YOUR_RULE_NAME> with a name for the rule. This name is shown in the Pipelines UI.
    • <RULE_IDENTIFIER> with the rule identifier you used in the data source in step 1.

    For example, if you use the AWS Access Key ID Scanner data source from step 1, configure the rule block as follows:

    ...
      sensitive_data_scanner {
        rule {
          name = "Redact AWS Access Key IDs"
          tags = []
          on_match {
            redact {
              replace = "***"
            }
          }
          pattern {
            library {
              id                       = data.datadog_sensitive_data_scanner_standard_pattern.aws_access_key.id
              use_recommended_keywords = true
            }
          }
          scope {
            all = true
          }
        }
      }
       

    See the full configuration example to learn how to add multiple rules.

  3. Repeat steps 1 and 2 for all library rules you want to add.

Full configuration example

(Image: the Sensitive Data Scanner processor panel showing two scanning rules, Redact AWS Access Key IDs and Redact US SSNs.)

If you want to use the Sensitive Data Scanner processor to scan for AWS Access Key IDs and US Social Security Numbers, and redact them by replacing them with the string ***:

  1. Use the Datadog Sensitive Data Scanner Standard Pattern data source to retrieve the rule IDs for the AWS Access Key ID Scanner and the US Social Security Number Scanner.
  2. In your Datadog Observability Pipeline resource’s Sensitive Data Scanner processor, use the Sensitive Data Scanner rules defined in the data sources.
data "datadog_sensitive_data_scanner_standard_pattern" "aws_access_key" {
  filter = "AWS Access Key ID Scanner"
}
data "datadog_sensitive_data_scanner_standard_pattern" "us_ssn" {
  filter = "US Social Security Number Scanner"
}

resource "datadog_observability_pipeline" "sensitive_data_pipeline" {
  name = "Sensitive Data Pipeline"

  config {
    source {
      id = "source-0"
      datadog_agent {}
    }

    processor_group {
      display_name = "Processors"
      enabled      = true
      id           = "group-0"
      include      = "*"
      inputs       = ["source-0"]

      processor {
        display_name = "Sensitive Data Scanner"
        enabled      = true
        id           = "processor-sds-0"
        include      = "*"

        sensitive_data_scanner {
          rule {
            name = "Redact AWS Access Key IDs"
            tags = []
            on_match {
              redact {
                replace = "***"
              }
            }
            pattern {
              library {
                id                       = data.datadog_sensitive_data_scanner_standard_pattern.aws_access_key.id
                use_recommended_keywords = true
              }
            }
            scope {
              all = true
            }
          }
          rule {
            name = "Redact US SSNs"
            tags = []
            on_match {
              redact {
                replace = "***"
              }
            }
            pattern {
              library {
                id                       = data.datadog_sensitive_data_scanner_standard_pattern.us_ssn.id
                use_recommended_keywords = true
              }
            }
            scope {
              all = true
            }
          }
        }
      }
    }

    destination {
      id     = "destination-0"
      inputs = ["group-0"]
      datadog_logs {}
    }
  }
}

Further reading