BlazeMeter

Overview

BlazeMeter is a cloud-based performance testing platform that enables scalable load testing for web applications, mobile apps, and APIs. It offers a range of testing capabilities, including performance testing and functional testing.

Integrate BlazeMeter with Datadog to gain insights into performance and functional test result metrics.

Minimum Agent version: 7.68.2

Setup

Generate API Key Id and API Key Secret in BlazeMeter

  1. Log in to your BlazeMeter account.
  2. Navigate to the Settings page by clicking the gear icon in the upper-right corner of the page.
  3. In the left sidebar, under the Personal section, click API Keys.
  4. Create a new API key by clicking the + icon.
  5. In the Generate API Key section, enter a name and select an expiration date.
  6. Click the Generate button to generate the API Key Id and API Key Secret. You can optionally verify the new key pair against the BlazeMeter API, as shown in the sketch below.
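
If you want to confirm the credentials before connecting the account, the following is a minimal sketch. It assumes BlazeMeter's REST API accepts the API Key Id and API Key Secret as HTTP Basic authentication credentials against `https://a.blazemeter.com/api/v4/user`; the endpoint and behavior may differ for your BlazeMeter environment.

```python
import requests

# Assumed BlazeMeter endpoint that returns the authenticated user's profile.
BLAZEMETER_USER_URL = "https://a.blazemeter.com/api/v4/user"


def verify_blazemeter_key(api_key_id: str, api_key_secret: str) -> bool:
    """Return True if the API Key Id/Secret pair authenticates successfully."""
    # Assumption: the API Key Id is used as the Basic auth username and the
    # API Key Secret as the password.
    response = requests.get(
        BLAZEMETER_USER_URL,
        auth=(api_key_id, api_key_secret),
        timeout=30,
    )
    return response.status_code == 200


if __name__ == "__main__":
    ok = verify_blazemeter_key("<API_KEY_ID>", "<API_KEY_SECRET>")
    print("Credentials valid" if ok else "Credentials rejected")
```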

Connect your BlazeMeter account to Datadog

  1. Add your API Key Id and API Key Secret:

     | Parameters | Description |
     | --- | --- |
     | API Key Id | The API Key Id of your BlazeMeter account. |
     | API Key Secret | The API Key Secret of your BlazeMeter account. |
  2. Click the Save button to save your settings.
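
After you save the integration and BlazeMeter data starts flowing, you can spot-check a forwarded metric with the Datadog metrics query API. The following is a minimal sketch, assuming the US1 Datadog site (`api.datadoghq.com`) and Datadog API/application keys exposed through the hypothetical `DD_API_KEY` and `DD_APP_KEY` environment variables; the metric name comes from the Data Collected section below.

```python
import os
import time

import requests

# Assumes the US1 Datadog site; change the domain for other Datadog sites.
DATADOG_QUERY_URL = "https://api.datadoghq.com/api/v1/query"


def query_blazemeter_metric(query: str, window_seconds: int = 3600) -> dict:
    """Query a BlazeMeter metric forwarded to Datadog over a recent time window."""
    now = int(time.time())
    response = requests.get(
        DATADOG_QUERY_URL,
        headers={
            "DD-API-KEY": os.environ["DD_API_KEY"],
            "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        },
        params={"from": now - window_seconds, "to": now, "query": query},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Average response time reported by BlazeMeter test runs, across all tags.
    result = query_blazemeter_metric("avg:blazemeter.performance.summary_avg{*}")
    print(result.get("series", []))
```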

Data Collected

Metrics

The BlazeMeter integration collects and forwards performance and functional test result metrics to Datadog.

| Metric | Type | Description |
| --- | --- | --- |
| blazemeter.functional.gridSummary_brokenTestCasesCount | gauge | The number of test cases that are in a broken state. |
| blazemeter.functional.gridSummary_failedTestCasesCount | gauge | The number of test cases that are in a failed state. |
| blazemeter.functional.gridSummary_passedPercent | gauge | The percentage of test cases that passed. Shown as percent. |
| blazemeter.functional.gridSummary_passedTestCasesCount | gauge | The number of test cases that are in a passed state. |
| blazemeter.functional.gridSummary_testCasesCount | gauge | The number of test cases in a test. |
| blazemeter.functional.gridSummary_undefinedTestCasesCount | gauge | The number of test cases that are in an undefined state. |
| blazemeter.functional.gridSummary_uniqueSuitesCount | gauge | The number of unique test suites in a test. |
| blazemeter.functional.gridSummary_uniqueTestCasesCount | gauge | The number of unique test cases in a test. |
| blazemeter.functional.individual_testCaseCount | gauge | The individual number of test cases in a test. |
| blazemeter.performance.assertions_failures | gauge | Number of failures generated with this assertion name and message. |
| blazemeter.performance.errors_count | gauge | Number of errors generated with this response code and message. |
| blazemeter.performance.errors_url_count | gauge | Number of times the URL generated an error. |
| blazemeter.performance.failedEmbeddedResources_count | gauge | Number of errors generated with this response code and message. |
| blazemeter.performance.request_90line | gauge | Maximum response time for 90% of samples. Shown as millisecond. |
| blazemeter.performance.request_95line | gauge | Maximum response time for 95% of samples. Shown as millisecond. |
| blazemeter.performance.request_99line | gauge | Maximum response time for 99% of samples. Shown as millisecond. |
| blazemeter.performance.request_avgBytes | gauge | Average size of requests. Shown as byte. |
| blazemeter.performance.request_avgLatency | gauge | Average latency of the label. Shown as millisecond. |
| blazemeter.performance.request_avgResponseTime | gauge | Average response time of the label. Shown as millisecond. |
| blazemeter.performance.request_avgThroughput | gauge | Average number of requests processed per second. Shown as hit. |
| blazemeter.performance.request_concurrency | gauge | Maximum concurrent users. |
| blazemeter.performance.request_duration | gauge | Total duration of the test. Shown as second. |
| blazemeter.performance.request_errorRate | gauge | Percentage of requests that resulted in errors. Shown as percent. |
| blazemeter.performance.request_errorsCount | gauge | Total number of errors. |
| blazemeter.performance.request_maxResponseTime | gauge | Maximum response time of the label. Shown as millisecond. |
| blazemeter.performance.request_medianResponseTime | gauge | Maximum response time for 50% of samples. Shown as millisecond. |
| blazemeter.performance.request_minResponseTime | gauge | Minimum response time of the label. Shown as millisecond. |
| blazemeter.performance.request_samples | gauge | Total number of samples or requests processed. |
| blazemeter.performance.request_stDev | gauge | Standard deviation of the response times. Shown as millisecond. |
| blazemeter.performance.summary_avg | gauge | Average response time of the test run. Shown as millisecond. |
| blazemeter.performance.summary_bytes | gauge | Average bandwidth of the test run. Shown as byte. |
| blazemeter.performance.summary_concurrency | gauge | Maximum concurrent users. |
| blazemeter.performance.summary_duration | gauge | Duration of the test run. Shown as second. |
| blazemeter.performance.summary_failed | gauge | Total number of errors in the test run. |
| blazemeter.performance.summary_hits | gauge | Total number of successful requests processed. |
| blazemeter.performance.summary_hits_avg | gauge | Average number of hits per second. Shown as hit. |
| blazemeter.performance.summary_latency | gauge | Average latency of the test run. Shown as millisecond. |
| blazemeter.performance.summary_max | gauge | Maximum response time. Shown as millisecond. |
| blazemeter.performance.summary_maxUsers | gauge | The maximum concurrency the test reached. |
| blazemeter.performance.summary_min | gauge | Minimum response time. Shown as millisecond. |
| blazemeter.performance.summary_stDev | gauge | Standard deviation of the response times. Shown as millisecond. |
| blazemeter.performance.summary_tp90 | gauge | Maximum response time for 90% of samples. Shown as millisecond. |
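
As a usage example, you can alert on these metrics like any other Datadog metric. The following is a minimal sketch that creates a metric monitor on `blazemeter.performance.request_errorRate` through Datadog's monitor API; the 5% threshold, monitor name, and `@your-team-handle` mention are illustrative assumptions, and the same Datadog site and environment variable assumptions as in the earlier sketch apply.

```python
import os

import requests

# Assumes the US1 Datadog site; change the domain for other Datadog sites.
DATADOG_MONITOR_URL = "https://api.datadoghq.com/api/v1/monitor"


def create_error_rate_monitor(threshold: float = 5.0) -> dict:
    """Create a metric monitor that fires when the BlazeMeter error rate exceeds the threshold."""
    payload = {
        "name": "BlazeMeter test error rate is high",
        "type": "metric alert",
        # Average error rate over the last 15 minutes, across all BlazeMeter tests.
        "query": (
            f"avg(last_15m):avg:blazemeter.performance.request_errorRate{{*}} > {threshold}"
        ),
        "message": "BlazeMeter reported an elevated error rate. @your-team-handle",
        "options": {"thresholds": {"critical": threshold}},
    }
    response = requests.post(
        DATADOG_MONITOR_URL,
        headers={
            "DD-API-KEY": os.environ["DD_API_KEY"],
            "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        },
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    monitor = create_error_rate_monitor()
    print(f"Created monitor {monitor['id']}")
```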

Service Checks

The BlazeMeter integration does not include any service checks.

Events

The BlazeMeter integration does not include any events.

Support

Need help? Contact Datadog support.