Browser tests are scenarios executed by Datadog on your web applications. They run at configurable periodic intervals from multiple locations around the world, and from multiple devices. These tests verify both that your applications are up and responding to requests, and that key business transactions can be performed by users as expected.
In this example, a browser test is configured to map a user’s journey from adding an item to cart to successful checkout. Each test execution is recorded in Datadog as a Test Result.
In the Datadog application, hover over UX Monitoring in the left-hand menu and select Synthetic Tests.
In the top right corner, click the New Test button.
Select Browser Test.
Define the configuration of your browser test:
Starting URL: Add the URL of the website you want to monitor. If you don’t know where to start, you can use https://www.shopist.io, a test web application.
Name: Name the test.
Tags: You can set tags such as app:shopist on your test. Tags help keep things organized and allow you to quickly find the tests you’re interested in on the homepage.
Browsers & Devices: Choose the devices and browsers you want to use for testing. In this example, the test is only run from Chrome and on Large Laptops.
Locations: Choose one of the Managed Locations to run your test from. In this example, the test is run in Americas and Europe.
Specify a test frequency: Select how often you would like the test to run.
Alert Conditions: Set alert conditions to determine the circumstances under which you want a test to send a notification alert.
An alert is triggered if your test fails for 0 minutes from any 3 of 13 locations
Retry 1 time before location is marked as failed
Note: By default, there is a 300ms wait before retrying a test that failed. This interval can be configured via the API.
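As a sketch of what configuring the retry behavior via the API looks like, the payload below sets the retry count and the wait between retries through the `options.retry` fields of a browser test. The test name, locations, device, and key values are placeholder assumptions; the payload would be sent to the Synthetics API with your own API and application keys.

```python
import json

# Hypothetical browser test payload; only options.retry is the point here.
test_payload = {
    "name": "Shopist checkout journey",          # placeholder name
    "type": "browser",
    "config": {
        "request": {"method": "GET", "url": "https://www.shopist.io"},
        "assertions": [],
    },
    "locations": ["aws:us-east-2", "aws:eu-west-1"],  # example managed locations
    "options": {
        "device_ids": ["chrome.laptop_large"],
        "tick_every": 900,       # run every 15 minutes
        "retry": {
            "count": 1,          # retry once before the location is marked as failed
            "interval": 1000,    # wait 1000 ms between retries (default is 300 ms)
        },
    },
}

# The payload could then be POSTed to the Synthetics browser test endpoint,
# for example with curl:
#   curl -X POST "https://api.datadoghq.com/api/v1/synthetics/tests/browser" \
#        -H "DD-API-KEY: <api key>" -H "DD-APPLICATION-KEY: <app key>" \
#        -H "Content-Type: application/json" -d @payload.json
print(json.dumps(test_payload["options"]["retry"]))
```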
Notify: Write an alert message and specify which email addresses should be notified when the alert is triggered. No additional setup is required to start receiving alert emails from Datadog. You can also use integrations, such as Slack, PagerDuty, or webhooks, to receive alert notifications.
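As a sketch, an alert message can combine conditional template variables with notification handles; the email address and Slack channel below are placeholders for your own:

```
{{#is_alert}}
The checkout journey failed from {{region.name}}. Check the latest test result for screenshots and errors.
{{/is_alert}}
@your.name@example.com @slack-synthetics-alerts
```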
Click Save & Edit Recording.
Once the Datadog browser test recorder extension is installed, begin recording your test steps by clicking the Start Recording button. Navigate your page in the iframe to the right of the recording options. When you select a div, an image, or any other area of your page, your actions are recorded and used to create steps within the browser test. Learn more about each step type in the browser test steps documentation.
For example, to record test steps that map a user’s journey from adding an item to cart to successful checkout:
Navigate to one of the furniture sections, for instance Chairs, and select Add to cart.
Click Cart, then click Checkout.
Manually add the assertion “Test that some text is present on the active page” to confirm that the words “Thank you” appear on the page.
Note: Your final browser test step must be an assertion. This ensures your test ended up on the expected page and found the expected element.
Save the test.
Note: The website used in this example intentionally throws an error at regular intervals, causing the test to fail. If you entered your email address in the notification message, you should receive an email when the test failure occurs.
The browser test details page includes details about your test configuration, test uptime, historical graphs for response time and time to interactive for the first page, sample successful and failed results, and a list of test results corresponding to the selected timeframe. Each individual test result includes screenshots, core web vitals, potential errors, resources, and traces for each step.
Wait for your test to generate several test results, or click Run Test Now to trigger them more quickly. Then look for a failed test result under Test Results or in your mailbox. Start your troubleshooting by looking at the screenshots to understand what went wrong. Don’t forget to look at the screenshots of the steps that happened before the failed step, as these often contain the root cause of the failure.
In this example, the failing request targets https://api.shopist.io/checkout.json and returns an error status, and the source of the problem is a controller linked to checkout. You have now successfully found the root of the problem.
The Traces tab is available once the Datadog APM integration with Synthetic Monitoring is configured. It allows you to go from a failed test run to the root cause of the issue by looking at the trace generated by that test run. To link browser test results with APM, add the URLs you want the APM integration headers added to the allowed list, using * for wildcards:
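For example, assuming the Shopist domains used above, an entry such as the following adds the APM headers to requests to every Shopist subdomain (the exact value depends on the domains your own application calls):

```
https://*.shopist.io
```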
Additional helpful documentation, links, and articles: