---
title: Setting Up Database Monitoring for Google Cloud SQL managed SQL Server
description: >-
  Install and configure Database Monitoring for SQL Server managed on Google
  Cloud SQL
breadcrumbs: >-
  Docs > Database Monitoring > Setting up SQL Server > Setting Up Database
  Monitoring for Google Cloud SQL managed SQL Server
---

# Setting Up Database Monitoring for Google Cloud SQL managed SQL Server

Database Monitoring provides deep visibility into your Microsoft SQL Server databases by exposing query metrics, query samples, explain plans, database states, failovers, and events.

Complete the following steps to enable Database Monitoring with your database:

1. Grant the Agent access to the database
1. Install and configure the Agent
1. Install the Cloud SQL integration

## Before you begin{% #before-you-begin %}

{% dl %}

{% dt %}
Supported SQL Server versions
{% /dt %}

{% dd %}
2014, 2016, 2017, 2019, 2022
{% /dd %}

{% dt %}
Supported Agent versions
{% /dt %}

{% dd %}
7.41.0+
{% /dd %}

{% dt %}
Performance impact
{% /dt %}

{% dd %}
The default Agent configuration for Database Monitoring is conservative, but you can adjust settings such as the collection interval and query sampling rate to better suit your needs. For most workloads, the Agent represents less than one percent of query execution time on the database and less than one percent of CPU usage. Database Monitoring runs as an integration on top of the base Agent ([see benchmarks](https://docs.datadoghq.com/database_monitoring/agent_integration_overhead/?tab=sqlserver)).
{% /dd %}

{% dt %}
Proxies, load balancers, and connection poolers
{% /dt %}

{% dd %}
The Datadog Agent must connect directly to the host being monitored. The Agent should not connect to the database through a proxy, load balancer, or connection pooler. If the Agent connects to different hosts while it is running (as in the case of failover, load balancing, and so on), the Agent calculates the difference in statistics between two hosts, producing inaccurate metrics.
{% /dd %}

{% dt %}
Data security considerations
{% /dt %}

{% dd %}
Read about how Database Monitoring handles [sensitive information](https://docs.datadoghq.com/database_monitoring/data_collected/#sensitive-information) to learn what data the Agent collects from your databases and how to keep it secure.
{% /dd %}

{% /dl %}

## Grant the Agent access{% #grant-the-agent-access %}

The Datadog Agent requires read-only access to the database server to collect statistics and queries.

Create a `datadog` user [on the Cloud SQL instance](https://cloud.google.com/sql/docs/sqlserver/create-manage-users#creating).
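For example, you can create the login with the gcloud CLI; the instance and password values below are placeholders to replace with your own:

```shell
# Create the datadog login on the Cloud SQL instance
gcloud sql users create datadog \
  --instance=<INSTANCE_ID> \
  --password=<PASSWORD>
```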

To maintain read-only access for the Agent, remove the `datadog` user from the default `CustomerDbRootRole` and instead grant only the explicit permissions the Agent requires.

```sql
GRANT VIEW SERVER STATE TO datadog AS CustomerDbRootRole;
GRANT VIEW ANY DEFINITION TO datadog AS CustomerDbRootRole;
ALTER SERVER ROLE CustomerDbRootRole DROP MEMBER datadog;
```

Create the `datadog` user in each additional application database:

```sql
USE [database_name];
CREATE USER datadog FOR LOGIN datadog;
```

This is required because Google Cloud SQL does not permit granting `CONNECT ANY DATABASE`. The Datadog Agent needs to connect to each database to collect database-specific file I/O statistics.
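If the instance hosts many application databases, the per-database step can be scripted. The following T-SQL is an illustrative sketch, not part of the official setup; adjust the exclusion list for your environment:

```sql
-- Sketch: create the datadog user in every online user database
DECLARE @db sysname, @sql nvarchar(max);
DECLARE dbs CURSOR FOR
    SELECT name FROM sys.databases
    WHERE state_desc = 'ONLINE'
      AND name NOT IN ('master', 'model', 'msdb', 'tempdb');
OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- USE in dynamic SQL only applies within that batch
    SET @sql = N'USE ' + QUOTENAME(@db) + N'; CREATE USER datadog FOR LOGIN datadog;';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;
```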

## Install and configure the Agent{% #install-and-configure-the-agent %}

Google Cloud does not grant direct host access, so the Datadog Agent must be installed on a separate host that can connect to the SQL Server instance. There are several options for installing and running the Agent.

{% tab title="Windows Host" %}
To start collecting SQL Server telemetry, first [install the Datadog Agent](https://app.datadoghq.com/account/settings/agent/latest?platform=windows).

Create the SQL Server Agent conf file `C:\ProgramData\Datadog\conf.d\sqlserver.d\conf.yaml`. See the [sample conf file](https://github.com/DataDog/integrations-core/blob/master/sqlserver/datadog_checks/sqlserver/data/conf.yaml.example) for all available configuration options.

```yaml
init_config:
instances:
  - dbm: true
    host: '<HOSTNAME>,<PORT>'
    username: datadog
    password: '<PASSWORD>'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    tags:  # Optional
      - 'service:<CUSTOM_SERVICE>'
      - 'env:<CUSTOM_ENV>'
    # After adding your project and instance, configure the Datadog Google Cloud (GCP) integration to pull additional cloud data such as CPU, Memory, etc.
    gcp:
      project_id: '<PROJECT_ID>'
      instance_id: '<INSTANCE_ID>'
```

See the [SQL Server integration spec](https://github.com/DataDog/integrations-core/blob/master/sqlserver/assets/configuration/spec.yaml#L324-L351) for additional information on setting `project_id` and `instance_id` fields.

To use [Windows Authentication](https://docs.microsoft.com/en-us/sql/relational-databases/security/choose-an-authentication-mode), set `connection_string: "Trusted_Connection=yes"` and omit the `username` and `password` fields.
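For example, a Windows Authentication instance might look like the following sketch:

```yaml
instances:
  - dbm: true
    host: '<HOSTNAME>,<PORT>'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    connection_string: 'Trusted_Connection=yes'
    # No username or password fields: the Agent authenticates as the
    # Windows account that runs the Agent service.
```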

Use the `service` and `env` tags to link your database telemetry to other telemetry through a common tagging scheme. See [Unified Service Tagging](https://docs.datadoghq.com/getting_started/tagging/unified_service_tagging) on how these tags are used throughout Datadog.

### Securely store your password{% #securely-store-your-password %}

Store your password using secret management software such as [Vault](https://www.vaultproject.io/). You can then reference this password as `ENC[<SECRET_NAME>]` in your Agent configuration files: for example, `ENC[datadog_user_database_password]`. See [Secrets Management](https://docs.datadoghq.com/agent/configuration/secrets-management/) for more information.

The examples on this page use `datadog_user_database_password` to refer to the name of the secret where your password is stored. It is possible to reference your password in plain text, but this is not recommended.
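As a sketch, the wiring involves two files; the backend command path below is a placeholder for your own executable:

```yaml
# datadog.yaml: point the Agent at your secret backend
secret_backend_command: /path/to/secret-fetcher  # hypothetical path

# sqlserver.d/conf.yaml: reference the secret by name
instances:
  - dbm: true
    host: '<HOSTNAME>,<PORT>'
    username: datadog
    password: 'ENC[datadog_user_database_password]'
```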

### Supported Drivers{% #supported-drivers %}

#### Microsoft ADO{% #microsoft-ado %}

The recommended [ADO](https://docs.microsoft.com/en-us/sql/ado/microsoft-activex-data-objects-ado) provider is [Microsoft OLE DB Driver](https://docs.microsoft.com/en-us/sql/connect/oledb/oledb-driver-for-sql-server). Ensure the driver is installed on the host where the Agent is running.

```yaml
connector: adodbapi
adoprovider: MSOLEDBSQL19  # Replace with MSOLEDBSQL for versions 18 and lower
```

The other two providers, `SQLOLEDB` and `SQLNCLI`, are considered deprecated by Microsoft and should no longer be used.

#### ODBC{% #odbc %}

The recommended ODBC driver is [Microsoft ODBC Driver](https://docs.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server). Starting with Agent v7.51, ODBC Driver 18 for SQL Server is included in the Agent for Linux. For Windows, ensure the driver is installed on the host where the Agent is running.

```yaml
connector: odbc
driver: 'ODBC Driver 18 for SQL Server'
```

Once all Agent configuration is complete, [restart the Datadog Agent](https://docs.datadoghq.com/agent/configuration/agent-commands/#start-stop-and-restart-the-agent).

### Validate{% #validate %}

[Run the Agent's status subcommand](https://docs.datadoghq.com/agent/configuration/agent-commands/#agent-status-and-information) and look for `sqlserver` under the **Checks** section. Navigate to the [Databases](https://app.datadoghq.com/databases) page in Datadog to get started.
{% /tab %}

{% tab title="Linux Host" %}
To start collecting SQL Server telemetry, first [install the Datadog Agent](https://app.datadoghq.com/account/settings/agent/latest).

On Linux, the Datadog Agent additionally requires an ODBC SQL Server driver, such as the [Microsoft ODBC driver](https://docs.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server). Once the driver is installed, copy the `odbc.ini` and `odbcinst.ini` files into the `/opt/datadog-agent/embedded/etc` folder.
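For example, if the driver installation placed its configuration files under `/etc`:

```shell
# Copy the ODBC configuration files where the embedded Agent runtime can find them
sudo cp /etc/odbc.ini /etc/odbcinst.ini /opt/datadog-agent/embedded/etc/
```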

Use the `odbc` connector and specify the proper driver as indicated in the `odbcinst.ini` file.

Create the SQL Server Agent conf file `/etc/datadog-agent/conf.d/sqlserver.d/conf.yaml`. See the [sample conf file](https://github.com/DataDog/integrations-core/blob/master/sqlserver/datadog_checks/sqlserver/data/conf.yaml.example) for all available configuration options.

```yaml
init_config:
instances:
  - dbm: true
    host: '<HOSTNAME>,<PORT>'
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    connector: odbc
    driver: '<Driver from the `odbcinst.ini` file>'
    tags:  # Optional
      - 'service:<CUSTOM_SERVICE>'
      - 'env:<CUSTOM_ENV>'
    # After adding your project and instance, configure the Datadog Google Cloud (GCP) integration to pull additional cloud data such as CPU, Memory, etc.
    gcp:
      project_id: '<PROJECT_ID>'
      instance_id: '<INSTANCE_ID>'
```

See the [SQL Server integration spec](https://github.com/DataDog/integrations-core/blob/master/sqlserver/assets/configuration/spec.yaml#L324-L351) for additional information on setting `project_id` and `instance_id` fields.

Use the `service` and `env` tags to link your database telemetry to other telemetry through a common tagging scheme. See [Unified Service Tagging](https://docs.datadoghq.com/getting_started/tagging/unified_service_tagging) on how these tags are used throughout Datadog.

Once all Agent configuration is complete, [restart the Datadog Agent](https://docs.datadoghq.com/agent/configuration/agent-commands/#start-stop-and-restart-the-agent).

### Validate{% #validate %}

[Run the Agent's status subcommand](https://docs.datadoghq.com/agent/configuration/agent-commands/#agent-status-and-information) and look for `sqlserver` under the **Checks** section. Navigate to the [Databases](https://app.datadoghq.com/databases) page in Datadog to get started.
{% /tab %}

{% tab title="Docker" %}
To configure the Database Monitoring Agent running in a Docker container, set the [Autodiscovery Integration Templates](https://docs.datadoghq.com/agent/faq/template_variables/) as Docker labels on your Agent container.

**Note**: The Agent must have read permission on the Docker socket for Autodiscovery of labels to work.

Replace the values to match your account and environment. See the [sample conf file](https://github.com/DataDog/integrations-core/blob/master/sqlserver/datadog_checks/sqlserver/data/conf.yaml.example) for all available configuration options.

```bash
export DD_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export DD_AGENT_VERSION=<AGENT_VERSION>

docker run -e "DD_API_KEY=${DD_API_KEY}" \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -l com.datadoghq.ad.check_names='["sqlserver"]' \
  -l com.datadoghq.ad.init_configs='[{}]' \
  -l com.datadoghq.ad.instances='[{
    "dbm": true,
    "host": "<HOSTNAME>",
    "port": <SQL_PORT>,
    "connector": "odbc",
    "driver": "ODBC Driver 18 for SQL Server",
    "username": "datadog",
    "password": "<PASSWORD>",
    "tags": [
      "service:<CUSTOM_SERVICE>"
      "env:<CUSTOM_ENV>"
    ],
    "gcp": {
      "project_id": "<PROJECT_ID>",
      "instance_id": "<INSTANCE_ID>"
    }
  }]' \
  registry.datadoghq.com/agent:${DD_AGENT_VERSION}
```

See the [SQL Server integration spec](https://github.com/DataDog/integrations-core/blob/master/sqlserver/assets/configuration/spec.yaml#L324-L351) for additional information on setting `project_id` and `instance_id` fields.

Use the `service` and `env` tags to link your database telemetry to other telemetry through a common tagging scheme. See [Unified Service Tagging](https://docs.datadoghq.com/getting_started/tagging/unified_service_tagging) on how these tags are used throughout Datadog.

### Validate{% #validate %}

[Run the Agent's status subcommand](https://docs.datadoghq.com/agent/configuration/agent-commands/#agent-status-and-information) and look for `sqlserver` under the **Checks** section. Alternatively, navigate to the [Databases](https://app.datadoghq.com/databases) page in Datadog to get started.
{% /tab %}

{% tab title="Kubernetes" %}
If you're using a Kubernetes cluster, use the [Datadog Cluster Agent](https://docs.datadoghq.com/agent/cluster_agent) for Database Monitoring. If cluster checks aren't already enabled, [follow these instructions](https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/) to enable them before proceeding.

### Operator{% #operator %}

Follow the steps below to set up the SQL Server integration, using the [Operator instructions in Kubernetes and Integrations](https://docs.datadoghq.com/containers/kubernetes/integrations/?tab=datadogoperator) as a reference.

1. Create or update the `datadog-agent.yaml` file with the following configuration:

   ```yaml
   apiVersion: datadoghq.com/v2alpha1
   kind: DatadogAgent
   metadata:
     name: datadog
   spec:
     global:
       clusterName: <CLUSTER_NAME>
       site: <DD_SITE>
       credentials:
         apiSecret:
           secretName: datadog-agent-secret
           keyName: api-key
   
     features:
       clusterChecks:
         enabled: true
   
     override:
       nodeAgent:
         image:
           name: agent
           tag: <AGENT_VERSION>
   
       clusterAgent:
         extraConfd:
           configDataMap:
             sqlserver.yaml: |-
               cluster_check: true # Required for cluster checks
               init_config:
               instances:
               - host: <HOSTNAME>,<PORT>
                 username: datadog
                 password: 'ENC[datadog_user_database_password]'
                 connector: 'odbc'
                 driver: 'ODBC Driver 18 for SQL Server'
                 dbm: true
                 # Optional: For additional tags
                 tags:
                   - 'service:<CUSTOM_SERVICE>'
                   - 'env:<CUSTOM_ENV>'
                 # After adding your project and instance, configure the Datadog Google Cloud (GCP) integration to pull additional cloud data such as CPU, Memory, etc.
                 gcp:
                   project_id: '<PROJECT_ID>'
                   instance_id: '<INSTANCE_ID>'
   ```

1. Apply the changes to the Datadog Operator using the following command:

   ```shell
   kubectl apply -f datadog-agent.yaml
   ```

### Helm{% #helm %}

Complete the following steps to install the [Datadog Cluster Agent](https://docs.datadoghq.com/agent/cluster_agent) on your Kubernetes cluster. Replace the values to match your account and environment.

1. Complete the [Datadog Agent installation instructions](https://docs.datadoghq.com/containers/kubernetes/installation/?tab=helm#installation) for Helm.

1. Update your YAML configuration file (`datadog-values.yaml` in the Cluster Agent installation instructions) to include the following:

   ```yaml
   clusterAgent:
     confd:
       sqlserver.yaml: |-
         cluster_check: true
         init_config:
         instances:
         - dbm: true
           host: <HOSTNAME>,<PORT>
           username: datadog
           password: 'ENC[datadog_user_database_password]'
           connector: 'odbc'
           driver: 'ODBC Driver 18 for SQL Server'
           # Optional: For additional tags
           tags:
             - 'service:<CUSTOM_SERVICE>'
             - 'env:<CUSTOM_ENV>'
           # After adding your project and instance, configure the Datadog Google Cloud (GCP) integration to pull additional cloud data such as CPU, Memory, etc.
           gcp:
             project_id: '<PROJECT_ID>'
             instance_id: '<INSTANCE_ID>'
   
   clusterChecksRunner:
     enabled: true
   ```

1. Deploy the Agent with the above configuration file from the command line:

   ```shell
   helm install datadog-agent -f datadog-values.yaml datadog/datadog
   ```

{% alert level="info" %}
For Windows, append `--set targetSystem=windows` to the `helm install` command.
{% /alert %}

### Configure with mounted files{% #configure-with-mounted-files %}

To configure a cluster check with a mounted configuration file, mount the configuration file in the Cluster Agent container at `/conf.d/sqlserver.yaml`:

```yaml
cluster_check: true  # Make sure to include this flag
init_config:
instances:
  - dbm: true
    host: <HOSTNAME>,<PORT>
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    connector: 'odbc'
    driver: 'ODBC Driver 18 for SQL Server'
    # Optional: For additional tags
    tags:
      - 'service:<CUSTOM_SERVICE>'
      - 'env:<CUSTOM_ENV>'
    # After adding your project and instance, configure the Datadog Google Cloud (GCP) integration to pull additional cloud data such as CPU, Memory, etc.
    gcp:
      project_id: '<PROJECT_ID>'
      instance_id: '<INSTANCE_ID>'
```

### Configure with Kubernetes service annotations{% #configure-with-kubernetes-service-annotations %}

Rather than mounting a file, you can declare the instance configuration as a Kubernetes Service. To configure this check for an Agent running on Kubernetes, create a service using the following syntax:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sqlserver-datadog-check-instances
  annotations:
    ad.datadoghq.com/service.check_names: '["sqlserver"]'
    ad.datadoghq.com/service.init_configs: '[{}]'
    ad.datadoghq.com/service.instances: |
      [
        {
          "dbm": true,
          "host": "<HOSTNAME>,<PORT>",
          "username": "datadog",
          "password": "ENC[datadog_user_database_password]",
          "connector": "odbc",
          "driver": "ODBC Driver 18 for SQL Server",
          "tags": ["service:<CUSTOM_SERVICE>", "env:<CUSTOM_ENV>"],
          "gcp": {
            "project_id": "<PROJECT_ID>",
            "instance_id": "<INSTANCE_ID>"
          }
        }
      ]
spec:
  ports:
  - port: 1433
    protocol: TCP
    targetPort: 1433
    name: sqlserver
```

See the [SQL Server integration spec](https://github.com/DataDog/integrations-core/blob/master/sqlserver/assets/configuration/spec.yaml#L324-L351) for additional information on setting `project_id` and `instance_id` fields.

The Cluster Agent automatically registers this configuration and begins running the SQL Server check.

To avoid exposing the `datadog` user's password in plain text, use the Agent's [secret management package](https://docs.datadoghq.com/agent/configuration/secrets-management) and declare the password using the `ENC[]` syntax.
{% /tab %}

## Example Agent Configurations{% #example-agent-configurations %}

### Connecting with DSN using the ODBC driver on Linux{% #connecting-with-dsn-using-the-odbc-driver-on-linux %}

1. Locate the `odbc.ini` and `odbcinst.ini` files. By default, these are placed in the `/etc` directory when installing ODBC.

1. Copy the `odbc.ini` and `odbcinst.ini` files into the `/opt/datadog-agent/embedded/etc` folder.

1. Configure your DSN settings as follows:

   `odbcinst.ini` must provide at least one section header and the ODBC driver location. For example:

   ```text
   [ODBC Driver 18 for SQL Server]
   Description=Microsoft ODBC Driver 18 for SQL Server
   Driver=/opt/microsoft/msodbcsql18/lib64/libmsodbcsql-18.3.so.2.1
   UsageCount=1
   ```

   `odbc.ini` must provide a section header and a `Driver` path that matches `odbcinst.ini`. For example:

   ```text
   [datadog]
   Driver=/opt/microsoft/msodbcsql18/lib64/libmsodbcsql-18.3.so.2.1
   ```

1. Update the `/etc/datadog-agent/conf.d/sqlserver.d/conf.yaml` file with your DSN information.

   Example:

   ```yaml
   instances:
     - dbm: true
       host: 'localhost,1433'
       username: datadog
       password: 'ENC[datadog_user_database_password]'
       connector: 'odbc'
       driver: 'ODBC Driver 18 for SQL Server' # This is the section header of odbcinst.ini
       dsn: 'datadog' # This is the section header of odbc.ini
   ```

1. Restart the Agent.

### Using AlwaysOn{% #using-alwayson %}

For AlwaysOn users, the Agent should be installed on each replica server and connected directly to each replica. The full set of AlwaysOn telemetry is collected from each individual replica, in addition to host-based telemetry (CPU, disk, memory, and so on) for each server.

```yaml
instances:
  - dbm: true
    host: 'shopist-prod,1433'
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    database_metrics:
      # If Availability Groups is enabled
      ao_metrics:
        enabled: true
      # If Failover Clustering is enabled
      fci_metrics:
        enabled: true
```

### Monitoring SQL Server Agent Jobs{% #monitoring-sql-server-agent-jobs %}

{% alert level="info" %}
To enable monitoring of SQL Server Agent jobs, the Datadog Agent must have access to the `msdb` database.
{% /alert %}

{% alert level="danger" %}
SQL Server Agent Jobs monitoring is not available for Azure SQL Database.
{% /alert %}

Monitoring of SQL Server Agent jobs is supported on SQL Server versions 2016 and newer. Starting from Agent v7.57, the Datadog Agent can collect SQL Server Agent job metrics and histories. To enable this feature, set `enabled` to `true` in the `agent_jobs` section of the SQL Server integration configuration file. The `collection_interval` and `history_row_limit` fields are optional.

```yaml
instances:
  - dbm: true
    host: 'shopist-prod,1433'
    username: datadog
    password: '<PASSWORD>'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    agent_jobs:
      enabled: true
      collection_interval: 15
      history_row_limit: 10000
```

### Collecting schemas{% #collecting-schemas %}

{% alert level="danger" %}
Datadog Agent v7.56+ and SQL Server 2017 or higher are required for SQL Server schema collection.
{% /alert %}

To enable this feature, use the `collect_schemas` option. Schemas are collected on databases for which the Agent has `CONNECT` access.

{% alert level="info" %}
To collect schema information, you must grant the `datadog` user explicit `CONNECT` access to each database on the instance. For more information, see [Grant the Agent access](#grant-the-agent-access).
{% /alert %}

Use the `database_autodiscovery` option to avoid specifying each logical database. See the sample [sqlserver.d/conf.yaml](https://github.com/DataDog/integrations-core/blob/master/sqlserver/datadog_checks/sqlserver/data/conf.yaml.example) for more details.

```yaml
init_config:
instances:
  # This instance detects every logical database automatically
  - dbm: true
    host: 'shopist-prod,1433'
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    database_autodiscovery: true
    collect_schemas:
      enabled: true
    database_metrics:
      # Optional: enable metric collection for indexes
      index_usage_metrics:
        enabled: true
  # This instance only collects schemas and index metrics from the `users` database
  - dbm: true
    host: 'shopist-prod,1433'
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    database: users
    collect_schemas:
      enabled: true
    database_metrics:
      # Optional: enable metric collection for indexes
      index_usage_metrics:
        enabled: true
```

**Note**: For Agent v7.68 and below, use `schemas_collection` instead of `collect_schemas`.

### One Agent connecting to multiple hosts{% #one-agent-connecting-to-multiple-hosts %}

It is common to configure a single Agent host to connect to multiple remote database instances (see [Agent installation architectures](https://docs.datadoghq.com/database_monitoring/architecture/) for DBM). To connect to multiple hosts, create an entry for each host in the SQL Server integration config.

{% alert level="info" %}
Datadog recommends using one Agent to monitor no more than 30 database instances. Benchmarks show that one Agent running on a t4g.medium EC2 instance (2 CPUs and 4GB of RAM) can successfully monitor 30 RDS db.t3.medium instances (2 CPUs and 4GB of RAM).
{% /alert %}

```yaml
init_config:
instances:
  - dbm: true
    host: 'example-service-primary.example-host.com,1433'
    username: datadog
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    password: 'ENC[datadog_user_database_password]'
    tags:
      - 'env:prod'
      - 'team:team-discovery'
      - 'service:example-service'
  - dbm: true
    host: 'example-service-replica-1.example-host.com,1433'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    tags:
      - 'env:prod'
      - 'team:team-discovery'
      - 'service:example-service'
  - dbm: true
    host: 'example-service-replica-2.example-host.com,1433'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    tags:
      - 'env:prod'
      - 'team:team-discovery'
      - 'service:example-service'
    [...]
```

### Running custom queries{% #running-custom-queries %}

To collect custom metrics, use the `custom_queries` option. See the sample [sqlserver.d/conf.yaml](https://github.com/DataDog/integrations-core/blob/master/sqlserver/datadog_checks/sqlserver/data/conf.yaml.example) for more details.

```yaml
init_config:
instances:
  - dbm: true
    host: 'localhost,1433'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    custom_queries:
      - query: SELECT age, salary, hours_worked, name FROM hr.employees;
        columns:
          - name: custom.employee_age
            type: gauge
          - name: custom.employee_salary
            type: gauge
          - name: custom.employee_hours
            type: count
          - name: name
            type: tag
        tags:
          - 'table:employees'
```

### Working with hosts through a remote proxy{% #working-with-hosts-through-a-remote-proxy %}

If the Agent must connect to a database host through a remote proxy, all telemetry is tagged with the hostname of the proxy rather than the database instance. Use the `reported_hostname` option to set a custom override of the hostname detected by the Agent.

```yaml
init_config:
instances:
  - dbm: true
    host: 'localhost,1433'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    reported_hostname: products-primary
  - dbm: true
    host: 'localhost,1433'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    reported_hostname: products-replica-1
```

### Discovering ports automatically{% #discovering-ports-automatically %}

Services such as the SQL Server Browser Service can resolve the port of a named instance automatically, so you don't have to hardcode port numbers in connection strings. To use the Agent with one of these services, set the `port` field to `0`.

For example, a Named Instance config:

```yaml
init_config:
instances:
  - host: <hostname\instance name>
    port: 0
```

## Install the Google Cloud SQL integration{% #install-the-google-cloud-sql-integration %}

To collect more comprehensive database metrics from Google Cloud SQL, install the [Google Cloud SQL integration](https://docs.datadoghq.com/integrations/google_cloudsql).

## Further reading{% #further-reading %}

- [SQL Server Integration](https://docs.datadoghq.com/integrations/sqlserver/)
- [Configure Deadlock Monitoring](https://docs.datadoghq.com/database_monitoring/guide/sql_deadlock/)
- [Configure Query Completion and Query Error Collection](https://docs.datadoghq.com/database_monitoring/guide/sql_extended_events/)
- [Capturing SQL Query Parameter Values](https://docs.datadoghq.com/database_monitoring/guide/parameterized_queries/)
