Data Forwarder Setup


This document outlines the steps for configuring a Carbon Black Cloud Data Forwarder with either an AWS S3 bucket or Azure blob storage.

The following table shows which data types can be forwarded to each storage option.

Data Forwarder Type   AWS S3 Bucket   Azure Blob Storage
Alert                 Yes             Yes
Endpoint Event        Yes             No
Watchlist Hit         Yes             Yes


Requirements

  • Carbon Black Cloud Console
  • An account with Amazon Simple Storage Service (Amazon S3) or Azure Blob Storage


1. Configure the Destination

The destination needs to be configured before setting up a Forwarder because the ability to write to the destination is verified when the Forwarder is saved.

The Data Forwarder requires either an AWS Simple Storage Service (S3) bucket (Option 1) or Azure Blob Storage (Option 2) to receive data. Any amount of data can be stored for analysis or can be connected to other products.

Option 1: Use AWS S3

Alerts, Endpoint Events and Watchlist Hits can be sent to an AWS S3 Bucket. Use the AWS Management Console to create the bucket and configure permissions.

For more detailed instructions and policy examples to support different use cases, see [AWS S3: Writing an S3 Bucket Policy](reference/carbon-black-cloud/integrations/data-forwarder/bucket-policy/).

Optional: Set Up KMS Encryption

If you require more security for your data at rest, we recommend using AWS’s built-in key management service, AWS KMS. It secures your data while allowing Carbon Black to write files without being able to read them later. To enable KMS encryption, you need a Customer Managed KMS key.

Note: The Role Policy will also need modification to enable consumers to decrypt objects in the bucket using the KMS key.

  1. Navigate to the AWS Key Management Service.
  2. From the left side panel, choose Customer managed keys.
  3. Create a key.
  4. Leave the default selections for Symmetric keys, KMS key material origin, Single-region key.
  5. Hit Next and fill in any Alias, Description or Tags you like, and any Key administrators, Key deletion or Key usage permissions you need to allocate.
  6. Insert the following statement into the key policy's Statement section, substituting the principal ID for your region. See the Carbon Black Cloud User Guide or the Principal ID table later on this page for each region's value. (The Action list shown here, GenerateDataKey and Decrypt, is the minimal set typically required for SSE-KMS writes; confirm it against the User Guide.)

       {
         "Sid": "KMS policy to allow CBC Data Forwarder",
         "Effect": "Allow",
         "Principal": {
           "AWS": "<principal-id-for-your-region>"
         },
         "Action": [
           "kms:GenerateDataKey",
           "kms:Decrypt"
         ],
         "Resource": "*"
       }
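If you script your key setup, the statement above can be generated per region. The following sketch is illustrative only: the two Data Forwarder role ARNs are taken from the Principal ID table later on this page, and the Action list (GenerateDataKey plus Decrypt) is an assumption about the minimal permissions needed for SSE-KMS writes, not a confirmed specification.

```python
import json

# Forwarder role ARNs per region, from the Principal ID table later
# in this document (only two regions shown for brevity).
FORWARDER_PRINCIPALS = {
    "us-east-1": "arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-us-east-1-event-forwarder",
    "eu-central-1": "arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-eu-central-1-event-forwarder",
}

def kms_policy_statement(region: str) -> dict:
    """Build the key-policy statement that lets the forwarder use this key.
    The Action list is an assumption (SSE-KMS writes need GenerateDataKey;
    multipart uploads also need Decrypt) -- verify against the User Guide."""
    return {
        "Sid": "KMS policy to allow CBC Data Forwarder",
        "Effect": "Allow",
        "Principal": {"AWS": FORWARDER_PRINCIPALS[region]},
        "Action": ["kms:GenerateDataKey", "kms:Decrypt"],
        "Resource": "*",
    }

# Print the statement ready to paste into the key policy editor.
print(json.dumps(kms_policy_statement("us-east-1"), indent=2))
```

Paste the emitted statement into the key policy's Statement array alongside the default statements AWS generates.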

Create a Bucket

Use the AWS Management Console to create a bucket.

  1. Sign in to the AWS Management Console.

  2. From the top right corner of the page, use the dropdown to select the region for your bucket.

    You must select the same region as your Carbon Black Cloud instance. It is possible to work around this requirement using S3 Cross-Region Replication (CRR).

    Use the table below to determine the correct region:

    Carbon Black Cloud Product URL   AWS Region Name         AWS Region
    Prod 02, Prod 05:                US East (N. Virginia)   us-east-1
    Prod 06:                         Europe (Frankfurt)      eu-central-1
    Prod NRT:                        Asia Pacific (Tokyo)    ap-northeast-1
    Prod Syd:                        Asia Pacific (Sydney)   ap-southeast-2
    Prod UK:                         Europe (London)         eu-west-2
    AWS GovCloud (US):               US Gov West 1           us-gov-west-1
  3. Navigate to Services > S3 console.

  4. Select Create bucket to open the Create Bucket wizard.

  5. In Bucket name, enter a unique name for your bucket. The name may not contain uppercase letters or underscores. For additional guidance, see Amazon’s bucket naming restrictions.

  6. Region should default to the region you selected in step 2. Ensure that the correct region is still selected.

  7. Keep Block all public access enabled. Note: Public access is not required for the S3 bucket to work with the Data Forwarder.

  8. Select Create Bucket.

Configure the Bucket

The AWS S3 Bucket needs to be configured to allow the Forwarder to write data. Learn more about writing bucket policies for different use cases and configuring the bucket with varying levels of access here.

Continuing from the previous section:

  1. Once the bucket is created and the page is loaded with a success message, select the Go to bucket details button from the message, or click the name of the bucket you created from the list displayed.

  2. The bucket policy gives the Forwarder permission to write to your bucket. Before configuring it, create a new folder to serve as the base folder into which all data will be pushed. You can name it something like carbon-black-cloud-forwarder. You will use this as your prefix-folder-name in a later step.

  3. From the Permissions tab, select Bucket Policy and configure it by copying the example below into the Bucket Policy Editor and adjusting the example placeholder text:

    1. Update the Id value. This can be anything, such as Policy04212020 (where 04212020 represents the date, in this case, April 21, 2020).

    2. Update the Sid value. This can be anything, such as Stmt04212020.

    3. Use this table to check that you are using the correct Principal value.

      AWS Region                            Principal ID
      US East (N. Virginia) us-east-1       arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-us-east-1-event-forwarder
      Europe (Frankfurt) eu-central-1       arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-eu-central-1-event-forwarder
      Asia Pacific (Tokyo) ap-northeast-1   arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-ap-northeast-1-event-forwarder
      Asia Pacific (Sydney) ap-southeast-2  arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-ap-southeast-2-event-forwarder
      Europe (London) eu-west-2             arn:aws:iam::132308400445:role/mcs2-psc-data-forwarder-s3
      US Gov West 1 us-gov-west-1           arn:aws-us-gov:iam::507058390320:role/mcs2-psc-data-forwarder-s3
    4. Update the Resource value.

      1. Replace carbon-black-customer-bucket-name with the bucket name you chose when creating the bucket.
      2. Replace prefix-folder-name with the base folder name you created earlier in this section.
      3. Example: "Resource": "arn:aws:s3:::your-bucket-name/prefix-folder-name/*"
      4. The Resource value must end with /* to allow the Forwarder to access all sub-folders.
      5. Once you have replaced all three values, select Save; your bucket policy is configured.

      Example Bucket Policy

               {
                   "Version": "2012-10-17",
                   "Id": "Policy9999999981234",
                   "Statement": [
                       {
                           "Sid": "Stmt1111111119376",
                           "Effect": "Allow",
                           "Principal": {
                               "AWS": "arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-us-east-1-event-forwarder"
                           },
                           "Action": [
                               "s3:PutObject"
                           ],
                           "Resource": "arn:aws:s3:::carbon-black-customer-bucket-name/prefix-folder-name/*"
                       }
                   ]
               }

  4. If adding KMS Encryption, select the Properties tab, scroll to Default Encryption and click Edit

    1. Enable Server-side encryption.
    2. For Encryption key type, choose AWS Key Management Service key (SSE-KMS).
    3. For AWS KMS key, either Choose from your AWS KMS keys, or Enter AWS KMS key ARN.
      1. If you choose Enter AWS KMS key ARN, copy in the ARN of the KMS key you created earlier.
      2. If you choose Choose from your AWS KMS keys, select the Forwarder key you created in the KMS setup steps above.
    4. Enable the Bucket Key.
    5. Hit save to finalize KMS encryption for your bucket.

    Note: Enabling the Bucket Key is NOT mandatory. AWS recommends a Bucket Key for cost reasons, and KMS support for the Data Forwarder was validated with that recommendation in place.

    If you choose not to enable the Bucket Key, there are no known negative impacts on the Data Forwarder.
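The substitutions described in the Configure the Bucket steps can also be scripted. This sketch renders the bucket policy for a given bucket, prefix, and region, using the Principal IDs from the table above; the s3:PutObject action is an assumption about the minimal permission the Forwarder needs, so confirm it against the Writing an S3 Bucket Policy guide before applying.

```python
import json

# Forwarder service-role principals per region, from the Principal ID table above.
PRINCIPAL_BY_REGION = {
    "us-east-1": "arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-us-east-1-event-forwarder",
    "eu-central-1": "arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-eu-central-1-event-forwarder",
    "ap-northeast-1": "arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-ap-northeast-1-event-forwarder",
    "ap-southeast-2": "arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-ap-southeast-2-event-forwarder",
    "eu-west-2": "arn:aws:iam::132308400445:role/mcs2-psc-data-forwarder-s3",
    "us-gov-west-1": "arn:aws-us-gov:iam::507058390320:role/mcs2-psc-data-forwarder-s3",
}

def bucket_policy(bucket: str, prefix: str, region: str,
                  policy_id: str, sid: str) -> str:
    """Render the bucket policy from the Configure the Bucket steps.
    The Resource must end in /* so the forwarder can write to all
    sub-folders under the prefix."""
    policy = {
        "Version": "2012-10-17",
        "Id": policy_id,
        "Statement": [{
            "Sid": sid,
            "Effect": "Allow",
            "Principal": {"AWS": PRINCIPAL_BY_REGION[region]},
            # Assumed minimal action for writing objects; verify against
            # the Writing an S3 Bucket Policy guide.
            "Action": ["s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
        }],
    }
    return json.dumps(policy, indent=2)

print(bucket_policy("carbon-black-customer-bucket-name",
                    "carbon-black-cloud-forwarder",
                    "us-east-1", "Policy04212020", "Stmt04212020"))
```

Paste the printed document into the Bucket Policy Editor on the Permissions tab.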

Optional: Create an SQS Queue

This is only needed for integrations, such as the Splunk and QRadar apps, that require a queue input for data from an AWS S3 Bucket.

  1. Create an SQS queue in your AWS Management Console.
  2. Configure the Access policy. Replace the tokens with your own values.

         {
             "Version": "2008-10-17",
             "Id": "__default_policy_ID",
             "Statement": [
                 {
                     "Sid": "__sender_statement",
                     "Effect": "Allow",
                     "Principal": {
                       "Service": "s3.amazonaws.com"
                     },
                     "Action": "SQS:SendMessage",
                     "Resource": "arn:aws:sqs:<aws-region>:<AWS Account Number>:<name-of-queue>",
                     "Condition": {
                       "ForAllValues:ArnEquals": {
                         "aws:SourceArn": "arn:aws:s3:::<name-of-s3-bucket>"
                       }
                     }
                 }
             ]
         }
  3. Configure the Event Notification in the AWS S3 bucket to use this queue. Navigate to Properties > Event Notifications and set the Destination SQS queue to the arn of the new queue.

    Note: If you are using SQS to pull from the bucket and need to reload older events, be aware that events are no longer available in the queue once they are retrieved. To view historical events or reload data, copy the objects to another prefix covered by the event notification so that new messages are generated for the queue.
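The SQS access policy above can be filled in programmatically as well. This is a sketch that substitutes the four tokens; the region, account number, queue name, and bucket name used in the demo call are placeholder values, and the s3.amazonaws.com service principal is the standard principal for S3 event notifications.

```python
import json

def sqs_access_policy(aws_region: str, account_id: str,
                      queue_name: str, bucket: str) -> dict:
    """Build the queue access policy that lets S3 event notifications
    from the named bucket send messages to the queue."""
    return {
        "Version": "2008-10-17",
        "Id": "__default_policy_ID",
        "Statement": [{
            "Sid": "__sender_statement",
            "Effect": "Allow",
            # Standard service principal for S3 event notifications.
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "SQS:SendMessage",
            "Resource": f"arn:aws:sqs:{aws_region}:{account_id}:{queue_name}",
            "Condition": {
                "ForAllValues:ArnEquals": {
                    "aws:SourceArn": f"arn:aws:s3:::{bucket}"
                }
            },
        }],
    }

# Placeholder values -- replace with your own region, account, queue, and bucket.
policy = sqs_access_policy("us-east-1", "111122223333",
                           "cbc-forwarder-queue",
                           "carbon-black-customer-bucket-name")
print(json.dumps(policy, indent=2))
```

Attach the printed policy to the queue's Access policy, then point the bucket's Event Notification at the queue as described in step 3.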

Option 2: Use Azure Blob Storage

Alerts and Watchlist Hits can be sent to Azure Blob Storage.

Endpoint Events can only be sent to an AWS S3 Bucket.

Create an Azure Storage Account

To create an Azure storage account to use with Data Forwarder, perform the following procedure.

The following procedure includes the settings with which the Carbon Black Cloud Data Forwarder is supported. Recommendations are noted.


  1. In the Azure console, select Storage Accounts.
  2. Click + Create.
  3. In the Project details section, select the Subscription and Resource Group under which the storage account will be filed.
  4. In the Instance details section:
    1. Provide a unique name for the account.
    2. Select the Region that corresponds to the Carbon Black Cloud URL to which your organization belongs. See Azure Forwarding Identity Credentials.
    3. For Performance, select Standard (recommended).
    4. Select either LRS or GRS Redundancy, based on your redundancy requirements.
  5. On the Advanced tab:
    1. In the Security section, use the default values.
    2. Select Enable hierarchical namespace.
    3. Use the default values for Access Protocols, Blob Storage, and Azure Files.
  6. On the Networking tab:
    1. For Network Access, select Enable public access from all networks.
    2. For Network Routing, retain the default value of Microsoft network routing.
  7. On the Data Protection tab, set Recovery, Tracking, and Access Control to your preference.
  8. On the Encryption tab, set Encryption to your preference.

Create an Azure Blob Container

  1. In the Azure console, go to your Azure storage account.
  2. Under Data Storage, click Containers.
  3. To create a new container, click the + Container button. Provide a unique name and leave all other fields as default values.

Configure the Azure Blob Container

To authorize Carbon Black Cloud to access your designated storage container, perform the following procedure.

  1. In the Azure console, click Managed Identities.
  2. Click + Create.
    1. In the Project details section, select the Subscription and Resource Group with which to associate the Managed Identity. We recommend that you use the same settings as those established for the Azure storage account for your Data Forwarder. (See Step 3 in Create an Azure Storage Account.)
    2. In the Instance details section, select the Region that corresponds to the Carbon Black Cloud URL to which your organization belongs. See Azure Forwarding Identity Credentials.
    3. Provide a unique name for the Managed Identity.
    4. Select Review + Create, click Create, and then click Go to resource.
    5. From the sidebar, select Federated Credentials.
    6. Click + Add Credential.
    7. Under Federated Credential Scenario, select Other.
    8. For Issuer URL, enter
    9. From the Azure Forwarding Identity Credentials table, select the combination of Subject identifier and Audience that correspond to your Carbon Black Cloud URL.
    10. Enter the Subject identifier.
      Note: Validate your entry to make sure it exactly matches the field data.
    11. Enter a unique name for the Federated Credential, such as Carbon-Black-Cloud-Data-Forwarder.
    12. Under Audience (optional), click Edit and overwrite the Audience value with the value found in the Audience column that corresponds to the Carbon Black Cloud URL in the Azure Forwarding Identity Credentials table.
  3. Navigate to your designated Azure Storage Container.
    1. From the sidebar, select Access Control (IAM).
    2. Click + Add and select Add role assignment from the dropdown menu.
    3. Select the Storage Blob Data Contributor role.
    4. Under Assign access to, select Managed Identity.
    5. Click + Select Members.
    6. From the right panel:
      1. Select the Subscription under which your Managed Identity was registered.
      2. Under Managed Identity, select User-assigned managed identity.
      3. Select the Managed Identity you created for use with your Carbon Black Cloud Data Forwarder.
      4. Click Select.
    7. Click Review + Assign two times.

Azure Forwarding Identity Credentials

The following table describes supported Azure data fields to use when creating an Azure storage account for use with a Data Forwarder, and when configuring the Azure Blob Container for Carbon Black Cloud access.

Carbon Black Cloud Product URL (Region)   Subject Identifier                                    Audience
Prod 01, Prod 02, Prod 05:                us-east-1:70ac9e64-2d3f-4e2b-967b-12b2da2ee0ff        us-east-1:aacb3a9c-877c-4664-a164-5d584cce8f89
Prod 06:                                  eu-central-1:8b6f75c6-2f09-4f33-9466-f531a86428f2     eu-central-1:e668967f-3937-42ed-b6ce-dfa1ab47a687
Prod NRT:                                 ap-northeast-1:b78584c2-03ca-486a-98a3-2c1c1a2f7d0b   ap-northeast-1:ceb5010a-24bc-4db0-a3c0-d5d1d8f6789c
Prod Syd:                                 ap-southeast-2:0c4dcd16-9552-4c83-8e6f-4eee7a9709f8   ap-southeast-2:373a12fe-cd63-4840-a20a-fe02ab7e4a7a
Prod UK:                                  eu-west-2:9ea6e509-8da5-491e-b1a9-1fc6878ae46c        eu-west-2:4ae3def8-1eb0-4562-8e7e-ad5d76107068

2. Create a Forwarder

Option 1: Use the Carbon Black Cloud Console


To create a Data Forwarder in the console, go to Settings > Data Forwarders and select Add Forwarder from the upper-right corner.

Complete the configuration screen; some information will be from the AWS S3 or Azure Blob Storage configuration in earlier steps.

Further instructions are provided from the Add Forwarder form and the Carbon Black Cloud User Guide.

Option 2: Create a Forwarder via API

This option is recommended for use cases where the same Data Forwarder configuration is created regularly, for example MSSPs or Incident Response Teams who regularly onboard new organizations with consistent configuration.

The following steps guide you through creating a Forwarder via the Data Forwarder API and setting up an AWS S3 bucket to receive the data.

Configure an API Access Level

The data forwarder requires a Custom access level type. You can either follow the steps below to create a custom access level with the least privileges (recommended), or you can use the existing access levels Super Admin or System Admin when you create your API key. For more information about Access Levels, see the Carbon Black Cloud Authentication Guide.

  1. Log into Carbon Black Cloud Console.
  2. Go to Settings > API Access.
  3. Select Access Levels tab from the top-left.
  4. Select Add Access Level and fill in the following information:
    1. Give it a name (example: CB_Cloud_Forwarder); you will need this for a later step.
    2. Give it a description (example: Only used to forward Events to S3).
    3. Set Copy Permissions from to None.
  5. From the table, scroll down to the Data Forwarding category (dot-notation name: event-forwarder.settings) and check the permissions for Create, Read, Update, and Delete.
  6. Select Save and your access level is created.

Create an API key

For more information about API Keys, see the Carbon Black Cloud Authentication Guide.

Continuing from the previous section on Settings > API Access:

  1. Select API Keys tab from the top-left.
  2. Select Add API Key and provide the following information:
    1. Give it a name.
    2. For Access Level Type, choose custom and then choose the access level you created in the previous section.
    3. Include a description as desired.
  3. Select Save.
  4. Now you will see your API Credentials. Record these in a secure place; you will need these for a later step.

Execute the API calls

You can run API calls in code (cURL, HTTP) or use Postman, which is a platform that helps simplify the use of APIs, especially if you are running several API calls or using multiple Carbon Black APIs.

The following steps describe using Postman to execute the API call.

  1. Download Postman and follow the prompts to install it.

  2. Navigate to the Carbon Black Postman Collection and select Open in Postman from the upper-right corner, or fork the collection from the Carbon Black Postman Workspace.

  3. In the Postman app, select a workspace for the collection.

  4. Navigate to the Data Forwarder API folder from Carbon Black Cloud > Platform APIs.

  5. Fill out your configuration by selecting the eye icon next to the settings gear in the upper-right corner, and then selecting Edit.

    1. Verify that the environment URL matches the URL of your Carbon Black Cloud console.
    2. Add the API ID and API Secret Key from the API key you created earlier under the Current Value column.
    3. Add your Org Key, found in the Carbon Black Cloud console under Settings > API Access in the API Keys tab.
    4. The cb_forwarder_id can be added later.
  6. First, validate the configuration and existing forwarder by running the Get Configured Forwarders route.

    1. From the Collections panel on the left, select the Get Configured Forwarders route.
    2. Hit Send to run the call. The result will be a list of all forwarders in that organization, and could be null if you have not created any Forwarders.
  7. Now run the Create Forwarder route

    1. Select the Create Forwarder call from the Postman Collection.

    2. Click on the Body tab and replace all the bracketed text with actual values, making sure to remove the <>.

      1. Give the Forwarder a name.

      2. Replace the bucket name with the name of the bucket you created earlier.

      3. Replace the S3 prefix with the base folder name you created when configuring the bucket. Optionally, you can append a unique sub-folder name (Example: prefix-folder-name/events).

      4. Choose what type of data you wish to forward. Options include:

        • Alerts
        • Endpoint events
        • Watchlist hits

        Example Create Alert Forwarder Request Body for AWS S3 Storage

                     {
                         "enabled": true,
                         "name": "Alert Forwarder v1.0.0 - the original and deprecated",
                         "s3_bucket_name": "demo-bucket",
                         "s3_prefix": "demo-alert",
                         "type": "alert",
                         "version_constraint": "1.0.0",
                         "destination": "aws_s3",
                         "current_version": "1.0.0"
                     }

        Example Create Alert Forwarder Request Body for Azure Blob Storage Container

                     {
                         "org_key": "ABCD1234",
                         "name": "Demo Create Azure Alert",
                         "enabled": false,
                         "type": "alert",
                         "version_constraint": "2.0.0",
                         "destination": "azure_blob_storage",
                         "azure_storage_account": "azuredemo",
                         "azure_container_name": "azure-event-demo",
                         "azure_tenant_id": "a12345bc-1abcd-1a2b-a1b2-ab12c3de45f6",
                         "azure_client_id": "X98766yz-z987-z9x8-z9x8-zx98y7vw65u4"
                     }
    3. Hit Send.

      Example Success Message

                {
                    "id": "<Forwarder_ID>",
                    "enabled": true,
                    "update_time": "<YYYY-MM-ddTHH:mm:ssZ>",
                    "status": "Valid Bucket Configuration for Bucket: <BucketName> with Prefix: <prefix>",
                    "error": ""
                }

      Example Failure Message - Invalid Bucket Configuration Error

      If you receive this error, check that you used the correct bucket name.

                {
                    "id": "<Forwarder_ID>",
                    "enabled": true,
                    "update_time": "<YYYY-MM-ddTHH:mm:ssZ>",
                    "status": "Invalid Bucket Configuration for Bucket: <BucketName> with Prefix: <prefix>"
                }
      Access Denied Error

      If you receive this error, there is an issue with your bucket policy. This can occur if the Resource field has the incorrect prefix route. Review the base folder you created, the Resource value in your bucket policy, and the s3_prefix in your request body to ensure the prefix file path is correct, or see the Writing a Bucket Policy guide for troubleshooting this error.

                {
                    "id": "<Forwarder_ID>",
                    "enabled": true,
                    "update_time": "<YYYY-MM-ddTHH:mm:ssZ>",
                    "status": "Invalid Bucket Configuration for Bucket: <BucketName> with Prefix: <prefix>"
                }
  8. Run the Get Configured Forwarders call again to confirm the configuration is correct.

  9. If your Forwarder was configured successfully, a healthcheck.json file is sent to your bucket in a folder named healthcheck.

    Note: If you appended a sub-folder to your S3 prefix, the healthcheck folder may be inside that sub-folder (Ex: prefix-folder-name > events > healthcheck).
    1. The healthcheck runs automatically when a Forwarder is created.
    2. To check the Forwarder health manually, select the Forwarder Healthcheck call.
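The Postman steps above can also be scripted. The sketch below assembles the Create Forwarder call with Python's standard library; the /data_forwarder/v2/orgs/{org_key}/configs path and the "secret/id" X-Auth-Token header format are assumptions drawn from the Carbon Black Cloud authentication scheme, so verify both against the Postman collection, and note that the hostname, org key, and credentials are placeholders.

```python
import json
import urllib.request

CBC_URL = "https://defense.conferdeploy.net"  # placeholder; use your console's URL
ORG_KEY = "ABCD1234"                          # placeholder org key
API_ID = "YOUR_API_ID"                        # from the API key you created
API_SECRET = "YOUR_API_SECRET"

def create_forwarder_request(body: dict) -> urllib.request.Request:
    """Assemble the Create Forwarder POST. Path and header format are
    assumptions -- confirm against the Postman collection before use."""
    url = f"{CBC_URL}/data_forwarder/v2/orgs/{ORG_KEY}/configs"
    headers = {
        "X-Auth-Token": f"{API_SECRET}/{API_ID}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(url, data=json.dumps(body).encode(),
                                  headers=headers, method="POST")

req = create_forwarder_request({
    "name": "Demo Alert Forwarder",
    "enabled": True,
    "type": "alert",
    "destination": "aws_s3",
    "s3_bucket_name": "demo-bucket",
    "s3_prefix": "demo-alert",
})
# urllib.request.urlopen(req) would submit the call; it is left out here
# to avoid a live request with placeholder credentials.
```

A successful response mirrors the Example Success Message shown earlier, including the new Forwarder's id.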

3. Monitor the data flow

When using AWS S3 storage:

  1. Go to the Amazon S3 console.
  2. Go to the configured Bucket and Prefix for configured Forwarder(s).
  3. Within the next 5-15 minutes, data should begin to appear in time-based sub-directories.
    1. For example, data sent on 4/21/2020 at 11:01:54UTC using the example Forwarder configuration from the Create a Forwarder section above will appear in a folder with the following path: prefix-folder-name/events/org_key=ABCD123/year=2020/month=4/day=21/hour=11/minute=1/second=54/xxxfilename.jsonl.gz.
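The time-based key layout shown in the example can be parsed mechanically when you enumerate objects. This small helper splits a forwarder object key into its name=value partition fields; the sample key is the one from the example above.

```python
def parse_forwarder_key(key: str) -> dict:
    """Split a forwarder object key such as
    prefix/events/org_key=ABCD123/year=2020/.../second=54/file.jsonl.gz
    into its partition fields (org_key, year, month, day, hour, minute, second)."""
    fields = {}
    for part in key.split("/"):
        if "=" in part:
            name, value = part.split("=", 1)
            fields[name] = value
    return fields

info = parse_forwarder_key(
    "prefix-folder-name/events/org_key=ABCD123/year=2020/month=4/day=21/"
    "hour=11/minute=1/second=54/xxxfilename.jsonl.gz")
print(info["org_key"], info["year"], info["hour"])  # → ABCD123 2020 11
```

This is handy for filtering listed keys down to a particular time window before downloading.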

When using Azure Blob Storage:

  1. Go to the Azure console.
  2. Go to the configured Container.
  3. Within the next 5-15 minutes, data should begin to appear in time-based sub-directories.

4. Next Steps

Once Forwarders are configured, you can fetch the data or connect other tools to process it as needed. For example, you can configure a SIEM to collect this data from the Amazon S3 bucket.
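Forwarded objects are gzip-compressed JSON Lines files (one JSON record per line, as the .jsonl.gz extension in the monitoring example suggests). This sketch decodes one such object once you have its bytes in hand; the demo round-trips two fabricated records rather than real forwarder output.

```python
import gzip
import io
import json

def read_jsonl_gz(raw: bytes) -> list:
    """Decode one forwarder object: gzip-compressed, one JSON record per line."""
    records = []
    with gzip.open(io.BytesIO(raw), "rt", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# Round-trip demo with two fake alert records (stand-ins for real forwarder data).
sample = gzip.compress(b'{"id": "a1", "severity": 5}\n{"id": "a2", "severity": 7}\n')
alerts = read_jsonl_gz(sample)
print(len(alerts), alerts[0]["id"])  # → 2 a1
```

The same decoding applies whether the bytes come from an S3 object, an Azure blob, or a local download.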

Refer to the Data Forwarder Fields guide for more information about your data.

Last modified on January 17, 2024