Data Forwarder Setup
Overview
This document outlines the steps for configuring a Carbon Black Cloud Data Forwarder with either an AWS S3 bucket or Azure blob storage.
The following table shows which data types can be forwarded to each storage option.
Data Forwarder Type | AWS S3 Bucket | Azure Blob Storage |
---|---|---|
Alert | Yes | Yes |
Endpoint Event | Yes | No |
Watchlist Hit | Yes | Yes |
Requirements
- Carbon Black Cloud Console
- Account with Amazon Simple Storage Service (Amazon S3) or Azure Blob Storage
Guides and Resources
- Carbon Black Cloud Data Forwarder User Guide
- Data Forwarder Fields
- Getting Started with Custom Query Filters
- Data Forwarder API Documentation
- Writing an AWS S3 Bucket Policy
- Amazon: How Do I Create an S3 Bucket?
- Amazon: Bucket Restrictions & Limitations
1. Configure the Destination
The destination needs to be configured before setting up a Forwarder because the ability to write to the destination is verified when the Forwarder is saved.
The Data Forwarder requires either an AWS Simple Storage Service (S3) bucket (Option 1) or Azure Blob Storage (Option 2) to receive data. Any amount of data can be stored for analysis or can be connected to other products.
Option 1: Use AWS S3
Alerts, Endpoint Events and Watchlist Hits can be sent to an AWS S3 Bucket. Use the AWS Management Console to create the bucket and configure permissions.
- Optional: Set up KMS Encryption
- Create a bucket in your AWS Management Console
- Configure an AWS S3 Bucket to allow the forwarder to write data
- Create an AWS SQS Queue and configure the access policy
For more detailed instructions and policy examples to support different use cases, see [AWS S3: Writing an S3 Bucket Policy](reference/carbon-black-cloud/integrations/data-forwarder/bucket-policy/).
Optional: Set up KMS Encryption
If you require more security for your data at rest, we recommend that you use AWS's built-in key management service, AWS KMS. It makes securing your data easy and lets Carbon Black write files to the bucket without the ability to read them later. To enable KMS encryption, you will need a Customer Managed KMS Key.
Note: The Role Policy will also need modification to enable consumers to decrypt objects in the bucket using the KMS key.
- Navigate to the AWS Key Management Service.
- From the left side panel, choose Customer managed keys.
- Create a key.
- Leave the default selections for Symmetric keys, KMS key material origin, Single-region key.
- Hit Next and fill in any Alias, Description or Tags you like, and any Key administrators, Key deletion or Key usage permissions you need to allocate.
- Insert the following statement into the Key policy's Statement section, using the appropriate principal ID for your region. See the Carbon Black Cloud User Guide, or the Principal table further down this page, for the principal IDs for each region.
{ "Sid": "KMS policy to allow CBC Data Forwarder", "Effect": "Allow", "Principal": { "AWS":"arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-us-east-1-event-forwarder" }, "Action": [ "kms:GenerateDataKey", "kms:Decrypt" ], "Resource": "*" }
Create a Bucket
Use the AWS Management Console to create a bucket.
1. Sign in to the AWS Management Console.
2. From the top right corner of the page, use the dropdown to select the region for your bucket. You must select the same region as your Carbon Black Cloud instance. It is possible to work around this requirement using S3 Cross-Region Replication (CRR). Use the table below to determine the correct region:

   Carbon Black Cloud Product URL | AWS Region Name | AWS Region
   ---|---|---
   Prod 01: https://dashboard.confer.net Prod 02: https://defense.conferdeploy.net Prod 05: https://defense-prod05.conferdeploy.net | US East (N. Virginia) | us-east-1
   Prod 06: https://defense-eu.conferdeploy.net | Europe (Frankfurt) | eu-central-1
   Prod NRT: https://defense-prodnrt.conferdeploy.net | Asia Pacific (Tokyo) | ap-northeast-1
   Prod Syd: https://defense-prodsyd.conferdeploy.net | Asia Pacific (Sydney) | ap-southeast-2
   Prod UK: https://ew2.carbonblackcloud.vmware.com | Europe (London) | eu-west-2
3. Navigate to Services > S3 console.
4. Select Create bucket to open the Create Bucket wizard.
5. In Bucket name, enter a unique name for your bucket. The name may not contain uppercase letters or underscores. For additional guidance, see Amazon's bucket naming restrictions.
6. Region should default to the region you selected in step 2. Ensure that the correct region is still selected.
7. Select Enable for Block all Public Access. Note: Public access is not required for the S3 bucket to work with the Data Forwarder.
8. Select Create Bucket.
Configure the Bucket
The AWS S3 Bucket needs to be configured to allow the Forwarder to write data. Learn more about writing bucket policies for different use cases and configuring the bucket with varying levels of access in the Writing an AWS S3 Bucket Policy guide.
Continuing from the previous section:
1. Once the bucket is created and the page loads with a success message, select the Go to bucket details button from the message, or click the name of the bucket you created in the list displayed.
2. Create a new folder to serve as the base folder where you will push all data. You can name this something like `carbon-black-cloud-forwarder`. You will use this for your `prefix-folder-name` in a later step.
3. The bucket policy gives the Forwarder permissions to write to your bucket. From the Permissions tab, select Bucket Policy and configure it by copying the example below into the Bucket Policy Editor and adjusting the placeholder text:
   - Update the `Id` value. This can be anything, such as `Policy04212020` (where 04212020 represents the date, in this case April 21, 2020).
   - Update the `Sid` value. This can be anything, such as `Stmt04212020`.
   - Use this table to check that you are using the correct `Principal` value.

     AWS Region | Principal ID
     ---|---
     US East (N. Virginia) us-east-1 | arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-us-east-1-event-forwarder
     Europe (Frankfurt) eu-central-1 | arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-eu-central-1-event-forwarder
     Asia Pacific (Tokyo) ap-northeast-1 | arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-ap-northeast-1-event-forwarder
     Asia Pacific (Sydney) ap-southeast-2 | arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-ap-southeast-2-event-forwarder
     Europe (London) eu-west-2 | arn:aws:iam::132308400445:role/mcs2-psc-data-forwarder-s3
     US Gov West 1 us-gov-west-1 | arn:aws-us-gov:iam::507058390320:role/mcs2-psc-data-forwarder-s3
   - Update the `Resource` value.
     - Replace `carbon-black-customer-bucket-name` with the name you provided in step 11.
     - Replace `prefix-folder-name` with the folder name you created in step 16.
     - Example: "Resource": "arn:aws:s3:::your-bucket-name/prefix-folder-name/*"
     - The Resource value must end with `/*` to allow the Forwarder to access all sub-folders.
   - Once you have replaced all three values, hit Save, and you have configured your bucket policy.
Example Bucket Policy
{
  "Version": "2012-10-17",
  "Id": "Policy9999999981234",
  "Statement": [
    {
      "Sid": "Stmt1111111119376",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::132308400445:role/mcs-psc-prod-event-forwarder-us-east-1-event-forwarder"
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::carbon-black-customer-bucket-name/prefix-folder-name/*"
    }
  ]
}
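If you provision buckets in code rather than pasting the policy into the console, the placeholder substitution above can be sketched in Python. The template mirrors the example policy; the function name is illustrative, and the principal ARN must come from the Principal table for your region:

```python
import json

# Template mirroring the example bucket policy above; the three
# placeholder values must be replaced before the policy is applied.
POLICY_TEMPLATE = {
    "Version": "2012-10-17",
    "Id": "Policy9999999981234",
    "Statement": [{
        "Sid": "Stmt1111111119376",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::132308400445:role/"
                   "mcs-psc-prod-event-forwarder-us-east-1-event-forwarder"
        },
        "Action": ["s3:PutObject", "s3:PutObjectAcl"],
        "Resource": "arn:aws:s3:::carbon-black-customer-bucket-name/prefix-folder-name/*",
    }],
}

def render_bucket_policy(bucket: str, prefix: str, principal_arn: str) -> str:
    """Return the bucket policy JSON with the placeholders filled in.

    bucket/prefix are your own names; principal_arn comes from the
    Principal table above for your region.
    """
    policy = json.loads(json.dumps(POLICY_TEMPLATE))  # cheap deep copy
    stmt = policy["Statement"][0]
    stmt["Principal"]["AWS"] = principal_arn
    # The trailing /* is required so the Forwarder can write to sub-folders.
    stmt["Resource"] = f"arn:aws:s3:::{bucket}/{prefix}/*"
    return json.dumps(policy, indent=2)
```

The rendered string can then be passed to whatever tooling you use to set the policy (console, CLI, or infrastructure-as-code).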
If adding KMS Encryption:
1. Select the Properties tab, scroll to Default Encryption, and click Edit.
2. Enable Server-side encryption.
3. For Encryption key type, choose AWS Key Management Service key (SSE-KMS).
4. For AWS KMS key, either Choose from your AWS KMS keys or Enter AWS KMS key ARN.
   - If you choose Enter AWS KMS key ARN, copy in the ARN of the KMS key.
   - If you select Choose from your AWS KMS keys, select the Forwarder key you created in steps 3-6.
5. Enable the Bucket Key.
6. Hit Save to finalize KMS encryption for your bucket.

Note: Enabling the Bucket Key is NOT mandatory. AWS recommends using a Bucket Key for cost reasons, and support of KMS with the Data Forwarder was validated using this recommendation. If you choose not to enable the Bucket Key, there are no known negative impacts on the Data Forwarder.
Optional: Create an SQS Queue
This is only needed for some integrations, such as the Splunk and QRadar apps, that require a queue input for data from an AWS S3 Bucket.
- Create an SQS queue in your AWS Management Console.
- Configure the Access policy. Replace the tokens with your own values.
{ "Version": "2008-10-17", "Id": "__default_policy_ID", "Statement": [ { "Sid": "__sender_statement", "Effect": "Allow", "Principal": { "Service": "s3.amazonaws.com" }, "Action": "SQS:SendMessage", "Resource": "arn:aws:sqs:<aws-region>:<AWS Account Number>:<name-of-queue>", "Condition": { "ForAllValues:ArnEquals": { "aws:SourceArn": "arn:aws:s3:::<name-of-s3-bucket>" } } } ] }
- Configure the Event Notification in the AWS S3 bucket to use this queue. Navigate to Properties > Event Notifications and set the Destination SQS queue to the ARN of the new queue.
Note: If you need to reload older events and are using SQS to pull from the bucket, events are removed from the queue once they are retrieved. To view historical events or reload data, copy the objects to another prefix so that new event notifications are delivered to the queue.
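A consumer reading this queue receives standard S3 event notification messages. As an illustration of what an integration does with each message, here is a stdlib-only sketch (the function name is illustrative) that extracts the bucket/key pairs; note that S3 URL-encodes object keys, so the `=` signs in the time-partitioned prefixes arrive as `%3D`:

```python
import json
from urllib.parse import unquote_plus

def s3_objects_from_sqs_body(body: str) -> list[tuple[str, str]]:
    """Extract (bucket, key) pairs from one SQS message body carrying
    an S3 event notification. Object keys are URL-encoded in the
    notification, so they are decoded here before use."""
    records = json.loads(body).get("Records", [])
    return [
        (r["s3"]["bucket"]["name"], unquote_plus(r["s3"]["object"]["key"]))
        for r in records
        if "s3" in r
    ]
```

Each returned pair identifies one newly written `.jsonl.gz` object that the consumer can then fetch from the bucket.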
Option 2: Use Azure Blob Storage
Alerts and Watchlist Hits can be sent to Azure Blob Storage.
Endpoint Events can only be sent to an AWS S3 Bucket.
Create an Azure Storage Account
To create an Azure storage account to use with Data Forwarder, perform the following procedure.
The following procedure includes the settings with which the Carbon Black Cloud Data Forwarder is supported. Recommendations are noted.
Procedure
1. In the Azure console, select `Storage Accounts`.
2. Click `+ Create`.
3. In the Project details section, select the `Subscription` and `Resource Group` under which the storage account will be filed.
4. In the Instance details section:
   - Provide a unique name for the account.
   - Select an appropriate `Region`. Select the Azure Identity Credentials that correspond to the Carbon Black Cloud URL to which your organization belongs. See Azure Forwarding Identity Credentials.
   - For `Performance`, select `Standard` (recommended).
   - Select either `LRS` or `GRS` `Redundancy`, based on your redundancy requirements.
5. On the `Advanced` tab:
   - In the `Security` section, use the default values.
   - Select `Enable hierarchical namespace`.
   - Use the default values for `Access Protocols`, `Blob Storage`, and `Azure Files`.
6. On the `Networking` tab:
   - For `Network Access`, select `Enable public access from all networks`.
   - For `Network Routing`, retain the default value of `Microsoft network routing`.
7. On the `Data Protection` tab, set `Recovery`, `Tracking`, and `Access Control` to your preference.
8. On the `Encryption` tab, set `Encryption` to your preference.
Create an Azure Blob Container
1. In the Azure console, go to your Azure storage account.
2. Under `Data Storage`, click `Containers`.
3. To create a new container, click the `+ Container` button. Provide a unique name and leave all other fields at their default values.
Configure the Azure Blob Container
To authorize Carbon Black Cloud to access your designated storage container, perform the following procedure.
1. In the Azure console, click `Managed Identities`.
2. Click `+ Create`.
   - In the `Project details` section, select the `Subscription` and `Resource Group` with which to associate the `Managed Identity`. We recommend that you use the same settings as those established for the Azure storage account for your Data Forwarder. (See Step 3 in Create an Azure Storage Account.)
   - In the `Instance details` section, select the appropriate `Region` that corresponds to the Carbon Black Cloud URL to which your organization belongs. See Azure Forwarding Identity Credentials.
   - Provide a unique name for the `Managed Identity`.
   - Select `Review + Create`, click `Create`, and then click `Go to resource`.
   - From the sidebar, select `Federated Credentials`.
   - Click `+ Add Credential`.
   - Under `Federated Credential Scenario`, select `Other`.
   - For `Issuer URL`, enter https://cognito-identity.amazonaws.com
   - From the Azure Forwarding Identity Credentials table, select the combination of `Subject identifier` and `Audience` that corresponds to your Carbon Black Cloud URL.
   - Enter the `Subject identifier`. Note: Validate your entry to make sure it exactly matches the field data.
   - Enter a unique name for the `Federated Credential`, such as `Carbon-Black-Cloud-Data-Forwarder`.
   - Under `Audience (optional)`, click `Edit` and overwrite the `Audience` value with the value found in the `Audience` column that corresponds to the Carbon Black Cloud URL in the Azure Forwarding Identity Credentials table.
3. Navigate to your designated `Azure Storage Container`.
   - From the sidebar, select `Access Control (IAM)`.
   - Click `+ Add` and select `Add role assignment` from the dropdown menu.
   - Select the `Storage Blob Data Contributor` role.
   - Under `Assign access to`, select `Managed Identity`.
   - Click `+ Select Members`.
   - From the right panel:
     - Select the `Subscription` under which your `Managed Identity` was registered.
     - Under `Managed Identity`, select `User-assigned managed identity`.
     - Select the `Managed Identity` you created for use with your Carbon Black Cloud Data Forwarder.
     - Click `Select`.
   - Click `Review + Assign` two times.
Azure Forwarding Identity Credentials
The following table describes supported Azure data fields to use when creating an Azure storage account for use with a Data Forwarder, and when configuring the Azure Blob Container for Carbon Black Cloud access.
Carbon Black Cloud Product URL (Region) | Subject Identifier | Audience |
---|---|---|
Prod 01: https://dashboard.confer.net Prod 02: https://defense.conferdeploy.net Prod 05: https://defense-prod05.conferdeploy.net | us-east-1:70ac9e64-2d3f-4e2b-967b-12b2da2ee0ff | us-east-1:aacb3a9c-877c-4664-a164-5d584cce8f89 |
Prod 06: https://defense-eu.conferdeploy.net | eu-central-1:8b6f75c6-2f09-4f33-9466-f531a86428f2 | eu-central-1:e668967f-3937-42ed-b6ce-dfa1ab47a687 |
Prod NRT: https://defense-prodnrt.conferdeploy.net | ap-northeast-1:b78584c2-03ca-486a-98a3-2c1c1a2f7d0b | ap-northeast-1:ceb5010a-24bc-4db0-a3c0-d5d1d8f6789c |
Prod Syd: https://defense-prodsyd.conferdeploy.net | ap-southeast-2:0c4dcd16-9552-4c83-8e6f-4eee7a9709f8 | ap-southeast-2:373a12fe-cd63-4840-a20a-fe02ab7e4a7a |
Prod UK: https://ew2.carbonblackcloud.vmware.com | eu-west-2:9ea6e509-8da5-491e-b1a9-1fc6878ae46c | eu-west-2:4ae3def8-1eb0-4562-8e7e-ad5d76107068 |
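If you automate the Managed Identity setup, it can help to encode this table once and look up the pair by console URL. A small sketch; the mapping reproduces the table above, and the function name is illustrative:

```python
# (subject identifier, audience) per Carbon Black Cloud product URL,
# reproducing the Azure Forwarding Identity Credentials table above.
AZURE_IDENTITY_CREDENTIALS = {
    "https://dashboard.confer.net": (
        "us-east-1:70ac9e64-2d3f-4e2b-967b-12b2da2ee0ff",
        "us-east-1:aacb3a9c-877c-4664-a164-5d584cce8f89"),
    "https://defense.conferdeploy.net": (
        "us-east-1:70ac9e64-2d3f-4e2b-967b-12b2da2ee0ff",
        "us-east-1:aacb3a9c-877c-4664-a164-5d584cce8f89"),
    "https://defense-prod05.conferdeploy.net": (
        "us-east-1:70ac9e64-2d3f-4e2b-967b-12b2da2ee0ff",
        "us-east-1:aacb3a9c-877c-4664-a164-5d584cce8f89"),
    "https://defense-eu.conferdeploy.net": (
        "eu-central-1:8b6f75c6-2f09-4f33-9466-f531a86428f2",
        "eu-central-1:e668967f-3937-42ed-b6ce-dfa1ab47a687"),
    "https://defense-prodnrt.conferdeploy.net": (
        "ap-northeast-1:b78584c2-03ca-486a-98a3-2c1c1a2f7d0b",
        "ap-northeast-1:ceb5010a-24bc-4db0-a3c0-d5d1d8f6789c"),
    "https://defense-prodsyd.conferdeploy.net": (
        "ap-southeast-2:0c4dcd16-9552-4c83-8e6f-4eee7a9709f8",
        "ap-southeast-2:373a12fe-cd63-4840-a20a-fe02ab7e4a7a"),
    "https://ew2.carbonblackcloud.vmware.com": (
        "eu-west-2:9ea6e509-8da5-491e-b1a9-1fc6878ae46c",
        "eu-west-2:4ae3def8-1eb0-4562-8e7e-ad5d76107068"),
}

def credentials_for(console_url: str) -> tuple[str, str]:
    """Return the (subject identifier, audience) pair for a console URL,
    tolerating a trailing slash."""
    return AZURE_IDENTITY_CREDENTIALS[console_url.rstrip("/")]
```

Always validate the looked-up values against the table above, since a mismatched Subject identifier or Audience will cause the federated credential to fail.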
2. Create a Forwarder
Option 1: Use the Carbon Black Cloud Console
Recommended
To create a Data Forwarder in the console, go to Settings > Data Forwarders and select Add Forwarder from the upper-right corner.
Complete the configuration screen; some information will be from the AWS S3 or Azure Blob Storage configuration in earlier steps.
Further instructions are provided on the Add Forwarder form and in the Carbon Black Cloud User Guide.
Option 2: Create a Forwarder via API
This option is recommended for use cases where the same Data Forwarder configuration is created regularly, for example MSSPs or Incident Response Teams who regularly onboard new organizations with consistent configuration.
The following steps guide you through creating a Forwarder via the Data Forwarder API and setting up an AWS S3 bucket to receive the data.
Configure an API Access Level
The Data Forwarder requires a `Custom` access level type. You can either follow the steps below to create a custom access level with the least privileges (recommended), or you can use the existing access levels `Super Admin` or `System Admin` when you create your API key.
For more information about Access Levels, see the Carbon Black Cloud Authentication Guide.
- Log into Carbon Black Cloud Console.
- Go to Settings > API Access.
- Select Access Levels tab from the top-left.
- Select `Add Access Level` and fill in the following information:
  - Give it a name (example: `CB_Cloud_Forwarder`); you will need this for a later step.
  - Give it a description (example: `Only used to forward Events to S3`).
  - Set `Copy Permissions from` to `None`.
- From the table, scroll down to the category called Data Forwarding (.notation name is `event-forwarder.settings`) and check the permissions for Create, Read, Update, and Delete.
- Select Save and your access level is created.
Create an API key
For more information about API Keys, see the Carbon Black Cloud Authentication Guide.
Continuing from the previous section on Settings > API Access:
- Select the API Keys tab from the top-left.
- Select `Add API Key` and provide the following information:
  - Give it a name.
  - For Access Level Type, choose `custom` and then choose the access level you created in the previous section.
  - Include a description as desired.
- Select Save.
- Now you will see your API Credentials. Record these in a secure place; you will need these for a later step.
Execute the API calls
You can run API calls in code (cURL, HTTP) or use Postman, which is a platform that helps simplify the use of APIs, especially if you are running several API calls or using multiple Carbon Black APIs.
The following steps describe using Postman to execute the API call.
1. Download Postman from here and follow the prompts to install it.
2. Fork the collection from the Carbon Black Postman Workspace.
3. In the Postman app, select a workspace for the collection.
4. Navigate to the Data Forwarder API folder from Carbon Black Cloud > Platform APIs.
5. Fill out your configuration by selecting the eyeball icon near the upper-right corner next to the settings gear, and then select Edit under the eyeball icon.
   - Verify that the environment URL matches the URL of your Carbon Black Cloud console.
   - Add your API ID and API Secret Key from steps 26-28 under the Current Value column.
   - Add your Org Key, found in the Carbon Black Cloud console under Settings > API Access in the API Keys tab.
   - The cb_forwarder_id can be added later.
6. First, validate the configuration and any existing forwarders by running the Get Configured Forwarders route.
   - From the Collections panel on the left, select the `Get Configured Forwarders` route.
   - Hit Send to run the call. The result is a list of all forwarders in that organization; it can be null if you have not created any Forwarders.
7. Now run the Create Forwarder route.
   - Select the `Create Forwarder` call from the Postman Collection.
   - Click on the Body tab and replace all the bracketed text with actual values, making sure to remove the <>.
     - Give the Forwarder a name.
     - Replace the bucket name with the name from step 11.
     - Replace the S3 prefix with your folder name from step 16. Optionally, you can append a unique sub-folder name (example: `prefix-folder-name/events`).
     - Choose what type of data you wish to forward. Options include:
       - Alerts
       - Endpoint events
       - Watchlist hits
Example Create Alert Forwarder Request Body for AWS S3 Storage
{ "enabled": true, "name": "Alert Forwarder v1.0.0 - the original and deprecated", "s3_bucket_name": "demo-bucket", "s3_prefix": "demo-alert", "type": "alert", "version_constraint": "1.0.0", "destination": "aws_s3", "current_version": "1.0.0" }
Example Create Alert Forwarder Request Body for Azure Blob Storage Container
{ "org_key": "ABCD1234", "name": "Demo Create Azure Alert", "enabled": false, "type": "alert", "version_constraint": "2.0.0", "destination": "azure_blob_storage", "azure_storage_account": "azuredemo", "azure_container_name": "azure-event-demo", "azure_tenant_id": "a12345bc-1abcd-1a2b-a1b2-ab12c3de45f6", "azure_client_id": "X98766yz-z987-z9x8-z9x8-zx98y7vw65u4" }
- Hit Send.

Example Success Message
{
  "id": "<Forwarder_ID>",
  "enabled": true,
  "update_time": "<YYYY-MM-ddTHH:mm:ssZ>",
  "status": "Valid Bucket Configuration for Bucket: <BucketName> with Prefix: <prefix>",
  "error": ""
}

Example Failure Message - Invalid Bucket Configuration Error
If you receive this error, check that you used the correct bucket name.
{
  "id": "<Forwarder_ID>",
  "enabled": true,
  "update_time": "<YYYY-MM-ddTHH:mm:ssZ>",
  "status": "Invalid Bucket Configuration for Bucket: <BucketName> with Prefix: <prefix>",
  "error": "NoSuchBucket"
}

Access Denied Error
If you receive this error, there is an issue with your bucket policy. This can occur if the Resource field has the incorrect prefix route. Review steps 16, 17-d, and 36-c to ensure your prefix file path is correct, or see the Writing a Bucket Policy guide for troubleshooting this error.
{
  "id": "<Forwarder_ID>",
  "enabled": true,
  "update_time": "<YYYY-MM-ddTHH:mm:ssZ>",
  "status": "Invalid Bucket Configuration for Bucket: <BucketName> with Prefix: <prefix>",
  "error": "AccessDenied"
}
- Run the `Get Configured Forwarders` call again to confirm the configuration is correct.
- If your Forwarder was configured successfully, a `healthcheck.json` file is sent to your bucket in a folder named `healthcheck`. Note: If you created a sub-folder in step 37.3, the healthcheck folder may be in the sub-folder (Ex: prefix-folder-name > events > healthcheck).
  - The healthcheck runs automatically when a Forwarder is created.
  - To check the Forwarder health manually, select the `Forwarder Healthcheck` call.
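Outside Postman, the same Create Forwarder call can be issued from any HTTP client. Below is a stdlib-only sketch that builds the request without sending it; the `/api/integration/v1/orgs/{org_key}/forwarders` path and the `X-Auth-Token: <secret>/<id>` header format follow the Data Forwarder API documentation, but verify both against the current docs before relying on them:

```python
import json
import urllib.request

def build_create_forwarder_request(console_url: str, org_key: str,
                                   api_id: str, api_secret: str,
                                   body: dict) -> urllib.request.Request:
    """Build (but do not send) the Create Forwarder POST request.

    body is a request body like the AWS S3 or Azure examples above.
    """
    url = f"{console_url.rstrip('/')}/api/integration/v1/orgs/{org_key}/forwarders"
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={
            # Carbon Black Cloud API keys authenticate as "<secret key>/<API ID>".
            "X-Auth-Token": f"{api_secret}/{api_id}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the built request with `urllib.request.urlopen(req)` (or handing the same URL, headers, and body to cURL) should return the same success or failure messages shown above.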
3. Monitor the data flow
When using AWS S3 storage:
- Go to the Amazon S3 console.
- Go to the configured Bucket and Prefix for the configured Forwarder(s).
- Within the next 5-15 minutes, data should begin to appear in time-based sub-directories.
  - For example, data sent on 4/21/2020 at 11:01:54 UTC using the example Forwarder configuration from the Create a Forwarder section above will appear in a folder with the following path: `prefix-folder-name/events/org_key=ABCD123/year=2020/month=4/day=21/hour=11/minute=1/second=54/xxxfilename.jsonl.gz`.
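Because the path encodes the batch timestamp as `key=value` segments, consumers can recover the event time without any AWS SDK. A small stdlib sketch, with an illustrative function name:

```python
from datetime import datetime, timezone

def timestamp_from_key(key: str) -> datetime:
    """Recover the batch timestamp from a time-partitioned object key,
    e.g. '.../year=2020/month=4/day=21/hour=11/minute=1/second=54/file.jsonl.gz'."""
    # Collect every key=value segment of the path (org_key is also captured
    # but simply ignored below).
    parts = dict(seg.split("=", 1) for seg in key.split("/") if "=" in seg)
    return datetime(
        int(parts["year"]), int(parts["month"]), int(parts["day"]),
        int(parts["hour"]), int(parts["minute"]), int(parts["second"]),
        tzinfo=timezone.utc,
    )
```

This is useful, for example, to filter a bucket listing down to the objects from a particular time window before downloading them.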
When using Azure Blob Storage:
- Go to the Azure console.
- Go to the configured Container.
- Within the next 5-15 minutes, data should begin to appear in time-based sub-directories.
4. Next Steps
Once Forwarders are configured, you can fetch the data or connect other tools to process it as needed. For example, you can configure a SIEM to collect this data from the Amazon S3 bucket.
Refer to the Data Forwarder Fields guide for more information about your data.
Last modified on January 17, 2024