Alert Bulk Export

Forward Alerts to an S3 Bucket

The Data Forwarder is the recommended export method for reliable, guaranteed delivery of Carbon Black Cloud Alerts. It works at scale for customers and MSSPs of any size by writing compressed JSON Lines (jsonl) content to an S3 bucket. The Data Forwarder can be configured in the Carbon Black Cloud console under Settings > Data Forwarder or through the Data Forwarder API.
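Once files land in the bucket, each one holds one JSON alert per line. As a rough sketch of consuming that format (the local file path is illustrative, and this assumes gzip compression and that you have already downloaded the object from S3):

```python
import gzip
import json


def read_forwarder_file(path):
    """Parse a gzip-compressed JSON Lines file: one JSON alert per line."""
    alerts = []
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:  # skip blank lines defensively
                alerts.append(json.loads(line))
    return alerts
```

From there each element is a plain dict you can filter, enrich, or load into a SIEM.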

Exporting Alerts Continuously via the Alerts API

If the Data Forwarder does not fit your needs, the following algorithm lets you fetch alerts without duplicates using the Alerts API. You can run the logic on a cycle to fetch alerts continuously, or perform a one-time historical export over a fixed time range. For historical exports longer than 30 days, repeat the algorithm month by month, setting the start and end to the first and last day of each month.

If you are repeating on a cycle, we recommend keeping a 30-second delay from the current time (end = now - 30s) to allow Carbon Black Cloud's asynchronous alert processing to complete; or, if you want to wait for updates to CB Analytics alerts to stop, keep a delay of 15 minutes.

A cycle should be no shorter than 60 seconds and should use a start and end time range on either create_time or last_update_time.

Note: If your goal is to ensure you receive no duplicate Alerts, using create_time will do so - but at the expense of missing subsequent updates to a CB Analytics alert, which can include changes to properties such as reputation, or event_ids that CBC associates with the alert after the alert record is generated. A CBC Analytics alert can be updated with properties or event_ids for up to 15 minutes after create_time, after which the alert is considered immutable. If you wish to receive updates to alerts, use last_update_time instead - but then you will have to consolidate duplicate Alert records as CBC issues updates to existing Alerts.
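When polling by last_update_time, the same alert id can therefore appear across several cycles as CBC updates it. A minimal consolidation sketch (using plain dicts with the v6 alert fields `id` and `last_update_time`; real alert records carry many more fields) keeps only the newest record per alert id:

```python
def consolidate_alerts(store, batch):
    """Merge a freshly fetched batch into a dict keyed by alert id,
    keeping the record with the newest last_update_time."""
    for alert in batch:
        previous = store.get(alert["id"])
        # ISO-8601 UTC timestamps of identical format compare correctly as strings
        if previous is None or alert["last_update_time"] > previous["last_update_time"]:
            store[alert["id"]] = alert
    return store
```

Each cycle passes its results through `consolidate_alerts`, so updated alerts overwrite their earlier versions instead of duplicating them.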
  1. Fetch Alerts for a 5-minute window, setting the end to at least 30 seconds earlier than the current time to allow the Carbon Black Cloud backend to make all Alerts available to the API

    curl -s -H "X-Auth-Token: $ACCESS_TOKEN" -H "Content-Type: application/json" https://$CBD/appservices/v6/orgs/$ORG/alerts/_search \
    -d '{
      "criteria": {
        "create_time": {
          "start": "2021-06-29T12:00:00.000Z",
          "end": "2021-06-29T12:04:59.999Z"
        }
      },
      "rows": 10000,
      "start": 0,
      "sort": [
        {
          "field": "create_time",
          "order": "ASC"
        }
      ]
    }'
    
  2. Check `num_found` in the API response; if `num_found <= 10000`, wait for the next 5-minute window and repeat step 1

  3. If there are more than 10k results, you will need additional API requests to fetch all Alerts for the 5-minute window.

    a. Get the create_time timestamp from the last record of the results; you will use this as the start of the next request's time range

    Example Response:

    {
      "num_found": 23568,
      "results": [
        ...,
        {
          ...
          "create_time": "2021-06-29T12:03:21.000Z"
          ...
        }
      ]
    }
    

    b. Add one millisecond to the timestamp to ensure the Alert is not duplicated.

Note: This risks missing an alert if multiple alerts share the same 'create_time'. To avoid missing any alerts, you can reuse the same 'create_time' as the start; however, you will then need additional processing logic to remove the duplicate alert(s), which will be the first index(es) of the new results, based on how many alerts with that 'create_time' were previously fetched.
```
curl -s -H "X-Auth-Token: $ACCESS_TOKEN" -H "Content-Type: application/json" https://$CBD/appservices/v6/orgs/$ORG/alerts/_search \
-d '{
  "criteria": {
    "create_time": {
      "start": "2021-06-29T12:03:21.001Z",
      "end": "2021-06-29T12:04:59.999Z"
    }
  },
  "rows": 10000,
  "start": 0,
  "sort": [
    {
      "field": "create_time",
      "order": "ASC"
    }
  ]
}'
```

c. Check `num_found` in the API response; if `num_found <= 10000`, wait for the next 5-minute window and return to step 1; otherwise repeat from step 3a

If you want to use the Carbon Black Cloud SDK, the following code fetches the alerts and concatenates them into a single list, ready for custom logic or for writing the output to a file.

```
"""Algorithm to Export Alerts from Carbon Black Cloud"""

from cbc_sdk import CBCloudAPI
from cbc_sdk.platform import BaseAlert

from datetime import datetime, timedelta, timezone

cb = CBCloudAPI(profile="")

# Time field and format to use
time_field = "create_time"
time_format = "%Y-%m-%dT%H:%M:%S.%fZ"

# Time window to fetch, using current time - 30s to allow for Carbon Black Cloud asynchronous event processing completion
end = datetime.now(timezone.utc) - timedelta(seconds=30)
start = end - timedelta(minutes=5)

# Fetch initial Alert batch
alerts = list(cb.select(BaseAlert)
                .set_time_range(time_field,
                                start=start.strftime(time_format),
                                end=end.strftime(time_format))
                .sort_by(time_field, "ASC"))

# Check if the 10k limit was hit and iteratively fetch remaining alerts by advancing the start time to the last alert fetched
if len(alerts) >= 10000:
    last_alert = alerts[-1]
    while True:
        new_start = datetime.strptime(last_alert.create_time, time_format) + timedelta(milliseconds=1)
        overflow = list(cb.select(BaseAlert)
                          .set_time_range(time_field,
                                          start=new_start.strftime(time_format),
                                          end=end.strftime(time_format))
                          .sort_by(time_field, "ASC"))

        # Extend alert list with follow-up alert batches
        alerts.extend(overflow)
        if len(overflow) >= 10000:
            last_alert = overflow[-1]
        else:
            break

print(f"Fetched {len(alerts)} alert(s) from {start.strftime(time_format)} to {end.strftime(time_format)}")
```
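To write the output to a file, one option is JSON Lines, which matches the Data Forwarder's per-line format. A generic sketch over plain dicts (the filename is arbitrary; when using the SDK you would first extract each alert's raw dictionary rather than the model object):

```python
import json


def write_jsonl(alerts, path):
    """Write one JSON object per line (JSON Lines) for downstream tooling."""
    with open(path, "w", encoding="utf-8") as fh:
        for alert in alerts:
            fh.write(json.dumps(alert, sort_keys=True) + "\n")
```

The resulting file can then be re-read with a line-by-line `json.loads`, mirroring how forwarder output is consumed.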
Last modified on February 9, 2023