Event Message Bus Reference

Carbon Black EDR (Endpoint Detection and Response) is the new name for the product formerly called CB Response.

Carbon Black 4.2+ supports a rich array of asynchronous server-side notifications, referred to as the “message bus”. This interface provides a “push” notification service so that your application can receive and process any event that is received or generated by the Carbon Black server. These events include:

  • Notifications when a watchlist or feed matches (“hits”) a process or binary
  • Notifications when a new binary is discovered on an endpoint

The message bus enables you to build applications that can:

  • Forward events to a third-party service, such as Splunk; this is how the CB Event Forwarder is implemented.
  • Evaluate network connections against a machine learning model to find potential botnet Command & Control nodes
  • React to certain conditions in near-real-time, such as taking custom actions when an endpoint creates a network connection. This is how the Infoblox connector is notified when an endpoint contacts an Infoblox-flagged domain so that it can kill the offending process.
  • … and much more

This document describes these notifications, how to subscribe to these notifications, and the format of the notifications themselves.

Note that many Message Bus events must be first enabled in the Carbon Black server configuration file. To learn how to enable all message bus events on the EDR server, see the Event Forwarder documentation.

Notification Architecture

The Carbon Black server uses the Advanced Message Queuing Protocol (AMQP) to publish events of interest. Current versions of Carbon Black use the RabbitMQ message broker, so any RabbitMQ- or AMQP-compliant client can subscribe to these notifications. For client libraries in your language of choice, see the RabbitMQ Clients & Developer Tools page.

A Brief AMQP Tutorial

This section is intended to give a brief overview of how AMQP works in the context of the Carbon Black message bus. If you are familiar with AMQP already, then you may skip this section; for more information on AMQP, see the RabbitMQ Concepts page.

AMQP defines an architecture for publishing and subscribing to notifications. To control the data flow of messages between publishers and subscribers, AMQP organizes the data flow into exchanges and queues. The Carbon Black server defines one exchange of interest: the api.events exchange. This exchange brings the producers (the components of the Carbon Black server creating messages) and the consumers (you) together in a common location. Every time a message is published on the api.events exchange, the message is delivered to consumers that have registered an interest in receiving that message.

Message consumers define a queue. Like exchanges, queues are identified by a short string. A consumer can define any number of queues. Messages are delivered to queues and not to consumers directly. That distinction is important to note, since consumers can come and go; for example, the consumer may crash or the network link is severed between the Carbon Black server and the consuming service. Therefore, queues can be defined to be durable, which means the queue will outlive the consumer’s network connection to the server. If the consumer process reading from the queue exits, Carbon Black (through RabbitMQ) will cache messages for delivery until the consumer starts reading from the queue again. On the other hand, non-durable queues will drop messages when there is no consumer available to read them.

Astute readers will note the brief reference above to consumers registering the message types of interest. Each Carbon Black message is published with a routing key. The routing key is a string, usually arranged like a DNS name or Java class name with dot-separated components. An example routing key from the Carbon Black Message Bus is ingress.event.netconn. Like a Java class name, it is read from left to right, less specific to more specific. This routing key indicates that the message body includes a message published on ingestion (ingress); further, this is a message produced via an event sent from an endpoint (event), and the event type is a network connection (netconn).

Therefore, the consumer also provides a key pattern that defines the type of messages it is interested in receiving when it creates the queue. The key pattern works much like a glob in a filename, except that the hash mark (#) is used as the wildcard instead of the asterisk; in AMQP, # matches zero or more dot-separated words (and * matches exactly one word). Therefore, if you wanted to match only network connection events, the key pattern would simply be ingress.event.netconn. On the other hand, if you wanted to subscribe to all sensor events, the key pattern would be ingress.event.#; by extension, to subscribe to all messages, the key pattern would simply be #.
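To make the matching semantics concrete, a binding pattern can be translated into a regular expression. This is a sketch of how the broker evaluates patterns, not code you need to write yourself (RabbitMQ does the matching server-side):

```python
import re

def topic_matches(pattern: str, routing_key: str) -> bool:
    """Return True if an AMQP topic binding pattern matches a routing key.

    '#' matches zero or more dot-separated words; '*' matches exactly one.
    """
    regex = (re.escape(pattern)
             .replace(r"\*", "[^.]+")         # '*' -> exactly one word
             .replace(r"\.\#", r"(?:\..+)?")  # '.#' -> zero or more trailing words
             .replace(r"\#", ".*"))           # bare '#' -> match anything
    return re.fullmatch(regex, routing_key) is not None
```

For example, `topic_matches("ingress.event.#", "ingress.event.netconn")` is true, while the same pattern does not match `watchlist.hit.process`.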

Notification Format

All Message Bus notifications are published in one of two formats:

  • Google Protocol Buffers (protobufs) are used for endpoint-generated messages
  • JSON is used for server-generated messages

Messages generated as a result of events on an endpoint (for example, process start/exit notifications, network connections, registry modifications, etc.) are published as protobufs on the Message Bus. This is the same format used by the sensors themselves to encode the raw events. The raw event volume can easily be measured in tens of thousands of messages per second. Because of the performance implications, the Carbon Black Server must be configured to export these messages (see the Configuration Docs document for details). Further, the server can be configured to only publish a subset of the event types in order to reduce the performance impact.

Messages generated by the Carbon Black server itself (watchlist hits, notification of new binary uploaded to the server, etc.) are published as JSON documents on the Message Bus. These events have a much lower volume and represent a minimal performance impact for both publishing and consuming the messages.

The documentation below calls out in which format each notification type is published.
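Putting the pieces together, a minimal consumer might look like the following sketch. It uses the third-party pika library, binds a queue to the api.events exchange, and distinguishes the two formats by routing-key prefix (endpoint-generated protobuf messages all use ingress.event.*). The port number shown (5004) and the credential source are assumptions; use the RabbitMQ settings of your own Carbon Black server (see cb.conf):

```python
import json

def classify(routing_key: str) -> str:
    """Endpoint-generated events are protobuf; everything else is JSON."""
    return "protobuf" if routing_key.startswith("ingress.event.") else "json"

def consume(host, username, password, binding_key="#", port=5004):
    """Bind a temporary queue to the api.events exchange and print events.

    Assumes the exchange already exists on the server; host, port, and
    credentials must match your server's RabbitMQ configuration.
    """
    import pika  # third-party dependency: pip install pika

    creds = pika.PlainCredentials(username, password)
    conn = pika.BlockingConnection(
        pika.ConnectionParameters(host=host, port=port, credentials=creds))
    channel = conn.channel()
    # Server-named, exclusive (non-durable) queue: messages published while
    # this consumer is offline will be dropped, as described above.
    result = channel.queue_declare(queue="", exclusive=True)
    queue_name = result.method.queue
    channel.queue_bind(exchange="api.events", queue=queue_name,
                       routing_key=binding_key)

    def on_message(ch, method, properties, body):
        if classify(method.routing_key) == "json":
            print(method.routing_key, json.loads(body))
        else:
            print(method.routing_key, f"{len(body)}-byte protobuf event")

    channel.basic_consume(queue=queue_name, on_message_callback=on_message,
                          auto_ack=True)
    channel.start_consuming()
```

Declaring the queue as durable instead (queue_declare(queue="myqueue", durable=True)) would let the broker hold messages across consumer restarts.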

Endpoint Generated Messages

The messages generated by endpoints are published according to the Protobuf definition located in the cbapi repository. Each protobuf message is transmitted as a CbEventMsg encoded in the Protobuf binary format. Since these events are forwarded on the Message Bus immediately after they’re received by the server, the data contained within the message is not visible through the Carbon Black REST API or the Web UI until it is committed to disk. The delay between receiving a message from the message bus and when it’s visible through the REST API may be up to 15 minutes.

Each protobuf message is mapped to a routing key in the following table. The Message Protobuf Name refers to the member element in the CbEventMsg structure that is defined for the given event type.

Message Description                                 Message Protobuf Name   Routing Key
Binary module loads (for example, DLLs on Windows)  modload                 ingress.event.moduleload
Network connections                                 netconn                 ingress.event.netconn
File modifications                                  filemod                 ingress.event.filemod
Registry modifications                              regmod                  ingress.event.regmod
Process creation and termination                    process                 ingress.event.procstart*, ingress.event.process*, ingress.event.procend
Binary module information                           module                  ingress.event.module
Child process spawn                                 childproc               ingress.event.childproc
Cross-process event (remote thread, etc.)           crossproc               ingress.event.crossprocopen, ingress.event.remotethread

* Note that both ingress.event.process and ingress.event.procstart are valid routing keys for the “Process creation” message. Your code should be able to accept and process the event using either routing key.
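One simple way to honor that note is a dispatch table that registers the same handler under both routing keys. The handler names below are illustrative, not part of the product:

```python
# Handlers are placeholders; real code would decode the protobuf body.
def on_process_start(body):
    return "process-start"

def on_netconn(body):
    return "netconn"

HANDLERS = {
    "ingress.event.procstart": on_process_start,
    "ingress.event.process": on_process_start,  # alias of procstart
    "ingress.event.netconn": on_netconn,
}

def dispatch(routing_key, body):
    """Route a message to its handler; return None for unregistered keys."""
    handler = HANDLERS.get(routing_key)
    return handler(body) if handler else None
```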

Server Generated Messages

The Carbon Black server also generates messages published on the Message Bus as a result of events that occur in the internal processing pipeline. These events are grouped into four categories:

  • Watchlist hits
  • Feed hits
  • Binary notifications
  • Alerts

For each of these categories, the messages are further decomposed based on whether the data is available for immediate query/retrieval from the REST API or Web UI. For example, some feed and watchlist IOCs are evaluated against the incoming endpoint data at ingestion time. As soon as the Carbon Black server determines that an event matches an IOC in a feed, it will publish a notification on the bus, even if the event has not yet been fully processed and stored by Carbon Black. A separate notification is sent after the event has been processed and stored.

Message Visibility

Message Description Message visibility Routing Key
Watchlist hit on binary Message not visible in search yet watchlist.hit.binary
Watchlist hit on binary Message visible in search watchlist.storage.hit.binary
Watchlist hit on process Message not visible in search yet watchlist.hit.process
Watchlist hit on process Message visible in search watchlist.storage.hit.process
Feed hit on binary Message not visible in search yet feed.ingress.hit.binary
Feed hit on binary Message visible in search feed.storage.hit.binary
Feed hit on binary Message visible in search feed.query.hit.binary
Feed hit on process Message not visible in search yet feed.ingress.hit.process
Feed hit on process Message visible in search feed.storage.hit.process
Feed hit on process Message visible in search feed.query.hit.process
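The second component of these routing keys tells a consumer whether the matched data is already searchable. A small helper, grounded in the visibility table above:

```python
def is_searchable(routing_key: str) -> bool:
    """True if the hit's data is committed and visible to API search.

    Per the table: 'storage' and 'query' stages fire after commit;
    plain watchlist.hit.* and feed.ingress.hit.* fire before it.
    """
    parts = routing_key.split(".")
    if parts[0] == "watchlist":
        return parts[1] == "storage"
    if parts[0] == "feed":
        return parts[1] in ("storage", "query")
    raise ValueError("not a watchlist/feed hit: %s" % routing_key)
```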

Watchlist Hits

Watchlists are created to trigger on new process events or new binaries.

  • Watchlist hits
    • Process Watchlist
    • Binary Watchlist

Process Watchlist Hit

Name: watchlist.hit.process or watchlist.storage.hit.process. The events that caused the watchlist hit are guaranteed to exist in a process API query when the watchlist.storage.hit.process message is received. Both messages have the same structure.

watchlist.hit.process is a JSON structure with the following entries:

name type description
cb_version string Carbon Black server version
event_timestamp float Timestamp when event was published, in seconds since the epoch
watchlist_id int32 Identifier of the watchlist that matched
watchlist_name string Name of watchlist that matched
server_name string Name of the Carbon Black Server
docs list List of one or more matching process documents; see next table

Each matching process document is a JSON structure with the following entries:

name type description
childproc_count int32 Total count of child processes created by this process
cmdline string Process command line
filemod_count int32 Total count of file modifications made by this process
group string Sensor group this sensor was assigned to at time of process execution
host_type string Type of the computer: server, workstation, domain controller
hostname string Hostname of the computer on which the process executed (at time of execution)
last_update string Last activity in this process, in endpoint local time. Example: 2014-02-04T16:23:22.547Z
modload_count int32 Total count of module loads in this process.
netconn_count int32 Total count of network connections made and received by this process.
os_type string Operating system type of the endpoint, e.g. Windows, Linux, Osx.
parent_name string Name of the parent process.
parent_md5 string MD5 of the parent process.
parent_pid int32 PID of parent process.
parent_unique_id string Parent process unique identifier.
path string Full path to the executable file backing this process.
process_md5 string MD5 of the executable file backing this process.
process_name string Filename of the executable backing this process.
process_pid int32 PID of this process.
regmod_count int32 Total count of registry modifications made by this process.
segment_id int32 For internal use
sensor_id int32 Endpoint identifier.
start string Start time of this process in endpoint local time. Example: 2014-02-04T16:23:22.516Z
unique_id string Process unique Id
username string User context in which the process executed.

Example:

  {
    "server_name": "cb-enterprise-testing.local",
    "docs": [
        {"process_md5": "a3ccfd0aa0b17fd23aa9fd0d84b86c05",
         "sensor_id": 1,
         "modload_count": 49,
         "parent_unique_id": "00000001-0000-09e4-01cf-a5dee70168f2-00000001",
         "cmdline": "\"c:\\users\\admin\\desktop\\putty.exe\" ",
         "filemod_count": 0,
         "id": "00000001-0000-afbc-01cf-b31b9e83777f",
         "parent_name": "explorer.exe",
         "parent_md5": "332feab1435662fc6c672e25beb37be3",
         "group": "Default Group",
         "hostname": "WIN8-TEST",
         "last_update": "2014-08-08T15:15:47.544Z",
         "start": "2014-08-08T15:15:42.193Z",
         "regmod_count": 6,
         "process_pid": 44988,
         "username": "win8-test\\admin",
         "process_name": "putty.exe",
         "path": "c:\\users\\admin\\desktop\\putty.exe",
         "netconn_count": 1,
         "parent_pid": 2532,
         "segment_id": 1,
         "host_type": "workstation",
         "os_type": "windows",
         "childproc_count": 0,
         "unique_id": "00000001-0000-afbc-01cf-b31b9e83777f-00000001"}
     ],
     "event_timestamp": 1407362104.19,
     "watchlist_id": 10,
     "cb_version": "4.2.1.140808.1059",
     "watchlist_name": "Tor Feed"
  }
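A consumer callback for this message can decode the JSON body and walk the docs list. A minimal sketch using only fields documented in the tables above:

```python
import json

def summarize_watchlist_hit(body):
    """Produce one summary line per matching process document in a
    watchlist.hit.process (or watchlist.storage.hit.process) body."""
    msg = json.loads(body)
    lines = []
    for doc in msg["docs"]:
        lines.append("watchlist %r matched %s (pid %d) on host %s" % (
            msg["watchlist_name"], doc["process_name"],
            doc["process_pid"], doc["hostname"]))
    return lines
```

Fed the example message above, this would report that the “Tor Feed” watchlist matched putty.exe on host WIN8-TEST.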

Binary Watchlist Hit

Name: watchlist.hit.binary or watchlist.storage.hit.binary. The binary that caused the watchlist hit is guaranteed to exist in a binary API query when the watchlist.storage.hit.binary message is received. Both messages have the same structure.

watchlist.hit.binary is a JSON structure with the following entries:

name type description
cb_version string Carbon Black server version
event_timestamp float Timestamp when event was published, in seconds since the epoch
watchlist_id int32 Identifier of the watchlist that matched
watchlist_name string Name of watchlist that matched
server_name string Name of the Carbon Black Server
docs list List of one or more matching binary documents; see next table

Each matching binary document is a JSON structure with the following entries:

name type description
copied_mod_len int32 Number of bytes copied to server
endpoint string Hostname and sensor ID of the first endpoint on which this binary was observed.
group string First sensor group on which this binary was observed
digsig_issuer string If digitally signed, the issuer.
digsig_publisher string If digitally signed, the publisher.
digsig_result string If digitally signed, the human-readable status. See notes.
digsig_result_code int32 For internal use.
digsig_sign_time string If digitally signed, the sign time.
digsig_subject string If digitally signed, the subject.
is_executable_image bool True if the binary is a standalone executable (as compared to a library).
is_64bit bool True if architecture is x64 (versus x86).
md5 string MD5 of the binary
observed_filename string Full path to the executable backing the process
orig_mod_len int32 Size in bytes of the binary at the time of observation on the endpoint.
os_type string Operating system type, e.g. Windows, Linux, Osx.
server_added_timestamp string The time this binary was first seen by the server
timestamp string Time binary was first observed (in endpoint time)
watchlists list List of matching watchlists.
file_version string File Version (Windows Only)
product_name string Product Name (Windows Only)
company_name string Company Name (Windows Only)
internal_name string Internal Name (Windows Only)
original_filename string Internal Original Filename (Windows Only)
file_desc string File Description (Windows only)
product_desc string Product Description (Windows only)
product_version string Product Version (Windows only)
comments string Comment String (Windows only)
legal_copyright string Legal copyright string (Windows only)
legal_trademark string Legal trademark string (Windows only)
private_build string Private build string (Windows only)

Example:

  {
    "server_name": "cb-enterprise-testing.local",
    "docs": [
        {"digsig_result": "Signed",
         "observed_filename": ["c:\\windows\\system32\\prncache.dll"],
         "product_version": "6.1.7601.17514",
         "signed": "Signed",
         "digsig_sign_time": "2010-11-21T00:37:00Z",
         "is_executable_image": false,
         "orig_mod_len": 183808,
         "is_64bit": true,
         "digsig_publisher": "Microsoft Corporation",
         "group": ["Default Group"],
         "file_version": "6.1.7601.17514 (win7sp1_rtm.101119-1850)",
         "company_name": "Microsoft Corporation",
         "internal_name": "PrintCache",
         "product_name": "Microsoft\u00ae Windows\u00ae Operating System",
         "digsig_result_code": "0",
         "timestamp": "2014-08-09T11:19:04.009Z",
         "copied_mod_len": 183808,
         "server_added_timestamp": "2014-08-09T11:19:04.009Z",
         "md5": "A1CDE92DDC170D307DB3C5BAA348811B",
         "endpoint": ["WIN8-TEST|1"],
         "legal_copyright": "\u00a9 Microsoft Corporation. All rights reserved.",
         "original_filename": "PrnCache.dll",
         "os_type": "Windows",
         "file_desc": "Print UI Cache"}
     ],
     "event_timestamp": 1407583203.5,
     "watchlist_id": 10,
     "cb_version": "4.2.1.140811.29",
     "watchlist_name": "SRS Trust"
  }

Notes:

The digsig_result field can be one of eight values:

  • Signed
  • Unsigned
  • Bad Signature
  • Invalid Signature
  • Expired
  • Invalid Chain
  • Untrusted Root
  • Explicit Distrust

Feed Hits

Like watchlists, feeds are also created to trigger on new process events or new binaries.

For historical reasons, feed hits are segmented by the way the feed was evaluated as well as whether the data has been committed to disk and is available to query via the API. The three categories are ingress, storage, and query.

Ingress feed events are published as the matching endpoint data arrives from the sensor. These ingress feed events therefore provide the earliest available notification of the endpoint activity. Ingress events are published prior to committing the data to the backend data store (SOLR), and therefore it may be up to fifteen minutes before the data is discoverable via search. The latency is partially dependent on the configured SOLR soft-commit (auto-commit) interval.

Storage feed events are published as the data is committed to the backend data store (SOLR). As compared to ingress feed events, storage feed events happen later in time, but when all data is fully indexed and searchable via SOLR and therefore the CB client API.

Query feed events are published when a query string provided by a query feed matches committed data. Therefore, any processes or binaries published via a query feed event are fully indexed and searchable via the REST API.

New Binary Notifications

  • Binary notifications
    • First instance of any endpoint observing a binary
    • First instance of an individual endpoint observing a binary
    • First instance of a sensor group observing a binary
    • Binary upload complete

Binary Observed for the first time on any endpoint

Name: binaryinfo.observed

binaryinfo.observed is a JSON structure with the following entries:

name type description
md5 string MD5 of the binary
event_timestamp float Timestamp of the feed match, measured in number of seconds since the epoch
scores dict Dictionary of Alliance feed scores

Example Event:

{
    "md5": "9E4B0E7472B4CEBA9E17F440B8CB0AB8",
    "event_timestamp": 1397248033.914,
    "scores":
      {
        "alliance_score_virustotal": 16
      }
}
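A consumer might act only on binaries whose Alliance scores cross a threshold. The helper name and threshold value below are illustrative:

```python
def flagged_scores(event, threshold=50):
    """Return the Alliance feed scores in a binaryinfo.observed event
    that meet or exceed a threshold (threshold value is illustrative)."""
    return {feed: score
            for feed, score in event.get("scores", {}).items()
            if score >= threshold}
```

Applied to the example event above with a threshold of 10, this returns the virustotal score; with the default threshold of 50, it returns nothing.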

Binary Observed on an individual endpoint for the first time

Name: binaryinfo.host.observed

binaryinfo.host.observed is a JSON structure with the following entries:

name type description
md5 string MD5 of the binary.
hostname string Hostname of endpoint on which binary was observed
sensor_id int32 Sensor Id of endpoint on which binary was observed
event_timestamp float Timestamp of the feed match, measured in number of seconds since the epoch
scores dict Dictionary of Alliance feed scores
watchlists dict Dictionary of already-matched watchlists

Example Event:

{
    "md5": "9E4B0E7472B4CEBA9E17F440B8CB0AB8",
    "hostname": "FS-HQ",
    "sensor_id": 1021,
    "event_timestamp": 1397248033.914,
    "scores":
      {
        "alliance_score_virustotal": 16
      },
    "watchlists":
      {
        "watchlist_7": "2014-02-13T00:30:11.247Z",
        "watchlist_9": "2014-02-13T00:21:13.009Z"
      }
}

Binary Observed within a sensor group for the first time

Name: binaryinfo.group.observed

binaryinfo.group.observed is a JSON structure with the following entries:

name type description
md5 string MD5 of the binary
group string Sensor group name on which the binary was observed
event_timestamp float Timestamp of the feed match, measured in number of seconds since the epoch
scores dict Dictionary of Alliance feed scores
watchlists dict Dictionary of already-matched watchlists

Example Event:

{
    "md5": "9E4B0E7472B4CEBA9E17F440B8CB0AB8",
    "group": "Default Group",
    "event_timestamp": 1397248033.914,
    "scores":
      {
        "alliance_score_virustotal": 16
      },
    "watchlists":
      {
        "watchlist_7": "2014-02-13T00:30:11.247Z",
        "watchlist_9": "2014-02-13T00:21:13.009Z"
      }
}

Binary Upload Complete

The Carbon Black server can be configured to store a copy of all unique binary (executable) files observed on endpoints.
This includes Windows PE files such as EXEs and DLLs, Linux ELF files, and similar. Upon the arrival of a new binary file, a binarystore event is published.

This event provides an easy way to trigger custom analysis of a binary, including static or dynamic analysis, integration with a third-party analysis system, or custom archiving.

Name: binarystore.file.added

binarystore.file.added is a JSON structure with the following entries:

name type description
md5 string MD5 sum of the binary file.
size int32 Size of the original binary, in bytes.
compressed_size int32 Size of the zip archive containing the binary file on the Carbon Black server
event_timestamp float Timestamp of the binary file addition, measured in number of seconds since the epoch
file_path string Path, on the server disk, of the copied binary file (zipped).

Example Event:

{
    "md5": "9E4B0E7472B4CEBA9E17F440B8CB0AB8",
    "file_path": "/var/cb/data/modulestore/FE2/AFA/FE2AFACC396DC37F51421DE4A08DA8A7.zip",
    "size": 320000,
    "compressed_size": 126857,
    "event_timestamp": 1397248033.914
}

Notes:

  • The Carbon Black Server can be configured to delete binary store files from the Carbon Black server after uploading to the Alliance Server (if sharing with the Alliance is enabled). These files are still retrievable via the Carbon Black API, although there may be bandwidth or transfer time concerns. See the AllianceClientNoDeleteOnUpload configuration option in cb.conf.
  • The Carbon Black Server can be configured to automatically delete binary store files from the Carbon Black server due to disk space constraints. See the KeepAllModuleFiles configuration option in cb.conf.

Last modified on May 5, 2020