CB Enterprise Response FAQ

Event Bus

Q: How do I set up the Carbon Black server for the event bus?

  1. Edit /etc/cb/cb.conf: add/uncomment the following line:

    DatastoreBroadcastEventTypes=*
    
  2. Take note of the RabbitMQ configuration in /etc/cb/cb.conf: look for the lines that start with RabbitMQ. You are looking for the following settings; we will use them shortly to connect to the event bus.

    RabbitMQPort
    RabbitMQUser
    RabbitMQPassword
    
  3. Add the following to /etc/sysconfig/iptables right below the line with --dport 443 in order to enable communication with the RabbitMQ broker:

    -A INPUT -p tcp -m state --state NEW -m tcp --dport 5004 -j ACCEPT
    
  4. Restart the iptables and cb-enterprise services:

    service iptables restart
    service cb-enterprise restart
    

At this point, the RabbitMQ broker is available for connections from external clients.
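
To verify that an external client can connect, here is a minimal consumer sketch using the pika Python library (the 0.x API that was current when this FAQ was written). The exchange name api.events is an assumption based on a typical CB event bus setup; substitute the RabbitMQUser/RabbitMQPassword values noted above for the placeholders below. Note that many events on the bus are protobuf-encoded, so this sketch only prints each message’s routing key and size.

import pika

# Credentials come from the RabbitMQUser/RabbitMQPassword lines in cb.conf;
# port 5004 matches the iptables rule added above.
credentials = pika.PlainCredentials('cb', '<RabbitMQPassword>')
parameters = pika.ConnectionParameters('<CB Server DNS/IP address>', 5004, '/', credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()

# Bind a temporary, exclusive queue to every routing key on the exchange.
result = channel.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue
channel.queue_bind(exchange='api.events', queue=queue_name, routing_key='#')

def on_message(channel, method, properties, body):
    # The routing key (e.g. ingress.event.childproc) identifies the event type.
    print method.routing_key, len(body)

channel.basic_consume(on_message, queue=queue_name, no_ack=True)
channel.start_consuming()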

Q: How can I run the example script to pull JSON-formatted events from the event bus?

The Event Forwarder is the most straightforward way to get JSON-formatted events from the event bus.

If you want to develop against the event bus API directly, then you can use the eventBusToJson.py Python script to act as a consumer of the event bus and print out the events in real time as they’re received by the server.

To start, you’ll need a system with Python 2.7 installed; a VM or physical machine running CentOS 7 works well. Ensure pip is installed:

easy_install pip

Clone the cbapi repository on your local machine and cd into it:

git clone https://github.com/carbonblack/cbapi.git
cd cbapi

Install the dependencies:

pip install -r server_apis/python/requirements.txt

Run the script:

cd server_apis/python/example/bulk
./eventBusToJson.py -u cb -p <RabbitMQPassword> -c https://<CB Server DNS/IP address>

The script will now output a JSON structure on stdout for every event that’s processed through the CB server.

Q: Does the default configuration of RabbitMQ support data persistence?

This is determined by the “durable” option when you declare your queue on the server. See the pika documentation for more information on creating durable queues.

If you make the queue durable, ensure that once you start consuming events, you keep the consumer running at all times. Running the consumer on and off to drain the queue on a periodic basis will cause the RabbitMQ process to grow in memory and disk space very quickly, causing severe performance issues. The durability is there to protect against data loss during bug fixes, network partitions, and hardware failures – not to support a periodic “drain” of the queue.
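
For example, a durable, named queue can be declared and bound like this (reusing the channel from the consumer sketch above; the queue name cb-event-queue is hypothetical):

# durable=True makes the queue itself survive a broker restart; note that
# the messages inside it also need to be published as persistent to survive.
channel.queue_declare(queue='cb-event-queue', durable=True)
channel.queue_bind(exchange='api.events', queue='cb-event-queue', routing_key='#')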

Q: What do the events look like once they’re output from the script?

Here is an example event:

{
    "sensor_id": 1, 
    "event_type": "childproc", 
    "created": true, 
    "timestamp": 1435343070, 
    "child_process_guid": "00000001-0000-0e0c-01d0-b03d57c1746b", 
    "process_guid": 6217766278483900149, 
    "type": "ingress.event.childproc", 
    "md5": "98FA788238E71D9563D4BF177A4FC22C"
}
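
As a hypothetical post-processing example, assuming the script emits one JSON object per line on stdout, a small filter could pick out child process creation events:

import json
import sys

# Read JSON events line by line and print details of new child processes.
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    event = json.loads(line)
    if event.get('type') == 'ingress.event.childproc' and event.get('created'):
        print event['md5'], event['child_process_guid']

You would pipe the output of eventBusToJson.py into a filter like this.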

Q: Should I use Syslog or the Event Forwarder?

Most syslog servers cannot handle the load that CB will place on them. For example, on my test CB server with only 2-3 clients, pushing the script’s output to the local rsyslog daemon produced multiple rate-limiting errors, such as:

Jun 26 14:21:53 cbtest rsyslogd-2177: imuxsock begins to drop messages from pid 37864 due to rate-limiting

You will also encounter issues with the size of syslog messages; most syslog servers will silently truncate messages larger than a set size (usually around 500 bytes to 1 KB). As a result of these issues, I strongly suggest not using syslog as your event queue between CB and your analytics/data storage platform.

Performance

Q: How do I increase the performance of large process/binary queries?

Requesting large numbers of process documents in a single query will cause timeouts, so you will want to request data in smaller batches. The easiest way to do this, if you intend to iterate over all results of a query, is to use the process_search_iter helper function. It takes the same arguments as process_search, but returns an iterator of process dictionaries instead. So you could do something like:

for process in process_search_iter('mysearch'):
    print process['start'], process['path']

Another way to increase the speed of API calls is to disable faceting if you’re not using that feature: pass the facet_enable=False option to process_search or process_search_iter.
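
For example, assuming the option is accepted as a keyword argument, the loop above becomes:

for process in process_search_iter('mysearch', facet_enable=False):
    print process['start'], process['path']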

Last modified on January 4, 2016