Carbon Black EDR (Endpoint Detection and Response) is the new name for the product formerly called CB Response.
Edit /etc/cb/cb.conf and add or uncomment the following line:
DatastoreBroadcastEventTypes=*
Take note of the RabbitMQ configuration in /etc/cb/cb.conf: look for the lines that start with RabbitMQ. You are looking for the following values; we will use them shortly to connect to the event bus (a short sketch after this list shows one way to pull them out programmatically):
RabbitMQPort
RabbitMQUser
RabbitMQPassword
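As a convenience, here is a minimal sketch of pulling those values out of the file programmatically. It assumes only the plain key=value format of cb.conf:

def read_rabbitmq_settings(path="/etc/cb/cb.conf"):
    # Collect every RabbitMQ* key=value pair from cb.conf.
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("RabbitMQ") and "=" in line:
                key, _, value = line.partition("=")
                settings[key] = value
    return settings

print read_rabbitmq_settings()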
Add the following to /etc/sysconfig/iptables, right below the line with --dport 443, in order to enable communication with the RabbitMQ broker:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5004 -j ACCEPT
Restart the iptables and cb-enterprise services:
service iptables restart
service cb-enterprise restart
At this point, the RabbitMQ broker is available for connections from external clients.
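If you want to sanity-check that the broker is reachable before going further, a minimal connectivity test with pika (the Python RabbitMQ client referenced later in this post) looks like the following. The hostname and password are placeholders for the values from your cb.conf, and 5004 matches the port we opened in iptables above:

import pika

# Placeholders: substitute RabbitMQUser/RabbitMQPassword from cb.conf.
credentials = pika.PlainCredentials("cb", "<RabbitMQPassword>")
parameters = pika.ConnectionParameters(host="<CB Server DNS/IP address>",
                                       port=5004,
                                       credentials=credentials)
connection = pika.BlockingConnection(parameters)
print "connected:", connection.is_open  # True if the broker accepted us
connection.close()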
The Event Forwarder is the most straightforward way to get JSON-formatted events from the event bus. If you want to develop against the event bus API directly, you can use the eventBusToJson.py Python script to act as a consumer of the event bus and print out the events in real time as they're received by the server.
To start, you'll need a system with Python 2.7 installed. The most straightforward way to run through this is to have a VM or physical machine with CentOS 7 installed. Ensure pip is installed:
easy_install pip
Clone the cbapi repository on your local machine and cd into it:
git clone https://github.com/carbonblack/cbapi.git
cd cbapi
Install the dependencies:
pip install -r server_apis/python/requirements.txt
Run the script:
cd server_apis/python/example/bulk
./eventBusToJson.py -u cb -p <RabbitMQPassword> -c https://<CB Server DNS/IP address>
The script will now output JSON structures for every event that’s processed through the CB server on stdout.
Whether the broker holds on to events while your consumer is disconnected is determined by the "durable" option when you connect to the server. See the Pika documentation for more information on creating durable queues.
If you make the queue durable, ensure that once you start consuming events, you keep the consumer running at all times. Running the consumer on and off to drain the queue on a periodic basis will cause the RabbitMQ process to consume memory and disk space very quickly, causing severe performance issues. Durability is there to protect against data loss during bug fixes, network partitions, and hardware failures, not to support a periodic "drain" of the queue.
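For illustration, here is a rough sketch of a durable consumer written against pika 0.x (whose basic_consume takes the callback first; pika 1.x reverses the argument order). The exchange name api.events and the routing-key pattern are assumptions on my part, based on the ingress.event.* type names shown in the example event below:

import pika

credentials = pika.PlainCredentials("cb", "<RabbitMQPassword>")
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host="<CB Server DNS/IP address>", port=5004, credentials=credentials))
channel = connection.channel()

# durable=True asks RabbitMQ to persist the queue and its backlog across
# broker restarts; remember the caveat above about keeping it drained.
channel.queue_declare(queue="my_event_queue", durable=True)
channel.queue_bind(exchange="api.events", queue="my_event_queue",
                   routing_key="ingress.event.#")

def on_message(channel, method, properties, body):
    print body  # raw event payload
    channel.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(on_message, queue="my_event_queue")
channel.start_consuming()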
Here is an example event:
{
  "sensor_id": 1,
  "event_type": "childproc",
  "created": true,
  "timestamp": 1435343070,
  "child_process_guid": "00000001-0000-0e0c-01d0-b03d57c1746b",
  "process_guid": 6217766278483900149,
  "type": "ingress.event.childproc",
  "md5": "98FA788238E71D9563D4BF177A4FC22C"
}
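Once you have events as JSON, dispatching on the type field is straightforward. A small sketch using only the standard library, with the childproc event above in mind:

import json

def handle_event(body):
    # body is the raw message payload from the event bus.
    event = json.loads(body)
    if event.get("type") == "ingress.event.childproc":
        print event["child_process_guid"], event["md5"]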
Most syslog servers cannot handle the load that this event stream will place on them. For example, on my test CB server with just 2-3 clients, I received multiple errors while testing the script pushing events to the local rsyslog daemon, such as:
Jun 26 14:21:53 cbtest rsyslogd-2177: imuxsock begins to drop messages from pid 37864 due to rate-limiting
You will also encounter issues with the size of syslog messages; most syslog servers will silently truncate messages larger than a set size (usually around 500 bytes to 1 KB). As a result of these issues, I would strongly suggest not using syslog as your event queue between CB and your analytics/data storage platform.
Requesting large numbers of process documents in a single query will cause timeouts, so you will want to request data in smaller batches.
The most straightforward way to do this, if you intend to iterate over all results of a query, is to use the process_search_iter helper function. It takes the same arguments as process_search, but returns an iterator of process dictionaries instead. So you could do something like:
for process in process_search_iter('mysearch'):
    print process['start'], process['path']
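Purely to illustrate what batching buys you, here is a sketch of a paging loop in the spirit of process_search_iter. Here process_search is treated as a stand-in that accepts Solr-style start/rows arguments and returns a list of process dictionaries (the real return shape is a bit richer):

def paged_process_search(query, batch_size=100):
    # Yield processes one at a time while keeping each request small
    # enough to avoid server-side timeouts.
    start = 0
    while True:
        results = process_search(query, start=start, rows=batch_size)
        if not results:
            break
        for process in results:
            yield process
        start += batch_size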
Another way to increase the speed of API calls is to disable faceting, if you're not using that feature. Append the facet_enable=False option to process_search or process_search_iter.
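Assuming the keyword-argument form, that looks like:

for process in process_search_iter('mysearch', facet_enable=False):
    print process['start'], process['path']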