CB Event Forwarder 4.0.0 Beta



4.0.0 BETA PRERELEASE

In general, the new cb-event-forwarder 4.0 is designed to be a (nearly) drop-in replacement for previous versions of the event forwarder. It supports the same features, along with a number of oft-requested enhancements, suggestions and bugfixes, but uses a new configuration format: YAML.

  • configuration format changed to YAML - old configurations will not work
  • architectural overhaul
    • plugins - output
    • new output format option: set format: template and provide a golang template to format the outgoing CbR event messages
    • multiple-input multiple-output pipeline for events
      • can consume events from multiple CbR mq systems in input:
      • can output to multiple event types & formats in output:
    • (optional) event filtering at the event forwarder (between input and output, applied to all events seen by the forwarder) using golang’s templating language
      • simply provide filter: { template: "..." } where the template evaluates to KEEP or DROP to keep or drop a message
  • output format updates and tweaks
    • very similar to the previous format, with standardization of alert/feeds/watchlist.# events for SIEMs

Upgrades can be done via yum upgrade cb-event-forwarder; you will need to provide a new working config in YAML. For best results, stop the old cb-event-forwarder service before upgrading. Note that the old cb-event-forwarder had bugs that could leave zombie processes behind even after the service is stopped; it is recommended to killall cb-event-forwarder before upgrading as well, to kill those zombie processes. Correction: use the RPM available on GitHub, which will be updated as needed during the pre-release beta testing phase.

Configuration changes

The configuration file location still defaults to /etc/cb/integrations/event-forwarder/cb-event-forwarder.conf. The configuration file format has been changed to YAML, but the names of the keys and their meanings have been almost entirely preserved.

Here’s a basic configuration for the new 4.0.0 forwarder showing the new concise format. The input stanza defines a dictionary mapping cbserver identifiers to the configuration used to connect to each server’s message bus.

Optional TLS parameters are available, and RabbitMQ credentials can be set explicitly.

The output: stanza defines an array of outputs, where each element provides the configuration for one output type - http, file, socket, splunk, and so on. The existing output types from prior versions are still available.

In general, the input stanza contains a number of input elements in a dictionary, each of which contains the input parameters for a CbR server/cluster. The general format:

input:
    name:
        config-keys: config-values
    name2:
        config-keys: config-values

Here’s a more complete example showing more of the options available. See the README.md that ships with the forwarder for an exhaustive discussion of these settings and their values.

input:
    cbresponseservername:
            cb_server_url:  https://developer.carbonblack.com
            #... and more keys describing the desired configuration
            # explicit rabbit MQ creds or default to trying /etc/cb/cb.conf locally
            # rabbit_mq_password, rabbit_mq_username
            # optional TLS 
            #tls:
            #    tls_client_cert: cert.cert   
            #    tls_client_key: key.key
            #    tls_verify: false
            # optional - defaults to ALL and ALL RAW events
            #events_map:
            #    events_watchlist:
            #        - watchlist.#
            #    events_feed: 
            #        - feed.#
            #    events_alert:
            #        - alert.#
            #    events_raw_sensor: 
            #        - ingress.event.process
            #        - ingress.event.procstart
            #        - ingress.event.netconn
            #        - ingress.event.procend
            #        - ingress.event.childproc
            #        - ingress.event.moduleload
            #        - ingress.event.module
            #        - ingress.event.filemod
            #        - ingress.event.regmod
            #        - ingress.event.tamper
            #        - ingress.event.crossprocopen
            #        - ingress.event.remotethread
            #        - ingress.event.processblock
            #        - ingress.event.emetmitigation
            #    events_binary_observed:
            #        - binaryinfo.#
            #    events_binary_upload: 
            #        - binarystore.#
            #    events_storage_partition: 
            #        - events.partition.#
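Uncommented, a minimal working input stanza might look like the following sketch (the server URL and credentials are placeholders; the key names are taken from the commented example above):

input:
    mycbserver:
        cb_server_url: https://cbserver.example.com
        # explicit credentials; omit these to fall back to /etc/cb/cb.conf locally
        rabbit_mq_username: cb
        rabbit_mq_password: changeme
        events_map:
            events_alert:
                - alert.#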

Output section

Continuing from above, the output section defines a list of typed output options that define the destination, transport mechanism and other particulars of one or more outputs. Each output receives all of the input events that are not dropped by the filter.

output:
    - file: 
        path: mydesiredoutput.json
        format:
            type: json

The original output types and formats are still supported:

  • Output types: http, file, socket, splunk, syslog, plugin
  • Output formats: LEEF, JSON, CEF, custom
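Because output: is an array, several outputs can be configured at once, each receiving every forwarded event. Here is a sketch reusing the file output keys shown above (the paths are placeholders, and type: leef is assumed to follow the same pattern as type: json):

output:
    - file:
        path: /var/cb/data/events.json
        format:
            type: json
    - file:
        path: /var/cb/data/events.leef
        format:
            type: leef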

Filtering

The event forwarder now supports an optional filtering stage that can be used to cull messages in-flight.

Simply provide a filter: section specifying a golang text/template that evaluates to KEEP or DROP.

filter:
    template: "{{if cond}}KEEP{{else}}DROP{{end}}"

The template can inspect arbitrary elements of the in-flight message, but beware of incurring overhead by performing complex or slow operations, as they will occur for every event.

4.0 Architectural Changes

The CbR event forwarder has always featured a robust processing pipeline, which has been enhanced to highlight three stages:

  1) input - a number of message queues are defined, along with which events to subscribe to
  2) filter - an optional stage used to filter messages at the forwarder if desired
  3) output - a number of output destinations and formats, e.g. an HTTP server plus a backup flat file
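Putting the stages together, a configuration is simply the three top-level stanzas side by side (all values below are placeholders):

input:
    mycbserver:
        cb_server_url: https://cbserver.example.com

filter:
    template: "{{if cond}}KEEP{{else}}DROP{{end}}"

output:
    - file:
        path: events.json
        format:
            type: json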

The primary use case is still to forward data from a single CbR server/cluster to one destination.

PLUGINS

Users now have the ability to write their own output plugins in golang.

This allows advanced users full control of output if desired and the ability to add custom outputs without upstream contributions.

Two example output plugins are provided. The kafka output was ‘promoted’ to a plugin, being both a choice example of how to write a CbR output plugin and an example of a motivation for doing so.

Additionally, it uses go-kafka-confluent instead of sarama. This would normally require the system to have librdkafka.so, though the plugin should provide this by linking it in statically.

See plugins/output/kafka

Plugins are an experimental feature subject to change, alteration, and possible removal in the future (however unlikely - they’re awesome!)


Output format

The general marshalling of the output events has been tweaked.

The output formats available are LEEF, JSON, CEF (as before) and custom templates.

The output format is now customizable: use the template format type to specify a golang template for formatting the outgoing Cb Event Forwarder messages.

In the output element, use format: { type: template, template: "{{template goes here}}" } or the equivalent YAML.

The template can be any golang template; see https://golang.org/pkg/text/template
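For example, a file output rendering each event through a custom template might look like the following sketch (the .type and .timestamp fields are illustrative, not a documented schema):

output:
    - file:
        path: events.txt
        format:
            type: template
            template: "{{.type}}|{{.timestamp}}"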

Operations

4.0.0 is operated the same way as in the past, although the available statistics have been tweaked to fit the new processing pipeline:

  • The daemon is now managed by the “upstart” system in CentOS 6.

    • Use the start and stop commands to control the daemon: start cb-event-forwarder.
  • The daemon now supports the SIGHUP signal.

    • When configured with a file output, SIGHUP will immediately roll over the event file
    • When configured with an s3 output, SIGHUP will immediately roll over the current log and flush the logs to S3
  • The cb-event-forwarder now starts an HTTP server on port 33706 with configuration and status reporting. A raw JSON output is available at http://<host>:33706/debug/vars. Note that this port may have to be opened via iptables for it to be accessed remotely.