Lumberjack Output

The lumberjack bolt is very similar to the syslog bolt, except that it encodes the data using the Lumberjack protocol. That protocol has three characteristics:

  1. It is acknowledged: the server acknowledges each received log once that log has been fully processed.
  2. It efficiently supports a key-value format: Lumberjack encodes key-value pairs using a binary format that the server can decode efficiently.
  3. The punchplatform Lumberjack protocol supports an additional keep alive mechanism.
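To make the second point concrete, here is a minimal sketch of the Lumberjack v2 wire format (data and acknowledgement frames), written in Python for illustration only; it is not the punchplatform implementation:

```python
import struct

def encode_data_frame(sequence: int, pairs: dict) -> bytes:
    """Encode one Lumberjack v2 'D' (data) frame: a sequence number
    followed by length-prefixed key/value pairs."""
    frame = bytearray(b"2D")                      # protocol version '2', frame type 'D'
    frame += struct.pack(">II", sequence, len(pairs))
    for key, value in pairs.items():
        k, v = key.encode("utf-8"), value.encode("utf-8")
        frame += struct.pack(">I", len(k)) + k    # uint32 big-endian length prefix
        frame += struct.pack(">I", len(v)) + v
    return bytes(frame)

def decode_ack(frame: bytes) -> int:
    """Decode a Lumberjack v2 'A' (ack) frame: the server sends back the
    highest fully processed sequence number."""
    assert frame[:2] == b"2A"
    return struct.unpack(">I", frame[2:6])[0]

# Hypothetical key/value pairs, as a bolt would encode a received log
frame = encode_data_frame(1, {"log": "user logged in", "host": "srv-01"})
print(frame[:2])  # b'2D'
```

Because every data frame carries a sequence number, the sender can match each acknowledgement against the logs it has emitted, which is what makes the protocol lossless end to end.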

Refer to the Syslog bolt explanation: the Lumberjack bolt supports destination groups the same way. Per-stream destinations are also supported.

Here is an example configuration:

  {
    "type": "lumberjack_bolt",
    "settings": {
      "destination": [
        {
          "compression": false,
          "host": "target.ip.address",
          "port": 9999,
          "drop_if_queue_full": false,
          "queue_size": 1000,
          "queue_flush_size": 1000,
          "queue_flush_interval_ms": 3000,
          "connect_retry_interval_ms": 3000,
          "connect_timeout_ms": 3000,

          # Use a keep alive applicative message exchange to make sure
          # the server is alive.
          # Here we send such a keep alive message every 30 seconds
          "keep_alive_interval": 30,

          # and we give 20 seconds to the server to send us back the
          # corresponding acknowledgement.
          # If not received in that time interval the socket will be closed
          "keep_alive_timeout": 20,

          "ssl": true,
          "ssl_provider": "JDK",
          "ssl_private_key": "/opt/keys/punchplatform.key.pkcs8",
          "ssl_certificate": "/opt/keys/punchplatform.crt",
          "ssl_trusted_certificate": "/opt/keys/ca.pem"
        }
      ]
    },
    "storm_settings": {
      "executors": 1,
      "component": "tcp_spout",
      "publish": [
        {
          # the "log" field name is only an example
          "stream": "logs",
          "fields": ["log"]
        }
      ],
      "subscribe": [
        {
          "component": "previous_spout_or_bolt",
          "stream": "logs",
          "grouping": "localOrShuffle"
        }
      ]
    }
  }

Note the keep alive options: they check for connection aliveness and close inactive sockets.
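The keep alive logic can be sketched as follows. This is a plain Python illustration of the settings above (`keep_alive_interval`, `keep_alive_timeout`), not the actual bolt code; the function names are hypothetical:

```python
import time

def probe_connection(send_keep_alive, ack_received, timeout_s=20):
    """Send one applicative keep alive probe and wait up to `timeout_s`
    seconds for the server's acknowledgement. Returns True if the peer
    answered in time, False if the socket should be closed."""
    send_keep_alive()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if ack_received():
            return True
        time.sleep(0.05)
    return False

# A bolt would call probe_connection() every keep_alive_interval seconds
# (30 in the example above) and close then reopen the socket whenever
# the probe returns False.
```

This applicative exchange detects dead peers that a plain TCP connection would not reveal, since a half-open socket can stay silently "established" for a long time.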


To learn more about encryption options, refer to the dedicated SSL/TLS configuration section.


The Lumberjack bolt supports two compression modes. If you use the compression property, compression is performed at the socket level using the Netty ZLib compression. If instead you use the lumberjack_compression parameter, compression is performed as part of the Lumberjack frame.


Netty compression is more efficient, but works only if the peer is a Punchplatform Lumberjack spout. If you send your data to a standard Lumberjack server such as a Logstash daemon, use Lumberjack compression instead.
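The difference between the two modes can be sketched like this. Socket-level compression deflates the whole byte stream, while Lumberjack compression wraps deflated frames in a compressed ('C') Lumberjack frame that standard servers understand. This is a Python illustration under those assumptions, not the bolt's actual code:

```python
import struct
import zlib

def socket_level_compress(stream_bytes: bytes) -> bytes:
    """'compression' mode sketch: the whole byte stream is deflated at
    the socket level (as Netty's ZLib codec does); the frames inside are
    opaque, so the peer must apply the same socket-level codec."""
    return zlib.compress(stream_bytes)

def lumberjack_compress(frames: bytes) -> bytes:
    """'lumberjack_compression' mode sketch: already-encoded frames are
    deflated and wrapped in a '2C' (compressed) Lumberjack frame, which
    any standard Lumberjack server can decode on its own."""
    payload = zlib.compress(frames)
    return b"2C" + struct.pack(">I", len(payload)) + payload

frames = b"2D..."   # stand-in for already-encoded data frames
assert lumberjack_compress(frames)[:2] == b"2C"
```

In other words, the first mode is a transport concern shared by both ends of the socket, while the second stays inside the protocol itself, which is why it interoperates with servers such as Logstash.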

Streams And Fields

The Lumberjack bolt works nicely with Storm streams and fields: it encodes the received fields in a Lumberjack frame. This is illustrated next:


Make sure you understand the spout and bolt stream and field fundamental concepts.
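As a sketch of that mapping, each Storm field of the subscribed stream becomes one key-value pair of the emitted Lumberjack frame (the field names below are hypothetical):

```python
def fields_to_pairs(field_names, tuple_values):
    """Sketch: each received Storm field becomes one Lumberjack
    key/value pair, keyed by the declared field name."""
    return dict(zip(field_names, tuple_values))

# A tuple received on the "logs" stream with fields ["log", "host"]:
pairs = fields_to_pairs(["log", "host"], ["user logged in", "srv-01"])
print(pairs)  # {'log': 'user logged in', 'host': 'srv-01'}
```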

Refer to the Lumberjack Bolt javadoc documentation.


See metrics_lumberjack_bolt