This chapter describes the internal architecture of LTRs, LMRs and LMCs. You can skip it if you are only interested in writing or running log parsers.


An LTR (Log TRansport) must securely transfer logs from endpoints to a central area.

An LTR is usually deployed on one (non-resilient) or three servers. A Kafka instance is in charge of log buffering. Storm PunchPlatform topologies forward the logs from the customer site to a distant LMR instance.

Here are the associated topologies:

  • Receiving Topologies: write logs to the local Kafka broker. These topologies consist of

    • a Syslog Spout that receives logs from remote sources. It can be a TCP, UDP or Lumberjack spout depending on the supported protocol.
    • a Kafka Bolt that in turn writes the logs to Kafka.
  • Forwarding Topologies: forward the logs to a remote LMR or LMC instance. These are composed of

    • a Kafka Spout that reads the buffered logs back from the local broker.
    • a Lumberjack Bolt that forwards them to the remote site.

Refer to Components for a detailed description of the LTR and the lumberjack protocol.
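The receiving pattern above boils down to "validate the frame, then buffer it locally before anything else happens". A minimal sketch of that idea, in plain Python with an in-memory list standing in for the local Kafka topic (all names here are hypothetical; a real LTR uses a Syslog Spout and a Kafka Bolt inside a Storm topology, not this code):

```python
import re

# Stands in for the local Kafka topic the Kafka Bolt writes to.
local_buffer = []

# RFC 3164 syslog frames start with a <priority> field.
SYSLOG_RE = re.compile(r"^<(?P<pri>\d{1,3})>(?P<msg>.*)$")

def receive(frame: str) -> bool:
    """Accept one raw syslog frame; buffer it only if it is well formed."""
    match = SYSLOG_RE.match(frame)
    if match is None:
        return False  # malformed frame: reject, do not buffer
    pri = int(match.group("pri"))
    local_buffer.append({
        "severity": pri % 8,   # RFC 3164: priority = facility * 8 + severity
        "facility": pri // 8,
        "message": match.group("msg"),
    })
    return True

receive("<134>Dec 10 12:00:00 host app: user logged in")
```

The point of buffering in Kafka before forwarding is that the LTR can keep accepting logs even when the link to the remote LMR is down.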


An LMR (Log Management Receiver) is the LTR counterpart that receives the logs. In some platforms an LMC includes its own LMR components; in others, because of security constraints, dedicated LMRs are deployed on their own in a demilitarised security zone.

An LMR consists of Storm topologies with:

  • a lumberjackSpout receiving logs from the LTR. It can optionally be a Syslog Spout or a syslogUdpSpout.
  • a Kafka Bolt to forward the logs to the downstream LMC Kafka broker.
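The LMR is thus a pure relay: whatever the receiving spout yields is pushed unchanged onto the downstream LMC Kafka topic. A trivial sketch of that step, with Python queues standing in for the spout input and the Kafka topic (hypothetical names; the real work is done by the Storm components listed above):

```python
from queue import Queue

lmr_input = Queue()   # fed by the lumberjack or syslog spout
lmc_topic = Queue()   # downstream LMC Kafka broker topic

def relay_once() -> None:
    """Move one log from the LMR input to the LMC topic, untouched."""
    lmc_topic.put(lmr_input.get_nowait())

lmr_input.put("<134>Dec 10 12:00:00 fw01 sshd: ok")
relay_once()
```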


An LMC (Log Management Center) is the essential part of a log management solution. It is in charge of log processing, indexing, saving and searching. It can also play a forwarder role should the logs be sent downstream to a third-party correlation engine.

Depending on the log volume and required SLAs, an LMC can be architected as a small to very large pipeline, crossing zero, one or several Kafka brokers. The general pattern is to have all log processors generate parsed and normalised logs into an output Kafka topic. These logs are then indexed into Elasticsearch or archived to CEPH using dedicated IO-intensive topologies. In short one speaks of:

  • Parsing Topologies : composed of a Kafka Spout, a Punch Bolt (running punchlets) and a Kafka Bolt.
  • Indexing Topologies : composed of a Kafka Spout and an Elasticsearch Bolt.
  • Archiving Topologies : composed of a Kafka Spout and a File Bolt writing to CEPH.
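To make the indexing step concrete, here is a hedged sketch of what an indexing topology ultimately produces: a newline-delimited Elasticsearch `_bulk` request body built from normalised log documents. The function name and index name are illustrative, not part of the platform:

```python
import json

def to_bulk_body(docs, index="logs"):
    """Build a newline-delimited Elasticsearch _bulk request body."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line
        lines.append(json.dumps(doc))                           # source line
    return "\n".join(lines) + "\n"  # _bulk bodies must end with a newline

body = to_bulk_body([{"message": "user logged in"}])
```

Batching many documents into one `_bulk` request is what makes these topologies IO-bound rather than CPU-bound.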


The parsing topologies are the ones running the log transformations. They are the equivalent of Logstash processors in a traditional ELK setup.
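As an illustration of such a transformation, here is a hypothetical Python equivalent of what a punchlet does: normalising a raw RFC 3164 syslog line into a flat document ready for indexing. The field names (host, program, message) and the parse-failure tag are illustrative, not the punch standard fields:

```python
import re

LINE_RE = re.compile(
    r"^(?P<timestamp>\w{3} +\d+ [\d:]{8}) "
    r"(?P<host>\S+) "
    r"(?P<program>[^:\[]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<message>.*)$"
)

def parse(raw: str) -> dict:
    """Turn one raw syslog line into a flat, indexable document."""
    match = LINE_RE.match(raw)
    if match is None:
        # Keep unparsable logs and tag them, rather than dropping them.
        return {"message": raw, "tags": ["_parse_failure"]}
    return {k: v for k, v in match.groupdict().items() if v is not None}

doc = parse("Dec 10 12:00:01 fw01 sshd[812]: Accepted password for bob")
```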