

Logging is useful in two situations. The first is for those of you prototyping new use cases or features on top of the punchplatform. The second is for analysing production issues, in particular to quickly pinpoint system or environment problems such as network outages, OS or storage failures that in turn impact the punchplatform applicative services.

This chapter explains where to find the logs, and how to change their levels.


Logging is important, hence this guide. Note however that on scalable and distributed platforms, logs quickly become unmanageable. Instead, metric monitoring is the key feature that allows capacity planning and performance analysis.

Understanding Log levels

The punchplatform components carefully use the standard log4j levels: FATAL, ERROR, WARN, INFO, DEBUG and TRACE.

Here are a few important logging rules you should understand and follow:

  • By default you should go to production using the INFO level.

  • Increasing the verbosity to DEBUG is possible in production. The DEBUG level activates important information but nothing at the per-event level, i.e. you will not be flooded by tons of logs even at a high event processing rate.

  • The TRACE level works at the per-event level, i.e. it is only useful for advanced debugging, rarely if ever in production.

  • These rules apply to the punchplatform loggers, not to the various COTS components we rely on.

  • Spark, Storm and Zookeeper logs are not as well organised and defined as the punchplatform ones. Do not activate the DEBUG mode in production without trying it first on a preproduction or test platform. They may be excessively verbose.


For advanced users, it can be useful to precisely set the Storm topology log levels, so as to keep only the most meaningful information without saturating local disk storage with logs. We will see how topology logging levels can be tuned.


On a standalone setup, to easily understand what is happening, launch a channel (Apache in our example):

channelctl start --channel apache_httpd

If you execute

ps aux | grep apache_httpd

You will see that one of the JVM options is


This is how the log4j configuration file is provided to the Storm workers. By default, this file is the Storm worker.xml file. By updating this file, you can increase or decrease the log verbosity. On a standalone setup, this file is located at $PUNCHPLATFORM_CONF_DIR/../external/apache-storm-1.2.2/log4j2/worker.xml
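As an illustration, the JVM option to look for in the ps output is the log4j2 configurationFile system property. The command line below is a made-up sample (the real path depends on your install); on a live platform, pipe the actual `ps aux | grep apache_httpd` output instead:

```shell
# Made-up sample command line; substitute the real ps output on your platform.
cmdline='java -Dlog4j.configurationFile=/path/to/log4j2/worker.xml -cp storm.jar'
# Split the command line into one token per line and keep the log4j option:
echo "$cmdline" | tr ' ' '\n' | grep '^-Dlog4j'
# → -Dlog4j.configurationFile=/path/to/log4j2/worker.xml
```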

Note: once updated, there is no need to restart the topologies. Log levels will be updated automatically.

Foreground mode

Alternatively, you can execute topologies in foreground using the punchlinectl command. With that variant, the logging levels are set in the log4j2.xml file. Refer to Troubleshooting storm logs.

Cluster mode (production)

On a cluster setup, the worker.xml location depends on the setups_root variable set in the punchplatform-deployment.settings file. Assuming setups_root is /data/opt, the location is


On a single node setup, only one file exists. On a multi-node installation, one worker.xml appears on each worker node. If you decide to update one of them, do it on every one in order to preserve consistency.
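One way to keep the files consistent is a small copy loop. The hostnames and path below are assumptions to adapt to your platform; the loop is shown as a dry run that only prints the commands (remove the echo to actually copy):

```shell
# Assumed path: adjust to your setups_root value.
WORKER_XML=/data/opt/apache-storm-1.2.2/log4j2/worker.xml
# Assumed hostnames: replace with your actual worker nodes.
for node in worker1 worker2 worker3; do
  echo scp "$WORKER_XML" "$node:$WORKER_XML"   # remove 'echo' to perform the copy
done
```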


In production, you should not modify this file. It is used by every worker, and you probably do not want to alter your working configuration on the fly. For debugging topologies, see this troubleshooting guide.

Updating Log Verbosity

A typical worker.xml looks like this:

<configuration monitorInterval="60" shutdownHook="disable">
        <root level="info">
            <appender-ref ref="A1"/>
            <appender-ref ref="syslog"/>
        </root>
        <Logger name="org.apache.storm.metric.LoggingMetricsConsumer" level="info" additivity="false">
            <appender-ref ref="METRICS"/>
        </Logger>
        <Logger name="STDERR" level="INFO">
            <appender-ref ref="STDERR"/>
            <appender-ref ref="syslog"/>
        </Logger>
        <Logger name="STDOUT" level="INFO">
            <appender-ref ref="STDOUT"/>
            <appender-ref ref="syslog"/>
        </Logger>
</configuration>

At the top, the configuration tag has its monitorInterval attribute set to 60, which means this configuration file will be reloaded every 60 seconds if a modification is made.

You can change the level of all loggers, (say) to DEBUG, as follows:

    <root level="DEBUG"/>
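Note that a self-closing root element like the one above drops any appender references declared under it. To raise the level while keeping the appenders, change only the level attribute and keep the children, as in this sketch based on the worker.xml excerpt above:

```xml
<root level="DEBUG">
    <appender-ref ref="A1"/>
    <appender-ref ref="syslog"/>
</root>
```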

If you want to be more precise and change only some loggers' levels, do it like this:


    <!-- storm loggers --> 
    <logger name="org.thales.punch.libraries.storm" level="info"/>
    <logger name="org.thales.punch.libraries.storm.spout" level="info"/>
    <logger name="org.thales.punch.libraries.storm.bolt" level="info"/>

    <!-- punchlang library for details on punchlets and punch --> 
    <logger name="org.thales.punch.libraries.punchlang" level="info"/>

    <!-- These ones are handy to keep an eye on your socket traffic if you use
         syslog, udp, tcp or lumberjack. The MonitoringHandler one dumps the
         complete traffic. -->
    <logger name="" level="WARN"/>
    <logger name="" level="WARN"/>

    <!-- zookeeper and storm are verbose -->
    <logger name="org.apache.zookeeper" level="WARN"/>
    <logger name="org.apache.storm" level="WARN"/>

    <!-- These are useful if you struggle having the Elasticsearch bolt
         connect to the target elasticsearch cluster. -->
    <logger name="org.elasticsearch.cluster" level="WARN"/>
    <logger name="org.elasticsearch.discovery" level="WARN"/>

    <!-- Metrics sent to Elasticsearch or to the logger reporter are prefixed
         with "punchplatform", or something else if you change the platform_id. -->
    <logger name="punchplatform" level="WARN"/>


If you change the Storm worker.xml file, the new configuration will be applied to all the existing topologies. It is not possible to modify the log levels for a single topology.