DAVE-6.2.0 release notes

This document summarizes the content, changes, limitations, and fixes of this release as compared to the DAVE-6.1.2 release.

For full documentation of the release, a guide to get started, and information about the project, see the Punchplatform project site.

The documentation for this release can be found inside the deployment archives (standalone and deployer versions), and at

The documentation for the most recent release can be found at

Note about upgrades: Please review the upgrade documentation for this release thoroughly before upgrading your clusters. The upgrade notes (e.g. upgrade from 6.1 to 6.2) discuss critical information about incompatibilities and breaking changes, performance changes, and any other configuration changes that might impact your production deployment of Punchplatform.

Release main features and enhancements

Supported platforms update (#1022)

CentOS 8 and Ubuntu 20 are now supported for production deployment, except for SSL communication using the Syslog and Lumberjack input/output nodes (targeted for the next release).

See the Supported Platforms page in the Deployment section of the Operation Guide for more information.

New features

  • MailOperator can now parse all mail headers, contents and attachments.

Elastalert connectors improvements (#1026)

The packaged Elastalert application (already usable under Shiva scheduling in previous versions) now features:

  • An optional mechanism allowing it to fetch its rules directly from an Elasticsearch index. Optionally, the rules can be scanned automatically and reloaded when changed.

  • New alerters allowing alerts to be sent to a Kafka topic, making it possible to design post-processing/normalizing/enriching punchlines before forwarding the alert objects or indexing them in Elasticsearch. One alerter variant produces the raw Elastalert data model; the other integrates ECS normalization and enrichment following the Cybels Analytics data model.

See the Reference Guide section on Data_Alerting for more information.
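As a sketch, a rule shipped for the Kafka alerter could look like the following; the alerter identifier (`kafka`) and the `kafka_*` setting names are illustrative assumptions to show the intent, not the actual reference syntax:

```yaml
# Hypothetical Elastalert rule using the new Kafka alerter.
# The alert type name and kafka_* settings are assumptions for illustration.
name: too_many_failed_logins
type: frequency
index: mytenant-events-*
num_events: 50
timeframe:
  minutes: 5
filter:
  - term:
      event.outcome: failure
alert:
  - kafka                 # raw Elastalert data model variant (assumed name)
kafka_brokers: "kafka1:9092,kafka2:9092"
kafka_topic: "alerts"
```

A punchline consuming the "alerts" topic can then normalize or enrich the alert objects before indexing them in Elasticsearch.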

Parquet & Avro format in Archiving (#904)

In addition to the text/CSV format supported for archiving in the filesystem and S3 backends, the archiving mechanism (File Output node) now supports output in the Avro and Parquet structured formats, with an associated schema.

See the FileOutput node documentation in the Storm-like punchlines section of the Reference Guide.
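As an illustration, a File Output node configured for Parquet output might look like the following punchline fragment; the setting names shown here (`destination`, `encoding`, `schema`) are assumptions to be checked against the FileOutput node documentation, though the schema itself follows the standard Avro record syntax:

```json
{
  "type": "file_output",
  "settings": {
    "destination": "file:///tmp/archive",
    "encoding": "parquet",
    "schema": {
      "type": "record",
      "name": "log",
      "fields": [
        { "name": "timestamp", "type": "string" },
        { "name": "message",   "type": "string" }
      ]
    }
  }
}
```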

Punchlines viewer (#1033)

A preview feature of punchlines (providing a visual representation of the punchline nodes and streams) has been added to the Resource Management tool of the Punchplatform Kibana plugin.

Deployment configuration improvements

  • #1069 : It is now possible to configure production Kibanas so that all the Elasticsearch queries they issue go through a Punchplatform gateway instance (allowing queries to be filtered using custom-defined platform rules before submission, reducing the risk of ES cluster overload).

  • #997 : Display images in the Punch Feedback plugin

  • #1041 : The resolver mechanism, useful to centralize platform-specific settings outside of individual pipeline/application configurations, has been improved and now supports:

    • an additional filter type 'name' that allows rules to be specified depending on application names
    • wildcards in combination with a filter character sequence (e.g. 'channel: ltr_*')
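The wildcard semantics can be pictured with a small sketch: a rule applies when every key of its filter matches the corresponding attribute of the application context, glob-style. The rule structure, key names, and matching helper below are illustrative assumptions, not the resolver's actual implementation:

```python
from fnmatch import fnmatch

# Hypothetical resolver rules: a filter may use wildcards
# ('channel: ltr_*') or the new 'name' filter type.
rules = [
    {"filter": {"channel": "ltr_*"}, "settings": {"es_cluster": "ltr_cluster"}},
    {"filter": {"name": "archiving"}, "settings": {"es_cluster": "central"}},
]

def resolve(context, rules):
    """Return the settings of the first rule whose filter matches
    the application context (glob-style, wildcard-aware)."""
    for rule in rules:
        if all(fnmatch(context.get(key, ""), pattern)
               for key, pattern in rule["filter"].items()):
            return rule["settings"]
    return {}

print(resolve({"channel": "ltr_apache", "name": "parsing"}, rules))
# -> {'es_cluster': 'ltr_cluster'}
```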

Logs injector tool Lumberjack protocol improvements

The Lumberjack protocol (client/server) was already available in previous versions of Punch.

  • SSL injection/reception of logs is now supported, including for the Lumberjack protocol (so as to be able to simulate inter-site ciphered transmission, for example) (#1071). For more information on these new options (--ssl_private_key, --ssl_certificate, --ssl_protocols, --ssl_provider, and --ssl_ciphers), please refer to the logs injector tool MAN page in the documentation.

  • A new option (--lumberjack-json-fields-payload) allows customisation of the fields in the Lumberjack frame (#1028). This is especially useful to emulate data injection from Beats such as the winlogbeat collector daemon for Windows. For more information on this option, please refer to the logs injector tool MAN page in the documentation, or check the added example (resources/injector/examples folder in a standalone packaging).
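Putting the two improvements together, an SSL injection run could be launched roughly as follows; the launcher name and file paths are assumptions for illustration, and the exact option values should be taken from the logs injector tool MAN page:

```sh
# Illustrative invocation (launcher name and paths are assumptions):
punchplatform-log-injector.sh \
  -c resources/injector/examples/my_injector.json \
  --ssl_private_key /opt/keys/injector-key.pem \
  --ssl_certificate /opt/keys/injector-cert.pem \
  --ssl_protocols TLSv1.2
```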

Main documentation improvements

  • #1055 : An end-to-end procedure to prepare and integrate the opendistro-ldap-roles configuration has been added in the Security section of the Operation Guide.

  • A first version of the Training modules documentation has been included in the Tutorial section of the documentation:

    • HLI
    • CHA
    • DPP
    • IKQ
    • PUN
  • #1023 : A high-level view of the overall deployment & post-configuration process and a high-level configuration overview have been added to the Operation Guide section of the documentation.

  • #1017 : A Reference Architecture section has been added to the Operation section of the documentation; it provides commented Reference Configuration examples and associated key design highlights for the following topics:

    • High-Availability logs collection site with logs forwarding to a central processing and indexation site, with remote monitoring from the central site
    • Central logs processing and indexing site, with Elasticsearch indexing and compressed-file indexed archiving of logs

Other Improvements

  • #1021 Punch plugin can now use authentication and SSL certificates
  • #1020 new channelctl status -v option allows to see stopped applications
  • #1019 A new unique metrics context identifier tag has been added to metrics context for easy metrics grouping in dashboards
  • #1014 Added example resolver configuration on standalone to avoid random port binding for spark driver and executor
  • #1011 Planctl debugging features added
  • #1004 Gateway upload path for punchlines can now be customized
  • #960 Standalone EPS dashboard improved

Bug Fixes

  • #1072 unwanted ppf_topology_error_message field indexed by Elastic output bolt when no topology error occurred
  • #1066 Yml indentation error on deployment on kibana group_vars when the is set
  • #1061 Kibana not working (Kibana not ready) after deployment when es_type_of_nodes_targeted is only_data_node
  • #1058 gateway reporter fails on Kafka brokers parameter
  • #1051 only_data_node settings does not work on elasticsearch deployment
  • #1050 Wrong Unix rights on a deployed operator environment
  • #1049 Ansible offline deployment doesn't work on Ubuntu 20/CentOS 7
  • #1047 channelctl start doesn't restart "killed" topologies
  • #1046 Elasticsearch not stopped by systemctl, or started twice
  • #1045 Opendistro Deployment role enhancement to prevent false positive over installation
  • #1044 Unable to stop a removed channel
  • #1042 Punchline execution failures through deployed gateway
  • #1040 'punchlinectl resolve' does not take environment variables into account as 'punchlinectl start' does
  • #1038 ResolverImpl wrong type casting
  • #1037 Punch Plugin SSL Client cannot load certificate authorities from configuration
  • #1036 shiva deployment fails when metricbeat is not configured
  • #1035 Can't start grok operator through gateway
  • #1032 Gateway doesn't start when security is disabled
  • #1031 Kibana plugins cannot be deployed from 'domain.' conf section anymore
  • #1029 Deployment-dependencies no longer exists in 611 deployer
  • #1024 channelctl idempotence issue for storm topologies starting
  • #1016 local-dispatcher ignores Gateway document type
  • #1015 Gateway ES client for nodes and shards metrics is forcing the complete cert subject for hostname verification
  • #1013 shiva silently fails all its applications if a plan fails
  • #1010 plan checkpointing does not take credentials into account
  • #1009 Pyspark and Spark will fail if a UDAF function is registered instead of a UDF
  • #1007 Improve channels-monitoring alert message when no available metrics