Channels

Abstract

This chapter explains how you assemble various applications into a channel and why this concept is both powerful and necessary.

A channel describes where one or several applications must be executed. The channel abstraction provides you with a concise and clear declarative approach to organise your applications in ways that are easy to manage.

Configuration

Each channel is defined using a channel_structure.yaml YAML file. Refer to the [configuration](Configuration.md) chapter to understand the tenant/channel/application tree organisation.

The channel_structure.yaml file has the following structure:

version: "6.0"
start_by_tenant: true
stop_by_tenant: false
applications:
  - name: mypunchline
    runtime: shiva
    cluster: common
    reload: none
    command: punchlinectl
    quartzcron_schedule: 0/30 * * * * ? *
    args:
      - "start"
      - "--punchline"
      - "mypunchline.yaml"
      - "--runtime"
      - "punch"
    apply_resolver_on:
    - afile.json
    - anotherfile.hjson

resources:
 - type: kafka_topic
   topic: syslog-to-kafka
   cluster: kafka
   partitions: 1
   replication_factor: 1
   retention: 10000000
   segment: 1073741825

| Parameter | Description | Default |
| --- | --- | --- |
| version | Corresponds to the major punch release. | nil |
| start_by_tenant | Set to true to make that channel startable by a tenant-level command. | true |
| stop_by_tenant | Set to true to make that channel stoppable by a tenant-level command. | false |
| applications | The list of applications. | nil |
| resources | An optional list of resources associated with this channel. | nil |

Each application has the following properties:

| Parameter | Description | Default |
| --- | --- | --- |
| name | The application name. | nil |
| runtime | storm or shiva | nil |
| cluster | The name of the target kubernetes or shiva cluster. | nil |
| reload | Controls the reload policy of this application: none or kill_then_start. | none |
| command | One of the punch supported commands. | nil |
| args | The command args as you would type them in a plain terminal. | nil |
| apply_resolver_on | An optional list of additional files on which to apply the resolver. | nil |
| quartzcron_schedule | An optional cron expression to periodically execute the application. | nil |
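
As an illustration, here is a minimal application entry combining these properties. This is only a sketch: the application name, punchline file and schedule are illustrative.

applications:
  - name: hourly-report                  # illustrative application name
    runtime: shiva
    cluster: common
    reload: kill_then_start              # documented alternative to the default none policy
    command: punchlinectl
    args:
      - "start"
      - "--punchline"
      - "report.yaml"                    # illustrative punchline file
    quartzcron_schedule: 0 0 * ? * * *   # illustrative schedule: run every hour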

The resources parameter is an array of resources. In 6.x releases only Kafka topics are supported.

Applications

The punch ships with several ready-to-use applications, listed hereafter. Note that each application you define is uniquely referred to as <tenant_name>/<channel_name>/<cluster>/<application_name> (for example mytenant/mychannel/common/mypunchline).

Punchlinectl

To start a punchline, use the punchlinectl command with the following arguments:

| Parameter | Values |
| --- | --- |
| --punchline | The punchline configuration file name. |
| --runtime | One of flink, spark, pyspark or punch for (resp.) the flink, spark, pyspark or punch execution engine. |
| --childopts | The optional JVM arguments. |

Tip

The arguments correspond to the punchlinectl command line arguments.
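
For example, the punchline application from the configuration example above could pass JVM options through --childopts. This is only a sketch; the -Xmx value is purely illustrative.

applications:
  - name: mypunchline
    runtime: shiva
    cluster: common
    command: punchlinectl
    args:
      - "start"
      - "--punchline"
      - "mypunchline.yaml"
      - "--runtime"
      - "punch"
      - "--childopts"
      - "-Xmx512m"        # illustrative JVM heap setting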

Planctl

To start a plan, use the planctl tool with the following command line arguments:

| Parameter | Values |
| --- | --- |
| --template | The plan template configuration file. This is typically a punchline template file. |
| --plan | The plan configuration file. |
| --runtime | One of the supported punch execution engines: shiva, storm. |

Here is an example:

stop_by_tenant: true
version: "6.0"
start_by_tenant: true
applications:
- args:
  - start
  - --plan
  - plan.yaml
  - --template
  - punchline.yaml
  - --runtime
  - spark
  - --spark-cluster
  - common
  cluster: common
  shiva_runner_tags:
  - common
  name: plan-aggregation
  runtime: shiva
  command: planctl

Logstash

Logstash is fully integrated into the punch. Here is the example shipped with the standalone:

version: '6.0'
start_by_tenant: true
stop_by_tenant: true
applications:
- name: input
  runtime: shiva
  cluster: common
  shiva_runner_tags: []
  command: logstash
  args:
  - -f
  - logstash.conf
- name: print
  runtime: shiva
  cluster: common
  command: punchlinectl
  args:
  - start
  - --punchline
  - punchline.yaml
resources:
- type: kafka_topic
  name: mytenant_logstash
  cluster: common
  partitions: 1
  replication_factor: 1

Elastalert

Elastalert is fully integrated into the punch. The corresponding command is elastalert. You must provide an Elastalert configuration and a rules folder or a single rule using the --rule Elastalert option. Here is the example shipped with the standalone:

version: '6.0'
start_by_tenant: false
stop_by_tenant: true
applications:
- name: elastalert
  runtime: shiva
  command: elastalert
  args:
  - --config
  - config.yaml
  - --verbose
  cluster: common
  shiva_runner_tags:
  - common

Take a look at the Elastalert documentation.
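
To run a single rule instead of a whole rules folder, point the --rule option mentioned above at that rule. Here is a sketch, assuming a hypothetical rule file named myrule.yaml:

applications:
- name: elastalert
  runtime: shiva
  cluster: common
  command: elastalert
  args:
  - --config
  - config.yaml
  - --rule
  - myrule.yaml        # hypothetical rule file
  - --verbose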

Housekeeping

The punch provides several housekeeping applications. Here is the standalone mytenant channel example that takes care of the other mytenant channels.

version: '6.0'
start_by_tenant: true
stop_by_tenant: true
applications:
- name: elasticsearch-housekeeping
  runtime: shiva
  cluster: common
  command: elasticsearch-housekeeping
  args:
  - --tenant-configuration-path
  - elasticsearch-housekeeping.yaml
  apply_resolver_on:
  - elasticsearch-housekeeping.yaml
  quartzcron_schedule: 0 0 * ? * * *
- name: archives-housekeeping
  runtime: shiva
  cluster: common
  command: archives-housekeeping
  args:
  - archives-housekeeping.yaml
  quartzcron_schedule: 0 * * ? * * *
resources: []

| Application | Description |
| --- | --- |
| elasticsearch-housekeeping | Elasticsearch data housekeeper in charge of cleaning old Elasticsearch indexes. |
| archives-housekeeping | Long-term storage data housekeeper. |

The quartzcron_schedule expressions use the Quartz cron format (seconds, minutes, hours, day-of-month, month, day-of-week and an optional year): in this example the Elasticsearch housekeeping runs at the top of every hour, while the archives housekeeping runs every minute.

Channel Monitoring

The punch provides a dedicated channel monitoring application channels-monitoring that computes health status metrics for channels. Refer to the channel monitoring guide.

Tip

This application is deployed in each tenant. This ensures that each tenant is monitored using a dedicated application and is not affected by other tenants' operations.
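
For reference, such an application is declared like any other shiva application. The following is only a sketch: the configuration file name and the schedule are assumptions, adapt them to your platform.

applications:
- name: channels-monitoring
  runtime: shiva
  cluster: common
  command: channels-monitoring
  args:
  - channels_monitoring.yaml           # assumed monitoring configuration file name
  quartzcron_schedule: 0 * * ? * * *   # assumed schedule: run every minute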

Platform Monitoring

The punch provides a dedicated platform monitoring application platform-monitoring that computes health status metrics for an entire platform. Refer to the platform monitoring guide.

Tip

This application is deployed in the platform tenant.

Java Applications

Shiva can execute an arbitrary application as long as its jar is installed on each shiva node.

version: '6.0'
start_by_tenant: true
stop_by_tenant: true
applications:
- name: my-java-app 
  runtime: shiva
  cluster: common
  shiva_runner_tags:
  - common
  command: java
  args:
  - -jar
  - myjar.jar

Third Party apps

You can also provide your own application or refer to any available kubernetes-compatible app. You do that by using the official helm command as documented by the application chart.

For example, to start a logstash in a channel, look for the helm chart you need on some artifact repository. Here is one you can get from the artifact hub.


To use it, add the helm repository using the documented command. Then, instead of executing the second helm install command (which would start logstash directly), add it to your channel as follows:

applications:
  - name: logstash
    runtime: kubernetes
    cluster: west
    reload: none
    command: helm
    args:
      - install
      - logstash
      - elasticsearch/logstash
      - --version 
      - 7.12.0

I.e. you simply add the helm install [NAME] [CHART] [flags] command where:

| Parameter | Values |
| --- | --- |
| install | The helm install command to install and start the corresponding chart. |
| NAME | The application name. It MUST match the one already defined. |
| CHART | The name of the corresponding helm chart. |

You can then start and stop it using the channelctl command.

Besides third party applications you can (of course) also deploy the ones provided by the punch, including the ElastAlert helm package that comes equipped with additional plugins. Refer to the Elastalert documentation.
