Punch Operator

Previously, we checked the COTS from the vagrant user. We saw that the processes were launched in the background by the punchdaemon user.

Another user is deployed; it is described in the punchplatform_operator section of punchplatform-deployment.settings:

{
  "punchplatform_operator": {
    "punchplatform_operator_environment_version": "punch-operator-6.4.4-SNAPSHOT",
    "configuration_name_dir_from_home": "conf",
    "reporters": [
      "kafka-reporter"
    ],
    "operators_username": [
      "punchoperator"
    ],
    "storage": {
      "type": "kafka",
      "kafka_cluster": "common"
    },
    "servers": {
      "server1": {}
    }
  }
}

The punchoperator user will start punchlines and other Punch applications.

You can log in as the punchoperator user using:

# on server1
sudo su - punchoperator
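
Once logged in, you can check that the operator environment is loaded. This is a quick sanity check, assuming the operator's profile exports the PUNCHPLATFORM_CONF_DIR variable pointing at the conf directory declared above:

# punchoperator@server1
whoami
echo $PUNCHPLATFORM_CONF_DIR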

Operator configuration

To start Punch applications, the Punch Operator needs punchlines, configuration files, channel structures, parsers, and so on. In the Standalone, those files are already present in the conf directory. Here, this directory is declared by the configuration_name_dir_from_home parameter.
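
For illustration, the operator conf directory typically resembles the sketch below. This layout is only an assumption based on the paths used later in this tutorial; the exact content depends on your platform:

conf/
├── resources/
│   ├── elasticsearch/              # Elasticsearch templates
│   └── injectors/mytenant/         # log injector files
└── tenants/
    ├── mytenant/
    │   └── channels/
    │       ├── monitoring/
    │       └── stormshield_networksecurity/
    │           └── input.yaml
    └── platform/
        └── channels/
            └── monitoring/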

On a deployed platform, we need to prepare this folder:

  1. Choose the punchlines, configuration files, channel structures...

    # vagrant@deployer
    sudo yum install tree -y
    tree punch-deployer-6.4.4/examples/platforms/getting_started_deployer/runtime/tenants
    

  2. Install parsers.

    # vagrant@deployer
    bash punch-deployer-6.4.4/examples/platforms/getting_started_deployer/install-parsers.sh
    

  3. Send the configuration to the operators.

    # vagrant@deployer
    export PUNCHPLATFORM_OPERATOR_CONF_DIR=~/punch-deployer-6.4.4/examples/platforms/getting_started_deployer/runtime/
    punchplatform-deployer.sh --copy-configuration
    

Warning

The configuration tools used in this tutorial are not recommended for production use. On a production platform, the configuration must be managed carefully, for example with a Git repository.
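
For instance, a minimal sketch of putting the configuration under version control with Git (assuming git is installed on the deployer) could look like:

# vagrant@deployer
cd punch-deployer-6.4.4/examples/platforms/getting_started_deployer/runtime/
git init
git add .
git commit -m "Initial platform configuration"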

Prepare platform

Right after deploying the platform and the configuration, the operator must push the Kibana resources and the Elasticsearch templates.

# punchoperator@server1
punchplatform-push-es-templates.sh -d conf/resources/elasticsearch/ -l http://server3:9200
punchplatform-setup-kibana.sh --import -l http://server1:5601

You can check that Punch dashboards are available in Kibana.
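
You can also verify the push from the command line. The sketch below relies on standard Elasticsearch and Kibana APIs, using the hosts and ports from the commands above:

# punchoperator@server1
curl 'http://server3:9200/_cat/templates?v'
curl 'http://server1:5601/api/status'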

Punchlinectl

Start

The operator can start punchlines and applications in the foreground.

Let's launch a punchline in the foreground:

punchlinectl --tenant mytenant start --punchline ~/conf/tenants/mytenant/channels/stormshield_networksecurity/input.yaml 

Your stream punchline is now running, and you can inject logs from another terminal:

punchplatform-log-injector.sh -c ~/conf/resources/injectors/mytenant/stormshield_networksecurity_injector.json
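
If the punchline's dag ends with an elasticsearch_output node (resolved to server2:9200 by the rule shown in the next section), you can check that the injected events are indexed. The index pattern below is only an assumption:

# punchoperator@server1
curl 'http://server2:9200/_cat/indices/mytenant-*?v'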

Resolve

On a deployed platform, there is a resolv.yaml file. We added this file to the $PUNCHPLATFORM_CONF_DIR at the beginning of the tutorial. The resolver adds or replaces parameters in configuration files based on JSONPath rules.

Let's take the example of the first rule:

elasticsearch_nodes:
  selection:
    tenant: "*"
    channel: "*"
    runtime: "*"
    name: "*"
  match: "$.spec.dag[?(@.type=='elasticsearch_output' || @.type=='elastic_output')].settings"
  additional_values:
    http_hosts: 
      host: server2
      port: 9200

This rule adds the http_hosts parameter to the settings of all elasticsearch_output and elastic_output nodes. This makes it possible to reuse configurations (Standalone ones, for example) on different platforms.
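
To make the effect concrete, here is a minimal, hypothetical dag node before and after resolution. The node and its index setting are made up for illustration; only the http_hosts block comes from the rule above:

# before resolution
spec:
  dag:
    - type: elasticsearch_output
      settings:
        index:
          prefix: mytenant-events

# after resolution: http_hosts has been injected into the node settings
spec:
  dag:
    - type: elasticsearch_output
      settings:
        index:
          prefix: mytenant-events
        http_hosts:
          host: server2
          port: 9200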

Let's see how the resolver changes the configuration of the punchline we started!

Without resolver:

cat ~/conf/tenants/mytenant/channels/stormshield_networksecurity/input.yaml | yq e -j | jq .

With resolver:

punchlinectl --tenant mytenant resolve --file ~/conf/tenants/mytenant/channels/stormshield_networksecurity/input.yaml | jq .

You can see that many sections were augmented by this resolver.

Channelctl

The operator is the one submitting channels to the Punch orchestrator: Shiva.

You can start channels just as you would in the Standalone.

Let's start the monitoring channels, which will be useful to understand Shiva in the next chapter.

channelctl -t mytenant start --channel monitoring
channelctl -t platform start --channel monitoring
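
Assuming the same channelctl commands as in the Standalone are available, you can then check that the channels are running:

# punchoperator@server1
channelctl -t mytenant status
channelctl -t platform status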