Shiva
Shiva is the orchestrator in the Punchplatform.
When submitting applications through channelctl, the Operator submits the channel to the Shiva cluster.
Shiva then chooses one of its workers to run the application.
If the application fails or its worker goes down, Shiva restarts or replaces it on another worker.
In this Getting Started, Shiva is deployed on server2 and server3.
You can check its service with:
# vagrant@server2 or vagrant@server3
sudo systemctl status shiva-runner.service
Launch an Application on Shiva
The channel_structure.yaml file describes the command that a Shiva worker will launch, along with its arguments.
version: '6.0'
start_by_tenant: true
stop_by_tenant: true
applications:
  - name: input
    runtime: shiva
    command: punchlinectl
    args:
      - start
      - --punchline
      - input.yaml
      - --childopts
      - -Xms256m -Xmx256m # will override topology.component.resources.onheap.memory.mb
    shiva_runner_tags:
      - common
    cluster: common
    reload_action: kill_then_start
When a Shiva worker receives this input application, it actually launches the following command:
punchlinectl --tenant mytenant start --punchline input.yaml
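The mapping from the YAML fields to that command line can be sketched in shell (a simplification: the runner injects the tenant, and the --childopts arguments are omitted here for brevity):

```shell
# Sketch: assemble the launched command line from the channel_structure fields
command="punchlinectl"
args="start --punchline input.yaml"
tenant="mytenant"

line="$command --tenant $tenant $args"
echo "$line"
# prints: punchlinectl --tenant mytenant start --punchline input.yaml
```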
Let's submit the stormshield_networksecurity channel:
channelctl -t mytenant start --channel stormshield_networksecurity
Now the input.yaml application of the stormshield_networksecurity channel should run on either server1 or server2.
# vagrant@server1 or vagrant@server2
sudo ps -u punchdaemon ax | grep stormshield_networksecurity
Worker Assignment
With Shiva, we can decide where an application should run.
This is achieved with the shiva_runner_tags field in channel_structure.yaml.
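As an illustration, here is a hedged sketch of tagging an application (the application name, punchline file, and server3 tag are illustrative; tags restrict which workers may run the application):

```yaml
applications:
  - name: archiving          # illustrative application name
    runtime: shiva
    command: punchlinectl
    args:
      - start
      - --punchline
      - archiving.yaml       # illustrative punchline file
    shiva_runner_tags:
      - server3              # only workers carrying this tag may run the application
    cluster: common
```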
Let's take an archiving use case: we want to archive logs on a worker that has a storage facility. The archiving punchline, as well as the archive housekeeping application, must run on that server.
For this demo, two resolver rules force these applications to run on server3.
When resolving their channel_structure.yaml files, you should see shiva_runner_tags set to server3.
Resolving the archiving punchline:
punchlinectl --tenant mytenant resolve --file conf/tenants/mytenant/channels/apache_httpd/channel_structure.yaml | jq .
Resolving archive housekeeping application:
punchlinectl --tenant mytenant resolve --file conf/tenants/mytenant/channels/housekeeping/channel_structure.yaml | jq .
When you submit those applications with channelctl, they will run on server3.
# punchoperator@server1
channelctl -t mytenant start --channel apache_httpd
channelctl -t mytenant start --channel housekeeping
# vagrant@server3
sudo ps -u punchdaemon ax | grep -e archiving -e housekeeping
For the archive housekeeping application, you may not see the running process, as it is only started every minute. However, you can check its logs:
# vagrant@server3
tail -f /var/log/punchplatform/shiva/subprocess.mytenant.housekeeping.archives-housekeeping.log
Application logs
Applications that run in Shiva have their logs stored in multiple places.
- Directly on the filesystem of the worker:
tail -f /var/log/punchplatform/shiva/subprocess.<tenant>.<channel>.<application>.log
- Centralized in Elasticsearch, provided monitoring has been enabled (
channelctl -t platform start --channel monitoring
).
Go to your deployed Kibana in your browser and open the [PTF] Logs dashboard.
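The log file name is built from the tenant, channel, and application names. A small shell sketch (using the housekeeping application from above) shows the pattern:

```shell
# Sketch: build the Shiva subprocess log path for a given application
tenant="mytenant"
channel="housekeeping"
application="archives-housekeeping"

logfile="/var/log/punchplatform/shiva/subprocess.${tenant}.${channel}.${application}.log"
echo "$logfile"
# prints: /var/log/punchplatform/shiva/subprocess.mytenant.housekeeping.archives-housekeeping.log
```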
Shiva binaries
A Shiva worker has all the Punch commands and binaries available once the Punch is deployed.
- Binaries:
ls -l /data/opt/punch-binaries-6.4.5/lib/
- Commands:
ls -l /data/opt/punch-shiva-6.4.5/bin/
As a result, adding an application to Shiva is quite simple:
- Add the binary to Shiva workers.
- Add a command referencing the binary on Shiva workers.
- Build a channel with its configuration files and channel_structure.yaml.
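Following those steps, a channel_structure.yaml for a custom application might look roughly like this (a sketch: the my-app and my-command names, arguments, and configuration file are illustrative assumptions, not shipped with the platform):

```yaml
version: '6.0'
start_by_tenant: true
stop_by_tenant: true
applications:
  - name: my-app              # illustrative application name
    runtime: shiva
    command: my-command       # must be available as a command on the Shiva workers
    args:
      - --config
      - my-app.conf           # shipped with the channel configuration files
    shiva_runner_tags:
      - common
    cluster: common
```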