
Task Scheduler


The punchplatform relies on a simple yet powerful task scheduler called Shiva.

Shiva is a lightweight distributed task manager implemented on top of Zookeeper. It lets you submit arbitrary tasks to some of the nodes of your (punchplatform) cluster.

Shiva can be used to enrich a channel with various periodic or long-running applications.


Having shiva gives the punchplatform a tremendously simple yet extremely robust and distributed architecture.

There is no single point of failure, no virtual IP addresses, no complex active-passive scheme, no need for kubernetes or docker or mesos or similar infrastructures to take care of these issues.

Of course the punch is compatible with all these environments, it is just that you do not need them.


Here are some concrete examples where the platform relies on shiva:

Data Collector

How do you safely and efficiently transport some data from one site to another? Go for Kafka data shipping, i.e. transporting your data from one Kafka to another. Here is the idea:


Punch topologies are great for this use case: they can take your data from a Kafka and put it into another using an acknowledged protocol, with security and compression.

Yet on small-footprint systems, running a Storm cluster on the remote site (i.e. on the left-hand side of the picture) would be too heavy. Instead, topologies are executed using the punch lightweight engine. In that mode Shiva is in charge of starting and monitoring the topologies instead of a regular Storm cluster.

The next picture illustrates the internal architecture of a three node collector.



To fully understand this picture you must be familiar with zookeeper and some other technical details. Do not worry about the details, focus on the big picture.

Platform Monitoring

The punchplatform periodically gathers metrics to publish the health status of the platform components to elasticsearch. Shiva is in charge of periodically executing these actions.
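To picture what such a periodic health check produces, here is a minimal sketch of building a health-status metric document. The field names are illustrative assumptions, not the actual punchplatform metric schema:

```python
import datetime
import json

def build_health_metric(component: str, status: str) -> dict:
    """Build a health-status document for one platform component.
    Field names here are illustrative, not the punch metric schema."""
    return {
        "@timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "component": component,
        "health": status,  # e.g. "green", "yellow" or "red"
    }

# a periodic shiva task would build and index one such document per component
print(json.dumps(build_health_metric("zookeeper", "green"), indent=2))
```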

Administrative tasks

The punchplatform ships with ready-to-use administrative services such as elasticsearch or archive housekeeping. Shiva is in charge of running these.

Elastic Apm Agent

The elastic application monitoring (APM) agent is integrated into the punch kibana plugin. It works jointly with a small server in charge of handling apm agent events, in turn inserted into elasticsearch. Here is the architecture:


This apm server is a typical example of a singleton service you need to run somewhere. You guessed it by now: shiva takes care of running it.

Shiva Architecture

This chapter is informational.

In this chapter we describe the internal shiva architecture.

Shiva is deployed on a number of the platform nodes. Each node is running a tiny java agent, communicating with its peers and sharing information only through Zookeeper. No other communication than zookeeper is required.

Using shiva you can run long-lived applications such as Kafka streaming applications, a metricbeat, or anything you can think of. You can also run tasks periodically, for example a data cleaning job in charge of deleting old data from elasticsearch. These tasks are defined as punchplatform services.

To illustrate how shiva works, consider a three server cluster. It could be for example a data shipper in charge of collecting data from a remote site and forwarding it to a central platform (a so-called [LTR]). Equipped with shiva (and zookeeper) it looks like this:


Each server runs one shiva and one zookeeper agent. As highlighted here, zookeeper makes the three servers part of the same cooperating group, and shiva lets you associate each server with arbitrary tags. Here tags are depicted as colors: server1 is red and orange.

As part of a channel you can request a task to run somewhere: it does not matter whether it runs on server1, server2 or server3, as long as it runs.

By now you probably fully understand Shiva. Here is what happens if you start a task targeting red servers: it will be launched on one of them.


Should you lose the node running a task, the task will be restarted on another available red server:
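The tag-based placement just described can be modeled in a few lines. This is a simplified sketch of the eligibility rule (which nodes can host a task), not shiva's actual election algorithm:

```python
def eligible_nodes(node_tags: dict[str, set[str]], required: set[str]) -> list[str]:
    """Return the nodes carrying all the tags a task requires.
    A simplified model of tag-based placement, not shiva's real code."""
    return sorted(name for name, tags in node_tags.items() if required <= tags)

# a hypothetical three node cluster, tags depicted as colors
cluster = {
    "server1": {"red", "orange"},
    "server2": {"red", "green"},
    "server3": {"blue"},
}

# a task tagged "red" may run on server1 or server2; if one of them
# is lost, the task is restarted on the other eligible node
print(eligible_nodes(cluster, {"red"}))
```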


Compared to many other job managers, shiva is lightweight. It has been designed to run on small systems, yet it can scale to hundreds of servers. It has a second strength: a task can be anything you can think of.


The punchplatform is carefully designed so as to run all administrative or applicative tasks as part of a tenant. The same holds for Shiva and shiva tasks, you cannot submit a task outside the scope of a tenant.

More precisely, a shiva task must be declared as part of either a channel or a service.

  • Channel tasks are application-level tasks. Examples are
    • machine learning batch jobs
    • kafka stream applications
    • external data fetchers
    • data aggregator and KPIs indexers
    • lightweight topologies
    • etc..
  • Service tasks are administrative tasks with a per tenant scope
    • elasticsearch housekeeping
    • archive service housekeeping
    • kafka topic monitoring

Refer to Services and Channels.

All in all, Shiva provides a simple yet robust service to enrich your platform with virtually any kind of component.


This is an important punch feature: stop using crons, scripts manually deployed on servers, virtual ip addresses, failover or active-passive configurations.

You will otherwise end up with an unmanageable spaghetti plate that will prevent you from maintaining and updating your platform.

Operation Guide

Shiva Cluster Configuration

As part of your punchplatform you can deploy one or several shiva clusters. One is enough in most cases.

Refer to the deployment guide for details. A shiva cluster consists of shiva agents deployed on each cluster node. They are all associated to a given Zookeeper cluster.

Each shiva node is associated with tags. These tags allow you, in turn, to associate a task to one or several shiva nodes.

Shiva deployment section

Task Configuration

To define and schedule a shiva task, simply declare it along with its inner components as part of a channel or service. Add a shiva_tasks section to the channel_structure (resp. service_structure) file of your channel (resp. service).

Here is a self-documented example:

   "shiva_tasks" : [
     {
       # the short name of the task. The task unique name
       # will appear as <tenant>_<channel>_<name>
       "name" : "your_task_name",

       # the command argument(s). It works using the usual args[]
       # parameter settings. The first (args[0]) is the actual
       # command to launch. The next ones are parameters.
       "args" : [
          # each argument has a "type" and a "value" property.
          # "type" can be "file", "string", "json" or "task_info"
          { "type" : "file" , "value" : "" },

          # here is how you pass in another file argument
          { "type" : "file" , "value" : "conf.yml" }

          # you can also pass in strings
          # { "type" : "string" , "value" : "hello" }
          # inline jsons
          # { "type" : "json" , "value" : { "timeout" : 10 } }
          # the special "task_info" type makes your application receive
          # a json document filled with the tenant, channel/service name
          # and your task name. It is used in many administrative services
          # that work in the scope of a particular tenant.
          # { "type" : "task_info" }
          # your task will then receive a json string like this:
          #   {"task":"kafka_service","service":"admin","tenant":"mytenant"}
       ],

       # the target shiva cluster name. This name must be associated
       # to a shiva cluster defined in your platform configuration
       # file
       "cluster" : "common",

       # the tags to place your task on the shiva node(s) you want
       "shiva_runner_tags" : ["red", "green"]

       # an optional cron expression should you require periodic
       # scheduling of your task. Here is an example to execute
       # it every 30 seconds
       # "quartzcron_schedule" : "0/30 * * * * ? *"
     }
   ]

Check out the sample services delivered as part of the standalone platform.

Quartz Scheduler Quick Reference

A cron expression is a string comprised of 6 or 7 fields separated by white space. Fields can contain any of the allowed values, along with various combinations of the allowed special characters for that field. The fields are as follows:

Seconds Minutes Hours DayOfMonth  Month DayOfWeek  Year
  • 0/10 * * * * ? Fire every 10 seconds
  • 0 0 12 * * ? Fire at 12pm (noon) every day
  • 0 15 10 ? * * Fire at 10:15am every day
  • 0 15 10 * * ? Fire at 10:15am every day
  • 0 15 10 * * ? * Fire at 10:15am every day
  • 0 15 10 * * ? 2005 Fire at 10:15am every day during the year 2005
  • 0 * 14 * * ? Fire every minute starting at 2pm and ending at 2:59pm, every day
  • 0 0/5 14 * * ? Fire every 5 minutes starting at 2pm and ending at 2:55pm, every day
  • 0 0/5 14,18 * * ? Fire every 5 minutes starting at 2pm and ending at 2:55pm, AND fire every 5 minutes starting at 6pm and ending at 6:55pm, every day
  • 0 0-5 14 * * ? Fire every minute starting at 2pm and ending at 2:05pm, every day
  • 0 10,44 14 ? 3 WED Fire at 2:10pm and at 2:44pm every Wednesday in the month of March.
  • 0 15 10 ? * MON-FRI Fire at 10:15am every Monday, Tuesday, Wednesday, Thursday and Friday
  • 0 15 10 15 * ? Fire at 10:15am on the 15th day of every month
  • 0 15 10 L * ? Fire at 10:15am on the last day of every month
  • 0 15 10 L-2 * ? Fire at 10:15am on the 2nd-to-last day of every month
  • 0 15 10 ? * 6L Fire at 10:15am on the last Friday of every month
  • 0 15 10 ? * 6L 2002-2005 Fire at 10:15am on the last Friday of every month during the years 2002, 2003, 2004 and 2005
  • 0 15 10 ? * 6#3 Fire at 10:15am on the third Friday of every month
  • 0 0 12 1/5 * ? Fire at 12pm (noon) every 5 days every month, starting on the first day of the month.
  • 0 11 11 11 11 ? Fire every November 11th at 11:11am.
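The field layout above is easy to sanity-check in code. Here is a minimal sketch that splits a quartz expression into its named fields; it only validates the field count, it is not a full quartz parser:

```python
def split_quartz(expr: str) -> dict:
    """Split a quartz cron expression into named fields.
    Quartz expressions have 6 mandatory fields plus an optional Year."""
    names = ["seconds", "minutes", "hours", "day_of_month",
             "month", "day_of_week", "year"]
    fields = expr.split()
    if len(fields) not in (6, 7):
        raise ValueError(f"expected 6 or 7 fields, got {len(fields)}")
    return dict(zip(names, fields))

# the "0/30" seconds field means: start at second 0, then every 30 seconds
print(split_quartz("0/30 * * * * ? *"))
```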

Task Execution

To start (or stop) your tasks, simply use the start or stop command. You can check the status of your tasks using either task logging or monitoring, as explained hereafter.

Usual Use Cases

Task Logging

Logs generated by tasks are automatically centralised in Elasticsearch. This makes it easy to monitor the execution and status of tasks.

Shiva intercepts all stdout/stderr logs from its child tasks and forwards them to the platform administrative elasticsearch cluster. These logs are also written to the local node shiva log directory, in case you need direct file access. Note that these log files are rotated and compressed.
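The interception mechanism can be pictured in a few lines: the agent spawns the task as a child process and reads its stdout/stderr line by line before forwarding each line. This is a simplified sketch of the idea, not shiva's actual implementation:

```python
import subprocess

def run_and_capture(cmd: list[str]) -> list[str]:
    """Spawn a child task and capture its stdout/stderr line by line,
    the way a supervising agent would before forwarding logs."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    lines = []
    for line in proc.stdout:
        # here shiva would forward the line to elasticsearch
        # and append it to the local, rotated log file
        lines.append(line.rstrip("\n"))
    proc.wait()
    return lines

print(run_and_capture(["echo", "hello from a shiva task"]))
```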

Task Monitoring

Shiva monitoring is extremely simple: the return code of each command is logged as an event to the platform admin elasticsearch. There you can check that your tasks ran ok, and you can also add an alerting rule to be notified should one of your tasks fail to execute.
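This return-code model is easy to emulate. The sketch below runs a command and builds an exit-status event document; the field names are illustrative assumptions, not the actual punch event schema:

```python
import datetime
import subprocess

def task_exit_event(cmd: list[str], tenant: str, channel: str, task: str) -> dict:
    """Run a command and build an exit-status event, as a supervisor
    might index into the admin elasticsearch (fields are illustrative)."""
    rc = subprocess.call(cmd)
    return {
        "@timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tenant": tenant,
        "channel": channel,
        "task": task,
        "exit_code": rc,
        # an alerting rule would typically match on "failure" events
        "status": "ok" if rc == 0 else "failure",
    }

print(task_exit_event(["true"], "mytenant", "mychannel", "mytask"))
```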