Channels

Rationale

Using channels, you build a useful application from one or several pipelines. In the general case, something useful requires a mix of different types of processing:

  1. ever-running streaming components: in charge of continuously collecting, transforming, indexing and storing your data.
  2. batch processing: to periodically fetch some data, compute machine learning models, perform batch aggregations to generate consolidated reports or KPIs, etc.
  3. administrative tasks: to take care of the data lifecycle, deleting expired data, moving data from hot to medium to cold storage, etc.

Our goal in designing the punch is to let you assemble all that with a minimal number of concepts. A channel is what groups all these functional items in a single consistent, monitored and managed entity.

A channel is defined using a single configuration file that has only two sections: jobs and resources. Here is an example:

{
    version: 5.0
    jobs: [
        {
            // supported types are storm|spark|shiva
            type: storm
            // the storm|spark|shiva cluster in charge of the job execution
            cluster: main
            // the name of the job. It refers to a job descriptor file
            name: single_topology
            // the action to take when users request a 'reload' action
            reload_action: kill_then_start
        }
    ]
    resources: [
        {
            type: kafka_topics
            name: mytenant_mytopic
            cluster: main
            partitions: 2
            replication_factor: 1
        }
    ]
}
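
Note that the channel file only references its jobs: each name (here single_topology) points to a job descriptor file that fully describes the corresponding application, and that typically lives next to the channel file. The layout below is only a sketch; the directory and file names are assumptions used for illustration.

mytenant/                        // a tenant directory (illustrative name)
    mychannel/                   // one directory per channel (illustrative name)
        channel_structure.json   // the channel file shown above (file name is an assumption)
        single_topology.json     // the job descriptor referenced by 'name' above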

Examples

Collecting logs

Here is a simple concrete example. Say you need a log management pipeline: receive logs, parse and transform them, and insert them into a database (Elasticsearch).

Start simple: a channel can consist of a single processing application doing just that, as illustrated next:

[image: a channel made of a single log processing topology]

Note

We refer here to a topology. A topology is simply a small directed graph of functions you assemble, most often using a simple input-filter-output pattern, something made popular by Logstash in the log management world. It turns out the punchplatform implements these graphs on top of a much more powerful technology: Storm topologies. Hence the name.
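
To give a feel for it, here is a minimal sketch of such an input-filter-output graph, written in the same configuration style as the channel file above. It is illustrative only: the node types and keys used below (a syslog input, a punch filter, an Elasticsearch output) are assumptions, not a reference for the actual topology descriptor format.

{
    // illustrative input-filter-output graph (node types and keys are assumptions)
    spouts: [
        // input: receive raw logs, publish them on a 'logs' stream
        { type: syslog_spout, publish: [ { stream: logs } ] }
    ]
    bolts: [
        // filter: parse and transform the logs
        { type: punch_bolt, subscribe: [ { stream: logs } ], publish: [ { stream: parsed } ] }
        // output: index the parsed logs into Elasticsearch
        { type: elasticsearch_bolt, subscribe: [ { stream: parsed } ] }
    ]
}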

Now you go to production. It is a good idea to add a queue to decouple data ingestion from data processing and indexing. That is a recommended pattern: it allows you to deal with traffic peaks, and to restart your processing part with no impact on your ingestion part. It becomes this:

[image: the channel split into an ingestion topology and a processing topology, with a Kafka queue in between]

Your channel now consists of two components plus another kind of resource: a Kafka topic. Kafka is used by the punchplatform whenever you need a queue, and just like any queuing technology it relies on topics to publish and subscribe to the data.

In both cases your channel is composed of stream processing components, plus a few associated resources (parsers, Kafka topics). We represent such channels like this:

[image: a channel grouping its stream processing components and associated resources]
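
In terms of configuration, the production variant sketched above maps to two jobs and one Kafka topic in the channel file. The snippet below reuses the keys of the earlier example; the job and topic names are illustrative only, and each job name is expected to match its own job descriptor file.

{
    version: 5.0
    jobs: [
        {
            type: storm
            cluster: main
            // illustrative name: the ingestion topology (receive logs, push them to Kafka)
            name: input_topology
            reload_action: kill_then_start
        }
        {
            type: storm
            cluster: main
            // illustrative name: the processing topology (consume from Kafka, parse, index into Elasticsearch)
            name: processing_topology
            reload_action: kill_then_start
        }
    ]
    resources: [
        {
            type: kafka_topics
            // illustrative topic name, following the tenant-prefixed convention of the first example
            name: mytenant_logs
            cluster: main
            partitions: 2
            replication_factor: 1
        }
    ]
}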

Machine Learning

Now say you want to improve your application with a cool anomaly detection feature. You will need to add some batch processing to compute models, and tag your data using a mix of stream and batch logic.

[image: combining stream and batch logic for anomaly detection]

You can do that easily by adding a machine learning job to your channel. It will then look like this:

[image: the channel extended with a machine learning job]
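
Concretely, since spark is one of the supported job types in the channel file, the machine learning part is, as a sketch, just another entry in the jobs array. The job name below is illustrative; the actual processing is described in its own job descriptor.

jobs: [
    // ... the existing streaming topologies ...
    {
        type: spark
        cluster: main
        // illustrative name for the batch job that computes the anomaly detection models
        name: anomaly_detection_model
    }
]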

Periodic Data Fetching

Say now you need to fetch some third-party data every night. What you need is to define simple tasks, possibly integrating third-party applications. It could be an Elastic Beat fetching some files, for example.

Again that is extremely easy: just add it to your channel. We refer to these as tasks.
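
Such tasks typically map to the shiva job type listed in the channel file. As a sketch, it is again one more entry in the jobs array; the name is illustrative, and the nightly schedule and the command or third-party application to run are assumed to be defined in the corresponding job descriptor.

jobs: [
    // ... streaming topologies, batch jobs ...
    {
        type: shiva
        cluster: main
        // illustrative name; the schedule and the command (e.g. an Elastic Beat)
        // are assumed to be defined in the referenced job descriptor
        name: nightly_third_party_fetch
    }
]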

You can continue like this and add more processing and resources to a channel. This is very simple to do on the punchplatform. You will end up with channels like this:

[image: a complete channel mixing streaming components, batch jobs, tasks and resources]

Summary

Most big data platforms rely on this concept of channel; only the name differs (channel, pipeline, etc.). NiFi, StreamSets and CDAP are some of the well-known tools to design and run such pipelines.

The PunchPlatform is similar, but provides the same clean and powerful concepts using a daringly simple, small-footprint and fully integrated platform.

Refer to the ChannelConfiguration chapter for details about configuring channels.