Starting with punch 7.0, the punch runs natively on Kubernetes. The officially supported Kubernetes distribution comes from a joint Thales project called Kast. That said, the punch can run on any Kubernetes cluster you already have at hand.
In terms of components, here is what a punch on Kast looks like.
Essential Punch Applications and Services
The following picture highlights the essential parts of a punch solution:
- the PConsole provides you with the essential start, status and stop commands to manage your applications.
- Punch applications are called punchlines. They are represented as Kubernetes custom resources, defined through CRDs (Custom Resource Definitions). The Punch Kubernetes operators are in charge of submitting these to the underlying Kubernetes cluster and monitoring them.
- Punch artifact and API servers provide additional functions to manage your various resources: parsers, enrichment files, custom functions. These provide the punch Function-as-a-Service backbone.
- the various COTS provide data storage, access and visualisation.
For the sake of clarity, only the most important functional components are shown here. Kubernetes, Grafana, Keycloak and other services are skipped.
The punch focuses on making it simple to deploy powerful data processing applications using simple configuration files. These files represent data pipelines (punchlines) that consume and produce data from/to a selected set of data stores: Elasticsearch, Kafka, MinIO, ClickHouse.
These pipelines are executed inside stream or batch processing engines such as Spark or Storm, to benefit from powerful yet simple abstractions: data streams, SQL, real-time or batch capabilities.
The pipelines, together with the ops commands required to start and stop them, are the visible, user-facing tip of the iceberg. The punch is in charge of providing the underlying IAAS, in particular:
- the deployment tools to automatically deploy all the required services
- an integrated monitoring and logging plane (backed with beats, kafka, elasticsearch)
- orchestration and configuration management services
Kubernetes is now the de facto standard technology to provide such an IAAS. In recent years Kubernetes has significantly improved, in particular to host stateful applications (stores and databases), and not only the stateless micro-services for which it was originally designed. Many stateful services now come with powerful deployment charts and/or operators that significantly ease their deployment on Kubernetes.
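For illustration, deploying such a stateful service with a community Helm chart typically boils down to a couple of commands. The chart and release names below are examples only; on a Kast platform these components are packaged and deployed for you:

```shell
# Example only: install a Kafka cluster from a community Helm chart.
# Chart repository, release name and settings are illustrative.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install logs-kafka bitnami/kafka --set replicaCount=3
```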
We therefore decided to fully replace the punch IAAS and orchestration services with Kubernetes.
A punch on Kubernetes will not feel much different to punch end users. Still at play are stream or batch pipelines, archiving services, node APIs, and punch applicative components (such as the punch feedback UIs or the punch REST gateway).
The most important changes are the following:
- Deploying a punch now first requires deploying a Kast. The punch deployer will be maintained only for backward compatibility. No new components will be added to the legacy punch deployer; i.e. new capabilities will only be provided through Kast. An example is the Kubernetes-native Spark 3 engine.
- The monitoring plane now leverages Prometheus, Grafana and Fluent Bit.
- The punch shiva application scheduler is not used anymore. Kubernetes is now fully in charge of running the punch applications.
- All punch applications are packaged and delivered as containerd-compatible images.
- Traffic routing is achieved using the standard kubernetes networking and ingress services.
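As a concrete sketch of that last point, a standard Kubernetes Ingress resource is enough to expose a punch service. All names, the host and the port below are illustrative assumptions, not actual punch defaults:

```yaml
# Hypothetical ingress exposing a punch service through the NGINX
# ingress controller. Names, host and port are examples only.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: punch-gateway
  namespace: mytenant
spec:
  ingressClassName: nginx
  rules:
    - host: punch.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: punch-gateway
                port:
                  number: 8080
```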
These are only the most striking changes. Moving to a Kubernetes stack lets the punch benefit from many additional powerful capabilities: security, network policies, support for arbitrary containerised applications. Check the roadmap for details.
The only supported Kubernetes runtime is Kast, a Thales Kubernetes distribution that provides all the required Kubernetes and third-party services.
Kast provides not only Kubernetes but also the additional third-party components required by the punch. Kast actually supports more components than the punch uses: Cassandra, for example, is also supported by Kast. The punch keeps focusing on the subset that makes sense for its use cases.
This chapter highlights a simple punch on kubernetes example. It provides an easy-to-understand methodology to set up your own architecture. Refer to [^1] for additional reference architectures, best practices and configuration guidelines.
This chapter only provides a quick summary with a punch-oriented view.
An IoT Monitoring Use Case
Here is a typical architecture to ingest data (say from some external sensor or industrial equipment), store them into Elasticsearch, and visualize them using Kibana.
- the data comes in over TCP or UDP. It goes through an external firewall first, then an HAProxy load balancer, before entering the punch.
- a pair of redundant HAProxy instances ([^6]) is in charge of forwarding the traffic to the internal ingress (here Nginx [^5]).
- the data processing part illustrated here consists of a first punchline that ingests the data into Kafka. That data is in turn processed by two punchlines: one processes the data (enrichment, filtering) and writes it into Elasticsearch, the other performs some machine-learning-based detection. The detected items are in turn ingested into the same Elasticsearch. This scheme is only illustrative.
- the data is visualized through Kibana.
The external firewall and load balancer are provided by the hosting infrastructure. They are not part of the punch. To make this easy, Kast provides a deployer to easily set up a redundant HAProxy pair on your infrastructure.
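As a rough sketch, each HAProxy of such a pair only needs a plain TCP frontend and backend. The addresses, ports and names below are illustrative, not the configuration generated by the Kast deployer:

```
# Illustrative haproxy.cfg fragment: a TCP frontend forwarding incoming
# traffic to two ingress nodes. Addresses, ports and names are examples.
frontend syslog_in
    bind :1514
    mode tcp
    default_backend ingress_nodes

backend ingress_nodes
    mode tcp
    balance roundrobin
    server ingress1 10.0.0.11:1514 check
    server ingress2 10.0.0.12:1514 check
```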
Defining Your Pipelines¶
First you define a processing pipeline yaml file. Here is an example:
```yaml
apiVersion: punchline.gitlab.thalesdigital.io/v1
kind: Sparkline
metadata:
  name: python-sample
  annotations:
    platform.gitlab.thalesdigital.io/platform: platform-sample
spec:
  image: artifactory.thalesdigital.io/private-docker-punch/product/pp-punch/sparkline:7.0.1-SNAPSHOT
  implementation: python
  dependencies:
    - punch-parsers:org.thales.punch:punch-websense-parsers:1.0.0
    - punch-parsers:org.thales.punch:common-punchlets:4.0.2
    - file:org.thales.punch:geoip-resources:1.0.1
  settings:
    spark.executor.instances: "1"
  punchline:
    dag:
      - settings:
          input_data:
            - date: "date"
              name: name
        component: input
        publish:
          - stream: data
        type: dataset_generator
      - settings:
          truncate: false
        component: show
        subscribe:
          - component: input
            stream: data
        type: show
```
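To make the dag semantics concrete, here is a plain-Python sketch of what this two-node pipeline does: an input node generates the configured rows and publishes them on a "data" stream, and a show node prints what it receives. This is only an illustration of the data flow; it is not the punch or Spark runtime, and the function names simply mirror the node types above.

```python
# Plain-Python sketch of the two-node dag: a dataset_generator node
# publishing rows on a "data" stream, and a show node printing them.
# This is an illustration of the semantics, not the punch runtime.

def dataset_generator():
    # The input node publishes the configured rows on its "data" stream.
    input_data = [{"date": "date", "name": "name"}]
    for row in input_data:
        yield row

def show(stream, truncate=False):
    # The show node prints every row it receives from its subscription.
    rows = []
    for row in stream:
        text = str(row)
        rows.append(text if not truncate else text[:20])
        print(text)
    return rows

rows = show(dataset_generator())
```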
This example assumes you bring in an additional python package (a pex file) containing your input node implementation.
Next you define a (so-called) channel definition file to refer to that pipeline. The channel file can contain additional resources, for example a Kafka topic:
```yaml
version: "7.0"
resources:
  - type: kafka_topic
    name: spark_operator
    cluster: common
    partitions: 1
    replication_factor: 1
applications:
  - name: detection.yaml
    runtime: kubernetes
    cluster: west
```
That is about it.
To run your application, the punch provides simple terminal commands to start, stop, reload or get the status of your applications. The punch provides a punch Kubernetes console tool that ships all the required punch commands along with the development tools.
This package is somewhat similar to the punch standalone package but only ships with the client commands. You simply configure it with the required credentials to access your Kubernetes cluster, and you are good to go. All the punch applications are supported by the punch terminal. It looks like this.
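For illustration, such a terminal session might look like the following. The exact command names and options depend on your punch version; `channelctl` and the tenant and channel names are assumptions here:

```shell
# Illustrative session; command names and options depend on your punch
# version, and the tenant/channel names are examples.
channelctl -t mytenant status
channelctl -t mytenant start --channel detection
channelctl -t mytenant stop --channel detection
```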
This mode is handy for development or pre-production. Use your laptop to design, run and test your pipeline using your favorite development tools. Once ready, submit it to your Kubernetes platform. The punch provides you with a single-process startup command to easily run your pipelines in the foreground or directly from within your code editor.
For security reasons, or simply if you don't have external access, you may prefer to use only a web console and run the punch terminal inside your cluster as follows:
In order to capture the logs from all internal applications and from the external firewall, some additional components are deployed.
- Fluent Bit [^2] is configured to capture the logs of all containers. These logs are sent to a Kafka topic for fast consumption (to avoid slowing down the producing containers because of a slow intake).
- an ingestion Fluent Bit is configured to receive the logs from the external firewall. These logs are pushed to Kafka as well.
- a punchline grabs all the logs from Kafka and simply ingests them into dedicated Elasticsearch indices.
- logs can be visualized using Kibana web User Interface.
Note that the Fluent Bit agents are deployed as a DaemonSet, i.e. on every cluster node, in order to catch the logs of all pods/containers on that node.
The log ingestion punchline and the Kafka brokers are not configured to send their own logs to Kafka, to avoid a loop where they would process their own logs. They can however be configured to send their own logs directly to Elasticsearch.
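The per-node log capture described above can be sketched with a short Fluent Bit configuration fragment. The paths, broker address and topic name are illustrative assumptions, not the actual platform configuration:

```
# Illustrative Fluent Bit fragment: tail all container logs on the node
# and push them to a Kafka topic. Paths, brokers and topic are examples.
[INPUT]
    Name     tail
    Path     /var/log/containers/*.log
    Tag      kube.*

[OUTPUT]
    Name     kafka
    Match    kube.*
    Brokers  kafka:9092
    Topics   platform-logs
```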
To capture performance and resource usage metrics from all components, the Prometheus/Grafana stack is deployed. Prometheus ([^4]) automatically collects the metrics from all the running components and stores them on its local disk. Grafana ([^3]) simply helps visualize these metrics through a number of dedicated dashboards: system, Kubernetes, components or applicative resources.
- In this setup, a single Prometheus is deployed. Should it crash (unlikely), or should the hosting hypervisor or server crash, the monitoring is lost. If this is not acceptable, a different, dual setup must be deployed. It is not shown here for the sake of clarity.
- Punch components (punchlines) can be configured to expose their metrics directly to Prometheus, just like any other Prometheus-friendly application.
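One common way to expose a pod's metrics to Prometheus is the annotation-based convention shown below. It only works if the Prometheus scrape configuration honours these annotations, and the port and path here are examples:

```yaml
# Common annotation-based scraping convention. Requires a matching
# Prometheus scrape configuration; port and path are examples.
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9102"
    prometheus.io/path: "/metrics"
```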
In real-life applications there are two kinds of metrics: the ones used for debugging or analysing performance, and the ones used for capacity planning or even billing. The latter must be saved long term, not just in Prometheus. The punch traffic peak metric is an example of a crucial metric that must not be lost.
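One standard way to keep selected metrics beyond Prometheus local storage is its `remote_write` mechanism, which forwards samples to a long-term store. The URL and metric name pattern below are purely illustrative:

```yaml
# Illustrative Prometheus fragment: forward selected metrics to a
# long-term store via remote_write. URL and metric pattern are examples.
remote_write:
  - url: "http://metrics-archive:9201/write"
    write_relabel_configs:
      - source_labels: [__name__]
        regex: "punch_traffic_peak.*"
        action: keep
```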
Kubernetes namespaces help different projects, teams or customers share a Kubernetes cluster. To quote the Kubernetes documentation, namespaces provide:
- scopes for names.
- mechanisms to attach authorization and policy to a subsection of the cluster.
The Kast platform deploys the various core services and components using a proposed namespace convention.
The punch fully leverages that convention. The punch user-level concepts are defined using the concept of tenants. Inside a tenant, applications are further organised in channels. When submitting an application to Kubernetes, the tenant name is automatically used to name the target namespace.
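In practice this means each tenant maps to a same-named namespace, so standard kubectl commands apply. The custom resource name below is illustrative:

```shell
# Illustrative: each tenant maps to a namespace of the same name.
kubectl get namespaces
# Punchlines submitted for tenant 'hr' show up as custom resources
# there (the resource name 'punchlines' is an assumption):
kubectl -n hr get punchlines
```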
Say for example you have two logical tenants, 'hr' and 'rd', each hosting some applications and exposing some data through dashboards. The namespace organisation is highlighted next: on the left side, the default convention of a default Kast deployment; on the right, the resulting punch variation.
Punch tenants and Kubernetes namespaces are not identical concepts. A tenant is a user-level concept; namespaces are part of its implementation.