You are impatient!
If you have 20 minutes and are not yet familiar with the punch, take the time to go through the other getting started chapter. If you have only 2 minutes, read this one.
If you haven't done it already, start your punch standalone:
```sh
source ./activate.sh
punchplatform-standalone.sh start
```
Once done, you can start the channelctl command-line shell.
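The prompt shown further below (channelctl:mytenant>) suggests the shell is started for the standalone's mytenant tenant. A minimal sketch, assuming channelctl's tenant option:

```sh
channelctl --tenant mytenant
```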
You now have a terminal with auto-completion. Check the status of your punch:
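Assuming the shell's built-in status command, this looks like:

```sh
channelctl:mytenant> status
```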
The very first time you execute that command, it is slow. That is perfectly normal: the channelctl tool detects that you are on a completely empty configuration and bootstraps a correct initial state.
This lists all the channels installed on your standalone. Each channel is a complete punch application. Start one that is a typical ELK-like example:
```sh
channelctl:mytenant> start --channel sourcefire
```
A punch application, called a punchline, is now running and ready to receive logs. In a new activated shell, you can inject some logs using the punch injector tool. It generates sourcefire logs and sends them to your punchline:
```sh
punchplatform-log-injector.sh -c $PUNCHPLATFORM_CONF_DIR/resources/injectors/mytenant/sourcefire_injector.json
```
Check your Kibana: your logs are there. To stop your channel:
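A sketch, assuming the stop command mirrors the start command shown above:

```sh
channelctl:mytenant> stop --channel sourcefire
```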
Congratulations! You just managed a complete, production-ready, ELK-like punch!
The application you just started is completely described by a yaml file named input.yaml. Here is its content:
```yaml
version: '6.0'
runtime: storm
type: punchline
meta:
  vendor: sourcefire
  technology: sourcefire
dag:
  - type: syslog_input
    settings:
      listen:
        proto: tcp
        host: 0.0.0.0
        port: 9902
      self_monitoring.activation: true
      self_monitoring.period: 10
    publish:
      - stream: logs
        fields:
          - log
          - _ppf_local_host
          - _ppf_local_port
          - _ppf_remote_host
          - _ppf_remote_port
          - _ppf_timestamp
          - _ppf_id
      - stream: _ppf_metrics
        fields:
          - _ppf_latency
  - type: punchlet_node
    settings:
      punchlet_json_resources:
      punchlet:
        - punchlets/common/input.punch
        - punchlets/common/parsing_syslog_header.punch
        - punchlets/sourcefire/parsing.punch
        - punchlets/common/geoip.punch
    subscribe:
      - component: syslog_input
        stream: logs
      - component: syslog_input
        stream: _ppf_metrics
    publish:
      - stream: logs
        fields:
          - log
          - _ppf_id
      - stream: _ppf_errors
        fields:
          - _ppf_error_message
          - _ppf_error_document
          - _ppf_id
      - stream: _ppf_metrics
        fields:
          - _ppf_latency
  - type: elasticsearch_output
    settings:
      per_stream_settings:
        - stream: logs
          index:
            type: daily
            prefix: mytenant-events-
          document_json_field: log
          document_id_field: _ppf_id
          additional_document_value_fields:
            - type: date
              document_field: '@timestamp'
              format: iso
        - stream: _ppf_errors
          document_json_field: _ppf_error_document
          additional_document_value_fields:
            - type: tuple_field
              document_field: ppf_error_message
              tuple_field: _ppf_error_message
            - type: date
              document_field: '@timestamp'
              format: iso
          index:
            type: daily
            prefix: mytenant-events-
          document_id_field: _ppf_id
    subscribe:
      - component: punchlet_node
        stream: logs
      - component: punchlet_node
        stream: _ppf_errors
      - component: punchlet_node
        stream: _ppf_metrics
metrics:
  reporters:
    - type: kafka
settings:
  topology.worker.childopts: -server -Xms1g -Xmx4g
```
It implements a log pipeline from a TCP socket input up to Elasticsearch.
A second file describes how and where to run that application. It is called channel_structure.yml. Its content is:
```yaml
version: '6.0'
start_by_tenant: true
stop_by_tenant: true
applications:
  - name: input
    runtime: shiva
    command: punchlinectl
    args:
      - start
      - --punchline
      - input.yaml
    shiva_runner_tags:
      - common
    cluster: common
    reload_action: kill_then_start
```
In this case it tells the punch to run that application with shiva, an interesting lightweight proprietary engine of the punch. Shiva is one of the runtime engines supported by the punch; the others are apache storm, spark and kubernetes (starting at punch 7.0). For now simply remember that a runtime engine is something in charge of executing applications.
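The channel_structure.yml above spells out the exact command shiva runs on your behalf. As a sketch, assuming an activated shell positioned in the channel's configuration directory, you could run the same punchline in the foreground yourself:

```sh
punchlinectl start --punchline input.yaml
```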