Class | Description |
---|---|
AbstractFileOutput | The file bolt writes the incoming tuples into archives. |
AbstractJDBCOutput | AbstractJDBCBolt provides common JDBC configuration (host, username, port, etc.). |
AbstractKafkaOutput | The KafkaBolt writes the content of storm tuples to Kafka topics. |
AbstractKafkaOutput.Topic | Gathers all the information related to a topic. |
ArchiveReaderNode | Node to read archives from the provided metadata. |
ClickHouseOutput | This bolt enables you to insert data into a ClickHouse database. |
DistinctLogCounterNode | Counts distinct logs, differentiated by their sequence identifier. |
ElasticsearchOutput | The Elasticsearch output node sends tuples to Elasticsearch. |
FileTransferBolt | This node enables you to transfer files located on your filesystem somewhere else: S3, Hadoop or another filesystem (any sink supported by the avro-parquet library). |
FilterNode | The Filter bolt provides simple yet efficient filtering capabilities with low CPU usage. |
HttpNode | Sends an HTTP request for each tuple. |
LumberjackOutput | The PunchPlatformBolt forwards messages to a next-hop PunchPlatformSpout. |
ModuleNode<T,V> | A simple module bolt. |
OmniSciOutput | This bolt enables you to insert data into a MapD (OmniSci) database. |
PostgresqlOutput | This bolt enables you to insert data into a PostgreSQL database. |
PunchDispatcherNode | The PunchDispatcherNode dispatches incoming tuples to different punchlet(s), based on one of the received fields. |
PunchNode | The punch bolt executes punchlet(s) on the fly. |
RawPacketNode | |
RetentionNode | |
RythmerNode | The rythmer bolt emits messages according to a provided timestamp. |
SplitNode | A simple splitter bolt. |
SyslogOutput | The Syslog bolt forwards messages to a next-hop UDP server. |
SyslogUdpOutput | The Syslog UDP bolt forwards messages to a next-hop UDP server. |
TestNode | A simple test bolt. |
The punchplatform bolts are standard storm bolts. It is easy to code your own and make them available to punch topologies. The few key concepts regarding the way data is received and forwarded as streams and fields are described hereafter. Refer to each bolt's javadoc page for details.
Data comes into a bolt as a storm tuple, from a previous spout or bolt in the same topology. Each bolt does something useful with that data: transform it, reject it, multiply it. Some bolts are inner bolts; others save the outgoing tuples to an external destination (Kafka, Elasticsearch, Ceph, files, ...).
It is your job to configure each bolt to emit the outgoing tuples on the stream(s) you need.
You can also configure your bolt to handle two additional reserved punchplatform streams, whose names and semantics have a special meaning.
Here is an example. In the following topology, the spout is configured to publish three streams: its logs stream plus the two reserved _ppf_metrics and _ppf_errors streams. Two bolts then subscribe to these streams:
"spouts" : [
{
"type" : "the_spout",
"spout_settings" : { ... },
"storm_settings" : {
"publish" : [
{ "stream" : "logs", "fields" : ["log"] },
{ "stream" : "_ppf_metrics", "fields" : ["_ppf_latency"] },
{ "stream" : "_ppf_errors", "fields" : ["_ppf_error"] }
]
}
}
"bolts" : [
{
"type" : "a_first_inner_bolt",
"bolt_settings" : { ... },
"storm_settings" : {
"subscribe" : [
{ "component" : "the_spout", "stream" : "logs" },
]
"publish" : [
{ "stream" : "logs", "fields" : ["log"] },
]
},
{
"type" : "a_second_output_bolt",
"bolt_settings" : { ... },
"storm_settings" : {
"subscribe" : [
{ "component" : "a_first_inner_bolt", "stream" : "logs" },
{ "component" : "the_spout", "stream" : "_ppf_errors" }
]
}
}
Hopefully you get the idea: you can route and dispatch your data any way you need.
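Since punch bolts are standard storm bolts, coding your own is mostly a matter of implementing the usual Storm callbacks. Below is a minimal, hypothetical sketch using the plain Storm API; the class name, the transformation, and the error handling are illustrative only, and a real punch bolt would more likely build on one of the abstract base classes listed in the table above:

    import java.util.Map;

    import org.apache.storm.task.OutputCollector;
    import org.apache.storm.task.TopologyContext;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseRichBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;

    // Hypothetical inner bolt: reads the "log" field from each incoming
    // tuple, applies a trivial transformation, and re-emits the result on
    // the "logs" stream. Failures are routed to the reserved "_ppf_errors"
    // stream, mirroring the topology example above.
    public class MyInnerBolt extends BaseRichBolt {

        private OutputCollector collector;

        @Override
        public void prepare(Map<String, Object> conf, TopologyContext context, OutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void execute(Tuple tuple) {
            try {
                String log = tuple.getStringByField("log");
                // do something useful with the data: transform it
                collector.emit("logs", tuple, new Values(log.toUpperCase()));
                collector.ack(tuple);
            } catch (Exception e) {
                // route the failure to the reserved error stream
                collector.emit("_ppf_errors", tuple, new Values(e.getMessage()));
                collector.ack(tuple);
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // declared streams and fields must match the "publish" section
            // of the bolt's storm_settings in the topology file
            declarer.declareStream("logs", new Fields("log"));
            declarer.declareStream("_ppf_errors", new Fields("_ppf_error"));
        }
    }

Note how the streams and fields declared in declareOutputFields line up with the publish settings of the JSON topology example: that is the contract that lets the next component subscribe to this bolt's output.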