Class | Description |
---|---|
AbstractFileInput | The file spout feeds messages into Storm from one or several files. |
AbstractFileInput.Item | Whenever something is read, it is enqueued so that Storm can pick it up in nextTuple(). |
AbstractKafkaNode | Common Kafka input node base class with the standard Kafka properties definition. |
AbstractSocketInput<T> | Base class for socket server spouts: http, lumberjack, relp and plain syslog. |
AzureBlobStorageSpout | The AzureBlobStorageSpout enables you to pull data from a given container located in an Azure Blob Storage. |
ExtractionInput | Light topology Elastic input node designed for extraction. |
GeneratorInput | The generator publishes fake data in configurable streams. |
HttpInput | The Http spout receives http requests and forwards them as tuples. |
KafkaInput | The KafkaInput node reads records from a topic and forwards them into a punchline. |
LumberjackInput | The Lumberjack spout receives and decodes lumberjack frames, which are in turn emitted as tuples. |
RelpInput | The Relp listening spout. |
SFTPSpout | SFTPSpout downloads files that match a given regular expression. |
SmtpInput | The SMTP listening spout. |
SnmpInput | The SNMP spout receives SNMP traps over UDP and decodes them into a JSON format. |
SyslogInput<T> | The Syslog input reads lines from tcp/udp and emits them as tuples. |
WrapperCustomInput | Wrapper input node to convert a custom node with the public API into a legacy node with the private API. |
Data comes into a spout from an external source: a socket, Kafka, or files. In some cases you receive a single line (i.e. a string), in other cases a map of key-value elements. In all cases the spout takes these values and forwards them as part of a Storm stream, in the form of a so-called tuple, which is actually a key-value map. It is your job to configure the spout to emit the fields you want as part of tuples inside streams.
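For instance, assuming a spout configured to publish a single field named "log" on a stream named "logs" (illustrative names, matching the examples below), an incoming raw syslog line would be forwarded as a one-entry tuple:

```json
{
  "log" : "Jun 10 06:55:46 myhost sshd[2048]: Accepted password for alice"
}
```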
Punchplatform topology files allow you to design arbitrary DAGs. You can freely decide how your data is transported, processed and routed to one or several final destinations such as Elasticsearch or an archiving backend.
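As a sketch of such a DAG (the node types, component names and settings below are illustrative, not a reference configuration), a topology file declares spouts and bolts, and wires them together through published and subscribed streams:

```json
{
  "spouts" : [
    {
      "type" : "syslog_spout",
      "spout_settings" : { ... },
      "storm_settings" : {
        "component" : "syslog_input",
        "publish" : [ { "stream" : "logs", "fields" : ["log"] } ]
      }
    }
  ],
  "bolts" : [
    {
      "type" : "elasticsearch_bolt",
      "bolt_settings" : { ... },
      "storm_settings" : {
        "component" : "es_output",
        "subscribe" : [ { "component" : "syslog_input", "stream" : "logs" } ]
      }
    }
  ]
}
```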
You can also configure your spout to take care of two additional reserved punchplatform streams, whose names and semantics have a special meaning.
Here is an example spout configuration, in which the spout is configured to publish three streams:
"spouts" : [
{
"type" : "one_of_the_spout",
"spout_settings" : { ... },
"storm_settings" : {
"publish" : [
{ "stream" : "logs", "fields" : ["log"] },
{ "stream" : "_ppf_metrics", "fields" : ["_ppf_latency"] },
{ "stream" : "_ppf_errors", "fields" : ["_ppf_error"] }
]
}
}
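With this configuration the spout emits the user data on the "logs" stream, self-monitoring tuples on the reserved "_ppf_metrics" stream (the _ppf_latency field is used for latency tracking), and error tuples on the reserved "_ppf_errors" stream (the _ppf_error field), so that errors can be routed downstream like any other data.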
If instead you configure it like this:
"spouts" : [
{
"type" : "one_of_the_spout",
"spout_settings" : { ... },
"storm_settings" : {
"publish" : [
{ "stream" : "logs", "fields" : ["log"] }
]
}
}
Only the user data will be forwarded as tuples in your topology; the reserved latency-tracking and error streams will not be emitted.