public class KafkaContinuousConsumerBuilder<K,V> extends Object

Type Parameters:
K - the class of the record key
V - the class of the record value

IKafkaContinuousConsumer builder. Refer to the Kafka KafkaConsumer javadoc.

Here is an example:
    kafkaReader = new KafkaContinuousConsumerBuilder<byte[], byte[]>()
        .setBootstrapServer("localhost:9092")
        .setGroupId("mytenant_apache_processing")
        .setTopic("apache")
        .setEarliestStartOffsetStrategy()
        .setMetricContext(yourMetricContext)
        .forReceiver(yourRecordHandler);
It is handy to pass in additional standard Kafka properties. You can do that using the
setProperty(String, Object)
method. For example:

    kafkaReader = new KafkaContinuousConsumerBuilder<byte[], byte[]>()
        .setProperty("fetch.max.bytes", 1048576)
        ...
        .forReceiver(yourRecordHandler);
Watch out: by default auto commit is disabled, and you are in charge of
invoking the commit method whenever needed. You can use the setAutoCommit()
method to enable auto committing instead.
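Since auto commit is disabled by default, your code is responsible for deciding when an offset is safe to commit. The following is a minimal standalone sketch of the idea, not the punch API: all names here are illustrative. It shows why committing only after a record is processed preserves at-least-once delivery.

```java
import java.util.List;

// Illustrative sketch: the commit point advances only *after* a record
// has been processed, so a crash mid-batch re-delivers unprocessed records
// (at-least-once) instead of silently losing them.
public class ManualCommitSketch {
    private long committedOffset = -1; // last offset fully processed

    // Process a polled batch whose first record sits at baseOffset.
    public void poll(List<String> records, long baseOffset) {
        for (int i = 0; i < records.size(); i++) {
            process(records.get(i));
            committedOffset = baseOffset + i; // commit point: after processing
        }
    }

    private void process(String record) {
        // real work would go here
    }

    public long committedOffset() {
        return committedOffset;
    }
}
```

With auto commit enabled instead, the commit could happen before processing finishes, trading safety for simplicity.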
Modifier and Type | Class and Description
---|---
static class | KafkaContinuousConsumerBuilder.START_OFFSET_STRATEGY One of "earliest", "last_committed", "latest", "from_offset" or "from_timestamp".
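The five strategy names above can be pictured as a small enum mapping each documented string value to a constant. This is an illustrative stand-in only, not the actual START_OFFSET_STRATEGY class:

```java
// Illustrative stand-in for START_OFFSET_STRATEGY: maps the documented
// string values to enum constants and back.
public enum StartOffsetStrategy {
    EARLIEST("earliest"),
    LAST_COMMITTED("last_committed"),
    LATEST("latest"),
    FROM_OFFSET("from_offset"),
    FROM_TIMESTAMP("from_timestamp");

    private final String label;

    StartOffsetStrategy(String label) {
        this.label = label;
    }

    // Resolve a configuration string to its strategy constant.
    public static StartOffsetStrategy fromLabel(String label) {
        for (StartOffsetStrategy s : values()) {
            if (s.label.equals(label)) {
                return s;
            }
        }
        throw new IllegalArgumentException("unknown strategy: " + label);
    }
}
```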
Constructor and Description
---
KafkaContinuousConsumerBuilder() public constructor
Modifier and Type | Method and Description
---|---
IKafkaContinuousConsumer | forReceiver(IRecordHandler<K,V> receiver)
KafkaContinuousConsumerBuilder<K,V> | setAutoCommit() Enable auto commit.
KafkaContinuousConsumerBuilder<K,V> | setAutoCommitInterval(long interval) Set the auto commit interval.
KafkaContinuousConsumerBuilder<K,V> | setBootstrapServer(String bootstrapServers)
KafkaContinuousConsumerBuilder<K,V> | setClientId(String consumerId)
KafkaContinuousConsumerBuilder<K,V> | setEarliestStartOffsetStrategy() Start reading from the earliest offset.
KafkaContinuousConsumerBuilder<K,V> | setFromDatetimeStartOffsetStrategy(String datetime, String format)
KafkaContinuousConsumerBuilder<K,V> | setFromDatetimeStartOffsetStrategy(String datetime, String format, boolean continueWhenNoRecords) Start reading from a specific datetime.
KafkaContinuousConsumerBuilder<K,V> | setFromOffsetStartStrategy(long offset) Start reading from a given offset.
KafkaContinuousConsumerBuilder<K,V> | setGroupId(String groupId)
KafkaContinuousConsumerBuilder<K,V> | setLastCommittedStartOffsetStrategy() Use a last-committed strategy.
KafkaContinuousConsumerBuilder<K,V> | setLatestStartOffsetStrategy() Start reading from the latest offset.
KafkaContinuousConsumerBuilder<K,V> | setMetricContext(org.thales.punch.libraries.metrics.api.IMetricContext metricContext) Associate a metric context to the Kafka reader.
KafkaContinuousConsumerBuilder<K,V> | setOffsetTreeAckStrategy() Enable the ack and fail methods of your IPartition handle to be handled using the underlying offset tree strategy.
KafkaContinuousConsumerBuilder<K,V> | setOffsetWatchDogTimeout(Long timeout) Set the offset watchdog timeout.
KafkaContinuousConsumerBuilder<K,V> | setPartitionNumber(int num) Set a partition number if you want to consume only a given partition.
KafkaContinuousConsumerBuilder<K,V> | setProperties(Properties properties) Set additional properties, typically ones defined in the Kafka consumer javadocs.
KafkaContinuousConsumerBuilder<K,V> | setProperty(String name, Object value) Set an additional property, typically one defined in the Kafka consumer javadocs.
KafkaContinuousConsumerBuilder<K,V> | setSecurityStores(org.thales.punch.platform.api.pojo.JavaStoreSecurity securityStores)
KafkaContinuousConsumerBuilder<K,V> | setSmartFailureHandling(boolean b) If set to true, records already acked or pending will not be re-emitted after a failure.
KafkaContinuousConsumerBuilder<K,V> | setTickInterval(long i)
KafkaContinuousConsumerBuilder<K,V> | setTopic(List<String> topics) Set the topics to be consumed by this consumer.
KafkaContinuousConsumerBuilder<K,V> | setTopic(String... topic)
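The API above follows a classic fluent builder shape: every setter returns the builder itself, and forReceiver(...) is the terminal call that produces the consumer. A standalone mock of that pattern follows; all types in it are simplified stand-ins, not the punch API.

```java
import java.util.ArrayList;
import java.util.List;

// Standalone mock of the fluent builder shape used by
// KafkaContinuousConsumerBuilder: each setter returns `this` so calls
// chain, and the terminal call hands the configuration to a consumer.
public class MiniConsumerBuilder<K, V> {
    public interface RecordHandler<K, V> {
        void onRecord(K key, V value);
    }

    private final List<String> topics = new ArrayList<>();
    private String groupId;

    public MiniConsumerBuilder<K, V> setTopic(String... topic) {
        for (String t : topic) {
            topics.add(t);
        }
        return this; // fluent: every setter returns the builder
    }

    public MiniConsumerBuilder<K, V> setGroupId(String groupId) {
        this.groupId = groupId;
        return this;
    }

    // Terminal call: in the mock we just render the configuration.
    public String forReceiver(RecordHandler<K, V> receiver) {
        return "consumer(group=" + groupId + ", topics=" + topics + ")";
    }
}
```

The point of returning `this` from every setter is that configuration reads as one expression, as in the examples at the top of this page.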
public KafkaContinuousConsumerBuilder()
public KafkaContinuousConsumerBuilder<K,V> setBootstrapServer(String bootstrapServers)
bootstrapServers - the bootstrap server(s) address

public KafkaContinuousConsumerBuilder<K,V> setGroupId(String groupId)
groupId - the consumer group id

public KafkaContinuousConsumerBuilder<K,V> setClientId(String consumerId)
consumerId - the client id of this consumer

public KafkaContinuousConsumerBuilder<K,V> setSmartFailureHandling(boolean b)
b - true to enable smart failure handling

public KafkaContinuousConsumerBuilder<K,V> setMetricContext(org.thales.punch.libraries.metrics.api.IMetricContext metricContext)
The metric context will be derived for each partition so as to publish a list of per-partition metrics.
metricContext - your metric context

public KafkaContinuousConsumerBuilder<K,V> setTopic(String... topic)
topic - the topic(s) to consume

public KafkaContinuousConsumerBuilder<K,V> setPartitionNumber(int num)
num - the partition number. If you call this with -1 it has no effect, and the reader will consume all partitions.

public KafkaContinuousConsumerBuilder<K,V> setAutoCommit()
public KafkaContinuousConsumerBuilder<K,V> setAutoCommitInterval(long interval)
interval - the auto commit interval

public KafkaContinuousConsumerBuilder<K,V> setEarliestStartOffsetStrategy()
public KafkaContinuousConsumerBuilder<K,V> setLatestStartOffsetStrategy()
public KafkaContinuousConsumerBuilder<K,V> setLastCommittedStartOffsetStrategy()
public KafkaContinuousConsumerBuilder<K,V> setOffsetWatchDogTimeout(Long timeout)
public KafkaContinuousConsumerBuilder<K,V> setOffsetTreeAckStrategy()
Enable the ack and fail methods of your IPartition handle to be handled using the
underlying offset tree strategy. What this means is that acked and failed records will be dealt with in a way
that does not skip unacknowledged records.
This mode is mandatory for Storm or Storm-like streaming processing if you care about at-least-once semantics.
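The offset-tree idea can be illustrated with a standalone sketch (illustrative only, not the actual implementation): acks may arrive out of order, but the committable offset only advances over a contiguous run of acked offsets, so a commit can never skip past an unacknowledged record.

```java
import java.util.TreeSet;

// Illustrative sketch of an offset-tree-like ack strategy: out-of-order
// acks are buffered in a sorted set, and the commit point only advances
// through contiguous acked offsets.
public class OffsetTreeSketch {
    private final TreeSet<Long> acked = new TreeSet<>();
    private long committable; // next offset that must be acked before committing past it

    public OffsetTreeSketch(long startOffset) {
        this.committable = startOffset;
    }

    public void ack(long offset) {
        acked.add(offset);
        // Advance the commit point through any contiguous run of acks.
        while (acked.remove(committable)) {
            committable++;
        }
    }

    // First offset NOT yet safe to commit past.
    public long committableOffset() {
        return committable;
    }
}
```

A failed record simply never gets acked, so the commit point stalls at it and the record is re-delivered after a restart, which is exactly the at-least-once behavior described above.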
public KafkaContinuousConsumerBuilder<K,V> setSecurityStores(org.thales.punch.platform.api.pojo.JavaStoreSecurity securityStores)
public KafkaContinuousConsumerBuilder<K,V> setFromDatetimeStartOffsetStrategy(String datetime, String format, boolean continueWhenNoRecords)
datetime - the date value
format - the date format. Accepted values are "ISO" and "epoch_ms".
continueWhenNoRecords - indicates whether to throw an error or to continue to initialize partitions when no records are found for the given datetime

public KafkaContinuousConsumerBuilder<K,V> setFromDatetimeStartOffsetStrategy(String datetime, String format)
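The two accepted formats, "ISO" and "epoch_ms", can be illustrated with a standalone helper that converts either form to epoch milliseconds. This is a sketch of the idea only; the helper name and behavior are assumptions, not the builder's internal code.

```java
import java.time.Instant;

// Illustrative helper: converts a datetime string given in one of the
// two documented formats ("ISO" or "epoch_ms") to epoch milliseconds.
public class DatetimeSketch {
    public static long toEpochMs(String datetime, String format) {
        switch (format) {
            case "ISO":
                // ISO-8601 instant, e.g. "2023-01-01T00:00:00Z"
                return Instant.parse(datetime).toEpochMilli();
            case "epoch_ms":
                // Already epoch milliseconds as a decimal string
                return Long.parseLong(datetime);
            default:
                throw new IllegalArgumentException("unknown format: " + format);
        }
    }
}
```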
public KafkaContinuousConsumerBuilder<K,V> setFromOffsetStartStrategy(long offset)
offset - the starting offset

public IKafkaContinuousConsumer forReceiver(IRecordHandler<K,V> receiver)
receiver - the record receiver

public KafkaContinuousConsumerBuilder<K,V> setProperty(String name, Object value)
name - the property name
value - the property value

public KafkaContinuousConsumerBuilder<K,V> setProperties(Properties properties)
Be careful: this method simply puts all the properties you pass, as-is. If a topic is present in there it will be considered.
properties - the properties

public KafkaContinuousConsumerBuilder<K,V> setTopic(List<String> topics)
topics - the list of topics

public KafkaContinuousConsumerBuilder<K,V> setTickInterval(long i)
Copyright © 2014–2023. All rights reserved.