public class KafkaShivaUtil extends Object
This protocol is used by the core punch administration to keep track of channel and application status, as well as by the shiva application scheduler.
Why an extra layer? Simply to enforce the properties required for the protocol to work correctly: auto commit, last-commit offset strategy, key and value serializers, and so on.
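For illustration, here is a minimal sketch of the producer side of this protocol using the helpers documented below. The topic name, record key, bootstrap address and client id are illustrative only, and the import of KafkaShivaUtil (whose package is not shown on this page) is assumed.

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class AdminProducerSketch {
    public static void main(String[] args) {
        // Producer pre-configured with the serializers and properties enforced by this layer.
        KafkaProducer<String, byte[]> producer =
                KafkaShivaUtil.getProducer("localhost:9092", "punch-admin-client");

        // Publish a single admin record; send() also traces the gossip message.
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        ProducerRecord<String, byte[]> record =
                new ProducerRecord<>("platform-admin", "some-key", payload);
        RecordMetadata meta = KafkaShivaUtil.send("leader", producer, record);
        System.out.println("written at offset " + meta.offset());

        producer.close();
    }
}
```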
Modifier and Type | Field and Description |
---|---|
static int | MAX_RECORD_LENGTH: Max default Kafka record size is 1 MB. |
Modifier and Type | Method and Description |
---|---|
static String | getCommandRecordKey(String appName) |
static byte[] | getCompleteV2Record(String key, byte[] chunk, Map<String,List<byte[]>> map): This method receives the sequence of chunks from the same application start command and aggregates them all. |
static KafkaContinuousConsumerBuilder<String,byte[]> | getContinuousConsumerBuilder(String bootstrapServers, String groupId, String... topics): A light helper to create continuous consumers well configured for the punch internal Kafka admin and shiva protocol. |
static org.thales.punch.libraries.utils.api.UtilZip.Visitor | getDefaultVisitor(String tenantName, List<String> whitelist, List<String> blacklist): To start a shiva application, a zip archive is sent to each worker. |
static org.apache.kafka.clients.producer.KafkaProducer<String,byte[]> | getProducer(String bootstrapServers, String clientId): Get a simple Kafka producer. |
static org.apache.kafka.clients.producer.KafkaProducer<String,byte[]> | getProducer(String bootstrapServers, String clientId, org.thales.punch.platform.api.pojo.JavaStoreSecurity securityStores): Get a simple Kafka producer. |
static String | getV2CommandRecordKey(String appName, int seqNum, int lastNum): This one is used to send payloads that are too big for a single record. |
static org.thales.punch.settings.api.ISettingsMap | getWorkerDeclarationMap(String workerId, List<String> tags): Return a JSON document that represents the declaration of a new worker. |
static void | logRcv(String who, org.apache.kafka.clients.consumer.ConsumerRecord<String,byte[]> record) |
static org.apache.kafka.clients.producer.RecordMetadata | send(String who, org.apache.kafka.clients.producer.KafkaProducer<String,byte[]> producer, org.apache.kafka.clients.producer.ProducerRecord<String,byte[]> record): Send and trace a gossip message. |
static void | writeAdminMap(String topic, org.apache.kafka.clients.producer.KafkaProducer<String,byte[]> producer, org.thales.punch.settings.api.ISettingsMap map): Write a settings map to the administration topics. |
public static final int MAX_RECORD_LENGTH
public static org.thales.punch.libraries.utils.api.UtilZip.Visitor getDefaultVisitor(String tenantName, List<String> whitelist, List<String> blacklist)
Every time a shiva application is started, the archive sent simply contains:
$PUNCHPLATFORM_CONF_DIR/punchplatform.properties
$PUNCHPLATFORM_CONF_DIR/resolv.hjson
$PUNCHPLATFORM_CONF_DIR/tenants/mytenant
This basically lets the shiva workers launch every required application correctly.
For finer control you can define a whitelist and a blacklist in the tenant configuration file. These let you include or exclude files using regular expressions, as sketched below.
Parameters:
whitelist
blacklist
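A minimal sketch of building such a visitor with regexp white and black lists. The tenant name and patterns are illustrative, the KafkaShivaUtil import is assumed, and what is then done with the returned visitor (walking and zipping the configuration tree) is not covered on this page.

```java
import java.util.List;

import org.thales.punch.libraries.utils.api.UtilZip;

public class VisitorSketch {
    public static UtilZip.Visitor buildVisitor() {
        // Keep only the properties, hjson and tenant files; drop backup files.
        List<String> whitelist = List.of(".*\\.properties", ".*\\.hjson", "tenants/mytenant/.*");
        List<String> blacklist = List.of(".*\\.bak");
        return KafkaShivaUtil.getDefaultVisitor("mytenant", whitelist, blacklist);
    }
}
```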
public static org.thales.punch.settings.api.ISettingsMap getWorkerDeclarationMap(String workerId, List<String> tags)
Parameters:
workerId - the worker unique identifier
tags - the worker tags
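A sketch of a plausible use: build the declaration document for a new worker and publish it with writeAdminMap. That the declaration is written on the admin topic this way is an assumption; the worker id, tags and topic name are illustrative and the KafkaShivaUtil import is assumed.

```java
import java.util.List;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.thales.punch.settings.api.ISettingsMap;

public class WorkerDeclarationSketch {
    public static void declare(KafkaProducer<String, byte[]> adminProducer) {
        // Build the JSON declaration document for a new worker with its tags.
        ISettingsMap declaration =
                KafkaShivaUtil.getWorkerDeclarationMap("shiva-worker-01", List.of("blue", "gateway"));
        // Publish it on the administration topic.
        KafkaShivaUtil.writeAdminMap("platform-admin", adminProducer, declaration);
    }
}
```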
public static void logRcv(String who, org.apache.kafka.clients.consumer.ConsumerRecord<String,byte[]> record)

public static String getV2CommandRecordKey(String appName, int seqNum, int lastNum)
Parameters:
appName - the application name
seqNum - the current sequence number
lastNum - the last sequence number
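Because the default maximum Kafka record size is about 1 MB (MAX_RECORD_LENGTH), oversized start commands are split into chunks keyed with getV2CommandRecordKey and reassembled with getCompleteV2Record. The sketch below assumes some headroom is needed below MAX_RECORD_LENGTH and that getCompleteV2Record returns null until the last chunk has arrived; neither point is confirmed by this page. The topic name is illustrative and the KafkaShivaUtil import is assumed.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ChunkingSketch {

    // Sender side: split the payload and key each chunk with its sequence numbers.
    public static void sendLargeCommand(KafkaProducer<String, byte[]> producer,
                                        String appName, byte[] payload) {
        // Leave some headroom below MAX_RECORD_LENGTH for the key and record overhead (assumption).
        int chunkSize = KafkaShivaUtil.MAX_RECORD_LENGTH - 1024;
        int lastNum = (payload.length - 1) / chunkSize;
        for (int seqNum = 0; seqNum <= lastNum; seqNum++) {
            byte[] chunk = Arrays.copyOfRange(payload, seqNum * chunkSize,
                    Math.min(payload.length, (seqNum + 1) * chunkSize));
            String key = KafkaShivaUtil.getV2CommandRecordKey(appName, seqNum, lastNum);
            KafkaShivaUtil.send("leader", producer,
                    new ProducerRecord<>("platform-admin", key, chunk));
        }
    }

    // Receiver side: accumulate chunks per application key until the payload is complete.
    private final Map<String, List<byte[]>> context = new HashMap<>();

    public byte[] onChunk(String key, byte[] chunk) {
        // Assumed to return null while chunks are still missing.
        return KafkaShivaUtil.getCompleteV2Record(key, chunk, context);
    }
}
```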
public static org.apache.kafka.clients.producer.RecordMetadata send(String who, org.apache.kafka.clients.producer.KafkaProducer<String,byte[]> producer, org.apache.kafka.clients.producer.ProducerRecord<String,byte[]> record)
Parameters:
who - leader or worker, for tracing only
producer - the producer to use
record - the record to send
public static org.apache.kafka.clients.producer.KafkaProducer<String,byte[]> getProducer(String bootstrapServers, String clientId)
Parameters:
bootstrapServers - the bootstrap.servers address
clientId - the client id

public static org.apache.kafka.clients.producer.KafkaProducer<String,byte[]> getProducer(String bootstrapServers, String clientId, org.thales.punch.platform.api.pojo.JavaStoreSecurity securityStores)
Parameters:
bootstrapServers - the bootstrap.servers address
clientId - the client id
securityStores - the object containing the truststore and keystore information. If not null, SSL is activated.
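A short sketch of the secured variant, assuming a JavaStoreSecurity instance has already been built from the platform security settings (how to build one is not covered on this page). The client id is illustrative and the KafkaShivaUtil import is assumed.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.thales.punch.platform.api.pojo.JavaStoreSecurity;

public class SecureProducerSketch {
    public static KafkaProducer<String, byte[]> secureProducer(String bootstrapServers,
                                                               JavaStoreSecurity securityStores) {
        // A non-null JavaStoreSecurity (truststore and keystore information) activates SSL.
        return KafkaShivaUtil.getProducer(bootstrapServers, "punch-admin-client", securityStores);
    }
}
```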
public static KafkaContinuousConsumerBuilder<String,byte[]> getContinuousConsumerBuilder(String bootstrapServers, String groupId, String... topics)
This helper only sets the common characteristics and enforces a consistent set of properties shared among all participants: the clients and the shiva daemons.
Parameters:
bootstrapServers - the kafka bootstrap server property
groupId - the groupId to use
topics - the topic(s) to listen to
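A minimal sketch of obtaining such a builder. The group id and topic are illustrative, the imports of KafkaShivaUtil and KafkaContinuousConsumerBuilder (packages not shown on this page) are assumed, and the builder's own API (building and polling the consumer) is not covered here; received records can be traced with logRcv.

```java
public class AdminConsumerSketch {
    public static KafkaContinuousConsumerBuilder<String, byte[]> builderFor(String bootstrapServers) {
        // Returns a builder already carrying the consistent set of properties
        // (group id, offset strategy, serialization) enforced by the protocol.
        return KafkaShivaUtil.getContinuousConsumerBuilder(bootstrapServers,
                "shiva-worker-group", "platform-admin");
    }
}
```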
public static void writeAdminMap(String topic, org.apache.kafka.clients.producer.KafkaProducer<String,byte[]> producer, org.thales.punch.settings.api.ISettingsMap map)
Parameters:
producer - the admin producer
map - the settings map
public static byte[] getCompleteV2Record(String key, byte[] chunk, Map<String,List<byte[]>> map)
Parameters:
key - the application key
chunk - the new chunk
map - the caller context map where chunks are stored using the application key.

Copyright © 2014–2023. All rights reserved.