6.2 to 6.3 Upgrade Notes¶
This document explains what configuration changes have to be performed during a PunchPlatform update from version 6.2.x to 6.3.0, and other changes you need to know about (e.g. command-line changes).
Configurations¶
Platform reporters have been moved to the operator section¶
Reporters used by an operator must now be declared in the `punchplatform_operator` section of `punchplatform-deployment.settings`, and no longer in the `platform` section.
To migrate your platform from a 6.2.x release to a 6.3.x release, move the platform reporters parameter as follows:
```json
{
  "platform": {
    "platform_id": "punchplatform-primary",
    "setups_root": "/data/opt",
    "remote_data_root_directory": "/data",
    "remote_logs_root_directory": "/var/log/punchplatform",
    "punchplatform_daemons_user": "vagrant",
    "punchplatform_group": "vagrant",
    "binaries_version": "punch-binaries-6.3.0-SNAPSHOT"
  },
  "punchplatform_operator": {
    "punchplatform_operator_environment_version": "punch-operator-6.3.4-PATCH1",
    "configuration_name_dir_from_home": "pp-conf",
    "reporters": [
      "myreporter"
    ],
    ...
  }
}
```
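For reference, a 6.2.x configuration declared the same reporters under the `platform` section. A sketch of the old layout, with unrelated keys elided:

```json
{
  "platform": {
    "platform_id": "punchplatform-primary",
    "reporters": [
      "myreporter"
    ]
  }
}
```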
Elasticsearch input and output nodes security¶
The `elasticsearch_input` and `elasticsearch_output` nodes now accept `credentials` and SSL settings:
```json
{
  "type": "elasticsearch_input",
  "settings": {
    "cluster_id": "es_search",
    "http_hosts": [
      {
        "host": "localhost",
        "port": 9200
      }
    ],
    "credentials": {
      "user": "bob",
      "password": "bob_secret"
    },
    "ssl": true,
    "ssl_keystore_location": "/data/certs/keystore.jks",
    "ssl_truststore_location": "/data/certs/truststore.jks",
    "ssl_keystore_pass": "keystore_secret",
    "ssl_truststore_pass": "truststore_secret",
    "per_stream_settings": [
      ...
    ]
  },
  "storm_settings": {
    ...
  }
}
```
```json
{
  "type": "elasticsearch_output",
  "settings": {
    "cluster_id": "es_search",
    "http_hosts": [
      {
        "host": "localhost",
        "port": 9200
      }
    ],
    "credentials": {
      "user": "bob",
      "password": "bob_secret"
    },
    "ssl": true,
    "ssl_keystore_location": "/data/certs/keystore.jks",
    "ssl_truststore_location": "/data/certs/truststore.jks",
    "ssl_keystore_pass": "keystore_secret",
    "ssl_truststore_pass": "truststore_secret",
    "per_stream_settings": [
      ...
    ]
  },
  "storm_settings": {
    ...
  }
}
```
Platform health monitoring¶
Platform monitoring is now able to monitor Gateway, Zookeeper, Kafka, and Elasticsearch services protected with SSL.
```json
{
  "monitoring_interval": 60,
  "services": [
    "kafka",
    "shiva",
    "storm",
    "zookeeper",
    "elasticsearch",
    "clickhouse",
    "spark",
    "minio",
    "gateway"
  ],
  "security": {
    "elasticsearch_clients": {
      "es_search": {
        "credentials": {
          "username": "USER",
          "password": "PASSWORD"
        },
        "ssl_enabled": true,
        "ssl_private_key": "private_key.pem",
        "ssl_certificate": "cert.pem",
        "ssl_trusted_certificate": "ca.pem"
      }
    },
    "gateway_clients": {
      "common": {
        "ssl_enabled": true,
        "ssl_truststore_location": "truststore.jks",
        "ssl_truststore_pass": "PASSWORD"
      }
    },
    "zookeeper_clients": {
      "common": {
        "ssl_enabled": true,
        "ssl_truststore_location": "truststore.jks",
        "ssl_truststore_pass": "PASSWORD"
      }
    },
    "kafka_clients": {
      "common": {
        "ssl_enabled": true,
        "ssl_truststore_location": "truststore.jks",
        "ssl_truststore_pass": "PASSWORD"
      }
    }
  }
}
```
Clickhouse/PostgreSQL/MapD Output¶
The `host` and `port` fields are replaced by a single `hosts` field, so that multiple hosts can be listed in case a node is down.
```json
{
  "type": "clickhouse_output",
  "settings": {
    "hosts": [
      "localhost:9000"
    ],
    "username": "default",
    "password": "",
    "database": "default",
    "table": "tests",
    "bulk_size": 3,
    "column_names": [
      "arr_timestamp:cast",
      "dep_timestamp:cast",
      "uniquecarrier:cast"
    ]
  }
}
```
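Clients of such a `hosts` list typically try each entry in turn until one answers. A minimal sketch in Python (the helper names are illustrative, not part of the punch API):

```python
def parse_host(entry):
    """Split a "host:port" entry from the hosts list into its parts."""
    host, _, port = entry.rpartition(":")
    return host, int(port)

def first_reachable(hosts, is_up):
    """Return the first entry whose probe succeeds, emulating the
    failover the multi-host setting enables when one node is down.
    `is_up` is any reachability check supplied by the caller."""
    for entry in hosts:
        if is_up(entry):
            return entry
    raise ConnectionError("no reachable host in %r" % (hosts,))

hosts = ["localhost:9000", "backup-node:9000"]
print(parse_host(hosts[0]))  # ('localhost', 9000)
```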
Spark and Storm public API¶
Spark¶
**OLD**

```java
@Override
public void declare(IDeclarer declarer) {
    declarer.publishMap(new TypeReference<String>() {});
}
```

**NEW**

```java
@Override
public void declare(SparkNodePubSub declarer) {
    declarer.publishMap(new TypeReference<String>() {});
}
```
Storm¶
You need to implement the method below:
```java
@Override
public void declare(StormNodePubSub declarer) {
    declarer.publishMap(new TypeReference<String>() {});
}
```
Storm Java memory assignment¶
If you want to be fully compliant with the new ResourceScheduler, you must change the following settings:
punchplatform-deployment.settings:¶
`workers_childopts` must not set `-Xmx` or `-Xms` values. You can delete this setting if needed (cf. storm_section).
```json
{
  ...
  "storm": {
    "clusters": {
      "myStormCluster": {
        "workers_childopts": "any childopts you want except -Xmx and -Xms"
      }
    }
  },
  ...
}
```
storm_punchline¶
Java memory must be configured following this note: JVM memory setting.
```json
{
  "settings": {
    ...
    "topology.component.resources.onheap.memory.mb": 128,
    ...
  }
}
```
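The intent is that the worker JVM heap now follows the per-component on-heap declarations instead of a fixed `-Xmx` in `workers_childopts`. A rough illustration of the arithmetic (component names and sizes are hypothetical):

```python
# Hypothetical per-component on-heap declarations, in MB, mirroring the
# "topology.component.resources.onheap.memory.mb" setting shown above.
onheap_mb = {
    "kafka_input": 128,
    "punchlet_node": 256,
    "elasticsearch_output": 128,
}

# A resource-aware scheduler sizes each worker from the components it
# hosts, so the heap is driven by the punchline configuration.
worker_heap_mb = sum(onheap_mb.values())
print(worker_heap_mb)  # 512
```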
Mapping¶
Gateway Mapping¶
The `target.type` field has been renamed to `event_type`.
Plugin Extraction¶
The file output format has changed when extracting in `csv`.
In older 6.X releases, there was only one column containing all the extracted fields. In 6.3, there is one column per field.
There are no headers and the default separator is `,`.
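A standard CSV reader consumes the new layout directly. A minimal Python sketch over one record in the new format (values borrowed from the extraction example):

```python
import csv
import io

# One record in the new 6.3 extraction format: no header line,
# one comma-separated column per extracted field.
raw = ("25685@machine1,machine1,user1,INFO,shiva_application,N/A,"
       "2021-03-01T15:01:44.334+01:00,production,"
       "org.thales.punch.applications.channels.monitoring.Main,327000000\n")

fields = next(csv.reader(io.StringIO(raw)))
print(len(fields))  # 10
print(fields[3])    # INFO
```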
**OLD**

```json
{ "init_process_id": "25685@machine1", "init_host_name": "machine1", "init_user_name": "user1", "content_level": "INFO", "init_process_name": "shiva_application", "content_args": "N/A", "es_ts": "2021-03-01T15:01:44.334+01:00", "platform_id": "production", "content_loggerName": "org.thales.punch.applications.channels.monitoring.Main", "content_instant_nanoOfSecond": "327000000" }
{ "init_process_id": "25685@machine1", "init_host_name": "machine1", "init_user_name": "user1", "content_level": "INFO", "init_process_name": "shiva_application", "content_args": "N/A", "es_ts": "2021-03-01T15:01:45.334+01:00", "platform_id": "production", "content_loggerName": "org.thales.punch.applications.channels.monitoring.Main", "content_instant_nanoOfSecond": "328000000" }
```
**NEW**
```csv
25685@machine1,machine1,user1,INFO,shiva_application,N/A,2021-03-01T15:01:44.334+01:00,production,org.thales.punch.applications.channels.monitoring.Main,327000000
25685@machine1,machine1,user1,INFO,shiva_application,N/A,2021-03-01T15:01:45.334+01:00,production,org.thales.punch.applications.channels.monitoring.Main,328000000
```