punchplatform-deployment.settings¶
Overview¶
The punchplatform-deployment.settings file is required to deploy a new production PunchPlatform. Declare in it how many Elasticsearch, Kafka, Spark, etc. nodes you need; the punchplatform deployer tool will in turn install everything where needed.
What you find in the punchplatform-deployment.settings file is:
- the names, versions, hosts, ports and URLS of all inner services (e.g. Storm, Zookeeper, ...)
- the folders where to store software programs, data and logs
- the unix users in charge of running services or executing administration actions
- some key configuration parameters (e.g. number of Storm workers, jvm xmx, ldap credentials, ...)
This file is required by the PunchPlatform deployer to generate a complete ansible inventory, in turn used to fully deploy your platform.
Important
In this file, when node names are provided (as server lists or dictionary keys in various clusters), the provided host names will be used to reach the machines to deploy from the deployment environment, and must therefore be resolvable and reachable from this environment.
When no other specific setting exists to indicate the network interface on which the services will be bound, the node hostnames may also be used by the cluster frameworks to communicate with each other; they should therefore resolve to the production interface from these machines, to avoid production data flows going through administration networks.
Location¶
The punchplatform-deployment.settings configuration file must be located in a platforms/<platformName> sub-folder of your deployment configuration directory, where platformName is typically 'production'. A symbolic link named punchplatform-deployment.settings must next be set from the configuration root folder. Remember you have a PUNCHPLATFORM_CONF_DIR environment variable defining that location.
When using the PunchPlatform command-line tools, the PunchPlatform configuration root folder must be provided using the PUNCHPLATFORM_CONF_DIR environment variable. That is, it must look like this:
> $PUNCHPLATFORM_CONF_DIR
├── punchplatform-deployment.settings -> platform/singlenode/punchplatform-deployment.settings
└── platform
└── singlenode
└── punchplatform-deployment.settings
Note
In order to deploy a new platform, remember you start by creating a configuration folder on your deployer host. You must then set the PUNCHPLATFORM_CONF_DIR environment variable to point to that directory. That variable is expected to be correctly set by the deployer and platform command line tools. Refer to the manual pages.
The reason to use a symbolic link is to let you later on switch from one platform to another while keeping the same tenant and channels configuration. It is extremely convenient to test your channels on a secondary test platform, and apply it later to your production platform.
After the deployment completes, some of your target servers, the ones acting as administration servers, will be equipped with similar configuration folders. The PUNCHPLATFORM_CONF_DIR environment variable will be set as well on these servers. These folders, usually located under /opt/soc_conf or /data/soc_conf, are actually git clones of a central git repository, and will be used at runtime by the platform to start and/or monitor your channels. All that is set up for you by the deployer. For now keep in mind that you are only defining the folders and files needed for deployment.
Conventions¶
Best practices
-
File format
This file is a JSON file, in which you are free to add # prefixed comments (this is not standard JSON, though).
Do not forget to add the initial brackets ({}) and pay attention to the commas (,) at the end of each properties block and at the end of the file. You can test your JSON syntax using: sed 's/#.*//g' punchplatform-deployment.settings | jq .
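The same comment-stripping check can be sketched in Python. This is a hypothetical helper, not a punch tool; like the documented sed pipeline, it also strips any '#' appearing inside strings:

```python
import json
import re

def check_deployment_settings(text: str) -> dict:
    """Strip '#' comments (non-standard JSON) then parse,
    mimicking the sed 's/#.*//g' | jq . check."""
    stripped = re.sub(r"#.*", "", text)
    return json.loads(stripped)

sample = """
{
  # comment stripped before parsing
  "platform": { "platform_id": "punchplatform-primary" }
}
"""
settings = check_deployment_settings(sample)
print(settings["platform"]["platform_id"])
```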
-
Value type
Remember that in JSON, surrounding a number with quotes changes its type from Number to String.
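A quick illustration of the type difference, using Python's standard json parser:

```python
import json

# Quoting a number in JSON changes its type from Number to String:
unquoted = json.loads('{"port": 9200}')["port"]
quoted = json.loads('{"port": "9200"}')["port"]
assert type(unquoted) is int
assert type(quoted) is str
```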
-
Avoid encoding issues
To avoid any encoding issue, use only upper/lower case non-accented alphanumeric characters for all your ids, hostnames, cluster names, tags and so on.
Documentation conventions
Each component provides an example configuration that needs to be adapted to your environment.
The list of parameters follows, in this format:
- name.of.parameter: Type. Default value: DefaultValueIfExists. Description of the parameter.
If the Type is in bold, the parameter is mandatory when this settings section is present.
Hostname resolution
The hostname value used to configure a component (e.g. a zookeeper or elasticsearch node name) must EXACTLY match the result of executing the hostname command on the corresponding server.
Content¶
Each section that follows describes one part of the punchplatform-deployment.settings file.
SSL/Secrets configuration¶
For each specific section in the configuration, a separate subsection is documented for TLS/Secrets specific settings and example.
Important
Please read the TLS Security deployment/configuration principles before referring to any individual TLS setting in this documentation, as it may clarify the use of many settings and also help reduce the number of unnecessary settings (due to [default/supplementing lookup rules](../Security/Security_deployment.md#secretscredentialscertificates_files_local_lookup_and_upload_by_the_deployer)).
Platform¶
Mandatory section
Each platform is associated with a few configuration properties. These are grouped in a dictionary section. In particular each platform is thus uniquely identified. This identifier appears in turn in metrics tags, typically forwarded from one platform to another.
This section also defines keys location and users to be setup on all your target servers.
{
"platform": {
"platform_id": "punchplatform-primary",
"setups_root": "/opt",
"remote_data_root_directory": "/data",
"remote_logs_root_directory": "/var/log/punchplatform",
"punchplatform_daemons_user": "punchplatform",
"punchplatform_group": "punchplatform",
"binaries_version": "punchplatform-binaries-6.4.5",
"systemd_keep_free_memory": "2G"
}
}
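The mandatory keys of this section can be checked up front. The following pre-flight check is a hypothetical sketch based on the parameter list in this documentation, not a deployer feature:

```python
import json

# Mandatory keys of the 'platform' section, as listed in this documentation.
MANDATORY_PLATFORM_KEYS = {
    "platform_id", "binaries_version", "setups_root",
    "remote_data_root_directory", "remote_logs_root_directory",
    "punchplatform_daemons_user", "punchplatform_group",
}

def missing_platform_keys(settings: dict) -> set:
    """Return the mandatory platform keys absent from the settings."""
    return MANDATORY_PLATFORM_KEYS - settings.get("platform", {}).keys()

settings = json.loads("""{ "platform": {
    "platform_id": "punchplatform-primary",
    "setups_root": "/opt",
    "remote_data_root_directory": "/data",
    "remote_logs_root_directory": "/var/log/punchplatform",
    "punchplatform_daemons_user": "punchplatform",
    "punchplatform_group": "punchplatform",
    "binaries_version": "punchplatform-binaries-6.4.5"
}}""")
print(missing_platform_keys(settings))  # empty set: all mandatory keys present
```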
platform_id
String
MANDATORY
The unique platform identifier. This identifier must be globally unique.
binaries_version
String
MANDATORY
Version of the Punchplatform binaries package.
setups_root
String
MANDATORY
Root folder where all software packages will be installed on the target machines. It must match the installation dirs in the punchplatform.properties configuration file.
remote_data_root_directory
String
MANDATORY
The root data directory. This folder will contain elasticsearch, zookeeper, kafka, etc. data. It must be mounted on a partition with enough disk capacity.
remote_logs_root_directory
String
MANDATORY
The root log folder.
punchplatform_daemons_user
String
MANDATORY
The unix daemon user in charge of running the various platform services. This user is non-interactive, and will not be granted a home directory.
punchplatform_group
String
MANDATORY
The user group associated with all users (daemons or operators) set up on your servers.
punchplatform_conf_repo_branch
String
OPTIONAL
By default, the deployer clones a default git branch of your configuration repository on the servers defined with a monitoring or administration role, i.e. a role that requires a configuration folder to be installed. Use this property to clone another branch.
systemd_keep_free_memory
String
OPTIONAL
Default: None.
Controls how much disk space systemd-journald shall leave free for other uses.
Specify values in bytes or use the K, M, G, T suffixes.
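As a quick illustration of how the suffixed values translate to bytes, here is a hypothetical converter (systemd itself performs this parsing; the 1024 base reflects systemd's size parsing convention):

```python
# Map of systemd-style size suffixes to byte multipliers (1024-based).
SUFFIXES = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def to_bytes(value: str) -> int:
    """Convert a value like '2G' or '4096' to a byte count."""
    if value[-1] in SUFFIXES:
        return int(value[:-1]) * SUFFIXES[value[-1]]
    return int(value)

assert to_bytes("2G") == 2 * 1024**3  # the "2G" from the example above
assert to_bytes("4096") == 4096      # plain byte counts also accepted
```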
SSL/TLS and secrets
Foreword on all SSL examples
Here is an example of the platform section with additional settings when SSL has been enabled on Punch-deployed framework components (Zookeeper, Kafka, Elasticsearch/Kibana+Opendistro security, Gateway).
As for other SSL configuration examples in this documentation, the secrets variable paths (e.g. @{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.truststore_pwd} or @{DEPLOYMENT_SECRETS.ptf.truststore_pwd}) depend on the content of the JSON secrets file you are providing with your secrets information.
Here, the deployment_secrets.json file may for example contain very few actual secrets used at deployment time:
{
"deployment_secrets":{
"ptf": {
"truststore_pwd": "hellotruststore",
"keystore_pwd": "hellokeystore"
},
"metricbeat": {
"es_user": "metricbeat",
"es_password": "m3tr!cBEaT"
}
}
}
}
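The mapping between a variable such as @{DEPLOYMENT_SECRETS.ptf.truststore_pwd} and this file can be illustrated as follows. The actual resolution is performed by the deployer; this sketch only shows how the dotted path walks the deployment_secrets JSON tree:

```python
import json
import re

secrets = json.loads("""{
  "deployment_secrets": {
    "ptf": { "truststore_pwd": "hellotruststore", "keystore_pwd": "hellokeystore" }
  }
}""")

def resolve(variable: str, root: dict) -> str:
    """Walk the dotted path of a @{DEPLOYMENT_SECRETS.x.y} variable."""
    path = re.fullmatch(r"@\{DEPLOYMENT_SECRETS\.(.+)\}", variable).group(1)
    node = root["deployment_secrets"]
    for key in path.split("."):
        node = node[key]
    return node

print(resolve("@{DEPLOYMENT_SECRETS.ptf.truststore_pwd}", secrets))  # hellotruststore
```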
Note also that for simplicity's sake, very few secrets and distinct certificates are used in the examples. This represents a simple one-certificate-per-server logic, with a not-really-secret password used to access the keystore/truststore. This is because the punch java code requires a password for any jks, even though these files are entrusted to the linux security to prevent unwanted access.
The SSL settings of the platform section are twofold:
-
A set of platform-wide information about the global certificate authority, which must be provided both as a public certificate (platform_ca_name) and as a java JKS truststore (platform_truststore_name) protected by a password (platform_truststore_password). This authority can be overridden later if needed for each service, but will otherwise be used as a default in many places.
-
A configuration of CLIENT credentials for punchplatform command-line tools that will be used at runtime in various contexts (operator command line, shiva-hosted apps/punchlines, gateway-launched tasks). All these clients can rely on RUNTIME secrets files (provided through settings of the shiva/operator/gateway sections) to avoid having passwords inside the deployment settings or other files. If some RUNTIME secret is common to the whole platform, it CAN be provided by the optional platform_local_common_secrets_filename file. You must ensure that all RUNTIME secrets referenced in this section are actually provided in the secrets json files that you configure in these shiva/operator/gateway sections; otherwise the associated environments and daemons (shiva/gateway/operator commands) will not start, as they will not be able to locate this runtime information (this will be a runtime 'resolving' error).
{
"platform": {
"platform_id": "punchplatform-training-central",
"setups_root": "/sw",
"remote_data_root_directory": "/dta",
"remote_logs_root_directory": "/var/log/pplogs",
"punchplatform_daemons_user": "punchdaemon",
"punchplatform_group": "punchgroup",
"binaries_version": "punch-binaries-6.4.5",
"platform_local_credentials_dir": "xcerts",
"platform_ca_name": "fullchain.crt",
"platform_truststore_name": "truststore.jks",
"platform_truststore_password": "@{DEPLOYMENT_SECRETS.ptf.truststore_pwd}",
"platform_local_common_secrets_filename": "common_runtime_secrets.json",
"punch_commands": {
"security": {
"kafka_clients": {
"front": {
"ssl_enabled": true,
"ssl_truststore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/truststore.jks",
"ssl_truststore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.truststore_pwd}",
"ssl_keystore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/server.jks",
"ssl_keystore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.keystore_pwd}"
},
"back": {
"ssl_enabled": true,
"ssl_truststore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/truststore.jks",
"ssl_truststore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.truststore_pwd}",
"ssl_keystore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/server.jks",
"ssl_keystore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.keystore_pwd}"
}
},
"elasticsearch_clients": {
"es_data": {
"ssl_enabled": true,
"ssl_truststore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/truststore.jks",
"ssl_truststore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.truststore_pwd}",
"ssl_keystore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/server.jks",
"ssl_keystore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.keystore_pwd}",
"credentials": {
"username": "@{PUNCHPLATFORM_RUNTIME_SECRETS.es.user}",
"password": "@{PUNCHPLATFORM_RUNTIME_SECRETS.es.password}"
}
},
"es_monitoring": {
"ssl_enabled": true,
"ssl_truststore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/truststore.jks",
"ssl_truststore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.truststore_pwd}",
"ssl_keystore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/server.jks",
"ssl_keystore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.keystore_pwd}",
"credentials": {
"username": "@{PUNCHPLATFORM_RUNTIME_SECRETS.es.user}",
"password": "@{PUNCHPLATFORM_RUNTIME_SECRETS.es.password}"
}
}
},
"zookeeper_clients": {
"zkf": {
"ssl_enabled": true,
"ssl_truststore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/truststore.jks",
"ssl_truststore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.truststore_pwd}",
"ssl_keystore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/server.jks",
"ssl_keystore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.keystore_pwd}"
},
"zkm": {
"ssl_enabled": true,
"ssl_truststore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/truststore.jks",
"ssl_truststore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.truststore_pwd}",
"ssl_keystore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/server.jks",
"ssl_keystore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.keystore_pwd}"
}
},
"gateway_clients": {
"mycluster": {
"ssl_enabled": true,
"ssl_truststore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/truststore.jks",
"ssl_truststore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.truststore_pwd}",
"ssl_keystore_location": "@{PUNCHPLATFORM_SECRETS_DIR}/server.jks",
"ssl_keystore_pass": "@{PUNCHPLATFORM_RUNTIME_SECRETS.cmn.keystore_pwd}"
}
},
"kibana_clients": {
"data-admin": {
"ssl_enabled": true,
"ssl_client_certificate_authority": "@{PUNCHPLATFORM_SECRETS_DIR}/fullchain.crt",
"ssl_client_certificate": "@{PUNCHPLATFORM_SECRETS_DIR}/server.crt",
"ssl_client_private_key": "@{PUNCHPLATFORM_SECRETS_DIR}/server.pem",
"credentials": {
"username": "@{PUNCHPLATFORM_RUNTIME_SECRETS.es.user}",
"password": "@{PUNCHPLATFORM_RUNTIME_SECRETS.es.password}"
}
},
"monitoring-admin": {
"ssl_enabled": true,
"ssl_client_certificate_authority": "@{PUNCHPLATFORM_SECRETS_DIR}/fullchain.crt",
"ssl_client_certificate": "@{PUNCHPLATFORM_SECRETS_DIR}/server.crt",
"ssl_client_private_key": "@{PUNCHPLATFORM_SECRETS_DIR}/server.pem",
"credentials": {
"username": "@{PUNCHPLATFORM_RUNTIME_SECRETS.es.user}",
"password": "@{PUNCHPLATFORM_RUNTIME_SECRETS.es.password}"
}
}
}
}
}
}
}
platform_local_credentials_dir
String
Mandatory if SSL is enabled, unless this configuration is overridden in all components' configurations that use SSL.
Default: None.
The local path of a directory located on the deployer's machine and containing all the platform credentials (i.e. certs, keys, ca, secrets files...).
The path is considered relative when the location value does not start with '/'. In this case, the path is taken from $PUNCHPLATFORM_CONF_DIR. Otherwise, the path can be provided as absolute.
Every key or keystore name, configured inside a Punch component section, will be searched for as:
1. A matching certificate name inside the provided folder
2. If not found, a matching certificate inside a subfolder named after the hostname where the component will be deployed.
Example: if Kafka is configured to be deployed on a host named 'node01' and does not override this setting, the credentials files will be looked up inside <platform_local_credentials_dir>/ then inside <platform_local_credentials_dir>/node01/.
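This two-step lookup can be sketched as follows. This is an illustration of the documented rule, not the deployer's actual code:

```python
import tempfile
from pathlib import Path

def find_credential(creds_dir, hostname, name):
    """Documented lookup order: <dir>/<name>, then <dir>/<hostname>/<name>."""
    for candidate in (creds_dir / name, creds_dir / hostname / name):
        if candidate.is_file():
            return candidate
    return None

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "node01").mkdir()
    (root / "truststore.jks").write_text("shared truststore")
    (root / "node01" / "server.jks").write_text("per-host keystore")
    # Shared file found at the top level; host-specific file in the subfolder.
    assert find_credential(root, "node01", "truststore.jks") == root / "truststore.jks"
    assert find_credential(root, "node01", "server.jks") == root / "node01" / "server.jks"
```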
platform_ca_name
String
Mandatory if SSL is enabled, unless this configuration is overridden in all components' configurations that use SSL.
Default: None.
Name of the CA file for the platform's trusted certificates, located inside the platform_local_credentials_dir.
This CA file will be used by every Punch component activating SSL connections in its configuration.
However, each Punch component may override this configuration to provide its own local path to the certificates directory.
The name cannot contain '/' chars.
platform_truststore_name
String
Mandatory if SSL is enabled, unless this configuration is overridden in all components' configurations that use SSL.
Default: None.
Name of the truststore file for the platform's trusted certificates, located inside the platform_local_credentials_dir.
This truststore will be used by every Punch component activating SSL connections in its configuration. It contains the certificates of the endpoints to trust at platform level, with TLS.
However, each Punch component may override this configuration to provide its own local path to the certificates directory.
MUST be in jks format.
The name cannot contain '/' chars.
platform_truststore_password
String
Mandatory if SSL is enabled, unless this configuration is overridden in all components' configurations that use SSL.
Default: None.
Password of the platform_truststore_name located inside the platform_local_credentials_dir.
Each Punch component may override this configuration to provide its own password if it uses its own truststore.
platform_local_common_secrets_filename
String
OPTIONAL
Default: None.
The platform secrets file which contains all common platform secrets for runtime (for example credentials for elasticsearch metric reporters).
This file will be deployed in the targets' $HOME/.secrets directory.
However, you can provide an additional secrets file on targets to provide specific secrets (for example, credentials for a specific operator). The name cannot contain '/' chars.
Info
<Client> can be kafka_clients, gateway_clients, elasticsearch_clients, kibana_clients or zookeeper_clients.
Please note that these sections must be provided only if you configured the associated component with SSL.
punch_commands.security.<Client>.<cluster_id>.ssl_enabled
boolean
OPTIONAL
Default
False
.
Activates the SSL connection to your component for punch commands (i.e. punchctl, planctl, platformctl).
Info
<JavaStoreClient> can be kafka_clients, gateway_clients, elasticsearch_clients or zookeeper_clients.
Please note that these sections must be provided only if you configured the associated component with SSL.
punch_commands.security.<JavaStoreClient>.<cluster_id>.ssl_truststore_location
String
OPTIONAL
Default
None
.
When SSL is enabled for your component, you may provide a path to a truststore location on the targeted operator machine. As punch commands can be called on operators, gateway and shiva targets, this path must be the same on all these targets.
punch_commands.security.<JavaStoreClient>.<cluster_id>.ssl_truststore_pass
String
OPTIONAL
Default
None
.
When SSL is enabled for your component, you must provide your truststore password.
Info
<JavaCertClient> can be elasticsearch_clients or kibana_clients.
Please note that these sections must be provided only if you configured the associated component with SSL.
punch_commands.security.<JavaCertClient>.<cluster_id>.ssl_client_certificate_authority
String
OPTIONAL
Default
None
.
When SSL is enabled for your component, you must provide the path to the CA file location on targets. As punch commands can be called on operator, gateway and shiva targets, this path must be the same on all these targets.
punch_commands.security.<JavaCertClient>.<cluster_id>.ssl_client_certificate
String
OPTIONAL
Default
None
.
When SSL is enabled for your component, you must provide the path to the certificate file location on targets. As punch commands can be called on operator, gateway and shiva targets, this path must be the same on all these targets.
punch_commands.security.<JavaCertClient>.<cluster_id>.ssl_client_private_key
String
OPTIONAL
Default
None
.
When SSL is enabled for your component, you must provide the path to the private key file location on targets. As punch commands can be called on operator, gateway and shiva targets, this path must be the same on all these targets.
punch_commands.security.<JavaCertClient>.<cluster_id>.credentials.username
String
OPTIONAL
Default
None
.
You can provide a username for punch commands on targets to communicate with your component. It can be provided even if SSL is disabled (i.e. with only opendistro deployed).
punch_commands.security.<JavaCertClient>.<cluster_id>.credentials.password
String
OPTIONAL
Default
None
.
You can provide a password for punch commands on targets to communicate with your component. It can be provided even if SSL is disabled (i.e. with only opendistro deployed).
Ansible Inventory Settings¶
When you use the punch deployer tool, it may be necessary to provide additional ansible parameters required by your specific environment. This is the purpose of this section.
{
"ansible": {
"ansible_inventory_settings": "[punchplatform_cluster:vars] \nansible_ssh_port=8022"
}
}
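Note that the value is a single string: the embedded "\n" escapes presumably become separate lines in the generated inventory, as sketched below (an illustration of the JSON string's content, not of the deployer itself):

```python
import json

# The embedded \n in the JSON string splits into two inventory lines.
settings = json.loads(
    '{"ansible": {"ansible_inventory_settings": '
    '"[punchplatform_cluster:vars] \\nansible_ssh_port=8022"}}'
)
for line in settings["ansible"]["ansible_inventory_settings"].split("\n"):
    print(line.strip())
```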
ansible_inventory_settings
Optional. These settings can be used to define additional settings for the ansible deployment, for instance ansible_ssh_port, ansible_ssh_user, etc.
Reporters¶
This section lets you define connectors (called reporters) used by multiple punch components to send important traces, logs and monitoring metrics. Because these connectors often have the same configuration for multiple uses in a platform, they are defined in this dedicated section. These reporters can then be referred to in the following settings sections, using their id (i.e. the key in this 'reporters' dictionary):
- punchplatform_operator section: every start/stop command will be traced, along with useful information.
- shiva section: the Shiva service will also log its own actions and redirect its child jobs' logs.
- gateway section: as for shiva, the gateway will log its internal information.
{
"reporters": {
"central_reporter": {
"type": "kafka",
"bootstrap.servers": [
"node02:9092"
],
"topic": "reporter-topic",
"metric_document_field_name": "log",
"reporting_interval": 30,
"encoding": "lumberjack"
},
"debug_reporter": {
"type": "elasticsearch",
"cluster_name": "es_search"
}
}
}
<reporterId>
Reporter
Describes a specific reporter configuration. Use the reporter id to reference it from other components (Shiva, Platform...). To see all the available reporters, please refer to the dedicated reporters section.
PunchPlatform Operator¶
This section drives the deployment of the tools used to operate/administrate the platform, either by a human or through a Punchplatform Gateway Web API (for some operations, e.g. editing resources through the Kibana Punchplatform plugin). The command-line operation environment can be deployed on an administration server, or on an operator workstation.
{
"punchplatform_operator": {
"punchplatform_operator_environment_version": "punch-operator-6.1.0",
"configuration_name_dir_from_home": "pp-conf",
"operators_username": [
"admin1",
"admin2"
],
"servers": {
"node01": {
}
},
"resource_manager": {
"metadata": {
"type": "elasticsearch",
"es_cluster": "common",
"index": "resources-metadata"
},
"data": {
"type": "file",
"root_path": "/data"
}
},
"storage": {
"type": "kafka",
"kafka_cluster": "local"
},
"reporters": [
"myreporter"
]
}
}
Important
We strongly recommend using a git repository to keep your PunchPlatform configuration safe. Take a look at the git_settings section.
configuration_name_dir_from_home
Mandatory
Name of the directory which contains the tenants configuration
operators_username
Optional
In addition to
punchplatform_admin_user
, all custom users used to administrate the PunchPlatform
servers
Mandatory
Comma separated array describing the servers used by operators to administrate the punchplatform. Usually, these servers are workstations.
punchplatform_version
Mandatory
Version of PunchPlatform
punchplatform_operator_environment_version
Mandatory
Version of the punchplatform operator environment. To start/stop channels/jobs, the punchplatform operator needs several libraries and shell scripts. This operator environment package provides all the needed scripts and jars.
storage.type
Mandatory
Describes in which type of storage operator information will be stored: file (meaning data will be stored on the filesystem) or kafka.
storage.kafka_cluster
Mandatory (but only present when type is 'kafka'). Identifier of the kafka cluster in which the operator will store its internal management and synchronization data. This must be one of the keys of the kafka.clusters dictionary documented previously.
storage.kafka_ssl.truststore.location
Optional. Used only when type is 'kafka'.
A path to the SSL truststore needed to contact an SSL-protected Kafka cluster.
storage.kafka_ssl.truststore.password
Optional. Used only when type is 'kafka'.
Password of the SSL truststore.
resource_manager.metadata.type
MANDATORY
Describes in which type of storage resource metadata will be stored.
Supported type: elasticsearch.
reporters
String[]
MANDATORY
A list of reporters, referenced by id, used by the operator to report all operator events. Ids must be declared in the dedicated 'reporters' section. This setting permits reporting operator actions to punchplatform-operator-logs (or to the platform-events kafka topic), which is required for proper behaviour of channels monitoring and for auditing operator actions. For the elasticsearch reporter, logs will be sent to the "platform-logs-[yyyy.MM.dd]" index and the metrics to "platform-metrics-[yyyy.MM.dd]".
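Since these ids must match the 'reporters' dictionary, a cross-check can be sketched as follows (a hypothetical consistency check, not a deployer feature):

```python
# Every reporter id referenced by the operator section must exist in the
# top-level 'reporters' dictionary, otherwise the reference cannot resolve.
settings = {
    "reporters": {
        "central_reporter": {"type": "kafka"},
        "debug_reporter": {"type": "elasticsearch"},
    },
    "punchplatform_operator": {"reporters": ["central_reporter", "unknown_reporter"]},
}

declared = settings["reporters"].keys()
referenced = settings["punchplatform_operator"]["reporters"]
undeclared = [r for r in referenced if r not in declared]
print(undeclared)  # ['unknown_reporter'] reveals the dangling reference
```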
SSL/TLS and secrets
To interact with SSL-secured frameworks, an operator needs:
- A secrets file: a JSON file containing passwords and other custom secrets
- Credential files: TLS keys and other additional credential files
The secrets file and the additional credential files will be uploaded to /home/<operator username>/.secrets.
The secrets and credentials are configured following different levels of priority:
- During deployment: where to find the secrets and credentials files on the deployer host (locally).
- During runtime: which secrets are taken into consideration first, over other secrets.
The priorities are defined by the section where they are configured:
- User level: max priority over all other levels, defined by the punchplatform_operator.users.<userId> section.
- Server level: defined by the punchplatform_operator.servers.<serverId> section.
- Operators common level: defined by the punchplatform_operator section.
- Platform common level: min priority, defined by the platform section.
The environment of the operators will be set up to include the uploaded secrets files, based on the configured level of priority.
Note that the custom secrets file will be copied as user_secrets.json for each user, in which they can store their personal secrets (e.g. Elasticsearch user/password).
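The priority order (user > server > operators-common > platform) can be sketched as a simple merge, lowest priority first so that higher levels override. This is an illustration of the documented rule, not the punch implementation:

```python
def effective_secrets(platform, operators_common, server, user):
    """Merge secrets from lowest to highest priority level."""
    merged = {}
    for level in (platform, operators_common, server, user):
        merged.update(level)
    return merged

secrets = effective_secrets(
    platform={"truststore_pwd": "platform-default"},
    operators_common={"es_user": "operator"},
    server={"truststore_pwd": "server-specific"},
    user={"es_password": "personal"},
)
assert secrets["truststore_pwd"] == "server-specific"  # server level wins over platform
```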
How to deploy Operators-common secrets?
Configuration:
"punchplatform_operator": {
"punchplatform_operator_environment_version": "punch-operator-6.4.5",
"operators_username": [
"operator-1",
"operator-2"
],
"servers": {
"server1": {},
"server2": {}
},
"custom_additional_credentials_files" : [
"server.pem",
"server.crt",
"server.jks"
],
"custom_secrets_file" : "servers_secrets.json"
},
Files structure:
<platform.platform_local_credentials_dir>/
├── truststore.jks
├── server1/
│ ├── server.pem
│ ├── server.crt
│ ├── server.jks
│ └── servers_secrets.json
└── server2/
├── server.pem
├── server.crt
├── server.jks
└── servers_secrets.json
In the above configuration:
- The secrets and credentials files are configured in the punchplatform_operator section: they are common to all operators per server (level 3).
- Each operator will have the custom_secrets_file and the custom_additional_credentials_files in their /home/<user>/.secrets folder on each server.
- The custom_secrets_file and the custom_additional_credentials_files are locally searched for inside <platform.platform_local_credentials_dir>/server1 for server1 and <platform.platform_local_credentials_dir>/server2 for server2.
- The certificates may have the same name yet be different files, since the deployer locally looks up subfolders named after each server.
How to deploy user-specific secrets?
"punchplatform_operator": {
"punchplatform_operator_environment_version": "punch-operator-6.4.5",
"operators_username": [
"operator-1",
"operator-2"
],
"servers": {
"server1": {},
"server2": {}
},
"users": {
"operator-1": {
"local_credentials_dir": "server1/operator-1"
},
"operator-2": {
"local_credentials_dir": "server2/operator-2"
}
},
"custom_additional_credentials_files": [
"user.pem",
"user.crt",
"user.jks"
],
"custom_secrets_file": "operator_secrets.json"
},
Files structure:
<platform.platform_local_credentials_dir>/
├── truststore.jks
├── server1/
│ └── operator-1/
│ ├── user.pem
│ ├── user.crt
│ ├── user.jks
│ └── operator_secrets.json
└── server2/
└── operator-2/
├── user.pem
├── user.crt
├── user.jks
└── operator_secrets.json
In the above configuration:
- The local_credentials_dir is configured in each user section: the secrets are taken from the users' directories only (level 1).
- The secrets and credentials file names are common to all operators, but the files remain different because they are taken from different directories.
- Each operator will have the custom_secrets_file and the custom_additional_credentials_files in their /home/<user>/.secrets folder on each server.
Info
The custom_secrets_file will not be overwritten at (re)deployment.
Here are the SSL/secrets related settings:
servers.<server_id>.local_credentials_dir
: String
Optional
This will not supersede platform.platform_local_credentials_dir, but will add a priority directory that will be looked up before the platform-level one (if it exists). The local path of a directory located on the deployer's machine and containing some specific credentials for this server (i.e. certs, keys, ca, secrets files...).
The file structure inside this directory should be flat, since the security configurations will get the files by name.
users.<operator_id>.local_credentials_dir
: String
Optional
This will not supersede platform.platform_local_credentials_dir, but will add a priority directory that will be looked up before the platform-level and server-level/cluster-level ones (if they exist). The local path of a directory located on the deployer's machine and containing some specific credentials for this operator (i.e. certs, keys, ca, secrets files...).
The file structure inside this directory should be flat, since the security configurations will get the files by name.
servers.<server_id>.custom_additional_credentials_files
: String Array
Optional
Default: None.
Optional credentials files searched locally inside local_credentials_dir.
These files may be private keys, certificates, keystores or any file used by the user on this server during runtime.
They will be deployed on the targeted server inside the /home/{operator_username}/.secrets directory.
users.<operator_id>.custom_additional_credentials_files
: String Array
Optional
Default: None.
Optional credentials files searched locally inside local_credentials_dir.
These files may be private keys, certificates, keystores or any file used by the user during runtime.
They will be deployed on the targeted server inside the /home/{operator_username}/.secrets directory.
users.<operator_id>.custom_secrets_file
: String
Optional
Default: None.
Optional secrets file searched locally inside local_credentials_dir.
A JSON file containing secrets used by the user during runtime.
It will be deployed on the targeted server inside the /home/{operator_username}/.secrets directory and named user_secrets.json (forced).
servers.<server_id>.custom_secrets_file
: String
Optional
Default: None.
Optional secrets file searched locally inside local_credentials_dir.
A JSON file containing secrets used by the user during runtime.
It will be deployed on the targeted server inside the /home/{operator_username}/.secrets directory and named user_secrets.json (forced).
Zookeeper¶
ZooKeeper is a distributed coordination service used by Storm and Kafka. It is not used directly by PunchPlatform components. It exposes its service to client applications as a distributed filesystem.
{
"zookeeper": {
"zookeeper_version": "apache-zookeeper-3.5.5-bin",
"zookeeper_nodes_production_interface": "eth0",
"zookeeper_childopts": "-server -Xmx256m -Xms256m",
"clusters": {
"common": {
"hosts": [
"node01",
"node02",
"node03"
],
"cluster_port": 2181,
"punchplatform_root_node": "/punchplatform-primary"
}
}
}
}
zookeeper_version
: String
MANDATORY
Zookeeper version.
zookeeper_nodes_production_interface
: String
MANDATORY
Zookeeper production network interface.
zookeeper_childopts
: String
MANDATORY
JVM options for Zookeeper default "-server -Xmx1024m -Xms1024m".
clusters.<clusterId>
: String
MANDATORY
The clusterId is a string composed of alphanumeric characters and [-]. It is used by PunchPlatform command-line tools and various configuration files to refer to the corresponding cluster. There can be one or several zookeeper.clusters.[clusterId] sections, depending on your platform(s) setup. Multiple clusters are typically used to define several zones with different security levels and data flow restrictions.
The clusterIds must be unique in the scope of a PunchPlatform. Note that if you define only one Zookeeper cluster in your platform, most PunchPlatform commands will automatically use it as the default cluster, without the need to provide explicit identifiers.
clusters.<clusterId>.hosts
: String[]
MANDATORY
Zookeeper server hostnames that are part of this Zookeeper cluster. At least 3 hosts must be provided for resilience, and only an ODD number of hosts is allowed.
This is a Zookeeper requirement to avoid split-brain scenarios. These hostnames are used by PunchPlatform commands to find an available node in the Zookeeper cluster. This parameter should match the actual list of servers configured in the running Zookeeper cluster (see the zoo.cfg file in your Zookeeper cluster configuration).
clusters.<clusterId>.cluster_port
: Number
MANDATORY
Port for client connections to Zookeeper cluster.
I.e. all Zookeeper nodes will bind that port for communicating with client applications as well as to communicate together.
clusters.<clusterId>.punchplatform_root_node
: String
MANDATORY
Defines the Zookeeper root path, starting with /.
All PunchPlatform Zookeeper data will be stored under this path.
clusters.<clusterId>.peer_port
: Number
Optional
Default 2888.
Port for follower connections inside a Zookeeper cluster.
clusters.<clusterId>.election_port
: Number
Optional
Default 3888.
Port for leader election inside a Zookeeper cluster.
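For instance, a cluster that cannot use the default 2888/3888 ports can override them explicitly. This is only an illustrative sketch (the port values are hypothetical), reusing the hosts from the example above:

```json
{
  "zookeeper": {
    "clusters": {
      "common": {
        "hosts": ["node01", "node02", "node03"],
        "cluster_port": 2181,
        "peer_port": 12888,
        "election_port": 13888,
        "punchplatform_root_node": "/punchplatform-primary"
      }
    }
  }
}
```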
SSL/TLS and secrets
These settings allow securing both inter-node Zookeeper communication and client-to-Zookeeper communication, i.e. between Kafka brokers and Zookeeper, or between punch operator/monitoring tools and Zookeeper.
Important
Punch-deployed Apache Storm doesn't support communications with TLS-protected Zookeeper.
If you set the property platform.platform_local_credentials_dir and if the secrets/credentials files of a configured host are located inside a directory <platform.platform_local_credentials_dir>/<configured_host>, you may configure the security like this:
{
"zookeeper": {
"zookeeper_version": "apache-zookeeper-3.7.0-bin",
"zookeeper_nodes_production_interface": "eth0",
"zookeeper_childopts": "-server -Xmx512m -Xms512m",
"clusters": {
"common": {
"hosts": [
"node01",
"node02"
],
"cluster_port": 2181,
"punchplatform_root_node": "/punchplatform-primary",
"ssl_enabled": true,
"keystore_name": "server.jks",
"keystore_password": "@{DEPLOYMENT_SECRETS.ptf.keystore_pwd}"
}
}
}
}
In this case, the security file structure may look like this:
<platform.platform_local_credentials_dir>
├── truststore.jks
├── node01
│ └── server.jks
└── node02
└── server.jks
The filenames may be the same in all directories, but their content may obviously differ.
The security configurations inside clusters.<clusterId>.servers.<serverId> are dedicated to one server:
local_credentials_dir
: String
Optional
Default
clusters.<clusterId>.local_credentials_dir
.
If provided, will supplement platform.platform_local_credentials_dir and clusters.<clusterId>.local_credentials_dir.
The local path of a directory located on the deployer's machine and containing some specific credentials for hosts (i.e. certs, keys, CA, secrets files...).
Every key or keystore name configured for this component will be searched for:
1. as a matching name inside the current folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched inside <local_credentials_dir> then inside <local_credentials_dir>/host1/. Same behavior for each configured host.
keystore_name
: String
Optional
Default
clusters.<clusterId>.keystore_name
.
Name of the Java KeyStore on the deployer host inside local_credentials_dir.
Required to authenticate the Zookeeper's SSL client.
MUST be in jks format.
The name cannot contain '/' chars. It will be placed inside the punchplatform_daemons_user home directory.
keystore_password
: String
Optional
Default
clusters.<clusterId>.keystore_password
.
Password of the Java KeyStore on the deployer host inside local_credentials_dir.
The security configurations inside clusters.<clusterId> are common to every server inside the cluster:
ssl_enabled
: boolean
Optional
Default
False
.
If true, enable SSL for the Zookeeper server and client.
keystore_name
: String
Mandatory if SSL is enabled, or override
Overridden by
clusters.<clusterId>.servers.<serverId>.keystore_name
.
Name of the Java KeyStore on the deployer host inside local_credentials_dir.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST be in jks format.
The name cannot contain '/' chars. It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
keystore_password
: String
Mandatory if SSL is enabled, or override
Overridden by
clusters.<clusterId>.servers.<serverId>.keystore_password
.
Password of the Java KeyStore on the deployer host inside local_credentials_dir.
truststore_name
: String
Optional
Default
platform.platform_truststore_name
.
Name of the Java Truststore on the deployer host inside local_credentials_dir. Contains the certificates of endpoints to trust with TLS.
MUST be in jks format.
The name cannot contain '/' chars. It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
truststore_password
: String
Optional
Default
platform.platform_truststore_password
.
Password of the Java Truststore on the deployer host.
zk_client_auth
: String
Optional
Default
need
.
SSL authentication mode required by the Zookeeper server for clients.
Accepted values are none, want and need.
- none: do not request a client certificate.
- want: request a client certificate, but allow anonymous clients to connect.
- need: require a client certificate, disconnect anonymous clients.
Elasticsearch¶
Elasticsearch is a document-based database. It indexes JSON documents to provide advanced search capabilities. In particular, it provides the data backends (business data or metrics data) of the Kibana frontend applications.
{
"elasticsearch": {
"elasticsearch_version": "7.10.2",
"clusters": {
"es_search": {
"nodes": {
"node01": {
"http_api_address": "node01",
"transport_address": "node01",
"bind_address": "_eth1_",
"rack_id": "1"
},
"node02": {
"http_api_address": "node02",
"transport_address": "node02",
"bind_address": "_eth1_",
"rack_id": "2"
},
"node03": {
"http_api_address": "node03",
"transport_address": "node03",
"bind_address": "_eth1_",
"rack_id": "3"
}
},
"http_api_port": 9200,
"transport_port": 9300,
"minimum_master_nodes": 1,
"settings_by_type": {
"data_node": {
"max_memory": "2048m",
"modsecurity_enabled": false,
"modsecurity_blocking_requests": false,
"script_execution_authorized": true,
"http_cors_enabled": true,
"readonly": true
}
}
}
}
}
}
elasticsearch_version
: String
MANDATORY
Version of Elasticsearch.
clusters.<clusterId>
: String
MANDATORY
Alphanumeric characters to uniquely identify and refer to a given Elasticsearch cluster. When Elasticsearch is configured, this clusterId is also used to generate metrics names.
clusters.<clusterId>.nodes.<nodeHostname>
: String
MANDATORY
Hostnames of the nodes composing the Elasticsearch cluster.
clusters.<clusterId>.nodes.<nodeHostname>.http_api_address
: String
MANDATORY
FQDN (domain) or IP of the REST API provided by the Elasticsearch node on the production network.
clusters.<clusterId>.nodes.<nodeHostname>.transport_address
: String
FQDN (domain) or IP of the internal Elasticsearch communication port exposed on the production network (reachable by the other nodes of the Elasticsearch cluster).
This parameter is used for PunchPlatform channel deployment, when using the transport protocol for document indexing, in order to send data directly from Storm topologies to the cluster data nodes.
clusters.<clusterId>.nodes.<nodeHostname>.bind_address
: String
Default: "ES interface" When provided, this parameter defines the network address(es) to which the elasticsearch node will bind (and therefore
listen for incoming request). This can be provided in all forms supported by the elasticsearch host parameter.If not provided, the bind address will be determined using default
elasticsearch production interface provided in the deployment.settings.
clusters.<clusterId>.nodes.<nodeHostname>.type
: String
Default: "data_node" The type of the node:
data_node
,master_node
,client_node
oronly_data_node
(ie data node without master role enabled).This parameter is used for PunchPlatform cluster deployment, in order to automatically configure the Elasticsearch cluster nodes to set the data and master directive in the elasticsearch configuration file.
clusters.<clusterId>.nodes.<nodeHostname>.tag
: String
Default: ""
Nodes can be tagged. Tags allow indices placement, i.e. an index can be placed on a particular set of nodes according to their tags. This relies on the node.box_type Elasticsearch property. It can be used in Hot/Warm architectures, for example: the index/tags mapping has to be declared in the Elasticsearch mappings (or mapping templates in PunchPlatform) through the settings.index.routing.allocation.require.box_type parameter.
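As a hedged sketch of such a Hot/Warm layout (the host names and tag values are illustrative, not taken from a real platform), the cluster nodes could be tagged like this:

```json
{
  "nodes": {
    "node01": {
      "http_api_address": "node01",
      "transport_address": "node01",
      "tag": "hot"
    },
    "node02": {
      "http_api_address": "node02",
      "transport_address": "node02",
      "tag": "warm"
    }
  }
}
```

Fresh indexes would then be routed to the 'hot' node by requiring settings.index.routing.allocation.require.box_type to be "hot" in the corresponding mapping template.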
clusters.<clusterId>.nodes.additional_jvm_options
: String
Default: "" Add JVM options for a specific elasticsearch node. Overrides the additional_jvm_options parameter set in cluster section
clusters.<clusterId>.http_api_port
: Integer
MANDATORY
Listening port number of the REST API provided by the Elasticsearch node on the production network.
clusters.<clusterId>.api_hosts_for_monitoring
: String[]
Optional array of strings of the form "host:port", to provide when the platform monitoring daemon (shiva cluster) is not able to directly reach the Elasticsearch API using the hosts and ports from the "nodes" settings.
This is the case if this shiva cluster is running in a separate admin area, with no routing to the Elasticsearch cluster production network interface. E.g.
{ "api_hosts_for_monitoring" : [ "myelasticsearchvip.admin.network:9200"] }
clusters.<clusterId>.transport_port
: Integer
MANDATORY Listening port number of the internal Elasticsearch communication port exposed on the production network (reachable by the other nodes of the Elasticsearch cluster).
clusters.<clusterId>.minimum_master_nodes
: Integer
MANDATORY Defines the minimum number of master-eligible nodes required to elect a master, preventing split-brain situations. Set it to a majority of the master-eligible nodes (total = (n/2)+1).
clusters.<clusterId>.recover_after_nodes
: Integer
Default: 0 Recover as long as this many data or master nodes have joined the cluster.
clusters.<clusterId>.expected_nodes
: Integer
Default: 0 The number of (data or master) nodes that are expected to be in the cluster. Recovery of local shards will start as soon as the expected number of nodes have joined the cluster.
clusters.<clusterId>.recover_after_time
: String "5m"
If the expected number of nodes is not achieved, the recovery process waits for the configured amount of time before trying to recover regardless. Defaults to
5m
if one of the expected_nodes settings is configured.
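Putting the recovery-related settings together, a 3-node cluster might be configured as follows. This is an illustrative sketch only; the values are examples, not recommendations:

```json
{
  "minimum_master_nodes": 2,
  "recover_after_nodes": 2,
  "expected_nodes": 3,
  "recover_after_time": "5m"
}
```

With this sketch, shard recovery starts as soon as all 3 nodes have joined, or after 5 minutes once at least 2 nodes are present.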
clusters.<clusterId>.additional_jvm_options
: String
Default: "" Add JVM options to each node from elasticsearch cluster
clusters.<clusterId>.temporary_directory
: String
Default: "" Allow relocation of Elasticsearch temporary folder to a custom folder By default, Elasticsearch uses a private temporary directory that the startup script creates immediately below the system temporary directory (/tmp). This folder must not be mounted with noexec otherwise cluster will not start up properly
clusters.<clusterId>.settings_by_type
: Object
Defines settings per node type (e.g. data_node).
clusters.<clusterId>.settings_by_type.client_type
: String "data_node"
Refer to Elasticsearch type. By default, it is
data_node
. It can beclient_node
clusters.<clusterId>.settings_by_type.client_type.max_memory
String
Maximum size of each Elasticsearch node's JVM memory. The rule of thumb is half the size of the VM RAM, assuming one Elasticsearch server per VM. Should be kept below 32G to prevent the JVM from having to use large-size pointers and memory tables.
clusters.<clusterId>.settings_by_type.client_type.modsecurity_enabled
: Boolean
Enable (true) or disable (false) the installation and the configuration of modsecurity.
clusters.<clusterId>.settings_by_type.client_type.modsecurity_blocking_requests
: Boolean
Default is true.
During the integration of a PunchPlatform, this setting can be used to run modsecurity in non-blocking mode.
clusters.<clusterId>.settings_by_type.client_type.script_execution_authorized
: Boolean
Enable (true) or disable (false) the execution of scripts through Elasticsearch. This setting must be set to true to display all Grafana dashboards properly. We recommend setting it to false on customer-facing Elasticsearch clusters for security purposes.
clusters.<clusterId>.settings_by_type.client_type.http_cors_enabled
: Boolean
Enable (true) or disable (false) cross-origin resource sharing, i.e. whether a browser on another origin can make requests to Elasticsearch.
clusters.<clusterId>.settings_by_type.client_type.readonly
: Boolean
Enable (true) or disable (false) readonly modsecurity. It will deny search, visualization and dashboard creation for the user.
clusters.<clusterId>.override_elasticsearch_version
: String
In some cases, especially after an Elasticsearch migration using the snapshot mechanism, you may want to switch the Elasticsearch version for only one cluster, usually the query one.
clusters.<clusterId>.supervisor
: Undefined
Elasticsearch nodes are supervised by supervisor. Its logrotate parameters can be configured in this section.
Modsecurity¶
Modsecurity is an Apache module that protects your Elasticsearch cluster against unwanted deletion and integrity breaches. Modsecurity features cannot be activated simultaneously with the opendistro_security "ssl_http_enabled" option.
{
"elasticsearch": {
"modsecurity": {
"modsecurity_production_interface": "eth0",
"port": 9100,
"domains": {
"admin": {
"elasticsearch_security_aliases_pattern": "events-mytenant-kibana-[-a-zA-Z0-9.*_:]+",
"elasticsearch_security_index_pattern": "events-mytenant-[-a-zA-Z0-9.*_:]+"
}
}
}
}
}
Warning
Modsecurity is not enabled yet for an Elasticsearch cluster!
You have to trigger it per cluster and node type with:
elasticsearch.clusters.[cluster_id].settings_by_type.[node_type].modsecurity_enabled
{
"elasticsearch": {
"clusters": {
"es_search": {
"settings_by_type": {
"data_node": {
"modsecurity_enabled": true,
"modsecurity_blocking_requests": true
}
}
}
}
}
}
modsecurity.modsecurity_production_interface
Mandatory: interface used by modsecurity on the target host.
modsecurity.port
Mandatory: port used by Apache for modsecurity.
modsecurity.<domain_name>
Mandatory: name of the client. It must match the Kibana domain name.
modsecurity.<client_name>.elasticsearch_security_index_pattern
Mandatory
regexp on the name of the index for modsecurity configuration. This parameter is used to restrict requests for access to data. The purpose is to prevent any access to other indexes than the user profile that accesses this specific kibana domain/instance is entitled to.
This parameter MUST match all indexes that contain data allowed to the user, not only aliases which names the user 'sees' in the Kibana interface. For example, if the kibana provides an 'index pattern' that in fact is an alias (e.g. : events-mytenant-kibana-bluecoat-lastmonth), the pattern must match underlying indexes that contain the data (e.g. : events-mytenant-bluecoat-2017.07.05 ).
This is because Kibana will determine which indexes contain useful data within a 'user level' alias, and will issue unitary requests to only the underlying indexes that hold data matching the query time scope.
To configure what aliases the user is allowed to see/uses at his Graphical User Interface level, please provide a different value to the 'elasticsearch_security_aliases_pattern'.
If non-wildcard index patterns are used in Kibana, then this setting MUST also match the said index patterns, which will be queried 'directly' by Kibana, without making any difference between indexes and aliases. Example: if a user has authorized data in indexes named following the 'events-mytenant-*' pattern, but sees them only through aliases named following the 'events-mytenant-kibana-*' pattern, then the setting should be: TODO. To authorize everything please fill TODO.
modsecurity.<client_name>.elasticsearch_security_aliases_pattern
Optional
Regexp on the name of the user-level aliases for modsecurity configuration. This setting MUST be provided if the user is allowed only to select some aliases within his kibana instance, instead of actually using indexes pattern that match real unitary indexes names.
If this setting is not provided, then it will default to the 'elasticsearch_security_index_pattern' setting value, and may lead to kibana malfunction or Elasticsearch overuse, especially if the provided value to this other setting is in fact an aliases pattern.
If you want to force Kibana to use pre-flight requests to determine the actual low-level indexes useful to query against a time scope, then the Kibana indexes pattern must contain a '*' and therefore, this setting should enforce the presence of a '*'.
Example: if a user has authorized data in indexes named following the 'events-mytenant-*' pattern, but sees them only through aliases named following the 'events-mytenant-kibana-<technoname>' pattern, then the setting should be: events-mytenant-kibana-[-.:0-9a-zA-Z*_]*[*][-.:0-9a-zA-Z*_]*. To authorize everything please fill TODO
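Putting the two settings together, a domain configuration may pair the index pattern with a stricter aliases pattern. The values below simply mirror the patterns already shown in this section and are illustrative:

```json
{
  "admin": {
    "elasticsearch_security_index_pattern": "events-mytenant-[-a-zA-Z0-9.*_:]+",
    "elasticsearch_security_aliases_pattern": "events-mytenant-kibana-[-.:0-9a-zA-Z*_]*[*][-.:0-9a-zA-Z*_]*"
  }
}
```

The index pattern covers all underlying data indexes of the tenant, while the aliases pattern restricts what the user can select in Kibana and enforces the presence of a '*'.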
Open Distro Security plugin¶
If you set the property platform.platform_local_credentials_dir and if the secrets/credentials files of a configured host are located inside a directory <platform.platform_local_credentials_dir>/<configured_host>, you may configure the security like this:
{
"elasticsearch": {
"elasticsearch_version": "7.10.2",
"clusters": {
"es_search": {
"nodes": {
"node01": {
"http_api_address": "node01",
"transport_address": "node01",
"bind_address": "_eth1_",
"rack_id": "1"
},
"node02": {
"http_api_address": "node02",
"transport_address": "node02",
"bind_address": "_eth1_",
"rack_id": "2"
}
},
"http_api_port": 9200,
"transport_port": 9300,
"minimum_master_nodes": 1,
"settings_by_type": {
"data_node": {
"max_memory": "2048m",
"modsecurity_enabled": false,
"modsecurity_blocking_requests": false,
"script_execution_authorized": true,
"http_cors_enabled": true,
"readonly": true
}
},
"plugins": {
"opendistro_security": {
"opendistro_security_version": "1.9.0.0",
"ssl_http_enabled": true,
"ssl_http_clientauth_mode": "REQUIRE",
"ssl_pemkey_name": "node-key-pkcs8.pem",
"ssl_pemcert_name": "node-cert.pem",
"admin_pemcert_name": "admin-cert.pem",
"admin_pemkey_name": "admin-key-pkcs8.pem",
"authcz_admin_dn": [
"emailAddress=admin@thalesgroup.com,CN=admin,OU=SAS,O=TS,L=VLZ,ST=Paris,C=FR"
],
"nodes_dn": [
"emailAddress=node01@thalesgroup.com,CN=node,OU=SAS,O=TS,L=VLZ,ST=Paris,C=FR",
"emailAddress=node02@thalesgroup.com,CN=node,OU=SAS,O=TS,L=VLZ,ST=Paris,C=FR"
],
"kibana_index": ".kibana-admin",
"elasticsearch_username": "admin",
"elasticsearch_password": "admin"
}
}
}
}
}
}
In this case, the security file structure may look like this:
<platform.platform_local_credentials_dir>
├── ca.pem
├── node01
│ ├── node-key-pkcs8.pem
│ ├── node-cert.pem
│ ├── admin-cert.pem
│ └── admin-key-pkcs8.pem
└── node02
├── node-key-pkcs8.pem
├── node-cert.pem
├── admin-cert.pem
└── admin-key-pkcs8.pem
The filenames may be the same in all directories, but their content may obviously differ.
Info
Deploying Open Distro Security on your Elasticsearch cluster configures the security features once. Any further configuration on your filesystem requires a manual action to reload the security measures all over the cluster.
Warning
If certs and keys are not self-signed, configure platform.platform_ca_name at the punchplatform-deployment.settings root. This CA file must contain the trusted certs chain of the platform, including the Elasticsearch SSL certificates.
clusters.<clusterId>.nodes.<nodeId> section:
local_credentials_dir
: Optional, String
Default
clusters.<clusterId>.plugins.local_ssl_certs_dir
.
If provided, will supplement platform.platform_local_credentials_dir and clusters.<clusterId>.plugins.local_ssl_certs_dir.
The local path of a directory located on the deployer's machine and containing some specific credentials for hosts (i.e. certs, keys, CA, secrets files...).
Every key or keystore name configured for this component will be searched for:
1. as a matching name inside the current folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched inside <local_credentials_dir> then inside <local_credentials_dir>/host1/. Same behavior for each configured host.
admin_pemkey_name
: Optional, String
Default is
clusters.<clusterId>.plugins.opendistro_security.admin_pemkey_name
.
Private key name located inside local_credentials_dir.
Used for the security administration.
MUST be in PKCS8 format.
The name cannot contain '/' chars. It will be placed inside the Elasticsearch config directory.
The key must be different from the node's private key.
admin_pemcert_name
: Optional, String
Default is
clusters.<clusterId>.plugins.opendistro_security.admin_pemcert_name
.
Certificate name located inside local_credentials_dir.
Used for the security administration.
MUST respect the x509 standard.
The name cannot contain '/' chars. It will be placed inside the Elasticsearch config directory.
The certificate must be different from the node's certificate.
ssl_pemkey_name
: Optional, String
Default is
clusters.<clusterId>.plugins.opendistro_security.ssl_pemkey_name
.
Private key name located inside local_credentials_dir.
Used to encrypt the transport protocol and the REST API interface of the nodes with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' chars. It will be placed inside the Elasticsearch config directory.
The key must be different from the admin key.
ssl_pemcert_name
: Optional, String
Default is
clusters.<clusterId>.plugins.opendistro_security.ssl_pemcert_name
.
Certificate name located inside local_credentials_dir.
Used to encrypt the transport protocol and the REST API interface of the nodes with TLS.
MUST respect the x509 standard.
The name cannot contain '/' chars. It will be placed inside the Elasticsearch config directory. The certificate must be different from the admin certificate.
clusters.<clusterId>.plugins.opendistro_security section:
opendistro_security_version
: Mandatory, String
Version of the Opendistro Security plugin for Elasticsearch.
Triggers the plugin installation during Elasticsearch deployment.
local_ssl_certs_dir
: Optional, String
Optional
Default
platform.platform_local_credentials_dir
.
If provided, will supplement platform.platform_local_credentials_dir and clusters.<clusterId>.nodes.<nodeId>.local_credentials_dir.
The local path of a directory located on the deployer's machine and containing some specific credentials for hosts (i.e. certs, keys, CA, secrets files...).
Every key or keystore name configured for this component will be searched for:
1. as a matching name inside the current folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched inside <local_ssl_certs_dir> then inside <local_ssl_certs_dir>/host1/. Same behavior for each configured host.
admin_pemkey_name
: Mandatory or override in node section, String
Overridden by
clusters.<clusterId>.nodes.<nodeId>.admin_pemkey_name
Private key name located inside local_ssl_certs_dir.
Used for the security administration.
MUST be in PKCS8 format.
The name cannot contain '/' chars. It will be placed inside the Elasticsearch config directory.
The key must be different from the node's private key.
admin_pemcert_name
: Mandatory or override in node section, String
Overridden by
clusters.<clusterId>.nodes.<nodeId>.admin_pemcert_name
Certificate name located inside local_ssl_certs_dir.
Used for the security administration.
MUST respect the x509 standard.
The name cannot contain '/' chars. It will be placed inside the Elasticsearch config directory.
The certificate must be different from the node's certificate.
ssl_pemkey_name
: Mandatory or override in node section, String
Overridden by
clusters.<clusterId>.nodes.<nodeId>.ssl_pemkey_name
Private key name located inside local_ssl_certs_dir.
Used to encrypt the transport protocol and the REST API interface of the nodes with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' chars. It will be placed inside the Elasticsearch config directory.
The key must be different from the admin key.
ssl_pemcert_name
: Mandatory or override in node section, String
Overridden by
clusters.<clusterId>.nodes.<nodeId>.ssl_pemcert_name
Certificate name located inside local_ssl_certs_dir.
Used to encrypt the transport protocol and the REST API interface of the nodes with TLS.
MUST respect the x509 standard.
The name cannot contain '/' chars. It will be placed inside the Elasticsearch config directory. The certificate must be different from the admin certificate.
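Since the plugin expects private keys in PKCS8 format, a plain RSA key can be converted with standard openssl commands. This is a generic sketch; the file names simply follow the illustrative ones used above:

```shell
# generate an RSA private key (traditional PEM format)
openssl genrsa -out node-key.pem 2048
# convert it to the unencrypted PKCS8 format expected by the plugin
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt \
    -in node-key.pem -out node-key-pkcs8.pem
```

The resulting node-key-pkcs8.pem file starts with a "BEGIN PRIVATE KEY" header, which distinguishes PKCS8 keys from traditional "BEGIN RSA PRIVATE KEY" ones.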
authcz_admin_dn
: Mandatory, String Array
Distinguished Name (subject) of the admin certificate.
Used to identify the admin certificate for security management.Example:
"authcz_admin_dn": ["emailAddress=admin@thalesgroup.com,CN=admin,OU=SAS,O=TS,L=VLZ,ST=Paris,C=FR"]
To get the certificate's subject string, run the following command line :
```sh
openssl x509 -subject -nameopt RFC2253 -noout -in admin_cert.crt
subject=emailAddress=admin@thalesgroup.com,CN=admin,OU=SAS,O=TS,L=VLZ,ST=Paris,C=FR
```
nodes_dn
: Mandatory, String Array
Distinguished Name (subject) of the nodes certificates.
Used to identify a node certificate for SSL handshakes.Example:
"nodes_dn": ["emailAddress=node@thalesgroup.com,CN=node,OU=SAS,O=TS,L=VLZ,ST=Paris,C=FR"]
To get the certificate's subject string, run the following command line :
```sh
openssl x509 -subject -nameopt RFC2253 -noout -in node_cert.crt
subject=emailAddress=node@thalesgroup.com,CN=node,OU=SAS,O=TS,L=VLZ,ST=Paris,C=FR
```
admin_pemtrustedcas_name
: Optional, String
Default is configured in the platform section: platform.platform_ca_name. CA filename located inside local_ssl_certs_dir.
The certificates to trust for security administration.
The name cannot contain '/' chars. It will be placed inside the Elasticsearch config directory.
ssl_pemtrustedcas_name
: Optional, String
Default is configured in the platform section: platform.platform_ca_name. CA filename located inside local_ssl_certs_dir.
The certificates to trust for the transport protocol and the REST API interface of the nodes with TLS. The name cannot contain '/' chars.
It will be placed inside the Elasticsearch config directory.
ssl_http_enabled
: Optional, Boolean
Default is
false
.
If true, enables SSL encryption for the Elasticsearch REST API interface of the nodes.
ssl_http_clientauth_mode
: Optional, String
Default
OPTIONAL
.
Authentication mode for HTTPS. Values are NONE, OPTIONAL or REQUIRE.
If REQUIRE, the Security plugin only accepts REST requests when a valid client TLS certificate is sent.
If OPTIONAL, the Security plugin accepts TLS client certificates if they are sent, but does not require them.
If NONE, the Security plugin does not accept TLS client certificates. If one is sent, it is discarded.
ssl_transport_enforce_hostname_verification
: Optional, Boolean
Default is
true
.
If true, the security plugin verifies, for the transport layer, that the hostname of the communication partner matches the hostname in the certificate. The hostname is taken from the subject or SAN entries of your certificate. Finally, the security plugin resolves the (verified) hostname against your DNS.
ssl_http_enabled_ciphers
: Optional, String Array
Default is
[]
.
Enabled TLS cipher suites for the REST layer. Use Java format for cipher names. If this setting is not enabled, the ciphers and TLS versions are negotiated between the browser and the security plugin automatically, which in some cases can lead to a weaker cipher suite being used.
Default is :TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA
ssl_http_enabled_protocols
: Optional, String Array
Default is
[]
.
Enabled TLS protocols for the REST layer. Use Java format for protocol names.
Example:["TLSv1.2"]
.
ssl_transport_enabled_ciphers
: Optional, String Array
Default is
[]
.
Enabled TLS cipher suites for the transport layer. Use Java format for cipher names. If this setting is not enabled, the ciphers and TLS versions are negotiated between the browser and the security plugin automatically, which in some cases can lead to a weaker cipher suite being used.
Default is :TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA
ssl_transport_enabled_protocols
: Optional, String Array
Default is
[]
.
Enabled TLS protocols for the transport layer. Use Java format for protocol names.
Example:["TLSv1.2"]
.
audit_index_prefix
: Optional, String
Default is
platform-opendistro-auditlog
.
Modify the prefix for the Open Distro Security audit log index.
The final index name will be <audit_index_prefix>-YYYY.MM.dd.
audit_disabled_rest_categories
: Optional, String Array
Default is
[]
.
Disable REST audit categories for the Open Distro Security audit logs.
Supported values are: FAILED_LOGIN, AUTHENTICATED, SSL_EXCEPTION, and BAD_HEADERS. Learn more in the Open Distro Security plugin documentation.
audit_disabled_transport_categories
: Optional, String Array
Default is
[]
.
Disable Elasticsearch transport protocol audit categories for the Open Distro Security audit logs.
Supported values are: FAILED_LOGIN, AUTHENTICATED, MISSING_PRIVILEGES, GRANTED_PRIVILEGES, SSL_EXCEPTION, OPENDISTRO_SECURITY_INDEX_ATTEMPT and BAD_HEADERS. Learn more in the Open Distro Security plugin documentation.
ssl_cache_ttl_minutes
: Optional, Integer
Default is 60.
Authentication cache timeout in minutes. A value of 0 disables caching.
kibana_index
: String, Optional
Default is .kibana. Must match the name of the Kibana index from
kibana.yml
. Example: .kibana-admin
elasticsearch_username
: String, Optional
Default is admin. The username of the custom user used to deploy the plugin. Follow this procedure if you want to deploy with a user other than the default.
elasticsearch_password
: String, Optional
Default is admin. The password of the custom user used to deploy the plugin. Follow this procedure if you want to deploy with a user other than the default.
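Putting the settings of this section together, an Open Distro Security plugin configuration might look like the following sketch. All values are illustrative, not recommendations; check the Embedded COTS list for the plugin version matching your release.

```json
{
  "opendistro_security": {
    "opendistro_security_version": "1.13.0.1",
    "ssl_transport_enabled_protocols": ["TLSv1.2"],
    "audit_index_prefix": "platform-opendistro-auditlog",
    "audit_disabled_rest_categories": ["AUTHENTICATED"],
    "audit_disabled_transport_categories": ["GRANTED_PRIVILEGES"],
    "ssl_cache_ttl_minutes": 60,
    "kibana_index": ".kibana-admin",
    "elasticsearch_username": "admin",
    "elasticsearch_password": "admin"
  }
}
```

Remember that the default admin credentials must never be kept in production.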
Opensearch¶
OpenSearch is a community-driven, open source search and analytics suite derived from Apache 2.0 licensed Elasticsearch 7.10.2 & Kibana 7.10.2.
Warning
Opensearch cannot be deployed along with Elasticsearch.
Opensearch deployment parameters are the same as Elasticsearch. You only have to replace the following terms:
- elasticsearch by opensearch
- kibana by opensearch_dashboards
- opendistro_security by opensearch_security
You can make a diff between those two deployment settings to check the differences:
- Log Management Central with Elastic & Kibana
- Log Management Central with Opensearch & Opensearch Dashboards
Other components using Elasticsearch (such as Platform reporters, Gateway etc...) do not have to change their parameters when using Opensearch instead of Elasticsearch.
{
"opensearch": {
"opensearch_version": "1.2.4",
"clusters": {
"es_search": {
"nodes": {
"node01": {
"http_api_address": "node01",
"transport_address": "node01",
"bind_address": "_eth1_",
"rack_id": "1"
},
"node02": {
"http_api_address": "node02",
"transport_address": "node02",
"bind_address": "_eth1_",
"rack_id": "2"
},
"node03": {
"http_api_address": "node03",
"transport_address": "node03",
"bind_address": "_eth1_",
"rack_id": "3"
}
},
"http_api_port": 9200,
"transport_port": 9300,
"minimum_master_nodes": 1,
"settings_by_type": {
"data_node": {
"max_memory": "2048m",
"modsecurity_enabled": false,
"modsecurity_blocking_requests": false,
"script_execution_authorized": true,
"http_cors_enabled": true,
"readonly": true
}
},
"plugins": {
"opensearch_security": {
"opensearch_security_version": "1.2.4.0",
"ssl_http_enabled": true,
"ssl_http_clientauth_mode": "REQUIRE",
"authcz_admin_dn": [
"CN=centralback1-admin,OU=Punchplatform,O=Thales,L=Paris,ST=IDF,C=FR"
],
"nodes_dn": [
"CN=centralback1,OU=Punchplatform,O=Thales,L=Paris,ST=IDF,C=FR"
],
"opensearch_dashboards_index": ".opensearch_dashboards",
"opensearch_username": "admin",
"opensearch_password": "admin"
}
}
}
}
}
}
Kibana¶
Kibana is a front-end application that allows users to search and display data from Elasticsearch.
More information on Kibana official documentation.
{
"kibana": {
"kibana_version": "7.10.2",
"repository": "http://fr.archive.ubuntu.com/ubuntu/",
"domains": {
"admin": {
"es_cluster_target": "es_search",
"es_type_of_nodes_targeted": "data_node",
"kibana_port": 5601,
"csp_strict": false,
"type": "administration",
"index": ".kibana-override-name"
}
},
"servers": {
"node01": {
"address": "0.0.0.0"
},
"node02": {
"address": "node02"
},
"node03": {
"address": "node03"
}
}
}
}
kibana_version
: String
Mandatory.
Version of Kibana.
repository
: Optional, String
Optional, but mandatory if chrooted.
domains.<domainName>.kibana_port
: Integer
Mandatory. TCP port used to access Kibana over HTTP.
domains.<domainName>.type
: String
Mandatory.
One value connects Kibana through the front reverse proxy; the other value (administration, as used in the example above) connects Kibana through the admin proxy and can also be directly accessed from the administration network.
domains.<domainName>.es_cluster_target
: String
Mandatory, or use
gateway_cluster_target
.
Cluster name that the kibana is allowed to contact.
domains.<domainName>.es_type_of_nodes_targeted
: String
Mandatory if
es_cluster_target
is set.
Elasticsearch node type that the kibana is allowed to contact. Can be data_node, master_node, client_node or only_data_node (i.e. data node without the master role enabled).
domains.<domainName>.es_shard_timeout
: Integer
Optional.
Default 30 000
(30 secs).
Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0
to disable.
domains.<domainName>.es_request_timeout
: Integer
Optional.
Default 60 000
(60 secs). Time in milliseconds to wait for responses from the back end or Elasticsearch. This value must be a positive integer.
domains.<domainName>.gateway_cluster_target
: String
Mandatory or use
es_cluster_target
.
Cluster name that the Kibana is allowed to contact. Every Kibana request will be redirected to this cluster, then forwarded to the Elasticsearch data_cluster
configured inside the Gateway settings. es_cluster_target
cannot be set in this case and es_type_of_nodes_targeted
is ignored.
domains.<domainName>.csp_strict
: Boolean
Optional.
Default true
.
Block access to Kibana for any browser that does not enforce Content Security Policy.
domains.<domainName>.servers
: String Array
List of hostnames (no IPs). It allows starting the Kibana instances of this domain only on the listed hosts. To deploy this feature properly, you have to: 1) fill the kibana.servers section with ALL Kibana servers, 2) select here the subset of those servers on which this domain must be started. The values must belong to kibana.servers.
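For instance, assuming three Kibana servers are declared globally, a domain can be pinned to a subset of them. This is a sketch with illustrative hostnames:

```json
{
  "kibana": {
    "kibana_version": "7.10.2",
    "domains": {
      "admin": {
        "es_cluster_target": "es_search",
        "es_type_of_nodes_targeted": "data_node",
        "kibana_port": 5601,
        "type": "administration",
        "servers": ["node01", "node02"]
      }
    },
    "servers": {
      "node01": { "address": "node01" },
      "node02": { "address": "node02" },
      "node03": { "address": "node03" }
    }
  }
}
```

Here the admin domain only starts on node01 and node02, while node03 remains available for other domains.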
domains.<domainName>.index
: String
Optional.
Default is kibana-<domain_name>
.
Override kibana index name used to store visualizations.
domains.<domainName>.disable_modsecurity
: Boolean
Default is
false
.
If Modsecurity is enabled for the targeted Elasticsearch cluster, set this configuration to true
to directly target Elasticsearch without being filtered by Modsecurity rules for this domain only.
servers.<serverName>
: String
Dictionary of parameters for each server, keyed by server name.
servers.<serverName>.address
: String
Interface or address used to bind the Kibana process.
chrooted
: Optional, Boolean
Default is false. Set to true to enable this function. Only taken into account if there is only one domain specified. If set to false, the running instances of Kibana won't be jailed in a chroot. Enabling it is recommended in production.
plugins
: JSON object
Refer to the following sections that describe kibana plugins configuration
SSL/TLS and secrets
If you set the property platform.platform_local_credentials_dir
and if the secrets/credentials files of a configured
host are located inside a directory <platform.platform_local_credentials_dir>/<configured_host>
, you may configure the
security like this :
{
"kibana": {
"kibana_version": "7.10.2",
"domains": {
"admin": {
"gateway_cluster_target": "gateway_32g",
"kibana_port": 5601,
"type": "administration",
"server_ssl_enabled": true,
"server_ssl_key_name": "server-key.pem",
"server_ssl_certificate_name": "server-cert.pem",
"elasticsearch_ssl_enabled": true,
"plugins": {
"punchplatform": {
"rest_api": {
"hosts": [
"https://server3:4242"
],
"ssl_enabled": true
}
}
}
}
},
"servers": {
"server1": {
"address": "server1"
},
"server2": {
"address": "server2"
}
},
"plugins": {
"punchplatform": {
"punchplatform_version": "6.4.5"
},
"opendistro_security": {
"opendistro_security_version": "1.9.0.0"
}
}
}
}
Here is what the security file structure may look like in this case:
<platform.platform_local_credentials_dir>
├── cafile.pem
├── server1
│ ├── server-key.pem
│ └── server-cert.pem
└── server2
├── server-key.pem
└── server-cert.pem
The filenames may be the same in all directories, but their content may obviously differ.
Info
server_ssl_key_name
and server_ssl_certificate_name
will both be used as the client key and cert files for:
- the Punchplatform REST Api client
- The elasticsearch client
The security configurations inside kibana.domains.<domainName>
are dedicated to one single domain :
local_ssl_certs_dir
: Optional, String
Default
platform.platform_local_credentials_dir
.
If provided, will supplement platform.platform_local_credentials_dir
and kibana.domains.<domainName>.servers.<host>.local_ssl_certs_dir
.
The local path of a directory located on the deployer's machine and containing some specific credentials for the hosts (i.e. certs, keys, CA, secrets files...).
Every key or keystore name configured for this component will be searched for:
1. A matching name inside the current folder, or inside the default one.
2. If not found, a matching name inside a subfolder named after each configured cluster's hosts.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched inside <local_ssl_certs_dir>
then inside <local_ssl_certs_dir>/host1/
. Same behavior for each configured host.
server_ssl_enabled
: Optional, Boolean
Default
false
.
If true, enable SSL for the Kibana server
server_ssl_key_name
: Optional, String
Default is
None
.
Private key name located inside local_ssl_certs_dir
.
Will be used as the kibana server's private key. MUST be in PKCS8
format.
The name cannot contain '/' chars. It will be placed inside the {data_root}/kibana/{domain_chroot}/.secrets
directory.
server_ssl_certificate_name
: Optional, String
Default is
None
.
Certificate name located inside local_credentials_dir
.
Will be used as the kibana server's certificate. MUST respect the x509
standard.
The name cannot contain '/' chars. It will be placed inside the {data_root}/kibana/{domain_chroot}/.secrets
directory.
server_ssl_certificateAuthorities_names
: Optional, String Array
Default is
[platform.platform_ca_name]
.
CA filenames located inside local_ssl_certs_dir
.
If None, the kibana server will trust every SSL connection.
The names cannot contain '/' chars. They will be placed inside the {data_root}/kibana/{domain_chroot}/.secrets
directory.
elasticsearch_ssl_enabled
: Optional, Boolean
Default is
false
.
If true, enable SSL for the Kibana client (to ES or the Punch Gateway).
The server's private key and certificate will be used as the client ones (server_ssl_key_name
and server_ssl_certificate_name
).
elasticsearch_ssl_certificateAuthorities_names
: Optional, Array
Default is
[platform.platform_ca_name]
.
CA filenames located inside local_ssl_certs_dir
.
The certificates to trust for the Kibana client.
The names cannot contain '/' chars. They will be placed inside the {data_root}/kibana/{domain_chroot}/.secrets
directory.
elasticsearch_ssl_verificationMode
: Optional, String
Default is
full
.
The Kibana client's SSL certificate verification mode when connecting to a server. Values may be none, certificate or full.
If none, performs no verification of the server's certificate. This mode disables many of the security benefits of SSL/TLS and should only be used after very careful consideration. It is primarily intended as a temporary diagnostic mechanism when attempting to resolve TLS errors; its use in production environments is strongly discouraged.
If certificate, verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification.
If full, verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server's hostname (or IP address) matches the names identified within the certificate.
elasticsearch_ssl_always_present_certificate
: Optional, Boolean
Default:
true
When true, the Kibana client will always send its SSL certificate alongside the SSL request. This setting applies to all outbound SSL/TLS connections to a server, including requests that are proxied for end users.
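The client-side SSL settings above can be sketched inside a single domain as follows. The filenames are illustrative and must exist in the resolved local_ssl_certs_dir (the CA name mirrors the cafile.pem shown in the directory layout earlier):

```json
{
  "admin": {
    "es_cluster_target": "es_search",
    "es_type_of_nodes_targeted": "data_node",
    "kibana_port": 5601,
    "type": "administration",
    "server_ssl_enabled": true,
    "server_ssl_key_name": "server-key.pem",
    "server_ssl_certificate_name": "server-cert.pem",
    "elasticsearch_ssl_enabled": true,
    "elasticsearch_ssl_certificateAuthorities_names": ["cafile.pem"],
    "elasticsearch_ssl_verificationMode": "full",
    "elasticsearch_ssl_always_present_certificate": true
  }
}
```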
The security configurations inside kibana.domains.<domainName>.servers.<host>
are dedicated to one server of a
domain :
local_ssl_certs_dir
: Optional, String
Default
kibana.domains.<domainName>.local_ssl_certs_dir
.
If provided, will supplement platform.platform_local_credentials_dir
and kibana.domains.<domainName>.local_ssl_certs_dir
.
The local path of a directory located on the deployer's machine and containing some specific credentials for the hosts (i.e. certs, keys, CA, secrets files...).
Every key or keystore name configured for this component will be searched for:
1. A matching name inside the current folder, or inside the default one.
2. If not found, a matching name inside a subfolder named after each configured cluster's hosts.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched inside <local_ssl_certs_dir>
then inside <local_ssl_certs_dir>/host1/
. Same behavior for each configured host.
server_ssl_key_name
: Optional, String
Default is
None
.
Private key name located inside local_ssl_certs_dir
.
Will be used as the kibana server's private key. MUST be in PKCS8
format.
The name cannot contain '/' chars. It will be placed inside the {data_root}/kibana/{domain_chroot}/.secrets
directory.
server_ssl_certificate_name
: Optional, String
Default is
None
.
Certificate name located inside local_credentials_dir
.
Will be used as the kibana server's certificate. MUST respect the x509
standard.
The name cannot contain '/' chars. It will be placed inside the {data_root}/kibana/{domain_chroot}/.secrets
directory.
Plugins¶
Info
Kibana plugins can be configured in two different sections:
kibana.plugins.<plugin-name>
: global plugin configuration for all domains
kibana.domains.<domain_name>.plugins.<plugin-name>
: each configuration applied to this domain overrides the global configuration in kibana.plugins.<plugin-name>
Both of these sections are additive.
For instance, the admin
domain will inherit from the kibana.plugins.punchplatform_feedback
configuration.
The guest
domain will inherit from the kibana.plugins.punchplatform_feedback
configuration
and the kibana.domains.guest.plugins.punchplatform_feedback
configuration.
{
"kibana": {
"kibana_version" : "7.10.2",
"repository": "http://fr.archive.ubuntu.com/ubuntu/",
"domains": {
"guest": {
"es_cluster_target": "es_search",
"es_type_of_nodes_targeted": "data_node",
"kibana_port": 5601,
"type": "administration",
"plugins": {
"punchplatform_feedback": {
"input_type": "checkbox"
}
}
},
"admin": {
"es_cluster_target": "es_search",
"es_type_of_nodes_targeted": "data_node",
"kibana_port": 5601,
"type": "administration"
}
},
"servers": {
"node01": {
"address": "0.0.0.0"
}
},
"plugins": {
"punchplatform_feedback": {
"punchplatform_feedback_version": "2.0.1",
"tenant": "global"
}
}
}
}
Open Distro Security plugin¶
{
"kibana": {
"plugins": {
"opendistro_security": {
"opendistro_security_version": "1.13.0.1"
}
}
}
}
opendistro_security_version
: Mandatory, string
Mandatory: version of the Opendistro Security plugin for Kibana. Triggers the plugin installation during Kibana deployment. Check the version in Embedded COTS.
elasticsearch_username
: Optional, string
Default is
punchkibanaserver
.
The kibana server's username to authenticate to the Elasticsearch cluster. Keep the default value if you are installing Open Distro Security for Elasticsearch
for the first time and you have not changed the default kibana role's username yet.
elasticsearch_password
: Optional, string
Default is
punchkibanaserver
.
The kibana server's password to authenticate to the Elasticsearch cluster. Keep the default value if you are installing Open Distro Security for Elasticsearch
for the first time and you have not changed the default kibana role's password yet.
Warning
Never keep the default username and password in a production context!
Data Feedback plugin¶
Kibana plugin that offers a Punch Feedback
Visualization to annotate Elasticsearch data.
The deployment options configure the input types used to annotate Elasticsearch data. If you do not set the input type parameters, then all inputs will be available and configurable in Kibana.
On the other hand, if you specify input type parameters as in the example, the related inputs will not be configurable in the Kibana UI.
For more information about input types, check plugin documentation.
{
"kibana": {
"plugins": {
"punchplatform_feedback": {
"punchplatform_feedback_version": "2.1.0",
"tenant": "reftenant",
"select_options": ["blue", "green", "red"],
"max_tags": 1,
"range_min": 0,
"range_max": 10,
"checkbox_label": "False Positive",
"save_in_new_index": true,
"feedback_index": "feedbacks"
}
}
}
}
punchplatform_feedback_version
: Mandatory, String
Plugin version to install. Check version in Punch Plugins list.
enabled
: Optional, boolean
Default is
true
.
If false, disable this plugin installation or configuration updates.
Useful to disable this plugin installation for one dedicated domain if configured in kibana.domains.<domain_name>.plugins.punchplatform_feedback
.
tenant
: Mandatory, string
If you save your feedbacks in a new index, the name of the index will be formatted as follows: {tenant}-{feedbackIndex}-YYYY.MM.DD.
input_type
: Optional, string
If activated, then only this input type will be available in the Visualization.
Possible values are text, select, checkbox or range.
Warning
Be careful when changing the feedback input type: changing it will delete the feedback stored in the table, because it will be overwritten by the default value of the new feedback type.
select_options
: Optional, array
If activated, then only those select options will be available in the Visualization. List of possible feedback values if input type is
select
.
These options will further be used to select the possible feedback values to assign to data.
max_tags
: Optional, integer
If activated, then it will not be configurable in the Visualization. Maximum amount of tags for each data feedback. Only works for
select
andtext
input types.
range_min
: Optional, integer
If activated, then only this range_min will be available in the Visualization. Set a minimum limit to choose in your data feedback if input type is
range
.
range_max
: Optional, integer
If activated, then only this range_max will be available in the Visualization. Set a maximum limit to choose in your data feedback if input type is
range
.
checkbox_label
: Optional, string
If activated, then only this checkbox value will be available in the Visualization. Add a label for your feedback checkbox if input type is
checkbox
.
save_in_new_index
: Optional, boolean
If activated, then it will not be configurable in the Visualization. If false, feedbacks are stored inside the index of the original data.
If true, feedbacks are stored inside a new index named <tenant>-<feedback_index>-YYYY.MM.DD
.
The tenant
value is taken from the Punchplatform Gateway tenant endpoint.
feedback_index
: Optional, string
If activated, then only this feedback index will be available in the Visualization. Only used by the
save_in_new_index
configuration.
Punch Documentation Plugin¶
The Punch Documentation
plugin displays official documentation offline.
The documentation is embedded within the plugin.
It does not require deploying the Gateway to serve the documentation.
Check plugin documentation.
{
"kibana": {
"plugins": {
"punch_documentation": {
"version": "1.0.1",
"documentation_version": "6.4.5"
}
}
}
}
version
: Mandatory, String
Plugin version to install. Check version in Punch Plugins list.
documentation_version
: Mandatory, String
Punch documentation version to be displayed in the plugin.
Data Extraction Plugin¶
The Data Extraction
plugin makes it possible to extract data from Elasticsearch.
It requires the Punch Gateway to launch an extraction job in the background.
{
"kibana": {
"plugins": {
"data_extraction": {
"version": "1.2.1",
"enabled": true,
"use_legacy": false,
"tenant": "reftenant",
"rest_api": {
"hosts": [
"https://centralback1:4242",
"https://centralback2:4242"
],
"ssl_enabled": true
}
}
}
}
}
version
: Mandatory, String
Plugin version to install. Check version in Punch Plugins list.
rest_api.hosts
: Mandatory, String array
Backend REST API for Punch plugin.
rest_api.request_timeout
: Optional, Integer
Default
10000
.
Timeout in ms for requests leading to the backend REST API
rest_api.custom_headers
: Optional, Json Object
Default
None
.
Key:value
pairs to add custom headers in requests leading to the backend REST API. Example: {"Accept-Encoding": "gzip"}
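Combining the rest_api options above gives a fragment like the following sketch (the host, timeout and header values are illustrative):

```json
{
  "rest_api": {
    "hosts": ["https://centralback1:4242"],
    "request_timeout": 10000,
    "custom_headers": { "Accept-Encoding": "gzip" },
    "ssl_enabled": true
  }
}
```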
SSL and authentication to the backend REST API are also configurable:
rest_api.ssl_enabled
: Optional, Boolean
Default
false
.
Enable SSL encryption for the connection to the backend REST API.
The private key used for the SSL connection is the server's private key configured in kibana.domains.<domainName>.server_ssl_key_name
(Optional).
The certificate used for the SSL connection is the server's certificate configured in kibana.domains.<domainName>.server_ssl_certificate_name
(Optional).
The CA files used for the SSL connection are the server's CA files configured in kibana.domains.<domainName>.elasticsearch_ssl_certificateAuthorities_names
(Optional).
The verification mode used for the SSL connection is the server's verification mode configured in kibana.domains.<domainName>.elasticsearch_ssl_verificationMode
(Optional, default is "full"
).
Opensearch Dashboards¶
OpenSearch is a community-driven, open source search and analytics suite derived from Apache 2.0 licensed Elasticsearch 7.10.2 & Kibana 7.10.2.
Warning
OpensearchDashboards cannot be deployed along with Kibana. OpensearchDashboards requires Opensearch.
OpensearchDashboards deployment parameters are the same as Kibana. You only have to replace the following terms:
- elasticsearch by opensearch
- kibana by opensearch_dashboards
- opendistro_security by opensearch_security
You can make a diff between those two deployment settings to check the differences:
- Log Management Central with Elastic & Kibana
- Log Management Central with Opensearch & Opensearch Dashboards
Other components using Elasticsearch (such as Platform reporters, Gateway etc...) do not have to change their parameters when using Opensearch instead of Elasticsearch.
{
"opensearch_dashboards": {
"opensearch_dashboards_version": "1.2.0",
"domains": {
"admin-data": {
"opensearch_cluster_target": "es_data",
"opensearch_type_of_nodes_targeted": "data_node",
"opensearch_dashboards_port": 5601,
"type": "administration",
"index": ".opensearch_dashboards",
"servers": [
"centralback1",
"centralback2"
],
"server_ssl_enabled": true,
"server_ssl_key_name": "server.pem",
"server_ssl_certificate_name": "server.crt",
"opensearch_ssl_enabled": true,
"opensearch_ssl_verificationMode": "full",
"opensearch_ssl_certificateAuthorities_names": [
"fullchain.crt"
]
},
"admin-monitoring": {
"opensearch_cluster_target": "es_monitoring",
"opensearch_type_of_nodes_targeted": "data_node",
"opensearch_dashboards_port": 5602,
"type": "administration",
"index": ".opensearch_dashboards",
"servers": [
"centralback2"
],
"server_ssl_enabled": true,
"server_ssl_key_name": "server.pem",
"server_ssl_certificate_name": "server.crt",
"opensearch_ssl_enabled": true,
"opensearch_ssl_verificationMode": "full",
"opensearch_ssl_certificateAuthorities_names": [
"fullchain.crt"
]
}
},
"plugins": {
"opensearch_dashboards_security": {
"opensearch_dashboards_security_version": "1.2.0.0"
},
"data_extraction": {
"version": "1.2.4",
"enabled": true,
"use_legacy": false,
"tenant": "reftenant",
"rest_api": {
"hosts": [
"https://centralback1:4242",
"https://centralback2:4242"
],
"ssl_enabled": true
}
},
"punchplatform_feedback": {
"punchplatform_feedback_version": "2.1.3",
"tenant": "reftenant"
},
"punch_documentation": {
"version": "1.0.3",
"documentation_version": "6.4.5"
}
},
"servers": {
"centralback1": {
"address": "217.182.140.108"
},
"centralback2": {
"address": "149.202.189.157"
}
}
}
}
Storm¶
{
"storm": {
"storm_version": "apache-storm-2.3.0",
"storm_nimbus_nodes_production_interface": "eth0",
"clusters": {
"main": {
"master": {
"servers": [
"node01",
"node02",
"node03"
]
},
"ui": {
"servers": [
"node01",
"node02",
"node03"
]
},
"slaves": [
"node01",
"node02",
"node03"
],
"zk_cluster": "common",
"zk_root": "storm-2.3.0-main",
"storm_workers_by_punchplatform_supervisor": 15,
"supervisor_memory_mb": 8192,
"supervisor_cpu": 4,
"supervisor_localizer_cache_target_size_mb": 3072
}
}
}
}
storm_version
Mandatory: version of storm
storm_nimbus_nodes_production_interface
Mandatory: network interface bound by storm nimbus (master) for production usage
storm_nimbus_jvm_xmx
Optional
Set the Xmx of the nimbus jvm. Default value: 1024m
storm_ui_jvm_xmx
Optional
Set the Xmx of the ui jvm. Default value: 256m
storm_supervisor_jvm_xmx
Optional
Set the Xmx of the storm supervisor jvm. Default value: 256m
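These version and JVM settings can be sketched together as follows. The Xmx values are illustrative; tune them to the memory available on your servers:

```json
{
  "storm": {
    "storm_version": "apache-storm-2.3.0",
    "storm_nimbus_nodes_production_interface": "eth0",
    "storm_nimbus_jvm_xmx": "2048m",
    "storm_ui_jvm_xmx": "512m",
    "storm_supervisor_jvm_xmx": "512m"
  }
}
```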
clusters.<clusterId>
: String
The Storm cluster identifier. A string composed of letters and numbers. A single Punchplatform can contain several storm clusters.
clusters.<clusterId>.master.servers
: String[]
A comma-separated array of hostnames of the servers that will run Storm so-called
nimbus
processes in charge of scheduling the starting/stopping of topologies.
clusters.<clusterId>.master.thrift_port
: Integer 6627
The thrift TCP Port used for storm intercommunication. Default value: 6627
clusters.<clusterId>.ui.servers
: String[]
A comma-separated array of hostnames of the servers that will run the Storm so-called
ui
server, providing an inbuilt monitoring Web interface and an associated REST API.
clusters.<clusterId>.ui.ui_port
: Integer 8080
The listening TCP Port of the Storm
ui
servers. Default value: 8080
clusters.<clusterId>.slaves
: String[]
A comma-separated array of hostnames of the servers that will run Storm so-called
supervisor
processes in charge of starting/stopping topologies, as requested by thenimbus
.
clusters.<clusterId>.zk_cluster
: String
Identifier of the zookeeper cluster in which the Storm cluster will store its internal cluster management and synchronization data. This must be one of the keys in
zookeeper.clusters
dictionary documented previously.
clusters.<clusterId>.zk_root
: String
This string is a prefix (composed of letters, digits or '-') that is used as the root of all data paths in the zookeeper cluster for data associated to the Storm cluster. This allows sharing a same zookeeper cluster between multiple Storm clusters; therefore it should be unique within a zookeeper cluster (both unique within the PunchPlatform system, and unique as compared to other zookeeper roots configured in other PunchPlatforms for the same zookeeper cluster). We recommend adding the Storm version to avoid issues during migration.
clusters.<clusterId>.storm_workers_by_punchplatform_supervisor
: Integer
This number indicates the number of Storm slave slots
that are allowed on each storm slave node (i.e. running Storm supervisor component). This field, multiplied by the JVM memory options of each slave (see workers_childopts
field hereafter) should not exceed Storm slave server memory.
clusters.<clusterId>.workers_childopts
: String
This string provides the storm worker jvm options. It will be added to the default storm workers_childopts:
-Xmx%HEAP-MEM%m -Xms%HEAP-MEM%m -XX:+PrintGCDetails -Xloggc:artifacts/gc.log -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=artifacts/heapdump
. Storm workers are in charge of running topologies.
This field, multiplied by the number of Storm slots (see storm_workers_by_punchplatform_supervisor
field above)
should not exceed Storm slave server memory.
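As a worked sizing check (with illustrative numbers): if each worker JVM is given a 512 MB heap, then 15 slots × 512 MB = 7680 MB, which fits within an 8192 MB supervisor node; 15 slots × 1024 MB = 15360 MB would not. The corresponding settings would look like:

```json
{
  "storm_workers_by_punchplatform_supervisor": 15,
  "supervisor_memory_mb": 8192
}
```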
clusters.<clusterId>.storm_scheduler
: String
Default
org.apache.storm.scheduler.resource.ResourceAwareScheduler
Optional
This string defines the java class name of the scheduler used by storm nimbus. The scheduler rules the assignment of the punchlines.
clusters.<clusterId>.supervisor_memory_mb
: Integer
This number provides the size of RAM of the virtual or physical node. It is used to configure storm.yaml for storm supervisor.
clusters.<clusterId>.supervisor_cpu
: Integer
This number provides the number of CPU of the virtual or physical node. It is used to configure storm.yaml for storm supervisor.
clusters.<clusterId>.supervisor_localizer_cache_target_size_mb
: Integer
Optional. Default: 10240. This configures the cache size for topology resources on Storm slaves. Reduce it if your stormdist/ is growing too large. References:
- https://github.com/apache/storm/blob/master/conf/defaults.yaml#L140
- https://storm.apache.org/releases/current/distcache-blobstore.html
clusters.<clusterId>.servers.<serverId>.local_credentials_dir
: String
Optional
If provided, will supplement
platform.platform_local_credentials_dir
andclusters.<clusterId>.local_credentials_dir
These are local directories on the deployer's system containing all the SSL keys, certificates and stores that will be used by Storm punchlines on the targeted host. They will be placed inside the remote home directory of the daemon Storm user. These SSL resources will always be overwritten by new ones if different. They are never removed by the deployer. The supported SSL resource extensions are .pem, .jks and .p12.
clusters.<clusterId>.servers.<serverId>.custom_additional_credentials_files
: String
Optional
Default :
None
Files in local_credentials_dir
which must be deployed on the target.
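A per-server credentials fragment might look like the following sketch (the directory and file names are illustrative, not real defaults):

```json
{
  "clusters": {
    "main": {
      "servers": {
        "node01": {
          "local_credentials_dir": "certs/storm-main",
          "custom_additional_credentials_files": ["topology-keystore.jks"]
        }
      }
    }
  }
}
```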
clusters.<clusterId>.master.monitoring_interval
: Integer 60
The period (in seconds) of cyclical acquisition of metrics/supervisor status by Nimbus. Legacy deployments use 10, but increasing this value reduces the load of the nimbus service and improve availability of the Storm UI/API.
clusters.<clusterId>.master.supervisor_timeout
: Integer 90
The timeout (in seconds) for declaring a supervisor non-nominal when nimbus monitors it. The legacy setting is 10s. Increasing this value in relation with 'monitoring_interval' helps avoiding false positives of failed supervisors in loaded situations. As a tradeoff, if a supervisor node holding topologies is lost, reassigning these topologies to a surviving supervisor node will take longer.
clusters.<clusterId>.temporary_directory
: String
Default: "/tmp". Allows relocation of the Storm temporary folder to a custom folder.
clusters.<clusterId>.supervisor
: Undefined
Storm components are supervised by supervisor. Its logrotate parameters can be configured in this section.
clusters.<clusterId>.published_storm_hostname_source
: String
This setting determines the name that Storm will publish in zookeeper, so that other nodes can contact this one. It MUST therefore be a name resolved to the production interface of this node, when resolved on other cluster nodes. It can take different values:
- "inventory": Storm will publish the hostname set in the configuration files as the production interface.
- "server_local_fqdn": Storm will publish the local server FQDN as the production interface.
- "server_local_hostname": Storm will publish the local server hostname as the production interface.
- "auto" (the default): Storm will choose which hostname to publish in zookeeper.
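For instance, to force Storm to publish the FQDN of each node in zookeeper, the cluster section could look like the following sketch (it assumes the FQDNs resolve to the production interface on all cluster nodes; cluster and zookeeper names are illustrative):

```json
{
  "storm": {
    "clusters": {
      "main": {
        "zk_cluster": "common",
        "zk_root": "storm-2.3.0-main",
        "published_storm_hostname_source": "server_local_fqdn"
      }
    }
  }
}
```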
SSL/TLS and secrets
The security configurations inside clusters.<clusterId>.servers.<server_id>
are dedicated to one server :
local_credentials_dir
: Optional, String
Default
clusters.<clusterId>.local_credentials_dir
.
If provided, will supplement platform.platform_local_credentials_dir
and clusters.<clusterId>.local_credentials_dir
.
The local path of a directory located on the deployer's machine and containing some specific credentials for the hosts (i.e. certs, keys, CA, secrets files...).
Every key or keystore name configured for this component will be searched for:
1. A matching name inside the current folder, or inside the default one.
2. If not found, a matching name inside a subfolder named after each configured cluster's hosts.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched inside <local_credentials_dir>
then inside <local_credentials_dir>/host1/
. Same behavior for each configured host.
custom_additional_credentials_files
: String Array
Optional
If provided, will supplement
clusters.<clusterId>.custom_additional_credentials_files
.
Optional credentials files located inside local_credentials_dir
.
These files may be private keys, certificates, keystores or any file used by the daemons' user during runtime to run punchlines and applications.
The filenames cannot contain '/' chars.
They will be deployed on the targeted server inside the /home/{punchplatform_daemons_user}/.secrets
directory.
The security configurations inside clusters.<clusterId>
are common to every server inside the cluster:
local_credentials_dir
: String
Optional
Default: platform.platform_local_credentials_dir.
If provided, supplements platform.platform_local_credentials_dir and clusters.<clusterId>.servers.<serverId>.local_credentials_dir.
Local path of a directory on the deployer's machine containing host-specific credentials (certs, keys, CA, secrets files, ...).
Every key or keystore name configured for this component is searched for:
1. as a matching name inside this folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>, then inside <local_credentials_dir>/host1/. The same behavior applies to each configured host.
custom_additional_credentials_files
: String array
Optional
If provided, supplements clusters.<clusterId>.servers.<serverId>.custom_additional_credentials_files for every server inside the current cluster.
Optional credentials files located locally inside local_credentials_dir.
These files may be private keys, certificates, keystores or any file used by the daemons' user at runtime to run punchlines and applications.
The filenames cannot contain '/' chars.
They will be deployed on the targeted servers inside the /home/{punchplatform_daemons_user}/.secrets directory.
Kafka¶
Kafka is a resilient and scalable queuing application. It stores documents for several days. It is usually placed in front of Storm to keep data safe.
{
"kafka": {
"kafka_version": "kafka_2.12-2.8.1",
"kafka_brokers_production_interface": "eth0",
"clusters": {
"local": {
"brokers_with_ids": [
{
"id": 1,
"broker": "node01:9092"
},
{
"id": 2,
"broker": "node02:9092"
},
{
"id": 3,
"broker": "node03:9092"
}
],
"zk_cluster": "common",
"zk_root": "kafka-local",
"brokers_config": "punchplatform-local-server.properties",
"default_replication_factor": 1,
"default_partitions": 2,
"partition_retention_bytes": 1073741824,
"partition_retention_hours": 24,
"kafka_brokers_jvm_xmx": "512M"
}
}
}
}
kafka_version
Mandatory
Version of Kafka to deploy.
kafka_brokers_production_interface
Mandatory
Network interface bound by the Kafka brokers for production traffic.
clusters.<clusterId>
: String
clusterId is a string composed of alphanumeric characters and '_'. It is used each time this particular Kafka cluster must be identified in a PunchPlatform command line or configuration file, and for metric name generation when Elasticsearch reporting is activated in the PunchPlatform configuration.
There can be one or several kafka.clusters.<clusterId> sections, depending on the overall deployment configuration (for example, to use different storage configurations for brokers that manage different kinds of logs, or to ensure performance isolation between different log channels). Kafka clusterIds must be unique in a PunchPlatform cluster. Note that if only one Kafka cluster is declared in the punchplatform properties file, most PunchPlatform commands will automatically use it without requiring a clusterId on the command line.
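For instance, two isolated Kafka clusters can be declared side by side. The cluster names below are illustrative, and each section carries the per-cluster settings described in the rest of this section (note the distinct zk_root values):

```json
{
  "kafka": {
    "clusters": {
      "front": {
        "brokers_with_ids": [ { "id": 1, "broker": "node01:9092" } ],
        "zk_cluster": "common",
        "zk_root": "kafka-front"
      },
      "back": {
        "brokers_with_ids": [ { "id": 1, "broker": "node02:9092" } ],
        "zk_cluster": "common",
        "zk_root": "kafka-back"
      }
    }
  }
}
```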
clusters.<clusterId>.brokers_with_ids
: map[]
Pairs of id and broker providing all kafka brokers in this cluster and their unique id.
[ {"id" : 1, "broker" : "node01:9092" }, {"id" : 2, "broker" : "node02:9092" }, {"id" : 3, "broker" : "node03:9092" } ],
When redeploying on existing nodes, the ids should be preserved to avoid data loss. Therefore, if migrating from the deprecated 'brokers' setting (with autogenerated ids), please fetch the previously deployed ids from your Kafka nodes (broker.id setting in your cluster configuration, usually in /data/opt/kafka*/conf/punchplatform-<kafkaClusterId>-server.properties).
clusters.<clusterId>.zk_cluster
: String
String identifying the PunchPlatform Zookeeper cluster that this Kafka cluster uses to persist and exchange its internal configuration, topics, partitions and offsets. This must be one of the keys in the zookeeper.clusters dictionary documented previously. This parameter is used by all PunchPlatform Kafka clients (producers and consumers) that need to locate the available Kafka brokers of this cluster, because brokers register themselves in Zookeeper.
clusters.<clusterId>.zk_root
: String
This string is a prefix (composed of letters, digits or '-') used as the root of all data paths in the Zookeeper cluster for data associated with this Kafka brokers cluster. This allows sharing the same Zookeeper cluster between multiple Kafka clusters; the prefix must therefore be unique within a Zookeeper cluster (unique within the PunchPlatform system, but also compared to other Zookeeper roots configured in other PunchPlatforms using the same Zookeeper cluster).
clusters.<clusterId>.brokers_config
: String
Path to the local Kafka broker server configuration. This parameter is used by punchplatform-standalone.sh and punchplatform-kafka.sh when running a local Kafka broker server in a PunchPlatform sample configuration. When using the PunchPlatform cluster deployment tool, this field is used to generate the Kafka brokers cluster configuration on the Kafka servers.
clusters.<clusterId>.default_replication_factor
: Integer
Default replication level for Kafka topic partitions. This is used whenever no replication factor is defined in
the channel structure configuration (cf. Channels).
A value of 1 means no replication, and therefore no resilience in case of failure of a cluster broker.
clusters.<clusterId>.default_partitions
: Integer
Default number of partitions for each Kafka topic, whenever no partitions number is defined in the channel structure configuration (cf. Channels). Partitions allow scaling the processing by sharding the responsibility of consuming Kafka messages between multiple consumer instances (if configured in the Storm topology).
clusters.<clusterId>.partition_retention_bytes
: Long
Maximum size-based retention policy for logs. Kafka applies whichever deletion condition (time or size) is met first, so this parameter is a failsafe to avoid a single channel filling up the platform storage in case of flooding of a topic.
In a typical cluster setup, each channel PARTITION is limited, for example to:
- 1000 events per second x 1000 bytes x 2 days, i.e. a typical value of 172800000000 (bytes);
- 4000 logs per second for 2 days of flooding, plus 1 day of additional nominal storage (2500 lps), with 3000 bytes per enriched log, i.e. 1099511627776 for a tenant topology.
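The size-based retention values above come from a simple throughput computation. Here is a minimal sketch, using the illustrative figures of the first example (1000 events/s, 1000 bytes per event, 2 days):

```python
def partition_retention_bytes(events_per_second, bytes_per_event, days):
    """Size-based retention needed to hold `days` of traffic in one partition."""
    seconds = days * 24 * 3600
    return events_per_second * bytes_per_event * seconds

# 1000 eps x 1000 bytes x 2 days -> 172800000000 bytes
print(partition_retention_bytes(1000, 1000, 2))
```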
clusters.<clusterId>.partition_retention_hours
: Integer
Maximum time-based retention policy (applies if the size-based retention policy is not triggered by the amount of data received).
clusters.<clusterId>.offsets_retention_minutes
: Integer
Default is 20160.
Offsets older than this retention period will be discarded. This duration should be greater than partition_retention_hours.
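As a quick sanity check on these defaults (using the 24-hour partition retention from the example configuration above, an illustrative value), the default offset retention of 20160 minutes amounts to 14 days, which is indeed greater:

```python
offsets_retention_minutes = 20160   # default documented above
partition_retention_hours = 24      # value used in the example cluster

# Convert to hours and verify the recommended relationship
offsets_retention_hours = offsets_retention_minutes / 60
print(offsets_retention_hours)      # 336.0 hours, i.e. 14 days
assert offsets_retention_hours > partition_retention_hours
```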
clusters.<clusterId>.kafka_brokers_jvm_xmx
: String
The maximum heap size allowed for each Kafka broker JVM (used by the Kafka startup script).
clusters.<clusterId>.supervisor
: Undefined
Kafka nodes are supervised by supervisord. Its logrotate parameters can be configured in this section.
SSL/TLS and secrets
If you set the property platform.platform_local_credentials_dir, and if the secrets/credentials files of a configured host are located inside a directory <platform.platform_local_credentials_dir>/<configured_host>, you may configure the security like this:
{
"kafka": {
"kafka_version": "kafka_2.12-2.8.1",
"kafka_brokers_production_interface": "eth0",
"clusters": {
"local": {
"brokers_with_ids": [
{
"id": 1,
"broker": "node01:9092"
},
{
"id": 2,
"broker": "node02:9092"
}
],
"ssl_enabled": true,
"keystore_name": "server-keystore.jks",
"keystore_password": "@{DEPLOYMENT_SECRETS.kafka.keystore_pass}",
"zk_cluster": "common",
"zk_root": "kafka-local",
"brokers_config": "punchplatform-local-server.properties",
"default_replication_factor": 1,
"default_partitions": 2,
"partition_retention_bytes": 1073741824,
"partition_retention_hours": 24,
"kafka_brokers_jvm_xmx": "512M"
}
}
}
}
Here is what the security file structure may look like in this case:
<platform.platform_local_credentials_dir>
├── truststore.jks
├── node01
│ └── server-cert.pem
└── node02
└── server-keystore.jks
The filenames may be the same in all directories, but their content may obviously differ.
The security configurations inside clusters.<clusterId>.brokers_with_ids[brokerId]
are dedicated to a single broker:
local_credentials_dir
: String
Optional
Default: clusters.<clusterId>.local_credentials_dir.
If provided, supplements platform.platform_local_credentials_dir and clusters.<clusterId>.local_credentials_dir.
Local path of a directory on the deployer's machine containing host-specific credentials (certs, keys, CA, secrets files, ...).
Every key or keystore name configured for this component is searched for:
1. as a matching name inside this folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>, then inside <local_credentials_dir>/host1/. The same behavior applies to each configured host.
keystore_name
: String
Optional
Default is clusters.<clusterId>.keystore_name.
Name of the Java KeyStore on the deployer host inside local_credentials_dir. Used to encrypt and authenticate the broker to endpoints with TLS.
MUST be in jks format.
The name cannot contain '/' chars. It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
keystore_password
: String
Optional
Default: clusters.<clusterId>.keystore_password.
Password of the Java KeyStore on the deployer host inside local_credentials_dir.
The security configurations inside clusters.<clusterId>
are common to every broker inside the cluster:
ssl_enabled
: boolean
Optional
Default
False
.
Enable SSL for the Kafka broker.
local_credentials_dir
: String
Optional
Default: platform.platform_local_credentials_dir.
If provided, supplements platform.platform_local_credentials_dir and clusters.<clusterId>.brokers_with_ids[brokerId].local_credentials_dir.
Local path of a directory on the deployer's machine containing host-specific credentials (certs, keys, CA, secrets files, ...).
Every key or keystore name configured for this component is searched for:
1. as a matching name inside this folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>, then inside <local_credentials_dir>/host1/. The same behavior applies to each configured host.
keystore_name
: String
Mandatory if SSL is enabled.
Overridden by clusters.<clusterId>.brokers_with_ids[brokerId].keystore_name.
Name of the Java KeyStore on the deployer host inside local_credentials_dir.
Required to authenticate the Kafka broker to its SSL clients.
MUST be in jks format.
The name cannot contain '/' chars. It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
keystore_password
: String
Mandatory if SSL is enabled.
Overridden by clusters.<clusterId>.brokers_with_ids[brokerId].keystore_password.
Password of the Java KeyStore on the deployer host inside local_credentials_dir.
truststore_name
: String
Optional
Default: platform.platform_truststore_name.
Name of the Java TrustStore on the deployer host inside local_credentials_dir.
Contains the certificates of the endpoints to trust with TLS.
MUST be in jks format.
The name cannot contain '/' chars. It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
truststore_password
: String
Mandatory if SSL is enabled.
Default
platform.platform_truststore_password
.
Password of the Java Truststore on the deployer host.
ssl_client_auth
: String
Optional
Default: required.
The Kafka broker SSL client certificate verification mode.
Values may be required, requested or none.
If required, client certificate authentication is mandatory.
If requested, client certificate authentication is optional; if a client certificate is provided, the Kafka broker will still verify it.
If none, client certificate authentication is ignored.
Shiva¶
Shiva is the distributed, resilient jobs/services manager used for tasks both at PunchPlatform system level (monitoring, housekeeping...) and at user processing level (channels).
Shiva is made of nodes communicating through a Kafka cluster. Nodes can be leaders (masters of the cluster), runners (task executors), or both. The operator commands are available on the PunchPlatform operators' Linux accounts.
{
"shiva": {
"shiva_version": "punchplatform-shiva-6.1.0",
"clusters": {
"common": {
"reporters": [
"myreporter"
],
"storage": {
"type": "kafka",
"kafka_cluster": "common"
},
"servers": {
"localhost": {
"runner": true,
"can_be_master": true,
"tags": []
}
}
}
}
}
}
shiva_version
Mandatory: Version of shiva app to deploy. File located in archives.
clusters.<clusterId>.reporters
String[]
MANDATORY
A list of reporters used by shiva referenced by id. Ids must be declared in the dedicated 'reporters' section
clusters.<clusterId>.storage.type
Mandatory
Describes in which type of storage the operator information will be stored: file (data stored on the filesystem) or kafka.
clusters.<clusterId>.storage.kafka_cluster
Mandatory (but only present when type is 'kafka'). Identifier of the kafka cluster in which the operator will store its internal management and synchronization data. This must be one of the keys in
kafka.clusters
dictionary documented previously.
clusters.<clusterId>.servers.<serverName>
Mandatory
For each shiva node to be deployed, section containing the configuration of the node. The server name is used for resolving the administration interface of the shiva node from the deployment machine.
clusters.<clusterId>.servers.<serverName>.runner
Mandatory
Boolean indicating if this shiva node will have the 'runner' role. Runners are in charge of executing locally tasks assigned to them by the leader (active master).
clusters.<clusterId>.servers.<serverName>.can_be_master
Mandatory
Boolean indicating if this shiva node can become the leader of the cluster. The leader is in charge of assigning tasks to an appropriate node, given current balancing of tasks among available runners that match the task tags requirements.
If no leader is available, runners will keep executing their assigned services, but no resilience is possible in case of a runner shutdown, and no new task or periodic job execution will occur.
clusters.<clusterId>.servers.<serverName>.tags
OPTIONAL
List of comma-separated tag strings. This is useful only for worker nodes. Tags are user-defined information strings associated with each node.
When submitting a task to the Shiva cluster, the user can specify tags. This allows task placement depending on user needs such as network areas, pre-installed modules required to run the task, etc.
Default value: ansible_hostname
clusters.<clusterId>.shiva_cluster_jvm_xmx
OPTIONAL
The max size allowed to each Shiva node JVM (used by the Shiva startup script).
SSL/TLS and secrets
If you set the property platform.platform_local_credentials_dir, and if the secrets/credentials files of a configured host are located inside a directory <platform.platform_local_credentials_dir>/<configured_host>, you may configure the security like this:
{
"shiva": {
"shiva_version": "punch-shiva-6.4.5",
"clusters": {
"local": {
"reporters": [
"myreporter"
],
"storage": {
"type": "kafka",
"kafka_cluster": "local"
},
"ssl_enabled": true,
"keystore_name": "server-keystore.jks",
"keystore_password": "@{DEPLOYMENT_SECRETS.kafka.keystore_pass}",
"custom_additional_credentials_files": [
"server-keystore.jks",
"truststore.jks",
"server-key.pem",
"server-cert.pem",
"ca.pem"
],
"custom_secrets_file": "shiva_secrets.json",
"servers": {
"server4": {
"runner": true,
"can_be_master": true,
"tags": [
"local"
]
},
"server5": {
"runner": true,
"can_be_master": true,
"tags": [
"local"
]
}
}
}
}
}
}
Here is what the security file structure may look like in this case:
<platform.platform_local_credentials_dir>
├── truststore.jks
├── ca.pem
├── server4
│ ├── server-key.pem
│ ├── server-keystore.jks
│ └── server-cert.pem
└── server5
├── server-key.pem
├── server-keystore.jks
└── server-cert.pem
The filenames may be the same in all directories, but their content may obviously differ.
Some SSL configurations are dedicated to a single shiva server or override the cluster's configurations, inside
shiva.cluster.[clusterID].servers.[serverID]
:
local_credentials_dir
: String
Optional
Default: shiva.cluster.[clusterID].local_credentials_dir.
If provided, supplements platform.platform_local_credentials_dir and cluster.[clusterID].servers.[serverID].local_credentials_dir.
Local path of a directory on the deployer's machine containing host-specific credentials (certs, keys, CA, secrets files, ...).
Every key or keystore name configured for this component is searched for:
1. as a matching name inside this folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>, then inside <local_credentials_dir>/host1/. The same behavior applies to each configured host.
keystore_name
: String
Optional
Default: shiva.cluster.[clusterID].keystore_name.
Name of the Java KeyStore on the deployer host inside local_credentials_dir. Used to encrypt the Shiva client connections to endpoints with TLS.
Used by Shiva to consume or produce metadata about the Shiva applications inside Kafka, to report metrics to Kafka if the reporters type is kafka, and to report metrics to Elasticsearch if the reporters type is elasticsearch.
Required to authenticate the Shiva client to these endpoints.
MUST be in jks format.
The name cannot contain '/' chars. It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
keystore_password
: String
Optional
Default: shiva.cluster.[clusterID].keystore_password.
Password of the Java KeyStore on the deployer host inside local_credentials_dir.
custom_additional_credentials_files
: String Array
Optional
Default: shiva.cluster.[clusterID].custom_additional_credentials_files.
Optional credentials files located locally inside local_credentials_dir.
These files may be private keys, certificates, keystores or any file used by the daemons' user at runtime to run punchlines and applications.
The filenames cannot contain '/' chars. They will be deployed on the targeted server inside the /home/{punchplatform_daemons_user}/.secrets directory.
custom_secrets_file
: String
Optional
Default: shiva.cluster.[clusterID].custom_secrets_file.
Optional secrets file located locally inside local_credentials_dir.
JSON file containing secrets used by the daemons' user at runtime to run punchlines and applications.
It will be deployed on the targeted server inside the /home/{operator_username}/.secrets directory and named user_secrets.json (forced).
credentials.user
: String
Optional
Default: None.
Shiva client's username to authenticate to Elasticsearch.
credentials.password
: String
Optional
Default: None.
Shiva client's password to authenticate to Elasticsearch.
Some SSL configurations are common to every server of a cluster inside shiva.cluster.[clusterID]
:
ssl_enabled
: Boolean
Optional
Default
False
.
Enable SSL for Shiva clients and reporters.
local_credentials_dir
: String
Optional
Default: platform.platform_local_credentials_dir.
If provided, supplements platform.platform_local_credentials_dir and shiva.cluster.[clusterID].local_credentials_dir.
Local path of a directory on the deployer's machine containing host-specific credentials (certs, keys, CA, secrets files, ...).
Every key or keystore name configured for this component is searched for:
1. as a matching name inside this folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>, then inside <local_credentials_dir>/host1/. The same behavior applies to each configured host.
keystore_name
: String
Mandatory if SSL is enabled.
Overridden by cluster.[clusterID].servers.[serverID].keystore_name.
Name of the Java KeyStore on the deployer host inside local_credentials_dir. Used to encrypt the Shiva client connections to endpoints with TLS.
Used by Shiva to consume or produce metadata about the Shiva applications inside Kafka, to report metrics to Kafka if the reporters type is kafka, and to report metrics to Elasticsearch if the reporters type is elasticsearch.
Required to authenticate the Shiva client to these endpoints.
MUST be in jks format.
The name cannot contain '/' chars. It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
keystore_password
: String
Mandatory if SSL is enabled.
Overridden by cluster.[clusterID].servers.[serverID].keystore_password.
Password of the Java KeyStore on the deployer host inside local_credentials_dir.
truststore_name
: String
Optional
Default: platform.platform_truststore_name.
Name of the Java TrustStore on the deployer host inside local_credentials_dir.
Contains the certificates of the endpoints to trust with TLS.
MUST be in jks format.
The name cannot contain '/' chars. It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
truststore_password
: String
Optional
Default
platform.platform_truststore_password
.
Password of the Java Truststore on the deployer host.
custom_additional_credentials_files
: String Array
Optional
Default: None.
Optional credentials files located locally inside local_credentials_dir.
These files may be private keys, certificates, keystores or any file used by the daemons' user at runtime to run punchlines and applications.
The filenames cannot contain '/' chars. They will be deployed on every cluster server inside the /home/{punchplatform_daemons_user}/.secrets directory.
custom_secrets_file
: String
Optional
Default: platform.platform_local_common_secrets_filename.
Optional secrets file located locally inside local_credentials_dir.
JSON file containing secrets used by the daemons' user at runtime to run punchlines and applications.
It will be deployed on every cluster server inside the /home/{operator_username}/.secrets directory and named user_secrets.json (forced).
Gateway¶
The Punchplatform Gateway is a Rest service used to redirect requests to other Rest services or to backend services, such as Punchlines. It can therefore act as a "proxy" operator, for example acting on behalf of a user connected to a Kibana Punchplatform Plugin.
It can also be used to apply a security layer over these services, providing safe authentication to standard backends (OpenLDAP, AD, ...), a tenant based access control and SSL connections.
Each Gateway cluster is deployed to serve requests on behalf of a single tenant. This makes it possible to allow different levels of available actions for different tenants of the platform, and to ensure inter-tenant isolation by limiting access to this tenant's configuration only.
{
"gateway": {
"gateway_version": "6.1.0",
"clusters": {
"mycluster": {
"tenant": "mytenant",
"gateway_memory_mb": 512,
"servers": {
"server1": {
"inet_address": "server1",
"port": 4242
}
},
"ui": {
"enabled": true
},
"elasticsearch": {
"data_cluster": {
"cluster_id": "es_search",
"hosts": [
"server1:9200"
]
},
"metric_cluster": {
"cluster_id": "es_metrics",
"hosts": [
"server2:9200"
],
"index_name": "mytenant-metrics"
}
},
"extraction_enabled": true,
"extraction_memory": "512m",
"extraction_formats": [
"csv",
"json"
],
"resources": {
"doc_dir": "/data/doc",
"archives_dir": "/data/archives",
"manager": {
"metadata": [
{
"type": "elasticsearch",
"hosts": [
"server2:9200"
],
"index": "resources-metadata"
}
],
"data": [
{
"type": "file",
"root_path": "/tmp/punchplatform/resources"
}
]
}
},
"reporters": [
"myreporter"
]
}
}
}
}
Concerning the REST API server and the Punch features :
gateway_version
: String
Mandatory.
Version of gateway app to deploy. File located in archives.
gateway.clusters.[clusterId].tenant
: String
Mandatory.
Tenant name affected to the cluster. The cluster will internally use the Elasticsearch cluster services, the Spark cluster services and the Zookeeper cluster services provided by the tenant configuration.
gateway.clusters.[clusterId].gateway_memory_mb
: int
Defaults to 512.
Gateway memory heap size (Xms and Xmx) that will be configured in the service.
gateway.clusters.[clusterId].servers.[serverId].inet_address
: String
Mandatory.
IP address of the interface on which the gateway server will be deployed.
gateway.clusters.[clusterId].servers.[serverId].port
: Integer
Mandatory.
Port number the gateway server will use to listen on the interface.
gateway.clusters.[clusterId].channel_management_enabled
: Boolean
Optional.
Default true.
If true, enables the /channels endpoint.
Requires the PUNCHPLATFORM_CONF_DIR and PUNCHPLATFORM_INSTALL_DIR environment configurations.
gateway.clusters.[clusterId].puncher_enabled
: Boolean
Optional.
Default true.
If true, enables the /puncher endpoint.
Requires the PUNCHPLATFORM_CONF_DIR and PUNCHPLATFORM_INSTALL_DIR environment configurations.
gateway.clusters.[clusterId].punchline_enabled
: Boolean
Optional.
Default true.
If true, enables the /punchline endpoint.
Requires the PUNCHPLATFORM_CONF_DIR and PUNCHPLATFORM_INSTALL_DIR environment configurations.
Concerning the Elasticsearch forwarding :
Info
The gateway.clusters.[clusterId].elasticsearch
section is optional.
This section may contain two subsections, depending on the type of the Elasticsearch cluster:
- data_cluster (optional)
- metric_cluster (optional)
gateway.clusters.[clusterId].elasticsearch.data_cluster.hosts
: String array
Mandatory in this section.
List of the data cluster's hosts. Pattern is["host:port"]
.
gateway.clusters.[clusterId].elasticsearch.data_cluster.cluster_id
: String
Optional.
Reference to a cluster defined before.
Info
Good practice: choose between a single cluster_id and the hosts list; do not set both.
gateway.clusters.[clusterId].elasticsearch.data_cluster.prefix
: String
Optional.
Default None.
If set, the targeted Elasticsearch path will be prefixed for every sent request.
Example: setting prefix: "my/path" will send the ES requests to host:9200/my/path/{client_path}.
gateway.clusters.[clusterId].elasticsearch.data_cluster.settings
: String array
Optional.
Default [].
List of additional Elasticsearch settings to configure for the data cluster.
Pattern is ["key:value"].
gateway.clusters.[clusterId].elasticsearch.data_cluster.metrics_credentials.user
: String
Optional.
Default None.
Username used by the gateway's REST client to connect to the ES data cluster for shards and nodes metrics.
These credentials are different from the user's credentials used to request the ES data cluster. These metrics are then used by the request filtering feature.
Read the Request filtering documentation.
gateway.clusters.[clusterId].elasticsearch.data_cluster.metrics_credentials.password
: String
Optional.
Default None.
Password used by the gateway's REST client to connect to the ES data cluster for shards and nodes metrics.
These credentials are different from the user's credentials used to request the ES data cluster. These metrics are then used by the request filtering feature.
Read the Request filtering documentation.
gateway.clusters.[clusterId].elasticsearch.metric_cluster.index_name
: String
Mandatory in this section.
Name of the index where the metrics are sent.
gateway.clusters.[clusterId].elasticsearch.metric_cluster.hosts
: String array
Mandatory in this section.
List of the metric cluster's hosts.
Pattern is["host:port"]
.
gateway.clusters.[clusterId].elasticsearch.metric_cluster.prefix
: String
Optional.
Default None.
If set, the targeted Elasticsearch REST API address will be prefixed for every sent request.
Example: setting prefix: "my/path" will send the ES requests to host:9200/my/path/{client_path}.
gateway.clusters.[clusterId].elasticsearch.metric_cluster.settings
: String array
Optional.
Default None.
List of additional Elasticsearch settings to configure for the metric cluster.
Pattern is ["key:value"].
gateway.clusters.[clusterId].elasticsearch.metric_cluster.credentials.user
: String
Optional. Default
None
.
Username used by the gateway's REST client to connect to the ES cluster.
gateway.clusters.[clusterId].elasticsearch.metric_cluster.credentials.password
: String
Optional.
Default None.
Password used by the gateway's REST client to connect to the ES cluster.
gateway.clusters.[clusterId].elasticsearch.metric_cluster.cluster_id
: String
Optional.
Reference to a cluster defined before
Concerning the Punch UI plugin extraction :
Info
To enable the Punch UI plugin extraction, you have to set "extraction_enabled": true
gateway.clusters.[clusterId].extraction_enabled
: Boolean
Optional.
Default false.
If true, enables the extraction service. Necessary for the extraction page of the Punch Kibana plugin.
gateway.clusters.[clusterId].extraction_formats
: String array
Optional.
Default [].
Supported output formats for extraction. Possible values are csv and json.
gateway.clusters.[clusterId].extraction_memory
: String
Optional.
Default 512m.
Java memory assigned to the extraction service.
Warning
If you configured the extraction service, you MUST enable the resource manager with Elasticsearch as the metadata backend. The hosts of the metadata backend MUST match the host of the Elasticsearch metric cluster. Check the Resource Manager Settings.
Concerning the Punch resources :
gateway.clusters.[clusterId].resources.resources_dir
: String
Optional.
Resources directory path; default is deployment_settings.platform.setups_root.
gateway.clusters.[clusterId].resources.archives_dir
: String
Mandatory in this section.
Archives storage location on the Gateway's host for the archiving and extraction services.
gateway.clusters.[clusterId].resources.doc_dir
: String
Mandatory in this section.
Documentation location on the Gateway's host.
gateway.clusters.[clusterId].resources.punchlines_dir
: String
Mandatory in this section.
Uploaded punchline location name.
This name cannot be an absolute path (starting with a '/').
Concerning the Gateway reporting and metrics :
gateway.clusters.[clusterId].reporters
String array
Mandatory.
A list of the Gateway reporters, referenced by id.
Ids must be declared in the dedicated reporters section.
SSL/TLS and secrets
If you set the property platform.platform_local_credentials_dir, and if the secrets/credentials files of a configured host are located inside a directory <platform.platform_local_credentials_dir>/<configured_host>, you may configure the security like this:
{
"gateway": {
"gateway_version": "6.4.5",
"clusters": {
"cluster1": {
"tenant": "mytenant",
"custom_secrets_file": "gateway_user_secrets.json",
"ssl_enabled": true,
"key_store_name": "server-keystore.jks",
"server_key_store_password": "@{DEPLOYMENT_SECRETS.gateway.keystore_pass}",
"client_private_key_name": "server-key.pem",
"client_certificate_name": "server-cert.pem",
"servers": {
"server1": {
"inet_address": "server1",
"port": 4242
}
}
}
}
}
}
In this case, the credentials file structure may look like this:
<platform.platform_local_credentials_dir>
├── truststore.jks
├── ca.pem
└── server1
├── server-key.pem
├── server-keystore.jks
└── server-cert.pem
The filenames may be the same in all directories, but their contents may differ.
The Gateway server can be configured with TLS. Every client connection to inet_address:port
will then be encrypted, using
the following settings in gateway.clusters.[clusterId]
:
local_credentials_dir
: String
Optional
Default
platform.platform_local_credentials_dir
.
If provided, will supplement platform.platform_local_credentials_dir
and gateway.clusters.[clusterId].servers.[serverId].ssl.local_credentials_dir.
The local path of a directory located on the deployer's machine and containing host-specific credentials (i.e. certs, keys, CA, secrets files...).
Every key or keystore name configured for this component is searched for:
1. As a matching name inside this folder, or inside the default one.
2. If not found, as a matching name inside a subfolder named after each of the cluster's configured hosts.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>,
then inside <local_credentials_dir>/host1/.
The same behavior applies to each configured host.
key_store_name
: String
Mandatory in this section
Name of the Java KeyStore on the deployer host inside
local_credentials_dir
.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST be in jks
format.
The name cannot contain '/' chars. It will be placed inside the /home/{punchplatform_daemons_user}/.secrets
directory.
key_store_type
: String
Optional
Default
JKS
.
Type of the keystore.
Either JKS
(Java provider) or PKCS12
(OpenSSL provider).
server_key_store_password
: String
Optional
Default
None
.
Password protecting the keystore.
Provide no password if the keystore is not protected.
key_alias
: String
Optional
Default
None
.
Alias of the server's key to use inside the keystore.
key_password
: String
Optional
Default
None
.
Password protecting the server's key inside the keystore.
Provide no password if the key is not protected.
trust_store_name
: String
Optional
Default
platform.platform_truststore_name
.
Name of the Java Truststore on the deployer host inside local_credentials_dir.
Contains the certificates of endpoints to trust with TLS.
Used by the Gateway server, the Gateway clients and the reporters clients.
The name cannot contain '/' chars. It will be placed inside the /home/{punchplatform_daemons_user}/.secrets
directory.
trust_store_password
: String
Optional
Default
platform.platform_truststore_password
.
Password of the Java Truststore on the deployer host.
ssl_client_auth
: String
Optional
Default
need
.
The client's SSL certificate authentication mode. Values may be need, want or none.
If need, client certificate authentication is mandatory.
If want, client certificate authentication is optional.
If none, client certificate authentication is ignored.
custom_secrets_file
: String
Optional
Default
None
.
Optional secrets file located locally inside local_credentials_dir.
JSON file containing secrets used by the gateway daemons user during runtime.
Will be deployed on the targeted server inside the /home/{operator_username}/.secrets
directory and called user_secrets.json
(forced).
The Gateway uses an Elasticsearch client for each service addressing an ES cluster. This client can be configured with TLS for the following clients:
- the data_cluster client
- the metric_cluster client
- the punchlines feature, if punchline_enabled is true
- the resource manager, if the metadata backend type is elasticsearch
- the reporters, if the configured type is elasticsearch
To configure such an Elasticsearch client with TLS, provide the following settings in
gateway.clusters.[clusterId].servers.[serverId].ssl
:
local_credentials_dir
: String
Optional
Default
clusters.<clusterId>.local_credentials_dir
.
If provided, will supplement platform.platform_local_credentials_dir
and clusters.<clusterId>.local_credentials_dir.
The local path of a directory located on the deployer's machine and containing host-specific credentials (i.e. certs, keys, CA, secrets files...).
Every key or keystore name configured for this component is searched for:
1. As a matching name inside this folder, or inside the default one.
2. If not found, as a matching name inside a subfolder named after each of the cluster's configured hosts.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>,
then inside <local_credentials_dir>/host1/.
The same behavior applies to each configured host.
elasticsearch_ssl_enabled
: boolean
Optional
Default
True
if the server's SSL section is defined, else False.
Enables or disables SSL for the Gateway's REST client to Elasticsearch.
client_ca_name
: String
Optional
Default is configured in the platform section:
platform.platform_ca_name.
CA filename located inside local_ssl_certs_dir.
Contains the certificates to trust for the Gateway's REST client to Elasticsearch.
The name cannot contain '/' chars.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets
directory.
client_private_key_name
: String
Optional
Default
None
.
Private key name located inside local_ssl_certs_dir.
Used by the Gateway's REST client to Elasticsearch to encrypt with TLS.
MUST be in PKCS8
format.
The name cannot contain '/' chars.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets
directory.
client_certificate_name
: String
Optional
Default
None
.
Certificate name located inside local_ssl_certs_dir.
Used by the Gateway's REST client to Elasticsearch with TLS.
MUST be in PKCS8
format.
The name cannot contain '/' chars.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets
directory.
credentials.user
: String
Optional
Default
None
.
Username to authenticate to Elasticsearch.
Used by the Gateway reporters
if type is elasticsearch.
credentials.password
: String
Optional
Default
None
.
Password to authenticate to Elasticsearch.
Used by the Gateway reporters
if type is elasticsearch.
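As a sketch, a per-server Elasticsearch client TLS section combining the settings above could look like this; the file names, host and credentials are illustrative assumptions, not defaults:

```json
{
  "servers": {
    "server1": {
      "inet_address": "server1",
      "port": 4242,
      "ssl": {
        "elasticsearch_ssl_enabled": true,
        "client_ca_name": "ca.pem",
        "client_private_key_name": "server-key.pem",
        "client_certificate_name": "server-cert.pem",
        "credentials": {
          "user": "gateway",
          "password": "@{DEPLOYMENT_SECRETS.gateway.es_password}"
        }
      }
    }
  }
}
```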
Resource Manager¶
Info
The resource manager is optional.
The resource manager section is composed of two lists:
- a metadata list section, for multiple metadata backends
- a data list section, for multiple data storage backends
gateway.clusters.[clusterId].resources.manager.metadata.type
: String
Mandatory.
Metadata backend type.
Only elasticsearch
is currently supported.
gateway.clusters.[clusterId].resources.manager.data.type
: String
Mandatory.
Data storage type.
Only file
is currently supported.
If the metadata backend type is elasticsearch
:
gateway.clusters.[clusterId].resources.manager.metadata.hosts
: String array
Mandatory.
List of ES hosts.
Pattern is ["host:port"].
Warning
If you configured the extraction service, the hosts of the metadata backend MUST match the host of the Elasticsearch metric cluster.
gateway.clusters.[clusterId].resources.manager.metadata.index
: String
Mandatory. Index name where the metadata will be stored as json documents.
gateway.clusters.[clusterId].resources.manager.metadata.prefix
: String
Optional.
Default: None.
If set, the targeted Elasticsearch REST API URL for metadata will be modified with a path prefix for every request sent.
Example: setting prefix: "my/path"
will send the ES requests to host:9200/my/path/{client_path}.
gateway.clusters.[clusterId].resources.manager.metadata.credentials.user
: String
Optional.
Default: None.
Username if the connection to ES requires user authentication.
gateway.clusters.[clusterId].resources.manager.metadata.credentials.password
: String
Optional.
Default: None.
Password if the connection to ES requires password authentication.
If the data storage type is file
:
gateway.clusters.[clusterId].resources.manager.data.root_path
: String
Mandatory.
Path to store the data. The gateway service MUST have the proper write access permissions to this path.
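Putting this together, a resource manager with an Elasticsearch metadata backend and a file data backend might be declared as below; the host, index name and path are illustrative:

```json
{
  "resources": {
    "manager": {
      "metadata": [
        {
          "type": "elasticsearch",
          "hosts": ["node01:9200"],
          "index": "gateway-resources"
        }
      ],
      "data": [
        {
          "type": "file",
          "root_path": "/data/gateway/resources"
        }
      ]
    }
  }
}
```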
If the data storage type is minio
:
gateway.clusters.[clusterId].resources.manager.data.host
: String
Mandatory.
Host of the minio cluster.
gateway.clusters.[clusterId].resources.manager.data.access_key
: String
Mandatory. Minio access key.
gateway.clusters.[clusterId].resources.manager.data.secret_key
: String
Mandatory.
Minio secret key.
Spark¶
Apache Spark is an open-source cluster-computing framework. Spark provides an interface for programming entire clusters with implicit data parallelism and fault-tolerance.
{
"spark": {
"punchplatform_analytics_deployment_version": "punchplatform-analytics-deployment-6.1.0",
"clusters": {
"spark_main": {
"master": {
"servers": [
"node01"
],
"listen_interface": "eth0",
"master_port": 7077,
"rest_port": 6066,
"ui_port": 8081
},
"slaves": {
"node01": {
"listen_interface": "eth0",
"slave_port": 7078,
"webui_port": 8084
},
"node02": {
"listen_interface": "eth0",
"slave_port": 7078,
"webui_port": 8084
},
"node03": {
"listen_interface": "eth0",
"slave_port": 7078,
"webui_port": 8084
}
},
"spark_workers_by_punchplatform_spark": 1,
"zk_cluster": "common",
"zk_root": "spark-2.4.0-main",
"slaves_cpu": 4,
"slaves_memory": "1G"
}
}
}
}
punchplatform_analytics_deployment_version
Mandatory: version of PML
clusters.clusterId
Mandatory: clusterId is a string composed of alphanumeric characters. The clusterId must be unique.
clusters.<clusterId>.master
Mandatory: JSON content containing the spark master settings.
cluster.<clusterId>.master.servers
Mandatory: a list of servers on which a spark master will be installed.
Known issues have been observed when using hostnames here; prefer IP addresses.
cluster.<clusterId>.master.listen_interface
Mandatory: interface to bind spark master.
cluster.<clusterId>.master.master_port
Mandatory: Integer. TCP port used by the Spark master.
cluster.<clusterId>.master.rest_port
Mandatory: Integer. TCP port used by the Spark master for application submission.
cluster.<clusterId>.master.ui_port
Mandatory: Integer. TCP port used by the Spark master UI.
clusters.<clusterId>.slaves
Mandatory: Dictionary indexed by the hostnames of the nodes composing the Spark cluster.
clusters.<clusterId>.slaves.nodeHostname.listen_interface
Mandatory: Network interface on which to bind the Spark slave.
clusters.<clusterId>.slaves.nodeHostname.slave_port
Mandatory: Integer. TCP port used by the Spark slave.
clusters.<clusterId>.slaves.nodeHostname.webui_port
Mandatory: Integer. TCP port used by the Spark slave UI.
clusters.<clusterId>.spark_workers_by_punchplatform_spark
Mandatory: Integer. Number of workers per slave.
clusters.<clusterId>.zk_cluster
Mandatory: ID of the ZooKeeper cluster used by the Spark masters for high availability.
clusters.<clusterId>.zk_root
Mandatory: String. ZooKeeper path under which the Spark masters store their high-availability data.
clusters.<clusterId>.slaves_cpu
Mandatory: Integer. Number of CPUs allocated to each slave.
clusters.<clusterId>.slaves_memory
Mandatory: String. Amount of memory allocated to each slave (e.g. "1G").
clusters.<clusterId>.metrics
Optional
Metrics reporter configuration. At the moment only elasticsearch is supported.
Example : metrics.elasticsearch.cluster_id: "es_search"
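Written as a JSON fragment inside the cluster settings, that metrics example would look like this:

```json
{
  "metrics": {
    "elasticsearch": {
      "cluster_id": "es_search"
    }
  }
}
```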
SSL/TLS and secrets
The security configurations inside clusters.<clusterId>.servers.<server_id>
are dedicated to one server :
local_credentials_dir
: String
Optional
Default
clusters.<clusterId>.local_credentials_dir
.
If provided, will supplement platform.platform_local_credentials_dir
and clusters.<clusterId>.local_credentials_dir.
The local path of a directory located on the deployer's machine and containing host-specific credentials (i.e. certs, keys, CA, secrets files...).
Every key or keystore name configured for this component is searched for:
1. As a matching name inside this folder, or inside the default one.
2. If not found, as a matching name inside a subfolder named after each of the cluster's configured hosts.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>,
then inside <local_credentials_dir>/host1/.
The same behavior applies to each configured host.
custom_additional_credentials_files
: String Array
Optional
If provided, will supplement
clusters.<clusterId>.custom_additional_credentials_files
.
Optional credentials files located locally inside local_credentials_dir.
These files may be private keys, certificates, keystores or any file used by the daemons' user during runtime to run punchlines and applications.
The filenames cannot contain '/' chars.
Will be deployed on the targeted server inside the/home/{punchplatform_daemons_user}/.secrets
directory.
The security configurations inside clusters.<clusterId>
are common to every server inside the cluster:
local_credentials_dir
: String
Optional
Default
platform.platform_local_credentials_dir
.
If provided, will supplement platform.platform_local_credentials_dir
and clusters.<clusterId>.servers.<serverId>.local_credentials_dir.
The local path of a directory located on the deployer's machine and containing host-specific credentials (i.e. certs, keys, CA, secrets files...).
Every key or keystore name configured for this component is searched for:
1. As a matching name inside this folder, or inside the default one.
2. If not found, as a matching name inside a subfolder named after each of the cluster's configured hosts.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>,
then inside <local_credentials_dir>/host1/.
The same behavior applies to each configured host.
custom_additional_credentials_files
: String array
Optional
If provided, will supplement
clusters.<clusterId>.servers.<serverId>.custom_additional_credentials_files
for every server inside the current cluster.
Optional credentials files located locally inside local_credentials_dir.
These files may be private keys, certificates, keystores or any file used by the daemons' user during runtime to run punchlines and applications.
The filenames cannot contain '/' chars.
Will be deployed on the targeted server inside the/home/{punchplatform_daemons_user}/.secrets
directory.
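As an illustrative sketch, cluster-wide and per-server credential files could be combined as follows; the directory and file names are hypothetical:

```json
{
  "clusters": {
    "spark_main": {
      "local_credentials_dir": "/secrets/spark",
      "custom_additional_credentials_files": ["truststore.jks"],
      "servers": {
        "node01": {
          "custom_additional_credentials_files": ["node01-keystore.jks"]
        }
      }
    }
  }
}
```

Here every server would receive truststore.jks, while only node01 would additionally receive node01-keystore.jks.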
Ceph¶
Ceph is the scalable, distributed object storage facility used by PunchPlatform for archiving, for delivering the CephFS distributed multi-mountable filesystem, or for the S3-compatible object storage REST API.
Please note that at the moment, the PunchPlatform deployer does not provide an automated means of running the REST API component. This component can be activated on a Ceph admin station (see ceph.admin) by referring to the Ceph documentation of the [ceph-rest-api] command and of its associated configuration.
This section is used to deploy a Ceph cluster (a distributed storage system) and archive data.
{
"ceph": {
"version": "13.2.5",
"clusters": {
"main": {
"production_network": "192.168.0.0/24",
"fsid": "b5ee2a02-b92c-4829-8d43-0eb17314c0f6",
"storm_clusters_clients": [
"main"
],
"osd_min_bind_port": 6800,
"osd_max_bind_port": 6803,
"mgr_min_bind_port": 6810,
"mgr_max_bind_port": 6813,
"erasure_coding_profile": {
"k": 2,
"m": 1
},
"pools": {
"mytenant-data": {
"type": "erasure-coded",
"pg_num": 128,
"pgp_num": 128
},
"mytenant-fsmeta": {
"type": "replicated",
"pg_num": 32,
"pgp_num": 32,
"replication_factor": 2
},
"mytenant-fsdata": {
"type": "erasure-coded",
"pg_num": 32,
"pgp_num": 32
}
},
"filesystems": {
"myfs": {
"metadata_pool": "mytenant-fsmeta",
"data_pool": "mytenant-fsdata"
}
},
"admins": [
"node01",
"node02"
],
"admin_rest_apis": {
"node02": {
"listening_address": "node02",
"listening_port": 5050
},
"node03": {
"listening_address": "node03",
"listening_port": 5050
}
},
"monitors": {
"node01": {
"id": 0,
"production_address": "node01"
},
"node02": {
"id": 1,
"production_address": "node02"
},
"node03": {
"id": 2,
"production_address": "node03"
}
},
"osds": {
"node01": {
"id": 0,
"device": "/dev/sdb",
"device_type": "disk",
"crush_device_class": "hdd",
"production_address": "node01"
},
"node02": [
{
"id": 0,
"device": "/dev/sdb",
"device_type": "disk",
"crush_device_class": "ssd",
"production_address": "node02",
"osd_min_bind_port": 6800,
"osd_max_bind_port": 6803
},
{
"id": 101,
"device": "/dev/sdc",
"device_type": "disk",
"crush_device_class": "hdd",
"production_address": "node02",
"osd_min_bind_port": 6850,
"osd_max_bind_port": 6857
}
],
"node03": {
"id": 2,
"device": "/dev/sdb",
"device_type": "disk",
"crush_device_class": "hdd",
"production_address": "node03"
}
},
"managers": {
"node01": {
"id": 0
},
"node02": {
"id": 1
},
"node03": {
"id": 2
}
},
"metadataservers": {
"node01": {
"id": 0,
"production_address": "node01"
},
"node02": {
"id": 1,
"production_address": "node02"
}
}
}
}
}
}
Warning
The Ceph package must be installed on the deployment machine. This can be done using the additional packages provided with the deployer.
version
Mandatory: specify the ceph version.
clusters
Mandatory: You can use several Ceph clusters, depending on your needs. Declare clusters here.
clusters.<cluster_name>
Mandatory: The name of your Ceph cluster.
clusters.<cluster_name>.production_network
Mandatory: Production network, used by Ceph clients to communicate with storage servers and monitors.
clusters.<cluster_name>.transport_network
Optional: Transport network, used by Ceph storage servers to ensure data replication and heartbeat traffic. By default, the transport network is the production network.
clusters.<cluster_name>.fsid
Mandatory: Unique Ceph cluster ID.
clusters.<cluster_name>.storm_clusters_clients
Mandatory: Specify here the names of the Storm clusters (specified in the punchplatform.properties configuration file). All slave nodes of these Storm clusters will be clients of the Ceph cluster.
clusters.<cluster_name>.osd_min_bind_port
Optional
OSD (data nodes) bind one to four ports between 6800 and 7300. This default range can be overridden by specifying a min port (and a max port in the next field). Default value is 6800. This must of course differ from other daemons. If you have multiple OSDs on a single node (see the 'osds' setting section), then this parameter should be set inside each individual osd section, to ensure that the multiple OSDs of the node have different port ranges.
clusters.<cluster_name>.osd_max_bind_port
Optional
OSD (data nodes) bind one to four ports between 6800 and 7300. This default range can be overridden by specifying a max port (and a min port in the previous field). Default value is 7300. This must of course differ from other daemons. If you have multiple OSDs on a single node (see the 'osds' setting section), then this parameter should be set inside each individual osd section, to ensure that the multiple OSDs of the node have different port ranges.
clusters.<cluster_name>.mgr_min_bind_port
Optional
Manager nodes bind one port between 6800 and 7300. This default range can be overridden by specifying a min port (and a max port in the next field). Default value is 6800.
clusters.<cluster_name>.mgr_max_bind_port
Optional
This default range can be overridden by specifying a max port (and a min port in previous field).
Default value is 7300.
clusters.<cluster_name>.erasure_coding_profile
Optional
Erasure coding profile used by all erasure coded pools can be specified in this section.
clusters.<cluster_name>.erasure_coding_profile.k
Mandatory in the erasure_coding_profile section:
The k value is the number of data chunks. See the Ceph section for more details. Be careful when specifying this parameter. Default value is (NumberOf(OSD) - 1).
clusters.<cluster_name>.erasure_coding_profile.m
Mandatory in the erasure_coding_profile section:
The m value is the number of coding (erasure) chunks. It represents the number of tolerated node losses. See the Ceph section for more details. Be careful when specifying this parameter. Default value is 1.
clusters.<cluster_name>.pools
Mandatory
Dictionary that specifies the data pools that should exist and be accessible by Ceph clients from PunchPlatform Storm topologies. Typically, one data pool can be declared per tenant to facilitate isolation and easy purge of a tenant if needed. Each key in the dictionary is the name of the pool.
clusters.<cluster_name>.pools.<pool_name>.type
Mandatory
Type of pool resilience: either 'replicated' (meaning either a non-resilient pool, or resilience achieved through multiple copies) or 'erasure-coded' (meaning resilience achieved through a RAID-like algorithm). For CephFS filesystem metadata, only the 'replicated' value is supported. Note that to achieve actual resilience when using the 'replicated' value, you additionally need to provide a 'replication_factor' of at least 2.
clusters.<cluster_name>.pools.<pool_name>.replication_factor
Mandatory (but only present when type is 'replicated'). This is the total number of data replicas (i.e. a value of '1' means 'non-resilient'). This value may be changed afterwards to increase or reduce resilience.
clusters.<cluster_name>.pools.<pool_name>.pg_num
Optional
number of Placement Groups (aggregates of objects in a pool). Default value is 128.
clusters.<cluster_name>.pools.<pool_name>.pgp_num
Optional: number of PGP. Default value is 128.
clusters.<cluster_name>.filesystems
Mandatory
Dictionary that specifies the CephFS filesystems that should exist and be accessible by Ceph clients from PunchPlatform Storm topologies. Typically, a filesystem can be declared per tenant to facilitate isolation and easy purge of a tenant if needed. Each key in the dictionary is the name of the filesystem.
clusters.<cluster_name>.filesystems.<filesystem_name>.metadata_pool
Mandatory
Name of a Ceph pool that will store the directory structure and file metadata of the CephFS filesystem. This must be a pool of 'replicated' type.
clusters.<cluster_name>.filesystems.<filesystem_name>.data_pool
Mandatory
Name of a Ceph pool that will store the file contents of the filesystem. In the current PunchPlatform release, this must be a pool of 'replicated' type.
clusters.<cluster_name>.admins
Mandatory
Array of nodes names hosting Ceph Admin nodes. These nodes will hold a copy of the ceph cluster administration keyring, and of ceph tools used for the command-line administration of the cluster.
clusters.<cluster_name>.admin_rest_apis
Mandatory
Dictionary that specifies the nodes that will run the Ceph admin REST API daemon. This API can then be used for monitoring the cluster status, either by direct invocation through a web browser, or by the PunchPlatform embedded monitoring system. Keys of this dictionary must be host names reachable from the deployer node.
clusters.<cluster_name>.admin_rest_apis.<node_name>.listening_address
Mandatory: Binding address on which the rest api daemon will be listening.
clusters.<cluster_name>.admin_rest_apis.<node_name>.listening_port
Mandatory: Binding port on which the rest api daemon will be listening.
clusters.<cluster_name>.monitors
Mandatory: Monitors maintain the cluster map (OSD endpoints, etc.).
clusters.<cluster_name>.monitors.<node_name>
Mandatory: Names of monitor nodes.
clusters.<cluster_name>.monitors.<node_name>.id
MANDATORY
Unique ID of monitor. This ID must be unique relative to the cluster of monitor nodes (an OSD could have the same ID in the same cluster)
clusters.<cluster_name>.monitors.<node_name>.production_address
Mandatory: Monitors bind this address to listen requests from clients.
clusters.<cluster_name>.osds
Mandatory: OSD (Object Storage Node) host the data.
clusters.<cluster_name>.osds.<node_name>
Mandatory: Name of the OSD node. The value is a dictionary, or an array of dictionaries, each describing an OSD daemon running on the host and managing one block device for data storage.
clusters.<cluster_name>.osds.<node_name>[].id
Mandatory: IDs have to be unique in the OSD cluster (a monitor could have the same ID in the same cluster).
clusters.<cluster_name>.osds.<node_name>[].device
Mandatory: Specify the device on the OSD where data is stored. This can be a disk device or a logical volume device.
clusters.<cluster_name>.osds.<node_name>[].crush_device_class
Optional
This is a service class tag that can be used to mark the node in the Ceph CRUSH placement tree. This can then be used for placement rules. Default value is 'None', but it is advised to provide either 'hdd' or 'ssd', depending on the actual device type. Note that this value is used only by the punchplatform deployer at OSD node creation time; if you want to change this information afterwards, please refer to the standard ceph tools for updating the osd device class in the crush table.
clusters.<cluster_name>.osds.<node_name>[].production_address
Mandatory
The production address which is the endpoint used by Ceph clients to get or put data.
clusters.<cluster_name>.osds.<node_name>[].transport_address
Optional
The transport address which is used internally for data replication and heartbeat traffic. By default, the transport address is the production address.
clusters.<cluster_name>.osds.<node_name>[].initial_weight
Optional
The relative weight of this storage node when deciding where to store data chunks. The nominal (default) value is 1.0, the same as other nodes. A weight of 0.0 means NO data will be stored on this node. This value is useful when inserting a new node into an existing cluster, to avoid immediate total rebalancing; it is also useful when clearing data from a node to prepare its removal. This parameter is used only by the PunchPlatform cluster deployer, when creating a new OSD. To change the osd weight after deployment, please refer to the official Ceph documentation or this howto.
clusters.<cluster_name>.osds.<node_name>[].osd_min_bind_port
Optional: OSD (data nodes) bind one to four ports between 6800 and 7300. This default range can be overridden by specifying a min port (and a max port in the next setting). The default value is the value provided by the same setting at cluster level.
This must of course differ from other daemons. If you have multiple OSDs on a single node (see the 'osds' setting section), then this parameter should be set inside each individual osd section, to ensure that the multiple OSDs of the node have different port ranges.
clusters.<cluster_name>.osds.<node_name>[].osd_max_bind_port
Optional
OSD (data nodes) bind one to four ports between 6800 and 7300. This default range can be overridden by specifying a max port (and a min port in the previous setting). The default value is the value provided by the same setting at cluster level. This must of course differ from other daemons. If you have multiple OSDs on a single node (see the 'osds' setting section), then this parameter should be set inside each individual osd section, to ensure that the multiple OSDs of the node have different port ranges.
clusters.<cluster_name>.managers
Mandatory: Managers provide additional monitoring and interfaces to external monitoring and management systems. They're usually collocated with monitors.
clusters.<cluster_name>.managers.<node_name>.id
Mandatory
IDs have to be unique in the managers cluster (a monitor could have the same ID in the same cluster).
clusters.<cluster_name>.metadataservers
Optional: MDS (Metadata Server): manages the structure and metadata required for CephFS filesystem instances. At least one MDS is needed to activate the CephFS feature. At least 2 must be defined for high availability of the CephFS feature.
clusters.<cluster_name>.metadataservers.<node_name>
Mandatory: Name of mds node. This is the host on which the feature will be deployed.
clusters.<cluster_name>.metadataservers.<node_name>.id
Mandatory: IDs have to be unique in the MDS cluster (usually, first one is 0)
clusters.<cluster_name>.metadataservers.<node_name>.production_address
Mandatory: The production address which is the endpoint used by CephFS clients to reach the MDS node.
Clickhouse¶
Clickhouse is a column-oriented database for online analytical processing (OLAP) of queries.
{
"clickhouse": {
"clickhouse_version": "20.4.6.53",
"clusters": {
"common": {
"shards": [
{
"servers": [
"server1"
]
}
],
"zk_cluster": "common",
"zk_root": "clickhouse",
"http_port": 8123,
"tcp_port": 9100
}
}
}
}
clickhouse_version
Mandatory: specify the Clickhouse version.
clusters.clusterId
Mandatory: clusterId is a string composed of alphanumeric characters. The clusterId must be unique.
clusters.<clusterId>.shards
Mandatory : A list of Clickhouse shards.
clusters.<clusterId>.shards.servers
Mandatory : A list of Clickhouse replica servers for this shard.
clusters.<clusterId>.http_port
Mandatory : HTTP Port used by Clickhouse.
clusters.<clusterId>.tcp_port
Mandatory : TCP Port used by Clickhouse.
clusters.<clusterId>.zk_cluster
Zookeeper cluster used only if replication is enabled.
clusters.<clusterId>.zk_root
Zookeeper root directory.
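For a sharded and replicated layout, the shards list simply holds several entries, each with its replica servers. A hypothetical two-shard, two-replica example; server names are illustrative:

```json
{
  "clickhouse": {
    "clickhouse_version": "20.4.6.53",
    "clusters": {
      "common": {
        "shards": [
          { "servers": ["server1", "server2"] },
          { "servers": ["server3", "server4"] }
        ],
        "zk_cluster": "common",
        "zk_root": "clickhouse",
        "http_port": 8123,
        "tcp_port": 9100
      }
    }
  }
}
```

With more than one server per shard, replication is enabled and the zk_cluster setting becomes relevant.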
Minio¶
It is used to deploy a Minio cluster (a distributed storage system) and archive data.
{
"minio": {
"minio_version": "<RELEASE_VERSION>",
"minio_access_key": "admin",
"minio_secret_key": "punchplatform",
"minio_public_cert": "<PATH_TO_PUBLIC_CRT>",
"minio_private_key": "<PATH_TO_PRIVATE_KEY>",
"clusters": {
"common": {
"hosts": [
"server1"
],
"port": "9000"
}
}
}
}
minio_version
Mandatory: specify the Minio version.
minio_access_key
Mandatory: Access Key to login on Minio.
minio_secret_key
Mandatory: Secret Key to login on Minio.
minio_public_cert
Path to the public cert for Minio.
minio_private_key
Path to the private key for Minio.
clusters.clusterId
Mandatory: clusterId is a string composed of alphanumeric characters. The clusterId must be unique.
clusters.<clusterId>.hosts
A list of hosts. Minio will be installed on these servers.
clusters.<clusterId>.port
Port used by Minio.
Warning
A Minio cluster can contain a single node, or at least 4 nodes for cluster mode (2- and 3-node deployments are not supported).
MLFlow¶
It is used to deploy one or more MLFlow tracking servers (storing, exposing and packaging AI models).
"mlflow": {
"version": "<RELEASE_VERSION>",
"servers": {
"<SERVER_NAME>": {
"bind_address": "<SERVER_BINDING_ADDRESS>",
"port": "<SERVER_PORT>",
"logs_path": "<LOGS_PATH>",
"artifacts_path": "<ARTIFACTS_PATH>",
"s3_endpoint_url": "<S3_ENDPOINT_URL>"
}
}
}
version
(string) Mandatory: specify the MLFlow version.
servers
Mandatory: dictionary of MLFlow tracking server instances.
servers.<SERVER_NAME>
(string) Mandatory: the name of your MLFlow tracking server instance.
servers.<SERVER_NAME>.bind_address
(string) Mandatory: the MLFlow tracking server binding address.
servers.<SERVER_NAME>.port
(string) Mandatory: the MLFlow tracking server port.
servers.<SERVER_NAME>.s3_access_key_id
(string) Mandatory: the MLFlow S3 access key ID.
servers.<SERVER_NAME>.secret_access_key
(string) Mandatory: the MLFlow S3 secret access key.
servers.<SERVER_NAME>.s3_session_token
(string) Mandatory: the MLFlow S3 session token.
servers.<SERVER_NAME>.s3_security_token
(string) Mandatory: the MLFlow S3 security token.
servers.<SERVER_NAME>.logs_path
(string) Mandatory: the MLFlow tracking server logs path (AI model logs).
Must be reachable by the MLFlow tracking server.
Example: /tmp/mlflow-logs
servers.<SERVER_NAME>.artifacts_path
(string) Mandatory: the MLFlow tracking server artifacts path (AI model path).
The path protocol is one of the following schemes: ['', 'file', 's3'].
Must be reachable by both the MLFlow tracking server and the client (notebook).
We advise you to use the Punch Minio as shared storage.
If using the s3 protocol, you must specify the s3_endpoint_url setting.
Example: s3://mlflow-artifacts (the mlflow-artifacts bucket must be created).
servers.<SERVER_NAME>.s3_endpoint_url
(string) Mandatory if artifacts_path is s3://*: the URL used to access the S3 server, while artifacts_path is the path inside the S3 server.
You must provide the credentials in the secret manager file.
Example: http://myS3Serveur.com
servers.<SERVER_NAME>.s3_settings
(string map) Optional: a map of S3/AWS key: value variables.
Each key will be declared as an environment variable at runtime:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, AWS_SECURITY_TOKEN.
The full list of available settings is documented in the MLFlow documentation.
Example: { "MLFLOW_S3_IGNORE_TLS": "true" }
Elastic Beats¶
Auditbeat¶
Auditbeat is a small component that collects system audit events (syscalls and file changes) and sends them to
an Elasticsearch cluster:
{
"auditbeat": {
"auditbeat_version": "7.10.2",
"reporting_interval": 30,
"auditd": [
{
"hosts": [
"node01"
],
"audit_rule": [
"-w /etc/passwd -p wa -k identity",
"-a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access"
]
}
],
"file_integrity": [
{
"hosts": [
"node01"
],
"paths": [
"/bin"
]
},
{
"hosts": [
"node02",
"node03"
],
"paths": [
"/bin",
"/usr/bin"
],
"recursive": true,
"exclude_files": [
"~$"
]
}
],
"elasticsearch": {
"cluster_id": "es_search"
}
}
}
Or a Kafka cluster:
{
"auditbeat": {
"auditbeat_version": "7.10.2",
"reporting_interval": 30,
"auditd": [
{
"hosts": [
"node01"
],
"audit_rule": [
"-w /etc/passwd -p wa -k identity",
"-a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access"
]
}
],
"file_integrity": [
{
"hosts": [
"node01"
],
"paths": [
"/bin"
]
},
{
"hosts": [
"node02",
"node03"
],
"paths": [
"/bin",
"/usr/bin"
],
"recursive": true,
"exclude_files": [
"~$"
]
}
],
"kafka": {
"cluster_id": "local"
}
}
}
auditbeat_version
(string) Mandatory: version of Auditbeat to deploy.
reporting_interval
(integer)
The time in seconds between two reports.
auditd.hosts
(string[])
A list of hosts. Auditbeat will be installed on these servers to execute the audit rules.
auditd.audit_rule
(string[])
The audit rules that should be installed to the kernel, one rule per entry.
Comments can be embedded using # as a prefix.
The rule format is the same as the one used by the Linux auditctl utility.
Auditbeat supports adding file watches (-w) and syscall rules (-a or -A).
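The audit_rule format can be illustrated with a short sketch (the rules below are hypothetical examples following the auditctl syntax; adapt the watched files, architecture, and keys to your platform):

```json
"audit_rule": [
  "# Watch identity files for writes and attribute changes",
  "-w /etc/passwd -p wa -k identity",
  "# Record file deletions via 64-bit syscalls, tagged with the 'delete' key",
  "-a always,exit -F arch=b64 -S unlink,unlinkat -k delete"
]
```

Comment entries are prefixed with # as described above; the remaining entries are passed to the kernel as audit rules.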
file_integrity.hosts
(string[])
A list of hosts. Auditbeat will be installed on these servers to check file integrity.
file_integrity.paths
(string[])
A list of paths (directories or files) to watch. Globs are not supported.
The specified paths should exist when the metricset is started.
file_integrity.exclude_files
(string[])
A list of regular expressions used to filter out events for unwanted files.
The expressions are matched against the full path of every file and directory.
By default, no files are excluded. See Regular expression support for a list of supported regexp patterns.
It is recommended to wrap regular expressions in single quotation marks to avoid issues with YAML escaping rules.
recursive
(boolean: false)
By default, the watches set to the paths specified in paths are not recursive.
This means that only changes to the contents of these directories are watched.
If recursive is set to true, the file_integrity module will watch for changes on these directories and all their subdirectories.
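Combining these settings, a file_integrity entry watching directory trees recursively while excluding some files by regular expression could look like the following sketch (hosts, paths, and patterns are illustrative):

```json
"file_integrity": [
  {
    "hosts": ["node01"],
    "paths": ["/etc", "/usr/bin"],
    "recursive": true,
    "exclude_files": ["~$", "\\.swp$", "/tmp/"]
  }
]
```

Note that regular-expression special characters must be escaped according to JSON string rules, as with "\\.swp$" above.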
SSL/TLS and secrets
If you set the platform.platform_local_credentials_dir property, and the secrets/credentials files of a configured host are located inside a <platform.platform_local_credentials_dir>/<configured_host> directory, you may configure security like this:
{
"auditbeat": {
"auditbeat_version": "7.10.2",
"reporting_interval": 30,
"auditd": [
{
"hosts": [
"node01"
],
"audit_rule": [
"-w /etc/passwd -p wa -k identity"
]
}
],
"file_integrity": [
{
"hosts": [
"node01"
],
"paths": [
"/bin"
]
}
],
"elasticsearch": {
"cluster_id": "es_search",
"ssl_enabled": true
},
"elasticsearch_private_key_name": "auditbeat-server-key.pem",
"elasticsearch_certificate_name": "auditbeat-server-cert.crt",
"elasticsearch_user": "bob",
"elasticsearch_password": "bobspassword"
}
}
In this case, the credentials file structure may look like this:
<platform.platform_local_credentials_dir>
├── ca.pem
├── node01
│   ├── auditbeat-server-key.pem
│   └── auditbeat-server-cert.crt
└── node02
    ├── auditbeat-server-key.pem
    └── auditbeat-server-cert.crt
The filenames may be the same in all directories, but their contents may obviously differ.
Enabling SSL connections:
elasticsearch.ssl_enabled
(boolean) Optional. Default: false.
Enables SSL for the Beat's client connection to the Elasticsearch cluster.
kafka.ssl_enabled
(boolean) Optional. Default: false.
Enables SSL for the Beat's client connection to the Kafka cluster.
The TLS configurations in the auditbeat.servers.<serverId> section are dedicated to one single server:
local_credentials_dir
(string) Optional. Default: auditbeat.local_credentials_dir.
If provided, will supplement platform.platform_local_credentials_dir and auditbeat.local_credentials_dir.
The local path of a directory located on the deployer's machine, containing host-specific credentials (certs, keys, CA, secrets files...).
Every key or keystore name configured for this component will be searched for:
1. as a matching name inside the current folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>, then inside <local_credentials_dir>/host1/. The same behavior applies to each configured host.
elasticsearch_private_key_name
(string) Optional. Default: auditbeat.elasticsearch_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_certificate_name
(string) Optional. Default: auditbeat.elasticsearch_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_private_key_name
(string) Optional. Default: auditbeat.kafka_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_certificate_name
(string) Optional. Default: auditbeat.kafka_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
The TLS configurations in the auditbeat section are common to every server:
local_credentials_dir
(string) Optional. Default: platform.platform_local_credentials_dir.
If provided, will supplement platform.platform_local_credentials_dir and auditbeat.servers.<serverId>.local_credentials_dir.
The local path of a directory located on the deployer's machine, containing host-specific credentials (certs, keys, CA, secrets files...).
Every key or keystore name configured for this component will be searched for:
1. as a matching name inside the current folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>, then inside <local_credentials_dir>/host1/. The same behavior applies to each configured host.
ca_name
(string) Mandatory, unless overridden in the node section. The default is configured in the platform section: platform.platform_ca_name.
CA filename located inside local_credentials_dir.
Contains the certificates of the endpoints to trust for TLS.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_private_key_name
(string) Optional. Default: None. Overridden by servers.<serverId>.elasticsearch_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_certificate_name
(string) Optional. Default: None. Overridden by servers.<serverId>.elasticsearch_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_private_key_name
(string) Optional. Default: None. Overridden by servers.<serverId>.kafka_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_certificate_name
(string) Optional. Default: None. Overridden by servers.<serverId>.kafka_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_user
(string) Optional. Default: admin.
Username for Elasticsearch when Open Distro is deployed.
elasticsearch_password
(string) Optional. Default: admin.
Password for Elasticsearch when Open Distro is deployed.
Filebeat¶
Filebeat is a small component that sends system logs to an Elasticsearch cluster:
{
"filebeat": {
"filebeat_version": "7.10.2",
"files": [
{
"hosts": [
"node01"
],
"path": [
"/var/log/auth.log"
]
},
{
"hosts": [
"node02"
],
"path": [
"/var/log/syslog"
]
}
],
"elasticsearch": {
"cluster_id": "es_search"
}
}
}
Or a Kafka cluster:
{
"filebeat": {
"filebeat_version": "7.10.2",
"files": [
{
"hosts": [
"node01"
],
"path": [
"/var/log/auth.log"
]
},
{
"hosts": [
"node02"
],
"path": [
"/var/log/syslog"
]
}
],
"kafka": {
"cluster_id": "local",
"topic_name": "filebeat-topic"
}
}
}
filebeat_version
(string) Mandatory: version of Filebeat to deploy.
files
(map[], mandatory)
This section contains a list of hosts and paths to monitor.
elasticsearch
(map)
This section enables the elasticsearch reporter.
elasticsearch.cluster_id
(string, mandatory)
Name of the elasticsearch cluster used to store the collected logs.
kafka
(map)
This section enables the kafka reporter.
kafka.cluster_id
(string, mandatory)
Name of the kafka cluster.
kafka.topic_name
(string, mandatory)
Name of the kafka topic used to store the logs collected by Filebeat.
SSL/TLS and secrets
If you set the platform.platform_local_credentials_dir property, and the secrets/credentials files of a configured host are located inside a <platform.platform_local_credentials_dir>/<configured_host> directory, you may configure security like this:
{
"filebeat": {
"filebeat_version": "7.10.2",
"files": [
{
"hosts": [
"node01"
],
"path": [
"/var/log/auth.log"
]
},
{
"hosts": [
"node02"
],
"path": [
"/var/log/syslog"
]
}
],
"elasticsearch": {
"cluster_id": "es_search",
"ssl_enabled": true
},
"elasticsearch_private_key_name": "filebeat-server-key.pem",
"elasticsearch_certificate_name": "filebeat-server-cert.crt",
"elasticsearch_user": "bob",
"elasticsearch_password": "bobspassword"
}
}
In this case, the credentials file structure may look like this:
<platform.platform_local_credentials_dir>
├── ca.pem
├── node01
│   ├── filebeat-server-key.pem
│   └── filebeat-server-cert.crt
└── node02
    ├── filebeat-server-key.pem
    └── filebeat-server-cert.crt
The filenames may be the same in all directories, but their contents may obviously differ.
Enabling SSL connections:
elasticsearch.ssl_enabled
(boolean) Optional. Default: false.
Enables SSL for the Beat's client connection to the Elasticsearch cluster.
kafka.ssl_enabled
(boolean) Optional. Default: false.
Enables SSL for the Beat's client connection to the Kafka cluster.
The TLS configurations in the filebeat.servers.<serverId> section are dedicated to one single server:
local_credentials_dir
(string) Optional. Default: filebeat.local_credentials_dir.
If provided, will supplement platform.platform_local_credentials_dir and filebeat.local_credentials_dir.
The local path of a directory located on the deployer's machine, containing host-specific credentials (certs, keys, CA, secrets files...).
Every key or keystore name configured for this component will be searched for:
1. as a matching name inside the current folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>, then inside <local_credentials_dir>/host1/. The same behavior applies to each configured host.
elasticsearch_private_key_name
(string) Optional. Default: filebeat.elasticsearch_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_certificate_name
(string) Optional. Default: filebeat.elasticsearch_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_private_key_name
(string) Optional. Default: filebeat.kafka_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_certificate_name
(string) Optional. Default: filebeat.kafka_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
The TLS configurations in the filebeat section are common to every server:
local_credentials_dir
(string) Optional. Default: platform.platform_local_credentials_dir.
If provided, will supplement platform.platform_local_credentials_dir and filebeat.servers.<serverId>.local_credentials_dir.
The local path of a directory located on the deployer's machine, containing host-specific credentials (certs, keys, CA, secrets files...).
Every key or keystore name configured for this component will be searched for:
1. as a matching name inside the current folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>, then inside <local_credentials_dir>/host1/. The same behavior applies to each configured host.
ca_name
(string) Mandatory, unless overridden in the node section. The default is configured in the platform section: platform.platform_ca_name.
CA filename located inside local_credentials_dir.
Contains the certificates of the endpoints to trust for TLS.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_private_key_name
(string) Optional. Default: None. Overridden by servers.<serverId>.elasticsearch_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_certificate_name
(string) Optional. Default: None. Overridden by servers.<serverId>.elasticsearch_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_private_key_name
(string) Optional. Default: None. Overridden by servers.<serverId>.kafka_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_certificate_name
(string) Optional. Default: None. Overridden by servers.<serverId>.kafka_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_user
(string) Optional. Default: admin.
Username for Elasticsearch when Open Distro is deployed.
elasticsearch_password
(string) Optional. Default: admin.
Password for Elasticsearch when Open Distro is deployed.
Metricbeat¶
Metricbeat is a small component that sends system metrics to
an Elasticsearch cluster:
{
"metricbeat": {
"metricbeat_version": "7.10.2",
"modules": {
"system": {
"high_frequency_system_metrics": {
"metricsets": [
"cpu",
"load",
"memory"
],
"reporting_interval": "30s"
},
"normal_frequency_system_metrics": {
"metricsets": [
"fsstat"
],
"reporting_interval": "5m"
},
"slow_frequency_system_metrics": {
"metricsets": [
"uptime"
],
"reporting_interval": "1h"
}
}
},
"elasticsearch": {
"cluster_id": "es_search"
}
}
}
Or a Kafka cluster:
{
"metricbeat": {
"metricbeat_version": "7.10.2",
"modules": {
"system": {
"high_frequency_system_metrics": {
"metricsets": [
"cpu",
"load",
"memory"
],
"reporting_interval": "30s"
},
"normal_frequency_system_metrics": {
"metricsets": [
"fsstat"
],
"reporting_interval": "5m"
},
"slow_frequency_system_metrics": {
"metricsets": [
"uptime"
],
"reporting_interval": "1h"
}
}
},
"kafka": {
"cluster_id": "local",
"topic_name": "platform-system-metrics"
}
}
}
metricbeat_version
(string) Mandatory: version of Metricbeat to deploy.
reporting_interval
(integer)
Interval in seconds used by Metricbeat to report system metrics.
servers
(map)
To monitor external servers with Metricbeat, you can provide a list of additional hosts. In the end, Metricbeat is deployed on all servers composing the PunchPlatform, plus these additional servers.
modules
(map, mandatory)
The Metricbeat modules to deploy.
modules.[module_name]
(map, mandatory)
A dedicated, custom-named group of Metricbeat metricsets.
modules.[module_name].[metric_name].metricsets
(string[], mandatory)
Metricsets of the module. For the full list of available metricsets, take a look at the official documentation.
modules.[module_name].[metric_name].reporting_interval
(string, mandatory)
Period between two metricset collections. For example: 10s, 1m, 1h.
modules.[module_name].[metric_name].hosts
(string[])
Hosts required by some modules such as zookeeper and kafka. For details, take a look at the official documentation.
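For modules that must reach a service over the network, such as the kafka or zookeeper modules, a hosts list is added to the metricset entry. A hypothetical sketch (the module and metricset names follow the official Metricbeat documentation; the group name and broker address are illustrative):

```json
"modules": {
  "kafka": {
    "kafka_broker_metrics": {
      "metricsets": ["consumergroup", "partition"],
      "reporting_interval": "30s",
      "hosts": ["localhost:9092"]
    }
  }
}
```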
elasticsearch
(map)
This section enables the elasticsearch reporter.
elasticsearch.cluster_id
(string, mandatory)
Name of the elasticsearch cluster used to store the system metrics.
kafka
(map)
When present, this section enables the kafka metrics reporter.
kafka.cluster_id
(string, mandatory)
Name of the kafka cluster.
kafka.topic_name
(string, mandatory)
Name of the kafka topic used to store the metrics collected by Metricbeat.
SSL/TLS and secrets
If you set the platform.platform_local_credentials_dir property, and the secrets/credentials files of a configured host are located inside a <platform.platform_local_credentials_dir>/<configured_host> directory, you may configure security like this:
{
"metricbeat": {
"metricbeat_version": "7.10.2",
"modules": {
"system": {
"high_frequency_system_metrics": {
"metricsets": [
"cpu",
"load",
"memory"
],
"reporting_interval": "30s"
},
"normal_frequency_system_metrics": {
"metricsets": [
"fsstat"
],
"reporting_interval": "5m"
},
"slow_frequency_system_metrics": {
"metricsets": [
"uptime"
],
"reporting_interval": "1h"
}
}
},
"elasticsearch": {
"cluster_id": "es_search",
"ssl_enabled": true
},
"elasticsearch_private_key_name": "metricbeat-server-key.pem",
"elasticsearch_certificate_name": "metricbeat-server-cert.crt",
"elasticsearch_user": "bob",
"elasticsearch_password": "bobspassword"
}
}
In this case, the credentials file structure may look like this:
<platform.platform_local_credentials_dir>
├── ca.pem
├── node01
│   ├── metricbeat-server-key.pem
│   └── metricbeat-server-cert.crt
└── node02
    ├── metricbeat-server-key.pem
    └── metricbeat-server-cert.crt
The filenames may be the same in all directories, but their contents may obviously differ.
Enabling SSL connections:
elasticsearch.ssl_enabled
(boolean) Optional. Default: false.
Enables SSL for the Beat's client connection to the Elasticsearch cluster.
kafka.ssl_enabled
(boolean) Optional. Default: false.
Enables SSL for the Beat's client connection to the Kafka cluster.
The TLS configurations in the metricbeat.servers.<serverId> section are dedicated to one single server:
local_credentials_dir
(string) Optional. Default: metricbeat.local_credentials_dir.
If provided, will supplement platform.platform_local_credentials_dir and metricbeat.local_credentials_dir.
The local path of a directory located on the deployer's machine, containing host-specific credentials (certs, keys, CA, secrets files...).
Every key or keystore name configured for this component will be searched for:
1. as a matching name inside the current folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>, then inside <local_credentials_dir>/host1/. The same behavior applies to each configured host.
elasticsearch_private_key_name
(string) Optional. Default: metricbeat.elasticsearch_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_certificate_name
(string) Optional. Default: metricbeat.elasticsearch_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_private_key_name
(string) Optional. Default: metricbeat.kafka_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_certificate_name
(string) Optional. Default: metricbeat.kafka_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
The TLS configurations in the metricbeat section are common to every server:
local_credentials_dir
(string) Optional. Default: platform.platform_local_credentials_dir.
If provided, will supplement platform.platform_local_credentials_dir and metricbeat.servers.<serverId>.local_credentials_dir.
The local path of a directory located on the deployer's machine, containing host-specific credentials (certs, keys, CA, secrets files...).
Every key or keystore name configured for this component will be searched for:
1. as a matching name inside the current folder, or inside the default one;
2. if not found, as a matching name inside a subfolder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>, then inside <local_credentials_dir>/host1/. The same behavior applies to each configured host.
ca_name
(string) Mandatory, unless overridden in the node section. The default is configured in the platform section: platform.platform_ca_name.
CA filename located inside local_credentials_dir.
Contains the certificates of the endpoints to trust for TLS.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_private_key_name
(string) Optional. Default: None. Overridden by servers.<serverId>.elasticsearch_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_certificate_name
(string) Optional. Default: None. Overridden by servers.<serverId>.elasticsearch_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_private_key_name
(string) Optional. Default: None. Overridden by servers.<serverId>.kafka_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_certificate_name
(string) Optional. Default: None. Overridden by servers.<serverId>.kafka_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ directory where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_user
(string) Optional. Default: admin.
Username for Elasticsearch when Open Distro is deployed.
elasticsearch_password
(string) Optional. Default: admin.
Password for Elasticsearch when Open Distro is deployed.
Packetbeat¶
Packetbeat is a small component that sends network packet data to
an Elasticsearch cluster:
{
"packetbeat": {
"packetbeat_version": "7.10.2",
"reporting_interval": 30,
"interfaces": [
{
"hosts": [
"node01"
],
"interface": "eth0"
},
{
"hosts": [
"node02"
],
"interface": "any"
}
],
"elasticsearch": {
"cluster_id": "es_search"
}
}
}
Or a Kafka cluster:
{
"packetbeat": {
"packetbeat_version": "7.10.2",
"reporting_interval": 30,
"interfaces": [
{
"hosts": [
"node01"
],
"interface": "eth0"
},
{
"hosts": [
"node02"
],
"interface": "any"
}
],
"kafka": {
"cluster_id": "local"
}
}
}
packetbeat_version
(string) Mandatory: version of Packetbeat to deploy.
reporting_interval
(integer)
Interval in seconds used by Packetbeat to report network metrics.
elasticsearch
(map)
This section enables the elasticsearch reporter.
elasticsearch.cluster_id
(string, mandatory)
Name of the elasticsearch cluster used to store the network metrics.
kafka
(map)
This section enables the kafka reporter.
kafka.cluster_id
(string, mandatory)
Name of the kafka cluster.
SSL/TLS and secrets
If you set the platform.platform_local_credentials_dir property, and the secrets/credentials files of a configured host are located inside a <platform.platform_local_credentials_dir>/<configured_host> directory, you may configure security like this:
{
"packetbeat": {
"packetbeat_version": "7.10.2",
"reporting_interval": 30,
"interfaces": [
{
"hosts": [
"node01"
],
"interface": "eth0"
},
{
"hosts": [
"node02"
],
"interface": "any"
}
],
"elasticsearch": {
"cluster_id": "es_search",
"ssl_enabled": true
},
"elasticsearch_private_key_name": "packetbeat-server-key.pem",
"elasticsearch_certificate_name": "packetbeat-server-cert.crt",
"elasticsearch_user": "bob",
"elasticsearch_password": "bobspassword"
}
}
In this case, the credentials file structure may look like this:
<platform.platform_local_credentials_dir>
├── ca.pem
├── node01
│   ├── packetbeat-server-key.pem
│   └── packetbeat-server-cert.crt
└── node02
    ├── packetbeat-server-key.pem
    └── packetbeat-server-cert.crt
The filenames may be the same in all directories, but their contents may obviously differ.
Enabling SSL connections:
elasticsearch.ssl_enabled
(boolean) Optional. Default: false.
Enables SSL for the Beat's client connection to the Elasticsearch cluster.
kafka.ssl_enabled
(boolean) Optional. Default: false.
Enables SSL for the Beat's client connection to the Kafka cluster.
The TLS configurations in packetbeat.servers.<serverId>
section are dedicated to one single server :
local_credentials_dir
(string)
Optional. Default: packetbeat.local_credentials_dir.
If provided, supplements platform.platform_local_credentials_dir and packetbeat.local_credentials_dir.
The local path of a directory located on the deployer's machine, containing host-specific credentials (i.e. certs, keys, CA, secrets files...).
Every key or keystore name configured for this component is searched for as follows:
1. A matching name inside the current folder, or inside the default one.
2. If not found, a matching name inside a sub-folder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>, then inside <local_credentials_dir>/host1/. The same behavior applies to each configured host.
elasticsearch_private_key_name
(string)
Optional. Default: packetbeat.elasticsearch_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ sub-folder where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_certificate_name
(string)
Optional. Default: packetbeat.elasticsearch_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ sub-folder where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_private_key_name
(string)
Optional. Default: packetbeat.kafka_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ sub-folder where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_certificate_name
(string)
Optional. Default: packetbeat.kafka_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ sub-folder where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
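As an illustrative sketch only, a per-server TLS override using the parameters above might look like the following. The server id "node01", the directory path, and the file names are placeholders, not values from a real platform:

```json
{
  "packetbeat": {
    "servers": {
      "node01": {
        "local_credentials_dir": "./credentials/packetbeat",
        "kafka_private_key_name": "packetbeat-kafka-key.pem",
        "kafka_certificate_name": "packetbeat-kafka-cert.crt"
      }
    }
  }
}
```

With this sketch, the deployer would look for both files inside ./credentials/packetbeat, then inside ./credentials/packetbeat/node01/, following the search order described above.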
The TLS settings in the packetbeat
section are common to every server:
local_credentials_dir
(string)
Optional. Default: platform.platform_local_credentials_dir.
If provided, supplements platform.platform_local_credentials_dir and packetbeat.servers.<serverId>.local_credentials_dir.
The local path of a directory located on the deployer's machine, containing host-specific credentials (i.e. certs, keys, CA, secrets files...).
Every key or keystore name configured for this component is searched for as follows:
1. A matching name inside the current folder, or inside the default one.
2. If not found, a matching name inside a sub-folder named after each configured cluster host.
Example: if this component is deployed on 'host1', the provided keys or keystores are searched for inside <local_credentials_dir>, then inside <local_credentials_dir>/host1/. The same behavior applies to each configured host.
ca_name
(string)
Mandatory (may be overridden in the node section).
The default is configured in the platform section: platform.platform_ca_name.
CA file name located inside local_credentials_dir.
Contains the certificates of the endpoints to trust for TLS.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_private_key_name
(string)
Optional. Default: None.
Overridden by servers.<serverId>.elasticsearch_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ sub-folder where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_certificate_name
(string)
Optional. Default: None.
Overridden by servers.<serverId>.elasticsearch_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ sub-folder where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_private_key_name
(string)
Optional. Default: None.
Overridden by servers.<serverId>.kafka_private_key_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ sub-folder where hostname matches the server where the file will be used.
Used to encrypt the connection to endpoints with TLS.
MUST be in PKCS8 format.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
kafka_certificate_name
(string)
Optional. Default: None.
Overridden by servers.<serverId>.kafka_certificate_name.
File name located inside local_credentials_dir, or inside a <local_credentials_dir>/<hostname>/ sub-folder where hostname matches the server where the file will be used.
Used to encrypt and authenticate the client to endpoints with TLS.
MUST respect the x509 standard.
The name cannot contain '/' characters.
It will be placed inside the /home/{punchplatform_daemons_user}/.secrets directory.
elasticsearch_user
(string)
Optional. Default: admin.
Username for Elasticsearch when Open Distro is deployed.
elasticsearch_password
(string)
Optional. Default: admin.
Password for Elasticsearch when Open Distro is deployed.
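Putting the section-level settings together, a common TLS block might look like the sketch below. All file names, the directory path, and the "bob" credentials are placeholders; the keys follow the packetbeat.* parameter paths documented above, so check the exact nesting against your platform's reference configuration:

```json
{
  "packetbeat": {
    "local_credentials_dir": "./credentials/packetbeat",
    "ca_name": "ca.pem",
    "elasticsearch_private_key_name": "packetbeat-server-key.pem",
    "elasticsearch_certificate_name": "packetbeat-server-cert.crt",
    "elasticsearch_user": "bob",
    "elasticsearch_password": "bobspassword"
  }
}
```

Per-server settings in packetbeat.servers.<serverId> would then override these common values where needed.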
Advanced content¶
Punch Configuration Manager¶
previously named: git bare
This section is used to locate the bare git repository holding the PunchPlatform configuration.
{
"git_settings": {
"synchro_mode": "ssh",
"git_remote_user": "gituser",
"git_bare_server": "node02",
"git_bare_server_address": "node02",
"punchplatform_conf_repo_git_url": "/mnt/pp00p/pp-conf.git"
}
}
-
synchro_mode
: String - use the ssh protocol to contact the git bare server
-
git_remote_user
: String - use this user to establish the ssh connection. This user must already exist.
-
git_bare_server
: String - name of the server where the bare git repository is located
-
git_bare_server_address
: String - name of the interface of the server where the bare git repository is located. If you use a custom ssh port, you can specify it after a ':' in this address.
-
punchplatform_conf_repo_git_url
: String - path, on the git bare server, of the bare repository directory
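For instance, reusing the example above with a custom ssh port might look like the following sketch. The port 2222 is purely a placeholder, and the ':' syntax should be verified against your deployer version:

```json
{
  "git_settings": {
    "synchro_mode": "ssh",
    "git_remote_user": "gituser",
    "git_bare_server": "node02",
    "git_bare_server_address": "node02:2222",
    "punchplatform_conf_repo_git_url": "/mnt/pp00p/pp-conf.git"
  }
}
```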
{
"punchplatform_conf_repo_branch": "master",
"lmc_conf_dir": "pp-conf"
}
punchplatform_conf_repo_branch
Optional
By default, the deployer assumes your configuration directory uses a branch named 'master', and will clone it when deploying the PunchPlatform administration/monitoring service and when deploying the initial administration user. If you want the deployer to clone another branch, provide it with this setting.
lmc_conf_dir
Optional: Name of the folder containing a working copy of the git repository defined above; it is used only by the punchplatform-admin-server.
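To make the deployer clone a branch other than master, one might write the following sketch; the branch name "preprod" is a placeholder:

```json
{
  "punchplatform_conf_repo_branch": "preprod",
  "lmc_conf_dir": "pp-conf"
}
```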