
punchplatform-deployment.settings

Overview

Together with the punchplatform.properties file, the punchplatform-deployment.settings file is required to define:

  • the versions of all the external dependencies (e.g. Storm, Zookeeper, ...)
  • the folders where to store software, data and logs
  • the unix users in charge of running services or executing administration actions
  • some of the external dependencies' parameters (e.g. number of Storm workers, jvm xmx, ldap credentials, ...)

Both files are required by the PunchPlatform deployer to generate a complete ansible inventory, in turn used to fully deploy your platform.

Location

The punchplatform-deployment.settings configuration file must be located in a platforms/<platformName> sub-folder of your deployment configuration directory, where platformName is typically 'production'. A symbolic link named punchplatform-deployment.settings must then be created from the configuration root folder. That is, it must look like this:

> $PUNCHPLATFORM_CONF_DIR
    ├── punchplatform-deployment.settings -> platform/singlenode/punchplatform-deployment.settings
    └── platform
        └── singlenode
            └── punchplatform-deployment.settings

Note

To deploy a new platform, remember that you start by creating a configuration folder on your deployer host. You must then set the [PUNCHPLATFORM_CONF_DIR] environment variable to point to that directory. That variable is expected to be correctly set by the deployer and platform command line tools. Refer to the manual pages for details.

The reason to use a symbolic link is to let you later switch from one platform to another while keeping the same tenant and channel configuration. This is extremely convenient to test your channels on a secondary test platform before applying them to your production platform.
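
For example, switching the active platform could look like the following shell commands (the folder names here are illustrative):

cd $PUNCHPLATFORM_CONF_DIR
# repoint the symbolic link at another platform folder
ln -sfn platform/production/punchplatform-deployment.settings punchplatform-deployment.settings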

After the deployment completes, some of your target servers, the ones acting as administration servers, will be equipped with similar configuration folders. The PUNCHPLATFORM_CONF_DIR environment variable will be set as well on these servers. These folders, usually located under /opt/soc_conf or /data/soc_conf, are actually git clones of a central git repository, and will be used at runtime by the platform to start and/or monitor your channels. All of that is set up for you by the deployer. For now keep in mind that you are only defining the folders and files needed for deployment.

Content

This file is a JSON file in which you are free to add '#'-prefixed comments (this is not standard JSON, though).

You can test your JSON syntax with the following command, which strips the '#' comments before piping the result to jq:
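
# strip the '#' comments, then validate the remaining JSON with jq
sed 's/#.*//g' punchplatform-deployment.settings | jq .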

Mandatory Parameters

The following variables define the key locations and the users to be set up on all your target servers.

"setups_root": "/data/opt",
"remote_data_root_directory": "/data",
"remote_logs_root_directory": "/var/log/punchplatform",
"punchplatform_daemons_user": "punchplatform",
"punchplatform_group": "punchplatform",
  • setups_root

    MANDATORY

    root folder where all software packages will be installed on the target machines. It must match the install dirs in the punchplatform.properties configuration file.

  • remote_data_root_directory

    MANDATORY

    The root data directory. That folder will contain the Elasticsearch, Zookeeper, Kafka (etc.) data. It must be mounted on a partition with enough disk capacity.

  • remote_logs_root_directory

    MANDATORY

    The root log folder.

  • punchplatform_daemons_user

    MANDATORY

    the unix daemon user in charge of running the various platform services. This user is non-interactive, and will not be granted a home directory.

  • punchplatform_group

    MANDATORY

    the user group associated with all users (daemons or operators) set up on your servers.

  • punchplatform_conf_repo_branch

    OPTIONAL

    By default, the deployer assumes your configuration directory uses a git branch named 'master'. It will clone that branch on the servers defined with a monitoring or administration role, i.e. a role that requires a configuration folder to be installed. Use this property to clone another branch, as sketched below.
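
For instance, to make the deployer clone another branch on the administration servers, you could set (the branch name here is illustrative):

# illustrative only: clone the 'integration' branch instead of the default one
"punchplatform_conf_repo_branch" : "integration"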

Ansible Inventory Settings

"ansible_inventory_settings" : "[punchplatform_cluster:vars]nansible_ssh_port=8022"
  • ansible_inventory_settings

    Optional: this setting can be used to define additional settings for the Ansible deployment, for instance ansible_ssh_port, ansible_ssh_user, etc., as sketched below.
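
As a sketch, the string above should translate into an inventory fragment similar to the following once the Ansible inventory is generated (standard INI syntax assumed):

[punchplatform_cluster:vars]
ansible_ssh_port=8022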

PunchPlatform Operator

This section sets up the tools required for a human to administrate the full PunchPlatform from dedicated servers. These can be admin servers or workstations.

"punchplatform_operator" : {
    "configuration_name_dir_from_home" : "pp-conf",
    "punchplatform_conf_url" : "localhost:2181/punchplatform-primary",
    "operators_username" : ["admin1","admin2"],
    "servers" : {
        "node01" : {}
    }
},
"punchplatform_operator_environment_version": "punchplatform-operator-environment-5.7.0-SNAPSHOT"

Important

We strongly recommend using a git repository to keep your PunchPlatform configuration safe. Take a look at the git_settings section.

Important

For this role to work, a Zookeeper admin cluster must be deployed on your platform.

  • configuration_name_dir_from_home

    Mandatory

    Name of the directory that contains all the PunchPlatform configuration.

  • punchplatform_conf_url

    Mandatory

    Zookeeper path to get the full punchplatform configuration running on the platform. It follows the Zookeeper connect string scheme (host1:port1,host2:port2,...,hostn:portn/punchplatform); a sketch is given after this list. The root node (/punchplatform in the above example) should be set to the root node of the associated zookeeper cluster (see the punchplatform.properties zookeeper section).

  • operators_username

    Optional

    In addition to punchplatform_admin_user, the custom users used to administrate the PunchPlatform.

  • servers

    Mandatory

    Comma-separated array describing the servers used by operators to administrate the punchplatform. Usually, these servers are workstations.

  • punchplatform_version

    Mandatory

    Version of PunchPlatform

  • punchplatform_conf_repo_git_local_url

    Optional

    Absolute path to the git bare repository. Mandatory if the git_settings section is not defined (a configuration that is not recommended).

  • punchplatform_operator_environment_version

    Mandatory

    Version of the punchplatform operator environment. To start/stop channels and jobs, the punchplatform operator needs several libraries and shell scripts. This operator environment package provides all the needed scripts and jars.
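
As a sketch of the punchplatform_conf_url connect string described above, a three-node Zookeeper cluster could be referenced as follows (host names and root node are illustrative):

# illustrative three-node Zookeeper connect string
"punchplatform_conf_url" : "node01:2181,node02:2181,node03:2181/punchplatform-primary"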

Zookeeper

"zookeeper_version" : "apache-zookeeper-3.5.5-bin",
"zookeeper_nodes_production_interface" : "eth0",
"zookeeper_childopts" : "-server -Xmx256m -Xms256m",
"zookeeper_admin_cluster_name": "common",
  • zookeeper_version

    • Mandatory: the zookeeper version
  • zookeeper_admin_cluster_name: String

    • The zookeeper name of the admin cluster (for instance : common). The admin cluster is the one in which the applicable version of the configuration will be stored (see Administration overview)
  • zookeeper_nodes_production_interface: String

    • Zookeeper production network interface
  • zookeeper_childopts: String

    • JVM options for Zookeeper. Default: "-server -Xmx1024m -Xms1024m"

Elasticsearch

"elasticsearch_version" : "6.8.2",
"elasticsearch_plugins": {
    "opendistro_security": {
      "version": "0.10.0.1",
      "local_ssl_certs_dir": "/data/certs"
    },
    "opendistro_alerting": {
      "version": "0.10.0.1"
    }
}
  • elasticsearch_version

    Mandatory : version of Elasticsearch

Elasticsearch Opendistro Security plugin

  • version

    Mandatory : version of Opendistro Security plugin for Elasticsearch. Trigger the plugin installation during Elasticsearch deployment.

  • local_ssl_certs_dir

    Mandatory : directory located on the deployer's system containing all the SSL keys and certificates that will be used by Opendistro Security for Elasticsearch to encrypt ES transport protocol.

Elasticsearch Opendistro Alerting plugin

  • version

    Mandatory : version of Opendistro Alerting plugin for Elasticsearch. Trigger the plugin installation during Elasticsearch deployment.

Kibana

You need an operator environment on the same server as Kibana, except for documentation-only usage.

"kibana_version" : "6.8.2",
"repository": "http://fr.archive.ubuntu.com/ubuntu/",
"kibana_plugins": {
    "punchplatform": {
      "version": "5.6.0-SNAPSHOT"
    },
    "opendistro_security": {
      "version": "0.10.0.1"
    },
    "opendistro_alerting": {
      "version": "0.10.0.1"
    }
}
  • kibana_version

    Mandatory : version of Kibana

  • repository:

    Optional, String: the package repository URL. Mandatory if the installation is chrooted.

Kibana plugins

Kibana Punchplatform plugin

  • version

    Mandatory : version of Punchplatform plugin for Kibana. Trigger the plugin installation during Kibana deployment.

Kibana Opendistro Security plugin

  • version

    Mandatory : version of Opendistro Security plugin for Kibana. Trigger the plugin installation during Kibana deployment.

Kibana Opendistro Alerting plugin

  • version

    Mandatory : version of Opendistro Alerting plugin for Kibana. Trigger the plugin installation during Kibana deployment.

Storm

"storm_version" : "apache-storm-1.2.2",
"storm_nimbus_nodes_production_interface" : "eth0",
  • storm_version

    Mandatory: version of storm

  • storm_nimbus_nodes_production_interface

    Mandatory: network interface bound by the Storm Nimbus (master) for production usage

  • storm_nimbus_jvm_xmx

    Optional

    Set the Xmx of the nimbus jvm default value: 1024m

  • storm_ui_jvm_xmx

    Optional

    Set the Xmx of the ui jvm default value: 256m

  • storm_supervisor_jvm_xmx

    Optional

    Set the Xmx of the storm supervisor jvm default value: 256m

Kafka

"kafka_version" : "kafka_2.11-1.1.0",
"kafka_brokers_production_interface" : "eth0"
  • kafka_version

    Mandatory

    version of kafka

  • kafka_brokers_production_interface

    Mandatory

    network interface bound by the Kafka brokers for production usage

Shiva

Shiva is the distributed, resilient jobs/services manager used for tasks both at PunchPlatform system level (monitoring, housekeeping...) and at user processing level (channels).

Shiva is made of nodes communicating through a zookeeper cluster. Nodes can be leaders (masters of the cluster), runners (task executors), or both. The operator commands will be available on the PunchPlatform operators' linux accounts (cf. punchplatform-deployment.settings).

"shiva_version": "punchplatform-shiva-5.7.0-SNAPSHOT",
"shiva_plugins": {
    "spark": {
        "version": "spark-2.4.3-bin-hadoop2.7",
        "analytics_deployment_version": "punchplatform-analytics-deployment-5.6.0",
        "analytics_client_version": "punchplatform-analytics-client-5.6.0"
    },
    "logstash": {
        "version": "logstash-oss-6.8.2"
    },
    "storm": {
        "version": "apache-storm-1.2.2",
        "topology_jar_version": "punchplatform-topology-5.6.0"
    }
}
  • shiva_version

    Mandatory: Version of shiva app to deploy. File located in archives.

  • shiva_plugins

    Optional: Define the binaries version of the available plugins.

  • shiva_plugins.<plugin_name>

    Optional: The plugin name with its own configuration.

    For now, supported options are 'spark', 'logstash' and 'storm'. These configurations are used to deploy the associated plugins on the right hosts, based on the Shiva plugins section of the punchplatform.properties.

Spark

"spark_version" : "spark-2.4.3-bin-hadoop2.7",
"punchplatform_analytics_deployment_version" : "punchplatform-analytics-deployment-5.7.0-SNAPSHOT"
  • spark_version

    Mandatory: version of apache spark

  • punchplatform_analytics_deployment_version

    Mandatory: version of PML

Ceph

This section is used to deploy a Ceph cluster (a distributed storage system) for data archiving.

"ceph" :   "version": "13.2.5",
"clusters": {
 "main": {
   "production_network": "192.168.0.0/24",
   "fsid": "b5ee2a02-b92c-4829-8d43-0eb17314c0f6",
   "storm_clusters_clients": [
     "main"
   ],
   "osd_min_bind_port": 6800,
   "osd_max_bind_port": 6803,
   "mgr_min_bind_port": 6810,
   "mgr_max_bind_port": 6813,
   "erasure_coding_profile": {
     "k": 2,
     "m": 1
   },
   "pools": {
     "mytenant-data": {
       "type": "erasure-coded",
       "pg_num": 128,
       "pgp_num": 128
     },
     "mytenant-fsmeta": {
       "type": "replicated",
       "pg_num": 32,
       "pgp_num": 32,
       "replication_factor": 2
     },
     "mytenant-fsdata": {
       "type": "erasure-coded",
       "pg_num": 32,
       "pgp_num": 32
     }
   },
   "filesystems": {
     "myfs": {
       "metadata_pool": "mytenant-fsmeta",
       "data_pool": "mytenant-fsdata"
     }
   },
   "admins": [
     "node01",
     "node02"
   ],
   "admin_rest_apis": {
     "node02": {
       "listening_address": "node02",
       "listening_port": 5050
     },
     "node03": {
       "listening_address": "node03",
       "listening_port": 5050
     }
   },
   "monitors": {
     "node01": {
       "id": 0,
       "production_address": "node01"
     },
     "node02": {
       "id": 1,
       "production_address": "node02"
     },
     "node03": {
       "id": 2,
       "production_address": "node03"
     }
   },
   "osds": {
     "node01": {
       "id": 0,
       "device": "/dev/sdb",
       "device_type": "disk",
       "crush_device_class": "hdd",
       "production_address": "node01"
     },
     "node02": [
       {
         "id": 0,
         "device": "/dev/sdb",
         "device_type": "disk",
         "crush_device_class": "ssd",
         "production_address": "node02",
         "osd_min_bind_port": 6800,
         "osd_max_bind_port": 6803
       },
       {
         "id": 101,
         "device": "/dev/sdc",
         "device_type": "disk",
         "crush_device_class": "hdd",
         "production_address": "node02",
         "osd_min_bind_port": 6850,
         "osd_max_bind_port": 6857
       }
     ],
     "node03": {
       "id": 2,
       "device": "/dev/sdb",
       "device_type": "disk",
       "crush_device_class": "hdd",
       "production_address": "node03"
     }
   },
   "managers": {
     "node01": {
       "id": 0
     },
     "node02": {
       "id": 1
     },
     "node03": {
       "id": 2
     }
   },
   "metadataservers": {
     "node01": {
       "id": 0,
       "production_address": "node01"
     },
     "node02": {
       "id": 1,
       "production_address": "node02"
     }
   }
 }
}
}

Warning

The Ceph package must be installed on the deployment machine; this can be done using the additional packages provided with the deployer.

  • version

    Mandatory: specify the ceph version.

  • clusters

    Mandatory: You can use several Ceph clusters, depending on your needs. Declare clusters here.

  • clusters.<cluster_name>

    Mandatory: The name of your Ceph cluster.

  • clusters.<cluster_name>.production_network

    Mandatory: Production network, used by Ceph clients to communicate with storage servers and monitors.

  • clusters.<cluster_name>.transport_network

    Optional: Transport network, used by Ceph storage servers to ensure data replication and heartbeat traffic. By default the transport network is the production network.

  • clusters.<cluster_name>.fsid

    Mandatory: Unique Ceph cluster ID.

  • clusters.<cluster_name>.storm_clusters_clients

    Mandatory: Specify here the names of the Storm clusters (declared in the punchplatform.properties configuration file). All slave nodes of these Storm clusters will be clients of the Ceph cluster.

  • clusters.<cluster_name>.osd_min_bind_port

    Optional

    OSD (data nodes) bind on one to four ports between 6800 and 7300. This default range can be overridden by specifying a min port (and a max port in the next field). Default value is 6800. This must of course differ from other daemons. If you have multiple OSDs on a single node (see the 'osds' setting section) then this specific parameter should be set inside each of the osd sections, to ensure that the multiple OSDs of the node have different port ranges.

  • clusters.<cluster_name>.osd_max_bind_port

    Optional

    OSD (data nodes) bind on one to four ports between 6800 and 7300. This default range can be overridden by specifying a max port (and a min port in the previous field). Default value is 7300. This must of course differ from other daemons. If you have multiple OSDs on a single node (see the 'osds' setting section) then this specific parameter should be set inside each of the osd sections, to ensure that the multiple OSDs of the node have different port ranges.

  • clusters.<cluster_name>.mgr_min_bind_port

    OPTIONAL

    Manager nodes bind on one port between 6800 and 7300. This default range can be overridden by specifying a min port (and a max port in the next field). Default value is 6800.

  • clusters.<cluster_name>.mgr_max_bind_port

    Optional

    This default range can be overridden by specifying a max port (and a min port in the previous field).

    Default value is 7300.

  • clusters.<cluster_name>.erasure_coding_profile

    Optional

    Erasure coding profile used by all erasure coded pools can be specified in this section.

  • clusters.<cluster_name>.erasure_coding_profile.k

    Mandatory in the erasure_coding_profile section

    The k value is the number of data chunks. See the Ceph section for more details. Be careful when specifying this parameter. Default value is (NumberOf(OSD) - 1).

  • clusters.<cluster_name>.erasure_coding_profile.m

    Mandatory in the erasure_coding_profile section

    The m value is the number of erasure code chunks. It represents the number of tolerated node losses. See the Ceph section for more details. Be careful when specifying this parameter. Default value is 1.

  • clusters.<cluster_name>.pools

    Mandatory

    Dictionary that specifies the data pools that should exist and be accessible by Ceph clients from PunchPlatform storm topologies. Typically, one data pool can be declared per tenant to facilitate isolation and easy purge of a tenant if needed. Each key in the dictionary is the name of the pool.

  • clusters.<cluster_name>.pools.<pool_name>.type

    Mandatory

    Type of pool resilience: either 'replicated' (which means either a non-resilient pool, or resilience achieved through multiple copies) or 'erasure-coded' (which means resilience achieved through a RAID-like algorithm). For CephFS filesystem metadata, only the 'replicated' value is supported. Note that to achieve actual resilience when using the 'replicated' value, you additionally need to provide a 'replication_factor' of at least 2.

  • clusters.<cluster_name>.pools.<pool_name>.replication_factor

    Mandatory (but only present when type is 'replicated'). This is the total number of data replicas (i.e. a value of '1' means 'non resilient'). This value may be changed afterwards to increase or reduce resilience.

  • clusters.<cluster_name>.pools.<pool_name>.pg_num

    Optional

    number of Placement Groups (aggregates of objects in a pool). Default value is 128.

  • clusters.<cluster_name>.pools.<pool_name>.pgp_num

    Optional: number of PGP. Default value is 128.

  • clusters.<cluster_name>.filesystems

    Mandatory

    Dictionary that specifies the CephFS filesystems that should exist and be accessible by Ceph clients from PunchPlatform storm topologies. Typically, a filesystem can be declared per tenant to facilitate isolation and easy purge of a tenant if needed. Each key in the dictionary is the name of the filesystem.

  • clusters.<cluster_name>.filesystems.<filesystem_name>.metadata_pool

    Mandatory

    name of a ceph pool that will store directory structure/files metadata information about the CephFS filesystem. This must be a pool of 'replicated' type.

  • clusters.<cluster_name>.filesystems.<filesystem_name>.data_pool

    Mandatory

    name of a ceph pool that will store files content of the filesystem. In current PunchPlatform release, this must be a pool of 'replicated' type.

  • clusters.<cluster_name>.admins

    Mandatory

    Array of nodes names hosting Ceph Admin nodes. These nodes will hold a copy of the ceph cluster administration keyring, and of ceph tools used for the command-line administration of the cluster.

  • clusters.<cluster_name>.admin_rest_apis

    Mandatory

    Dictionary that specifies the nodes that will run the ceph admin rest api daemon. This API will then be usable for monitoring the cluster status, either by direct invocation through a web browser, or by the Punchplatform embedded monitoring system. Keys of this dictionary must be host names reachable from the deployer node.

  • clusters.<cluster_name>.admin_rest_apis.<node_name>.listening_address

    Mandatory: Binding address on which the rest api daemon will be listening.

  • clusters.<cluster_name>.admin_rest_apis.<node_name>.listening_port

    Mandatory: Binding port on which the rest api daemon will be listening.

  • clusters.<cluster_name>.monitors

    Mandatory: Monitors maintain the cluster map (OSD endpoints, etc.).

  • clusters.<cluster_name>.monitors.<node_name>

    Mandatory: Names of monitor nodes.

  • clusters.<cluster_name>.monitors.<node_name>.id

    MANDATORY

    Unique ID of monitor. This ID must be unique relative to the cluster of monitor nodes (an OSD could have the same ID in the same cluster)

  • clusters.<cluster_name>.monitors.<node_name>.production_address

    Mandatory: Monitors bind this address to listen requests from clients.

  • clusters.<cluster_name>.osds

    Mandatory: OSDs (Object Storage Nodes) host the data.

  • clusters.<cluster_name>.osds.<node_name>

    Mandatory: Name of the OSD node. The value is a JSON dictionary or an array of JSON dictionaries, each one describing an OSD daemon running on the host and managing one block device for data storage.

  • clusters.<cluster_name>.osds.<node_name>[].id

    Mandatory: IDs have to be unique in the OSD cluster (a monitor could have the same ID in the same cluster).

  • clusters.<cluster_name>.osds.<node_name>[].device

    Mandatory: Specify the device on the OSD where data is stored. This can be a disk device or a logical volume device.

  • clusters.<cluster_name>.osds.<node_name>[].crush_device_class

    Optional

    This is a device class tag that can be used to mark the node in the Ceph crush placement tree. It can then be used for placement rules. Default value is 'None', but it is advised to provide either 'hdd' or 'ssd', depending on the actual device type. Note that this value is used only by the punchplatform deployer at OSD node creation time; if you want to change this information afterwards, please refer to the standard ceph tools for updating the osd device class in the crush table.

  • clusters.<cluster_name>.osds.<node_name>[].production_address

    Mandatory

    The production address which is the endpoint used by Ceph clients to get or put data.

  • clusters.<cluster_name>.osds.<node_name>[].transport_address

    Optional

    The transport address which is used internally for data replication and heartbeat traffic. By default the transport address is the production address.

  • clusters.<cluster_name>.osds.<node_name>[].initial_weight

    Optional

    The relative weight of this storage node when deciding to store data chunks. Nominal (default) value is 1.0, which is the same as other nodes. A weight of 0.0 means NO data will be stored on this node. This value is useful when inserting a new node in an existing cluster, to avoid immediate total rebalancing; it is also useful when clearing data from a node to prepare removal of the node. This parameter is used only by the PunchPlatform cluster deployer, when creating a new OSD (see the sketch at the end of this section). To change the osd weight after deployment, please refer to the official CEPH documentation or this howto.

  • clusters.<cluster_name>.osds.<node_name>[].osd_min_bind_port

    Optional: OSD (data nodes) bind on one to four ports between 6800 and 7300. This default range can be overridden by specifying a min port (and a max port in the next setting). Default value is the value provided by the same setting at cluster level. This must of course differ from other daemons. If you have multiple OSDs on a single node (see the 'osds' setting section) then this specific parameter should be set inside each of the individual osd sections, to ensure that the multiple OSDs of the node have different port ranges.

  • clusters.<cluster_name>.osds.<node_name>[].osd_max_bind_port

    Optional

    OSD (data nodes) bind on one to four ports between 6800 and 7300. This default range can be overridden by specifying a max port (and a min port in the previous setting). Default value is the value provided by the same setting at cluster level. This must of course differ from other daemons. If you have multiple OSDs on a single node (see the 'osds' setting section) then this specific parameter should be set inside each of the individual osd sections, to ensure that the multiple OSDs of the node have different port ranges.

  • clusters.<cluster_name>.managers

    Mandatory: Managers provide additional monitoring and interfaces to external monitoring and management systems. They're usually colocated with monitors.

  • clusters.<cluster_name>.managers.<node_name>.id

    Mandatory

    IDs have to be unique in the managers cluster (a monitor or an OSD could have the same ID in the same cluster).

  • clusters.<cluster_name>.metadataservers

    Optional: MDS (Metadata Server): manages the structure and metadata required for CephFS filesystem instances. At least one MDS is needed to activate the CephFS feature. At least 2 must be defined for high availability of the CephFS feature.

  • clusters.<cluster_name>.metadataservers.<node_name>

    Mandatory: Name of mds node. This is the host on which the feature will be deployed.

  • clusters.<cluster_name>.metadataservers.<node_name>.id

    Mandatory: IDs have to be unique in the MDS cluster (usually, first one is 0)

  • clusters.<cluster_name>.metadataservers.<node_name>.production_address

    Mandatory: The production address which is the endpoint used by CephFS clients to reach the MDS node.
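
As mentioned for the initial_weight setting above, a newly inserted OSD can be declared with a zero weight to avoid immediate total rebalancing. A minimal sketch (node name, id and device are illustrative):

# illustrative: add an OSD without triggering immediate rebalancing
"node04": {
  "id": 3,
  "device": "/dev/sdb",
  "device_type": "disk",
  "crush_device_class": "hdd",
  "production_address": "node04",
  "initial_weight": 0.0
}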

Elastic Beats

Auditbeat

"auditbeat_version" : "6.8.2"
  • auditbeat_version

    Mandatory: version of auditbeat

Filebeat

"filebeat_version" : "6.8.2"
  • filebeat_version

    Mandatory

    version of filebeat

The base remote logs directory. It must match the install dirs in punchplatform.properties configuration file.

Metricbeat

"metricbeat_version" : "6.8.2"
  • metricbeat_version

    Mandatory: version of metricbeat

Packetbeat

"packetbeat_version" : "6.8.2"
  • packetbeat_version

    Mandatory: version of packetbeat

The base remote logs directory. It must match the install dirs in punchplatform.properties configuration file.

Advanced content

Punch Elasticsearch Security

previously named: Modsecurity

Punch Elasticsearch Security is an Apache module that protects your Elasticsearch cluster against unwanted deletions and integrity violations.

"modsecurity" : {
    "modsecurity_production_interface" : "eth0",
    "port" : 9100,
    "domains" : {
        "admin": {
            "elasticsearch_security_aliases_pattern": "events-mytenant-kibana-[-a-zA-Z0-9.*_:]+",
            "elasticsearch_security_index_pattern": "events-mytenant-[-a-zA-Z0-9.*_:]+"

        }
    }
}
  • modsecurity.modsecurity_production_interface

    Mandatory: interface used by modsecurity on the target host.

  • modsecurity.port

    Mandatory: port used by Apache for modsecurity.

  • modsecurity.<domain_name>

    Mandatory: name of the client domain. Please check that it matches the Kibana domain name.

  • modsecurity.<client_name>.elasticsearch_security_index_pattern

    Mandatory

    Regexp on the index names for the modsecurity configuration. This parameter is used to restrict data access requests. The purpose is to prevent access to any indexes other than those the user profile accessing this specific kibana domain/instance is entitled to.

    This parameter MUST match all indexes that contain data allowed to the user, not only the aliases whose names the user 'sees' in the Kibana interface. For example, if kibana provides an 'index pattern' that is in fact an alias (e.g. events-mytenant-kibana-bluecoat-lastmonth), the pattern must match the underlying indexes that contain the data (e.g. events-mytenant-bluecoat-2017.07.05).

    This is because Kibana will determine which indexes contain useful data within a 'user level' alias, and will issue unitary requests to only the underlying indexes that hold data matching the query time scope.

    To configure which aliases the user is allowed to see/use at the Graphical User Interface level, please provide a different value for 'elasticsearch_security_aliases_pattern'.

    If non-wildcard index patterns are used in Kibana, then this setting MUST also match the said index patterns, which will be queried 'directly' by kibana, without making any difference between indexes and aliases. Example: if a user has authorized data in indexes named following the 'events-mytenant--' pattern, but sees them only through aliases named following the 'events-mytenant-kibana-' pattern, then the setting should be: TODO

    To authorize everything please fill TODO

  • modsecurity.<client_name>.elasticsearch_security_aliases_pattern

    Optional

    Regexp on the names of the user-level aliases for the modsecurity configuration. This setting MUST be provided if the user is only allowed to select some aliases within his kibana instance, instead of actually using index patterns that match real unitary index names.

    If this setting is not provided, then it will default to the 'elasticsearch_security_index_pattern' setting value, and may lead to kibana malfunction or Elasticsearch overuse, especially if the provided value to this other setting is in fact an aliases pattern.

    If you want to force kibana to use pre-flight requests to determine the actual low-level indexes useful to query against a time scope, then the kibana index patterns must contain a '*' and therefore this setting should enforce the presence of a '*'.

    Example: if a user has authorized data in indexes named following the 'events-mytenant--' pattern, but sees them only through aliases named following the events-mytenant-kibana-<technoname> pattern, then the setting should be: events-mytenant-kibana-[-.:0-9a-zA-Z*_]*[*][-.:0-9a-zA-Z*_]*.

    To authorize everything please fill TODO

!!! info "Do not forget to edit your Elasticsearch section in punchplatform.properties to enable Punch Security"

Punch Configuration Manager

previously named: git bare

This section is used to locate the git bare of the PunchPlatform configuration.

"git_settings" : {
    "synchro_mode" : "ssh",
    "git_remote_user" : "gituser",
    "git_bare_server" : "node02",
    "git_bare_server_address" : "node02",
    "punchplatform_conf_repo_git_url" : "/mnt/pp00p/pp-conf.git"
}
  • synchro_mode: String

    • use ssh protocol to contact the git bare server
  • git_remote_user: String

    • Use this user to establish the ssh connection. The user must already exist.
  • git_bare_server: String

    • name of the server where the git bare is located
  • git_bare_server_address: String

    • Address of the interface of the server where the git bare repository is located. If you use a custom ssh port, you can specify it with the address:port syntax (see the sketch after this list).
  • punchplatform_conf_repo_git_url

    • path on the git bare server to locate the bare directory
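
For instance, assuming the git bare server listens on a custom ssh port (the port value here is illustrative):

# illustrative: git bare server reachable on a custom ssh port
"git_bare_server_address" : "node02:2222"
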
"punchplatform_conf_repo_branch" : "master",
"lmc_conf_dir" : "pp-conf",
"supervisor_waiting_time": 15,
"supervisor_logfile_backups": 10, 
"supervisor_logfile_maxbytes": "50MB"
  • punchplatform_conf_repo_branch

    Optional

    By default, the deployer will assume you are using a branch named 'master' in your configuration directory and will clone it when deploying the PunchPlatform administration/monitoring services and when deploying the initial administration user. If you want the deployer to clone another branch, you can provide it with this setting.

  • lmc_conf_dir

    Optional: Name of the folder that contains a working copy of the previously defined git repository, used by the punchplatform-admin-server only.

  • supervisor_waiting_time

    Optional: Waiting time (in seconds) before ansible checks whether supervisord is started (this check is performed each time a component starts).

  • supervisor_logfile_maxbytes

    OPTIONAL

    The maximum number of bytes that may be consumed by a log file before it is rotated (suffix multipliers like 'KB', 'MB' and 'GB' can be used in the value).

    Default value: 50MB

  • supervisor_logfile_backups

    OPTIONAL

    The number of log files backups to keep around resulting from process log file rotation.

    Default value: 10