Post-deployment additional platform configuration¶
When the platform software components and the operator environment have been deployed, some post-install configuration imports and deployments are required to customize your platform to the solution constraints, and to actually deploy business-level logic and processing inside the Punch framework.
Production standard resources¶
The PunchPlatform team created a set of resources that you must import and apply to make your platform production-ready. The main purpose of these resources is to capture platform logs, operator actions and application metrics, in order to help monitor the PunchPlatform in production.
From the root folder of the unzipped deployer (e.g. 'punchplatform-deployment-X.Y.Z'), the Elasticsearch and Kibana related resources are located at:
punchplatform-deployment-5.4.0
└── resources/
    ├── elasticsearch
    │   └── templates
    │       └── platform
    │           ├── pp_mapping_applications.json
    │           ├── pp_mapping_applicative_monitoring.json
    │           ├── pp_mapping_archive.json
    │           ├── pp_mapping_gateway.json
    │           ├── pp_mapping_metadata.json
    │           ├── pp_mapping_platform_health.json
    │           ├── pp_mapping_platform_logs.json
    │           ├── pp_mapping_platform_monitoring.json
    │           ├── pp_mapping_topology_metrics.json
    │           └── pp_monitoring_default_refresh.json
    └── kibana
        └── dashboards
            ├── archiving_monitoring
            │   └── archiving_monitoring.ndjson
            ├── platform_monitoring
            │   └── platform_monitoring.ndjson
            ├── spark_monitoring
            │   └── spark_monitoring.ndjson
            └── tenants_monitoring
                └── tenants_monitoring.ndjson
We will see in the next paragraphs how to import these standard configurations, as well as the custom ones prepared by the platform/solution integrator team.
Tip
The Kibana dashboards and Elasticsearch templates configuration need only be applied ONCE to each targeted cluster (or whenever they change), because they are persisted in the Elasticsearch cluster.
It is nevertheless advisable to keep these resources in the configuration folder of your runtime environment, to enable fast redeployment in case of damage to your reference configuration (operator mistake) or in case of redeployment after a major incident.
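As a minimal illustration of keeping these resources next to your runtime configuration, the following sketch simply copies the deployer resources into the configuration folder. All paths are illustrative, and the `/tmp` fallback directories only exist so the snippet runs standalone:

```shell
# Sketch: keep the deployer-provided resources alongside the runtime
# configuration for fast re-import. Paths are illustrative; the /tmp
# fallbacks are only there so the snippet can run standalone.
DEPLOYER_DIR="${DEPLOYER_DIR:-/tmp/punchplatform-deployment-demo}"
CONF_DIR="${PUNCHPLATFORM_CONF_DIR:-/tmp/punch-conf-demo}"
mkdir -p "$DEPLOYER_DIR/resources/elasticsearch" "$CONF_DIR"
cp -r "$DEPLOYER_DIR/resources" "$CONF_DIR/"
ls "$CONF_DIR/resources"
```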
Update the Punch runtime configuration¶
Update the entire runtime environment and configuration of your platform with:
punchplatform-deployer.sh deploy -t platform_configuration
Info
This action requires the following files inside your $PUNCHPLATFORM_CONF_DIR:
- resolv.yaml
- punchplatform-deployment.settings
This action will update the runtime environment and configuration of all the servers for the following components:
- Punch Shiva workers
- Punch Gateway servers
- Punch operator nodes
This role ensures that the update is performed with platform admin permissions: no operator can modify the runtime changes, even though they benefit from them when using operator commands.
Info
Updating the resolver content does not require restarting the impacted services.
Warning
Updating the extension of the resolver (e.g. from 'resolv.hjson' to 'resolv.yaml') requires restarting the Gateway servers and the Shiva workers. This restart is not automatic: you have to restart them manually. Operators will also need to source their environment again.
Importing Operator resources and tenants¶
Once your deployment has succeeded, your operator nodes need the Punch resources containing your channels and custom resources, as well as the Punch-provided resources, to work properly.
You can do that in several ways:
- Manually copy your configuration to all operator nodes
- Git management
- Post installation deployment through the Punch deployer
Here is the command to copy your Punch configuration to all operator nodes using the Punch deployer:
punchplatform-deployer.sh --copy-configuration
This is a post-installation command, so you have the same prerequisites as for a classic deployment before executing it.
Danger
Note that using such a 'one time' copy (as compared to git configuration management with a shared repository) will not prevent multiple operators from changing their own copy of your resources and applications configuration. Such a copy shortcut is therefore normally reserved for single-operator-account, single-operator-machine setups, where no further configuration management is needed.
For all other production use cases, a central, unique, highly available shared configuration folder, or proper configuration management with trained users, is strongly advised.
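For the git-managed approach, a minimal sketch of putting the operator configuration folder under version control (the fallback directory, the inline git identity and the commit message are illustrative, not a Punch requirement):

```shell
# Sketch: version the operator configuration folder with git so that all
# operators share one reference configuration. The /tmp fallback and the
# inline user.name/user.email only exist so the snippet runs standalone.
CONF_DIR="${PUNCHPLATFORM_CONF_DIR:-/tmp/punch-conf-git-demo}"
mkdir -p "$CONF_DIR"
cd "$CONF_DIR"
git init -q .
git add -A
git -c user.name=operator -c user.email=operator@example.com \
    commit -q --allow-empty -m "initial platform configuration import"
git log --oneline | head -n 1
```

In a real setup the repository would be cloned from a shared remote, so that each operator node pulls the same reference configuration instead of keeping a private copy.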
Elasticsearch Templates¶
Once your Punch configuration has been imported to the operator nodes, you now have some Elasticsearch templates in your resources.
These templates are settings (indexing fields settings, index replication settings...) that will apply to each new Elasticsearch index as soon as it is created, based on the index name matching pattern in each file.
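As an illustration of what such a template contains, here is a minimal hypothetical example: the index pattern, settings and field names below are invented for illustration, not taken from the Punch-provided files.

```json
{
  "index_patterns": ["mytenant-logs-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" },
      "message":    { "type": "text" }
    }
  }
}
```

Any index whose name matches the `index_patterns` entry (here, `mytenant-logs-*`) receives these settings and field mappings automatically at creation time.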
The templates that you must load are the ones under the "platform" directory. They are needed to correctly insert the monitoring events generated by the PunchPlatform itself. They must be imported into your monitoring Elasticsearch instance, but to avoid mistakes depending on your actual setup, you can import them into each of your Elasticsearch clusters.
You can do this by hand using this operator environment command, for each folder (standard or custom) containing templates to import:
# Of course replace 'MyClusterName' with the targeted Elasticsearch cluster id
# taken from your punchplatform-deployment.settings file.
punchplatform-push-es-templates.sh --directory resources/elasticsearch/templates/platform -c MyClusterName --verbose
Info
If you only want to import a single template file, then you can use a direct Elastic REST API call:
$ curl -H "Content-Type: application/json" -XPUT http://oneElasticsearchNode:9200/_template/mapping_metrics -d @mapping_metrics.json
Kibana Dashboards¶
The PunchPlatform comes with ready-made Kibana dashboards to easily monitor your platform. All these dashboards are available under the "dashboards" directory.
Once your Punch configuration has been imported to your operator nodes, you now have some Kibana dashboards ("standard" ones from the Punch deployer resources, and optionally custom ones prepared by your integrator team) in your resources.
You need to actually import the appropriate dashboards into each running Kibana domain instance (see the deployment settings to identify all Kibana instances of your platform, and the associated Elasticsearch cluster and Kibana index name).
Warning
You must push Elasticsearch templates before pushing the Kibana dashboards to avoid mapping errors during the import.
To import the dashboards you can use this command-line:
punchplatform-setup-kibana.sh --import -d <kibana domain from your punchplatform-deployment.settings file>
Or manually, follow these quick steps:
- Go to Kibana UI
- On the left-side panel, go to the "Management > Saved Objects > Import"
- Drag-n-drop or select the previous NDJSON file (repeat for each file to import)
- Go to the "Dashboards" tab and start exploring your dashboards!
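If you want to check what saved-object types an export file will create before importing it, here is a minimal sketch. The generated sample file merely stands in for a real dashboard export such as `platform_monitoring.ndjson`:

```shell
# Sketch: list the saved-object types contained in a Kibana .ndjson export.
# The sample file below is generated only so the snippet runs standalone;
# point the grep at a real export file instead.
printf '%s\n' \
  '{"type":"index-pattern","id":"demo-pattern"}' \
  '{"type":"visualization","id":"demo-viz"}' \
  '{"type":"dashboard","id":"demo-dashboard"}' > /tmp/demo_export.ndjson
grep -o '"type":"[a-z-]*"' /tmp/demo_export.ndjson | sort | uniq -c
```

This prints one count per object type, which helps confirm that the export contains the dashboards and index patterns you expect.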
You can get more information about these dashboards here: Punch Dashboards.
Opendistro configuration¶
If your solution uses the Opendistro security plugin, then you need to import your platform-specific configuration into each secured Elasticsearch cluster of your platform, after deploying Elasticsearch for the first time on this platform.
To learn how to do that, and how such specific configuration should first be prepared/tested/exported on a development/integration environment, please have a look at the overall process and at the production deployment command line.
Initial Channels/Applications configuration import¶
Your integrator team normally prepared an initial configuration for your platform applications and pipelines, even if operators may be allowed to improve/complement/update these channels later.
The post-deployment process therefore involves importing these pre-designed "tenants" configuration folders into the operator(s) configuration folder, and then starting the imported channels.
Even if you want to deploy an "empty" integration platform for testing or for developing more configuration, you usually import the standard "platform" tenant services to:
- collect and centralize all Shiva logs, Shiva application metrics and operator actions
- run the platform health monitoring micro-service
- run housekeeping tasks to manage old data in [Elasticsearch](Operations/Platform_Administration/Elasticsearch_housekeeping_guide.html) or in Archiving storage
- run the 'platform tenant' channels health monitoring micro-service
Please refer to Reference architecture and configuration for production examples of such configuration.