PConsole on Minikube

Prerequisites

Install Minikube

Get and install minikube.
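For example, on a Linux amd64 host you can install the latest release directly (the commands below follow the upstream minikube documentation; adapt them to your OS and architecture):

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube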

Additional functionality can be enabled through addons:

minikube addons enable dashboard
minikube addons enable ingress

Check your minikube cluster by navigating through its dashboard:

minikube start
minikube dashboard
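You can also run a quick sanity check from the command line to confirm the cluster node is up before going further:

minikube status
kubectl get nodes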

Install PConsole

Follow our PConsole getting started guide.

Edit $PUNCHPLATFORM_PROPERTIES_FILE if needed. For example, you may want to change the path of the Kubernetes config file (kubernetes.clusters.<cluster_name>.config_path).

All of our Kubernetes-compatible configuration files are available in the kast and training_kubernetes tenants.

If you want to make the training_kubernetes tenant available to PConsole, replace the kubernetes key of $PUNCHPLATFORM_PROPERTIES_FILE with the one below:

{
  "kubernetes": {
    "clusters": {
      "kastcluster": {
        "config_path": "$HOME/.kube/config",
        "tenants": {
          "kast": {
            "spark_service_account": "spark-sa",
            "spark_role": "spark-role",
            "sparkline_container": "gitlab.thalesdigital.io:5005/punch/product/pp-punch/sparkline:7.0.0",
            "init_container": "gitlab.thalesdigital.io:5005/punch/product/pp-punch/resourcectl:7.0.1-SNAPSHOT",
            "stormline_container": "gitlab.thalesdigital.io:5005/punch/product/pp-punch/stormline:7.0.1",
            "image_pull_policy": "Always",
            "image_pull_secret": "mysecret"
          },
           "training_kubernetes": {
            "spark_service_account": "spark-sa",
            "spark_role": "spark-role",
            "sparkline_container": "gitlab.thalesdigital.io:5005/punch/product/pp-punch/sparkline:7.0.0",
            "init_container": "gitlab.thalesdigital.io:5005/punch/product/pp-punch/resourcectl:7.0.1-SNAPSHOT",
            "stormline_container": "gitlab.thalesdigital.io:5005/punch/product/pp-punch/stormline:7.0.1",
            "image_pull_policy": "Always",
            "image_pull_secret": "mysecret"
          }
        }
      }
    }
  }
}

Information on each parameter definition can be viewed on this page.

Install argo eventbus, workflows and events

In this section, we are going to install Argo Events and Argo Workflows in the kast namespace.

Do not use the Argo Helm chart

In Punch we only support Helm >= 3.

Installing argo-events requires additional tweaks!

Installation order matters
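Since Helm is used further down (e.g. for MinIO), you can quickly check that your local client is Helm 3 or later before proceeding (assuming helm is already on your PATH):

helm version --short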

# create namespace kast if not already done
kubectl create ns kast

Argo Events

# switch to a temporary working directory
mkdir -p /tmp/argo-event && cd /tmp/argo-event
wget https://raw.githubusercontent.com/argoproj/argo-events/v1.3.1/manifests/namespace-install.yaml
cat > kustomization.yaml <<EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kast
resources:
- namespace-install.yaml
EOF
kubectl kustomize . | kubectl apply -f -

Argo eventbus and workflows

# eventbus
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-events/v1.3.1/examples/eventbus/native.yaml -n kast
# workflows
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-workflows/v3.0.2/manifests/namespace-install.yaml -n kast
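Once both manifests are applied, you can verify that the Argo Events and Argo Workflows components come up in the kast namespace (a plain verification step, nothing Punch-specific):

kubectl -n kast get pods
kubectl -n kast get eventbus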

Punchline RBAC

Note: depending on your development use case, you should adjust these privileges.

# rbac: use admin-user SA in your argo templates
kubectl apply -f- <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kast
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kast
EOF
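With this binding in place, Argo resources submitted in the kast namespace can run with the admin-user service account. Below is a minimal, purely illustrative Argo Workflow skeleton showing where the service account is referenced; the image and command are placeholders, not an actual punchline:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: rbac-check-
  namespace: kast
spec:
  # run the workflow pods with the admin-user service account created above
  serviceAccountName: admin-user
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.14
        command: ["echo", "admin-user service account is usable"]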

Kafka installation

Follow this link to install a Kafka cluster for your development environment.

Deploy mandatory k8s resources

These resources must exist for each tenant (i.e. each namespace).

You will have to contact us to get access to our container repository and retrieve the Punch images.

Follow this documentation to install them.
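As an illustration of one of these per-tenant resources, the image_pull_secret referenced in the configuration above (mysecret) is a standard docker-registry secret. A minimal sketch, assuming the gitlab.thalesdigital.io:5005 registry and credentials of your own:

kubectl -n kast create secret docker-registry mysecret \
  --docker-server=gitlab.thalesdigital.io:5005 \
  --docker-username=<your-username> \
  --docker-password=<your-access-token>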

Domain name

On minikube, configuring a local DNS on the host machine so that you can use domain names instead of IP addresses can be a pain. Minikube provides an easy alternative with its tunnel command, which will help you move forward without any headache.

Example with minio
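If the minio chart repository is not yet configured on your machine, add it first. The URL below is the legacy MinIO chart repository matching the accessKey/secretKey values used here; it may differ with more recent chart versions:

helm repo add minio https://helm.min.io/
helm repo update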

# create a namespace
kubectl create ns minio
# install minio in minio namespace
helm install --set accessKey=admin,secretKey=password --generate-name minio/minio --namespace minio
# install a loadbalancer in minio namespace
kubectl -n minio apply -f- <<EOF
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  selector:
    app: minio
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
  type: LoadBalancer
EOF

kubectl -n kast apply -f- <<EOF
---
apiVersion: v1
data:
  # base64 of admin
  accesskey: YWRtaW4=
  # base64 of password
  secretkey: cGFzc3dvcmQ=
kind: Secret
metadata:
  name: artifacts-minio
  namespace: kast
EOF

Expose the IP address to your host machine in /etc/hosts:

# switch the current kubectl context to the minio namespace
kubectl config set-context --current --namespace=minio
# use tunnel to expose the internal minio ip address to the host machine
# (run it in a separate terminal: it stays in the foreground)
minikube tunnel
# check ip exposed by using svc
kubectl get svc -n minio

# get port and address of loadbalancer
PORT=$(kubectl get svc -n minio minio-service -o jsonpath="{.spec.ports[0].port}")
ADDRESS=$(kubectl get svc -n minio minio-service -o jsonpath="{.spec.clusterIP}")
echo "$ADDRESS minio.demo.kubernetes" | sudo tee -a /etc/hosts

Reset Minikube

If you need to restart from a fresh starting point:

minikube delete --purge --all