
Troubleshooting

Pod(s) eviction

Cause

This may be due to insufficient cluster resources.

Why?

This problem can be caused by either:

  • resource policies defined on the namespace/pod/deploy/...
  • cluster nodes that do not have enough resources allocated to them
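
To confirm which of these two causes applies, you can look at the eviction events and at the node conditions; a quick sketch using standard kubectl commands:

# Recent eviction events show which resource ran short (memory, ephemeral storage, ...)
kubectl get events -A --field-selector reason=Evicted

# Node conditions show whether the nodes themselves are under pressure
kubectl describe nodes | grep -E "MemoryPressure|DiskPressure|PIDPressure"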

Solution

  • Scaling up your hardware
  • Adding new workers to your existing Kubernetes cluster
  • Redefining resource policies on resource instances (pods, deploys, ...), as shown in the example after this list
  • Deleting/cleaning up unused instances
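
A quick sketch of how you might check node capacity and redefine requests/limits; the deployment name and the values below are only examples:

# See how much CPU/memory is already requested and limited on each node
kubectl describe nodes | grep -A 5 "Allocated resources"

# Set explicit requests/limits on a deployment (name and values are examples)
kubectl set resources deployment/my-app --requests=cpu=100m,memory=128Mi --limits=cpu=500m,memory=512Mi
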
Kubectl prompt does not return when deleting resources

Cause

kubectl delete ns testingnamespace # kubectl does not return

kubectl get ns # testingnamespace is blocked in Terminating state

kubectl get sparkline -n testingnamespace # returns a list of sparkline resources that were not created properly

In Kubernetes, it is very common to use finalizers to control the deletion of resources. This is even more relevant when operators and CRDs are involved.

Why?

Operators generally watch the state of applied CRD instances and need to guarantee in some way that the desired state of the Kubernetes cluster is reached, hence the use of finalizers.

A common example you may encounter is the deletion of a namespace containing a CRD where some of its instances have finalizers attached to them. The namespace deletion will be blocked and its status set to Terminating.
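
When a namespace is stuck in Terminating, you can list what is still inside it and inspect the finalizers; a sketch reusing the testingnamespace/sparkline example from the Cause section (java-sample is the instance name used in the Solution below):

# List every namespaced resource that still exists in the stuck namespace
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n testingnamespace

# Show the finalizers attached to one of the remaining instances
kubectl get sparkline java-sample -n testingnamespace -o jsonpath='{.metadata.finalizers}'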

Solution

A way to force the deletion, whether finalizers exist or not, is to manually remove them from each resource instance that has them.

The example below shows how to resolve the sparkline garbage collection problem:

# Strip the finalizers so the blocked deletion can complete
RESOURCE_TYPE=sparkline
RESOURCE_NAME=java-sample
kubectl patch $RESOURCE_TYPE/$RESOURCE_NAME -n testingnamespace --type json --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'
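
If several instances are blocked, the same patch can be applied in a loop; a sketch assuming the instances live in the testingnamespace used above:

# Remove the finalizers from every sparkline in the namespace
for NAME in $(kubectl get sparkline -n testingnamespace -o name); do
  kubectl patch "$NAME" -n testingnamespace --type json --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'
done
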
I can't see my Stormline metrics in Prometheus

First, check that you have properly configured your stormline. You should provide:

  • spec.metrics.port: <port>: the port number you want to expose your metrics on. You choose its value.
  • metadata.annotations.prometheus.io/port: "<port>": the same port number.
  • metadata.annotations.prometheus.io/path: "/metrics"
  • metadata.annotations.prometheus.io/scrape: "true"

To check that your metrics are exposed, take a look at your pod logs. You should find this line:

[INFO] message="expose metrics over HTTP" port="<port>"
If you can find it, your metrics are exposed to Prometheus. If no metrics are available, check the scrape configuration of your Prometheus instance.
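
If the log line is present but Prometheus still scrapes nothing, you can query the endpoint directly; a sketch using placeholders in the same <port> style as above (replace <stormline-pod> and <port> with your own values):

# Forward the metrics port of your stormline pod to your machine
kubectl port-forward pod/<stormline-pod> <port>:<port>

# In another terminal, the endpoint should return Prometheus text-format metrics
curl http://localhost:<port>/metrics

# Double-check that the scrape annotations are actually set on the pod
kubectl get pod <stormline-pod> -o jsonpath='{.metadata.annotations}'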