K8s: Useful kubectl logs options - Thu, Aug 18, 2022
Despite having logging and monitoring tools like Grafana Loki and ELK, I still use kubectl logs, just because it is sometimes the quickest way to grep for something in a specific container log. This snippet contains some nifty options of kubectl logs that I found quite useful.
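For example, something like the following is often all I need; the pod name is just the example one used further below, and the search pattern is an illustration:
kubectl logs nginx-controller-778465dd87-ljpjs | grep -i error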
Select logs by deployment
To get the logs for a specific deployment, you can use the following syntax:
kubectl logs -f deployment/nginx-controller
The output will contain logs from the deployment's pods without you having to look up a pod name first. Note that if the deployment runs multiple replicas, kubectl picks a single pod and tells you which one it chose; to see all replicas at once, use a label selector as shown below. This can be useful for quickly finding an error statement in a complex deployment.
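A sketch of that workflow, piping the deployment logs into grep; --all-containers=true is a real kubectl logs flag (not part of the original example) that includes every container of the selected pod, and the search pattern is again just an illustration:
kubectl logs deployment/nginx-controller --all-containers=true | grep -i error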
Include pod name in logs
If you want to use kubectl logs to get the logs of multiple pods, it might not be obvious which pod actually logged which statement. That is especially true for replicas. The --prefix option adds a prefix to each log line that lets you identify exactly which container in which pod logged the message:
kubectl logs --prefix=true -f deployment/nginx-controller
[pod/nginx-controller-778465dd87-ljpjs/controller] W0818 05:19:31.005124 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
[pod/nginx-controller-778465dd87-ljpjs/controller] I0818 05:19:31.005282 1 main.go:230] "Creating API client" host="https://102.68.0.1:443"
Again, this is useful for finding something like an error in more complex deployments.
Select logs by labels
While the logs of a deployment are naturally related, the label selector lets you get logs from containers that merely share the same labels:
kubectl logs --tail=-1 --prefix=true -l app.kubernetes.io/part-of=backend
That command gives me all logs from the backend part of a system. This is useful for quickly tracing an error ID that is logged by different containers.
Note that I added the option --tail=-1. By default, the number of log lines is limited to 10 when using a label selector; -1 gives you all log lines.
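Putting the options together, a sketch of tracing one error ID across all backend containers; the ID req-4711 is made up for illustration:
kubectl logs --tail=-1 --prefix=true -l app.kubernetes.io/part-of=backend | grep "req-4711"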
Logs of crash-looping pods
Sometimes you have pods that are crash looping. If the crashed containers are not kept around, it is hard to find out why the pod failed. The --previous option lets you get the log of the previously terminated container:
kubectl logs nginx-controller-778465dd87-ljpjs --previous
This is useful to get information about the reasons for the crash.
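If you are not sure which pod is crash looping in the first place, the RESTARTS and STATUS columns of kubectl get pods point you to it, and kubectl describe pod shows the last terminated state including the exit code; both are standard kubectl commands, and the pod name is the example one from above:
kubectl get pods
kubectl describe pod nginx-controller-778465dd87-ljpjs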
P.S.: Some of these examples were inspired by the kubectl cheat sheet, which I can highly recommend for finding commands for the most common use cases…