Collecting Logs with Fluent Bit
This document describes how to set up Fluent Bit, a log processor and forwarder, to collect your Kubernetes logs in a central directory. This is not required for running Knative, but can be helpful with Knative Serving, which will automatically delete pods (and their associated logs) when they are no longer needed. Note that Fluent Bit supports exporting to a number of other log providers; if you already have an existing log provider (for example, Splunk, Datadog, Elasticsearch, or Stackdriver), then you may only need the second part, setting up and configuring the log forwarders.
Setting up log collection consists of two pieces: running a log forwarding DaemonSet on each node, and running a collector somewhere in the cluster (in our example, we use a StatefulSet which stores logs on a Kubernetes PersistentVolumeClaim, but you could also use a HostPath).
Setting up the collector
It’s useful to set up the collector before the forwarders, because you’ll need the address of the collector when configuring the forwarders, and the forwarders may queue logs until the collector is ready.
fluent-bit-collector.yaml defines a StatefulSet as well as a Kubernetes Service which allows accessing and reading the logs from within the cluster. The supplied configuration will create the monitoring configuration in a namespace called logging. You can apply the configuration with:

```bash
kubectl apply --filename https://github.com/knative/docs/raw/main/docs/install/collecting-logs/fluent-bit-collector.yaml
```
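Once applied, it’s worth a quick sanity check that the collector came up in the logging namespace. A sketch, assuming the log-collector-0 pod name used elsewhere in this guide:

```shell
# List what the manifest created in the logging namespace.
kubectl get pods,services --namespace logging

# Wait for the collector pod to become Ready before configuring forwarders.
kubectl wait --namespace logging --for=condition=Ready pod/log-collector-0 --timeout=120s
```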
The default configuration will classify logs into Knative, apps (pods with an app= label which aren’t Knative), and a default category logged by pod name; this can be changed by updating the collector’s ConfigMap before or after installation. Once the ConfigMap is updated, you’ll need to restart Fluent Bit (for example, by deleting the pod and letting the StatefulSet recreate it).
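For example, assuming the collector pod is named log-collector-0 (the name used in the commands below), the restart can be done with:

```shell
# Delete the collector pod; the StatefulSet controller recreates it,
# and the new pod picks up the updated ConfigMap on startup.
kubectl delete pod --namespace logging log-collector-0
```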
To access the logs through your web browser:

```bash
kubectl port-forward --namespace logging service/log-collector 8080:80
```

And then visit http://localhost:8080/.
You can also open a shell in the nginx pod and search the logs using unix tools:

```bash
kubectl exec --namespace logging --stdin --tty --container nginx log-collector-0 -- /bin/sh
```
Setting up the forwarders
For the most part, you can follow the Fluent Bit directions for installing on Kubernetes. Those directions will set up a Fluent Bit DaemonSet which forwards logs to Elasticsearch by default; when the directions call for creating the ConfigMap, you’ll want to either replace the elasticsearch configuration with fluent-bit-configmap.yaml or add the following block to the ConfigMap and update the `@INCLUDE output-elasticsearch.conf` to `@INCLUDE output-forward.conf`:

```
output-forward.conf: |
    [OUTPUT]
        Name                 forward
        Host                 log-collector.logging
        Port                 24224
        Require_ack_response True
```
If you are using a different log collection infrastructure (Splunk, for example), follow the directions in the Fluent Bit documentation on how to configure your forwarders.
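As an illustration, pointing the forwarders at Splunk would mean swapping the [OUTPUT] stanza for Fluent Bit’s splunk plugin. A rough sketch only; the host and token below are placeholders you’d replace with your own values, and the Fluent Bit documentation lists the full set of parameters:

```
output-splunk.conf: |
    [OUTPUT]
        Name         splunk
        Match        *
        Host         splunk.example.com
        Port         8088
        Splunk_Token YOUR-HEC-TOKEN
```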
NOTE: This describes a development environment setup, and is not appropriate for production.
If you are using a local Kubernetes cluster for development (Kind, Docker
Desktop, or Minikube), you can create a
hostPath PersistentVolume to store the
logs on your desktop OS. This will allow you to use all your normal desktop
tools on the files without needing Kubernetes-specific tools.
The PersistentVolume will look something like this, but the hostPath will vary based on your Kubernetes software and host operating system. Some example values are documented below.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-logs
  labels:
    app: logs-collector
spec:
  accessModes:
    - "ReadWriteOnce"
  storageClassName: manual
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: logs-log-collector-0
    namespace: logging
  capacity:
    storage: 5Gi
  hostPath:
    path: <see below>
```
And then you’ll need to update the StatefulSet’s volumeClaimTemplates to reference the shared-logs volume, like this fragment of yaml:

```yaml
volumeClaimTemplates:
  - metadata:
      name: logs
    spec:
      accessModes: ["ReadWriteOnce"]
      volumeName: shared-logs
```
When creating your cluster, you’ll need to use a kind-config.yaml and specify extraMounts for each node, like so:

```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: ./logs
        containerPath: /shared/logs
  - role: worker
    extraMounts:
      - hostPath: ./logs
        containerPath: /shared/logs
```
You can then use
/shared/logs as the
spec.hostPath.path in your
PersistentVolume. Note that the directory path
./logs is relative to the
directory that the Kind cluster was created in.
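With that config in place, you can create the cluster from the directory containing kind-config.yaml; a sketch (the directory name ./logs matches the extraMounts above):

```shell
# Create the logs directory first so Kind can bind-mount it into each node.
mkdir -p ./logs

# Create the cluster using the extraMounts configuration.
kind create cluster --config kind-config.yaml
```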
Docker Desktop automatically creates some shared mounts between the host and the guest operating systems, so you only need to know the path to your home directory on the host.
Minikube requires an explicit command to mount a directory into the VM running Kubernetes. This command mounts the logs directory inside the current directory onto /mnt/logs in the VM:

```bash
minikube mount ./logs:/mnt/logs
```

Note that minikube mount runs in the foreground, so leave it running (or background it) while the cluster is in use.
You would then reference /mnt/logs as the spec.hostPath.path in the PersistentVolume.