Auditing Deployment Changes with Kubernetes Audit Logs

Learn how to record all API requests to the Kubernetes API server, including kubectl commands and internal requests from Kubernetes components and controllers.

In today’s cloud-native world, Kubernetes is essential for orchestrating, managing, and scaling applications. As organizations adopt Kubernetes, visibility into cluster changes becomes crucial, and audit logging is a key part of operating a cluster securely. Audit logs capture detailed records of all API activity in the cluster, enabling change tracking, issue troubleshooting, and regulatory compliance.

What are Kubernetes Audit Logs?

Kubernetes audit logs record all API requests to the Kubernetes API server, including requests from users executing kubectl commands and internal requests from Kubernetes components and controllers. The audit log entries contain rich metadata about each API request, including:

  • Timestamp of when the request was received
  • Source IP address of the client making the request
  • Authenticated user or service account that initiated the request
  • Specific operation being requested (get, create, update, delete, etc.)
  • Group, version, resource, and namespace targeted by the request
  • Complete request and response object payloads
  • Response status codes indicating success or failure

By maintaining a centralized audit trail of all API interactions, Kubernetes audit logs provide administrators and security teams with deep visibility into deployments, configurations, access controls, and other operational activities. This audit trail is invaluable for security monitoring, change tracking, compliance reporting, and forensic investigations.
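To make the metadata fields above concrete, here is a minimal, hypothetical audit event in the audit.k8s.io/v1 JSON format (all values are invented for illustration; real events carry additional fields such as annotations and audit IDs), with a short Python sketch that pulls out the "who, what, where, when" each entry answers:

```python
import json

# A minimal, hypothetical audit event in the audit.k8s.io/v1 format.
sample_event = '''
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "RequestResponse",
  "stage": "ResponseComplete",
  "requestReceivedTimestamp": "2024-05-01T12:00:00.000000Z",
  "sourceIPs": ["10.0.0.5"],
  "user": {"username": "alice", "groups": ["system:authenticated"]},
  "verb": "create",
  "objectRef": {
    "resource": "deployments",
    "namespace": "default",
    "name": "web",
    "apiGroup": "apps",
    "apiVersion": "v1"
  },
  "responseStatus": {"code": 201}
}
'''

event = json.loads(sample_event)

# Extract the "who, what, where, when" answered by each entry.
who = event["user"]["username"]
what = f'{event["verb"]} {event["objectRef"]["resource"]}/{event["objectRef"]["name"]}'
where = event["objectRef"]["namespace"]
when = event["requestReceivedTimestamp"]
status = event["responseStatus"]["code"]

print(f"{when} {who} {what} in {where} -> HTTP {status}")
```

A single line like this is enough to answer most change-tracking questions: the user, the verb, the target object and namespace, and whether the request succeeded.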

Why monitor Kubernetes Audit Logs?

Monitoring Kubernetes audit logs offers numerous security, compliance, and operational benefits:

Security monitoring and threat detection:

  • Detect unauthorized access attempts or suspicious activities, such as failed login attempts and attempts to access sensitive resources.
  • Identify insider threats by tracking user activities and resource access patterns.
  • Implement security information and event management (SIEM) by integrating audit logs into centralized security monitoring tools.

Change tracking and operational visibility:

  • Maintain a chronological record of all create, update, and delete activities in the cluster.
  • Track configuration changes to deployments, services, network policies, and other Kubernetes objects.
  • Monitor administrative activities like namespace creation, role binding updates, and cluster autoscaling events.

Forensics and incident response:

  • In the event of a security breach or compliance violation, audit logs serve as a vital source of information for forensic analysis and root cause investigations.
  • Trace the sequence of events and actions that led to an incident, including who performed those actions and what resources were impacted.
  • Support evidence gathering and reporting requirements during incident response procedures.

Compliance and audit reporting:

  • Many regulatory frameworks and industry standards, such as HIPAA, PCI-DSS, GDPR, and NIST Cybersecurity Framework, mandate the maintenance of comprehensive audit trails for all system activities.
  • Audit logs provide concrete evidence of the implementation of security controls, user access management, and adherence to defined policies and procedures.
  • Generate compliance reports and audit artifacts by querying and analyzing the audit log data.

By monitoring and analyzing Kubernetes audit logs, organizations can follow security best practices, implement robust access controls, maintain operational hygiene, and quickly investigate and respond to incidents or compliance violations.

Enabling Kubernetes Audit Logging

Audit logging is not enabled by default in Kubernetes clusters. It must be explicitly configured by defining an audit policy that specifies what events should be logged and at what level of detail.

The audit policy is defined in a YAML file, which is passed to the kube-apiserver via the --audit-policy-file flag. The policy file allows you to set different audit levels (None, Metadata, Request, or RequestResponse) based on the user, resource, verb, or object involved.

apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

Once the policy file is created, update the kube-apiserver manifest to point to the policy file location and the path where audit logs should be written. After restarting the kube-apiserver pod, it will begin writing audit logs per the configured policy.

spec:
  containers:
    - command:
    <snip>
      --audit-policy-file=/etc/kubernetes/audit-policy.yaml 
      --audit-log-path=/var/log/kubernetes/audit/audit.log
      --audit-log-maxage=30 # number of days to retain old audit log files
      --audit-log-maxbackup=10 # maximum number of retained backup files
      --audit-log-maxsize=100 # maximum size in MB of a log file before rotation
    <snip>

These flags:

  • Specify the location of your audit policy file.
  • Set the path where audit logs will be stored.
  • Configure log rotation: keep logs for 30 days, with a maximum of 10 backup files, each up to 100 MB in size.
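These rotation flags also imply a rough worst-case disk budget per control-plane node: the active file plus up to maxbackup rotated files, each capped at maxsize. A quick back-of-the-envelope check:

```python
# Rough worst-case disk usage implied by the rotation flags above:
# the active audit log plus up to --audit-log-maxbackup rotated files,
# each capped at --audit-log-maxsize MB. Actual usage may be lower,
# since --audit-log-maxage prunes files older than 30 days.
maxbackup = 10    # --audit-log-maxbackup
maxsize_mb = 100  # --audit-log-maxsize

worst_case_mb = (maxbackup + 1) * maxsize_mb
print(worst_case_mb)  # 1100 MB, i.e. about 1.1 GB
```

Make sure the host path backing the audit log directory has at least this much headroom.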

Once this step is complete, mount the policy file and the log directory into the kube-apiserver container:

...
volumeMounts:
  - mountPath: /etc/kubernetes/audit-policy.yaml
    name: audit
    readOnly: true
  - mountPath: /var/log/kubernetes/audit/
    name: audit-log
    readOnly: false

And finally, define the corresponding hostPath volumes:

...
volumes:
- name: audit
  hostPath:
    path: /etc/kubernetes/audit-policy.yaml
    type: File

- name: audit-log
  hostPath:
    path: /var/log/kubernetes/audit/
    type: DirectoryOrCreate

After making these changes, the kube-apiserver pod will automatically restart. If it doesn’t, check the pod logs for issues at:

/var/log/pods/kube-system_kube-apiserver-<hostname>_<id>/kube-apiserver/0.log

Now that the configuration is complete, logs will appear in the specified log file.
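As a sketch of how the resulting log can be consumed, the snippet below filters audit events down to write operations (create, update, patch, delete) on Deployments, which is exactly the deployment change trail this article is after. The audit log is a stream of JSON events, one per line; for a self-contained example, two hypothetical sample lines stand in for reading the configured log file:

```python
import json

# Two hypothetical JSON-lines audit events, standing in for the contents
# of the audit log file configured above.
sample_log_lines = [
    '{"verb": "create", "user": {"username": "alice"}, '
    '"objectRef": {"resource": "deployments", "namespace": "default", "name": "web"}, '
    '"requestReceivedTimestamp": "2024-05-01T12:00:00Z"}',
    '{"verb": "get", "user": {"username": "bob"}, '
    '"objectRef": {"resource": "pods", "namespace": "default", "name": "web-abc"}, '
    '"requestReceivedTimestamp": "2024-05-01T12:00:05Z"}',
]

WRITE_VERBS = {"create", "update", "patch", "delete"}

def deployment_changes(lines):
    """Yield (timestamp, user, verb, namespace/name) for Deployment writes."""
    for line in lines:
        event = json.loads(line)
        ref = event.get("objectRef") or {}
        if ref.get("resource") == "deployments" and event["verb"] in WRITE_VERBS:
            yield (
                event["requestReceivedTimestamp"],
                event["user"]["username"],
                event["verb"],
                f'{ref.get("namespace", "")}/{ref.get("name", "")}',
            )

changes = list(deployment_changes(sample_log_lines))
for ts, user, verb, target in changes:
    print(f"{ts} {user} {verb} {target}")
```

In practice you would iterate over the lines of the audit log file itself (or ship the log to a SIEM and run the equivalent query there); the read-only "get" event is correctly filtered out, leaving only the deployment write.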

Conclusion

As Kubernetes becomes central to deploying and operating modern applications, auditing cluster activities is critical for security, compliance, and governance. Kubernetes audit logs serve as your cluster’s record keeper, capturing every change. By setting up audit logging, you can track who did what and when, providing a clear picture of your cluster’s activities. Enabling and monitoring these logs gives organizations vital visibility into cluster changes, potential threats, and overall operational hygiene.

This visibility is invaluable. It helps spot security threats, understand what changed when issues arise, and prove compliance with industry rules. Although setting up audit logs requires effort—choosing what to log and where to store it—the benefits are significant. As your cluster grows, using smart tools to manage and analyze these logs becomes essential.

In short, Kubernetes audit logs are your cluster’s safety net. They record every deployment change, helping you build and grow your applications with confidence. In a fast-changing environment, these logs ensure you always know what’s happening in your Kubernetes cluster.

Aviator.co | Blog