Reporting
Policy reports are Kubernetes Custom Resources, generated and managed automatically by Kyverno, which contain the results of applying matching Kubernetes resources to Kyverno `ClusterPolicy` or `Policy` resources. They are created for `validate` and `verifyImages` rules when the policy in which they are contained is configured with `spec.validationFailureAction: audit` or `spec.background: true` and a resource applies to one or more rules according to the policy definition. If a resource violates multiple rules, there will be multiple entries. When a resource is deleted, its entry is removed from the report. For example, if a `validate` policy in `audit` mode contains a single rule which requires that all resources set the label `team`, and a user creates a Pod which does not set the `team` label, Kyverno will allow the Pod's creation but record it as a `fail` result in a policy report because the Pod violates the policy and rule. Policies configured with `spec.validationFailureAction: enforce` immediately block violating resources and therefore do not generate policy reports. Policy reports are an ideal way to observe the impact a Kyverno policy may have in a cluster without causing disruption. The insights gained from these policy reports provide valuable feedback both to users and developers, so they may take appropriate action to bring offending resources into alignment, and to policy authors or cluster operators, to help them refine policies prior to changing them to `enforce` mode. Because reports are decoupled from policies, standard Kubernetes RBAC can be applied to separate those who can see and manipulate policies from those who can only view reports.
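As a sketch of that RBAC separation, the Role below (its name and Namespace are illustrative, not from the Kyverno docs) grants read-only access to `PolicyReport` resources without granting any access to Kyverno policies themselves.

```yaml
# Illustrative only: a read-only Role for policy reports in one Namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: policyreport-viewer   # hypothetical name
  namespace: default
rules:
- apiGroups: ["wgpolicyk8s.io"]
  resources: ["policyreports"]
  verbs: ["get", "list", "watch"]
```

Binding such a Role to report consumers lets them run `kubectl get polr` in that Namespace without being able to read or edit the policies which produced the reports.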
Policy reports are created based on two different triggers: an admission event (a `CREATE`, `UPDATE`, or `DELETE` action performed against a resource) or the result of a background scan discovering existing resources. Policy reports, like Kyverno policies, have both Namespaced and cluster-scoped variants: a `PolicyReport` is a Namespaced resource while a `ClusterPolicyReport` is a cluster-scoped resource. However, unlike `Policy` and `ClusterPolicy` resources, `PolicyReport` and `ClusterPolicyReport` resources contain results for resources at the same scope as the report, not the scope of the Kyverno policy. For example, suppose a `ClusterPolicy` (a cluster-scoped policy) contains a rule which matches on Pods (a Namespaced resource). Results generated from this policy and rule are written to a `PolicyReport` in the Namespace where the Pod exists.
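That scope rule can be summed up in a few lines. The helper below is a sketch for illustration only (the function and tuple shape are invented, not Kyverno code): a result lands in a Namespaced `PolicyReport` when the matched resource has a Namespace, and in the `ClusterPolicyReport` otherwise.

```python
def report_target(resource_namespace):
    """Decide which report kind receives a result, based on the
    matched resource's own scope (not the policy's scope)."""
    if resource_namespace:
        return ("PolicyReport", resource_namespace)
    return ("ClusterPolicyReport", None)

print(report_target("default"))  # ('PolicyReport', 'default')
print(report_target(None))       # ('ClusterPolicyReport', None)
```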
Kyverno uses a standard and open format published by the Kubernetes Policy Working Group, which proposes a common policy report format across Kubernetes tools. Below is an example of a `ClusterPolicyReport` which shows Namespaces in violation of a `validate` rule requiring the `team` label to be present.
```yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  creationTimestamp: "2022-10-18T11:55:20Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: kyverno
  name: cpol-require-ns-labels
  resourceVersion: "950"
  uid: 6dde3d0d-d2e8-48d9-8b56-47b3c5e7a3b3
results:
- category: Best Practices
  message: 'validation error: The label `team` is required. rule check-for-ns-labels
    failed at path /metadata/labels/team/'
  policy: require-ns-labels
  resources:
  - apiVersion: v1
    kind: Namespace
    name: kube-node-lease
    uid: 06e5056f-76a3-461a-8d45-2793b8bd5bbc
  result: fail
  rule: check-for-ns-labels
  scored: true
  severity: medium
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1666094105
```
The report's contents are found under the `results[]` object, which displays a number of fields including the resource that was matched against the rule in the parent policy.
Policy reports are created in a 1:1 relationship with a Kyverno policy. The naming follows the convention `<policy_type>-<policy_name>`, where `<policy_type>` uses the alias `pol` (for `Policy`) or `cpol` (for `ClusterPolicy`).
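That naming convention is simple enough to express as a one-line mapping. The helper below is illustrative only, not part of Kyverno:

```python
def report_name(policy_kind: str, policy_name: str) -> str:
    """Derive a policy report's name from its parent policy,
    per the documented <policy_type>-<policy_name> convention."""
    aliases = {"Policy": "pol", "ClusterPolicy": "cpol"}
    return f"{aliases[policy_kind]}-{policy_name}"

print(report_name("ClusterPolicy", "require-ns-labels"))
# cpol-require-ns-labels
```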
Note
Policy reports show policy results for current resources in the cluster only. For information on resources that were blocked during admission controls, use the policy rule execution metric or look at events on the corresponding Kyverno policy.

Policy reports have a few configuration options available. For details, see the container flags section.
Note
Policy reports created from background scans are not subject to the configuration of either a resource filter or a Namespace selector defined in the Kyverno ConfigMap. This is because the overhead required to generate these reports is minimal and should have no performance impact.

Report result logic
Entries in a policy report contain a `result` field which can be one of `pass`, `skip`, `warn`, `error`, or `fail`.
| Result | Description |
|---|---|
| `pass` | The resource was applicable to a rule and the pattern passed evaluation. |
| `skip` | Preconditions were not satisfied (if applicable) in a rule and so further processing was not performed. |
| `fail` | The resource failed the pattern evaluation. |
| `warn` | The annotation `policies.kyverno.io/scored` has been set to `"false"` in the policy, converting otherwise `fail` results to `warn`. |
| `error` | Variable substitution failed outside of preconditions and elsewhere in the rule (e.g., in the pattern). |
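As a sketch of how these results roll up, the snippet below tallies a report's `results[]` entries into the same five buckets that appear in a report's `summary` object (the sample data here is invented for illustration):

```python
from collections import Counter

RESULT_TYPES = ("pass", "fail", "warn", "error", "skip")

def summarize(results):
    """Tally results[] entries into a summary object like the one
    found in PolicyReport and ClusterPolicyReport resources."""
    counts = Counter(r["result"] for r in results)
    return {t: counts.get(t, 0) for t in RESULT_TYPES}

results = [
    {"policy": "require-ns-labels", "result": "fail"},
    {"policy": "require-ns-labels", "result": "pass"},
    {"policy": "require-ns-labels", "result": "fail"},
]
print(summarize(results))
# {'pass': 1, 'fail': 2, 'warn': 0, 'error': 0, 'skip': 0}
```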
Viewing policy report summaries
You can view a summary of the Namespaced policy reports using the following command:
```shell
kubectl get policyreport -A
```
For example, below are the policy reports for a small test cluster (`polr` is the alias for `PolicyReports`) in which the only policy installed is named `disallow-privileged-containers`.
```shell
$ kubectl get polr -A
NAMESPACE     NAME                                  PASS   FAIL   WARN   ERROR   SKIP   AGE
kube-system   cpol-disallow-privileged-containers   14     0      0      0       0      6s
kyverno       cpol-disallow-privileged-containers   2      0      0      0       0      5s
default       cpol-disallow-privileged-containers   0      1      0      0       0      5s
```
Similarly, you can view the cluster-wide report using:
```shell
kubectl get clusterpolicyreport
```
Tip
For a graphical view of Policy Reports, check out Policy Reporter.

Note
If you've set the `policies.kyverno.io/scored` annotation to `"false"` in your policy, then policy violations will be reported as warnings rather than failures. By default, it is set to `"true"` and policy violations are reported as failures.

Viewing policy violations
Since the report provides information on all rule and resource execution, returning only select entries requires a filter expression. Policy reports can be inspected using either `kubectl describe` or `kubectl get`. For example, here is a command, requiring `yq`, to view only failures for the (Namespaced) report called `cpol-disallow-privileged-containers`:
```shell
kubectl get polr cpol-disallow-privileged-containers -o jsonpath='{.results[?(@.result=="fail")]}' | yq -p json -
```
```yaml
category: Pod Security Standards (Baseline)
message: 'validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. . rule privileged-containers failed at path /spec/containers/0/securityContext/privileged/'
policy: disallow-privileged-containers
resources:
  - apiVersion: v1
    kind: Pod
    name: h0nk
    namespace: default
    uid: 71a4a8c8-8e02-46ed-af3d-db38f590a5b6
result: fail
rule: privileged-containers
scored: true
severity: medium
source: kyverno
timestamp:
  nanos: 0
  seconds: 1.666094801e+09
---
category: Pod Security Standards (Baseline)
message: 'validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. . rule privileged-containers failed at path /spec/containers/0/securityContext/privileged/'
policy: disallow-privileged-containers
resources:
  - apiVersion: v1
    kind: Pod
    name: badpod
    namespace: default
    uid: 8ef79afd-f2b4-44f8-8c50-dabdda45b8c0
result: fail
rule: privileged-containers
scored: true
severity: medium
source: kyverno
timestamp:
  nanos: 0
  seconds: 1.666095335e+09
```
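If `yq` is unavailable, the same failure filter can be applied in a few lines of Python against `kubectl get polr ... -o json` output. This is a sketch; the sample report below is inline rather than fetched from a live cluster:

```python
import json

def failed_results(report: dict):
    """Return only the entries in a policy report's results[] list
    whose result field is 'fail'."""
    return [r for r in report.get("results", []) if r.get("result") == "fail"]

# Inline stand-in for: kubectl get polr <name> -o json
report = json.loads('{"results": [{"rule": "a", "result": "pass"}, {"rule": "b", "result": "fail"}]}')
print(failed_results(report))
# [{'rule': 'b', 'result': 'fail'}]
```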
Example: Trigger a PolicyReport
By default, a `PolicyReport` object (Namespaced) is created in the same Namespace where resources apply to one or more Kyverno policies, whether `Policy` or `ClusterPolicy` objects.

Assume a single Kyverno ClusterPolicy exists containing a single rule which ensures Pods cannot mount Secrets as environment variables.
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: secrets-not-from-env-vars
spec:
  background: true
  validationFailureAction: audit
  rules:
  - name: secrets-not-from-env-vars
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Secrets must be mounted as volumes, not as environment variables."
      pattern:
        spec:
          containers:
          - name: "*"
            =(env):
            - =(valueFrom):
                X(secretKeyRef): "null"
```
Creating a Pod in the `default` Namespace which does not use any Secrets (and thereby does not violate the `secrets-not-from-env-vars` rule in the ClusterPolicy) will generate the first entry in the PolicyReport, listed as a `pass`.
```shell
$ kubectl run busybox --image busybox:1.28 -- sleep 9999
pod/busybox created

$ kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          66s

$ kubectl get polr
NAME                             PASS   FAIL   WARN   ERROR   SKIP   AGE
cpol-secrets-not-from-env-vars   1      0      0      0       0      6s
```
Inspect the PolicyReport in the `default` Namespace to view its contents. Notice that the `busybox` Pod is listed as having passed.
```shell
$ kubectl get polr cpol-secrets-not-from-env-vars -o yaml

<snipped>
results:
- message: validation rule 'secrets-not-from-env-vars' passed.
  policy: secrets-not-from-env-vars
  resources:
  - apiVersion: v1
    kind: Pod
    name: busybox
    namespace: default
    uid: 0dd94825-cc6e-435b-982b-fb76ac2fdc2a
  result: pass
  rule: secrets-not-from-env-vars
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1666097147
summary:
  error: 0
  fail: 0
  pass: 1
  skip: 0
  warn: 0
```
Create another Pod which violates the rule in the sample policy. Because the rule is written with `validationFailureAction: audit`, resources which violate the rule are still allowed to be created. If this occurs, another entry is created in the PolicyReport which records this condition as a `fail`. By contrast, with `validationFailureAction: enforce`, an offending resource would be blocked immediately at creation time and therefore would not generate another entry in the report.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    env:
    - name: SECRET_STUFF
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: mysecretname
```
Since the above Pod spec was allowed and it violated the rule, there should now be a failure entry in the PolicyReport in the `default` Namespace.
```shell
$ kubectl get polr cpol-secrets-not-from-env-vars -o yaml

<snipped>
- message: 'validation error: Secrets must be mounted as volumes, not as environment
    variables. rule secrets-not-from-env-vars failed at path /spec/containers/0/env/0/valueFrom/secretKeyRef/'
  policy: secrets-not-from-env-vars
  resources:
  - apiVersion: v1
    kind: Pod
    name: secret-pod
    namespace: default
    uid: 72a7422c-fb6f-486f-b274-1ca0de55d49d
  result: fail
  rule: secrets-not-from-env-vars
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1666098438
summary:
  error: 0
  fail: 1
  pass: 1
  skip: 0
  warn: 0
```
Lastly, delete the Pod called `secret-pod` and once again check the PolicyReport object.
```shell
$ kubectl delete po secret-pod
pod "secret-pod" deleted

$ kubectl get polr cpol-secrets-not-from-env-vars
NAME                             PASS   FAIL   WARN   ERROR   SKIP   AGE
cpol-secrets-not-from-env-vars   1      0      0      0       0      2m21s
```
Notice how the PolicyReport removed the previously-failed entry once the violating Pod was deleted.
Example: Trigger a ClusterPolicyReport
A ClusterPolicyReport is the same concept as a PolicyReport, only it contains results for resources which are cluster-scoped rather than Namespaced.

As an example, create the following sample ClusterPolicy containing a single rule which validates that all new Namespaces contain the label `thisshouldntexist` with some value. Notice how `validationFailureAction: audit` and `background: true` are set in this ClusterPolicy.
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-ns-labels
spec:
  validationFailureAction: audit
  background: true
  rules:
  - name: check-for-labels-on-namespace
    match:
      any:
      - resources:
          kinds:
          - Namespace
    validate:
      message: "The label `thisshouldntexist` is required."
      pattern:
        metadata:
          labels:
            thisshouldntexist: "?*"
```
After creating this sample ClusterPolicy, check for the existence of a ClusterPolicyReport object.
```shell
$ kubectl get cpolr
NAME                     PASS   FAIL   WARN   ERROR   SKIP   AGE
cpol-require-ns-labels   0      5      0      0       0      76m
```
Notice that a ClusterPolicyReport named `cpol-require-ns-labels` exists with five failures.

The ClusterPolicyReport, when inspected, has the same structure as the PolicyReport object and contains entries in the `results` and `summary` objects with the outcomes of a policy audit.
```yaml
results:
- message: 'validation error: The label `thisshouldntexist` is required. rule check-for-labels-on-namespace
    failed at path /metadata/labels/thisshouldntexist/'
  policy: require-ns-labels
  resources:
  - apiVersion: v1
    kind: Namespace
    name: kube-node-lease
    uid: 06e5056f-76a3-461a-8d45-2793b8bd5bbc
  result: fail
  rule: check-for-labels-on-namespace
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1666098654
- message: 'validation error: The label `thisshouldntexist` is required. rule check-for-labels-on-namespace
    failed at path /metadata/labels/thisshouldntexist/'
  policy: require-ns-labels
  resources:
  - apiVersion: v1
    kind: Namespace
    name: default
    uid: 4ffe22fd-0927-4ed1-8b04-50ca7ed58626
  result: fail
  rule: check-for-labels-on-namespace
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1666098654
- message: 'validation error: The label `thisshouldntexist` is required. rule check-for-labels-on-namespace
    failed at path /metadata/labels/thisshouldntexist/'
  policy: require-ns-labels
  resources:
  - apiVersion: v1
    kind: Namespace
    name: kyverno
    uid: 5d87cd66-ce30-4abc-b863-f3b97715a5f1
  result: fail
  rule: check-for-labels-on-namespace
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1666098654
- message: 'validation error: The label `thisshouldntexist` is required. rule check-for-labels-on-namespace
    failed at path /metadata/labels/thisshouldntexist/'
  policy: require-ns-labels
  resources:
  - apiVersion: v1
    kind: Namespace
    name: kube-public
    uid: c077ee71-435b-4921-9e05-8751fee71b64
  result: fail
  rule: check-for-labels-on-namespace
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1666098654
- message: 'validation error: The label `thisshouldntexist` is required. rule check-for-labels-on-namespace
    failed at path /metadata/labels/thisshouldntexist/'
  policy: require-ns-labels
  resources:
  - apiVersion: v1
    kind: Namespace
    name: kube-system
    uid: e63fabde-b572-4b07-b899-b2230f4eac69
  result: fail
  rule: check-for-labels-on-namespace
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1666098654
summary:
  error: 0
  fail: 5
  pass: 0
  skip: 0
  warn: 0
```
Report internals
The `PolicyReport` and `ClusterPolicyReport` resources are the final objects composed from matching resources as determined by Kyverno `Policy` and `ClusterPolicy` objects; however, these reports are built from four intermediary resources. For matching resources caught during admission mode, `AdmissionReport` and `ClusterAdmissionReport` resources are created. For results of background processing, `BackgroundScanReport` and `ClusterBackgroundScanReport` resources are created. An example of a `ClusterAdmissionReport` is shown below.
```yaml
apiVersion: kyverno.io/v1alpha2
kind: ClusterAdmissionReport
metadata:
  creationTimestamp: "2022-10-18T13:15:09Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: kyverno
    audit.kyverno.io/resource.hash: a7ec5160f220c5b83c26b5c8f7dc35b6
    audit.kyverno.io/resource.uid: 61946422-14ba-4aa2-94b4-229d38446381
    cpol.kyverno.io/require-ns-labels: "4773"
  name: c0cc7337-9bcd-4d53-abb2-93f7f5555216
  resourceVersion: "4986"
  uid: 10babc6c-9e6e-4386-abed-c13f50091523
spec:
  owner:
    apiVersion: v1
    kind: Namespace
    name: testing
    uid: 61946422-14ba-4aa2-94b4-229d38446381
  results:
  - message: 'validation error: The label `thisshouldntexist` is required. rule check-for-labels-on-namespace
      failed at path /metadata/labels/thisshouldntexist/'
    policy: require-ns-labels
    result: fail
    rule: check-for-labels-on-namespace
    scored: true
    source: kyverno
    timestamp:
      nanos: 0
      seconds: 1666098909
  summary:
    error: 0
    fail: 1
    pass: 0
    skip: 0
    warn: 0
```
These intermediary resources have the same basic contents as a policy report and are used internally by Kyverno to build the final policy report. Kyverno merges these results automatically into the appropriate policy report; no manual interaction is typically required.
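Conceptually, that merge step amounts to concatenating the intermediary results and recomputing the summary. The sketch below illustrates the idea only; it is not Kyverno's actual implementation, and the input dicts are invented stand-ins for AdmissionReport and BackgroundScanReport contents:

```python
def merge_reports(intermediary_reports):
    """Fold the results[] of several intermediary reports into one
    results list plus a recomputed summary (conceptual sketch only)."""
    results = []
    for rpt in intermediary_reports:
        results.extend(rpt.get("results", []))
    summary = {t: 0 for t in ("pass", "fail", "warn", "error", "skip")}
    for r in results:
        summary[r["result"]] += 1
    return {"results": results, "summary": summary}

admission = {"results": [{"rule": "check-for-labels-on-namespace", "result": "fail"}]}
background = {"results": [{"rule": "check-for-labels-on-namespace", "result": "pass"}]}
merged = merge_reports([admission, background])
print(merged["summary"])
# {'pass': 1, 'fail': 1, 'warn': 0, 'error': 0, 'skip': 0}
```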