Generate Resources
A generate rule can be used to create additional resources when a new resource is created or when the source is updated. This is useful to create supporting resources, such as new RoleBindings or NetworkPolicies for a Namespace.
The generate rule supports match and exclude blocks, like other rules. Hence, the trigger for applying this rule can be the creation of any resource. It is also possible to match or exclude API requests based on subjects, roles, etc.
The generate rule is triggered during the API CREATE operation. To keep resources synchronized across changes, you can use the synchronize property. When synchronize is set to true, the generated resource is kept in sync with the source resource (which can be defined as part of the policy or may be an existing resource), and generated resources cannot be modified by users. If synchronize is set to false, then users can update or delete the generated resource directly.
Note
As of Kyverno 1.3.0, resources generated with synchronize=true may be modified or deleted by other Kubernetes controllers and users with appropriate access permissions, and Kyverno will recreate or update the resource to comply with configured policies.

When using a generate rule, the origin resource can be either an existing resource defined within Kubernetes, or a new resource defined in the rule itself. When the origin resource is a pre-existing resource such as a ConfigMap or Secret, for example, the clone object is used. When the origin resource is a new resource defined within the manifest of the rule, the data object is used. These are mutually exclusive, and only one may be specified in a rule.
Caution
Deleting the policy containing a generate rule with a data object and synchronize=true will cause immediate deletion of the downstream generated resources. Policies containing a clone object are not subject to this behavior.

Kubernetes has many default resource types even before considering CustomResources defined in CustomResourceDefinitions (CRDs). While Kyverno can generate these CustomResources as well, both CustomResources and certain default Kubernetes resources may require granting additional privileges to the ClusterRole responsible for the generate behavior. To enable Kyverno to generate these other types, see the section on customizing permissions.
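As a minimal sketch, granting permissions for a hypothetical CustomResource could look like the following. The mycorp.com API group and mywidgets resource are made-up placeholders, and the aggregation label shown is an assumption; the exact label your Kyverno release expects is described in the customizing permissions section.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kyverno:generate-mywidgets
  labels:
    # hypothetical aggregation label; verify against your Kyverno version
    app: kyverno
rules:
- apiGroups:
  - mycorp.com
  resources:
  - mywidgets
  verbs:
  - create
  - update
  - delete
  - get
  - list
  - watch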
Note
When generating a custom resource, it is necessary to set the apiVersion (ex., spec.generate.apiVersion) and kind (ex., spec.generate.kind).

Kyverno will create an intermediate object called an UpdateRequest which is used to queue work items for the final resource generation. To get the details and status of a generated resource, check the details of the UpdateRequest. The following will give the list of UpdateRequests.
kubectl get updaterequests -A
An UpdateRequest status can have one of four values:

- Completed: the UpdateRequest controller created resources defined in the policy
- Failed: the UpdateRequest controller failed to process the rules
- Pending: the request is yet to be processed or the resource has not been created
- Skip: marked when triggering the generate policy by adding a label/annotation to an existing resource, while the selector is not defined in the policy itself
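For illustration only, a listing might look like the following. The names, namespaces, and timings here are hypothetical; the columns match the output shown in the Troubleshooting section below.

$ kubectl get updaterequests -A
NAMESPACE   NAME       POLICY             RULETYPE   RESOURCEKIND   RESOURCENAME   RESOURCENAMESPACE   STATUS      AGE
kyverno     ur-x7k2p   zk-kafka-address   generate   Namespace      team-a                             Completed   12s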
Generate a ConfigMap using inline data
This policy sets the Zookeeper and Kafka connection strings for all namespaces based upon a ConfigMap defined within the rule itself. Notice that this rule has the generate.data object defined, in which case the rule will create a new ConfigMap called zk-kafka-address using the data specified in the rule's manifest.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: zk-kafka-address
spec:
  rules:
  - name: k-kafka-address
    match:
      any:
      - resources:
          kinds:
          - Namespace
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
          - default
          - kube-public
          - kyverno
    generate:
      synchronize: true
      apiVersion: v1
      kind: ConfigMap
      name: zk-kafka-address
      # generate the resource in the new namespace
      namespace: "{{request.object.metadata.name}}"
      data:
        kind: ConfigMap
        metadata:
          labels:
            somekey: somevalue
        data:
          ZK_ADDRESS: "192.168.10.10:2181,192.168.10.11:2181,192.168.10.12:2181"
          KAFKA_ADDRESS: "192.168.10.13:9092,192.168.10.14:9092,192.168.10.15:9092"
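To see the rule in action, you might create a new Namespace and check that the ConfigMap was generated into it. The Namespace name and output timings below are illustrative.

$ kubectl create ns team-a
$ kubectl -n team-a get configmap zk-kafka-address
NAME               DATA   AGE
zk-kafka-address   2      5s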
Clone a ConfigMap and propagate changes
In this policy, the source of the data is an existing ConfigMap resource named config-template which is stored in the default Namespace. Notice how the generate rule here instead uses the generate.clone object when the origin data exists within Kubernetes.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: basic-policy
spec:
  rules:
  - name: Clone ConfigMap
    match:
      any:
      - resources:
          kinds:
          - Namespace
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
          - default
          - kube-public
          - kyverno
    generate:
      # Kind of generated resource
      kind: ConfigMap
      # apiVersion of the generated resource
      apiVersion: v1
      # Name of the generated resource
      name: default-config
      # namespace for the generated resource
      namespace: "{{request.object.metadata.name}}"
      # propagate changes from the upstream resource
      synchronize: true
      clone:
        namespace: default
        name: config-template
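Because synchronize is set to true, edits to the source should propagate to the clones. As a sketch, patching config-template in the default Namespace (the key and value here are hypothetical) would be reflected in the default-config ConfigMaps in the matching Namespaces:

$ kubectl -n default patch configmap config-template --type merge -p '{"data":{"sample-key":"new-value"}}'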
Cloning Multiple Resources
Kyverno, as of version 1.8, can clone multiple resources in a single rule definition for use cases where several resources must be cloned from a source Namespace to a destination Namespace. By using the generate.cloneList object, multiple kinds from the same Namespace may be specified. Use of an optional selector can scope down the source of the clones to only those having the matching label(s). The below policy clones Secrets and ConfigMaps from the staging Namespace which carry the label allowedToBeCloned="true".
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-secret-with-multi-clone
spec:
  rules:
  - name: sync-secret
    match:
      any:
      - resources:
          kinds:
          - Namespace
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
          - default
          - kube-public
          - kyverno
    generate:
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      cloneList:
        namespace: staging
        kinds:
        - v1/Secret
        - v1/ConfigMap
        selector:
          matchLabels:
            allowedToBeCloned: "true"
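Only sources carrying the matching label are considered, so an existing Secret or ConfigMap in the staging Namespace can be opted in by labeling it. The resource name here is hypothetical:

$ kubectl -n staging label secret my-app-credentials allowedToBeCloned=true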
Generating Bindings
In order for Kyverno to generate a new RoleBinding or ClusterRoleBinding resource, its ServiceAccount must first be bound to the same Role or ClusterRole which you’re attempting to generate. If this is not done, Kubernetes blocks the request because it sees a possible privilege escalation attempt from the Kyverno ServiceAccount. This is not a Kyverno function but rather how Kubernetes RBAC is designed to work.
For example, if you wish to write a generate rule which creates a new RoleBinding resource granting some user the admin role over a new Namespace, the Kyverno ServiceAccount must have a ClusterRoleBinding in place for that same admin role.
Create a new ClusterRoleBinding for the Kyverno ServiceAccount, by default called kyverno.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kyverno:generate-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: kyverno
  namespace: kyverno
Now, create a generate rule as you normally would which assigns a test user named steven to the admin ClusterRole for a new Namespace. The built-in ClusterRole named admin in this rule must match the ClusterRole granted to the Kyverno ServiceAccount in the previous ClusterRoleBinding.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: steven-rolebinding
spec:
  rules:
  - name: steven-rolebinding
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      kind: RoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      name: steven-rolebinding
      namespace: "{{request.object.metadata.name}}"
      data:
        subjects:
        - kind: User
          name: steven
          apiGroup: rbac.authorization.k8s.io
        roleRef:
          kind: ClusterRole
          name: admin
          apiGroup: rbac.authorization.k8s.io
When a new Namespace is created, Kyverno will generate a new RoleBinding called steven-rolebinding which grants the user steven the admin ClusterRole over said new Namespace.
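As a quick verification sketch (the Namespace name and output shape are illustrative):

$ kubectl create ns steven-test
$ kubectl -n steven-test get rolebinding steven-rolebinding
NAME                 ROLE                AGE
steven-rolebinding   ClusterRole/admin   3s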
Generate a NetworkPolicy
In this example, new namespaces will receive a NetworkPolicy that denies all inbound and outbound traffic. Similar to the first example, the generate.data object is used to define, as an overlay pattern, the spec for the NetworkPolicy resource.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default
spec:
  rules:
  - name: deny-all-traffic
    match:
      any:
      - resources:
          kinds:
          - Namespace
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
          - default
          - kube-public
          - kyverno
    generate:
      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      name: deny-all-traffic
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          # select all pods in the namespace
          podSelector: {}
          policyTypes:
          - Ingress
          - Egress
Linking resources with ownerReferences
In some cases, a triggering (source) resource and a generated (downstream) resource need to share the same lifecycle: when the triggering resource is deleted, so too should the generated resource be. This is valuable because some resources are only needed in the presence of another, for example a Service of type LoadBalancer necessitating a specific network policy in some CNI plug-ins. While Kyverno will not take care of this task internally, Kubernetes can by setting the ownerReferences field in the generated resource. With the below example, when the generated ConfigMap specifies the metadata.ownerReferences[] object and defines the following fields including uid, which references the triggering Service resource, an owner-dependent relationship is formed. Later, if the Service is deleted, the ConfigMap will be as well. See the Kubernetes documentation for more details, including an important caveat around the scoping of these references. Specifically, namespaced resources cannot be the owners of cluster-scoped resources, and cross-namespace references are also disallowed.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: demo-ownerref
spec:
  background: false
  rules:
  - name: demo-ownerref-svc-cm
    match:
      any:
      - resources:
          kinds:
          - Service
    generate:
      kind: ConfigMap
      apiVersion: v1
      name: "{{request.object.metadata.name}}-gen-cm"
      namespace: "{{request.namespace}}"
      synchronize: false
      data:
        metadata:
          ownerReferences:
          - apiVersion: v1
            kind: Service
            name: "{{request.object.metadata.name}}"
            uid: "{{request.object.metadata.uid}}"
        data:
          foo: bar
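To observe the owner-dependent relationship, one could create a Service, then delete it and watch the dependent ConfigMap get garbage collected. The Service name here is hypothetical, and removal of the dependent happens once the Kubernetes garbage collector runs:

$ kubectl create service clusterip demo-svc --tcp=80:80
$ kubectl get configmap demo-svc-gen-cm
$ kubectl delete service demo-svc
$ kubectl get configmap demo-svc-gen-cm
Error from server (NotFound): configmaps "demo-svc-gen-cm" not found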
Generate for Existing resources
As of Kyverno 1.7, Kyverno supports generation for existing resources. Generate existing policies are applied in the background, which creates target resources based on the match statement within the policy. They may also optionally be configured to apply upon updates to the policy itself.
Generate NetworkPolicy in Existing Namespaces
By default, the policy will not be applied to existing trigger resources when it is installed. This behavior can be configured via the generateExistingOnPolicyUpdate attribute. Only if you set generateExistingOnPolicyUpdate to true will Kyverno generate the target resource in existing triggers on policy CREATE and UPDATE events.

In this example policy, which triggers based on the resource kind Namespace, a new NetworkPolicy will be generated in all new or existing Namespaces.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-resources
spec:
  generateExistingOnPolicyUpdate: true
  rules:
  - name: generate-existing-networkpolicy
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      name: default-deny
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      data:
        metadata:
          labels:
            created-by: kyverno
        spec:
          podSelector: {}
          policyTypes:
          - Ingress
          - Egress
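After the policy is created with generateExistingOnPolicyUpdate: true, each existing Namespace should receive the default-deny NetworkPolicy, which can be checked cluster-wide. The Namespace names shown are illustrative:

$ kubectl get networkpolicy -A
NAMESPACE   NAME           POD-SELECTOR   AGE
team-a      default-deny   <none>         15s
team-b      default-deny   <none>         15s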
Generate PodDisruptionBudget for Existing Deployments
Similarly, this ClusterPolicy will create a PodDisruptionBudget resource for existing or new Deployments.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: create-default-pdb
spec:
  generateExistingOnPolicyUpdate: true
  rules:
  - name: create-default-pdb
    match:
      any:
      - resources:
          kinds:
          - Deployment
    exclude:
      resources:
        namespaces:
        - local-path-storage
    generate:
      apiVersion: policy/v1
      kind: PodDisruptionBudget
      name: "{{request.object.metadata.name}}-default-pdb"
      namespace: "{{request.object.metadata.namespace}}"
      synchronize: true
      data:
        spec:
          minAvailable: 1
          selector:
            matchLabels:
              "{{request.object.metadata.labels}}"
Troubleshooting
To troubleshoot policy application failures, inspect the UpdateRequest Custom Resource to get details. For example, if the corresponding permission is not granted to Kyverno, you should see this error in the updaterequest.status:
$ kubectl get ur -n kyverno
NAME       POLICY               RULETYPE   RESOURCEKIND   RESOURCENAME       RESOURCENAMESPACE   STATUS   AGE
ur-7gtbx   create-default-pdb   generate   Deployment     nginx-deployment   test                Failed   2s

$ kubectl describe ur ur-7gtbx -n kyverno
Name:      ur-7gtbx
Namespace: kyverno
...

status:
  message: 'poddisruptionbudgets.policy is forbidden: User "system:serviceaccount:kyverno:kyverno-service-account"
    cannot create resource "poddisruptionbudgets" in API group "policy" in the namespace "test"'
  state: Failed
Generating resources into existing namespaces
This feature has been deprecated in Kyverno 1.7+; refer to the section above on generating for existing resources.
Use of a generate rule is common when creating net new resources from the point after which the policy was created. For example, a Kyverno generate policy is created so that all future namespaces can receive a standard set of Kubernetes resources. However, it is also possible to generate resources into existing resources, namely the Namespace construct. This can be extremely useful when deploying Kyverno to an existing cluster in use where you wish policy to apply retroactively.

Normally, Kyverno does not alter existing objects in any way as a central tenet of its design. However, using this method of controlled roll-out, you may use generate rules to create new objects into existing Namespaces. To do so, follow these steps:
- Identify some Kubernetes label or annotation which is not yet defined on any Namespace but can be used to add to existing ones, signaling to Kyverno that these Namespaces should be targets for generate rules. The metadata can be anything, but it should be descriptive for this purpose and not in use anywhere else, nor use reserved keys such as kubernetes.io or kyverno.io.
- Create a ClusterPolicy with a rule containing a match statement which matches on kind Namespace as well as the label or annotation you have set aside. In the sync-secret policy below, it matches on not only Namespaces but a label of mycorp-rollout=true and copies into these Namespaces a Secret called corp-secret stored in the default Namespace.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-secret
spec:
  rules:
  - name: sync-secret
    match:
      any:
      - resources:
          kinds:
          - Namespace
          selector:
            matchLabels:
              mycorp-rollout: "true"
    generate:
      kind: Secret
      apiVersion: v1
      name: corp-secret
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      clone:
        namespace: default
        name: corp-secret
- Create the policy as usual.
- On an existing Namespace where you wish to have the Secret corp-secret copied into it, label it with mycorp-rollout=true. This step must be completed after the ClusterPolicy exists. If it is labeled before, Kyverno will not see the request.
$ kubectl label ns prod-bus-app1 mycorp-rollout=true

namespace/prod-bus-app1 labeled
- Check the Namespace you just labeled to see if the Secret exists.
$ kubectl -n prod-bus-app1 get secret

NAME          TYPE     DATA   AGE
corp-secret   Opaque   2      10s
- Repeat these steps as needed on any additional Namespaces where you wish this ClusterPolicy to apply its generate rule.
If you would like Kyverno to remove the resource it generated into these existing Namespaces, you may unlabel the Namespace.
$ kubectl label ns prod-bus-app1 mycorp-rollout-
The Secret from the previous example should be removed.