kubernetes/test/e2e/dra/kind.yaml
Patrick Ohly 7a480de230 E2E: check system logs for DATA RACE reports
While an E2E suite runs, logs of all containers in the "kube-system"
namespace (customizable via a command-line flag) are retrieved and checked for
"DATA RACE" reports. The kubelet logs are retrieved through the log query
feature. By default, this is attempted on all nodes, and it is a test failure
if the log query feature is not enabled. The nodes to check can be customized
via another command-line flag.
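With the NodeLogQuery feature enabled, kubelet logs can be fetched through the API server's node proxy. A minimal sketch of this kind of check (the node name "kind-worker" and the sample log line are assumptions for illustration, not taken from the suite):

```shell
# Fetch kubelet logs via the node log query endpoint (requires the
# NodeLogQuery feature gate and enableSystemLogQuery on the kubelet;
# the node name here is an assumption):
#
#   kubectl get --raw "/api/v1/nodes/kind-worker/proxy/logs/?query=kubelet"
#
# The retrieved text is then scanned for Go race detector reports,
# simulated here with a hard-coded sample log excerpt:
printf 'WARNING: DATA RACE\nRead at 0x00c000120018 by goroutine 7:\n' \
  | grep -c 'DATA RACE'
```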

Data races only get reported if the cluster components were built with data
race detection, so in most clusters this additional checking wouldn't find
anything and is therefore off by default.

The failure message is in Markdown format and ready to be copy-and-pasted into
a GitHub issue.
2026-01-16 15:14:54 +01:00


kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
# Enable CDI as described in
# https://github.com/container-orchestrated-devices/container-device-interface#containerd-configuration
- |-
  [plugins."io.containerd.grpc.v1.cri"]
    enable_cdi = true
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
kubeadmConfigPatches:
- |
  kind: KubeletConfiguration
  apiVersion: kubelet.config.k8s.io/v1beta1
  enableSystemLogHandler: true
  enableSystemLogQuery: true
# v1beta4 for the future (v1.35.0+ ?)
# https://github.com/kubernetes-sigs/kind/issues/3847
# TODO: drop v1beta3 when kind makes the switch
- |
  kind: ClusterConfiguration
  apiVersion: kubeadm.k8s.io/v1beta4
  scheduler:
    extraArgs:
    - name: "v"
      value: "5"
    - name: "vmodule"
      value: "allocator*=6,pools*=6,dynamicresources=6,allocateddevices=6,dra_manager=6,extendeddynamicresources=6" # structured/internal/*/allocator*.go, DRA scheduler plugin
  controllerManager:
    extraArgs:
    - name: "v"
      value: "5"
    - name: "vmodule"
      value: "controller=6" # resourceclaim/controller.go - should have renamed it when copying the controller it was based on!
  apiServer:
    extraArgs:
    - name: "runtime-config"
      value: "resource.k8s.io/v1alpha3=true,resource.k8s.io/v1beta1=true,resource.k8s.io/v1beta2=true"
- |
  kind: InitConfiguration
  apiVersion: kubeadm.k8s.io/v1beta4
  nodeRegistration:
    kubeletExtraArgs:
    - name: "v"
      value: "5"
- |
  kind: JoinConfiguration
  apiVersion: kubeadm.k8s.io/v1beta4
  nodeRegistration:
    kubeletExtraArgs:
    - name: "v"
      value: "5"
# v1beta3 for v1.23.0 ... ?
- |
  kind: ClusterConfiguration
  apiVersion: kubeadm.k8s.io/v1beta3
  scheduler:
    extraArgs:
      v: "5"
      vmodule: "allocator*=6,pools*=6,dynamicresources=6,allocateddevices=6,dra_manager=6,extendeddynamicresources=6" # structured/internal/*/allocator*.go, DRA scheduler plugin
  controllerManager:
    extraArgs:
      v: "5"
      vmodule: "controller=6" # resourceclaim/controller.go - should have renamed it when copying the controller it was based on!
  apiServer:
    extraArgs:
      runtime-config: "resource.k8s.io/v1alpha3=true,resource.k8s.io/v1beta1=true,resource.k8s.io/v1beta2=true"
- |
  kind: InitConfiguration
  apiVersion: kubeadm.k8s.io/v1beta3
  nodeRegistration:
    kubeletExtraArgs:
      v: "5"
- |
  kind: JoinConfiguration
  apiVersion: kubeadm.k8s.io/v1beta3
  nodeRegistration:
    kubeletExtraArgs:
      v: "5"
# Feature gates must be the last entry in this YAML.
# Some Prow jobs add more feature gates with
#
# --config <(cat test/e2e/dra/kind.yaml; echo " <some feature>: true")
featureGates:
  DynamicResourceAllocation: true
  NodeLogQuery: true