The spaces are unnecessary because Ginkgo inserts them automatically
when composing full test names. Previously this was detected only for
tests using the wrapper functions; now it is also detected for Ginkgo
methods.
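For illustration, a hypothetical spec that the check would now flag
(not taken from the actual diff):

    package example_test

    import (
        . "github.com/onsi/ginkgo/v2"
    )

    // Ginkgo joins the container and spec texts with a single space
    // when composing the full test name, so the explicit spaces below
    // are redundant.
    var _ = Describe("Conformance Tests ", func() { // trailing space: flagged
        It(" should run a pod", func() {}) // leading space: flagged
    })

The full name still renders as "Conformance Tests should run a pod"
once the extra spaces are removed.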
The pod resize e2e tests use memory limits as low as 20Mi for Guaranteed
QoS pods. On OpenShift/CRI-O, the container runtime (runc) runs inside
the pod's cgroup and requires ~20-22MB of memory during container
creation and restart operations. This causes intermittent OOM kills
when the pod's memory limit is at or below runc's memory footprint.
This issue does not occur on containerd-based clusters because
containerd's shim runs outside the pod's cgroup by default (ShimCgroup=""),
so runc's memory is not charged against the pod's limit.
Increase memory limits to provide sufficient headroom for runc:
- originalMem: 20Mi -> 35Mi
- reducedMem: 15Mi -> 30Mi
- increasedMem: 25Mi -> 40Mi
The test validates resize behavior, not minimal memory limits, so
larger values do not reduce test coverage.
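A sketch of what the change looks like, assuming the limits are plain
string constants in the test file (the actual declarations and package
may differ):

    package podresize // hypothetical package name

    // Values leave runc's ~20-22MB creation-time footprint sufficient
    // headroom inside the pod cgroup on CRI-O.
    const (
        originalMem  = "35Mi" // was 20Mi
        reducedMem   = "30Mi" // was 15Mi
        increasedMem = "40Mi" // was 25Mi
    )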
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
This has been replaced by `//go:build ...` for a long time now.
Removal of the old build tag was automated with:
    for i in $(git grep -l '^// +build' | grep -v -e '^vendor/'); do
        if ! grep -q '^// Code generated' "$i"; then
            sed -i -e '/^\/\/ +build/d' "$i"
        fi
    done
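For reference, gofmt has kept both constraint forms side by side in
file headers since Go 1.17; the script deletes the old line and leaves
the new one. An illustrative header, before:

    //go:build linux
    // +build linux

    package example

and after:

    //go:build linux

    package example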
Remove the e2e test since we switched the feature to beta (enabled by
default) instead of GA. We will re-add the test in 1.36.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
This feature gate was meant to be ephemeral. It only existed to
guarantee that a cluster admin could not accidentally relax PSA
policies before every kubelet would deny creating a pod that requested
user namespaces it could not support. As of kube 1.33, the supported
apiserver version skew of n-3 guarantees that all supported kubelets
are 1.30 or later, which means they all enforce this denial.
Now we can unconditionally relax PSA policy when a pod is in a user
namespace. This PR preserves the default behavior of older policy
versions by never relaxing them.
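A minimal sketch of the check, assuming the decision hinges on the pod
spec's HostUsers field (the helper name is hypothetical; the real
admission logic is more involved):

    package example

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // relaxForUserNamespace reports whether the pod opts into its own
    // user namespace (HostUsers explicitly set to false). Hypothetical
    // helper, not the actual PSA code: the real logic also keeps older
    // policy versions unrelaxed.
    func relaxForUserNamespace(pod *corev1.Pod) bool {
        return pod.Spec.HostUsers != nil && !*pod.Spec.HostUsers
    }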
Signed-off-by: Peter Hunt <pehunt@redhat.com>
As discovered in ticket 134737, `hostname` is buggy on busybox due to
the musl backend it uses: something in the /etc/hosts file that k8s
generates trips its parser, so it does not work properly in the IPv6
tests.
To work around that, use an image with a glibc backend so that the
hostname command works, as sketched below.
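An illustrative container spec for the workaround; the image name and
tag are assumptions, not necessarily what the fix uses:

    package example

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // Use a glibc-based image instead of busybox (musl) so `hostname`
    // parses the kubelet-generated /etc/hosts correctly on IPv6
    // clusters. Image and tag below are assumed for illustration.
    var hostnameContainer = corev1.Container{
        Name:    "hostname-check",
        Image:   "registry.k8s.io/e2e-test-images/agnhost:2.45",
        Command: []string{"hostname"},
    }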
This reports and fixes non-unique full test names. For test/e2e:
ERROR: E2E suite initialization was faulty, these errors must be fixed:
ERROR: apimachinery/mutatingadmissionpolicy.go:184: full test name is not unique: "[sig-api-machinery] MutatingAdmissionPolicy [Privileged:ClusterAdmin] [Feature:MutatingAdmissionPolicy] [FeatureGate:MutatingAdmissionPolicy] [Beta] [Feature:OffByDefault] should support MutatingAdmissionPolicy API operations" (/nvme/gopath/src/k8s.io/kubernetes/test/e2e/apimachinery/mutatingadmissionpolicy.go:184, /nvme/gopath/src/k8s.io/kubernetes/test/e2e/apimachinery/mutatingadmissionpolicy.go:606)
ERROR: apimachinery/mutatingadmissionpolicy.go:412: full test name is not unique: "[sig-api-machinery] MutatingAdmissionPolicy [Privileged:ClusterAdmin] [Feature:MutatingAdmissionPolicy] [FeatureGate:MutatingAdmissionPolicy] [Beta] [Feature:OffByDefault] should support MutatingAdmissionPolicyBinding API operations" (/nvme/gopath/src/k8s.io/kubernetes/test/e2e/apimachinery/mutatingadmissionpolicy.go:412, /nvme/gopath/src/k8s.io/kubernetes/test/e2e/apimachinery/mutatingadmissionpolicy.go:834)
ERROR: common/node/pod_level_resources.go:250: full test name is not unique: "[sig-node] Pod Level Resources [Serial] [Feature:PodLevelResources] [FeatureGate:PodLevelResources] [Beta] Guaranteed QoS pod with container resources" (/nvme/gopath/src/k8s.io/kubernetes/test/e2e/common/node/pod_level_resources.go:250 (2x))
ERROR: dra/dra.go:1899: full test name is not unique: "[sig-node] [DRA] kubelet [Feature:DynamicResourceAllocation] [FeatureGate:DRAConsumableCapacity] [Alpha] [Feature:OffByDefault] [FeatureGate:DynamicResourceAllocation] must allow multiple allocations and consume capacity [KubeletMinVersion:1.34]" (/nvme/gopath/src/k8s.io/kubernetes/test/e2e/dra/dra.go:1899 (2x))
ERROR: storage/testsuites/volume_group_snapshottable.go:173: full test name is not unique: "[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: (delete policy)] volumegroupsnapshottable [Feature:volumegroupsnapshot] VolumeGroupSnapshottable should create snapshots for multiple volumes in a pod" (/nvme/gopath/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_group_snapshottable.go:173 (2x))
ERROR: storage/testsuites/volume_group_snapshottable.go:173: full test name is not unique: "[sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io] [Serial] [Testpattern: (delete policy)] volumegroupsnapshottable [Feature:volumegroupsnapshot] VolumeGroupSnapshottable should create snapshots for multiple volumes in a pod" (/nvme/gopath/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_group_snapshottable.go:173 (2x))
And for test/e2e_node:
ERROR: cpu_manager_test.go:1622: full test name is not unique: "[sig-node] CPU Manager [Serial] [Feature:CPUManager] when checking the CFS quota management should disable for guaranteed pod with exclusive CPUs assigned" (/nvme/gopath/src/k8s.io/kubernetes/test/e2e_node/cpu_manager_test.go:1622, /nvme/gopath/src/k8s.io/kubernetes/test/e2e_node/cpu_manager_test.go:1642)
ERROR: eviction_test.go:800: full test name is not unique: "[sig-node] LocalStorageCapacityIsolationFSQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota] [Feature:LSCIQuotaMonitoring] [Feature:UserNamespacesSupport] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: true) should eventually evict all of the correct pods" (/nvme/gopath/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:800 (2x))
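These collisions typically come from two specs whose texts compose to
the same full name. A hypothetical reproduction (not the actual test
code):

    package example_test

    import (
        . "github.com/onsi/ginkgo/v2"
    )

    // Both specs render to the identical full name
    // "CPU Manager should disable CFS quota", which suite
    // initialization now rejects.
    var _ = Describe("CPU Manager", func() {
        It("should disable CFS quota", func() { /* pod-level variant */ })
        It("should disable CFS quota", func() { /* container-level variant */ })
    })

The fix is to disambiguate the spec texts, e.g. "... for the pod
cgroup" vs. "... for the container cgroup", so every full name is
unique.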