Commit graph

2080 commits

Author SHA1 Message Date
Kubernetes Prow Robot
1817e10998
Merge pull request #136185 from tallclair/ndf-bitmap
Optimize NodeDeclaredFeatures with a bitmap FeatureSet implementation
2026-03-14 06:37:34 +05:30
Kubernetes Prow Robot
7f3a5ab96f
Merge pull request #136579 from romanbaron/reuse-scheduling-signature
Reuse pod scheduling signature for opportunistic batching
2026-03-13 20:15:39 +05:30
Patrick Ohly
85bca3b684 DRA device taints: fix beta-enabled, alpha-disabled configurations
DeviceTaintRule is off by default because the corresponding v1beta2 API group
is off. When the feature was enabled, the potentially still disabled v1alpha3
API version was used instead of the new v1beta2, causing the scheduler to fail
while setting up informers and consequently stop scheduling pods.
2026-03-13 09:20:57 +01:00
Tim Allclair
39a4f7654a Optimize NDF FeatureMapper.Unmap for sparse feature sets 2026-03-13 04:28:16 +00:00
Tim Allclair
b1b75f93d7 Make size explicit; switch to binary string format 2026-03-13 04:28:16 +00:00
Tim Allclair
1c2b07fa22 Avoid computing feature diff when not necessary 2026-03-13 04:28:16 +00:00
Tim Allclair
f91f641a65 Switch to bitmapped FeatureSet implementation. 2026-03-13 04:28:16 +00:00
Tim Allclair
e4521526b4 NodeDeclaredFeatures: Add global default NDF registry 2026-03-13 04:28:16 +00:00
Kubernetes Prow Robot
21305568b0
Merge pull request #137083 from brejman/generate-plugin
Add placement generator plugin interfaces and logic for running them
2026-03-13 06:15:33 +05:30
Roman Baron
6fcb95e72e scheduler: Moved TestQueuedPodInfo_UpdateInvalidatesSignature from queue/scheduling_queue_test.go to framework/types_test.go 2026-03-12 21:24:23 +02:00
Roman Baron
c0e973dc70 scheduler: Replaced context.Context and testing.T parameters with ktesting.TContext in scheduling_queue_test.go 2026-03-12 17:31:11 +02:00
Kubernetes Prow Robot
9874e76ac4
Merge pull request #137662 from tosi3k/revert-136254-extend-postfilter
Revert "Extend PostFilterResult with a list of victim Pods"
2026-03-12 17:33:42 +05:30
Bartosz
43c5d2a419
Add PlacementGeneratePlugin interface and runner 2026-03-12 09:33:05 +00:00
Antoni Zawodny
fa29c9db6a
Revert "Extend PostFilterResult with a list of victim Pods" 2026-03-12 10:03:35 +01:00
Roman Baron
de1385fe1b scheduler: Added ObserveFrameworkDurationAsync to metrics recorder 2026-03-12 10:31:38 +02:00
Roman Baron
7b00255135 scheduler: Removed plugin stats from pod signing process 2026-03-12 10:31:04 +02:00
Roman Baron
2904e7f309 scheduler: replaced logger with HandleErrorWithLogger 2026-03-12 10:30:38 +02:00
Roman Baron
e6ee21c94f scheduler: Invalidate PodSignature in QueuedPodInfo Update method 2026-03-12 10:30:38 +02:00
Roman Baron
3a6c169034 scheduler: Reuse scheduling signature for opportunistic batching 2026-03-12 10:30:32 +02:00
Tsubasa Watanabe
30b811a99b DRA Device Binding Conditions: add metrics for prebind flow
This commit introduces metrics and improves log outputs for
DRA Device Binding Conditions (KEP-5007):

- scheduler_dra_bindingconditions_allocations_total

  Counts the number of per-device scheduling attempts
  during PreBind where BindingConditions are in use

- scheduler_dra_bindingconditions_wait_duration_seconds

  Observes the time spent waiting for BindingConditions
  to be satisfied during PreBind.

Signed-off-by: Tsubasa Watanabe <w.tsubasa@fujitsu.com>
2026-03-12 17:19:13 +09:00
Kubernetes Prow Robot
efc8cc256a
Merge pull request #137201 from brejman/score-plugin
Add placement scorer plugin interfaces and logic for running them
2026-03-12 12:41:41 +05:30
Kubernetes Prow Robot
031f8ac9ed
Merge pull request #136287 from abel-von/optimize-podgroupinfo
scheduler: optimize podGroupInfo to minimize the lock time
2026-03-12 12:41:34 +05:30
Kubernetes Prow Robot
802b3f744b
Merge pull request #133622 from KunWuLuan/feat/volume-limit-acc
Use indexer to accelerate volume limit plugin
2026-03-12 11:27:34 +05:30
Abel Feng
2153ed1852 scheduler: optimize podGroupInfo to minimize the lock time
Gang scheduler adds a pod into its podGroupInfo before the pod is
enqueued. If a podGroupInfo contains thousands of pods, each call to
AssumedPods, AssignedPods, and AllPods holds the lock and clones the map,
so new pods wait a long time both to be added to the podGroupInfo and to
be enqueued.

In our test, a pod waited several seconds to enqueue with 50000 pods in
a gang group.

This PR avoids traversing and cloning the maps by adding AllPodsCount,
AssumedPodsCount, and AssignedPodsCount methods, and by ensuring that
assumed pods and assigned pods are disjoint.
2026-03-11 17:21:36 +08:00
Kubernetes Prow Robot
d47f3f253b
Merge pull request #137343 from gnufied/prevent-podscheduling-optin
Add API changes to prevent pod scheduling via CSIDriver object
2026-03-11 03:53:17 +05:30
Kubernetes Prow Robot
69144c9081
Merge pull request #137371 from pohly/dra-bind-claim-panic
DRA scheduler: fix potential panic when DRABindingConditions are enabled
2026-03-11 03:03:25 +05:30
Patrick Ohly
f33176fc00 DRA scheduler: add unit tests for AllocationTimestamp
The code paths for adding AllocationTimestamp were not tested well. None of
the test cases verified that an AllocationTimestamp gets added at all because
go-cmp was instructed to ignore the unpredictable field.

We can do better than that and at least check for existence by normalizing all
non-nil timestamps to the empty time. This affects all tests where the binding
conditions and thus AllocationTimestamp support is enabled.

The retry loop for status updates was untested. The fake client has to return a
conflict status error to trigger it. This enables writing a test case where a
concurrent deallocation would have caused the nil panic without the previous
fix.

For binding conditions, one test case gets added which runs through the full
flow of allocating a claim and trying to bind it. All other test cases seem to
have started with the claim already allocated.

Altogether this increases coverage from 82.4% to 83.7%.
2026-03-10 16:25:53 +01:00
Bartosz
f50ae7284a
Remove feature gate check for placement score plugin validation 2026-03-10 09:42:07 +00:00
Bartosz
335f043756
Add Min/MaxScore to replace Min/MaxNodeScore 2026-03-10 09:42:05 +00:00
Bartosz
db3c8f3a4b
Add PlacementScorePlugin interface and runner 2026-03-10 09:42:04 +00:00
Antoni Zawodny
3f094dc228
Create Workload API v1alpha2 (#136976)
* Drop WorkloadRef field and introduce SchedulingGroup field in Pod API

* Introduce v1alpha2 Workload and PodGroup APIs, drop v1alpha1 Workload API

Co-authored-by: yongruilin <yongrlin@outlook.com>

* Run hack/update-codegen.sh

* Adjust kube-scheduler code and integration tests to v1alpha2 API

* Drop v1alpha1 scheduling API group and run make update

---------

Co-authored-by: yongruilin <yongrlin@outlook.com>
2026-03-10 07:59:10 +05:30
Kubernetes Prow Robot
7bec24fbb3
Merge pull request #137475 from troychiu/flaky-test-extended-resource-name-no-resource
Fix scheduler flaky test: wait for DeviceClass cache sync in dynamicresources tests
2026-03-10 02:59:11 +05:30
Troy Chiu
1d2165b29c Fix scheduler flaky test: wait for DeviceClass cache sync in dynamicresources tests
When DRAExtendedResource is enabled, the dynamicresources test setup
registers an event handler for DeviceClasses but was not waiting for it
to sync. This can lead to flaky tests where the cache is not fully
populated when the test starts.

This change captures the event handler registration and includes its
DoneChecker in a WaitFor call.
2026-03-09 19:24:13 +00:00
Hemant Kumar
e1a97a780d Update scheduler to check PreventPodSchedulingIfMissing 2026-03-09 12:55:17 -04:00
Kubernetes Prow Robot
f5bafe93ac
Merge pull request #135048 from yliaog/beta_promo
DRA Extended Resource: promote to Beta in 1.36
2026-03-07 01:12:19 +05:30
Rita Zhang
c4f88de33e
Move DRAAdminAccess feature to GA (#137373)
* Move DRAAdminAccess feature to GA

Signed-off-by: Rita Zhang <rita.z.zhang@gmail.com>

* address comments

Signed-off-by: Rita Zhang <rita.z.zhang@gmail.com>

---------

Signed-off-by: Rita Zhang <rita.z.zhang@gmail.com>
2026-03-05 23:42:21 +05:30
Kubernetes Prow Robot
fbe2820983
Merge pull request #136944 from brejman/kep-5732-tas-placement-generation-phase
Prepare workload scheduling cycle for placement simulation
2026-03-05 16:48:19 +05:30
Kubernetes Prow Robot
8bd1505fc0
Merge pull request #137108 from pohly/logtools-update
golangci-lint: bump to logtools v0.10.1
2026-03-05 10:14:16 +05:30
Kubernetes Prow Robot
cfc79e6d64
Merge pull request #137408 from dims/disable-hard-fail-on-deadcode-elimination-script-for-go-1.26
Disable hard fail on deadcode-elimination script for 1.26
2026-03-05 02:24:27 +05:30
Kubernetes Prow Robot
8275484dcf
Merge pull request #137297 from atombrella/feature/pkg_forvar_modernize
Remove redundant variable re-assignment in for-loops under pkg
2026-03-05 00:28:20 +05:30
Davanum Srinivas
4513498be1
Disable hard fail on deadcode-elimination script for 1.26
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2026-03-04 10:54:32 -05:00
Patrick Ohly
b895ce734f golangci-lint: bump to logtools v0.10.1
This fixes a bug that caused log calls involving `klog.Logger` to not be
checked.

As a result we have to fix some code that is now considered faulty:

    ERROR: pkg/controller/serviceaccount/tokens_controller.go:382:1: A function should accept either a context or a logger, but not both. Having both makes calling the function harder because it must be defined whether the context must contain the logger and callers have to follow that. (logcheck)
    ERROR: func (e *TokensController) generateTokenIfNeeded(ctx context.Context, logger klog.Logger, serviceAccount *v1.ServiceAccount, cachedSecret *v1.Secret) ( /* retry */ bool, error) {
    ERROR: ^
    ERROR: pkg/controller/storageversionmigrator/storageversionmigrator.go:299:1: A function should accept either a context or a logger, but not both. Having both makes calling the function harder because it must be defined whether the context must contain the logger and callers have to follow that. (logcheck)
    ERROR: func (svmc *SVMController) runMigration(ctx context.Context, logger klog.Logger, gvr schema.GroupVersionResource, resourceMonitor *garbagecollector.Monitor, toBeProcessedSVM *svmv1beta1.StorageVersionMigration, listResourceVersion string) (err error, failed bool) {
    ERROR: ^
    ERROR: pkg/proxy/node.go:121:3: logging function "Error" should not use format specifier "%q" (logcheck)
    ERROR: 		klog.FromContext(ctx).Error(nil, "Timed out waiting for node %q to exist", nodeName)
    ERROR: 		^
    ERROR: pkg/proxy/node.go:123:3: logging function "Error" should not use format specifier "%q" (logcheck)
    ERROR: 		klog.FromContext(ctx).Error(nil, "Timed out waiting for node %q to be assigned IPs", nodeName)
    ERROR: 		^
    ERROR: pkg/scheduler/backend/queue/scheduling_queue.go:610:1: A function should accept either a context or a logger, but not both. Having both makes calling the function harder because it must be defined whether the context must contain the logger and callers have to follow that. (logcheck)
    ERROR: func (p *PriorityQueue) runPreEnqueuePlugin(ctx context.Context, logger klog.Logger, pl fwk.PreEnqueuePlugin, pInfo *framework.QueuedPodInfo, shouldRecordMetric bool) *fwk.Status {
    ERROR: ^
    ERROR: pkg/scheduler/framework/plugins/dynamicresources/extendeddynamicresources.go:286:1: A function should accept either a context or a logger, but not both. Having both makes calling the function harder because it must be defined whether the context must contain the logger and callers have to follow that. (logcheck)
    ERROR: func (pl *DynamicResources) deleteClaim(ctx context.Context, claim *resourceapi.ResourceClaim, logger klog.Logger) error {
    ERROR: ^
    ERROR: pkg/scheduler/framework/plugins/dynamicresources/extendeddynamicresources.go:499:1: A function should accept either a context or a logger, but not both. Having both makes calling the function harder because it must be defined whether the context must contain the logger and callers have to follow that. (logcheck)
    ERROR: func (pl *DynamicResources) waitForExtendedClaimInAssumeCache(
    ERROR: ^
    ERROR: pkg/scheduler/framework/plugins/dynamicresources/extendeddynamicresources.go:528:1: A function should accept either a context or a logger, but not both. Having both makes calling the function harder because it must be defined whether the context must contain the logger and callers have to follow that. (logcheck)
    ERROR: func (pl *DynamicResources) createExtendedResourceClaimInAPI(
    ERROR: ^
    ERROR: pkg/scheduler/framework/plugins/dynamicresources/extendeddynamicresources.go:592:1: A function should accept either a context or a logger, but not both. Having both makes calling the function harder because it must be defined whether the context must contain the logger and callers have to follow that. (logcheck)
    ERROR: func (pl *DynamicResources) unreserveExtendedResourceClaim(ctx context.Context, logger klog.Logger, pod *v1.Pod, state *stateData) {
    ERROR: ^
    ERROR: pkg/scheduler/framework/runtime/batch.go:171:1: A function should accept either a context or a logger, but not both. Having both makes calling the function harder because it must be defined whether the context must contain the logger and callers have to follow that. (logcheck)
    ERROR: func (b *OpportunisticBatch) batchStateCompatible(ctx context.Context, logger klog.Logger, pod *v1.Pod, signature fwk.PodSignature, cycleCount int64, state fwk.CycleState, nodeInfos fwk.NodeInfoLister) bool {
    ERROR: ^
    ERROR: staging/src/k8s.io/component-base/featuregate/feature_gate.go:890:4: Additional arguments to Info should always be Key Value pairs. Please check if there is any key or value missing. (logcheck)
    ERROR: 			logger.Info("Warning: SetEmulationVersionAndMinCompatibilityVersion will change already queried feature", "featureGate", feature, "oldValue", oldVal, newVal)
    ERROR: 			^
    ERROR: test/images/sample-device-plugin/sampledeviceplugin.go:108:2: logging function "Info" should not use format specifier "%s" (logcheck)
    ERROR: 	logger.Info("pluginSocksDir: %s", pluginSocksDir)
    ERROR: 	^
    ERROR: test/images/sample-device-plugin/sampledeviceplugin.go:123:2: logging function "Info" should not use format specifier "%s" (logcheck)
    ERROR: 	logger.Info("CDI_ENABLED: %s", cdiEnabled)
    ERROR: 	^

While waiting for this to merge, another call was added which also doesn't
follow conventions:

    ERROR: pkg/kubelet/kubelet.go:2454:1: A function should accept either a context or a logger, but not both. Having both makes calling the function harder because it must be defined whether the context must contain the logger and callers have to follow that. (logcheck)
    ERROR: func (kl *Kubelet) deletePod(ctx context.Context, logger klog.Logger, pod *v1.Pod) error {
    ERROR: ^

Contextual logging has been beta and enabled by default for several releases
now; it's mostly just a matter of wrapping up and declaring it GA. Therefore
the calls which invoke WithName or WithValues directly (these always take
effect) are left as-is instead of being converted to the klog wrappers (which
support disabling the effect). To allow that, the linter is reconfigured to no
longer complain about this anywhere.

The calls which would have to be fixed otherwise are:

    ERROR: pkg/kubelet/cm/dra/claiminfo.go:170:11: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	logger = logger.WithName("dra-claiminfo")
    ERROR: 	         ^
    ERROR: pkg/kubelet/cm/dra/healthinfo.go:45:11: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	logger = logger.WithName("dra-healthinfo")
    ERROR: 	         ^
    ERROR: pkg/kubelet/cm/dra/healthinfo.go:89:11: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	logger = logger.WithName("dra-healthinfo")
    ERROR: 	         ^
    ERROR: pkg/kubelet/cm/dra/healthinfo.go:157:11: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	logger = logger.WithName("dra-healthinfo")
    ERROR: 	         ^
    ERROR: pkg/kubelet/cm/dra/manager.go:175:12: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	logger := klog.FromContext(ctx).WithName("dra-manager")
    ERROR: 	          ^
    ERROR: pkg/kubelet/cm/dra/manager.go:239:12: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	logger := klog.FromContext(ctx).WithName("dra-manager")
    ERROR: 	          ^
    ERROR: pkg/kubelet/cm/dra/manager.go:593:12: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	logger := klog.FromContext(ctx).WithName("dra-manager")
    ERROR: 	          ^
    ERROR: pkg/kubelet/cm/dra/manager.go:781:12: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	logger := klog.FromContext(context.Background()).WithName("dra-manager")
    ERROR: 	          ^
    ERROR: pkg/kubelet/cm/dra/manager.go:898:12: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	logger := klog.FromContext(ctx).WithName("dra-manager")
    ERROR: 	          ^
    ERROR: pkg/kubelet/cm/dra/manager_test.go:1638:15: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 				logger := klog.FromContext(streamCtx).WithName(st.Name())
    ERROR: 				          ^
    ERROR: pkg/kubelet/cm/dra/plugin/dra_plugin.go:77:12: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	logger := klog.FromContext(ctx).WithName("dra-plugin")
    ERROR: 	          ^
    ERROR: pkg/kubelet/cm/dra/plugin/dra_plugin.go:108:12: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	logger := klog.FromContext(ctx).WithName("dra-plugin")
    ERROR: 	          ^
    ERROR: pkg/kubelet/cm/dra/plugin/dra_plugin.go:161:12: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	logger := klog.FromContext(ctx).WithName("dra-plugin")
    ERROR: 	          ^
    ERROR: staging/src/k8s.io/dynamic-resource-allocation/resourceslice/tracker/tracker.go:695:14: function "WithValues" should be called through klogr.LoggerWithValues (logcheck)
    ERROR: 			logger := logger.WithValues("device", deviceID)
    ERROR: 			          ^
    ERROR: test/integration/apiserver/watchcache_test.go:42:54: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	etcd0URL, stopEtcd0, err := framework.RunCustomEtcd(klog.FromContext(ctx).WithName("etcd0"), "etcd_watchcache0", etcdArgs)
    ERROR: 	                                                    ^
    ERROR: test/integration/apiserver/watchcache_test.go:47:54: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 	etcd1URL, stopEtcd1, err := framework.RunCustomEtcd(klog.FromContext(ctx).WithName("etcd1"), "etcd_watchcache1", etcdArgs)
    ERROR: 	                                                    ^
    ERROR: test/integration/scheduler_perf/scheduler_perf.go:1149:12: function "WithName" should be called through klogr.LoggerWithName (logcheck)
    ERROR: 		logger = logger.WithName(tCtx.Name())
    ERROR: 		         ^
2026-03-04 12:08:18 +01:00
Bartosz
e5461b7701
Prepare pod group scheduling cycle for placement simulation 2026-03-03 18:07:13 +00:00
Kubernetes Prow Robot
80648018ad
Merge pull request #137216 from macsko/podgroup_cycle_cleanup
Adjust pod group scheduling cycle code
2026-03-03 22:07:55 +05:30
Patrick Ohly
dd6f4d3a16 DRA scheduler: avoid panic during PreBind
It can happen that a claim gets deallocated in parallel to adding a new pod to
ReservedFor. Without binding conditions, that was caught by the apiserver
validation. With binding conditions, the code which checks and sets
AllocationTimestamp panics with a nil pointer access.

This has been observed in the TestDRA/all/ShareResourceClaimSequentially
integration test, but couldn't be reproduced locally:

    E0303 07:43:20.158261   39037 panic.go:262] "Observed a panic" panic="runtime error: invalid memory address or nil pointer dereference" panicGoValue="\"invalid memory address or nil pointer dereference\"" stacktrace=<
    	goroutine 554266 [running]:
    	k8s.io/apimachinery/pkg/util/runtime.logPanic({0x69ce9f0, 0xc017bc00f0}, {0x59381a0, 0x91c6570})
    		/home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/apimachinery/pkg/util/runtime/runtime.go:132 +0xbc
    	k8s.io/apimachinery/pkg/util/runtime.handleCrash({0x69ce8d8, 0x93d87a0}, {0x59381a0, 0x91c6570}, {0xc020506f00, 0x0, 0x200?})
    		/home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/apimachinery/pkg/util/runtime/runtime.go:107 +0x116
    	k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00685ddc0?})
    		/home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/apimachinery/pkg/util/runtime/runtime.go:64 +0x17b
    	panic({0x59381a0?, 0x91c6570?})
    		/usr/local/go/src/runtime/panic.go:783 +0x132
    	k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources.(*DynamicResources).bindClaim.func2()
    		/home/prow/go/src/k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources/dynamicresources.go:1247 +0x730
    	k8s.io/client-go/util/retry.OnError.func1()
    		/home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/client-go/util/retry/util.go:51 +0x30
    	k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0x9ebcca?)
    		/home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:150 +0x3e
    	k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff({0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x2, 0x0}, 0xc020507410)
    		/home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/apimachinery/pkg/util/wait/backoff.go:477 +0x5a
    	k8s.io/client-go/util/retry.OnError({0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0}, 0xc02c2d5380?, 0x4?)
    		/home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/client-go/util/retry/util.go:50 +0x96
    	k8s.io/client-go/util/retry.RetryOnConflict(...)
    		/home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/client-go/util/retry/util.go:104
    	k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources.(*DynamicResources).bindClaim(0xc0024adb20, {0x69cea28, 0xc0061acbe0}, 0xc011876800, 0x0, 0xc025163408, {0xc021909d80, 0x8})
    		/home/prow/go/src/k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources/dynamicresources.go:1207 +0x845
    	k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources.(*DynamicResources).PreBind-range1(...)
    		/home/prow/go/src/k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources/dynamicresources.go:1073
    	k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources.(*DynamicResources).PreBind.(*claimStore).all.func2(...)
    		/home/prow/go/src/k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources/claims.go:72
    	k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources.(*DynamicResources).PreBind(0xc0024adb20, {0x69cea28, 0xc0061acbe0}, {0x69fc840?, 0xc0286f2540?}, 0xc025163408, {0xc021909d80, 0x8})
    		/home/prow/go/src/k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources/dynamicresources.go:1071 +0x246
    	k8s.io/kubernetes/pkg/scheduler/framework/runtime.(*frameworkImpl).runPreBindPlugin(0xc009628dc8, {0x69cea28, 0xc0061acbe0}, {0x7eabfddddb30, 0xc0024adb20}, {0x69fc840, 0xc0286f2540}, 0xc025163408, {0xc021909d80, 0x8})
    		/home/prow/go/src/k8s.io/kubernetes/pkg/scheduler/framework/runtime/framework.go:1532 +0x2e2
    	k8s.io/kubernetes/pkg/scheduler/framework/runtime.(*frameworkImpl).RunPreBindPlugins.func2({0x7eabfddddb30, 0xc0024adb20})
    		/home/prow/go/src/k8s.io/kubernetes/pkg/scheduler/framework/runtime/framework.go:1461 +0x1cf
    	k8s.io/kubernetes/pkg/scheduler/framework/runtime.(*frameworkImpl).RunPreBindPlugins(0xc009628dc8, {0x69cea28, 0xc0061ac690}, {0x69fc840, 0xc0286f2540}, 0xc025163408, {0xc021909d80, 0x8})
    		/home/prow/go/src/k8s.io/kubernetes/pkg/scheduler/framework/runtime/framework.go:1484 +0x623
    	k8s.io/kubernetes/pkg/scheduler.(*Scheduler).bindingCycle(0xc02a6a7500, {0x69cea28, 0xc00f22a690}, {0x69fc840, 0xc0286f2540}, {0x6a32e60, 0xc009628dc8}, {{0xc021909d80, 0x8}, 0x8, ...}, ...)
    		/home/prow/go/src/k8s.io/kubernetes/pkg/scheduler/schedule_one.go:457 +0x72a
    	k8s.io/kubernetes/pkg/scheduler.(*Scheduler).runBindingCycle(0xc02a6a7500, {0x69ce9f0?, 0xc0027e5470?}, {0x69fc840, 0xc0286f2540}, {0x6a32e60, 0xc009628dc8}, {{0xc021909d80, 0x8}, 0x8, ...}, ...)
    		/home/prow/go/src/k8s.io/kubernetes/pkg/scheduler/schedule_one.go:164 +0x1e8
2026-03-03 16:21:38 +01:00
Sunyanan Choochotkaew
e035c41256
DRA: Promote DRAConsumableCapacity to Beta
Signed-off-by: Sunyanan Choochotkaew <sunyanan.choochotkaew1@ibm.com>
2026-03-03 18:30:26 +09:00
Maciej Skoczeń
912bf9c4ef Use a single RunPermitPlugins function and call AddWaitingPod outside a framework 2026-03-02 15:14:12 +00:00
Mads Jensen
f11bb48738 Remove redundant re-assignment in for-loops under pkg
This applies the forvar rule from modernize. The for-loop semantics
changed in Go 1.22, making this pattern obsolete.
2026-03-02 08:47:43 +01:00
yliao
c215164395 fix test/integration/scheduler/batch 2026-02-27 22:29:36 +00:00
Jordan Liggitt
4ab6ae2a59
Drop direct use of github.com/stretchr/testify in component-helpers 2026-02-20 14:50:15 -05:00