/*
Copyright 2016 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package cloud

import (
	"context"
	"errors"
	"fmt"
	"time"

	"k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/apimachinery/pkg/util/wait"
	coreinformers "k8s.io/client-go/informers/core/v1"
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	v1core "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/record"
	clientretry "k8s.io/client-go/util/retry"
	cloudprovider "k8s.io/cloud-provider"
	cloudnodeutil "k8s.io/cloud-provider/node/helpers"
	"k8s.io/klog"
	kubeletapis "k8s.io/kubernetes/pkg/kubelet/apis"
	schedulerapi "k8s.io/kubernetes/pkg/scheduler/api"
	nodeutil "k8s.io/kubernetes/pkg/util/node"
)

// labelReconcileInfo lists Node labels to reconcile, and how to reconcile them.
// primaryKey and secondaryKey are keys of labels to reconcile.
//   - If both keys exist but their values don't match, use the value from the
//     primaryKey as the source of truth to reconcile.
//   - If ensureSecondaryExists is true and the secondaryKey does not
//     exist, secondaryKey will be added with the value of the primaryKey.
var labelReconcileInfo = []struct {
	primaryKey            string
	secondaryKey          string
	ensureSecondaryExists bool
}{
	{
		// Reconcile the beta and the GA zone label using the beta label as
		// the source of truth
		// TODO: switch the primary key to GA labels in v1.21
		primaryKey:            v1.LabelZoneFailureDomain,
		secondaryKey:          v1.LabelZoneFailureDomainStable,
		ensureSecondaryExists: true,
	},
	{
		// Reconcile the beta and the stable region label using the beta label as
		// the source of truth
		// TODO: switch the primary key to GA labels in v1.21
		primaryKey:            v1.LabelZoneRegion,
		secondaryKey:          v1.LabelZoneRegionStable,
		ensureSecondaryExists: true,
	},
	{
		// Reconcile the beta and the stable instance-type label using the beta label as
		// the source of truth
		// TODO: switch the primary key to GA labels in v1.21
		primaryKey:            v1.LabelInstanceType,
		secondaryKey:          v1.LabelInstanceTypeStable,
		ensureSecondaryExists: true,
	},
}
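
// For illustration (the example key/value below is an assumption, not taken from
// this file): a node that only carries the beta zone label, e.g.
// failure-domain.beta.kubernetes.io/zone=us-east-1a, has the stable
// topology.kubernetes.io/zone label added with the same value, while a stable
// label whose value disagrees with the beta one is overwritten with the beta value.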

var UpdateNodeSpecBackoff = wait.Backoff{
	Steps:    20,
	Duration: 50 * time.Millisecond,
	Jitter:   1.0,
}
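
// CloudNodeController initializes newly registered nodes (removing the external
// cloud provider taint once cloud metadata has been applied) and keeps node
// addresses and topology labels in sync with the configured cloud provider.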
type CloudNodeController struct {
	nodeInformer coreinformers.NodeInformer
	kubeClient   clientset.Interface
	recorder     record.EventRecorder

	cloud cloudprovider.Interface

	nodeStatusUpdateFrequency time.Duration
}

// NewCloudNodeController creates a CloudNodeController object
func NewCloudNodeController(
	nodeInformer coreinformers.NodeInformer,
	kubeClient clientset.Interface,
	cloud cloudprovider.Interface,
	nodeStatusUpdateFrequency time.Duration) (*CloudNodeController, error) {

	eventBroadcaster := record.NewBroadcaster()
	recorder := eventBroadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "cloud-node-controller"})
	eventBroadcaster.StartLogging(klog.Infof)

	klog.Infof("Sending events to api server.")
	eventBroadcaster.StartRecordingToSink(&v1core.EventSinkImpl{Interface: kubeClient.CoreV1().Events("")})

	if _, ok := cloud.Instances(); !ok {
		return nil, errors.New("cloud provider does not support instances")
	}

	cnc := &CloudNodeController{
		nodeInformer:              nodeInformer,
		kubeClient:                kubeClient,
		recorder:                  recorder,
		cloud:                     cloud,
		nodeStatusUpdateFrequency: nodeStatusUpdateFrequency,
	}

	// Use shared informer to listen to add/update of nodes. Note that any nodes
	// that exist before the node controller starts will show up in the update method.
	cnc.nodeInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { cnc.AddCloudNode(context.TODO(), obj) },
		UpdateFunc: func(oldObj, newObj interface{}) { cnc.UpdateCloudNode(context.TODO(), oldObj, newObj) },
	})

	return cnc, nil
}
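
// Example wiring (a minimal sketch; the informer factory, kubeClient, cloud and
// stopCh below are assumptions for illustration, not part of this file):
//
//	factory := informers.NewSharedInformerFactory(kubeClient, 0)
//	cnc, err := NewCloudNodeController(factory.Core().V1().Nodes(), kubeClient, cloud, 10*time.Second)
//	if err != nil {
//		klog.Fatalf("failed to construct cloud node controller: %v", err)
//	}
//	factory.Start(stopCh)
//	go cnc.Run(stopCh)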

// Run updates newly registered nodes with information
// from the cloud provider. This call is blocking, so it should be called
// via a goroutine.
func (cnc *CloudNodeController) Run(stopCh <-chan struct{}) {
	defer utilruntime.HandleCrash()

	// The following loop communicates with the API server with a worst-case complexity
	// of O(num_nodes) per cycle. This is justified here because these events fire
	// very infrequently. DO NOT MODIFY this to perform frequent operations.

	// Start a loop to periodically update the node addresses obtained from the cloud
	wait.Until(func() { cnc.UpdateNodeStatus(context.TODO()) }, cnc.nodeStatusUpdateFrequency, stopCh)
}

// UpdateNodeStatus updates the node status, such as node addresses
func (cnc *CloudNodeController) UpdateNodeStatus(ctx context.Context) {
	instances, ok := cnc.cloud.Instances()
	if !ok {
		utilruntime.HandleError(fmt.Errorf("failed to get instances from cloud provider"))
		return
	}

	nodes, err := cnc.kubeClient.CoreV1().Nodes().List(metav1.ListOptions{ResourceVersion: "0"})
	if err != nil {
		klog.Errorf("Error monitoring node status: %v", err)
		return
	}

	for i := range nodes.Items {
		cnc.updateNodeAddress(ctx, &nodes.Items[i], instances)
	}

	for _, node := range nodes.Items {
		err = cnc.reconcileNodeLabels(node.Name)
		if err != nil {
			klog.Errorf("Error reconciling node labels for node %q, err: %v", node.Name, err)
		}
	}
}

// reconcileNodeLabels reconciles node labels transitioning from beta to GA
func (cnc *CloudNodeController) reconcileNodeLabels(nodeName string) error {
	node, err := cnc.nodeInformer.Lister().Get(nodeName)
	if err != nil {
		// If the node is not found, just ignore it.
		if apierrors.IsNotFound(err) {
			return nil
		}

		return err
	}

	if node.Labels == nil {
		// Nothing to reconcile.
		return nil
	}

	labelsToUpdate := map[string]string{}
	for _, r := range labelReconcileInfo {
		primaryValue, primaryExists := node.Labels[r.primaryKey]
		secondaryValue, secondaryExists := node.Labels[r.secondaryKey]

		if !primaryExists {
			// The primary label key does not exist. This should not happen
			// within our supported version skew range when no external
			// components/factors are modifying the node object. Ignore this case.
			continue
		}
		if secondaryExists && primaryValue != secondaryValue {
			// Secondary label exists, but is not consistent with the primary
			// label. Need to reconcile.
			labelsToUpdate[r.secondaryKey] = primaryValue

		} else if !secondaryExists && r.ensureSecondaryExists {
			// Apply secondary label based on primary label.
			labelsToUpdate[r.secondaryKey] = primaryValue
		}
	}

	if len(labelsToUpdate) == 0 {
		return nil
	}

	if !cloudnodeutil.AddOrUpdateLabelsOnNode(cnc.kubeClient, labelsToUpdate, node) {
		return fmt.Errorf("failed update labels for node %+v", node)
	}

	return nil
}

// updateNodeAddress updates the node addresses of a single node
func (cnc *CloudNodeController) updateNodeAddress(ctx context.Context, node *v1.Node, instances cloudprovider.Instances) {
	// Do not process nodes that are still tainted
	cloudTaint := getCloudTaint(node.Spec.Taints)
	if cloudTaint != nil {
		klog.V(5).Infof("This node %s is still tainted. Will not process.", node.Name)
		return
	}
	// A node that isn't present according to the cloud provider shouldn't have its address updated
	exists, err := ensureNodeExistsByProviderID(ctx, instances, node)
	if err != nil {
		// Continue to update the node address when we are not sure whether the node exists
		klog.Errorf("%v", err)
	} else if !exists {
		klog.V(4).Infof("The node %s is no longer present according to the cloud provider, do not process.", node.Name)
		return
	}

	nodeAddresses, err := getNodeAddressesByProviderIDOrName(ctx, instances, node)
	if err != nil {
		klog.Errorf("Error getting node addresses for node %q: %v", node.Name, err)
		return
	}

	if len(nodeAddresses) == 0 {
		klog.V(5).Infof("Skipping node address update for node %q since cloud provider did not return any", node.Name)
		return
	}

	// Check if a hostname address exists in the cloud provided addresses
	hostnameExists := false
	for i := range nodeAddresses {
		if nodeAddresses[i].Type == v1.NodeHostName {
			hostnameExists = true
			break
		}
	}
	// If hostname was not present in cloud provided addresses, use the hostname
	// from the existing node (populated by kubelet)
	if !hostnameExists {
		for _, addr := range node.Status.Addresses {
			if addr.Type == v1.NodeHostName {
				nodeAddresses = append(nodeAddresses, addr)
			}
		}
	}
	// If nodeIP was suggested by the user, ensure that
	// it can be found in the cloud as well (consistent with the behaviour in kubelet)
	if nodeIP, ok := ensureNodeProvidedIPExists(node, nodeAddresses); ok {
		if nodeIP == nil {
			klog.Errorf("Specified Node IP not found in cloudprovider for node %q", node.Name)
			return
		}
	}
	if !nodeAddressesChangeDetected(node.Status.Addresses, nodeAddresses) {
		return
	}
	newNode := node.DeepCopy()
	newNode.Status.Addresses = nodeAddresses
	_, _, err = nodeutil.PatchNodeStatus(cnc.kubeClient.CoreV1(), types.NodeName(node.Name), node, newNode)
	if err != nil {
		klog.Errorf("Error patching node with cloud ip addresses = [%v]", err)
	}
}

// nodeModifier is used to carry changes to node objects across multiple attempts to update them
// in a retry-if-conflict loop.
type nodeModifier func(*v1.Node)
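
// A typical modifier mutates a single field, for example (the label key and value
// here are illustrative assumptions):
//
//	func(n *v1.Node) { n.Labels["topology.kubernetes.io/region"] = "us-east-1" }

// UpdateCloudNode is invoked on node update events; it re-runs cloud initialization
// for nodes that still carry the external cloud provider taint.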
func (cnc *CloudNodeController) UpdateCloudNode(ctx context.Context, _, newObj interface{}) {
	node, ok := newObj.(*v1.Node)
	if !ok {
		utilruntime.HandleError(fmt.Errorf("unexpected object type: %v", newObj))
		return
	}

	cloudTaint := getCloudTaint(node.Spec.Taints)
	if cloudTaint == nil {
		// The node has already been initialized so nothing to do.
		return
	}

	cnc.initializeNode(ctx, node)
}

// AddCloudNode handles initializing new nodes registered with the cloud taint.
func (cnc *CloudNodeController) AddCloudNode(ctx context.Context, obj interface{}) {
	node := obj.(*v1.Node)

	cloudTaint := getCloudTaint(node.Spec.Taints)
	if cloudTaint == nil {
		klog.V(2).Infof("This node %s is registered without the cloud taint. Will not process.", node.Name)
		return
	}

	cnc.initializeNode(ctx, node)
}

// initializeNode processes nodes that were added into the cluster and initializes
// them with the cloud provider if appropriate.
func (cnc *CloudNodeController) initializeNode(ctx context.Context, node *v1.Node) {
	klog.Infof("Initializing node %s with cloud provider", node.Name)

	instances, ok := cnc.cloud.Instances()
	if !ok {
		utilruntime.HandleError(fmt.Errorf("failed to get instances from cloud provider"))
		return
	}

	err := clientretry.RetryOnConflict(UpdateNodeSpecBackoff, func() error {
		// TODO(wlan0): Move this logic to the route controller using the node taint instead of condition
		// Since there are node taints, do we still need this?
		// This condition marks the node as unusable until routes are initialized in the cloud provider
		if cnc.cloud.ProviderName() == "gce" {
			if err := cloudnodeutil.SetNodeCondition(cnc.kubeClient, types.NodeName(node.Name), v1.NodeCondition{
				Type:               v1.NodeNetworkUnavailable,
				Status:             v1.ConditionTrue,
				Reason:             "NoRouteCreated",
				Message:            "Node created without a route",
				LastTransitionTime: metav1.Now(),
			}); err != nil {
				return err
			}
		}
		return nil
	})
	if err != nil {
		utilruntime.HandleError(err)
		return
	}

	curNode, err := cnc.kubeClient.CoreV1().Nodes().Get(node.Name, metav1.GetOptions{})
	if err != nil {
		utilruntime.HandleError(fmt.Errorf("failed to get node %s: %v", node.Name, err))
		return
	}

	cloudTaint := getCloudTaint(curNode.Spec.Taints)
	if cloudTaint == nil {
		// The node object received from the event had the cloud taint but was outdated;
		// the node has actually already been initialized.
		return
	}

	nodeModifiers, err := cnc.getNodeModifiersFromCloudProvider(ctx, curNode, instances)
	if err != nil {
		utilruntime.HandleError(fmt.Errorf("failed to initialize node %s at cloudprovider: %v", node.Name, err))
		return
	}

	nodeModifiers = append(nodeModifiers, func(n *v1.Node) {
		n.Spec.Taints = excludeCloudTaint(n.Spec.Taints)
	})

	err = clientretry.RetryOnConflict(UpdateNodeSpecBackoff, func() error {
		curNode, err := cnc.kubeClient.CoreV1().Nodes().Get(node.Name, metav1.GetOptions{})
		if err != nil {
			return err
		}

		for _, modify := range nodeModifiers {
			modify(curNode)
		}

		_, err = cnc.kubeClient.CoreV1().Nodes().Update(curNode)
		if err != nil {
			return err
		}

		// After adding, call updateNodeAddress to set the cloud-provider-provided IP addresses
		// so that users do not see any significant delay in IP addresses being filled into the node.
		cnc.updateNodeAddress(ctx, curNode, instances)

		klog.Infof("Successfully initialized node %s with cloud provider", node.Name)
		return nil
	})
	if err != nil {
		utilruntime.HandleError(err)
		return
	}
}

// getNodeModifiersFromCloudProvider returns a slice of nodeModifiers that update
// a node object with provider-specific information.
// All of the returned functions are idempotent, because they are used in a retry-if-conflict
// loop, meaning they could get called multiple times.
func (cnc *CloudNodeController) getNodeModifiersFromCloudProvider(ctx context.Context, node *v1.Node, instances cloudprovider.Instances) ([]nodeModifier, error) {
	var nodeModifiers []nodeModifier

	if node.Spec.ProviderID == "" {
		providerID, err := cloudprovider.GetInstanceProviderID(ctx, cnc.cloud, types.NodeName(node.Name))
		if err == nil {
			nodeModifiers = append(nodeModifiers, func(n *v1.Node) {
				if n.Spec.ProviderID == "" {
					n.Spec.ProviderID = providerID
				}
			})
		} else {
			// we should attempt to set providerID on node, but
			// we can continue if we fail since we will attempt to set
			// node addresses given the node name in getNodeAddressesByProviderIDOrName
			klog.Errorf("failed to set node provider id: %v", err)
		}
	}

	nodeAddresses, err := getNodeAddressesByProviderIDOrName(ctx, instances, node)
	if err != nil {
		return nil, err
	}

	// If the user provided an IP address, ensure that IP address is found
	// in the cloud provider before removing the taint on the node
	if nodeIP, ok := ensureNodeProvidedIPExists(node, nodeAddresses); ok {
		if nodeIP == nil {
			return nil, errors.New("failed to find kubelet node IP from cloud provider")
		}
	}

	if instanceType, err := getInstanceTypeByProviderIDOrName(ctx, instances, node); err != nil {
		return nil, err
	} else if instanceType != "" {
		klog.V(2).Infof("Adding node label from cloud provider: %s=%s", v1.LabelInstanceType, instanceType)
		klog.V(2).Infof("Adding node label from cloud provider: %s=%s", v1.LabelInstanceTypeStable, instanceType)
		nodeModifiers = append(nodeModifiers, func(n *v1.Node) {
			if n.Labels == nil {
				n.Labels = map[string]string{}
			}
			n.Labels[v1.LabelInstanceType] = instanceType
			n.Labels[v1.LabelInstanceTypeStable] = instanceType
		})
	}

	if zones, ok := cnc.cloud.Zones(); ok {
		zone, err := getZoneByProviderIDOrName(ctx, zones, node)
		if err != nil {
			return nil, fmt.Errorf("failed to get zone from cloud provider: %v", err)
		}
		if zone.FailureDomain != "" {
			klog.V(2).Infof("Adding node label from cloud provider: %s=%s", v1.LabelZoneFailureDomain, zone.FailureDomain)
			klog.V(2).Infof("Adding node label from cloud provider: %s=%s", v1.LabelZoneFailureDomainStable, zone.FailureDomain)
			nodeModifiers = append(nodeModifiers, func(n *v1.Node) {
				if n.Labels == nil {
					n.Labels = map[string]string{}
				}
				n.Labels[v1.LabelZoneFailureDomain] = zone.FailureDomain
				n.Labels[v1.LabelZoneFailureDomainStable] = zone.FailureDomain
			})
		}
		if zone.Region != "" {
			klog.V(2).Infof("Adding node label from cloud provider: %s=%s", v1.LabelZoneRegion, zone.Region)
			klog.V(2).Infof("Adding node label from cloud provider: %s=%s", v1.LabelZoneRegionStable, zone.Region)
			nodeModifiers = append(nodeModifiers, func(n *v1.Node) {
				if n.Labels == nil {
					n.Labels = map[string]string{}
				}
				n.Labels[v1.LabelZoneRegion] = zone.Region
				n.Labels[v1.LabelZoneRegionStable] = zone.Region
			})
		}
	}
	return nodeModifiers, nil
}
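
// getCloudTaint returns the external cloud provider taint from the given list,
// or nil if it is not present.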
func getCloudTaint(taints []v1.Taint) *v1.Taint {
	for _, taint := range taints {
		if taint.Key == schedulerapi.TaintExternalCloudProvider {
			return &taint
		}
	}
	return nil
}

func excludeCloudTaint(taints []v1.Taint) []v1.Taint {
	newTaints := []v1.Taint{}
	for _, taint := range taints {
		if taint.Key == schedulerapi.TaintExternalCloudProvider {
			continue
		}
		newTaints = append(newTaints, taint)
	}
	return newTaints
}

// ensureNodeExistsByProviderID checks if the instance exists by the provider id.
// If the provider id in the spec is empty, it calls InstanceID with the node name to get the provider id.
func ensureNodeExistsByProviderID(ctx context.Context, instances cloudprovider.Instances, node *v1.Node) (bool, error) {
	providerID := node.Spec.ProviderID
	if providerID == "" {
		var err error
		providerID, err = instances.InstanceID(ctx, types.NodeName(node.Name))
		if err != nil {
			if err == cloudprovider.InstanceNotFound {
				return false, nil
			}
			return false, err
		}

		if providerID == "" {
			klog.Warningf("Cannot find valid providerID for node name %q, assuming non existence", node.Name)
			return false, nil
		}
	}

	return instances.InstanceExistsByProviderID(ctx, providerID)
}
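
// getNodeAddressesByProviderIDOrName returns the addresses reported by the cloud
// provider, looked up by provider ID first and falling back to the node name.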
func getNodeAddressesByProviderIDOrName(ctx context.Context, instances cloudprovider.Instances, node *v1.Node) ([]v1.NodeAddress, error) {
	nodeAddresses, err := instances.NodeAddressesByProviderID(ctx, node.Spec.ProviderID)
	if err != nil {
		providerIDErr := err
		nodeAddresses, err = instances.NodeAddresses(ctx, types.NodeName(node.Name))
		if err != nil {
			return nil, fmt.Errorf("error fetching node by provider ID: %v, and error by node name: %v", providerIDErr, err)
		}
	}
	return nodeAddresses, nil
}
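
// nodeAddressesChangeDetected reports whether the two address sets differ.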
func nodeAddressesChangeDetected(addressSet1, addressSet2 []v1.NodeAddress) bool {
	if len(addressSet1) != len(addressSet2) {
		return true
	}
	addressMap1 := map[v1.NodeAddressType]string{}

	for i := range addressSet1 {
		addressMap1[addressSet1[i].Type] = addressSet1[i].Address
	}

	for _, v := range addressSet2 {
		if addressMap1[v.Type] != v.Address {
			return true
		}
	}
	return false
}
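
// ensureNodeProvidedIPExists reports whether the kubelet provided a node IP via
// annotation and, if so, returns the matching address from nodeAddresses
// (nil when no match is found).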
func ensureNodeProvidedIPExists(node *v1.Node, nodeAddresses []v1.NodeAddress) (*v1.NodeAddress, bool) {
	var nodeIP *v1.NodeAddress
	nodeIPExists := false
	if providedIP, ok := node.ObjectMeta.Annotations[kubeletapis.AnnotationProvidedIPAddr]; ok {
		nodeIPExists = true
		for i := range nodeAddresses {
			if nodeAddresses[i].Address == providedIP {
				nodeIP = &nodeAddresses[i]
				break
			}
		}
	}
	return nodeIP, nodeIPExists
}
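
// getInstanceTypeByProviderIDOrName returns the instance type reported by the
// cloud provider, looked up by provider ID first and falling back to the node name.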
func getInstanceTypeByProviderIDOrName(ctx context.Context, instances cloudprovider.Instances, node *v1.Node) (string, error) {
	instanceType, err := instances.InstanceTypeByProviderID(ctx, node.Spec.ProviderID)
	if err != nil {
		providerIDErr := err
		instanceType, err = instances.InstanceType(ctx, types.NodeName(node.Name))
		if err != nil {
			return "", fmt.Errorf("InstanceType: Error fetching by providerID: %v Error fetching by NodeName: %v", providerIDErr, err)
		}
	}
	return instanceType, err
}

// getZoneByProviderIDOrName will attempt to get the zone of the node using its providerID,
// then its name. If both attempts fail, an error is returned.
func getZoneByProviderIDOrName(ctx context.Context, zones cloudprovider.Zones, node *v1.Node) (cloudprovider.Zone, error) {
	zone, err := zones.GetZoneByProviderID(ctx, node.Spec.ProviderID)
	if err != nil {
		providerIDErr := err
		zone, err = zones.GetZoneByNodeName(ctx, types.NodeName(node.Name))
		if err != nil {
			return cloudprovider.Zone{}, fmt.Errorf("Zone: Error fetching by providerID: %v Error fetching by NodeName: %v", providerIDErr, err)
		}
	}

	return zone, nil
}