Mirror of https://github.com/kubernetes/kubernetes.git (synced 2026-02-24 02:30:39 -05:00)
Automatic merge from submit-queue (batch tested with PRs 54826, 53576, 55591, 54946, 54825). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

**Run nvidia-gpu device-plugin daemonset as an addon on GCE nodes that have nvidia GPUs attached**

- Instead of the old `Accelerators` feature that added the `alpha.kubernetes.io/nvidia-gpu` resource, use the new `DevicePlugins` feature, which adds vendor-specific resources. (In the case of nvidia GPUs, it adds the `nvidia.com/gpu` resource.)
- Add a node label to GCE nodes with accelerators attached. This node label is the same as what GKE attaches to node pools with accelerators attached. (For example, for an nvidia-tesla-p100 GPU, the label would be `cloud.google.com/gke-accelerator=nvidia-tesla-p100`.) This will help us target accelerator-specific daemonsets etc. to these nodes.
- Run the nvidia-gpu device-plugin daemonset as an addon on GCE nodes that have nvidia GPUs attached.
- Some minor documentation improvements in the addon manager.

**Release note**:

```release-note
GCE nodes with NVIDIA GPUs attached now expose `nvidia.com/gpu` as a resource instead of `alpha.kubernetes.io/nvidia-gpu`.
```

/sig cluster-lifecycle
/sig scheduling
/area hw-accelerators

https://github.com/kubernetes/features/issues/368
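The workload-facing effect of the change above can be illustrated with a pod spec that requests the new vendor-specific resource and uses the accelerator node label as a selector. This is a hedged sketch: the pod and container names and the image are hypothetical, and only `nvidia.com/gpu` and `cloud.google.com/gke-accelerator` come from the PR description.

```yaml
# Illustrative pod spec (metadata names and image are hypothetical).
# It requests the vendor-specific resource exposed by the device plugin
# and targets nodes carrying the GKE-style accelerator label.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-example            # hypothetical name
spec:
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-p100
  containers:
  - name: cuda-container        # hypothetical name
    image: nvidia/cuda          # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 1       # replaces alpha.kubernetes.io/nvidia-gpu
```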
| Name |
|---|
| addons |
| aws |
| centos |
| gce |
| images |
| juju |
| kubemark |
| kubernetes-anywhere |
| lib |
| libvirt-coreos |
| local |
| log-dump |
| openstack-heat |
| photon-controller |
| pre-existing |
| saltbase |
| skeleton |
| vagrant |
| vsphere |
| windows |
| BUILD |
| clientbin.sh |
| common.sh |
| get-kube-binaries.sh |
| get-kube-local.sh |
| get-kube.sh |
| kube-down.sh |
| kube-push.sh |
| kube-up.sh |
| kube-util.sh |
| kubeadm.sh |
| kubectl.sh |
| options.md |
| OWNERS |
| README.md |
| restore-from-backup.sh |
| test-e2e.sh |
| test-network.sh |
| test-smoke.sh |
| update-storage-objects.sh |
| validate-cluster.sh |
# Cluster Configuration
**Deprecation Notice**: This directory has entered maintenance mode and will not be accepting new providers. Please submit new automation deployments to kube-deploy. Deployments in this directory will continue to be maintained and supported at their current level of support.
The scripts and data in this directory automate creation and configuration of a Kubernetes cluster, including networking, DNS, nodes, and master components.
See the getting-started guides for examples of how to use the scripts.
`cloudprovider/config-default.sh` contains a set of tweakable definitions/parameters for the cluster.
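These parameters are typically overridden through the environment before invoking the cluster scripts. A minimal sketch follows; `NUM_NODES` is an assumption based on common `config-default.sh` variables, so check your provider's `config-default.sh` for the exact names it honors.

```shell
# Select which cluster/<provider>/ scripts to use, then override a
# tweakable parameter from config-default.sh via the environment.
export KUBERNETES_PROVIDER=gce   # selects the cluster/gce/ scripts
export NUM_NODES=3               # assumed variable name; verify in config-default.sh

# cluster/kube-up.sh             # would bring up the cluster (not run in this sketch)

echo "provider=${KUBERNETES_PROVIDER} nodes=${NUM_NODES}"
# prints: provider=gce nodes=3
```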
The heavy lifting of configuring the VMs is done by SaltStack.