vault/Dockerfile

# Copyright IBM Corp. 2016, 2025
# SPDX-License-Identifier: BUSL-1.1
## DOCKERHUB DOCKERFILE ##
FROM alpine:3 AS default
ARG BIN_NAME
# NAME and PRODUCT_VERSION are the name of the software in releases.hashicorp.com
# and the version to download. Example: NAME=vault PRODUCT_VERSION=1.2.3.
ARG NAME=vault
ARG PRODUCT_VERSION
ARG PRODUCT_REVISION
# TARGETARCH and TARGETOS are set automatically when --platform is provided.
ARG TARGETOS TARGETARCH
# LICENSE_SOURCE is the path to IBM license documents, which may be architecture-specific.
ARG LICENSE_SOURCE
# LICENSE_DEST is the path where license files are installed in the container
ARG LICENSE_DEST
# Additional metadata labels used by container registries, platforms
# and certification scanners.
LABEL name="Vault" \
      maintainer="Vault Team <vault@hashicorp.com>" \
      vendor="HashiCorp" \
      version=${PRODUCT_VERSION} \
      release=${PRODUCT_REVISION} \
      revision=${PRODUCT_REVISION} \
      summary="Vault is a tool for securely accessing secrets." \
      description="Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log."
# Copy the license file as per Legal requirement
COPY ${LICENSE_SOURCE} ${LICENSE_DEST}
# Set ARGs as ENV so that they can be used in ENTRYPOINT/CMD
ENV NAME=$NAME
# Create a non-root user to run the software.
RUN addgroup ${NAME} && adduser -S -G ${NAME} ${NAME}
RUN apk add --no-cache libcap su-exec dumb-init tzdata
COPY dist/$TARGETOS/$TARGETARCH/$BIN_NAME /bin/
# /vault/logs is made available to use as a location to store audit logs, if
# desired; /vault/file is made available to use as a location with the file
# storage backend, if desired; the server will be started with /vault/config as
# the configuration directory so you can add additional config files in that
# location.
RUN mkdir -p /vault/logs && \
    mkdir -p /vault/file && \
    mkdir -p /vault/config && \
    chown -R ${NAME}:${NAME} /vault
# Expose the logs directory as a volume since there's potentially long-running
# state in there
VOLUME /vault/logs
# Expose the file directory as a volume since there's potentially long-running
# state in there
VOLUME /vault/file
# 8200/tcp is the primary interface that applications use to interact with
# Vault.
EXPOSE 8200
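# As an illustration of how the directories and port above are typically used,
# a run of the published image might persist audit logs and file-backend data
# in named volumes (the "hashicorp/vault" tag and the volume names are
# examples, not defined in this file):
#   docker run -p 8200:8200 \
#     -v vault-logs:/vault/logs \
#     -v vault-file:/vault/file \
#     hashicorp/vault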
# The entry point script uses dumb-init as the top-level process to reap any
# zombie processes created by Vault sub-processes.
#
# For production derivatives of this container, you should add the IPC_LOCK
# capability so that Vault can mlock memory.
COPY .release/docker/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
# By default you'll get a single-node development server that stores everything
# in RAM and bootstraps itself. Don't use this configuration for production.
CMD ["server", "-dev"]
## UBI DOCKERFILE ##
FROM registry.access.redhat.com/ubi10/ubi-minimal AS ubi
ARG BIN_NAME
# NAME and PRODUCT_VERSION are the name of the software in releases.hashicorp.com
# and the version to download. Example: NAME=vault PRODUCT_VERSION=1.2.3.
ARG NAME=vault
ARG PRODUCT_VERSION
ARG PRODUCT_REVISION
# TARGETARCH and TARGETOS are set automatically when --platform is provided.
ARG TARGETOS TARGETARCH
# LICENSE_SOURCE is the path to IBM license documents, which may be architecture-specific.
ARG LICENSE_SOURCE
# LICENSE_DEST is the path where license files are installed in the container
ARG LICENSE_DEST
# Additional metadata labels used by container registries, platforms
# and certification scanners.
LABEL name="Vault" \
      maintainer="Vault Team <vault@hashicorp.com>" \
      vendor="HashiCorp" \
      version=${PRODUCT_VERSION} \
      release=${PRODUCT_REVISION} \
      revision=${PRODUCT_REVISION} \
      summary="Vault is a tool for securely accessing secrets." \
      description="Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log."
# Set ARGs as ENV so that they can be used in ENTRYPOINT/CMD
ENV NAME=$NAME
# Copy the license file as per Legal requirement
COPY ${LICENSE_SOURCE} ${LICENSE_DEST}/
# We must have a copy of the license in this directory to comply with the Red Hat HasLicense requirement
# Note the trailing slash on the first argument -- plain files meet the requirement but directories do not.
COPY ${LICENSE_SOURCE}/ /licenses/
# Set up certificates, our base tools, and Vault. Unlike the other version of
# this (https://github.com/hashicorp/docker-vault/blob/master/ubi/Dockerfile),
# we copy in the Vault binary from CRT.
RUN set -eux; \
    microdnf install -y ca-certificates gnupg openssl libcap tzdata procps shadow-utils util-linux tar
# Create a non-root user to run the software.
RUN groupadd --gid 1000 vault && \
    adduser --uid 100 --system -g vault vault && \
    usermod -a -G root vault
# Copy in the new Vault from CRT pipeline, rather than fetching it from our
# public releases.
COPY dist/$TARGETOS/$TARGETARCH/$BIN_NAME /bin/
# /vault/logs is made available to use as a location to store audit logs, if
# desired; /vault/file is made available to use as a location with the file
# storage backend, if desired; the server will be started with /vault/config as
# the configuration directory so you can add additional config files in that
# location.
ENV HOME=/home/vault
RUN mkdir -p /vault/logs && \
    mkdir -p /vault/file && \
    mkdir -p /vault/config && \
    mkdir -p $HOME && \
    chown -R vault /vault && chown -R vault $HOME && \
    chgrp -R 0 $HOME && chmod -R g+rwX $HOME && \
    chgrp -R 0 /vault && chmod -R g+rwX /vault
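# The group-0 ownership and g+rwX permissions above follow the usual pattern
# for images that may be started under an arbitrary non-root UID (for example
# on OpenShift, which assigns a random UID in the root group). An illustrative
# invocation (the image tag and UID are examples, not defined in this file):
#   docker run --user 1000680000:0 -p 8200:8200 vault-ubi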
# Expose the logs directory as a volume since there's potentially long-running
# state in there
VOLUME /vault/logs
# Expose the file directory as a volume since there's potentially long-running
# state in there
VOLUME /vault/file
# 8200/tcp is the primary interface that applications use to interact with
# Vault.
EXPOSE 8200
# The entry point script uses dumb-init as the top-level process to reap any
# zombie processes created by Vault sub-processes.
#
# For production derivatives of this container, you should add the IPC_LOCK
# capability so that Vault can mlock memory.
COPY .release/docker/ubi-docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
# Use the Vault user as the default user for starting this container.
USER vault
# By default you'll get a single-node development server that stores everything
# in RAM and bootstraps itself. Don't use this configuration for production.
CMD ["server", "-dev"]
FROM ubi AS ubi-fips
FROM ubi AS ubi-hsm
FROM ubi AS ubi-hsm-fips
## Builder:
#
# A build container used to build the Vault binary. We use focal because its
# glibc is old enough to remain compatible with all of our supported distros
# for editions that require CGO. This container is used in CI to build all
# binaries that require CGO.
#
# To run it locally, first build the builder container:
# docker build -t builder --build-arg GO_VERSION=$(cat .go-version) .
#
# Then build Vault using the builder container:
# docker run -it -v $(pwd):/build --env GITHUB_TOKEN=$GITHUB_TOKEN --env GO_TAGS='ui enterprise cgo hsm venthsm' --env GOARCH=s390x --env GOOS=linux --env VERSION=1.20.0-beta1 --env VERSION_METADATA=ent.hsm --env CGO_ENABLED=1 builder make ci-build
#
# You can also share your local Go modules with the container to avoid downloading
# them every time:
# docker run -it -v $(pwd):/build -v $(go env GOMODCACHE):/go-mod-cache --env GITHUB_TOKEN=$GITHUB_TOKEN --env GO_TAGS='ui enterprise cgo hsm venthsm' --env GOARCH=s390x --env GOOS=linux --env VERSION=1.20.0-beta1 --env VERSION_METADATA=ent.hsm --env GOMODCACHE=/go-mod-cache --env CGO_ENABLED=1 builder make ci-build
#
# If you have a Linux machine, you can also share locally built tools with the container:
# GOBIN="$(go env GOPATH)/bin" make tools
# docker run -it -v $(pwd):/build -v $(go env GOMODCACHE):/go-mod-cache -v "$(go env GOPATH)/bin":/opt/tools/bin --env GITHUB_TOKEN=$GITHUB_TOKEN --env GO_TAGS='ui enterprise cgo hsm venthsm' --env GOARCH=s390x --env GOOS=linux --env VERSION=1.20.0-beta1 --env VERSION_METADATA=ent.hsm --env GOMODCACHE=/go-mod-cache --env CGO_ENABLED=1 builder make ci-build
FROM ubuntu:focal AS builder
# Pass in the GO_VERSION as a build-arg
ARG GO_VERSION
# Set our environment
ENV PATH="/root/go/bin:/opt/go/bin:/opt/tools/bin:$PATH"
ENV GOPRIVATE='github.com/hashicorp/*'
# Install the necessary system tooling to cross-compile Vault for our various
# CGO targets. Do this separately from the branch-specific Go and build
# toolchains so our various builder image layers can share cache.
COPY .build/system.sh .
RUN chmod +x system.sh && ./system.sh && rm -rf system.sh
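# A minimal sketch of reusing these layers across CI builds, assuming BuildKit
# (buildx) with the GitHub Actions cache backend; the cache backend and tag are
# assumptions, not part of this Dockerfile:
#   docker buildx build --target builder \
#     --build-arg GO_VERSION=$(cat .go-version) \
#     --cache-from type=gha --cache-to type=gha,mode=max \
#     -t builder .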
# Install the correct Go toolchain
COPY .build/go.sh .
RUN chmod +x go.sh && ./go.sh && rm -rf go.sh
# Install the Vault tools installer. It might be required during the build if
# the pre-built tools are not mounted into the container.
COPY tools/tools.sh .
RUN chmod +x tools.sh
# Run the build
COPY .build/entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]