* test: Add E2E tests for `state list` and `state show` commands
* test: Update `mockPluggableStateStorageProvider` to log a warning during tests where the values in `MockStates` aren't compatible with the `ReadStateBytesFn` default function. Make existing test set an appropriate value in `MockStates`.
* test: Update `mockPluggableStateStorageProvider` helper to include a resource schema
* test: Add command-level test for `state list` showing integration with pluggable state storage code.
* test: Add command-level test for `state show` showing integration with pluggable state storage code.
* test: Add command-level test for `state pull` showing integration with pluggable state storage code.
* test: Add command-level test for `state identities` showing integration with pluggable state storage code.
* test: Add command-level test for `state rm` showing integration with pluggable state storage code.
* test: Add command-level test for `state mv` showing integration with pluggable state storage code.
* test: Add command-level test for `state push` showing integration with pluggable state storage code.
* test: Add command-level test for `state replace-provider` showing integration with pluggable state storage code.
* test: Change shared test fixture to not be named after a specific command under test.
This test fixture is reused across tests that need the config to define a state store but otherwise rely on the mock provider to set up the test scenario.
* test: Update test to use shared test fixture
* test: Remove redundant test fixture
The internal/command/testdata/state-commands-state-store and internal/command/testdata/state-store-unchanged test fixtures are the same.
* fix: Re-add logic for setting chunk size in the context of E2E tests using grpcwrap package
This was removed, incorrectly, in https://github.com/hashicorp/terraform/pull/37899
* refactor: Let panic happen if there's incompatibility between mock returned from `mockPluggableStateStorageProvider` and the `MockStates` that's been set in the mock
* test: Refactor to contain paths in reused variable, remove unnecessary .gitkeep
* test: Remove unneeded test code
* feat: Update the `workspace new` subcommand to work with PSS, add E2E testing
* refactor: Replace instances of `ioutil` with `os` while looking at the workspace command
* docs: Update code comments in `workspace new` command
* test: Update E2E test using PSS with workspace commands to assert state files are created by given commands
* test: Include `workspace show` in happy path E2E test using PSS
* fix: Allow DeleteState RPC to include the id of the state to delete
* test: Include `workspace delete` in happy path E2E test using PSS
* fix: Avoid assignment to nil map in mock provider during WriteStateBytes
* test: Add integration test for workspace commands when using PSS
We still need an E2E test for this, to ensure that the GRPC-related packages pass all the expected data between core and the provider.
* test: Update test to reflect changes in the test fixture configuration
* docs: Fix code comment
* test: Change test to build its own Terraform binary with experiments enabled
* refactor: Replace use of `newMeta` with reuse of the existing Meta, re-setting its UI value
* feat: Implement `inmem` state store in provider-simple-v6
* feat: Add filesystem state store `fs` in provider-simple-v6, no locking implemented
* refactor: Move PSS chunking-related constants into the `pluggable` package, so they can be reused.
* feat: Implement PSS-related methods in grpcwrap package
* test: Add E2E test checking an init and apply (no plan) workflow is usable with both PSS implementations
* fix: Ensure state stores are configured with a suggested chunk size from Core
---------
Co-authored-by: Radek Simko <radeksimko@users.noreply.github.com>
The providers mirror command was updated to inspect the lock file;
however, that was not part of the original intent for the command, and
it's possible that the command needs to be run without a lock file.
Since we have been honoring the lock file for the past few releases,
let's keep that consistent and allow disabling the file with
`-lock-file=false`.
When we added these functions we hadn't yet settled on a single convention
for how provider-contributed functions ought to be named, and so these
followed the convention previously used for Terraform's own built-in
functions.
We've now decided to use the new provider namespace as an opportunity to
partially fix the historical design error of naming functions as all
lowercase without any separation between words. This makes the provider
functions inconsistent with the built-in functions, but should at least
hopefully encourage the provider functions to be consistent with one
another, so that the rule for which to use is relatively easy to remember.
This also switches the word order because the provider functions convention
prefers the verb to come first: encode_tfvars instead of tfvarsencode.
Using the new possibility of provider-contributed functions, this
introduces three new functions which live in the
terraform.io/builtin/terraform provider, rather than being language
builtins, due to their Terraform-domain-specific nature.
The three new functions are:
- tfvarsencode: takes a mapping value and tries to transform it into
Terraform CLI's "tfvars" syntax, which is a small subset of HCL that
only supports key/value pairs with constant values.
- tfvarsdecode: takes a string containing content that could potentially
appear in a "tfvars" file and returns an object representing the
raw variable values defined inside.
- exprencode: takes an arbitrary Terraform value and produces a string
that would yield a similar value if parsed as a Terraform expression.
All three of these are very specialized, of use only in unusual situations
where someone is "gluing together" different Terraform configurations etc
when the usual strategies such as data sources are not suitable. There's
more information on the motivations for (and limitations of) each function
in the included documentation.
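To make the "small subset of HCL" concrete, here is a minimal Go sketch of the kind of output tfvarsencode produces: one constant key/value assignment per line. This is an illustration only, not the actual implementation in the builtin terraform provider, and it handles only string values.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// encodeTFVars illustrates the output shape of tfvarsencode: plain
// "key = value" assignments with constant values, which is the subset
// of HCL that tfvars files support. The real function handles the full
// range of Terraform values; this sketch covers only strings.
func encodeTFVars(vars map[string]string) string {
	keys := make([]string, 0, len(vars))
	for k := range vars {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic output order

	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "%s = %q\n", k, vars[k])
	}
	return b.String()
}

func main() {
	fmt.Print(encodeTFVars(map[string]string{
		"region":        "us-east-1",
		"instance_type": "t3.micro",
	}))
}
```

Running this prints each assignment on its own line, in key order; tfvarsdecode performs the inverse mapping from such text back to an object value.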
When we originally introduced the trust-on-first-use checksum locking
mechanism in v0.14, we had to make some tricky decisions about how it
should interact with the pre-existing optional read-through global cache
of provider packages:
The global cache essentially conflicts with the checksum locking because
if the needed provider is already in the cache then Terraform skips
installing the provider from upstream and therefore misses the opportunity
to capture the signed checksums published by the provider developer. We
can't use the signed checksums to verify a cache entry because the origin
registry protocol is still using the legacy ziphash scheme and that is
only usable for the original zipped provider packages and not for the
unpacked-layout cache directory. Therefore we decided to prioritize the
existing cache directory behavior at the expense of the lock file behavior,
making Terraform produce an incomplete lock file in that case.
Now that we've had some real-world experience with the lock file mechanism,
we can see that the chosen compromise was not ideal because it causes
"terraform init" to behave significantly differently in its lock file
update behavior depending on whether or not a particular provider is
already cached. By robbing Terraform of its opportunity to fetch the
official checksums, Terraform must generate a lock file that is inherently
non-portable, which is problematic for any team which works with the same
Terraform configuration on multiple different platforms.
This change addresses that problem by essentially flipping the decision so
that we'll prioritize the lock file behavior over the provider cache
behavior. Now a global cache entry is eligible for use if and only if the
lock file already contains a checksum that matches the cache entry. This
means that the first time a particular configuration sees a new provider
version, Terraform will fetch it from the configured installation source
(typically the origin registry) and record the checksums from that source.
On subsequent installs of a provider version that is already locked,
Terraform will then consider the cache entry to be eligible and skip
re-downloading the same package.
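The flipped decision can be sketched as follows. All names here are illustrative, not Terraform's actual installer API: a cache entry is usable if and only if the lock file already records a checksum matching it.

```go
package main

import "fmt"

// cacheEntryEligible is a hedged sketch of the rule described above:
// a global cache entry may be reused if and only if the dependency
// lock file already contains a checksum matching that entry.
func cacheEntryEligible(lockedHashes []string, cacheEntryHash string) bool {
	for _, h := range lockedHashes {
		if h == cacheEntryHash {
			return true // locked checksum matches: safe to reuse the cache
		}
	}
	// No matching checksum: install from the configured source instead,
	// so the published checksums can be captured into the lock file.
	return false
}

func main() {
	locked := []string{"h1:abc123"}
	fmt.Println(cacheEntryEligible(locked, "h1:abc123")) // true
	fmt.Println(cacheEntryEligible(locked, "h1:def456")) // false
}
```

The asymmetry is deliberate: a cache miss on first use is the cost of always capturing upstream checksums, which keeps the lock file portable across platforms.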
This intentionally makes the global cache mechanism subordinate to the
lock file mechanism: the lock file must be populated in order for the
global cache to be effective. For those who have many separate
configurations which all refer to the same provider version, they will
need to re-download the provider once for each configuration in order to
gather the information needed to populate the lock file, whereas before
they would have only downloaded it for the _first_ configuration using
that provider.
This should therefore remove the most significant cause of folks ending
up with incomplete lock files that don't work for colleagues using other
platforms, at the expense of bypassing the cache for the first use of
each new package with each new configuration. This tradeoff seems
reasonable because otherwise such users would inevitably need to run
"terraform providers lock" separately anyway, and that command _always_
bypasses the cache. Although this change does decrease the hit rate of the
cache, if we subtract the never-cached downloads caused by
"terraform providers lock" then this is a net benefit overall, and does
the right thing by default without the need to run a separate command.
We have various mechanisms that aim to ensure that the installed provider
plugins are consistent with the lock file and that the lock file is
consistent with the provider requirements, and we do have existing unit
tests for them, but all of those cases mock or fake out at least part of
the process, and in the past that's caused us to miss usability
regressions, where we still catch the error but do so at the wrong layer
and thus generate error messages lacking useful additional context.
Here we'll add some new end-to-end tests to supplement the existing unit
tests, making sure things work as expected when we assemble the system
together as we would in a release. These tests cover a number of different
ways in which the plugin selections can grow inconsistent.
These new tests all run only when we're in a context where we're allowed
to access the network, because they exercise the real plugin installer
codepath. We could technically build this to use a local filesystem mirror
or other such override to avoid that, but the point here is to make sure
we see the expected behavior in the main case, and so it's worth the
small additional cost of downloading the null provider from the real
registry.
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.