* stacks: support sensitive input values in planned changes
* stacks: fix panic when applying sensitive inputs
The stacks-specific check that a given plan's input variable values
remain unchanged at apply time would previously panic when given a
marked value. Marks are not important for this check, so we simply
remove them to ensure that it passes. This seems in line with how
AssertObjectCompatible works, which I'm taking as precedent.
The new test added in this commit panics without the corresponding code
change.
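The shape of that fix can be sketched as follows. `markedValue` and `inputUnchanged` are illustrative stand-ins (the real code works with cty values and cty marks); the sketch shows only the idea of stripping marks before the equality check:

```go
package main

import (
	"fmt"
	"reflect"
)

// Hypothetical stand-in for a cty marked value: a value paired with a
// set of marks such as "sensitive".
type markedValue struct {
	Value interface{}
	Marks map[string]bool
}

// unmark returns the underlying value with all marks removed, mirroring
// what cty's UnmarkDeep does. Marks are irrelevant to this check.
func unmark(v markedValue) interface{} { return v.Value }

// inputUnchanged reports whether an apply-time input matches the value
// recorded in the plan, ignoring any marks on either side.
func inputUnchanged(planned, applyTime markedValue) bool {
	return reflect.DeepEqual(unmark(planned), unmark(applyTime))
}

func main() {
	planned := markedValue{Value: "hunter2", Marks: map[string]bool{"sensitive": true}}
	applyTime := markedValue{Value: "hunter2"}
	fmt.Println(inputUnchanged(planned, applyTime)) // true: marks don't matter
}
```

Without the unmarking step, the comparison would be handed a marked value directly, which is where the panic came from.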
---------
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
* stacks: fix bug preventing cross-provider move refactorings
* also make provider functions work
* stacks: add support for provider functions in .tfstack.hcl files
Previously we had the entire raw prior state serialized as part of the
"plan header" message in the raw plan, which meant that the maximum state
size was constrained by the maximum allowed gRPC message size.
Instead now we'll use a separate raw plan element for each raw prior state
element, so that we're limited only in the size of individual items rather
than size of the state as a whole.
This deals with the last remaining (non-deprecated) case where our RPC
protocol tries to pack an entire raw state or plan into a single protobuf
message, and so we're now standardized on using a streaming approach in
all cases.
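A minimal sketch of the streaming idea, with hypothetical types standing in for the real raw-plan protobuf messages:

```go
package main

import "fmt"

// Hypothetical shape of one streamed raw-plan element. Previously the
// whole prior state rode along in a single "plan header" message, so
// gRPC's per-message size limit capped the total state size.
type rawPlanElement struct {
	Key string
	Raw []byte
}

// streamPriorState emits one raw-plan element per prior state entry, so
// only individual items are subject to the message size limit.
func streamPriorState(priorState map[string][]byte, emit func(rawPlanElement)) {
	for key, raw := range priorState {
		emit(rawPlanElement{Key: key, Raw: raw})
	}
}

func main() {
	state := map[string][]byte{
		"component.a": []byte(`{"instances":1}`),
		"component.b": []byte(`{"instances":2}`),
	}
	count := 0
	streamPriorState(state, func(el rawPlanElement) {
		count++
	})
	fmt.Println("messages emitted:", count) // one per state element
}
```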
When the topmost stack configuration declares an ephemeral input variable,
its values must be provided separately for each of the plan and apply
phases.
Therefore here we extend the API to allow specifying input variable values
during the apply phase, and add rules to check that all of the
apply-time-required input variables have been specified and that each
non-ephemeral variable is either left unspecified or re-specified with a
value equal to the one used during planning.
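The two rules can be sketched like this; `inputVariable` and `checkApplyTimeInputs` are hypothetical simplifications of the real stacks runtime types:

```go
package main

import (
	"fmt"
	"reflect"
)

// Hypothetical declaration of a stack input variable; only the fields
// relevant to the apply-time check are shown.
type inputVariable struct {
	Ephemeral bool
	PlanValue interface{} // value recorded in the plan; unset if ephemeral
}

// checkApplyTimeInputs enforces the two rules described above:
//   - every ephemeral variable must be re-specified at apply time;
//   - a non-ephemeral variable may be omitted, or re-specified only
//     with a value equal to what the plan recorded.
func checkApplyTimeInputs(decls map[string]inputVariable, applyVals map[string]interface{}) []error {
	var errs []error
	for name, decl := range decls {
		got, specified := applyVals[name]
		switch {
		case decl.Ephemeral && !specified:
			errs = append(errs, fmt.Errorf("ephemeral input %q must be set during apply", name))
		case !decl.Ephemeral && specified && !reflect.DeepEqual(got, decl.PlanValue):
			errs = append(errs, fmt.Errorf("input %q does not match the value used to create the plan", name))
		}
	}
	return errs
}

func main() {
	decls := map[string]inputVariable{
		"api_token": {Ephemeral: true},
		"region":    {Ephemeral: false, PlanValue: "eu-west-1"},
	}
	// Ephemeral value re-supplied; non-ephemeral omitted: no errors.
	errs := checkApplyTimeInputs(decls, map[string]interface{}{"api_token": "abc123"})
	fmt.Println("errors:", len(errs)) // errors: 0
}
```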
This also extends the FindStackConfigurationComponents response to include
more metadata about the input variables and output values so that a caller
can know which ones are ephemeral. The name of that RPC function had
already become a little too specific with the inclusion of embedded stack
information and is even more so now; we might choose to rename it to
something more generic like "AnalyzeStackConfiguration" in future, but
that'd be a breaking change and therefore requires more coordination.
* stacks: emit events for deferred actions
* deferral allowed is always on
* Update internal/rpcapi/stacks.go
Co-authored-by: Daniel Schmidt <danielmschmidt92@gmail.com>
---------
Co-authored-by: Daniel Schmidt <danielmschmidt92@gmail.com>
* stacks: add deferred resource instance planned change to protobuf
* stacks: add deferred resource instance to stack plan sequence
* stacks: add planned change for deferred actions
* stacks: refactor planned change resource instance planned
This moves the components out of the main function definition so that we can reuse the implementation for deferred resource instances, whose message wraps the one used for PlannedChangeResourceInstancePlanned.
* stacks: track deferred changes in stackplan
* add simple tests
* fix tests
* address comments
---------
Co-authored-by: Liam Cervante <liam.cervante@hashicorp.com>
In the very first implementation of "sensitive values" we were
unfortunately not disciplined about separating the idea of "marked value"
from the idea of "sensitive value" (where the latter is a subset of the
former). The first implementation just assumed that any marking whatsoever
meant "sensitive".
We later improved that by adding the marks package and the marks.Sensitive
value to standardize on the representation of "sensitive value" as being
a value marked with _that specific mark_.
However, we did not perform a thorough review of all of the mark-handling
codepaths to make sure they all agreed on that definition. In particular,
the state and plan models were both designed as if they supported arbitrary
marks but then in practice marks other than marks.Sensitive would be
handled in various inconsistent ways: dropped entirely, or interpreted as
if they were marks.Sensitive, possibly inconsistently depending on whether
a value was used only in memory or round-tripped through a wire/file format.
The goal of this commit is to resolve those oddities so that there are now
two possible situations:
- General mark handling: some codepaths genuinely handle marks
generically, by transporting them from input value to output value in
a way consistent with how cty itself deals with marks. This is the
ideal case because it means we can add new marks in future and assume
these codepaths will handle them correctly without any further
modifications.
- Sensitive-only mark preservation: the codepaths that interact with our
wire protocols and file formats typically have only specialized support
for sensitive values in particular, and lack support for any other
marks. Those codepaths are now subject to a new rule where they must
return an error if asked to deal with any other mark, so that if we
introduce new marks in future we'll be forced either to define how we'll
avoid those markings reaching the file/wire formats or extend the
file/wire formats to support the new marks.
Some new helper functions in package marks are intended to standardize how
we deal with the "sensitive values only" situations, in the hope that
this will make it easier to keep things consistent as the codebase evolves
in future.
In practice the modules runtime only ever uses marks.Sensitive as a mark
today, so all of these checks are effectively covering "should never
happen" cases. The only other mark Terraform uses is an implementation
detail of "terraform console" and does not interact with any of the
codepaths that only support sensitive values in particular.
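A sketch of the "sensitive-only" rule, using plain strings in place of cty marks (names here are illustrative, not the real package marks API):

```go
package main

import (
	"fmt"
	"sort"
)

// "sensitive" stands in for marks.Sensitive in this sketch.
const sensitiveMark = "sensitive"

// validateSensitiveOnly enforces the new rule for wire/file codepaths:
// any mark other than the sensitive mark is an error, rather than being
// silently dropped or treated as if it were sensitive.
func validateSensitiveOnly(valueMarks map[string]bool) error {
	var others []string
	for m := range valueMarks {
		if m != sensitiveMark {
			others = append(others, m)
		}
	}
	if len(others) != 0 {
		sort.Strings(others)
		return fmt.Errorf("unsupported value marks %v: this codepath supports only sensitive-value marks", others)
	}
	return nil
}

func main() {
	fmt.Println(validateSensitiveOnly(map[string]bool{sensitiveMark: true})) // <nil>
	fmt.Println(validateSensitiveOnly(map[string]bool{"raw": true}))
}
```

The point of erroring rather than ignoring is that a future new mark fails loudly at the wire/file boundary instead of silently losing its meaning.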
Components expect that check results will be round-tripped
through the plan in order to update unknown check results
during the apply. We hadn't wired this up for stack plans,
resulting in panics when the apply tries to update a check
status that doesn't exist.
Due to an oversight in our handling of resource instance objects that are
neither in configuration nor plan -- which is true for data resources that
have since been removed from the configuration -- we were generating plan
change objects that were lacking a provider configuration address, which
made them syntactically invalid and thus not reloadable using the
raw plan parser.
This is a bit of a strange situation since we don't technically _need_ a
provider configuration address for these; all we're going to do is just
unceremoniously delete them from the state during apply anyway. However,
we always have the provider configuration address readily available, so
adding it here is overall simpler than changing the parser, the
models it populates, and all of the downstream users of those models to
treat this field as optional.
This commit is more test case than it is fix, since the fix was relatively
straightforward once I had a test case to reproduce the problem it's
fixing.
These ideas are both already implied by some logic elsewhere in the system,
but until now we didn't have the decision logic centralized in a single
place that could therefore evolve over time without necessarily always
updating every caller together.
We'll now have the modules runtime produce its own boolean ruling about
each characteristic, which callers can rely on for the mechanical
decision-making of whether to offer the user an "approve" prompt, and
whether to remind the user after apply that it was an incomplete plan
that will probably therefore need at least one more plan/apply round to
converge.
The "Applyable" flag directly replaces the previous method Plan.CanApply,
with equivalent logic. Making this a field instead of a method means that
we can freeze it as part of a saved plan, rather than recalculating it
when we reload the plan, and we can export the field value in our export
formats like JSON while ensuring it'll always be consistent with what
Terraform is using internally.
Callers can (and should) still use other context in the plan to return
more tailored messages for specific situations they already know about
that might be useful to users, but with these flags as a baseline callers
can now just fall back to a generic presentation when encountering a
situation they don't yet understand, rather than making the wrong decision
and causing something strange to happen. That is: a lack of awareness of
a new rule will now cause just a generic message in the UI, rather than
incorrect behavior.
This commit mostly just deals with populating the flags, and then all of
the direct consequences of that on our various tests. Further changes to
actually make use of these flags elsewhere in the system will follow in
later commits, both in this repository and in other repositories.
Components can emit sensitive values as outputs, which can be consumed
as inputs to other components. This commit ensures that such values are
correctly processed in order to pass their sensitivity to the modules
runtime.
Previously we were directly using the planproto.DynamicValue message type,
but that's symmetrical with plans.DynamicValue in that it encodes only
the value itself and not associated metadata such as the paths that have
sensitive value marks.
This new tfstackdata1.DynamicValue therefore echoes
terraform1.DynamicValue by carrying both the value and the metadata
together.
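An illustrative Go mirror of what the combined message carries (field names here are assumptions, not the generated tfstackdata1 types):

```go
package main

import "fmt"

// dynamicValue echoes the idea behind tfstackdata1.DynamicValue: the
// serialized value travels together with the paths carrying sensitive
// marks, rather than the value alone as in planproto.DynamicValue.
type dynamicValue struct {
	MsgpackData    []byte     // the value itself, MessagePack-encoded
	SensitivePaths [][]string // attribute paths marked as sensitive
}

func main() {
	v := dynamicValue{
		MsgpackData:    []byte{0x81}, // stand-in bytes, not real data
		SensitivePaths: [][]string{{"password"}},
	}
	fmt.Println("sensitive paths preserved:", len(v.SensitivePaths))
}
```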
This implies a slight change to one of the existing message types that
was encoding a map of planned input values for each component instance,
but in practice we're not currently making use of that data anyway -- it's
there to enable a hypothetical extra correctness check in the stacks
runtime so that we can detect inconsistency problems closer to their
source -- and so this change doesn't affect anything in practice. Before
we make use of it we'll probably want to change the internal API a little
so that we can preserve the sensitive value metadata on those values, but
that's beyond the scope of this PR.
The main point of adding this is to track the output values for a component
instance as part of the raw state, and so the new field for that is also
added here although nothing will read or write it yet. Actual use of that
new field will follow in future commits.
Add additional fields to AppliedChange#ResourceInstance
We're adding the resource mode, resource type, and provider address to the AppliedChange's ResourceInstance message.
This recorded that we needed to encode the type information for the
output values. In the end we didn't need to do anything special for that
because we're borrowing the pre-serialized data originally designed for
the traditional Terraform CLI saved plan format, and that encodes the
type information as part of the MessagePack encoding in-band.
Previously we were still partially relying on the record of this in the
embedded planproto object, but since we're now allowing "planned changes"
that don't actually include a change in that traditional sense we need to
track the provider configuration address separately as a top-level field.
As with some of the addresses before, this means we're now storing the
provider config address redundantly in two places when there _is_ a
planproto change. Hopefully we'll improve that someday, but we don't need
to sweat it too much right now because this only affects the internal
raw plan format that is not subject to any compatibility promises between
releases, so we can freely change it in any future version.
Previously our raw plan data structure only included information about
resource instances that have planned actions, which meant we were missing
those which only get their state updated during refresh and don't actually
need any real work done.
Now we'll track everything, accepting that some "planned changes" for
resource instance objects will not actually include a planned change in
the typical sense, and instead only represent a state update.
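Sketched with illustrative types, the record now looks roughly like this: the traditional change body becomes optional, and a nil change means a refresh-only state update:

```go
package main

import "fmt"

// Illustrative shapes only; the real raw-plan types differ.
type change struct{ Action string }

// A raw-plan record now exists for every resource instance object, but
// the traditional change body is optional: refresh-only updates carry a
// new state without any planned action.
type plannedChangeResourceInstance struct {
	ProviderConfigAddr string  // now tracked as a top-level field
	Change             *change // nil when this record is only a state update
	NewStateJSON       []byte
}

// isRefreshOnly reports whether a record represents only a state update.
func isRefreshOnly(pc plannedChangeResourceInstance) bool {
	return pc.Change == nil
}

func main() {
	records := []plannedChangeResourceInstance{
		{ProviderConfigAddr: `provider["example"]`, Change: &change{Action: "update"}},
		{ProviderConfigAddr: `provider["example"]`, NewStateJSON: []byte(`{}`)},
	}
	for i, pc := range records {
		fmt.Printf("record %d refresh-only: %v\n", i, isRefreshOnly(pc))
	}
}
```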
Previously we had an asymmetry between PlannedChange and AppliedChange,
where the latter supported any number of "description" objects whereas
the former only supported zero or one.
This change then is primarily motivated by making these two APIs
consistent, but also has some other benefits:
- The generated RPC stub packages will now have a specific type to use
to represent _just_ a change description for a plan, so callers can
pass around and preserve just those objects in isolation, rather than
always having to use the entire PlannedChange message for that purpose.
- This gives us a little more flexibility if later we find we need to
return more than one description associated with a single logical
change in future. For example, if we end up needing to make a breaking
change to how we describe a particular kind of change, we might send
an object containing both the old form and the new form together,
and instruct clients that support the new form to treat it as taking
priority over the old form when both are present.
We don't yet know of any specific reason to do this, but this protocol
is likely to be with us for a long time and so we need to prioritize
leaving room for future additions we haven't thought of yet.
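The "both old and new form" idea can be sketched like this, with hypothetical types: a client simply prefers the newest description form it understands:

```go
package main

import "fmt"

// Illustrative only: with descriptions split into their own type, both
// PlannedChange and AppliedChange can carry any number of them.
type changeDescription struct {
	FormatVersion int
	Body          string
}

type plannedChange struct {
	Descriptions []changeDescription
}

// preferred picks the newest description form a client understands,
// falling back to older forms when several are present.
func preferred(pc plannedChange, maxVersion int) *changeDescription {
	var best *changeDescription
	for i := range pc.Descriptions {
		d := &pc.Descriptions[i]
		if d.FormatVersion <= maxVersion && (best == nil || d.FormatVersion > best.FormatVersion) {
			best = d
		}
	}
	return best
}

func main() {
	pc := plannedChange{Descriptions: []changeDescription{
		{FormatVersion: 1, Body: "old form"},
		{FormatVersion: 2, Body: "new form"},
	}}
	fmt.Println(preferred(pc, 2).Body) // a v2-aware client takes the new form
	fmt.Println(preferred(pc, 1).Body) // an older client falls back to the old form
}
```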
This is a breaking change to the API from a client's perspective, but
that's acceptable at the current stage of development where nothing has
been committed in a stable release yet.
We need to retain the prior state (in a form that Terraform Core's modules
runtime can accept) between the plan and apply phases because Terraform
uses it to verify that the provider's "final plan" is consistent with
the initial plan.
We previously tried to do this by reusing the "old" value from the planned
change, but that's not really possible in practice because of the
historical implementation detail that Terraform wants state information
in a JSON format and plan information in MessagePack.
This also contains the beginnings of handling the "discarding" of
unrecognized keys from the state data structures, although we'll probably
need to do more in that area later since this is just the bare minimum.
Previously we incorrectly assumed that resource instances only have one
"current" object each. However, resource instances can also have deposed
objects which we must also keep track of.
This is a breaking change to the rpcapi since we're now using a new
message type for a resource instance _object_ address in several places.
That breakage is intentional here because at the time of writing this
commit the rpcapi is not yet in any stable release and we want to force
updating our existing internal client to properly handle deposed objects
too.
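The new object address can be sketched as an instance address plus an optional deposed key (illustrative types, not the real rpcapi message):

```go
package main

import "fmt"

// A resource instance *object* address: the instance address plus an
// optional deposed key. An empty key addresses the "current" object.
type resourceInstanceObjectAddr struct {
	Instance   string
	DeposedKey string
}

func (a resourceInstanceObjectAddr) String() string {
	if a.DeposedKey == "" {
		return a.Instance
	}
	return a.Instance + " (deposed " + a.DeposedKey + ")"
}

func main() {
	// One instance can have a current object and any number of deposed ones.
	objs := []resourceInstanceObjectAddr{
		{Instance: "aws_instance.web[0]"},
		{Instance: "aws_instance.web[0]", DeposedKey: "3f2a8d91"},
	}
	for _, o := range objs {
		fmt.Println(o)
	}
}
```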