We should not need to encode and decode change values within core, since
the encoded form is only needed for serialization. This pattern dates
back to the conversion to the current changes system, when we did not
yet have easy access to the correct schemas to encode and decode the
entire set of changes.
Moving core's handling of changes to use only the decoded values will
drastically improve evaluation efficiency by removing a round trip
through the encoded values for every resource reference.
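The idea above can be sketched as follows. This is a minimal illustration with hypothetical stand-in types (`EncodedChange`, `DecodedChange`, `evaluateRef` are illustrative names, not Terraform's real API): decode once at the serialization boundary, then let every reference read the decoded value directly.

```go
package main

import "fmt"

// Hypothetical stand-ins for the real types: an encoded change as it
// appears in serialized form, and its decoded counterpart.
type EncodedChange struct {
	Raw []byte // serialized value bytes
}

type DecodedChange struct {
	Value string // decoded value; a cty.Value in the real system
}

// decode simulates the decode step that previously ran for every
// resource reference.
func decode(e EncodedChange) DecodedChange {
	return DecodedChange{Value: string(e.Raw)}
}

// encode is now only needed at the serialization boundary.
func encode(d DecodedChange) EncodedChange {
	return EncodedChange{Raw: []byte(d.Value)}
}

// evaluateRef is core's evaluation path: with decoded values held
// directly, each reference just reads the value with no round trip
// through the encoded form.
func evaluateRef(d DecodedChange) string {
	return d.Value
}

func main() {
	enc := EncodedChange{Raw: []byte(`{"id":"example"}`)}
	dec := decode(enc) // decode once, at the boundary
	for i := 0; i < 3; i++ {
		fmt.Println(evaluateRef(dec)) // no per-reference decode
	}
	_ = encode(dec) // encode only when serializing the plan
}
```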
* stacks: fix bug preventing cross-provider move refactorings
* also make provider functions work
* stacks: add support for provider functions in .tfstack.hcl files
* stacks: add deferred resource instance planned change to protobuf
* stacks: add deferred resource instance to stack plan sequence
* stacks: add planned change for deferred actions
* stacks: refactor planned change resource instance planned
This moves the components out of the main function definition so that we can reuse the implementation for deferred resource instances, which wraps the message used for PlannedChangeResourceInstancePlanned.
* stacks: track deferred changes in stackplan
* add simple tests
* fix tests
* address comments
---------
Co-authored-by: Liam Cervante <liam.cervante@hashicorp.com>
Components expect that check results will be round-tripped
through the plan in order to update unknown check results
during the apply. We hadn't wired this up for stack plans,
resulting in panics when the apply tries to update a check
status that doesn't exist.
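The round trip described above can be sketched roughly like this, with illustrative types only (`recordPlannedChecks` and `updateCheck` are hypothetical names): the plan phase must seed a placeholder result for every check so the apply phase has something to update, and skipping that seeding is what produced the missing-status failure.

```go
package main

import "fmt"

// Hypothetical sketch of the check-result round trip through the plan.
type CheckStatus int

const (
	StatusUnknown CheckStatus = iota
	StatusPass
	StatusFail
)

type Plan struct {
	CheckResults map[string]CheckStatus // keyed by check address
}

// recordPlannedChecks seeds the plan with unknown results during the
// plan phase; omitting this step is what caused the panics for stacks.
func recordPlannedChecks(p *Plan, addrs []string) {
	for _, addr := range addrs {
		p.CheckResults[addr] = StatusUnknown
	}
}

// updateCheck is the apply-phase side: it can only update a status that
// the plan actually recorded.
func updateCheck(p *Plan, addr string, status CheckStatus) error {
	if _, ok := p.CheckResults[addr]; !ok {
		return fmt.Errorf("no planned result for %s", addr)
	}
	p.CheckResults[addr] = status
	return nil
}

func main() {
	p := &Plan{CheckResults: map[string]CheckStatus{}}
	recordPlannedChecks(p, []string{"check.example"})
	if err := updateCheck(p, "check.example", StatusPass); err != nil {
		panic(err)
	}
	fmt.Println(p.CheckResults["check.example"] == StatusPass)
}
```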
These ideas are both already implied by some logic elsewhere in the system,
but until now we didn't have the decision logic centralized in a single
place that could therefore evolve over time without necessarily always
updating every caller together.
We'll now have the modules runtime produce its own boolean ruling about
each characteristic, which callers can rely on for the mechanical
decision-making of whether to offer the user an "approve" prompt, and
whether to remind the user after apply that it was an incomplete plan
that will probably therefore need at least one more plan/apply round to
converge.
The "Applyable" flag directly replaces the previous method Plan.CanApply,
with equivalent logic. Making this a field instead of a method means that
we can freeze it as part of a saved plan, rather than recalculating it
when we reload the plan, and we can export the field value in our export
formats like JSON while ensuring it'll always be consistent with what
Terraform is using internally.
Callers can (and should) still use other context in the plan to return
more tailored messages for specific situations they already know about
that might be useful to users, but with these flags as a baseline callers
can now fall back to a generic presentation when encountering a
situation they don't yet understand, rather than making the wrong decision
and causing something strange to happen. That is: a lack of awareness of
a new rule will now produce just a generic message in the UI, rather than
incorrect behavior.
This commit mostly just deals with populating the flags, and then all of
the direct consequences of that on our various tests. Further changes to
actually make use of these flags elsewhere in the system will follow in
later commits, both in this repository and in other repositories.
Previously we were still partially relying on the record of this in the
embedded planproto object, but since we're now allowing "planned changes"
that don't actually include a change in that traditional sense we need to
track the provider configuration address separately as a top-level field.
As with some of the addresses before, this means we're now storing the
provider config address redundantly in two places when there _is_ a
planproto change. Hopefully we'll improve that someday, but we don't need
to sweat it too much right now because this only affects the internal
raw plan format that is not subject to any compatibility promises between
releases, so we can freely change it in any future version.
Previously our raw plan data structure only included information about
resource instances that have planned actions, which meant we were missing
those that only get their state updated during refresh and don't actually
need any real work done.
Now we'll track everything, accepting that some "planned changes" for
resource instance objects will not actually include a planned change in
the typical sense, and instead only represent a state update.
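The two changes above can be sketched together in one illustrative record shape (the struct and field names here are hypothetical, not the real protobuf message): the provider configuration address lives as a top-level field, and the traditional change is optional so a record can represent a pure state update.

```go
package main

import "fmt"

// Hypothetical sketch of a raw-plan record for one resource instance.
type Change struct {
	Action string // e.g. "create", "update"
}

type ResourceInstancePlanned struct {
	Addr               string
	ProviderConfigAddr string  // now tracked separately, not only inside the change
	ChangeSrc          *Change // nil when only the state was updated
	NewStateJSON       []byte  // refreshed state data
}

// describe shows how a consumer distinguishes the two cases.
func describe(r ResourceInstancePlanned) string {
	if r.ChangeSrc == nil {
		return r.Addr + ": state update only (no planned action)"
	}
	return r.Addr + ": planned action " + r.ChangeSrc.Action
}

func main() {
	refreshOnly := ResourceInstancePlanned{
		Addr:               "example.a",
		ProviderConfigAddr: `provider["example"]`,
		NewStateJSON:       []byte(`{"id":"a"}`),
	}
	fmt.Println(describe(refreshOnly))
}
```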
We need to retain the prior state (in a form that Terraform Core's modules
runtime can accept) between the plan and apply phases because Terraform
uses it to verify that the provider's "final plan" is consistent with
the initial plan.
We previously tried to do this by reusing the "old" value from the planned
change, but that's not really possible in practice because of the
historical implementation detail that Terraform wants state information
in a JSON format and plan information in MessagePack.
This also contains the beginnings of handling the "discarding" of
unrecognized keys from the state data structures, although we'll probably
need to do more in that area later since this is just the bare minimum.
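The key-discarding idea can be sketched with the standard library alone (the function name `discardUnknownKeys` is illustrative; the real system works against provider schemas and uses MessagePack for plan values via a separate library): decode the JSON state, keep only the attributes the schema recognizes, and re-encode.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// discardUnknownKeys is a hypothetical sketch of dropping unrecognized
// keys from JSON-formatted state: keep only the attributes named in
// "known" and re-encode the result.
func discardUnknownKeys(stateJSON []byte, known []string) ([]byte, error) {
	var raw map[string]json.RawMessage
	if err := json.Unmarshal(stateJSON, &raw); err != nil {
		return nil, err
	}
	kept := map[string]json.RawMessage{}
	for _, k := range known {
		if v, ok := raw[k]; ok {
			kept[k] = v
		}
	}
	return json.Marshal(kept)
}

func main() {
	state := []byte(`{"id":"a","name":"x","_internal":"drop me"}`)
	out, err := discardUnknownKeys(state, []string{"id", "name"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // the unrecognized key is gone
}
```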
We previously took a few shortcuts just to more quickly produce a proof
of concept. This gets us considerably closer to a properly-working handoff
from plan to apply, at least for create, update, and replace actions.
Destroying stuff that's been removed from the configuration will take some
further work since currently everything we do is driven by objects in the
configuration. That will follow in later commits.
This is a first pass at actually driving the apply phase through to
completion, using the ChangeExec function to schedule the real change
operations from the plan and then, just like for planning, a visit to
each participating object to ask it to check itself and report the results
of any changes it has made.
Many of our objects don't have any external side-effects of their own and
so just need to check their results are still valid after the apply phase
has replaced unknown values with known values. For those we can mostly
just share the same logic between plan and apply aside from asking for
ApplyPhase instead of PlanPhase.
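The shared-logic pattern can be sketched like this, with illustrative types only (`Phase`, `checkResult`, and `Value` are stand-in names): one check function runs in both phases, and only the tolerance for unknown values differs between them.

```go
package main

import "fmt"

// Hypothetical sketch of phase-shared result checking.
type Phase int

const (
	PlanPhase Phase = iota
	ApplyPhase
)

type Value struct {
	Known bool
	V     int
}

// checkResult validates an object's result. During planning an unknown
// value is acceptable; after apply every value must have been resolved
// to a known value.
func checkResult(phase Phase, v Value) error {
	if !v.Known {
		if phase == ApplyPhase {
			return fmt.Errorf("value still unknown after apply")
		}
		return nil // unknowns are fine while planning
	}
	if v.V < 0 {
		return fmt.Errorf("value must be non-negative")
	}
	return nil
}

func main() {
	unknown := Value{Known: false}
	fmt.Println(checkResult(PlanPhase, unknown) == nil)  // passes while planning
	fmt.Println(checkResult(ApplyPhase, unknown) != nil) // fails after apply
}
```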
ComponentInstance and OutputValue are the two objects whose treatment
differs the most between plan and apply, because both of those need to
emit externally-visible "applied change" objects to explain their
side-effects to the caller.
The apply-walk driver has a lot of behavior in common with the existing
plan-walk driver. For this commit we're just accepting that and having
two very similar pieces of code that differ only in some leaf details.
In a future commit we might try to generalize that so we can share more
logic between the two, but only if we can do that without making the code
significantly harder to read.
This package includes both some model types to represent plans (and parts
thereof) in memory, and logic for translating between the in-memory
representation and our protocol-buffers-based storage serialization.
Unlike the traditional saved plan file format, the stacks model for
planning is slightly asymmetrical: while constructing the plan the system
will emit chunks of the overall plan piecemeal as PlannedChange values,
streamed to the rpcapi client. When reloading that plan to apply it,
the client must provide all of the raw plan blobs in sequence which we
can then decode one by one to construct a single Plan object covering the
entire effect of the plan.
Another intentional asymmetry is that PlannedChange includes enough
information to produce both the raw internal plan representation and the
public-facing plan description, whereas Plan includes only the subset of
information actually required for Terraform Core to apply the plan, taken
exclusively from the raw internal representation.
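The reloading direction of this asymmetry can be sketched as a fold over the streamed blobs (the `loader` type and `AddRaw` method are illustrative names, and plain strings stand in for the real protocol-buffers blobs): the client feeds every raw blob back in sequence, and the loader merges each one into a single Plan.

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical sketch of reassembling a stack plan from streamed blobs.
type Plan struct {
	ComponentChanges []string
}

type loader struct {
	plan Plan
}

// AddRaw decodes one streamed blob and merges it into the plan under
// construction; real blobs would be protobuf messages.
func (l *loader) AddRaw(blob string) error {
	if blob == "" {
		return fmt.Errorf("empty plan blob")
	}
	l.plan.ComponentChanges = append(l.plan.ComponentChanges, blob)
	return nil
}

// Plan returns the fully assembled plan covering the entire effect of
// the original planning operation.
func (l *loader) Plan() Plan { return l.plan }

func main() {
	streamed := []string{"component.a: create", "component.b: update"}
	var l loader
	for _, blob := range streamed {
		if err := l.AddRaw(blob); err != nil {
			panic(err)
		}
	}
	fmt.Println(strings.Join(l.Plan().ComponentChanges, "; "))
}
```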