We uncovered that deferred action invocations don't get provider
addresses, which prevents us from loading the schema. I don't think
this should be an issue for now, but we'll revisit it as we build the
support end to end.
Add a test for deferred actions support
* add tests that highlight known issues in the destroy mechanism
* separate refresh during destroy plans
* use the refresh outputs during destroy plans
* copyright headers
* stacks: support sensitive input values in planned changes
* stacks: Fix panic when applying sensitive inputs
The stacks-specific check that a given plan's input variable values
remain unchanged at apply time would previously panic when given a
marked value. Marks are not important for this check, so we simply
remove them to ensure that it passes. This seems in line with how
AssertObjectCompatible works, which I'm taking as precedent.
The new test added in this commit panics without the corresponding code
change.
---------
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
We should not need to encode and decode change values within core, since
the encoded version is only technically needed for serialization. This
pattern stems from the conversion to the current changes system, but at
the time we did not have easy access to the correct schemas to encode
and decode the entire set of changes.
Moving the core handling of changes to use only the decoded values will
drastically improve evaluation efficiency, removing a round trip
through encoded values for every resource reference.
* stacks: fix bug preventing cross-provider move refactorings
* also make provider functions work
* stacks: add support for provider functions in .tfstack.hcl files
Previously we had the entire raw prior state serialized as part of the
"plan header" message in the raw plan, which meant that the maximum state
size was constrained by the maximum allowed gRPC message size.
Instead now we'll use a separate raw plan element for each raw prior state
element, so that we're limited only by the size of individual items
rather than by the size of the state as a whole.
This deals with the last remaining (non-deprecated) case where our RPC
protocol tries to pack an entire raw state or plan into a single protobuf
message, and so we're now standardized on using a streaming approach in
all cases.
Previously we expected clients to provide an inline raw prior state to
PlanStackChanges and an inline raw plan to ApplyStackChanges, which was
a simpler design but meant that we might end up generating a state or plan
that's too large to be submitted in a single gRPC request, which would then
be difficult to resolve.
Instead we'll offer separate RPC functions for loading raw state and plan
using a gRPC streaming approach, which better mirrors the streaming
approach we use to _emit_ these artifacts. Although we don't actually need
this benefit right now, this makes it possible in principle for a client
that's running PlanStackChanges to feed back the raw planned actions
concurrently into OpenPlan and thus avoid buffering the whole plan on the
client side at all.
This required resolving the pre-existing FIXME about the inconsistency
where stackeval wants a raw plan for apply but expects the caller to
have dealt with loading the prior state for planning. Here it's resolved
in the direction of the caller (rpcapi) always being responsible for
loading both artifacts, because that means we can continue supporting the
old inline approach for a while without that complexity having to infect
the lower layers.
Ideally we should remove the legacy approach before this API becomes
constrained by compatibility promises, but I've preserved the old API
for now to give us some flexibility in when we update the existing
clients of this API to use the new approach.
Co-authored-by: Martin Atkins <mart@degeneration.co.uk>